Cannot use ResultSet.getObject to retrieve java.sql.Date (JDBC specification violation?) #1409
Comments
Thanks @LDVSOFT, very well explained! Agree it's an issue to fix. Will add an option for explicit opt-in and change the default mapping to legacy Date/Timestamp.
Thanks @zhicwu! That should work out. I think that even on the new mapping the driver should still allow fetching the legacy java.sql types explicitly; I've noticed that my test case also checks that …
Actually, the JDBC spec, in the same B.3 table, requires returning …
To add some more information on a point raised by @LDVSOFT, there is an issue with … As expected, it updates the value in …
This issue prevents our custom ClickHouse dialect for Spark from reading Arrays of DateTime & Date from ClickHouse: …
Good day, @dolfinus! Thank you for reporting this! … Thanks!
Sorry, but the way this connector is implemented is not suitable for my use cases.
@dolfinus ok (if possible, would you please describe what would make our connector work for you?)
What I expect from all Spark connectors is that they are implemented as a custom DataFrameReader/DataFrameWriter. In that case I can include all required .jars in the Spark session and then pass a table name or a raw SQL query (in ClickHouse syntax) to the DataFrameReader to get the data I need. The same goes for writing data to a table: I just pass a dataframe to the DataFrameWriter with a set of options, and it's done. The Spark connector for ClickHouse does not implement DataFrameReader/DataFrameWriter; instead it is implemented as a custom catalog. In this case the user cannot create a dataframe on top of a ClickHouse SQL query; they have to convert the query to Spark syntax (where some function names are not supported at all). Users also have to convert …
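For illustration only (not part of the original comment): a rough sketch of the reader/writer access pattern described above, shown here with Spark's built-in jdbc data source. The URL, table names and the query are placeholders, and the driver class name assumes the 0.4.x clickhouse-jdbc artifact.

```kotlin
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.SparkSession

fun main() {
    val spark = SparkSession.builder().master("local[*]").appName("clickhouse-example").getOrCreate()

    // Read: hand the source a raw query written in the database's own SQL dialect.
    val df = spark.read()
        .format("jdbc")
        .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")  // driver class of clickhouse-jdbc 0.4.x
        .option("url", "jdbc:clickhouse://localhost:8123/default") // placeholder URL
        .option("query", "SELECT d FROM some_table")               // query stays in ClickHouse syntax
        .load()

    // Write: pass a dataframe plus a set of options to the DataFrameWriter.
    df.write()
        .format("jdbc")
        .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")
        .option("url", "jdbc:clickhouse://localhost:8123/default")
        .option("dbtable", "some_table_copy")                      // placeholder target table
        .mode(SaveMode.Append)
        .save()

    spark.stop()
}
```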
Thank you a lot for the feedback, @dolfinus!
Moved to next release because of too many breaking changes.
Describe the bug
At some point the ClickHouse JDBC driver switched to using java.time.* types to represent temporal values. While I salute this change and would prefer, too, to abandon java.sql.Date, the way it was done unfortunately breaks end consumers that expect the old types to be returned by default. While originally we detected it by trying to read data from ClickHouse into Spark (by using the latter), I was able to provide a more direct counterexample.
Steps to reproduce
1. Run SELECT d FROM some_table, where d has type Date or similar (nullable or array).
2. Fetch the value with resultSet.getObject("d") or with resultSet.getObject("d", java.sql.Date.class).

The first call returns java.time.LocalDate, which could be correct under some type maps but not under the default one; the second fails with a ClassCastException, which in my eyes violates the JDBC specification directly.

To be exact, I am referring to the JDBC 4.3 specification, appendix B (quoting with my styling): table B.3, "Mapping from JDBC Types to Java Object Types", maps the JDBC DATE type to java.sql.Date.
All ResultSet.getObject methods that don't take a type map directly state in their documentation that they follow the mapping above.

Related:
- ResultSet.getDate works as expected.
- Arrays (resultSet.getArray("ds").getArray()) are broken in the same way (java.time.LocalDate[] instead of java.sql.Date[]).
- Likely the same story with TIMESTAMP: java.time.Instant vs java.sql.Timestamp.
- I didn't check PreparedStatement parameters, but I would check their behaviour too (a rough sketch of such a check follows below).

I've checked the behaviour of other JDBC drivers against the respective databases I use, including PostgreSQL, and they work as expected. An older ClickHouse JDBC driver (I guess around 0.3.1) worked for us.
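On the PreparedStatement point above, a minimal sketch of the kind of check meant (my own sketch, not taken from the report; the toDate(?) query and the parameter placement are assumptions about what the driver accepts):

```kotlin
import java.sql.Connection
import java.sql.Date
import java.time.LocalDate

// Bind a date parameter both as the legacy java.sql.Date and as java.time.LocalDate
// and confirm the driver accepts each, per the JDBC spec's setObject conversion tables.
fun checkDateParameterBinding(conn: Connection) {
    conn.prepareStatement("SELECT toDate(?)").use { ps ->
        ps.setObject(1, Date.valueOf("2023-01-01"))  // legacy type, required by the spec
        ps.executeQuery().use { rs -> rs.next(); println(rs.getObject(1)) }

        ps.setObject(1, LocalDate.of(2023, 1, 1))    // java.time type, supported since JDBC 4.2
        ps.executeQuery().use { rs -> rs.next(); println(rs.getObject(1)) }
    }
}
```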
Expected behaviour
I guess the right thing to do would be one or more of:
- Add java.time.* support to the getObject(<label or number>, java.lang.Class<*>) and getObject(<label or number>, <type map>) methods while preserving the old defaults;
- Add an option for java.time.* types to be returned by default, if wanted (I guess it could be achieved by this option changing the connection-default type map).

Either way, don't change the behaviour from the specification until there is an explicit opt-in for it.
Code example
When we detected the issue and after applying the hotfix, I worked on a reproducer to understand whether the issue was consistent across other API usage; here it goes. It uses JUnit 5, Testcontainers and Kotlin.
Technically it only fails on the last lines, but I believe the consistent reporting of java.time.LocalDate values is not the behaviour I expect.
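The original reproducer is not reproduced here; what follows is a minimal sketch of what such a JUnit 5 + Testcontainers check could look like. It assumes the org.testcontainers:clickhouse module, a local Docker daemon, and the clickhouse/clickhouse-server:23.8 image; none of these details come from the report, so adjust them to your setup.

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test
import org.testcontainers.containers.ClickHouseContainer
import java.sql.DriverManager

class DateMappingTest {
    @Test
    fun getObjectReturnsSqlDateForDateColumn() {
        // Start a throwaway ClickHouse server; close() stops it when the block ends.
        ClickHouseContainer("clickhouse/clickhouse-server:23.8").use { ch ->
            ch.start()
            DriverManager.getConnection(ch.jdbcUrl, ch.username, ch.password).use { conn ->
                conn.createStatement().use { stmt ->
                    stmt.execute("CREATE TABLE t (d Date) ENGINE = Memory")
                    stmt.execute("INSERT INTO t VALUES ('2023-01-01')")
                    stmt.executeQuery("SELECT d FROM t").use { rs ->
                        rs.next()
                        // Per JDBC 4.3 appendix B (table B.3) the untyped call should yield
                        // java.sql.Date; driver 0.4.6 reportedly yields java.time.LocalDate instead.
                        assertEquals(java.sql.Date::class.java, rs.getObject("d").javaClass)
                        // The typed overload reportedly fails too instead of converting.
                        assertEquals(
                            java.sql.Date.valueOf("2023-01-01"),
                            rs.getObject("d", java.sql.Date::class.java)
                        )
                    }
                }
            }
        }
    }
}
```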
Configuration
Environment
com.clickhouse:clickhouse-jdbc:0.4.6, http specifier.
ClickHouse server