Retain precision when casting JSON number to VARCHAR #28917
findepi wants to merge 6 commits into trinodb:master from
Conversation
Force-pushed d45a3c0 to ccc9ea0
Added benchmark. Results follow (before/after).
For some reason the benchmark shows a performance improvement for the affected case. I think these results address @dain's concern (#28882 (comment)).
Force-pushed 6df9050 to 6c10312
The current implementation suffers from the problem that
- add test cases with numbers with leading/trailing zeros
- verify that casting string -> JSON -> array(varchar) and the optimized path behave the same
Before the change, when casting a JSON number containing a decimal point to VARCHAR, the number would first be converted to DOUBLE. This resulted in unnecessary loss of information.
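To illustrate the lossiness described above, here is a minimal standalone Java sketch (the number literal is our own example, not one taken from the PR):

```java
import java.math.BigDecimal;

public class DoubleCastLoss {
    public static void main(String[] args) {
        // A JSON number with more significant digits than a double can represent
        String jsonNumber = "1234567890.123456789";

        // Pre-change behavior: route through DOUBLE, losing the low-order digits
        double viaDouble = Double.parseDouble(jsonNumber);
        System.out.println(Double.toString(viaDouble)); // scientific notation, trailing digits lost

        // Lossless: parse the token as a decimal instead
        System.out.println(new BigDecimal(jsonNumber)); // prints 1234567890.123456789
    }
}
```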
Force-pushed 6c10312 to 14700b4
IMO we should just retain the original text from the JSON. I don't see an upside to normalizing the text, and the downside is:
The general-purpose `jsonParse` utility was lossy for numbers containing a decimal point. This affected the `json_parse` SQL function, the `JSON` SQL type constructor, and connectors which use `jsonParse` to canonicalize the JSON representation of remote data on read (e.g. PostgreSQL).
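A sketch of the failure mode in plain Java (not Trino's actual `jsonParse` code; both helper methods here are hypothetical): a canonicalizer that materializes decimal tokens as `double` cannot reproduce the original text, while one that keeps the exact decimal value can.

```java
import java.math.BigDecimal;

public class CanonicalizeNumber {
    // Hypothetical lossy canonicalization: decimal token -> double -> text
    static String viaDouble(String token) {
        return Double.toString(Double.parseDouble(token));
    }

    // Lossless alternative: keep the exact decimal value (or the raw token text)
    static String viaDecimal(String token) {
        return new BigDecimal(token).toString();
    }

    public static void main(String[] args) {
        System.out.println(viaDouble("0.1000"));  // prints 0.1 -- scale is lost
        System.out.println(viaDecimal("0.1000")); // prints 0.1000 -- scale preserved
    }
}
```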
Together with `Fix numeric precision loss in JSON parsing`, this works now.
Per the benchmarks, this turned out not to be an issue?
The "upside" is that I found a bug in the current implementation, which invalidated this assumption. However, combined with #28916 (cherry-picked here to run CI), this works as expected, via decimals. However, with
We at least agree that the lossy cast to VARCHAR is a problem, i.e. #28881 is a bug.
overview
Normalize JSON numeric values using Java `BigDecimal` in the JSON to VARCHAR cast. Some examples:
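The examples the description refers to were not preserved here; as an illustration (ours, not necessarily the PR's exact cases), `java.math.BigDecimal` parses and renders such values exactly:

```java
import java.math.BigDecimal;

public class NormalizationExamples {
    public static void main(String[] args) {
        // Scale (trailing zeros) survives parsing
        System.out.println(new BigDecimal("123.4500"));            // 123.4500
        // Leading zeros are dropped by parsing itself
        System.out.println(new BigDecimal("0012.5"));              // 12.5
        // Exponent notation expands without a detour through double
        System.out.println(new BigDecimal("1e3").toPlainString()); // 1000
        // One possible canonical form: strip trailing zeros
        System.out.println(new BigDecimal("123.4500").stripTrailingZeros().toPlainString()); // 123.45
    }
}
```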
related
release notes