Handle overflow in IntegerNumberToVarcharCoercer #14179
Conversation
Before the change, `IntegerNumberToVarcharCoercer` could produce a `Slice` value that is not a valid value for the destination `VarcharType`.
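To illustrate the overflow, here is a minimal JDK-only sketch (not Trino's implementation; the class and method names are hypothetical) showing how the decimal text of an integer can exceed a declared `varchar(n)` bound:

```java
// Hypothetical sketch: converting an integer to text can exceed the
// declared varchar(n) length. Names here are illustrative, not Trino's.
public class VarcharOverflowDemo {
    // Returns true when the decimal text of `value` fits into varchar(boundedLength).
    static boolean fitsInVarchar(long value, int boundedLength) {
        String text = String.valueOf(value);
        // Varchar length is measured in code points, not UTF-16 units;
        // digits and '-' are one code point each.
        return text.codePointCount(0, text.length()) <= boundedLength;
    }

    public static void main(String[] args) {
        System.out.println(fitsInVarchar(99, 2));            // true: "99" is 2 characters
        System.out.println(fitsInVarchar(100, 2));           // false: "100" needs 3
        System.out.println(fitsInVarchar(-2147483648L, 10)); // false: 11 characters
    }
}
```

Before the fix, a value in the second or third case would be written out silently, producing a `Slice` longer than the destination type allows.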
See #5015
I think we decided that the behaviour should match Hive (since this is coercion within the connector, not the engine), which ignores the bounds?
(However I also think that the Hive behaviour makes no sense and I prefer this implementation)
It cannot ignore bounds. The code before the change is obviously wrong: it can produce a value of length n+1 for a varchar(n) data type, which is illegal. Thus, @hashhar, I am not removing any compatibility with Hive; I am replacing a silent correctness problem with an explicit check.
I actually didn't check what the Hive behavior is. And I also prefer this version to the alternatives (truncate, replace with NULL, etc.).
hashhar
left a comment
I misunderstood your goal here.
LGTM.
Hive behaviour is documented in #5015
Yes, but not worth it IMO. If we want to implement the actual Hive logic from #5015 (comment), this should go with a test.
```java
long value = fromType.getLong(block, position);
Slice converted = utf8Slice(String.valueOf(value));
if (!toType.isUnbounded() && countCodePoints(converted) > toType.getBoundedLength()) {
    throw new TrinoException(INVALID_ARGUMENTS, format("Varchar representation of %s exceed %s bounds", value, toType));
}
```
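The same check can be sketched with only the JDK (the real code uses Trino's `Slice`, `VarcharType`, and `TrinoException`; the names below are illustrative):

```java
// Hypothetical JDK-only sketch of the patched coercion: reject values
// whose decimal text does not fit the bounded varchar length.
public class IntegerToVarcharSketch {
    static String coerce(long value, boolean unbounded, int boundedLength) {
        String converted = String.valueOf(value);
        // Count code points, mirroring countCodePoints(converted) in the diff above.
        if (!unbounded && converted.codePointCount(0, converted.length()) > boundedLength) {
            throw new IllegalArgumentException(String.format(
                    "Varchar representation of %s exceeds varchar(%s) bounds", value, boundedLength));
        }
        return converted;
    }

    public static void main(String[] args) {
        System.out.println(coerce(123, false, 3)); // fits varchar(3)
        try {
            coerce(1234, false, 3); // needs 4 characters, so this throws
        }
        catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

An unbounded varchar skips the check entirely, which matches the `!toType.isUnbounded()` guard in the diff.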
Do we want a release note for this?

Nope
Fixes #5015