[SPARK-29600][SQL] ArrayContains function may return incorrect result for DecimalType #26811
Changes from 4 commits: a186dfd, 71b7ad3, d2ce3ae, 7412368, 6c3545d, 7b12d6c
```diff
@@ -850,7 +850,7 @@ class DataFrameFunctionsSuite extends QueryTest with SharedSparkSession {
     val errorMsg1 =
       s"""
          |Input to function array_contains should have been array followed by a
-         |value with same element type, but it's [array<int>, decimal(29,29)].
+         |value with same element type, but it's [array<int>, decimal(38,29)].
```
|
Contributor:
Why does the precision become 38 in this case?
Contributor (Author):
See sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala, lines 864 to 869 at commit 1fc353d.

For the query `array_contains(array(1), .01234567890123456790123456780)`, `e.inputTypes` returns `Seq(Array(Decimal(38,29)), Decimal(38,29))`, and the code above casts `.01234567890123456790123456780` to `Decimal(38,29)`. Previously, when we used `findWiderTypeForTwo`, decimal types were not upcast, but `findWiderTypeWithoutStringPromotionForTwo` successfully upcasts `DecimalType`.
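To see where decimal(38,29) comes from, here is a minimal standalone sketch of decimal-type widening. This is a simplified model under assumed rules (keep the larger scale, keep enough integral digits for both sides, cap precision at 38), not Spark's actual `TypeCoercion` code:

```scala
// Hypothetical model of decimal-type widening (not Spark's source):
// the wider type keeps the larger scale and the larger count of
// integral digits, with precision capped at the decimal maximum of 38.
object DecimalWidening {
  val MaxPrecision = 38

  // Returns (precision, scale) of a type wide enough for both inputs.
  def wider(p1: Int, s1: Int, p2: Int, s2: Int): (Int, Int) = {
    val scale = math.max(s1, s2)
    val integral = math.max(p1 - s1, p2 - s2)
    (math.min(integral + scale, MaxPrecision), scale)
  }
}
```

Widening the int literal `1` (decimal(10, 0)) with the 29-digit fractional literal (decimal(29, 29)) asks for 10 integral plus 29 fractional digits; capping 39 at 38 yields decimal(38, 29), the type seen in the error message.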
Contributor:
Before this PR, we were using
Contributor (Author):
Do you mean why in the above test case query? An integer cannot be cast to a decimal with scale > 28.
Contributor:
Yeah, I get that we can't do the cast here. My question is: since we can't cast, we should leave the expression untouched. But now we add a cast to one side and leave the expression unresolved. Where do we add that useless cast?
Contributor (Author):
See sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala, lines 864 to 869 at commit 1fc353d.

This code casts the left and right expressions one by one. Here,

Contributor (Author):
The code above creates the new expression by updating only the right child.
Contributor:
Ah, thanks for finding this out!
```diff
       """.stripMargin.replace("\n", " ").trim()
     assert(e1.message.contains(errorMsg1))
```
```diff
@@ -863,6 +863,21 @@ class DataFrameFunctionsSuite extends QueryTest with SharedSparkSession {
          |value with same element type, but it's [array<int>, string].
        """.stripMargin.replace("\n", " ").trim()
     assert(e2.message.contains(errorMsg2))
+
+    checkAnswer(
```
Member:
Since this is a bug, can you split these three tests into a separate test unit and add a test title with the JIRA ID (SPARK-29600)?

Member:
Also, can you update the title, too?

Contributor (Author):
Sure, I'll update.
```diff
+      sql("select array_contains(array(1.10), 1.1)"),
+      Seq(Row(true))
+    )
+
+    checkAnswer(
+      sql("SELECT array_contains(array(1.1), 1.10)"),
+      Seq(Row(true))
+    )
+
+    checkAnswer(
+      sql("SELECT array_contains(array(1.11), 1.1)"),
+      Seq(Row(false))
+    )
   }

   test("arrays_overlap function") {
```
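The three new tests rely on value-based decimal comparison: 1.10 and 1.1 differ in scale but are numerically equal, while 1.11 is not. A plain `java.math.BigDecimal` illustration of that distinction, independent of Spark:

```scala
import java.math.BigDecimal

// equals() on java.math.BigDecimal also compares scale, but
// compareTo() is purely numeric -- the scale-insensitive semantics
// the array_contains tests above depend on.
object DecimalEquality {
  def numericallyEqual(a: String, b: String): Boolean =
    new BigDecimal(a).compareTo(new BigDecimal(b)) == 0
}
```

`numericallyEqual("1.10", "1.1")` is true even though `new BigDecimal("1.10").equals(new BigDecimal("1.1"))` is false, which is why the comparison must widen both sides to a common decimal type rather than compare representations.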
@maropu @srowen do we still need a migration guide? It looks like an obvious bug to me that we forgot to do type coercion for decimal types. And I don't think a user would expect Spark to fail `array_contains` with compatible decimal types.
Ah, I see. The current latest fix looks reasonable. This fix is not a behaviour change but a bug fix.
Thanks @cloud-fan @maropu. I will revert these changes from the migration guide.