[SPARK-12848][SQL] Change parsed decimal literal datatype from Double to Decimal #10796
Changes from all commits: f5ecab6, d0166be, d73632c, 954f819, 0f5f6e0, 5c72b58, 0ac091a, df8e237, 1b0508f, 8bf2a27, 9afd729
```diff
@@ -661,12 +661,11 @@ object HiveTypeCoercion {
       case e if !e.childrenResolved => e
       // Find tightest common type for If, if the true value and false value have different types.
       case i @ If(pred, left, right) if left.dataType != right.dataType =>
-        findTightestCommonTypeToString(left.dataType, right.dataType).map { widestType =>
+        findWiderTypeForTwo(left.dataType, right.dataType).map { widestType =>
          val newLeft = if (left.dataType == widestType) left else Cast(left, widestType)
          val newRight = if (right.dataType == widestType) right else Cast(right, widestType)
          If(pred, newLeft, newRight)
        }.getOrElse(i) // If there is no applicable conversion, leave expression unchanged.

      // Convert If(null literal, _, _) into boolean type.
      // In the optimizer, we should short-circuit this directly into false value.
      case If(pred, left, right) if pred.dataType == NullType =>
```

> **Contributor:** Just curious why we need to change this?
>
> **Author:** This failed on the `HiveCompatibilitySuite.udf_if` test on the following query:
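The change above replaces tightest-common-type resolution with wider-type resolution for `If` branches. As an illustration of the general idea (not Spark's actual promotion rules — the type names, ordering, and method name below are hypothetical), widening picks the type both branches can be cast to, e.g. an integer branch against a decimal branch widens to decimal:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class WidenIf {
    // Hypothetical numeric widening order for illustration only;
    // Spark's real rules also handle decimal precision/scale,
    // string fallbacks, and many more types.
    static final List<String> ORDER =
        Arrays.asList("int", "long", "decimal", "double");

    // Return the wider of the two types, or empty if either type
    // does not participate in numeric widening.
    static Optional<String> findWiderTypeForTwo(String a, String b) {
        int ia = ORDER.indexOf(a);
        int ib = ORDER.indexOf(b);
        if (ia < 0 || ib < 0) {
            return Optional.empty();
        }
        return Optional.of(ORDER.get(Math.max(ia, ib)));
    }

    public static void main(String[] args) {
        // If(pred, 1, 2.5BD): an int branch and a decimal branch widen to decimal.
        System.out.println(findWiderTypeForTwo("int", "decimal").orElse("none"));
        // Unrelated types yield no widening; the If is left unchanged.
        System.out.println(findWiderTypeForTwo("long", "boolean").orElse("none"));
    }
}
```

When no wider type exists, the rule falls through to `getOrElse(i)` and leaves the expression unchanged, mirroring the diff above.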
```diff
@@ -18,6 +18,7 @@
 package org.apache.spark.sql.sources

 import java.io.File
+import java.net.URI

 import org.apache.spark.sql.{AnalysisException, QueryTest}
 import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
```
```diff
@@ -65,6 +66,11 @@ class BucketedWriteSuite extends QueryTest with SQLTestUtils with TestHiveSingle

   private val df = (0 until 50).map(i => (i % 5, i % 13, i.toString)).toDF("i", "j", "k")

+  def tableDir: File = {
+    val identifier = hiveContext.sqlParser.parseTableIdentifier("bucketed_table")
+    new File(URI.create(hiveContext.catalog.hiveDefaultTableFilePath(identifier)))
+  }
+
   /**
    * A helper method to check the bucket write functionality in low level, i.e. check the written
    * bucket files to see if the data are correct. User should pass in a data dir that these bucket
```

> **Author:** @cloud-fan could you take a look at this?
>
> **Contributor:** This looks reasonable; just curious about why the previous one is not working after your PR...
>
> **Author:** I have no idea. Nothing in the PR seems to touch this; it seems a bit random. The only thing I am doing here is to make absolutely sure the paths are the same; the difference in paths is caused by the use of the Hive current database in the path name.
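The new `tableDir` helper resolves the table's default file path through `URI.create` rather than joining path strings by hand. The point, as the author notes, is to make both sides of the path comparison agree on one canonical representation. A small sketch of why the `File(URI)` route works (the warehouse path below is hypothetical, for illustration only):

```java
import java.io.File;
import java.net.URI;

public class TableDirDemo {
    public static void main(String[] args) {
        // Catalog APIs typically report a table location as a file: URI
        // (hypothetical path, for illustration only).
        String location = "file:/tmp/warehouse/bucketed_table";

        // Going through URI yields the same abstract pathname as
        // constructing the File from the raw filesystem path, so
        // equality checks between the two representations succeed.
        File fromUri = new File(URI.create(location));
        File fromPath = new File("/tmp/warehouse/bucketed_table");
        System.out.println(fromUri.equals(fromPath)); // true on POSIX systems
    }
}
```

Had the test concatenated the URI string directly into a path, the `file:` scheme prefix would make the two `File` values unequal even though they name the same directory.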
|
```diff
@@ -127,7 +133,6 @@ class BucketedWriteSuite extends QueryTest with SQLTestUtils with TestHiveSingle
         .bucketBy(8, "j", "k")
         .saveAsTable("bucketed_table")

-      val tableDir = new File(hiveContext.warehousePath, "bucketed_table")
       for (i <- 0 until 5) {
         testBucketing(new File(tableDir, s"i=$i"), source, 8, Seq("j", "k"))
       }
```

```diff
@@ -145,7 +150,6 @@ class BucketedWriteSuite extends QueryTest with SQLTestUtils with TestHiveSingle
         .sortBy("k")
         .saveAsTable("bucketed_table")

-      val tableDir = new File(hiveContext.warehousePath, "bucketed_table")
       for (i <- 0 until 5) {
         testBucketing(new File(tableDir, s"i=$i"), source, 8, Seq("j"), Seq("k"))
       }
```

```diff
@@ -161,7 +165,6 @@ class BucketedWriteSuite extends QueryTest with SQLTestUtils with TestHiveSingle
         .bucketBy(8, "i", "j")
         .saveAsTable("bucketed_table")

-      val tableDir = new File(hiveContext.warehousePath, "bucketed_table")
       testBucketing(tableDir, source, 8, Seq("i", "j"))
     }
   }
```

```diff
@@ -176,7 +179,6 @@ class BucketedWriteSuite extends QueryTest with SQLTestUtils with TestHiveSingle
         .sortBy("k")
         .saveAsTable("bucketed_table")

-      val tableDir = new File(hiveContext.warehousePath, "bucketed_table")
       testBucketing(tableDir, source, 8, Seq("i", "j"), Seq("k"))
     }
   }
```
||
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Is
\\d+(\\.\\d*)correct or\\d+(\\.\\d+)?Do we allow
123.parsed as decimal?There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
We currently do. See this lexer rule:
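The difference between the two patterns is exactly whether a bare trailing dot is accepted: with `\d*` the fractional part may be empty, so `123.` matches; with `\d+` at least one digit must follow the dot. A quick check in Java regex (an illustration of the two candidate patterns, with the fractional group made optional so integers also match):

```java
import java.util.regex.Pattern;

public class DecimalLiteralRegex {
    public static void main(String[] args) {
        // '\\d*' permits an empty fractional part after the dot;
        // '\\d+' requires at least one digit after the dot.
        Pattern star = Pattern.compile("\\d+(\\.\\d*)?");
        Pattern plus = Pattern.compile("\\d+(\\.\\d+)?");

        System.out.println(star.matcher("123.").matches());   // true
        System.out.println(plus.matcher("123.").matches());   // false
        System.out.println(star.matcher("123.45").matches()); // true
        System.out.println(plus.matcher("123.45").matches()); // true
    }
}
```

So accepting `123.` as a decimal literal, as the author says the lexer currently does, corresponds to the `\d*` variant.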