[SPARK-16030] [SQL] Allow specifying static partitions when inserting to data source tables #13769
Conversation
We can have another PR to change other exceptions in this node from SparkException to AnalysisException.
Test build #60805 has finished for PR 13769 at commit
These are new tests.
Test build #60812 has finished for PR 13769 at commit
Review comment on:

    partitionList.sliding(2).foreach { v =>
sliding(2) can be dangerous for single-element collections:

    scala> Seq(1).sliding(2).foreach(println)
    List(1)
We can use the following check instead:

    partitionList.dropWhile(_.isDefined).collectFirst {
      case Some(_) =>
        throw new AnalysisException("...")
    }
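To see why the suggested check works, here is a minimal standalone sketch. It assumes `partitionList` is a `Seq[Option[String]]` where `Some` is a static partition value and `None` a dynamic one, and it substitutes `IllegalArgumentException` for Spark's `AnalysisException` so the snippet runs without Spark on the classpath; the helper name `checkPartitionOrder` is hypothetical.

```scala
// Hypothetical sketch of the suggested check: all static partition specs
// (Some(value)) must precede all dynamic ones (None). dropWhile skips the
// leading static prefix; any Some found afterwards is out of order.
def checkPartitionOrder(partitionList: Seq[Option[String]]): Unit = {
  partitionList.dropWhile(_.isDefined).collectFirst {
    case Some(v) =>
      // Stand-in for AnalysisException in this self-contained sketch.
      throw new IllegalArgumentException(
        s"Static partition value '$v' appears after a dynamic partition")
  }
}

checkPartitionOrder(Seq(Some("a"), Some("b"), None)) // OK: static before dynamic
checkPartitionOrder(Seq(Some("a")))                  // OK: single element, no sliding(2) pitfall
// checkPartitionOrder(Seq(None, Some("a")))         // would throw
```

Unlike `sliding(2)`, this shape has no edge case for single-element collections, since `dropWhile` and `collectFirst` are both total over empty and one-element sequences.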
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks!
Review comment on:

    // The sum of the number of static partition columns and columns provided in the SELECT
    // clause needs to match the number of columns of the target table.
    if (staticPartitions.size + sourceAttributes.size != targetAttributes.size) {
Looks like we already have this check somewhere?
in PreprocessTableInsertion
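As a standalone illustration of the arity check quoted above, here is a minimal sketch. The helper name `checkColumnCount` and the simplified parameter types (`Int` counts rather than attribute sequences, `IllegalArgumentException` rather than `AnalysisException`) are assumptions made so the snippet is self-contained.

```scala
// Hypothetical sketch: static partition columns plus SELECT-clause columns
// must together cover every column of the target table.
def checkColumnCount(
    staticPartitionCount: Int,
    sourceColumnCount: Int,
    targetColumnCount: Int): Unit = {
  if (staticPartitionCount + sourceColumnCount != targetColumnCount) {
    // Stand-in for AnalysisException in this self-contained sketch.
    throw new IllegalArgumentException(
      s"Target table has $targetColumnCount columns, but got " +
        s"$staticPartitionCount static partition columns and " +
        s"$sourceColumnCount source columns")
  }
}

checkColumnCount(staticPartitionCount = 1, sourceColumnCount = 3, targetColumnCount = 4) // OK
// checkColumnCount(1, 2, 4) // would throw: 1 + 2 != 4
```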
Test build #60831 has finished for PR 13769 at commit
Review comment on:

    class DataSourceAnalysisSuite extends SparkFunSuite with BeforeAndAfterAll {

      private var targetAttributes: Seq[Attribute] = _
      private var targetPartitionSchema: StructType = _
Why are these vars? Seems that they are immutable throughout this test suite.
Test build #60833 has finished for PR 13769 at commit
LGTM except for minor issues. I'm merging this to master and branch-2.0 first since this is a critical issue. We can address the comments later.
…to data source tables

What changes were proposed in this pull request?
This PR adds static partition support to the INSERT statement when the target table is a data source table.

How was this patch tested?
New tests in InsertIntoHiveTableSuite and DataSourceAnalysisSuite.

Note: This PR is based on #13766. The last commit is the actual change.

Author: Yin Huai <yhuai@databricks.com>
Closes #13769 from yhuai/SPARK-16030-1.
(cherry picked from commit 905f774)
Signed-off-by: Cheng Lian <lian@databricks.com>
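To make the feature concrete, here is a hedged usage sketch of a mixed static/dynamic partition INSERT on a data source table, as enabled by this PR. It assumes an active `SparkSession` named `spark` and an illustrative table `t`; it is not run here since it requires a Spark cluster.

```scala
// Hypothetical usage sketch (assumes an active SparkSession named `spark`).
// A data source (USING parquet) table partitioned by p1 and p2:
spark.sql(
  """CREATE TABLE t (a INT, b INT, p1 STRING, p2 STRING)
    |USING parquet PARTITIONED BY (p1, p2)""".stripMargin)

// p1 is a static partition (value fixed in the PARTITION clause), p2 is
// dynamic (value comes from the SELECT clause). Static specs must precede
// dynamic ones, and 1 static column + 3 SELECT columns = 4 table columns.
spark.sql("INSERT INTO t PARTITION (p1 = 'x', p2) SELECT 1, 2, 'y'")
```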