[SPARK-21165] [SQL] [2.2] Use executedPlan instead of analyzedPlan in INSERT AS SELECT #18386
```diff
@@ -127,11 +127,11 @@ case class PreprocessTableCreation(sparkSession: SparkSession) extends Rule[Logi
       val resolver = sparkSession.sessionState.conf.resolver
       val tableCols = existingTable.schema.map(_.name)

-      // As we are inserting into an existing table, we should respect the existing schema and
-      // adjust the column order of the given dataframe according to it, or throw exception
-      // if the column names do not match.
+      // As we are inserting into an existing table, we should respect the existing schema, preserve
+      // the case and adjust the column order of the given DataFrame according to it, or throw
+      // an exception if the column names do not match.
       val adjustedColumns = tableCols.map { col =>
-        query.resolve(Seq(col), resolver).getOrElse {
+        query.resolve(Seq(col), resolver).map(Alias(_, col)()).getOrElse {
```
**Member (Author):** Need to add an alias to enforce that the query preserves the original name from the table schema, whose case could differ from the underlying query's schema.

**Contributor:** ah, good catch!
```diff
           val inputColumns = query.schema.map(_.name).mkString(", ")
           throw new AnalysisException(
             s"cannot resolve '$col' given input columns: [$inputColumns]")
```
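A minimal sketch of why the added `Alias` matters (a standalone illustration using Catalyst expressions, not the rule itself): `query.resolve` hands back the attribute with the query's own casing, and wrapping it in `Alias(_, col)()` renames it to the casing declared by the table schema.

```scala
// Illustration only: a fresh Alias renames a resolved attribute so that the
// table schema's original casing ("COL1") wins over the query's ("col1").
import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeReference}
import org.apache.spark.sql.types.IntegerType

val fromQuery = AttributeReference("col1", IntegerType)() // as resolved from the query
val forTable  = Alias(fromQuery, "COL1")()                // renamed to match the table schema

assert(forTable.name == "COL1") // the writer now sees the table's casing
```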
```diff
@@ -168,15 +168,9 @@ case class PreprocessTableCreation(sparkSession: SparkSession) extends Rule[Logi
           """.stripMargin)
       }

-      val newQuery = if (adjustedColumns != query.output) {
-        Project(adjustedColumns, query)
-      } else {
-        query
-      }
-
       c.copy(
         tableDesc = existingTable,
-        query = Some(newQuery))
+        query = Some(Project(adjustedColumns, query)))

     // Here we normalize partition, bucket and sort column names, w.r.t. the case sensitivity
     // config, and do various checks:
```
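A plausible reading of why the `if (adjustedColumns != query.output)` branch disappears (my inference, not stated in the PR): once every adjusted column is wrapped in a fresh `Alias`, the sequence can never compare equal to `query.output`, so the conditional was effectively dead and the `Project` is now applied unconditionally. A small sketch of that equality argument, outside the rule:

```scala
// Illustrative: an Alias is a new expression with its own ExprId, so it never
// equals the underlying attribute; hence adjustedColumns != query.output always.
import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeReference}
import org.apache.spark.sql.types.StringType

val attr    = AttributeReference("name", StringType)()
val aliased = Alias(attr, "name")()

assert(aliased != attr) // even with the same name, the Alias compares unequal
```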
**Reviewer:** This is problematic. The physical plan may have a different schema from the logical plan (schema names may differ), and the writer should respect the logical schema, since that is what users expect.
**Author:** Yes. We should always use `analyzed.output`.
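A hedged sketch of the principle behind that exchange (illustrative, not the writer's actual code): execute the physical plan, but take the user-visible column names from the analyzed logical plan, because optimization and physical planning are not guaranteed to preserve attribute names.

```scala
// Illustrative only: bind the writer's column names to the analyzed logical
// plan rather than the executed physical plan, whose attribute names can
// drift (e.g. in casing) from what the user's query declared.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[1]").getOrCreate()
val qe = spark.sql("SELECT 1 AS id").queryExecution

val userFacingColumns = qe.analyzed.output     // names exactly as analyzed
val physicalSchema    = qe.executedPlan.schema // names after planning

// A writer consuming rows from qe.executedPlan should label them with
// userFacingColumns, not with physicalSchema.
```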