[SPARK-25579][SQL] Use quoted attribute names if needed in pushed ORC predicates #22597
```scala
@@ -106,4 +106,14 @@ abstract class OrcTest extends QueryTest with SQLTestUtils with BeforeAndAfterAll
      df: DataFrame, path: File): Unit = {
    df.write.mode(SaveMode.Overwrite).orc(path.getCanonicalPath)
  }

  protected def checkPredicatePushDown(df: DataFrame, numRows: Int, predicate: String): Unit = {
    withTempPath { file =>
      // It needs to repartition data so that we can have several ORC files
      // in order to skip stripes in ORC.
      df.repartition(numRows).write.orc(file.getCanonicalPath)
      val actual = stripSparkFilter(spark.read.orc(file.getCanonicalPath).where(predicate)).count()
      assert(actual < numRows)
    }
  }
}
```

Member (Author), on `checkPredicatePushDown`: @HyukjinKwon . I refactored this since it's repeated three times now.
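The PR title describes quoting attribute names "if needed" before building the pushed-down ORC predicate string. A minimal sketch of such a quoting rule is below; `quoteIfNeeded` is a hypothetical helper written for illustration and is not necessarily the exact Spark implementation:

```scala
// Hypothetical sketch: wrap an attribute name in backticks when it contains
// a dot, which would otherwise be parsed as a nested-field separator in the
// pushed predicate. Names already containing a backtick are left unchanged,
// mirroring the PR discussion that backticked column names are not yet
// supported by the ORC data source.
object QuoteAttributeSketch {
  def quoteIfNeeded(name: String): String =
    if (!name.contains("`") && name.contains(".")) s"`$name`" else name

  def main(args: Array[String]): Unit = {
    println(quoteIfNeeded("id"))       // prints: id
    println(quoteIfNeeded("col.dot"))  // prints: `col.dot`
  }
}
```

With a rule like this, a test such as `checkPredicatePushDown(df, 10, "`col.dot` == 2")` can exercise predicates on column names containing special characters.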
Does this condition take a backtick in the column name into account? For instance,
Thank you for the review. I'll consider that, too.
@HyukjinKwon . Actually, Spark 2.3.2 ORC (native/hive) doesn't support a backtick character in column names; it fails on the write operation. And although Spark 2.4.0 broadens the supported special characters, like `.` and `"` in column names, the backtick character is not handled yet. So, for that one, I'll proceed in another PR since it's an improvement instead of a regression.
Also, cc @gatorsmile and @dbtsai .
For the ORC and AVRO improvement, SPARK-25722 is created.