[SPARK-34385][SQL] Unwrap SparkUpgradeException in v2 Parquet datasource #31497

MaxGekk wants to merge 2 commits into apache:master

Conversation
Kubernetes integration test starting

Kubernetes integration test status success
case e: ParquetDecodingException =>
  if (e.getMessage.contains("Can not read value at")) {
    if (e.getCause.isInstanceOf[SparkUpgradeException]) {
      throw e.getCause
If this is user-facing, is it possible that `getCause` returns null? If it returns null, the message won't be helpful.
In practice, this never happens since ParquetDecodingException always wraps an exception there: https://github.com/apache/parquet-mr/blob/ee30b13bb5c3f6848c76641d3b93c9858e6746cb/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/InternalParquetRecordReader.java#L253-L255
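To illustrate the null-cause concern discussed above, here is a minimal standalone sketch (the class names below are stand-ins, not the real Spark or Parquet classes): `Throwable.getCause` may legally return null, but in Scala `isInstanceOf` evaluates to false on null, so a null cause safely falls through to the default case.

```scala
// Stand-in classes for illustration only (not the actual
// SparkUpgradeException / ParquetDecodingException).
class UpgradeException(msg: String) extends RuntimeException(msg)
class DecodingException(msg: String, cause: Throwable)
    extends RuntimeException(msg, cause)

// Return the wrapped upgrade error when present; otherwise return
// the exception unchanged. isInstanceOf is false for null, so a
// null cause does not match the guard and no NPE is thrown.
def unwrap(e: RuntimeException): Throwable = e match {
  case d: DecodingException
      if d.getMessage.contains("Can not read value at") &&
        d.getCause.isInstanceOf[UpgradeException] =>
    d.getCause
  case other => other
}
```

Calling `unwrap` on a `DecodingException` with a null cause simply returns the outer exception, so the user still sees a message.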
Test build #134955 has finished for PR 31497 at commit
dongjoon-hyun left a comment:
+1, LGTM. Thank you, @MaxGekk and all.
Merged to master.
What changes were proposed in this pull request?
Unwrap SparkUpgradeException from ParquetDecodingException in v2 FilePartitionReader in the same way as the v1 implementation does:
spark/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala, lines 180 to 183 in 3a299aa
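As a rough sketch of the behavior described above (the stub class names below are simplified stand-ins, not the actual Spark source), the v2 reader's error handling after this change works like:

```scala
// Stubs standing in for org.apache.spark.SparkUpgradeException and
// org.apache.parquet.io.ParquetDecodingException, for illustration.
class SparkUpgradeExceptionStub(msg: String) extends RuntimeException(msg)
class ParquetDecodingExceptionStub(msg: String, cause: Throwable)
    extends RuntimeException(msg, cause)

// Wrap a reader's next() call: if Parquet reports a decoding error
// whose cause is an upgrade issue, rethrow the inner exception so
// the user sees the actionable upgrade message instead of a nested
// decoding error.
def nextWithUnwrap(next: () => Boolean): Boolean =
  try next()
  catch {
    case e: ParquetDecodingExceptionStub
        if e.getCause.isInstanceOf[SparkUpgradeExceptionStub] =>
      throw e.getCause
  }
```

Any other exception propagates unchanged, so the change only affects the case where Parquet wrapped a SparkUpgradeException.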
Why are the changes needed?
To make SparkUpgradeException more visible to users.

Does this PR introduce any user-facing change?
Yes, it can: users may now see SparkUpgradeException directly instead of a nested ParquetDecodingException.
How was this patch tested?
By running the affected test suites: