[WIP][SPARK-30538][SQL] Control spark sql output small file by merge small partition #27248
Conversation
This may refer to the wrong JIRA number: it says "SPARK-20538" — did you mean "SPARK-30538"?
```scala
val partitionIndexToSize = parent.mapPartitionsWithIndexInternal((index, part) => {
  // TODO make it more accurate
  Map(index -> rowSize * part.size).iterator
}).collectAsMap()
```
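The snippet above collects an estimated size per partition. As a hypothetical sketch (the object name and grouping strategy below are illustrative, not taken from the patch), those estimates could drive a greedy pass that merges neighbouring partitions until a target size is reached:

```scala
// Sketch only: greedily group neighbouring partition indices so that each
// group's estimated total size stays at or under targetBytes. The resulting
// index groups could then feed a coalesce-style step. This is an assumed
// helper, not code from the PR itself.
object MergeSmallPartitions {
  def groupBySize(sizes: Seq[Long], targetBytes: Long): Seq[Seq[Int]] = {
    val groups = scala.collection.mutable.ArrayBuffer.empty[Seq[Int]]
    var current = scala.collection.mutable.ArrayBuffer.empty[Int]
    var currentBytes = 0L
    sizes.zipWithIndex.foreach { case (size, idx) =>
      // Start a new group when adding this partition would exceed the target.
      if (current.nonEmpty && currentBytes + size > targetBytes) {
        groups += current.toSeq
        current = scala.collection.mutable.ArrayBuffer.empty[Int]
        currentBytes = 0L
      }
      current += idx
      currentBytes += size
    }
    if (current.nonEmpty) groups += current.toSeq
    groups.toSeq
  }
}
```

Only adjacent partitions are merged, which preserves any ordering the upstream stage produced and avoids a shuffle.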
Is it too costly to trigger an action to compute all the RDDs?
> Is it too costly to trigger an action to compute all the RDDs?
This happens after the RDDs are computed; we coalesce the already-computed RDD.
> Is it too costly to trigger an action to compute all the RDDs?
Sorry, I made a mistake: it will recompute the last stage, but it won't recompute all stages.
Yeah, thanks.

Can one of the admins verify this patch?

We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.

Will someone be able to offer review here?
What changes were proposed in this pull request?
Add a mechanism to mitigate the small-file problem in Spark SQL output.
Why are the changes needed?
Spark SQL INSERT queries often generate too many small output files. This change adds a control mechanism that automatically minimizes the number of output files.
Does this PR introduce any user-facing change?
When spark.sql.files.mergeSmallFile.enabled=true is set, neighbouring small partitions are combined, which reduces the number of small output files.
How was this patch tested?
Manually tested with both settings:

spark.sql.files.mergeSmallFile.enabled=false

spark.sql.files.mergeSmallFile.enabled=true
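As a usage sketch, assuming the flag proposed in this PR (it is not a released Spark configuration) and placeholder table names, the setting would be passed like any other SQL conf:

```shell
# Hypothetical usage: spark.sql.files.mergeSmallFile.enabled is introduced
# by this PR, and target_table/source_table are placeholders.
spark-sql \
  --conf spark.sql.files.mergeSmallFile.enabled=true \
  -e "INSERT OVERWRITE TABLE target_table SELECT * FROM source_table"
```

With the flag on, the insert's final stage would merge neighbouring small partitions before writing, so fewer (larger) files land in the target table's directory.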