
Conversation

@JkSelf (Contributor) commented Oct 22, 2019

What changes were proposed in this pull request?

The OptimizeLocalShuffleReader rule is very conservative and gives up optimization as soon as extra shuffles are introduced. It's very likely that most of the added local shuffle readers are fine and only one of them introduces an extra shuffle.

However, it's very hard to make OptimizeLocalShuffleReader optimal; a simple workaround is to run this rule again right before executing a query stage.
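
To illustrate the workaround, here is a minimal sketch in the style of the snippets in this PR (not the actual Spark code; the method name optimizeQueryStage is hypothetical, and queryStageOptimizerRules refers to the rule list shown later in this conversation):

// Sketch: before a query stage is executed, re-apply the stage-level
// optimizer rules (including OptimizeLocalShuffleReader) to that stage's
// sub-plan-tree, so local readers that are safe within this stage are
// added even if the earlier plan-wide pass reverted them.
private def optimizeQueryStage(stagePlan: SparkPlan): SparkPlan = {
  queryStageOptimizerRules.foldLeft(stagePlan) { (plan, rule) =>
    rule.apply(plan)
  }
}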

Why are the changes needed?

To optimize more shuffle readers into local shuffle readers.

Does this PR introduce any user-facing change?

No

How was this patch tested?

Existing unit tests.

@JkSelf changed the title on Oct 22, 2019:
from: fix the flaky test in multi-joins with local shuffle reader
to: [SPARK-29552][SQL] fix the flaky test in multi-joins with local shuffle reader

@JkSelf (Contributor, Author) commented Oct 22, 2019

@cloud-fan Please help review this. Also, thanks for your offline help.

@cloud-fan (Contributor)

ok to test

@cloud-fan (Contributor)

add to whitelist

@transient private val queryStageOptimizerRules: Seq[Rule[SparkPlan]] = Seq(
  ReuseAdaptiveSubquery(conf, subqueryCache),
  // Here we need to put the OptimizeLocalShuffleReader rule before the
  // ReduceNumShufflePartitions rule to avoid further optimization.
Review comment (Contributor):

I think the comment needs to explain 2 things:

  1. why we execute this rule twice
  2. why it must be run before ReduceNumShufflePartitions

@JkSelf (Author) replied:

Updated.

@SparkQA commented Oct 22, 2019

Test build #112459 has finished for PR 26207 at commit 997e994.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

assert(bhj.size == 3)
// An additional shuffle exchange is introduced, so not every shuffle reader
// can be converted to a local shuffle reader.
-checkNumLocalShuffleReaders(adaptivePlan, 1)
+checkNumLocalShuffleReaders(adaptivePlan, 2)
@HeartSaVioR (Contributor) commented Oct 22, 2019

Just to confirm: will this change make the value consistently 2? The expected value has been changed to 2, but it was flaky before (consistently neither 1 nor 2), perhaps depending on the situation/randomness.

You may want to run the same checks I did: 1) run the test alone in local dev, 2) run the test suite in local dev, 3) trigger CI 5 times or so.

@JkSelf (Author) replied:

@HeartSaVioR With this patch, the value will consistently be 2, because we now optimize all the possible local shuffle readers. I have run it both alone in local dev and in the test suite; the value is always 2. Thanks.

@HeartSaVioR (Contributor) replied:

OK thanks for confirming.
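
For context, a hedged sketch of what a test helper like checkNumLocalShuffleReaders plausibly does (the actual helper in AdaptiveQueryExecSuite may differ; LocalShuffleReaderExec is assumed to be the local reader node in master at this point):

// Hypothetical sketch: count the local shuffle reader nodes in the final
// adaptive plan and assert the expected number.
private def checkNumLocalShuffleReaders(plan: SparkPlan, expected: Int): Unit = {
  val numLocalReaders = plan.collect {
    case reader: LocalShuffleReaderExec => reader
  }.length
  assert(numLocalReaders == expected)
}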

@viirya (Member) commented Oct 22, 2019

I think this PR title is not accurate, as this is not just a fix for a flaky test, right?

@JkSelf changed the title on Oct 23, 2019:
from: [SPARK-29552][SQL] fix the flaky test in multi-joins with local shuffle reader
to: [SPARK-29552][SQL] Execute the "OptimizeLocalShuffleReader" rule when creating new query stage and then can optimize the shuffle reader to local shuffle reader as much as possible.

@JkSelf (Contributor, Author) commented Oct 23, 2019

@viirya updated the title. Thanks.

// optimizations should be stage-independent.
@transient private val queryStageOptimizerRules: Seq[Rule[SparkPlan]] = Seq(
  ReuseAdaptiveSubquery(conf, subqueryCache),
  // We will revert all the local shuffle reader nodes in the OptimizeLocalShuffleReader rule
@cloud-fan (Contributor) commented Oct 23, 2019

To polish it a little bit:

When adding local shuffle readers in `OptimizeLocalShuffleReader`, we revert all the local
readers if additional shuffles are introduced. This may be too conservative: maybe there is
only one local reader that introduces an extra shuffle, and we can still keep the other local
readers. Here we re-execute this rule with the sub-plan-tree of a query stage, to make sure
the necessary local readers are added before executing the query stage.
This rule must be executed before `ReduceNumShufflePartitions`, as local shuffle readers
can't change the number of partitions.
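
Putting the two points together, a simplified sketch of the resulting rule ordering (the constructor arguments and the full rule list are abbreviated here, so treat the exact signatures as assumptions):

@transient private val queryStageOptimizerRules: Seq[Rule[SparkPlan]] = Seq(
  ReuseAdaptiveSubquery(conf, subqueryCache),
  // Re-run OptimizeLocalShuffleReader on the stage's sub-plan-tree here...
  OptimizeLocalShuffleReader(conf),
  // ...and only then coalesce shuffle partitions, since a local shuffle
  // reader can't change the number of partitions afterwards.
  ReduceNumShufflePartitions(conf)
)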

…e twice and before ReduceNumShufflePartitions
@SparkQA commented Oct 23, 2019

Test build #112511 has finished for PR 26207 at commit 1bc418e.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA commented Oct 23, 2019

Test build #112516 has finished for PR 26207 at commit b372636.

  • This patch fails due to an unknown error code, -9.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor)

retest this please

@SparkQA commented Oct 23, 2019

Test build #112517 has finished for PR 26207 at commit b372636.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@cloud-fan (Contributor)

thanks, merging to master!

@cloud-fan closed this in 7e8e4c0 on Oct 23, 2019
