[SPARK-33042][SQL][TEST] Add a test case to ensure changes to spark.sql.optimizer.maxIterations take effect at runtime (#29919)
Conversation
(Not a Reviewer) LGTM. Thanks for making this behavior explicitly verified with a test case!

ok to test
    val maxIterationsEnough = 10
    val analyzed = Project(Alias(Literal(iterations), "attr")() :: Nil, OneRowRelation()).analyze

    conf.setConf(SQLConf.OPTIMIZER_MAX_ITERATIONS, maxIterationsNotEnough)

Could you use withSQLConf instead?
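For context, `withSQLConf` is Spark's test helper that sets a conf key only for the duration of a block and restores the previous value afterwards, so a bare `conf.setConf` cannot leak into later tests. A minimal, Spark-free sketch of that set/restore pattern (the mutable `conf` map and all names here are illustrative stand-ins, not Spark's actual implementation):

```scala
import scala.collection.mutable

object ConfSketch {
  // Illustrative stand-in for a conf store; Spark's real SQLConf is richer.
  val conf: mutable.Map[String, String] = mutable.Map()

  // Set the given keys for the duration of `body`, then restore the previous
  // values (or remove keys that were previously unset), even if `body` throws.
  def withSQLConf[T](pairs: (String, String)*)(body: => T): T = {
    val saved = pairs.map { case (k, _) => k -> conf.get(k) }
    pairs.foreach { case (k, v) => conf(k) = v }
    try body
    finally saved.foreach {
      case (k, Some(old)) => conf(k) = old
      case (k, None)      => conf.remove(k)
    }
  }
}
```

A test wrapped in `ConfSketch.withSQLConf("spark.sql.optimizer.maxIterations" -> "5") { ... }` sees the value 5 inside the block, while the surrounding suite sees the map unchanged afterwards.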
    }
    }

    class OptimizerMaxIterationsSuite extends PlanTest {

hm, since AnalysisSuite has tests for ANALYZER_MAX_ITERATIONS, how about making the name more general, e.g., OptimizerSuite?
    val analyzed = Project(Alias(Literal(iterations), "attr")() :: Nil, OneRowRelation()).analyze

    conf.setConf(SQLConf.OPTIMIZER_MAX_ITERATIONS, maxIterationsNotEnough)
    val optimizer = new SimpleTestOptimizer() {

How about using SparkOptimizer instead of SimpleTestOptimizer? That's because the existing test for ANALYZER_MAX_ITERATIONS uses Analyzer directly.

It seems all other Optimizer-related tests are using SimpleTestOptimizer. I guess it is the proper one to be used based on the context.
Test build #129293 has finished for PR 29919 at commit

Kubernetes integration test starting

Kubernetes integration test status success

Kubernetes integration test starting

Kubernetes integration test status success

Test build #129297 has finished for PR 29919 at commit
    }

    class OptimizerSuite extends PlanTest {
      test("Adjust maxIterations") {

How about "Optimizer exceeds max iterations" instead?
    /**
     * A dummy optimizer rule for testing that decrements integer literals until 0.
     */
    object DecrementLiterals extends Rule[LogicalPlan] {

Do we need this dummy rule for this test? Could we just use object SimpleTestOptimizer instead?

This test needs to control the number of iterations needed for the plan. With the default batch and rules, it is not clear how many iterations will be needed, and we should not fix that value.
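The reply above is the crux: with a deterministic dummy rule, the number of fixed-point iterations needed is known exactly, so the test can pick one `maxIterations` value that is provably enough and one that is provably not. A Spark-free sketch of that idea, with a fixed-point loop over plain `Int`s standing in for the optimizer and `DecrementLiterals` (all names here are illustrative, not Spark's actual rule executor):

```scala
object FixedPointSketch {
  // Apply `rule` repeatedly until the value stops changing or `maxIterations`
  // passes have been used. Returns the final value and whether a fixed point
  // was actually reached.
  def execute(start: Int, maxIterations: Int, rule: Int => Int): (Int, Boolean) = {
    var current = start
    var iters = 0
    var changed = true
    while (changed && iters < maxIterations) {
      val next = rule(current)
      changed = next != current
      current = next
      iters += 1
    }
    (current, rule(current) == current)
  }

  // Mirrors the PR's DecrementLiterals idea: decrement a literal until it hits 0,
  // so a start value of n converges after exactly n + 1 passes.
  val decrement: Int => Int = v => if (v > 0) v - 1 else v
}
```

Starting from 5, `execute(5, 10, decrement)` converges to `(0, true)`, while `execute(5, 3, decrement)` is cut off at `(2, false)`; that enough/not-enough distinction is what the new test asserts after changing `spark.sql.optimizer.maxIterations` at runtime.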
Looks ok otherwise.
Kubernetes integration test starting

Kubernetes integration test status success

Test build #129360 has finished for PR 29919 at commit

Merged to master.
What changes were proposed in this pull request?

Add a test case to ensure changes to `spark.sql.optimizer.maxIterations` take effect at runtime.

Why are the changes needed?
Currently, there is only one related test case: https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala#L156

However, that test only checks that the value of the conf can be changed at runtime; it does not check that the updated value is actually used by the Optimizer.
Does this PR introduce any user-facing change?
No
How was this patch tested?

Unit test.