[SPARK-25352][SQL] Perform ordered global limit when limit number is bigger than topKSortFallbackThreshold #22344
```diff
@@ -98,7 +98,8 @@ case class LocalLimitExec(limit: Int, child: SparkPlan) extends UnaryExecNode wi
 /**
  * Take the `limit` elements of the child output.
  */
-case class GlobalLimitExec(limit: Int, child: SparkPlan) extends UnaryExecNode {
+case class GlobalLimitExec(limit: Int, child: SparkPlan,
+    orderedLimit: Boolean = false) extends UnaryExecNode {
```
Contributor
what does orderedLimit mean here?

Member (Author)
It means this global limit won't change the input data order. This is used on the sort + limit case, which is usually taken by `TakeOrderedAndProjectExec`. But if the limit number is more than `topKSortFallbackThreshold`, it's not goes for `TakeOrderedAndProjectExec`.

Contributor
what do you mean by "it's not goes for TakeOrderedAndProjectExec"?

Member (Author)
When the limit number is more than `topKSortFallbackThreshold`, the planner doesn't use `TakeOrderedAndProjectExec` but falls back to a global sort followed by a global limit. `orderedLimit` tells `GlobalLimitExec` to preserve the sorted order of its input in that case.

Contributor
please document it in code.

Member (Author)
Ok, I see. I will document it in the PR.

Contributor
code needs to be documented. we won't find this pr discussion a year from now by looking at the source code, trying to figure out what it means. also the doc needs to be readable. the current doc for the config flag is unfortunately unparsable.

Member (Author)
I see. Let me submit a PR later to address that documentation. Really appreciate your comment.

Contributor
thanks @viirya. can you write a design doc, or put it in the classdoc of limit, on how we handle limits? your sequence of prs is making limits much more complicated (with optimizations) and very difficult to reason about. i think we can make it easier to reason about if we actually document the execution strategy.

Member (Author)
Ok. I will do it too.
```diff
   override def output: Seq[Attribute] = child.output
@@ -126,7 +127,9 @@ case class GlobalLimitExec(limit: Int, child: SparkPlan) extends UnaryExecNode {
     // When enabled, Spark goes to take rows at each partition repeatedly until reaching
     // limit number. When disabled, Spark takes all rows at first partition, then rows
     // at second partition ..., until reaching limit number.
-    val flatGlobalLimit = sqlContext.conf.limitFlatGlobalLimit
+    // The optimization is disabled when it is needed to keep the original order of rows
+    // before global sort, e.g., select * from table order by col limit 10.
+    val flatGlobalLimit = sqlContext.conf.limitFlatGlobalLimit && !orderedLimit

     val shuffled = new ShuffledRowRDD(shuffleDependency)
```
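To illustrate the two strategies described in the comment above, a sketch with plain Scala collections (not Spark internals), for `limit = 3` over three partitions produced by a global sort:

```scala
// After a global sort, every value in a later partition is larger than
// every value in an earlier one.
val partitions = Seq(Seq(1, 2, 3, 4), Seq(5, 6, 7, 8), Seq(9, 10))

// Flat global limit (flatGlobalLimit enabled, orderedLimit = false):
// rows are taken from each partition, so partitions interleave and the
// global order is lost.
val flat = partitions.flatMap(_.take(1)).take(3)   // Seq(1, 5, 9)

// Ordered global limit (orderedLimit = true): partitions are consumed in
// order, so the result is exactly the 3 smallest values.
val ordered = partitions.flatten.take(3)           // Seq(1, 2, 3)
```

This is also exactly what the "Ordered global limit" test below asserts: the first values agree, and every later value from the ordered limit is smaller.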
A new test suite, `LimitSuite` (`@@ -0,0 +1,81 @@`); the `LimitTest` helper it references is part of the added file but not shown in this excerpt:

```scala
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.sql.execution

import scala.util.Random

import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.test.SharedSQLContext

class LimitSuite extends SparkPlanTest with SharedSQLContext {

  private var rand: Random = _
  private var seed: Long = 0

  protected override def beforeAll(): Unit = {
    super.beforeAll()
    seed = System.currentTimeMillis()
    rand = new Random(seed)
  }

  test("Produce ordered global limit if more than topKSortFallbackThreshold") {
    withSQLConf(SQLConf.TOP_K_SORT_FALLBACK_THRESHOLD.key -> "100") {
      val df = LimitTest.generateRandomInputData(spark, rand).sort("a")

      // Below the threshold, no GlobalLimitExec is planned...
      val globalLimit = df.limit(99).queryExecution.executedPlan.collect {
        case g: GlobalLimitExec => g
      }
      assert(globalLimit.size == 0)

      // ...the top-K sort operator handles the query instead.
      val topKSort = df.limit(99).queryExecution.executedPlan.collect {
        case t: TakeOrderedAndProjectExec => t
      }
      assert(topKSort.size == 1)

      // At the threshold, the plan falls back to an ordered global limit.
      val orderedGlobalLimit = df.limit(100).queryExecution.executedPlan.collect {
        case g: GlobalLimitExec => g
      }
      assert(orderedGlobalLimit.size == 1 && orderedGlobalLimit(0).orderedLimit == true)
    }
  }

  test("Ordered global limit") {
    val baseDf = LimitTest.generateRandomInputData(spark, rand)
      .select("a").repartition(3).sort("a")

    withSQLConf(SQLConf.LIMIT_FLAT_GLOBAL_LIMIT.key -> "true") {
      val orderedGlobalLimit = GlobalLimitExec(3, baseDf.queryExecution.sparkPlan,
        orderedLimit = true)
      val orderedGlobalLimitResult = SparkPlanTest.executePlan(orderedGlobalLimit, spark.sqlContext)
        .map(_.getInt(0))

      val globalLimit = GlobalLimitExec(3, baseDf.queryExecution.sparkPlan, orderedLimit = false)
      val globalLimitResult = SparkPlanTest.executePlan(globalLimit, spark.sqlContext)
        .map(_.getInt(0))

      // Global limit without order takes values at each partition sequentially.
      // After global sort, the values in the second partition must be larger than
      // the values in the first partition.
      assert(orderedGlobalLimitResult(0) == globalLimitResult(0))
      assert(orderedGlobalLimitResult(1) < globalLimitResult(1))
      assert(orderedGlobalLimitResult(2) < globalLimitResult(2))
    }
  }
}
```
@viirya sorry to be a little late to the party. This pattern is repeated 4x; can you just move it into a helper function?
@hvanhovell OK. I will create a follow-up PR.
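One possible shape for such a helper, sketched with an assumed name `collectPlanNodes` (not necessarily what the follow-up PR will use):

```scala
import scala.reflect.ClassTag

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.SparkPlan

// Collect all physical plan nodes of a given type from a DataFrame's executed
// plan, folding the repeated `queryExecution.executedPlan.collect { ... }` pattern.
def collectPlanNodes[T <: SparkPlan : ClassTag](df: DataFrame): Seq[T] =
  df.queryExecution.executedPlan.collect { case p: T => p }

// Usage in the tests above, e.g.:
//   assert(collectPlanNodes[GlobalLimitExec](df.limit(99)).isEmpty)
//   assert(collectPlanNodes[TakeOrderedAndProjectExec](df.limit(99)).size == 1)
```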
Also, please add a space in-between `s` and `@`.
btw here we really need to document what the strategies are. when there were only two cases it wasn't a big deal because it'd take a few seconds to understand. but this block is pretty large now and difficult to understand. see the join strategy documentation for example.
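A sketch of the kind of strategy overview being asked for, worded from this PR's discussion (placement and exact wording are assumptions, not the eventual doc):

```scala
/**
 * How limit queries are planned (sketch):
 *
 *  - limit without sort: LocalLimitExec per partition, then GlobalLimitExec.
 *    When the limitFlatGlobalLimit flag is enabled, the global limit takes rows
 *    from every partition instead of draining partitions one by one.
 *  - sort + limit, limit < topKSortFallbackThreshold: TakeOrderedAndProjectExec,
 *    a per-partition top-K followed by a merge; no full global sort is needed.
 *  - sort + limit, limit >= topKSortFallbackThreshold: a full global sort
 *    followed by GlobalLimitExec(orderedLimit = true), which must consume
 *    partitions in order so that the sort order is preserved.
 */
```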