
Commit 7caecae

kimminw00 authored and dongjoon-hyun committed
[MINOR][DOCS] Fix typos at ExecutorAllocationManager.scala
### What changes were proposed in this pull request?

This PR fixes some typos in the `core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala` file.

### Why are the changes needed?

`spark.dynamicAllocation.sustainedSchedulerBacklogTimeout` (N) is only used after `spark.dynamicAllocation.schedulerBacklogTimeout` (M) has been exceeded, so the Scaladoc should say that the first round of executors is added after M seconds of backlog and subsequent rounds after a further N seconds each.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

No test needed; this is a documentation-only change.

Closes #29351 from JoeyValentine/master.

Authored-by: JoeyValentine <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
(cherry picked from commit dc3fac8)
Signed-off-by: Dongjoon Hyun <[email protected]>
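To make the M/N relationship concrete, here is a minimal sketch using real dynamic-allocation config keys; the timeout values are illustrative choices, not defaults. With these settings, the first round of executors is requested once tasks have been backlogged for M seconds, and each later round only after the backlog has persisted for a further N seconds.

```scala
import org.apache.spark.SparkConf

// Illustrative values only; both keys accept time strings such as "5s".
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.schedulerBacklogTimeout", "5s")          // M: triggers the first round
  .set("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", "3s") // N: triggers each later round
```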
1 parent cfe62fc commit 7caecae

File tree: 1 file changed, +2 −2 lines

core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala

Lines changed: 2 additions & 2 deletions
@@ -45,8 +45,8 @@ import org.apache.spark.util.{Clock, SystemClock, ThreadUtils, Utils}
  * executors that could run all current running and pending tasks at once.
  *
  * Increasing the target number of executors happens in response to backlogged tasks waiting to be
- * scheduled. If the scheduler queue is not drained in N seconds, then new executors are added. If
- * the queue persists for another M seconds, then more executors are added and so on. The number
+ * scheduled. If the scheduler queue is not drained in M seconds, then new executors are added. If
+ * the queue persists for another N seconds, then more executors are added and so on. The number
  * added in each round increases exponentially from the previous round until an upper bound has been
  * reached. The upper bound is based both on a configured property and on the current number of
  * running and pending tasks, as described above.
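As a rough illustration of the exponential ramp-up described in the comment above, the following sketch (a hypothetical helper, not Spark's actual implementation) shows how the running executor target could grow when the number added each round doubles and the target is capped at an upper bound.

```scala
// Minimal sketch, illustrative only: the number of executors added per round
// doubles, and the running target never exceeds the upper bound.
def rampUpTargets(start: Int, upperBound: Int): Seq[Int] = {
  val targets = scala.collection.mutable.ArrayBuffer(start)
  var toAdd = 1
  while (targets.last < upperBound) {
    targets += math.min(targets.last + toAdd, upperBound)
    toAdd *= 2
  }
  targets.toSeq
}

// Example: rampUpTargets(1, 20) == Seq(1, 2, 4, 8, 16, 20)
```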

Comments (0)