
Conversation

@redsanket

What changes were proposed in this pull request?

We want to change the default of spark.shuffle.server.chunkFetchHandlerThreadsPercent to 100. It is currently set to 0, which means that even when io.serverThreads > 0, the number of chunk-fetch handler threads ends up as 2 * #cores instead of io.serverThreads. We want the thread count to default to io.serverThreads when this percentage is not set at all; note that an explicit value of 0 also means 2 * #cores.
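
To make the intended behavior concrete, here is a minimal sketch of the resolution logic described above; this is not the actual Spark source, and the class, method, and parameter names are illustrative assumptions:

```java
// Minimal sketch, not the actual Spark implementation: it only mirrors the
// behavior described in this PR (a percentage of io.serverThreads, with a
// Netty-style 2 * #cores fallback).
public final class ChunkFetchThreadsSketch {

  /**
   * @param ioServerThreads value of io.serverThreads (0 means unset)
   * @param threadsPercent  value of spark.shuffle.server.chunkFetchHandlerThreadsPercent
   * @param availableCores  number of cores visible to the shuffle server
   */
  static int chunkFetchHandlerThreads(int ioServerThreads, int threadsPercent, int availableCores) {
    // When io.serverThreads is unset, Netty's event-loop default of 2 * #cores applies.
    int baseThreads = ioServerThreads > 0 ? ioServerThreads : 2 * availableCores;
    // With the old default of 0 this evaluates to 0 handler threads, and the caller
    // then falls back to Netty's 2 * #cores, silently ignoring io.serverThreads.
    return (int) Math.ceil(baseThreads * threadsPercent / 100.0);
  }

  public static void main(String[] args) {
    System.out.println(chunkFetchHandlerThreads(10, 100, 8)); // new default of 100 -> 10
    System.out.println(chunkFetchHandlerThreads(10, 0, 8));   // old default of 0 -> 0, i.e. 2 * #cores fallback
  }
}
```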

How was this patch tested?

Manual

@redsanket redsanket changed the title Change the spark.shuffle.server.chunkFetchHandlerThreadsPercent default to 100 [SPARK-25641] Change the spark.shuffle.server.chunkFetchHandlerThreadsPercent default to 100 Oct 4, 2018
@redsanket
Author

@tgravescs @abellina please take a look, thanks

@tgravescs
Contributor

ok to test

Contributor

@abellina abellina left a comment

+1 pending tests

@SparkQA

SparkQA commented Oct 4, 2018

Test build #96943 has finished for PR 22628 at commit 0258d19.

  • This patch fails Java style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Oct 4, 2018

Test build #96951 has finished for PR 22628 at commit a95c608.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@tgravescs
Contributor

+1

@srowen
Member

srowen commented Oct 6, 2018

I'm not clear from the description what the issue is. I get that the number of threads is undesirable but can you illustrate with a clearer example?

@tgravescs
Contributor

A default of 0 gets you the Netty default, which is always 2 * the # of cores, and it ignores io.serverThreads even if you set it to control the number of shuffle threads. With a default of 100, the setting actually applies io.serverThreads, or, if you don't have it set, falls back to 2 * the # of cores. So having a default of 100 is better here. For example:

  • io.serverThreads = 10 and spark.shuffle.server.chunkFetchHandlerThreadsPercent = 100: the number of threads for chunked fetches is 10
  • io.serverThreads = 10 and spark.shuffle.server.chunkFetchHandlerThreadsPercent = 0: the number of threads for chunked fetches is 2 * #cores

It's better to have a default of 100 to keep the current behavior; the configuration sketch below illustrates the first case.
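
For reference, a minimal configuration sketch of that first case. The spark.shuffle.io.serverThreads key is assumed from the io.serverThreads naming used in this thread, the class name is illustrative, and whether these server-side settings belong in the application's conf or the external shuffle service's conf depends on the deployment:

```java
import org.apache.spark.SparkConf;

// Hypothetical setup for the first case above: 10 server I/O threads, with the
// chunk-fetch handler allowed to use 100% of them. The spark.shuffle.io.serverThreads
// key name is assumed from this discussion rather than taken from the docs.
public final class ShuffleThreadsConfigExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .set("spark.shuffle.io.serverThreads", "10")
        .set("spark.shuffle.server.chunkFetchHandlerThreadsPercent", "100");

    // Expectation per the examples above: 10 chunk-fetch handler threads.
    // With the old default of 0, the same setup would instead get 2 * #cores.
    System.out.println(conf.get("spark.shuffle.io.serverThreads"));
    System.out.println(conf.get("spark.shuffle.server.chunkFetchHandlerThreadsPercent"));
  }
}
```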

@redsanket
Author

Thanks @tgravescs for explaining the issue

@tgravescs
Contributor

pulled into master, thanks @redsanket

@asfgit asfgit closed this in 6353425 Oct 8, 2018
@zsxwing
Member

zsxwing commented Oct 8, 2018

@tgravescs
Contributor

Not sure what you are asking; if you are saying it's flaky, then file a JIRA and please cc me and Sanket.

The link above doesn't load for me; please give me permissions if needed.

jackylee-ch pushed a commit to jackylee-ch/spark that referenced this pull request Feb 18, 2019
…sPercent default to 100

Closes apache#22628 from redsanket/SPARK-25641.

Lead-authored-by: Sanket Chintapalli <[email protected]>
Co-authored-by: Sanket Chintapalli <[email protected]>
Signed-off-by: Thomas Graves <[email protected]>
XUJiahua pushed a commit to XUJiahua/spark that referenced this pull request Apr 9, 2020
…sPercent default to 100

Closes apache#22628 from redsanket/SPARK-25641.

Lead-authored-by: Sanket Chintapalli <[email protected]>
Co-authored-by: Sanket Chintapalli <[email protected]>
Signed-off-by: Thomas Graves <[email protected]>
(cherry picked from commit 6353425)
(cherry picked from commit b4cc97a3579ac01061b8c3fce52ea16542aa07e4)

Change-Id: If487d8ac1366933bc3eb4fe9822c586f23081904
(cherry picked from commit e3b58210eb1c5c2a9a040a7b96305043c927bfbf)