[SPARK-18353][CORE] spark.rpc.askTimeout default value is not 120s #15833
Conversation
…doc about spark.rpc.askTimeout default
val conf = new SparkConf()
val driverArgs = new ClientArguments(args)

conf.set("spark.rpc.askTimeout", "10")
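For context, this is roughly how the ask timeout is resolved elsewhere in Spark (a paraphrase of RpcUtils.askRpcTimeout as I understand it, not code from this PR):

```scala
// Paraphrase of org.apache.spark.util.RpcUtils.askRpcTimeout:
// spark.rpc.askTimeout falls back to spark.network.timeout, and only then
// to the documented 120s default. The hard-coded conf.set("spark.rpc.askTimeout", "10")
// above short-circuits that resolution to 10 seconds.
def askRpcTimeout(conf: SparkConf): RpcTimeout =
  RpcTimeout(conf, Seq("spark.rpc.askTimeout", "spark.network.timeout"), "120s")
```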
cc @zsxwing was there a reason this timeout is hard coded here?
I guess part of the reason is that the master/client have low memory usage and as a result are unlikely to need long timeouts due to GC pauses.
@srowen I'm not sure if this is a good change.
I see. Does that mean the default should be 10s? The real issue is just that the default ends up being "10" instead of the advertised 120s.
This doesn't actually impact the default for Spark apps, only the standalone cluster, doesn't it?
I think that's right. I suspect either it's OK to let them use the default timeout of 120s in the end, or it probably bears noting the different practical default of 10s in the docs.
Yeah, maybe we should just note it with a comment explaining why it is 10s.
OK, see my current version; it at least lets spark.rpc.askTimeout be set to something besides 10s in standalone. See the https://issues.apache.org/jira/browse/SPARK-18353 discussion too.
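For reference, a minimal sketch of the shape of that change in Client.scala, pieced together from this thread (not the verbatim committed diff):

```scala
val conf = new SparkConf()
val driverArgs = new ClientArguments(args)

// Keep the historical 10-second ask timeout for the standalone Client,
// but only when the user has not set spark.rpc.askTimeout explicitly.
if (!conf.contains("spark.rpc.askTimeout")) {
  conf.set("spark.rpc.askTimeout", "10")
}
```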
Test build #68415 has finished for PR 15833 at commit
Test build #68465 has finished for PR 15833 at commit
val driverArgs = new ClientArguments(args)

conf.set("spark.rpc.askTimeout", "10")
if (!conf.contains("spark.rpc.askTimeout")) {
Can this ever be set? I don't remember whether the standalone master/worker reads config files or not.
Also, we should probably add this to config/package.scala so we don't duplicate it.
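Purely as an illustration, an entry in config/package.scala might look roughly like this (the val name, doc text, and placement are assumptions, not part of this PR):

```scala
// Hypothetical entry in org.apache.spark.internal.config (illustrative only):
private[spark] val RPC_ASK_TIMEOUT = ConfigBuilder("spark.rpc.askTimeout")
  .doc("Duration for an RPC ask operation to wait before timing out.")
  .stringConf
  .createOptional
```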
OK, to move forward on this: @rxin I think this ends up affecting application launch because this is the bit that runs applications rather than starts workers, if that's what you mean. It will process --conf from the user command line but then override it here. The report in the JIRA seems to confirm that. At the least, this change would not affect the current behavior.
@andrewor14 that makes sense, though I note that the other spark.rpc configs are not in config/package.scala. The ConfigBuilder doesn't quite seem able to express a property whose default is another property either. Should I still try to refactor that?
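For example (an illustrative invocation; the master URL, class, and jar names are made up), a user launching a driver through the standalone Client could then override the timeout with --conf rather than always getting 10s:

```
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --conf spark.rpc.askTimeout=120s \
  --class com.example.MyApp my-app.jar
```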
It's a separate issue, so we can fix it later, but we should fix it since we're just duplicating strings.
Test build #68771 has finished for PR 15833 at commit
Going to merge this because it doesn't change any default behavior; it just appears to let this configuration be set by the user in some contexts where it was not possible before.
Merged to master/2.1
## What changes were proposed in this pull request?
Avoid hard-coding spark.rpc.askTimeout to non-default in Client; fix doc about spark.rpc.askTimeout default
## How was this patch tested?
Existing tests
Author: Sean Owen <[email protected]>
Closes #15833 from srowen/SPARK-18353.
(cherry picked from commit 8b1e108)
Signed-off-by: Sean Owen <[email protected]>
What changes were proposed in this pull request?
Avoid hard-coding spark.rpc.askTimeout to non-default in Client; fix doc about spark.rpc.askTimeout default
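A rough sketch of the documentation side of that change (assuming the configuration.md property-table layout; this wording is an assumption, not quoted from the commit):

```
spark.rpc.askTimeout    Default: spark.network.timeout
    Duration for an RPC ask operation to wait before timing out.
```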
How was this patch tested?
Existing tests