[SPARK-18353][CORE] spark.rpc.askTimeout default value is not 120s #15833
Changes from all commits
```diff
@@ -221,7 +221,9 @@ object Client {
     val conf = new SparkConf()
     val driverArgs = new ClientArguments(args)

-    conf.set("spark.rpc.askTimeout", "10")
+    if (!conf.contains("spark.rpc.askTimeout")) {
```
Inline review comments on the added guard:

Contributor: can this ever be set? I don't remember whether the standalone master/worker reads config files or not.

Contributor: also we should probably add this to …

Member (Author): OK, to move forward on this: @rxin, I think this ends up affecting application launch, because this is the bit that runs applications rather than starts workers, if that's what you mean. It will process … @andrewor14, that makes sense, though I note that the other …

Contributor: it's a separate issue, so we can fix it later, but we should fix it since we're just duplicating strings.
The hunk continues:

```diff
+      conf.set("spark.rpc.askTimeout", "10s")
+    }
     Logger.getRootLogger.setLevel(driverArgs.logLevel)

     val rpcEnv =
```
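For illustration only (not part of the patch): the no-arg SparkConf constructor copies any spark.* JVM system properties into the conf, so that is one way the key can already be present when the guard above runs. A minimal sketch; the 60s value and the system-property route are just an example:

```scala
import org.apache.spark.SparkConf

// Simulate a value arriving via -Dspark.rpc.askTimeout=60s on the client JVM;
// new SparkConf() picks up spark.* system properties by default.
sys.props("spark.rpc.askTimeout") = "60s"
val conf = new SparkConf()

// The guarded default from the diff above: apply 10s only when nothing was set.
if (!conf.contains("spark.rpc.askTimeout")) {
  conf.set("spark.rpc.askTimeout", "10s")
}

assert(conf.get("spark.rpc.askTimeout") == "60s") // the user's value survives
```

With the old unconditional conf.set("spark.rpc.askTimeout", "10"), the assertion would fail because any user-supplied value was overwritten.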
cc @zsxwing: was there a reason this timeout is hard-coded here?
I guess part of the reason is that the master/client have low memory usage and, as a result, are unlikely to hit long timeouts due to GC.
@srowen I'm not sure if this is a good change.
I don't know. @pwendell added it in 3d939e5 and changed the value to 10 in another commit.
I see; does that mean that the default should be 10s? The real issue is just that the default ends up being "10" instead of the advertised 120s.
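A rough sketch of the fallback chain that the documented 120s refers to (an approximation for illustration, not the actual Spark resolution code): spark.rpc.askTimeout is consulted first, then spark.network.timeout, then the 120s default.

```scala
import org.apache.spark.SparkConf

// Approximate illustration of the documented fallback: the ask timeout falls
// back to spark.network.timeout and finally to 120s when neither key is set.
def resolveAskTimeout(conf: SparkConf): String =
  conf.getOption("spark.rpc.askTimeout")
    .orElse(conf.getOption("spark.network.timeout"))
    .getOrElse("120s")

// Before this patch, Client.scala had already forced "10" into the conf, so a
// standalone cluster-mode submission never reached the 120s default shown here.
println(resolveAskTimeout(new SparkConf())) // "120s" when nothing is configured
```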
This doesn't actually impact the default for Spark apps, only the standalone cluster client, doesn't it?
I think that's right. I suspect either it's OK to let them use the default timeout of 120s in the end, or it probably bears noting the different practical default of 10s in the docs.
Yeah, maybe we should just note it with a comment explaining why it is 10s.
OK, see my current version; that at least lets spark.rpc.askTimeout be set to something besides 10s in standalone. See the discussion at https://issues.apache.org/jira/browse/SPARK-18353 too.