
Conversation

@srowen (Member) commented Nov 9, 2016

What changes were proposed in this pull request?

Avoid hard-coding spark.rpc.askTimeout to a non-default value in Client; fix the documentation of the spark.rpc.askTimeout default.
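
For context, a minimal sketch of the shape of the change, assuming the guarded form visible in the review diff further below (the exact committed code may differ):

import org.apache.spark.SparkConf

val conf = new SparkConf()
// Fall back to the short hard-coded timeout only when the user has not already
// supplied spark.rpc.askTimeout (e.g. via --conf or spark-defaults.conf).
if (!conf.contains("spark.rpc.askTimeout")) {
  conf.set("spark.rpc.askTimeout", "10")
}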

How was this patch tested?

Existing tests

val conf = new SparkConf()
val driverArgs = new ClientArguments(args)

conf.set("spark.rpc.askTimeout", "10")
Contributor (on the code above):

cc @zsxwing, was there a reason this timeout is hard-coded here?

Contributor:

I guess part of the reason is that the master/client have low memory usage and, as a result, are unlikely to hit long delays due to GC.

@srowen I'm not sure if this is a good change.

Member:

I don't know. @pwendell added it in 3d939e5 and changed the value to 10 in another commit.

Member Author:

I see. Does that mean the default should be 10s? The real issue is just that the default ends up being "10" instead of the advertised 120s.
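
For reference, a tiny self-contained sketch (not Spark's actual RpcUtils code) of the documented fallback chain under discussion: spark.rpc.askTimeout first, then spark.network.timeout, then the advertised 120s default.

// Illustrative only; mirrors the documented fallback, not Spark's implementation.
def effectiveAskTimeout(conf: Map[String, String]): String =
  conf.get("spark.rpc.askTimeout")
    .orElse(conf.get("spark.network.timeout"))
    .getOrElse("120s")

// effectiveAskTimeout(Map.empty) == "120s" (the advertised default), whereas the
// unconditional set("spark.rpc.askTimeout", "10") in Client forces "10" regardless.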

Contributor:

This doesn't actually impact the default for Spark apps, but only the standalone cluster, doesn't it?

Member Author:

I think that's right. I suspect that either it's OK to just let this use the default timeout of 120s in the end, or else it probably bears noting the different practical default of 10s in the docs.

Contributor:

Yeah, maybe we should just note it with a comment explaining why it is 10s.
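
Something along these lines, perhaps (illustrative wording only, drawing on the GC point made earlier; not the comment that was actually committed):

// Use a short 10s ask timeout by default: this client only talks to the
// standalone Master, which has low memory usage and is unlikely to stall in
// long GC pauses, so failing fast beats waiting out the full network timeout.
conf.set("spark.rpc.askTimeout", "10")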

Member Author:

OK, see my current version; that at least lets spark.rpc.askTimeout be set to something besides 10s in standalone. See the discussion at https://issues.apache.org/jira/browse/SPARK-18353 too.


SparkQA commented Nov 9, 2016

Test build #68415 has finished for PR 15833 at commit ae2fb2f.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


SparkQA commented Nov 10, 2016

Test build #68465 has finished for PR 15833 at commit 55cd875.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

    val driverArgs = new ClientArguments(args)

-   conf.set("spark.rpc.askTimeout", "10")
+   if (!conf.contains("spark.rpc.askTimeout")) {
Contributor (on the diff above):

Can this ever be set? I don't remember whether the standalone master/worker reads config files or not.

Contributor:

Also, we should probably add this to config/package.scala so we don't duplicate it.

Member Author:

OK, to move forward on this: @rxin I think this ends up affecting application launch, because this is the bit that runs applications rather than starting workers, if that's what you mean. It will process --conf from the user command line but then override it here. The report in the JIRA seems to confirm that. At the least, this change would not affect the current behavior.

@andrewor14 that makes sense, though I note that the other spark.rpc configs are not in config/package.scala. The ConfigBuilder doesn't quite seem able to express a property whose default is another property either. Should I still try to refactor that?
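
As a rough illustration (a hypothetical sketch, not the actual entry), the key could be declared once in config/package.scala along these lines, with an optional entry sidestepping the default-is-another-property limitation by leaving the fallback to be resolved at the read site:

// Hypothetical entry inside org.apache.spark.internal.config's package object
// (ConfigBuilder is available there without an import):
private[spark] val RPC_ASK_TIMEOUT = ConfigBuilder("spark.rpc.askTimeout")
  .stringConf
  .createOptional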

Contributor:

It's a separate issue, so we can fix it later, but we should fix it eventually since we're just duplicating strings.


SparkQA commented Nov 17, 2016

Test build #68771 has finished for PR 15833 at commit e272739.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


srowen commented Nov 18, 2016

Going to merge this because it doesn't change any default behavior; it just appears to let this configuration be set by the user in some contexts where that was not possible before.


srowen commented Nov 19, 2016

Merged to master/2.1

asfgit closed this in 8b1e108 Nov 19, 2016
asfgit pushed a commit that referenced this pull request Nov 19, 2016

Author: Sean Owen <[email protected]>

Closes #15833 from srowen/SPARK-18353.

(cherry picked from commit 8b1e108)
Signed-off-by: Sean Owen <[email protected]>
srowen deleted the SPARK-18353 branch November 19, 2016 12:07
uzadude pushed a commit to uzadude/spark that referenced this pull request Jan 27, 2017