[SPARK-29330][CORE][YARN] Allow users to chose the name of Spark Shuffle service #26000
Conversation
Allow creation of YarnShuffleService with custom service name
public YarnShuffleService() {
-    super("spark_shuffle");
+    this("spark_shuffle");
Is this still hardcoded? Should we use configured SHUFFLE_SERVICE_NAME?
It is still hardcoded. I haven't found a way to access the Spark configuration from that constructor, and org.apache.hadoop.yarn.server.api.AuxiliaryService requires the name. Do you have a suggestion for how that could be done?
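To illustrate the ordering problem being described, here is a minimal sketch using stand-in classes (NOT the real org.apache.hadoop API): the base service fixes its name in the constructor, while configuration is only delivered later during init, so the name cannot come from the config:

```java
import java.util.Map;

// Stand-ins for the Hadoop service classes: the point is the ordering --
// the service name is fixed in the constructor, but the configuration
// only arrives later via serviceInit().
abstract class AbstractServiceSketch {
    private final String name;

    protected AbstractServiceSketch(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Configuration is delivered only after construction.
    public void init(Map<String, String> conf) {
        serviceInit(conf);
    }

    protected abstract void serviceInit(Map<String, String> conf);
}

class ShuffleServiceSketch extends AbstractServiceSketch {
    ShuffleServiceSketch() {
        // No configuration is visible yet, so the name must be a constant.
        super("spark_shuffle");
    }

    @Override
    protected void serviceInit(Map<String, String> conf) {
        // conf is first seen here -- too late to influence the name above.
    }
}
```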
As I commented below #26000 (comment), if this is just for yarn, put it in YarnShuffleService, like "spark.yarn.shuffle.stopOnFailure"?
It is hardcoded here. Once the shuffle service name is configured, won't they mismatch? Will that cause a problem?
It is hardcoded here, though HDP hardcodes another value (spark2_shuffle). While vanilla Spark would keep working as is and use the name spark_shuffle, the new configuration option would allow users to point Spark at a non-vanilla shuffle service.
The changes to that class are made only to test that changing the service name and the configuration play nicely together.
It seems impossible to register the service with the name passed in the configuration because the configuration is passed after the class is instantiated.
I see. So this config can only be used to let Spark choose which service to connect to. It cannot change the name of the Shuffle Service.
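A sketch of what that distinction means on the client side (illustrative names; the lookup is simplified, not the actual Spark code): the setting only selects which already-running aux-service name the client asks YARN for, it cannot rename the running service:

```java
import java.util.Map;

// Sketch: the config merely picks the aux-service name requested from YARN
// when launching executors; the service itself keeps whatever name it was
// registered under.
class ShuffleServiceNameSketch {
    static String shuffleServiceName(Map<String, String> sparkConf) {
        // Key and default as proposed in this PR discussion.
        return sparkConf.getOrDefault("spark.yarn.shuffle.service.name", "spark_shuffle");
    }
}
```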
Yes. I guess I could implement a workaround that reads the setting from the default Configuration, but that, at least theoretically, wouldn't guarantee it matches the exact configuration passed during service initialization.
ConfigBuilder("spark.shuffle.service.port").intConf.createWithDefault(7337)

private[spark] val SHUFFLE_SERVICE_NAME =
  ConfigBuilder("spark.shuffle.service.name").stringConf.createWithDefault("spark_shuffle")
This is just for the YARN external shuffle service, right? If so, just put it in YarnShuffleService and name it spark.yarn.shuffle.service.name?
There's spark.shuffle.service.port right above it, which specifies the port of said service. If the ports match but the name doesn't, the service won't be located.
EDIT: I would think being consistent with the names is more important than properly namespacing the option.
Not sure if I understand you correctly. Only YARN needs this config; can't it be in YarnShuffleService like "spark.yarn.shuffle.stopOnFailure"?
You only need to make sure ExecutorRunnable and YarnShuffleService use the same config and that it is configured correctly in yarn-site.xml, isn't that right? Or am I missing anything here?
Oh. Okay. I'm convinced.
Will move the option and the docs to YARN namespace.
docs/configuration.md
Outdated
</td>
</tr>
<tr>
<td><code>spark.shuffle.service.name</code></td>
For YARN, isn't docs/running-on-yarn.md more suitable for this documentation?
You're right. Will move it there.
    this("spark_shuffle");
}

public YarnShuffleService(String serviceName) {
Is a new constructor needed? Couldn't you just read the configured service name inside the original constructor and do the initialization there?
This is mostly to be able to test it, since the yarn.nodemanager.aux-services.spark_shuffle.class option assumes a no-args constructor. Maybe make it protected?
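The pattern under discussion, sketched in isolation (simplified, not the actual Spark class): a reflective no-args constructor keeps the hardcoded default, while a protected name-taking constructor lets tests choose another name:

```java
class ShuffleServiceCtorSketch {
    private final String serviceName;

    // YARN instantiates aux services reflectively, so this must stay
    // no-args and therefore keeps the hardcoded default name.
    public ShuffleServiceCtorSketch() {
        this("spark_shuffle");
    }

    // Protected in the proposal: only subclasses (e.g. in tests) can
    // pick a different name.
    protected ShuffleServiceCtorSketch(String serviceName) {
        this.serviceName = serviceName;
    }

    public String getServiceName() {
        return serviceName;
    }
}
```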
…and move that option to appropriate place
</td>
</tr>
<tr>
<td><code>spark.yarn.shuffle.service.name</code></td>
I added this option under Spark Properties section and not under Configuring the External Shuffle Service section because it's a "client" setting, not the setting of the external shuffle service.
ok to test
Test build #111780 has finished for PR 26000 at commit
docs/running-on-yarn.md
Outdated
<td><code>spark.yarn.shuffle.service.name</code></td>
<td><code>spark_shuffle</code></td>
<td>
Name of the external shuffle service.
I think we need more description here. This isn't setting what the service runs as; you have to configure that via YARN. This is the external shuffle service name the executors use when launching the container.
That's what the word "external" is for, isn't it?
Should I reword it as follows?
Name of the external shuffle service used by executors.
The external service is configured and started by YARN (see Configuring the External Shuffle Service for details). The Apache Spark distribution uses the name spark_shuffle, but other distributions (e.g. HDP) might use other names.
Many newbies aren't familiar with what the external shuffle service is, or even YARN, so it's best to be clear. How about:
The name of the external shuffle service.
The external shuffle service itself is configured and started by YARN (see Configuring the External Shuffle Service for details). The name specified here must match the name YARN used.
protected YarnShuffleService(String serviceName) {
    super(serviceName);
    logger.info("Initializing YARN shuffle service for Spark");
Let's change the log statement to include the service name.
    this("spark_shuffle");
}

protected YarnShuffleService(String serviceName) {
So the name by itself isn't going to be enough. If you really want it configurable, we are going to have to make the port configurable too. For instance, the config name for the port, spark.shuffle.service.port, needs to be able to be something like spark.shuffle.service.{serviceName}.port. Otherwise all the Spark shuffle servers will try to get the same port and fail. The only other option would be to use 0 for an ephemeral port, but
The name specified here is actually useful only in tests. YARN's service instantiation logic wouldn't even pass the name of the service used in the config to the instantiated service. I guess that's the main reason the names and ports are hardcoded or bound to non-namespaced configuration keys.
The way HDP overcomes that is by providing different classpaths with different implementations for different versions of the service (spark_shuffle for Spark 1.6.x and spark2_shuffle for Spark 2+). The only way I see it's possible to pass different parameters to the same implementation of the service is by providing different configs on the classpath.
I will add a comment here stating that the name is actually only used for the tests, but otherwise would always be hardcoded to spark_shuffle.
I think there are a few things getting muddled together here -- one is how you'd support running two shuffle services, and the other is how a client could choose which shuffle service it talks to.
The client can already set the port for the shuffle server with spark.shuffle.service.port, it just can't set the name used in the ExecutorRunnable.
The other thing to add about how the names of the shuffle servers matter in yarn is that the name goes into yarn-site.xml as described in the "Configuring the External Shuffle Service" in running-on-yarn.md.
@tgravescs, @viirya Any other concerns?
Yes, I'm not sure how much this makes sense, since you still have to make code modifications and rebuild the shuffle service. You have less to modify, but I don't want to confuse users either.
Maybe rename the option from
squito
left a comment
I'm not really sure how I feel about this change. First and foremost, I don't think Apache Spark needs to be making changes to accommodate talking to an HDP cluster over entirely internal implementation details.
Also, are we opening the door here for Apache Spark supporting running multiple external services? It may not be hard, but do we want to test it and document it? I'm worried about the confusion this will cause.
That said, this is a relatively small change to allow the clients to choose which service they want to talk to, if multiple are running. Maybe that'll become more common with Spark 3 coming out, where users will want a Spark 2.4 and a Spark 3 shuffle service running.
<td><code>spark_shuffle</code></td>
<td>
The name of the external shuffle service.
The external shuffle service itself is configured and started by YARN (see [Configuring the External Shuffle Service](#configuring-the-external-shuffle-service) for details). The name specified here must match the name used in the YARN service implementation.
I think it would help to mention that it must match the name given to the shuffle service in yarn-site.xml under yarn.nodemanager.aux-services.
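For reference, this is roughly how the service name is declared on the YARN side in yarn-site.xml, as described in the "Configuring the External Shuffle Service" section of running-on-yarn.md; the client-side setting would have to match the aux-service key used here:

```xml
<!-- yarn-site.xml: registers the Spark shuffle service under the name
     spark_shuffle; a distribution using another name changes both keys. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```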
cc @attilapiros
We're closing this PR because it hasn't been updated in a while. This isn't a judgement on the merit of the PR in any way. It's just a way of keeping the PR queue manageable.
thanks for this
What changes were proposed in this pull request?
This PR allows users to configure the name of the Spark Shuffle Service used, so that vanilla Spark can run on HDP Hadoop clusters.
Why are the changes needed?
As of now, Spark uses the hardcoded value spark_shuffle as the name of the Shuffle Service. The HDP distribution of Spark, on the other hand, uses spark2_shuffle; this is done to be able to run both Spark 1.6 and Spark 2.x on the same Hadoop cluster. Running vanilla Spark on an HDP cluster with only the Spark 2.x shuffle service (HDP flavor) running becomes impossible due to the shuffle service name mismatch.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
The test cases were changed to use the new option.
A patched version of the spark_yarn library was used to run jobs on an HDP cluster.