[SPARK-1753 / 1773 / 1814] Update outdated docs for spark-submit, YARN, standalone etc. #701
@@ -43,18 +43,19 @@ Unlike in Spark standalone and Mesos mode, in which the master's address is spec

To launch a Spark application in yarn-cluster mode:

-    ./bin/spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]
+    ./bin/spark-submit --class path.to.your.Class --master yarn-cluster --deploy-mode cluster [options] <app jar> [app options]

For example:

    $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
        --master yarn-cluster \
+        --deploy-mode cluster \
        --num-executors 3 \
        --driver-memory 4g \
        --executor-memory 2g \
        --executor-cores 1 \
        examples/target/scala-{{site.SCALA_BINARY_VERSION}}/spark-examples-assembly-{{site.SPARK_VERSION}}.jar \
-        yarn-cluster 5
+        10

The above starts a YARN client program which starts the default Application Master. SparkPi will then run as a child thread of the Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running. Refer to the "Debugging your Application" section below for how to see driver and executor logs.
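Aside from the client's own polling described above, the application's status can also be checked directly with the YARN CLI; a minimal sketch (the application ID is a placeholder):

    yarn application -list                                      # running applications and their IDs
    yarn application -status application_1400000000000_0001    # state, progress, tracking URL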
@@ -68,11 +69,12 @@ In yarn-cluster mode, the driver runs on a different machine than the client, so

    $ ./bin/spark-submit --class my.main.Class \
        --master yarn-cluster \
+        --deploy-mode cluster \
        --jars my-other-jar.jar,my-other-other-jar.jar \
        my-main-jar.jar \
-        yarn-cluster 5
+        [app arguments]
Contributor:
Same as above, --master should probably just be "yarn". And to be concrete like the other parts of the example, maybe use "apparg1 apparg2" instead of "[app arguments]"?
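A sketch of the invocation with both of the reviewer's suggestions applied (apparg1 and apparg2 are hypothetical app arguments, per the comment):

    $ ./bin/spark-submit --class my.main.Class \
        --master yarn \
        --deploy-mode cluster \
        --jars my-other-jar.jar,my-other-other-jar.jar \
        my-main-jar.jar \
        apparg1 apparg2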
-# Viewing logs
+# Debugging your Application

In YARN terminology, executors and application masters run inside "containers". YARN has two modes for handling container logs after an application has completed. If log aggregation is turned on (with the yarn.log-aggregation-enable config), container logs are copied to HDFS and deleted on the local machine. These logs can be viewed from anywhere on the cluster with the "yarn logs" command.
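The "yarn logs" command referenced above takes the application ID; a minimal sketch with a placeholder ID:

    yarn logs -applicationId application_1400000000000_0001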
@@ -82,6 +84,12 @@ will print out the contents of all log files from all containers from the given

When log aggregation isn't turned on, logs are retained locally on each machine under YARN_APP_LOGS_DIR, which is usually configured to /tmp/logs or $HADOOP_HOME/logs/userlogs depending on the Hadoop version and installation. Viewing logs for a container requires going to the host that contains them and looking in this directory. Subdirectories organize log files by application ID and container ID.
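A sketch of what inspecting these local logs might look like (the directory, application ID, and container ID are illustrative; as the text says, the actual location depends on the Hadoop version and installation):

    # On the host that ran the container:
    cd $HADOOP_HOME/logs/userlogs/application_1400000000000_0001/container_1400000000000_0001_01_000001
    cat stderr stdout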
+To review per container launch environment, increase yarn.nodemanager.delete.debug-delay-sec to a
+large value (e.g. 36000), and then access the application cache through yarn.nodemanager.local-dirs
+on the nodes on which containers are launched. This directory contains the launch script, jars, and
+all environment variables used for launching each container. This process is useful for debugging
+classpath problems in particular.

Contributor:
Nit: per-container should be hyphenated.

Contributor:
This isn't available to all users; it only applies if you are running your own cluster and have control over the NodeManager settings. I believe it also requires a NodeManager restart. On a hosted cluster you won't be able to change this, so I think we should add something about that.

Author:
Good point, will do.
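An illustration of the workflow described in the added text, under the caveats from the review (the property name comes from the text; the local-dir path, IDs, and restart step are assumptions about a typical self-managed cluster):

    # In yarn-site.xml on each NodeManager -- needs admin access and a
    # NodeManager restart, so not possible on a hosted cluster:
    #   <property>
    #     <name>yarn.nodemanager.delete.debug-delay-sec</name>
    #     <value>36000</value>
    #   </property>

    # Afterwards, on a node that launched a container, look under one of the
    # directories listed in yarn.nodemanager.local-dirs (path illustrative):
    CONTAINER_DIR=/tmp/hadoop-yarn/nm-local-dir/usercache/$USER/appcache/application_1400000000000_0001/container_1400000000000_0001_01_000001
    ls $CONTAINER_DIR                         # localized jars, tokens, launch script
    cat $CONTAINER_DIR/launch_container.sh    # environment variables and classpath used at launch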
# Important notes

- Before Hadoop 2.2, YARN does not support cores in container resource requests. Thus, when running against an earlier version, the number of cores given via command line arguments cannot be passed to YARN. Whether core requests are honored in scheduling decisions depends on which scheduler is in use and how it is configured.
Contributor:
If using --deploy-mode cluster, then --master should just be "yarn".

Author:
Ah, I didn't realize master=yarn-cluster also sets deployMode.
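To make that exchange concrete, a sketch of the two equivalent ways to request cluster deploy mode (other flags elided, as in the examples above):

    # Explicit master and deploy mode:
    ./bin/spark-submit --master yarn --deploy-mode cluster --class my.main.Class my-main-jar.jar
    # Shorthand: a master of "yarn-cluster" implies --deploy-mode cluster
    ./bin/spark-submit --master yarn-cluster --class my.main.Class my-main-jar.jar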