Commit 2b55ed2

Author: Andrew Or

Document standalone cluster supervise mode

1 parent 6eb1b6f  commit 2b55ed2

File tree

1 file changed: +10 -1 lines changed

docs/spark-standalone.md

Lines changed: 10 additions & 1 deletion

@@ -257,7 +257,7 @@ To run an interactive Spark shell against the cluster, run the following command
 
 You can also pass an option `--total-executor-cores <numCores>` to control the number of cores that spark-shell uses on the cluster.
 
-# Launching Compiled Spark Applications
+# Launching Spark Applications
 
 The [`spark-submit` script](submitting-applications.html) provides the most straightforward way to
 submit a compiled Spark application to the cluster. For standalone clusters, Spark currently
@@ -272,6 +272,15 @@ should specify them through the `--jars` flag using comma as a delimiter (e.g. `
 To control the application's configuration or execution environment, see
 [Spark Configuration](configuration.html).
 
+Additionally, standalone `cluster` mode supports restarting your application on failure. To use
+this feature, you may pass in the `--supervise` flag to `spark-submit` when launching your
+application. Then, if you wish to kill an application that is failing repeatedly, you may do so
+through:
+
+    ./bin/spark-class org.apache.spark.deploy.Client kill <master url> <driver ID>
+
+You can find the driver ID through the standalone Master web UI at `http://<master url>:8080`.
+
 # Resource Scheduling
 
 The standalone cluster mode currently only supports a simple FIFO scheduler across applications.
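To make the supervise workflow added by this commit concrete, here is a hedged end-to-end sketch. The master host, application class, jar path, and driver ID below are illustrative placeholders, not values taken from the commit; only the `--deploy-mode cluster`, `--supervise`, and `Client kill` pieces come from the documented text.

```shell
# Hypothetical values -- substitute your own master URL, jar, and driver ID.
# Launch in standalone cluster mode with supervision, so the driver is
# restarted automatically if it exits with a non-zero status:
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://203.0.113.10:7077 \
  --deploy-mode cluster \
  --supervise \
  /path/to/examples.jar

# If the application keeps failing, look up its driver ID in the Master
# web UI (http://203.0.113.10:8080) and kill it:
./bin/spark-class org.apache.spark.deploy.Client kill \
  spark://203.0.113.10:7077 driver-20140210-0001
```

Note that a supervised driver is restarted on failure until it is explicitly killed, which is why the kill command is the documented way to stop a repeatedly failing application.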
