diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 3172b1bca8f0..3453ee912205 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -229,8 +229,11 @@ pod template that will always be overwritten by Spark. Therefore, users of this
 feature should note that specifying the pod template file only lets Spark start with a template pod instead of an empty
 pod during the pod-building process. For details, see the [full list](#pod-template-properties) of pod template values
 that will be overwritten by spark.
 
-Pod template files can also define multiple containers. In such cases, Spark will always assume that the first container in
-the list will be the driver or executor container.
+Pod template files can also define multiple containers. In such cases, you can use the Spark properties
+`spark.kubernetes.driver.podTemplateContainerName` and `spark.kubernetes.executor.podTemplateContainerName`
+to indicate which container should be used as a basis for the driver or executor.
+If not specified, or if the container name is not valid, Spark will assume that the first container in the list
+will be the driver or executor container.
 
 ## Using Kubernetes Volumes
@@ -932,16 +935,32 @@ specific to Spark on Kubernetes.
 <tr>
   <td><code>spark.kubernetes.driver.podTemplateFile</code></td>
   <td>(none)</td>
   <td>
-   Specify the local file that contains the driver pod template. For example
-   `spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml`
+   Specify the local file that contains the driver pod template. For example
+   <code>spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml</code>
   </td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.driver.podTemplateContainerName</code></td>
+  <td>(none)</td>
+  <td>
+   Specify the container name to be used as a basis for the driver in the given pod template.
+   For example <code>spark.kubernetes.driver.podTemplateContainerName=spark-driver</code>
+  </td>
+</tr>
 <tr>
   <td><code>spark.kubernetes.executor.podTemplateFile</code></td>
   <td>(none)</td>
   <td>
-   Specify the local file that contains the executor pod template. For example
-   `spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml`
+   Specify the local file that contains the executor pod template. For example
+   <code>spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml</code>
   </td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.executor.podTemplateContainerName</code></td>
+  <td>(none)</td>
+  <td>
+   Specify the container name to be used as a basis for the executor in the given pod template.
+   For example <code>spark.kubernetes.executor.podTemplateContainerName=spark-executor</code>
+  </td>
+</tr>
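
To make the new settings concrete, here is a minimal sketch of how a multi-container pod template and the container-name property fit together. The container name `spark-driver` mirrors the example value in the table above; the sidecar name, the container images, the file path, and the master URL are placeholders rather than values taken from this patch.

```yaml
# driver-pod-template.yaml — a pod template defining more than one container.
# Spark builds the driver from the container named by
# spark.kubernetes.driver.podTemplateContainerName; the other container is a sidecar.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: logging-sidecar        # placeholder sidecar container
      image: fluent/fluent-bit:latest
    - name: spark-driver           # container selected as the basis for the driver
      image: spark:latest
```

```bash
# Point Spark at the template and name the container to use as the driver basis.
# If podTemplateContainerName were omitted or did not match a container in the
# template, Spark would fall back to the first container in the list
# (logging-sidecar above).
bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --deploy-mode cluster \
  --conf spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml \
  --conf spark.kubernetes.driver.podTemplateContainerName=spark-driver \
  ...
```

The executor side works the same way with `spark.kubernetes.executor.podTemplateFile` and `spark.kubernetes.executor.podTemplateContainerName`.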