diff --git a/charts/spark-operator-chart/README.md b/charts/spark-operator-chart/README.md
index 7876beba7..39fedc4d2 100644
--- a/charts/spark-operator-chart/README.md
+++ b/charts/spark-operator-chart/README.md
@@ -111,7 +111,7 @@ See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall) for command docum
 | controller.envFrom | list | `[]` | Environment variable sources for controller containers. |
 | controller.volumeMounts | list | `[]` | Volume mounts for controller containers. |
 | controller.resources | object | `{}` | Pod resource requests and limits for controller containers. Note, that each job submission will spawn a JVM within the controller pods using "/usr/local/openjdk-11/bin/java -Xmx128m". Kubernetes may kill these Java processes at will to enforce resource limits. When that happens, you will see the following error: 'failed to run spark-submit for SparkApplication [...]: signal: killed' - when this happens, you may want to increase memory limits. |
-| controller.securityContext | object | `{"capabilities":{"drop":["ALL"]},"fsGroup":185,"runAsNonRoot":true}` | Security context for controller containers. |
+| controller.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"fsGroup":185,"runAsNonRoot":true}` | Security context for controller containers. |
 | controller.sidecars | list | `[]` | Sidecar containers for controller pods. |
 | controller.podDisruptionBudget.enable | bool | `false` | Specifies whether to create pod disruption budget for controller. Ref: [Specifying a Disruption Budget for your Application](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) |
 | controller.podDisruptionBudget.minAvailable | int | `1` | The number of pods that must be available. Require `controller.replicas` to be greater than 1 |
@@ -149,7 +149,7 @@ See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall) for command docum
 | webhook.envFrom | list | `[]` | Environment variable sources for webhook containers. |
 | webhook.volumeMounts | list | `[]` | Volume mounts for webhook containers. |
 | webhook.resources | object | `{}` | Pod resource requests and limits for webhook pods. |
-| webhook.securityContext | object | `{"capabilities":{"drop":["ALL"]},"fsGroup":185,"runAsNonRoot":true}` | Security context for webhook containers. |
+| webhook.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"fsGroup":185,"runAsNonRoot":true}` | Security context for webhook containers. |
 | webhook.podDisruptionBudget.enable | bool | `false` | Specifies whether to create pod disruption budget for webhook. Ref: [Specifying a Disruption Budget for your Application](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) |
 | webhook.podDisruptionBudget.minAvailable | int | `1` | The number of pods that must be available. Require `webhook.replicas` to be greater than 1 |
 | spark.jobNamespaces | list | `["default"]` | List of namespaces where to run spark jobs. If empty string is included, all namespaces will be allowed. Make sure the namespaces have already existed. |
diff --git a/charts/spark-operator-chart/values.yaml b/charts/spark-operator-chart/values.yaml
index 7521b56a7..71d6cfcf7 100644
--- a/charts/spark-operator-chart/values.yaml
+++ b/charts/spark-operator-chart/values.yaml
@@ -159,6 +159,7 @@ controller:
   securityContext:
     runAsNonRoot: true
     fsGroup: 185
+    allowPrivilegeEscalation: false
     capabilities:
       drop:
       - ALL
@@ -304,6 +305,7 @@ webhook:
   securityContext:
     runAsNonRoot: true
     fsGroup: 185
+    allowPrivilegeEscalation: false
     capabilities:
       drop:
       - ALL
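Chart users can also set these hardened defaults per release rather than relying on the shipped values. A minimal sketch of an override file, assuming the values are consumed as shown in the chart's `values.yaml` above (the filename is hypothetical):

```yaml
# values-override.yaml -- hypothetical override file mirroring the
# defaults added by this patch; adjust fsGroup to match your image.
controller:
  securityContext:
    runAsNonRoot: true
    fsGroup: 185
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - ALL
```

Applied with something like `helm upgrade --install spark-operator <repo>/spark-operator -f values-override.yaml`. Note that in the Kubernetes API, `fsGroup` is a pod-level `securityContext` field, while `allowPrivilegeEscalation` and `capabilities` are container-level fields, so the chart's templates are expected to route each key to the appropriate level.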