diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index a5da80a68d32..aea7043fd831 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -1722,6 +1722,25 @@ spec:
image: will-be-overwritten
```
+#### Customized Kubernetes Schedulers for Spark on Kubernetes
+
+Spark allows users to specify a custom Kubernetes scheduler.
+
+1. Specify scheduler name.
+
+    Users can specify a custom scheduler using the `spark.kubernetes.scheduler.name` or
+    `spark.kubernetes.{driver/executor}.scheduler.name` configuration, as shown in the example submission after this list.
+
+2. Specify scheduler-related configurations.
+
+    To configure the custom scheduler, users can use [Pod templates](#pod-template), add labels (`spark.kubernetes.{driver/executor}.label.*`) and/or annotations (`spark.kubernetes.{driver/executor}.annotation.*`).
+
+3. Specify scheduler feature step.
+
+    Users may also consider using `spark.kubernetes.{driver/executor}.pod.featureSteps` to support more complex requirements, including but not limited to:
+    - Creating additional Kubernetes custom resources for driver/executor scheduling.
+    - Setting scheduler hints dynamically according to configuration or existing Pod info.
+
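+For illustration, the three steps above can be combined in a single `spark-submit` invocation along the following lines. This is only a sketch: every value in angle brackets is a placeholder, and the feature step classes are assumed to be user-provided implementations of Spark's `KubernetesFeatureConfigStep` developer API that are available on Spark's classpath.
+
+```bash
+$ ./bin/spark-submit \
+    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+    --deploy-mode cluster \
+    --name spark-pi \
+    --class org.apache.spark.examples.SparkPi \
+    --conf spark.kubernetes.container.image=<spark-image> \
+    --conf spark.kubernetes.scheduler.name=<custom-scheduler-name> \
+    --conf spark.kubernetes.driver.label.<label-key>=<label-value> \
+    --conf spark.kubernetes.driver.annotation.<annotation-key>=<annotation-value> \
+    --conf spark.kubernetes.driver.podTemplateFile=<path-to-driver-pod-template.yaml> \
+    --conf spark.kubernetes.executor.podTemplateFile=<path-to-executor-pod-template.yaml> \
+    --conf spark.kubernetes.driver.pod.featureSteps=<fully-qualified-feature-step-class> \
+    --conf spark.kubernetes.executor.pod.featureSteps=<fully-qualified-feature-step-class> \
+    local:///path/to/examples.jar
+```
+
+The exact set of options that matters depends on the custom scheduler: the labels, annotations and pod template typically carry the scheduling hints, while the feature steps can create additional Kubernetes custom resources or adjust the generated pods programmatically.
+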
### Stage Level Scheduling Overview
Stage level scheduling is supported on Kubernetes when dynamic allocation is enabled. This also requires spark.dynamicAllocation.shuffleTracking.enabled to be enabled since Kubernetes doesn't support an external shuffle service at this time. The order in which containers for different profiles is requested from Kubernetes is not guaranteed. Note that since dynamic allocation on Kubernetes requires the shuffle tracking feature, this means that executors from previous stages that used a different ResourceProfile may not idle timeout due to having shuffle data on them. This could result in using more cluster resources and in the worst case if there are no remaining resources on the Kubernetes cluster then Spark could potentially hang. You may consider looking at config spark.dynamicAllocation.shuffleTracking.timeout to set a timeout, but that could result in data having to be recomputed if the shuffle data is really needed.
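+
+For reference, the prerequisites above could be enabled at submission time along these lines; all bracketed values are placeholders, and the `ResourceProfile`s that drive stage level scheduling are defined in the application code itself.
+
+```bash
+$ ./bin/spark-submit \
+    --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
+    --deploy-mode cluster \
+    --conf spark.dynamicAllocation.enabled=true \
+    --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
+    --conf spark.dynamicAllocation.shuffleTracking.timeout=<timeout> \
+    --conf spark.kubernetes.container.image=<spark-image> \
+    local:///path/to/examples.jar
+```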