From 74dcb53cc5abf8a4680d9b4f7a6ce8e112cce0f1 Mon Sep 17 00:00:00 2001
From: kerthcet
Date: Thu, 25 Nov 2021 11:26:25 +0800
Subject: [PATCH] add migration docs for scheduler component config api from
 v1beta2 to v1beta3

Signed-off-by: kerthcet
---
 .../en/docs/reference/scheduling/config.md   | 33 ++-----
 .../en/docs/reference/scheduling/policies.md | 96 +------------------
 2 files changed, 10 insertions(+), 119 deletions(-)

diff --git a/content/en/docs/reference/scheduling/config.md b/content/en/docs/reference/scheduling/config.md
index bce8d36b5c7f0..dd883d3bac9fe 100644
--- a/content/en/docs/reference/scheduling/config.md
+++ b/content/en/docs/reference/scheduling/config.md
@@ -20,8 +20,7 @@ by implementing one or more of these extension points.
 
 You can specify scheduling profiles by running
 `kube-scheduler --config <filename>`, using the
-KubeSchedulerConfiguration ([v1beta1](/docs/reference/config-api/kube-scheduler-config.v1beta1/)
-or [v1beta2](/docs/reference/config-api/kube-scheduler-config.v1beta2/))
+KubeSchedulerConfiguration ([v1beta2](/docs/reference/config-api/kube-scheduler-config.v1beta2/))
 struct.
 
 A minimal configuration looks as follows:
@@ -179,30 +178,6 @@ that are not enabled by default:
   volume limits can be satisfied for the node.
   Extension points: `filter`.
 
-The following plugins are deprecated and can only be enabled in a `v1beta1`
-configuration:
-
-- `NodeResourcesLeastAllocated`: Favors nodes that have a low allocation of
-  resources.
-  Extension points: `score`.
-- `NodeResourcesMostAllocated`: Favors nodes that have a high allocation of
-  resources.
-  Extension points: `score`.
-- `RequestedToCapacityRatio`: Favor nodes according to a configured function of
-  the allocated resources.
-  Extension points: `score`.
-- `NodeLabel`: Filters and / or scores a node according to configured
-  {{< glossary_tooltip text="label(s)" term_id="label" >}}.
-  Extension points: `filter`, `score`.
-- `ServiceAffinity`: Checks that Pods that belong to a
-  {{< glossary_tooltip term_id="service" >}} fit in a set of nodes defined by
-  configured labels. This plugin also favors spreading the Pods belonging to a
-  Service across nodes.
-  Extension points: `preFilter`, `filter`, `score`.
-- `NodePreferAvoidPods`: Prioritizes nodes according to the node annotation
-  `scheduler.alpha.kubernetes.io/preferAvoidPods`.
-  Extension points: `score`.
-
 ### Multiple profiles
 
 You can configure `kube-scheduler` to run more than one profile.
@@ -285,7 +260,13 @@ only has one pending pods queue.
 * A plugin enabled in a v1beta2 configuration file takes precedence over the
   default configuration for that plugin.
 * Invalid `host` or `port` configured for scheduler healthz and metrics bind
   address will cause validation failure.
+{{% /tab %}}
+{{% tab name="v1beta2 → v1beta3" %}}
+* The weights of three plugins are increased by default:
+  * `InterPodAffinity` from 1 to 2
+  * `NodeAffinity` from 1 to 2
+  * `TaintToleration` from 1 to 3
 {{% /tab %}}
 
 {{< /tabs >}}

diff --git a/content/en/docs/reference/scheduling/policies.md b/content/en/docs/reference/scheduling/policies.md
index 99291c2b373e7..d9a6d92cfe89a 100644
--- a/content/en/docs/reference/scheduling/policies.md
+++ b/content/en/docs/reference/scheduling/policies.md
@@ -6,99 +6,10 @@ weight: 10
 
-A scheduling Policy can be used to specify the *predicates* and *priorities*
-that the {{< glossary_tooltip text="kube-scheduler" term_id="kube-scheduler" >}}
-runs to [filter and score nodes](/docs/concepts/scheduling-eviction/kube-scheduler/#kube-scheduler-implementation),
-respectively.
+In Kubernetes versions before v1.23, a scheduling policy could be used to specify the *predicates* and *priorities* process. For example, you could set a scheduling policy by
+running `kube-scheduler --policy-config-file <filename>` or `kube-scheduler --policy-configmap <ConfigMap>`.
-You can set a scheduling policy by running
-`kube-scheduler --policy-config-file <filename>` or
-`kube-scheduler --policy-configmap <ConfigMap>`
-and using the [Policy type](/docs/reference/config-api/kube-scheduler-policy-config.v1/).
-
-
-
-## Predicates
-
-The following *predicates* implement filtering:
-
-- `PodFitsHostPorts`: Checks if a Node has free ports (the network protocol kind)
-  for the Pod ports the Pod is requesting.
-
-- `PodFitsHost`: Checks if a Pod specifies a specific Node by its hostname.
-
-- `PodFitsResources`: Checks if the Node has free resources (eg, CPU and Memory)
-  to meet the requirement of the Pod.
-
-- `MatchNodeSelector`: Checks if a Pod's Node {{< glossary_tooltip term_id="selector" >}}
-  matches the Node's {{< glossary_tooltip text="label(s)" term_id="label" >}}.
-
-- `NoVolumeZoneConflict`: Evaluate if the {{< glossary_tooltip text="Volumes" term_id="volume" >}}
-  that a Pod requests are available on the Node, given the failure zone restrictions for
-  that storage.
-
-- `NoDiskConflict`: Evaluates if a Pod can fit on a Node due to the volumes it requests,
-  and those that are already mounted.
-
-- `MaxCSIVolumeCount`: Decides how many {{< glossary_tooltip text="CSI" term_id="csi" >}}
-  volumes should be attached, and whether that's over a configured limit.
-
-- `PodToleratesNodeTaints`: checks if a Pod's {{< glossary_tooltip text="tolerations" term_id="toleration" >}}
-  can tolerate the Node's {{< glossary_tooltip text="taints" term_id="taint" >}}.
-
-- `CheckVolumeBinding`: Evaluates if a Pod can fit due to the volumes it requests.
-  This applies for both bound and unbound
-  {{< glossary_tooltip text="PVCs" term_id="persistent-volume-claim" >}}.
-
-## Priorities
-
-The following *priorities* implement scoring:
-
-- `SelectorSpreadPriority`: Spreads Pods across hosts, considering Pods that
-  belong to the same {{< glossary_tooltip text="Service" term_id="service" >}},
-  {{< glossary_tooltip term_id="statefulset" >}} or
-  {{< glossary_tooltip term_id="replica-set" >}}.
-
-- `InterPodAffinityPriority`: Implements preferred
-  [inter pod affininity and antiaffinity](/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
-
-- `LeastRequestedPriority`: Favors nodes with fewer requested resources. In other
-  words, the more Pods that are placed on a Node, and the more resources those
-  Pods use, the lower the ranking this policy will give.
-
-- `MostRequestedPriority`: Favors nodes with most requested resources. This policy
-  will fit the scheduled Pods onto the smallest number of Nodes needed to run your
-  overall set of workloads.
-
-- `RequestedToCapacityRatioPriority`: Creates a requestedToCapacity based ResourceAllocationPriority using default resource scoring function shape.
-
-- `BalancedResourceAllocation`: Favors nodes with balanced resource usage.
-
-- `NodePreferAvoidPodsPriority`: Prioritizes nodes according to the node annotation
-  `scheduler.alpha.kubernetes.io/preferAvoidPods`. You can use this to hint that
-  two different Pods shouldn't run on the same Node.
-
-- `NodeAffinityPriority`: Prioritizes nodes according to node affinity scheduling
-  preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution.
-  You can read more about this in [Assigning Pods to Nodes](/docs/concepts/scheduling-eviction/assign-pod-node/).
-
-- `TaintTolerationPriority`: Prepares the priority list for all the nodes, based on
-  the number of intolerable taints on the node. This policy adjusts a node's rank
-  taking that list into account.
-
-- `ImageLocalityPriority`: Favors nodes that already have the
-  {{< glossary_tooltip text="container images" term_id="image" >}} for that
-  Pod cached locally.
-
-- `ServiceSpreadingPriority`: For a given Service, this policy aims to make sure that
-  the Pods for the Service run on different nodes. It favours scheduling onto nodes
-  that don't have Pods for the service already assigned there. The overall outcome is
-  that the Service becomes more resilient to a single Node failure.
-
-- `EqualPriority`: Gives an equal weight of one to all nodes.
-
-- `EvenPodsSpreadPriority`: Implements preferred
-  [pod topology spread constraints](/docs/concepts/workloads/pods/pod-topology-spread-constraints/).
+This scheduling policy is no longer supported since Kubernetes v1.23. The associated flags `policy-config-file`, `policy-configmap`, `policy-configmap-namespace` and `use-legacy-policy-config` are also not supported. Instead, use the [Scheduler Configuration](/docs/reference/scheduling/config/) to achieve similar behavior.
 
 ## {{% heading "whatsnext" %}}
 
@@ -106,4 +17,3 @@
 * Learn about [kube-scheduler Configuration](/docs/reference/scheduling/config/)
 * Read the [kube-scheduler configuration reference (v1beta2)](/docs/reference/config-api/kube-scheduler-config.v1beta2)
 * Read the [kube-scheduler Policy reference (v1)](/docs/reference/config-api/kube-scheduler-policy-config.v1/)
-
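
As a companion to the migration notes this patch adds, the sketch below shows what a scheduler configuration on the v1beta3 API might look like. It is illustrative only and not part of the patch: the `default-scheduler` profile name and the explicit score weights (mirroring the v1beta3 defaults stated in the patch) are assumptions for the example.

```yaml
# Illustrative sketch only, not part of this patch: a KubeSchedulerConfiguration
# on the v1beta3 API, passed to the scheduler via `kube-scheduler --config <filename>`.
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        enabled:
          # Weights shown here match the v1beta3 defaults noted in the
          # migration tab above: InterPodAffinity 2, NodeAffinity 2,
          # TaintToleration 3. Listing them explicitly is optional.
          - name: InterPodAffinity
            weight: 2
          - name: NodeAffinity
            weight: 2
          - name: TaintToleration
            weight: 3
```

A file like this replaces the removed `--policy-config-file` and `--policy-configmap` flags: filtering and scoring behavior that policies expressed as predicates and priorities is configured through plugins on the profile's extension points instead.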