diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
index 343582acb46a0..4387712961c07 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough.md
@@ -167,18 +167,18 @@ Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas

## Autoscaling on multiple metrics and custom metrics

You can introduce additional metrics to use when autoscaling the `php-apache` Deployment
-by making use of the `autoscaling/v2beta1` API version.
+by making use of the `autoscaling/v2beta2` API version.

-First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta1` form:
+First, get the YAML of your HorizontalPodAutoscaler in the `autoscaling/v2beta2` form:

```shell
-$ kubectl get hpa.v2beta1.autoscaling -o yaml > /tmp/hpa-v2.yaml
+$ kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml
```

Open the `/tmp/hpa-v2.yaml` file in an editor, and you should see YAML which looks like this:

```yaml
-apiVersion: autoscaling/v2beta1
+apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
@@ -194,7 +194,9 @@ spec:
  - type: Resource
    resource:
      name: cpu
-      targetAverageUtilization: 50
+      target:
+        type: Utilization
+        averageUtilization: 50
status:
  observedGeneration: 1
  lastScaleTime:
@@ -204,8 +206,9 @@ status:
  currentReplicas: 1
  desiredReplicas: 1
  currentMetrics:
  - type: Resource
    resource:
      name: cpu
-      currentAverageUtilization: 0
-      currentAverageValue: 0
+      current:
+        averageUtilization: 0
+        averageValue: 0
```

Notice that the `targetCPUUtilizationPercentage` field has been replaced with an array called `metrics`.
@@ -215,8 +218,8 @@ the only other supported resource metric is memory. These resources do not chan
to cluster, and should always be available, as long as the `metrics.k8s.io` API is available.
You can also specify resource metrics in terms of direct values, instead of as percentages of the
-requested value. To do so, use the `targetAverageValue` field instead of the `targetAverageUtilization`
-field.
+requested value, by using a `target` type of `AverageValue` instead of `AverageUtilization`, and
+setting the corresponding `target.averageValue` field instead of `target.averageUtilization`.

There are two other types of metrics, both of which are considered *custom metrics*: pod metrics and
object metrics. These metrics may have names which are cluster specific, and require a more
@@ -224,31 +227,40 @@ advanced cluster monitoring setup.

The first of these alternative metric types is *pod metrics*. These metrics describe pods, and
are averaged together across pods and compared with a target value to determine the replica count.
-They work much like resource metrics, except that they *only* support a `target` type of `AverageValue`.
+They work much like resource metrics, except that they *only* support a `target` type of `AverageValue`.

Pod metrics are specified using a metric block like this:

```yaml
type: Pods
pods:
-  metricName: packets-per-second
-  targetAverageValue: 1k
+  metric:
+    name: packets-per-second
+  target:
+    type: AverageValue
+    averageValue: 1k
```

-The second alternative metric type is *object metrics*. These metrics describe a different
-object in the same namespace, instead of describing pods. Note that the metrics are not
-fetched from the object -- they simply describe it. Object metrics do not involve averaging,
-and look like this:
+The second alternative metric type is *object metrics*. These metrics describe a different
+object in the same namespace, instead of describing pods. The metrics are not necessarily
+fetched from the object; they only describe it. Object metrics support `target` types of
+both `Value` and `AverageValue`. With `Value`, the target is compared directly to the returned
+metric from the API.
With `AverageValue`, the value returned from the custom metrics API is divided
+by the number of pods before being compared to the target. The following example is the YAML
+representation of the `requests-per-second` metric.

```yaml
type: Object
object:
-  metricName: requests-per-second
-  target:
+  metric:
+    name: requests-per-second
+  describedObject:
    apiVersion: extensions/v1beta1
    kind: Ingress
    name: main-route
-  targetValue: 2k
+  target:
+    type: Value
+    value: 2k
```

If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn.
@@ -275,19 +287,25 @@ spec:
  - type: Resource
    resource:
      name: cpu
-      targetAverageUtilization: 50
+      target:
+        type: Utilization
+        averageUtilization: 50
  - type: Pods
    pods:
-      metricName: packets-per-second
-      targetAverageValue: 1k
+      metric:
+        name: packets-per-second
+      target:
+        type: AverageValue
+        averageValue: 1k
  - type: Object
    object:
-      metricName: requests-per-second
-      target:
+      metric:
+        name: requests-per-second
+      describedObject:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: main-route
-      targetValue: 10k
+      target:
+        type: Value
+        value: 10k
status:
  observedGeneration: 1
  lastScaleTime:
@@ -297,14 +315,47 @@ status:
  - type: Resource
    resource:
      name: cpu
-      currentAverageUtilization: 0
-      currentAverageValue: 0
+      current:
+        averageUtilization: 0
+        averageValue: 0
+  - type: Object
+    object:
+      metric:
+        name: requests-per-second
+      describedObject:
+        apiVersion: extensions/v1beta1
+        kind: Ingress
+        name: main-route
+      current:
+        value: 10k
```

Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly
50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route
Ingress were serving a total of 10000 requests per second.

+### Autoscaling on more specific metrics
+
+Many metrics pipelines allow you to describe metrics either by name or by a set of additional
+descriptors called _labels_.
For all non-resource metric types (pod, object, and external,
+described below), you can specify an additional label selector which is passed to your metric
+pipeline. For instance, if you collect a metric `http_requests` with the `verb`
+label, you can specify the following metric block to scale only on GET requests:
+
+```yaml
+type: Object
+object:
+  metric:
+    name: http_requests
+    selector: {matchLabels: {verb: GET}}
+```
+
+This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline
+determines how to collapse multiple series into a single value, if the name and selector
+match multiple series. The selector is additive, and cannot select metrics
+that describe objects that are **not** the target object (the target pods in the case of the `Pods`
+type, and the described object in the case of the `Object` type).
+
### Autoscaling on metrics not related to Kubernetes objects

Applications running on Kubernetes may need to autoscale based on metrics that don't have an obvious
@@ -312,12 +363,14 @@ relationship to any object in the Kubernetes cluster, such as metrics describing
no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case
with *external metrics*.

-Using external metrics requires a certain level of knowledge of your monitoring system, and it requires a cluster
-monitoring setup similar to one required for using custom metrics. With external metrics, you can autoscale
-based on any metric available in your monitoring system by providing a `metricName` field in your
-HorizontalPodAutoscaler manifest. Additionally you can use a `metricSelector` field to limit which
-metrics' time series you want to use for autoscaling. If multiple time series are matched by `metricSelector`,
+Using external metrics requires knowledge of your monitoring system; the setup is
+similar to that required when using custom metrics.
External metrics allow you to autoscale your cluster
+based on any metric available in your monitoring system. Just provide a `metric` block with a
+`name` and `selector`, as above, and use the `External` metric type instead of `Object`.
+If multiple time series are matched by the selector,
the sum of their values is used by the HorizontalPodAutoscaler.
+External metrics support both the `Value` and `AverageValue` target types, which function exactly the same
+as when you use the `Object` type.

For example if your application processes tasks from a hosted queue service, you could add the following
section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks.
@@ -325,20 +378,21 @@ section to your HorizontalPodAutoscaler manifest to specify that you need one wo

```yaml
- type: External
  external:
-    metricName: queue_messages_ready
-    metricSelector:
-      matchLabels:
-        queue: worker_tasks
-    targetAverageValue: 30
+    metric:
+      name: queue_messages_ready
+      selector:
+        matchLabels:
+          queue: worker_tasks
+    target:
+      type: AverageValue
+      averageValue: 30
```

-If your metric describes work or resources that can be divided between autoscaled pods the `targetAverageValue`
-field describes how much of that work each pod can handle. Instead of using the `targetAverageValue` field, you could use the
-`targetValue` to define a desired value of your external metric.
+When possible, it's preferable to use the custom metric target types instead of external metrics, since it's
+easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows
+access to any metric, so cluster administrators should take care when exposing it.
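Tying the walkthrough examples together, the following is a sketch of a complete `autoscaling/v2beta2` manifest combining the CPU resource metric with the external queue metric above. The `apps/v1` `scaleTargetRef` and the replica bounds are assumptions carried over from the walkthrough, not part of the external-metrics example itself:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  # Assumed target: the php-apache Deployment from the walkthrough.
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # Resource metric: average CPU utilization across pods.
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  # External metric: one worker per 30 outstanding queue tasks.
  - type: External
    external:
      metric:
        name: queue_messages_ready
        selector:
          matchLabels:
            queue: worker_tasks
      target:
        type: AverageValue
        averageValue: 30
```

The HorizontalPodAutoscaler evaluates each metric block independently and scales to the largest proposed replica count.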
## Appendix: Horizontal Pod Autoscaler Status Conditions

-When using the `autoscaling/v2beta1` form of the HorizontalPodAutoscaler, you will be able to see
+When using the `autoscaling/v2beta2` form of the HorizontalPodAutoscaler, you will be able to see
*status conditions* set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate
whether or not the HorizontalPodAutoscaler is able to scale, and whether or not it is currently
restricted in any way.
diff --git a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
index 8ff9bf2530a42..edb6f927a73f0 100644
--- a/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
+++ b/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md
@@ -57,8 +57,11 @@ or the custom metrics API (for all other metrics).

* For per-pod custom metrics, the controller functions similarly to per-pod resource
  metrics, except that it works with raw values, not utilization values.
-* For object metrics, a single metric is fetched (which describes the object
-  in question), and compared to the target value, to produce a ratio as above.
+* For object metrics and external metrics, a single metric is fetched, which describes
+  the object in question. This metric is compared to the target
+  value, to produce a ratio as above. In the `autoscaling/v2beta2` API
+  version, this value can optionally be divided by the number of pods before the
+  comparison is made.

The HorizontalPodAutoscaler normally fetches metrics from a series of aggregated APIs (`metrics.k8s.io`,
`custom.metrics.k8s.io`, and `external.metrics.k8s.io`). The `metrics.k8s.io` API is usually provided by
@@ -85,7 +88,7 @@ The current stable version, which only includes support for CPU autoscaling,
can be found in the `autoscaling/v1` API version.
The beta version, which includes support for scaling on memory and custom metrics,
-can be found in `autoscaling/v2beta1`. The new fields introduced in `autoscaling/v2beta1`
+can be found in `autoscaling/v2beta2`. The new fields introduced in `autoscaling/v2beta2`
are preserved as annotations when working with `autoscaling/v1`.

More details about the API object can be found at
@@ -146,7 +149,7 @@ may keep thrashing as usual.

## Support for multiple metrics

-Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta1` API
+Kubernetes 1.6 adds support for scaling based on multiple metrics. You can use the `autoscaling/v2beta2` API
version to specify multiple metrics for the Horizontal Pod Autoscaler to scale on. Then, the Horizontal Pod
Autoscaler controller will evaluate each metric, and propose a new scale based on that metric. The largest of the
proposed scales will be used as the new scale.
@@ -159,7 +162,7 @@ custom metrics is still available, these metrics will not be available for use b
annotations for specifying which custom metrics to scale on are no longer honored by the Horizontal Pod
Autoscaler controller.

Kubernetes 1.6 adds support for making use of custom metrics in the Horizontal Pod Autoscaler.
-You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta1` API.
+You can add custom metrics for the Horizontal Pod Autoscaler to use in the `autoscaling/v2beta2` API.
Kubernetes then queries the new custom metrics API to fetch the values of the appropriate custom metrics.
See [Support for metrics APIs](#support-for-metrics-APIs) for the requirements.
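For contrast with the beta API discussed above, here is a sketch of the same autoscaler expressed in the stable `autoscaling/v1` form, which supports only CPU utilization; the `apps/v1` target reference is an assumption:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1   # assumed API group/version for the Deployment
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  # The only supported metric in autoscaling/v1: target CPU utilization.
  targetCPUUtilizationPercentage: 50
```

Any fields from the newer API versions would be preserved as annotations on this object, as noted above.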