Arbitrary/Custom Metrics in the Horizontal Pod Autoscaler #117
cc @kubernetes/autoscaling @jszczepkowski @derekwaynecarr @smarterclayton
@DirectXMan12 any updates on this issue? Can you provide the current status and update the checkboxes above?
The proposal is posted, but has not been approved yet. We only recently reached general consensus about the design, and are still finalizing the exact semantics. It should be removed from the 1.5 milestone, since no code will have gone into 1.5.
@DirectXMan12 thank you for clarifying.
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210) HPA v2 (API Changes) **Release note**: ```release-note Introduces a new alpha version of the Horizontal Pod Autoscaler including expanded support for specifying metrics. ``` Implements the API changes for kubernetes/enhancements#117. This implements #34754, which is the new design for the Horizontal Pod Autoscaler. It includes improved support for custom metrics (and/or arbitrary metrics) as well as expanded support for resource metrics. The new HPA object is introduced in the API group "autoscaling/v2alpha1". Note that the improved custom metric support is currently limited to per-pod metrics from Heapster -- attempting to use the new "object metrics" will simply result in an error. This will change once #34586 is merged and implemented.
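For readers following along: the expanded metrics support described in this PR eventually stabilized as the `autoscaling/v2` API. A minimal sketch of a v2-style HPA using the expanded resource-metrics spec (the object and target names here are hypothetical, and the field layout shown is the later stable `autoscaling/v2` shape, not the original alpha):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical target workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource           # resource metrics (CPU/memory) from the resource metrics API
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale to keep average CPU utilization near 50%
```

The key change over `autoscaling/v1` is the `metrics` list: instead of a single CPU target, each entry names a metric source type and its own target.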
Automatic merge from submit-queue Convert HPA controller to support HPA v2 mechanics This PR converts the HPA controller to support the mechanics from HPA v2. The HPA controller continues to make use of the HPA v1 client, but utilizes the conversion logic to work with autoscaling/v2alpha1 objects internally. It is the follow-up PR to #36033 and part of kubernetes/enhancements#117. **Release note**: ```release-note NONE ```
@DirectXMan12 please provide us with the release notes and documentation PR (or links) at https://docs.google.com/spreadsheets/d/1nspIeRVNjAQHRslHQD1-6gPv99OcYZLMezrBe3Pfhhg/edit#gid=0
done :-)
@DirectXMan12 @mwielgus What is planned for HPA in 1.8?
@MaciekPytel @kubernetes/sig-autoscaling-misc
Looking forward to the beta launch!
@bgrant0607 we're hoping to move v2 to beta in 1.8 (so just stabilization :-) ).
Regarding the functionality - am I correct in understanding that if you wanted to scale on a load indicator that is fundamentally external to Kubernetes, you would need to
@davidopp That is incorrect. In order to scale on a load indicator that's not one of the metrics provided by the resource metrics API (CPU, memory), you need to have some implementation of the custom metrics API (see k8s.io/metrics and kubernetes-incubator/custom-metrics-apiserver). Then, you can either use the "pods" source type, if the metric describes the pods controlled by the target scalable of the HPA (e.g. network throughput), or the "object" source type, if the metric describes an unrelated object (for instance, you might scale on a queue length metric attached to the namespace). In either case, the HPA controller will then query the custom metrics API accordingly. It is up to cluster admins, etc. to actually provide a method to collect the given metrics and expose an implementation of the custom metrics API.
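The two custom-metric source types described above map directly onto entries in the HPA's `metrics` list. A sketch of both, using the stable `autoscaling/v2` field layout (the metric names and the `work-queue` Service are hypothetical placeholders; the custom metrics API must actually serve these metrics):

```yaml
  metrics:
  - type: Pods                 # metric describes the pods behind the scale target
    pods:
      metric:
        name: network_throughput_bytes   # hypothetical per-pod metric
      target:
        type: AverageValue
        averageValue: 1Mi                # desired average per pod
  - type: Object               # metric describes some other single object
    object:
      describedObject:
        apiVersion: v1
        kind: Service
        name: work-queue                 # hypothetical object the metric is attached to
      metric:
        name: queue_length
      target:
        type: Value
        value: "30"                      # desired value of the metric itself
```

With a Pods source the controller averages the metric across the target's pods; with an Object source it compares the single object's metric value against the target.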
/remove-lifecycle stale
Hey there @mwielgus and @josephburnett -- 1.18 Enhancements lead here. I wanted to check in and see if you think this Enhancement will be graduating to stable in 1.18 or having a major change in its current level? The current release schedule is: To be included in the release, this enhancement must have a merged KEP in the
Hey there @mwielgus and @josephburnett, Enhancements Team reaching out again. We're about a week out from Enhancement Freeze on the 28th. Let us know if you think there will be any activity on this.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale |
Hey there @mwielgus @josephburnett -- 1.19 Enhancements Lead here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19? In order to have this be part of the release:
The current release schedule is:
If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍 Thanks!
Hi @mwielgus @josephburnett, pinging back again as a reminder. 🙂
Tomorrow, Tuesday May 19 EOD Pacific Time, is Enhancements Freeze. Will this enhancement be part of the 1.19 release cycle?
@mwielgus @josephburnett -- Unfortunately, the deadline for the 1.19 Enhancement freeze has passed. For now, this is being removed from the milestone and 1.19 tracking sheet. If there is a need to get this in, please file an enhancement exception.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale |
Enhancements Lead here. Any plans for this to graduate in 1.20? Thanks!
Any updates on whether this will be included in 1.20? Enhancements Freeze is October 6th, and by that time we require: the KEP must be merged in an implementable state. I note that your design proposals are quite old; please consider updating to the new KEP format. See: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template Thanks
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@DirectXMan12 I read the Enhance HPA Metrics Specificity doc. I also use the HPA with metricLabelSelector, but I found that because metricLabelSelector is
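For context on the metricLabelSelector mentioned above: in the `autoscaling/v2` API this is expressed via the `selector` field of a metric identifier, and the HPA controller forwards it to the custom metrics API as the `metricLabelSelector` query parameter. A minimal sketch (the metric name and labels are hypothetical):

```yaml
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests        # hypothetical custom metric
        selector:                  # forwarded to the metrics API as metricLabelSelector
          matchLabels:
            verb: GET              # hypothetical metric label to filter on
      target:
        type: AverageValue
        averageValue: "100"
```

How the selector is interpreted (e.g. which label dimensions it filters) is up to the custom metrics API implementation serving the metric.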
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue.