
Allow cpu/memory scaler to target a specific container in the pod #1378

Closed
trondhindenes opened this issue Nov 24, 2020 · 7 comments · Fixed by #3513
Labels
feature-request All issues for new features that have not been committed to needs-discussion

Comments

@trondhindenes

trondhindenes commented Nov 24, 2020

Use-Case

One limitation of Kubernetes' default HPA implementation is that scaling happens based on the "sum" of resource usage across all containers in the targeted pod. This means that sidecar metrics affect the scaling, so sidecar cpu/memory requests/limits have to be managed very carefully to achieve the desired scaling dynamics. Especially in cases where sidecar cpu/memory resources are tiny compared to those of the "main" app container, tuning this correctly is often very difficult. The Kubernetes metrics-server actually exposes per-container metrics; it's just that the default HPA doesn't have a way of selecting which individual container to track metrics for.

Since KEDA implements its own cpu/memory scaler, it would be so awesome if a new optional field container were added, allowing only the metrics of a specific container (ignoring other containers in the pod) to be used for scaling.
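A minimal sketch of what the requested configuration might look like. Note that the containerName field shown here is the feature being proposed, not an existing API, and all resource names are hypothetical:

```yaml
# Hypothetical ScaledObject illustrating the request: scale on the cpu
# usage of one named container only, ignoring sidecars in the same pod.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: app-scaledobject          # hypothetical name
spec:
  scaleTargetRef:
    name: app-deployment          # hypothetical Deployment
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"
        containerName: app        # proposed field: track only this container
```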

@trondhindenes trondhindenes added feature-request All issues for new features that have not been committed to needs-discussion labels Nov 24, 2020
@silenceper
Contributor

ref: kubernetes/kubernetes#90691
I see that the HPA has added the ContainerResourceMetricSource type, which should solve the problem you mentioned.

Maybe we can add this support in a subsequent version, once this feature is released in Kubernetes 1.20.
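For reference, the ContainerResource metric source referred to above lets a plain HPA target a single container by name. A rough sketch against the autoscaling/v2beta2 API available around Kubernetes 1.20 (resource names are illustrative):

```yaml
# HPA scaling on the cpu utilization of one container in the pod,
# using the ContainerResource metric source from kubernetes/kubernetes#90691.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa                   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                     # illustrative Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: ContainerResource
      containerResource:
        name: cpu
        container: app            # only this container's metrics are considered
        target:
          type: Utilization
          averageUtilization: 60
```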

@trondhindenes
Author

Oh - my impression was that KEDA implements its own cpu/memory scaler and thus wouldn't be bound by the limitations in Kubernetes (which now seem to be mitigated).

@silenceper
Contributor

silenceper commented Nov 25, 2020

No, the cpu/memory scaler depends on the Kubernetes HPA; KEDA does not implement it separately.

@tomkerkhove
Member

Yup, we just provide a user-friendly scaler on top of it, so that you get the same experience for CPU/Memory as with any other scaler.

@trondhindenes
Author

Makes total sense. Thanks for explaining! I guess we can close this.

@prasobhen

prasobhen commented Jun 28, 2022

Getting this error on an EKS Fargate cluster:
one or more objects failed to apply, reason: HorizontalPodAutoscaler.autoscaling "xxxxxxxx" is invalid: [spec.metrics[0].containerResource: Required value: must populate information for the given metric source (only allowed when HPAContainerMetrics feature is enabled), spec.metrics[1].containerResource: Required value: must populate information for the given metric source (only allowed when HPAContainerMetrics feature is enabled)]
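The error indicates that the HPAContainerMetrics feature gate is disabled on the API server (the ContainerResource metric source was alpha behind this gate as of Kubernetes 1.20). On managed control planes such as EKS the user generally cannot change API server flags; on a self-managed cluster the gate can be enabled, for example via kubeadm (illustrative sketch, not an EKS fix):

```yaml
# kubeadm ClusterConfiguration fragment enabling the HPAContainerMetrics
# feature gate on the API server of a self-managed cluster.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: "HPAContainerMetrics=true"
```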

@prasobhen

ref:kubernetes/kubernetes#90691 I see that HPA has added the ContainerResourceMetricSource type, which should solve the problem you mentioned.

Maybe we can add this support in subsequent versions, when this feature is released in kubernetes 1.20

Please share the fix for this issue

@zroubalik zroubalik reopened this Aug 5, 2022
@zroubalik zroubalik moved this from Proposed to In Review in Roadmap - KEDA Core Aug 5, 2022
@zroubalik zroubalik moved this to Proposed in Roadmap - KEDA Core Aug 5, 2022
Repository owner moved this from In Review to Ready To Ship in Roadmap - KEDA Core Aug 8, 2022
@tomkerkhove tomkerkhove moved this from Ready To Ship to Done in Roadmap - KEDA Core Aug 10, 2022