Only one of multiple prometheus triggers is taken into account #759

Closed
dipeti opened this issue Apr 17, 2020 · 7 comments
Labels
bug (Something isn't working)

Comments

dipeti commented Apr 17, 2020

We're running a Java app that we'd like to scale out based on metrics scraped by Prometheus.
Our intent is to increase the number of replicas if either of the metrics below exceeds its threshold:

  1. Number of HTTP requests handled per minute: 100
  2. CPU usage: 20%

See the ScaledObject deployment descriptor:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: java-app
  namespace: default
spec:
  scaleTargetRef:
    deploymentName: java-app
  pollingInterval: 30
  cooldownPeriod:  300
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.prom.svc.cluster.local:9090
        metricName: number_of_requests_metric
        threshold: "100"
        query: sum(rate(tomcat_global_request_seconds_count[1m]))
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.prom.svc.cluster.local:9090
        metricName: cpu_usage_metric
        threshold: "20"
        query: sum(system_cpu_usage)*100

Expected Behavior

Call Prometheus for both metrics every 30 seconds and increase the number of replicas as soon as either of the external metrics exceeds the configured threshold.
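
For context, this is how the HPA is documented to combine multiple metrics: it computes a desired replica count per metric and scales to the largest, roughly:

desiredReplicas = max over all metrics of ceil(currentReplicas * currentMetricValue / targetMetricValue)

For example, with 1 replica, 150 requests/min against a target of 100, and 8% CPU against a target of 20, that would be max(ceil(1 * 150/100), ceil(1 * 8/20)) = max(2, 1) = 2 replicas.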

Actual Behavior

Looking at the description of the HPA created by the ScaledObject, only one of the metrics is taken into account. Even though both metrics are listed correctly, the current average value for number_of_requests_metric always takes the same value as cpu_usage_metric.

 ✗ kubectl describe hpa keda-hpa-java-app
Name:                                                  keda-hpa-java-app
Namespace:                                             default
Labels:                                                app.kubernetes.io/managed-by=keda-operator
                                                       app.kubernetes.io/name=keda-hpa-java-app
                                                       app.kubernetes.io/part-of=java-app
                                                       app.kubernetes.io/version=1.3.0
Annotations:                                           <none>
CreationTimestamp:                                     Fri, 17 Apr 2020 12:12:25 -0400
Reference:                                             Deployment/java-app
Metrics:                                               ( current / target )
  "number_of_requests_metric" (target average value):  8 / 100
  "cpu_usage_metric" (target average value):           8 / 20
Min replicas:                                          1
Max replicas:                                          10
Deployment pods:                                       1 current / 1 desired

No matter how many HTTP requests we send to java-app, the metric number_of_requests_metric always shows the same current value as cpu_usage_metric, which leads us to believe that the query for the handled HTTP requests is not being picked up by the HPA.

Steps to Reproduce the Problem

  1. Expose some metrics of a running app to Prometheus
  2. Create a ScaledObject that has more than one prometheus trigger
  3. Describe the HPA created by the ScaledObject and observe that the current values for the metrics do not deviate from each other.

Specifications

  • KEDA Version: 1.3
  • Platform & Version: N/A
  • Kubernetes Version: v1.15.10
  • Scaler(s): Prometheus
dipeti added the bug label on Apr 17, 2020
dipeti closed this as completed on Apr 17, 2020
dipeti (Author) commented Apr 17, 2020

KEDA v1.4 has resolved the issue.

tomkerkhove (Member) commented

Does it? I think you'll still have issues because of the trigger names, or not, @zroubalik?

zroubalik (Member) commented

@tomkerkhove as you can see, he was using 2 triggers with different metric names. For the prometheus scaler, metricName is a mandatory part of the trigger metadata, so this example works.

ckuduvalli (Contributor) commented

@dipeti / @tomkerkhove / @zroubalik
Could you please tell me whether system_cpu_usage is a valid Prometheus query, or is it just a placeholder for another expression? I am also trying to use CPU usage as a metric through the KEDA prometheus scaler and didn't find the system_cpu_usage metric in Prometheus, hence the question.

tomkerkhove (Member) commented

It's best to use a metric you can actually find in Prometheus, as that's your source of truth; if it's not in there, KEDA won't be able to scale on it.
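
For example, here is a minimal sketch of a CPU trigger built on a metric that is commonly available when cAdvisor/kubelet metrics are scraped; the metric name, label names, and pod name pattern below are assumptions and depend on your Prometheus setup:

    - type: prometheus
      metadata:
        serverAddress: http://prometheus.prom.svc.cluster.local:9090
        metricName: cpu_usage_metric
        threshold: "20"
        # assumes cAdvisor's container_cpu_usage_seconds_total is scraped and the pod label matches your app
        query: sum(rate(container_cpu_usage_seconds_total{namespace="default", pod=~"java-app.*"}[1m])) * 100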

alegmal commented Sep 14, 2020

I have the same issue with KEDA 1.4 in GKE

> @tomkerkhove as you can see, he was using 2 triggers with different metric names. For the prometheus scaler, metricName is a mandatory part of the trigger metadata, so this example works.

Sorry, I don't understand that. Can you please tell me whether having the same names is a problem or not? And can they be arbitrarily chosen, or should they indicate something? I was under the impression that I could choose any metric name I want.

EDIT: After testing, it seems that even using the queried metric name as the metricName (container_memory_working_set_bytes) does not help.

zroubalik (Member) commented

@alegmal multiple triggers in a ScaledObject don't work correctly in KEDA v1. I'd recommend trying the KEDA v2 Beta.

https://keda.sh/blog/keda-2.0-beta/
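
For anyone migrating, here is a rough sketch of the ScaledObject from this issue under the KEDA v2 API; the apiVersion and scaleTargetRef change, and you should verify the trigger fields against the v2 docs for your exact version:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: java-app
  namespace: default
spec:
  scaleTargetRef:
    name: java-app          # v2 uses "name" instead of "deploymentName"
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.prom.svc.cluster.local:9090
        metricName: number_of_requests_metric
        threshold: "100"
        query: sum(rate(tomcat_global_request_seconds_count[1m]))
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.prom.svc.cluster.local:9090
        metricName: cpu_usage_metric
        threshold: "20"
        query: sum(system_cpu_usage)*100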
