
Provide support for multiple triggers #476

Closed
tomkerkhove opened this issue Nov 21, 2019 · 17 comments
Labels: feature-request, needs-discussion
Milestone: v2.0

Comments

@tomkerkhove
Member

Provide support for multiple triggers that act as an AND.

Use-Case

Queue processing that persists orders in the database.

As of today, we can autoscale based on message count, but if our DB DTU is at 99% capacity, scaling out will only make it worse.

Specification

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
  labels:
    deploymentName: my-queue-worker
spec:
  scaleTargetRef:
    deploymentName: my-queue-worker
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: functions-sbqueue
      queueLength: "5" # Optional. Queue length target for HPA. Default: 5 messages
  - type: azure-monitor # Requested here: https://github.com/kedacore/keda/issues/155
    metadata:
      resourceUri: <uri>

Kudos to @jornbeyers for the idea.

tomkerkhove added the needs-discussion and feature-request labels on Nov 21, 2019
@tomkerkhove
Member Author

Thoughts @jeffhollan?

@zroubalik
Member

Interesting. How exactly would this calculate the final replica count? Sum across all triggers? Or what is the idea on the scaling logic here?

@tomkerkhove
Member Author

I would say that we just scale out if both criteria have been met.

As far as I know we just add one instance at a time based on the metric, but are you thinking more in terms of the HPA underneath it?

@zroubalik
Member

Yeah, but it is an interesting scenario for sure. We should look into this.

@fktkrt

fktkrt commented Jan 20, 2020

Interesting. How exactly would this calculate the final replica count? Sum across all triggers? Or what is the idea on the scaling logic here?

The HorizontalPodAutoscaler calculates a desired replica count for each metric, then chooses the highest one. Should that behavior be modified?
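
As a rough illustration of that behavior (a sketch only; the metric names and numbers below are hypothetical, not what KEDA necessarily generates), each trigger would surface as its own external metric on the HPA:

metrics:
- type: External
  external:
    metric:
      name: queue-a-length   # hypothetical metric exposed for the first trigger
    target:
      type: AverageValue
      averageValue: "30"
- type: External
  external:
    metric:
      name: queue-b-length   # hypothetical metric exposed for the second trigger
    target:
      type: AverageValue
      averageValue: "30"

With queue-a at 60 messages and queue-b at 150, the per-metric desired replica counts would be ceil(60/30) = 2 and ceil(150/30) = 5, and the HPA would scale to the maximum, i.e. 5 replicas.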

@alegmal

alegmal commented Jun 18, 2020

Hi, is there any update on where this feature stands?
Our use case is a service that consumes multiple queues.

@marchmallow
Contributor

marchmallow commented Jun 22, 2020

We have a similar use case, as we watch multiple Redis lists from the same deployment. It would be nice if I could also specify a single trigger with a regex on the queue name, have the scaler watch all matching queues, and then also allow scaling on the total sum of elements across all queues.

@inuyasha82 FYI
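
Once this lands, watching several Redis lists from the same deployment could look something like the sketch below (list names and the env var are hypothetical; the redis scaler metadata follows the documented address/listName/listLength parameters):

  triggers:
  - type: redis
    metadata:
      addressFromEnv: REDIS_ADDRESS   # hypothetical env var holding host:port
      listName: orders                # hypothetical list
      listLength: "10"
  - type: redis
    metadata:
      addressFromEnv: REDIS_ADDRESS
      listName: invoices              # hypothetical list
      listLength: "10"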

@inuyasha82
Contributor

I think the change is not hard to implement; I even have an idea of where to put it. I can probably work on it in the next few weeks.

@tomkerkhove
Member Author

This will be finalized with #733, right @zroubalik?

@zroubalik
Member

@tomkerkhove yes!

zroubalik added this to the v2.0 milestone on Aug 5, 2020
@zroubalik
Member

zroubalik commented Aug 5, 2020

Solved in #966

@tomkerkhove
Member Author

Just verifying: this is only for ScaledObject, I presume?

@zroubalik
Member

I think it should work for ScaledJob as well.

@TsuyoshiUshio could you please confirm?
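
For reference, a minimal sketch of a ScaledJob with more than one trigger (the object name, image, and queue names below are hypothetical; the trigger metadata just mirrors the RabbitMQ example further down in this thread):

apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: multi-trigger-job            # hypothetical name
  namespace: default
spec:
  jobTargetRef:
    template:
      spec:
        containers:
        - name: worker
          image: example.com/worker:latest   # hypothetical image
        restartPolicy: Never
  pollingInterval: 30
  maxReplicaCount: 100
  triggers:
  - type: rabbitmq
    metadata:
      hostFromEnv: MQ_INFO
      protocol: amqp
      queueLength: '30'
      queueName: queue-a             # hypothetical queue
  - type: rabbitmq
    metadata:
      hostFromEnv: MQ_INFO
      protocol: amqp
      queueLength: '30'
      queueName: queue-b             # hypothetical queue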

@alegmal

alegmal commented Oct 25, 2020

Hi,

Could you please provide an example for multiple triggers for rabbitmq?

I have deleted KEDA 1.5 and deployed KEDA 2.0.0-rc2 in our dev cluster.

If I create the following YAML (single trigger), everything works:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: <deployment-name>
  namespace: default
spec:
  scaleTargetRef:
    name: <deployment-name>
  pollingInterval: 2
  cooldownPeriod: 60
  minReplicaCount: 1
  maxReplicaCount: 100
  triggers:
  - type: rabbitmq
    metadata:
      hostFromEnv: MQ_INFO
      protocol: amqp
      queueLength: '30'
      queueName: <queue name>

If I add more triggers it breaks (I need 5, but even 2 breaks it):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: <deployment-name>
  namespace: default
spec:
  scaleTargetRef:
    name: <deployment-name>
  pollingInterval: 2
  cooldownPeriod: 60
  minReplicaCount: 1
  maxReplicaCount: 100
  triggers:
  - type: rabbitmq
    metadata:
      hostFromEnv: MQ_INFO
      protocol: amqp
      queueLength: '30'
      queueName: <queue # 1>
  - type: rabbitmq
    metadata:
      hostFromEnv: MQ_INFO
      protocol: amqp
      queueLength: '30'
      queueName: <queue # 2>

Also, in the documentation:
https://keda.sh/docs/2.0/scalers/rabbitmq-queue/

It shows apiVersion: keda.k8s.io/v1alpha1, but I believe it's a mistake, so I used keda.sh/v1alpha1.

@zroubalik
Member

@alegmal and what problem exactly are you facing?

@alegmal

alegmal commented Oct 25, 2020

Figured it out.

Apparently scaledobjects.keda.k8s.io was not really deleted, and it would not let me delete it.
The workaround that worked was:
kubectl patch crd/scaledobjects.keda.k8s.io -p '{"metadata":{"finalizers":[]}}' --type=merge

After running that command, crd/scaledobjects.keda.k8s.io was instantly deleted and now everything seems to work!
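
For anyone hitting the same thing, a quick way to check that nothing from v1 is still hanging around before (re)installing v2 (just a sketch; the grep patterns only match the API groups):

kubectl get crd | grep keda.k8s.io   # should return nothing once the old v1 CRDs are gone
kubectl get crd | grep keda.sh       # should list the v2 CRDs (scaledobjects, scaledjobs, triggerauthentications, ...)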

@zroubalik
Member

@alegmal glad to hear that :) Yeah it is important to delete the old CRDs first (https://keda.sh/docs/2.0/migration/#migrating-from-keda-v1-to-v2)
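
For completeness, a sketch of removing the old v1 CRDs explicitly before installing v2 (adjust for however v1 was originally installed):

kubectl delete crd scaledobjects.keda.k8s.io
kubectl delete crd triggerauthentications.keda.k8s.io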

preflightsiren pushed a commit to preflightsiren/keda that referenced this issue Nov 7, 2021