
RabbitMQ ignores unacknowledged messages #638

Closed
otherpirate opened this issue Feb 21, 2020 · 6 comments
Labels
bug · scaler-rabbit-mq · stale

Comments

@otherpirate

The RabbitMQ scaler uses the AMQP library, and the method it calls to get the total number of messages returns only the messages in ready state:
https://github.com/streadway/amqp/blob/master/channel.go#L837

If the queue has unacknowledged messages (because they are being processed by consumers), they are ignored, so the scaler can kill a pod that is still processing messages. Those messages then return to the queue, creating an infinite cycle :(
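
For illustration, here is a minimal sketch (not the scaler's actual code; it assumes a local broker at amqp://guest:guest@localhost:5672/ and a hypothetical queue named "work") showing that the linked QueueInspect call only reports the ready count:

```go
package main

import (
	"fmt"
	"log"

	"github.com/streadway/amqp"
)

func main() {
	// Assumed local broker; adjust the URL and queue name for your setup.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// QueueInspect (a passive queue.declare) returns an amqp.Queue whose
	// Messages field counts only READY messages; unacknowledged deliveries
	// are not included.
	q, err := ch.QueueInspect("work")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("ready=%d consumers=%d\n", q.Messages, q.Consumers)
}
```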

Expected Behavior

The total number of messages in the queue is returned

Actual Behavior

Only the count of ready messages is returned

Steps to Reproduce the Problem

  1. Create a queue
  2. Publish 3 messages to the queue
  3. Consume one message and do not confirm it (ACK or NACK); the message will stay in unacknowledged state
  4. Run KEDA: the total of messages will be 2 (it should be 3); see the sketch after this list
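
A rough sketch of these steps with the streadway/amqp client (assuming a local broker and a hypothetical queue named "repro"; error handling mostly elided):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/streadway/amqp"
)

func main() {
	// Assumed local broker; adjust URL and queue name for your environment.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, _ := conn.Channel()

	// Step 1: create a queue (hypothetical name "repro").
	q, _ := ch.QueueDeclare("repro", false, false, false, false, nil)

	// Step 2: publish 3 messages.
	for i := 0; i < 3; i++ {
		ch.Publish("", q.Name, false, false, amqp.Publishing{Body: []byte("msg")})
	}
	time.Sleep(200 * time.Millisecond) // sketch only; real code should use publisher confirms

	// Step 3: consume exactly one message (prefetch 1) and never ack or nack it,
	// so it stays in unacknowledged state on the broker.
	ch.Qos(1, 0, false)
	deliveries, _ := ch.Consume(q.Name, "", false, false, false, false, nil)
	<-deliveries

	// Step 4: this is what the scaler effectively sees. Queue.Messages counts
	// only ready messages, so it prints 2 instead of 3.
	ch2, _ := conn.Channel()
	inspected, _ := ch2.QueueInspect(q.Name)
	fmt.Println("ready messages:", inspected.Messages)
}
```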

Specifications

  • KEDA Version: 1.2
  • Platform & Version: Fedora 30
  • Kubernetes Version: I debugged on a local machine without Kubernetes
  • Scaler(s): RabbitMQ

Note

I'm not sure if this is a bug or a feature.
In my use case it's a bug, but maybe the behavior should be optional.

otherpirate added the bug label Feb 21, 2020
@otherpirate (Author)

I had the same problem with our current autoscaler component; the simple solution there was to use the REST API:
mbogus/kube-amqp-autoscale#11

We are trying to migrate to KEDA and I would be glad to help with the coding.
Can we fix this issue with the REST API?
Should we create a new parameter/configuration for that?

@balchua (Contributor)

balchua commented Feb 21, 2020

I had this issue too. Certain processes just take a bit longer to ack.
Only the RabbitMQ management REST API provides this metric, which requires the management plugin to be enabled. I guess it is commonly enabled.
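
For example, a minimal sketch (assuming the management plugin on localhost:15672 with the default guest/guest credentials, the default vhost "/", and a hypothetical queue named "work") that reads the total, ready, and unacknowledged counts from the management REST API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// queueInfo holds the subset of fields the management API returns for a queue.
type queueInfo struct {
	Messages               int `json:"messages"`                // ready + unacknowledged
	MessagesReady          int `json:"messages_ready"`
	MessagesUnacknowledged int `json:"messages_unacknowledged"`
}

func main() {
	// "%2F" is the URL-encoded default vhost "/"; "work" is the queue name.
	req, err := http.NewRequest("GET", "http://localhost:15672/api/queues/%2F/work", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.SetBasicAuth("guest", "guest")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var q queueInfo
	if err := json.NewDecoder(resp.Body).Decode(&q); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("total=%d ready=%d unacked=%d\n", q.Messages, q.MessagesReady, q.MessagesUnacknowledged)
}
```

The `messages` field already includes unacknowledged deliveries, so scaling on it would cover this use case.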

@Christoph-Raab

Christoph-Raab commented Feb 27, 2020

I ran into the same issue, and from my point of view it comes from the difference between how the queue threshold is set and how the HPA target is calculated from that value.

The HPA target value is

(count of ready messages / currentReplicas) / queueLength

which translates to

((count of total messages - count of consumers) / currentReplicas) / queueLength

A queueLength of 1 and minPods of 1 results in the following behavior:
1 - No messages -> Target: 0/1 (avg)
2 - First message arrives -> Target: 0/1 (avg), since the message is immediately consumed by the pod
3 - Second message arrives -> Target: 1/1 (avg). Nothing happens
4 - Third message arrives -> Target: 2/1 (avg). Now a new pod is started that can consume the second message, so currentReplicas increases and the Target drops to 1/1 (avg)
5 - Now the first pod finishes the first message and consumes the third message -> Target drops to 0/1 (avg)
6 - Now the downscale timer starts, because the Target is too low

The same happens with 6 messages, except that the Target will be 200m/1 (avg) at step 5. I would expect a new pod to start when the second message arrives. With this behavior the scaler is more or less useless.
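
To make the arithmetic above concrete, here is a small sketch (illustration only, not KEDA code) of the average value the HPA ends up comparing against its target:

```go
package main

import "fmt"

// hpaAverage mirrors the formula above: ready messages divided by
// queueLength, averaged over the current replicas.
func hpaAverage(readyMessages, currentReplicas, queueLength int) float64 {
	return float64(readyMessages) / float64(queueLength) / float64(currentReplicas)
}

func main() {
	fmt.Println(hpaAverage(2, 1, 1)) // step 4: 2 ready messages, 1 replica   -> 2 (avg)
	fmt.Println(hpaAverage(2, 2, 1)) // after the new pod starts (2 replicas) -> 1 (avg)
	fmt.Println(hpaAverage(0, 2, 1)) // step 5: remaining messages consumed   -> 0 (avg)
}
```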

I would also expect to be able to set the desired value to a fractional value, like 700m, to get a target of x/700m.

And, as described above, the total number of messages should be used for the calculation, not only the ready ones.

@Christoph-Raab
Copy link

Pull request #700 should fix the problem.

@stale

stale bot commented Oct 13, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Oct 13, 2021
@stale

stale bot commented Oct 20, 2021

This issue has been automatically closed due to inactivity.

stale bot closed this as completed Oct 20, 2021
SpiritZhou pushed a commit to SpiritZhou/keda that referenced this issue Jul 18, 2023