AWS SQS Scaler: Add Visible + NotVisible messages for scaling considerations #1664
Conversation
Looking good, thanks for the fix!
Could you please update the Changelog as well? (Improvements section)
I wonder whether we should add a note to the docs about this behavior: https://keda.sh/docs/2.1/scalers/aws-sqs/#trigger-specification
WDYT @tomkerkhove ?
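For context, the trigger specification being discussed looks roughly like the following (values are illustrative, not from this PR):

```yaml
# Sketch of an aws-sqs-queue trigger on a ScaledObject; queue URL,
# region, and threshold are placeholder values.
triggers:
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue
      queueLength: "5"
      awsRegion: "eu-west-1"
```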
Force-pushed from 388f6a2 to 8e766e3: …rations Signed-off-by: Ty Brown <[email protected]>
@zroubalik I've updated the CHANGELOG as requested. Happy to update docs as well if y'all want, just let me know.
Would be good to just doc it indeed!
kedacore/keda-docs#398 is opened for Doc updates.
LGTM, thanks!
This should be documented as a breaking change. Counting the in-flight messages as part of the queue length changes the behaviour and makes it less intuitive to select a proper scale-up threshold for systems with sustained throughput and occasional bursts. If my goal is to maintain a close-to-zero queue length at all times, previously it was just a matter of selecting an arbitrary, sufficiently high non-zero threshold to trigger a scale-up. That threshold could be largely independent of the number of pods needed in a normal situation (and thus could be chosen equal across multiple similar systems); this is no longer the case with this change.
This should have been configurable, IMHO. I can work around it by changing the polling rate, but that increases the delay for the whole batch, which isn't cool.
I agree with the above. We have some messages that can take 30+ minutes to process, so KEDA adds unnecessary pods.
I agree with the above. This now causes resource redundancy, with pods being told to "stay up" while there is no work for them to do. Can we add a configurable flag for this?
OK, I hadn't been aware of such a use case. Anybody willing to contribute the configuration option?
Happy to take a look with @LiamStorkey! |
@pjaak Did you ever get around to implementing this?
This change affects the AWS SQS Scaler and how it decides when to scale a ScaledObject. Before this change, the scaler only considered ApproximateNumberOfMessages, which only includes pending queue messages. After this change, the scaler will add ApproximateNumberOfMessages and ApproximateNumberOfMessagesNotVisible, the latter of which counts all in-flight messages.
Checklist
Fixes #1663