Extracted queue polling strategy #106
Conversation
```ruby
  end
end

def pause_queue!(queue)
  return if !@queues.include?(queue) || Shoryuken.options[:delay].to_f <= 0

def queue_empty(queue)
```
Assignment Branch Condition size for queue_empty is too high. [16.03/15]
I can't see how to fix this. The method itself seems pretty straightforward to me.
Actually, it seems to be happy with the extraction of the delay method. I'll do my best to make houndci happy.
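For reference, the shape of that fix is roughly the following (a sketch only; apart from `Shoryuken.options[:delay]`, which appears in the diff above, the method bodies and helpers such as `logger` and `pause` are assumptions):

```ruby
# Illustrative only, not the actual diff: pulling the option lookup into a
# small helper removes enough assignments/branches/calls from queue_empty
# to bring its ABC score back under HoundCI's 15 limit.
def delay
  Shoryuken.options[:delay].to_f
end

def queue_empty(queue)
  return if delay <= 0

  logger.debug { "Pausing #{queue} for #{delay} seconds because it is empty" }
  pause(queue)
end
```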
I think the latest RSpec has changed equality behaviour and so the tests are now failing. I'll get that fixed later today.
Mock "receive" calls appear to use a stronger interpretation of equality. Also altered assertions around raised errors to suppress new warnings.
Hi @gshutler how is that going so far? Are you using it in your setup?
It's been working well. Been running for at least a couple of weeks in production now without issue.
```ruby
end

private

def receive_messages(queue, limit)
```
Reason for making this private? This is a breaking API change.
Because it can be without breaking the implementation, reducing the surface area of the class, and making the protocol of a "fetcher" more obvious if a different implementation were wanted at some point.
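For illustration, the reduced surface area looks roughly like this (a sketch with assumed names and bodies, not the actual shoryuken source):

```ruby
require 'aws-sdk-sqs'

# Sketch: the public protocol of a "fetcher" shrinks to a single entry
# point, so an alternative implementation only has to honour #fetch.
class Fetcher
  def initialize(client = Aws::SQS::Client.new)
    @client = client
  end

  def fetch(queue_url, limit)
    receive_messages(queue_url, limit)
  end

  private

  # SQS-specific detail, hidden behind the narrow public protocol.
  def receive_messages(queue_url, limit)
    @client.receive_message(
      queue_url: queue_url,
      max_number_of_messages: [limit, 10].min # SQS caps a single receive at 10
    ).messages
  end
end
```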
I was a little bit sceptical, but the idea is not a bad one. The PR is too intrusive though. I'd really like to cut the scope of this one and make sure we are just isolating that component. Cutting the surface area of the class interface and changing method interfaces should be part of separate PRs.
I don't think there's a way to introduce a strategy without it being intrusive. As mentioned in my last comment, I tried to minimize the changes made to other methods that weren't directly related to making the polling strategy work. I think the only thing that could be undone whilst retaining the functionality is making receive_messages private.
@gshutler sorry, you're right. I've been looking at the PR more closely and it looks good. What I'd like to do is maybe go a little bit further.
@mariokostelac that sounds like it makes sense. I did try that route at the time, but it changed so much internally that I didn't think it would make a very polite PR! I don't have the capacity to work on this at the moment; I ended up creating a custom message processor based on a forking model because of memory leaks in our application that I was never able to get to the bottom of. Feel free to extend this work or create a new branch with some of the ideas from it. I do watch this repository, so I'd be happy to cast my eye over the changes if you think that would be helpful.
Closed in favor of #236
Hey @gshutler I know that it might be too late, but have you seen FIFO queues? They can guarantee message processing order. https://github.com/phstc/shoryuken/wiki/FIFO-Queues
@phstc Yeah, they look pretty interesting, though I do think that if you rely on precise message ordering you're generally doing messaging wrong: at some point a message will fail, and then should you stop processing all other messages? We were more looking for "if both queues have messages, we should process the ones from the high queue first". This is because (to simplify) our "low" messages are generally fine to be processed any time in the next 60 minutes, but our "high" ones should be processed as soon as possible.
@gshutler I agree with you, they fit very specific cases, but in the cases they do, they're very useful.
I can see that @atyndall also needs this. I'm starting to work on the concurrent-ruby migration, and after that I plan to review #236 and #263; those PRs should support this. Meanwhile, the only way to do it is to run more than one Shoryuken process.
In order to be able to customize the order and manner in which Shoryuken polls SQS queues, the previous hard-coded strategy was extracted into a separate class.
This meant that some of the method names were altered to be more generic (for example rebalance_queue_weight! became messages_received), otherwise this was a direct extraction.
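Roughly, the extracted class exposes a protocol along these lines (a sketch only: `messages_received` and `queue_empty` come from the description and review comments above, while the class name, `next_queue`, and the method bodies are assumptions):

```ruby
# Sketch of the polling-strategy protocol: the fetch loop asks the strategy
# which queue to poll next and reports back what it found.
class WeightedRoundRobin
  def initialize(queues)
    @queues = queues.dup
  end

  # Pick the next queue to poll.
  def next_queue
    queue = @queues.shift
    @queues.push(queue)
    queue
  end

  # A poll returned messages (previously rebalance_queue_weight!).
  def messages_received(queue, count); end

  # A poll returned nothing, e.g. pause the queue for Shoryuken.options[:delay].
  def queue_empty(queue); end
end
```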
I don't think this is 100% ready for merging; I'm not entirely happy with the names and so forth, but I think it's solid enough to start having a discussion around.
My motivation was to have stricter prioritization semantics than those provided by the default implementation. I also wanted the option to do an extended poll on the highest-priority queue when all queues are empty, rather than either continually polling them all or sleeping and not polling anything.
This is my custom strategy that achieves that:
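What follows is a minimal sketch of a strategy with those semantics, assuming the protocol sketched earlier (not the original snippet); the `wait_time_seconds` hook for the extended poll is an extra assumption.

```ruby
# Strict priority: always poll the highest-priority queue that hasn't just
# come back empty; once every queue is empty, long-poll the top queue
# instead of sleeping or hammering them all.
class StrictPriority
  def initialize(queues, long_poll_seconds: 20)
    @queues = queues                   # ordered highest priority first
    @empty = {}
    @long_poll_seconds = long_poll_seconds
  end

  def next_queue
    @queues.find { |queue| !@empty[queue] } || @queues.first
  end

  def messages_received(queue, _count)
    @empty.clear                       # work arrived, start from the top again
  end

  def queue_empty(queue)
    @empty[queue] = true
  end

  # How long the fetcher should wait on SQS for the next receive call.
  def wait_time_seconds
    @empty.size == @queues.size ? @long_poll_seconds : 0
  end
end
```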
From this, a strategy could be developed that may satisfy #92.