
RIP Celluloid, Hello concurrent-ruby/Thread pool #291

Merged: 47 commits into v3, Feb 14, 2017
Conversation

@phstc (Collaborator) commented Dec 19, 2016

I think Shoryuken can be simplified; it feels like just a thread pool would be enough. Currently it's completely inspired by Sidekiq, but thinking more about it, doesn't it feel like a manager that calls the fetcher and then hands messages to processors (a thread pool) should be enough?!
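A minimal sketch of that shape, assuming concurrent-ruby's Concurrent::FixedThreadPool; Manager, fetcher and processor here are placeholder names, not the classes this PR introduces:

require 'concurrent'

# Placeholder sketch: a manager loop that fetches from SQS and hands each
# message to a fixed-size thread pool, with no actor framework involved.
class Manager
  def initialize(fetcher, processor, concurrency)
    @fetcher   = fetcher    # responds to #fetch => array of SQS messages
    @processor = processor  # responds to #process(sqs_msg)
    @pool      = Concurrent::FixedThreadPool.new(concurrency)
  end

  def run
    loop do
      @fetcher.fetch.each do |sqs_msg|
        @pool.post { @processor.process(sqs_msg) }
      end
    end
  end
end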

Fixes #185

Related to #280, #88

Benchmark: time to process 1k messages (PutsReq worker)

@phstc (Collaborator, Author) commented Dec 19, 2016

@mariokostelac this is what I was talking about. I do need to review all tests, but so far it's working, with just a simple ThreadPool + atomic Integer and Boolean. Could you have a look at it?
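For reference, the concurrent-ruby primitives being referred to look roughly like this; the names ready/running and the value 25 are only illustrative, not the PR's actual code:

require 'concurrent'

concurrency = 25                                     # illustrative value
ready   = Concurrent::AtomicFixnum.new(concurrency)  # how many processors are free
running = Concurrent::AtomicBoolean.new(true)        # cleared when shutting down

ready.decrement     # a message was dispatched to the pool
ready.increment     # a processor finished its message
running.true?       # checked by a dispatch loop before fetching again
running.make_false  # signals the dispatch loop to stop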

module Shoryuken
  module Middleware
    module Server
      class AutoExtendVisibility
        EXTEND_UPFRONT_SECONDS = 5

        def call(worker, queue, sqs_msg, body)
          timer = auto_visibility_timer(worker, queue, sqs_msg, body)
          # timer = auto_visibility_timer(worker, queue, sqs_msg, body)
@phstc (Collaborator, Author) commented on this diff:
I still need to figure out how to implement this.
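(One way to get a periodic timer without Celluloid would be concurrent-ruby's Concurrent::TimerTask. The sketch below only illustrates that idea and is not the implementation this PR ends up with; queue_visibility_timeout is a placeholder, and it assumes sqs_msg responds to change_visibility the way an Aws::SQS::Message does.)

# Illustration only: re-extend the message's visibility shortly before it expires.
def auto_visibility_timer(worker, queue, sqs_msg, body)
  interval = queue_visibility_timeout - EXTEND_UPFRONT_SECONDS # placeholder helper

  timer = Concurrent::TimerTask.new(execution_interval: interval) do
    sqs_msg.change_visibility(visibility_timeout: queue_visibility_timeout)
  end
  timer.execute
  timer # caller keeps the handle and calls timer.shutdown once the worker finishes
end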

@phstc mentioned this pull request Dec 19, 2016
@phstc (Collaborator, Author) commented Dec 19, 2016

> I do need to review all tests

That was way easier than I expected.

@phstc (Collaborator, Author) commented Dec 24, 2016

@mariokostelac I'm adding some "performance" results in the PR description 🍻

limit = available_processors > FETCH_LIMIT ? FETCH_LIMIT : available_processors

sqs_msgs = Array(receive_messages(queue, limit))
logger.info { "Found #{sqs_msgs.size} messages for '#{queue.name}'" }

@tjsingleton commented on this diff:
As an aside, have you considered not emitting this when the sqs_msgs are empty?

@phstc (Collaborator, Author) replied:

@tjsingleton I haven't, but it seems like a good idea. The only issue is that, without that message, the process may look like it isn't running/moving when all the queues are empty. WDYT?

@tjsingleton replied:

I thought about this for a while, and as long as the process lifecycle has good log messages, I don't think it's typically expected to "heartbeat" via the log. For instance, do you trust that puma is up and processing requests even though it doesn't log?
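One way to apply that suggestion to the snippet above, as a sketch, is a simple guard on the existing log line:

sqs_msgs = Array(receive_messages(queue, limit))
logger.info { "Found #{sqs_msgs.size} messages for '#{queue.name}'" } unless sqs_msgs.empty?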

@phstc (Collaborator, Author) commented Jan 12, 2017 via email

phstc pushed commits that referenced this pull request Jan 25, 2017
@phstc changed the base branch from master to v3 February 13, 2017 03:40
@phstc merged commit 4f2ba80 into v3 Feb 14, 2017
@tjsingleton commented:
Congrats on getting this merged! 👍

@phstc (Collaborator, Author) commented Feb 16, 2017

@tjsingleton thanks! I merged it into a v3 branch, to which I will add other changes. But I'm expecting to release a new version soon!

@dkoprov commented Feb 17, 2017

Great news! Thank you.
