Allow access to Shoryuken utilization metrics #672
@cjlarose Do you have a preferred direction for this functionality?
I'm experimenting with adding this functionality by developing a … That way I can be confident that we'll have the right hooks in Shoryuken so that folks can build their own metrics integration if they need to. Out of curiosity @rbroemeling, would you be interested in statsd integration specifically, or do you expect to use a different platform/protocol?
Opened #673 as a draft. I ended up adding a new event called …
@cjlarose We're specifically looking for Datadog metrics (i.e., to send metrics to …). I like the idea of writing an event system into Shoryuken that ensures that people can easily implement their own statistics gathering if/when necessary; that's a great plan.
Your draft PR looks good, @cjlarose. One concern that I have is that in extreme cases this could cause storms of stats updates (e.g., if each loop retrieves 10 messages, then each loop will "storm" 20 statsd packets into the statsd listener). In high-load cases, doing two stats reports per job (one on assignment, one on completion) might be an unnecessary amount of statsd load. Brainstorming some other metrics that might be interesting (though I'm not positive that these fit with the …
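To make the "storm" concern concrete, here is a minimal sketch (all names hypothetical, not part of Shoryuken's API) of a reporter that rate-limits utilization updates so that a burst of per-message events collapses into at most one statsd write per interval:

```ruby
# Hypothetical throttled reporter: forwards at most one utilization
# update per interval to the sink, silently dropping the updates
# that arrive in between.
class ThrottledReporter
  def initialize(interval:, clock: -> { Process.clock_gettime(Process::CLOCK_MONOTONIC) }, &sink)
    @interval = interval
    @clock = clock
    @sink = sink
    @last_sent_at = nil
  end

  # Called on every utilization change; only occasionally reaches the sink.
  def report(busy, total)
    now = @clock.call
    return if @last_sent_at && (now - @last_sent_at) < @interval
    @last_sent_at = now
    @sink.call(busy, total)
  end
end

# Usage: with a fake clock that never advances, 100 rapid updates
# produce a single downstream write.
sent = []
t = 0.0
reporter = ThrottledReporter.new(interval: 10, clock: -> { t }) { |busy, _total| sent << busy }
100.times { |i| reporter.report(i, 25) }
puts sent.length # => 1
```

Whether dropping intermediate values is acceptable depends on the backend; for gauges (like current utilization) it usually is, since only the latest value matters.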
Awesome to hear that you're interested in Datadog specifically, because that's what I was targeting when I was experimenting with the …
This is something I thought of, but one thing to consider is that clients like dogstatsd don't send one UDP packet for every metric update: instead, a bunch of updates are buffered internally and then the whole buffer is flushed as one big packet (sized according to the network MTU). I think I might just try to get an MVP working first, and then we can adjust accordingly. Either way, I think it's possible, from Shoryuken's perspective, to defer the responsibility of throttling/debouncing/batching to the client. Plus, there might be some clients that actually do want to be notified on every update, so we should at least give them that option in case they need it.
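The client-side batching described above can be sketched in a few lines. This is a toy illustration of the idea, not the actual dogstatsd client: metric lines accumulate in a buffer and are flushed as one datagram once the next line would exceed a maximum packet size.

```ruby
# Toy batching statsd client: counter lines accumulate in a buffer and
# are flushed as a single payload once adding another line would exceed
# the configured maximum packet size (an MTU-ish limit).
class BatchingStatsd
  def initialize(max_packet_size: 1432, &transport)
    @max = max_packet_size
    @buffer = +""
    @transport = transport # e.g. a proc that sends one UDP datagram per call
  end

  def increment(metric)
    line = "#{metric}:1|c"
    flush if !@buffer.empty? && @buffer.bytesize + 1 + line.bytesize > @max
    @buffer << "\n" unless @buffer.empty?
    @buffer << line
  end

  def flush
    return if @buffer.empty?
    @transport.call(@buffer)
    @buffer = +""
  end
end

# Usage: with a tiny 64-byte limit, ten increments arrive as a handful
# of multi-line payloads rather than ten separate packets.
packets = []
client = BatchingStatsd.new(max_packet_size: 64) { |payload| packets << payload }
10.times { client.increment("shoryuken.processed") }
client.flush
puts packets.length # => 5
```

With a realistic packet size (dogstatsd defaults to roughly 1432 bytes for UDP), many more updates fit per datagram, which is why emitting an event per message is often cheaper on the wire than it first appears.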
I've been thinking about some of the same ideas, too! I think what I'll do is try to wrap up …
Related: #671
The idea here is to create some sort of public API such that users can query for Shoryuken's current runtime state in terms of utilization. Ideally, such an API should consider the possibility that Shoryuken is using multiple processing groups and allow users to discern which utilization metrics are associated with which group.
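One hypothetical shape for such a query API (all names invented for illustration; Shoryuken's actual internals may differ) that reports utilization per processing group:

```ruby
# Hypothetical utilization snapshot: per-group busy/max processor
# counts, queryable at any time, plus an overall figure.
class UtilizationSnapshot
  Group = Struct.new(:name, :busy_processors, :max_processors) do
    def utilization
      return 0.0 if max_processors.zero?
      busy_processors.to_f / max_processors
    end
  end

  def initialize
    @groups = {}
  end

  def update(name, busy:, max:)
    @groups[name] = Group.new(name, busy, max)
  end

  def [](name)
    @groups[name]
  end

  def overall_utilization
    busy = @groups.values.sum(&:busy_processors)
    max  = @groups.values.sum(&:max_processors)
    max.zero? ? 0.0 : busy.to_f / max
  end
end

# Usage: two processing groups, queried independently or together.
snapshot = UtilizationSnapshot.new
snapshot.update("default", busy: 5, max: 10)
snapshot.update("bulk",    busy: 2, max: 40)
puts snapshot["default"].utilization # => 0.5
puts snapshot.overall_utilization    # => 0.14
```

Keying the snapshot by group name is what makes it possible to discern which utilization metrics belong to which processing group.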
The most obvious time that users might want access to this data is, of course, when that data changes. One option would be to provide the information to middleware directly. This would allow users to build something akin to `sidekiq-statsd`. This is a little bit awkward, though, because while users would be notified whenever a new job is picked up (`busy_processors` is incremented), they wouldn't be notified whenever a processor becomes available (`busy_processors` is decremented), because while executing middleware, a processor is necessarily currently being consumed.

Another option would be to expose some callbacks that are guaranteed to be executed any time that the utilization metrics change. I think this ultimately gives users the greatest flexibility in how they want to use the data. For example: