Feature: Multiple Schedulers #212
Conversation
@oxalorg this looks like it works well, though for some reason when paired with django-rq it seems the scheduler itself is executing the jobs rather than the rqworker process. My test was to run
Hey @mattjegan I've created a dummy repository to test oxalorg's PR and I didn't come across what you experienced. The jobs are run by the rqworker process. You can quickly pull and test this repo: https://github.com/ArionMiles/rqschedtest Let me know if there are any other issues 😄
@ArionMiles Thanks for this, everything looks good with that repo. Perhaps it was something else in my env.
Can you let me know what version of Django you were testing on when you came across your problem? My test repo uses Django 1.11.
If everything's okay with this PR, can we go ahead and merge this?
Bllllluuuuurrrrrrrrrrrpppppppppppp (placing this useful note here so I get notified when this is merged)
How are we doing here? We'd love to see this merged :)
Another comment wishing for merge and asking for status. @oxalorg: could you rebase your branch? I'll be happy to point my pipenv to your fork for now.
Sure, give me till the end of the day. I'll have it rebased! ^_^
Maybe, just maybe, the lock implementation should switch to redispy's `Lock`.
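For reference, a minimal sketch of what that could look like with redis-py's built-in `Lock`; the key name and timeout below are made up for illustration:

```python
import redis

conn = redis.Redis()

# redis-py ships a lock implementation (redis.lock.Lock); conn.lock() is a
# convenience constructor that handles the token and expiry for us.
scheduler_lock = conn.lock("rq:scheduler:lock", timeout=60)

if scheduler_lock.acquire(blocking=False):
    try:
        pass  # the scheduling work would happen here while the lock is held
    finally:
        scheduler_lock.release()
```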
Sorry it took so long for me to review this. Great PR!
One other thing, do you mind writing some docs about running multiple scheduler instances?
🎉 🎉 🍾 🍾 🍷 🍻 🍺 🦃 🌵 🍆
Thank you @selwin for merging. I'll send in another PR soon with the updated docs. 😸
This PR is in reference to issue #195
It adds the feature of running multiple scheduler instances (on multiple servers).
Each scheduler registers itself, then tries to acquire a lock to do the scheduling work. If it acquires the lock, it schedules the jobs to the appropriate queues, removes the lock, and sleeps for its interval.
In the meantime another scheduler can try to acquire the lock. If it fails, it simply sleeps for its interval and tries again later.
This way we don't have a single scheduler doing all the work; different schedulers may acquire the lock at different times.
I've made sure that schedulers that never acquire the lock still register their birth, and a heartbeat keeps the scheduler instance key from expiring even if the instance never wins the lock.
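To make the flow concrete, here is a rough sketch of the loop described above (not the PR's actual code; the key names, the interval, and the `enqueue_due_jobs` helper are placeholders):

```python
import time
import uuid

import redis

conn = redis.Redis()
LOCK_KEY = "rq:scheduler:lock"   # placeholder key name
INTERVAL = 60                    # seconds between scheduling runs
scheduler_id = uuid.uuid4().hex


def enqueue_due_jobs():
    """Placeholder for moving due jobs onto their queues."""


def register_birth():
    # Every instance registers itself, even if it never wins the lock.
    conn.set(f"rq:scheduler_instance:{scheduler_id}", "alive", ex=INTERVAL * 3)


def heartbeat():
    # Refresh the instance key so it doesn't expire between runs.
    conn.expire(f"rq:scheduler_instance:{scheduler_id}", INTERVAL * 3)


register_birth()
while True:
    heartbeat()
    # SET NX with an expiry: only one scheduler holds the lock at a time,
    # and a crashed holder can't block the others forever.
    if conn.set(LOCK_KEY, scheduler_id, nx=True, ex=INTERVAL):
        try:
            enqueue_due_jobs()
        finally:
            conn.delete(LOCK_KEY)
    # Whether or not this instance got the lock, sleep and try again.
    time.sleep(INTERVAL)
```

The `nx=True, ex=...` SET gives the lock an expiry, so a scheduler that dies while holding it can't block the other instances indefinitely.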
Please let me know how we can proceed :)