Some jobs seem to be treated as duplicate despite empty queue #440
Comments
It could be that your lock is not getting released when the job finishes. Check the "Unique Digests" tab in the Sidekiq admin UI to see if there are digests that don't seem to go away. We have also seen an increase in orphaned locks recently, which might point to an issue with the gem, but clearing those locks from Redis manually releases the lock and the job can be scheduled again (the delete button doesn't seem to work either: see #438). Your processes which clean Redis are probably what releases the lock once a day; otherwise it would stay stuck.
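If you'd rather inspect the digests from a console than the admin UI, a minimal sketch might look like the following. The `uniquejobs:` key prefix is an assumption; the exact prefix depends on the sidekiq-unique-jobs version you run, so verify it (e.g. with `redis-cli KEYS '*unique*'`) before relying on it.

```ruby
# Pure helper: pick out Redis keys that look like unique-job lock digests.
# The "uniquejobs:" prefix is an assumption -- confirm it for your
# sidekiq-unique-jobs version first.
def digest_keys(keys, prefix: "uniquejobs:")
  keys.select { |k| k.start_with?(prefix) }
end

# Usage against a live Redis (requires the sidekiq gem):
#   require "sidekiq"
#   Sidekiq.redis do |conn|
#     digest_keys(conn.keys("*")).each do |key|
#       puts "#{key} ttl=#{conn.ttl(key)}"
#     end
#   end
```

A digest with no TTL (`ttl` of -1) that outlives its job is a likely orphaned lock.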
This is happening very frequently for us: I have to wait for the queue to clear and delete all outstanding unique digests on a daily basis to ensure that jobs actually run. Is there something I should be looking into to mitigate / debug this?
I'm not really sure what's causing the actual orphaned digests, but in case this can help you, this is the script I wrote to clear them, which runs every 10 minutes. I'll note that in my case, I set it up to only clear the digests if there are currently no other workers running. Naturally, I'm unsure if the nature of your applications affords you that luxury:
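The commenter's script itself wasn't preserved in this thread, but a sketch of the approach described (clear the leaked digests only when no workers are busy) might look like this. The `uniquejobs:*` key pattern is an assumption; check which prefix your sidekiq-unique-jobs version actually uses before deleting anything.

```ruby
# Pure guard so the decision is easy to test: only clear digests when
# Sidekiq reports zero busy workers.
def safe_to_clear?(busy_count)
  busy_count.zero?
end

# Cron entry point, run every 10 minutes (requires the sidekiq gem and
# a live Redis):
#   require "sidekiq"
#   require "sidekiq/api"
#   if safe_to_clear?(Sidekiq::Workers.new.size)
#     Sidekiq.redis do |conn|
#       # "uniquejobs:*" is an assumed key prefix -- verify it for your
#       # sidekiq-unique-jobs version before deleting anything.
#       conn.keys("uniquejobs:*").each { |key| conn.del(key) }
#     end
#   end
```

Deleting every digest wholesale is only safe when nothing is running, which is why the busy-worker guard matters.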
It's surprising to me that we seem to be the only ones experiencing this, but since this is still happening with the latest version (literally dozens of locks leaked per day), leading to many jobs not running, I'm going to have to do the same as you: wait for the queue to be empty and clear out any stale locks. By the way, do you have any other Sidekiq middleware? Or are you using the Airbrake gem with the Sidekiq::RetryableJobsFilter?
I'm using sidekiq, sidetiq, sidekiq-status, sidekiq-failures, and sidekiq-unique-jobs.
I'm using:
I have a job that I want to run every hour on about 60 items, but a select number of those items (typically around 10 or 15) seem to be treated as duplicates, and thus won't send the job for processing.
I first verified, both on the GUI dashboard and in the console, that the queue is empty and there are no busy or enqueued processes:
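The console output isn't preserved above, but verifying an idle Sidekiq instance from a Rails/IRB console typically looks something like the following sketch (the queue name `default` is an assumption; the reporter may use a different queue):

```ruby
# Console checks for an idle Sidekiq instance (requires the sidekiq gem
# and a live Redis):
#   require "sidekiq"
#   require "sidekiq/api"
#   Sidekiq::Queue.new("default").size  # jobs enqueued in that queue
#   Sidekiq::Workers.new.size           # jobs currently being worked on
#   Sidekiq::ScheduledSet.new.size      # jobs scheduled for later
#   Sidekiq::RetrySet.new.size          # jobs awaiting retry

# Pure helper mirroring that check: idle only if every count is zero.
def sidekiq_idle?(counts)
  counts.values.all?(&:zero?)
end
```

If all four counts are zero and `perform_async` still returns `nil`, the duplicate detection is almost certainly coming from a leftover lock digest rather than an actual enqueued job.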
When I manually run something like `MyWorker.perform_async(11497)`, I get the output `nil`. However, when I run `MyWorker.perform_async(11496)`, `MyWorker.perform_async("11497")`, or `MyWorker.set(:queue => :high).perform_async(11497)`, I get the expected :jid response.

This seems to be reset every day, because when this runs around 1 AM, the problem items are properly processed, but that will be the only time of the day.
I do run processes a few times throughout the day to clean up Redis and Sidekiq, but I don't think those are related: I run these scripts 4 times per day, so I would expect the problem items to process 4 times per day rather than just once.