Somewhat related to #77.
The current behavior of sidekiq-unique-jobs is to unlock a job when it finishes, regardless of whether it succeeded or failed. From what I can tell this is intentional: Sidekiq requeues failed jobs, and sidekiq-unique-jobs would otherwise prevent the retry from being enqueued.
There is one exception: when a job is killed because it exceeds the timeout during a worker shutdown (https://github.com/mperham/sidekiq/blob/master/lib/sidekiq/processor.rb#L55-L59). In that case the job stays in the queue and the worker picks it back up when it restarts. The kill raises an exception, so sidekiq-unique-jobs removes the lock even though the job is still queued. When the worker starts again it picks the job back up, but in the meantime non-unique duplicates can be enqueued because the lock was already removed.
This PR duplicates the Sidekiq processor's logic so that the shutdown exception is also caught here, and the job is not unlocked in that case.
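A minimal sketch of the idea (the class and method names here are illustrative, not the actual sidekiq-unique-jobs internals, and `Sidekiq::Shutdown` is stubbed so the snippet runs standalone): release the lock on success and on ordinary failures, which Sidekiq retries, but keep it on a shutdown kill, because the killed job stays in the queue and will run again on restart.

```ruby
# Stand-in for Sidekiq::Shutdown so the sketch runs without the gem.
# In Sidekiq it inherits from Interrupt, so a plain `rescue` (and
# `rescue StandardError`) does not swallow it.
module Sidekiq
  class Shutdown < Interrupt; end
end

# Hypothetical in-memory lock; sidekiq-unique-jobs keeps its locks in Redis.
class Lock
  attr_reader :locked

  def initialize
    @locked = true
  end

  def unlock
    @locked = false
  end
end

# Mirror the processor's shutdown handling: unlock after a normal run or an
# ordinary error (the retry will re-acquire the lock), but re-raise a
# Shutdown without unlocking so no duplicate can be enqueued before restart.
def run_locked(lock)
  yield
  lock.unlock
rescue Sidekiq::Shutdown
  raise           # job will be re-fetched on restart; keep the lock
rescue StandardError
  lock.unlock     # Sidekiq requeues the failure; let the retry lock again
  raise
end
```

The key point is only the ordering of the rescue clauses: the shutdown case must be separated out before the catch-all that unlocks.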
I first noticed this with a unique recurring task that took longer than its schedule interval (it ran for 100 seconds but was started every minute). If the worker was restarted while the task was running, the task would resume on restart, and at the top of the next minute a duplicate would start. When the first task completed it would remove the lock placed by the second, and the two jobs would leapfrog forever.