Describe the bug
I have some workers which are meant to run continuously. I'm using Heroku, which restarts dynos every 24 hours. I've noticed that after a restart there are locks left behind for JIDs which are no longer running. The reaper eventually clears them, but I'd expect the locks to be released when the workers are killed by the restart.
Expected behavior
When a restart kills running workers their locks should be released.
Current behavior
The locks remain for JIDs which no longer exist.
Worker class
This module is included in workers which are meant to run continuously. Their perform is an infinite loop fetching from an API and pushing it to a Redis queue, or doing blocking pops from a Redis queue.
Stoppable just provides a Redis flag to tell them to exit the loop.
module DaemonWorker
  include Stoppable
  extend ActiveSupport::Concern

  @classes = []

  # @return [Array] Classes which include DaemonWorker
  def self.classes
    @classes
  end

  included do
    include Sidekiq::Worker

    DaemonWorker.classes << self

    sidekiq_options(
      lock: :until_and_while_executing,
      on_conflict: {
        client: :log,
        server: :reject
      }
    )
  end
end
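For context, a minimal sketch of what a Stoppable concern along these lines might look like; the actual module isn't shown in this issue, so the Redis key name and method names below are assumptions, not the real implementation:

# Hypothetical sketch only; the real Stoppable module is not shown here,
# and the key name and method names are assumptions.
module Stoppable
  STOP_KEY_PREFIX = "daemon_worker:stop".freeze

  # Raise the Redis flag that tells a running loop to exit.
  def self.request_stop(worker_class)
    Sidekiq.redis { |redis| redis.set("#{STOP_KEY_PREFIX}:#{worker_class}", "1") }
  end

  # Checked on each iteration of the worker's infinite loop.
  def stop_requested?
    Sidekiq.redis { |redis| redis.exists?("#{STOP_KEY_PREFIX}:#{self.class}") }
  end
end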
Additional context
Using Redis 6.2.6 provided by redisgreen.net.
# config/initializers/sidekiq.rb
require "sidekiq/throttled"

Sidekiq::Throttled.setup!

SidekiqUniqueJobs.config.lock_info = true

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Sidekiq::Middleware::Server::RetryCount)
    chain.add(SidekiqUniqueJobs::Middleware::Server)
  end

  config.client_middleware do |chain|
    chain.add(SidekiqUniqueJobs::Middleware::Client)
  end

  # Clean up locks when a job dies.
  # Not needed in Sidekiq 7.
  # See https://github.com/mhenrixon/sidekiq-unique-jobs#3-cleanup-dead-locks
  config.death_handlers << ->(job, _ex) do
    digest = job['lock_digest']
    SidekiqUniqueJobs::Digests.new.delete_by_digest(digest) if digest
  end

  SidekiqUniqueJobs::Server.configure(config)
end

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add(SidekiqUniqueJobs::Middleware::Client)
  end
end
Unfortunately, that isn't a problem I can solve directly. That's why I added the orphan reaper to help mitigate it.
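If the defaults feel too slow for your case, the reaper's cadence can be tuned; here is a sketch using the reaper options from the gem's configuration (the values are illustrative only, not recommendations):

# Illustrative values only; see the sidekiq-unique-jobs README for the
# documented defaults and the full list of options.
SidekiqUniqueJobs.configure do |config|
  config.reaper          = :ruby # orphan reaper implementation (:ruby or :lua)
  config.reaper_count    = 1_000 # max number of orphaned locks reaped per run
  config.reaper_interval = 600   # seconds between reaper runs
  config.reaper_timeout  = 10    # seconds a single reaper run may take
end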
The locks should stay unless Sidekiq fails to put the job back on the queue. Otherwise, you'd end up with duplicates, rendering the gem completely useless.
The issue in this situation is the while-executing part of your lock, which could sometimes delay the processing of the job slightly, though not indefinitely. I am sure there are some edge cases I've missed here.
There are also quite a few improvements in the later versions of the gem that might be interesting to you. Especially if you are going to upgrade to Sidekiq 7.