In worker, retry forever with an exponential back off when Redis interactions time out #22
Currently, when we time out talking to Redis, we reconnect and retry the operation. For a fail-over scenario where the Redis server has moved to a new host, this behavior works. For scenarios in which the Redis server is still available but overwhelmed with load, repeatedly reconnecting and retrying operations has the potential to make the situation worse.
In this PR, I introduce `Worker#with_exponential_backoff` and use it in the `Worker` instead of `with_retries`. I limit these changes to the worker because backing off and retrying forever in Unicorn processes when enqueuing jobs could cause request timeouts.
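For illustration, here is a minimal sketch of what a retry-forever-with-backoff helper could look like. The method body, the `MAX_BACKOFF` cap, and the rescued exception class are assumptions for the sketch, not the actual implementation in this PR:

```ruby
require "redis"

class Worker
  MAX_BACKOFF = 30 # assumed cap (seconds) so waits don't grow without bound

  # Run the block, retrying forever on Redis timeouts. Each consecutive
  # failure doubles the sleep before the next attempt, up to MAX_BACKOFF.
  def with_exponential_backoff
    attempt = 0
    begin
      yield
    rescue Redis::TimeoutError
      delay = [2**attempt, MAX_BACKOFF].min
      sleep(delay)
      attempt += 1
      retry
    end
  end
end
```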
I also change the behavior of `with_retries` slightly so that attempts to reconnect also count as a retry attempt. The existing logic can end up trying to reconnect up to 9 times in certain scenarios.
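A rough sketch of that adjustment, assuming a shared attempt budget (the method signature, limit, and `reconnect_redis` helper are hypothetical names for illustration):

```ruby
# Reconnect attempts now consume the same retry budget as the operation itself,
# instead of being counted separately.
def with_retries(max_attempts = 3)
  attempts = 0
  begin
    yield
  rescue Redis::TimeoutError
    attempts += 1              # a reconnect counts as one of the allowed attempts
    raise if attempts >= max_attempts
    reconnect_redis            # assumed helper that re-establishes the connection
    retry
  end
end
```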