Advanced Options
The Sidekiq configuration file is a YAML file that the Sidekiq server uses to configure itself, located by default at config/sidekiq.yml. You only need to create the file if you want to set advanced options, such as concurrency, named queues, etc. Here is an example configuration file:
---
:concurrency: 5
staging:
  :concurrency: 10
production:
  :concurrency: 10
:queues:
  - critical
  - default
  - low
Note the use of environment-specific subsections. These values will override top-level values. If you don't use the default location, use the -C flag to tell Sidekiq where the file is:
sidekiq -C config/myapp_sidekiq.yml
Options passed on the command line will also override options specified in the config file.
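For example (the file path and value here are illustrative), passing -c on the command line overrides any :concurrency setting in the file:
sidekiq -C config/sidekiq.yml -c 15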
By default, Sidekiq uses a single queue called "default" in Redis. If you want to use multiple queues, you can either specify them as arguments to the sidekiq command or set them in the Sidekiq configuration file. Each queue can be configured with an optional weight. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1:
As arguments...
sidekiq -q critical,2 -q default
In the configuration file...
# ...
:queues:
  - [critical, 2]
  - default
If you want queues always processed in a specific order, just declare them in order without weights:
As arguments...
sidekiq -q critical -q default -q low
In the configuration file...
# ...
:queues:
  - critical
  - default
  - low
This means that any job in the default queue will be processed only when the critical queue is empty.
You can get random queue priorities by declaring each queue with a weight of 1, so each queue has an equal chance of being processed:
# ...
:queues:
  - ["foo", 1]
  - ["bar", 1]
  - ["xyzzy", 1]
You can specify a queue to use for a given job class by declaring it:
class ImportantJob
  include Sidekiq::Job

  sidekiq_options queue: 'critical'

  def perform(*important_args)
    puts "Doing critical work"
  end
end
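With that option declared, enqueuing works as usual and the job lands on the critical queue (the argument here is purely illustrative):
ImportantJob.perform_async("some important data")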
Note: Sidekiq does not support mixing ordered and weighted queue modes. If, for example, you need a critical queue that is processed first, and would like other queues to be weighted, you would dedicate a Sidekiq process exclusively to the critical queue, and other Sidekiq processes to service the rest of the queues.
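For example, the split described above might look like this (queue names and weights are illustrative):
sidekiq -q critical # dedicated process, processes only the critical queue
sidekiq -q default,3 -q low,1 # separate process servicing the remaining queues with weights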
I don't recommend having more than a handful of queues per Sidekiq process. Having lots of queues means you should start grouping Sidekiq processes into roles (e.g. image_worker, email_worker, etc.) or simplify your queue setup. Lots of queues make for a more complex system, and Sidekiq Pro cannot reliably handle multiple queues without polling. M Sidekiq Pro processes polling N queues means O(M*N) operations per second slamming Redis.
If you use the Sidekiq web interface, a newly added queue will only appear there once a job has been enqueued to that queue.
If you'd like to "reserve" a queue so it only handles certain jobs, the easiest way is to run two sidekiq processes, each handling different queues:
sidekiq -q critical # Only handles jobs on the "critical" queue
sidekiq -q default -q low -q critical # Handles critical jobs only after checking for other jobs
Sidekiq offers a number of options for controlling a job's behavior. These are the options that are supported out of the box:
- queue: use a named queue for this job, default: default
- retry: enable retries for this job, default: true. Alternatively, you can specify the maximum number of times a job is retried (e.g. retry: 3)
- dead: whether a failing job should go to the Dead queue once it exhausts its retries, default: true
- backtrace: whether to save the error backtrace in the retry payload to display in the Web UI; can be true, false, or an integer number of lines to save, default: false. Be careful: backtraces are big and can take up a lot of space in Redis if you have a large number of retries. You should be using an error service like Honeybadger.
- pool: use the given Redis connection pool to push this type of job to a given shard.
- tags: add an Array of tags to each job. You can filter by tag within the Web UI.
class HardJob
  include Sidekiq::Job

  sidekiq_options queue: :crawler, tags: ['alpha', '🥇']

  def perform(name, count)
  end
end
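As a further illustration, here is a sketch combining the retry, dead, and backtrace options described above (the class name and values are hypothetical):
class FragileJob
  include Sidekiq::Job

  # Retry up to 3 times, skip the Dead queue if all retries fail,
  # and keep 20 lines of backtrace in the retry payload.
  sidekiq_options retry: 3, dead: false, backtrace: 20

  def perform(record_id)
  end
end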
Default options for all jobs can be set using Sidekiq.default_job_options=:
Sidekiq.default_job_options = { 'backtrace' => true }
Options can also be overridden per call:
HardJob.set(queue: :critical).perform_async(name, count)
You can tune the amount of concurrency in your Sidekiq process. By default, one sidekiq process creates 5 threads. If that's crushing your machine with I/O, you can adjust it down:
bundle exec sidekiq -c 4
RAILS_MAX_THREADS=3 bundle exec sidekiq
I don't recommend setting concurrency higher than 50. Starting in Rails 5, RAILS_MAX_THREADS can be used to configure Rails and Sidekiq concurrency. Note that ActiveRecord has a connection pool which needs to be properly configured in config/database.yml to work well with heavy concurrency. Set pool equal to the number of threads:
production:
  adapter: mysql2
  database: foo_production
  pool: <%= ENV['RAILS_MAX_THREADS'] || 10 %>
Starting in 7.0, Sidekiq allows you to declare Capsules which can provide single-threaded or serial execution of a queue.
Sidekiq.configure_server do |config|
  config.capsule("unsafe") do |cap|
    cap.concurrency = 1
    cap.queues = %w[queue_a queue_b] # strict priority
    # cap.queues = %w[queue_a,3 queue_b,1] # weighted
  end
end
Do not declare a capsule for each queue. Instead, "normal" queues should be declared in the config file or via the command line -q argument. More detailed Capsule documentation can be found here: capsule.md.
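To route work to the capsule above, point a job at one of the capsule's queues; a minimal sketch (the job class is hypothetical):
class SerialJob
  include Sidekiq::Job

  # queue_a is serviced by the "unsafe" capsule declared above,
  # so these jobs are processed one at a time in that process.
  sidekiq_options queue: 'queue_a'

  def perform(task_id)
  end
end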
Sidekiq includes the connection_pool gem which your Jobs can use. With a connection pool, you can share a limited number of I/O connections among a larger number of threads.
class HardJob
  include Sidekiq::Job

  MEMCACHED_POOL = ConnectionPool.new(size: 10, timeout: 3) { Dalli::Client.new }

  def perform(args)
    MEMCACHED_POOL.with do |dalli|
      dalli.set('foo', 'bar')
    end
  end
end
This ensures that even if you have lots of concurrency, you'll only have 10 connections open to memcached per Sidekiq process.
Sidekiq allows you to use -e production, RAILS_ENV=production, or APP_ENV=production to control the current environment. APP_ENV is nice because the name is not tech-specific (unlike RAILS_ENV or RACK_ENV).
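For example, either of these (assuming a Bundler-based setup) starts Sidekiq in the production environment:
APP_ENV=production bundle exec sidekiq
bundle exec sidekiq -e production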
Transactional push enqueues Sidekiq jobs only when any surrounding Active Record transaction successfully commits. Transactional push doesn't support push_bulk, to prevent potential excessive memory utilization.
Be careful if you are using multiple databases. Sidekiq has no context to determine which connection pool to use, so the database connection associated with ActiveRecord::Base is always used.
To make use of transactional push, include after_commit_everywhere in your Gemfile.
# Gemfile
gem "after_commit_everywhere"
To enable it for all jobs, place this in your Sidekiq initializer:
# config/initializers/sidekiq.rb
Sidekiq.transactional_push!
To enable it only for a specific type of job:
class MyJob
  include Sidekiq::Job

  sidekiq_options client_class: Sidekiq::TransactionAwareClient

  def perform
    # Your job's code here
  end
end
With Sidekiq::TransactionAwareClient in use, jobs will not be queued for execution in case of an ActiveRecord::Rollback error, ensuring that only successfully committed transactions generate Sidekiq jobs.
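As a sketch of the resulting behavior (the model and job names are hypothetical, and transactional push is assumed to be enabled as above):
ActiveRecord::Base.transaction do
  user = User.create!(name: "example")
  WelcomeJob.perform_async(user.id) # pushed to Redis only after the transaction commits
  raise ActiveRecord::Rollback      # rolls back the transaction; the job is never enqueued
end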