I'd like to make a feature request to add support for pushing a large batch of jobs to the Faktory server, similar to what Sidekiq offers with #push_bulk.
The network latency of enqueueing each job individually starts to add up when dealing with a large data set.
Sample code
# Split the ids into slices of 10 and enqueue one job per slice,
# passing the first and last id of the slice as job arguments.
markers = ids.each_slice(10).to_a
markers.each do |m|
  SomeWorker.perform_async(m[0], m[-1])
end
Is there a workaround for something like this for the time being?
The only workaround today is to use a pool of connections and threads. Faktory does not offer a PUSH operation which takes multiple payloads. Here's sample code using 10 threads to push in parallel:
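A sketch of that workaround, assuming the `faktory_worker_ruby` client where `Faktory::Client#push` accepts a single job hash with `jobtype`, `jid`, and `args` keys. The `push_in_parallel` helper and its block-based client factory are illustrative names, not part of the gem; each thread gets its own connection so pushes proceed concurrently:

```ruby
require "securerandom"

# Hypothetical helper: fan job-argument slices out across a fixed number
# of threads, each holding its own client connection. The block is called
# once per thread to build that thread's client (e.g. Faktory::Client.new).
def push_in_parallel(slices, threads: 10, &make_client)
  queue = Queue.new
  slices.each { |s| queue << s }

  workers = threads.times.map do
    Thread.new do
      client = make_client.call # one connection per thread
      loop do
        slice = begin
          queue.pop(true)       # non-blocking pop; raises when drained
        rescue ThreadError
          break
        end
        client.push(
          "jobtype" => "SomeWorker",
          "jid"     => SecureRandom.hex(12),
          "args"    => [slice.first, slice.last]
        )
      end
    end
  end
  workers.each(&:join)
end

# Usage sketch (requires a running Faktory server):
#   push_in_parallel(ids.each_slice(10).to_a) { Faktory::Client.new }
```

Until the protocol grows a multi-payload PUSH, this amortizes the per-job round-trip latency across however many connections the pool holds.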