feat: Fill all available slots (max_proc) when polling, closes #173
jpmckinney committed Jul 24, 2024
1 parent 38bcc34 commit 9d1a81b
Showing 2 changed files with 6 additions and 5 deletions.
1 change: 1 addition & 0 deletions docs/news.rst
@@ -28,6 +28,7 @@ Documentation
 Changed
 ~~~~~~~
 
+- Every :ref:`poll_interval`, up to :ref:`max_proc` processes are started by the default :ref:`poller`, instead of only one process. (The number of running jobs will not exceed :ref:`max_proc`.)
 - Drop support for end-of-life Python version 3.7.
 
 Web UI
10 changes: 5 additions & 5 deletions scrapyd/poller.py
@@ -15,17 +15,17 @@ def __init__(self, config):
     @inlineCallbacks
     def poll(self):
         for project, queue in self.queues.items():
-            # If the "waiting" backlog is empty (that is, if the maximum number of Scrapy processes are running):
-            if not self.dq.waiting:
-                return
-            if (yield maybeDeferred(queue.count)):
+            while (yield maybeDeferred(queue.count)):
+                # If the "waiting" backlog is empty (that is, if the maximum number of Scrapy processes are running):
+                if not self.dq.waiting:
+                    return
                 message = (yield maybeDeferred(queue.pop)).copy()
                 # The message can be None if, for example, two Scrapyd instances share a spider queue database.
                 if message is not None:
                     message["_project"] = project
                     message["_spider"] = message.pop("name")
                     # Pop a dummy item from the "waiting" backlog, and fire the message's callbacks.
-                    return self.dq.put(message)
+                    self.dq.put(message)
 
     def next(self):
         """
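The change above replaces an `if`/early-`return` with a `while` loop, so one poll can start several pending jobs instead of at most one. A minimal synchronous sketch of that behavioral difference, stripped of the real code's Twisted machinery (`inlineCallbacks`, `maybeDeferred`, the `dq` slot queue) — the names `poll_once_old`, `poll_once_new`, `pending`, and `free_slots` are hypothetical, not Scrapyd's API:

```python
def poll_once_old(pending, free_slots):
    """Old behavior: start at most one pending job per poll."""
    started = []
    for project, queue in pending.items():
        # Stop if no process slots are free.
        if not free_slots:
            return started
        if queue:
            started.append((project, queue.pop(0)))
            free_slots.pop()
            return started  # returns after the first job, leaving free slots idle
    return started


def poll_once_new(pending, free_slots):
    """New behavior: keep starting jobs until slots or queues run out."""
    started = []
    for project, queue in pending.items():
        while queue:
            # Stop if no process slots are free.
            if not free_slots:
                return started
            started.append((project, queue.pop(0)))
            free_slots.pop()
    return started
```

With three free slots and three queued jobs, the old poller starts one job per `poll_interval`, while the new one fills all three slots in a single poll — the fix described by the commit message.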
