From 2b63d649dc56166fe239005d8b4e38a071a56089 Mon Sep 17 00:00:00 2001
From: James McKinney <26463+jpmckinney@users.noreply.github.com>
Date: Fri, 26 Jul 2024 00:50:02 -0400
Subject: [PATCH] docs: Copy-edit

---
 docs/api.rst                | 4 ++--
 docs/config.rst             | 12 ++++++------
 docs/contributing/index.rst | 4 ++--
 docs/index.rst              | 2 +-
 docs/news.rst               | 10 +++++++++-
 5 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/docs/api.rst b/docs/api.rst
index 5b88bf2d..fb0477ad 100644
--- a/docs/api.rst
+++ b/docs/api.rst
@@ -57,7 +57,7 @@ Parameters
         entry_points = {'scrapy': ['settings = projectname.settings']},
     )
 
-Do this easily with the `scrapyd-deploy` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package.
+Do this easily with the ``scrapyd-deploy`` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package.
 
 Example:
 
@@ -241,7 +241,7 @@ Get the pending, running and finished jobs of a project.
 
 - Pending jobs are in :ref:`spider queues <spiderqueue>`.
 - Running jobs have Scrapy processes.
-- Finished jobs are in job storage.
+- Finished jobs are in :ref:`job storage <jobstorage>`.
 
 .. note::
diff --git a/docs/config.rst b/docs/config.rst
index 54c0377e..7b1ebb6c 100644
--- a/docs/config.rst
+++ b/docs/config.rst
@@ -152,7 +152,7 @@ Default
   ``scrapyd.spiderqueue.SqliteSpiderQueue``
 Options
   - ``scrapyd.spiderqueue.SqliteSpiderQueue`` stores spider queues in SQLite databases named after each project, in the :ref:`dbs_dir` directory
-  - Implement your own, using the ``ISpiderQueue`` interface
+  - Implement your own, using the :py:interface:`~scrapyd.interfaces.ISpiderQueue` interface
 Also used by
   - :ref:`addversion.json` webservice, to create a queue if the project is new
   - :ref:`schedule.json` webservice, to add a pending job
@@ -183,7 +183,7 @@ Options
   - The launcher adds :ref:`max_proc` capacity at startup, and one capacity each time a Scrapy process ends.
   - The :ref:`application` starts a timer so that, every :ref:`poll_interval` seconds, jobs start if there's capacity: that is, if the number of Scrapy processes that are running is less than the :ref:`max_proc` value.
-  - Implement your own, using the ``IPoller`` interface
+  - Implement your own, using the :py:interface:`~scrapyd.interfaces.IPoller` interface
 
 .. _poll_interval:
@@ -280,11 +280,11 @@ The directory in which to write Scrapy items.
 
 An item feed is written to ``{items_dir}/{project}/{spider}/{job}.jl``.
 
-If this option is non-empty, the `FEEDS <https://docs.scrapy.org/en/latest/topics/feed-exports.html#feeds>`__ Scrapy setting is set as follows, resulting in feeds being written to the specified directory as JSON lines:
+If this option is non-empty, the `FEEDS <https://docs.scrapy.org/en/latest/topics/feed-exports.html#feeds>`__ Scrapy setting is set as follows, resulting in items being written to the above path as JSON lines:
 
 .. code-block:: json
 
-   {"value from items_dir": {"format": "jsonlines"}}
+   {"file:///path/to/items_dir/project/spider/job.jl": {"format": "jsonlines"}}
 
 Default
   ``""`` (empty), because it is recommended to instead use either:
@@ -408,7 +408,7 @@ Options
   - ``scrapyd.eggstorage.FilesystemEggStorage`` writes eggs in the :ref:`eggs_dir` directory
 
     .. note:: Eggs are named after the ``version``, replacing characters other than ``A-Za-z0-9_-`` with underscores. Therefore, if you frequently use non-word, non-hyphen characters, the eggs for different versions can collide.
 
-  - Implement your own, using the ``IEggStorage`` interface: for example, to store eggs remotely
+  - Implement your own, using the :py:interface:`~scrapyd.interfaces.IEggStorage` interface: for example, to store eggs remotely
 
 .. _eggs_dir:
@@ -439,7 +439,7 @@ Default
 Options
   - ``scrapyd.jobstorage.MemoryJobStorage`` stores jobs in memory, such that jobs are lost when the Scrapyd process ends
   - ``scrapyd.jobstorage.SqliteJobStorage`` stores jobs in a SQLite database named ``jobs.db``, in the :ref:`dbs_dir` directory
-  - Implement your own, using the ``IJobStorage`` interface
+  - Implement your own, using the :py:interface:`~scrapyd.interfaces.IJobStorage` interface
 
 .. _finished_to_keep:
diff --git a/docs/contributing/index.rst b/docs/contributing/index.rst
index 2631dbb1..00c1336e 100644
--- a/docs/contributing/index.rst
+++ b/docs/contributing/index.rst
@@ -41,7 +41,7 @@ To install an editable version for development, clone the repository, change to
 
 .. code-block:: shell
 
-   pip install -e .
+   pip install -e .[test,docs]
 
 Developer documentation
 -----------------------
@@ -99,7 +99,7 @@ A **finished job** is an object with the attributes ``project``, ``spider``, ``j
      - ISpiderQueue
      - IPoller
      - ScrapyProcessProtocol
-     - Job
+     - IJobStorage
    * - Project
      - *not specified*
      - _project
diff --git a/docs/index.rst b/docs/index.rst
index fe9755fe..0cc4d1f7 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -28,7 +28,7 @@ Upload a project
 
 This involves building a `Python egg <https://setuptools.pypa.io/en/latest/deprecated/python_eggs.html>`__ and uploading it to Scrapyd via the `addversion.json `_ webservice.
 
-Do this easily with the `scrapyd-deploy` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package. Once configured:
+Do this easily with the ``scrapyd-deploy`` command from the `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package. Once configured:
 
 .. code-block:: shell
diff --git a/docs/news.rst b/docs/news.rst
index 329e0c0d..8ff5f20a 100644
--- a/docs/news.rst
+++ b/docs/news.rst
@@ -3,6 +3,14 @@ Release notes
 
 .. changelog
 
+Unreleased
+----------
+
+Added
+~~~~~
+
+- Default webservices can be disabled. See :ref:`config-services`.
+
 1.5.0b1 (2024-07-25)
 --------------------
 
@@ -345,7 +353,7 @@ Added
 
 Changed
 ~~~~~~~
 
-- Move scrapyd-deploy command to `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package. (:commit:`c1358dc`, :commit:`c9d66ca`, :commit:`191353e`)
+- Move ``scrapyd-deploy`` command to `scrapyd-client <https://github.com/scrapy/scrapyd-client>`__ package. (:commit:`c1358dc`, :commit:`c9d66ca`, :commit:`191353e`)
 - Allow the :ref:`items_dir` setting to be a URL. (:commit:`e261591`, :commit:`35a21db`)
 - Look for a ``~/.scrapyd.conf`` file in the user's home directory. (:commit:`1fce99b`)
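
The ``config.rst`` hunks above link each "Implement your own" bullet to an interface in ``scrapyd.interfaces``. To make that concrete, here is a minimal sketch of a custom egg storage. It assumes ``IEggStorage`` exposes the ``put``/``get``/``list``/``delete`` methods and that Scrapyd instantiates the class with its ``Config`` object (check the interface in your Scrapyd version before relying on this); ``MemoryEggStorage`` is a hypothetical name, not part of Scrapyd:

.. code-block:: python

   from io import BytesIO

   from zope.interface import implementer

   from scrapyd.interfaces import IEggStorage


   @implementer(IEggStorage)
   class MemoryEggStorage:
       """Keep eggs in memory, instead of writing them to eggs_dir."""

       def __init__(self, config):
           self.config = config
           self.eggs = {}  # {project: {version: egg bytes}}

       def put(self, eggfile, project, version):
           # eggfile is a file-like object containing the egg's bytes.
           self.eggs.setdefault(project, {})[version] = eggfile.read()

       def get(self, project, version=None):
           versions = self.eggs.get(project, {})
           if not versions:
               return None, None
           if version is None:
               # Naive "latest version" choice; the built-in filesystem
               # storage sorts versions more carefully.
               version = sorted(versions)[-1]
           return version, BytesIO(versions[version])

       def list(self, project):
           return sorted(self.eggs.get(project, ()))

       def delete(self, project, version=None):
           if version is None:
               self.eggs.pop(project, None)
           else:
               self.eggs.get(project, {}).pop(version, None)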
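If Scrapyd loads the class as sketched, enabling it should be a one-line configuration change, for example ``eggstorage = mypackage.eggstorage.MemoryEggStorage`` under ``[scrapyd]`` in ``scrapyd.conf`` (``mypackage`` being wherever you place the class); the same ``@implementer`` pattern applies to ``ISpiderQueue``, ``IPoller``, and ``IJobStorage``.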