Run your spider with a previously crawled request chain.
```shell
pip install scrapy-time-machine
```
Let's say your spider crawls some page every day, and after some time you notice that an important piece of information was added to it and you want to start saving it.

You could modify your spider and extract this information from now on, but what if you also want the historical values of this data, from when it was first introduced to the site?

With this extension you can save a snapshot of the site on every run and replay it in the future (as long as you don't change the request chain).
To enable this middleware, add this information to your project's settings.py:
```python
DOWNLOADER_MIDDLEWARES = {
    "scrapy_time_machine.timemachine.TimeMachineMiddleware": 901,
}

TIME_MACHINE_ENABLED = True
TIME_MACHINE_STORAGE = "scrapy_time_machine.storages.DbmTimeMachineStorage"
```
To save a snapshot of the current run:

```shell
scrapy crawl sample -s TIME_MACHINE_SNAPSHOT=true -s TIME_MACHINE_URI="/tmp/%(name)s-%(time)s.db"
```
This will save a snapshot at `/tmp/sample-YYYY-MM-DDThh-mm-ss.db`.
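For illustration, the `%(name)s` and `%(time)s` placeholders in the URI are filled in with the spider name and the run's start time. A minimal sketch of that substitution, using a made-up fixed timestamp rather than the real run time:

```python
from datetime import datetime

# Hypothetical illustration of the URI template expansion; the middleware
# fills in the actual spider name and run start time.
uri_template = "/tmp/%(name)s-%(time)s.db"
params = {
    "name": "sample",  # spider name
    "time": datetime(2024, 1, 2, 3, 4, 5).strftime("%Y-%m-%dT%H-%M-%S"),
}
print(uri_template % params)  # /tmp/sample-2024-01-02T03-04-05.db
```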
To run your spider against a previously saved snapshot:

```shell
scrapy crawl sample -s TIME_MACHINE_RETRIEVE=true -s TIME_MACHINE_URI=/tmp/sample-YYYY-MM-DDThh-mm-ss.db
```
If no changes were made to the spider between the version that produced the snapshot and the current one, the extracted items should be the same.
There is a sample Scrapy project available in the `examples` directory.
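For intuition, the default `DbmTimeMachineStorage` persists responses in a DBM database. The following is a rough, stdlib-only sketch of that general idea, not the extension's actual code; the `fingerprint` helper and the keying scheme are assumptions standing in for Scrapy's real request fingerprinting:

```python
import dbm
import hashlib
import os
import tempfile

def fingerprint(url: str) -> str:
    # Hypothetical stand-in for Scrapy's request fingerprinting.
    return hashlib.sha1(url.encode()).hexdigest()

path = os.path.join(tempfile.mkdtemp(), "snapshot.db")

# Snapshot run: store each downloaded response body under its request key.
with dbm.open(path, "c") as db:
    db[fingerprint("https://example.com/page")] = b"<html>...</html>"

# Retrieve run: serve the stored response instead of hitting the network.
with dbm.open(path, "r") as db:
    body = db[fingerprint("https://example.com/page")]

print(body)  # b'<html>...</html>'
```

Replaying the same request chain therefore reproduces the same responses, which is why the extracted items match as long as the spider's requests are unchanged.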