it might make sense to do this as a separate module, and this might already exist, but I want this API:
```js
// for example purposes assume urlStream is an object stream that emits a bunch
// of URLs as strings, or objects for request options
var pump = require('pump')
var nugget = require('nugget')

var downloader = nugget.createDownloadStream()
// options could be the parallelism, and also you could pass defaults for request
// here as options.request or options.defaults maybe

pump(urlStream, downloader, function (err) {
  if (err) throw err
  console.log('done downloading')
})
```
Internally, createDownloadStream would start a configurably sized parallel queue (maybe powered by https://www.npmjs.com/package/run-parallel-limit). It would return a writable stream that you write URLs into.
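To give a feel for the writable side, here's a rough sketch (not using run-parallel-limit directly, since the URLs arrive over time rather than as a fixed array) with a simple in-flight counter. `downloadOne` is just a placeholder for "make the request and save it to disk", and `parallelism` is a made-up option name:

```js
var Writable = require('stream').Writable

// placeholder download: the real thing would be request(...) piped to fs.createWriteStream(...)
function downloadOne (task, done) {
  setTimeout(done, 100)
}

function createDownloadStream (opts) {
  opts = opts || {}
  var limit = opts.parallelism || 5
  var inFlight = 0
  var resume = null

  var stream = new Writable({objectMode: true})

  stream._write = function (task, enc, cb) {
    inFlight++
    downloadOne(task, function () {
      inFlight--
      if (resume) { var r = resume; resume = null; r() }
    })
    if (inFlight < limit) cb() // room for more, accept the next write
    else resume = cb           // at the limit, hold backpressure until a download finishes
  }

  return stream
}
```

A real version would also need to hold the stream's 'finish' until the in-flight downloads drain (e.g. via _final), which this sketch skips.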
For every URL received, it should add it to the queue. It should emit events when it starts and finishes each URL, and expose download progress through a property/object somewhere on the stream createDownloadStream returns.
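Usage could look something like this -- the event names and the shape of `progress` are just illustrative, not a proposal for exact names:

```js
downloader.on('download-start', function (url) {
  console.log('starting', url)
})
downloader.on('download-finish', function (url) {
  console.log('finished', url)
})

// e.g. downloader.progress might look like { started: 12, finished: 9, inFlight: 3 }
setInterval(function () {
  console.log(downloader.progress)
}, 1000)
```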
For error handling, it should only destroy the stream with an error if it's a catastrophic one. Maybe you could pass in a function that gets called with (err, resp, body) for each request, so you can handle the response yourself if you want?
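For example, a hypothetical `onResponse` option (name made up) could look like:

```js
var downloader = nugget.createDownloadStream({
  parallelism: 3,
  onResponse: function (err, resp, body) {
    if (err) return console.error('request failed, skipping:', err.message)
    if (resp.statusCode !== 200) console.error('got status', resp.statusCode)
    // returning normally keeps the stream going; only the downloader's own
    // catastrophic errors (e.g. disk write failures) would destroy the stream
  }
})
```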
Finally, when it downloads, it should do it like nugget/wget, where it saves the resource to a file on disk. The file it saves as should be configurable in the object you write in as input. If you just write a single URL string as input, it should do what nugget does by default -- just use the HTTP filename.
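So input could be either form -- `target` here is just a made-up name for the output-path field:

```js
// plain string input -> filename taken from the URL, like nugget/wget do today
downloader.write('https://example.com/files/photo.jpg') // saves as photo.jpg

// object input -> an explicit field overrides where it's saved on disk
downloader.write({
  url: 'https://example.com/files/photo.jpg',
  target: 'downloads/cat-photo.jpg'
})
```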