Paginating through datasets can be slow, especially for large ones. Adding a parallel query capability via requests-futures or a similar library would be useful; the behavior could be opt-in, enabled only when the optional dependency is installed.
The implementation would need to handle the following (the same way the current client code handles them):
Parsing the JSON response
Handling error responses
Handling 429 rate limits
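The per-page handling above could be sketched roughly as follows. This is a minimal illustration, not the client's actual code: the `get` callable, parameter names, and retry policy are all assumptions, with `get` injected so the same logic works with requests, requests-futures, or a test double.

```python
import time

def fetch_page(get, url, params=None, max_retries=3):
    """Fetch one page: parse JSON, surface error responses,
    and retry on 429 rate limits.

    `get` is any callable with the requests.get signature; the
    names here are illustrative, not the client's real API.
    """
    for attempt in range(max_retries):
        resp = get(url, params=params)
        if resp.status_code == 429:
            # Back off, honouring Retry-After when the server sends it.
            delay = float(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(delay)
            continue
        resp.raise_for_status()  # propagate other error responses
        return resp.json()       # parse the JSON body
    raise RuntimeError(f"still rate-limited after {max_retries} attempts")
```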
Ideally it would be transparent to the end user: they iterate through a query as usual, while behind the scenes multiple pages are fetched in parallel with a configurable number of "workers".
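The transparent-iteration idea might look something like this sketch. It uses the stdlib `concurrent.futures` rather than requests-futures, and `fetch_page(page_number)` and `n_pages` are hypothetical stand-ins: real code would reuse the client's existing single-page request and would likely discover the page count from the first response.

```python
from concurrent.futures import ThreadPoolExecutor

def iter_items(fetch_page, n_pages, workers=4):
    """Yield items from every page, fetching pages in parallel.

    `fetch_page(page_number)` returns one page's list of items;
    `workers` caps how many pages are requested concurrently.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves page order, so iteration looks sequential
        # to the caller even though the requests overlap.
        for page in pool.map(fetch_page, range(1, n_pages + 1)):
            for item in page:
                yield item
```

Because `Executor.map` yields results in submission order, callers see items in the same order as sequential pagination, only faster.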