New repo maintainer(s) / alternative projects? #51
After a bit more looking around, I discovered pyrate-limiter, which is actively maintained. It uses the leaky bucket algorithm, which accomplishes the same thing as the 'sliding window' feature described in #31. I also came across aiolimiter, which is an async implementation of the same algorithm. After reviewing those two libraries, I think contributing to one or both of those would be preferable to forking.
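For anyone skimming this thread, the leaky-bucket idea behind pyrate-limiter and aiolimiter works roughly like the following minimal sketch (token-bucket formulation, illustrative only; this is not either library's actual API):

```python
import time


class LeakyBucket:
    """Minimal leaky-bucket (token-bucket style) sketch.

    Illustrative only; not pyrate-limiter's or aiolimiter's actual API.
    """

    def __init__(self, capacity: int, period: float):
        self.capacity = capacity           # maximum burst size
        self.rate = capacity / period      # tokens regained per second
        self.level = float(capacity)       # start with a full bucket
        self.last = time.monotonic()

    def acquire(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call.
        self.level = min(self.capacity, self.level + (now - self.last) * self.rate)
        self.last = now
        if self.level < 1:
            # Not enough capacity: wait until one token becomes available.
            time.sleep((1 - self.level) / self.rate)
            self.last = time.monotonic()
            self.level = 0.0               # the wait earned exactly the missing fraction
        else:
            self.level -= 1


bucket = LeakyBucket(capacity=5, period=1)
start = time.monotonic()
for i in range(12):
    bucket.acquire()
    print(i, f"t + {time.monotonic() - start:.2f}s")
```

With a capacity of 5 and a period of 1 s, this lets an initial burst of 5 through immediately and then releases roughly one call every 0.2 s.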
Thanks for following up on this. I tried using aiolimiter, but it just does not seem to work as well. Ultimately a mix of @deckar01 (fixed logic) and @evemorgen (asyncio compatibility) would have been ideal. Please let us know if you come across other similar maintained packages.
I'm curious, what issues did you run into with aiolimiter? I have use cases for both synchronous and asynchronous requests, and I haven't yet worked on the async ones. For those, I'd like to compare aiolimiter, the async fork of ratelimit, and maybe also see what it would take to add async support to pyrate-limiter (which does have a couple of extra features I like).
I might be using it wrong, but it managed to fail the two toy examples I ran:

```python
import asyncio
from datetime import datetime
from time import perf_counter

from aiolimiter import AsyncLimiter

# API limits: 5 requests per second, up to 10 requests per second in bursts
rate_limit = AsyncLimiter(5, 1)


async def coro(ref):
    async with rate_limit:
        print('coro', f"t + {(perf_counter() - ref):>7.5f}s")


async def main():
    print("Limit should be 5 calls per second\n\n")

    ref = perf_counter()
    tasks_continuous = [coro(ref) for _ in range(10)]
    print('Start of 10 coro calls (burst)')
    print(datetime.now())
    await asyncio.gather(*tasks_continuous)
    print('Finished')
    print(datetime.now())

    print('\n', '#' * 80, '\n')
    await asyncio.sleep(1)

    ref = perf_counter()
    tasks_first_burst = [coro(ref) for _ in range(3)]
    print('Start of 3 coro calls then short pause, then 7 more')
    print(datetime.now())
    await asyncio.gather(*tasks_first_burst)
    await asyncio.sleep(0.99)
    tasks_second_burst = [coro(ref) for _ in range(7)]
    await asyncio.gather(*tasks_second_burst)
    print('Finished')
    print(datetime.now())


asyncio.run(main())
```

This gives me the following output:
As you can see it failed in both cases, since both times I end up with more than 5 requests in a 1 second span. For comparison, with the ratelimit decorators:

```python
from time import perf_counter

# imports added here for completeness; the original snippet omitted them
from ratelimit import limits, sleep_and_retry


@sleep_and_retry
@limits(calls=5, period=1)
async def coro(ref):
    print('coro', f"t + {(perf_counter() - ref):>7.5f}s")
```

output:
Please let me know your thoughts, as I am probably doing something wrong.
Isn't that the expected behavior? After the first 5 requests it reaches the rate limit and starts inserting a delay of 0.2 s before each subsequent request, for an average of 5 requests/second. The total expected request time for 55 requests should therefore be about 10 s (the first 5 run immediately, then the remaining 50 are released at 0.2 s each), which I checked with:

```python
tasks_continuous = [coro(ref) for _ in range(55)]
start_time = datetime.now()
await asyncio.gather(*tasks_continuous)
print(f'Elapsed time: {datetime.now() - start_time}')
```

Which for me results in `Elapsed time: 0:00:10.045647`.

Is the issue that you would expect those first 5 requests to be spread across 1 second rather than being run immediately? Or for there to be a 1 s pause after the first 5 requests instead of a 0.2 s pause?
The average rate becomes 5/sec, but that is not what I expect: I would like my requests to go through as quickly as possible as long as there is 'capacity' available. On top of that, if you look at the first example, you can see that 9 requests went through in the first second despite the limit being 5/sec.
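Concretely, the behaviour I would expect is closer to this sliding-window sketch, where calls go through immediately while there is capacity and only wait once 5 calls have happened in the trailing second (a rough illustration only, not code from ratelimit, aiolimiter, or pyrate-limiter):

```python
import asyncio
import time
from collections import deque


class AsyncSlidingWindowLimiter:
    """Allow at most `calls` acquisitions in any trailing `period` seconds,
    letting requests through immediately while capacity remains."""

    def __init__(self, calls: int, period: float):
        self.calls = calls
        self.period = period
        self._log = deque()          # monotonic timestamps of recent calls
        self._lock = asyncio.Lock()  # serialise the check-and-record step

    async def __aenter__(self):
        async with self._lock:
            while True:
                now = time.monotonic()
                # Drop calls that have fallen out of the trailing window.
                while self._log and now - self._log[0] >= self.period:
                    self._log.popleft()
                if len(self._log) < self.calls:
                    self._log.append(now)
                    return self
                # Wait until the oldest recorded call ages out of the window.
                await asyncio.sleep(self.period - (now - self._log[0]))

    async def __aexit__(self, *exc):
        return False


async def demo():
    limiter = AsyncSlidingWindowLimiter(calls=5, period=1)
    ref = time.monotonic()

    async def coro():
        async with limiter:
            print(f"t + {time.monotonic() - ref:.3f}s")

    await asyncio.gather(*(coro() for _ in range(10)))


asyncio.run(demo())
```

With this, the first example above would let all 5 coros through at t ≈ 0 and release the remaining 5 together at t ≈ 1 s, rather than dripping them in at 0.2 s intervals.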
Ah! I see what you mean now. In that case I think it would be worth creating an issue for that on the aiolimiter repo.
Hi @tomasbasham, first of all thanks for this handy package. It solves a small but common problem, common enough that this repo is currently the top Google result for "python rate limit" and similar searches.
It appears that the repo has accumulated a number of unanswered issues and PRs over the last couple years. It's understandable if you don't have time to maintain this, but since there are others who are willing and able to make improvements to it, would you be willing to either:
Plan B
Otherwise, would anyone else like to volunteer to do the following?
ratelimit
I would suggest @deckar01's fork, containing changes described in #31, as a good starting point. I would also like to see @evemorgen and @Jude188's changes from issue #26 / PR #39 included. I believe the changes from these two forks could be integrated with the use of aiosqlite to support both sliding log persistence and async calls.
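To make the aiosqlite idea a bit more concrete, the sliding log could be persisted in SQLite while staying awaitable, along these lines. This is only a rough sketch; the class, table name, and schema below are invented for illustration and are not taken from either fork:

```python
import asyncio
import time

import aiosqlite  # would become a new (or optional) dependency


class AsyncSlidingLog:
    """Sketch of a sliding-log limiter whose log lives in SQLite."""

    def __init__(self, path: str, calls: int, period: float):
        self.path = path
        self.calls = calls
        self.period = period

    async def acquire(self):
        # A connection per call keeps the sketch short; a real implementation
        # would reuse a connection and add indexing/locking as needed.
        async with aiosqlite.connect(self.path) as db:
            await db.execute("CREATE TABLE IF NOT EXISTS call_log (ts REAL NOT NULL)")
            while True:
                now = time.time()
                # Discard entries that have aged out of the window.
                await db.execute("DELETE FROM call_log WHERE ts <= ?", (now - self.period,))
                async with db.execute("SELECT COUNT(*) FROM call_log") as cur:
                    (count,) = await cur.fetchone()
                if count < self.calls:
                    await db.execute("INSERT INTO call_log (ts) VALUES (?)", (now,))
                    await db.commit()
                    return
                # Window is full: wait until the oldest logged call expires.
                async with db.execute("SELECT MIN(ts) FROM call_log") as cur:
                    (oldest,) = await cur.fetchone()
                await db.commit()
                await asyncio.sleep(max(oldest + self.period - now, 0))
```

Because the log lives in a file, the limit would also survive restarts and could be shared between processes, which is the main appeal of the sliding-log persistence from #31.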