asyncio support #66

Are there any plans for adding support for benchmarking coroutines?
Not yet ;-) But why do you need to benchmark coroutines? Aren't those I/O-bound? Let's say you could do benchmarks on two levels: pytest-benchmark could support benchmarking the total time of a coroutine, and that time would include time spent outside the coroutine (like the ioloop). Would that be enough? |
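A minimal sketch of that first level, assuming no built-in support: each benchmark round runs the coroutine on a fresh loop via asyncio.run, so loop startup and teardown are included in the measured time. The coroutine here is only a placeholder.

```python
import asyncio


def test_total_coroutine_time(benchmark):
    async def coro():
        # Placeholder for the coroutine under test.
        await asyncio.sleep(0)

    # Each benchmark round pays for loop creation/teardown plus the coroutine itself.
    benchmark(lambda: asyncio.run(coro()))
```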
Yup, but in my tests I'm mocking the I/O part and wanted to check the impact of the code around the I/O part. For example, imagine the following code:

```python
async def test_my_awesome_fn():
    await my_awesome_fn()


async def my_awesome_fn():
    do_some_stuff()
    do_other_stuff()
    await actual_io_call()
    do_post_stuff()
    clean_up_and_so()
```

In my tests I'm mocking the coroutine but would like to check the impact of the rest of the calls/code. The solution, of course, would be to benchmark the non-coroutine functions directly, but given the way I currently have the tests, it felt more natural to just surround the call. Hope I explained it well. |
Are you mocking/stubbing the `actual_io_call` coroutine? |
Yeah, I am :) |
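A sketch of how that mocking could be combined with a benchmark today, assuming the code above lives in a hypothetical `my_module` and running the coroutine with asyncio.run as a workaround for the missing coroutine support:

```python
import asyncio
from unittest.mock import AsyncMock, patch

from my_module import my_awesome_fn  # hypothetical module holding the code above


def test_overhead_around_io(benchmark):
    # Stub out the I/O coroutine so only the surrounding code contributes to the timing.
    with patch("my_module.actual_io_call", new=AsyncMock(return_value=None)):
        benchmark(lambda: asyncio.run(my_awesome_fn()))
```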
Any examples? I need something for integration tests anyway. |
My idea would be to benchmark the API calls from the cache class: https://github.com/argaen/aiocache/blob/master/aiocache/cache.py#L147, and I would benchmark them with tests similar to https://github.com/argaen/aiocache/blob/master/tests/ut/test_cache.py#L152, with one for each API call testing different serializers and plugins. |
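A sketch of what one such benchmark could look like, driving a fresh event loop per test. `SimpleMemoryCache` and the serializer classes are taken from aiocache's public API (which may differ across versions); the exact test shape is an assumption, not code from that repository.

```python
import asyncio

import pytest
from aiocache import SimpleMemoryCache
from aiocache.serializers import JsonSerializer, PickleSerializer


@pytest.mark.parametrize("serializer_cls", [JsonSerializer, PickleSerializer])
def test_benchmark_cache_set(benchmark, serializer_cls):
    cache = SimpleMemoryCache(serializer=serializer_cls())
    loop = asyncio.new_event_loop()
    try:
        # Each benchmark round drives one cache.set call to completion.
        benchmark(lambda: loop.run_until_complete(cache.set("key", {"some": "value"})))
    finally:
        loop.close()
```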
I might be sounding strange, but what if I need to benchmark the e2e loop + function + network time for a given function? For now I haven't found any other solution than to either measure it "manually" or do this:
but I assume (correct me if I am wrong) that there is extra time taken to spin up the loop and tear it down, compared to just waiting for a certain coroutine to complete. So in my opinion there should be an option to run the benchmark on a live loop where each run is a single (pseudo):
with an API like:
or
The second one integrates nicely with the cases where this is applicable, namely testing e2e interaction with third-party solutions like databases. In short, it is for when you don't have the time/resources to develop and run the entire app under a load-testing framework to see whether the approach is good. Also, regarding what I said above, it would be interesting to know the top (stripped) performance of a given coroutine when it runs as it would "in real life", i.e. min/max/avg served results (ops/sec) for "parallel" runs. |
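A sketch of the live-loop idea described above, assuming one event loop is kept alive for the whole test so each benchmark round pays only for running the coroutine, not for creating the loop. `fetch_from_backend` is a hypothetical stand-in for the e2e call.

```python
import asyncio

import pytest


@pytest.fixture
def live_loop():
    # One loop for the whole test; creation/teardown cost stays outside the measurement.
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()


async def fetch_from_backend():
    # Hypothetical e2e call (database/network round trip).
    await asyncio.sleep(0)


def test_e2e_on_live_loop(benchmark, live_loop):
    benchmark(lambda: live_loop.run_until_complete(fetch_from_backend()))
```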
Also, not all coroutines are I/O-bound: there are plenty of constructions based on asyncio where the code works without any I/O and coroutines are used to perform cooperative multitasking on a single thread. |
@dikderoy it's not for coroutines specifically, but you can provide a custom timer and pause it around the parts you don't want measured:

```python
import time

import pytest


class Stopwatch:
    def __init__(self, timer=time.perf_counter):
        self._timer = timer
        self._offset = 0
        self._stopped = False
        self._stop_time = 0
        self._stop_real_time = 0

    def __call__(self):
        if self._stopped:
            return self._stop_time
        return self._timer() - self._offset

    def stop(self):
        if self._stopped:
            return
        self._stopped = True
        self._stop_real_time = self._timer()
        self._stop_time = self._stop_real_time - self._offset

    def start(self):
        if not self._stopped:
            return
        self._stopped = False
        # Accumulate the time spent stopped so it is excluded from the measurement.
        self._offset += self._timer() - self._stop_real_time


stopwatch = Stopwatch()


@pytest.mark.benchmark(timer=stopwatch)
def test_part_of_run(benchmark):
    stopwatch.stop()

    @benchmark
    def runner():
        # setup code
        # ...
        stopwatch.start()
        # do work
        time.sleep(0.1)
        stopwatch.stop()
```

It's probably a common-enough use case that something should be included in the library. |
Ok, I came here (among several other places) looking for a solution. I found a way to make it work and would like to share it here for anyone coming later on. The fixture below, called aio_benchmark, wraps benchmark and can be used with both sync and async functions. It works for me as is.

```python
import asyncio
import threading

import pytest


@pytest.fixture(scope='function')
def aio_benchmark(benchmark):
    class Sync2Async:
        def __init__(self, coro, *args, **kwargs):
            self.coro = coro
            self.args = args
            self.kwargs = kwargs
            self.custom_loop = None
            self.thread = None

        def start_background_loop(self) -> None:
            asyncio.set_event_loop(self.custom_loop)
            self.custom_loop.run_forever()

        def __call__(self):
            evloop = None
            awaitable = self.coro(*self.args, **self.kwargs)
            try:
                evloop = asyncio.get_running_loop()
            except RuntimeError:
                pass
            if evloop is None:
                # No loop running: just run the coroutine to completion.
                return asyncio.run(awaitable)
            else:
                # A loop is already running (e.g. managed by pytest-asyncio), so run
                # the coroutine on a dedicated loop in a background thread.
                if not self.custom_loop or not self.thread or not self.thread.is_alive():
                    self.custom_loop = asyncio.new_event_loop()
                    self.thread = threading.Thread(target=self.start_background_loop, daemon=True)
                    self.thread.start()
                return asyncio.run_coroutine_threadsafe(awaitable, self.custom_loop).result()

    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            benchmark(Sync2Async(func, *args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper
```
|
Can you elaborate on how you use it? |
To anyone who ends up here with the same issue, I found mbello's solution to be very good. To anyone wondering how to make it work, here is an example test:

```python
@pytest.mark.asyncio
async def test_something(aio_benchmark):
    @aio_benchmark
    async def _():
        await your_async_function()
```

A few notes: |
@monkeyman192, tested on Python 3.9 and it still works. Thanks. |
I couldn't get the proposed solution to work. I think it had to do with the fact that I use async fixtures, which makes pytest manage an event loop as well. So I fiddled around and figured out that you can request the event loop from pytest as a fixture. This is what I ended up with:

```python
import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark, event_loop):
    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            @benchmark
            def _():
                return event_loop.run_until_complete(func(*args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper
```

Usage:

```python
async def some_async_function_to_test(some_async_fixture):
    ...


def test_benchmark_please(some_async_fixture, aio_benchmark):
    aio_benchmark(some_async_function_to_test, some_async_fixture)
```
|
@robsdedude, here is an alternative:

```python
import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark):
    async def run_async_coroutine(func, *args, **kwargs):
        return await func(*args, **kwargs)

    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            @benchmark
            def _():
                future = asyncio.ensure_future(
                    run_async_coroutine(func, *args, **kwargs)
                )
                return asyncio.get_event_loop().run_until_complete(future)
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper
```
|
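Presumably this fixture is used the same way as the earlier example, i.e. from a synchronous test that passes in the coroutine function under test; `some_async_function_to_test` below is a hypothetical stand-in.

```python
def test_benchmark_alternative(aio_benchmark):
    aio_benchmark(some_async_function_to_test)
```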
Is there anything pytest-asyncio can do to simplify this? I'm not a heavy pytest-benchmark user, but if someone has an idea to simplify the integration of both plugins from the pytest-asyncio side, I encourage them to file an issue in the pytest-asyncio tracker. |