
asyncio support #66

Open
argaen opened this issue Jan 23, 2017 · 16 comments

Comments

argaen commented Jan 23, 2017

Are there any plans to add support for benchmarking coroutines?

ionelmc (owner) commented Jan 23, 2017

Not yet ;-)

But why do you need to benchmark coroutines? Aren't those I/O-bound?

Let's say you could do benchmarks on two levels:

  • Micro. Then you need to "remove" the I/O parts from your tests somehow (how?)
  • Macro. Why include ioloop time in your benchmark?

pytest-benchmark could support benchmarking the total time of a coroutine, and that time would include time spent outside the coroutine (like the ioloop). Would that be enough?

argaen (author) commented Jan 23, 2017

Yup, but in my tests I'm mocking the I/O part and want to check the impact of the code around it. For example, imagine the following code:

async def test_my_awesome_fn():
    await my_awesome_fn()

async def my_awesome_fn():
    do_some_stuff()
    do_other_stuff()
    await actual_io_call()
    do_post_stuff()
    clean_up_and_so()

In my tests I'm mocking the coroutine but would like to check the impact of the rest of the calls/code. The solution, of course, would be to benchmark the non-coroutine functions directly, but given the way I currently have the tests, it felt more natural to just wrap my_awesome_fn() rather than each of the individual calls. Also, in some cases we may not be that lucky: not everything is wrapped in functions and there is logic directly in my_awesome_fn.

Hope I explained it well
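
For illustration only (this snippet is not from the thread): a minimal sketch of that setup, assuming my_awesome_fn and actual_io_call live in a hypothetical mymodule. The I/O coroutine is swapped for an async no-op so the measurement mostly reflects the surrounding code, plus the overhead of driving the coroutine from synchronous benchmark code.

import asyncio

import mymodule  # hypothetical module containing my_awesome_fn / actual_io_call


def test_my_awesome_fn_overhead(benchmark, monkeypatch):
    async def fake_io_call(*args, **kwargs):
        return None  # resolves immediately, no real I/O happens

    # Replace the I/O coroutine so the timing reflects the code around it.
    monkeypatch.setattr(mymodule, "actual_io_call", fake_io_call)

    def run():
        # asyncio.run creates and tears down a loop on every iteration,
        # so that overhead is included in the numbers (see the discussion below).
        return asyncio.run(mymodule.my_awesome_fn())

    benchmark(run)

An async stub like this could also be replaced with unittest.mock.AsyncMock (Python 3.8+); the plain coroutine just keeps the sketch dependency-free.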

ionelmc (owner) commented Jan 23, 2017

Are you mocking/stubbing the actual_io_call()? My initial proposal assumes you do; otherwise the timing is going to be inflated/unreliable.

argaen (author) commented Jan 28, 2017

Yeah, I am :)

ionelmc (owner) commented Jan 28, 2017

Any examples? I need something for integration tests anyway.

argaen (author) commented Jan 28, 2017

My idea would be to benchmark the API calls from the cache class: https://github.com/argaen/aiocache/blob/master/aiocache/cache.py#L147 and I would benchmark it with tests similar to https://github.com/argaen/aiocache/blob/master/tests/ut/test_cache.py#L152.

One for each API call, testing different serializers and plugins.

dikderoy commented Jun 15, 2018

I might sound strange, but what if I need to benchmark e2e loop+function+network time for a given function?

For now I haven't found any other solution than to either measure it "manually" or do this:

def run():
    return loop.run_until_complete(fn())

benchmark(run)

But I assume (correct me if I am wrong) that there is extra time taken to spin up the loop and shut it down compared to just waiting for a certain coroutine to complete.

So in my opinion there should be an option to run the benchmark on a live loop where each run is a single (pseudo):

measure_start()
await coro(...)
measure_stop()

with an API like:

benchmark_coro(coro, *args, _loop: asyncio.AbstractEventLoop, **kwargs)

or

await benchmark_coro(coro, *args, **kwargs)

The second one integrates nicely with pytest-asyncio.

Cases where this is applicable include testing e2e interaction with third-party solutions like databases.
For example, it is known in Mongo that if you want to check whether a document exists, it is faster to use find and see if the cursor contains any data to fetch than find_one, which always fetches data.

In short: when you don't have the time/resources to develop and run the entire app under a load-testing framework to see if an approach is good.

Also, regarding what I said above, it would be interesting to know the top (stripped-down) performance for a given coroutine if it runs as "in real life", i.e. min/max/avg served results (ops/sec) for "parallel" runs.
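
For reference (also not from the thread): one way to put a rough number on the loop start/stop cost described above is to benchmark a no-op coroutine through the same kind of wrapper; the names below are illustrative only.

import asyncio


async def noop():
    return None  # does nothing: any time measured is loop start/stop and dispatch overhead


def test_loop_overhead(benchmark):
    loop = asyncio.new_event_loop()
    try:
        # Each iteration starts the loop, runs the no-op coroutine to completion,
        # and stops the loop again; that is the "extra time" mentioned above.
        benchmark(lambda: loop.run_until_complete(noop()))
    finally:
        loop.close()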

@dikderoy

Also, not all coroutines are I/O-bound; there are tons of constructions based on asyncio where the code can work without I/O and coroutines are used to perform cooperative programs on a single thread.
For example: a queue-based complex conveyor calculation.

chrahunt commented Jan 6, 2019

@dikderoy it's not for coroutines specifically, but you can provide a custom timer to @pytest.mark.benchmark that allows more fine-grained control over what gets timed.

import time

import pytest


class Stopwatch:
    def __init__(self, timer=time.perf_counter):
        self._timer = timer
        self._offset = 0
        self._stopped = False
        self._stop_time = 0
        self._stop_real_time = 0

    def __call__(self):
        if self._stopped:
            return self._stop_time
        return self._timer() - self._offset

    def stop(self):
        if self._stopped:
            return
        self._stopped = True
        self._stop_real_time = self._timer()
        self._stop_time = self._stop_real_time - self._offset

    def start(self):
        if not self._stopped:
            return
        self._stopped = False
        t = self._timer()
        self._offset += t - self._stop_real_time


stopwatch = Stopwatch()


@pytest.mark.benchmark(timer=stopwatch)
def test_part_of_run(benchmark):
    stopwatch.stop()
    @benchmark
    def runner():
        # setup code
        # ...
        stopwatch.start()
        # do work
        time.sleep(0.1)
        stopwatch.stop()

It's probably a common-enough use case that something should be included in the library.

mbello commented Jan 18, 2020

Ok, I came here (among several other places) looking for a solution. I found a way to make it work and would like to share here for anyone coming later on.

The fixture below, called aio_benchmark, wraps benchmark and can be used with both sync and async functions. It works for me as is.

================================================

import pytest


@pytest.fixture(scope='function')
def aio_benchmark(benchmark):
    import asyncio
    import threading
    
    class Sync2Async:
        def __init__(self, coro, *args, **kwargs):
            self.coro = coro
            self.args = args
            self.kwargs = kwargs
            self.custom_loop = None
            self.thread = None
        
        def start_background_loop(self) -> None:
            asyncio.set_event_loop(self.custom_loop)
            self.custom_loop.run_forever()
        
        def __call__(self):
            evloop = None
            awaitable = self.coro(*self.args, **self.kwargs)
            try:
                evloop = asyncio.get_running_loop()
            except RuntimeError:  # no running loop in this thread
                pass
            if evloop is None:
                return asyncio.run(awaitable)
            else:
                if not self.custom_loop or not self.thread or not self.thread.is_alive():
                    self.custom_loop = asyncio.new_event_loop()
                    self.thread = threading.Thread(target=self.start_background_loop, daemon=True)
                    self.thread.start()
                
                return asyncio.run_coroutine_threadsafe(awaitable, self.custom_loop).result()
    
    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            benchmark(Sync2Async(func, *args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper

gammazplaud commented Jul 27, 2020

Can you elaborate how you use it?


@monkeyman192

To anyone who ends up here with the same issue, I found mbello's solution to be very good. To anyone wondering how to make it work, here is an example test:

@pytest.mark.asyncio
async def test_something(aio_benchmark):
    @aio_benchmark
    async def _():
        await your_async_function()

A few notes:

  • This requires the pytest-asyncio library
  • I didn't give the function a name because otherwise my linter complained
  • I am not sure if this is the exact intended way of using it, but it did work for me.

ghost commented Sep 28, 2021

@monkeyman192, tested on Python 3.9, still works. Thanks.

robsdedude commented May 25, 2022

I couldn't get the proposed solution to work. I think it had to do with the fact that I use async fixtures, which makes pytest manage an event loop as well. So I fiddled around and figured out that you can request the event loop from pytest as a fixture. This is what I ended up with:

import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark, event_loop):
    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):
            @benchmark
            def _():
                return event_loop.run_until_complete(func(*args, **kwargs))
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper

Usage:

async def some_async_function_to_test(some_async_fixture):
    ...

def test_benchmark_please(some_async_fixture, aio_benchmark):
    aio_benchmark(some_async_function_to_test, some_async_fixture)

jan-kubica commented Apr 16, 2024

@robsdedude
Thank you for the snippet; here is a very slight modification addressing a deprecation warning that the fixture raised.

import asyncio

import pytest_asyncio


@pytest_asyncio.fixture
async def aio_benchmark(benchmark):
    async def run_async_coroutine(func, *args, **kwargs):
        return await func(*args, **kwargs)

    def _wrapper(func, *args, **kwargs):
        if asyncio.iscoroutinefunction(func):

            @benchmark
            def _():
                future = asyncio.ensure_future(
                    run_async_coroutine(func, *args, **kwargs)
                )
                return asyncio.get_event_loop().run_until_complete(future)
        else:
            benchmark(func, *args, **kwargs)

    return _wrapper

@seifertm

Is there anything pytest-asyncio can do to simplify this? I'm not a heavy pytest-benchmark user, but if someone has an idea to simplify the integration of both plugins from the pytest-asyncio side, I encourage them to file an issue in the pytest-asyncio tracker.
