# Django sync or async, that's the question

Test the performance and concurrency behaviour of a Django view calling an "external" API with the following servers:

- uWSGI (WSGI)
- uWSGI with gevent (WSGI)
- Gunicorn (gthread) (WSGI)
- Gunicorn with gevent (WSGI)
- Uvicorn (ASGI)


## View calling an "external" API

We are testing a Django view which calls an "external" API several times.

We'll use the [httpx](https://www.python-httpx.org/) package for this, as it provides both a sync and an async API.

The "external" API, which runs locally with `uvicorn` and uses `asyncio.sleep` to simulate latency, selects a random country from a predefined list:

https://github.com/maerteijn/django-sync-or-async/blob/1c5a6e4738f111c3b09e173af2f3d0c02ca0f8b0/src/django_sync_or_async/views.py#L14-L22
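
In essence, such a simulated endpoint does something like the following (a minimal standalone sketch, not the repository's actual code; the country list and the latency range are assumptions):

```python
import asyncio
import random

# Assumed values -- the real list and delay live in the views.py linked above
COUNTRIES = ["Netherlands", "Germany", "France", "Belgium", "Spain"]

async def api_response() -> dict:
    # A non-blocking sleep simulates a slow upstream service
    await asyncio.sleep(random.uniform(0.1, 0.6))
    return {"country": random.choice(COUNTRIES)}

result = asyncio.run(api_response())
```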


## Overview

We will test the performance implications with the following configurations:

```
                               ┌─────────────────────────────┐
          ┌────────────────────┤ uwsgi-2-threads (:8000)     │
          │                    │ (1 process, 2 threads)      │
          │                    └─────────────────────────────┘
          │                    ┌─────────────────────────────┐
          │  ┌─────────────────┤ uwsgi-100-threads (:8001)   │
          │  │                 │ (1 process, 100 threads)    │
          │  │                 └─────────────────────────────┘
          │  │                 ┌─────────────────────────────┐
          ▼  ▼                 │ uwsgi-gevent (:8002)        │
   ┌───────────┐       ┌───────┤ (1 process, 100 "workers")  │
   │API (:5000)│◄──────┘       └─────────────────────────────┘
   │ (uvicorn) │◄──────┐       ┌─────────────────────────────┐
   └───────────┘       └───────┤ gunicorn-100-threads (:8003)│
     ▲   ▲                     │ (1 process, 100 threads)    │
     │   │                     └─────────────────────────────┘
     │   │                     ┌─────────────────────────────┐
     │   └─────────────────────┤ gunicorn-gevent (:8004)     │
     │                         │ (1 process, 100 "workers")  │
     │                         └─────────────────────────────┘
     │                         ┌─────────────────────────────┐
     └─────────────────────────┤ uvicorn (:8005)             │
                               │ (1 process)                 │
                               └─────────────────────────────┘
```

### Concurrency

The view being benchmarked calls our "really slow" external API three times, so we can also verify that these calls are made in parallel rather than sequentially. The slowest single response takes around 600ms, so with the API calls executing in parallel the view should complete in roughly that time as well, not three times as long. To achieve this we use a `ThreadPoolExecutor` for the sync view:

https://github.com/maerteijn/django-sync-or-async/blob/1c5a6e4738f111c3b09e173af2f3d0c02ca0f8b0/src/django_sync_or_async/views.py#L25-L38

> [!NOTE]
> The standard `ThreadPoolExecutor` with actual system threads is used in the `uwsgi-2-threads`, `uwsgi-100-threads` and `gunicorn-100-threads` configurations. When using gevent, [threads are monkey patched to be cooperative](https://www.gevent.org/api/gevent.monkey.html), so greenlets will be spawned instead when using the `ThreadPoolExecutor`.
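
The effect can be demonstrated with a standalone sketch, using `time.sleep` as a stand-in for the real `httpx` call to the slow API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_country() -> str:
    # Stand-in for the httpx request against the slow API endpoint
    time.sleep(0.2)
    return "Netherlands"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=3) as executor:
    # Submit the three calls so they run on separate threads concurrently
    results = [future.result() for future in
               [executor.submit(fetch_country) for _ in range(3)]]
elapsed = time.monotonic() - start
# The three 0.2s calls overlap, so `elapsed` stays close to 0.2s, not 0.6s
```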

For the `uvicorn` version (ASGI), the parallel calls are implemented with `asyncio.gather`:

https://github.com/maerteijn/django-sync-or-async/blob/1c5a6e4738f111c3b09e173af2f3d0c02ca0f8b0/src/django_sync_or_async/views.py#L41-L56
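
The async equivalent of the sketch above (again with `asyncio.sleep` standing in for an `httpx.AsyncClient` request):

```python
import asyncio
import time

async def fetch_country() -> str:
    # Stand-in for an httpx.AsyncClient request to the slow API
    await asyncio.sleep(0.2)
    return "Netherlands"

async def view() -> list[str]:
    # asyncio.gather schedules all three coroutines concurrently
    return await asyncio.gather(
        fetch_country(), fetch_country(), fetch_country()
    )

start = time.monotonic()
results = asyncio.run(view())
elapsed = time.monotonic() - start
# The calls overlap on the event loop, so `elapsed` stays close to 0.2s
```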


## Installation

### Requirements

- Python 3.12 (minimum)
- virtualenv (recommended)


### Install the packages

First create a virtualenv in your preferred way, then install all packages with:
```bash
make install
```

## Running the services

Make sure you are allowed to have many file descriptors open:
```bash
ulimit -n 32768
```

Now run the supervisor daemon, which will start all services:
```bash
supervisord
```

This will start the API and all the different WSGI/ASGI services. Press `ctrl+c` to stop it.


## Run the benchmarks

You can run the benchmarks for each individual server by selecting the relevant port number, so the results can be compared afterwards.

For `uwsgi-2-threads`:
```bash
make locust HOST=http://localhost:8000
```

This will start a locust web interface, accessible via http://localhost:8089

For `uwsgi-100-threads`:
```bash
make locust HOST=http://localhost:8001
```

And so on; see the [port numbers in the overview](#overview).

### uwsgitop

You can see detailed information for the uWSGI processes during the benchmarks using `uwsgitop`:
```bash
uwsgitop http://localhost:3030  # <-- for the 1 process / 2 threads variant
uwsgitop http://localhost:3031  # <-- for the 1 process / 100 threads variant
uwsgitop http://localhost:3032  # <-- for the 1 process gevent variant
```