# Intermittent slow routing #791
You're only issuing as many requests as you have connections (a single request per connection), which means you're mostly benchmarking the TCP handshake. 1000 requests also makes for a very short-running benchmark, too short to give any meaningful results. When you increase the number of requests per connection (e.g. `-n 500000`), you'll come closer to benchmarking the actual routing, although it's very unlikely that any performance difference will be due to the routing implementation (at least in your example). The version of go-wrk you're using also seems to have some issues; there are other go-wrk implementations that are a lot faster. With 1000 connections you should also be aware of your `ulimit -n` for the maximum number of open file descriptors, which defaults to just 1024 on many systems.
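Concretely, a minimal shell sketch of the kind of run suggested here. The wrk flags are the ones used in the logs further down; the 65535 limit and the 127.0.0.1:3030 address are placeholders, not values from the original report:

```sh
# The per-process open-file-descriptor limit must comfortably exceed the
# number of concurrent connections (it defaults to 1024 on many systems).
ulimit -n          # check the current limit
ulimit -n 65535    # raise it for this shell session (placeholder value)

# Keep-alive benchmark with many requests per connection:
# 10 threads, 1000 connections, 10 seconds.
wrk -t10 -c1000 -d10s --latency http://127.0.0.1:3030/hello
```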
New logs generated on Linux and Windows! 🚀 The examples (both built using …):

Warp:

```rust
#![deny(warnings)]
use warp::Filter;
fn main() -> std::io::Result<()> {
    let hello = warp::path("hello").map(|| "Hello, world!");
    let another = warp::path("another").map(|| "Another route");
    let routes = warp::get().and(hello.or(another));

    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()?
        .block_on(async {
            warp::serve(routes).run(([0, 0, 0, 0], 3030)).await;
        });

    Ok(())
}
```

Actix:

```rust
use actix_web::{get, App, HttpResponse, HttpServer};
#[get("/hello")]
async fn hello() -> HttpResponse {
    HttpResponse::Ok()
        .content_type("text/plain")
        .body("Hello, world!")
}

#[get("/another")]
async fn another() -> HttpResponse {
    HttpResponse::Ok()
        .content_type("text/plain")
        .body("Another route")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(hello).service(another))
        .bind("0.0.0.0:8080")?
        .run()
        .await
}
```

Logs generated on Windows (best result of 5 consecutive runs):

Warp / WRK:

```
wrk -t10 -c1000 -d10s --latency http://172.19.80.1:3030/hello
Running 10s test @ http://172.19.80.1:3030/hello
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 23.13ms 15.18ms 169.78ms 66.93%
Req/Sec 4.41k 2.75k 31.35k 96.67%
Latency Distribution
50% 21.49ms
75% 31.62ms
90% 42.84ms
99% 68.74ms
409658 requests in 10.09s, 50.79MB read
Requests/sec: 40587.89
Transfer/sec: 5.03MB
```

Warp / AB:

```
ab -n 10000 -c 1000 -k http://172.19.80.1:3030/hello
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.19.80.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 172.19.80.1
Server Port: 3030
Document Path: /hello
Document Length: 13 bytes
Concurrency Level: 1000
Time taken for tests: 0.240 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 1540000 bytes
HTML transferred: 130000 bytes
Requests per second: 41657.29 [#/sec] (mean)
Time per request: 24.005 [ms] (mean)
Time per request: 0.024 [ms] (mean, across all concurrent requests)
Transfer rate: 6264.87 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.1 0 17
Processing: 0 4 1.6 4 21
Waiting: 0 4 1.6 4 21
Total: 0 5 2.8 4 37
Percentage of the requests served within a certain time (ms)
50% 4
66% 5
75% 5
80% 6
90% 7
95% 8
98% 17
99% 18
100% 37 (longest request)
```

Actix / WRK:

```
wrk -t10 -c1000 -d10s --latency http://172.19.80.1:8080/hello
Running 10s test @ http://172.19.80.1:8080/hello
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 30.92ms 45.25ms 769.86ms 92.65%
Req/Sec 4.74k 1.00k 19.57k 85.77%
Latency Distribution
50% 17.91ms
75% 28.51ms
90% 48.61ms
99% 250.68ms
474784 requests in 10.10s, 52.07MB read
Requests/sec: 47018.85
Transfer/sec: 5.16MB
```

Actix / AB:

```
ab -n 10000 -c 1000 -k http://172.19.80.1:8080/hello
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.19.80.1 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 172.19.80.1
Server Port: 8080
Document Path: /hello
Document Length: 13 bytes
Concurrency Level: 1000
Time taken for tests: 0.243 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 1390000 bytes
HTML transferred: 130000 bytes
Requests per second: 41137.20 [#/sec] (mean)
Time per request: 24.309 [ms] (mean)
Time per request: 0.024 [ms] (mean, across all concurrent requests)
Transfer rate: 5584.05 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 7.5 0 89
Processing: 1 19 13.4 16 102
Waiting: 1 19 13.4 16 102
Total: 1 21 18.5 16 158
Percentage of the requests served within a certain time (ms)
50% 16
66% 17
75% 22
80% 24
90% 38
95% 52
98% 80
99% 119
100% 158 (longest request)
```

Logs generated on Linux (best result of 5 consecutive runs):

Warp / WRK:

```
wrk -t10 -c1000 -d10s --latency http://192.168.100.2:3030/hello
Running 10s test @ http://192.168.100.2:3030/hello
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 6.82ms 4.92ms 50.92ms 69.41%
Req/Sec 14.79k 1.68k 25.47k 72.40%
Latency Distribution
50% 5.91ms
75% 9.54ms
90% 13.55ms
99% 21.76ms
1473248 requests in 10.09s, 182.65MB read
Requests/sec: 146031.21
Transfer/sec: 18.10MB
```

Warp / AB:

```
ab -n 10000 -c 1000 -k http://192.168.100.2:3030/hello
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.100.2 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 192.168.100.2
Server Port: 3030
Document Path: /hello
Document Length: 13 bytes
Concurrency Level: 1000
Time taken for tests: 0.383 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 1540000 bytes
HTML transferred: 130000 bytes
Requests per second: 26092.63 [#/sec] (mean)
Time per request: 38.325 [ms] (mean)
Time per request: 0.038 [ms] (mean, across all concurrent requests)
Transfer rate: 3924.09 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 4 12.5 0 51
Processing: 20 31 3.5 30 50
Waiting: 0 31 3.5 30 38
Total: 20 35 13.8 30 80
Percentage of the requests served within a certain time (ms)
50% 30
66% 33
75% 35
80% 35
90% 50
95% 76
98% 78
99% 78
100% 80 (longest request)
```

Actix / WRK:

```
wrk -t10 -c1000 -d10s --latency http://192.168.100.2:8080/hello
Running 10s test @ http://192.168.100.2:8080/hello
10 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.18ms 6.32ms 57.44ms 76.35%
Req/Sec 14.53k 2.36k 30.48k 70.20%
Latency Distribution
50% 5.63ms
75% 11.06ms
90% 15.99ms
99% 25.85ms
1447518 requests in 10.10s, 158.75MB read
Requests/sec: 143353.46
Transfer/sec: 15.72MB
```

Actix / AB:

```
ab -n 10000 -c 1000 -k http://192.168.100.2:8080/hello
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.100.2 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: 192.168.100.2
Server Port: 8080
Document Path: /hello
Document Length: 13 bytes
Concurrency Level: 1000
Time taken for tests: 0.392 seconds
Complete requests: 10000
Failed requests: 0
Keep-Alive requests: 10000
Total transferred: 1390000 bytes
HTML transferred: 130000 bytes
Requests per second: 25513.78 [#/sec] (mean)
Time per request: 39.194 [ms] (mean)
Time per request: 0.039 [ms] (mean, across all concurrent requests)
Transfer rate: 3463.30 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 4 14.0 0 74
Processing: 20 29 2.8 29 74
Waiting: 1 29 2.8 29 36
Total: 20 34 14.5 29 101
Percentage of the requests served within a certain time (ms)
50% 29
66% 31
75% 32
80% 33
90% 62
95% 71
98% 85
99% 93
100% 101 (longest request)
```

Since both projects are very fast on Linux and Windows, I'm going to close this issue and grab a beer! 😅🍺
Hi.
I'm facing a problem that I initially attributed to keep-alive handling; however, I'm not sure which layer is responsible for making Warp's routing intermittently so slow in my application.
I'm going to show a minimal example to reproduce the problem and compare it with Actix, but feel free to compare with other libraries.
Consider the following Warp example:
and the Actix one:
Now, notice the following logs:
Warp:
Actix:
I'm using go-wrk to run the tests.
After repeating the tests against Warp, it sometimes reports ~4k reqs/sec, but in many runs it sustains only ~0.8k reqs/sec. 🙁
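As a quick sanity check on the keep-alive suspicion, two requests in a single curl invocation will show in the verbose output whether the TCP connection is reused for the second one (a sketch only; the address is a placeholder for the server under test):

```sh
# Issue two requests over one curl session; with HTTP/1.1 keep-alive
# working, curl reuses the TCP connection for the second request and
# reports that in its verbose (-v) output instead of opening a new one.
curl -v http://127.0.0.1:3030/hello http://127.0.0.1:3030/another
```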