Why warp is so slow? (compared to nginx) #557
Why? Because nginx is battle-optimized to the teeth. That said, some possible follow-ups:
To answer your P.S. 2, you can configure:

```rust
#[tokio::main(max_threads = 10_000)]
async fn main() {
    println!("Hello world");
}
```
Maybe you've already done this, but try adding the … to your …
I think a lot of people hit this because the example in warp's README only enables the …
Hi guys, thanks for answering. I'm going to prepare a new environment for testing and apply all the suggested configurations. In the new tests I'll add other projects too, like Actix, Rocket, etc., and I'll be back with feedback soon...
@aslamplr it would be awesome to have …
It's there. Check the other test types.
Hi guys. Finally, more tests are done, and warp got a good performance after applying the suggested configurations. 😃 However, actix is the new winner, as the attached logs show. 😅

The content used was:

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Hello world benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP
servers:
<ul>
<li>actix-http</li>
<li>deno</li>
<li>microhttpd</li>
<li>nginx</li>
<li>nodejs</li>
<li>warp</li>
</ul>
</body>
</html>
```

and the respective codes:

actix:

```rust
use std::io;

use actix_http::{HttpService, Response};
use actix_server::Server;
use futures_util::future;

#[actix_rt::main]
async fn main() -> io::Result<()> {
    Server::build()
        .bind("hello-world", "0.0.0.0:8080", || {
            HttpService::build()
                .client_timeout(1000)
                .client_disconnect(1000)
                .finish(|_req| {
                    let mut res = Response::Ok();
                    future::ok::<_, ()>(res.body(
                        r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Hello world benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP
servers:
<ul>
<li>actix-http</li>
<li>deno</li>
<li>microhttpd</li>
<li>nginx</li>
<li>nodejs</li>
<li>warp</li>
</ul>
</body>
</html>"#,
                    ))
                })
                .tcp()
        })?
        .run()
        .await
}
```

deno:

```ts
import { serve } from "https://deno.land/std/http/server.ts";

const s = serve({ port: 8080 });
console.log("http://corin.ga:8080/");
for await (const req of s) {
  req.respond({
    body: `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Hello world benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP
servers:
<ul>
<li>actix-http</li>
<li>deno</li>
<li>microhttpd</li>
<li>nginx</li>
<li>nodejs</li>
<li>warp</li>
</ul>
</body>
</html>`,
  });
}
```

microhttpd:

```c
#include <stdio.h>
#include <string.h>   /* strlen (was memory.h) */
#include <unistd.h>   /* sysconf, missing in the original paste */
#include <microhttpd.h>

#define PAGE \
"<!DOCTYPE html>\n\
<html lang=\"en\">\n\
<head>\n\
<meta charset=\"UTF-8\" />\n\
<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n\
<title>Hello world benchmark</title>\n\
</head>\n\
<body>\n\
This is a static content to check the performance of the following HTTP\n\
servers:\n\
<ul>\n\
<li>MHD</li>\n\
<li>nginx</li>\n\
</ul>\n\
</body>\n\
</html>"

static enum MHD_Result ahc_echo(void *cls, struct MHD_Connection *con,
                                const char *url, const char *method,
                                const char *version, const char *upload_data,
                                size_t *upload_data_size, void **ptr) {
  struct MHD_Response *res;
  enum MHD_Result ret;
  if ((void *)1 != *ptr) {
    *ptr = (void *)1;
    return MHD_YES;
  }
  *ptr = NULL;
  res = MHD_create_response_from_buffer(strlen(PAGE), (void *)PAGE,
                                        MHD_RESPMEM_PERSISTENT);
  ret = MHD_queue_response(con, MHD_HTTP_OK, res);
  MHD_destroy_response(res);
  return ret;
}

int main() {
  struct MHD_Daemon *d;
  d = MHD_start_daemon(
      MHD_USE_EPOLL_INTERNAL_THREAD | MHD_SUPPRESS_DATE_NO_CLOCK |
          MHD_USE_EPOLL_TURBO,
      8080, NULL, NULL, &ahc_echo, NULL, MHD_OPTION_CONNECTION_TIMEOUT,
      (unsigned int)120, MHD_OPTION_THREAD_POOL_SIZE,
      (unsigned int)sysconf(_SC_NPROCESSORS_ONLN), MHD_OPTION_CONNECTION_LIMIT,
      (unsigned int)10000, MHD_OPTION_END);
  getchar();
  MHD_stop_daemon(d);
  return 0;
}
```

nginx: (using …)
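The nginx config is elided in the comment above. Purely for context, a minimal static-content nginx setup tends to look like the sketch below; the directive values and paths here are assumptions for illustration, not the poster's actual file:

```nginx
worker_processes auto;           # spawn one worker per CPU core
events {
    worker_connections 1024;     # max simultaneous connections per worker
}
http {
    server {
        listen 8080;
        root /srv/www;           # hypothetical directory holding the benchmark HTML
        index index.html;
    }
}
```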
nodejs:

```js
const http = require("http");

const hostname = "0.0.0.0";
const port = 8080;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader("Content-Type", "text/plain");
  res.end(`<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Hello world benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP
servers:
<ul>
<li>actix-http</li>
<li>deno</li>
<li>nginx</li>
<li>nodejs</li>
<li>sagui</li>
<li>warp</li>
</ul>
</body>
</html>`);
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```

warp:

```rust
#![deny(warnings)]
use warp::Filter;

#[tokio::main(max_threads = 10_000)]
async fn main() {
    let routes = warp::any().map(|| {
        r#"<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Hello world benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP
servers:
<ul>
<li>actix-http</li>
<li>deno</li>
<li>microhttpd</li>
<li>nginx</li>
<li>nodejs</li>
<li>warp</li>
</ul>
</body>
</html>"#
    });
    warp::serve(routes).run(([0, 0, 0, 0], 8080)).await;
}
```

The runner was:

```shell
#!/bin/sh
set -e
wrk -t10 -c1000 -d10s --latency http://corin.ga:8080/ > "wrk-$1.log"
```

All logs attached below: wrk-actix.log …

Machine:

```
$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   43 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              2
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      23
Model:                           24
Model name:                      AMD Ryzen 7 3700U with Radeon Vega Mobile Gfx
Stepping:                        1
Frequency boost:                 enabled
CPU MHz:                         1130.695
CPU max MHz:                     2300.0000
CPU min MHz:                     1400.0000
BogoMIPS:                        4591.63
Virtualization:                  AMD-V
L1d cache:                       128 KiB
L1i cache:                       256 KiB
L2 cache:                        2 MiB
L3 cache:                        4 MiB
NUMA node0 CPU(s):               0-7
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca
```

OS:

```
$ cat /etc/fedora-release
Fedora release 32 (Thirty Two)
```
For me, Actix had a massive memory-leak issue. After trying to solve it without any success, I joined the warp fan club.

Now actix-web 3.0 is released; it solved the memory leak.

The major bump: actix/actix-web#1554

The only thing I know is that a month ago, when I tried the latest actix-web and did my first tests, I caught memory leaks immediately. I need a solution that I can trust, and a memory leak on first contact is not that trustworthy, at least for me.

Testing locally now, I get about 2x the performance of nginx with warp and tokio 1.0.
Hey guys. I'll develop a web server that will run (as a service) primarily on Windows. The project must be written in Rust, running on top of an HTTP server library with SSE support, like Warp or Actix. With that in mind, I'll provide logs from new tests, but from a new environment:

and new checks I'll do, like:

I'm excited to return with new feedback! 😃

P.S.: I would like to use any Rust tool instead of …
@silvioprog Did you try this one: https://github.com/tsenart/vegeta?

@silvioprog what exactly do you want to benchmark: REST APIs? WebSocket? Why are you looking specifically for a Rust tool to run benchmarks? As far as I know, raw HTTP calls can also be done with ab.
@joseluisq: I didn't know this project existed; I'm going to test it. Thank you! 😃

Only the raw HTTP server.

Because I would like to use Rust instead of Go, since the former is more optimized.

The …
New tests (from Windows)! 🚀

warp code (adapted from the …):

```rust
#![deny(warnings)]
use warp::Filter;

#[tokio::main(worker_threads = 10_000)]
async fn main() {
    let body = r#"<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Hello benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP servers:
<ul>
<li>warp</li>
<li>actix-http</li>
</ul>
</body>
</html>"#;
    let route = warp::any().map(move || body);
    warp::serve(route).run(([0, 0, 0, 0], 8080)).await;
}
```

actix-web code (adapted from the …):

```rust
use actix_web::{web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().service(web::resource("/").to(|| async {
            r#"<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Hello benchmark</title>
</head>
<body>
This is a static content to check the performance of the following HTTP servers:
<ul>
<li>warp</li>
<li>actix-http</li>
</ul>
</body>
</html>"#
        }))
    })
    .bind("0.0.0.0:8080")?
    .run()
    .await
}
```

Command line used (go-wrk): …

CPU, memory and threads used in stand-by (i.e. just open, waiting for requests): warp … actix …

Lastly, the generated logs: warp … actix …

I'm really surprised by the amount of memory and threads used by warp 😶... but I believe this can be configured and improved, so feel free to correct me.
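The stand-by memory/thread figures above came from Windows tooling; on Linux, the same numbers can be read straight from `/proc`. A sketch (the PID used here is this shell's own, purely as a placeholder for the server's real PID):

```shell
# Show resident memory (VmRSS) and thread count of a process by PID.
# $$ (this shell's PID) stands in for the HTTP server's PID.
PID=$$
grep -E '^(VmRSS|Threads):' "/proc/$PID/status"
```

For a running warp or actix binary, the PID can be found with `pidof <binary-name>` first.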
@silvioprog It's because your warp setup is using a very high number of worker threads:

```rust
#[tokio::main(worker_threads = 10_000)]
async fn main() {}
```

Maybe try something similar to Actix, specifying a small number of worker threads via the Tokio runtime builder:

```rust
fn main() -> std::io::Result<()> {
    // e.g. a small multiple of the number of CPU cores
    let threads = num_cpus::get() * 8;
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(threads)
        .enable_all()
        .build()?
        .block_on(async {
            // warp server stuff
        });
    Ok(())
}
```

I hope that can help.
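If you'd rather avoid the extra `num_cpus` dependency, the standard library can report the core count itself (stable since Rust 1.59). A minimal sketch of the same sizing logic:

```rust
use std::thread;

fn main() {
    // std-only equivalent of num_cpus::get() for sizing a worker pool;
    // falls back to 1 if the parallelism cannot be determined
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let threads = cores * 8; // same N = 8 multiplier as above
    println!("worker threads: {}", threads);
}
```

The resulting `threads` value can be passed to `tokio::runtime::Builder::worker_threads` as in the snippet above.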
In our internal benchmarking, we found that the default behaviour of Tokio (not specifying …) … YMMV, but in general, if you're going to do something like …
@jbg that's correct, Tokio uses one thread per core by default.
@joseluisq, you solved the problem! 👏👏👏 Now I get the same benchmarking results with warp/actix. Sent as a new PR: #786. Thank you very much! 🙂
Latest tests using wrk/ab to hit warp/actix: #791 (comment)
Hi.

I have been testing warp and nginx with minimal content (just a "hello world" message) and noticed warp is very slow, but I'm not sure whether the slowness is related to warp or to something I'm doing wrong.

The nginx config and content:

The warp stuff:

```shell
cargo build --release
sudo ./target/release/examples/hello # using sudo just to get all kernel configurations
```

Environment

Finally, the tests using wrk!

wrk results (avg, after three intervaled tests) for nginx:

wrk results (avg) for warp (using wrk compiled from sources on Fedora 30 with Clang 8.0):

Notice warp is about three times slower than nginx.

P.S. 1: I did more tests using ApacheBench and JMeter with five machines (clients) connected remotely to an external server (with some limits due to internet bandwidth), providing a larger content (around 150 kB), but got slower results with warp again. 😕

P.S. 2: Notice the `nginx.conf` on top of this message. How do I apply those configs in warp (especially `worker_processes`, `worker_cpu_affinity`, and `worker_connections`)?

Why? I don't know if there is any specific configuration to increase warp's speed. I would appreciate any pointers and will retest to get better results.

TIA for any help!