(feat) reenable benches/end_to_end.rs #2937
Conversation
benches/end_to_end.rs (outdated)
```rust
for _ in 0..self.parallel_cnt {
    b.iter(|| {
        let req = make_request();
        rt.block_on(send_request(req));
    });
}
```
As you can see @seanmonstar, we do this in a synchronous way... If I try to run this asynchronously, I get an error because the connection is not ready.

What is the new approach for handling parallel requests with the new "conn" things? I tried with `poll_ready`, to wait until the connection is ready before making the requests.

What do you think is best here? Opening n connections to the server, or using an approach like the `poll_ready` function?
Also, in the example in src/client/conn/mod.rs I see this line: `request_sender.ready().await?;` which I think is wrong, because I can't find that method in the code 💢 (maybe I'm wrong).
Yea, we don't want to use `block_on` there. Especially since the parallel option is meant to send them all at the same time.

- For HTTP/1, you'd need to make `parallel_cnt` amount of connections, and then spawn tasks looping on `ready()` and then `send_request()`.
- For HTTP/2, you can make just one connection, and send `parallel_cnt` requests at a time.
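
Putting that advice together, here is a minimal sketch of the HTTP/2 variant. It assumes the hyper 0.14-era `client::conn::Builder` handshake API (the exact `conn` entry points differ between hyper versions) and tower's `ServiceExt` for `ready()`; the address, the empty-body request, and the `send_parallel` name are placeholders rather than the benchmark's actual code.

```rust
use hyper::{client::conn, Body, Request};
use tower::ServiceExt; // provides `ready()` on the `SendRequest` service

async fn send_parallel(parallel_cnt: usize) -> Result<(), Box<dyn std::error::Error>> {
    let stream = tokio::net::TcpStream::connect("127.0.0.1:3000").await?;

    // One HTTP/2 connection can multiplex all `parallel_cnt` requests.
    let (mut sender, connection) = conn::Builder::new()
        .http2_only(true)
        .handshake::<_, Body>(stream)
        .await?;

    // The connection future drives the actual I/O; run it on its own task.
    tokio::spawn(async move {
        let _ = connection.await;
    });

    let mut in_flight = Vec::with_capacity(parallel_cnt);
    for _ in 0..parallel_cnt {
        // Wait until the sender can accept another request, then fire it off
        // without blocking on the response.
        sender.ready().await?;
        in_flight.push(sender.send_request(Request::new(Body::empty())));
    }
    for response in in_flight {
        let _res = response.await?;
    }
    Ok(())
}
```

For HTTP/1 the same loop body works, but each connection handles one request at a time, so you would open `parallel_cnt` connections and spawn a task per connection instead.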
Hey Robertoskr. When I was working on this I called the `ready` method from `tower::ServiceExt` on the request sender: `request_sender.ready().await`. From the comments above it seems like you tried this, but maybe you forgot to import the trait?
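
For context, a tiny sketch of what the import fix looks like; the helper name is hypothetical, and it relies on `SendRequest` implementing tower's `Service` trait, which is what makes `ready()` available:

```rust
use tower::ServiceExt; // `ready()` is a trait method from here, not inherent on `SendRequest`

// Hypothetical helper: waits until the sender can accept a request.
async fn wait_until_ready(
    sender: &mut hyper::client::conn::SendRequest<hyper::Body>,
) -> hyper::Result<()> {
    // Without the `use tower::ServiceExt;` import above, this call fails to
    // compile with "no method named `ready`".
    sender.ready().await?;
    Ok(())
}
```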
Thanks! Just tried that and it works, but I also tried with a `tokio::sync::Lock` and it performs better. I don't know if it's safe though, but the code works well.
And for the trait: yes, I didn't know that trait existed 😹

@seanmonstar @oddgrd Ready for review!
Hey, that's cool that you figured it out! I can have a look at this in the coming week, but I'm a new contributor myself, so you'll have to wait for Sean to sign off on this whenever he has time. 🙂
Hey Robertoskr. I had a look at this, and when I tried to run the benchmarks on your branch, some of the HTTP/2 ones take quite a long time to finish. On my computer your branch took ~290 seconds to complete, but on the commit before these benchmarks were disabled they completed in ~80s. I left a couple of comments, but unfortunately I wasn't able to solve this myself, so I can't be of much help here.

If you'd like, you can have a look at my attempt; maybe it will help you. I was able to get it down to ~100s, but some of the HTTP/2 benches are unstable. I might keep looking into this; if I figure it out I'll let you know. Good luck!
```rust
    local_addr
}

async fn handle_request(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
```
In this function you return the request body, but you should be returning the `response_body` from the `Opts` struct.
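
As a hedged sketch of the intended behavior (echoing the configured `response_body` rather than the request body): threading the body through as an extra parameter is just one way to wire it up; the real bench may capture it in a closure instead, as discussed below.

```rust
use std::convert::Infallible;

use hyper::{Body, Request, Response};

// Hypothetical signature: the body is passed in rather than read from `Opts`.
async fn handle_request(
    _req: Request<Body>,
    response_body: &'static [u8],
) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from(response_body)))
}
```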
```rust
//
// Benches http/1 requests
//
```
In Rust we can write doc comments with `///` 🙂
nice
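
For illustration, the same comment written as a rustdoc comment; the function name and body are placeholders:

```rust
/// Benches http/1 requests.
///
/// `///` comments are rustdoc doc comments: they attach to the item that
/// follows and appear in generated documentation, unlike plain `//` comments.
fn bench_http1_requests() {
    // placeholder body; the real bench takes a `test::Bencher`
}
```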
```rust
let mut http = Http::new();
// configure http based on opts
http.http2_only(opts.http2)
    .http2_initial_stream_window_size(opts.http2_stream_window)
    .http2_initial_connection_window_size(opts.http2_conn_window)
    .http2_adaptive_window(opts.http2_adaptive_window);
```
You may have forgotten to remove this code, as I don't see this variable being used anywhere.
Yep, thanks!
```rust
let (http2, http2_stream_window, http2_conn_window, http2_adaptive_window) = (
    opts.http2,
    opts.http2_stream_window,
    opts.http2_conn_window,
    opts.http2_adaptive_window,
);
```
If you `#[derive(Clone)]` for the `Opts` struct you can clone `opts` here and then use it directly without destructuring. (Except for `opts.response_body`, which has to be destructured before being passed to the `service_fn` closure.)
```rust
let opts = opts.clone();
let body = opts.response_body;
```
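
A minimal sketch of the suggested pattern, assuming `Opts` holds a `&'static [u8]` response body (so the destructured value is `Copy` inside the closure); the struct fields and the `serve_connection` helper are a hypothetical subset of the real benchmark:

```rust
use std::convert::Infallible;

use hyper::{server::conn::Http, service::service_fn, Body, Request, Response};
use tokio::net::TcpStream;

#[derive(Clone)] // lets each connection task take its own copy of the options
struct Opts {
    http2: bool,
    response_body: &'static [u8],
}

async fn serve_connection(stream: TcpStream, opts: Opts) -> hyper::Result<()> {
    // Only `response_body` needs destructuring before the `service_fn`
    // closure; everything else can be read from the cloned `opts` directly.
    let body = opts.response_body;
    let service = service_fn(move |_req: Request<Body>| async move {
        Ok::<_, Infallible>(Response::new(Body::from(body)))
    });

    let mut http = Http::new();
    http.http2_only(opts.http2);
    http.serve_connection(stream, service).await
}
```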
On my computer it runs the same as the previous version, completing in 65 seconds.
Thank you for tackling this issue! #3109 was just merged, which re-enables most of these benchmarks, so I'll close this one.
These benches were disabled because of the deprecation of Client and Server; this PR re-enables them with the new conn approach.