
(feat) reenable benches/end_to_end.rs #2937

Closed

Conversation


@Robertoskr Robertoskr commented Aug 9, 2022

These benches were disabled because of the deprecation of Client and Server;
this PR re-enables them with the new conn approach.

@Robertoskr Robertoskr changed the title from "reenable benches/end_to_end.rs" to "(feat) reenable benches/end_to_end.rs" on Aug 9, 2022
Comment on lines 335 to 340
for _ in 0..self.parallel_cnt {
    b.iter(|| {
        let req = make_request();
        rt.block_on(send_request(req));
    });
}
Author

As you can see @seanmonstar, we do this in a sync way... If I try to run it async, I get an error because the connection is not ready...
What is the new approach for handling parallel requests with the new "conn" API? I tried poll_ready, to wait until the connection is ready before making the requests...
What do you think is best here? Opening n connections to the server, or an approach like using the poll_ready function?

Author

Also, I see this line in the example in src/client/conn/mod.rs:
request_sender.ready().await?;
which I think is wrong, because I can't find that method in the code 💢 (maybe I'm wrong)

Member

Yea, we don't want to use block_on there. Especially since the parallel option is meant to send them all at the same time.

  • For HTTP/1, you'd need to make parallel_cnt amount of connections, and then spawn tasks looping on ready() and then send_request().
  • For HTTP/2, you can make just one connection, and send parallel_cnt requests at a time.
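
A minimal sketch of both shapes, assuming hyper's client::conn::handshake API of the time; make_request(), addr, and parallel_cnt are stand-ins for the bench's own helpers and options, and the HTTP/2 sender is assumed to come from a Builder configured with http2_only(true):

    use futures_util::future::{join_all, poll_fn};
    use hyper::{Body, Request};

    // Stand-in for the bench's request builder.
    fn make_request() -> Request<Body> {
        Request::builder().uri("/").body(Body::empty()).unwrap()
    }

    // HTTP/1: parallel_cnt connections, each task looping ready -> send.
    async fn http1_parallel(addr: std::net::SocketAddr, parallel_cnt: u32) {
        for _ in 0..parallel_cnt {
            let stream = tokio::net::TcpStream::connect(addr).await.unwrap();
            let (mut tx, conn) = hyper::client::conn::handshake(stream).await.unwrap();
            // The connection future must be polled for requests to progress.
            tokio::spawn(conn);
            tokio::spawn(async move {
                loop {
                    // Wait until this connection can accept another request.
                    if poll_fn(|cx| tx.poll_ready(cx)).await.is_err() {
                        break;
                    }
                    // A real bench would also consume the response body here.
                    if tx.send_request(make_request()).await.is_err() {
                        break;
                    }
                }
            });
        }
    }

    // HTTP/2: a single connection multiplexes parallel_cnt requests at once.
    // send_request returns an owned future, so the sends can be collected
    // first and awaited together.
    async fn http2_parallel(mut tx: hyper::client::conn::SendRequest<Body>, parallel_cnt: u32) {
        let futs: Vec<_> = (0..parallel_cnt)
            .map(|_| tx.send_request(make_request()))
            .collect();
        let _ = join_all(futs).await;
    }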

Author

Hey, I already made the changes, as you can see in the PR, but when I run the benches I get the following error. Do you know how to solve it? What is the best way to await the request sender until it is ready to send a request?
[Screenshot of the error output, 2022-08-13]

Contributor

Hey Robertoskr. When I was working on this I called the ready method from tower::ServiceExt on the request sender: request_sender.ready().await. From the comments above it seems like you tried this, but maybe you forgot to import the trait?
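
For reference, the fragment in question; ServiceExt has to be in scope for .ready() to resolve on the sender (request_sender and req come from the surrounding bench code):

    // The `ready` combinator comes from tower's ServiceExt trait; without
    // this import the method isn't found on the request sender.
    use tower::ServiceExt;

    // Wait until the sender can accept a request, then send it.
    request_sender.ready().await?;
    let response = request_sender.send_request(req).await?;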

Author

Thanks! Just tried that and it works, but I also tried with a tokio::sync::Lock and it performs better. I don't know if it's safe though, but the code works well.
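
(Presumably this means tokio's async mutex, called tokio::sync::Mutex in tokio 1.x; a sketch of that pattern, with illustrative names:

    use std::sync::Arc;
    use futures_util::future::poll_fn;
    use tokio::sync::Mutex;

    // Share one sender across tasks behind an async mutex.
    let sender = Arc::new(Mutex::new(request_sender));
    for _ in 0..parallel_cnt {
        let sender = Arc::clone(&sender);
        tokio::spawn(async move {
            let fut = {
                let mut tx = sender.lock().await;
                // Wait for readiness while holding the lock...
                if poll_fn(|cx| tx.poll_ready(cx)).await.is_err() {
                    return;
                }
                // ...then start the send and drop the lock before awaiting,
                // so other tasks can begin their own requests.
                tx.send_request(make_request())
            };
            let _ = fut.await;
        });
    }

Note that holding the lock across the whole send would serialize the requests, which may explain a difference in timings but stops the requests from overlapping.)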

Author

And for the trait: yes, I didn't know that trait existed 😹

@Robertoskr (Author)

@seanmonstar @oddgrd Ready for review!

@oddgrd (Contributor)

oddgrd commented Aug 14, 2022

Hey, that's cool that you figured it out! I can have a look at this in the coming week, but I'm a new contributor myself so you'll have to wait for Sean to sign off on this whenever he has time. 🙂

@oddgrd oddgrd left a comment (Contributor)

Hey Robertoskr. I had a look at this, and when I tried to run the benchmarks on your branch it seems some of the HTTP2 ones take quite a while to finish. On my computer your branch took ~290 seconds to complete, but on the commit before these benchmarks were disabled they completed in ~80s. I left a couple of comments, but unfortunately I wasn't able to solve this myself, so I can't be of much help here.

If you'd like, you can have a look at my attempt; maybe it will help you. I was able to get it down to ~100s, but some of the HTTP2 benches are unstable. I might keep looking into this; if I figure it out I'll let you know. Good luck!

    local_addr
}

async fn handle_request(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
Contributor

In this function you return the request body, but you should be returning the response_body from the Opts struct.
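
Roughly like this (a sketch; the response_body field and its &'static [u8] type are taken from the old bench's Opts, so treat them as assumptions):

    use std::convert::Infallible;
    use hyper::service::service_fn;
    use hyper::{Body, Request, Response};

    // Capture the configured body before moving it into the closure.
    let body = opts.response_body; // assumed: &'static [u8]
    let service = service_fn(move |_req: Request<Body>| async move {
        // Return the bench's configured response body, not the request body.
        Ok::<_, Infallible>(Response::new(Body::from(body)))
    });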

Comment on lines +301 to +303
//
// Benches http/1 requests
//
Contributor

In Rust we can write doc comments with /// 🙂
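
For example (function name is illustrative):

    /// Benches HTTP/1 requests.
    ///
    /// `///` starts a doc comment, which rustdoc attaches to the item
    /// below; plain `//` comments are ignored by rustdoc.
    fn http1_benches() {}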

Author

nice

Comment on lines +438 to +443
let mut http = Http::new();
// configure http based on opts
http.http2_only(opts.http2)
.http2_initial_stream_window_size(opts.http2_stream_window)
.http2_initial_connection_window_size(opts.http2_conn_window)
.http2_adaptive_window(opts.http2_adaptive_window);
@oddgrd oddgrd (Contributor) Aug 19, 2022

You may have forgotten to remove this code, as I don't see this variable being used anywhere.

Author

Yep, thanks!

Comment on lines +445 to +450
let (http2, http2_stream_window, http2_conn_window, http2_adaptive_window) = (
opts.http2,
opts.http2_stream_window,
opts.http2_conn_window,
opts.http2_adaptive_window,
);
Contributor

If you #[derive(Clone)] for the Opts struct you can clone opts here and then use it directly without destructuring. (Except for the opts.response_body, which has to be destructured before being passed to the service_fn closure.)

 let opts = opts.clone();
 let body = opts.response_body;
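
i.e., something like this (a sketch; the field list is inferred from the code above and may not match the bench exactly):

    // Assumed shape of the bench's Opts.
    #[derive(Clone)]
    struct Opts {
        http2: bool,
        http2_stream_window: Option<u32>,
        http2_conn_window: Option<u32>,
        http2_adaptive_window: bool,
        response_body: &'static [u8],
    }

    // Then, where the server is set up:
    let opts = opts.clone();
    let body = opts.response_body;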

@Robertoskr (Author)

> Hey Robertoskr. I had a look at this, and when I tried to run the benchmarks on your branch it seems some of the HTTP2 ones take quite a while to finish. [...]

On my computer it runs the same as the previous version of these benches: 65 seconds.

@oddgrd (Contributor)

oddgrd commented Aug 19, 2022

That's odd, because on my computer the http2 ones take a long time (the http1 ones run great):
[Screenshot of benchmark timing output]

@seanmonstar seanmonstar linked an issue Aug 25, 2022 that may be closed by this pull request
@seanmonstar (Member)

Thank you for tackling this issue! #3109 was just merged, which re-enables most of these benchmarks, so I'll close this one.

@seanmonstar seanmonstar closed this Jan 5, 2023
Linked issue: Re-enable the end-to-end, pipeline, and server benchmarks