Support configuring MaxConcurrentStreams for http2 #14169
Codecov Report
```
@@            Coverage Diff             @@
##             main   #14169      +/-   ##
==========================================
- Coverage   75.40%   75.21%   -0.19%
==========================================
  Files         455      456       +1
  Lines       36859    36892      +33
==========================================
- Hits        27792    27749      -43
- Misses       7335     7395      +60
- Partials    1732     1748      +16
```
cc @ptabor @serathius @spzala Please take a look. The fix may also need to be cherry-picked to 3.5.
@ahrtr thanks, nice work. I have a few nits.
Force-pushed from f672f28 to 3167e05
@spzala Resolved all your comments, PTAL, thx.
There was no update on the original PR (see below) for more than 2 weeks, so Benjamin (@ahrtr) continues to work on the PR. The first step is to rebase the PR, because there are lots of conflicts with the main branch. The changes to go.mod and go.sum were reverted, because they are not needed. The e2e test cases were also reverted, because they are not correct. ``` etcd-io#14081 ``` Signed-off-by: nic-chen <[email protected]> Signed-off-by: Benjamin Wang <[email protected]>
The golang built-in package `flag` doesn't support the `uint32` data type, so we need to support it via `flag.Var`. Signed-off-by: Benjamin Wang <[email protected]>
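The `flag.Var` approach mentioned above can be sketched as follows: a custom type satisfying the `flag.Value` interface (`Set` and `String`) wraps a `uint32`. The type and function names here are illustrative, not necessarily what etcd's code uses:

```go
package main

import (
	"flag"
	"fmt"
	"strconv"
)

// uint32Value adapts a uint32 to the flag.Value interface, since the
// standard flag package has no Uint32 helper.
type uint32Value uint32

// Set parses the string as an unsigned 32-bit integer.
func (v *uint32Value) Set(s string) error {
	n, err := strconv.ParseUint(s, 10, 32)
	if err != nil {
		return err
	}
	*v = uint32Value(n)
	return nil
}

// String renders the current value, also required by flag.Value.
func (v *uint32Value) String() string {
	return strconv.FormatUint(uint64(*v), 10)
}

// parseMaxStreams registers the flag via flag.Var and parses args.
// The default of 250 is assumed here to mirror http2's default.
func parseMaxStreams(args []string) (uint32, error) {
	v := uint32Value(250)
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	fs.Var(&v, "max-concurrent-streams", "maximum concurrent streams per client")
	if err := fs.Parse(args); err != nil {
		return 0, err
	}
	return uint32(v), nil
}

func main() {
	n, err := parseMaxStreams([]string{"--max-concurrent-streams=1000"})
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // prints 1000
}
```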
The default max stream count in http2 is 250. When there are more than 250 streams, the client side may be blocked until some previous streams are released, so we need to support configuring a larger `MaxConcurrentStreams`. Signed-off-by: Benjamin Wang <[email protected]>
Signed-off-by: Benjamin Wang <[email protected]>
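With such a flag in place, an operator could raise the limit at startup. A hypothetical invocation, assuming the flag is named `--max-concurrent-streams` as the PR title suggests (verify against your etcd version's `--help`):

```shell
# Assumed flag name from this PR; default would match http2's 250.
etcd --max-concurrent-streams 1000
```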
Force-pushed from 3167e05 to 063db88
lgtm
Thanks @ahrtr
… max value for each client. Signed-off-by: Benjamin Wang <[email protected]>
Signed-off-by: Benjamin Wang <[email protected]>
Force-pushed from 063db88 to 8b6c8b4
Thanks @spzala. cc @serathius Do you have any other comments on this? I only updated some messages (
Talked with @wojtek-t and we concluded this change is ok for main and backport to v3.5
Thank you all for making this change, it helps a lot for Apache APISIX users!
…each client can open at a time Also refer to etcd-io#14169 (comment) Signed-off-by: Benjamin Wang <[email protected]>
Continuing to work on #14081.
Related issues:
Benchmark test result:
Note I only created some concurrent streams; there is no traffic at all. The tests were performed on a Linux VM with 8 GiB memory and 4 CPUs (Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz).
It seems the memory used by concurrent streams isn't very high: about 90 MB for 1000 concurrent streams, and 108 MB for 2000 concurrent streams.
0 stream
0_stream.txt
3 streams
3_stream.txt
100 streams
100_streams.txt
1000 streams
1000_streams.txt
2000 streams
2000_streams.txt
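A rough reading of the two larger measurements above: going from 1000 to 2000 streams added 18 MB, so the marginal cost is on the order of 18 KB per stream (the absolute figures also include baseline process memory, which these benchmark files would show at 0 streams). A small sketch of that arithmetic:

```go
package main

import "fmt"

// marginalKBPerStream estimates the marginal memory cost per stream
// from two (stream count, total MB) measurements.
func marginalKBPerStream(streams1, streams2 int, mb1, mb2 float64) float64 {
	return (mb2 - mb1) * 1024 / float64(streams2-streams1)
}

func main() {
	// 90 MB at 1000 streams, 108 MB at 2000 streams (from the results above).
	fmt.Printf("~%.0f KB per additional stream\n", marginalKBPerStream(1000, 2000, 90, 108))
}
```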