
Combine H1 and H2 conn_pool implementations #9668

Merged

mattklein123 merged 50 commits into envoyproxy:master from ggreenway:h2-combined on Jan 29, 2020

Conversation

@ggreenway (Member) commented Jan 13, 2020

This allows the H2 connection pool to have more than 2 connections (1 active, 1 draining). In particular, very low values of max_requests_per_connection now work, because there can be multiple draining connections concurrently, as well as multiple active connections.

Risk Level: High
Testing: Unit, integration
Docs Changes: Added to .proto; updated conn pool and circuit breaker docs; added old behavior to deprecation list
Release Notes: added
Fixes #7403
Fixes #9215
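As a rough sketch of the scenario this enables, here is a hypothetical v2-era cluster fragment (field values are illustrative, not recommendations). Before this change, a low max_requests_per_connection on an H2 cluster would stall the pool at 1 active + 1 draining connection:

```yaml
clusters:
- name: backend
  connect_timeout: 1s
  type: STRICT_DNS
  # Use HTTP/2 to the upstream.
  http2_protocol_options: {}
  # A very low per-connection request limit. With this PR, the pool can keep
  # several connections draining concurrently while opening new ones, so a
  # value like this no longer throttles throughput to ~1 connection's worth.
  max_requests_per_connection: 2
```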

Snow Pettersen and others added 19 commits December 13, 2019 22:53
This extracts shared functionality (connection timeouts, drain
callbacks) into the base class. It also introduces a base class for
the ActiveClient, opening up for moving shared client functionality into
the base class as well. Moves the handling of the ready/busy lists up to
the base class, setting us up for reusing these for HTTP/2.

Signed-off-by: Snow Pettersen <snowp@squareup.com>
Signed-off-by: Greg Greenway <ggreenway@apple.com>
This can be achieved by using the existing cluster config mocks.

Signed-off-by: Greg Greenway <ggreenway@apple.com>
Create additional connections when the limit is reached.

Signed-off-by: Greg Greenway <ggreenway@apple.com>
It was used in an intermediate version of the change.

Signed-off-by: Greg Greenway <ggreenway@apple.com>
@repokitteh-read-only commented:
CC @envoyproxy/api-shepherds: Your approval is needed for changes made to api/.

🐱

Caused by: #9668 was opened by ggreenway.


@ggreenway (Member, Author) commented:

cc @snowp

@ggreenway (Member, Author) commented:

I have 2 big concerns with this PR:

  1. I messed up the implementation and it completely breaks the world until the bug is fixed
  2. Subtle behavior changes to existing configurations. Behavior is mostly the same for both H1 and H2 pools, with the exception of circuit breakers. The old behavior was to only look at connection circuit breakers for H1 pools, and only look at request circuit breakers for H2 pools. Now, both types are looked at in all cases. Will this break someone?
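For context on concern 2, both kinds of thresholds live side by side in a cluster's circuit-breaker config. After this change, all of them are consulted for both H1 and H2 pools. A hypothetical fragment (values are illustrative):

```yaml
circuit_breakers:
  thresholds:
  - priority: DEFAULT
    # Previously only enforced for HTTP/1.1 pools; now also for HTTP/2.
    max_connections: 4
    # Previously only enforced for HTTP/2 pools; now also for HTTP/1.1.
    max_requests: 256
    max_pending_requests: 128
```

A cluster that raised max_connections for H1 concurrency but left max_requests at its default could newly hit the request limit, which is exactly the compatibility risk discussed below.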

@snowp (Contributor) commented Jan 13, 2020

Re 2: I have written code myself that doesn't touch the unused circuit breaker for H1/H2-only clusters, so I'm sure there are several cases where people have bumped the max connection CB for an HTTP/1.1-only cluster to permit some concurrency but left the max request CB at its default. I'm wondering if we should runtime-guard the new behavior and communicate the change with a timeline for flipping the default. Thoughts @mattklein123?

@snowp (Contributor) left a comment:

Nice, I think this approach makes sense. It might be good to add some test coverage for the new circuit-breaking behavior for H1/H2.

@ggreenway (Member, Author) commented Jan 27, 2020

But it passed on my machine.

When I ran the tests against the wrong branch. Oops.

The last batch of changes I pushed definitely has a bug, I'm trying to figure out the correct way to fix it.

When there is a pending request, and a connection in the process of connecting, if drainConnection() or addDrainedCallback() is called, should the pending connection be closed, or kept open until it is connected to try to serve the pending request? @mattklein123

@mattklein123 (Member) commented:

> When there is a pending request, and a connection in the process of connecting, if drainConnection() or addDrainedCallback() is called, should the pending connection be closed, or kept open until it is connected to try to serve the pending request? @mattklein123

As long as there are pending requests I think we have to allow them to be served, which includes waiting for connection?

…a CONNECTING client into DRAINING, which doesn't make sense.

Also minor comment improvements.

Signed-off-by: Greg Greenway <ggreenway@apple.com>
@ggreenway (Member, Author) commented:

That makes sense. I hadn't realized that the two functions serve very different use cases. I think I have the desired behavior for both now.
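The behavior settled on in this exchange can be sketched as a toy predicate (names and types are illustrative, not Envoy's actual API): a not-yet-connected client is closed on drain only when no pending request could still use it.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative simplification of the pool's client states; this is not
// Envoy's real enum.
enum class ClientState { Connecting, Ready, Busy, Draining };

// Decide whether a CONNECTING client should be closed when the pool is asked
// to drain. Per the discussion above, a CONNECTING client is never moved to
// DRAINING; it is either closed now (nothing is waiting for it) or kept open
// so it can serve a pending request once the connection completes.
bool shouldCloseConnectingClient(ClientState state, std::size_t pending_requests) {
  if (state != ClientState::Connecting) {
    return false; // only CONNECTING clients are decided here
  }
  return pending_requests == 0;
}
```

A busy or ready client follows the normal drain path instead; only the connecting case needs this special handling.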

@ggreenway (Member, Author) commented:

@envoyproxy/api-shepherds PTAL at this. It's not a change to the protos, just a doc change and applying an existing field in a new context.

@mattklein123 (Member) left a comment:

Thanks, awesome work. Just a few doc comments and then I think we can ship and hopefully get some bake time!

/wait

@mattklein123 (Member) left a comment:

Thanks, awesome work. @alyssawilk any further comments on this?

@mattklein123 mattklein123 dismissed snowp’s stale review January 29, 2020 18:06

comments addressed

@alyssawilk (Contributor) left a comment:
LGTM - I had a draft with one comment about documenting the overrides in the release notes and you beat me to it :-P

@mattklein123 mattklein123 merged commit be8d4ca into envoyproxy:master Jan 29, 2020
frittentheke added a commit to frittentheke/istio that referenced this pull request Sep 19, 2025
…pplied to certain HTTP version

The circuit-breaker connection pool settings `MaxRequests` and `MaxPendingRequests` apply to HTTP/1.1 as well as HTTP/2 since [1] and [2].

This was actually changed in the Istio docs via [3], but these source code comments were left, potentially
causing confusion.

[1] envoyproxy/envoy#9215
[2] envoyproxy/envoy#9668
[3] istio/api#2428

Signed-off-by: Christian Rohmann <christian.rohmann@inovex.de>
frittentheke added a commit to frittentheke/istio that referenced this pull request Sep 19, 2025

…pplying to particular HTTP version

Signed-off-by: Christian Rohmann <christian.rohmann@inovex.de>
istio-testing pushed a commit to istio/istio that referenced this pull request Sep 20, 2025

…pplying to particular HTTP version (#57705)

Signed-off-by: Christian Rohmann <christian.rohmann@inovex.de>


Development

Successfully merging this pull request may close these issues:

  1. circuit breaking: Change max_requests to work for both HTTP/1 and HTTP/2
  2. Http2: Need connection policy for maximum streams per upstream connection

4 participants