tcp_proxy: convert TCP proxy to use TCP connection pool (#3938)
Conversation
Converts TcpProxy::Filter and WebSocket::WsHandlerImpl to use Tcp::ConnectionPool to obtain connections. Much of the stats handling and the connection timeouts are now handled by the connection pool. Stats were manually verified by comparing the stats produced by the tcp_proxy_integration_test with and without the connection pool change.

Risk Level: medium
Testing: unit/integration testing
Docs Changes: n/a
Release Notes: n/a

Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
One initial question/discussion: it seems weird to me that TcpProxy holds a bare pointer to ConnectionData, and that ConnectionData has a release() method that must be called. Could/should this be reworked so that it holds a unique_ptr, and the destructor takes the place of release()?
I'll see if I can get that to work. I had originally wanted release-on-destruction, but wasn't able to make it work for reasons that I can't remember.
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@ggreenway PTAL -- the ConnectionData is now passed as a unique_ptr, and destroying it releases the connection.
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
ggreenway left a comment:
This looks great. I really like that the stats, connect timeouts, etc all moved out of TcpProxy and into the pool.
source/common/tcp_proxy/tcp_proxy.h
Outdated
```diff
  Network::ReadFilterCallbacks* read_callbacks_{};
- Network::ClientConnectionPtr upstream_connection_;
  Tcp::ConnectionPool::Cancellable* upstream_handle_{};
+ Tcp::ConnectionPool::ConnectionDataPtr upstream_connection_;
```
Please rename to upstream_connection_data_ or similar.
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>
@mattklein123 Do you want to give this a final pass?
Yup, will do.
mattklein123 left a comment:
Nice, love the code sharing.
```cpp
                             const Upstream::HostDescriptionConstSharedPtr& upstream_host) {
  DrainerPtr drainer(new Drainer(*this, config, callbacks, std::move(upstream_conn_data),
                                 std::move(idle_timer), upstream_host,
                                 std::move(connected_timespan)));
```
Not actionable, but I wonder if the drain manager could now be moved into the cluster manager to help similar situations elsewhere? cc @ggreenway
That's an interesting idea. I think that would work fine. It could also live in the TCP pool. That may also avoid some of the complicated lifetime issues around stats in the current implementation.
I added a note to follow up on this in #3818.
This reverts commit 028387a. Signed-off-by: Daniel Hochman <danielhochman@users.noreply.github.com>
Converts TcpProxy::Filter and WebSocket::WsHandlerImpl to use
Tcp::ConnectionPool to obtain connections. Much of the stats
handling and connection timeouts are handled by the connection
pool.
Stats were manually verified by comparing stats produced by
the tcp_proxy_integration_test with and without the connection
pool change.
Relates to #3818.
Risk Level: medium
Testing: unit/integration testing
Docs Changes: n/a
Release Notes: n/a
Signed-off-by: Stephan Zuercher <stephan@turbinelabs.io>