Implemented H2 stream level buffer accounting. #16218
```diff
@@ -259,7 +259,18 @@ RequestDecoder& ConnectionManagerImpl::newStream(ResponseEncoder& response_encod
 }

   ENVOY_CONN_LOG(debug, "new stream", read_callbacks_->connection());
-  ActiveStreamPtr new_stream(new ActiveStream(*this, response_encoder.getStream().bufferLimit()));
+  // Set the account to start accounting if enabled. This is still a
+  // work-in-progress, and will be removed when other features using the
+  // accounting are implemented.
+  Buffer::BufferMemoryAccountSharedPtr downstream_request_account;
+  if (Runtime::runtimeFeatureEnabled("envoy.test_only.per_stream_buffer_accounting")) {
+    downstream_request_account = std::make_shared<Buffer::BufferMemoryAccountImpl>();
+    response_encoder.getStream().setAccount(downstream_request_account);
+  }
+  ActiveStreamPtr new_stream(new ActiveStream(*this, response_encoder.getStream().bufferLimit(),
+                                              std::move(downstream_request_account)));
   new_stream->state_.is_internally_created_ = is_internally_created;
   new_stream->response_encoder_ = &response_encoder;
   new_stream->response_encoder_->getStream().addCallbacks(*new_stream);
```

Contributor:
Thinking a bit about the future: It would be useful to have a way to go from Account to ActiveStream. Possible strawperson: The account interface could wrap the reset and dump methods so they are no-ops if the account has no owner, or have a method to get an optional reference to the owner that the per-worker overload trackers can use to run those operations on the relevant owner objects.

Author:
Yes, we'll need something like this later down the line, and will address this then.
```diff
@@ -571,13 +582,14 @@ void ConnectionManagerImpl::RdsRouteConfigUpdateRequester::requestSrdsUpdate(
 }

 ConnectionManagerImpl::ActiveStream::ActiveStream(ConnectionManagerImpl& connection_manager,
-                                                  uint32_t buffer_limit)
+                                                  uint32_t buffer_limit,
+                                                  Buffer::BufferMemoryAccountSharedPtr account)
     : connection_manager_(connection_manager),
       stream_id_(connection_manager.random_generator_.random()),
       filter_manager_(*this, connection_manager_.read_callbacks_->connection().dispatcher(),
                       connection_manager_.read_callbacks_->connection(), stream_id_,
-                      connection_manager_.config_.proxy100Continue(), buffer_limit,
-                      connection_manager_.config_.filterFactory(),
+                      std::move(account), connection_manager_.config_.proxy100Continue(),
+                      buffer_limit, connection_manager_.config_.filterFactory(),
                       connection_manager_.config_.localReply(),
                       connection_manager_.codec_->protocol(), connection_manager_.timeSource(),
                       connection_manager_.read_callbacks_->connection().streamInfo().filterState(),
```