lazy read disable for the http1 codec #20148
Conversation
Signed-off-by: wbpcode <wbphub@live.com>
cc @KBaichoo cc @alyssawilk
/assign @KBaichoo |
Signed-off-by: wbpcode <wbphub@live.com>
/retest
KBaichoo
left a comment
Looks great otherwise, mind posting the performance numbers (compared to the prior implementation) since this is a performance change? Thanks
```cpp
// Active downstream request remote complete but there is some remaining data in the read buffer
// then try to disable the connection reading. Connection reading will be re-enabled after the
// current active downstream request has completed.
// This ensures that the remaining data can be consumed after the current active downstream
// request has completed.
```
This is a bit hard to grok, perhaps:
Eagerly read disable the connection if the downstream is sending pipelined requests as we serially process them. Reading from the connection will be re-enabled after the active request is completed.
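The suggested behavior can be sketched as follows. This is an illustrative model, not Envoy's actual codec code: `MockConnection` and `maybeReadDisableForPipelinedData` are hypothetical names, though `readDisable` mirrors the shape of Envoy's real connection API.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for Envoy's Network::Connection read-disable control.
struct MockConnection {
  bool reading_enabled = true;
  void readDisable(bool disable) { reading_enabled = !disable; }
};

// If the active request has fully arrived but bytes remain in the read buffer
// (i.e. the client pipelined another request), stop reading from the socket.
// Returns true when the connection was read-disabled.
bool maybeReadDisableForPipelinedData(MockConnection& conn,
                                      std::size_t buffered_bytes,
                                      bool active_request_complete) {
  if (active_request_complete && buffered_bytes > 0) {
    conn.readDisable(true);
    return true;
  }
  return false;
}
```

Reading is resumed once the active request completes, at which point the buffered pipelined request is processed.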
```cpp
// Active downstream request remote complete but there is some new data comming then try to
// disable the connection reading. Connection reading will be re-enabled after the current
// active downstream request has completed.
// This ensures that the new comming data can be consumed after the current active downstream
// request has completed.
```
This is a bit hard to grok, perhaps:
Read disable the connection if the downstream is sending additional data while we are working on an existing request. Reading from the connection will be re-enabled after the active request is completed.
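This second case can be modeled like the sketch below (hypothetical names again; only `readDisable` mirrors Envoy's API shape): data keeps arriving while an existing request is being processed, so reading is disabled, and it is re-enabled once the active request completes.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical connection stand-in (not Envoy's real class).
struct MockConnection {
  bool reading_enabled = true;
  void readDisable(bool disable) { reading_enabled = !disable; }
};

// Illustrative model: disable reading when new data arrives during an active
// request; resume reading when that request finishes.
struct Http1CodecModel {
  MockConnection& conn;
  bool processing_request = false;

  void onData(std::size_t bytes) {
    if (processing_request && bytes > 0) {
      conn.readDisable(true);  // stop reading until the active request is done
    }
  }

  void onActiveRequestComplete() {
    processing_request = false;
    conn.readDisable(false);  // resume reading for subsequent requests
  }
};
```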
Signed-off-by: wbpcode <wbphub@live.com>
Benchmark with the simplest Envoy configuration and concurrency 1: a multi-process nginx backend returning a 1k response body, with wrk as the client.
Result before this PR:
Result after this PR:
In a simple stress test scenario, this PR brings about a 4~5% throughput improvement.
/retest
/retest
KBaichoo
left a comment
lgtm
/assign @alyssawilk as senior maintainer for merge
alyssawilk
left a comment
This looks fantastic! I'm inclined to think it's high enough risk to runtime guard, though you're welcome to get a second opinion from Matt or Snow if you would rather not. Definitely let's add a release note to docs/root/version_history/current.rst noting this should be nothing but a perf win, but in case there are behavioral changes.
Let's do that and add mock checks in the HTTP/1.1 codec test to regression test, and you'll be good to go!
/wait
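The runtime-guard pattern being requested might look like the following sketch. In real Envoy code the check is `Runtime::runtimeFeatureEnabled(...)`; here a plain map stands in for the runtime snapshot so the example is self-contained.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for Envoy's runtime snapshot; the real check is
// Runtime::runtimeFeatureEnabled().
std::unordered_map<std::string, bool> runtime_flags = {
    {"envoy.reloadable_features.http1_lazy_read_disable", true}};

bool runtimeFeatureEnabled(const std::string& name) {
  const auto it = runtime_flags.find(name);
  return it != runtime_flags.end() && it->second;
}

// Gate the new lazy read-disable path behind the reloadable feature so the
// old eager behavior can be restored by flipping the flag at runtime.
bool useLazyReadDisable() {
  return runtimeFeatureEnabled(
      "envoy.reloadable_features.http1_lazy_read_disable");
}
```

Operators can then set the flag to `false` in a `layered_runtime` static layer to restore the previous behavior, as shown later in this thread.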
Got it. A runtime guard sounds reasonable to me.
Signed-off-by: wbpcode <wbphub@live.com>
Signed-off-by: wbpcode <wbphub@live.com>
/retest
cc @alyssawilk Release note added. Tests added. Runtime guard added. 😄
Throwing over to Matt for a last (non-google) look
Signed-off-by: wbpcode <wbphub@live.com>
Signed-off-by: kuochunghsu <kuochunghsu@pinterest.com>
After I had merged the latest main I noticed performance of the io_uring-backed … With … With …
Hi @rojkov, can you provide some more detailed config and also your code base? I can have a try in my local env. This PR reduces the number of calls to … Or can we open a new issue to record and discuss this problem?
Hi @wbpcode. The new IoHandle is still WIP. This PR doesn't break anything in the main branch; I just wanted to give a heads-up. The culprit may well be in the new IoHandle actually. I'd appreciate it if you took a look. The code can be obtained from #19082. My current config is:

```yaml
bootstrap_extensions:
- name: envoy.extensions.io_socket.io_uring
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.network.socket_interface.v3.IoUringSocketInterface
    use_submission_queue_polling: false
    read_buffer_size: 8192
    io_uring_size: 300
default_socket_interface: "envoy.extensions.network.socket_interface.io_uring"
enable_dispatcher_stats: false
static_resources:
  clusters:
  - name: cluster_0
    connect_timeout: 0.25s
    circuit_breakers:
      thresholds:
      - priority: DEFAULT
        max_connections: 1000000000
        max_pending_requests: 1000000000
        max_requests: 1000000000
        max_retries: 1000000000
      - priority: HIGH
        max_connections: 1000000000
        max_pending_requests: 1000000000
        max_requests: 1000000000
        max_retries: 1000000000
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 4500
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          generate_request_id: false
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: cluster_0
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              dynamic_stats: false
layered_runtime:
  layers:
  - name: static_layer
    static_layer:
      envoy.reloadable_features.http1_lazy_read_disable: false
```

For benchmarking I use Nighthawk. The server config:

```yaml
static_resources:
  listeners:
  # define an origin server on :10000 that always returns "lorem ipsum..."
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 4500
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          generate_request_id: false
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: service
              domains:
              - "*"
          http_filters:
          - name: test-server # before envoy.router because order matters!
            typed_config:
              "@type": type.googleapis.com/nighthawk.server.ResponseOptions
              response_body_size: 10
              v3_response_headers:
              - { header: { key: "foo", value: "bar3" } }
              - {
                  header: { key: "foo", value: "bar2" },
                  append: true,
                }
              - { header: { key: "x-nh", value: "1" } }
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              dynamic_stats: false
admin:
  access_log_path: /tmp/envoy.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
```

I run the client with … And also I pin Envoy to a single core with …
Signed-off-by: wbpcode <wbphub@live.com>
Signed-off-by: wbpcode <wbphub@live.com>
Commit Message: lazy read disable for the http1 codec
Additional Description:
Ref #19900 for more info.
Risk Level: Low.
Testing: N/A.
Docs Changes: N/A.
Release Notes: N/A.
Platform Specific Features: N/A.
Optional Runtime guard:
envoy.reloadable_features.http1_lazy_read_disable.