Finalize emission and tracking of latencies in response headers #500
dubious90 merged 42 commits into envoyproxy:master from
Conversation
Signed-off-by: Otto van der Schaaf <oschaaf@we-amp.com>
Will still fail the test because of mismatched expectations, but this will help merging master periodically in here to verify (almost) all is well. Signed-off-by: Otto van der Schaaf <oschaaf@we-amp.com>
@qqustc please review and assign to me once done. Thanks!

Thanks Otto!
…acking-option Signed-off-by: Otto van der Schaaf <oschaaf@we-amp.com>
dubious90
left a comment
This feature looks cool, Otto. Thanks for your efforts.
README.md
Outdated
    bazel-bin/nighthawk_client [--stats-flush-interval <uint32_t>]
                               [--stats-sinks <string>] ...
    bazel-bin/nighthawk_client [--response-header-with-latency-input
No changes requested here, and I think we're in agreement, but eventually I think we need to move away from letting TCLAP auto-generate this documentation, because there are now so many flags that it has become difficult to find what you need (e.g. rps, duration).
README.md
Outdated
    Where:

    --response-header-with-latency-input <string>
I think the meaning here is a little hard to parse from the name / comment combo.
How do you feel about response_latency_header_name?
README.md
Outdated
    Where:

    --response-header-with-latency-input <string>
      Set an optional response header name, whose values will be tracked in
For the description, can I suggest this edit to the first sentence:
Set an optional header name that will be returned in responses, whose values will be tracked in a latency histogram if set.
README.md
Outdated
    --response-header-with-latency-input <string>
      Set an optional response header name, whose values will be tracked in
      a latency histogram if set. Can be used in tandem with the test server
For the second sentence, can we edit to something like:
"... if set. e.g. Can be used in tandem with the test server's response option "emit_previous_request_delta_in_response_header" to record elapsed time between request arrivals."
(Adding the words "response option" clarifies where the reader should look for its definition.)
source/client/stream_decoder.h
Outdated
    Envoy::Random::RandomGenerator& random_generator,
    Envoy::Tracing::HttpTracerSharedPtr& http_tracer)
    Envoy::Tracing::HttpTracerSharedPtr& http_tracer,
    std::string response_header_with_latency_input)
I'm a little surprised by your use of a std::string copy with std::move here, as opposed to having this be an absl::string_view (or const std::string&, whichever is more common within our code), and then doing response_header_with_latency_input_(response_header_with_latency_input), which I'm pretty sure just works.
This suggestion is more in line with the style I'm familiar with. Is there a strong reason not to do that?
Good catch. I'm surprised too, I must have grown tired of the tried and proven path in this repo :-) Using absl::string_view now.
test/benchmark_http_client_test.cc
Outdated
    client_ = std::make_unique<Client::BenchmarkClientHttpImpl>(
        *api_, *dispatcher_, store_, statistic_, false, cluster_manager_, http_tracer_, "benchmark",
        request_generator, true);
        request_generator, true, "");
This is a lot of arguments. Would it be okay if we added comments indicating which parameter each argument corresponds to? For the new one, it'd be
/*response_header_with_latency_input=*/""
The two boolean parameters would also benefit from this.
    @@ -0,0 +1,34 @@
    admin:
Can we add a comment at the top of this, explaining its purpose in a sentence and drawing attention to the relevant/unique part of the configuration (which I think is the time-tracking filter)?
    """Test emission and tracking of response header latencies.

    Run the CLI configured to track latencies delivered by response header from the test-server
    which is set up emit those. Ensure the expected histogram is observed.
nit: set up to emit those
    """Test emission and tracking of response header latencies.

    Run the CLI configured to track latencies delivered by response header from the test-server
    which is set up emit those. Ensure the expected histogram is observed.
nit: Can we instead say "ensure the histogram receives the correct number of inputs" or something similar?
    ])
    global_histograms = http_test_server_fixture.getNighthawkGlobalHistogramsbyIdFromJson(parsed_json)
    asserts.assertEqual(
        int(global_histograms["benchmark_http_client.origin_latency_statistic"]["count"]), 99)
Can we add a second integration test that works as a crash test of sorts: the header is configured, but no responses actually include it?
I'm not sure of the exact correct assertions. We would expect it to work, but with origin_latency_statistic equal to 0, I think. "Working" could be defined as the responses_xxx counter increasing?
Done; we now test using two server side configurations, one with the new time-tracking extension enabled, and one that doesn't have it set up. We check expectations based on which one we're testing. Let me know if that looks good to you. (68ed58a)
Signed-off-by: Otto van der Schaaf <oschaaf@we-amp.com>
@dubious90 comments addressed in 68ed58a.
…acking-option Signed-off-by: Otto van der Schaaf <oschaaf@we-amp.com>
Also, merged master in here and resolved conflicts. Thanks for the review!
dubious90
left a comment
LGTM modulo the one naming comment. Thanks!
README.md
Outdated
    USAGE:

    bazel-bin/nighthawk_client [--stats-flush-interval <uint32_t>]
    bazel-bin/nighthawk_client [--response-latency-header-name <string>]
Sorry for the extra churn here. I realized I gave you a misleading name (it implies the header would carry response latency, which, while possible with this flag, is not the only use case).
How do you feel about --latency-response-header-name (just inverting those two words)?
    ])
    global_histograms = http_test_server_fixture.getNighthawkGlobalHistogramsbyIdFromJson(parsed_json)
    asserts.assertEqual(int(global_histograms["benchmark_http_client.latency_2xx"]["count"]), 100)
    # Verify behavior is correct both with and without the timing filter enabled.
This is a very helpful comment here. Thanks
Signed-off-by: Otto van der Schaaf <oschaaf@we-amp.com>
Adds a client option and wires it through in TCLAP. Amends tests and code to work with that
instead of the hard-coded value. Adds an end-to-end test, and enables the test-server extension build.
Follow-up to #477.
Fixes #360: with this, the feature is ready to use.
Signed-off-by: Otto van der Schaaf oschaaf@we-amp.com