Add stats to record latency metrics for each response code in NH #381
@@ -0,0 +1,134 @@
# Nighthawk Statistics

## Background
Currently, Nighthawk only outputs metrics at the end of a test run; no metrics
are streamed during a run. The work to stream its metrics is in progress.

## Statistics in Nighthawk
All the statistics defined in Nighthawk below are tracked per worker.

For counter metrics, Nighthawk uses Envoy's Counter directly. For histogram
metrics, Nighthawk wraps Envoy's Histogram into its own Statistic concept (see
[#391](https://github.com/envoyproxy/nighthawk/pull/391)).

Name | Type | Description
-----| ----- | ----------------
upstream_rq_total | Counter | Total number of requests sent from Nighthawk
http_1xx | Counter | Total number of responses with code 1xx
http_2xx | Counter | Total number of responses with code 2xx
http_3xx | Counter | Total number of responses with code 3xx
http_4xx | Counter | Total number of responses with code 4xx
http_5xx | Counter | Total number of responses with code 5xx
http_xxx | Counter | Total number of responses with code <100 or >=600
stream_resets | Counter | Total number of stream resets
pool_overflow | Counter | Total number of times the connection pool overflowed
pool_connection_failure | Counter | Total number of pool connection failures
benchmark_http_client.latency_1xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 1xx
benchmark_http_client.latency_2xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 2xx
benchmark_http_client.latency_3xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 3xx
benchmark_http_client.latency_4xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 4xx
benchmark_http_client.latency_5xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 5xx
benchmark_http_client.latency_xxx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code <100 or >=600
benchmark_http_client.queue_to_connect | HdrStatistic | Histogram of request connection time (in nanoseconds)
benchmark_http_client.request_to_response | HdrStatistic | Latency histogram (in nanoseconds), including requests with stream reset or pool failure
benchmark_http_client.response_header_size | StreamingStatistic | Statistic of response header size (min, max, mean, pstdev values in bytes)
**Contributor:** It's not clear to me reading this what these statistics are going to look like. If HdrStatistic is a histogram, what is StreamingStatistic? If it's also a histogram, how does it differ from HdrStatistic?

**Author:** Both are implementations of NH `Statistic`. As shown in this table, different NH metrics choose different implementations of NH `Statistic`.
benchmark_http_client.response_body_size | StreamingStatistic | Statistic of response body size (min, max, mean, pstdev values in bytes)
sequencer.callback | HdrStatistic | Latency histogram (in nanoseconds) of unblocked requests
sequencer.blocking | HdrStatistic | Latency histogram (in nanoseconds) of blocked requests
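To make the distinction in the table concrete, a streaming statistic can be sketched as follows. This is an illustrative sketch, not Nighthawk's actual implementation: a `StreamingStatistic`-style metric keeps O(1) running state and yields the min, max, mean, and pstdev values listed above, whereas an `HdrStatistic` (backed by a histogram such as HdrHistogram) stores a full value distribution and can answer percentile queries.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <limits>

// Hypothetical sketch of a streaming statistic: constant-size state,
// updated one value at a time using Welford's online algorithm.
class StreamingStatisticSketch {
public:
  void addValue(uint64_t value) {
    count_++;
    min_ = std::min(min_, value);
    max_ = std::max(max_, value);
    const double delta = static_cast<double>(value) - mean_;
    mean_ += delta / static_cast<double>(count_);
    m2_ += delta * (static_cast<double>(value) - mean_);
  }
  uint64_t count() const { return count_; }
  uint64_t min() const { return min_; }
  uint64_t max() const { return max_; }
  double mean() const { return mean_; }
  // Population standard deviation, as reported in the table above.
  double pstdev() const {
    return count_ == 0 ? 0.0 : std::sqrt(m2_ / static_cast<double>(count_));
  }

private:
  uint64_t count_{0};
  uint64_t min_{std::numeric_limits<uint64_t>::max()};
  uint64_t max_{0};
  double mean_{0.0};
  double m2_{0.0};
};
```

The trade-off is memory versus fidelity: the streaming form cannot recover percentiles, which is why the latency metrics above use a histogram-backed statistic instead.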
## Envoy Metrics Model

[Envoy](https://github.com/envoyproxy/envoy) has three types of metrics:
- Counters: Unsigned integers that only increase, representing how many times an
  event happened, e.g. the total number of requests.
- Gauges: Unsigned integers that can increase or decrease, e.g. the number of
  active connections.
- Histograms: Unsigned integers that yield summarized percentile values, e.g.
  latency distributions.

In Envoy, the stat
[Store](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/include/envoy/stats/store.h#L29)
is a singleton and provides a simple interface through which the rest of the
code can obtain handles to
[scopes](https://github.com/envoyproxy/envoy/blob/958745d658752f90f544296d9e75030519a9fb84/include/envoy/stats/scope.h#L37),
counters, gauges, and histograms. Envoy counters and gauges are periodically
flushed to the sinks (at a configurable interval, ~5 seconds by default). Note
that histogram values are currently sent directly to the sinks. A stat
[Sink](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/include/envoy/stats/sink.h#L48)
is an interface that takes generic stat data and translates it into a
backend-specific wire format. Currently, Envoy supports the TCP and UDP
[statsd](https://github.com/b/statsd_spec) protocol (implemented in
[statsd.h](https://github.com/envoyproxy/envoy/blob/master/source/extensions/stat_sinks/common/statsd/statsd.h)).
Users can create their own Sink subclass to translate Envoy metrics into a
backend-specific format.
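To make "backend-specific wire format" concrete, here is a minimal sketch of statsd serialization. The function names are hypothetical (this is not Envoy's actual Sink API); per the statsd spec, each stat is serialized as a `name:value|type` line, which a real sink would then write to a TCP or UDP socket.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical helpers serializing stats into statsd wire-format lines.
std::string statsdCounter(const std::string& name, uint64_t delta) {
  return name + ":" + std::to_string(delta) + "|c"; // counter
}
std::string statsdGauge(const std::string& name, uint64_t value) {
  return name + ":" + std::to_string(value) + "|g"; // gauge
}
std::string statsdTimerMs(const std::string& name, uint64_t ms) {
  return name + ":" + std::to_string(ms) + "|ms"; // timer / histogram sample
}
```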
Envoy metrics can be defined using a macro, e.g.
**Contributor:** This example is helpful, but also opens up other questions. We're defining metrics here, but they're just names. How are they calculated? Can you add a second code snippet below it, explaining how to collect stats?

**Author:** Done.

**Contributor:** looks great. thank you
```cc
// Define Envoy stats.
#define ALL_CLUSTER_STATS(COUNTER, GAUGE, HISTOGRAM)                           \
  COUNTER(upstream_cx_total)                                                   \
  GAUGE(upstream_cx_active, NeverImport)                                       \
  HISTOGRAM(upstream_cx_length, Milliseconds)

// Put these stats as members of a struct.
struct ClusterStats {
  ALL_CLUSTER_STATS(GENERATE_COUNTER_STRUCT, GENERATE_GAUGE_STRUCT, GENERATE_HISTOGRAM_STRUCT)
};

// Instantiate the above struct using a Stats::Pool.
ClusterStats stats{
    ALL_CLUSTER_STATS(POOL_COUNTER(...), POOL_GAUGE(...), POOL_HISTOGRAM(...))};

// Stats can be updated in the code:
stats.upstream_cx_total_.inc();
stats.upstream_cx_active_.set(...);
stats.upstream_cx_length_.recordValue(...);
```
## Envoy Metrics Limitation
Currently, Envoy metrics don't support key-value maps (labels). As a result,
for metrics to be broken down by a dimension, a separate metric must be defined
for each value of that dimension. For example, Nighthawk currently defines
[separate counters](https://github.com/envoyproxy/nighthawk/blob/master/source/client/benchmark_client_impl.h#L35-L40)
to count the number of responses for each response code class.
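Without labels, the response-code dimension has to be flattened into the metric name. A small illustrative sketch of that bucketing (the helper name is hypothetical; the ranges mirror the per-code counters in the table above):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Map a response code to the counter name that tracks it. Codes outside
// [100, 600) fall into the catch-all http_xxx, matching the table above.
std::string responseCodeCounterName(int64_t status) {
  if (status >= 100 && status < 600) {
    return "http_" + std::to_string(status / 100) + "xx";
  }
  return "http_xxx";
}
```

With label support in the backend, the same data could instead be one counter with a `response_code_class` label.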
## Envoy Metrics Flush
Envoy uses a flush timer to periodically flush metrics to the stat sinks
([here](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/source/server/server.cc#L479-L480))
at a configured interval (5 seconds by default). On every metric flush, Envoy
calls
[flushMetricsToSinks()](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/source/server/server.cc#L175)
to create a metric snapshot from the Envoy stat store and flush the snapshot to
all sinks through `sink->flush(snapshot)`.
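The flush flow can be sketched with stand-in types. The `Sink` and `MetricSnapshot` types below are hypothetical simplifications, not Envoy's actual interfaces: one snapshot of the store is taken and handed to every registered sink.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for Envoy's MetricSnapshot: name -> current value.
using MetricSnapshot = std::map<std::string, uint64_t>;

// Hypothetical stand-in for a stat sink; a real sink would serialize and
// transmit the snapshot, here we just record that a flush happened.
struct Sink {
  std::vector<MetricSnapshot> flushed;
  void flush(const MetricSnapshot& snapshot) { flushed.push_back(snapshot); }
};

// Called from the flush timer at the configured interval (default 5s):
// the same snapshot goes to every sink.
void flushMetricsToSinks(const MetricSnapshot& store_snapshot,
                         std::vector<Sink>& sinks) {
  for (Sink& sink : sinks) {
    sink.flush(store_snapshot);
  }
}
```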
## Metrics Export in Nighthawk
Currently, a single Nighthawk instance can run with multiple workers. In the
future, Nighthawk will be extended to run multiple instances together. Since
each Nighthawk worker sends requests independently, we decided to export
per-worker metrics, which provide several advantages over global metrics
(aggregated across all workers):
- Per-worker metrics expose the performance of each individual worker, which
  global metrics would hide.
- They keep the workers independent, which makes it easier and more efficient
  to scale up to multiple Nighthawks with large numbers of workers. (The work
  to scale up to multiple Nighthawks is still under development.)

**Contributor:** Can we clarify that the ability to scale up multiple nighthawks is still under development, so that users don't get confused?

**Author:** Done.
|
|
||
Envoy metrics can be defined at the per-worker level using a
[Scope](https://github.com/envoyproxy/envoy/blob/e9c2c8c4a0141c9634316e8283f98f412d0dd207/include/envoy/stats/scope.h#L35)
(e.g. `cluster.<worker_id>.total_request_sent`); the dynamic portions of a
metric name (e.g. `worker_id`) are embedded into the name itself. A
[TagSpecifier](https://github.com/envoyproxy/envoy/blob/7a652daf35d7d4a6a6bad5a010fe65947ee4411a/api/envoy/config/metrics/v3/stats.proto#L182)
can be specified in the bootstrap configuration to transform the dynamic
portions into tags. When per-worker metrics are exported from Nighthawk, the
stat Sink can then convert multiple per-worker metrics into a single metric
with a `worker_id` label, if the corresponding backend metric supports
key-value maps.
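Conceptually, tag extraction pulls the dynamic portion out of the flat name. An illustrative sketch (hypothetical helper, not Envoy's tag-extraction code), assuming names shaped like `cluster.<worker_id>.total_request_sent` with a numeric worker id:

```cpp
#include <cassert>
#include <regex>
#include <string>
#include <utility>

// Extract a worker_id tag from names like "cluster.3.total_request_sent",
// returning {tag value, name with the dynamic portion removed}. In Envoy this
// is driven by TagSpecifier regexes from the bootstrap configuration.
std::pair<std::string, std::string> extractWorkerId(const std::string& name) {
  static const std::regex re("^cluster\\.(\\d+)\\.(.+)$");
  std::smatch match;
  if (std::regex_match(name, match, re)) {
    return {match[1].str(), "cluster." + match[2].str()};
  }
  return {"", name}; // No worker id embedded; leave the name untouched.
}
```

After extraction, the per-worker series `cluster.0.…`, `cluster.1.…`, etc. collapse into one metric name carrying a `worker_id` label.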
## Reference
- [Nighthawk: architecture and key concepts](https://github.com/envoyproxy/nighthawk/blob/master/docs/root/overview.md)
- [Envoy Stats System](https://github.com/envoyproxy/envoy/blob/master/source/docs/stats.md)
- [Envoy Stats blog](https://blog.envoyproxy.io/envoy-stats-b65c7f363342)
source/client/benchmark_client_impl.cc
|
|
```diff
@@ -20,6 +20,34 @@ using namespace std::chrono_literals;
 namespace Nighthawk {
 namespace Client {
 
+BenchmarkClientStatistic::BenchmarkClientStatistic(BenchmarkClientStatistic&& statistic) noexcept
+    : connect_statistic(std::move(statistic.connect_statistic)),
+      response_statistic(std::move(statistic.response_statistic)),
+      response_header_size_statistic(std::move(statistic.response_header_size_statistic)),
+      response_body_size_statistic(std::move(statistic.response_body_size_statistic)),
+      latency_1xx_statistic(std::move(statistic.latency_1xx_statistic)),
+      latency_2xx_statistic(std::move(statistic.latency_2xx_statistic)),
+      latency_3xx_statistic(std::move(statistic.latency_3xx_statistic)),
+      latency_4xx_statistic(std::move(statistic.latency_4xx_statistic)),
+      latency_5xx_statistic(std::move(statistic.latency_5xx_statistic)),
+      latency_xxx_statistic(std::move(statistic.latency_xxx_statistic)) {}
+
+BenchmarkClientStatistic::BenchmarkClientStatistic(
+    StatisticPtr&& connect_stat, StatisticPtr&& response_stat,
+    StatisticPtr&& response_header_size_stat, StatisticPtr&& response_body_size_stat,
+    StatisticPtr&& latency_1xx_stat, StatisticPtr&& latency_2xx_stat,
+    StatisticPtr&& latency_3xx_stat, StatisticPtr&& latency_4xx_stat,
+    StatisticPtr&& latency_5xx_stat, StatisticPtr&& latency_xxx_stat)
+    : connect_statistic(std::move(connect_stat)), response_statistic(std::move(response_stat)),
+      response_header_size_statistic(std::move(response_header_size_stat)),
+      response_body_size_statistic(std::move(response_body_size_stat)),
+      latency_1xx_statistic(std::move(latency_1xx_stat)),
+      latency_2xx_statistic(std::move(latency_2xx_stat)),
+      latency_3xx_statistic(std::move(latency_3xx_stat)),
+      latency_4xx_statistic(std::move(latency_4xx_stat)),
+      latency_5xx_statistic(std::move(latency_5xx_stat)),
+      latency_xxx_statistic(std::move(latency_xxx_stat)) {}
+
 Envoy::Http::ConnectionPool::Cancellable*
 Http1PoolImpl::newStream(Envoy::Http::ResponseDecoder& response_decoder,
                          Envoy::Http::ConnectionPool::Callbacks& callbacks) {
```
|
|
```diff
@@ -49,24 +77,26 @@ Http1PoolImpl::newStream(Envoy::Http::ResponseDecoder& response_decoder,
 
 BenchmarkClientHttpImpl::BenchmarkClientHttpImpl(
     Envoy::Api::Api& api, Envoy::Event::Dispatcher& dispatcher, Envoy::Stats::Scope& scope,
-    StatisticPtr&& connect_statistic, StatisticPtr&& response_statistic,
-    StatisticPtr&& response_header_size_statistic, StatisticPtr&& response_body_size_statistic,
-    bool use_h2, Envoy::Upstream::ClusterManagerPtr& cluster_manager,
+    BenchmarkClientStatistic& statistic, bool use_h2,
+    Envoy::Upstream::ClusterManagerPtr& cluster_manager,
     Envoy::Tracing::HttpTracerSharedPtr& http_tracer, absl::string_view cluster_name,
     RequestGenerator request_generator, const bool provide_resource_backpressure)
     : api_(api), dispatcher_(dispatcher), scope_(scope.createScope("benchmark.")),
-      connect_statistic_(std::move(connect_statistic)),
-      response_statistic_(std::move(response_statistic)),
-      response_header_size_statistic_(std::move(response_header_size_statistic)),
-      response_body_size_statistic_(std::move(response_body_size_statistic)), use_h2_(use_h2),
-      benchmark_client_stats_({ALL_BENCHMARK_CLIENT_STATS(POOL_COUNTER(*scope_))}),
+      statistic_(std::move(statistic)), use_h2_(use_h2),
+      benchmark_client_counters_({ALL_BENCHMARK_CLIENT_COUNTERS(POOL_COUNTER(*scope_))}),
       cluster_manager_(cluster_manager), http_tracer_(http_tracer),
       cluster_name_(std::string(cluster_name)), request_generator_(std::move(request_generator)),
       provide_resource_backpressure_(provide_resource_backpressure) {
-  connect_statistic_->setId("benchmark_http_client.queue_to_connect");
-  response_statistic_->setId("benchmark_http_client.request_to_response");
-  response_header_size_statistic_->setId("benchmark_http_client.response_header_size");
-  response_body_size_statistic_->setId("benchmark_http_client.response_body_size");
+  statistic_.connect_statistic->setId("benchmark_http_client.queue_to_connect");
+  statistic_.response_statistic->setId("benchmark_http_client.request_to_response");
+  statistic_.response_header_size_statistic->setId("benchmark_http_client.response_header_size");
+  statistic_.response_body_size_statistic->setId("benchmark_http_client.response_body_size");
+  statistic_.latency_1xx_statistic->setId("benchmark_http_client.latency_1xx");
+  statistic_.latency_2xx_statistic->setId("benchmark_http_client.latency_2xx");
+  statistic_.latency_3xx_statistic->setId("benchmark_http_client.latency_3xx");
+  statistic_.latency_4xx_statistic->setId("benchmark_http_client.latency_4xx");
+  statistic_.latency_5xx_statistic->setId("benchmark_http_client.latency_5xx");
+  statistic_.latency_xxx_statistic->setId("benchmark_http_client.latency_xxx");
 }
 
 void BenchmarkClientHttpImpl::terminate() {
```
|
|
```diff
@@ -79,10 +109,18 @@ void BenchmarkClientHttpImpl::terminate() {
 
 StatisticPtrMap BenchmarkClientHttpImpl::statistics() const {
   StatisticPtrMap statistics;
-  statistics[connect_statistic_->id()] = connect_statistic_.get();
-  statistics[response_statistic_->id()] = response_statistic_.get();
-  statistics[response_header_size_statistic_->id()] = response_header_size_statistic_.get();
-  statistics[response_body_size_statistic_->id()] = response_body_size_statistic_.get();
+  statistics[statistic_.connect_statistic->id()] = statistic_.connect_statistic.get();
+  statistics[statistic_.response_statistic->id()] = statistic_.response_statistic.get();
+  statistics[statistic_.response_header_size_statistic->id()] =
+      statistic_.response_header_size_statistic.get();
+  statistics[statistic_.response_body_size_statistic->id()] =
+      statistic_.response_body_size_statistic.get();
+  statistics[statistic_.latency_1xx_statistic->id()] = statistic_.latency_1xx_statistic.get();
+  statistics[statistic_.latency_2xx_statistic->id()] = statistic_.latency_2xx_statistic.get();
+  statistics[statistic_.latency_3xx_statistic->id()] = statistic_.latency_3xx_statistic.get();
+  statistics[statistic_.latency_4xx_statistic->id()] = statistic_.latency_4xx_statistic.get();
+  statistics[statistic_.latency_5xx_statistic->id()] = statistic_.latency_5xx_statistic.get();
+  statistics[statistic_.latency_xxx_statistic->id()] = statistic_.latency_xxx_statistic.get();
   return statistics;
 };
```
|
|
```diff
@@ -120,9 +158,9 @@ bool BenchmarkClientHttpImpl::tryStartRequest(CompletionCallback caller_completi
 
   auto stream_decoder = new StreamDecoder(
       dispatcher_, api_.timeSource(), *this, std::move(caller_completion_callback),
-      *connect_statistic_, *response_statistic_, *response_header_size_statistic_,
-      *response_body_size_statistic_, request->header(), shouldMeasureLatencies(), content_length,
-      generator_, http_tracer_);
+      *statistic_.connect_statistic, *statistic_.response_statistic,
+      *statistic_.response_header_size_statistic, *statistic_.response_body_size_statistic,
+      request->header(), shouldMeasureLatencies(), content_length, generator_, http_tracer_);
   requests_initiated_++;
   pool_ptr->newStream(*stream_decoder, *stream_decoder);
   return true;
```
|
|
```diff
@@ -132,35 +170,35 @@ void BenchmarkClientHttpImpl::onComplete(bool success,
                                         const Envoy::Http::ResponseHeaderMap& headers) {
   requests_completed_++;
   if (!success) {
-    benchmark_client_stats_.stream_resets_.inc();
+    benchmark_client_counters_.stream_resets_.inc();
   } else {
     ASSERT(headers.Status());
     const int64_t status = Envoy::Http::Utility::getResponseStatus(headers);
 
     if (status > 99 && status <= 199) {
-      benchmark_client_stats_.http_1xx_.inc();
+      benchmark_client_counters_.http_1xx_.inc();
     } else if (status > 199 && status <= 299) {
-      benchmark_client_stats_.http_2xx_.inc();
+      benchmark_client_counters_.http_2xx_.inc();
     } else if (status > 299 && status <= 399) {
-      benchmark_client_stats_.http_3xx_.inc();
+      benchmark_client_counters_.http_3xx_.inc();
     } else if (status > 399 && status <= 499) {
-      benchmark_client_stats_.http_4xx_.inc();
+      benchmark_client_counters_.http_4xx_.inc();
     } else if (status > 499 && status <= 599) {
-      benchmark_client_stats_.http_5xx_.inc();
+      benchmark_client_counters_.http_5xx_.inc();
     } else {
-      benchmark_client_stats_.http_xxx_.inc();
+      benchmark_client_counters_.http_xxx_.inc();
     }
   }
 }
 
 void BenchmarkClientHttpImpl::onPoolFailure(Envoy::Http::ConnectionPool::PoolFailureReason reason) {
   switch (reason) {
   case Envoy::Http::ConnectionPool::PoolFailureReason::Overflow:
-    benchmark_client_stats_.pool_overflow_.inc();
+    benchmark_client_counters_.pool_overflow_.inc();
     break;
   case Envoy::Http::ConnectionPool::PoolFailureReason::LocalConnectionFailure:
   case Envoy::Http::ConnectionPool::PoolFailureReason::RemoteConnectionFailure:
-    benchmark_client_stats_.pool_connection_failure_.inc();
+    benchmark_client_counters_.pool_connection_failure_.inc();
     break;
   case Envoy::Http::ConnectionPool::PoolFailureReason::Timeout:
     break;
```
|
|
```diff
@@ -169,5 +207,22 @@ void BenchmarkClientHttpImpl::onPoolFailure(Envoy::Http::ConnectionPool::PoolFai
   }
 }
 
+void BenchmarkClientHttpImpl::exportLatency(const uint32_t response_code,
+                                            const uint64_t latency_ns) {
+  if (response_code > 99 && response_code <= 199) {
+    statistic_.latency_1xx_statistic->addValue(latency_ns);
+  } else if (response_code > 199 && response_code <= 299) {
+    statistic_.latency_2xx_statistic->addValue(latency_ns);
+  } else if (response_code > 299 && response_code <= 399) {
+    statistic_.latency_3xx_statistic->addValue(latency_ns);
+  } else if (response_code > 399 && response_code <= 499) {
+    statistic_.latency_4xx_statistic->addValue(latency_ns);
+  } else if (response_code > 499 && response_code <= 599) {
+    statistic_.latency_5xx_statistic->addValue(latency_ns);
+  } else {
+    statistic_.latency_xxx_statistic->addValue(latency_ns);
+  }
+}
+
 } // namespace Client
 } // namespace Nighthawk
```

**Contributor:** optional: it seems very odd to me to notate this expression this way. It's more conventional to do:

**Author:** ah, I was following the existing code in `source/client/benchmark_client_impl.cc` (lines 140 to 152 at c57811f).
**Contributor:** Can we format this document so that each line is at most 80 characters?

**Author:** Done.