Merged

49 commits
e5d13ea
Add counter/histogram to record total number of requests and latency …
qqustc Jun 22, 2020
37ef0db
Update stream_decoder.cc
qqustc Jun 23, 2020
ecc9f1a
Fix format source/client/stream_decoder.cc
qqustc Jun 24, 2020
dd611a9
Merge remote-tracking branch 'upstream/master'
qqustc Jun 30, 2020
af1ea54
Add new NH statistic class CircllhistStatistic and SinkableCircllhist…
qqustc Jul 1, 2020
43d7ee2
Update statistic_impl.h
qqustc Jul 6, 2020
77249b9
Update statistic_impl.cc
qqustc Jul 6, 2020
bdecd69
Update statistic_test.cc
qqustc Jul 6, 2020
2c14561
Update circllhist_proto_json.gold
qqustc Jul 6, 2020
97ccc8a
Update statistic_impl.h
qqustc Jul 6, 2020
6876450
Update statistic_test.cc
qqustc Jul 6, 2020
8fd0a05
Refactor SinkableCircllhistStatistic
qqustc Jul 6, 2020
cd6b29e
fix clang_tidy
qqustc Jul 7, 2020
684fe0b
Update statistic_test.cc
qqustc Jul 7, 2020
dc655a4
replace ASSERT with RELEASE_ASSERT
qqustc Jul 7, 2020
fc298c5
update statistic_test.cc
qqustc Jul 7, 2020
e1356ec
update statistic_test.cc
qqustc Jul 8, 2020
a9d4d5d
update statistic_impl.h and change coverage threshold
qqustc Jul 8, 2020
e4d6672
format
qqustc Jul 8, 2020
db78892
update statistic_impl
qqustc Jul 10, 2020
8903b60
update statistic_impl.cc
qqustc Jul 10, 2020
69f270d
Merge branch 'pr1' of https://github.com/qqustc/nighthawk into master
qqustc Jul 13, 2020
0fb2d12
update latency histogram to use nh statistics
qqustc Jul 13, 2020
4ce94f2
delete stat.doc
qqustc Jul 14, 2020
0f4fc95
add latency stat
qqustc Jul 14, 2020
190299c
fix include
qqustc Jul 14, 2020
4ef322d
fix stream_decoder.cc
qqustc Jul 14, 2020
66e1e89
fix factories_test.cc
qqustc Jul 14, 2020
a0cba74
format file
qqustc Jul 14, 2020
ccc055f
fix format
qqustc Jul 14, 2020
eaf6b84
Merge branch 'master' into pr3
qqustc Jul 14, 2020
d6e2a47
Create statistics.md
qqustc Jul 15, 2020
2e3762b
update doc
qqustc Jul 15, 2020
a712eb2
Merge branch 'master' into pr3
qqustc Jul 15, 2020
a71dad8
update stat
qqustc Jul 15, 2020
be4533c
fix mock_benchmark_client_factory.h
qqustc Jul 15, 2020
f8d5696
fix tests
qqustc Jul 16, 2020
aee69ae
Merge branch 'master' into pr3
qqustc Jul 16, 2020
6b36a64
fix format
qqustc Jul 16, 2020
302807d
fix test
qqustc Jul 16, 2020
49e708e
fix format
qqustc Jul 16, 2020
60e46cd
rerun CI
qqustc Jul 16, 2020
c801782
rerun CI
qqustc Jul 16, 2020
9255197
rerun CI
qqustc Jul 16, 2020
d56991f
add constructor to stat struct
qqustc Jul 20, 2020
cc7ecc3
delete module
qqustc Jul 20, 2020
8d92933
rerun CI
qqustc Jul 20, 2020
d8f4d31
change sink_stat_prefix to worker_id
qqustc Jul 22, 2020
5c36515
fix clang
qqustc Jul 22, 2020
134 changes: 134 additions & 0 deletions docs/root/statistics.md
@@ -0,0 +1,134 @@
# Nighthawk Statistics
Collaborator:
Can we format this document so that each line is at most 80 characters?

Contributor Author:
Done.

## Background
Currently Nighthawk only outputs metrics at the end of a test run; nothing is
streamed while a run is in progress. The work to stream metrics during a run
is underway.


## Statistics in Nighthawk
All the statistics defined below are tracked per worker.

For counter metrics, Nighthawk uses Envoy's Counter directly. For histogram
metrics, Nighthawk wraps Envoy's Histogram in its own Statistic concept (see
[#391](https://github.com/envoyproxy/nighthawk/pull/391)).

Name | Type | Description
-----| ----- | ----------------
upstream_rq_total | Counter | Total number of requests sent from Nighthawk
http_1xx | Counter | Total number of responses with code 1xx
http_2xx | Counter | Total number of responses with code 2xx
http_3xx | Counter | Total number of responses with code 3xx
http_4xx | Counter | Total number of responses with code 4xx
http_5xx | Counter | Total number of responses with code 5xx
http_xxx | Counter | Total number of responses with code <100 or >=600
stream_resets | Counter | Total number of stream resets
pool_overflow | Counter | Total number of times the connection pool overflowed
pool_connection_failure | Counter | Total number of pool connection failures
benchmark_http_client.latency_1xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 1xx
benchmark_http_client.latency_2xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 2xx
benchmark_http_client.latency_3xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 3xx
benchmark_http_client.latency_4xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 4xx
benchmark_http_client.latency_5xx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code 5xx
benchmark_http_client.latency_xxx | HdrStatistic | Latency histogram (in nanoseconds) of requests with code <100 or >=600
benchmark_http_client.queue_to_connect | HdrStatistic | Histogram of request connection time (in nanoseconds)
benchmark_http_client.request_to_response | HdrStatistic | Latency histogram (in nanoseconds); includes requests that ended in a stream reset or pool failure
benchmark_http_client.response_header_size | StreamingStatistic | Statistic of response header size (min, max, mean, pstdev values in bytes)
Contributor:
It's not clear to me reading this what these statistics are going to look like. If HdrStatistic is a histogram, what is StreamingStatistic? If it's also a histogram, how does it differ from HdrStatistic?

Contributor Author:
Both StreamingStatistic and HdrStatistic are implementations of the NH Statistic concept. The difference is that HdrStatistic is a "real" histogram (it stores the distribution and provides detailed percentile values), while StreamingStatistic is not exactly a histogram (it only provides min, max, mean, and pstdev values, with no percentiles).

As shown in this table, different NH metrics choose different implementations of NH Statistic.

benchmark_http_client.response_body_size | StreamingStatistic | Statistic of response body size (min, max, mean, pstdev values in bytes)
sequencer.callback | HdrStatistic | Latency histogram (in nanoseconds) of unblocked requests
sequencer.blocking | HdrStatistic | Latency histogram (in nanoseconds) of blocked requests


## Envoy Metrics Model

[Envoy](https://github.com/envoyproxy/envoy) has 3 types of metrics:
- Counters: Unsigned integers that can only increase, representing how many
times an event happened, e.g. the total number of requests.
- Gauges: Unsigned integers that can increase or decrease, e.g. the number of
active connections.
- Histograms: Unsigned integers that yield summarized percentile values,
e.g. latency distributions.

In Envoy, the stat
[Store](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/include/envoy/stats/store.h#L29)
is a singleton and provides a simple interface by which the rest of the code can
obtain handles to
[scopes](https://github.com/envoyproxy/envoy/blob/958745d658752f90f544296d9e75030519a9fb84/include/envoy/stats/scope.h#L37),
counters, gauges, and histograms. Envoy counters and gauges are periodically
flushed to the sinks (at a configured interval, roughly every 5 seconds). Note
that currently histogram values are sent directly to the sinks. A stat
[Sink](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/include/envoy/stats/sink.h#L48)
is an interface that takes generic stat data and translates it into a
backend-specific wire format. Currently Envoy supports the TCP and UDP
[statsd](https://github.com/b/statsd_spec) protocol (implemented in
[statsd.h](https://github.com/envoyproxy/envoy/blob/master/source/extensions/stat_sinks/common/statsd/statsd.h)).
Users can create their own Sink subclass to translate Envoy metrics into a
backend-specific format.

Envoy metrics can be defined using a macro, e.g.
Contributor:
This example is helpful, but also opens up other questions. We're defining metrics here, but they're just names. How are they calculated? Can you add a second code snippet below it, explaining how to collect stats?

Contributor Author:
Done.

Contributor:
looks great. thank you

```cc
// Define Envoy stats.
#define ALL_CLUSTER_STATS(COUNTER, GAUGE, HISTOGRAM)                           \
  COUNTER(upstream_cx_total)                                                   \
  GAUGE(upstream_cx_active, NeverImport)                                       \
  HISTOGRAM(upstream_cx_length, Milliseconds)

// Put these stats as members of a struct.
struct ClusterStats {
  ALL_CLUSTER_STATS(GENERATE_COUNTER_STRUCT, GENERATE_GAUGE_STRUCT,
                    GENERATE_HISTOGRAM_STRUCT)
};

// Instantiate the above struct using a Stats::Pool.
ClusterStats stats{
    ALL_CLUSTER_STATS(POOL_COUNTER(...), POOL_GAUGE(...), POOL_HISTOGRAM(...))};

// Stats can be updated in the code:
stats.upstream_cx_total_.inc();
stats.upstream_cx_active_.set(...);
stats.upstream_cx_length_.recordValue(...);
```

## Envoy Metrics Limitation
Currently Envoy metrics don't support key-value maps (labels). As a result,
for metrics to be broken down along some dimension, a separate metric must be
defined for each value of that dimension. For example, Nighthawk currently
defines
[separate counters](https://github.com/envoyproxy/nighthawk/blob/master/source/client/benchmark_client_impl.h#L35-L40)
to count responses by response code class.

## Envoy Metrics Flush
Envoy uses a flush timer to periodically flush metrics to the stat sinks
([here](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/source/server/server.cc#L479-L480))
at a configured interval (defaulting to 5 seconds). On every metric flush,
Envoy calls
[flushMetricsToSinks()](https://github.com/envoyproxy/envoy/blob/74530c92cfa3682b49b540fddf2aba40ac10c68e/source/server/server.cc#L175)
to create a metric snapshot from the Envoy stat store and flush that snapshot
to all sinks through `sink->flush(snapshot)`.


## Metrics Export in Nighthawk
Currently a single Nighthawk instance can run with multiple workers. In the
future, Nighthawk will be extended to run multiple instances together. Since
each Nighthawk worker sends requests independently, we decided to export
per-worker metrics, which provide several advantages over global metrics
(aggregated across all workers).
- Per-worker metrics expose the performance of each individual worker, which
would be hidden by global metrics.
- Keeps the workers independent, which makes it easier and more efficient to scale up to
Contributor:
Can we clarify that the ability to scale up multiple nighthawks is still under development, so that users don't get confused?

Contributor Author:
Done.

multiple Nighthawks with large numbers of workers. (The work to scale up to
multiple Nighthawks is still under development.)

Envoy metrics can be defined at the per-worker level using a
[Scope](https://github.com/envoyproxy/envoy/blob/e9c2c8c4a0141c9634316e8283f98f412d0dd207/include/envoy/stats/scope.h#L35)
(e.g. `cluster.<worker_id>.total_request_sent`). The dynamic portions of a
metric name (e.g. `worker_id`) can be embedded into the name itself. A
[TagSpecifier](https://github.com/envoyproxy/envoy/blob/7a652daf35d7d4a6a6bad5a010fe65947ee4411a/api/envoy/config/metrics/v3/stats.proto#L182)
can be specified in the bootstrap configuration to transform those dynamic
portions into tags. When per-worker metrics are exported from Nighthawk,
multiple per-worker metrics can then be converted into a single metric with a
`worker_id` label in the stat Sink, provided the corresponding backend metric
supports key-value maps.

## References
- [Nighthawk: architecture and key
concepts](https://github.com/envoyproxy/nighthawk/blob/master/docs/root/overview.md)
- [Envoy Stats
System](https://github.com/envoyproxy/envoy/blob/master/source/docs/stats.md)
- [Envoy Stats blog](https://blog.envoyproxy.io/envoy-stats-b65c7f363342)
9 changes: 5 additions & 4 deletions include/nighthawk/client/factories.h
Expand Up @@ -33,9 +33,10 @@ class BenchmarkClientFactory {
* @param scope stats scope for any stats tracked by the benchmark client.
* @param cluster_manager Cluster manager preconfigured with our target cluster.
* @param http_tracer Shared pointer to an http tracer implementation (e.g. Zipkin).
* @param cluster_name Name of the cluster that this benchmark client will use. In conjunction
* with cluster_manager this will allow the this BenchmarkClient to access the target connection
* pool.
 * @param cluster_name Name of the cluster that this benchmark client
 * will use. In conjunction with cluster_manager this will allow this BenchmarkClient to
 * access the target connection pool.
* @param worker_id Worker number.
* @param request_source Source of request-specifiers. Will be queries every time the
* BenchmarkClient is asked to issue a request.
*
Expand All @@ -45,7 +46,7 @@ class BenchmarkClientFactory {
Envoy::Stats::Scope& scope,
Envoy::Upstream::ClusterManagerPtr& cluster_manager,
Envoy::Tracing::HttpTracerSharedPtr& http_tracer,
absl::string_view cluster_name,
absl::string_view cluster_name, int worker_id,
RequestSource& request_source) const PURE;
};

Expand Down
113 changes: 84 additions & 29 deletions source/client/benchmark_client_impl.cc
Expand Up @@ -20,6 +20,34 @@ using namespace std::chrono_literals;
namespace Nighthawk {
namespace Client {

BenchmarkClientStatistic::BenchmarkClientStatistic(BenchmarkClientStatistic&& statistic) noexcept
: connect_statistic(std::move(statistic.connect_statistic)),
response_statistic(std::move(statistic.response_statistic)),
response_header_size_statistic(std::move(statistic.response_header_size_statistic)),
response_body_size_statistic(std::move(statistic.response_body_size_statistic)),
latency_1xx_statistic(std::move(statistic.latency_1xx_statistic)),
latency_2xx_statistic(std::move(statistic.latency_2xx_statistic)),
latency_3xx_statistic(std::move(statistic.latency_3xx_statistic)),
latency_4xx_statistic(std::move(statistic.latency_4xx_statistic)),
latency_5xx_statistic(std::move(statistic.latency_5xx_statistic)),
latency_xxx_statistic(std::move(statistic.latency_xxx_statistic)) {}

BenchmarkClientStatistic::BenchmarkClientStatistic(
StatisticPtr&& connect_stat, StatisticPtr&& response_stat,
StatisticPtr&& response_header_size_stat, StatisticPtr&& response_body_size_stat,
StatisticPtr&& latency_1xx_stat, StatisticPtr&& latency_2xx_stat,
StatisticPtr&& latency_3xx_stat, StatisticPtr&& latency_4xx_stat,
StatisticPtr&& latency_5xx_stat, StatisticPtr&& latency_xxx_stat)
: connect_statistic(std::move(connect_stat)), response_statistic(std::move(response_stat)),
response_header_size_statistic(std::move(response_header_size_stat)),
response_body_size_statistic(std::move(response_body_size_stat)),
latency_1xx_statistic(std::move(latency_1xx_stat)),
latency_2xx_statistic(std::move(latency_2xx_stat)),
latency_3xx_statistic(std::move(latency_3xx_stat)),
latency_4xx_statistic(std::move(latency_4xx_stat)),
latency_5xx_statistic(std::move(latency_5xx_stat)),
latency_xxx_statistic(std::move(latency_xxx_stat)) {}

Envoy::Http::ConnectionPool::Cancellable*
Http1PoolImpl::newStream(Envoy::Http::ResponseDecoder& response_decoder,
Envoy::Http::ConnectionPool::Callbacks& callbacks) {
Expand Down Expand Up @@ -49,24 +77,26 @@ Http1PoolImpl::newStream(Envoy::Http::ResponseDecoder& response_decoder,

BenchmarkClientHttpImpl::BenchmarkClientHttpImpl(
Envoy::Api::Api& api, Envoy::Event::Dispatcher& dispatcher, Envoy::Stats::Scope& scope,
StatisticPtr&& connect_statistic, StatisticPtr&& response_statistic,
StatisticPtr&& response_header_size_statistic, StatisticPtr&& response_body_size_statistic,
bool use_h2, Envoy::Upstream::ClusterManagerPtr& cluster_manager,
BenchmarkClientStatistic& statistic, bool use_h2,
Envoy::Upstream::ClusterManagerPtr& cluster_manager,
Envoy::Tracing::HttpTracerSharedPtr& http_tracer, absl::string_view cluster_name,
RequestGenerator request_generator, const bool provide_resource_backpressure)
: api_(api), dispatcher_(dispatcher), scope_(scope.createScope("benchmark.")),
connect_statistic_(std::move(connect_statistic)),
response_statistic_(std::move(response_statistic)),
response_header_size_statistic_(std::move(response_header_size_statistic)),
response_body_size_statistic_(std::move(response_body_size_statistic)), use_h2_(use_h2),
benchmark_client_stats_({ALL_BENCHMARK_CLIENT_STATS(POOL_COUNTER(*scope_))}),
statistic_(std::move(statistic)), use_h2_(use_h2),
benchmark_client_counters_({ALL_BENCHMARK_CLIENT_COUNTERS(POOL_COUNTER(*scope_))}),
cluster_manager_(cluster_manager), http_tracer_(http_tracer),
cluster_name_(std::string(cluster_name)), request_generator_(std::move(request_generator)),
provide_resource_backpressure_(provide_resource_backpressure) {
connect_statistic_->setId("benchmark_http_client.queue_to_connect");
response_statistic_->setId("benchmark_http_client.request_to_response");
response_header_size_statistic_->setId("benchmark_http_client.response_header_size");
response_body_size_statistic_->setId("benchmark_http_client.response_body_size");
statistic_.connect_statistic->setId("benchmark_http_client.queue_to_connect");
statistic_.response_statistic->setId("benchmark_http_client.request_to_response");
statistic_.response_header_size_statistic->setId("benchmark_http_client.response_header_size");
statistic_.response_body_size_statistic->setId("benchmark_http_client.response_body_size");
statistic_.latency_1xx_statistic->setId("benchmark_http_client.latency_1xx");
statistic_.latency_2xx_statistic->setId("benchmark_http_client.latency_2xx");
statistic_.latency_3xx_statistic->setId("benchmark_http_client.latency_3xx");
statistic_.latency_4xx_statistic->setId("benchmark_http_client.latency_4xx");
statistic_.latency_5xx_statistic->setId("benchmark_http_client.latency_5xx");
statistic_.latency_xxx_statistic->setId("benchmark_http_client.latency_xxx");
}

void BenchmarkClientHttpImpl::terminate() {
Expand All @@ -79,10 +109,18 @@ void BenchmarkClientHttpImpl::terminate() {

StatisticPtrMap BenchmarkClientHttpImpl::statistics() const {
StatisticPtrMap statistics;
statistics[connect_statistic_->id()] = connect_statistic_.get();
statistics[response_statistic_->id()] = response_statistic_.get();
statistics[response_header_size_statistic_->id()] = response_header_size_statistic_.get();
statistics[response_body_size_statistic_->id()] = response_body_size_statistic_.get();
statistics[statistic_.connect_statistic->id()] = statistic_.connect_statistic.get();
statistics[statistic_.response_statistic->id()] = statistic_.response_statistic.get();
statistics[statistic_.response_header_size_statistic->id()] =
statistic_.response_header_size_statistic.get();
statistics[statistic_.response_body_size_statistic->id()] =
statistic_.response_body_size_statistic.get();
statistics[statistic_.latency_1xx_statistic->id()] = statistic_.latency_1xx_statistic.get();
statistics[statistic_.latency_2xx_statistic->id()] = statistic_.latency_2xx_statistic.get();
statistics[statistic_.latency_3xx_statistic->id()] = statistic_.latency_3xx_statistic.get();
statistics[statistic_.latency_4xx_statistic->id()] = statistic_.latency_4xx_statistic.get();
statistics[statistic_.latency_5xx_statistic->id()] = statistic_.latency_5xx_statistic.get();
statistics[statistic_.latency_xxx_statistic->id()] = statistic_.latency_xxx_statistic.get();
return statistics;
};

Expand Down Expand Up @@ -120,9 +158,9 @@ bool BenchmarkClientHttpImpl::tryStartRequest(CompletionCallback caller_completi

auto stream_decoder = new StreamDecoder(
dispatcher_, api_.timeSource(), *this, std::move(caller_completion_callback),
*connect_statistic_, *response_statistic_, *response_header_size_statistic_,
*response_body_size_statistic_, request->header(), shouldMeasureLatencies(), content_length,
generator_, http_tracer_);
*statistic_.connect_statistic, *statistic_.response_statistic,
*statistic_.response_header_size_statistic, *statistic_.response_body_size_statistic,
request->header(), shouldMeasureLatencies(), content_length, generator_, http_tracer_);
requests_initiated_++;
pool_ptr->newStream(*stream_decoder, *stream_decoder);
return true;
Expand All @@ -132,35 +170,35 @@ void BenchmarkClientHttpImpl::onComplete(bool success,
const Envoy::Http::ResponseHeaderMap& headers) {
requests_completed_++;
if (!success) {
benchmark_client_stats_.stream_resets_.inc();
benchmark_client_counters_.stream_resets_.inc();
} else {
ASSERT(headers.Status());
const int64_t status = Envoy::Http::Utility::getResponseStatus(headers);

if (status > 99 && status <= 199) {
benchmark_client_stats_.http_1xx_.inc();
benchmark_client_counters_.http_1xx_.inc();
} else if (status > 199 && status <= 299) {
benchmark_client_stats_.http_2xx_.inc();
benchmark_client_counters_.http_2xx_.inc();
} else if (status > 299 && status <= 399) {
benchmark_client_stats_.http_3xx_.inc();
benchmark_client_counters_.http_3xx_.inc();
} else if (status > 399 && status <= 499) {
benchmark_client_stats_.http_4xx_.inc();
benchmark_client_counters_.http_4xx_.inc();
} else if (status > 499 && status <= 599) {
benchmark_client_stats_.http_5xx_.inc();
benchmark_client_counters_.http_5xx_.inc();
} else {
benchmark_client_stats_.http_xxx_.inc();
benchmark_client_counters_.http_xxx_.inc();
}
}
}

void BenchmarkClientHttpImpl::onPoolFailure(Envoy::Http::ConnectionPool::PoolFailureReason reason) {
switch (reason) {
case Envoy::Http::ConnectionPool::PoolFailureReason::Overflow:
benchmark_client_stats_.pool_overflow_.inc();
benchmark_client_counters_.pool_overflow_.inc();
break;
case Envoy::Http::ConnectionPool::PoolFailureReason::LocalConnectionFailure:
case Envoy::Http::ConnectionPool::PoolFailureReason::RemoteConnectionFailure:
benchmark_client_stats_.pool_connection_failure_.inc();
benchmark_client_counters_.pool_connection_failure_.inc();
break;
case Envoy::Http::ConnectionPool::PoolFailureReason::Timeout:
break;
Expand All @@ -169,5 +207,22 @@ void BenchmarkClientHttpImpl::onPoolFailure(Envoy::Http::ConnectionPool::PoolFai
}
}

void BenchmarkClientHttpImpl::exportLatency(const uint32_t response_code,
const uint64_t latency_ns) {
if (response_code > 99 && response_code <= 199) {
Contributor:
optional: it seems very odd to me to notate this expression this way. It's more conventional to do:
`response_code >= 100 && response_code < 200`

Contributor Author:
ah, I was following the code here.

```cc
if (status > 99 && status <= 199) {
  benchmark_client_stats_.http_1xx_.inc();
} else if (status > 199 && status <= 299) {
  benchmark_client_stats_.http_2xx_.inc();
} else if (status > 299 && status <= 399) {
  benchmark_client_stats_.http_3xx_.inc();
} else if (status > 399 && status <= 499) {
  benchmark_client_stats_.http_4xx_.inc();
} else if (status > 499 && status <= 599) {
  benchmark_client_stats_.http_5xx_.inc();
} else {
  benchmark_client_stats_.http_xxx_.inc();
}
```

statistic_.latency_1xx_statistic->addValue(latency_ns);
} else if (response_code > 199 && response_code <= 299) {
statistic_.latency_2xx_statistic->addValue(latency_ns);
} else if (response_code > 299 && response_code <= 399) {
statistic_.latency_3xx_statistic->addValue(latency_ns);
} else if (response_code > 399 && response_code <= 499) {
statistic_.latency_4xx_statistic->addValue(latency_ns);
} else if (response_code > 499 && response_code <= 599) {
statistic_.latency_5xx_statistic->addValue(latency_ns);
} else {
statistic_.latency_xxx_statistic->addValue(latency_ns);
}
}

} // namespace Client
} // namespace Nighthawk