Merged
Changes from all commits
42 commits
c226497
Save state on origin timing
oschaaf Jun 15, 2020
c6697de
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Jun 15, 2020
28e0d34
Fix clang-tidy, format, and test build
oschaaf Jun 16, 2020
94ad984
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 19, 2020
933c589
stats naming
oschaaf Aug 19, 2020
a1691ec
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 21, 2020
740d0ca
Track request receipt deltas. Configure response header name.
oschaaf Aug 21, 2020
1afe2f1
Save state before context switching
oschaaf Aug 21, 2020
ae0c08e
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 21, 2020
3487e19
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 24, 2020
1d1f03a
Fix format
oschaaf Aug 24, 2020
a9d99eb
Suppress clang-tidy on MUTABLE_CONSTRUCT_ON_FIRST_USE
oschaaf Aug 24, 2020
0aa8630
Proper locking, add TODOs and doc comments.
oschaaf Aug 24, 2020
db01e40
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 24, 2020
5e48513
Small fixes
oschaaf Aug 24, 2020
d9569cb
Add unit test for StreamDecoder
oschaaf Aug 25, 2020
cf1170c
Review feedback
oschaaf Aug 26, 2020
421feb8
check_format introduced proto comment issues. Add punctuation.
oschaaf Aug 26, 2020
af148ea
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 27, 2020
e813e75
Review feedback
oschaaf Aug 27, 2020
505495c
Save state on splitting out a separate extension and stopwatch
oschaaf Aug 27, 2020
5405dc9
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 27, 2020
d858b55
Small corrections
oschaaf Aug 28, 2020
3cf32b8
Add threaded spam test for stopwatch. Add doc comments.
oschaaf Aug 28, 2020
35aee47
clang-tidy: fix for loop
oschaaf Aug 28, 2020
027c00f
Merge remote-tracking branch 'upstream/master' into origin-timings
oschaaf Aug 29, 2020
70502e1
Review feedback: doc comments
oschaaf Aug 29, 2020
a55bbe7
Wire up proto option
oschaaf Aug 31, 2020
d199313
Wire up TCLAP, add OptionImpl tests, regen CLI docs.
oschaaf Aug 31, 2020
a16d8dd
Address clang-tidy nit, better CLI/proto option description.
oschaaf Aug 31, 2020
f3008d9
Build time-tracking extension into test server. Add end-to-end test.
oschaaf Aug 31, 2020
7774e2d
review feedback
oschaaf Sep 1, 2020
937fd88
Merge branch 'origin-timings' into origin-timings-tracking-option
oschaaf Sep 1, 2020
8152829
Fix clang-tidy nit.
oschaaf Sep 1, 2020
64cb19c
Merge remote-tracking branch 'upstream/master' into origin-timings-tr…
oschaaf Sep 1, 2020
af60b8d
Replace stopwatch shared_ptr with unique_ptr
oschaaf Sep 1, 2020
3996c65
Fix a few small nits & improve option description.
oschaaf Sep 1, 2020
228934a
Merge remote-tracking branch 'upstream/master' into origin-timings-tr…
oschaaf Sep 5, 2020
00b6861
Fix todo
oschaaf Sep 5, 2020
68ed58a
Review feedback
oschaaf Sep 9, 2020
74f9717
Merge remote-tracking branch 'upstream/master' into origin-timings-tr…
oschaaf Sep 9, 2020
594bbbe
Review feedback: option rename
oschaaf Sep 9, 2020
1 change: 1 addition & 0 deletions BUILD
Original file line number Diff line number Diff line change
Expand Up @@ -32,6 +32,7 @@ envoy_cc_binary(
deps = [
"//source/server:http_dynamic_delay_filter_config",
"//source/server:http_test_server_filter_config",
"//source/server:http_time_tracking_filter_config",
"@envoy//source/exe:envoy_main_entry_lib",
],
)
10 changes: 9 additions & 1 deletion README.md
Expand Up @@ -43,7 +43,8 @@ bazel build -c opt //:nighthawk
```
USAGE:

bazel-bin/nighthawk_client [--stats-flush-interval <uint32_t>]
bazel-bin/nighthawk_client [--latency-response-header-name <string>]
[--stats-flush-interval <uint32_t>]
[--stats-sinks <string>] ...
[--no-duration] [--simple-warmup]
[--request-source <uri format>] [--label
Expand Down Expand Up @@ -80,6 +81,13 @@ format>

Where:

--latency-response-header-name <string>
Set an optional header name that will be returned in responses, whose
values will be tracked in a latency histogram if set. Can be used in
tandem with the test server's response option
"emit_previous_request_delta_in_response_header" to record elapsed
time between request arrivals. Default: ""

--stats-flush-interval <uint32_t>
Time interval (in seconds) between flushes to configured stats sinks.
Default: 5.
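Taken together, the client flag and the server's `emit_previous_request_delta_in_response_header` option describe a simple protocol: the server stamps each response, except the first, with the time elapsed since the previous request arrived, and the client feeds those values into a histogram. A minimal Python sketch of the server side (hypothetical class name and illustrative header name; not Nighthawk code):

```python
import time

class TimeTrackingFilter:
    """Sketch of a server-side filter that emits the delta between
    consecutive request arrivals in a response header."""

    def __init__(self, header_name="x-origin-request-receipt-delta"):
        self.header_name = header_name
        self.previous_arrival_ns = None

    def on_request(self):
        # Record the arrival time and, when a previous request exists,
        # emit the elapsed time since that request as a response header.
        now_ns = time.monotonic_ns()
        headers = {}
        if self.previous_arrival_ns is not None:
            headers[self.header_name] = str(now_ns - self.previous_arrival_ns)
        self.previous_arrival_ns = now_ns
        return headers

f = TimeTrackingFilter()
first = f.on_request()   # no previous request, so no delta header
second = f.on_request()  # carries nanoseconds since the first request
```

Note that the very first request yields no header at all, which is why a run of N requests produces at most N-1 delta samples.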
5 changes: 5 additions & 0 deletions api/client/options.proto
Expand Up @@ -202,4 +202,9 @@ message CommandLineOptions {
// specified the default is 5 seconds. Time interval must be at least 1s and at most 300s.
google.protobuf.UInt32Value stats_flush_interval = 35
[(validate.rules).uint32 = {gte: 1, lte: 300}];
// Set an optional header name that will be returned in responses, whose values will be tracked in
// a latency histogram if set. Can be used in tandem with the test server's response option
// "emit_previous_request_delta_in_response_header" to record elapsed time between request
// arrivals.
google.protobuf.StringValue latency_response_header_name = 36;
}
1 change: 1 addition & 0 deletions include/nighthawk/client/options.h
Expand Up @@ -71,6 +71,7 @@ class Options {
virtual bool noDuration() const PURE;
virtual std::vector<envoy::config::metrics::v3::StatsSink> statsSinks() const PURE;
virtual uint32_t statsFlushInterval() const PURE;
virtual std::string responseHeaderWithLatencyInput() const PURE;

/**
* Converts an Options instance to an equivalent CommandLineOptions instance in terms of option
8 changes: 5 additions & 3 deletions source/client/benchmark_client_impl.cc
Expand Up @@ -83,13 +83,15 @@ BenchmarkClientHttpImpl::BenchmarkClientHttpImpl(
BenchmarkClientStatistic& statistic, bool use_h2,
Envoy::Upstream::ClusterManagerPtr& cluster_manager,
Envoy::Tracing::HttpTracerSharedPtr& http_tracer, absl::string_view cluster_name,
RequestGenerator request_generator, const bool provide_resource_backpressure)
RequestGenerator request_generator, const bool provide_resource_backpressure,
absl::string_view latency_response_header_name)
: api_(api), dispatcher_(dispatcher), scope_(scope.createScope("benchmark.")),
statistic_(std::move(statistic)), use_h2_(use_h2),
benchmark_client_counters_({ALL_BENCHMARK_CLIENT_COUNTERS(POOL_COUNTER(*scope_))}),
cluster_manager_(cluster_manager), http_tracer_(http_tracer),
cluster_name_(std::string(cluster_name)), request_generator_(std::move(request_generator)),
provide_resource_backpressure_(provide_resource_backpressure) {
provide_resource_backpressure_(provide_resource_backpressure),
latency_response_header_name_(latency_response_header_name) {
statistic_.connect_statistic->setId("benchmark_http_client.queue_to_connect");
statistic_.response_statistic->setId("benchmark_http_client.request_to_response");
statistic_.response_header_size_statistic->setId("benchmark_http_client.response_header_size");
Expand Down Expand Up @@ -166,7 +168,7 @@ bool BenchmarkClientHttpImpl::tryStartRequest(CompletionCallback caller_completi
*statistic_.connect_statistic, *statistic_.response_statistic,
*statistic_.response_header_size_statistic, *statistic_.response_body_size_statistic,
*statistic_.origin_latency_statistic, request->header(), shouldMeasureLatencies(),
content_length, generator_, http_tracer_);
content_length, generator_, http_tracer_, latency_response_header_name_);
requests_initiated_++;
pool_ptr->newStream(*stream_decoder, *stream_decoder);
return true;
4 changes: 3 additions & 1 deletion source/client/benchmark_client_impl.h
Expand Up @@ -106,7 +106,8 @@ class BenchmarkClientHttpImpl : public BenchmarkClient,
bool use_h2, Envoy::Upstream::ClusterManagerPtr& cluster_manager,
Envoy::Tracing::HttpTracerSharedPtr& http_tracer,
absl::string_view cluster_name, RequestGenerator request_generator,
const bool provide_resource_backpressure);
const bool provide_resource_backpressure,
absl::string_view latency_response_header_name);
void setConnectionLimit(uint32_t connection_limit) { connection_limit_ = connection_limit; }
void setMaxPendingRequests(uint32_t max_pending_requests) {
max_pending_requests_ = max_pending_requests;
Expand Down Expand Up @@ -162,6 +163,7 @@ class BenchmarkClientHttpImpl : public BenchmarkClient,
std::string cluster_name_;
const RequestGenerator request_generator_;
const bool provide_resource_backpressure_;
const std::string latency_response_header_name_;
};

} // namespace Client
2 changes: 1 addition & 1 deletion source/client/factories_impl.cc
Expand Up @@ -51,7 +51,7 @@ BenchmarkClientPtr BenchmarkClientFactoryImpl::create(
std::make_unique<SinkableHdrStatistic>(scope, worker_id));
auto benchmark_client = std::make_unique<BenchmarkClientHttpImpl>(
api, dispatcher, scope, statistic, options_.h2(), cluster_manager, http_tracer, cluster_name,
request_generator.get(), !options_.openLoop());
request_generator.get(), !options_.openLoop(), options_.responseHeaderWithLatencyInput());
auto request_options = options_.toCommandLineOptions()->request_options();
benchmark_client->setConnectionLimit(options_.connections());
benchmark_client->setMaxPendingRequests(options_.maxPendingRequests());
16 changes: 16 additions & 0 deletions source/client/options_impl.cc
Expand Up @@ -294,6 +294,16 @@ OptionsImpl::OptionsImpl(int argc, const char* const* argv) {
stats_flush_interval_),
false, 5, "uint32_t", cmd);

TCLAP::ValueArg<std::string> latency_response_header_name(
"", "latency-response-header-name",
"Set an optional header name that will be returned in responses, whose values will be "
"tracked in a latency histogram if set. "
"Can be used in tandem with the test server's response option "
"\"emit_previous_request_delta_in_response_header\" to record elapsed time between request "
"arrivals. "
"Default: \"\"",
false, "", "string", cmd);

Utility::parseCommand(cmd, argc, argv);

// --duration and --no-duration are mutually exclusive
Expand Down Expand Up @@ -425,6 +435,7 @@ OptionsImpl::OptionsImpl(int argc, const char* const* argv) {
}
}
TCLAP_SET_IF_SPECIFIED(stats_flush_interval, stats_flush_interval_);
TCLAP_SET_IF_SPECIFIED(latency_response_header_name, latency_response_header_name_);

// CLI-specific tests.
// TODO(oschaaf): as per mergconflicts's remark, it would be nice to aggregate
Expand Down Expand Up @@ -610,6 +621,9 @@ OptionsImpl::OptionsImpl(const nighthawk::client::CommandLineOptions& options) {
no_duration_ = PROTOBUF_GET_WRAPPED_OR_DEFAULT(options, no_duration, no_duration_);
}
std::copy(options.labels().begin(), options.labels().end(), std::back_inserter(labels_));
latency_response_header_name_ = PROTOBUF_GET_WRAPPED_OR_DEFAULT(
options, latency_response_header_name, latency_response_header_name_);

validate();
}

Expand Down Expand Up @@ -781,6 +795,8 @@ CommandLineOptionsPtr OptionsImpl::toCommandLineOptionsInternal() const {
*command_line_options->add_stats_sinks() = stats_sink;
}
command_line_options->mutable_stats_flush_interval()->set_value(stats_flush_interval_);
command_line_options->mutable_latency_response_header_name()->set_value(
latency_response_header_name_);
return command_line_options;
}

4 changes: 4 additions & 0 deletions source/client/options_impl.h
Expand Up @@ -85,6 +85,9 @@ class OptionsImpl : public Options, public Envoy::Logger::Loggable<Envoy::Logger
return stats_sinks_;
}
uint32_t statsFlushInterval() const override { return stats_flush_interval_; }
std::string responseHeaderWithLatencyInput() const override {
return latency_response_header_name_;
};

private:
void parsePredicates(const TCLAP::MultiArg<std::string>& arg,
Expand Down Expand Up @@ -138,6 +141,7 @@ class OptionsImpl : public Options, public Envoy::Logger::Loggable<Envoy::Logger
bool no_duration_{false};
std::vector<envoy::config::metrics::v3::StatsSink> stats_sinks_;
uint32_t stats_flush_interval_{5};
std::string latency_response_header_name_;
};

} // namespace Client
21 changes: 11 additions & 10 deletions source/client/stream_decoder.cc
Expand Up @@ -19,16 +19,17 @@ void StreamDecoder::decodeHeaders(Envoy::Http::ResponseHeaderMapPtr&& headers, b
response_header_sizes_statistic_.addValue(response_headers_->byteSize());
const uint64_t response_code = Envoy::Http::Utility::getResponseStatus(*response_headers_);
stream_info_.response_code_ = static_cast<uint32_t>(response_code);
const auto timing_header_name =
Envoy::Http::LowerCaseString("x-nighthawk-do-not-use-origin-timings");
const Envoy::Http::HeaderEntry* timing_header = response_headers_->get(timing_header_name);
if (timing_header != nullptr) {
absl::string_view timing_value = timing_header->value().getStringView();
int64_t origin_delta;
if (absl::SimpleAtoi(timing_value, &origin_delta) && origin_delta >= 0) {
origin_latency_statistic_.addValue(origin_delta);
} else {
ENVOY_LOG_EVERY_POW_2(warn, "Bad origin delta: '{}'.", timing_value);
if (!latency_response_header_name_.empty()) {
const auto timing_header_name = Envoy::Http::LowerCaseString(latency_response_header_name_);
const Envoy::Http::HeaderEntry* timing_header = response_headers_->get(timing_header_name);
if (timing_header != nullptr) {
absl::string_view timing_value = timing_header->value().getStringView();
int64_t origin_delta;
if (absl::SimpleAtoi(timing_value, &origin_delta) && origin_delta >= 0) {
origin_latency_statistic_.addValue(origin_delta);
} else {
ENVOY_LOG_EVERY_POW_2(warn, "Bad origin delta: '{}'.", timing_value);
}
}
}

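The guarded parsing added to `StreamDecoder::decodeHeaders` can be mirrored in a few lines of Python (a hypothetical helper, for illustration only): nothing happens unless a header name was configured, only non-negative integer values become histogram samples, and malformed values trigger a warning instead of a sample.

```python
def record_origin_latency(headers, header_name, histogram, warn):
    # Mirrors the C++ guard: skip entirely when the feature is disabled.
    if not header_name:
        return
    value = headers.get(header_name)
    if value is None:
        return  # header absent on this response (e.g. the first request)
    try:
        delta = int(value)
    except ValueError:
        delta = -1
    if delta >= 0:
        histogram.append(delta)  # valid non-negative delta: record it
    else:
        warn(f"Bad origin delta: '{value}'.")

samples, warnings = [], []
record_origin_latency({"x-delta": "42"}, "x-delta", samples, warnings.append)
record_origin_latency({"x-delta": "oops"}, "x-delta", samples, warnings.append)
record_origin_latency({"x-delta": "42"}, "", samples, warnings.append)
```

The early return on an empty header name is what makes the feature opt-in: the old hard-coded `x-nighthawk-do-not-use-origin-timings` lookup ran unconditionally, whereas the new code does nothing unless `--latency-response-header-name` was supplied.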
7 changes: 5 additions & 2 deletions source/client/stream_decoder.h
Expand Up @@ -46,7 +46,8 @@ class StreamDecoder : public Envoy::Http::ResponseDecoder,
Statistic& response_body_sizes_statistic, Statistic& origin_latency_statistic,
HeaderMapPtr request_headers, bool measure_latencies, uint32_t request_body_size,
Envoy::Random::RandomGenerator& random_generator,
Envoy::Tracing::HttpTracerSharedPtr& http_tracer)
Envoy::Tracing::HttpTracerSharedPtr& http_tracer,
absl::string_view latency_response_header_name)
: dispatcher_(dispatcher), time_source_(time_source),
decoder_completion_callback_(decoder_completion_callback),
caller_completion_callback_(std::move(caller_completion_callback)),
Expand All @@ -57,7 +58,8 @@ class StreamDecoder : public Envoy::Http::ResponseDecoder,
request_headers_(std::move(request_headers)), connect_start_(time_source_.monotonicTime()),
complete_(false), measure_latencies_(measure_latencies),
request_body_size_(request_body_size), stream_info_(time_source_),
random_generator_(random_generator), http_tracer_(http_tracer) {
random_generator_(random_generator), http_tracer_(http_tracer),
latency_response_header_name_(latency_response_header_name) {
if (measure_latencies_ && http_tracer_ != nullptr) {
setupForTracing();
}
Expand Down Expand Up @@ -119,6 +121,7 @@ class StreamDecoder : public Envoy::Http::ResponseDecoder,
Envoy::Tracing::HttpTracerSharedPtr& http_tracer_;
Envoy::Tracing::SpanPtr active_span_;
Envoy::StreamInfo::UpstreamTiming upstream_timing_;
const std::string latency_response_header_name_;
};

} // namespace Client
2 changes: 1 addition & 1 deletion source/server/http_time_tracking_filter.h
Expand Up @@ -45,7 +45,7 @@ class HttpTimeTrackingFilterConfig {

private:
const nighthawk::server::ResponseOptions server_config_;
std::shared_ptr<Stopwatch> stopwatch_;
std::unique_ptr<Stopwatch> stopwatch_;
};

using HttpTimeTrackingFilterConfigSharedPtr = std::shared_ptr<HttpTimeTrackingFilterConfig>;
5 changes: 3 additions & 2 deletions test/benchmark_http_client_test.cc
Expand Up @@ -178,8 +178,9 @@ class BenchmarkClientHttpTest : public Test {
// verifyBenchmarkClientProcessesExpectedInflightRequests.
void setupBenchmarkClient(const RequestGenerator& request_generator) {
client_ = std::make_unique<Client::BenchmarkClientHttpImpl>(
*api_, *dispatcher_, store_, statistic_, false, cluster_manager_, http_tracer_, "benchmark",
request_generator, true);
*api_, *dispatcher_, store_, statistic_, /*use_h2*/ false, cluster_manager_, http_tracer_,
"benchmark", request_generator, /*provide_resource_backpressure*/ true,
/*response_header_with_latency_input=*/"");
}

uint64_t getCounter(absl::string_view name) {
1 change: 1 addition & 0 deletions test/factories_test.cc
Expand Up @@ -41,6 +41,7 @@ TEST_F(FactoriesTest, CreateBenchmarkClient) {
EXPECT_CALL(options_, maxActiveRequests()).Times(1);
EXPECT_CALL(options_, maxRequestsPerConnection()).Times(1);
EXPECT_CALL(options_, openLoop()).Times(1);
EXPECT_CALL(options_, responseHeaderWithLatencyInput()).Times(1);
auto cmd = std::make_unique<nighthawk::client::CommandLineOptions>();
EXPECT_CALL(options_, toCommandLineOptions()).Times(1).WillOnce(Return(ByMove(std::move(cmd))));
StaticRequestSourceImpl request_generator(
1 change: 1 addition & 0 deletions test/integration/BUILD
Expand Up @@ -14,6 +14,7 @@ py_library(
data = [
"configurations/nighthawk_http_origin.yaml",
"configurations/nighthawk_https_origin.yaml",
"configurations/nighthawk_track_timings.yaml",
"configurations/sni_origin.yaml",
"//:nighthawk_client",
"//:nighthawk_output_transform",
38 changes: 38 additions & 0 deletions test/integration/configurations/nighthawk_track_timings.yaml
@@ -0,0 +1,38 @@
# Envoy configuration template for testing the time-tracking http filter extension.
# Sets up the time-tracking extension plus the test-server extension for generating
# responses.
admin:
Contributor:
Can we add a comment at the top of this, explaining its purpose in just a sentence, and draw attention to the relevant/unique part of the configuration (which I think is the time-tracking filter)

access_log_path: $tmpdir/nighthawk-test-server-admin-access.log
profile_path: $tmpdir/nighthawk-test-server.prof
address:
socket_address: { address: $server_ip, port_value: 0 }
static_resources:
listeners:
- address:
socket_address:
address: $server_ip
port_value: 0
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
generate_request_id: false
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: service
domains:
- "*"
http_filters:
# Here we set up the time-tracking extension to emit request-arrival delta timings in a response header.
- name: time-tracking
config:
emit_previous_request_delta_in_response_header: x-origin-request-receipt-delta
- name: test-server
config:
response_body_size: 10
- name: envoy.router
config:
dynamic_stats: false
24 changes: 24 additions & 0 deletions test/integration/test_integration_basics.py
Expand Up @@ -700,6 +700,30 @@ def test_cancellation_with_infinite_duration(http_test_server_fixture):
asserts.assertCounterGreaterEqual(counters, "benchmark.http_2xx", 1)


@pytest.mark.parametrize('server_config', [
"nighthawk/test/integration/configurations/nighthawk_http_origin.yaml",
"nighthawk/test/integration/configurations/nighthawk_track_timings.yaml"
])
def test_http_h1_response_header_latency_tracking(http_test_server_fixture, server_config):
"""Test emission and tracking of response header latencies.

Run the CLI configured to track latencies delivered by response header from the test-server.
Ensure that the origin_latency_statistic histogram receives the correct number of inputs.
"""
parsed_json, _ = http_test_server_fixture.runNighthawkClient([
http_test_server_fixture.getTestServerRootUri(), "--connections", "1", "--rps", "100",
"--duration", "100", "--termination-predicate", "benchmark.http_2xx:99",
"--latency-response-header-name", "x-origin-request-receipt-delta"
])
global_histograms = http_test_server_fixture.getNighthawkGlobalHistogramsbyIdFromJson(parsed_json)
asserts.assertEqual(int(global_histograms["benchmark_http_client.latency_2xx"]["count"]), 100)
# Verify behavior is correct both with and without the timing filter enabled.
Contributor:
This is a very helpful comment here. Thanks

expected_histogram_count = 99 if "nighthawk_track_timings.yaml" in server_config else 0
asserts.assertEqual(
int(global_histograms["benchmark_http_client.origin_latency_statistic"]["count"]),
expected_histogram_count)


def _run_client_with_args(args):
return utility.run_binary_with_args("nighthawk_client", args)

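The 99-versus-100 expectation in the test above follows directly from the delta semantics: with the time-tracking filter enabled, every response except the very first carries an arrival delta, and with the plain origin config none do. A sketch of that expectation (hypothetical helper name):

```python
def expected_delta_samples(num_responses, filter_enabled):
    # The first request has no predecessor, so it yields no delta header;
    # without the filter, no response carries one at all.
    return max(num_responses - 1, 0) if filter_enabled else 0
```

So 100 2xx responses yield 99 origin-latency samples with `nighthawk_track_timings.yaml` and 0 with `nighthawk_http_origin.yaml`, matching the parametrized assertions.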
1 change: 1 addition & 0 deletions test/mocks/client/mock_options.h
Expand Up @@ -54,6 +54,7 @@ class MockOptions : public Options {
MOCK_CONST_METHOD0(noDuration, bool());
MOCK_CONST_METHOD0(statsSinks, std::vector<envoy::config::metrics::v3::StatsSink>());
MOCK_CONST_METHOD0(statsFlushInterval, uint32_t());
MOCK_CONST_METHOD0(responseHeaderWithLatencyInput, std::string());
};

} // namespace Client
5 changes: 4 additions & 1 deletion test/options_test.cc
Expand Up @@ -117,7 +117,8 @@ TEST_F(OptionsImplTest, AlmostAll) {
"--failure-predicate f2:2 --jitter-uniform .00001s "
"--experimental-h2-use-multiple-connections "
"--experimental-h1-connection-reuse-strategy lru --label label1 --label label2 {} "
"--simple-warmup --stats-sinks {} --stats-sinks {} --stats-flush-interval 10",
"--simple-warmup --stats-sinks {} --stats-sinks {} --stats-flush-interval 10 "
"--latency-response-header-name zz",
client_name_,
"{name:\"envoy.transport_sockets.tls\","
"typed_config:{\"@type\":\"type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext\","
Expand Down Expand Up @@ -189,6 +190,7 @@ TEST_F(OptionsImplTest, AlmostAll) {
"}\n"
"183412668: \"envoy.config.metrics.v2.StatsSink\"\n",
options->statsSinks()[1].DebugString());
EXPECT_EQ("zz", options->responseHeaderWithLatencyInput());

// Check that our conversion to CommandLineOptionsPtr makes sense.
CommandLineOptionsPtr cmd = options->toCommandLineOptions();
Expand Down Expand Up @@ -246,6 +248,7 @@ TEST_F(OptionsImplTest, AlmostAll) {
ASSERT_EQ(cmd->stats_sinks_size(), options->statsSinks().size());
EXPECT_TRUE(util(cmd->stats_sinks(0), options->statsSinks()[0]));
EXPECT_TRUE(util(cmd->stats_sinks(1), options->statsSinks()[1]));
EXPECT_EQ(cmd->latency_response_header_name().value(), options->responseHeaderWithLatencyInput());

OptionsImpl options_from_proto(*cmd);
std::string s1 = Envoy::MessageUtil::getYamlStringFromMessage(