diff --git a/docs/root/configuration/cluster_manager/cds.rst b/docs/root/configuration/cluster_manager/cds.rst
index 89f2dbcd4b186..dcea74d79710c 100644
--- a/docs/root/configuration/cluster_manager/cds.rst
+++ b/docs/root/configuration/cluster_manager/cds.rst
@@ -17,16 +17,4 @@ clusters depending on what is required.
 Statistics
 ----------
 
-CDS has a statistics tree rooted at *cluster_manager.cds.* with the following statistics:
-
-.. csv-table::
-  :header: Name, Type, Description
-  :widths: 1, 1, 2
-
-  config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
-  update_attempt, Counter, Total API fetches attempted
-  update_success, Counter, Total API fetches completed successfully
-  update_failure, Counter, Total API fetches that failed because of network errors
-  update_rejected, Counter, Total API fetches that failed because of schema/validation errors
-  version, Gauge, Hash of the contents from the last successful API fetch
-  control_plane.connected_state, Gauge, A boolean (1 for connected and 0 for disconnected) that indicates the current connection state with management server
+CDS has a :ref:`statistics <subscription_statistics>` tree rooted at *cluster_manager.cds.*
diff --git a/docs/root/configuration/configuration.rst b/docs/root/configuration/configuration.rst
index d004bd250ce72..ffbce401b32d1 100644
--- a/docs/root/configuration/configuration.rst
+++ b/docs/root/configuration/configuration.rst
@@ -20,6 +20,7 @@ Configuration reference
   rate_limit
   runtime
   statistics
+  xds_subscription_stats
   tools/router_check
   overload_manager/overload_manager
   secret
diff --git a/docs/root/configuration/http_conn_man/rds.rst b/docs/root/configuration/http_conn_man/rds.rst
index 280e1eeb692c1..7787c24b4eff1 100644
--- a/docs/root/configuration/http_conn_man/rds.rst
+++ b/docs/root/configuration/http_conn_man/rds.rst
@@ -14,17 +14,6 @@ fetch its own route configuration via the API.
 Statistics
 ----------
 
-RDS has a statistics tree rooted at *http.<stat_prefix>.rds.<route_config_name>.*.
+RDS has a :ref:`statistics <subscription_statistics>` tree rooted at *http.<stat_prefix>.rds.<route_config_name>.*.
 Any ``:`` character in the ``route_config_name`` name gets replaced with ``_`` in the
-stats tree. The stats tree contains the following statistics:
-
-.. csv-table::
-  :header: Name, Type, Description
-  :widths: 1, 1, 2
-
-  config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
-  update_attempt, Counter, Total API fetches attempted
-  update_success, Counter, Total API fetches completed successfully
-  update_failure, Counter, Total API fetches that failed because of network errors
-  update_rejected, Counter, Total API fetches that failed because of schema/validation errors
-  version, Gauge, Hash of the contents from the last successful API fetch
+stats tree.
diff --git a/docs/root/configuration/listeners/lds.rst b/docs/root/configuration/listeners/lds.rst
index a90fe84f6f11e..94511a1cb2213 100644
--- a/docs/root/configuration/listeners/lds.rst
+++ b/docs/root/configuration/listeners/lds.rst
@@ -36,16 +36,4 @@ Configuration
 Statistics
 ----------
 
-LDS has a statistics tree rooted at *listener_manager.lds.* with the following statistics:
-
-.. csv-table::
-  :header: Name, Type, Description
-  :widths: 1, 1, 2
-
-  config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
-  update_attempt, Counter, Total API fetches attempted
-  update_success, Counter, Total API fetches completed successfully
-  update_failure, Counter, Total API fetches that failed because of network errors
-  update_rejected, Counter, Total API fetches that failed because of schema/validation errors
-  version, Gauge, Hash of the contents from the last successful API fetch
-  control_plane.connected_state, Gauge, A boolean (1 for connected and 0 for disconnected) that indicates the current connection state with management server
+LDS has a :ref:`statistics <subscription_statistics>` tree rooted at *listener_manager.lds.*
\ No newline at end of file
diff --git a/docs/root/configuration/xds_subscription_stats.rst b/docs/root/configuration/xds_subscription_stats.rst
new file mode 100644
index 0000000000000..c15bbcc22a1a2
--- /dev/null
+++ b/docs/root/configuration/xds_subscription_stats.rst
@@ -0,0 +1,23 @@
+.. _subscription_statistics:
+
+xDS subscription statistics
+===========================
+
+Envoy discovers its various dynamic resources via discovery
+services referred to as *xDS*. Resources are requested via :ref:`subscriptions `,
+by specifying a filesystem path to watch, initiating gRPC streams or polling a REST-JSON URL.
+
+The following statistics are generated for all subscriptions.
+
+.. csv-table::
+  :header: Name, Type, Description
+  :widths: 1, 1, 2
+
+  config_reload, Counter, Total API fetches that resulted in a config reload due to a different config
+  init_fetch_timeout, Counter, Total :ref:`initial fetch timeouts <envoy_api_field_core.ConfigSource.initial_fetch_timeout>`
+  update_attempt, Counter, Total API fetches attempted
+  update_success, Counter, Total API fetches completed successfully
+  update_failure, Counter, Total API fetches that failed because of network errors
+  update_rejected, Counter, Total API fetches that failed because of schema/validation errors
+  version, Gauge, Hash of the contents from the last successful API fetch
+  control_plane.connected_state, Gauge, A boolean (1 for connected and 0 for disconnected) that indicates the current connection state with management server
diff --git a/docs/root/intro/version_history.rst b/docs/root/intro/version_history.rst
index 1ae489d86fefe..8717c80164bd2 100644
--- a/docs/root/intro/version_history.rst
+++ b/docs/root/intro/version_history.rst
@@ -8,6 +8,7 @@ Version history
 * config: added access log :ref:`extension filter`.
 * config: async data access for local and remote data source.
 * config: changed the default value of :ref:`initial_fetch_timeout <envoy_api_field_core.ConfigSource.initial_fetch_timeout>` from 0s to 15s. This is a change in behaviour in the sense that Envoy will move to the next initialization phase, even if the first config is not delivered in 15s. Refer to :ref:`initialization process ` for more details.
+* config: added stat :ref:`init_fetch_timeout <subscription_statistics>`.
 * fault: added overrides for default runtime keys in :ref:`HTTPFault ` filter.
 * grpc-json: added support for :ref:`ignoring unknown query parameters`.
 * http: added the ability to reject HTTP/1.1 requests with invalid HTTP header values, using the runtime feature `envoy.reloadable_features.strict_header_validation`.
diff --git a/include/envoy/config/subscription.h b/include/envoy/config/subscription.h
index bb9861d374ff1..ee21da758d107 100644
--- a/include/envoy/config/subscription.h
+++ b/include/envoy/config/subscription.h
@@ -98,6 +98,7 @@ using SubscriptionPtr = std::unique_ptr<Subscription>;
  * Per subscription stats. @see stats_macros.h
  */
 #define ALL_SUBSCRIPTION_STATS(COUNTER, GAUGE)                                                     \
+  COUNTER(init_fetch_timeout)                                                                      \
   COUNTER(update_attempt)                                                                          \
   COUNTER(update_failure)                                                                          \
   COUNTER(update_rejected)                                                                         \
diff --git a/source/common/config/delta_subscription_state.cc b/source/common/config/delta_subscription_state.cc
index a0dcb5f0b66fb..bec633841aff3 100644
--- a/source/common/config/delta_subscription_state.cc
+++ b/source/common/config/delta_subscription_state.cc
@@ -24,6 +24,7 @@ DeltaSubscriptionState::DeltaSubscriptionState(const std::string& type_url,
 void DeltaSubscriptionState::setInitFetchTimeout(Event::Dispatcher& dispatcher) {
   if (init_fetch_timeout_.count() > 0 && !init_fetch_timeout_timer_) {
     init_fetch_timeout_timer_ = dispatcher.createTimer([this]() -> void {
+      stats_.init_fetch_timeout_.inc();
       ENVOY_LOG(warn, "delta config: initial fetch timed out for {}", type_url_);
       callbacks_.onConfigUpdateFailed(Envoy::Config::ConfigUpdateFailureReason::FetchTimedout,
                                       nullptr);
diff --git a/source/common/config/grpc_mux_subscription_impl.cc b/source/common/config/grpc_mux_subscription_impl.cc
index cb50a314dae83..a181b4efa1748 100644
--- a/source/common/config/grpc_mux_subscription_impl.cc
+++ b/source/common/config/grpc_mux_subscription_impl.cc
@@ -69,6 +69,7 @@ void GrpcMuxSubscriptionImpl::onConfigUpdateFailed(ConfigUpdateFailureReason rea
     ENVOY_LOG(debug, "gRPC update for {} failed", type_url_);
     break;
   case Envoy::Config::ConfigUpdateFailureReason::FetchTimedout:
+    stats_.init_fetch_timeout_.inc();
     disableInitFetchTimeoutTimer();
     ENVOY_LOG(warn, "gRPC config: initial fetch timed out for {}", type_url_);
     break;
diff --git a/source/common/config/http_subscription_impl.cc b/source/common/config/http_subscription_impl.cc
index 4ee6388955781..908250c794950 100644
--- a/source/common/config/http_subscription_impl.cc
+++ b/source/common/config/http_subscription_impl.cc
@@ -39,6 +39,7 @@ void HttpSubscriptionImpl::start(const std::set<std::string>& resource_names) {
   if (init_fetch_timeout_.count() > 0) {
     init_fetch_timeout_timer_ = dispatcher_.createTimer([this]() -> void {
       ENVOY_LOG(warn, "REST config: initial fetch timed out for", path_);
+      stats_.init_fetch_timeout_.inc();
       callbacks_.onConfigUpdateFailed(Envoy::Config::ConfigUpdateFailureReason::FetchTimedout,
                                       nullptr);
     });
diff --git a/test/common/config/filesystem_subscription_impl_test.cc b/test/common/config/filesystem_subscription_impl_test.cc
index f4ac009e58f37..840748a72800d 100644
--- a/test/common/config/filesystem_subscription_impl_test.cc
+++ b/test/common/config/filesystem_subscription_impl_test.cc
@@ -18,20 +18,20 @@ class FilesystemSubscriptionImplTest : public testing::Test,
 // Validate that the client can recover from bad JSON responses.
 TEST_F(FilesystemSubscriptionImplTest, BadJsonRecovery) {
   startSubscription({"cluster0", "cluster1"});
-  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0));
+  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0, 0));
   EXPECT_CALL(callbacks_,
               onConfigUpdateFailed(Envoy::Config::ConfigUpdateFailureReason::ConnectionFailure, _));
   updateFile(";!@#badjso n");
-  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0, 0));
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  EXPECT_TRUE(statsAre(3, 1, 0, 1, 7148434200721666028));
+  EXPECT_TRUE(statsAre(3, 1, 0, 1, 0, 7148434200721666028));
 }
 
 // Validate that a file that is initially available results in a successful update.
 TEST_F(FilesystemSubscriptionImplTest, InitialFile) {
   updateFile("{\"versionInfo\": \"0\", \"resources\": []}", false);
   startSubscription({"cluster0", "cluster1"});
-  EXPECT_TRUE(statsAre(1, 1, 0, 0, 7148434200721666028));
+  EXPECT_TRUE(statsAre(1, 1, 0, 0, 0, 7148434200721666028));
 }
 
 // Validate that if we fail to set a watch, we get a sensible warning.
diff --git a/test/common/config/filesystem_subscription_test_harness.h b/test/common/config/filesystem_subscription_test_harness.h
index 03abeb81e72b8..88c18d85216a7 100644
--- a/test/common/config/filesystem_subscription_test_harness.h
+++ b/test/common/config/filesystem_subscription_test_harness.h
@@ -87,10 +87,11 @@ class FilesystemSubscriptionTestHarness : public SubscriptionTestHarness {
   }
 
   AssertionResult statsAre(uint32_t attempt, uint32_t success, uint32_t rejected, uint32_t failure,
-                           uint64_t version) override {
+                           uint32_t init_fetch_timeout, uint64_t version) override {
     // The first attempt always fail unless there was a file there to begin with.
     return SubscriptionTestHarness::statsAre(attempt, success, rejected,
-                                             failure + (file_at_start_ ? 0 : 1), version);
+                                             failure + (file_at_start_ ? 0 : 1), init_fetch_timeout,
+                                             version);
   }
 
   void expectConfigUpdateFailed() override {
diff --git a/test/common/config/grpc_subscription_impl_test.cc b/test/common/config/grpc_subscription_impl_test.cc
index 563054feea277..5234662912078 100644
--- a/test/common/config/grpc_subscription_impl_test.cc
+++ b/test/common/config/grpc_subscription_impl_test.cc
@@ -20,7 +20,7 @@ TEST_F(GrpcSubscriptionImplTest, StreamCreationFailure) {
   EXPECT_CALL(random_, random());
   EXPECT_CALL(*timer_, enableTimer(_));
   subscription_->start({"cluster0", "cluster1"});
-  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0, 0));
   // Ensure this doesn't cause an issue by sending a request, since we don't
   // have a gRPC stream.
   subscription_->updateResources({"cluster2"});
@@ -30,28 +30,28 @@ TEST_F(GrpcSubscriptionImplTest, StreamCreationFailure) {
   expectSendMessage({"cluster2"}, "");
   timer_cb_();
-  EXPECT_TRUE(statsAre(3, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(3, 0, 0, 1, 0, 0));
   verifyControlPlaneStats(1);
 }
 
 // Validate that the client can recover from a remote stream closure via retry.
 TEST_F(GrpcSubscriptionImplTest, RemoteStreamClose) {
   startSubscription({"cluster0", "cluster1"});
-  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0));
+  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0, 0));
   EXPECT_CALL(callbacks_,
               onConfigUpdateFailed(Envoy::Config::ConfigUpdateFailureReason::ConnectionFailure, _));
   EXPECT_CALL(*timer_, enableTimer(_));
   EXPECT_CALL(random_, random());
   subscription_->grpcMux().grpcStreamForTest().onRemoteClose(Grpc::Status::GrpcStatus::Canceled,
                                                              "");
-  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0, 0));
   verifyControlPlaneStats(0);
 
   // Retry and succeed.
   EXPECT_CALL(*async_client_, startRaw(_, _, _)).WillOnce(Return(&async_stream_));
   expectSendMessage({"cluster0", "cluster1"}, "");
   timer_cb_();
-  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0, 0));
 }
 
 // Validate that When the management server gets multiple requests for the same version, it can
@@ -59,21 +59,21 @@ TEST_F(GrpcSubscriptionImplTest, RemoteStreamClose) {
 TEST_F(GrpcSubscriptionImplTest, RepeatedNonce) {
   InSequence s;
   startSubscription({"cluster0", "cluster1"});
-  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0));
+  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0, 0));
   // First with the initial, empty version update to "0".
   updateResources({"cluster2"});
-  EXPECT_TRUE(statsAre(2, 0, 0, 0, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 0, 0, 0));
   deliverConfigUpdate({"cluster0", "cluster2"}, "0", false);
-  EXPECT_TRUE(statsAre(3, 0, 1, 0, 0));
+  EXPECT_TRUE(statsAre(3, 0, 1, 0, 0, 0));
   deliverConfigUpdate({"cluster0", "cluster2"}, "0", true);
-  EXPECT_TRUE(statsAre(4, 1, 1, 0, 7148434200721666028));
+  EXPECT_TRUE(statsAre(4, 1, 1, 0, 0, 7148434200721666028));
   // Now with version "0" update to "1".
   updateResources({"cluster3"});
-  EXPECT_TRUE(statsAre(5, 1, 1, 0, 7148434200721666028));
+  EXPECT_TRUE(statsAre(5, 1, 1, 0, 0, 7148434200721666028));
   deliverConfigUpdate({"cluster3"}, "1", false);
-  EXPECT_TRUE(statsAre(6, 1, 2, 0, 7148434200721666028));
+  EXPECT_TRUE(statsAre(6, 1, 2, 0, 0, 7148434200721666028));
   deliverConfigUpdate({"cluster3"}, "1", true);
-  EXPECT_TRUE(statsAre(7, 2, 2, 0, 13237225503670494420U));
+  EXPECT_TRUE(statsAre(7, 2, 2, 0, 0, 13237225503670494420U));
 }
 
 } // namespace
diff --git a/test/common/config/http_subscription_impl_test.cc b/test/common/config/http_subscription_impl_test.cc
index 114597e2fa417..258357452c12a 100644
--- a/test/common/config/http_subscription_impl_test.cc
+++ b/test/common/config/http_subscription_impl_test.cc
@@ -18,11 +18,11 @@ TEST_F(HttpSubscriptionImplTest, OnRequestReset) {
   EXPECT_CALL(callbacks_,
               onConfigUpdateFailed(Envoy::Config::ConfigUpdateFailureReason::ConnectionFailure, _));
   http_callbacks_->onFailure(Http::AsyncClient::FailureReason::Reset);
-  EXPECT_TRUE(statsAre(1, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(1, 0, 0, 1, 0, 0));
   timerTick();
-  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0, 0));
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  EXPECT_TRUE(statsAre(3, 1, 0, 1, 7148434200721666028));
+  EXPECT_TRUE(statsAre(3, 1, 0, 1, 0, 7148434200721666028));
 }
 
 // Validate that the client can recover from bad JSON responses.
@@ -36,28 +36,28 @@ TEST_F(HttpSubscriptionImplTest, BadJsonRecovery) {
   EXPECT_CALL(callbacks_,
               onConfigUpdateFailed(Envoy::Config::ConfigUpdateFailureReason::ConnectionFailure, _));
   http_callbacks_->onSuccess(std::move(message));
-  EXPECT_TRUE(statsAre(1, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(1, 0, 0, 1, 0, 0));
   request_in_progress_ = false;
   timerTick();
-  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 1, 0, 0));
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  EXPECT_TRUE(statsAre(3, 1, 0, 1, 7148434200721666028));
+  EXPECT_TRUE(statsAre(3, 1, 0, 1, 0, 7148434200721666028));
 }
 
 TEST_F(HttpSubscriptionImplTest, ConfigNotModified) {
   startSubscription({"cluster0", "cluster1"});
-  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0));
+  EXPECT_TRUE(statsAre(1, 0, 0, 0, 0, 0));
   timerTick();
-  EXPECT_TRUE(statsAre(2, 0, 0, 0, 0));
+  EXPECT_TRUE(statsAre(2, 0, 0, 0, 0, 0));
   // accept and modify.
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true, true, "200");
-  EXPECT_TRUE(statsAre(3, 1, 0, 0, 7148434200721666028));
+  EXPECT_TRUE(statsAre(3, 1, 0, 0, 0, 7148434200721666028));
   // accept and does not modify.
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true, false, "304");
-  EXPECT_TRUE(statsAre(4, 1, 0, 0, 7148434200721666028));
+  EXPECT_TRUE(statsAre(4, 1, 0, 0, 0, 7148434200721666028));
 }
 
 } // namespace
diff --git a/test/common/config/subscription_impl_test.cc b/test/common/config/subscription_impl_test.cc
index 094c2dbf76b2d..c284bfcda9a6d 100644
--- a/test/common/config/subscription_impl_test.cc
+++ b/test/common/config/subscription_impl_test.cc
@@ -52,8 +52,9 @@ class SubscriptionImplTest : public testing::TestWithParam {
   }
 
   AssertionResult statsAre(uint32_t attempt, uint32_t success, uint32_t rejected, uint32_t failure,
-                           uint64_t version) {
-    return test_harness_->statsAre(attempt, success, rejected, failure, version);
+                           uint32_t init_fetch_timeout, uint64_t version) {
+    return test_harness_->statsAre(attempt, success, rejected, failure, init_fetch_timeout,
+                                   version);
   }
 
   void deliverConfigUpdate(const std::vector<std::string> cluster_names, const std::string& version,
@@ -88,57 +89,57 @@ INSTANTIATE_TEST_SUITE_P(SubscriptionImplTest, SubscriptionImplInitFetchTimeoutT
 // Validate basic request-response succeeds.
 TEST_P(SubscriptionImplTest, InitialRequestResponse) {
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  statsAre(2, 1, 0, 0, 7148434200721666028);
+  statsAre(2, 1, 0, 0, 0, 7148434200721666028);
 }
 
 // Validate that multiple streamed updates succeed.
 TEST_P(SubscriptionImplTest, ResponseStream) {
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  statsAre(2, 1, 0, 0, 7148434200721666028);
+  statsAre(2, 1, 0, 0, 0, 7148434200721666028);
   deliverConfigUpdate({"cluster0", "cluster1"}, "1", true);
-  statsAre(3, 2, 0, 0, 13237225503670494420U);
+  statsAre(3, 2, 0, 0, 0, 13237225503670494420U);
 }
 
 // Validate that the client can reject a config.
 TEST_P(SubscriptionImplTest, RejectConfig) {
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", false);
-  statsAre(2, 0, 1, 0, 0);
+  statsAre(2, 0, 1, 0, 0, 0);
 }
 
 // Validate that the client can reject a config and accept the same config later.
 TEST_P(SubscriptionImplTest, RejectAcceptConfig) {
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", false);
-  statsAre(2, 0, 1, 0, 0);
+  statsAre(2, 0, 1, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  statsAre(3, 1, 1, 0, 7148434200721666028);
+  statsAre(3, 1, 1, 0, 0, 7148434200721666028);
 }
 
 // Validate that the client can reject a config and accept another config later.
 TEST_P(SubscriptionImplTest, RejectAcceptNextConfig) {
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", false);
-  statsAre(2, 0, 1, 0, 0);
+  statsAre(2, 0, 1, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "1", true);
-  statsAre(3, 1, 1, 0, 13237225503670494420U);
+  statsAre(3, 1, 1, 0, 0, 13237225503670494420U);
 }
 
 // Validate that stream updates send a message with the updated resources.
 TEST_P(SubscriptionImplTest, UpdateResources) {
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
-  statsAre(2, 1, 0, 0, 7148434200721666028);
+  statsAre(2, 1, 0, 0, 0, 7148434200721666028);
   updateResources({"cluster2"});
-  statsAre(3, 1, 0, 0, 7148434200721666028);
+  statsAre(3, 1, 0, 0, 0, 7148434200721666028);
 }
 
 // Validate that initial fetch timer is created and calls callback on timeout
@@ -146,10 +147,10 @@ TEST_P(SubscriptionImplInitFetchTimeoutTest, InitialFetchTimeout) {
   InSequence s;
   expectEnableInitFetchTimeoutTimer(std::chrono::milliseconds(1000));
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   expectConfigUpdateFailed();
   callInitFetchTimeoutCb();
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 1, 0);
 }
 
 // Validate that initial fetch timer is disabled on config update
@@ -157,7 +158,7 @@ TEST_P(SubscriptionImplInitFetchTimeoutTest, DisableInitTimeoutOnSuccess) {
   InSequence s;
   expectEnableInitFetchTimeoutTimer(std::chrono::milliseconds(1000));
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   expectDisableInitFetchTimeoutTimer();
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", true);
 }
@@ -167,7 +168,7 @@ TEST_P(SubscriptionImplInitFetchTimeoutTest, DisableInitTimeoutOnFail) {
   InSequence s;
   expectEnableInitFetchTimeoutTimer(std::chrono::milliseconds(1000));
   startSubscription({"cluster0", "cluster1"});
-  statsAre(1, 0, 0, 0, 0);
+  statsAre(1, 0, 0, 0, 0, 0);
   expectDisableInitFetchTimeoutTimer();
   deliverConfigUpdate({"cluster0", "cluster1"}, "0", false);
 }
diff --git a/test/common/config/subscription_test_harness.h b/test/common/config/subscription_test_harness.h
index 510d550ff9be4..09803354e151d 100644
--- a/test/common/config/subscription_test_harness.h
+++ b/test/common/config/subscription_test_harness.h
@@ -50,7 +50,8 @@ class SubscriptionTestHarness {
                                   const std::string& version, bool accept) PURE;
 
   virtual testing::AssertionResult statsAre(uint32_t attempt, uint32_t success, uint32_t rejected,
-                                            uint32_t failure, uint64_t version) {
+                                            uint32_t failure, uint32_t init_fetch_timeout,
+                                            uint64_t version) {
     // TODO(fredlas) rework update_success_ to make sense across all xDS carriers. Its value in
     // statsAre() calls in many tests will probably have to be changed.
     UNREFERENCED_PARAMETER(attempt);
@@ -66,6 +67,10 @@ class SubscriptionTestHarness {
      return testing::AssertionFailure() << "update_failure: expected " << failure << ", got "
                                          << stats_.update_failure_.value();
     }
+    if (init_fetch_timeout != stats_.init_fetch_timeout_.value()) {
+      return testing::AssertionFailure() << "init_fetch_timeout: expected " << init_fetch_timeout
+                                         << ", got " << stats_.init_fetch_timeout_.value();
+    }
     if (version != stats_.version_.value()) {
       return testing::AssertionFailure()
           << "version: expected " << version << ", got " << stats_.version_.value();