diff --git a/.bazelignore b/.bazelignore
new file mode 100644
index 0000000000000..04680184abec6
--- /dev/null
+++ b/.bazelignore
@@ -0,0 +1,2 @@
+api
+examples/grpc-bridge/script
diff --git a/.bazelrc b/.bazelrc
index 78d41c35d4a26..88a480bc0b0ff 100644
--- a/.bazelrc
+++ b/.bazelrc
@@ -22,6 +22,7 @@ build:asan --define signal_trace=disabled
build:asan --copt -DADDRESS_SANITIZER=1
build:asan --copt -D__SANITIZE_ADDRESS__
build:asan --test_env=ASAN_OPTIONS=handle_abort=1:allow_addr2line=true:check_initialization_order=true:strict_init_order=true
+build:asan --test_env=UBSAN_OPTIONS=halt_on_error=true:print_stacktrace=1
build:asan --test_env=ASAN_SYMBOLIZER_PATH
# Clang ASAN/UBSAN
diff --git a/.circleci/config.yml b/.circleci/config.yml
index 2992cee69641c..a8f3fb63bbe9b 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -4,7 +4,7 @@ executors:
ubuntu-build:
description: "A regular build executor based on ubuntu image"
docker:
- - image: envoyproxy/envoy-build:698009170e362f9ca0594f2b1927fbbee199bf98
+ - image: envoyproxy/envoy-build:cfc514546bc0284536893cca5fa43d7128edcd35
resource_class: xlarge
working_directory: /source
diff --git a/.clang-tidy b/.clang-tidy
index 0794aa66661f2..a62ee3c944146 100644
--- a/.clang-tidy
+++ b/.clang-tidy
@@ -1,7 +1,7 @@
-Checks: 'clang-diagnostic-*,clang-analyzer-*,abseil-*,bugprone-*,modernize-*,performance-*,readability-redundant-*,readability-braces-around-statements'
+Checks: 'clang-diagnostic-*,clang-analyzer-*,abseil-*,bugprone-*,modernize-*,performance-*,readability-redundant-*,readability-braces-around-statements,readability-container-size-empty'
#TODO(lizan): grow this list, fix possible warnings and make more checks as error
-WarningsAsErrors: 'bugprone-assert-side-effect,modernize-make-shared,modernize-make-unique,readability-redundant-smartptr-get,readability-braces-around-statements,readability-redundant-string-cstr,bugprone-use-after-move'
+WarningsAsErrors: 'bugprone-assert-side-effect,modernize-make-shared,modernize-make-unique,readability-redundant-smartptr-get,readability-braces-around-statements,readability-redundant-string-cstr,bugprone-use-after-move,readability-container-size-empty'
CheckOptions:
- key: bugprone-assert-side-effect.AssertMacros
diff --git a/CODEOWNERS b/CODEOWNERS
index abdf25c16f860..0f81b447d285f 100644
--- a/CODEOWNERS
+++ b/CODEOWNERS
@@ -22,3 +22,5 @@
/*/extensions/filters/network/mysql_proxy @rshriram @venilnoronha @mattklein123
# quic extension
/*/extensions/quic_listeners/ @alyssawilk @danzh2010 @mattklein123 @mpwarres @wu-bin
+# zookeeper_proxy extension
+/*/extensions/filters/network/zookeeper_proxy @rgs1 @snowp
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index fab701755c24d..345192d66ddfe 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -51,7 +51,7 @@ maximize the chances of your PR being merged.
deprecation window. Within this window, a warning of deprecation should be carefully logged (some
features might need rate limiting for logging this). We make no guarantees about code or deployments
that rely on undocumented behavior.
-* All deprecations/breaking changes will be clearly listed in [DEPRECATED.md](DEPRECATED.md).
+* All deprecations/breaking changes will be clearly listed in the [deprecated log](docs/root/intro/deprecated.rst).
* High risk deprecations/breaking changes may be announced to the
[envoy-announce](https://groups.google.com/forum/#!forum/envoy-announce) email list but by default
it is expected the multi-phase warn-by-default/fail-by-default is sufficient to warn users to move
@@ -132,7 +132,8 @@ maximize the chances of your PR being merged.
changes for 7 days. Obviously PRs that are closed due to lack of activity can be reopened later.
Closing stale PRs helps us to keep on top of all of the work currently in flight.
* If a commit deprecates a feature, the commit message must mention what has been deprecated.
- Additionally, [DEPRECATED.md](DEPRECATED.md) must be updated as part of the commit.
+ Additionally, the [deprecated log](docs/root/intro/deprecated.rst) must be updated with relevant
+ RST links for fields and messages as part of the commit.
* Please consider joining the [envoy-dev](https://groups.google.com/forum/#!forum/envoy-dev)
mailing list.
* If your PR involves any changes to
@@ -312,9 +313,17 @@ should only be done to correct a DCO mistake.
## Triggering CI re-run without making changes
-Sometimes CI test runs fail due to obvious resource problems or other issues
-which are not related to your PR. It may be desirable to re-trigger CI without
-making any code changes. Consider adding an alias into your `.gitconfig` file:
+To rerun failed tasks in CI, add a comment with the line
+
+```
+/retest
+```
+
+in it. This should rebuild only the failed tasks.
+
+Sometimes tasks will be stuck in CI and won't be marked as failed, which means
+the above command won't work. Should this happen, pushing an empty commit should
+re-run all the CI tasks. Consider adding an alias into your `.gitconfig` file:
```
[alias]
diff --git a/DEPRECATED.md b/DEPRECATED.md
index f64cd7a3415b6..1b2962adcb975 100644
--- a/DEPRECATED.md
+++ b/DEPRECATED.md
@@ -1,112 +1,3 @@
# DEPRECATED
-As of release 1.3.0, Envoy will follow a
-[Breaking Change Policy](https://github.com/envoyproxy/envoy/blob/master//CONTRIBUTING.md#breaking-change-policy).
-
-The following features have been DEPRECATED and will be removed in the specified release cycle.
-A logged warning is expected for each deprecated item that is in deprecation window.
-
-## Version 1.10.0 (pending)
-* Use of `use_alpha` in [Ext-Authz Authorization Service](https://github.com/envoyproxy/envoy/blob/master/api/envoy/service/auth/v2/external_auth.proto) is deprecated. It should be used for a short time, and only when transitioning from alpha to V2 release version.
-* Use of `enabled` in `CorsPolicy`, found in
- [route.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/api/v2/route/route.proto).
- Set the `filter_enabled` field instead.
-
-## Version 1.9.0 (Dec 20, 2018)
-
-* Order of execution of the network write filter chain has been reversed. Prior to this release cycle it was incorrect, see [#4599](https://github.com/envoyproxy/envoy/issues/4599). In the 1.9.0 release cycle we introduced `bugfix_reverse_write_filter_order` in [lds.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/api/v2/lds.proto) to temporarily support both old and new behaviors. Note this boolean field is deprecated.
-* Order of execution of the HTTP encoder filter chain has been reversed. Prior to this release cycle it was incorrect, see [#4599](https://github.com/envoyproxy/envoy/issues/4599). In the 1.9.0 release cycle we introduced `bugfix_reverse_encode_order` in [http_connection_manager.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto) to temporarily support both old and new behaviors. Note this boolean field is deprecated.
-* Use of the v1 REST_LEGACY ApiConfigSource is deprecated.
-* Use of std::hash in the ring hash load balancer is deprecated.
-* Use of `rate_limit_service` configuration in the [bootstrap configuration](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/bootstrap/v2/bootstrap.proto) is deprecated.
-* Use of `runtime_key` in `RequestMirrorPolicy`, found in
- [route.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/api/v2/route/route.proto)
- is deprecated. Set the `runtime_fraction` field instead.
-* Use of buffer filter `max_request_time` is deprecated in favor of the request timeout found in [HttpConnectionManager](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto)
-
-## Version 1.8.0 (Oct 4, 2018)
-
-* Use of the v1 API (including `*.deprecated_v1` fields in the v2 API) is deprecated.
- See envoy-announce [email](https://groups.google.com/forum/#!topic/envoy-announce/oPnYMZw8H4U).
-* Use of the legacy
- [ratelimit.proto](https://github.com/envoyproxy/envoy/blob/b0a518d064c8255e0e20557a8f909b6ff457558f/source/common/ratelimit/ratelimit.proto)
- is deprecated, in favor of the proto defined in
- [data-plane-api](https://github.com/envoyproxy/envoy/blob/master/api/envoy/service/ratelimit/v2/rls.proto)
- Prior to 1.8.0, Envoy can use either proto to send client requests to a ratelimit server with the use of the
- `use_data_plane_proto` boolean flag in the [ratelimit configuration](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/ratelimit/v2/rls.proto).
- However, when using the deprecated client a warning is logged.
-* Use of the --v2-config-only flag.
-* Use of both `use_websocket` and `websocket_config` in
- [route.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/api/v2/route/route.proto)
- is deprecated. Please use the new `upgrade_configs` in the
- [HttpConnectionManager](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto)
- instead.
-* Use of the integer `percent` field in [FaultDelay](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/fault/v2/fault.proto)
- and in [FaultAbort](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/filter/http/fault/v2/fault.proto) is deprecated in favor
- of the new `FractionalPercent` based `percentage` field.
-* Setting hosts via `hosts` field in `Cluster` is deprecated. Use `load_assignment` instead.
-* Use of `response_headers_to_*` and `request_headers_to_add` are deprecated at the `RouteAction`
- level. Please use the configuration options at the `Route` level.
-* Use of `runtime` in `RouteMatch`, found in
- [route.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/api/v2/route/route.proto).
- Set the `runtime_fraction` field instead.
-* Use of the string `user` field in `Authenticated` in [rbac.proto](https://github.com/envoyproxy/envoy/blob/master/api/envoy/config/rbac/v2alpha/rbac.proto)
- is deprecated in favor of the new `StringMatcher` based `principal_name` field.
-
-## Version 1.7.0 (Jun 21, 2018)
-
-* Admin mutations should be sent as POSTs rather than GETs. HTTP GETs will result in an error
- status code and will not have their intended effect. Prior to 1.7, GETs can be used for
- admin mutations, but a warning is logged.
-* Rate limit service configuration via the `cluster_name` field is deprecated. Use `grpc_service`
- instead.
-* gRPC service configuration via the `cluster_names` field in `ApiConfigSource` is deprecated. Use
- `grpc_services` instead. Prior to 1.7, a warning is logged.
-* Redis health checker configuration via the `redis_health_check` field in `HealthCheck` is
- deprecated. Use `custom_health_check` with name `envoy.health_checkers.redis` instead. Prior
- to 1.7, `redis_health_check` can be used, but warning is logged.
-* `SAN` is replaced by `URI` in the `x-forwarded-client-cert` header.
-* The `endpoint` field in the http health check filter is deprecated in favor of the `headers`
- field where one can specify HeaderMatch objects to match on.
-* The `sni_domains` field in the filter chain match was deprecated/renamed to `server_names`.
-
-## Version 1.6.0 (March 20, 2018)
-
-* DOWNSTREAM_ADDRESS log formatter is deprecated. Use DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT
- instead.
-* CLIENT_IP header formatter is deprecated. Use DOWNSTREAM_REMOTE_ADDRESS_WITHOUT_PORT instead.
-* 'use_original_dst' field in the v2 LDS API is deprecated. Use listener filters and filter chain
- matching instead.
-* `value` and `regex` fields in the `HeaderMatcher` message is deprecated. Use the `exact_match`
- or `regex_match` oneof instead.
-
-## Version 1.5.0 (Dec 4, 2017)
-
-* The outlier detection `ejections_total` stats counter has been deprecated and not replaced. Monitor
- the individual `ejections_detected_*` counters for the detectors of interest, or
- `ejections_enforced_total` for the total number of ejections that actually occurred.
-* The outlier detection `ejections_consecutive_5xx` stats counter has been deprecated in favour of
- `ejections_detected_consecutive_5xx` and `ejections_enforced_consecutive_5xx`.
-* The outlier detection `ejections_success_rate` stats counter has been deprecated in favour of
- `ejections_detected_success_rate` and `ejections_enforced_success_rate`.
-
-## Version 1.4.0 (Aug 24, 2017)
-
-* Config option `statsd_local_udp_port` has been deprecated and has been replaced with
- `statsd_udp_ip_address`.
-* `HttpFilterConfigFactory` filter API has been deprecated in favor of `NamedHttpFilterConfigFactory`.
-* Config option `http_codec_options` has been deprecated and has been replaced with `http2_settings`.
-* The following log macros have been deprecated: `log_trace`, `log_debug`, `conn_log`,
- `conn_log_info`, `conn_log_debug`, `conn_log_trace`, `stream_log`, `stream_log_info`,
- `stream_log_debug`, `stream_log_trace`. For replacements, please see
- [logger.h](https://github.com/envoyproxy/envoy/blob/master/source/common/common/logger.h).
-* The connectionId() and ssl() callbacks of StreamFilterCallbacks have been deprecated and
- replaced with a more general connection() callback, which, when not returning a nullptr, can be
- used to get the connection id and SSL connection from the returned Connection object pointer.
-* The protobuf stub gRPC support via `Grpc::RpcChannelImpl` is now replaced with `Grpc::AsyncClientImpl`.
- This no longer uses `protoc` generated stubs but instead utilizes C++ template generation of the
- RPC stubs. `Grpc::AsyncClientImpl` supports streaming, in addition to the previous unary, RPCs.
-* The direction of network and HTTP filters in the configuration will be ignored from 1.4.0 and
- later removed from the configuration in the v2 APIs. Filter direction is now implied at the C++ type
- level. The `type()` methods on the `NamedNetworkFilterConfigFactory` and
- `NamedHttpFilterConfigFactory` interfaces have been removed to reflect this.
+The [deprecated log](https://www.envoyproxy.io/docs/envoy/latest/intro/deprecated) can be found in the official Envoy developer documentation.
diff --git a/GOVERNANCE.md b/GOVERNANCE.md
index 1c182eb00f75d..70417bc89c234 100644
--- a/GOVERNANCE.md
+++ b/GOVERNANCE.md
@@ -84,7 +84,7 @@ or you can subscribe to the iCal feed [here](https://app.opsgenie.com/webcal/get
corrections.
* Switch the [VERSION](VERSION) from a "dev" variant to a final variant. E.g., "1.6.0-dev" to
"1.6.0". Also remove the "Pending" tag from the top of the [release notes](docs/root/intro/version_history.rst)
- and [DEPRECATED.md](DEPRECATED.md). Get a review and merge.
+ and [deprecated log](docs/root/intro/deprecated.rst). Get a review and merge.
* **Wait for tests to pass on
[master](https://circleci.com/gh/envoyproxy/envoy/tree/master).**
* Create a [tagged release](https://github.com/envoyproxy/envoy/releases). The release should
@@ -99,7 +99,7 @@ or you can subscribe to the iCal feed [here](https://app.opsgenie.com/webcal/get
Envoy account post).
* Do a new PR to update [VERSION](VERSION) to the next development release. E.g., "1.7.0-dev". At
the same time, also add a new empty "pending" section to the [release
- notes](docs/root/intro/version_history.rst) and to [DEPRECATED.md](DEPRECATED.md) for the
+ notes](docs/root/intro/version_history.rst) and to [deprecated log](docs/root/intro/deprecated.rst) for the
following version. E.g., "1.7.0 (pending)".
* Run the deprecate_versions.py script (e.g. `sh tools/deprecate_version/deprecate_version.sh 1.8.0 1.10.0`)
to file tracking issues for code which can be removed.
diff --git a/OWNERS.md b/OWNERS.md
index 4bfa458f37bd7..f62e2f3036830 100644
--- a/OWNERS.md
+++ b/OWNERS.md
@@ -17,6 +17,9 @@ routing PRs, questions, etc. to the right place.
* Stephan Zuercher ([zuercher](https://github.com/zuercher)) (zuercher@gmail.com)
* Load balancing, upstream clusters and cluster manager, logging, complex HTTP routing
(metadata, etc.), and macOS build.
+* Lizan Zhou ([lizan](https://github.com/lizan)) (lizan@tetrate.io)
+ * gRPC, gRPC/JSON transcoding, and core networking (transport socket abstractions), Bazel, build
+ issues, and CI in general.
# Maintainers
@@ -24,8 +27,6 @@ routing PRs, questions, etc. to the right place.
* Outlier detection, HTTP routing, xDS, configuration/operational questions.
* Dan NoƩ ([dnoe](https://github.com/dnoe)) (dpn@google.com)
* Base server (watchdog, workers, startup, stack trace handling, etc.).
-* Lizan Zhou ([lizan](https://github.com/lizan)) (lizan@tetrate.io)
- * gRPC, gRPC/JSON transcoding, and core networking (transport socket abstractions).
* Dhi Aurrahman ([dio](https://github.com/dio)) (dio@tetrate.io)
* Lua, access logging, and general miscellany.
* Joshua Marantz ([jmarantz](https://github.com/jmarantz)) (jmarantz@google.com)
diff --git a/PULL_REQUESTS.md b/PULL_REQUESTS.md
index ad9cdafb99466..0bc71ab31e6bb 100644
--- a/PULL_REQUESTS.md
+++ b/PULL_REQUESTS.md
@@ -74,7 +74,7 @@ you may instead just tag the PR with the issue:
### Deprecated
If this PR deprecates existing Envoy APIs or code, it should include
-an update to the [deprecated file](DEPRECATED.md) and a one line note in the PR
+an update to the [deprecated file](docs/root/intro/deprecated.rst) and a one line note in the PR
description.
If you mark existing APIs or code as deprecated, when the next release is cut, the
diff --git a/README.md b/README.md
index 545684e3435c3..12d3df15bbfef 100644
--- a/README.md
+++ b/README.md
@@ -9,6 +9,8 @@ involved and how Envoy plays a role, read the CNCF
[announcement](https://www.cncf.io/blog/2017/09/13/cncf-hosts-envoy/).
[](https://bestpractices.coreinfrastructure.org/projects/1266)
+[](https://circleci.com/gh/envoyproxy/envoy/tree/master)
+[](http://powerci.osuosl.org/job/build-envoy-master/)
## Documentation
diff --git a/SECURITY_RELEASE_PROCESS.md b/SECURITY_RELEASE_PROCESS.md
index 8a2c569da83c8..28c552d5673f7 100644
--- a/SECURITY_RELEASE_PROCESS.md
+++ b/SECURITY_RELEASE_PROCESS.md
@@ -256,6 +256,11 @@ We are definitely willing to help!
> 8. Have someone already on the list vouch for the person requesting membership
on behalf of your distribution.
-CrashOverride will vouch for Acidburn joining the list on behalf of the "Seven"
-distribution.
+CrashOverride will vouch for the "Seven" distribution joining the distribution list.
+
+> 9. Nominate an e-mail alias or list for your organization to receive updates. This should not be
+  an individual user address, but instead a list that can be maintained by your organization as
+  individuals come and go. A good example is envoy-security@seven.com, a bad example is
+  acidburn@seven.com. You must accept the invite sent to this address or you will not receive any
+  e-mail updates.
```
diff --git a/VERSION b/VERSION
index a01185b4d67a2..1f724bf455d78 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.10.0-dev
+1.11.0-dev
diff --git a/WORKSPACE b/WORKSPACE
index ec06147ab36f5..5609189bd56df 100644
--- a/WORKSPACE
+++ b/WORKSPACE
@@ -1,5 +1,9 @@
workspace(name = "envoy")
+load("//bazel:api_repositories.bzl", "envoy_api_dependencies")
+
+envoy_api_dependencies()
+
load("//bazel:repositories.bzl", "GO_VERSION", "envoy_dependencies")
load("//bazel:cc_configure.bzl", "cc_configure")
@@ -11,10 +15,6 @@ rules_foreign_cc_dependencies()
cc_configure()
-load("@envoy_api//bazel:repositories.bzl", "api_dependencies")
-
-api_dependencies()
-
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
go_rules_dependencies()
diff --git a/api/STYLE.md b/api/STYLE.md
index 887a6c53a45b8..0289c5f85af27 100644
--- a/api/STYLE.md
+++ b/api/STYLE.md
@@ -123,7 +123,7 @@ In addition, the following conventions should be followed:
```
* The [Breaking Change
- Policy](https://github.com/envoyproxy/envoy/blob/master//CONTRIBUTING.md#breaking-change-policy) describes
+ Policy](https://github.com/envoyproxy/envoy/blob/master/CONTRIBUTING.md#breaking-change-policy) describes
API versioning, deprecation and compatibility.
## Package organization
diff --git a/api/XDS_PROTOCOL.md b/api/XDS_PROTOCOL.md
deleted file mode 100644
index 75f7d8f54e0ce..0000000000000
--- a/api/XDS_PROTOCOL.md
+++ /dev/null
@@ -1,385 +0,0 @@
-# xDS REST and gRPC protocol
-
-Envoy discovers its various dynamic resources via the filesystem or by querying
-one or more management servers. Collectively, these discovery services and their
-corresponding APIs are referred to as _xDS_. Resources are requested via
-_subscriptions_, by specifying a filesystem path to watch, initiating gRPC
-streams or polling a REST-JSON URL. The latter two methods involve sending
-requests with a
-[`DiscoveryRequest`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/discovery.proto#discoveryrequest)
-proto payload. Resources are delivered in a
-[`DiscoveryResponse`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/discovery.proto#discoveryresponse)
-proto payload in all methods. We discuss each type of subscription below.
-
-## Filesystem subscriptions
-
-The simplest approach to delivering dynamic configuration is to place it at a
-well known path specified in the
-[`ConfigSource`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/core/config_source.proto#core-configsource).
-Envoy will use `inotify` (`kqueue` on macOS) to monitor the file for changes
-and parse the `DiscoveryResponse` proto in the file on update. Binary
-protobufs, JSON, YAML and proto text are supported formats for the
-`DiscoveryResponse`.
-
-There is no mechanism available for filesystem subscriptions to ACK/NACK updates
-beyond stats counters and logs. The last valid configuration for an xDS API will
-continue to apply if a configuration update rejection occurs.
-
-## Streaming gRPC subscriptions
-
-### Singleton resource type discovery
-
-A gRPC
-[`ApiConfigSource`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/core/config_source.proto#core-apiconfigsource)
-can be specified independently for each xDS API, pointing at an upstream
-cluster corresponding to a management server. This will initiate an independent
-bidirectional gRPC stream for each xDS resource type, potentially to distinct
-management servers. API delivery is eventually consistent. See
-[ADS](#aggregated-discovery-service) below for situations in which explicit
-control of sequencing is required.
-
-#### Type URLs
-
-Each xDS API is concerned with resources of a given type. There is a 1:1
-correspondence between an xDS API and a resource type. That is:
-
-* [LDS: `envoy.api.v2.Listener`](envoy/api/v2/lds.proto)
-* [RDS: `envoy.api.v2.RouteConfiguration`](envoy/api/v2/rds.proto)
-* [CDS: `envoy.api.v2.Cluster`](envoy/api/v2/cds.proto)
-* [EDS: `envoy.api.v2.ClusterLoadAssignment`](envoy/api/v2/eds.proto)
-* [SDS: `envoy.api.v2.Auth.Secret`](envoy/api/v2/auth/cert.proto)
-
-The concept of [_type
-URLs_](https://developers.google.com/protocol-buffers/docs/proto3#any) appears
-below, and takes the form `type.googleapis.com/<resource type>`, e.g.
-`type.googleapis.com/envoy.api.v2.Cluster` for CDS. In various requests from
-Envoy and responses by the management server, the resource type URL is stated.
-
-#### ACK/NACK and versioning
-
-Each stream begins with a `DiscoveryRequest` from Envoy, specifying the list of
-resources to subscribe to, the type URL corresponding to the subscribed
-resources, the node identifier and an empty `version_info`. An example EDS request
-might be:
-
-```yaml
-version_info:
-node: { id: envoy }
-resource_names:
-- foo
-- bar
-type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
-response_nonce:
-```
-
-The management server may reply either immediately or when the requested
-resources are available with a `DiscoveryResponse`, e.g.:
-
-```yaml
-version_info: X
-resources:
-- foo ClusterLoadAssignment proto encoding
-- bar ClusterLoadAssignment proto encoding
-type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
-nonce: A
-```
-
-After processing the `DiscoveryResponse`, Envoy will send a new request on the
-stream, specifying the last version successfully applied and the nonce provided
-by the management server. If the update was successfully applied, the
-`version_info` will be __X__, as indicated in the sequence diagram:
-
-
-
-In this sequence diagram, and below, the following format is used to abbreviate
-messages:
-* `DiscoveryRequest`: (V=`version_info`,R=`resource_names`,N=`response_nonce`,T=`type_url`)
-* `DiscoveryResponse`: (V=`version_info`,R=`resources`,N=`nonce`,T=`type_url`)
-
-The version provides Envoy and the management server a shared notion of the
-currently applied configuration, as well as a mechanism to ACK/NACK
-configuration updates. If Envoy had instead rejected configuration update __X__,
-it would reply with
-[`error_detail`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/discovery.proto#envoy-api-field-discoveryrequest-error-detail)
-populated and its previous version, which in this case was the empty
-initial version. The error_detail has more details around the exact error message
-populated in the message field:
-
-
-
-Later, an API update may succeed at a new version __Y__:
-
-
-
-Each stream has its own notion of versioning; there is no shared versioning
-across resource types. When ADS is not used, even each resource of a given
-resource type may have a
-distinct version, since the Envoy API allows distinct EDS/RDS resources to point
-at different `ConfigSource`s.
-
-#### When to send an update
-
-The management server should only send updates to the Envoy client when the
-resources in the `DiscoveryResponse` have changed. Envoy replies to any
-`DiscoveryResponse` with a `DiscoveryRequest` containing the ACK/NACK
-immediately after it has been either accepted or rejected. If the management
-server provides the same set of resources rather than waiting for a change to
-occur, it will cause Envoy and the management server to spin and have a severe
-performance impact.
-
-Within a stream, new `DiscoveryRequest`s supersede any prior `DiscoveryRequest`s
-having the same resource type. This means that the management server only needs
-to respond to the latest `DiscoveryRequest` on each stream for any given resource
-type.
-
-#### Resource hints
-
-The `resource_names` specified in the `DiscoveryRequest` are a hint. Some
-resource types, e.g. `Cluster`s and `Listener`s will specify an empty
-`resource_names` list, since Envoy is interested in learning about all the
-`Cluster`s (CDS) and `Listener`s (LDS) that the management server(s) know about
-corresponding to its node identification. Other resource types, e.g.
-`RouteConfiguration`s (RDS) and `ClusterLoadAssignment`s (EDS), follow from
-earlier CDS/LDS updates and Envoy is able to explicitly enumerate these
-resources.
-
-LDS/CDS resource hints will always be empty and it is expected that the
-management server will provide the complete state of the LDS/CDS resources in
-each response. An absent `Listener` or `Cluster` will be deleted.
-
-For EDS/RDS, the management server does not need to supply every requested
-resource and may also supply additional, unrequested resources. `resource_names`
-is only a hint. Envoy will silently ignore any superfluous resources. When a
-requested resource is missing in a RDS or EDS update, Envoy will retain the last
-known value for this resource except in the case where the `Cluster` or `Listener`
-is being warmed. See [Resource warming](#resource-warming) section below on the expectations
-during warming. The management server may be able to infer all
-the required EDS/RDS resources from the `node` identification in the
-`DiscoveryRequest`, in which case this hint may be discarded. An empty EDS/RDS
-`DiscoveryResponse` is effectively a nop from the perspective of the respective
-resources in the Envoy.
-
-When a `Listener` or `Cluster` is deleted, its corresponding EDS and RDS
-resources are also deleted inside the Envoy instance. In order for EDS resources
-to be known or tracked by Envoy, there must exist an applied `Cluster`
-definition (e.g. sourced via CDS). A similar relationship exists between RDS and
-`Listeners` (e.g. sourced via LDS).
-
-For EDS/RDS, Envoy may either generate a distinct stream for each resource of a
-given type (e.g. if each `ConfigSource` has its own distinct upstream cluster
-for a management server), or may combine together multiple resource requests for
-a given resource type when they are destined for the same management server.
-While this is left to implementation specifics, management servers should be capable
-of handling one or more `resource_names` for a given resource type in each
-request. Both sequence diagrams below are valid for fetching two EDS resources
-`{foo, bar}`:
-
-
-
-
-#### Resource updates
-
-As discussed above, Envoy may update the list of `resource_names` it presents to
-the management server in each `DiscoveryRequest` that ACK/NACKs a specific
-`DiscoveryResponse`. In addition, Envoy may later issue additional
-`DiscoveryRequest`s at a given `version_info` to update the management server
-with new resource hints. For example, if Envoy is at EDS version __X__ and knows
-only about cluster `foo`, but then receives a CDS update and learns about `bar`
-in addition, it may issue an additional `DiscoveryRequest` for __X__ with
-`{foo,bar}` as `resource_names`.
-
-
-
-There is a race condition that may arise here; if after a resource hint update
-is issued by Envoy at __X__, but before the management server processes the
-update it replies with a new version __Y__, the resource hint update may be
-interpreted as a rejection of __Y__ by presenting an __X__ `version_info`. To
-avoid this, the management server provides a `nonce` that Envoy uses to indicate
-the specific `DiscoveryResponse` each `DiscoveryRequest` corresponds to:
-
-
-
-The management server should not send a `DiscoveryResponse` for any
-`DiscoveryRequest` that has a stale nonce. A nonce becomes stale following a
-newer nonce being presented to Envoy in a `DiscoveryResponse`. A management
-server does not need to send an update until it determines a new version is
-available. Earlier requests at a version then also become stale. It may process
-multiple `DiscoveryRequests` at a version until a new version is ready.
-
-
-
-An implication of the above resource update sequencing is that Envoy does not
-expect a `DiscoveryResponse` for every `DiscoveryRequest` it issues.
-
-### Resource warming
-
-[`Clusters`](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/cluster_manager.html#cluster-warming)
-and [`Listeners`](https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/lds#config-listeners-lds)
-go through `warming` before they can serve requests. This process happens both during
-[`Envoy initialization`](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/init.html#initialization)
-and when the `Cluster` or `Listener` is updated. Warming of `Cluster` is completed only when a
-`ClusterLoadAssignment` response is supplied by management server. Similarly, warming of `Listener`
-is completed only when a `RouteConfiguration` is supplied by management server if the listener
-refers to an RDS configuration. Management server is expected to provide the EDS/RDS updates during
-warming. If management server does not provide EDS/RDS responses, Envoy will not initialize
-itself during the initialization phase and the updates sent via CDS/LDS will not take effect until
-EDS/RDS responses are supplied.
-
-#### Eventual consistency considerations
-
-Since Envoy's xDS APIs are eventually consistent, traffic may drop briefly
-during updates. For example, if only cluster __X__ is known via CDS/EDS, and a
-`RouteConfiguration` that references cluster __X__ is adjusted to reference
-cluster __Y__ just before the CDS/EDS update providing __Y__, traffic will be
-blackholed until __Y__ is known to the Envoy instance.
-
-For some applications, a temporary drop of traffic is acceptable; retries at
-the client or by other Envoy sidecars will hide the drop. For scenarios where
-a drop cannot be tolerated, it can be avoided by providing a CDS/EDS update
-with both __X__ and __Y__, then an RDS update repointing from __X__ to __Y__,
-and finally a CDS/EDS update dropping __X__.
-
-In general, to avoid traffic drop, sequencing of updates should follow a
-`make before break` model, wherein
-* CDS updates (if any) must always be pushed first.
-* EDS updates (if any) must arrive after CDS updates for the respective clusters.
-* LDS updates must arrive after corresponding CDS/EDS updates.
-* RDS updates related to the newly added listeners must arrive last.
-* Stale CDS clusters and related EDS endpoints (ones no longer being
- referenced) can then be removed.
-
-xDS updates can be pushed independently if no new clusters/routes/listeners
-are added or if it is acceptable to temporarily drop traffic during
-updates. Note that in the case of LDS updates, the listeners will be warmed
-before they receive traffic, i.e. the dependent routes are fetched through
-RDS if configured. Clusters are warmed when adding/removing/updating
-clusters. Routes, on the other hand, are not warmed, i.e. the management
-plane must ensure that clusters referenced by a route are in place before
-pushing the updates for a route.
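-
-The make-before-break ordering above can be sketched as three successive
-pushes (the `step` labels and cluster names __X__ and __Y__ follow the
-hypothetical example in the previous section):
-
-```yaml
-# Step 1 - CDS/EDS update: add Y while keeping X
-step1: { clusters: [X, Y] }
-# Step 2 - RDS update: repoint the route from X to Y
-step2:
-  routes:
-  - match: { prefix: "/" }
-    route: { cluster: Y }
-# Step 3 - CDS/EDS update: drop the now-unreferenced X
-step3: { clusters: [Y] }
-```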
-
-### Aggregated Discovery Services (ADS)
-
-It's challenging to provide the above guarantees on sequencing to avoid traffic
-drop when management servers are distributed. ADS allows a single management
-server, via a single gRPC stream, to deliver all API updates. This provides the
-ability to carefully sequence updates to avoid traffic drop. With ADS, a single
-stream is used with multiple independent `DiscoveryRequest`/`DiscoveryResponse`
-sequences multiplexed via the type URL. For any given type URL, the above
-sequencing of `DiscoveryRequest` and `DiscoveryResponse` messages applies. An
-example update sequence might look like:
-
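-For instance, a make-before-break push can be sketched as the following
-responses multiplexed on the single stream (type URLs abbreviated and
-illustrative):
-
-```yaml
-# Each entry is a separate DiscoveryResponse on the ADS stream.
-- DiscoveryResponse: {type_url: envoy.api.v2.Cluster}               # CDS first
-- DiscoveryResponse: {type_url: envoy.api.v2.ClusterLoadAssignment} # then EDS
-- DiscoveryResponse: {type_url: envoy.api.v2.Listener}              # then LDS
-- DiscoveryResponse: {type_url: envoy.api.v2.RouteConfiguration}    # RDS last
-```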
-
-
-A single ADS stream is available per Envoy instance.
-
-An example minimal `bootstrap.yaml` fragment for ADS configuration is:
-
-```yaml
-node:
- id:
-dynamic_resources:
- cds_config: {ads: {}}
- lds_config: {ads: {}}
- ads_config:
- api_type: GRPC
- grpc_services:
- envoy_grpc:
- cluster_name: ads_cluster
-static_resources:
- clusters:
- - name: ads_cluster
- connect_timeout: { seconds: 5 }
- type: STATIC
- hosts:
- - socket_address:
- address:
- port_value:
- lb_policy: ROUND_ROBIN
- http2_protocol_options: {}
- upstream_connection_options:
- # configure a TCP keep-alive to detect and reconnect to the admin
- # server in the event of a TCP socket disconnection
- tcp_keepalive:
- ...
-admin:
- ...
-
-```
-
-### Incremental xDS
-
-Incremental xDS is a separate xDS endpoint that:
-
- * Allows the protocol to communicate on the wire in terms of resource/resource
-   name deltas ("Delta xDS"). This supports the goal of scalability of xDS
-   resources. Rather than delivering all 100k clusters when a single cluster is
-   modified, the management server only needs to deliver the single cluster
-   that changed.
- * Allows Envoy to lazily request additional resources on demand. For
-   example, requesting a cluster only when a request for that cluster arrives.
-
-An Incremental xDS session is always in the context of a gRPC bidirectional
-stream. This allows the xDS server to keep track of the state of xDS clients
-connected to it. There is no REST version of Incremental xDS yet.
-
-In the delta xDS wire protocol, the nonce field is required and used to pair a
-[`DeltaDiscoveryResponse`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/discovery.proto#deltadiscoveryresponse)
-to a [`DeltaDiscoveryRequest`](https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/discovery.proto#deltadiscoveryrequest)
-ACK or NACK.
-Optionally, a response message level system_version_info is present for
-debugging purposes only.
-
-`DeltaDiscoveryRequest` can be sent in 3 situations:
- 1. Initial message in an xDS bidirectional gRPC stream.
- 2. As an ACK or NACK response to a previous `DeltaDiscoveryResponse`.
- In this case the `response_nonce` is set to the nonce value in the Response.
- ACK or NACK is determined by the absence or presence of `error_detail`.
- 3. Spontaneous `DeltaDiscoveryRequest` from the client.
- This can be done to dynamically add or remove elements from the tracked
- `resource_names` set. In this case `response_nonce` must be omitted.
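-
-A spontaneous request of the third kind, adding a resource to the tracked set,
-might look like this (the type URL and resource name are illustrative):
-
-```yaml
-# DeltaDiscoveryRequest: subscribe to "wc".
-# response_nonce is omitted because this is not an ACK/NACK.
-type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
-resource_names_subscribe: ["wc"]
-```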
-
-In this first example the client connects and receives a first update that it
-ACKs. The second update fails and the client NACKs the update. Later the xDS
-client spontaneously requests the "wc" resource.
-
-
-
-On reconnect the Incremental xDS client may tell the server of its known
-resources to avoid resending them over the network.
-
-
-
-#### Resource names
-Resources are identified by a resource name or an alias. Aliases of a resource, if present, can be
-identified by the alias field in the resource of a `DeltaDiscoveryResponse`. The resource name will
-be returned in the name field in the resource of a `DeltaDiscoveryResponse`.
-
-#### Subscribing to Resources
-Envoy can send either an alias or the name of a resource in the `resource_names_subscribe` field of
-a `DeltaDiscoveryRequest` in order to subscribe to a resource. Envoy should check both the names and
-aliases of resources in order to determine whether the entity in question has been subscribed to.
-
-#### Unsubscribing from Resources
-Envoy will keep track of a per-resource reference count internally. This count tracks the
-total number of aliases/resource names that are currently subscribed to. When the reference count
-reaches zero, Envoy will send a `DeltaDiscoveryRequest` containing the resource name of the resource
-to unsubscribe from in the `resource_names_unsubscribe` field. When Envoy unsubscribes from a resource,
-it should check for both the resource name and all aliases and appropriately update all resources
-that reference either.
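-
-A hedged sketch of such an unsubscription (the type URL and resource name are
-illustrative) would be:
-
-```yaml
-# DeltaDiscoveryRequest sent once the reference count reaches zero.
-type_url: type.googleapis.com/envoy.api.v2.route.VirtualHost
-resource_names_unsubscribe: ["example.com"]
-```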
-
-## REST-JSON polling subscriptions
-
-Synchronous (long) polling via REST endpoints is also available for the xDS
-singleton APIs. The above sequencing of messages is similar, except no
-persistent stream is maintained to the management server. It is expected that
-there is only a single outstanding request at any point in time, and as a result
-the response nonce is optional in REST-JSON. The [JSON canonical transform of
-proto3](https://developers.google.com/protocol-buffers/docs/proto3#json) is used
-to encode `DiscoveryRequest` and `DiscoveryResponse` messages. ADS is not
-available for REST-JSON polling.
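-
-As an illustrative sketch (the node id is hypothetical), a polled CDS fetch
-POSTs a JSON-encoded `DiscoveryRequest`, shown here in YAML form:
-
-```yaml
-# POST /v2/discovery:clusters
-version_info: "1"
-node: { id: some-envoy-node }
-type_url: type.googleapis.com/envoy.api.v2.Cluster
-```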
-
-When the poll period is set to a small value with the intention of long
-polling, the management server must also avoid sending a `DiscoveryResponse`
-[unless a change to the underlying resources has
-occurred](#when-to-send-an-update).
diff --git a/api/bazel/repository_locations.bzl b/api/bazel/repository_locations.bzl
index 6d68524399fad..e4489eb3b17bd 100644
--- a/api/bazel/repository_locations.bzl
+++ b/api/bazel/repository_locations.bzl
@@ -25,7 +25,7 @@ REPOSITORY_LOCATIONS = dict(
com_lyft_protoc_gen_validate = dict(
sha256 = PGV_SHA256,
strip_prefix = "protoc-gen-validate-" + PGV_RELEASE,
- urls = ["https://github.com/lyft/protoc-gen-validate/archive/v" + PGV_RELEASE + ".tar.gz"],
+ urls = ["https://github.com/envoyproxy/protoc-gen-validate/archive/v" + PGV_RELEASE + ".tar.gz"],
),
googleapis = dict(
# TODO(dio): Consider writing a Skylark macro for importing Google API proto.
diff --git a/api/docs/BUILD b/api/docs/BUILD
index ead494196ab73..73e9473e152f9 100644
--- a/api/docs/BUILD
+++ b/api/docs/BUILD
@@ -35,6 +35,7 @@ proto_library(
"//envoy/config/bootstrap/v2:bootstrap",
"//envoy/config/common/tap/v2alpha:common",
"//envoy/config/filter/accesslog/v2:accesslog",
+ "//envoy/config/filter/dubbo/router/v2alpha1:router",
"//envoy/config/filter/http/buffer/v2:buffer",
"//envoy/config/filter/http/ext_authz/v2:ext_authz",
"//envoy/config/filter/http/fault/v2:fault",
@@ -52,6 +53,7 @@ proto_library(
"//envoy/config/filter/http/transcoder/v2:transcoder",
"//envoy/config/filter/listener/original_src/v2alpha1:original_src",
"//envoy/config/filter/network/client_ssl_auth/v2:client_ssl_auth",
+ "//envoy/config/filter/network/dubbo_proxy/v2alpha1:dubbo_proxy",
"//envoy/config/filter/network/ext_authz/v2:ext_authz",
"//envoy/config/filter/network/http_connection_manager/v2:http_connection_manager",
"//envoy/config/filter/network/mongo_proxy/v2:mongo_proxy",
diff --git a/api/envoy/api/v2/BUILD b/api/envoy/api/v2/BUILD
index f0327f8df8f9d..66efb5d30ec5e 100644
--- a/api/envoy/api/v2/BUILD
+++ b/api/envoy/api/v2/BUILD
@@ -129,6 +129,7 @@ api_proto_library_internal(
deps = [
":discovery",
"//envoy/api/v2/core:base",
+ "//envoy/api/v2/core:config_source",
"//envoy/api/v2/route",
],
)
@@ -139,6 +140,7 @@ api_go_grpc_library(
deps = [
":discovery_go_proto",
"//envoy/api/v2/core:base_go_proto",
+ "//envoy/api/v2/core:config_source_go_proto",
"//envoy/api/v2/route:route_go_proto",
],
)
diff --git a/api/envoy/api/v2/cds.proto b/api/envoy/api/v2/cds.proto
index e13f8dc771860..6fb858efd6420 100644
--- a/api/envoy/api/v2/cds.proto
+++ b/api/envoy/api/v2/cds.proto
@@ -194,7 +194,7 @@ message Cluster {
// :ref:`STRICT_DNS`
// or :ref:`LOGICAL_DNS` clusters.
// This field supersedes :ref:`hosts` field.
- // [#comment:TODO(dio): Deprecate the hosts field and add it to DEPRECATED.md
+ // [#comment:TODO(dio): Deprecate the hosts field and add it to :ref:`deprecated log`
// once load_assignment is implemented.]
//
// .. attention::
diff --git a/api/envoy/api/v2/cluster/circuit_breaker.proto b/api/envoy/api/v2/cluster/circuit_breaker.proto
index ebee99dae163d..f219fa07b4feb 100644
--- a/api/envoy/api/v2/cluster/circuit_breaker.proto
+++ b/api/envoy/api/v2/cluster/circuit_breaker.proto
@@ -51,6 +51,13 @@ message CircuitBreakers {
// the number of resources remaining until the circuit breakers open. If
// not specified, the default is false.
bool track_remaining = 6;
+
+ // The maximum number of connection pools per cluster that Envoy will concurrently support at
+ // once. If not specified, the default is unlimited. Set this for clusters which create a
+ // large number of connection pools. See
+ // :ref:`Circuit Breaking ` for
+ // more details.
+ google.protobuf.UInt32Value max_connection_pools = 7;
}
// If multiple :ref:`Thresholds`
diff --git a/api/envoy/api/v2/eds.proto b/api/envoy/api/v2/eds.proto
index 6a687ce9aa1a9..2f8fd7a4186dd 100644
--- a/api/envoy/api/v2/eds.proto
+++ b/api/envoy/api/v2/eds.proto
@@ -17,6 +17,7 @@ import "google/api/annotations.proto";
import "validate/validate.proto";
import "gogoproto/gogo.proto";
import "google/protobuf/wrappers.proto";
+import "google/protobuf/duration.proto";
option (gogoproto.equal_all) = true;
option (gogoproto.stable_marshaler_all) = true;
@@ -43,9 +44,10 @@ service EndpointDiscoveryService {
//
// With EDS, each cluster is treated independently from a LB perspective, with
// LB taking place between the Localities within a cluster and at a finer
-// granularity between the hosts within a locality. For a given cluster, the
-// effective weight of a host is its load_balancing_weight multiplied by the
-// load_balancing_weight of its Locality.
+// granularity between the hosts within a locality. The percentage of traffic
+// for each endpoint is determined by both its load_balancing_weight and the
+// load_balancing_weight of its locality. First, a locality will be selected,
+// then an endpoint within that locality will be chosen based on its weight.
message ClusterLoadAssignment {
// Name of the cluster. This will be the :ref:`service_name
// ` value if specified
@@ -106,6 +108,12 @@ message ClusterLoadAssignment {
// Read more at :ref:`priority levels ` and
// :ref:`localities `.
google.protobuf.UInt32Value overprovisioning_factor = 3 [(validate.rules).uint32.gt = 0];
+
+ // The max time until which the endpoints from this assignment can be used.
+  // If no new assignments are received before this time expires, the endpoints
+ // are considered stale and should be marked unhealthy.
+ // Defaults to 0 which means endpoints never go stale.
+ google.protobuf.Duration endpoint_stale_after = 4 [(validate.rules).duration.gt.seconds = 0];
}
// Load balancing policy settings.
diff --git a/api/envoy/api/v2/rds.proto b/api/envoy/api/v2/rds.proto
index d75b68af6791f..18147b68174d3 100644
--- a/api/envoy/api/v2/rds.proto
+++ b/api/envoy/api/v2/rds.proto
@@ -9,6 +9,7 @@ option java_package = "io.envoyproxy.envoy.api.v2";
option java_generic_services = true;
import "envoy/api/v2/core/base.proto";
+import "envoy/api/v2/core/config_source.proto";
import "envoy/api/v2/discovery.proto";
import "envoy/api/v2/route/route.proto";
@@ -44,7 +45,23 @@ service RouteDiscoveryService {
}
}
-// [#comment:next free field: 9]
+// Virtual Host Discovery Service (VHDS) is used to dynamically update the list of virtual hosts for
+// a given RouteConfiguration. If VHDS is configured, a virtual host list update will be triggered
+// during the processing of an HTTP request if a route for the request cannot be resolved. The
+// :ref:`resource_names_subscribe `
+// field contains a list of virtual host names or aliases to track. The contents of an alias would
+// be the contents of a *host* or *authority* header used to make an HTTP request. An xDS server
+// will match an alias to a virtual host based on the content of :ref:`domains'
+// ` field. The *resource_names_unsubscribe* field contains
+// a list of virtual host names that have been `unsubscribed
+// `_
+// from the routing table associated with the RouteConfiguration.
+service VirtualHostDiscoveryService {
+ rpc DeltaVirtualHosts(stream DeltaDiscoveryRequest) returns (stream DeltaDiscoveryResponse) {
+ }
+}
+
+// [#comment:next free field: 10]
message RouteConfiguration {
// The name of the route configuration. For example, it might match
// :ref:`route_config_name
@@ -55,6 +72,15 @@ message RouteConfiguration {
// An array of virtual hosts that make up the route table.
repeated route.VirtualHost virtual_hosts = 2 [(gogoproto.nullable) = false];
+  // An array of virtual hosts that will be dynamically loaded via the VHDS API.
+ // Both *virtual_hosts* and *vhds* fields will be used when present. *virtual_hosts* can be used
+ // for a base routing table or for infrequently changing virtual hosts. *vhds* is used for
+ // on-demand discovery of virtual hosts. The contents of these two fields will be merged to
+ // generate a routing table for a given RouteConfiguration, with *vhds* derived configuration
+ // taking precedence.
+ // [#not-implemented-hide:]
+ Vhds vhds = 9;
+
// Optionally specifies a list of HTTP headers that the connection manager
// will consider to be internal only. If they are found on external requests they will be cleaned
// prior to filter invocation. See :ref:`config_http_conn_man_headers_x-envoy-internal` for more
@@ -102,3 +128,10 @@ message RouteConfiguration {
// using CDS with a static route table).
google.protobuf.BoolValue validate_clusters = 7;
}
+
+// [#not-implemented-hide:]
+message Vhds {
+ // Configuration source specifier for VHDS.
+ envoy.api.v2.core.ConfigSource config_source = 1
+ [(validate.rules).message.required = true, (gogoproto.nullable) = false];
+}
\ No newline at end of file
diff --git a/api/envoy/api/v2/route/route.proto b/api/envoy/api/v2/route/route.proto
index 0c84cfbcf35cc..10ba8f6b4b7b4 100644
--- a/api/envoy/api/v2/route/route.proto
+++ b/api/envoy/api/v2/route/route.proto
@@ -39,17 +39,21 @@ message VirtualHost {
string name = 1 [(validate.rules).string.min_bytes = 1];
// A list of domains (host/authority header) that will be matched to this
- // virtual host. Wildcard hosts are supported in the form of ``*.foo.com`` or
- // ``*-bar.foo.com``.
+ // virtual host. Wildcard hosts are supported in the suffix or prefix form.
+ //
+ // Domain search order:
+ // 1. Exact domain names: ``www.foo.com``.
+ // 2. Suffix domain wildcards: ``*.foo.com`` or ``*-bar.foo.com``.
+ // 3. Prefix domain wildcards: ``foo.*`` or ``foo-*``.
+ // 4. Special wildcard ``*`` matching any domain.
//
// .. note::
//
// The wildcard will not match the empty string.
// e.g. ``*-bar.foo.com`` will match ``baz-bar.foo.com`` but not ``-bar.foo.com``.
- // Additionally, a special entry ``*`` is allowed which will match any
- // host/authority header. Only a single virtual host in the entire route
- // configuration can match on ``*``. A domain must be unique across all virtual
- // hosts or the config will fail to load.
+ // The longest wildcards match first.
+ // Only a single virtual host in the entire route configuration can match on ``*``. A domain
+ // must be unique across all virtual hosts or the config will fail to load.
repeated string domains = 2 [(validate.rules).repeated .min_items = 1];
// The list of routes that will be matched, in order, for incoming requests.
@@ -570,8 +574,7 @@ message RouteAction {
// fires, the stream is terminated with a 408 Request Timeout error code if no
// upstream response header has been received, otherwise a stream reset
// occurs.
- google.protobuf.Duration idle_timeout = 24
- [(validate.rules).duration.gt = {}, (gogoproto.stdduration) = true];
+ google.protobuf.Duration idle_timeout = 24 [(gogoproto.stdduration) = true];
// Indicates that the route has a retry policy. Note that if this is set,
// it'll take precedence over the virtual host level retry policy entirely
@@ -635,14 +638,9 @@ message RouteAction {
// https://github.com/lyft/protoc-gen-validate/issues/42 is resolved.]
core.RoutingPriority priority = 11;
- // [#not-implemented-hide:]
- repeated core.HeaderValueOption request_headers_to_add = 12 [deprecated = true];
-
- // [#not-implemented-hide:]
- repeated core.HeaderValueOption response_headers_to_add = 18 [deprecated = true];
-
- // [#not-implemented-hide:]
- repeated string response_headers_to_remove = 19 [deprecated = true];
+ reserved 12;
+ reserved 18;
+ reserved 19;
// Specifies a set of rate limit configurations that could be applied to the
// route.
@@ -767,6 +765,15 @@ message RouteAction {
// time gaps between gRPC request and response in gRPC streaming mode.
google.protobuf.Duration max_grpc_timeout = 23 [(gogoproto.stdduration) = true];
+ // If present, Envoy will adjust the timeout provided by the `grpc-timeout` header by subtracting
+ // the provided duration from the header. This is useful in allowing Envoy to set its global
+ // timeout to be less than that of the deadline imposed by the calling client, which makes it more
+ // likely that Envoy will handle the timeout instead of having the call canceled by the client.
+ // The offset will only be applied if the provided grpc_timeout is greater than the offset. This
+ // ensures that the offset will only ever decrease the timeout and never set it to 0 (meaning
+ // infinity).
+ google.protobuf.Duration grpc_timeout_offset = 28 [(gogoproto.stdduration) = true];
+
// Allows enabling and disabling upgrades on a per-route basis.
// This overrides any enabled/disabled upgrade filter chain specified in the
// HttpConnectionManager
@@ -798,6 +805,7 @@ message RouteAction {
}
// HTTP retry :ref:`architecture overview `.
+// [#comment:next free field: 9]
message RetryPolicy {
// Specifies the conditions under which retry takes place. These are the same
// conditions documented for :ref:`config_http_filters_router_x-envoy-retry-on` and
@@ -858,6 +866,34 @@ message RetryPolicy {
// HTTP status codes that should trigger a retry in addition to those specified by retry_on.
repeated uint32 retriable_status_codes = 7;
+
+ message RetryBackOff {
+ // Specifies the base interval between retries. This parameter is required and must be greater
+ // than zero. Values less than 1 ms are rounded up to 1 ms.
+ // See :ref:`config_http_filters_router_x-envoy-max-retries` for a discussion of Envoy's
+ // back-off algorithm.
+ google.protobuf.Duration base_interval = 1 [
+ (validate.rules).duration = {
+ required: true,
+ gt: {seconds: 0}
+ },
+ (gogoproto.stdduration) = true
+ ];
+
+ // Specifies the maximum interval between retries. This parameter is optional, but must be
+ // greater than or equal to the `base_interval` if set. The default is 10 times the
+ // `base_interval`. See :ref:`config_http_filters_router_x-envoy-max-retries` for a discussion
+ // of Envoy's back-off algorithm.
+ google.protobuf.Duration max_interval = 2
+ [(validate.rules).duration.gt = {seconds: 0}, (gogoproto.stdduration) = true];
+ }
+
+ // Specifies parameters that control retry back off. This parameter is optional, in which case the
+ // default base interval is 25 milliseconds or, if set, the current value of the
+ // `upstream.base_retry_backoff_ms` runtime parameter. The default maximum interval is 10 times
+ // the base interval. The documentation for :ref:`config_http_filters_router_x-envoy-max-retries`
+ // describes Envoy's back-off algorithm.
+ RetryBackOff retry_back_off = 8;
}
// HTTP request hedging TODO(mpuncel) docs
diff --git a/api/envoy/config/filter/dubbo/router/v2alpha1/BUILD b/api/envoy/config/filter/dubbo/router/v2alpha1/BUILD
index ce0ad0e254f03..51c69c0d5b20f 100644
--- a/api/envoy/config/filter/dubbo/router/v2alpha1/BUILD
+++ b/api/envoy/config/filter/dubbo/router/v2alpha1/BUILD
@@ -1,4 +1,4 @@
-load("//bazel:api_build_system.bzl", "api_proto_library_internal")
+load("@envoy_api//bazel:api_build_system.bzl", "api_proto_library_internal")
licenses(["notice"]) # Apache 2
diff --git a/api/envoy/config/filter/fault/v2/fault.proto b/api/envoy/config/filter/fault/v2/fault.proto
index 89d1dc2c55ff5..f27f9d446267f 100644
--- a/api/envoy/config/filter/fault/v2/fault.proto
+++ b/api/envoy/config/filter/fault/v2/fault.proto
@@ -19,19 +19,25 @@ import "gogoproto/gogo.proto";
// Delay specification is used to inject latency into the
// HTTP/gRPC/Mongo/Redis operation or delay proxying of TCP connections.
message FaultDelay {
+ // Fault delays are controlled via an HTTP header (if applicable). See the
+ // :ref:`http fault filter ` documentation for
+ // more information.
+ message HeaderDelay {
+ }
+
enum FaultDelayType {
- // Fixed delay (step function).
+ // Unused and deprecated.
FIXED = 0;
}
- // Delay type to use (fixed|exponential|..). Currently, only fixed delay (step function) is
- // supported.
- FaultDelayType type = 1 [(validate.rules).enum.defined_only = true];
+ // Unused and deprecated. Will be removed in the next release.
+ FaultDelayType type = 1 [deprecated = true];
reserved 2;
oneof fault_delay_secifier {
option (validate.required) = true;
+
// Add a fixed delay before forwarding the operation upstream. See
// https://developers.google.com/protocol-buffers/docs/proto3#json for
// the JSON/YAML Duration mapping. For HTTP/Mongo/Redis, the specified
@@ -40,6 +46,9 @@ message FaultDelay {
// for the specified period. This is required if type is FIXED.
google.protobuf.Duration fixed_delay = 3
[(validate.rules).duration.gt = {}, (gogoproto.stdduration) = true];
+
+ // Fault delays are controlled via an HTTP header (if applicable).
+ HeaderDelay header_delay = 5;
}
// The percentage of operations/connections/requests on which the delay will be injected.
@@ -54,11 +63,20 @@ message FaultRateLimit {
uint64 limit_kbps = 1 [(validate.rules).uint64.gte = 1];
}
+ // Rate limits are controlled via an HTTP header (if applicable). See the
+ // :ref:`http fault filter ` documentation for
+ // more information.
+ message HeaderLimit {
+ }
+
oneof limit_type {
option (validate.required) = true;
// A fixed rate limit.
FixedLimit fixed_limit = 1;
+
+ // Rate limits are controlled via an HTTP header (if applicable).
+ HeaderLimit header_limit = 3;
}
// The percentage of operations/connections/requests on which the rate limit will be injected.
diff --git a/api/envoy/config/filter/http/ext_authz/v2/ext_authz.proto b/api/envoy/config/filter/http/ext_authz/v2/ext_authz.proto
index e79e59865c06a..b430fe93a519f 100644
--- a/api/envoy/config/filter/http/ext_authz/v2/ext_authz.proto
+++ b/api/envoy/config/filter/http/ext_authz/v2/ext_authz.proto
@@ -50,6 +50,35 @@ message ExtAuthz {
// semantically compatible. Deprecation note: This field is deprecated and should only be used for
// version upgrade. See release notes for more details.
bool use_alpha = 4 [deprecated = true];
+
+ // Enables filter to buffer the client request body and send it within the authorization request.
+ BufferSettings with_request_body = 5;
+
+ // Clears route cache in order to allow the external authorization service to correctly affect
+ // routing decisions. Filter clears all cached routes when:
+ //
+ // 1. The field is set to *true*.
+ //
+ // 2. The status returned from the authorization service is a HTTP 200 or gRPC 0.
+ //
+ // 3. At least one *authorization response header* is added to the client request, or is used for
+ // altering another client request header.
+ //
+ bool clear_route_cache = 6;
+}
+
+// Configuration for buffering the request data.
+message BufferSettings {
+ // Sets the maximum size of a message body that the filter will hold in memory. Envoy will return
+  // *HTTP 413* and will *not* initiate the authorization process when the buffer reaches the number
+ // set in this field. Note that this setting will have precedence over :ref:`failure_mode_allow
+ // `.
+ uint32 max_request_bytes = 1 [(validate.rules).uint32.gt = 0];
+
+ // When this field is true, Envoy will buffer the message until *max_request_bytes* is reached.
+ // The authorization request will be dispatched and no 413 HTTP error will be returned by the
+ // filter.
+ bool allow_partial_message = 2;
}
// HttpService is used for raw HTTP communication between the filter and the authorization service.
diff --git a/api/envoy/config/filter/http/fault/v2/fault.proto b/api/envoy/config/filter/http/fault/v2/fault.proto
index df4258968ab66..bc491580bb152 100644
--- a/api/envoy/config/filter/http/fault/v2/fault.proto
+++ b/api/envoy/config/filter/http/fault/v2/fault.proto
@@ -80,7 +80,9 @@ message HTTPFault {
// amount due to the implementation details.
google.protobuf.UInt32Value max_active_faults = 6;
- // The response rate limit to be applied to the response body of the stream.
+ // The response rate limit to be applied to the response body of the stream. When configured,
+ // the percentage can be overridden by the :ref:`fault.http.rate_limit.response_percent
+ // ` runtime key.
//
// .. attention::
// This is a per-stream limit versus a connection level limit. This means that concurrent streams
diff --git a/api/envoy/config/filter/http/jwt_authn/v2alpha/README.md b/api/envoy/config/filter/http/jwt_authn/v2alpha/README.md
index 9d083389a5aea..c390a4d5ce506 100644
--- a/api/envoy/config/filter/http/jwt_authn/v2alpha/README.md
+++ b/api/envoy/config/filter/http/jwt_authn/v2alpha/README.md
@@ -29,3 +29,38 @@ If a custom location is desired, `from_headers` or `from_params` can be used to
## HTTP header to pass successfully verified JWT
If a JWT is valid, its payload will be passed to the backend in a new HTTP header specified in `forward_payload_header` field. Its value is base64 encoded JWT payload in JSON.
+
+
+## Further header options
+
+In addition to the `name` field, which specifies the HTTP header name,
+the `from_headers` section can specify an optional `value_prefix` value, as in:
+
+```yaml
+ from_headers:
+ - name: bespoke
+ value_prefix: jwt_value
+```
+
+The above will cause the jwt_authn filter to look for the JWT in the `bespoke` header, following the tag `jwt_value`.
+
+Any non-JWT characters (i.e., anything _other than_ alphanumerics, `_`, `-`, and `.`) will be skipped,
+and all following, contiguous, JWT-legal chars will be taken as the JWT.
+
+This means all of the following will return a JWT of `eyJFbnZveSI6ICJyb2NrcyJ9.e30.c2lnbmVk`:
+
+```text
+bespoke: jwt_value=eyJFbnZveSI6ICJyb2NrcyJ9.e30.c2lnbmVk
+
+bespoke: {"jwt_value": "eyJFbnZveSI6ICJyb2NrcyJ9.e30.c2lnbmVk"}
+
+bespoke: beta:true,jwt_value:"eyJFbnZveSI6ICJyb2NrcyJ9.e30.c2lnbmVk",trace=1234
+```
+
+The header `name` may be `Authorization`.
+
+The `value_prefix` must match exactly, i.e., case-sensitively.
+If the `value_prefix` is not found, the header is skipped and not considered as a source for a JWT token.
+
+If there are no JWT-legal characters after the `value_prefix`, the entire string after it
+is taken to be the JWT token. This is unlikely to succeed; the error will be reported by the JWT parser.
\ No newline at end of file
diff --git a/api/envoy/config/filter/http/jwt_authn/v2alpha/config.proto b/api/envoy/config/filter/http/jwt_authn/v2alpha/config.proto
index f63e834d157e0..2f8a0ec29c170 100644
--- a/api/envoy/config/filter/http/jwt_authn/v2alpha/config.proto
+++ b/api/envoy/config/filter/http/jwt_authn/v2alpha/config.proto
@@ -339,6 +339,32 @@ message RequirementRule {
JwtRequirement requires = 2;
}
+// This message specifies Jwt requirements based on stream_info.filterState.
+// This FilterState should use a `Router::StringAccessor` object to set a string value.
+// Other HTTP filters can use it to specify Jwt requirements dynamically.
+//
+// Example:
+//
+// .. code-block:: yaml
+//
+// name: jwt_selector
+// requires:
+// issuer_1:
+// provider_name: issuer1
+// issuer_2:
+// provider_name: issuer2
+//
+// If a filter sets "jwt_selector" with "issuer_1" in FilterState for a request,
+// the jwt_authn filter will use JwtRequirement{"provider_name": "issuer1"} to verify it.
+message FilterStateRule {
+ // The filter state name to retrieve the `Router::StringAccessor` object.
+ string name = 1 [(validate.rules).string.min_bytes = 1];
+
+ // A map of string keys to requirements. The string key is the string value
+ // in the FilterState with the name specified in the *name* field above.
+  map<string, JwtRequirement> requires = 3;
+}
+
// This is the Envoy HTTP filter config for JWT authentication.
//
// For example:
@@ -432,4 +458,10 @@ message JwtAuthentication {
// - provider_name: provider2
//
repeated RequirementRule rules = 2;
+
+ // This message specifies Jwt requirements based on stream_info.filterState.
+ // Other HTTP filters can use it to specify Jwt requirements dynamically.
+  // The *rules* field above is checked first; if it finds no match, this
+  // field is checked.
+ FilterStateRule filter_state_rules = 3;
}
diff --git a/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/BUILD b/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/BUILD
index a2ae87ffcfae4..e3e83a7046847 100644
--- a/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/BUILD
+++ b/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/BUILD
@@ -1,8 +1,8 @@
-load("@envoy_api//bazel:api_build_system.bzl", "api_proto_library")
+load("@envoy_api//bazel:api_build_system.bzl", "api_proto_library_internal")
licenses(["notice"]) # Apache 2
-api_proto_library(
+api_proto_library_internal(
name = "dubbo_proxy",
srcs = [
"dubbo_proxy.proto",
diff --git a/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy.proto b/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy.proto
index e639830794741..5b0995ba0022d 100644
--- a/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy.proto
+++ b/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy.proto
@@ -15,7 +15,9 @@ import "validate/validate.proto";
import "gogoproto/gogo.proto";
// [#protodoc-title: Dubbo Proxy]
-// Dubbo Proxy filter configuration.
+// Dubbo Proxy :ref:`configuration overview `.
+
+// [#comment:next free field: 6]
message DubboProxy {
// The human readable prefix to use when emitting statistics.
string stat_prefix = 1 [(validate.rules).string.min_bytes = 1];
@@ -36,10 +38,12 @@ message DubboProxy {
repeated DubboFilter dubbo_filters = 5;
}
+// Dubbo Protocol types supported by Envoy.
enum ProtocolType {
Dubbo = 0; // the default protocol.
}
+// Dubbo Serialization types supported by Envoy.
enum SerializationType {
Hessian2 = 0; // the default serialization protocol.
}
diff --git a/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/route.proto b/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/route.proto
index bc5f682554946..84b6d3fc5c174 100644
--- a/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/route.proto
+++ b/api/envoy/config/filter/network/dubbo_proxy/v2alpha1/route.proto
@@ -18,8 +18,10 @@ import "gogoproto/gogo.proto";
option (gogoproto.stable_marshaler_all) = true;
-// [#protodoc-title: Dubbo route configuration]
+// [#protodoc-title: Dubbo Proxy Route Configuration]
+// Dubbo Proxy :ref:`configuration overview `.
+// [#comment:next free field: 6]
message RouteConfiguration {
// The name of the route configuration. Reserved for future use in asynchronous route discovery.
string name = 1;
@@ -38,6 +40,7 @@ message RouteConfiguration {
repeated Route routes = 5 [(gogoproto.nullable) = false];
}
+// [#comment:next free field: 3]
message Route {
// Route matching parameters.
RouteMatch match = 1 [(validate.rules).message.required = true, (gogoproto.nullable) = false];
@@ -46,6 +49,35 @@ message Route {
RouteAction route = 2 [(validate.rules).message.required = true, (gogoproto.nullable) = false];
}
+// [#comment:next free field: 3]
+message RouteMatch {
+ // Method level routing matching.
+ MethodMatch method = 1;
+
+ // Specifies a set of headers that the route should match on. The router will check the request's
+ // headers against all the specified headers in the route config. A match will happen if all the
+ // headers in the route are present in the request with the same values (or based on presence if
+ // the value field is not in the config).
+ repeated envoy.api.v2.route.HeaderMatcher headers = 2;
+}
+
+// [#comment:next free field: 3]
+message RouteAction {
+ oneof cluster_specifier {
+ option (validate.required) = true;
+
+ // Indicates the upstream cluster to which the request should be routed.
+ string cluster = 1;
+
+ // Multiple upstream clusters can be specified for a given route. The
+ // request is routed to one of the upstream clusters based on weights
+ // assigned to each cluster.
+ // Currently ClusterWeight only supports the name and weight fields.
+ envoy.api.v2.route.WeightedCluster weighted_clusters = 2;
+ }
+}
+
+// [#comment:next free field: 5]
message MethodMatch {
// The name of the method.
envoy.type.matcher.StringMatcher name = 1;
@@ -66,8 +98,7 @@ message MethodMatch {
// Examples:
//
// * For range [-10,0), route will match for header value -1, but not for 0,
- // "somestring", 10.9,
- // "-1somestring"
+ // "somestring", 10.9, "-1somestring"
envoy.type.Int64Range range_match = 4;
}
}
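The half-open range semantics in the example above can be sketched in Python (an illustration of the documented behavior, not Envoy's C++ implementation):

```python
def matches_int64_range(value, start, end):
    """Half-open [start, end) integer range match on a header value.

    Mirrors the documented semantics: the value must parse as a base-10
    64-bit integer; floats, arbitrary strings, and values with trailing
    junk do not match.
    """
    try:
        v = int(value, 10)
    except ValueError:
        return False                  # "somestring", "10.9", "-1somestring"
    if not (-2**63 <= v <= 2**63 - 1):
        return False                  # outside int64 range
    return start <= v < end

# For range [-10, 0): "-1" matches; "0", "somestring", "10.9", "-1somestring" do not.
```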
@@ -77,32 +108,3 @@ message MethodMatch {
// The value is the parameter matching type.
map<uint32, MethodParameter> params_match = 2;
}
-
-message RouteMatch {
- // Method level routing matching.
- MethodMatch method = 1;
-
- // Specifies a set of headers that the route should match on. The router will check the request's
- // headers against all the specified headers in the route config. A match will happen if all the
- // headers in the route are present in the request with the same values (or based on presence if
- // the value field is not in the config).
- repeated envoy.api.v2.route.HeaderMatcher headers = 2;
-}
-
-// [#comment:next free field: 2]
-message RouteAction {
- oneof cluster_specifier {
- option (validate.required) = true;
-
- // Indicates the upstream cluster to which the request should be routed.
- string cluster = 1;
-
- // Multiple upstream clusters can be specified for a given route. The
- // request is routed to one of the upstream clusters based on weights
- // assigned to each cluster.
- //
- // .. note::
- // Currently ClusterWeight only supports the name and weight fields.
- envoy.api.v2.route.WeightedCluster weighted_clusters = 2;
- }
-}
diff --git a/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto b/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto
index 627082314dc49..18a479d3d7f97 100644
--- a/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto
+++ b/api/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto
@@ -24,7 +24,7 @@ import "gogoproto/gogo.proto";
// [#protodoc-title: HTTP connection manager]
// HTTP connection manager :ref:`configuration overview `.
-// [#comment:next free field: 30]
+// [#comment:next free field: 31]
message HttpConnectionManager {
enum CodecType {
option (gogoproto.goproto_enum_prefix) = false;
@@ -200,8 +200,14 @@ message HttpConnectionManager {
// The delayed close timeout is for downstream connections managed by the HTTP connection manager.
// It is defined as a grace period after connection close processing has been locally initiated
- // during which Envoy will flush the write buffers for the connection and await the peer to close
- // (i.e., a TCP FIN/RST is received by Envoy from the downstream connection).
+ // during which Envoy will wait for the peer to close (i.e., a TCP FIN/RST is received by Envoy
+ // from the downstream connection) prior to Envoy closing the socket associated with that
+ // connection.
+ // NOTE: This timeout is enforced even when the socket associated with the downstream connection
+ // is pending a flush of the write buffer. However, any progress made writing data to the socket
+ // will restart the timer associated with this timeout. This means that the total grace period for
+ // a socket in this state will be the time spent flushing the write buffer
+ // plus the *delayed_close_timeout*.
//
// Delaying Envoy's connection close and giving the peer the opportunity to initiate the close
// sequence mitigates a race condition that exists when downstream clients do not drain/process
@@ -213,8 +219,15 @@ message HttpConnectionManager {
//
// The default timeout is 1000 ms if this option is not specified.
//
- // A value of 0 will completely disable delayed close processing, and the downstream connection's
- // socket will be closed immediately after the write flush is completed.
+ // .. NOTE::
+ // To be useful in avoiding the race condition described above, this timeout must be set
+ // to *at least* the maximum round trip time expected between clients and Envoy,
+ // plus 100ms to account for a reasonable "worst" case processing time for a full
+ // iteration of Envoy's event loop.
+ //
+ // .. WARNING::
+ // A value of 0 will completely disable delayed close processing. When disabled, the downstream
+ // connection's socket will be closed immediately after the write flush is completed or will
+ // never close if the write flush does not complete.
google.protobuf.Duration delayed_close_timeout = 26 [(gogoproto.stdduration) = true];
// Configuration for :ref:`HTTP access logs `
@@ -345,6 +358,7 @@ message HttpConnectionManager {
// :ref:`http_connection_manager.represent_ipv4_remote_address_as_ipv4_mapped_ipv6
// ` for runtime
// control.
+ // [#not-implemented-hide:]
bool represent_ipv4_remote_address_as_ipv4_mapped_ipv6 = 20;
// The configuration for HTTP upgrades.
@@ -378,6 +392,19 @@ message HttpConnectionManager {
repeated UpgradeConfig upgrade_configs = 23;
reserved 27;
+
+ // Should paths be normalized according to RFC 3986 before any processing of
+ // requests by HTTP filters or routing? This affects the upstream *:path* header
+ // as well. For paths that fail this check, Envoy will respond with 400 to
+ // paths that are malformed. This defaults to false currently but will default
+ // true in the future. When not specified, this value may be overridden by the
+ // runtime variable
+ // :ref:`http_connection_manager.normalize_path`.
+ // See `Normalization and Comparison `
+ // for details of normalization.
+ // Note that Envoy does not perform
+ // `case normalization `
+ google.protobuf.BoolValue normalize_path = 30;
}
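A minimal sketch of the dot-segment-removal part of the RFC 3986 normalization referenced above (illustrative only; full normalization also covers percent-encoding, and this is not Envoy's implementation):

```python
def normalize_path(path):
    """Dot-segment removal in the spirit of RFC 3986 section 5.2.4.

    Returns None for paths the filter would reject with a 400
    (e.g. a relative path, or ".." escaping the root).
    """
    if not path.startswith("/"):
        return None
    out = []
    for seg in path.split("/"):
        if seg == ".":
            continue            # "/a/./b" -> "/a/b"
        elif seg == "..":
            if len(out) <= 1:
                return None     # ".." escaping the root is malformed
            out.pop()           # "/a/b/../c" -> "/a/c"
        else:
            out.append(seg)
    return "/".join(out)
```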
message Rds {
diff --git a/api/envoy/config/filter/network/redis_proxy/v2/redis_proxy.proto b/api/envoy/config/filter/network/redis_proxy/v2/redis_proxy.proto
index cd8c18b128755..eec8c3f409544 100644
--- a/api/envoy/config/filter/network/redis_proxy/v2/redis_proxy.proto
+++ b/api/envoy/config/filter/network/redis_proxy/v2/redis_proxy.proto
@@ -22,7 +22,13 @@ message RedisProxy {
// Name of cluster from cluster manager. See the :ref:`configuration section
// ` of the architecture overview for recommendations on
// configuring the backing cluster.
- string cluster = 2 [(validate.rules).string.min_bytes = 1];
+ //
+ // .. attention::
+ //
+ // This field is deprecated. Use a :ref:`catch-all
+ // cluster`
+ // instead.
+ string cluster = 2 [deprecated = true];
// Redis connection pool settings.
message ConnPoolSettings {
@@ -46,12 +52,93 @@ message RedisProxy {
// * '{user1000}.following' and '{user1000}.followers' **will** be sent to the same upstream
// * '{user1000}.following' and '{user1001}.following' **might** be sent to the same upstream
bool enable_hashtagging = 2;
+
+ // Accept `moved and ask redirection
+ // `_ errors from upstream
+ // redis servers, and retry commands to the specified target server. The target server does not
+ // need to be known to the cluster manager. If the command cannot be redirected, then the
+ // original error is passed downstream unchanged. By default, this support is not enabled.
+ bool enable_redirection = 3;
+
+ // Maximum size of encoded request buffer before flush is triggered and encoded requests
+ // are sent upstream. If this is unset, the buffer flushes whenever it receives data
+ // and performs no batching.
+ // This feature makes it possible for multiple clients to send requests to Envoy and have
+ // them batched, for example if one is running several worker processes, each with its own
+ // Redis connection. There is no benefit to using this with a single downstream process.
+ // Recommended size (if enabled) is 1024 bytes.
+ uint32 max_buffer_size_before_flush = 4;
+
+ // The encoded request buffer is flushed N milliseconds after the first request has been
+ // encoded, unless the buffer size has already exceeded `max_buffer_size_before_flush`.
+ // If `max_buffer_size_before_flush` is not set, this flush timer is not used. Otherwise,
+ // the timer should be set according to the number of clients, overall request rate and
+ // desired maximum latency for a single command. For example, if there are many requests
+ // being batched together at a high rate, the buffer will likely be filled before the timer
+ // fires. Alternatively, if the request rate is lower the buffer will not be filled as often
+ // before the timer fires.
+ // If `max_buffer_size_before_flush` is set, but `buffer_flush_timeout` is not, the latter
+ // defaults to 3ms.
+ google.protobuf.Duration buffer_flush_timeout = 5 [(gogoproto.stdduration) = true];
}
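The interplay of `max_buffer_size_before_flush` and `buffer_flush_timeout` described above can be sketched as follows (a hypothetical class illustrating the documented policy, not Envoy's connection-pool code):

```python
class RequestBuffer:
    """Sketch of the documented flush policy.

    Encoded requests accumulate until the buffer exceeds
    max_buffer_size_before_flush, or until the flush timer (armed by the
    first buffered request) fires. max_bytes == 0 disables batching.
    """

    def __init__(self, max_bytes, flush_timeout_ms=3):
        self.max_bytes = max_bytes            # 0 means "flush on every write"
        self.flush_timeout_ms = flush_timeout_ms
        self.buffer = bytearray()
        self.timer_armed = False

    def write(self, encoded_request):
        """Returns bytes to send upstream now, or None if still buffering."""
        if self.max_bytes == 0:
            return encoded_request            # no batching: pass through
        self.buffer += encoded_request
        if len(self.buffer) > self.max_bytes:
            return self._flush()              # size threshold exceeded
        self.timer_armed = True               # timer runs from first request
        return None

    def on_timer(self):
        """Called flush_timeout_ms after the timer was armed."""
        return self._flush() if self.buffer else None

    def _flush(self):
        out = bytes(self.buffer)
        self.buffer = bytearray()
        self.timer_armed = False
        return out
```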
- // Network settings for the connection pool to the upstream cluster.
+ // Network settings for the connection pool to the upstream clusters.
ConnPoolSettings settings = 3 [(validate.rules).message.required = true];
// Indicates that latency stat should be computed in microseconds. By default it is computed in
// milliseconds.
bool latency_in_micros = 4;
+
+ message PrefixRoutes {
+ message Route {
+ // String prefix that must match the beginning of the keys. Envoy will always favor the
+ // longest match.
+ string prefix = 1 [(validate.rules).string.min_bytes = 1];
+
+ // Indicates if the prefix needs to be removed from the key when forwarded.
+ bool remove_prefix = 2;
+
+ // Upstream cluster to forward the command to.
+ string cluster = 3 [(validate.rules).string.min_bytes = 1];
+ }
+
+ // List of prefix routes.
+ repeated Route routes = 1 [(gogoproto.nullable) = false];
+
+ // Indicates that prefix matching should be case insensitive.
+ bool case_insensitive = 2;
+
+ // Optional catch-all route to forward commands that don't match any of the routes. The
+ // catch-all route becomes required when no routes are specified.
+ string catch_all_cluster = 3;
+ }
+
+ // List of **unique** prefixes used to separate keys from different workloads to different
+ // clusters. Envoy will always favor the longest match first in case of overlap. A catch-all
+ // cluster can be used to forward commands when there is no match. The time
+ // complexity of a lookup is O(min(longest prefix length, key length)).
+ //
+ // Example:
+ //
+ // .. code-block:: yaml
+ //
+ // prefix_routes:
+ // routes:
+ // - prefix: "ab"
+ // cluster: "cluster_a"
+ // - prefix: "abc"
+ // cluster: "cluster_b"
+ //
+ // When using the above routes, commands would be handled as follows:
+ //
+ // * 'get abc:users' would retrieve the key 'abc:users' from cluster_b.
+ // * 'get ab:users' would retrieve the key 'ab:users' from cluster_a.
+ // * 'get z:users' would return a NoUpstreamHost error. A :ref:`catch-all
+ // cluster`
+ // would have retrieved the key from that cluster instead.
+ //
+ // See the :ref:`configuration section
+ // ` of the architecture overview for recommendations on
+ // configuring the backing clusters.
+ PrefixRoutes prefix_routes = 5 [(gogoproto.nullable) = false];
}
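The longest-prefix selection with a catch-all, as documented above, can be sketched as (illustrative only, not Envoy's C++ router):

```python
def select_cluster(key, routes, catch_all_cluster=None, case_insensitive=False):
    """Longest-prefix route selection as documented.

    *routes* is a list of (prefix, cluster) pairs; the longest matching
    prefix wins, and catch_all_cluster is used when nothing matches.
    """
    k = key.lower() if case_insensitive else key
    best_prefix, best_cluster = None, catch_all_cluster
    for prefix, cluster in routes:
        p = prefix.lower() if case_insensitive else prefix
        if k.startswith(p) and (best_prefix is None or len(p) > len(best_prefix)):
            best_prefix, best_cluster = p, cluster
    return best_cluster
```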
diff --git a/api/envoy/config/filter/network/tcp_proxy/v2/tcp_proxy.proto b/api/envoy/config/filter/network/tcp_proxy/v2/tcp_proxy.proto
index 12ce0d2757896..9eb8f4f078173 100644
--- a/api/envoy/config/filter/network/tcp_proxy/v2/tcp_proxy.proto
+++ b/api/envoy/config/filter/network/tcp_proxy/v2/tcp_proxy.proto
@@ -172,7 +172,8 @@ message TcpProxy {
// list of strings with each string in CIDR notation. Source and destination ports are
// specified as single strings containing a comma-separated list of ports and/or port ranges.
//
- DeprecatedV1 deprecated_v1 = 6 [deprecated = true];
+ // Deprecation pending https://github.com/envoyproxy/envoy/issues/4457
+ DeprecatedV1 deprecated_v1 = 6;
// The maximum number of unsuccessful connection attempts that will be made before
// giving up. If the parameter is not specified, 1 connection attempt will be made.
diff --git a/api/envoy/config/filter/network/zookeeper_proxy/v1alpha1/BUILD b/api/envoy/config/filter/network/zookeeper_proxy/v1alpha1/BUILD
new file mode 100644
index 0000000000000..8719f5083f126
--- /dev/null
+++ b/api/envoy/config/filter/network/zookeeper_proxy/v1alpha1/BUILD
@@ -0,0 +1,8 @@
+load("@envoy_api//bazel:api_build_system.bzl", "api_proto_library_internal")
+
+licenses(["notice"]) # Apache 2
+
+api_proto_library_internal(
+ name = "zookeeper_proxy",
+ srcs = ["zookeeper_proxy.proto"],
+)
diff --git a/api/envoy/config/filter/network/zookeeper_proxy/v1alpha1/zookeeper_proxy.proto b/api/envoy/config/filter/network/zookeeper_proxy/v1alpha1/zookeeper_proxy.proto
new file mode 100644
index 0000000000000..6a8afdd12ec07
--- /dev/null
+++ b/api/envoy/config/filter/network/zookeeper_proxy/v1alpha1/zookeeper_proxy.proto
@@ -0,0 +1,33 @@
+syntax = "proto3";
+
+package envoy.config.filter.network.zookeeper_proxy.v1alpha1;
+
+option java_outer_classname = "ZookeeperProxyProto";
+option java_multiple_files = true;
+option java_package = "io.envoyproxy.envoy.config.filter.network.zookeeper_proxy.v1alpha1";
+option go_package = "v1alpha1";
+
+import "validate/validate.proto";
+import "google/protobuf/wrappers.proto";
+
+// [#protodoc-title: ZooKeeper proxy]
+// ZooKeeper Proxy :ref:`configuration overview `.
+message ZooKeeperProxy {
+ // The human readable prefix to use when emitting :ref:`statistics
+ // `.
+ string stat_prefix = 1 [(validate.rules).string.min_bytes = 1];
+
+ // [#not-implemented-hide:] The optional path to use for writing ZooKeeper access logs.
+ // If the access log field is empty, access logs will not be written.
+ string access_log = 2;
+
+ // Messages (requests, responses and events) that are bigger than this value will
+ // be ignored. If it is not set, the default value is 1Mb.
+ //
+ // The value here should match the jute.maxbuffer property in your cluster configuration:
+ //
+ // https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#Unsafe+Options
+ //
+ // if that is set. If it isn't, ZooKeeper's default is also 1Mb.
+ google.protobuf.UInt32Value max_packet_bytes = 3;
+}
diff --git a/api/envoy/config/metrics/v2/stats.proto b/api/envoy/config/metrics/v2/stats.proto
index 27a838124a066..08172180b5451 100644
--- a/api/envoy/config/metrics/v2/stats.proto
+++ b/api/envoy/config/metrics/v2/stats.proto
@@ -59,9 +59,8 @@ message StatsConfig {
// If any default tags are specified twice, the config will be considered
// invalid.
//
- // See `well_known_names.h
- // `_
- // for a list of the default tags in Envoy.
+ // See :repo:`well_known_names.h ` for a list of the
+ // default tags in Envoy.
//
// If not provided, the value is assumed to be true.
google.protobuf.BoolValue use_all_default_tags = 2;
@@ -166,9 +165,8 @@ message StatsMatcher {
message TagSpecifier {
// Attaches an identifier to the tag values to identify the tag being in the
// sink. Envoy has a set of default names and regexes to extract dynamic
- // portions of existing stats, which can be found in `well_known_names.h
- // `_
- // in the Envoy repository. If a :ref:`tag_name
+ // portions of existing stats, which can be found in :repo:`well_known_names.h
+ // ` in the Envoy repository. If a :ref:`tag_name
// ` is provided in the config and
// neither :ref:`regex ` or
// :ref:`fixed_value ` were specified,
diff --git a/api/envoy/data/accesslog/v2/accesslog.proto b/api/envoy/data/accesslog/v2/accesslog.proto
index b387433394e55..f8058dedc3462 100644
--- a/api/envoy/data/accesslog/v2/accesslog.proto
+++ b/api/envoy/data/accesslog/v2/accesslog.proto
@@ -332,4 +332,7 @@ message HTTPResponseProperties {
// Map of trailers configured to be logged.
map<string, string> response_trailers = 5;
+
+ // The HTTP response code details.
+ string response_code_details = 6;
}
diff --git a/api/envoy/service/auth/v2/BUILD b/api/envoy/service/auth/v2/BUILD
index 5cf93deb777cf..57041668ddc8e 100644
--- a/api/envoy/service/auth/v2/BUILD
+++ b/api/envoy/service/auth/v2/BUILD
@@ -9,6 +9,7 @@ api_proto_library_internal(
],
deps = [
"//envoy/api/v2/core:address",
+ "//envoy/api/v2/core:base",
],
)
diff --git a/api/envoy/service/auth/v2/attribute_context.proto b/api/envoy/service/auth/v2/attribute_context.proto
index a012a17cd9ad7..b110cec50ed20 100644
--- a/api/envoy/service/auth/v2/attribute_context.proto
+++ b/api/envoy/service/auth/v2/attribute_context.proto
@@ -6,6 +6,7 @@ option java_outer_classname = "AttributeContextProto";
option java_multiple_files = true;
option java_package = "io.envoyproxy.envoy.service.auth.v2";
+import "envoy/api/v2/core/base.proto";
import "envoy/api/v2/core/address.proto";
import "google/protobuf/timestamp.proto";
@@ -86,7 +87,8 @@ message AttributeContext {
// lowercased, because HTTP header keys are case-insensitive.
map<string, string> headers = 3;
- // The HTTP URL path.
+ // The request target, as it appears in the first line of the HTTP request. This includes
+ // the URL path and query-string. No decoding is performed.
string path = 4;
// The HTTP request `Host` or `Authority` header value.
@@ -95,19 +97,25 @@ message AttributeContext {
// The HTTP URL scheme, such as `http` and `https`.
string scheme = 6;
- // The HTTP URL query in the format of `name1=value`&name2=value2`, as it
- // appears in the first line of the HTTP request. No decoding is performed.
+ // This field is always empty, and exists for compatibility reasons. The HTTP URL query is
+ // included in `path` field.
string query = 7;
- // The HTTP URL fragment, excluding leading `#`. No URL decoding is performed.
+ // This field is always empty, and exists for compatibility reasons. The URL fragment is
+ // not submitted as part of HTTP requests; it is unknowable.
string fragment = 8;
// The HTTP request size in bytes. If unknown, it must be -1.
int64 size = 9;
- // The network protocol used with the request, such as
- // "http/1.1", "spdy/3", "h2", "h2c"
+ // The network protocol used with the request, such as "HTTP/1.0", "HTTP/1.1", or "HTTP/2".
+ //
+ // See :repo:`headers.h:ProtocolStrings ` for a list of all
+ // possible values.
string protocol = 10;
+
+ // The HTTP request body.
+ string body = 11;
}
// The source of a network activity, such as starting a TCP connection.
diff --git a/api/envoy/service/auth/v2/external_auth.proto b/api/envoy/service/auth/v2/external_auth.proto
index 0f723c98e46c2..ce28506cadfaa 100644
--- a/api/envoy/service/auth/v2/external_auth.proto
+++ b/api/envoy/service/auth/v2/external_auth.proto
@@ -61,7 +61,8 @@ message OkHttpResponse {
// Intended for gRPC and Network Authorization servers `only`.
message CheckResponse {
- // Status `OK` allows the request. Any other status indicates the request should be denied.
+ // Status `OK` allows the request. Status `UNKNOWN` causes Envoy to abort. Any other status
+ // indicates the request should be denied.
google.rpc.Status status = 1;
// A message that contains HTTP response attributes. This message is
diff --git a/api/udpa/data/orca/v1/BUILD b/api/udpa/data/orca/v1/BUILD
new file mode 100644
index 0000000000000..096ca28bac3b3
--- /dev/null
+++ b/api/udpa/data/orca/v1/BUILD
@@ -0,0 +1,16 @@
+load("@envoy_api//bazel:api_build_system.bzl", "api_go_proto_library", "api_proto_library")
+
+licenses(["notice"]) # Apache 2
+
+api_proto_library(
+ name = "orca_load_report",
+ srcs = ["orca_load_report.proto"],
+ visibility = [
+ "//visibility:public",
+ ],
+)
+
+api_go_proto_library(
+ name = "orca_load_report",
+ proto = ":orca_load_report",
+)
diff --git a/api/udpa/data/orca/v1/orca_load_report.proto b/api/udpa/data/orca/v1/orca_load_report.proto
new file mode 100644
index 0000000000000..bed48ed2a88ed
--- /dev/null
+++ b/api/udpa/data/orca/v1/orca_load_report.proto
@@ -0,0 +1,31 @@
+syntax = "proto3";
+
+package udpa.data.orca.v1;
+
+option java_outer_classname = "OrcaLoadReportProto";
+option java_multiple_files = true;
+option java_package = "io.envoyproxy.udpa.data.orca.v1";
+option go_package = "v1";
+
+import "validate/validate.proto";
+
+// See section `ORCA load report format` of the design document in
+// :ref:`https://github.com/envoyproxy/envoy/issues/6614`.
+
+message OrcaLoadReport {
+ // CPU utilization expressed as a fraction of available CPU resources. This
+ // should be derived from a sample or measurement taken during the request.
+ double cpu_utilization = 1 [(validate.rules).double.gte = 0, (validate.rules).double.lte = 1];
+
+ // Memory utilization expressed as a fraction of available memory
+ // resources. This should be derived from a sample or measurement taken
+ // during the request.
+ double mem_utilization = 2 [(validate.rules).double.gte = 0, (validate.rules).double.lte = 1];
+
+ // Application specific requests costs. Each value may be an absolute cost (e.g.
+ // 3487 bytes of storage) or utilization associated with the request,
+ // expressed as a fraction of total resources available. Utilization
+ // metrics should be derived from a sample or measurement taken
+ // during the request.
+  map<string, double> request_cost_or_utilization = 3;
+}
\ No newline at end of file
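The field constraints expressed by the `validate` rules above amount to the following check (a sketch, not the generated validation code):

```python
def validate_orca_report(report):
    """Check the documented constraints on an OrcaLoadReport-like dict:
    cpu_utilization and mem_utilization must lie in [0, 1]."""
    for field in ("cpu_utilization", "mem_utilization"):
        value = report.get(field, 0.0)
        if not (0.0 <= value <= 1.0):
            return False
    return True
```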
diff --git a/api/udpa/service/orca/v1/BUILD b/api/udpa/service/orca/v1/BUILD
new file mode 100644
index 0000000000000..72543e8092216
--- /dev/null
+++ b/api/udpa/service/orca/v1/BUILD
@@ -0,0 +1,20 @@
+load("@envoy_api//bazel:api_build_system.bzl", "api_go_grpc_library", "api_proto_library_internal")
+
+licenses(["notice"]) # Apache 2
+
+api_proto_library_internal(
+ name = "orca",
+ srcs = ["orca.proto"],
+ has_services = 1,
+ deps = [
+ "//udpa/data/orca/v1:orca_load_report",
+ ],
+)
+
+api_go_grpc_library(
+ name = "orca",
+ proto = ":orca",
+ deps = [
+ "//udpa/data/orca/v1:orca_load_report_go_proto",
+ ],
+)
diff --git a/api/udpa/service/orca/v1/orca.proto b/api/udpa/service/orca/v1/orca.proto
new file mode 100644
index 0000000000000..87871d209a4cf
--- /dev/null
+++ b/api/udpa/service/orca/v1/orca.proto
@@ -0,0 +1,38 @@
+syntax = "proto3";
+
+package udpa.service.orca.v1;
+
+option java_outer_classname = "OrcaProto";
+option java_multiple_files = true;
+option java_package = "io.envoyproxy.udpa.service.orca.v1";
+option go_package = "v1";
+
+import "udpa/data/orca/v1/orca_load_report.proto";
+
+import "google/protobuf/duration.proto";
+
+import "validate/validate.proto";
+
+// See section `Out-of-band (OOB) reporting` of the design document in
+// :ref:`https://github.com/envoyproxy/envoy/issues/6614`.
+
+// Out-of-band (OOB) load reporting service for the additional load reporting
+// agent that does not sit in the request path. Reports are periodically sampled
+// with sufficient frequency to provide temporal association with requests.
+// OOB reporting compensates for a limitation of in-band reporting: it reveals
+// costs for backends that do not provide a steady stream of telemetry, such as
+// long running stream operations and zero QPS services. This is a server
+// streaming service; a client needs to terminate the current RPC and initiate
+// a new call to change the backend reporting frequency.
+service OpenRcaService {
+ rpc StreamCoreMetrics(OrcaLoadReportRequest) returns (stream udpa.data.orca.v1.OrcaLoadReport);
+}
+
+message OrcaLoadReportRequest {
+ // Interval for generating Open RCA core metric responses.
+ google.protobuf.Duration report_interval = 1;
+ // Request costs to collect. If this is empty, all known requests costs tracked by
+ // the load reporting agent will be returned. This provides an opportunity for
+ // the client to selectively obtain a subset of tracked costs.
+ repeated string request_cost_names = 2;
+}
diff --git a/api/xds_protocol.rst b/api/xds_protocol.rst
new file mode 100644
index 0000000000000..40f323c4bd0ad
--- /dev/null
+++ b/api/xds_protocol.rst
@@ -0,0 +1,456 @@
+xDS REST and gRPC protocol
+==========================
+
+Envoy discovers its various dynamic resources via the filesystem or by
+querying one or more management servers. Collectively, these discovery
+services and their corresponding APIs are referred to as *xDS*.
+Resources are requested via *subscriptions*, by specifying a filesystem
+path to watch, initiating gRPC streams or polling a REST-JSON URL. The
+latter two methods involve sending requests with a :ref:`DiscoveryRequest `
+proto payload. Resources are delivered in a
+:ref:`DiscoveryResponse `
+proto payload in all methods. We discuss each type of subscription
+below.
+
+Filesystem subscriptions
+------------------------
+
+The simplest approach to delivering dynamic configuration is to place it
+at a well known path specified in the :ref:`ConfigSource `.
+Envoy will use `inotify` (`kqueue` on macOS) to monitor the file for
+changes and parse the
+:ref:`DiscoveryResponse ` proto in the file on update.
+Binary protobufs, JSON, YAML and proto text are supported formats for
+the
+:ref:`DiscoveryResponse `.
+
+There is no mechanism available for filesystem subscriptions to ACK/NACK
+updates beyond stats counters and logs. The last valid configuration for
+an xDS API will continue to apply if a configuration update is
+rejected.
+
+Streaming gRPC subscriptions
+----------------------------
+
+Singleton resource type discovery
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A gRPC
+:ref:`ApiConfigSource `
+can be specified independently for each xDS API, pointing at an upstream
+cluster corresponding to a management server. This will initiate an
+independent bidirectional gRPC stream for each xDS resource type,
+potentially to distinct management servers. API delivery is eventually
+consistent. See :ref:`Aggregated Discovery Service` below for
+situations in which explicit control of sequencing is required.
+
+Type URLs
+^^^^^^^^^
+
+Each xDS API is concerned with resources of a given type. There is a 1:1
+correspondence between an xDS API and a resource type. That is:
+
+- LDS: :ref:`envoy.api.v2.Listener `
+- RDS: :ref:`envoy.api.v2.RouteConfiguration `
+- VHDS: :ref:`envoy.api.v2.Vhds `
+- CDS: :ref:`envoy.api.v2.Cluster `
+- EDS: :ref:`envoy.api.v2.ClusterLoadAssignment `
+- SDS: :ref:`envoy.api.v2.Auth.Secret `
+
+The concept of `type URLs `_ appears below, and takes the form
+`type.googleapis.com/<resource type>`, e.g.
+`type.googleapis.com/envoy.api.v2.Cluster` for CDS. In various
+requests from Envoy and responses by the management server, the resource
+type URL is stated.
+
+ACK/NACK and versioning
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Each stream begins with a
+:ref:`DiscoveryRequest ` from Envoy, specifying
+the list of resources to subscribe to, the type URL corresponding to the
+subscribed resources, the node identifier and an empty :ref:`version_info `.
+An example EDS request might be:
+
+.. code:: yaml
+
+ version_info:
+ node: { id: envoy }
+ resource_names:
+ - foo
+ - bar
+ type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+ response_nonce:
+
+The management server may reply either immediately or when the requested
+resources are available with a :ref:`DiscoveryResponse `, e.g.:
+
+.. code:: yaml
+
+ version_info: X
+ resources:
+ - foo ClusterLoadAssignment proto encoding
+ - bar ClusterLoadAssignment proto encoding
+ type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+ nonce: A
+
+After processing the :ref:`DiscoveryResponse `, Envoy will send a new
+request on the stream, specifying the last version successfully applied
+and the nonce provided by the management server. If the update was
+successfully applied, the :ref:`version_info ` will be **X**, as indicated
+in the sequence diagram:
+
+.. figure:: diagrams/simple-ack.svg
+ :alt: Version update after ACK
+
+In this sequence diagram, and below, the following format is used to abbreviate messages:
+
+- *DiscoveryRequest*: (V=version_info,R=resource_names,N=response_nonce,T=type_url)
+- *DiscoveryResponse*: (V=version_info,R=resources,N=nonce,T=type_url)
+
+The version provides Envoy and the management server a shared notion of
+the currently applied configuration, as well as a mechanism to ACK/NACK
+configuration updates. If Envoy had instead rejected configuration
+update **X**, it would reply with :ref:`error_detail `
+populated and its previous version, which in this case was the empty
+initial version. The :ref:`error_detail ` has more details around the exact
+error message populated in the message field:
+
+.. figure:: diagrams/simple-nack.svg
+ :alt: No version update after NACK
+
+Later, an API update may succeed at a new version **Y**:
+
+
+.. figure:: diagrams/later-ack.svg
+ :alt: ACK after NACK
+
+Each stream has its own notion of versioning; there is no shared
+versioning across resource types. When ADS is not used, even each
+resource of a given resource type may have a distinct version, since the
+Envoy API allows distinct EDS/RDS resources to point at different :ref:`ConfigSources `.
+
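The ACK/NACK rules above can be summarized in a small sketch, using plain dicts as stand-ins for the `DiscoveryRequest`/`DiscoveryResponse` protos:

```python
def make_ack(applied_ok, response, last_good_version):
    """Build the DiscoveryRequest sent after processing a DiscoveryResponse.

    Sketch of the documented rules: the nonce always echoes the response's
    nonce; version_info is the new version on ACK, or the last successfully
    applied version on NACK, with error_detail populated.
    """
    request = {
        "type_url": response["type_url"],
        "response_nonce": response["nonce"],
    }
    if applied_ok:
        request["version_info"] = response["version_info"]  # ACK: advance
    else:
        request["version_info"] = last_good_version         # NACK: keep old
        request["error_detail"] = {"message": "configuration update rejected"}
    return request
```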
+.. _Resource Updates:
+
+When to send an update
+^^^^^^^^^^^^^^^^^^^^^^
+
+The management server should only send updates to the Envoy client when
+the resources in the :ref:`DiscoveryResponse ` have changed. Envoy replies
+to any :ref:`DiscoveryResponse ` with a :ref:`DiscoveryRequest ` containing the
+ACK/NACK immediately after it has been either accepted or rejected. If
+the management server repeatedly provides the same set of resources rather
+than waiting for a change to occur, Envoy and the management server will
+spin, with a severe performance impact.
+
+Within a stream, new :ref:`DiscoveryRequests ` supersede any prior
+:ref:`DiscoveryRequests ` having the same resource type. This means that
+the management server only needs to respond to the latest
+:ref:`DiscoveryRequest ` on each stream for any given resource type.
+
+Resource hints
+^^^^^^^^^^^^^^
+
+The :ref:`resource_names ` specified in the :ref:`DiscoveryRequest ` are a hint.
+Some resource types, e.g. `Clusters` and `Listeners`, will
+specify an empty :ref:`resource_names ` list, since Envoy is interested in
+learning about all the :ref:`Clusters (CDS) ` and :ref:`Listeners (LDS) `
+that the management server(s) know about for its node
+identification. Other resource types, e.g. :ref:`RouteConfiguration (RDS) `
+and :ref:`ClusterLoadAssignment (EDS) `, follow from earlier
+CDS/LDS updates, and Envoy is able to explicitly enumerate these
+resources.
+
+LDS/CDS resource hints will always be empty and it is expected that the
+management server will provide the complete state of the LDS/CDS
+resources in each response. An absent `Listener` or `Cluster` will
+be deleted.
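+
+For example (the node identification and cluster names below are
+illustrative), a wildcard CDS subscription and an explicit EDS subscription
+might look like:
+
+.. code:: yaml
+
+   # CDS: resource_names is empty, requesting all Clusters for this node.
+   version_info: ""
+   node: {id: "envoy-1", cluster: "front-proxy"}
+   type_url: type.googleapis.com/envoy.api.v2.Cluster
+
+   # EDS: resource_names enumerates the clusters learned from earlier CDS updates.
+   version_info: ""
+   node: {id: "envoy-1", cluster: "front-proxy"}
+   resource_names: ["foo", "bar"]
+   type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment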
+
+For EDS/RDS, the management server does not need to supply every
+requested resource and may also supply additional, unrequested
+resources. :ref:`resource_names ` is only a hint. Envoy will silently ignore
+any superfluous resources. When a requested resource is missing in a RDS
+or EDS update, Envoy will retain the last known value for this resource
+except in the case where the `Cluster` or `Listener` is being
+warmed. See the :ref:`Resource warming` section below for
+the expectations during warming. The management server may be able to
+infer all the required EDS/RDS resources from the :ref:`node `
+identification in the :ref:`DiscoveryRequest `, in which case this hint may
+be discarded. An empty EDS/RDS :ref:`DiscoveryResponse ` is effectively a
+no-op from the perspective of the respective resources in Envoy.
+
+When a `Listener` or `Cluster` is deleted, its corresponding EDS and
+RDS resources are also deleted inside the Envoy instance. In order for
+EDS resources to be known or tracked by Envoy, there must exist an
+applied `Cluster` definition (e.g. sourced via CDS). A similar
+relationship exists between RDS and `Listeners` (e.g. sourced via
+LDS).
+
+For EDS/RDS, Envoy may either generate a distinct stream for each
+resource of a given type (e.g. if each :ref:`ConfigSource ` has its own
+distinct upstream cluster for a management server), or may combine
+together multiple resource requests for a given resource type when they
+are destined for the same management server. While this is left to
+implementation specifics, management servers should be capable of
+handling one or more :ref:`resource_names ` for a given resource type in
+each request. Both sequence diagrams below are valid for fetching two
+EDS resources `{foo, bar}`:
+
+|Multiple EDS requests on the same stream| |Multiple EDS requests on
+distinct streams|
+
+Resource updates
+^^^^^^^^^^^^^^^^
+
+As discussed above, Envoy may update the list of :ref:`resource_names ` it
+presents to the management server in each :ref:`DiscoveryRequest ` that
+ACK/NACKs a specific :ref:`DiscoveryResponse `. In addition, Envoy may later
+issue additional :ref:`DiscoveryRequests ` at a given :ref:`version_info ` to
+update the management server with new resource hints. For example, if
+Envoy is at EDS version **X** and knows only about cluster ``foo``, but
+then receives a CDS update and learns about ``bar`` in addition, it may
+issue an additional :ref:`DiscoveryRequest ` for **X** with `{foo,bar}` as
+`resource_names`.
+
+.. figure:: diagrams/cds-eds-resources.svg
+ :alt: CDS response leads to EDS resource hint update
+
+A race condition may arise here: if a resource hint
+update is issued by Envoy at **X**, but before the management server
+processes the update it replies with a new version **Y**, the resource
+hint update may be interpreted as a rejection of **Y** since it presents an
+**X** :ref:`version_info `. To avoid this, the management server provides a
+``nonce`` that Envoy uses to indicate the specific :ref:`DiscoveryResponse `
+each :ref:`DiscoveryRequest ` corresponds to:
+
+.. figure:: diagrams/update-race.svg
+ :alt: EDS update race motivates nonces
+
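+Concretely (nonce and version values below are illustrative), each
+:ref:`DiscoveryResponse ` carries a nonce, and the following
+:ref:`DiscoveryRequest ` echoes it:
+
+.. code:: yaml
+
+   # DiscoveryResponse delivering version Y.
+   version_info: "Y"
+   nonce: "42"
+   type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+
+   # ACKing DiscoveryRequest: echoing the nonce lets the server distinguish
+   # this ACK from an earlier resource hint update sent at version X.
+   version_info: "Y"
+   response_nonce: "42"
+   type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+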
+The management server should not send a :ref:`DiscoveryResponse ` for any
+:ref:`DiscoveryRequest ` that has a stale nonce. A nonce becomes stale
+following a newer nonce being presented to Envoy in a
+:ref:`DiscoveryResponse `. A management server does not need to send an
+update until it determines a new version is available. Earlier requests
+at a version then also become stale. It may process multiple
+:ref:`DiscoveryRequests ` at a version until a new version is ready.
+
+.. figure:: diagrams/stale-requests.svg
+ :alt: Requests become stale
+
+An implication of the above resource update sequencing is that Envoy
+does not expect a :ref:`DiscoveryResponse ` for every :ref:`DiscoveryRequest `
+it issues.
+
+.. _Resource Warming:
+
+Resource warming
+~~~~~~~~~~~~~~~~
+
+:ref:`Clusters ` and
+:ref:`Listeners `
+go through warming before they can serve requests. This process
+happens both during :ref:`Envoy initialization `
+and when the `Cluster` or `Listener` is updated. Warming of
+`Cluster` is completed only when a `ClusterLoadAssignment` response
+is supplied by management server. Similarly, warming of `Listener` is
+completed only when a `RouteConfiguration` is supplied by management
+server if the listener refers to an RDS configuration. Management server
+is expected to provide the EDS/RDS updates during warming. If management
+server does not provide EDS/RDS responses, Envoy will not initialize
+itself during the initialization phase and the updates sent via CDS/LDS
+will not take effect until EDS/RDS responses are supplied.
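+
+As a sketch (the listener and route names below are illustrative), a warming
+dependency arises whenever a listener defers its routes to RDS:
+
+.. code:: yaml
+
+   # A Listener delivered via LDS; it remains warming until the named
+   # RouteConfiguration is supplied via RDS.
+   name: http_listener
+   filter_chains:
+   - filters:
+     - name: envoy.http_connection_manager
+       config:
+         stat_prefix: ingress_http
+         rds:
+           config_source: {ads: {}}
+           route_config_name: local_route
+         http_filters:
+         - name: envoy.router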
+
+Eventual consistency considerations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Since Envoy's xDS APIs are eventually consistent, traffic may drop
+briefly during updates. For example, if only cluster **X** is known via
+CDS/EDS, a `RouteConfiguration` references cluster **X** and is then
+adjusted to cluster **Y** just before the CDS/EDS update providing
+**Y**, traffic will be blackholed until **Y** is known to the
+Envoy instance.
+
+For some applications, a temporary drop of traffic is acceptable;
+retries at the client or by other Envoy sidecars will hide this drop.
+For other scenarios where drop can't be tolerated, traffic drop can
+be avoided by providing a CDS/EDS update with both **X** and
+**Y**, then an RDS update repointing from **X** to **Y**, and then a
+CDS/EDS update dropping **X**.
+
+In general, to avoid traffic drop, sequencing of updates should follow a
+make before break model, wherein:
+
+- CDS updates (if any) must always be pushed first.
+- EDS updates (if any) must arrive after CDS updates for the respective clusters.
+- LDS updates must arrive after corresponding CDS/EDS updates.
+- RDS updates related to the newly added listeners must arrive after CDS/EDS/LDS updates.
+- VHDS updates (if any) related to the newly added RouteConfigurations must arrive after RDS updates.
+- Stale CDS clusters and related EDS endpoints (ones no longer being referenced) can then be removed.
+
+xDS updates can be pushed independently if no new
+clusters/routes/listeners are added or if it's acceptable to temporarily
+drop traffic during updates. Note that in the case of LDS updates, the
+listeners will be warmed before they receive traffic, i.e. the dependent
+routes are fetched through RDS if configured. Clusters are warmed when
+adding/removing/updating clusters. On the other hand, routes are not
+warmed, i.e., the management plane must ensure that clusters referenced
+by a route are in place before pushing the updates for a route.
+
+.. _Aggregated Discovery Service:
+
+Aggregated Discovery Service (ADS)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+It's challenging to provide the above guarantees on sequencing to avoid
+traffic drop when management servers are distributed. ADS allows a single
+management server, via a single gRPC stream, to deliver all API updates.
+This provides the ability to carefully sequence updates to avoid traffic
+drop. With ADS, a single stream is used with multiple independent
+:ref:`DiscoveryRequest `/:ref:`DiscoveryResponse ` sequences multiplexed via the
+type URL. For any given type URL, the above sequencing of
+:ref:`DiscoveryRequest ` and :ref:`DiscoveryResponse ` messages applies. An
+example update sequence might look like:
+
+.. figure:: diagrams/ads.svg
+ :alt: EDS/CDS multiplexed on an ADS stream
+
+A single ADS stream is available per Envoy instance.
+
+An example minimal ``bootstrap.yaml`` fragment for ADS configuration is:
+
+.. code:: yaml
+
+ node:
+ id:
+ dynamic_resources:
+ cds_config: {ads: {}}
+ lds_config: {ads: {}}
+ ads_config:
+ api_type: GRPC
+ grpc_services:
+ envoy_grpc:
+ cluster_name: ads_cluster
+ static_resources:
+ clusters:
+ - name: ads_cluster
+ connect_timeout: { seconds: 5 }
+ type: STATIC
+ hosts:
+ - socket_address:
+ address:
+ port_value:
+ lb_policy: ROUND_ROBIN
+ http2_protocol_options: {}
+ upstream_connection_options:
+ # configure a TCP keep-alive to detect and reconnect to the admin
+ # server in the event of a TCP socket disconnection
+ tcp_keepalive:
+ ...
+ admin:
+ ...
+
+Incremental xDS
+~~~~~~~~~~~~~~~
+
+Incremental xDS is a separate xDS endpoint that:
+
+- Allows the protocol to communicate on the wire in terms of
+ resource/resource name deltas ("Delta xDS"). This supports the goal
+ of scalability of xDS resources. Rather than deliver all 100k
+ clusters when a single cluster is modified, the management server
+ only needs to deliver the single cluster that changed.
+- Allows Envoy to lazily request additional resources on demand.
+ For example, requesting a cluster only when a request for that
+ cluster arrives.
+
+An Incremental xDS session is always in the context of a gRPC
+bidirectional stream. This allows the xDS server to keep track of the
+state of xDS clients connected to it. There is no REST version of
+Incremental xDS yet.
+
+In the delta xDS wire protocol, the nonce field is required and used to
+pair a :ref:`DeltaDiscoveryResponse `
+to a :ref:`DeltaDiscoveryRequest `
+ACK or NACK. Optionally, a response message level :ref:`system_version_info `
+is present for debugging purposes only.
+
+:ref:`DeltaDiscoveryRequest ` can be sent in the following situations:
+
+- Initial message in an xDS bidirectional gRPC stream.
+- As an ACK or NACK response to a previous :ref:`DeltaDiscoveryResponse `. In this case the :ref:`response_nonce ` is set to the nonce value in the Response. ACK or NACK is determined by the absence or presence of :ref:`error_detail `.
+- Spontaneous :ref:`DeltaDiscoveryRequests ` from the client. This can be done to dynamically add or remove elements from the tracked :ref:`resource_names ` set. In this case :ref:`response_nonce ` must be omitted.
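+
+As an example (nonce and node values below are illustrative), an ACK of a
+delta update might be shaped as follows:
+
+.. code:: yaml
+
+   # ACK: response_nonce pairs this request with a DeltaDiscoveryResponse;
+   # the absence of error_detail indicates acceptance.
+   node: {id: "envoy-1"}
+   type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+   response_nonce: "7"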
+
+In this first example the client connects and receives a first update
+that it ACKs. The second update fails and the client NACKs the update.
+Later the xDS client spontaneously requests the "wc" resource.
+
+.. figure:: diagrams/incremental.svg
+ :alt: Incremental session example
+
+On reconnect the Incremental xDS client may tell the server of its known
+resources to avoid resending them over the network. Because no state is
+assumed to be preserved from the previous stream, the reconnecting
+client must provide the server with all resource names it is interested
+in.
+
+.. figure:: diagrams/incremental-reconnect.svg
+ :alt: Incremental reconnect example
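+
+A sketch of the first request on a reconnected stream (resource names and
+versions below are illustrative), in which the client re-subscribes to
+everything it is interested in while reporting the versions it already has
+via the ``initial_resource_versions`` map:
+
+.. code:: yaml
+
+   node: {id: "envoy-1"}
+   type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+   resource_names_subscribe: ["foo", "bar"]
+   # Versions of resources retained from before the disconnect.
+   initial_resource_versions:
+     foo: "3"
+     bar: "1"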
+
+Resource names
+^^^^^^^^^^^^^^
+
+Resources are identified by a resource name or an alias. Aliases of a
+resource, if present, can be identified by the alias field in the
+resource of a :ref:`DeltaDiscoveryResponse `. The resource name will be
+returned in the name field in the resource of a
+:ref:`DeltaDiscoveryResponse `.
+
+Subscribing to Resources
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The client can send either an alias or the name of a resource in the
+:ref:`resource_names_subscribe ` field of a :ref:`DeltaDiscoveryRequest ` in
+order to subscribe to a resource. Both the names and aliases of
+resources should be checked in order to determine whether the entity in
+question has been subscribed to.
+
+A :ref:`resource_names_subscribe ` field may contain resource names that the
+server believes the client is already subscribed to, and furthermore has
+the most recent versions of. However, the server *must* still provide
+those resources in the response; due to implementation details hidden
+from the server, the client may have "forgotten" those resources despite
+apparently remaining subscribed.
+
+Unsubscribing from Resources
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When a client loses interest in some resources, it will indicate that
+with the :ref:`resource_names_unsubscribe ` field of a
+:ref:`DeltaDiscoveryRequest `. As with :ref:`resource_names_subscribe `, these
+may be resource names or aliases.
+
+A :ref:`resource_names_unsubscribe ` field may contain superfluous resource
+names, which the server thought the client was already not subscribed
+to. The server must cleanly process such a request; it can simply ignore
+these phantom unsubscriptions.
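+
+A spontaneous delta request adjusting the tracked set might be sketched as
+(resource names below are illustrative):
+
+.. code:: yaml
+
+   # response_nonce is omitted; "baz" is added to and "foo" removed from the
+   # tracked resource set.
+   node: {id: "envoy-1"}
+   type_url: type.googleapis.com/envoy.api.v2.ClusterLoadAssignment
+   resource_names_subscribe: ["baz"]
+   resource_names_unsubscribe: ["foo"]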
+
+REST-JSON polling subscriptions
+-------------------------------
+
+Synchronous (long) polling via REST endpoints is also available for the
+xDS singleton APIs. The above sequencing of messages is similar, except
+no persistent stream is maintained to the management server. It is
+expected that there is only a single outstanding request at any point in
+time, and as a result the response nonce is optional in REST-JSON. The
+`JSON canonical transform of
+proto3 `__
+is used to encode :ref:`DiscoveryRequest ` and :ref:`DiscoveryResponse `
+messages. ADS is not available for REST-JSON polling.
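+
+For example, a CDS poll might be a POST to an endpoint such as
+``/v2/discovery:clusters`` (the path and field values here are illustrative)
+with a JSON body equivalent to:
+
+.. code:: yaml
+
+   # JSON body shown in equivalent YAML form.
+   version_info: "X"
+   node: {id: "envoy-1", cluster: "front-proxy"}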
+
+When the poll period is set to a small value, with the intention of long
+polling, there is also a requirement to avoid sending a
+:ref:`DiscoveryResponse ` :ref:`unless a change to the underlying resources has
+occurred `.
+
+.. |Multiple EDS requests on the same stream| image:: diagrams/eds-same-stream.svg
+.. |Multiple EDS requests on distinct streams| image:: diagrams/eds-distinct-stream.svg
\ No newline at end of file
diff --git a/bazel/BUILD b/bazel/BUILD
index 06dec8d89f45e..90271e7d9699e 100644
--- a/bazel/BUILD
+++ b/bazel/BUILD
@@ -105,6 +105,11 @@ config_setting(
values = {"define": "google_grpc=disabled"},
)
+config_setting(
+ name = "enable_path_normalization_by_default",
+ values = {"define": "path_normalization_by_default=true"},
+)
+
cc_proto_library(
name = "grpc_health_proto",
deps = ["@com_github_grpc_grpc//src/proto/grpc/health/v1:_health_proto_only"],
diff --git a/bazel/README.md b/bazel/README.md
index 9a678077f69ee..99adeb4d2b876 100644
--- a/bazel/README.md
+++ b/bazel/README.md
@@ -33,7 +33,7 @@ for how to update or override dependencies.
sudo apt-get install \
libtool \
cmake \
- clang-format-7 \
+ clang-format-8 \
automake \
autoconf \
make \
@@ -50,9 +50,9 @@ for how to update or override dependencies.
On macOS, you'll need to install several dependencies. This can be accomplished via [Homebrew](https://brew.sh/):
```
- brew install coreutils wget cmake libtool go bazel automake ninja llvm@7 autoconf
+ brew install coreutils wget cmake libtool go bazel automake ninja clang-format autoconf aspell
```
- _notes_: `coreutils` is used for `realpath`, `gmd5sum` and `gsha256sum`; `llvm@7` is used for `clang-format`
+ _notes_: `coreutils` is used for `realpath`, `gmd5sum` and `gsha256sum`
Envoy compiles and passes tests with the version of clang installed by XCode 9.3.0:
Apple LLVM version 9.1.0 (clang-902.0.30).
@@ -366,6 +366,8 @@ The following optional features can be enabled on the Bazel build command-line:
release builds so that the condition is not evaluated. This option has no effect in debug builds.
* memory-debugging (scribbling over memory after allocation and before freeing) with
`--define tcmalloc=debug`. Note this option cannot be used with FIPS-compliant mode BoringSSL.
+* Default [path normalization](https://github.com/envoyproxy/envoy/issues/6435) with
+ `--define path_normalization_by_default=true`. Note this can still be disabled by explicit xDS config.
## Disabling extensions
diff --git a/bazel/api_repositories.bzl b/bazel/api_repositories.bzl
new file mode 100644
index 0000000000000..016fb16c8a2ee
--- /dev/null
+++ b/bazel/api_repositories.bzl
@@ -0,0 +1,35 @@
+def _default_envoy_api_impl(ctx):
+ ctx.file("WORKSPACE", "")
+ ctx.file("BUILD.bazel", "")
+ api_dirs = [
+ "bazel",
+ "docs",
+ "envoy",
+ "examples",
+ "test",
+ "tools",
+ ]
+ for d in api_dirs:
+ ctx.symlink(ctx.path(ctx.attr.api).dirname.get_child(d), d)
+
+_default_envoy_api = repository_rule(
+ implementation = _default_envoy_api_impl,
+ attrs = {
+ "api": attr.label(default = "@envoy//api:BUILD"),
+ },
+)
+
+def envoy_api_dependencies():
+ # Treat the data plane API as an external repo, this simplifies exporting the API to
+ # https://github.com/envoyproxy/data-plane-api.
+ if "envoy_api" not in native.existing_rules().keys():
+ _default_envoy_api(name = "envoy_api")
+
+ native.bind(
+ name = "api_httpbody_protos",
+ actual = "@googleapis//:api_httpbody_protos",
+ )
+ native.bind(
+ name = "http_api_protos",
+ actual = "@googleapis//:http_api_protos",
+ )
diff --git a/bazel/cc_wrapper.py b/bazel/cc_wrapper.py
index ba55379abeb0a..d31847904d2f3 100755
--- a/bazel/cc_wrapper.py
+++ b/bazel/cc_wrapper.py
@@ -71,6 +71,11 @@ def main():
else:
argv += sys.argv[1:]
+ # Bazel will add -fuse-ld=gold in some cases, gcc/clang will take the last -fuse-ld argument,
+ # so whenever we see lld once, add it to the end.
+ if "-fuse-ld=lld" in argv:
+ argv.append("-fuse-ld=lld")
+
# Add compiler-specific options
if "clang" in compiler:
# This ensures that STL symbols are included.
diff --git a/bazel/envoy_build_system.bzl b/bazel/envoy_build_system.bzl
index e47592fcb006d..d7de0270b6c48 100644
--- a/bazel/envoy_build_system.bzl
+++ b/bazel/envoy_build_system.bzl
@@ -83,7 +83,8 @@ def envoy_copts(repository, test = False):
"//conditions:default": [],
}) + envoy_select_hot_restart(["-DENVOY_HOT_RESTART"], repository) + \
envoy_select_perf_annotation(["-DENVOY_PERF_ANNOTATION"]) + \
- envoy_select_google_grpc(["-DENVOY_GOOGLE_GRPC"], repository)
+ envoy_select_google_grpc(["-DENVOY_GOOGLE_GRPC"], repository) + \
+ envoy_select_path_normalization_by_default(["-DENVOY_NORMALIZE_PATH_BY_DEFAULT"], repository)
def envoy_static_link_libstdcpp_linkopts():
return envoy_select_force_libcpp(
@@ -443,7 +444,8 @@ def envoy_cc_test(
args = [],
shard_count = None,
coverage = True,
- local = False):
+ local = False,
+ size = "medium"):
test_lib_tags = []
if coverage:
test_lib_tags.append("coverage_test_lib")
@@ -472,6 +474,7 @@ def envoy_cc_test(
tags = tags + ["coverage_test"],
local = local,
shard_count = shard_count,
+ size = size,
)
# Envoy C++ related test infrastructure (that want gtest, gmock, but may be
@@ -484,7 +487,8 @@ def envoy_cc_test_infrastructure_library(
external_deps = [],
deps = [],
repository = "",
- tags = []):
+ tags = [],
+ include_prefix = None):
native.cc_library(
name = name,
srcs = srcs,
@@ -496,8 +500,10 @@ def envoy_cc_test_infrastructure_library(
envoy_external_dep_path("googletest"),
],
tags = tags,
+ include_prefix = include_prefix,
alwayslink = 1,
linkstatic = 1,
+ visibility = ["//visibility:public"],
)
# Envoy C++ test related libraries (that want gtest, gmock) should be specified
@@ -510,7 +516,8 @@ def envoy_cc_test_library(
external_deps = [],
deps = [],
repository = "",
- tags = []):
+ tags = [],
+ include_prefix = None):
deps = deps + [
repository + "//test/test_common:printers_includes",
]
@@ -523,6 +530,7 @@ def envoy_cc_test_library(
deps,
repository,
tags,
+ include_prefix,
)
# Envoy test binaries should be specified with this function.
@@ -645,6 +653,13 @@ def envoy_select_hot_restart(xs, repository = ""):
"//conditions:default": xs,
})
+# Select the given values if default path normalization is on in the current build.
+def envoy_select_path_normalization_by_default(xs, repository = ""):
+ return select({
+ repository + "//bazel:enable_path_normalization_by_default": xs,
+ "//conditions:default": [],
+ })
+
def envoy_select_perf_annotation(xs):
return select({
"@envoy//bazel:enable_perf_annotation": xs,
@@ -681,7 +696,4 @@ def envoy_select_boringssl(if_fips, default = None):
# Selects the part of QUICHE that does not yet work with the current CI.
def envoy_select_quiche(xs, repository = ""):
- return select({
- repository + "//bazel:enable_quiche": xs,
- "//conditions:default": [],
- })
+ return xs
diff --git a/bazel/external/quiche.BUILD b/bazel/external/quiche.BUILD
index e41693b201776..0d24bc7e22c68 100644
--- a/bazel/external/quiche.BUILD
+++ b/bazel/external/quiche.BUILD
@@ -62,13 +62,15 @@ cc_library(
"quiche/http2/platform/api/http2_string.h",
"quiche/http2/platform/api/http2_string_piece.h",
# TODO: uncomment the following files as implementations are added.
- # "quiche/http2/platform/api/http2_bug_tracker.h",
# "quiche/http2/platform/api/http2_flags.h",
- # "quiche/http2/platform/api/http2_mock_log.h",
# "quiche/http2/platform/api/http2_reconstruct_object.h",
# "quiche/http2/platform/api/http2_test_helpers.h",
] + envoy_select_quiche(
- ["quiche/http2/platform/api/http2_string_utils.h"],
+ [
+ "quiche/http2/platform/api/http2_bug_tracker.h",
+ "quiche/http2/platform/api/http2_logging.h",
+ "quiche/http2/platform/api/http2_string_utils.h",
+ ],
"@envoy",
),
visibility = ["//visibility:public"],
@@ -90,17 +92,39 @@ cc_library(
# TODO: uncomment the following files as implementations are added.
# "quiche/spdy/platform/api/spdy_flags.h",
] + envoy_select_quiche(
- ["quiche/spdy/platform/api/spdy_string_utils.h"],
+ [
+ "quiche/spdy/platform/api/spdy_bug_tracker.h",
+ "quiche/spdy/platform/api/spdy_logging.h",
+ "quiche/spdy/platform/api/spdy_string_utils.h",
+ ],
"@envoy",
),
visibility = ["//visibility:public"],
deps = ["@envoy//source/extensions/quic_listeners/quiche/platform:spdy_platform_impl_lib"],
)
+cc_library(
+ name = "spdy_simple_arena_lib",
+ srcs = ["quiche/spdy/core/spdy_simple_arena.cc"],
+ hdrs = ["quiche/spdy/core/spdy_simple_arena.h"],
+ visibility = ["//visibility:public"],
+ deps = [":spdy_platform"],
+)
+
+cc_library(
+ name = "spdy_platform_unsafe_arena_lib",
+ hdrs = ["quiche/spdy/platform/api/spdy_unsafe_arena.h"],
+ visibility = ["//visibility:public"],
+ deps = ["@envoy//source/extensions/quic_listeners/quiche/platform:spdy_platform_unsafe_arena_impl_lib"],
+)
+
cc_library(
name = "quic_platform",
srcs = ["quiche/quic/platform/api/quic_mutex.cc"] + envoy_select_quiche(
- ["quiche/quic/platform/api/quic_hostname_utils.cc"],
+ [
+ "quiche/quic/platform/api/quic_file_utils.cc",
+ "quiche/quic/platform/api/quic_hostname_utils.cc",
+ ],
"@envoy",
),
hdrs = [
@@ -108,7 +132,10 @@ cc_library(
"quiche/quic/platform/api/quic_mutex.h",
"quiche/quic/platform/api/quic_str_cat.h",
] + envoy_select_quiche(
- ["quiche/quic/platform/api/quic_hostname_utils.h"],
+ [
+ "quiche/quic/platform/api/quic_file_utils.h",
+ "quiche/quic/platform/api/quic_hostname_utils.h",
+ ],
"@envoy",
),
visibility = ["//visibility:public"],
@@ -125,12 +152,25 @@ cc_library(
deps = ["@envoy//source/extensions/quic_listeners/quiche/platform:quic_platform_export_impl_lib"],
)
+cc_library(
+ name = "quic_platform_port_utils",
+ testonly = 1,
+ hdrs = envoy_select_quiche(
+ ["quiche/quic/platform/api/quic_port_utils.h"],
+ "@envoy",
+ ),
+ visibility = ["//visibility:public"],
+ deps = envoy_select_quiche(
+ ["@envoy//source/extensions/quic_listeners/quiche/platform:quic_platform_port_utils_impl_lib"],
+ "@envoy",
+ ),
+)
+
cc_library(
name = "quic_platform_base",
hdrs = [
"quiche/quic/platform/api/quic_aligned.h",
"quiche/quic/platform/api/quic_arraysize.h",
- "quiche/quic/platform/api/quic_bug_tracker.h",
"quiche/quic/platform/api/quic_client_stats.h",
"quiche/quic/platform/api/quic_containers.h",
"quiche/quic/platform/api/quic_endian.h",
@@ -139,22 +179,17 @@ cc_library(
"quiche/quic/platform/api/quic_fallthrough.h",
"quiche/quic/platform/api/quic_flag_utils.h",
"quiche/quic/platform/api/quic_iovec.h",
- "quiche/quic/platform/api/quic_logging.h",
"quiche/quic/platform/api/quic_map_util.h",
- "quiche/quic/platform/api/quic_mock_log.h",
"quiche/quic/platform/api/quic_prefetch.h",
"quiche/quic/platform/api/quic_ptr_util.h",
"quiche/quic/platform/api/quic_reference_counted.h",
"quiche/quic/platform/api/quic_server_stats.h",
- "quiche/quic/platform/api/quic_stack_trace.h",
+ "quiche/quic/platform/api/quic_stream_buffer_allocator.h",
"quiche/quic/platform/api/quic_string_piece.h",
"quiche/quic/platform/api/quic_test_output.h",
"quiche/quic/platform/api/quic_uint128.h",
- "quiche/quic/platform/api/quic_thread.h",
# TODO: uncomment the following files as implementations are added.
# "quiche/quic/platform/api/quic_clock.h",
- # "quiche/quic/platform/api/quic_expect_bug.h",
- # "quiche/quic/platform/api/quic_file_utils.h",
# "quiche/quic/platform/api/quic_flags.h",
# "quiche/quic/platform/api/quic_fuzzed_data_provider.h",
# "quiche/quic/platform/api/quic_goog_cc_sender.h",
@@ -166,15 +201,19 @@ cc_library(
# "quiche/quic/platform/api/quic_mem_slice_storage.h",
# "quiche/quic/platform/api/quic_pcc_sender.h",
# "quiche/quic/platform/api/quic_socket_address.h",
- # "quiche/quic/platform/api/quic_stack_trace.h",
- # "quiche/quic/platform/api/quic_test.h",
# "quiche/quic/platform/api/quic_test_loopback.h",
# "quiche/quic/platform/api/quic_test_mem_slice_vector.h",
] + envoy_select_quiche(
[
+ "quiche/quic/platform/api/quic_bug_tracker.h",
+ "quiche/quic/platform/api/quic_expect_bug.h",
+ "quiche/quic/platform/api/quic_mock_log.h",
+ "quiche/quic/platform/api/quic_logging.h",
+ "quiche/quic/platform/api/quic_stack_trace.h",
"quiche/quic/platform/api/quic_string_utils.h",
"quiche/quic/platform/api/quic_test.h",
"quiche/quic/platform/api/quic_text_utils.h",
+ "quiche/quic/platform/api/quic_thread.h",
],
"@envoy",
),
@@ -200,6 +239,20 @@ cc_library(
deps = [":quic_platform"],
)
+cc_library(
+ name = "quic_buffer_allocator_lib",
+ srcs = [
+ "quiche/quic/core/quic_buffer_allocator.cc",
+ "quiche/quic/core/quic_simple_buffer_allocator.cc",
+ ],
+ hdrs = [
+ "quiche/quic/core/quic_buffer_allocator.h",
+ "quiche/quic/core/quic_simple_buffer_allocator.h",
+ ],
+ visibility = ["//visibility:public"],
+ deps = [":quic_platform_export"],
+)
+
envoy_cc_test(
name = "http2_platform_test",
srcs = envoy_select_quiche(
@@ -222,12 +275,12 @@ envoy_cc_test(
envoy_cc_test(
name = "quic_platform_test",
- srcs = [
- "quiche/quic/platform/api/quic_reference_counted_test.cc",
- ] + envoy_select_quiche(
+ srcs = envoy_select_quiche(
[
- "quiche/quic/platform/api/quic_text_utils_test.cc",
+ "quiche/quic/platform/api/quic_endian_test.cc",
+ "quiche/quic/platform/api/quic_reference_counted_test.cc",
"quiche/quic/platform/api/quic_string_utils_test.cc",
+ "quiche/quic/platform/api/quic_text_utils_test.cc",
],
"@envoy",
),
diff --git a/bazel/genrule_repository.bzl b/bazel/genrule_repository.bzl
index cdddf14922de8..0689c39c88b0b 100644
--- a/bazel/genrule_repository.bzl
+++ b/bazel/genrule_repository.bzl
@@ -105,9 +105,11 @@ def _genrule_environment(ctx):
# running.
#
# https://stackoverflow.com/questions/37603238/fsanitize-not-using-gold-linker-in-gcc-6-1
- force_ld_gold = []
- if "gcc" in c_compiler or "g++" in c_compiler:
- force_ld_gold = ["-fuse-ld=gold"]
+ force_ld = []
+ if "clang" in c_compiler:
+ force_ld = ["-fuse-ld=lld"]
+ elif "gcc" in c_compiler or "g++" in c_compiler:
+ force_ld = ["-fuse-ld=gold"]
cc_flags = []
ld_flags = []
@@ -117,11 +119,11 @@ def _genrule_environment(ctx):
if ctx.var.get("ENVOY_CONFIG_ASAN"):
cc_flags += asan_flags
ld_flags += asan_flags
- ld_flags += force_ld_gold
+ ld_flags += force_ld
if ctx.var.get("ENVOY_CONFIG_TSAN"):
cc_flags += tsan_flags
ld_flags += tsan_flags
- ld_flags += force_ld_gold
+ ld_flags += force_ld
lines.append("export CFLAGS=%r" % (" ".join(cc_flags),))
lines.append("export LDFLAGS=%r" % (" ".join(ld_flags),))
diff --git a/bazel/repositories.bzl b/bazel/repositories.bzl
index f2cfc4cfe6c1a..a6b762243efeb 100644
--- a/bazel/repositories.bzl
+++ b/bazel/repositories.bzl
@@ -1,6 +1,6 @@
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load(":genrule_repository.bzl", "genrule_repository")
-load("//api/bazel:envoy_http_archive.bzl", "envoy_http_archive")
+load("@envoy_api//bazel:envoy_http_archive.bzl", "envoy_http_archive")
load(":repository_locations.bzl", "REPOSITORY_LOCATIONS")
load(
"@bazel_tools//tools/cpp:windows_cc_configure.bzl",
@@ -8,6 +8,7 @@ load(
"setup_vc_env_vars",
)
load("@bazel_tools//tools/cpp:lib_cc_configure.bzl", "get_env_var")
+load("@envoy_api//bazel:repositories.bzl", "api_dependencies")
# dict of {build recipe name: longform extension name,}
PPC_SKIP_TARGETS = {"luajit": "envoy.filters.http.lua"}
@@ -38,27 +39,6 @@ _default_envoy_build_config = repository_rule(
},
)
-def _default_envoy_api_impl(ctx):
- ctx.file("WORKSPACE", "")
- ctx.file("BUILD.bazel", "")
- api_dirs = [
- "bazel",
- "docs",
- "envoy",
- "examples",
- "test",
- "tools",
- ]
- for d in api_dirs:
- ctx.symlink(ctx.path(ctx.attr.api).dirname.get_child(d), d)
-
-_default_envoy_api = repository_rule(
- implementation = _default_envoy_api_impl,
- attrs = {
- "api": attr.label(default = "@envoy//api:BUILD"),
- },
-)
-
# Python dependencies. If these become non-trivial, we might be better off using a virtualenv to
# wrap them, but for now we can treat them as first-class Bazel.
def _python_deps():
@@ -94,6 +74,14 @@ def _python_deps():
name = "com_github_twitter_common_finagle_thrift",
build_file = "@envoy//bazel/external:twitter_common_finagle_thrift.BUILD",
)
+ _repository_impl(
+ name = "six_archive",
+ build_file = "@com_google_protobuf//:six.BUILD",
+ )
+ native.bind(
+ name = "six",
+ actual = "@six_archive//:six",
+ )
# Bazel native C++ dependencies. For the dependencies that doesn't provide autoconf/automake builds.
def _cc_deps():
@@ -127,29 +115,6 @@ def _go_deps(skip_targets):
_repository_impl("io_bazel_rules_go")
_repository_impl("bazel_gazelle")
-def _envoy_api_deps():
- # Treat the data plane API as an external repo, this simplifies exporting the API to
- # https://github.com/envoyproxy/data-plane-api.
- if "envoy_api" not in native.existing_rules().keys():
- _default_envoy_api(name = "envoy_api")
-
- native.bind(
- name = "api_httpbody_protos",
- actual = "@googleapis//:api_httpbody_protos",
- )
- native.bind(
- name = "http_api_protos",
- actual = "@googleapis//:http_api_protos",
- )
- _repository_impl(
- name = "six_archive",
- build_file = "@com_google_protobuf//:six.BUILD",
- )
- native.bind(
- name = "six",
- actual = "@six_archive//:six",
- )
-
def envoy_dependencies(skip_targets = []):
# Treat Envoy's overall build config as an external repo, so projects that
# build Envoy as a subcomponent can easily override the config.
@@ -207,7 +172,7 @@ def envoy_dependencies(skip_targets = []):
_python_deps()
_cc_deps()
_go_deps(skip_targets)
- _envoy_api_deps()
+ api_dependencies()
def _boringssl():
_repository_impl("boringssl")
diff --git a/bazel/repository_locations.bzl b/bazel/repository_locations.bzl
index b363f4f4bfc83..afac13ec52d8a 100644
--- a/bazel/repository_locations.bzl
+++ b/bazel/repository_locations.bzl
@@ -236,8 +236,8 @@ REPOSITORY_LOCATIONS = dict(
urls = ["https://github.com/google/subpar/archive/1.3.0.tar.gz"],
),
com_googlesource_quiche = dict(
- # Static snapshot of https://quiche.googlesource.com/quiche/+archive/4fbea5de9afdf30611b27afd54c45a596944f9c2.tar.gz
- sha256 = "2cf9f5ea62a03ca0d8773fe4f56949b72c28ac5b1bcf43d850a571f4e32add2a",
- urls = ["https://storage.googleapis.com/quiche-envoy-integration/4fbea5de9afdf30611b27afd54c45a596944f9c2.tar.gz"],
+ # Static snapshot of https://quiche.googlesource.com/quiche/+archive/840edb6d672931ff936004fc35a82ecac6060844.tar.gz
+ sha256 = "1aba26cec596e9f3b52d93fe40e1640c854e3a4c8949e362647f67eb8e2382e3",
+ urls = ["https://storage.googleapis.com/quiche-envoy-integration/840edb6d672931ff936004fc35a82ecac6060844.tar.gz"],
),
)
diff --git a/ci/README.md b/ci/README.md
index 05402fe1b88df..1676ab83ace6f 100644
--- a/ci/README.md
+++ b/ci/README.md
@@ -8,7 +8,7 @@ where `` is specified in [`envoy_build_sha.sh`](https://github.com/envoypr
may work with `envoyproxy/envoy-build:latest` to provide a self-contained environment for building Envoy binaries and
running tests that reflects the latest built Ubuntu Envoy image. Moreover, the Docker image
at [`envoyproxy/envoy:`](https://hub.docker.com/r/envoyproxy/envoy/) is an image that has an Envoy binary at `/usr/local/bin/envoy`. The ``
-corresponds to the master commit at which the binary was compiled. Lastly, `envoyproxy/envoy:latest` contains an Envoy
+corresponds to the master commit at which the binary was compiled. Lastly, `envoyproxy/envoy-dev:latest` contains an Envoy
binary built from the latest tip of master that passed tests.
## Alpine Envoy image
@@ -25,7 +25,7 @@ Currently there are three build images:
* `envoyproxy/envoy-build` — alias to `envoyproxy/envoy-build-ubuntu`.
* `envoyproxy/envoy-build-ubuntu` — based on Ubuntu 16.04 (Xenial) which uses the GCC 5.4 compiler.
-We also install and use the clang-7 compiler for some sanitizing runs.
+We also install and use the clang-8 compiler for some sanitizing runs.
# Building and running tests as a developer
@@ -91,8 +91,8 @@ The `./ci/run_envoy_docker.sh './ci/do_ci.sh '` targets are:
* `bazel.tsan` — build and run tests under `-c dbg --config=clang-tsan` with clang.
* `bazel.compile_time_options` — build Envoy and test with various compile-time options toggled to their non-default state, to ensure they still build.
* `bazel.clang_tidy` — build and run clang-tidy over all source files.
-* `check_format`— run `clang-format-6.0` and `buildifier` on entire source tree.
-* `fix_format`— run and enforce `clang-format-6.0` and `buildifier` on entire source tree.
+* `check_format`— run `clang-format` and `buildifier` on entire source tree.
+* `fix_format`— run and enforce `clang-format` and `buildifier` on entire source tree.
* `check_spelling`— run `misspell` on entire project.
* `fix_spelling`— run and enforce `misspell` on entire project.
* `check_spelling_pedantic`— run `aspell` on C++ and proto comments.
diff --git a/ci/WORKSPACE b/ci/WORKSPACE
deleted file mode 100644
index f33b9aa583168..0000000000000
--- a/ci/WORKSPACE
+++ /dev/null
@@ -1,29 +0,0 @@
-workspace(name = "ci")
-
-load("//bazel:repositories.bzl", "GO_VERSION", "envoy_dependencies")
-load("//bazel:cc_configure.bzl", "cc_configure")
-
-# We shouldn't need this, but it's a workaround for https://github.com/bazelbuild/bazel/issues/3580.
-local_repository(
- name = "envoy",
- path = "/source",
-)
-
-envoy_dependencies()
-
-# TODO(htuch): Roll this into envoy_dependencies()
-load("@rules_foreign_cc//:workspace_definitions.bzl", "rules_foreign_cc_dependencies")
-
-rules_foreign_cc_dependencies()
-
-cc_configure()
-
-load("@envoy_api//bazel:repositories.bzl", "api_dependencies")
-
-api_dependencies()
-
-load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
-
-go_rules_dependencies()
-
-go_register_toolchains(go_version = GO_VERSION)
diff --git a/ci/WORKSPACE.filter.example b/ci/WORKSPACE.filter.example
index 6262671453103..4eb98345a13f7 100644
--- a/ci/WORKSPACE.filter.example
+++ b/ci/WORKSPACE.filter.example
@@ -5,6 +5,9 @@ local_repository(
path = "/source",
)
+load("@envoy//bazel:api_repositories.bzl", "envoy_api_dependencies")
+envoy_api_dependencies()
+
load("@envoy//bazel:repositories.bzl", "envoy_dependencies", "GO_VERSION")
load("@envoy//bazel:cc_configure.bzl", "cc_configure")
@@ -16,9 +19,6 @@ rules_foreign_cc_dependencies()
cc_configure()
-load("@envoy_api//bazel:repositories.bzl", "api_dependencies")
-api_dependencies()
-
load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
go_rules_dependencies()
go_register_toolchains(go_version = GO_VERSION)
diff --git a/ci/build_container/build_container_centos.sh b/ci/build_container/build_container_centos.sh
index abc52e5e325d0..bf45bcc22a658 100755
--- a/ci/build_container/build_container_centos.sh
+++ b/ci/build_container/build_container_centos.sh
@@ -21,7 +21,7 @@ chmod u+x "./${BAZEL_INSTALLER}"
rm "./${BAZEL_INSTALLER}"
# SLES 11 has older glibc than CentOS 7, so pre-built binary for it works on CentOS 7
-LLVM_VERSION=7.0.1
+LLVM_VERSION=8.0.0
LLVM_RELEASE="clang+llvm-${LLVM_VERSION}-x86_64-linux-sles11.3"
curl -OL "https://releases.llvm.org/${LLVM_VERSION}/${LLVM_RELEASE}.tar.xz"
tar Jxf "${LLVM_RELEASE}.tar.xz"
diff --git a/ci/build_container/build_container_ubuntu.sh b/ci/build_container/build_container_ubuntu.sh
index 3f4b3f0b5f638..e66061523915c 100755
--- a/ci/build_container/build_container_ubuntu.sh
+++ b/ci/build_container/build_container_ubuntu.sh
@@ -7,11 +7,11 @@ apt-get update
export DEBIAN_FRONTEND=noninteractive
apt-get install -y wget software-properties-common make cmake git python python-pip python3 python3-pip \
unzip bc libtool ninja-build automake zip time golang gdb strace wireshark tshark tcpdump lcov
-# clang 7.
+# clang 8.
wget -O - http://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add -
-apt-add-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-7 main"
+apt-add-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-8 main"
apt-get update
-apt-get install -y clang-7 clang-format-7 clang-tidy-7 lld-7 libc++-7-dev libc++abi-7-dev
+apt-get install -y clang-8 clang-format-8 clang-tidy-8 lld-8 libc++-8-dev libc++abi-8-dev
# gcc-7
add-apt-repository -y ppa:ubuntu-toolchain-r/test
apt update
diff --git a/ci/build_setup.sh b/ci/build_setup.sh
index 6e969f272c652..9941d1b20b8cd 100755
--- a/ci/build_setup.sh
+++ b/ci/build_setup.sh
@@ -17,10 +17,10 @@ function setup_gcc_toolchain() {
}
function setup_clang_toolchain() {
- export PATH=/usr/lib/llvm-7/bin:$PATH
+ export PATH=/usr/lib/llvm-8/bin:$PATH
export CC=clang
export CXX=clang++
- export ASAN_SYMBOLIZER_PATH=/usr/lib/llvm-7/bin/llvm-symbolizer
+ export ASAN_SYMBOLIZER_PATH=/usr/lib/llvm-8/bin/llvm-symbolizer
echo "$CC/$CXX toolchain configured"
}
@@ -64,17 +64,15 @@ if [[ -f "/etc/redhat-release" ]]
then
export BAZEL_BUILD_EXTRA_OPTIONS="--copt=-DENVOY_IGNORE_GLIBCXX_USE_CXX11_ABI_ERROR=1 --action_env=PATH ${BAZEL_BUILD_EXTRA_OPTIONS}"
else
- export BAZEL_BUILD_EXTRA_OPTIONS="--action_env=PATH=/bin:/usr/bin:/usr/lib/llvm-7/bin --linkopt=-fuse-ld=lld ${BAZEL_BUILD_EXTRA_OPTIONS}"
+ export BAZEL_BUILD_EXTRA_OPTIONS="--action_env=PATH=/bin:/usr/bin:/usr/lib/llvm-8/bin --linkopt=-fuse-ld=lld ${BAZEL_BUILD_EXTRA_OPTIONS}"
fi
# Not sandboxing, since non-privileged Docker can't do nested namespaces.
-BAZEL_OPTIONS="--package_path %workspace%:${ENVOY_SRCDIR}"
export BAZEL_QUERY_OPTIONS="${BAZEL_OPTIONS}"
export BAZEL_BUILD_OPTIONS="--strategy=Genrule=standalone --spawn_strategy=standalone \
--verbose_failures ${BAZEL_OPTIONS} --action_env=HOME --action_env=PYTHONUSERBASE \
--jobs=${NUM_CPUS} --show_task_finish --experimental_generate_json_trace_profile ${BAZEL_BUILD_EXTRA_OPTIONS}"
export BAZEL_TEST_OPTIONS="${BAZEL_BUILD_OPTIONS} --test_env=HOME --test_env=PYTHONUSERBASE \
- --test_env=UBSAN_OPTIONS=print_stacktrace=1 \
--cache_test_results=no --test_output=all ${BAZEL_EXTRA_TEST_OPTIONS}"
[[ "${BAZEL_EXPUNGE}" == "1" ]] && "${BAZEL}" clean --expunge
@@ -92,7 +90,7 @@ if [ "$1" != "-nofetch" ]; then
then
git clone https://github.com/envoyproxy/envoy-filter-example.git "${ENVOY_FILTER_EXAMPLE_SRCDIR}"
fi
-
+
# This is the hash on https://github.com/envoyproxy/envoy-filter-example.git we pin to.
(cd "${ENVOY_FILTER_EXAMPLE_SRCDIR}" && git fetch origin && git checkout -f 6c0625cb4cc9a21df97cef2a1d065463f2ae81ae)
cp -f "${ENVOY_SRCDIR}"/ci/WORKSPACE.filter.example "${ENVOY_FILTER_EXAMPLE_SRCDIR}"/WORKSPACE
@@ -101,7 +99,6 @@ fi
# Also setup some space for building Envoy standalone.
export ENVOY_BUILD_DIR="${BUILD_DIR}"/envoy
mkdir -p "${ENVOY_BUILD_DIR}"
-cp -f "${ENVOY_SRCDIR}"/ci/WORKSPACE "${ENVOY_BUILD_DIR}"
# This is where we copy build deliverables to.
export ENVOY_DELIVERY_DIR="${ENVOY_BUILD_DIR}"/source/exe
@@ -119,29 +116,17 @@ mkdir -p "${ENVOY_FAILED_TEST_LOGS}"
export ENVOY_BUILD_PROFILE="${ENVOY_BUILD_DIR}"/generated/build-profile
mkdir -p "${ENVOY_BUILD_PROFILE}"
-# This is where we build for bazel.release* and bazel.dev.
-export ENVOY_CI_DIR="${ENVOY_SRCDIR}"/ci
-
function cleanup() {
# Remove build artifacts. This doesn't mess with incremental builds as these
# are just symlinks.
rm -rf "${ENVOY_SRCDIR}"/bazel-*
- rm -rf "${ENVOY_CI_DIR}"/bazel-*
- rm -rf "${ENVOY_CI_DIR}"/bazel
- rm -rf "${ENVOY_CI_DIR}"/tools
- rm -f "${ENVOY_CI_DIR}"/.bazelrc
}
cleanup
trap cleanup EXIT
-# Hack due to https://github.com/envoyproxy/envoy/issues/838 and the need to have
-# .bazelrc available for build linkstamping.
mkdir -p "${ENVOY_FILTER_EXAMPLE_SRCDIR}"/bazel
-mkdir -p "${ENVOY_CI_DIR}"/bazel
ln -sf "${ENVOY_SRCDIR}"/bazel/get_workspace_status "${ENVOY_FILTER_EXAMPLE_SRCDIR}"/bazel/
-ln -sf "${ENVOY_SRCDIR}"/bazel/get_workspace_status "${ENVOY_CI_DIR}"/bazel/
cp -f "${ENVOY_SRCDIR}"/.bazelrc "${ENVOY_FILTER_EXAMPLE_SRCDIR}"/
-cp -f "${ENVOY_SRCDIR}"/.bazelrc "${ENVOY_CI_DIR}"/
export BUILDIFIER_BIN="/usr/local/bin/buildifier"
diff --git a/ci/do_ci.sh b/ci/do_ci.sh
index c370cec2e3829..fa4f1dbb1e254 100755
--- a/ci/do_ci.sh
+++ b/ci/do_ci.sh
@@ -13,6 +13,7 @@ fi
. "$(dirname "$0")"/setup_gcs_cache.sh
. "$(dirname "$0")"/build_setup.sh $build_setup_args
+cd "${ENVOY_SRCDIR}"
echo "building using ${NUM_CPUS} CPUs"
@@ -27,11 +28,12 @@ function bazel_with_collection() {
if [ "${BAZEL_STATUS}" != "0" ]
then
declare -r FAILED_TEST_LOGS="$(grep " /build.*test.log" "${BAZEL_OUTPUT}" | sed -e 's/ \/build.*\/testlogs\/\(.*\)/\1/')"
- cd bazel-testlogs
+ pushd bazel-testlogs
for f in ${FAILED_TEST_LOGS}
do
cp --parents -f $f "${ENVOY_FAILED_TEST_LOGS}"
done
+ popd
exit "${BAZEL_STATUS}"
fi
collect_build_profile $1
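The switch from `cd bazel-testlogs` to `pushd`/`popd` means the log-collection step no longer leaves `bazel_with_collection` in a different working directory than it started in, so the `collect_build_profile` call and later steps still run from the source tree. A minimal Python sketch of the same save-and-restore idea (a hypothetical helper, not from the Envoy scripts):

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def pushd(path):
    """Temporarily change the working directory and restore it on exit,
    even if the body raises -- the analogue of the script's pushd/popd."""
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(prev)

# Demo: enter a scratch directory, then confirm we end up back where we started.
start = os.getcwd()
with tempfile.TemporaryDirectory() as scratch:
    with pushd(scratch):
        inside = os.getcwd() == os.path.realpath(scratch)
restored = os.getcwd() == start
```

The `finally` clause is what makes this equivalent to `popd` guarded by an error-exit: the original directory is restored on both the success and failure paths.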
@@ -39,13 +41,12 @@ function bazel_with_collection() {
function bazel_release_binary_build() {
echo "Building..."
- cd "${ENVOY_CI_DIR}"
bazel build ${BAZEL_BUILD_OPTIONS} -c opt //source/exe:envoy-static
collect_build_profile release_build
# Copy the envoy-static binary somewhere that we can access outside of the
# container.
cp -f \
- "${ENVOY_CI_DIR}"/bazel-bin/source/exe/envoy-static \
+ "${ENVOY_SRCDIR}"/bazel-bin/source/exe/envoy-static \
"${ENVOY_DELIVERY_DIR}"/envoy
# TODO(mattklein123): Replace this with caching and a different job which creates images.
@@ -58,13 +59,12 @@ function bazel_release_binary_build() {
function bazel_debug_binary_build() {
echo "Building..."
- cd "${ENVOY_CI_DIR}"
bazel build ${BAZEL_BUILD_OPTIONS} -c dbg //source/exe:envoy-static
collect_build_profile debug_build
# Copy the envoy-static binary somewhere that we can access outside of the
# container.
cp -f \
- "${ENVOY_CI_DIR}"/bazel-bin/source/exe/envoy-static \
+ "${ENVOY_SRCDIR}"/bazel-bin/source/exe/envoy-static \
"${ENVOY_DELIVERY_DIR}"/envoy-debug
}
@@ -119,12 +119,12 @@ elif [[ "$1" == "bazel.asan" ]]; then
setup_clang_toolchain
echo "bazel ASAN/UBSAN debug build with tests"
echo "Building and testing envoy tests..."
- cd "${ENVOY_SRCDIR}"
bazel_with_collection test ${BAZEL_TEST_OPTIONS} -c dbg --config=clang-asan //test/...
echo "Building and testing envoy-filter-example tests..."
- cd "${ENVOY_FILTER_EXAMPLE_SRCDIR}"
+ pushd "${ENVOY_FILTER_EXAMPLE_SRCDIR}"
bazel_with_collection test ${BAZEL_TEST_OPTIONS} -c dbg --config=clang-asan \
//:echo2_integration_test //:envoy_binary_test
+ popd
# Also validate that integration test traffic tapping (useful when debugging etc.)
# works. This requires that we set TAP_PATH. We do this under bazel.asan to
# ensure a debug build in CI.
@@ -132,7 +132,6 @@ elif [[ "$1" == "bazel.asan" ]]; then
TAP_TMP=/tmp/tap/
rm -rf "${TAP_TMP}"
mkdir -p "${TAP_TMP}"
- cd "${ENVOY_SRCDIR}"
bazel_with_collection test ${BAZEL_TEST_OPTIONS} -c dbg --config=clang-asan \
//test/extensions/transport_sockets/tls/integration:ssl_integration_test \
--test_env=TAP_PATH="${TAP_TMP}/tap"
@@ -145,7 +144,6 @@ elif [[ "$1" == "bazel.tsan" ]]; then
setup_clang_toolchain
echo "bazel TSAN debug build with tests"
echo "Building and testing envoy tests..."
- cd "${ENVOY_SRCDIR}"
bazel_with_collection test ${BAZEL_TEST_OPTIONS} -c dbg --config=clang-tsan //test/...
echo "Building and testing envoy-filter-example tests..."
cd "${ENVOY_FILTER_EXAMPLE_SRCDIR}"
@@ -156,13 +154,12 @@ elif [[ "$1" == "bazel.dev" ]]; then
setup_clang_toolchain
# This doesn't go into CI but is available for developer convenience.
echo "bazel fastbuild build with tests..."
- cd "${ENVOY_CI_DIR}"
echo "Building..."
bazel build ${BAZEL_BUILD_OPTIONS} -c fastbuild //source/exe:envoy-static
# Copy the envoy-static binary somewhere that we can access outside of the
# container for developers.
cp -f \
- "${ENVOY_CI_DIR}"/bazel-bin/source/exe/envoy-static \
+ "${ENVOY_SRCDIR}"/bazel-bin/source/exe/envoy-static \
"${ENVOY_DELIVERY_DIR}"/envoy-fastbuild
echo "Building and testing..."
bazel test ${BAZEL_TEST_OPTIONS} -c fastbuild //test/...
@@ -179,12 +176,12 @@ elif [[ "$1" == "bazel.compile_time_options" ]]; then
--define boringssl=fips \
--define log_debug_assert_in_release=enabled \
--define quiche=enabled \
+ --define path_normalization_by_default=true \
"
setup_clang_toolchain
# This doesn't go into CI but is available for developer convenience.
echo "bazel with different compiletime options build with tests..."
# Building all the dependencies from scratch to link them against libc++.
- cd "${ENVOY_SRCDIR}"
echo "Building..."
bazel build ${BAZEL_BUILD_OPTIONS} ${COMPILE_TIME_OPTIONS} -c dbg //source/exe:envoy-static
echo "Building and testing..."
@@ -210,13 +207,11 @@ elif [[ "$1" == "bazel.ipv6_tests" ]]; then
setup_clang_toolchain
echo "Testing..."
- cd "${ENVOY_CI_DIR}"
bazel_with_collection test ${BAZEL_TEST_OPTIONS} --test_env=ENVOY_IP_TEST_VERSIONS=v6only -c fastbuild \
//test/integration/... //test/common/network/...
exit 0
elif [[ "$1" == "bazel.api" ]]; then
setup_clang_toolchain
- cd "${ENVOY_CI_DIR}"
echo "Building API..."
bazel build ${BAZEL_BUILD_OPTIONS} -c fastbuild @envoy_api//envoy/...
echo "Testing API..."
@@ -229,13 +224,9 @@ elif [[ "$1" == "bazel.coverage" ]]; then
# gcovr is a pain to run with `bazel run`, so package it up into a
# relocatable and hermetic-ish .par file.
- cd "${ENVOY_SRCDIR}"
bazel build @com_github_gcovr_gcovr//:gcovr.par
- export GCOVR="${ENVOY_SRCDIR}/bazel-bin/external/com_github_gcovr_gcovr/gcovr.par"
-
- export GCOVR_DIR="${ENVOY_BUILD_DIR}/bazel-envoy"
- export TESTLOGS_DIR="${ENVOY_BUILD_DIR}/bazel-testlogs"
- export WORKSPACE=ci
+ export GCOVR="/tmp/gcovr.par"
+ cp -f "${ENVOY_SRCDIR}/bazel-bin/external/com_github_gcovr_gcovr/gcovr.par" ${GCOVR}
# Reduce the amount of memory and number of cores Bazel tries to use to
# prevent it from launching too many subprocesses. This should prevent the
@@ -245,23 +236,12 @@ elif [[ "$1" == "bazel.coverage" ]]; then
# after 0.21.
[ -z "$CIRCLECI" ] || export BAZEL_TEST_OPTIONS="${BAZEL_TEST_OPTIONS} --local_resources=12288,4,1"
- # There is a bug in gcovr 3.3, where it takes the -r path,
- # in our case /source, and does a regex replacement of various
- # source file paths during HTML generation. It attempts to strip
- # out the prefix (e.g. /source), but because it doesn't do a match
- # and only strip at the start of the string, it removes /source from
- # the middle of the string, corrupting the path. The workaround is
- # to point -r in the gcovr invocation in run_envoy_bazel_coverage.sh at
- # some Bazel created symlinks to the source directory in its output
- # directory. Wow.
- cd "${ENVOY_BUILD_DIR}"
- SRCDIR="${GCOVR_DIR}" "${ENVOY_SRCDIR}"/test/run_envoy_bazel_coverage.sh
+ test/run_envoy_bazel_coverage.sh
collect_build_profile coverage
exit 0
elif [[ "$1" == "bazel.clang_tidy" ]]; then
setup_clang_toolchain
- cd "${ENVOY_CI_DIR}"
- ./run_clang_tidy.sh
+ ci/run_clang_tidy.sh
exit 0
elif [[ "$1" == "bazel.coverity" ]]; then
# Coverity Scan version 2017.07 fails to analyze the entirely of the Envoy
@@ -271,7 +251,6 @@ elif [[ "$1" == "bazel.coverity" ]]; then
setup_gcc_toolchain
echo "bazel Coverity Scan build"
echo "Building..."
- cd "${ENVOY_CI_DIR}"
/build/cov-analysis/bin/cov-build --dir "${ENVOY_BUILD_DIR}"/cov-int bazel build --action_env=LD_PRELOAD ${BAZEL_BUILD_OPTIONS} \
-c opt //source/exe:envoy-static
# tar up the coverity results
@@ -283,11 +262,9 @@ elif [[ "$1" == "bazel.coverity" ]]; then
exit 0
elif [[ "$1" == "fix_format" ]]; then
echo "fix_format..."
- cd "${ENVOY_SRCDIR}"
./tools/check_format.py fix
exit 0
elif [[ "$1" == "check_format" ]]; then
- cd "${ENVOY_SRCDIR}"
echo "check_format_test..."
./tools/check_format_test_helper.py --log=WARN
echo "check_format..."
@@ -295,27 +272,22 @@ elif [[ "$1" == "check_format" ]]; then
./tools/format_python_tools.sh check
exit 0
elif [[ "$1" == "check_repositories" ]]; then
- cd "${ENVOY_SRCDIR}"
echo "check_repositories..."
./tools/check_repositories.sh
exit 0
elif [[ "$1" == "check_spelling" ]]; then
- cd "${ENVOY_SRCDIR}"
echo "check_spelling..."
./tools/check_spelling.sh check
exit 0
elif [[ "$1" == "fix_spelling" ]];then
- cd "${ENVOY_SRCDIR}"
echo "fix_spell..."
./tools/check_spelling.sh fix
exit 0
elif [[ "$1" == "check_spelling_pedantic" ]]; then
- cd "${ENVOY_SRCDIR}"
echo "check_spelling_pedantic..."
./tools/check_spelling_pedantic.py check
exit 0
elif [[ "$1" == "fix_spelling_pedantic" ]]; then
- cd "${ENVOY_SRCDIR}"
echo "fix_spelling_pedantic..."
./tools/check_spelling_pedantic.py fix
exit 0
diff --git a/ci/run_clang_tidy.sh b/ci/run_clang_tidy.sh
index e9df93bfcb652..8adbbd5089d83 100755
--- a/ci/run_clang_tidy.sh
+++ b/ci/run_clang_tidy.sh
@@ -3,6 +3,15 @@
set -e
echo "Generating compilation database..."
+
+cp -f .bazelrc .bazelrc.bak
+
+function cleanup() {
+ cp -f .bazelrc.bak .bazelrc
+ rm -f .bazelrc.bak
+}
+trap cleanup EXIT
+
# The compilation database generate script doesn't support passing build options via CLI.
# Writing them into bazelrc
echo "build ${BAZEL_BUILD_OPTIONS}" >> .bazelrc
@@ -11,25 +20,30 @@ echo "build ${BAZEL_BUILD_OPTIONS}" >> .bazelrc
# by clang-tidy
"${ENVOY_SRCDIR}/tools/gen_compilation_database.py" --run_bazel_build --include_headers
-# It had to be in ENVOY_CI_DIR to run bazel to generate compile database, but clang-tidy-diff
-# diff against current directory, moving them to ENVOY_SRCDIR.
-mv ./compile_commands.json "${ENVOY_SRCDIR}/compile_commands.json"
-cd "${ENVOY_SRCDIR}"
-
# Do not run incremental clang-tidy on check_format testdata files.
function exclude_testdata() {
grep -v tools/testdata/check_format/
}
+# Do not run clang-tidy against Chromium URL import, this needs to largely
+# reflect the upstream structure.
+function exclude_chromium_url() {
+ grep -v source/common/chromium_url/
+}
+
+function filter_excludes() {
+ exclude_testdata | exclude_chromium_url
+}
+
if [[ "${RUN_FULL_CLANG_TIDY}" == 1 ]]; then
echo "Running full clang-tidy..."
- run-clang-tidy-7
+ run-clang-tidy-8
elif [[ -z "${CIRCLE_PR_NUMBER}" && "$CIRCLE_BRANCH" == "master" ]]; then
echo "On master branch, running clang-tidy-diff against previous commit..."
- git diff HEAD^ | exclude_testdata | clang-tidy-diff-7.py -p 1
+ git diff HEAD^ | filter_excludes | clang-tidy-diff-8.py -p 1
else
echo "Running clang-tidy-diff against master branch..."
git fetch https://github.com/envoyproxy/envoy.git master
- git diff $(git merge-base HEAD FETCH_HEAD)..HEAD | exclude_testdata | \
- clang-tidy-diff-7.py -p 1
+ git diff $(git merge-base HEAD FETCH_HEAD)..HEAD | filter_excludes | \
+ clang-tidy-diff-8.py -p 1
fi
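The new preamble in `run_clang_tidy.sh` snapshots `.bazelrc` and restores it from an EXIT trap, so the build options appended for compilation-database generation cannot leak into later CI steps even if the script fails partway. The same guarantee in Python is a `try`/`finally`; this is a rough sketch with a throwaway file standing in for `.bazelrc` (the helper name is hypothetical):

```python
import os
import shutil
import tempfile

def with_restored_file(path, mutate):
    """Back up `path`, run `mutate(path)`, and always restore the original
    contents afterwards -- the try/finally analogue of a shell EXIT trap."""
    backup = path + ".bak"
    shutil.copyfile(path, backup)
    try:
        mutate(path)
    finally:
        shutil.copyfile(backup, path)
        os.remove(backup)

# Demo: append an option the way the script appends BAZEL_BUILD_OPTIONS,
# then verify the file comes back untouched.
workdir = tempfile.mkdtemp()
rc = os.path.join(workdir, "bazelrc")
with open(rc, "w") as f:
    f.write("build --original\n")

def append_options(p):
    with open(p, "a") as f:
        f.write("build --extra-ci-option\n")

with_restored_file(rc, append_options)
with open(rc) as f:
    restored_contents = f.read()
```

Registering the restore before the mutation (as the script registers `trap cleanup EXIT` before the `echo >> .bazelrc`) is the important ordering: the cleanup must be armed before the file is dirtied.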
diff --git a/configs/BUILD b/configs/BUILD
index 7596ba2b41df1..9846609607e9e 100644
--- a/configs/BUILD
+++ b/configs/BUILD
@@ -29,19 +29,10 @@ filegroup(
}),
)
-genrule(
- name = "v1_upgraded_configs",
- srcs = ["google_com_proxy.yaml"],
- outs = ["google_com_proxy.v2.upgraded.json"],
- cmd = "$(location //tools:v1_to_bootstrap) $(location google_com_proxy.yaml) > $@",
- tools = ["//tools:v1_to_bootstrap"],
-)
-
genrule(
name = "example_configs",
srcs = [
":configs",
- ":v1_upgraded_configs",
"//examples:configs",
"//test/config/integration/certs",
],
diff --git a/configs/Dockerfile b/configs/Dockerfile
index e81237686687b..2d7b7a6a5e3bf 100644
--- a/configs/Dockerfile
+++ b/configs/Dockerfile
@@ -1,7 +1,7 @@
# This configuration will build a Docker container containing
# an Envoy proxy that routes to Google.
-FROM envoyproxy/envoy:latest
+FROM envoyproxy/envoy-dev:latest
RUN apt-get update
COPY google_com_proxy.v2.yaml /etc/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy.yaml
diff --git a/configs/configgen.sh b/configs/configgen.sh
index 2ecf6b77ba06d..2e82ebff3dd98 100755
--- a/configs/configgen.sh
+++ b/configs/configgen.sh
@@ -25,4 +25,4 @@ for FILE in $*; do
done
# tar is having issues with -C for some reason so just cd into OUT_DIR.
-(cd "$OUT_DIR"; tar -hcvf example_configs.tar *.json *.yaml certs/*.pem)
+(cd "$OUT_DIR"; tar -hcvf example_configs.tar *.yaml certs/*.pem)
diff --git a/configs/envoy_double_proxy_v2.template.yaml b/configs/envoy_double_proxy_v2.template.yaml
index 0d638a6fe85dc..2c08332f795d8 100644
--- a/configs/envoy_double_proxy_v2.template.yaml
+++ b/configs/envoy_double_proxy_v2.template.yaml
@@ -25,7 +25,8 @@
{%endif -%}
filters:
- name: envoy.http_connection_manager
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: AUTO
stat_prefix: router
route_config:
@@ -42,14 +43,18 @@
timeout: 20s
http_filters:
- name: envoy.health_check
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.http.health_check.v2.HealthCheck
pass_through_mode: false
- endpoint: /healthcheck
- name: envoy.buffer
- config:
+ headers:
+ - exact_match: /healthcheck
+ name: :path
+ - name: envoy.buffer
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.http.buffer.v2.Buffer
max_request_bytes: 5242880
- name: envoy.router
- config: {}
+ - name: envoy.router
+ typed_config: {}
tracing:
operation_name: INGRESS
idle_timeout: 840s
@@ -71,7 +76,8 @@
default_value: 1000
runtime_key: access_log.access_error.duration
- traceable_filter: {}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: /var/log/envoy/access_error.log
format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%REQ(X-LYFT-USER-ID)%\" \"%RESP(GRPC-STATUS)%\"\n"
{% if proxy_proto %}
@@ -91,20 +97,30 @@ static_resources:
type: STATIC
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: 127.0.0.1
- port_value: 8125
+ load_assignment:
+ cluster_name: statsd
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: 127.0.0.1
+ port_value: 8125
+ protocol: TCP
- name: backhaul
type: STRICT_DNS
connect_timeout: 1s
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: front-proxy.yourcompany.net
- port_value: 9400
+ load_assignment:
+ cluster_name: backhaul
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: front-proxy.yourcompany.net
+ port_value: 9400
+ protocol: TCP
# There are so few connections going back
# that we can get some imbalance. Until we come up
# with a better solution just limit the requests
@@ -127,11 +143,16 @@ static_resources:
type: LOGICAL_DNS
connect_timeout: 1s
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: collector-grpc.lightstep.com
- port_value: 443
+ load_assignment:
+ cluster_name: lightstep_saas
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: collector-grpc.lightstep.com
+ port_value: 443
+ protocol: TCP
http2_protocol_options: {}
tls_context:
common_tls_context:
@@ -143,12 +164,14 @@ static_resources:
flags_path: "/etc/envoy/flags"
stats_sinks:
- name: envoy.statsd
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.metrics.v2.StatsdSink
tcp_cluster_name: statsd
tracing:
http:
name: envoy.lightstep
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.trace.v2.LightstepConfig
access_token_file: "/etc/envoy/lightstep_access_token"
collector_cluster: lightstep_saas
runtime:
@@ -156,7 +179,7 @@ runtime:
subdirectory: envoy
override_subdirectory: envoy_override
admin:
- access_log_path: "var/log/envoy/admin_access.log"
+ access_log_path: "/var/log/envoy/admin_access.log"
address:
socket_address:
protocol: TCP
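The recurring edit in these templates replaces the deprecated untyped `config:` block with `typed_config:` plus an `@type` URL naming the proto message, as in the buffer and access-log filters above. The shape of that rewrite can be sketched over plain dicts (the helper is hypothetical, just to illustrate the mapping; the type URL is one used in this diff):

```python
def to_typed_config(filter_entry, type_url):
    """Rewrite a deprecated {name, config} filter entry into the
    {name, typed_config} form, injecting the proto @type URL."""
    out = {
        "name": filter_entry["name"],
        "typed_config": {"@type": type_url},
    }
    # The old config body carries over unchanged under typed_config.
    out["typed_config"].update(filter_entry.get("config", {}))
    return out

legacy = {"name": "envoy.buffer", "config": {"max_request_bytes": 5242880}}
typed = to_typed_config(
    legacy, "type.googleapis.com/envoy.config.filter.http.buffer.v2.Buffer")
```

Note that an empty `config: {}` simply becomes `typed_config: {}` with no `@type`, which is why the `envoy.router` entries in the diff omit the type URL.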
diff --git a/configs/envoy_front_proxy_v2.template.yaml b/configs/envoy_front_proxy_v2.template.yaml
index ef44b641ab609..35f734f80ad2e 100644
--- a/configs/envoy_front_proxy_v2.template.yaml
+++ b/configs/envoy_front_proxy_v2.template.yaml
@@ -31,7 +31,8 @@
{%endif %}
filters:
- name: envoy.http_connection_manager
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: AUTO
stat_prefix: router
{% if proxy_proto -%}
@@ -42,13 +43,15 @@
{{ router_file_content(router_file='envoy_router_v2.template.yaml')|indent(10) }}
http_filters:
- name: envoy.health_check
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.http.health_check.v2.HealthCheck
pass_through_mode: false
headers:
- name: ":path"
exact_match: "/healthcheck"
- name: envoy.buffer
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.http.buffer.v2.Buffer
max_request_bytes: 5242880
- name: envoy.rate_limit
config:
@@ -59,7 +62,7 @@
envoy_grpc:
cluster_name: ratelimit
- name: envoy.router
- config: {}
+ typed_config: {}
add_user_agent: true
tracing:
operation_name: INGRESS
@@ -82,7 +85,8 @@
default_value: 1000
runtime_key: access_log.access_error.duration
- traceable_filter: {}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/access_error.log"
format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%REQ(X-LYFT-USER-ID)%\" \"%RESP(GRPC-STATUS)%\"\n"
{% endmacro -%}
@@ -100,29 +104,44 @@ static_resources:
type: STRICT_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: disccovery.yourcompany.net
- port_value: 80
+ load_assignment:
+ cluster_name: sds
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: discovery.yourcompany.net
+ port_value: 80
+ protocol: TCP
- name: statsd
type: STATIC
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: 127.0.0.1
- port_value: 8125
+ load_assignment:
+ cluster_name: statsd
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: 127.0.0.1
+ port_value: 8125
+ protocol: TCP
- name: lightstep_saas
type: LOGICAL_DNS
connect_timeout: 1s
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: collector-grpc.lightstep.com
- port_value: 443
+ load_assignment:
+ cluster_name: lightstep_saas
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: collector-grpc.lightstep.com
+ port_value: 443
+ protocol: TCP
http2_protocol_options: {}
{% for service, options in clusters.items() -%}
- {{ helper.internal_cluster_definition(service, options)|indent(2) }}
@@ -134,7 +153,8 @@ flags_path: /etc/envoy/flags
tracing:
http:
name: envoy.lightstep
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.trace.v2.LightstepConfig
collector_cluster: lightstep_saas
access_token_file: "/etc/envoy/lightstep_access_token"
runtime:
diff --git a/configs/envoy_service_to_service_v2.template.yaml b/configs/envoy_service_to_service_v2.template.yaml
index e6b40b734ff77..083a8c39a2926 100644
--- a/configs/envoy_service_to_service_v2.template.yaml
+++ b/configs/envoy_service_to_service_v2.template.yaml
@@ -9,7 +9,8 @@
filter_chains:
- filters:
- name: envoy.http_connection_manager
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: AUTO
stat_prefix: ingress_http
route_config:
@@ -32,22 +33,25 @@
cluster: local_service
http_filters:
- name: envoy.health_check
- config:
- pass_through_mode: true
- headers:
- - name: ":path"
- exact_match: "/healthcheck"
- cache_time: 2.5s
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.http.health_check.v2.HealthCheck
+ pass_through_mode: true
+ headers:
+ - name: ":path"
+ exact_match: "/healthcheck"
+ cache_time: 2.5s
- name: envoy.buffer
- config:
- max_request_bytes: 5242880
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.http.buffer.v2.Buffer
+ max_request_bytes: 5242880
- name: envoy.router
- config: {}
+ typed_config: {}
access_log:
- name: envoy.file_access_log
filter:
not_health_check_filter: {}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/ingress_http.log"
{{ access_log_helper.ingress_full()|indent(10)}}
- name: envoy.file_access_log
@@ -75,7 +79,8 @@
default_value: 2000
runtime_key: access_log.access_error.duration
- not_health_check_filter: {}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/ingress_http_error.log"
{{ access_log_helper.ingress_sampled_log()|indent(10)}}
- name: envoy.file_access_log
@@ -85,7 +90,8 @@
- not_health_check_filter: {}
- runtime_filter:
runtime_key: access_log.ingress_http
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/ingress_http_sampled.log"
{{ access_log_helper.ingress_sampled_log()|indent(10)}}
idle_timeout: 840s
@@ -103,7 +109,8 @@ static_resources:
filter_chains:
- filters:
- name: envoy.http_connection_manager
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: AUTO
stat_prefix: egress_http
route_config:
@@ -141,9 +148,10 @@ static_resources:
default_value: 2000
runtime_key: access_log.access_error.duration
- traceable_filter: {}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/egress_http_error.log"
- {{ access_log_helper.egress_error_log()|indent(10)}}
+ {{ access_log_helper.egress_error_log()|indent(10) }}
use_remote_address: true
http_filters:
- name: envoy.rate_limit
@@ -154,9 +162,9 @@ static_resources:
envoy_grpc:
cluster_name: ratelimit
- name: envoy.grpc_http1_bridge
- config: {}
+ typed_config: {}
- name: envoy.router
- config: {}
+ typed_config: {}
- address:
socket_address:
@@ -166,7 +174,8 @@ static_resources:
filter_chains:
- filters:
- name: envoy.http_connection_manager
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: AUTO
stat_prefix: egress_http
rds:
@@ -199,7 +208,8 @@ static_resources:
default_value: 2000
runtime_key: access_log.access_error.duration
- traceable_filter: {}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/egress_http_error.log"
{{ access_log_helper.egress_error_log()|indent(10) }}
use_remote_address: true
@@ -212,9 +222,9 @@ static_resources:
envoy_grpc:
cluster_name: ratelimit
- name: envoy.grpc_http1_bridge
- config: {}
+ typed_config: {}
- name: envoy.router
- config: {}
+ typed_config: {}
{% if external_virtual_hosts|length > 0 or mongos_servers|length > 0 %}{% endif -%}
{% for mapping in external_virtual_hosts -%}
- name: "{{ mapping['address']}}"
@@ -226,7 +236,8 @@ static_resources:
filter_chains:
- filters:
- name: envoy.http_connection_manager
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
codec_type: AUTO
idle_timeout: 840s
stat_prefix: egress_{{ mapping['name'] }}
@@ -251,10 +262,10 @@ static_resources:
http_filters:
{% if mapping['name'] in ['dynamodb_iad', 'dynamodb_legacy'] -%}
- name: envoy.http_dynamo_filter
- config: {}
+ typed_config: {}
{% endif -%}
- name: envoy.router
- config: {}
+ typed_config: {}
access_log:
- name: envoy.file_access_log
filter:
@@ -280,7 +291,8 @@ static_resources:
default_value: 2000
runtime_key: access_log.access_error.duration
{% endif %}
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
path: "/var/log/envoy/egress_{{ mapping['name'] }}_http_error.log"
{% if mapping.get('is_amzn_service', False) -%}
{{ access_log_helper.egress_error_amazon_service()|indent(10) }}
@@ -299,7 +311,8 @@ static_resources:
filter_chains:
- filters:
- name: envoy.tcp_proxy
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
stat_prefix: mongo_{{ key }}
cluster: mongo_{{ key }}
- name: envoy.mongo_proxy
@@ -342,11 +355,16 @@ static_resources:
{% endif %}
type: LOGICAL_DNS
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- address: {{ host['remote_address'] }}
- port_value: {{ host['port_value'] }}
- protocol: {{ host['protocol'] }}
+ load_assignment:
+ cluster_name: egress_{{ host['name'] }}
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: {{ host['remote_address'] }}
+ port_value: {{ host['port_value'] }}
+ protocol: {{ host['protocol'] }}
{% endfor -%}
{% endfor -%}
{% for key, value in mongos_servers.items() -%}
@@ -354,13 +372,18 @@ static_resources:
connect_timeout: 0.25s
type: STRICT_DNS
lb_policy: RANDOM
- hosts:
- {% for server in value['hosts'] -%}
- - socket_address:
- protocol: {{ server['protocol'] }}
- port_value: {{ server['port_value'] }}
- address: {{ server['address'] }}
- {% endfor -%}
+ load_assignment:
+ cluster_name: mongo_{{ key }}
+ endpoints:
+ - lb_endpoints:
+ {% for server in value['hosts'] -%}
+ - endpoint:
+ address:
+ socket_address:
+ address: {{ server['address'] }}
+ port_value: {{ server['port_value'] }}
+ protocol: {{ server['protocol'] }}
+ {% endfor -%}
{% endfor %}
- name: main_website
connect_timeout: 0.25s
@@ -368,20 +391,32 @@ static_resources:
# Comment out the following line to test on v6 networks
dns_lookup_family: V4_ONLY
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- address: main_website.com
- port_value: 443
- tls_context: { sni: www.main_website.com }
+ load_assignment:
+ cluster_name: main_website
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: main_website.com
+ port_value: 443
+ protocol: TCP
+ tls_context:
+ sni: www.main_website.com
- name: local_service
connect_timeout: 0.25s
type: STATIC
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: 127.0.0.1
- port_value: 8080
+ load_assignment:
+ cluster_name: main_website
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: 127.0.0.1
+ port_value: 8080
+ protocol: TCP
circuit_breakers:
thresholds:
max_pending_requests: 30
@@ -391,11 +426,16 @@ static_resources:
type: STATIC
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
- hosts:
- - socket_address:
- protocol: TCP
- address: 127.0.0.1
- port_value: 8081
+ load_assignment:
+ cluster_name: local_service_grpc
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: 127.0.0.1
+ port_value: 8081
+ protocol: TCP
circuit_breakers:
thresholds:
max_requests: 200
@@ -404,31 +444,46 @@ static_resources:
connect_timeout: 0.25s
type: STRICT_DNS
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: rds.yourcompany.net
- port_value: 80
+ load_assignment:
+ cluster_name: local_service_grpc
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: rds.yourcompany.net
+ port_value: 80
+ protocol: TCP
dns_lookup_family: V4_ONLY
- name: statsd
connect_timeout: 0.25s
type: STATIC
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: 127.0.0.1
- port_value: 8125
+ load_assignment:
+ cluster_name: statsd
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: 127.0.0.1
+ port_value: 8125
+ protocol: TCP
dns_lookup_family: V4_ONLY
- name: lightstep_saas
connect_timeout: 1s
type: LOGICAL_DNS
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: collector-grpc.lightstep.com
- port_value: 443
+ load_assignment:
+ cluster_name: lightstep_saas
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: collector-grpc.lightstep.com
+ port_value: 443
+ protocol: TCP
http2_protocol_options:
max_concurrent_streams: 100
tls_context:
@@ -442,20 +497,30 @@ static_resources:
connect_timeout: 0.25s
type: STRICT_DNS
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: cds.yourcompany.net
- port_value: 80
+ load_assignment:
+ cluster_name: cds_cluster
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: cds.yourcompany.net
+ port_value: 80
+ protocol: TCP
- name: sds
connect_timeout: 0.25s
type: STRICT_DNS
lb_policy: ROUND_ROBIN
- hosts:
- - socket_address:
- protocol: TCP
- address: discovery.yourcompany.net
- port_value: 80
+ load_assignment:
+ cluster_name: sds
+ endpoints:
+ - lb_endpoints:
+ - endpoint:
+ address:
+ socket_address:
+ address: discovery.yourcompany.net
+ port_value: 80
+ protocol: TCP
dynamic_resources:
cds_config:
api_config_source:
@@ -467,13 +532,15 @@ cluster_manager: {}
flags_path: "/etc/envoy/flags"
stats_sinks:
- name: envoy.statsd
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.metrics.v2.StatsdSink
tcp_cluster_name: statsd
watchdog: {}
tracing:
http:
name: envoy.lightstep
- config:
+ typed_config:
+ "@type": type.googleapis.com/envoy.config.trace.v2.LightstepConfig
access_token_file: "/etc/envoy/lightstep_access_token"
collector_cluster: lightstep_saas
runtime:
diff --git a/configs/google_com_proxy.json b/configs/google_com_proxy.json
deleted file mode 100644
index 6e131e1e1e543..0000000000000
--- a/configs/google_com_proxy.json
+++ /dev/null
@@ -1,48 +0,0 @@
-{
- "listeners": [{
- "address": "tcp://127.0.0.1:10000",
- "filters": [{
- "name": "http_connection_manager",
- "config": {
- "codec_type": "auto",
- "stat_prefix": "ingress_http",
- "route_config": {
- "virtual_hosts": [{
- "name": "local_service",
- "domains": [
- "*"
- ],
- "routes": [{
- "timeout_ms": 0,
- "prefix": "/",
- "host_rewrite": "www.google.com",
- "cluster": "service_google"
- }]
- }]
- },
- "filters": [{
- "name": "router",
- "config": {}
- }]
- }
- }]
- }],
- "admin": {
- "access_log_path": "/tmp/admin_access.log",
- "address": "tcp://127.0.0.1:9901"
- },
- "cluster_manager": {
- "clusters": [{
- "name": "service_google",
- "connect_timeout_ms": 250,
- "type": "logical_dns",
- "lb_type": "round_robin",
- "hosts": [{
- "url": "tcp://google.com:443"
- }],
- "ssl_context": {
- "sni": "www.google.com"
- }
- }]
- }
-}
diff --git a/configs/google_com_proxy.yaml b/configs/google_com_proxy.yaml
deleted file mode 100644
index 8683e9e4c9254..0000000000000
--- a/configs/google_com_proxy.yaml
+++ /dev/null
@@ -1,31 +0,0 @@
-listeners:
-- address: tcp://127.0.0.1:10000
- filters:
- - name: http_connection_manager
- config:
- codec_type: auto
- stat_prefix: ingress_http
- route_config:
- virtual_hosts:
- - name: local_service
- domains: ["*"]
- routes:
- - prefix: "/"
- timeout_ms: 0
- host_rewrite: www.google.com
- cluster: service_google
- filters:
- - { name: router, config: {} }
-
-admin:
- access_log_path: /tmp/admin_access.log
- address: tcp://127.0.0.1:9901
-
-cluster_manager:
- clusters:
- - name: service_google
- connect_timeout_ms: 250
- type: logical_dns
- lb_type: round_robin
- hosts: [{ url: tcp://google.com:443 }]
- ssl_context: { sni: www.google.com }
diff --git a/configs/requirements.txt b/configs/requirements.txt
index b60338e30ada0..f4c7b793c7b9c 100644
--- a/configs/requirements.txt
+++ b/configs/requirements.txt
@@ -1 +1 @@
-jinja2==2.10
+jinja2==2.10.1
diff --git a/docs/build.sh b/docs/build.sh
index 59178838e1d96..036ee5a67aaa7 100755
--- a/docs/build.sh
+++ b/docs/build.sh
@@ -16,18 +16,23 @@ then
exit 1
fi
# Check the version_history.rst contains current release version.
- grep --fixed-strings "$VERSION_NUMBER" docs/root/intro/version_history.rst
+ grep --fixed-strings "$VERSION_NUMBER" docs/root/intro/version_history.rst \
+ || (echo "Git tag not found in version_history.rst" && exit 1)
+
# Now that we now there is a match, we can use the tag.
export ENVOY_DOCS_VERSION_STRING="tag-$CIRCLE_TAG"
export ENVOY_DOCS_RELEASE_LEVEL=tagged
+ export ENVOY_BLOB_SHA="$CIRCLE_TAG"
else
BUILD_SHA=$(git rev-parse HEAD)
VERSION_NUM=$(cat VERSION)
export ENVOY_DOCS_VERSION_STRING="${VERSION_NUM}"-"${BUILD_SHA:0:6}"
export ENVOY_DOCS_RELEASE_LEVEL=pre-release
+ export ENVOY_BLOB_SHA="$BUILD_SHA"
fi
SCRIPT_DIR=$(dirname "$0")
+API_DIR=$(dirname "$SCRIPT_DIR")/api
BUILD_DIR=build_docs
[[ -z "${DOCS_OUTPUT_DIR}" ]] && DOCS_OUTPUT_DIR=generated/docs
[[ -z "${GENERATED_RST_DIR}" ]] && GENERATED_RST_DIR=generated/rst
@@ -42,7 +47,8 @@ source_venv "$BUILD_DIR"
pip install -r "${SCRIPT_DIR}"/requirements.txt
bazel build ${BAZEL_BUILD_OPTIONS} @envoy_api//docs:protos --aspects \
- tools/protodoc/protodoc.bzl%proto_doc_aspect --output_groups=rst --action_env=CPROFILE_ENABLED --spawn_strategy=standalone
+ tools/protodoc/protodoc.bzl%proto_doc_aspect --output_groups=rst --action_env=CPROFILE_ENABLED \
+ --action_env=ENVOY_BLOB_SHA --spawn_strategy=standalone
# These are the protos we want to put in docs, this list will grow.
# TODO(htuch): Factor this out of this script.
@@ -100,6 +106,9 @@ PROTO_RST="
/envoy/config/filter/http/tap/v2alpha/tap/envoy/config/filter/http/tap/v2alpha/tap.proto.rst
/envoy/config/filter/http/transcoder/v2/transcoder/envoy/config/filter/http/transcoder/v2/transcoder.proto.rst
/envoy/config/filter/listener/original_src/v2alpha1/original_src/envoy/config/filter/listener/original_src/v2alpha1/original_src.proto.rst
+ /envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy/envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy.proto.rst
+ /envoy/config/filter/network/dubbo_proxy/v2alpha1/dubbo_proxy/envoy/config/filter/network/dubbo_proxy/v2alpha1/route.proto.rst
+ /envoy/config/filter/dubbo/router/v2alpha1/router/envoy/config/filter/dubbo/router/v2alpha1/router.proto.rst
/envoy/config/filter/network/client_ssl_auth/v2/client_ssl_auth/envoy/config/filter/network/client_ssl_auth/v2/client_ssl_auth.proto.rst
/envoy/config/filter/network/ext_authz/v2/ext_authz/envoy/config/filter/network/ext_authz/v2/ext_authz.proto.rst
/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager/envoy/config/filter/network/http_connection_manager/v2/http_connection_manager.proto.rst
@@ -151,6 +160,12 @@ do
[ -n "${CPROFILE_ENABLED}" ] && cp -f bazel-bin/"${p}".profile "$(dirname "${DEST}")"
done
+mkdir -p ${GENERATED_RST_DIR}/api-docs
+
+cp -f $API_DIR/xds_protocol.rst "${GENERATED_RST_DIR}/api-docs/xds_protocol.rst"
+
+rsync -rav $API_DIR/diagrams "${GENERATED_RST_DIR}/api-docs"
+
rsync -av "${SCRIPT_DIR}"/root/ "${SCRIPT_DIR}"/conf.py "${GENERATED_RST_DIR}"
sphinx-build -W --keep-going -b html "${GENERATED_RST_DIR}" "${DOCS_OUTPUT_DIR}"
diff --git a/docs/conf.py b/docs/conf.py
index 932135b2cf5fe..64c48a8f6c793 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -14,18 +14,45 @@
from datetime import datetime
import os
+from sphinx.directives.code import CodeBlock
import sphinx_rtd_theme
import sys
+# https://stackoverflow.com/questions/44761197/how-to-use-substitution-definitions-with-code-blocks
+class SubstitutionCodeBlock(CodeBlock):
+ """
+ Similar to CodeBlock but replaces placeholders with variables. See "substitutions" below.
+ """
+
+ def run(self):
+ """
+ Replace placeholders with given variables.
+ """
+ app = self.state.document.settings.env.app
+ new_content = []
+ existing_content = self.content
+ for item in existing_content:
+ for pair in app.config.substitutions:
+ original, replacement = pair
+ item = item.replace(original, replacement)
+ new_content.append(item)
+
+ self.content = new_content
+ return list(CodeBlock.run(self))
+
+
def setup(app):
app.add_config_value('release_level', '', 'env')
+ app.add_config_value('substitutions', [], 'html')
+ app.add_directive('substitution-code-block', SubstitutionCodeBlock)
if not os.environ.get('ENVOY_DOCS_RELEASE_LEVEL'):
raise Exception("ENVOY_DOCS_RELEASE_LEVEL env var must be defined")
release_level = os.environ['ENVOY_DOCS_RELEASE_LEVEL']
+blob_sha = os.environ['ENVOY_BLOB_SHA']
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
@@ -42,10 +69,16 @@ def setup(app):
# ones.
extensions = ['sphinxcontrib.httpdomain', 'sphinx.ext.extlinks', 'sphinx.ext.ifconfig']
extlinks = {
- 'repo': ('https://github.com/envoyproxy/envoy/blob/master/%s', ''),
- 'api': ('https://github.com/envoyproxy/envoy/blob/master/api/%s', ''),
+ 'repo': ('https://github.com/envoyproxy/envoy/blob/{}/%s'.format(blob_sha), ''),
+ 'api': ('https://github.com/envoyproxy/envoy/blob/{}/api/%s'.format(blob_sha), ''),
}
+# Setup global substitutions
+if 'pre-release' in release_level:
+ substitutions = [('|envoy_docker_image|', 'envoy-dev:{}'.format(blob_sha))]
+else:
+ substitutions = [('|envoy_docker_image|', 'envoy:{}'.format(blob_sha))]
+
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
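With the directive and substitutions registered above, docs can embed the resolved image tag inside literal blocks. A hypothetical usage sketch (the command shown is illustrative, not taken from this change):

```rst
.. substitution-code-block:: bash

   docker pull envoyproxy/|envoy_docker_image|
```

At build time the `|envoy_docker_image|` placeholder is rewritten to `envoy-dev:<sha>` for pre-release builds or `envoy:<sha>` for tagged builds, per the `substitutions` list above.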
diff --git a/docs/requirements.txt b/docs/requirements.txt
index 44a91ddeecc9b..8b19a35ae6339 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -1,5 +1,5 @@
GitPython==2.0.8
-Jinja2==2.10
+Jinja2==2.10.1
MarkupSafe==1.1.0
Pygments==2.2.0
alabaster==0.7.10
diff --git a/docs/root/api-v2/config/filter/dubbo/dubbo.rst b/docs/root/api-v2/config/filter/dubbo/dubbo.rst
new file mode 100644
index 0000000000000..d90e49b707dae
--- /dev/null
+++ b/docs/root/api-v2/config/filter/dubbo/dubbo.rst
@@ -0,0 +1,8 @@
+Dubbo filters
+==============
+
+.. toctree::
+ :glob:
+ :maxdepth: 2
+
+ */v2alpha1/*
diff --git a/docs/root/api-v2/config/filter/filter.rst b/docs/root/api-v2/config/filter/filter.rst
index 88385094a2f44..6ddd5e15abf30 100644
--- a/docs/root/api-v2/config/filter/filter.rst
+++ b/docs/root/api-v2/config/filter/filter.rst
@@ -11,3 +11,4 @@ Filters
accesslog/v2/accesslog.proto
fault/v2/fault.proto
listener/listener
+ dubbo/dubbo
diff --git a/docs/root/api/api.rst b/docs/root/api/api.rst
new file mode 100644
index 0000000000000..27e7731090095
--- /dev/null
+++ b/docs/root/api/api.rst
@@ -0,0 +1,11 @@
+.. _api:
+
+API
+===
+
+.. toctree::
+ :glob:
+ :maxdepth: 2
+
+ ../api-v2/api
+ ../api-docs/xds_protocol
diff --git a/docs/root/configuration/access_log.rst b/docs/root/configuration/access_log.rst
index bfc28c108fde2..4faa310a4848a 100644
--- a/docs/root/configuration/access_log.rst
+++ b/docs/root/configuration/access_log.rst
@@ -162,6 +162,16 @@ The following command operators are supported:
TCP
Not implemented ("-").
+.. _config_access_log_format_response_code_details:
+
+%RESPONSE_CODE_DETAILS%
+ HTTP
+  HTTP response code details provide additional information about the response code, such as
+  who set it (the upstream or Envoy) and why.
+
+ TCP
+  Not implemented ("-").
+
%BYTES_SENT%
HTTP
Body bytes sent. For WebSocket connection it will also include response header bytes.
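The new operator can be used in an access log format string like any other command operator; a hedged sketch (the path and surrounding fields are illustrative):

```yaml
access_log:
- name: envoy.file_access_log
  typed_config:
    "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
    path: /var/log/envoy/access.log
    format: "[%START_TIME%] %RESPONSE_CODE% %RESPONSE_CODE_DETAILS%\n"
```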
diff --git a/docs/root/configuration/cluster_manager/cluster_stats.rst b/docs/root/configuration/cluster_manager/cluster_stats.rst
index 47c2a011c3e66..b5b6554be7b63 100644
--- a/docs/root/configuration/cluster_manager/cluster_stats.rst
+++ b/docs/root/configuration/cluster_manager/cluster_stats.rst
@@ -56,6 +56,7 @@ Every cluster has a statistics tree rooted at *cluster..* with the followi
upstream_cx_rx_bytes_buffered, Gauge, Received connection bytes currently buffered
upstream_cx_tx_bytes_total, Counter, Total sent connection bytes
upstream_cx_tx_bytes_buffered, Gauge, Send connection bytes currently buffered
+ upstream_cx_pool_overflow, Counter, Total times that the cluster's connection pool circuit breaker overflowed
upstream_cx_protocol_error, Counter, Total connection protocol errors
upstream_cx_max_requests, Counter, Total connections closed due to maximum requests
upstream_cx_none_healthy, Counter, Total times connection not established due to no healthy hosts
@@ -94,6 +95,8 @@ Every cluster has a statistics tree rooted at *cluster..* with the followi
version, Gauge, Hash of the contents from the last successful API fetch
max_host_weight, Gauge, Maximum weight of any host in the cluster
bind_errors, Counter, Total errors binding the socket to the configured source address
+  assignment_timeout_received, Counter, Total assignments received with endpoint lease information
+  assignment_stale, Counter, Number of times the received assignments went stale before new assignments arrived
Health check statistics
-----------------------
@@ -149,6 +152,7 @@ Circuit breakers statistics will be rooted at *cluster..circuit_breakers.<
:widths: 1, 1, 2
cx_open, Gauge, Whether the connection circuit breaker is closed (0) or open (1)
+ cx_pool_open, Gauge, Whether the connection pool circuit breaker is closed (0) or open (1)
rq_pending_open, Gauge, Whether the pending requests circuit breaker is closed (0) or open (1)
rq_open, Gauge, Whether the requests circuit breaker is closed (0) or open (1)
rq_retry_open, Gauge, Whether the retry circuit breaker is closed (0) or open (1)
diff --git a/docs/root/configuration/configuration.rst b/docs/root/configuration/configuration.rst
index fca889b67559a..3effeaa8e8554 100644
--- a/docs/root/configuration/configuration.rst
+++ b/docs/root/configuration/configuration.rst
@@ -14,6 +14,7 @@ Configuration reference
http_conn_man/http_conn_man
http_filters/http_filters
thrift_filters/thrift_filters
+ dubbo_filters/dubbo_filters
cluster_manager/cluster_manager
health_checkers/health_checkers
access_log
diff --git a/docs/root/configuration/dubbo_filters/dubbo_filters.rst b/docs/root/configuration/dubbo_filters/dubbo_filters.rst
new file mode 100644
index 0000000000000..2577324dd3554
--- /dev/null
+++ b/docs/root/configuration/dubbo_filters/dubbo_filters.rst
@@ -0,0 +1,11 @@
+.. _config_dubbo_filters:
+
+Dubbo filters
+===============
+
+Envoy has the following builtin Dubbo filters.
+
+.. toctree::
+ :maxdepth: 2
+
+ router_filter
diff --git a/docs/root/configuration/dubbo_filters/router_filter.rst b/docs/root/configuration/dubbo_filters/router_filter.rst
new file mode 100644
index 0000000000000..f4393238d9836
--- /dev/null
+++ b/docs/root/configuration/dubbo_filters/router_filter.rst
@@ -0,0 +1,11 @@
+.. _config_dubbo_filters_router:
+
+Router
+======
+
+The router filter implements Dubbo forwarding. It will be used in almost all Dubbo proxying
+scenarios. The filter's main job is to follow the instructions specified in the configured
+:ref:`route table `.
+
+* :ref:`v2 API reference `
+* This filter should be configured with the name *envoy.router*.
diff --git a/docs/root/configuration/http_conn_man/headers.rst b/docs/root/configuration/http_conn_man/headers.rst
index c29eeaa6f7e9d..d48bac34152f6 100644
--- a/docs/root/configuration/http_conn_man/headers.rst
+++ b/docs/root/configuration/http_conn_man/headers.rst
@@ -325,12 +325,6 @@ A few very important notes about XFF:
Envoy will not consider it internal. This is a known "bug" due to the simplification of how
XFF is parsed to determine if a request is internal. In this scenario, do not forward XFF and
allow Envoy to generate a new one with a single internal origin IP.
-3. Testing IPv6 in a large multi-hop system can be difficult from a change management perspective.
- For testing IPv6 compatibility of upstream services which parse XFF header values,
- :ref:`represent_ipv4_remote_address_as_ipv4_mapped_ipv6 `
- can be enabled in the v2 API. Envoy will append an IPv4 address in mapped IPv6 format, e.g.
- ::FFFF:50.0.0.1. This change will also apply to
- :ref:`config_http_conn_man_headers_x-envoy-external-address`.
.. _config_http_conn_man_headers_x-forwarded-proto:
diff --git a/docs/root/configuration/http_conn_man/runtime.rst b/docs/root/configuration/http_conn_man/runtime.rst
index 9b5286bd02b68..dcc85412c6315 100644
--- a/docs/root/configuration/http_conn_man/runtime.rst
+++ b/docs/root/configuration/http_conn_man/runtime.rst
@@ -5,16 +5,13 @@ Runtime
The HTTP connection manager supports the following runtime settings:
-.. _config_http_conn_man_runtime_represent_ipv4_remote_address_as_ipv4_mapped_ipv6:
-
-http_connection_manager.represent_ipv4_remote_address_as_ipv4_mapped_ipv6
- % of requests with a remote address that will have their IPv4 address mapped to IPv6. Defaults to
- 0.
- :ref:`use_remote_address `
- must also be enabled. See
- :ref:`represent_ipv4_remote_address_as_ipv4_mapped_ipv6
- `
- for more details.
+.. _config_http_conn_man_runtime_normalize_path:
+
+http_connection_manager.normalize_path
+ % of requests that will have path normalization applied if not already configured in
+ :ref:`normalize_path `.
+ This is evaluated at configuration load time and will apply to all requests for a given
+ configuration.
.. _config_http_conn_man_runtime_client_enabled:
diff --git a/docs/root/configuration/http_filters/fault_filter.rst b/docs/root/configuration/http_filters/fault_filter.rst
index 39de89628fe29..90b11404b90bc 100644
--- a/docs/root/configuration/http_filters/fault_filter.rst
+++ b/docs/root/configuration/http_filters/fault_filter.rst
@@ -16,15 +16,6 @@ The scope of failures is restricted to those that are observable by an
application communicating over the network. CPU and disk failures on the
local host cannot be emulated.
-Currently, the fault injection filter has the following limitations:
-
-* Abort codes are restricted to HTTP status codes only
-* Delays are restricted to fixed duration.
-
-Future versions will include support for restricting faults to specific
-routes, injecting *gRPC* and *HTTP/2* specific error codes and delay
-durations based on distributions.
-
Configuration
-------------
@@ -36,6 +27,44 @@ Configuration
* :ref:`v2 API reference `
* This filter should be configured with the name *envoy.fault*.
+.. _config_http_filters_fault_injection_http_header:
+
+Controlling fault injection via HTTP headers
+--------------------------------------------
+
+The fault filter allows fault configuration to be specified by the caller. This is useful in
+scenarios where the client should be able to control its own fault configuration. The currently
+supported header controls are:
+
+* Request delay configuration via the *x-envoy-fault-delay-request* header. The header value
+ should be an integer that specifies the number of milliseconds to throttle the latency for.
+* Response rate limit configuration via the *x-envoy-fault-throughput-response* header. The
+  header value should be an integer that specifies the limit in KiB/s and must be > 0.
+
+.. attention::
+
+ Allowing header control is inherently dangerous if exposed to untrusted clients. In this case,
+ it is suggested to use the :ref:`max_active_faults
+ ` setting to limit the
+ maximum concurrent faults that can be active at any given time.
+
+The following is an example configuration that enables header control for both of the above
+options:
+
+.. code-block:: yaml
+
+ name: envoy.fault
+ config:
+ max_active_faults: 100
+ delay:
+ header_delay: {}
+ percentage:
+ numerator: 100
+ response_rate_limit:
+ header_limit: {}
+ percentage:
+ numerator: 100
+
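As a rough illustration of the header semantics above (a hypothetical helper, not Envoy source code): the delay header carries an integer millisecond value, and the throughput header carries an integer KiB/s value that must be greater than zero:

```python
def parse_fault_headers(headers):
    """Interpret the fault-control headers described above.

    Returns (delay_ms, limit_kib_per_s); an entry is None when the
    corresponding header is absent or invalid. Hypothetical helper,
    not Envoy source code.
    """
    delay = None
    raw = headers.get("x-envoy-fault-delay-request")
    if raw is not None and raw.isdigit():
        delay = int(raw)  # fixed request delay in milliseconds
    limit = None
    raw = headers.get("x-envoy-fault-throughput-response")
    if raw is not None and raw.isdigit() and int(raw) > 0:
        limit = int(raw)  # response rate limit in KiB/s; must be > 0
    return delay, limit
```

Pairing this with `max_active_faults` (as the attention note advises) keeps untrusted callers from holding unbounded faults open.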
.. _config_http_filters_fault_injection_runtime:
Runtime
@@ -43,26 +72,38 @@ Runtime
The HTTP fault injection filter supports the following global runtime settings:
+.. attention::
+
+ Some of the following runtime keys require the filter to be configured for the specific fault
+ type and some do not. Please consult the documentation for each key for more information.
+
fault.http.abort.abort_percent
% of requests that will be aborted if the headers match. Defaults to the
*abort_percent* specified in config. If the config does not contain an
- *abort* block, then *abort_percent* defaults to 0.
+ *abort* block, then *abort_percent* defaults to 0. For historic reasons, this runtime key is
+ available regardless of whether the filter is :ref:`configured for abort
+ `.
fault.http.abort.http_status
  HTTP status code that will be used as the status of requests that will be
aborted if the headers match. Defaults to the HTTP status code specified
in the config. If the config does not contain an *abort* block, then
- *http_status* defaults to 0.
+ *http_status* defaults to 0. For historic reasons, this runtime key is
+ available regardless of whether the filter is :ref:`configured for abort
+ `.
fault.http.delay.fixed_delay_percent
% of requests that will be delayed if the headers match. Defaults to the
- *delay_percent* specified in the config or 0 otherwise.
+ *delay_percent* specified in the config or 0 otherwise. This runtime key is only available when
+ the filter is :ref:`configured for delay
+ `.
fault.http.delay.fixed_duration_ms
The delay duration in milliseconds. If not specified, the
*fixed_duration_ms* specified in the config will be used. If this field
is missing from both the runtime and the config, no delays will be
- injected.
+ injected. This runtime key is only available when the filter is :ref:`configured for delay
+ `.
fault.http.max_active_faults
The maximum number of active faults (of all types) that Envoy will will inject via the fault
@@ -72,10 +113,10 @@ fault.http.max_active_faults
` setting will be used.
fault.http.rate_limit.response_percent
- % of requests which will have a response rate limit fault injected, if the filter is
- :ref:`configured ` to
- do so. Defaults to the value set in the :ref:`percentage
- ` field.
+ % of requests which will have a response rate limit fault injected. Defaults to the value set in
+ the :ref:`percentage ` field.
+ This runtime key is only available when the filter is :ref:`configured for response rate limiting
+ `.
*Note*, fault filter runtime settings for the specific downstream cluster
override the default ones if present. The following are downstream specific
diff --git a/docs/root/configuration/http_filters/router_filter.rst b/docs/root/configuration/http_filters/router_filter.rst
index 143a438048ff2..d33a974eaad51 100644
--- a/docs/root/configuration/http_filters/router_filter.rst
+++ b/docs/root/configuration/http_filters/router_filter.rst
@@ -28,21 +28,27 @@ x-envoy-max-retries
^^^^^^^^^^^^^^^^^^^
If a :ref:`route config retry policy ` or a
:ref:`virtual host retry policy ` is in place, Envoy will default to retrying
-one time unless explicitly specified. The number of retries can be explicitly set in either the virtual host retry config,
-or the route retry config, or by using this header. If a retry policy is not configured and
-:ref:`config_http_filters_router_x-envoy-retry-on` or :ref:`config_http_filters_router_x-envoy-retry-grpc-on` headers
-are not specified, Envoy will not retry a failed request.
+one time unless explicitly specified. The number of retries can be explicitly set in the virtual host retry config,
+the route retry config, or by using this header. If this header is used, its value takes precedence over the number of
+retries set in either retry policy. If a retry policy is not configured and :ref:`config_http_filters_router_x-envoy-retry-on`
+or :ref:`config_http_filters_router_x-envoy-retry-grpc-on` headers are not specified, Envoy will not retry a failed request.
A few notes on how Envoy does retries:
* The route timeout (set via :ref:`config_http_filters_router_x-envoy-upstream-rq-timeout-ms` or the
:ref:`route configuration `) **includes** all
retries. Thus if the request timeout is set to 3s, and the first request attempt takes 2.7s, the
- retry (including backoff) has .3s to complete. This is by design to avoid an exponential
+ retry (including back-off) has .3s to complete. This is by design to avoid an exponential
retry/timeout explosion.
-* Envoy uses a fully jittered exponential backoff algorithm for retries with a base time of 25ms.
- The first retry will be delayed randomly between 0-24ms, the 2nd between 0-74ms, the 3rd between
- 0-174ms and so on.
+* Envoy uses a fully jittered exponential back-off algorithm for retries with a default base
+ interval of 25ms. Given a base interval B and retry number N, the back-off for the retry is in
+ the range :math:`\big[0, (2^N-1)B\big)`. For example, given the default interval, the first retry
+ will be delayed randomly by 0-24ms, the 2nd by 0-74ms, the 3rd by 0-174ms, and so on. The
+ interval is capped at a maximum interval, which defaults to 10 times the base interval (250ms).
+ The default base interval (and therefore the maximum interval) can be manipulated by setting the
+ upstream.base_retry_backoff_ms runtime parameter. The back-off intervals can also be modified
+ by configuring the retry policy's
+ :ref:`retry back-off `.
* If max retries is set both by header as well as in the route configuration, the maximum value is
taken when determining the max retries to use for the request.
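The jittered back-off description above can be sketched numerically (a simplified model, not Envoy's implementation; the 25ms base and 10x cap are the documented defaults):

```python
import random

def retry_backoff_ms(retry_number, base_ms=25, max_factor=10):
    """Fully jittered exponential back-off: retry N waits a uniform
    random time in [0, (2**N - 1) * base), capped at max_factor * base
    (250ms with the defaults)."""
    upper = min((2 ** retry_number - 1) * base_ms, max_factor * base_ms)
    return random.uniform(0, upper)
```

So the first retry draws from 0-24ms, the second from 0-74ms, and from roughly the fifth retry onward every draw hits the 250ms cap.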
@@ -156,7 +162,7 @@ x-envoy-retriable-status-codes
Setting this header informs Envoy about what status codes should be considered retriable when used in
conjunction with the :ref:`retriable-status-code ` retry policy.
When the corresponding retry policy is set, the list of retriable status codes will be considered retriable
-in addition to the status codes enabled for retry through other retry policies.
+in addition to the status codes enabled for retry through other retry policies.
The list is a comma delimited list of integers: "409" would cause 409 to be considered retriable, while "504,409"
would consider both 504 and 409 retriable.
@@ -239,7 +245,7 @@ x-envoy-ratelimited
If this header is set by upstream, Envoy will not retry. Currently the value of the header is not
looked at, only its presence. This header is set by :ref:`rate limit filter`
-when the request is rate limited.
+when the request is rate limited.
.. _config_http_filters_router_x-envoy-decorator-operation:
@@ -350,8 +356,9 @@ Runtime
The router filter supports the following runtime settings:
upstream.base_retry_backoff_ms
- Base exponential retry back off time. See :ref:`here ` for more
- information. Defaults to 25ms.
+ Base exponential retry back-off time. See :ref:`here ` and
+ :ref:`config_http_filters_router_x-envoy-max-retries` for more information. Defaults to 25ms.
+ The default maximum retry back-off time is 10 times this value.
.. _config_http_filters_router_runtime_maintenance_mode:
diff --git a/docs/root/configuration/network_filters/dubbo_proxy_filter.rst b/docs/root/configuration/network_filters/dubbo_proxy_filter.rst
new file mode 100644
index 0000000000000..503dd6970a9b6
--- /dev/null
+++ b/docs/root/configuration/network_filters/dubbo_proxy_filter.rst
@@ -0,0 +1,82 @@
+.. _config_network_filters_dubbo_proxy:
+
+Dubbo proxy
+============
+
+The Dubbo proxy filter decodes the RPC protocol between Dubbo clients
+and servers. The decoded RPC information is converted to metadata, which
+includes the basic request ID, request type, and serialization type, as
+well as the service name, method name, parameter names, and parameter
+values required for routing.
+
+* :ref:`v2 API reference `
+* This filter should be configured with the name *envoy.filters.network.dubbo_proxy*.
+
+.. _config_network_filters_dubbo_proxy_stats:
+
+Statistics
+----------
+
+Every configured Dubbo proxy filter has statistics rooted at *dubbo..* with the
+following statistics:
+
+.. csv-table::
+ :header: Name, Type, Description
+ :widths: 1, 1, 2
+
+ request, Counter, Total requests
+ request_twoway, Counter, Total twoway requests
+ request_oneway, Counter, Total oneway requests
+ request_event, Counter, Total event requests
+ request_decoding_error, Counter, Total decoding error requests
+ request_decoding_success, Counter, Total decoding success requests
+ request_active, Gauge, Total active requests
+ response, Counter, Total responses
+ response_success, Counter, Total success responses
+ response_error, Counter, Total responses with a protocol parse error
+ response_error_caused_connection_close, Counter, Total responses that caused the downstream connection to close
+ response_business_exception, Counter, Total responses carrying exception information returned by the business layer
+ response_decoding_error, Counter, Total decoding error responses
+ response_decoding_success, Counter, Total decoding success responses
+ local_response_success, Counter, Total local responses
+ local_response_error, Counter, Total local responses with an encoding error
+ local_response_business_exception, Counter, Total local responses containing a business exception
+ cx_destroy_local_with_active_rq, Counter, Connections destroyed locally with an active query
+ cx_destroy_remote_with_active_rq, Counter, Connections destroyed remotely with an active query
+
+
+Implementing a custom filter based on the dubbo proxy filter
+------------------------------------------------------------
+
+Like HTTP, the dubbo proxy filter provides a convenient way to extend
+its filter chain with a custom filter for the dubbo protocol. First,
+implement the DecoderFilter interface and give the filter a name, such
+as testFilter. Second, add its configuration, as in the following sample:
+
+.. code-block:: yaml
+
+ filter_chains:
+ - filters:
+ - name: envoy.filters.network.dubbo_proxy
+ config:
+      stat_prefix: dubbo_incoming_stats
+ protocol_type: Dubbo
+ serialization_type: Hessian2
+ route_config:
+ name: local_route
+ interface: org.apache.dubbo.demo.DemoService
+ routes:
+ - match:
+ method:
+ name:
+ exact: sayHello
+ route:
+ cluster: user_service_dubbo_server
+ dubbo_filters:
+ - name: envoy.filters.dubbo.testFilter
+ config:
+ "@type": type.googleapis.com/google.protobuf.Struct
+ value:
+ name: test_service
+ - name: envoy.filters.dubbo.router
\ No newline at end of file
diff --git a/docs/root/configuration/network_filters/network_filters.rst b/docs/root/configuration/network_filters/network_filters.rst
index dd559ddd66890..f43f474ac6547 100644
--- a/docs/root/configuration/network_filters/network_filters.rst
+++ b/docs/root/configuration/network_filters/network_filters.rst
@@ -10,6 +10,7 @@ filters.
.. toctree::
:maxdepth: 2
+ dubbo_proxy_filter
client_ssl_auth_filter
echo_filter
ext_authz_filter
@@ -21,3 +22,4 @@ filters.
tcp_proxy_filter
thrift_proxy_filter
sni_cluster_filter
+ zookeeper_proxy_filter
diff --git a/docs/root/configuration/network_filters/zookeeper_proxy_filter.rst b/docs/root/configuration/network_filters/zookeeper_proxy_filter.rst
new file mode 100644
index 0000000000000..cf8e1c9716a72
--- /dev/null
+++ b/docs/root/configuration/network_filters/zookeeper_proxy_filter.rst
@@ -0,0 +1,92 @@
+.. _config_network_filters_zookeeper_proxy:
+
+ZooKeeper proxy
+===============
+
+The ZooKeeper proxy filter decodes the client protocol for
+`Apache ZooKeeper `_. It decodes the requests,
+responses and events in the payload. Most opcodes known in
+`ZooKeeper 3.5 `_
+are supported. The unsupported ones are related to SASL authentication.
+
+.. attention::
+
+ The zookeeper_proxy filter is experimental and is currently under active
+ development. Capabilities will be expanded over time and the
+ configuration structures are likely to change.
+
+.. _config_network_filters_zookeeper_proxy_config:
+
+Configuration
+-------------
+
+The ZooKeeper proxy filter should be chained with the TCP proxy filter as shown
+in the configuration snippet below:
+
+.. code-block:: yaml
+
+ filter_chains:
+ - filters:
+ - name: envoy.filters.network.zookeeper_proxy
+ config:
+ stat_prefix: zookeeper
+ - name: envoy.tcp_proxy
+ config:
+ stat_prefix: tcp
+ cluster: ...
+
+
+.. _config_network_filters_zookeeper_proxy_stats:
+
+Statistics
+----------
+
+Every configured ZooKeeper proxy filter has statistics rooted at *zookeeper..* with the
+following statistics:
+
+.. csv-table::
+ :header: Name, Type, Description
+ :widths: 1, 1, 2
+
+ decoder_error, Counter, Number of times a message wasn't decoded
+ request_bytes, Counter, Number of bytes in decoded request messages
+ connect_rq, Counter, Number of regular connect (non-readonly) requests
+ connect_readonly_rq, Counter, Number of connect requests with the readonly flag set
+ ping_rq, Counter, Number of ping requests
+ auth._rq, Counter, Number of auth requests for a given type
+ getdata_rq, Counter, Number of getdata requests
+ create_rq, Counter, Number of create requests
+ create2_rq, Counter, Number of create2 requests
+ setdata_rq, Counter, Number of setdata requests
+ getchildren_rq, Counter, Number of getchildren requests
+ getchildren2_rq, Counter, Number of getchildren2 requests
+ remove_rq, Counter, Number of delete requests
+ exists_rq, Counter, Number of stat requests
+ getacl_rq, Counter, Number of getacl requests
+ setacl_rq, Counter, Number of setacl requests
+ sync_rq, Counter, Number of sync requests
+ multi_rq, Counter, Number of multi transaction requests
+ reconfig_rq, Counter, Number of reconfig requests
+ close_rq, Counter, Number of close requests
+ setwatches_rq, Counter, Number of setwatches requests
+ checkwatches_rq, Counter, Number of checkwatches requests
+ removewatches_rq, Counter, Number of removewatches requests
+ check_rq, Counter, Number of check requests
+
+.. _config_network_filters_zookeeper_proxy_dynamic_metadata:
+
+Dynamic Metadata
+----------------
+
+The ZooKeeper filter emits the following dynamic metadata for each message parsed:
+
+.. csv-table::
+ :header: Name, Type, Description
+ :widths: 1, 1, 2
+
+ , string, "The path associated with the request, response or event"
+ , string, "The opname for the request, response or event"
+ , string, "The string representation of the flags applied to the znode"
+ , string, "The size of the request message in bytes"
+ , string, "True if a watch is being set, false otherwise"
+ , string, "The version parameter, if any, given with the request"
diff --git a/docs/root/configuration/overview/v2_overview.rst b/docs/root/configuration/overview/v2_overview.rst
index de78c974e5915..30b6066a98206 100644
--- a/docs/root/configuration/overview/v2_overview.rst
+++ b/docs/root/configuration/overview/v2_overview.rst
@@ -8,19 +8,19 @@ The Envoy v2 APIs are defined as `proto3
`_ in the `data plane API
repository `_. They support
-* Streaming delivery of `xDS `_
- API updates via gRPC. This reduces resource requirements and can lower the update latency.
+* Streaming delivery of :repo:`xDS ` API updates via gRPC. This reduces
+ resource requirements and can lower the update latency.
* A new REST-JSON API in which the JSON/YAML formats are derived mechanically via the `proto3
canonical JSON mapping
`_.
* Delivery of updates via the filesystem, REST-JSON or gRPC endpoints.
* Advanced load balancing through an extended endpoint assignment API and load
and resource utilization reporting to management servers.
-* `Stronger consistency and ordering properties
- `_
+* :repo:`Stronger consistency and ordering properties
+ `
when needed. The v2 APIs still maintain a baseline eventual consistency model.
-See the `xDS protocol description `_ for
+See the :repo:`xDS protocol description ` for
further details on aspects of v2 message exchange between Envoy and the management server.
.. _config_overview_v2_bootstrap:
@@ -199,8 +199,8 @@ In the above example, the EDS management server could then return a proto encodi
The versioning and type URL scheme that appear above are explained in more
-detail in the `streaming gRPC subscription protocol
-`_
+detail in the :repo:`streaming gRPC subscription protocol
+`
documentation.
Dynamic
@@ -332,17 +332,6 @@ The management server could respond to EDS requests with:
address: 127.0.0.2
port_value: 1234
-Upgrading from v1 configuration
--------------------------------
-
-While new v2 bootstrap JSON/YAML can be written, it might be expedient to upgrade an existing
-v1 JSON/YAML configuration to v2. To do this (in an Envoy source tree),
-you can run:
-
-.. code-block:: console
-
- bazel run //tools:v1_to_bootstrap
-
.. _config_overview_v2_management_server:
Management server
@@ -352,7 +341,7 @@ A v2 xDS management server will implement the below endpoints as required for
gRPC and/or REST serving. In both streaming gRPC and
REST-JSON cases, a :ref:`DiscoveryRequest ` is sent and a
:ref:`DiscoveryResponse ` received following the
-`xDS protocol `_.
+:repo:`xDS protocol `.
.. _v2_grpc_streaming_endpoints:
@@ -361,9 +350,8 @@ gRPC streaming endpoints
.. http:post:: /envoy.api.v2.ClusterDiscoveryService/StreamClusters
-See `cds.proto
-`_
-for the service definition. This is used by Envoy as a client when
+See :repo:`cds.proto ` for the service definition. This is used by Envoy
+as a client when
.. code-block:: yaml
@@ -380,8 +368,8 @@ is set in the :ref:`dynamic_resources
.. http:post:: /envoy.api.v2.EndpointDiscoveryService/StreamEndpoints
-See `eds.proto
-`_
+See :repo:`eds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -399,8 +387,8 @@ is set in the :ref:`eds_cluster_config
.. http:post:: /envoy.api.v2.ListenerDiscoveryService/StreamListeners
-See `lds.proto
-`_
+See :repo:`lds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -418,8 +406,8 @@ is set in the :ref:`dynamic_resources
.. http:post:: /envoy.api.v2.RouteDiscoveryService/StreamRoutes
-See `rds.proto
-`_
+See :repo:`rds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -441,8 +429,8 @@ REST endpoints
.. http:post:: /v2/discovery:clusters
-See `cds.proto
-`_
+See :repo:`cds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -458,8 +446,8 @@ is set in the :ref:`dynamic_resources
.. http:post:: /v2/discovery:endpoints
-See `eds.proto
-`_
+See :repo:`eds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -475,8 +463,8 @@ is set in the :ref:`eds_cluster_config
.. http:post:: /v2/discovery:listeners
-See `lds.proto
-`_
+See :repo:`lds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -492,8 +480,8 @@ is set in the :ref:`dynamic_resources
.. http:post:: /v2/discovery:routes
-See `rds.proto
-`_
+See :repo:`rds.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -536,14 +524,14 @@ synchronization to correctly sequence the update. With ADS, the management
server would deliver the CDS, EDS and then RDS updates on a single stream.
ADS is only available for gRPC streaming (not REST) and is described more fully
-in `this
-`_
+in :repo:`this
+`
document. The gRPC endpoint is:
-.. http:post:: /envoy.api.v2.AggregatedDiscoveryService/StreamAggregatedResources
+.. http:post:: /envoy.service.discovery.v2.AggregatedDiscoveryService/StreamAggregatedResources
-See `discovery.proto
-`_
+See :repo:`discovery.proto
+`
for the service definition. This is used by Envoy as a client when
.. code-block:: yaml
@@ -622,8 +610,8 @@ means that we will not break wire format compatibility.
manner that does not break `backwards compatibility
`_.
Fields in the above protos may be later deprecated, subject to the
-`breaking change policy
-`_,
+:repo:`breaking change policy
+`,
when their related functionality is no longer required. While frozen APIs
have their wire format compatibility preserved, we reserve the right to change
proto namespaces, file locations and nesting relationships, which may cause
diff --git a/docs/root/configuration/runtime.rst b/docs/root/configuration/runtime.rst
index 7d3ced54614c0..9934b005ba192 100644
--- a/docs/root/configuration/runtime.rst
+++ b/docs/root/configuration/runtime.rst
@@ -89,7 +89,7 @@ feature deprecation in Envoy is in 3 phases: warn-by-default, fail-by-default, a
In the first phase, Envoy logs a warning to the warning log that the feature is deprecated and
increments the :ref:`deprecated_feature_use ` runtime stat.
-Users are encouraged to go to :repo:`DEPRECATED.md ` to see how to
+Users are encouraged to go to :ref:`deprecated ` to see how to
migrate to the new code path and make sure it is suitable for their use case.
In the second phase the message and filename will be added to
diff --git a/docs/root/configuration/secret.rst b/docs/root/configuration/secret.rst
index 2f6445260eb96..bf42233583fce 100644
--- a/docs/root/configuration/secret.rst
+++ b/docs/root/configuration/secret.rst
@@ -20,8 +20,8 @@ The connection between Envoy proxy and SDS server has to be secure. One option i
SDS server
----------
-A SDS server needs to implement the gRPC service `SecretDiscoveryService `_.
-It follows the same protocol as other `xDS `_
+A SDS server needs to implement the gRPC service :repo:`SecretDiscoveryService `.
+It follows the same protocol as other :repo:`xDS ` APIs.
SDS Configuration
-----------------
diff --git a/docs/root/configuration/well_known_dynamic_metadata.rst b/docs/root/configuration/well_known_dynamic_metadata.rst
index dd11866a42a02..73215617e46db 100644
--- a/docs/root/configuration/well_known_dynamic_metadata.rst
+++ b/docs/root/configuration/well_known_dynamic_metadata.rst
@@ -17,3 +17,4 @@ The following Envoy filters emit dynamic metadata that other filters can leverag
* :ref:`MySQL Proxy Filter `
* :ref:`Role Based Access Control (RBAC) Filter `
* :ref:`Role Based Access Control (RBAC) Network Filter `
+* :ref:`ZooKeeper Proxy Filter `
diff --git a/docs/root/index.rst b/docs/root/index.rst
index 2e824f0c135a4..354b3578c8ed2 100644
--- a/docs/root/index.rst
+++ b/docs/root/index.rst
@@ -18,5 +18,5 @@ Envoy documentation
configuration/configuration
operations/operations
extending/extending
- api-v2/api
+ api/api
faq/overview
diff --git a/docs/root/intro/arch_overview/circuit_breaking.rst b/docs/root/intro/arch_overview/circuit_breaking.rst
index b2b6e31aa8c30..57dc097dba90d 100644
--- a/docs/root/intro/arch_overview/circuit_breaking.rst
+++ b/docs/root/intro/arch_overview/circuit_breaking.rst
@@ -9,6 +9,8 @@ mesh is that Envoy enforces circuit breaking limits at the network level as oppo
configure and code each application independently. Envoy supports various types of fully distributed
(not coordinated) circuit breaking:
+.. _arch_overview_circuit_break_cluster_maximum_connections:
+
* **Cluster maximum connections**: The maximum number of connections that Envoy will establish to
all hosts in an upstream cluster. In practice this is only applicable to HTTP/1.1 clusters since
HTTP/2 uses a single connection to each host. If this circuit breaker overflows the :ref:`upstream_cx_overflow
@@ -34,6 +36,24 @@ configure and code each application independently. Envoy supports various types
:ref:`upstream_rq_retry_overflow ` counter for the cluster
will increment.
+ .. _arch_overview_circuit_break_cluster_maximum_connection_pools:
+
+* **Cluster maximum concurrent connection pools**: The maximum number of connection pools that can be
+ concurrently instantiated. Some features, such as the
+ :ref:`Original Src Listener Filter `, can
+ create an unbounded number of connection pools. When a cluster has exhausted its concurrent
+ connection pools, it will attempt to reclaim an idle one. If it cannot, then the circuit breaker
+ will overflow. This differs from
+ :ref:`Cluster maximum connections ` in that
+ connection pools never time out, whereas connections typically will. Connections automatically
+ clean up; connection pools do not. Note that in order for a connection pool to function it needs
+ at least one upstream connection, so this value should likely be no greater than
+ :ref:`Cluster maximum connections `.
+ If this circuit breaker overflows the
+ :ref:`upstream_cx_pool_overflow ` counter for the cluster
+ will increment.
+
+
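The relationship above can be made concrete with a config sketch. This is a minimal, hypothetical v2 cluster fragment (the cluster name and threshold values are illustrative, not from the source) showing the connection-pool circuit breaker configured alongside, and no greater than, the connection limit:

```yaml
clusters:
- name: backend_service          # hypothetical cluster name
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_connections: 1024       # cluster maximum connections
      max_connection_pools: 1024  # cluster maximum concurrent connection pools
```

Keeping *max_connection_pools* at or below *max_connections* reflects the note that each functioning pool needs at least one upstream connection.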
Each circuit breaking limit is :ref:`configurable `
and tracked on a per upstream cluster and per priority basis. This allows different components of
the distributed system to be tuned independently and have different limits. The live state of these
diff --git a/docs/root/intro/arch_overview/cluster_manager.rst b/docs/root/intro/arch_overview/cluster_manager.rst
index 71739a4a302c8..8550d3a0655ba 100644
--- a/docs/root/intro/arch_overview/cluster_manager.rst
+++ b/docs/root/intro/arch_overview/cluster_manager.rst
@@ -25,6 +25,8 @@ distribution.
* Cluster manager :ref:`configuration `.
* CDS :ref:`configuration `.
+.. _arch_overview_cluster_warming:
+
Cluster warming
---------------
diff --git a/docs/root/intro/arch_overview/grpc.rst b/docs/root/intro/arch_overview/grpc.rst
index 84226f444511d..1277ac6e983fd 100644
--- a/docs/root/intro/arch_overview/grpc.rst
+++ b/docs/root/intro/arch_overview/grpc.rst
@@ -40,9 +40,9 @@ Envoy supports two gRPC bridges:
gRPC services
-------------
-In addition to proxying gRPC on the data plane, Envoy make use of gRPC for its
+In addition to proxying gRPC on the data plane, Envoy makes use of gRPC for its
control plane, where it :ref:`fetches configuration from management server(s)
-` and also in filters, for example for :ref:`rate limiting
+` and in filters, such as for :ref:`rate limiting
` or authorization checks. We refer to these as
*gRPC services*.
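As an illustration, a *gRPC service* is typically declared by pointing at a cluster defined elsewhere in the bootstrap. This is a hedged sketch for the rate limit service; the cluster name `rate_limit_cluster` is an assumption:

```yaml
rate_limit_service:
  grpc_service:
    envoy_grpc:
      cluster_name: rate_limit_cluster  # assumed cluster defined in static_resources
```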
diff --git a/docs/root/intro/arch_overview/http_connection_management.rst b/docs/root/intro/arch_overview/http_connection_management.rst
index e21155b3f2b4c..68bdeacb33e20 100644
--- a/docs/root/intro/arch_overview/http_connection_management.rst
+++ b/docs/root/intro/arch_overview/http_connection_management.rst
@@ -48,31 +48,31 @@ table `. The route table can be specified in one of
Retry plugin configuration
--------------------------
-Normally during retries, hosts selection follows the same process as the original request. To modify
-this behavior retry plugins can be used, which fall into two categories:
+Normally during retries, host selection follows the same process as the original request. Retry plugins
+can be used to modify this behavior, and they fall into two categories:
* :ref:`Host Predicates `:
- These predicates can be used to "reject" a host, which will cause host selection to be reattempted.
- Any number of these predicates can be specified, and the host will be rejected if any of the predicates reject the host.
+ These predicates can be used to "reject" a host, which will cause host selection to be reattempted.
+ Any number of these predicates can be specified, and the host will be rejected if any of the predicates reject the host.
Envoy supports the following built-in host predicates
* *envoy.retry_host_predicates.previous_hosts*: This will keep track of previously attempted hosts, and rejects
hosts that have already been attempted.
-
+
* :ref:`Priority Predicates`: These predicates can
be used to adjust the priority load used when selecting a priority for a retry attempt. Only one such
predicate may be specified.
Envoy supports the following built-in priority predicates
- * *envoy.retry_priority.previous_priorities*: This will keep track of previously attempted priorities,
+ * *envoy.retry_priority.previous_priorities*: This will keep track of previously attempted priorities,
and adjust the priority load such that other priorities will be targeted in subsequent retry attempts.
Host selection will continue until either the configured predicates accept the host or a configurable
-:ref:`max attempts ` has been reached.
+:ref:`max attempts ` has been reached.
-These plugins can be combined to affect both host selection and priority load. Envoy can also be extended
+These plugins can be combined to affect both host selection and priority load. Envoy can also be extended
with custom retry plugins, similar to how custom filters can be added.
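The two plugin categories above can be combined in a route's retry policy. This is a minimal sketch assuming a hypothetical cluster name and illustrative retry settings:

```yaml
route:
  cluster: backend_service          # hypothetical cluster
  retry_policy:
    retry_on: 5xx
    num_retries: 3
    # Host predicate: reject hosts that were already attempted.
    retry_host_predicate:
    - name: envoy.retry_host_predicates.previous_hosts
    host_selection_retry_max_attempts: 3
    # Priority predicate: steer retries toward priorities not yet tried.
    retry_priority:
      name: envoy.retry_priority.previous_priorities
      config:
        update_frequency: 2
```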
@@ -152,7 +152,7 @@ upstream will be modified by:
2. Replacing the Authority/Host, Scheme, and Path headers with the values from the Location header.
The altered request headers will then have a new route selected, be sent through a new filter chain,
-and then shipped upstream with all of the normal Envoy request sanitization taking place.
+and then shipped upstream with all of the normal Envoy request sanitization taking place.
.. warning::
Note that HTTP connection manager sanitization such as clearing untrusted headers will only be
diff --git a/docs/root/intro/arch_overview/redis.rst b/docs/root/intro/arch_overview/redis.rst
index 044ea66553726..4d2929a14e2ae 100644
--- a/docs/root/intro/arch_overview/redis.rst
+++ b/docs/root/intro/arch_overview/redis.rst
@@ -8,7 +8,9 @@ In this mode, the goals of Envoy are to maintain availability and partition tole
over consistency. This is the key point when comparing Envoy to `Redis Cluster
`_. Envoy is designed as a best-effort cache,
meaning that it will not try to reconcile inconsistent data or keep a globally consistent
-view of cluster membership.
+view of cluster membership. It also supports routing commands from different workloads to
+different upstream clusters based on their access patterns, eviction, or isolation
+requirements.
The Redis project offers a thorough reference on partitioning as it relates to Redis. See
"`Partitioning: how to split data among multiple Redis instances
@@ -22,6 +24,7 @@ The Redis project offers a thorough reference on partitioning as it relates to R
* Detailed command statistics.
* Active and passive healthchecking.
* Hash tagging.
+* Prefix routing.
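Prefix routing is what enables the per-workload cluster split described above. A hedged sketch of a Redis proxy filter using it (cluster names and key prefixes are illustrative assumptions):

```yaml
- name: envoy.redis_proxy
  config:
    stat_prefix: redis
    settings:
      op_timeout: 5s
    prefix_routes:
      routes:
      - prefix: "user:"       # e.g. keys like user:123
        cluster: redis_users
      - prefix: "session:"
        cluster: redis_sessions
      catch_all_cluster: redis_default  # everything else
```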
**Planned future enhancements**:
@@ -148,6 +151,8 @@ For details on each command's usage see the official
ZREVRANGEBYLEX, Sorted Set
ZREVRANGEBYSCORE, Sorted Set
ZREVRANK, Sorted Set
+ ZPOPMIN, Sorted Set
+ ZPOPMAX, Sorted Set
ZSCAN, Sorted Set
ZSCORE, Sorted Set
APPEND, String
diff --git a/docs/root/intro/arch_overview/tracing.rst b/docs/root/intro/arch_overview/tracing.rst
index 22d4f16410bd7..47c635d5cb432 100644
--- a/docs/root/intro/arch_overview/tracing.rst
+++ b/docs/root/intro/arch_overview/tracing.rst
@@ -12,14 +12,18 @@ sources of latency. Envoy supports three features related to system wide tracing
* **Request ID generation**: Envoy will generate UUIDs when needed and populate the
:ref:`config_http_conn_man_headers_x-request-id` HTTP header. Applications can forward the
x-request-id header for unified logging as well as tracing.
-* **External trace service integration**: Envoy supports pluggable external trace visualization
- providers. Currently Envoy supports `LightStep `_, `Zipkin `_
- or any Zipkin compatible backends (e.g. `Jaeger `_), and
- `Datadog `_.
- However, support for other tracing providers would not be difficult to add.
* **Client trace ID joining**: The :ref:`config_http_conn_man_headers_x-client-trace-id` header can
be used to join untrusted request IDs to the trusted internal
:ref:`config_http_conn_man_headers_x-request-id`.
+* **External trace service integration**: Envoy supports pluggable external trace visualization
+ providers, which are divided into two subgroups:
+
+ - External tracers which are part of the Envoy code base, like `LightStep `_,
+ `Zipkin `_ or any Zipkin compatible backends (e.g. `Jaeger `_), and
+ `Datadog `_.
+ - External tracers provided as third-party plugins, like `Instana `_.
+
+Support for other tracing providers would not be difficult to add.
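For reference, a built-in provider is enabled in the bootstrap; this sketch uses the Zipkin tracer, with the collector cluster name being an assumption:

```yaml
tracing:
  http:
    name: envoy.zipkin
    config:
      collector_cluster: zipkin        # assumed cluster pointing at the Zipkin collector
      collector_endpoint: "/api/v1/spans"
```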
How to initiate a trace
-----------------------
diff --git a/docs/root/intro/arch_overview/websocket.rst b/docs/root/intro/arch_overview/websocket.rst
index dab57656eb276..e854eb53bb271 100644
--- a/docs/root/intro/arch_overview/websocket.rst
+++ b/docs/root/intro/arch_overview/websocket.rst
@@ -32,15 +32,15 @@ laid out below, but custom filter chains can only be configured on a per-HttpCon
| F | F | F |
+-----------------------+-------------------------+-------------------+
-Note that the statistics for upgrades are all bundled together so websocket
+Note that the statistics for upgrades are all bundled together so WebSocket
:ref:`statistics ` are tracked by stats such as
downstream_cx_upgrades_total and downstream_cx_upgrades_active.
Handling H2 hops
^^^^^^^^^^^^^^^^
-Envoy currently has an alpha implementation of tunneling websockets over H2 streams for deployments
-that prefer a uniform H2 mesh throughout, for example, for a deployment of the form:
+Envoy supports tunneling WebSockets over H2 streams for deployments that prefer a uniform
+H2 mesh throughout; this enables, for example, a deployment of the form:
[Client] ---- HTTP/1.1 ---- [Front Envoy] ---- HTTP/2 ---- [Sidecar Envoy ---- H1 ---- App]
@@ -48,7 +48,7 @@ In this case, if a client is for example using WebSocket, we want the Websocket
upstream server functionally intact, which means it needs to traverse the HTTP/2 hop.
This is accomplished via
-`extended CONNECT