73 changes: 60 additions & 13 deletions api/envoy/config/filter/network/tcp_proxy/v2/tcp_proxy.proto
@@ -8,6 +8,7 @@ import "envoy/api/v2/core/address.proto";
import "envoy/api/v2/core/base.proto";

import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/wrappers.proto";

import "validate/validate.proto";
@@ -21,19 +22,65 @@ message TcpProxy {
// <config_network_filters_tcp_proxy_stats>`.
string stat_prefix = 1 [(validate.rules).string.min_bytes = 1];

// The upstream cluster to connect to.
//
// .. note::
//
// Once full filter chain matching is implemented in listeners, this field will become the only
// way to configure the target cluster. All other matching will be done via :ref:`filter chain
// matching rules <envoy_api_msg_listener.FilterChainMatch>`. For very simple configurations,
// this field can still be used to select the cluster when no other matching rules are required.
// Otherwise, a :ref:`deprecated_v1
// <envoy_api_field_config.filter.network.tcp_proxy.v2.TcpProxy.deprecated_v1>` configuration is
// required to use more complex routing in the interim.
//
string cluster = 2;
// [#not-implemented-hide:] Weighted clusters allow for specification of
// multiple upstream clusters along with weights that indicate the
// percentage of traffic to be forwarded to each cluster. TCP Proxy
// selects an upstream cluster based on the weights.
message WeightedCluster {
message ClusterWeight {
// Name of the upstream cluster. The cluster must exist in the
// :ref:`cluster manager configuration <config_cluster_manager>`.
string name = 1 [(validate.rules).string.min_bytes = 1];

// An integer between 0 and :ref:`total_weight
// <envoy_api_field_route.WeightedCluster.total_weight>`. The choice of an upstream
Member: nit: envoy_api_field_route.WeightedCluster.total_weight -> envoy_api_field_config.filter.network.tcp_proxy.v2.TcpProxy.WeightedCluster.total_weight.

// cluster is determined by its weight. The sum of weights across all entries in the
// clusters array must add up to the total_weight, which defaults to 100.
google.protobuf.UInt32Value weight = 2;

// [#not-implemented-hide:] Optional endpoint metadata match
// criteria. Only endpoints in the upstream cluster with metadata
// matching that set in metadata_match will be considered. The filter
// name should be specified as *envoy.lb*.
envoy.api.v2.core.Metadata metadata_match = 3;

// [#not-implemented-hide:] The per_filter_config field can be used
// to provide weighted cluster-specific configurations for
// filters. The key should match the filter name, such as
// *envoy.buffer* for the HTTP buffer filter. Use of this field is
// filter specific; see the :ref:`HTTP filter documentation
// <config_http_filters>` for if and how it is utilized.
map<string, google.protobuf.Struct> per_filter_config = 4;
}

// Specifies one or more upstream clusters associated with the TCP proxy.
repeated ClusterWeight clusters = 1 [(validate.rules).repeated .min_items = 1];

// Specifies the total weight across all clusters. The sum of all cluster
// weights must equal this value, which must be greater than 0. Defaults
// to 100.
google.protobuf.UInt32Value total_weight = 3 [(validate.rules).uint32.gte = 1];
Member: Do we need total_weight, or can it be computed from the clusters definition?

Member (author): I think this is to allow people to blackhole a certain portion of traffic. I copied this verbatim from HTTP so that we can maintain the same semantics for HTTP, Thrift, and TCP.

Member (@venilnoronha, Sep 17, 2018): Got it. Just FYI, it's currently determined from the clusters definition for Thrift:

    // weight. The sum of weights across all entries in the clusters array determines the total
    // weight.
    google.protobuf.UInt32Value weight = 2 [(validate.rules).uint32.gte = 1];

Member (author): Oh bummer. Why have different weighted clusters! I will leave it up to you.

Member: I think it makes sense to allow for blackholing.

Member: But this is documented as "the sum of all cluster weights must equal this value". So there's no use for blackholing then, right? It seems like for blackholing, you can create a cluster with no hosts (and even name it "blackhole"), and give it a specific weight.

Member: That feels like a better approach to me. Thanks.

Member: nit: total_weight = 3 -> total_weight = 2.

}
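For illustration, the WeightedCluster message defined above might be configured as in the following YAML sketch. This is a hypothetical example only: the field is marked [#not-implemented-hide:] in this PR, and the cluster names are made up. It also shows the empty "blackhole" cluster idea from the review thread.

```yaml
# Hypothetical tcp_proxy config using weighted_clusters (not yet
# implemented at the time of this PR). Per the proto comments, the
# cluster weights must sum to total_weight, which defaults to 100.
name: envoy.tcp_proxy
config:
  stat_prefix: ingress_tcp
  weighted_clusters:
    total_weight: 100
    clusters:
    - name: cluster_a        # assumed cluster name
      weight: 90
    - name: blackhole        # a cluster with no hosts, as suggested in the
      weight: 10             # review, drops this share of traffic
```

With this split, roughly 90% of connections would be routed to cluster_a and 10% to the hostless "blackhole" cluster, which terminates them.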

oneof cluster_specifier {
option (validate.required) = true;

// The upstream cluster to connect to.
string cluster = 2 [(validate.rules).string.min_bytes = 1];

// [#not-implemented-hide:] Envoy will determine the cluster to route
// to based on the SNI presented by the client during the TLS
// handshake. If the SNI is empty (as is the case for plaintext TCP
// connections) or the referenced cluster does not exist, Envoy will
// terminate the connection.
google.protobuf.BoolValue use_sni_as_cluster = 10;
Member: I think we should do this with a more generic approach. We had a discussion last week about allowing per-connection metadata, which filters could set and consume. I think we should have tcp_proxy consume something that is a cluster override value, and write a separate filter that sets this value based on SNI. Then we can support arbitrary routing of this type without putting it all hard-coded into tcp_proxy.

Member: Also, in #4357, it was suggested to disallow cluster names with '.' in them, which would prevent a cluster from being named with the SNI value.

Member (author): Is the per-connection metadata the same as the request info stuff that went into the HTTP connection manager?

Member: Discussion was here: #4331

Member (author): Ah yes. But setting the cluster from another filter means the TCP proxy can no longer be the terminating filter in the network stack. We would need an HTTP-style router as the final filter, and it's unclear how that would work with things like Redis.

Member (author): I don't understand. How will the filter set the cluster that tcp_proxy should use to route upstream? The metadata stuff works for logging purposes, not for changing the route that tcp_proxy is supposed to take.

Member: Take a look at https://github.com/envoyproxy/envoy/blob/master/include/envoy/request_info/filter_state.h, and the discussion in #4331 and #4100. It allows setting arbitrary data to communicate between filters. For this case, we would say that a specific metadata value, maybe "TcpProxy.cluster", will contain a cluster name to route to. And if it's not present, use the one in the TcpProxy config. So then any earlier filter could set "TcpProxy.cluster" to an appropriate value.

Member (author): Yes, I know about that request info stuff. But I would much rather opt for more explicit semantics than an implicit "if not set, use the cluster in tcp_proxy". It makes it hard to debug, and the configuration at first sight doesn't tell you what mode Envoy is using. Put another way, we might as well make the cluster field optional so that whatever filter I add earlier could set the cluster, allowing me to leave the cluster field in tcp_proxy empty. Even then, it feels a bit awkward.

Member: Yeah, I agree it should be allowed to leave the cluster empty. What I'm trying to avoid is everyone putting their own special routing rules hard-coded into tcp_proxy, especially when it adds opaque requirements on other parts of the configuration (like cluster names must match SNI values). Another approach we could think about is adding a hook for a Lua script to run and select the cluster to use.

Member (author): I am okay with the separate filter. I just don't want to implicitly pull and override a user-specified config in tcp_proxy.

Member: Let's use server_names instead of sni here, to be consistent with what FilterChainMatch has, and put SNI only where it is TLS-specific. Also, I suggest putting a strong warning here that this is pretty dangerous when used without specifying server_names in FilterChainMatch and/or an RBAC filter in front of it. It would allow a client to request an arbitrary cluster configured in Envoy (e.g., the Secret Discovery Service).

Member (author): Sure. See Greg's comments. Also, this behavior is already present today with the cluster_header option in HTTP, and also the original destination IP from the headers. So we might want to define a separate thing that marks one or more clusters as private or not addressable from filters.
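The alternative the reviewers lean toward above, matching SNI via FilterChainMatch server_names with a fixed cluster per filter chain instead of use_sni_as_cluster, can be sketched as follows. The cluster names and stat prefixes are assumptions for illustration:

```yaml
# Sketch of SNI-based routing using listener filter chain matching,
# rather than the proposed use_sni_as_cluster field. Each SNI value is
# mapped explicitly to a cluster, avoiding the "cluster name must equal
# SNI" coupling the reviewers object to.
filter_chains:
- filter_chain_match:
    server_names: ["example.com"]
  filters:
  - name: envoy.tcp_proxy
    config:
      stat_prefix: tcp_example
      cluster: example_cluster    # assumed cluster name
- filter_chain_match:
    server_names: ["other.example.net"]
  filters:
  - name: envoy.tcp_proxy
    config:
      stat_prefix: tcp_other
      cluster: other_cluster      # assumed cluster name
```

Note that for server_names matching to see the SNI, the listener generally needs the TLS inspector listener filter enabled so the handshake can be sniffed; connections whose SNI matches no filter chain are rejected rather than routed to an arbitrary cluster.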


// [#not-implemented-hide:] Multiple upstream clusters can be specified for a
// given TCP proxy. The connection will be routed to one of the upstream clusters
// based on weights assigned to each cluster, similar to HTTP
// :ref:`traffic splitting <config_http_conn_man_route_table_traffic_splitting_split>`.
WeightedCluster weighted_clusters = 11;
}

// Optional endpoint metadata match criteria. Only endpoints in the upstream
// cluster with metadata matching that set in metadata_match will be
Expand Down