Add update queue to endpoint translator #11491
Conversation
	// The endStream channel has already been closed so no action is
	// necessary.
default:
	et.log.Error("endpoint update queue full; ending stream")
What information do we have about the stream here? Can we log additional diagnostic information? E.g. the address of the proxy.
Can we also explain what we expect to happen when we end the stream? E.g. is this something the user should open an issue with us about, or do they just need to know that the proxy will reconnect on its own?
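For illustration only, here is a minimal sketch of what a more informative queue-full log could look like, assuming the translator keeps the subscribing proxy's address and the watched service around. The field names, the `logrus` usage, and the package name are assumptions, not the actual EndpointTranslator code:

```go
// Package sketch is a hypothetical stand-in; the real code lives elsewhere.
package sketch

import (
	log "github.com/sirupsen/logrus"
)

// endpointTranslator is a stand-in type. remoteAddr and service are assumed
// fields used only to show richer log context.
type endpointTranslator struct {
	log        *log.Entry
	remoteAddr string
	service    string
	updates    chan interface{}
	endStream  chan struct{}
}

// enqueueUpdate sketches how the queue-full branch could log the proxy's
// address and the watched service, and spell out that the proxy is expected
// to reconnect, so operators know the message is informational.
func (et *endpointTranslator) enqueueUpdate(u interface{}) {
	select {
	case et.updates <- u:
	case <-et.endStream:
		// Stream already ended; nothing to do.
	default:
		et.log.WithFields(log.Fields{
			"remote":  et.remoteAddr,
			"service": et.service,
		}).Error("endpoint update queue full; aborting stream (the proxy is expected to reconnect)")
		close(et.endStream)
	}
}
```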
et.log.Debugf("Sending destination no endpoints: %+v", u)
if err := et.stream.Send(u); err != nil {
	et.log.Debugf("Failed to send address update: %s", err)
I know this is pre-existing code that we've moved, but why would an error be logged at Debug level?
We expect to see this every time a client proxy restarts/closes a stream.
When a gRPC client of the destination.Get API initiates a request but then doesn't read off of that stream, the HTTP/2 stream flow-control window fills up and eventually exerts backpressure on the destination controller. This manifests as calls to `Send` on the stream blocking. Since `Send` is called synchronously from the client-go informer callback (by way of the endpoint translator), this blocks the informer callback and prevents all further informer callbacks from firing, which causes the destination controller to stop sending updates to any of its clients.

We add a queue in the endpoint translator so that when it gets an update from the informer callback, that update is queued and we avoid potentially blocking the callback. Each endpoint translator spawns a goroutine to process this queue and call `Send`. If there is no capacity in this queue (e.g. because a client has stopped reading and we are experiencing backpressure) then we terminate the stream.

Signed-off-by: Alex Leong <[email protected]>
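A minimal Go sketch of this pattern follows. It is illustrative only, not the actual linkerd code; the package name, type names, queue capacity, and the `sendStream` interface are assumptions:

```go
// Package translator is a hypothetical package name used for this sketch.
package translator

const updateQueueCapacity = 100 // illustrative; the real capacity may differ

// update stands in for the destination update message sent to clients.
type update struct{ msg string }

// sendStream stands in for the gRPC destination.Get server stream.
type sendStream interface {
	Send(*update) error
}

type translator struct {
	stream    sendStream
	updates   chan *update
	endStream chan struct{}
}

func newTranslator(stream sendStream) *translator {
	return &translator{
		stream:    stream,
		updates:   make(chan *update, updateQueueCapacity),
		endStream: make(chan struct{}),
	}
}

// Start spawns the goroutine that drains the queue, so Send never runs on
// the shared informer callback goroutine.
func (t *translator) Start() {
	go func() {
		for {
			select {
			case u := <-t.updates:
				if err := t.stream.Send(u); err != nil {
					return // the client went away; this stream is done
				}
			case <-t.endStream:
				return
			}
		}
	}()
}

// enqueue is called from the informer callback. It never blocks: when the
// queue is full (a client has stopped reading and backpressure has built
// up), the stream is ended instead.
func (t *translator) enqueue(u *update) {
	select {
	case t.updates <- u:
	case <-t.endStream:
		// Stream already ended; drop the update.
	default:
		close(t.endStream)
	}
}
```

The key property is that `enqueue` never blocks: a client that stops reading fills only its own queue and loses only its own stream, while updates keep flowing to every other client.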
#11491 changed the EndpointTranslator to use a queue to avoid calling `Send` on a gRPC stream directly from an informer callback goroutine. This change updates the ProfileTranslator in the same way, adding a queue to ensure we do not block the informer thread. Signed-off-by: Alex Leong <[email protected]>
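To make the shared behavior concrete, here is a hypothetical test in the same sketch package as above (all names are assumptions, not the real ProfileTranslator API): enqueueing into a full queue must return immediately and end the stream rather than block the informer goroutine.

```go
package translator

import (
	"testing"
	"time"
)

func TestEnqueueDoesNotBlockWhenQueueIsFull(t *testing.T) {
	tr := &translator{
		updates:   make(chan *update, 1),
		endStream: make(chan struct{}),
	}
	tr.enqueue(&update{msg: "first"}) // fills the queue (no drain goroutine running)

	done := make(chan struct{})
	go func() {
		tr.enqueue(&update{msg: "second"}) // queue full: should end the stream, not block
		close(done)
	}()

	select {
	case <-done:
	case <-time.After(time.Second):
		t.Fatal("enqueue blocked on a full queue")
	}

	select {
	case <-tr.endStream:
	default:
		t.Fatal("expected the stream to be ended")
	}
}
```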
## edge-23.11.2

This edge release contains observability improvements and bug fixes to the Destination controller, and a refinement to the multicluster gateway resolution logic.

* Fixed an issue where the Destination controller could stop processing service profile updates if a proxy subscribed to those updates stops reading them; this is a follow-up to the issue [#11491] fixed in [edge-23.10.3] ([#11546])
* In the Destination controller, added informer lag histogram metrics to track whenever the Kubernetes objects watched by the controller are falling behind the state in the kube-apiserver ([#11534])
* In the multicluster service mirror, extended the target gateway resolution logic to take into account all the possible IPs a hostname might resolve to, rather than just the first one (thanks @MrFreezeex!) ([#11499])
* Added probes to the debug container to appease environments requiring probes for all containers ([#11308])

[edge-23.10.3]: https://github.com/linkerd/linkerd2/releases/tag/edge-23.10.3
[#11546]: #11546
[#11534]: #11534
[#11499]: #11499
[#11308]: #11308
This stable release improves observability for the control plane by adding additional logging to the destination controller and by adding histograms which can detect Kubernetes informer lag. It also adds the ability to configure protocol detection.

* Improved logging in the destination controller by adding the client pod's name to the logging context. This will improve visibility into the messages sent and received by the control plane from a specific proxy ([#11532])
* helm: Introduce configurable values for protocol detection ([#11536])
* Fixed an issue where the Destination controller could stop processing service profile updates if a proxy subscribed to those updates stops reading them; this is a follow-up to the issue [#11491] fixed in [stable-2.14.2] ([#11546])
* In the Destination controller, added informer lag histogram metrics to track whenever the Kubernetes objects watched by the controller are falling behind the state in the kube-apiserver ([#11534])
* proxy: Fix grpc_status metric labels for inbound traffic

[stable-2.14.2]: https://github.com/linkerd/linkerd2/releases/tag/stable-2.14.2
[#11532]: #11532
[#11536]: #11536
[#11546]: #11546
[#11534]: #11534

---------

Signed-off-by: Matei David <[email protected]>
Signed-off-by: Alex Leong <[email protected]>
Co-authored-by: Matei David <[email protected]>