
Add update queue to endpoint translator #11491

Merged: @adleong merged 7 commits into main from alex/queuwu on Oct 18, 2023

Conversation

@adleong (Member) commented Oct 16, 2023

When a gRPC client of the `destination.Get` API initiates a request but then doesn't read off of that stream, the HTTP/2 stream flow control window will fill up and eventually exert backpressure on the destination controller. This manifests as calls to `Send` on the stream blocking. Since `Send` is called synchronously from the client-go informer callback (by way of the endpoint translator), this blocks the informer callback and prevents all further informer callbacks from firing. This causes the destination controller to stop sending updates to any of its clients.

We add a queue in the endpoint translator so that when it gets an update from the informer callback, that update is queued and we avoid potentially blocking the informer callback. Each endpoint translator spawns a goroutine to process this queue and call `Send`. If there is no capacity in this queue (e.g. because a client has stopped reading and we are experiencing backpressure), then we terminate the stream.
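As a sketch of this pattern (not the PR's actual code; the names `update`, `endpointTranslator`, `enqueueUpdate`, and `processUpdates` below are illustrative stand-ins), the non-blocking enqueue plus a dedicated sender goroutine might look like this:

```go
package main

import (
	"fmt"
	"time"
)

// update stands in for a destination API message; the concrete type in the
// real controller differs.
type update struct{ msg string }

// endpointTranslator holds a bounded queue of updates and a channel that is
// closed when the stream should be torn down.
type endpointTranslator struct {
	updates   chan *update
	endStream chan struct{}
}

func newEndpointTranslator(capacity int) *endpointTranslator {
	return &endpointTranslator{
		updates:   make(chan *update, capacity),
		endStream: make(chan struct{}),
	}
}

// enqueueUpdate is what the informer callback invokes. It never blocks:
// either the update fits in the queue, or the stream is terminated.
// (This sketch assumes calls are serialized, as informer callbacks are,
// so close is not reached twice.)
func (et *endpointTranslator) enqueueUpdate(u *update) {
	select {
	case et.updates <- u:
		// Queued; the informer callback returns immediately.
	case <-et.endStream:
		// The endStream channel has already been closed so no action is
		// necessary.
	default:
		fmt.Println("endpoint update queue full; ending stream")
		close(et.endStream)
	}
}

// processUpdates runs in a dedicated goroutine per translator and is the
// only caller of Send, so a slow client can only block this goroutine.
func (et *endpointTranslator) processUpdates(send func(*update) error) {
	for {
		select {
		case u := <-et.updates:
			if err := send(u); err != nil {
				fmt.Printf("failed to send update: %s\n", err)
				return
			}
		case <-et.endStream:
			return
		}
	}
}

func main() {
	et := newEndpointTranslator(2)
	go et.processUpdates(func(u *update) error {
		time.Sleep(50 * time.Millisecond) // simulate a slow or stalled client
		fmt.Println("sent:", u.msg)
		return nil
	})
	// A burst of updates overruns the capacity-2 queue and ends the stream
	// instead of blocking the (simulated) informer callback.
	for i := 0; i < 5; i++ {
		et.enqueueUpdate(&update{msg: fmt.Sprintf("update %d", i)})
	}
	time.Sleep(300 * time.Millisecond)
}
```

The key design choice is that the informer callback only ever performs a non-blocking channel send: a stalled client can fill its own queue and lose its stream, but it can never stall updates to other clients.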

@adleong marked this pull request as ready for review October 17, 2023 20:49
@adleong requested a review from a team as a code owner October 17, 2023 20:49
@adleong changed the title from [DNM] Add update queue to endpoint translator to Add update queue to endpoint translator Oct 18, 2023
```go
		// The endStream channel has already been closed so no action is
		// necessary.
	default:
		et.log.Error("endpoint update queue full; ending stream")
```
Contributor


What information do we have about the stream here? Can we log additional diagnostic information, e.g. the address of the proxy?

Can we also explain what we expect to happen when we end the stream? E.g. is this something the user needs to come to us with an issue about? Or do they just need to know that the proxy should reconnect organically?
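For illustration, the kind of diagnostic context being asked for might look like the following logrus-style sketch; the field names and values here are hypothetical placeholders, not what the PR actually logs:

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// Hypothetical stand-ins for the stream's diagnostic context.
	remoteAddr := "10.42.0.15:4143"              // address of the client proxy
	service := "web.emojivoto.svc.cluster.local" // the watched destination

	// Attaching structured fields makes the "queue full" error traceable
	// to a specific client and watch.
	log.WithFields(log.Fields{
		"remote":  remoteAddr,
		"service": service,
	}).Error("endpoint update queue full; ending stream")
}
```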


```go
	et.log.Debugf("Sending destination no endpoints: %+v", u)
	if err := et.stream.Send(u); err != nil {
		et.log.Debugf("Failed to send address update: %s", err)
```
Contributor


I know this is previous code that we've moved, but why would an error be logged at Debug level?

Member


We expect to see this every time a client proxy restarts/closes a stream.

@adleong merged commit 357a1d3 into main Oct 18, 2023
34 checks passed
@adleong deleted the alex/queuwu branch October 18, 2023 19:34
mateiidavid pushed a commit that referenced this pull request Oct 26, 2023
When a gRPC client of the `destination.Get` API initiates a request but then doesn't read off of that stream, the HTTP/2 stream flow control window will fill up and eventually exert backpressure on the destination controller.  This manifests as calls to `Send` on the stream blocking.  Since `Send` is called synchronously from the client-go informer callback (by way of the endpoint translator), this blocks the informer callback and prevents all further informer callbacks from firing.  This causes the destination controller to stop sending updates to any of its clients.

We add a queue in the endpoint translator so that when it gets an update from the informer callback, that update is queued and we avoid potentially blocking the informer callback.  Each endpoint translator spawns a goroutine to process this queue and call `Send`.  If there is no capacity in this queue (e.g. because a client has stopped reading and we are experiencing backpressure), then we terminate the stream.

Signed-off-by: Alex Leong <[email protected]>
@mateiidavid mentioned this pull request Oct 26, 2023
alpeb pushed a commit that referenced this pull request Nov 9, 2023
#11491 changed the EndpointTranslator to use a queue to avoid calling `Send` on a gRPC stream directly from an informer callback goroutine.  This change updates the ProfileTranslator in the same way, adding a queue to ensure we do not block the informer thread.

Signed-off-by: Alex Leong <[email protected]>
alpeb added a commit that referenced this pull request Nov 9, 2023
## edge-23.11.2

This edge release contains observability improvements and bug fixes to the
Destination controller, and a refinement to the multicluster gateway resolution
logic.

* Fixed an issue where the Destination controller could stop processing service
  profile updates if a proxy subscribed to those updates stopped reading them;
  this is a followup to the issue [#11491] fixed in [edge-23.10.3] ([#11546])
* In the Destination controller, added informer lag histogram metrics to track
  whenever the Kubernetes objects watched by the controller are falling behind
  the state in the kube-apiserver ([#11534])
* In the multicluster service mirror, extended the target gateway resolution
  logic to take into account all the possible IPs a hostname might resolve to,
  rather than just the first one (thanks @MrFreezeex!) ([#11499])
* Added probes to the debug container to appease environments requiring probes
  for all containers ([#11308])

[edge-23.10.3]: https://github.com/linkerd/linkerd2/releases/tag/edge-23.10.3
[#11546]: #11546
[#11534]: #11534
[#11499]: #11499
[#11308]: #11308
adleong added a commit that referenced this pull request Nov 16, 2023
#11491 changed the EndpointTranslator to use a queue to avoid calling `Send` on a gRPC stream directly from an informer callback goroutine.  This change updates the ProfileTranslator in the same way, adding a queue to ensure we do not block the informer thread.

Signed-off-by: Alex Leong <[email protected]>
@adleong mentioned this pull request Nov 16, 2023
adleong added a commit that referenced this pull request Nov 16, 2023
This stable release improves observability for the control plane by adding
additional logging to the destination controller and by adding histograms which
can detect Kubernetes informer lag. It also adds the ability to configure
protocol detection.

* Improved logging in the destination controller by adding the client pod's
  name to the logging context. This will improve visibility into the messages
  sent and received by the control plane from a specific proxy ([#11532])
* helm: Introduce configurable values for protocol detection ([#11536])
* Fixed an issue where the Destination controller could stop processing service
  profile updates if a proxy subscribed to those updates stopped reading them;
  this is a followup to the issue [#11491] fixed in [stable-2.14.2] ([#11546])
* In the Destination controller, added informer lag histogram metrics to track
  whenever the Kubernetes objects watched by the controller are falling behind
  the state in the kube-apiserver ([#11534])
* proxy: Fix grpc_status metric labels for inbound traffic

[stable-2.14.2]: https://github.com/linkerd/linkerd2/releases/tag/stable-2.14.2
[#11532]: #11532
[#11536]: #11536
[#11546]: #11546
[#11534]: #11534

---------

Signed-off-by: Matei David <[email protected]>
Signed-off-by: Alex Leong <[email protected]>
Co-authored-by: Matei David <[email protected]>