
Do not stop retrying based on earlier good message from the stream #313

Merged
1 commit merged into grpc-ecosystem:master on Apr 23, 2021

Conversation

kartlee
Contributor

@kartlee commented Jul 8, 2020

  • Usually, the pattern is to establish a stream and read messages in a loop until io.EOF. If the server becomes unavailable after sending a good message on the stream, we stop retrying in the middleware because of that earlier good message. This forces the caller to check for codes.Unavailable, re-establish the stream, and read again. The patch handles this case inside the middleware.

- Usually the pattern is to establish a stream and read messages
in a loop until io.EOF. If the server becomes unavailable after
sending a good message in the stream, the client has to retry from
establishing the stream and handle the backoff logic.
@johanbrandhorst
Collaborator

I'm not convinced this is as straightforward a change as it looks. Why was this logic added if we can just remove it? Could you please add some tests that exhibit the issue and show why this change is necessary?

@kartlee
Contributor Author

kartlee commented Jul 9, 2020

Sure, I will look at the existing test framework and try to add a test. I looked at the history of that line of code and didn't find an explanation of why wasGood was added in the first place.

  1. The client gets a server stream.
  2. It reads messages from the stream in a loop until an error is received.
  3. The server sends a good message and restarts.
  4. The client comes out of the loop without retrying with backoff to wait for the server to come back, because a good message was already received.

-Karthik

@bwplotka
Collaborator

But it's a good point to take a look at what's going on here. The problem is that even if this logic was required, there were no tests for it (CI is green), so it might be a good idea to revisit it.

Also, do you mind making the same change on the v2 branch? @kartlee


@bwplotka left a comment


I agree with @johanbrandhorst that this might not be trivial, but having something in the code that we don't know the reason for (plus no tests and no comment for it)... I would say let's remove it, and if it was important we will find out. Let's make sure we have that in the changelog and in v2.

Without knowing more I need to agree with this PR ;p

@kartlee
Contributor Author

kartlee commented Aug 2, 2020

> But it's a good point to take a look at what's going on here. The problem is that even if this logic was required, there were no tests for it (CI is green), so it might be a good idea to revisit it.
>
> Also, do you mind making the same change on the v2 branch? @kartlee

For v2 -> #323

Thanks for approving the change to master.

@bwplotka merged commit 3d8607d into grpc-ecosystem:master Apr 23, 2021
@bwplotka
Collaborator

Let's make an exception here; normally we should slowly stop updating v1.

@bwplotka
Collaborator

Anyway, thanks for both changes!

@renovate bot mentioned this pull request Nov 28, 2023
ggilmore added a commit to sourcegraph/sourcegraph-public-snapshot that referenced this pull request Dec 22, 2023
…e already received a message on the stream (#59145)

When retrying a client stream, we must ensure that we haven't received any data from the server yet before retrying. Otherwise, we can't know whether the client has already consumed part of the stream. Blindly retrying the stream could produce duplicate or inconsistent messages. The only safe generic behavior that we can implement is to retry only if an error occurs _before_ the server successfully sends the first message. After that, any errors that we see on the stream are returned directly to the caller; no retries occur. Only the caller knows the retry semantics that it wants.


This matches the built-in grpc retry behavior (that we can't use, see https://github.com/sourcegraph/sourcegraph/issues/51060) as documented on https://learn.microsoft.com/en-us/aspnet/core/grpc/retries?view=aspnetcore-8.0#when-retries-are-valid:

> Streaming calls
> 
> Streaming calls can be used with gRPC retries, but there are important considerations when they are used together:
> 
> Server streaming, bidirectional streaming: **Streaming RPCs that return multiple messages from the server won't retry after the first message has been received. Apps must add additional logic to manually re-establish server and bidirectional streaming calls.**


As a side note: The upstream library had this behavior back in 2021 (and the discussion is a bit baffling to me): grpc-ecosystem/go-grpc-middleware#313

## Test plan

This PR adds two additional tests to the test suite that ensure that:

1. The library is capable of retrying the RPC if we haven't received the first message in the stream yet
2. The library will **not automatically retry** if the first message from the server has already been received
ggilmore added a commit to sourcegraph/sourcegraph-public-snapshot that referenced this pull request Jan 8, 2024
…e already received a message on the stream (#59145)

DaedalusG pushed a commit to sourcegraph/sourcegraph-public-snapshot that referenced this pull request Jan 9, 2024
* grpc: create stub retry utilities (#59095)

This PR adds a basic configuration for enabling retries with gRPC  for certain RPC types. 

The description for `defaults.RetryPolicy` is probably the most important thing to read:

```go
// RetryPolicy is the default retry policy for internal GRPC requests.
//
// The retry policy will trigger on Unavailable and ResourceExhausted status errors, and will retry up to 20 times using an
// exponential backoff policy with a maximum duration of 3s in between retries.
//
// Only Unary (1:1) and ServerStreaming (1:N) requests are retried. All other types of requests will immediately
// return an Unimplemented status error. It's up to the caller to manually retry these requests.
//
// These defaults can be overridden with the following environment variables:
// - SRC_GRPC_RETRY_DELAY_BASE: Base retry delay duration for internal GRPC requests
// - SRC_GRPC_RETRY_MAX_ATTEMPTS: Max retry attempts for internal GRPC requests
// - SRC_GRPC_RETRY_MAX_DURATION: Max retry duration for internal GRPC requests
var RetryPolicy = []grpc.CallOption{
	retry.WithCodes(codes.Unavailable, codes.ResourceExhausted),
	// Together with the default options, the maximum delay will behave like this:
	// Retry# Delay
	// 1	0.05s
	// 2	0.1s
	// 3	0.2s
	// 4	0.4s
	// 5	0.8s
	// 6	1.6s
	// 7	3.0s
	// 8	3.0s
	// ...
	// 20	3.0s
	retry.WithMax(uint(internalRetryMaxAttempts)),
	retry.WithBackoff(fullJitter(internalRetryDelayBase, internalRetryMaxDuration)),
}
```

This is off by default for all services (since this logic doesn't work for all RPC types, and might not be desirable as the default behavior if you don't know whether or not your method is idempotent).

 The upstack PRs selectively enable this logic for appropriate RPCs (see those PRs for the exact semantics). 

## Test plan

CI

* grpc: retry: import fork of go-grpc-middleware/retry package (#59140)

The package has some issues (the retry logic for client streams is flawed). I'm adding a copy of it to our repository for future edits.

See the discussion in https://github.com/sourcegraph/sourcegraph/pull/59145

## Test plan

The existing test suite from the copied project is now running in CI.

* grpc: forked retry package: force streaming retries to fail if we have already received a message on the stream (#59145)


* grpc: defaults: switch defaults package to use custom retry fork (#59146)



## Test plan


* grpc: retry: add sourcegraph tracing support (#59191)

This tweaks our forked [grpc retry](https://pkg.go.dev/github.com/grpc-ecosystem/go-grpc-middleware/retry) package to support traces in a similar manner to our internal httpcli logic. 

When reviewing this PR, I'd recommend comparing this against the logic in `internal/httpcli` to see if it's to your liking: https://github.com/sourcegraph/sourcegraph/blob/023e96c2fc25ced65c528be2474b5fd1f9a34792/internal/httpcli/client.go#L582-L631

## Test plan

1. (pre-requisite) I checked out https://github.com/sourcegraph/sourcegraph/pull/59136 (`12-20-grpc_frontend_configuration_support_automatic_retries_GetConfig_is_idempotent_`) that is the PR that has retries hooked up for all services.

2.  In `sg.config.yaml`, I commented out the entry that starts one of the gitserver instances when running `sg start`.

```patch
diff --git a/sg.config.yaml b/sg.config.yaml
index 312e5ebfbdc..eb0eef61193 100644
--- a/sg.config.yaml
+++ b/sg.config.yaml
@@ -1106,7 +1106,7 @@ commandsets:
       - repo-updater
       - web
       - gitserver-0
-      - gitserver-1
+#      - gitserver-1
       - searcher
       - caddy
       - symbols
```

3. I then ran `sg start` and `sg start monitoring` to start jaeger.

4. I executed the following search query with tracing enabled: https://sourcegraph.test:3443/search?q=context:global+type:diff+test+timeout:2m+count:all&patternType=standard&sm=1&trace=1&groupBy=repo


This produces a trace with entries that look like the following

<img width="1713" alt="Screenshot 2023-12-21 at 4 32 42 PM" src="https://github.com/sourcegraph/sourcegraph/assets/9022011/ec7e2c48-602c-4537-b27a-e9490105b384">

You can see the full trace here: [gh_trace.json](https://github.com/sourcegraph/sourcegraph/files/13747118/gh_trace.json)

* grpc: gitserver: add automatic retries for idempotent methods (#59107)

This PR adds support for automatic retries in the gitserver grpc client.

I have gone through the gitserver protobuf file and marked all the methods I thought were idempotent (we can't inspect this using the go protobuf packages, but I thought this was nice for documentation).

I then wrapped the basic gitserver grpc client with an "automaticRetryClient" that uses the default retry policy that was defined in https://github.com/sourcegraph/sourcegraph/pull/59095. See that PR for more details.

Note that for ServerStreaming methods like Exec and Search, the retry logic will only automatically retry if we haven't received any messages back from the server yet.

After we receive a single message, we can't know whether the caller has already consumed the message (e.g., started consuming the `io.Reader` from ArchiveReader) and whether it can tolerate receiving old messages, duplicated messages, etc. If we get an error after this point, we'll fail the RPC immediately and bubble up the underlying error to the caller. Only the caller knows the semantics of how it's consuming the stream and how to proceed.

CI

* grpc: symbols: add support for automatic retries (#59110)

This PR adds support for automatic retries in the symbols grpc client.

I have gone through the symbols protobuf file and marked all the methods I thought were idempotent (we can't inspect this using the go protobuf packages, but I thought this was nice for documentation).

I then wrapped the basic symbols grpc client with an "automaticRetryClient" that uses the default retry policy that was defined in https://github.com/sourcegraph/sourcegraph/pull/59095. See that PR for more details.

Note that for ServerStreaming methods like LocalCodeIntel and SymbolInfo, the retry logic will only automatically retry if we haven't received any messages back from the server yet.

After we receive a single message, we can't know whether the caller has already consumed the message (e.g., started aggregating the symbols from `LocalCodeIntel`) and whether it can tolerate receiving old messages, duplicated messages, etc. If we get an error after this point, we'll fail the RPC immediately and bubble up the underlying error to the caller. Only the caller knows the semantics of how it's consuming the stream and how to proceed.

CI


* grpc: searcher: add support for automatic retries (#59111)

This PR adds support for automatic retries in the searcher grpc client.

---

I have gone through the searcher protobuf file and marked all the methods I thought were idempotent (we can't inspect this using the go protobuf packages, but I thought this was nice for documentation).

I then wrapped the basic searcher grpc client with an "automaticRetryClient" that uses the default retry policy that was defined in https://github.com/sourcegraph/sourcegraph/pull/59095. See that PR for more details.

Note that for ServerStreaming methods like Search, the retry logic will only automatically retry if we haven't received any messages back from the server yet.

After we receive a single message, we can't know whether the caller has already consumed the message (e.g., started presenting the data from `Search` in the WebUI) and whether it can tolerate receiving old messages, duplicated messages, etc.

If we get an error after this point, we'll fail the RPC immediately and bubble up the underlying error to the caller. Only the caller knows the semantics of how it's consuming the stream and how to proceed.

CI

* grpc: repo-updater: add support for automatic retries for all methods (all are idempotent) (#59130)

This PR adds support for automatic retries in the repo-updater grpc client.

---

I have gone through the repo-updater protobuf file and marked all the methods I thought were idempotent (we can't inspect this using the go protobuf packages, but I thought this was nice for documentation).

I then wrapped the basic repo-updater grpc client with an "automaticRetryClient" that uses the default retry policy that was defined in https://github.com/sourcegraph/sourcegraph/pull/59095. See that PR for more details.

CI

* grpc: searcher: zoekt-webserver support automatic retries (#59133)

This PR adds support for automatic retries in the `zoekt-webserver` grpc client that `searcher` uses.

---

I wrapped the basic zoekt-webserver grpc client with an "automaticRetryClient" that uses the default retry policy that was defined in https://github.com/sourcegraph/sourcegraph/pull/59095. See that PR for more details. None of the methods have side effects, so they're all safe to retry.

Note that for ServerStreaming methods like StreamSearch and List, the retry logic will only automatically retry if we haven't received any messages back from the server yet. 

After we receive a single message, we can't know whether the caller has already consumed the message (e.g., started consuming the search results from `StreamSearch` and displaying them in the WebUI) and whether it can tolerate receiving old messages, duplicated messages, etc. If we get an error after this point, we'll fail the RPC immediately and bubble up the underlying error to the caller. Only the caller knows the semantics of how it's consuming the stream and how to proceed.

## Test plan

CI

* grpc: frontend: configuration: support automatic retries (GetConfig is idempotent) (#59136)

This PR adds support for automatic retries in the frontend configuration grpc client. 

I have gone through the frontend protobuf file and marked all the methods I thought were idempotent (we can't inspect this using the go protobuf packages, but I thought this was nice for documentation). 

I then wrapped the basic frontend grpc client with an "automaticRetryClient" that uses the default retry policy that was defined in https://github.com/sourcegraph/sourcegraph/pull/59095. See that PR for more details.  All the methods are idempotent, so they all get the new retry logic. 

## Test plan

CI

* format gitserver proto

* changelog