x/net/http2: make Transport return nicer error when Amazon ALB hangs up mid-response? #18639
To add some color on this, what I'm wondering is whether Go HTTP clients are expected to handle the GOAWAY frames potentially returned from HTTP/2 servers, or whether the transport should somehow manage this case similarly to a remotely closed connection. Ideally our program would just re-try the request for a NO_ERROR+GOAWAY scenario. I've asked a related question on SO and golang-nuts. |
You don't provide enough information in this bug report. Are you importing golang.org/x/net/http2 at all, or only using Go's net/http package? This was fixed in December via golang/net@8dab929. Are you perhaps using an old version of golang.org/x/net/http2 directly? What HTTP method are these requests? If they're not idempotent requests, how are you creating them? /cc @tombergan |
@bradfitz We're using the standard net/http package. I miscommunicated the version information. We originally observed this using go 1.7 but since then I was able to reproduce it using go1.8 beta1. Yesterday I upgraded to go 1.8rc1 but I'm not certain if I rebuilt the test case. I can retry the test case with 1.8rc1 if that includes golang/net@8dab929. The call stack is roughly:
|
I was just able to observe the same error using a test binary built with go1.8 rc1. |
Can you capture the stderr output with GODEBUG=http2debug=2 set in your environment when it occurs? How "rough" is that "roughly" call stack? Is it always a GET with no body? |
@bradfitz the NewRequest() line is copied verbatim, the body is always nil. I can collect the stderr output later today or tomorrow. |
And where do you see this error? |
@bradfitz that is the error returned (and logged by us) after a specific call to http.Client.Do(). |
@bradfitz below is stderr with GODEBUG=http2debug=2. I removed some sensitive data, replaced with 'XXX'. Hopefully those changes won't interfere.
|
@tombergan, I have a theory. Can you sanity check me? Note that the error he's getting from RoundTrip is:
That error comes only from one place in the transport. My theory is that when we get the initial GOAWAY and decide to retry the request on a new conn, we're forgetting to remove those streams from the originally-chosen connection. Plausible? |
You may have described a bug, but I don't think that's happening in this scenario. I see only two requests in the connection: stream=1 and stream=3. This is the second request:
Note that the GOAWAY frame comes just after sending request HEADERS on stream=3. Note also that the GOAWAY frame has LastStreamID=3. We do not close stream=3 because the server claims it might still process it. We optimistically assume that we will get a response. We receive response HEADERS on stream=3 after the GOAWAY (6 seconds after the GOAWAY, in fact). We then receive a sequence of DATA frames. However, we never get END_STREAM. Instead, the server closes the connection. Their server is behaving legally ... I just double-checked the spec, and it's quite clear that the server is not required to process streams with id <= GOAWAY.LastStreamId; the server MAY process those streams only partially. There's not much we can do. We cannot retry the request after the connection closes because we've already received response headers and returned from RoundTrip. I have only two ideas, neither perfect:
|
He said that RoundTrip returned an error. If this trace is really about streamid 3 yet we saw this:
... then RoundTrip should've returned something (END_HEADERS is set), and it's the Body.Read that would've returned the GOAWAY error. So I still don't see a clear picture of what happened. |
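To make that distinction concrete, here is a minimal sketch of a caller that separates the two failure points; the URL is a placeholder, not anything from this report:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
)

func main() {
	// Placeholder endpoint; the real traffic went to an ALB-fronted HTTPS service.
	req, err := http.NewRequest("GET", "https://example.com/", nil) // nil body, as in the report
	if err != nil {
		panic(err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// An error here comes from RoundTrip: no response headers were returned.
		fmt.Println("Do error:", err)
		return
	}
	defer resp.Body.Close()

	// An error here means headers already arrived and the connection died
	// mid-body, which is the case being debated above.
	if _, err := io.Copy(ioutil.Discard, resp.Body); err != nil {
		fmt.Println("Body.Read error:", err)
	}
}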
I am stumped. I looked around for suspicious uses of that error path and didn't find anything obvious. @bfallik, can you triple-check that the error is coming from http.Client.Do() and not Response.Body.Read()? If it really is Do(), it would help to dump a stack trace where the transport constructs the GoAwayError:

+buf := make([]byte, 16<<10)
+buf = buf[:runtime.Stack(buf, true)]
+cc.vlogf("stack after run()\n:%s", string(buf))
 err = GoAwayError{

It would also help to add vlogs in processHeaders, here:

 // We'd get here if we canceled a request while the
 // server had its response still in flight. So if this
 // was just something we canceled, ignore it.
+cc.vlog("processHeaders could not find stream with id %d", f.StreamID)
 return nil

and here:

+cc.vlog("processHeaders got (nil, nil) from handleResponse")
 return nil // (nil, nil) special case. See handleResponse docs.

After making the above changes, could you run the program until it fails with that same "server sent goaway" error, then copy the stack dump here? Thanks! It is fine to remove the call frames running your code if you need to keep those private. |
@tombergan Hi. I don't have the rest of the logging you requested but I do believe that the error is propagated from Response.Body.Read() and not http.Client.Do(). I slightly modified my test harness to split apart those operations and the stack trace clearly originates from:
Do you still need that other debugging information or is there different info I can provide? Also, just to be clear, this is the code I used to reproduce the error:
I think this code is missing a |
Well, that changes everything :) We're back to my earlier comment.
No, I don't think that's related. I think the sequence of events is described by my linked comment above. |
So there's no bug here, then. Except maybe in the http2 server. Which server is this? The linked SO post suggests it's AWS? Is that a new implementation? Do we have any contacts there? |
@jeffbarr, it seems the AWS ALB's new HTTP/2 support is behaving oddly, not finishing streams after sending a GOAWAY. Who's the right person on the AWS side to look into this? Thanks! |
@bradfitz yes, the server is AWS Application Load Balancer. |
@bradfitz assuming there's no bug client side, do you recommend we explicitly catch the GOAWAY+NO_ERROR error and retry the request within our application logic? I'm still learning about HTTP/2 and so wasn't sure of the expected behavior, but now that seems like our only/best workaround until the server can be fixed. |
@bfallik, you could I suppose. I wouldn't make it specific to that error, though. Even though ALB shouldn't do that, you could just treat it as any other type of network error or interrupted request and retry N times, especially if it's just a GET request. In this case it's too late for the Go http client to retry since the headers have already been returned. If we did retry, it's possible the headers would be different, so we couldn't stitch the second response's body together with the first response's headers. Ideally ALB would be fixed, though. |
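For illustration, a minimal sketch of that kind of application-level retry, assuming the GET is idempotent and the body is small enough to buffer (the function name and retry policy are made up for the example; uses net/http and io/ioutil):

func fetchWithRetry(client *http.Client, url string, maxAttempts int) ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err // RoundTrip failed before headers arrived
			continue
		}
		body, err := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		if err == nil {
			return body, nil
		}
		// The mid-response hang-up discussed here typically surfaces as an
		// error from reading the body, after headers were already returned.
		lastErr = err
	}
	return nil, lastErr
}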
@bradfitz OK, thanks. |
@froodian, what do you want me to do? |
Is it safe to simply retry the request? Or no because it may have gone through and may not be idempotent? I suspect that the way I'll silence this at my layer is to retry any error where the message contains both "server sent GOAWAY and closed the connection" and "ErrCode=NO_ERROR" |
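A rough sketch of that check, assuming the strings package is imported (the function name is made up; the GOAWAY error type isn't exported through plain net/http, so matching the message text is about the only handle available):

func isGoAwayNoError(err error) bool {
	return err != nil &&
		strings.Contains(err.Error(), "server sent GOAWAY and closed the connection") &&
		strings.Contains(err.Error(), "ErrCode=NO_ERROR")
}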
@froodian, nope, what's happening is:
If ALB hadn't sent a response header then we could retry the request, but it's pretty weird for us to retry the request when we've already given the response headers to the user code. The only way to hide this from the caller would be to retry the request, hope for exactly the "same" response headers, and only if they "match", continue acting like the original res.Body (which the Go user code is already reading from) is part of the second retried request. But things like the server's Date header probably changed, so that at least needs to be ignored. What else? What if ALB had already returned some of the response body bytes, but not all? Do we need to keep a checksum of bytes read and stitch together the two bodies if the body is the same length and the second response's prefix bytes have the same checksum? That would all be super sketchy. It's better to just return an error, which we do. If the caller wants to retry, they can retry. Do you just want a better error message? What text would sound good to you? |
I see, yeah, thank you for that condensed write-up @bradfitz... that makes sense, and I agree it's correct from a client perspective not to retry when we've already been given response headers... I agree with @sj26 that a message like "unexpected connection close before stream end" or something along those lines might help indicate the problem a little more clearly. But at a more root level, I also wonder if we could lean more heavily on AWS as a group to change their behavior. I agree it really seems like they should let that last response write out its whole body to EOF before they close the TCP connection... but I guess they want to have timeouts on their end too, so that if the server's app code never EOFs the response body for some reason, they still clean up the TCP connection at some point, hence their current behavior...? I guess https://forums.aws.amazon.com/thread.jspa?messageID=771883#771883 appears to be the most recent public thread with AWS about this? But I also wonder if other conversations have gone on behind the scenes. |
I searched in our logs for GOAWAY and found a couple of thousand hits with the following message: All hits have ErrCode=NO_ERROR afaict. This seems harmless; can this be an info message instead of an error? |
hi, same error.
|
If it helps anyone else out there, we solved this problem by changing our ALB config to use HTTP/1 instead of HTTP/2. It is a workaround obviously and not a fix, but it is effective for now until Go gets around to changing this behavior. |
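(If it helps others find the setting: for an ALB, HTTP/2 toward clients appears to be controlled by the load balancer attribute routing.http2.enabled, so turning that attribute off in the console or via the elbv2 API is the kind of change being described; double-check against the current AWS docs.)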
You misunderstand where the problem lies. See my earlier comment. This isn't something that Go can fix. ALB is hanging up in the middle of responses. This is an Amazon problem. |
It has been a while since the long and painful process of troubleshooting this issue with AWS support, so my memory may be faulty. I had two seemingly conflicting notes on this issue and how it was concluded. Needless to say, the result was a bit unsatisfying and resulted in our just disabling HTTP/2 on our load balancers. One note I had, which seemed to support the POV that the issue was caused by a problem with the way AWS load balancers handle the HTTP/2 standard, said
> ALBs seem to send GOAWAY — as you linked, https://tools.ietf.org/html/rfc7540#section-6.8 — mid-stream with a last stream id, which is fine, then potentially more header/data frames for that last stream id, which is fine, but never sets the END_STREAM flag — https://tools.ietf.org/html/rfc7540#section-8.1 — on any frame in the stream before closing the connection, which is the problem. This is an error in the ALB implementation of the HTTP2 spec — it is dropping the connection with a stream in an open state — https://tools.ietf.org/html/rfc7540#section-5.1 — which golang is correctly handling as an "unexpected closed connection" error, albeit hidden beneath a "goaway" error.
But then I had some follow-up notes that appeared to imply the way the Go HTTP/2 client was handling GOAWAY is what was responsible. They included these notes from AWS support:
> the RFC does not state that after a GOAWAY Frame is sent, the sender (ALB) 'must' allow all Streams including-and-up-to the stream identifier included in the frame, to send END_STREAM before terminating the connection. These are separate concepts, the connection termination process 'should' allow for this to happen, but what if the GOAWAY Frame includes an error code and a stream identifier of 0 - It moves responsibility onto the client application to respond or retry in the appropriate manner.
Then I have some much less organized notes from some internal discussions at this time, where there's some speculation about whether the issue is that this is being treated as an error when it should not be. I don't recall the context right now. Specifically, elsewhere I've noted:
> Since the server is sending NO_ERROR, your client should simply try to reconnect, and not treat the message as an error.
Regardless, I think some reality check is in order. AWS is not infallible, but when such a significant portion of the Internet depends on AWS, an issue like this should be brought to some sort of satisfactory conclusion. AWS is either RFC compliant or not, and it seems there may be differing opinions on this; however, Go applications should be able to use services from the single largest cloud provider without encountering errors like this. I think it is noteworthy that I haven't seen this issue discussed on forums for any other languages. |
Let me know if you need a connection (no pun intended) inside of AWS.
|
@jeffbarr, gladly. I'm bradfitz on Twitter (DMs open) or gmail.com/golang.org. Thanks! |
I am working to find a good connection, stay tuned... |
We are also seeing this issue on ELBs in front of ElasticSearch/OpenSearch clusters:
|
@jeffbarr, thanks for the connection! Three of us hopped on a call the other day and were able to repro the issue on demand. For the record, the tool we used for debugging was https://github.com/bradfitz/h2slam pointed at an ALB; we then changed certain ALB parameters on the AWS control plane and the TCP connection from AWS would fail (in up to 10 seconds), often without even a GOAWAY. I'll let AWS folk give further updates here. |
|
Hitting the ElasticSearch ALB with https://github.com/bradfitz/h2slam gives
which is actually the expected behavior from the AWS documentation's perspective. So exactly at the 10,000th request on the same connection, the connection closes. |
Hi folks, was there ever a resolution here? I'm witnessing the same problematic behavior from the ALB. The GOAWAY frame seems to preempt the response data, which is never received by the application. |
To ensure we didn't hit this behavior, we ended up switching all of our ALB target groups to HTTP/1, and the problem no longer occurs. In addition, we made this change in our HTTP client with the following reference:
|
My brief takeaway from the discussion: this is an issue with how Go handles the behaviour of the AWS load balancer (or similar) over an HTTP/2 connection.
So the root cause is "too heavy headers" or a "DDoS" (high load at that moment) on the server/service your Go app makes HTTP/2 requests to. I suggest:
|
I am getting this error consistently as well with https://mubi.com. Disabling HTTP/2 seems to fix it:

client := http.Client{
	Transport: &http.Transport{
		Proxy:        http.ProxyFromEnvironment,
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	},
} |
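(For context: a Transport with a non-nil but empty TLSNextProto map never negotiates HTTP/2, which is why this works; the net/http documentation describes it as the supported way to opt out per Transport. The same thing can be done process-wide, without code changes, via the documented GODEBUG=http2client=0 environment variable.)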
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?

$ go version
go version go1.8rc1 darwin/amd64
What operating system and processor architecture are you using (go env)?

Linux AMD64
What did you do?
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
We have HTTP client code that has started to return errors when the corresponding server uses HTTP/2 instead of HTTP/1.1.
What did you expect to see?
Identical behavior.
What did you see instead?
http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=""