Streaming fixes #970
Conversation
- Allow DisablePreParseMultipartForm in combination with StreamRequestBody.
- Support streaming into MultipartForm instead of reading the whole body first.
- Support calling ctx.PostBody() when streaming is enabled.
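As a rough usage sketch of what this enables (the handler body and address are illustrative, not part of this PR):

package main

import (
	"fmt"
	"log"

	"github.com/valyala/fasthttp"
)

func main() {
	s := &fasthttp.Server{
		// Read the request body as a stream instead of buffering it fully.
		StreamRequestBody: true,
		// Don't pre-parse multipart forms before calling the handler.
		DisablePreParseMultipartForm: true,
		Handler: func(ctx *fasthttp.RequestCtx) {
			// With these changes, MultipartForm() parses from the stream and
			// PostBody() also works while streaming is enabled.
			form, err := ctx.MultipartForm()
			if err != nil {
				ctx.Error(err.Error(), fasthttp.StatusBadRequest)
				return
			}
			ctx.SetBodyString(fmt.Sprintf("received %d file field(s)", len(form.File)))
		},
	}
	log.Fatal(s.ListenAndServe(":8080"))
}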
if err != nil {
	bodyBuf.SetString(err.Error())
}
}
Without this, various functions such as ctx.PostBody() wouldn't work when streaming was enabled.
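A simplified illustration of the idea (this is not fasthttp's internal code, and the helper name is made up): when streaming is enabled, body accessors first have to drain the remaining stream into the body buffer before the bytes can be returned.

package bodysketch

import (
	"bytes"
	"io"
)

// drainBodyStream copies whatever is left of the stream into the body buffer.
// On a read error it mirrors the snippet above and stores the error text as
// the body, so callers in the spirit of ctx.PostBody() still get bytes back.
func drainBodyStream(bodyStream io.Reader, bodyBuf *bytes.Buffer) []byte {
	if _, err := bodyBuf.ReadFrom(bodyStream); err != nil {
		bodyBuf.Reset()
		bodyBuf.WriteString(err.Error())
	}
	return bodyBuf.Bytes()
}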
if err != nil {
	return nil, err
}
mr := multipart.NewReader(bodyStream, req.multipartFormBoundary)
req.multipartForm, err = mr.ReadForm(8 * 1024)
Don't use AppendGunzipBytes and readMultipartForm, which require the whole body to be in memory. Using gzip.Reader and multipart.NewReader we can stream the body and let multipart keep big files on disk, reducing memory usage.
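A sketch of that shape, assuming bodyStream is an io.Reader over the raw body and the boundary comes from the Content-Type header (function and parameter names are illustrative):

package formsketch

import (
	"compress/gzip"
	"io"
	"mime/multipart"
)

// parseStreamedForm feeds the body stream straight into multipart.Reader,
// optionally decompressing on the fly with gzip.Reader. ReadForm buffers
// small parts in memory (up to maxMemory) and spills larger file parts to
// temporary files on disk, so the whole body never has to be buffered.
func parseStreamedForm(bodyStream io.Reader, boundary string, gzipped bool) (*multipart.Form, error) {
	r := bodyStream
	if gzipped {
		zr, err := gzip.NewReader(bodyStream)
		if err != nil {
			return nil, err
		}
		defer zr.Close()
		r = zr
	}
	mr := multipart.NewReader(r, boundary)
	return mr.ReadForm(8 * 1024) // same 8 KiB in-memory limit as in the diff above
}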
req.bodyStream = acquireRequestStream(bodyBuf, r, contentLength)
return nil

return req.ContinueReadBodyStream(r, maxBodySize, preParseMultipartForm)
Change this to be similar to readBody, which also just calls ContinueReadBody instead of having the logic inlined.
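Illustrative sketch of the suggested shape only; the real fasthttp signatures differ slightly, but the streaming path becomes a thin wrapper, just like readBody delegating to ContinueReadBody:

package readsketch

import "bufio"

// bodyStreamer stands in for fasthttp's Request in this sketch.
type bodyStreamer interface {
	ContinueReadBodyStream(r *bufio.Reader, maxBodySize int, preParseMultipartForm bool) error
}

// readBodyStream no longer inlines any body-reading logic; it simply defers
// to its Continue* counterpart, mirroring readBody/ContinueReadBody.
func readBodyStream(req bodyStreamer, r *bufio.Reader, maxBodySize int, preParseMultipartForm bool) error {
	return req.ContinueReadBodyStream(r, maxBodySize, preParseMultipartForm)
}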
if err != nil {
	if err == ErrBodyTooLarge {
		req.Header.SetContentLength(contentLength)
		req.body = bodyBuf
		req.bodyRaw = bodyBuf.B[:bodyBufLen]
No need to store the unread part in bodyRaw, as it wasn't used and was causing bugs in functions that checked bodyRaw != nil and assumed the whole body was in there.
readN := maxBodySize
if readN > contentLength {
	readN = contentLength
Without this, we would wait forever on bodies smaller than 8 KB, as we would block on reading from a connection that never had any more data.
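The idea as a standalone sketch (the helper and its signature are hypothetical, not fasthttp's): clamp the prefetch to the declared Content-Length so a small body never leaves us waiting on bytes that will not arrive.

package prefetchsketch

import "io"

// prefetchBody reads at most min(maxBodySize, contentLength) bytes. Reading
// maxBodySize unconditionally would block on bodies smaller than the prefetch
// size, because the connection never delivers the missing bytes.
func prefetchBody(r io.Reader, maxBodySize, contentLength int) ([]byte, error) {
	readN := maxBodySize
	if readN > contentLength {
		readN = contentLength
	}
	buf := make([]byte, readN)
	n, err := io.ReadFull(r, buf)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}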
@@ -3413,8 +3425,8 @@ func TestMaxBodySizePerRequest(t *testing.T) {
 func TestStreamRequestBody(t *testing.T) {
 	t.Parallel()

-	part1 := strings.Repeat("1", 1<<10)
-	part2 := strings.Repeat("2", 1<<20-1<<10)
+	part1 := strings.Repeat("1", 1<<15)
Had to increase this size. If the body is big enough, we always block on chunks of 8 KB, so when part1 is smaller than 8 KB we get a read timeout. (1<<10 is 1 KiB, below the 8 KiB chunk size; 1<<15 is 32 KiB, so the first read can complete.)
if prefetchedSize > rs.totalBytesRead {
	left := prefetchedSize - rs.totalBytesRead
	if len(p) > left {
		p = p[:left]
Never try to read more into p than we have left; otherwise a big p would block on reading data that would never come.
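The same clamp as a self-contained sketch; the field names follow the snippet, but the type itself is illustrative, not fasthttp's actual requestStream:

package streamsketch

import "io"

// prefetchedStream serves the prefetched bytes first and never asks the
// underlying reader for more of them than actually remain.
type prefetchedStream struct {
	r              io.Reader
	prefetchedSize int
	totalBytesRead int
}

func (rs *prefetchedStream) Read(p []byte) (int, error) {
	if rs.prefetchedSize > rs.totalBytesRead {
		left := rs.prefetchedSize - rs.totalBytesRead
		if len(p) > left {
			// A larger p would block on data that is never going to come.
			p = p[:left]
		}
	}
	n, err := rs.r.Read(p)
	rs.totalBytesRead += n
	return n, err
}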
"github.com/valyala/fasthttp/fasthttputil"
)

func TestStreamingPipeline(t *testing.T) {
Streaming wasn't compatible with pipelining, as the stream would already read data from the next request, which would then fail that next request.
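A rough sketch of how such a regression can be exercised (the request bytes, handler, and assertions are illustrative, not the exact test added in this PR): two requests are written back-to-back on a single connection while StreamRequestBody is enabled, and both responses must still come back.

package fasthttp_test

import (
	"bufio"
	"io"
	"net/http"
	"testing"

	"github.com/valyala/fasthttp"
	"github.com/valyala/fasthttp/fasthttputil"
)

func TestStreamingPipelineSketch(t *testing.T) {
	ln := fasthttputil.NewInmemoryListener()
	s := &fasthttp.Server{
		StreamRequestBody: true,
		Handler: func(ctx *fasthttp.RequestCtx) {
			// Reading the body here must not consume bytes belonging to the
			// next pipelined request.
			ctx.SetBodyString("echo: " + string(ctx.PostBody()))
		},
	}
	go s.Serve(ln) //nolint:errcheck

	c, err := ln.Dial()
	if err != nil {
		t.Fatal(err)
	}
	req := "POST / HTTP/1.1\r\nHost: example.com\r\nContent-Length: 5\r\n\r\nhello"
	// Write both requests before reading any response.
	if _, err := c.Write([]byte(req + req)); err != nil {
		t.Fatal(err)
	}
	br := bufio.NewReader(c)
	for i := 0; i < 2; i++ {
		resp, err := http.ReadResponse(br, nil)
		if err != nil {
			t.Fatalf("response %d: %v", i, err)
		}
		if _, err := io.Copy(io.Discard, resp.Body); err != nil {
			t.Fatalf("response %d body: %v", i, err)
		}
		resp.Body.Close()
	}
}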