
Streaming fixes #970

Merged
erikdubbelboer merged 1 commit into master from streaming-fixes on Feb 16, 2021
Conversation

erikdubbelboer (Collaborator) commented:
- Allow DisablePreParseMultipartForm in combination with StreamRequestBody.
- Support streaming into MultipartForm instead of reading the whole body first.
- Support calling ctx.PostBody() when streaming is enabled.
	if err != nil {
		bodyBuf.SetString(err.Error())
	}
}
erikdubbelboer (Collaborator, Author) commented:
Without this, various functions such as ctx.PostBody() wouldn't work when streaming was enabled.
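
As a minimal sketch of what this enables (standard library only; bodyBytesFromStream is a hypothetical helper, not fasthttp's actual internals): when streaming is enabled, the remaining stream is drained into the body buffer, so accessors such as ctx.PostBody() still see the complete body, with a read error stored in the buffer just like in the hunk above.

package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// bodyBytesFromStream drains the remaining body stream into the buffer so
// the complete body can be returned. On a read error it stores the error
// text in the buffer, mirroring the hunk above.
func bodyBytesFromStream(bodyStream io.Reader, bodyBuf *bytes.Buffer) []byte {
	bodyBuf.Reset()
	if _, err := io.Copy(bodyBuf, bodyStream); err != nil {
		bodyBuf.Reset()
		bodyBuf.WriteString(err.Error())
	}
	return bodyBuf.Bytes()
}

func main() {
	var buf bytes.Buffer
	fmt.Printf("%s\n", bodyBytesFromStream(strings.NewReader("a=1&b=2"), &buf)) // a=1&b=2
}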

	if err != nil {
		return nil, err
	}
	mr := multipart.NewReader(bodyStream, req.multipartFormBoundary)
	req.multipartForm, err = mr.ReadForm(8 * 1024)
erikdubbelboer (Collaborator, Author) commented:
Don't use AppendGunzipBytes and readMultipartForm, which require the whole body to be in memory. Using gzip.Reader and multipart.NewReader we can stream the body, letting multipart keep big files on disk and reducing memory usage.
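
A sketch of that approach under stated assumptions (standard library only; readStreamedMultipartForm and its signature are illustrative, not the fasthttp code): the body stream is optionally wrapped in a gzip reader and handed to multipart.Reader, whose ReadForm keeps small parts in memory and spills larger file parts to temporary files on disk.

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"mime/multipart"
)

// readStreamedMultipartForm parses a multipart body incrementally instead
// of buffering it all first. File parts that don't fit in maxMemory are
// written to temp files by ReadForm rather than kept in memory.
func readStreamedMultipartForm(bodyStream io.Reader, gzipped bool, boundary string) (*multipart.Form, error) {
	if gzipped {
		zr, err := gzip.NewReader(bodyStream)
		if err != nil {
			return nil, err
		}
		bodyStream = zr
	}
	mr := multipart.NewReader(bodyStream, boundary)
	return mr.ReadForm(8 * 1024)
}

func main() {
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	fw, _ := w.CreateFormField("name")
	fw.Write([]byte("value"))
	w.Close()

	form, err := readStreamedMultipartForm(&buf, false, w.Boundary())
	if err != nil {
		panic(err)
	}
	fmt.Println(form.Value["name"]) // [value]
}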

	req.bodyStream = acquireRequestStream(bodyBuf, r, contentLength)
	return nil

	return req.ContinueReadBodyStream(r, maxBodySize, preParseMultipartForm)
erikdubbelboer (Collaborator, Author) commented:
Change this to be similar to readBody, which also just calls ContinueReadBody instead of having the logic inlined.

	if err != nil {
		if err == ErrBodyTooLarge {
			req.Header.SetContentLength(contentLength)
			req.body = bodyBuf
-			req.bodyRaw = bodyBuf.B[:bodyBufLen]
erikdubbelboer (Collaborator, Author) commented:
No need to store the unread part in bodyRaw: it wasn't used, and it caused bugs in functions that checked bodyRaw != nil and assumed the whole body was in there.
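
A toy illustration of the bug this avoids (hypothetical names, not the fasthttp code): callers treated a non-nil bodyRaw as "the complete body is buffered here", but while streaming it only ever held the prefetched prefix.

package main

import "fmt"

// body shows the faulty assumption: a non-nil bodyRaw is taken to be the
// whole body, which is wrong while streaming (it is only a prefix).
func body(bodyRaw []byte, readRest func() []byte) []byte {
	if bodyRaw != nil {
		return bodyRaw // bug while streaming: returns just the prefix
	}
	return readRest()
}

func main() {
	prefix := []byte("prefetched")
	full := func() []byte { return []byte("prefetched-plus-the-rest") }
	fmt.Printf("%s\n", body(prefix, full)) // prints only "prefetched"
}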


	readN := maxBodySize
	if readN > contentLength {
		readN = contentLength
	}
erikdubbelboer (Collaborator, Author) commented:
Without this we would wait forever on bodies smaller than 8KB, as we would block reading from a connection that had no more data to send.
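
The fix boils down to capping the first blocking read at Content-Length. A minimal sketch (initialReadSize is an illustrative name, not the fasthttp code):

package main

import "fmt"

// initialReadSize caps the first read at the request's Content-Length so
// we never block waiting for bytes the client will not send.
func initialReadSize(maxBodySize, contentLength int) int {
	readN := maxBodySize
	if readN > contentLength {
		readN = contentLength
	}
	return readN
}

func main() {
	fmt.Println(initialReadSize(8*1024, 100)) // 100, not 8192
}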

@@ -3413,8 +3425,8 @@ func TestMaxBodySizePerRequest(t *testing.T) {
func TestStreamRequestBody(t *testing.T) {
t.Parallel()

-	part1 := strings.Repeat("1", 1<<10)
-	part2 := strings.Repeat("2", 1<<20-1<<10)
+	part1 := strings.Repeat("1", 1<<15)
erikdubbelboer (Collaborator, Author) commented:
Had to increase this size: if the body is big enough we always block on 8KB chunks, so when part1 is smaller than 8KB we get a read timeout.

	if prefetchedSize > rs.totalBytesRead {
		left := prefetchedSize - rs.totalBytesRead
		if len(p) > left {
			p = p[:left]
		}
erikdubbelboer (Collaborator, Author) commented:
Never try to read more into p than we have left. Otherwise a big p would block on reading data that would never come.
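
The same idea as a self-contained sketch (cappedReader is illustrative, not the actual requestStream type): shrink p to the bytes known to remain, so a large destination buffer never blocks on data that will never arrive, and bytes belonging to the next pipelined request are left untouched.

package main

import (
	"fmt"
	"io"
	"strings"
)

// cappedReader reads at most `left` bytes from the underlying reader,
// shrinking each destination slice so it never waits for extra data.
type cappedReader struct {
	r    io.Reader
	left int // bytes remaining in this request's body
}

func (c *cappedReader) Read(p []byte) (int, error) {
	if c.left <= 0 {
		return 0, io.EOF
	}
	if len(p) > c.left {
		p = p[:c.left]
	}
	n, err := c.r.Read(p)
	c.left -= n
	return n, err
}

func main() {
	r := &cappedReader{r: strings.NewReader("hello, bytes of the next request"), left: 5}
	b, _ := io.ReadAll(r)
	fmt.Printf("%s\n", b) // hello
}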


"github.com/valyala/fasthttp/fasthttputil"
)

func TestStreamingPipeline(t *testing.T) {
erikdubbelboer (Collaborator, Author) commented:
Streaming wasn't compatible with pipelining: the stream would already read data belonging to the next request, which then caused that request to fail.
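
A tiny illustration of the failure mode (hypothetical wire data, not the PR's TestStreamingPipeline): with pipelining, the second request sits directly behind the first body on the same connection, so a body reader that over-reads past Content-Length consumes the start of the next request line and breaks its parsing.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Two pipelined requests share one connection buffer. The first body
	// is exactly 5 bytes; anything read past it belongs to request two.
	wire := "POST /a HTTP/1.1\r\nContent-Length: 5\r\n\r\nhelloPOST /b HTTP/1.1\r\nContent-Length: 0\r\n\r\n"
	body := wire[strings.Index(wire, "\r\n\r\n")+4:]
	fmt.Printf("first body:   %q\n", body[:5])   // "hello"
	fmt.Printf("next request: %q\n", body[5:21]) // "POST /b HTTP/1.1"
}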

erikdubbelboer merged commit 3cd0862 into master on Feb 16, 2021.
erikdubbelboer deleted the streaming-fixes branch on February 16, 2021 at 20:53.