mem: implement ReadAll() for more efficient io.Reader consumption #7653
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@             Coverage Diff             @@
##            master    #7653      +/-  ##
==========================================
+ Coverage    81.87%   81.89%   +0.01%
==========================================
  Files          373      373
  Lines        37822    37857      +35
==========================================
+ Hits         30967    31003      +36
- Misses        5563     5564       +1
+ Partials      1292     1290       -2
@ash2k are you working on this actively?
@purnesh42H No, I'm not. I think I've answered the questions and it's ready to be merged. Is something off still? Let me know what needs to be changed.
@PapaCharlie PTAL
There are still a whole lot of comments which are not wrapped at 80 columns. Could you please take care of that? Also, please don't mark comments as resolved. It is the responsibility of the person making the comment to mark it as resolved when they think that the comment has been sufficiently addressed.
Wrapped. Let me know if I missed something.
Ok, fair point. I used that as a way to track what I have addressed, but I see why that's not the best idea.
Related question: I see quite a few calls with a …
Thanks for taking care of the comments.
IIRC, the …
mem/buffer_slice.go
Outdated
wt, ok := r.(io.WriterTo)
if ok {
Please combine into a compound `if` to limit the scope of `wt`, too.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Done
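For reference, a minimal sketch of the compound `if` form being suggested, with `wt` scoped to the `if` statement. The helper name and signature here are hypothetical, not code from the PR (only the standard `io` package is assumed):

```go
// writeToIfSupported delegates to the reader's own WriteTo when available.
// Type-asserting and branching in a single statement keeps wt scoped to the if.
func writeToIfSupported(r io.Reader, w io.Writer) (int64, bool, error) {
	if wt, ok := r.(io.WriterTo); ok {
		n, err := wt.WriteTo(w)
		return n, true, err
	}
	return 0, false, nil
}
```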
// A failed call returns a non-nil error and could return partially read
// buffers. It is the responsibility of the caller to free these buffers.
This is surprising behavior. The one saving grace is that freeing buffers is optional -- GC will take them away if you forget. If not for that, I would say this is definitely not OK.
I highly doubt gRPC would ever want the partial data, and I'm curious why you want it, too.
This behavior matches `io.ReadAll()`. Reasons to do it this way:

- It is a drop-in replacement for `io.ReadAll()` (behavior-wise).
- Sometimes you may need the read data regardless of whether there was an error or not.

Example of the last point: a proxy forwarding a response from an upstream server. It must send everything it got from the upstream and then return an error (where "return an error" may mean RST the connection or something else, protocol-specific). Isn't e.g. a gRPC streaming-response client the same? It reads and provides the client with messages even if it has already received an error after the message. This is also similar to how HTTP/1.0 or 1.1 `Connection: close` works. Sometimes you don't want `Transfer-Encoding: chunked` and prefer the underlying connection to be closed on EOF instead of chunking.
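A rough usage sketch of the semantics described above, assuming the `mem.ReadAll(r io.Reader, pool mem.BufferPool) (mem.BufferSlice, error)` shape proposed in this PR and imports of `io` and `google.golang.org/grpc/mem`; `forwardToClient` is a hypothetical callback, not part of gRPC:

```go
// proxyBody forwards everything read from the upstream to the client before
// reporting any read error, mirroring io.ReadAll's partial-data semantics.
func proxyBody(upstream io.Reader, forwardToClient func(mem.BufferSlice)) error {
	data, err := mem.ReadAll(upstream, mem.DefaultBufferPool())
	// Forward whatever was read, even when err != nil, then release the
	// buffers back to the pool.
	forwardToClient(data)
	data.Free()
	// The error is surfaced only after the partial data has been sent,
	// e.g. so the caller can reset the downstream connection.
	return err
}
```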
mem/buffer_slice.go
Outdated
}
if len(*buf) == usedCap {
	result = append(result, NewBuffer(buf, pool))
	break // grab a new buf from pool
How about a label on 251 (`nextBuffer:`) and a `continue nextBuffer` here instead? Then you don't even need the comment, and it's easier to find where the control flow is going.

Or, would it work if the inner loop were `for len(*buf) < usedCap {` instead?
This reads better, thanks.
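For context, a hedged sketch of the labeled-loop shape discussed above, reusing the names from the diff snippet (`buf`, `usedCap`, `result`, `NewBuffer`). The chunk size and error handling are simplified, and this is not the code that was merged (assumes imports of `io` and `google.golang.org/grpc/mem`):

```go
// readAllSketch reads r into pool-backed buffers; a simplified, hypothetical
// version of the loop under discussion.
func readAllSketch(r io.Reader, pool mem.BufferPool) (mem.BufferSlice, error) {
	var result mem.BufferSlice
nextBuffer:
	for {
		buf := pool.Get(32 * 1024) // assumed read chunk size
		usedCap := 0
		for {
			n, err := r.Read((*buf)[usedCap:])
			usedCap += n
			if err != nil {
				*buf = (*buf)[:usedCap]
				result = append(result, mem.NewBuffer(buf, pool))
				if err == io.EOF {
					err = nil // EOF just means the reader is drained
				}
				return result, err
			}
			if len(*buf) == usedCap {
				// Buffer is full: hand it off and jump straight back to the
				// outer loop to grab a new buf from the pool. This replaces
				// the `break // grab a new buf from pool` pattern above.
				result = append(result, mem.NewBuffer(buf, pool))
				continue nextBuffer
			}
		}
	}
}
```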
This is current master vs this branch (just rebased on master to compare apples vs apples) with the benchmark from #7786.
I moved my project to gRPC 1.66.2 and saw a good reduction in RAM consumption. Now the hot spot is `decompress()`, where `io.Copy()` allocates a temporary buffer, reads from the reader into it, and then copies the read data into another buffer it got from the pool. This is an unnecessary allocation, an unnecessary copy, and it leaves the buffers from the pool underutilized.

This PR adds `mem.ReadAll()` (like `io.ReadAll()`) to efficiently consume a reader into buffers from the pool. I found #7631 while working on this code (I have similar code in my project, but decided to contribute it upstream and replace this `io.Copy` with it).

RELEASE NOTES:

- mem: add `ReadAll()` for more efficient `io.Reader` consumption
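To illustrate the `io.Copy()` pattern described above versus the new helper, here is a hedged before/after sketch. The "before" is a generic version of the pattern, not the literal grpc-go `decompress()` code, and the `mem.ReadAll` signature is assumed from this PR (imports: `bytes`, `io`, `google.golang.org/grpc/mem`):

```go
// Before (sketch): io.Copy reads into its own scratch buffer and the data is
// copied again into a destination buffer, costing an extra allocation and copy.
func readCompressedOld(dcReader io.Reader) ([]byte, error) {
	var tmp bytes.Buffer
	_, err := io.Copy(&tmp, dcReader)
	return tmp.Bytes(), err
}

// After (sketch): consume the reader directly into buffers taken from the pool.
func readCompressedNew(dcReader io.Reader, pool mem.BufferPool) (mem.BufferSlice, error) {
	return mem.ReadAll(dcReader, pool) // assumed signature, per this PR
}
```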