Profile Guided Optimization results in high memory usage #6991
Comments
Note that on ARM/Graviton processors, options 2 and 3 result in a 20%+ throughput increase when running grpc-go's benchmarks with concurrency=100 and a 1KB payload:

The same benchmark running on Intel gives less of a gain, but it is still a clear improvement:
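(For reference, runs like this typically go through grpc-go's benchmain harness. The command below is only my approximation of the setup described above, not the exact invocation used in this thread; the flag names come from `benchmark/benchmain/main.go` and may differ between versions.)

```sh
# Approximate benchmain invocation for "concurrency=100, 1KB payload";
# not the exact command behind the numbers above.
go run ./benchmark/benchmain/main.go \
  -workloads=unary \
  -maxConcurrentCalls=100 \
  -reqSizeBytes=1024 \
  -respSizeBytes=1024 \
  -benchtime=30s
```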
Thanks for looking into this and thinking about potential solutions. After reading both issue threads I like options 1 and 3, especially since 3 increases throughput and also doesn't allocate when the connection is idle. However, option 1 seems very simple (I don't know much about the history of these pragmas in this codebase). Doug is out this week and back next, and I trust his judgement on this, so I'll defer to him on the final decision, but we'd definitely be willing to review any PRs/patches for this :)
Sorry for the delay here. I'm fine with option (1) as a quick fix. What are your thoughts about this option:
Thanks for asking. I agree that option 4 sounds like the best. I had run benchmarks with option 4 (…). I did find out where the perf improvement on ARM comes from, though: it is from zeroing the array (image omitted). Option 4 also gets rid of this zeroing in the hot path, so there's something else going on with sync.Pool that I couldn't explain. I'll try to finish that investigation and then decide on option 4 vs option 1.
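For anyone following along, here is a minimal sketch of the sync.Pool idea being discussed, assuming a pooled 16KiB scratch buffer; the names (`bufPool`, `writeFrame`) are illustrative and this is not the actual loopy writer code:

```go
// Sketch of a pooled scratch buffer (not grpc-go's actual code).
package main

import (
	"bytes"
	"fmt"
	"io"
	"sync"
)

// bufPool hands out reusable 16KiB buffers. Unlike a fresh stack array,
// a buffer returned by Get is reused as-is, so the per-frame zeroing
// disappears from the hot path.
var bufPool = sync.Pool{
	New: func() any { return new([16 * 1024]byte) },
}

// writeFrame copies payload into a pooled buffer and writes it out,
// returning the buffer to the pool afterwards so an idle connection
// does not hold on to a 16KiB array.
func writeFrame(w io.Writer, payload []byte) error {
	buf := bufPool.Get().(*[16 * 1024]byte)
	defer bufPool.Put(buf)
	n := copy(buf[:], payload)
	_, err := w.Write(buf[:n])
	return err
}

func main() {
	var out bytes.Buffer
	if err := writeFrame(&out, []byte("hello")); err != nil {
		fmt.Println("write failed:", err)
	}
	fmt.Println(out.Len(), "bytes written")
}
```

Because the pooled buffer is reused rather than freshly allocated, it is not zeroed on each frame, which is consistent with the zeroing cost observed in the ARM profile.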
This issue is labeled as requiring an update from the reporter, and no update has been received after 6 days. If no update is provided in the next 7 days, this issue will be automatically closed.
This is likely to be addressed by the work in PapaCharlie#1, which reworks the loopy writer using option 4. This will probably decrease performance slightly on the ARM benchmark, but that might be offset by other optimizations in that branch. And I'm not sure the slowdowns I'm seeing on ARM are going to materialize into real losses in production usage. I'd suggest we try this branch when @PapaCharlie is ready with it.
We recently started using profile-guided optimization (PGO) for our Go gRPC services, and in some cases saw a significant increase in memory usage from the optimized binaries.

The details of the investigation can be found in golang/go#65532. To summarize, PGO may inline `internal/transport.(*loopyWriter).processData`, which is called 3 times in `internal/transport.(*loopyWriter).run`. This is the goroutine that schedules writes of HTTP2 frames on TCP connections. `processData` allocates a 16KiB array on the stack to construct a frame, so inlining it into `loopyWriter.run` results in a total of 48KiB allocated per connection instead of 16KiB (which may even be released while loopy is blocked). When there are many connections, this can be a lot of memory. One of our production services saw a 20% memory increase after building with PGO due to this issue.

There are options to still use PGO (which otherwise provides interesting gains) while avoiding this undesirable side effect, but they require changes to grpc-go:

1. Add a `go:noinline` pragma to `loopyWriter.processData` to avoid any memory increase.
2. Move the 16KiB array up into `loopyWriter.run` so it is allocated once per connection. The downside is that when the connection is idle, the 16KiB cannot be reclaimed.
3. Allocate the array lazily in `loopyWriter.run`, so that when loopy blocks because the connection is idle, the array is not allocated.

From those, options 1 and 3 seem the most compelling to me. Would you be willing to accept a patch for this?
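To make the inlining effect concrete, here is a simplified sketch of option 1, assuming stand-in functions (`processData`, `run`) rather than the real loopyWriter code:

```go
// Simplified illustration of option 1; not grpc-go's actual code.
package main

import "fmt"

// processData stands in for loopyWriter.processData: it keeps a 16KiB
// scratch array in its own stack frame. The go:noinline directive
// (option 1) keeps PGO from inlining it into run, where the array
// would otherwise be duplicated once per call site.
//
//go:noinline
func processData(n int) int {
	var localBuf [16 * 1024]byte // lives only for the duration of this call
	return copy(localBuf[:], make([]byte, n))
}

// run stands in for loopyWriter.run, which calls processData from three
// places. If processData were inlined at all three call sites, each copy
// of the array would land in run's frame: ~48KiB per connection instead
// of 16KiB that can be released while the writer is blocked.
func run() int {
	total := processData(1)
	total += processData(2)
	total += processData(3)
	return total
}

func main() {
	fmt.Println(run())
}
```

With the directive in place, the array exists only while a single `processData` call is running; without it, the inliner's decision dictates how much stack every connection's writer goroutine keeps.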