Reduce memcpy with chunked encoding #9838
Conversation
CodSpeed Performance Report: Merging #9838 will improve performance by 21.36%.

Benchmarks breakdown
Maybe we should use
Codecov Report: All modified and coverable lines are covered by tests ✅
✅ All tests successful. No failed tests found.

@@ Coverage Diff @@
## master #9838 +/- ##
==========================================
- Coverage 98.70% 98.70% -0.01%
==========================================
Files 118 118
Lines 36148 36145 -3
Branches 4294 4294
==========================================
- Hits 35680 35677 -3
Misses 315 315
Partials 153 153
#9839 is likely going to be a better solution
replaced by #9839
Finishing a chunk involves a lot of memcpy. We can switch it to b"".join(), which has a more efficient implementation (https://github.com/python/cpython/blob/91f4908798074db6c41925b4417bee1f933aca93/Objects/stringlib/join.h#L36) than constructing a new bytes object for every add operation.
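A minimal sketch of the idea (not aiohttp's actual code; the function names are hypothetical): repeated += on bytes allocates a fresh bytes object and copies everything accumulated so far on each step, while b"".join() sizes the result buffer once from the total length and copies each part a single time.

```python
def finish_chunk_concat(payload: bytes) -> bytes:
    # Each += below creates a new bytes object and memcpy's all prior data.
    chunk = b"%x\r\n" % len(payload)  # chunk-size line in hex
    chunk += payload
    chunk += b"\r\n"
    return chunk


def finish_chunk_join(payload: bytes) -> bytes:
    # b"".join() computes the total output length first, then performs
    # one copy per part into a single pre-sized buffer.
    return b"".join((b"%x\r\n" % len(payload), payload, b"\r\n"))


# Both produce the same HTTP/1.1 chunk framing:
assert finish_chunk_concat(b"hello") == finish_chunk_join(b"hello") == b"5\r\nhello\r\n"
```

The payoff grows with the number of parts: with N appends, repeated concatenation does O(N^2) total copying in the worst case, while join stays O(total length).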