
fix: excessive memory usage #2164

Merged

Conversation

eusebiu-constantin-petu-dbk
Collaborator

Instead of reading entire files before calculating their digests, stream them by using their Reader method.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.


codecov bot commented Jan 12, 2024

Codecov Report

Attention: 12 lines in your changes are missing coverage. Please review.

Comparison is base (d685adb) 92.15% compared to head (1d89d99) 92.14%.
Report is 2 commits behind head on main.

Files Patch % Lines
pkg/storage/imagestore/imagestore.go 82.08% 7 Missing and 5 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2164      +/-   ##
==========================================
- Coverage   92.15%   92.14%   -0.01%     
==========================================
  Files         165      165              
  Lines       28768    28751      -17     
==========================================
- Hits        26510    26493      -17     
+ Misses       1664     1663       -1     
- Partials      594      595       +1     


@rchincha
Contributor

@peusebiu can we see the effect of this change when an image with several large blobs needs to be scrubbed?

@eusebiu-constantin-petu-dbk
Collaborator Author

> @peusebiu can we see the effect of this change when an image with several large blobs needs to be scrubbed?

OK, I needed to handle a locking case, so I made an update to add a new storage method that doesn't lock.

Yes, I tested it and there is no memory spike at all: on my machine it stays at a steady 1.3% memory, while on main there are a lot of spikes when it's digesting big blobs.

@eusebiu-constantin-petu-dbk force-pushed the stream_shasums branch 2 times, most recently from 190db9a to 1d89d99 on January 15, 2024 16:03
instead of reading entire files before calculating their digests
stream them by using their Reader method.

Signed-off-by: Petu Eusebiu <[email protected]>
@rchincha rchincha merged commit d1bf713 into project-zot:main Jan 16, 2024
31 of 33 checks passed
Successfully merging this pull request may close these issues.

[Bug]: Excessive Memory Usage