Memory Usage Does Not Decrease After Large Inputs #185
Comments

Hey @mihainiculai, thanks for submitting this bug. I will check.

From my use of LLM Guard, I wondered, though, if in the model config in

@asofter Can we assist in any way?

I have problems, too. Is there any way to reduce the CPU usage? @asofter
I am using only the anonymizer scanner with the LLM Guard API, and I noticed a significant increase in memory (RAM) usage when processing larger inputs, with the memory never being released afterward. For example, when processing small inputs, memory consumption is around 2GB. However, when I pass inputs of around 4-5k characters, memory usage increases to 4-5GB and stays at that level even after processing is complete. If I input something excessive, like 15k characters, memory usage spikes to 240GB (likely starting to write to disk at that point).
I experience this behavior with all default settings, except for removing some scanners from the scanners.yaml file. Is this expected behavior, or is there an issue with memory management when using the anonymizer scanner?
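For anyone trying to quantify this, a minimal measurement sketch may help isolate whether the growth scales with input size. This is only an illustration: the `process` function below is a hypothetical stand-in for the real scanner call (e.g. something like `scanner.scan(prompt)`), and `tracemalloc` only tracks Python-heap allocations, not native memory held by the underlying NLP models — so absolute numbers will differ from the RSS figures reported above.

```python
import gc
import tracemalloc


def measure_peak_kb(fn, *args):
    """Run fn(*args) and return its peak Python-heap allocation in KiB."""
    gc.collect()
    tracemalloc.start()
    fn(*args)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak / 1024


def process(text):
    # Hypothetical stand-in for the anonymizer call: builds large
    # intermediate structures proportional to input size, roughly the
    # way a tokenizer/NER pipeline would.
    tokens = text.split()
    return [t.upper() for t in tokens]


small = measure_peak_kb(process, "word " * 100)
large = measure_peak_kb(process, "word " * 100_000)
print(f"small input peak: {small:.0f} KiB, large input peak: {large:.0f} KiB")
```

Running this pattern against the actual scanner (and checking process RSS with an external tool such as `ps` or `top` between calls) would show whether memory is retained inside Python objects or inside native model buffers, which points to very different fixes.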