Perform quantization in Chunks #196
Conversation
Do we want to incorporate this into our quantization primitive in the future? #160
@jerryzh168 As in, include the ability to apply quantization in chunks? I am not sure this problem manifests in all quantization techniques; I think it may have been a quirk of the broadcasting I was doing. But if it comes up again, it would be good to standardize on a solution. This change is a "free lunch" for QLoRA since we only quantize once. For dynamic quant, I think the speedup might not be worth it.
Our general op quantizes things with a certain block_size, so I'm wondering if that is what you are doing here as well; if so, I think it might make sense to merge them.
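For context, a minimal sketch of what a block_size-style primitive might look like, to contrast with the chunking in this PR: block_size changes the numerics (one scale per block), while a chunk size only changes how the computation is batched. The helper name `blockwise_absmax_quantize`, the int8 absmax scheme, and the divisibility assumption are all illustrative, not the repo's actual primitive.

```python
import torch

def blockwise_absmax_quantize(x: torch.Tensor, block_size: int = 64):
    # Hypothetical per-block absmax quantization: one scale per
    # `block_size` contiguous elements. Assumes x.numel() is
    # divisible by block_size for brevity.
    flat = x.reshape(-1, block_size)
    scales = (flat.abs().amax(dim=1, keepdim=True) / 127.0).clamp(min=1e-12)
    q = (flat / scales).round().clamp(-127, 127).to(torch.int8)
    return q.reshape(-1), scales.squeeze(1)
```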
* fix gguf loader
* group_size -> groupsize
* dangling reference
* fix gguf_utils to use fqn
* typo
* fqn
* circular import
* more fqn
Summary
Previously we saw a huge memory spike when attempting to NF4'ify very large tensors. This was due to the quantize-to-nearest step creating very large broadcasted intermediates and materializing them in global memory. This PR instead performs the quantization in chunks, with a default chunk size of 1024**2 elements. This was found to be a reasonable tradeoff, giving up only a little speed while drastically reducing memory usage.
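To make the chunking idea concrete, here is a minimal sketch, assuming a quantize-to-nearest step that broadcasts each element against a small table of NF4 levels; the function name `quantize_in_chunks` and its signature are hypothetical, not this PR's actual code:

```python
import torch

def quantize_in_chunks(value: torch.Tensor, levels: torch.Tensor,
                       chunk_size: int = 1024**2) -> torch.Tensor:
    # Map each element of `value` to the index of its nearest entry in
    # `levels`, processing `chunk_size` elements at a time so the
    # broadcasted (num_elements, num_levels) distance intermediate is
    # never materialized for the whole tensor at once.
    flat = value.reshape(-1)
    out = torch.empty(flat.numel(), dtype=torch.uint8, device=flat.device)
    for start in range(0, flat.numel(), chunk_size):
        chunk = flat[start:start + chunk_size]
        # The distance matrix here is only (len(chunk), num_levels).
        diff = (chunk.unsqueeze(-1) - levels).abs()
        out[start:start + chunk_size] = diff.argmin(dim=-1).to(torch.uint8)
    return out.reshape(value.shape)
```

For a float32 tensor and 16 NF4 levels, the full broadcast needs roughly numel * 16 * 4 bytes for the distance matrix alone; chunking caps that intermediate at chunk_size * 16 * 4 bytes (64 MiB at the default 1024**2) regardless of tensor size.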