Perform quantization in Chunks #196

Merged 1 commit into pytorch:main on May 1, 2024
Conversation

drisspg (Contributor) commented on May 1, 2024:

Summary

Previously we were seeing a huge memory spike when attempting to NF4'ify very large tensors. This was due to quantize-to-nearest creating very large intermediates and realizing them in global memory (gmem). This PR instead performs the quantization in chunks, with a default chunk size of 1024**2 elements. This was found to be a reasonable tradeoff, keeping speed close to the original while drastically reducing memory usage.
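In outline, the change looks like the sketch below (hypothetical function and parameter names, not the PR's actual code): rather than broadcasting the whole tensor against the 16-entry NF4 codebook at once, only one chunk's worth of the distance intermediate is ever materialized.

```python
import torch

def quantize_to_nearest_chunked(values: torch.Tensor,
                                codebook: torch.Tensor,
                                chunk_size: int = 1024**2) -> torch.Tensor:
    """Map each element to the index of its nearest codebook entry,
    chunk_size elements at a time, bounding the broadcast intermediate."""
    flat = values.reshape(-1)
    out = torch.empty(flat.numel(), dtype=torch.uint8, device=values.device)
    for start in range(0, flat.numel(), chunk_size):
        chunk = flat[start:start + chunk_size]
        # The |x - c| broadcast is (len(chunk), len(codebook)); only this
        # chunk's intermediate is realized in memory, never the full tensor's.
        diffs = (chunk.unsqueeze(-1) - codebook.view(1, -1)).abs()
        out[start:start + chunk_size] = diffs.argmin(dim=-1).to(torch.uint8)
    return out.reshape(values.shape)
```

For a tensor with a billion elements and a 16-entry codebook, the unchunked broadcast intermediate would hold ~16 billion elements; chunking caps it at chunk_size × 16.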

drisspg force-pushed the add-chunking-option branch 2 times, most recently from 62a70bd to 20056b8 on May 1, 2024 17:52
jerryzh168 (Contributor) commented:

do we want to incorporate this into our quantization primitive in the future? #160

drisspg force-pushed the add-chunking-option branch from 20056b8 to ad50e90 on May 1, 2024 18:17
drisspg force-pushed the add-chunking-option branch from ad50e90 to 884eca9 on May 1, 2024 18:50
drisspg (Contributor, Author) commented on May 1, 2024:

@jerryzh168 like include the ability to apply quantization in chunks? I am not sure this problem manifests in all quantization techniques; I think it may have been a quirk of the broadcasting I was doing. But if it does come up again, it would be good to standardize on something. This change is a free lunch for QLoRA since we only quantize once. For dynamic quant, I think the speed cost might not be worth it.

cpuhrsch merged commit 6081796 into pytorch:main on May 1, 2024
13 checks passed
jerryzh168 (Contributor) commented:

> @jerryzh168 like include the ability to apply quantization in chunks? I am not sure this problem manifests in all quantization techniques; I think it may have been a quirk of the broadcasting I was doing. But if it does come up again, it would be good to standardize on something. This change is a free lunch for QLoRA since we only quantize once. For dynamic quant, I think the speed cost might not be worth it.

Our general op quantizes things with a certain block_size, so I'm wondering if that is what you are doing here as well; if so, I think it might make sense to merge the two.
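For contrast, here is a rough sketch of what quantizing with a block_size means (hypothetical names, not torchao's actual primitive): block_size changes the numerics by giving each block its own scale, whereas the PR's chunking only changes execution granularity and leaves the results bit-identical.

```python
import torch

def quantize_per_block(x: torch.Tensor, block_size: int = 64):
    """One scale per contiguous block of block_size elements (simplified
    sketch; assumes x.numel() is divisible by block_size)."""
    blocks = x.reshape(-1, block_size)
    scales = blocks.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
    normalized = blocks / scales  # each block now lies in [-1, 1]
    return normalized.reshape(x.shape), scales.squeeze(-1)
```

In NF4, a nearest-codebook lookup then maps the normalized values to 4-bit codes, and that lookup is the step the PR executes in chunks.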

dbyoung18 pushed a commit to dbyoung18/ao that referenced this pull request Jul 31, 2024
yanbing-j pushed a commit to yanbing-j/ao that referenced this pull request Dec 9, 2024