Supporting llama int4 inference using AutoGPTQ in HPU (#166) #1125
Closed
HolyFalafel wants to merge 3 commits into
Conversation
* Supporting llama int4 quantization using AutoGPTQ
* cleanups in int4
* Blocking running hqt with int4
* Rename int4 param to gptq
* Added call to preprocessing in gptq
* Added call to preprocessing in gptq fix
* Added call to preprocessing in gptq fix2
* Removed call to preprocessing (found a better solution on AutoGPTQ)
* Fixed deprecated message for exllama
This was referenced Jul 29, 2024
mounikamandava added a commit to emascarenhas/optimum-habana that referenced this pull request on Aug 2, 2024:
Supporting llama int4 inference using AutoGPTQ in HPU (huggingface#166) huggingface#1125
Contributor
Please sync your PR with main/upstream and fix any merge conflicts. Thank you.
Contributor
@HolyFalafel, please sync this PR with main and ping me to wrap it up. Thanks.
yafshar reviewed on Sep 10, 2024
> Llama2-7b in UINT4 is enabled using [AutoGPTQ Fork](https://github.com/HabanaAI/AutoGPTQ), which provides quantization capabilities in PyTorch.
> Currently, the support is for UINT4 inference of pre-quantized models only.
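For readers coming to the README change under review, here is a minimal sketch of what "UINT4 inference of pre-quantized models only" looks like from the user side. The checkpoint name, the `hpu` device string, and the loading path are illustrative assumptions, not code from this PR:

```python
# Sketch: loading a pre-quantized (GPTQ, UINT4) Llama 2 checkpoint for HPU inference.
# Assumes auto-gptq (HabanaAI fork) and optimum-habana are installed;
# the model id and the "hpu" device move below are illustrative only.
import habana_frameworks.torch.core as htcore  # noqa: F401  # registers the "hpu" device
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-GPTQ"  # hypothetical pre-quantized checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the GPTQ quantization config stored in the checkpoint and
# builds the quantized linear layers through auto-gptq.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
model = model.to("hpu").eval()
```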
Contributor
@HolyFalafel, please add the AutoGPTQ installation here:
```bash
BUILD_CUDA_EXT=0 pip install auto-gptq --no-build-isolation
```
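A quick sanity check after installing without the CUDA extension (a sketch; the `__version__` attribute is an assumption about the package layout and may differ between the upstream package and the Habana fork):

```python
# Verify that auto-gptq imports after the CUDA-extension-free install above.
import auto_gptq

print("auto-gptq version:", auto_gptq.__version__)
```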
Contributor
@HolyFalafel, similar functionality has already been added. Please check #1165.
hsubramony added a commit that referenced this pull request on Oct 1, 2024
hsubramony added a commit that referenced this pull request on Oct 1, 2024
Contributor (Author)
#1364 replaces this PR.
Added support for loading AutoGPTQ-quantized models and running inference on HPU.
This will be available in v1.17.
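To make the summary concrete, below is a self-contained sketch of the flow the PR describes: loading a pre-quantized GPTQ model and generating text on HPU. The model id, device handling, and generation arguments are illustrative assumptions, not taken from the PR itself:

```python
# Sketch: end-to-end INT4 (GPTQ) inference on HPU.
# All names below are illustrative; they are not defined by this PR.
import habana_frameworks.torch.core as htcore  # noqa: F401  # registers the "hpu" device
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-GPTQ"  # hypothetical pre-quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("hpu").eval()

inputs = tokenizer("Explain INT4 quantization in one sentence.", return_tensors="pt").to("hpu")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```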