Notifications
[Bugfix] Fix Machete zero point issue for GPTQ models on SM90 #21066
Conversation
Signed-off-by: mgoin <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review
This pull request correctly fixes a crash for GPTQ models using the Machete kernel when an unexpected zero-point tensor is present. My review includes a suggestion to enhance robustness by adding a check for cases where zero points are expected but missing, which could otherwise lead to silent correctness issues.
vllm/model_executor/layers/quantization/kernels/mixed_precision/machete.py
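The filtering behavior this PR describes, plus the guard the review comment suggests, can be sketched roughly as follows. All names here (`maybe_get_zero_points`, `has_zp`, `qzeros` as a plain object) are illustrative assumptions, not the actual identifiers in vLLM's machete.py:

```python
# Illustrative sketch only: function and parameter names are assumptions,
# not the actual vLLM machete.py identifiers.
from typing import Optional


def maybe_get_zero_points(has_zp: bool,
                          qzeros: Optional[object]) -> Optional[object]:
    """Decide which zero-point tensor (if any) to hand to the Machete kernel.

    GPTQ layers always allocate a `qzeros` parameter, so a tensor can be
    present even when the scheme is symmetric (zero_points=False). Passing
    that stray tensor to Machete on SM90 caused the crash, so it is
    filtered out here. Conversely, per the review suggestion, a scheme
    that expects zero points but has none would silently produce wrong
    results, so that case is rejected loudly instead.
    """
    if not has_zp:
        # Symmetric quantization: ignore the always-allocated qzeros tensor.
        return None
    if qzeros is None:
        # Asymmetric quantization without zero points would be silently wrong.
        raise ValueError("zero_points=True but no qzeros tensor was loaded")
    return qzeros
```

With this shape, the GPTQ symmetric path hands `None` to the kernel even though a `qzeros` parameter was allocated, which matches the filtering the fix describes.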
Signed-off-by: mgoin <[email protected]>
LucasWilkinson
left a comment
LGTM! Thanks for the fix
Purpose
FIX #20974, #20986
The issue was that the GPTQ caller was setting `zero_points=False`; however, since GPTQ always allocates its `qzeros` parameter, that tensor still gets loaded into the Machete kernel. We just need to filter out that case to fix the crash.
Test Plan
Manual testing of the issue on H100
Test Result
Before
After