[AWQ] Fix _accumulate_mean bug, move AWQ activation averaging off CPU, and improve logging #2161

Merged

dsikka merged 15 commits into vllm-project:main from ZewenShen-Cohere:awq_bugs on Jan 8, 2026

Conversation

@ZewenShen-Cohere
Contributor

This PR addresses the following issues:

  1. _accumulate_mean produces incorrect output on its first run (a hedged sketch of the fix appears after this list).

  2. cache_smooth_activations_hook previously performed the averaging computation on the CPU. When both the hidden dimension and sequence length are large, this makes AWQ calibration CPU-bound; the slowdown is especially severe when multiple AWQ quantization jobs run concurrently (see the second sketch below).

  3. Adds more informative logging to the AWQ calibration grid search, including per-mapping JSON logs.

This PR is a subset of #2158.
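For context on the first bug, here is a minimal sketch of an incremental-mean helper with the first-run error fixed. The name, signature, and structure are illustrative assumptions, not the actual llm-compressor code:

```python
import torch
from typing import Optional


def accumulate_mean(
    inp: torch.Tensor,
    prev_mean: Optional[torch.Tensor],
    num_samples: int,
) -> tuple[torch.Tensor, int]:
    """Fold a batch of activations into a running per-channel mean.

    Illustrative sketch only; the real _accumulate_mean in
    llm-compressor may differ in naming and shape handling.
    """
    sum_added = inp.sum(dim=0)
    num_added = inp.size(0)

    if prev_mean is None:
        # The bug: returning sum_added directly here yields a sum, not a
        # mean, on the first call. Dividing by the batch size fixes it.
        return sum_added / num_added, num_added

    # Standard incremental update for every later call.
    total = num_samples + num_added
    new_mean = (prev_mean * num_samples + sum_added) / total
    return new_mean, total
```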
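And a sketch of the second fix: keeping the averaging reduction on the activation's own device and only ever moving the small reduced result. The hook shape and the acc_state dict are hypothetical; the real cache_smooth_activations_hook differs:

```python
import torch


def cache_smooth_activations_hook(module, args, output, acc_state: dict):
    """Forward hook that accumulates per-channel activation means.

    Hypothetical sketch: the reduction runs on the tensor's original
    device (typically GPU) instead of round-tripping through the CPU.
    """
    act = args[0].detach()
    # Collapse batch/sequence dims so we average over all tokens; the
    # large activation tensor never leaves its device.
    flat = act.reshape(-1, act.shape[-1]).to(torch.float32)
    batch_mean = flat.mean(dim=0)
    n = flat.shape[0]

    if acc_state.get("mean") is None:
        acc_state["mean"], acc_state["count"] = batch_mean, n
    else:
        count = acc_state["count"]
        acc_state["mean"] = (acc_state["mean"] * count + batch_mean * n) / (count + n)
        acc_state["count"] = count + n
    # Only the reduced hidden-size vector would ever need a .cpu() copy.
```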

@github-actions

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.

@gemini-code-assist
Contributor

Summary of Changes

Hello @ZewenShen-Cohere, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the robustness, performance, and observability of the AWQ (Activation-aware Weight Quantization) calibration process. It addresses a critical bug in the mean accumulation logic, optimizes activation averaging to leverage GPU resources more effectively, and introduces comprehensive logging capabilities to provide better insights into the calibration grid search and its outcomes.

Highlights

  • _accumulate_mean Bug Fix: Corrected an issue where the _accumulate_mean function produced incorrect output on its initial run by ensuring it returns the mean instead of the sum.
  • AWQ Activation Averaging Optimization: Moved the AWQ activation averaging computation off the CPU to the activations' original device (typically GPU). This significantly improves performance and prevents CPU-bound bottlenecks during calibration, especially with large models and concurrent quantization jobs.
  • Enhanced AWQ Calibration Logging and Metrics: Implemented more detailed logging for the AWQ calibration grid search, including a tqdm progress bar for better visibility, real-time updates on the best error found, and the ability to save per-mapping error metrics (initial error, best error, reduction rate) to a JSON file for analysis (a sketch of this logging pattern follows this list).
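As a rough illustration of the logging pattern in the last highlight, here is a self-contained sketch with stand-in data; mappings, grid_search, and the output filename are all assumptions, not the AWQ modifier's real internals:

```python
import json
import random
from pathlib import Path

from tqdm import tqdm

# Stand-ins for the real AWQ calibration internals.
mappings = [f"model.layers.{i}.mlp" for i in range(4)]


def grid_search(name: str) -> tuple[float, float]:
    """Dummy grid search returning (initial_error, best_error)."""
    initial = random.uniform(0.5, 1.0)
    return initial, initial * random.uniform(0.1, 0.9)


metrics = {}
pbar = tqdm(mappings, desc="AWQ grid search")
for name in pbar:
    initial_error, best_error = grid_search(name)
    metrics[name] = {
        "initial_error": initial_error,
        "best_error": best_error,
        "reduction_rate": 1.0 - best_error / initial_error,
    }
    # Surface the best error found so far directly on the progress bar.
    pbar.set_postfix(best_error=f"{best_error:.4g}")

# Persist per-mapping error metrics to JSON for offline analysis.
Path("awq_error_metrics.json").write_text(json.dumps(metrics, indent=2))
```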

@ZewenShen-Cohere
Contributor Author

@brian-dellabetta

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request introduces several valuable improvements to the AWQ modifier. It correctly fixes a bug in _accumulate_mean that caused incorrect output on its first run. Moving the activation averaging computation off the CPU is a significant performance enhancement, especially for large models. The addition of more informative logging, including a progress bar and JSON output for error metrics, greatly improves the usability and observability of the AWQ calibration process. The code is well-structured, but I have a couple of suggestions to enhance robustness and simplify the implementation.

Collaborator

@brian-dellabetta left a comment

Thanks for creating this! The fixes look good. I just have a few questions on the logging part; hopefully we can streamline some more of it.

ZewenShen-Cohere and others added 2 commits December 19, 2025 14:07
@ZewenShen-Cohere
Contributor Author

Thank you for the review, @brian-dellabetta. I've switched to using the logger.

@dsikka dsikka added the ready When a PR is ready for review label Dec 20, 2025
@dsikka
Collaborator

dsikka commented Dec 20, 2025

Can you address the quality issues?

You can do this by running these steps in the root dir:

pip install -e .[dev]
make style
make quality

@ZewenShen-Cohere
Contributor Author

> Can you address the quality issues?
>
> You can do this by running these steps in the root dir:
>
> pip install -e .[dev]
> make style
> make quality

Done

@dsikka dsikka added the awq For any issue / PR related to AWQ support label Dec 20, 2025
@dsikka dsikka requested review from HDCharles and kylesayrs January 6, 2026 15:07
Collaborator

@HDCharles left a comment

Hey, I mentioned before that I fixed the _accumulate_mean bug in a different PR. Was that removed from the code but not the description?

Edit: oh, I see there were multiple bugs.

@HDCharles HDCharles enabled auto-merge (squash) January 6, 2026 15:18
@HDCharles HDCharles disabled auto-merge January 6, 2026 15:18
@HDCharles previously approved these changes Jan 6, 2026
Collaborator

@HDCharles left a comment

Looks good, see comments.

Collaborator

@brian-dellabetta left a comment

Things look great! Thanks for catching the accumulate bug. One comment on the tqdm logging.

Collaborator

@brian-dellabetta left a comment

@dsikka
Collaborator

dsikka commented Jan 7, 2026

@ZewenShen-Cohere do you mind fixing the quality issue by running these commands in your root dir?

make style
make quality

@ZewenShen-Cohere
Contributor Author

@dsikka Done!

@dsikka dsikka enabled auto-merge (squash) January 7, 2026 23:10
@dsikka dsikka merged commit 8b01f3f into vllm-project:main Jan 8, 2026
11 checks passed
jwpark33 pushed a commit to jwpark33/llm-compressor that referenced this pull request Jan 8, 2026