Conversation


HDCharles (Collaborator) commented Jan 5, 2026

SUMMARY:
After the AWQ generalization PR we identified several fixes, TODOs, and improvements to increase AWQ speed. This PR implements most of them; details below.

speedup on:
python /home/HDCharles/repos/llm-compressor/examples/awq/llama_example.py

OLD:
8.00 minutes
GPU memory peak: 10.00 GB

NOW:
7.09 minutes
GPU memory peak: 13.18 GB

RESULT:
11.37% speedup; the memory increase is expected and comes primarily from changes 1 and 4 below.

CHANGES:

  1. Instead of recording the fp16_baseline_output during apply_smoothing, we add a hook so that the output is captured during sequential pipeline execution. We also keep it on device to avoid unnecessary on/offloading.
  2. We concatenate outputs into a single tensor for faster error calculation (a sketch of changes 1 and 2 follows this list).
  3. Instead of recording the entire state dict during compute_best_scale, we only record the state of the balance layers (and keep them onloaded on GPU instead of offloading, since we're storing significantly less now).
  4. Previously we would write the stored value to the balance layer and then update that value based on the scale factor (2 writes); now we calculate the scaled balance layer and update it directly (1 write).
  5. We don't need to update the offloaded parameter when calculating the best scale, only the local one.
  6. Improvement 4 also allows us to be more targeted: previously, during the first write, we would update the whole state dict, which is no longer necessary.
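
A minimal sketch of changes 1 and 2 (illustrative only; Fp16BaselineCache and concatenated_mse are made-up names, not the actual AWQModifier code):

    import torch

    class Fp16BaselineCache:
        """Capture a module's fp16 baseline outputs via a forward hook."""

        def __init__(self):
            # kept on the execution device to avoid on/offloading round trips
            self.outputs = []

        def register(self, module: torch.nn.Module):
            def hook(_module, _inputs, output):
                # change 1: record the baseline output while the sequential
                # pipeline runs, instead of recomputing it in apply_smoothing
                self.outputs.append(output.detach())

            return module.register_forward_hook(hook)

    def concatenated_mse(baseline_outputs, quantized_outputs) -> torch.Tensor:
        # change 2: one fused MSE over concatenated tensors rather than a
        # Python loop of per-batch losses
        baseline = torch.cat([o.reshape(-1) for o in baseline_outputs])
        quantized = torch.cat([o.reshape(-1) for o in quantized_outputs])
        return torch.nn.functional.mse_loss(quantized, baseline)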

Other changes that were tested but not adopted:

  • torch compiling the best_scales_loop (device offloading prevented compilation)
  • calculating mse_loss progressively as each sample is run (slower; see the sketch below)
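
For reference, a rough sketch of the per-sample variant that was measured to be slower (placeholder names, not the exact code that was tested):

    import torch

    def progressive_mse(baseline_outputs, quantized_outputs):
        # accumulate squared error batch by batch; the extra Python-loop and
        # kernel-launch overhead made this slower than one concatenated mse_loss
        total, count = 0.0, 0
        for base, quant in zip(baseline_outputs, quantized_outputs):
            total = total + torch.nn.functional.mse_loss(quant, base, reduction="sum")
            count += base.numel()
        return total / count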

TEST PLAN:
ran AWQ tests and examples to verify correctness

@gemini-code-assist (Contributor)

Summary of Changes

Hello @HDCharles, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces several key optimizations to the AWQ quantization process, primarily focusing on improving execution speed. By refining how FP16 baseline outputs are cached, streamlining error calculation, and making state management more targeted during scale computation, the changes aim to reduce overall processing time. The reported impact is a reduction in quantization time for a Llama example from 7.96 minutes to 6.93 minutes.

Highlights

  • Optimized FP16 Baseline Output Caching: Instead of recomputing FP16 baseline outputs during apply_smoothing, a hook is now used to capture these outputs during the initial sequential pipeline execution, storing them in a dedicated cache for later retrieval.
  • Faster Error Calculation: Outputs are now concatenated into a single tensor for Mean Squared Error (MSE) calculation, significantly speeding up the error computation process by performing it in a single operation rather than batch by batch.
  • Targeted State Management: During compute_best_scale, only the state of the relevant balance layers is recorded and restored, rather than the entire parent module's state dictionary, reducing overhead and improving efficiency.
  • Streamlined Weight Scaling: The process of applying scale factors to weights has been optimized from two write operations (storing then updating) to a single direct calculation and update, improving efficiency and precision.
  • Reduced Offload Parameter Updates: The need to update offload parameters during the best scale calculation has been eliminated, as weight updates are now handled locally and more precisely within the balance layers.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces several well-reasoned performance improvements to the AWQ modifier, focusing on optimizing data handling and reducing redundant computations. The changes, such as caching FP16 baseline outputs and concatenating tensors for faster loss calculation, are effective and result in a significant speedup. My review has identified one critical issue where model weights are not correctly restored after the grid search for scaling factors, which could lead to an incorrect model state. I have provided a code suggestion to address this. Additionally, I've included a medium-severity suggestion to improve code clarity. Overall, this is a great step forward in optimizing the AWQ implementation.
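
As a rough illustration of the targeted state handling (and of the restore step the review above is concerned with), a grid search along these lines could look as follows; balance_layers, scale_candidates, and compute_loss are assumed placeholders, not the actual AWQModifier API:

    import torch

    @torch.no_grad()
    def grid_search_scales(balance_layers, scale_candidates, compute_loss):
        # record only the balance-layer weights (not the whole parent state
        # dict), kept on the execution device
        saved = [layer.weight.data.clone() for layer in balance_layers]

        best_loss, best_scales = float("inf"), None
        for scales in scale_candidates:
            for layer, original in zip(balance_layers, saved):
                # apply the trial scales with a single local write
                layer.weight.data = original * scales.to(original.dtype)
            loss = float(compute_loss())
            if loss < best_loss:
                best_loss, best_scales = loss, scales

        # restore the recorded weights so the module leaves the grid search
        # in its original state
        for layer, original in zip(balance_layers, saved):
            layer.weight.data = original
        return best_scales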

kylesayrs previously approved these changes Jan 5, 2026

@kylesayrs kylesayrs left a comment (Collaborator)

Looks great, awesome improvements

- update_offload_parameter(
-     balance_layer,
-     "weight",
+ balance_layer.weight.data = (
Collaborator

Nice job avoiding writing to the offload

Collaborator

So we don't need update_offload_parameter here because it all happens on the exec device, and the smooth function is done elsewhere after best_scales are calculated?

Collaborator Author

Yeah, we can just keep it in memory and not mess with that.
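
As a small, hypothetical sketch of that point (apply_trial_scales is an illustrative name): the trial update mutates only the on-device tensor, and no offloaded copy needs to be touched because the final smoothing writes the chosen scales afterwards.

    import torch

    @torch.no_grad()
    def apply_trial_scales(balance_layer: torch.nn.Module, scales: torch.Tensor) -> None:
        # only the onloaded weight is modified; update_offload_parameter is not
        # called, so any offloaded copy stays as-is until the real smoothing step
        balance_layer.weight.data = balance_layer.weight.data * scales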


github-actions bot commented Jan 5, 2026

👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.

Note: This is required to complete the testing suite, please only add the label once the PR is code complete and local testing has been performed.

HDCharles added the enhancement, ready, and awq labels on Jan 6, 2026
Commits: a series of commits titled "Summary", each Signed-off-by: HDCharles <[email protected]>. One commit message also includes:

old: (7.96 minutes)
now: (6.93 minutes)

meta llama 3-8b example
@brian-dellabetta brian-dellabetta left a comment (Collaborator)

Looks good! One clarifying question and a question on the increased memory requirements


values = inspect.signature(module.forward).bind(*args, **kwargs)
self._parent_args_cache[module].append(values.arguments)

def cache_fp16_baseline_hook(
Collaborator

if we are now caching output activations for every mapping in a given subgraph, wouldn't this increase memory requirements quite a bit, especially for MoE models? For which model are you seeing the 30% memory increase that you mention in the summary?

Collaborator Author

Hold on, I'm rewriting this; I didn't realize that by default we don't enable offloading, so all my measurements were off. We do need to cache this, but not on GPU, and we can offload it.
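
A minimal sketch of that follow-up idea (hypothetical helpers, not the final implementation): cache the baseline outputs on CPU and bring them back only when the error is computed.

    import torch

    def cache_baseline_offloaded(output: torch.Tensor, cache: list) -> None:
        # keep the cached fp16 baseline on CPU so GPU memory does not grow with
        # the number of mappings (e.g. for MoE models)
        cache.append(output.detach().to("cpu", non_blocking=True))

    def onload_for_loss(cache: list, device) -> torch.Tensor:
        # move back to the execution device only for the MSE computation
        return torch.cat([t.reshape(-1).to(device) for t in cache])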
