
[Feat] [310p] Support w8a8sc quantization method#7075

Merged

wangxiyuan merged 7 commits into vllm-project:main from pu-zhe:w8a8sc on Mar 10, 2026

Conversation

@pu-zhe
Contributor

@pu-zhe pu-zhe commented Mar 9, 2026

What this PR does / why we need it?

New Quantization Method: Introduced support for the W8A8SC static linear quantization scheme specifically for 310P hardware, enabling more efficient model compression.
Refactored the save_sharded_state_310.py script to avoid a multi-process issue.

Does this PR introduce any user-facing change?

No

How was this patch tested?

W8A8SC quant E2E test.
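
For reference, an end-to-end check of a W8A8SC-quantized model could look like the sketch below. The model path, the quantization="ascend" value, and the dtype are illustrative assumptions, not details taken from this PR.

```python
# Hypothetical E2E smoke test for a W8A8SC-quantized model on 310P.
# The model path and the quantization/dtype values are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/w8a8sc-quantized-model",  # assumed: produced offline
    quantization="ascend",  # assumed entry point for Ascend quant schemes
    dtype="float16",
    enforce_eager=True,
)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```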

pu-zhe added 4 commits March 9, 2026 09:56
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: pu-zhe <zpuaa@outlook.com>
@pu-zhe pu-zhe requested a review from wangxiyuan as a code owner March 9, 2026 07:46
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a new W8A8SC quantization method tailored for 310P hardware, enhancing the framework's capability to compress models more effectively. It includes the core implementation of the quantization scheme and updates to an existing example script to showcase its application, alongside improvements in how quantization configuration files are managed during the compression workflow.

Highlights

  • New Quantization Method: Introduced support for the W8A8SC static linear quantization scheme specifically for 310P hardware, enabling more efficient model compression.
  • Example Script Update: The save_sharded_state_310.py example script was updated to demonstrate the usage of the new W8A8SC quantization method, including new command-line arguments for enforce-eager, dtype, and quantization (an illustrative invocation follows this list).
  • Quantization Description Handling Refinement: Refactored the update_quant_description function to explicitly handle original and target JSON file paths, improving clarity and robustness in managing quantization metadata during the compression process. This also includes new logic for backing up the quant_model_description.json file.
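
As referenced in the second highlight, an invocation of the updated script might look like this. Only --enforce-eager, --dtype, and --quantization are named in this PR; the remaining arguments and all values shown are placeholders:

```bash
# Hypothetical usage of the updated example script; --model/--output and
# every value shown are assumptions; only the three new flags come from the PR.
python examples/save_sharded_state_310.py \
    --model /path/to/source/model \
    --output /path/to/compressed/output \
    --quantization ascend \
    --dtype float16 \
    --enforce-eager
```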


Changelog
  • examples/save_sharded_state_310.py
    • Updated example usage command to include new quantization-related arguments (--enforce-eager, --dtype, --quantization).
    • Added QUANT_DESCRIPTION_FNAME constant for the quantization model description file name.
    • Modified update_quant_description function signature to accept separate ori_json_file and target_json_file arguments, and updated its internal logic to use these.
    • Removed previous logic for renaming quant_model_description.json within weight_compress_worker.
    • Implemented new logic in main to rename quant_model_description.json to ori_quant_model_description.json before processing.
    • Updated the call to update_quant_description in main to pass the new original and target file paths.
  • vllm_ascend/_310p/quantization/methods/__init__.py
    • Imported the newly added w8a8sc module to register the W8A8SC quantization scheme.
  • vllm_ascend/_310p/quantization/methods/w8a8sc.py
    • Added a new file implementing the AscendW8A8SCLinearMethod310 class, which defines the W8A8SC static linear quantization scheme for 310P (a minimal sketch follows this changelog).
    • Included methods for get_weight, get_pertensor_param, get_perchannel_param to define the structure of quantized weights and parameters.
    • Implemented the apply method for performing quantized matrix multiplication using torch_npu.npu_matmul_compress_dequant.
    • Added process_weights_after_loading to handle post-loading weight adjustments, including input scale, offset, and bias handling for parallel linear layers.
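
To make the new scheme's shape concrete, here is a minimal sketch of what a class like AscendW8A8SCLinearMethod310 could look like. Only the class name, the method names, and the torch_npu.npu_matmul_compress_dequant kernel come from this changelog; parameter names, dtypes, and the kernel's argument order are assumptions.

```python
# Illustrative sketch only: parameter names, dtypes, and the kernel's
# argument order are assumptions; clamping and edge cases are omitted.
import torch
import torch_npu  # Ascend extension providing npu_matmul_compress_dequant


class AscendW8A8SCLinearMethod310:
    """W8A8SC static linear quantization scheme for 310P (sketch)."""

    @staticmethod
    def get_weight(input_size: int, output_size: int) -> dict:
        # Placeholder for the sparse-compressed int8 weight; the real buffer
        # is filled from the offline-compressed checkpoint at load time.
        return {"weight": torch.empty(output_size, input_size, dtype=torch.int8)}

    @staticmethod
    def get_pertensor_param(params_dtype: torch.dtype) -> dict:
        # Static (per-tensor) activation quantization parameters.
        return {
            "input_scale": torch.empty(1, dtype=params_dtype),
            "input_offset": torch.empty(1, dtype=torch.int8),
        }

    @staticmethod
    def get_perchannel_param(output_size: int) -> dict:
        # Per-output-channel dequantization scale and quantized bias.
        return {
            "deq_scale": torch.empty(output_size, dtype=torch.int64),
            "quant_bias": torch.empty(output_size, dtype=torch.int32),
        }

    def apply(self, layer, x: torch.Tensor) -> torch.Tensor:
        # Statically quantize the activation, then run the fused
        # compressed-weight matmul + dequantization kernel.
        x_int8 = torch.round(x / layer.input_scale + layer.input_offset).to(torch.int8)
        return torch_npu.npu_matmul_compress_dequant(
            x_int8, layer.weight, layer.deq_scale, layer.quant_bias)

    def process_weights_after_loading(self, layer) -> None:
        # Post-load fix-ups (input scale/offset and bias handling for
        # parallel linear layers), per the changelog entry above.
        ...
```

The script-side change follows a pattern worth spelling out: perform the one-time rename in the parent process before spawning compression workers, so the workers never race on the same file. A hedged sketch, with everything beyond the names listed in the changelog assumed:

```python
import os

QUANT_DESCRIPTION_FNAME = "quant_model_description.json"
ORI_QUANT_DESCRIPTION_FNAME = "ori_quant_model_description.json"  # assumed constant name


def main(model_dir: str) -> None:
    # Rename once, up front, in the parent process; doing this inside
    # weight_compress_worker would let multiple workers race on the file.
    src = os.path.join(model_dir, QUANT_DESCRIPTION_FNAME)
    dst = os.path.join(model_dir, ORI_QUANT_DESCRIPTION_FNAME)
    if os.path.exists(src):
        os.rename(src, dst)
    # ... spawn weight-compression workers, then rebuild the description:
    # update_quant_description(ori_json_file=dst, target_json_file=src)
```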

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces support for the W8A8SC quantization method on 310P devices. The changes include the implementation of the new quantization scheme, modifications to the save_sharded_state_310.py example to enable compression into this format, and the necessary integration. The implementation is largely sound, but I've identified a couple of areas for improvement: the memory allocation for the compressed weight placeholder in the new scheme is inefficient, and the example script has debugging enabled by default, which could impact performance. My review includes suggestions to address these points.
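
On the placeholder-allocation point: a common way to make a large placeholder cheap is to allocate it uninitialized rather than zero-filled, since it will be overwritten by loaded weights anyway. The snippet below is a generic illustration of that idea, not the exact fix proposed in the review:

```python
import torch

out_features, in_features = 4096, 4096  # illustrative sizes

# torch.zeros touches every byte of a large int8 placeholder up front:
w_zeroed = torch.zeros(out_features, in_features, dtype=torch.int8)

# torch.empty allocates without initializing, which suffices for a buffer
# that is immediately overwritten during weight loading:
w_empty = torch.empty(out_features, in_features, dtype=torch.int8)
```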

Comment thread: vllm_ascend/_310p/quantization/methods/w8a8sc.py
@github-actions
Contributor

github-actions Bot commented Mar 9, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

pu-zhe added 3 commits March 9, 2026 16:26
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: pu-zhe <zpuaa@outlook.com>
@wangxiyuan wangxiyuan merged commit 5df450b into vllm-project:main Mar 10, 2026
36 checks passed
@pu-zhe pu-zhe deleted the w8a8sc branch March 12, 2026 00:58
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026
### What this PR does / why we need it?
New Quantization Method: Introduced support for the W8A8SC static linear
quantization scheme specifically for 310P hardware, enabling more
efficient model compression.
Refactored the save_sharded_state_310.py script to avoid a multi-process issue.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
W8A8SC quant E2E test.

- vLLM version: v0.16.0
- vLLM main: vllm-project/vllm@4034c3d

---------

Signed-off-by: pu-zhe <zpuaa@outlook.com>