
[Feat] Support native Kimi-K2-Thinking W4A16 quantized experts weights #4516

Merged
wangxiyuan merged 23 commits into vllm-project:main from zhoux77899:k2-thinking
Dec 10, 2025
Conversation

Contributor

zhoux77899 commented Nov 28, 2025

What this PR does / why we need it?

Adds a W4A16 quantization method for the Kimi-K2-Thinking model and updates the relevant modules to support it.

  • Implements the complete W4A16 quantization method, including weight packing/unpacking, per-group quantization parameter generation, post-processing logic, and the MoE method application (see the sketch after this list).
  • Adds the parameters use_int4_w4a16, w1_offset, and w2_offset, and adjusts the with_quant conditional logic to support W4A16 matrix multiplication.
  • Adds a packed_modules_model_mapping entry for the Kimi-K2-Thinking model and processing logic for the weight_packed field.
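For illustration, here is a minimal sketch of per-group W4A16 weight quantization and int4 packing. The group size, function names, and packing layout are assumptions for this example, not the PR's actual implementation:

import torch

GROUP_SIZE = 128  # hypothetical per-group granularity

def quantize_w4a16(weight: torch.Tensor):
    """Per-group asymmetric int4 quantization of a 2-D weight (out, in).
    Assumes in_features is divisible by GROUP_SIZE."""
    out_features, in_features = weight.shape
    grouped = weight.reshape(out_features, in_features // GROUP_SIZE, GROUP_SIZE)
    w_max = grouped.amax(dim=-1, keepdim=True)
    w_min = grouped.amin(dim=-1, keepdim=True)
    scale = ((w_max - w_min) / 15.0).clamp(min=1e-8)  # int4 range [0, 15]
    offset = torch.round(-w_min / scale)              # per-group zero point
    q = torch.clamp(torch.round(grouped / scale) + offset, 0, 15).to(torch.uint8)
    return q.reshape(out_features, in_features), scale.squeeze(-1), offset.squeeze(-1)

def pack_int4(q: torch.Tensor) -> torch.Tensor:
    """Pack pairs of 4-bit values along the last dim into one byte."""
    low, high = q[..., 0::2], q[..., 1::2]
    return low | (high << 4)  # stays uint8; one possible packing layout

At load time the inverse applies: unpack each byte into two int4 values, then dequantize per group as w = (q - offset) * scale.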

Does this PR introduce any user-facing change?

None.

How was this patch tested?

[Screenshot: k2-kimi-thinking inference test results]

…mi-K2-Thinking quantized experts weights

Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

Contributor

gemini-code-assist (Bot) left a comment


Code Review

This pull request adds support for W4A16 quantization for MoE layers, specifically for Kimi-K2 models. The changes include a new quantization method AscendW4A16FusedMoEMethod, modifications to the MoE MLP logic to handle the new format, and updates to configuration files. Additionally, a bug fix in the rotary embedding implementation is included, which prevents a potential crash. The implementation for W4A16 seems consistent with existing quantization methods for Ascend NPUs. The bug fix is a welcome improvement to robustness.

Comment on lines +69 to +70
if hasattr(self, "cos") and hasattr(self, "sin") and \
self.cos is not None and self.sin is not None:
Contributor


Severity: high

This change correctly prevents a potential AttributeError. In the previous implementation, if _rope_forward_oot ran before self.cos and self.sin had been initialized (for example, when is_first_layer was False on the first execution of the calling AscendRotaryEmbedding.forward_oot), the attribute access would crash. The added hasattr checks ensure the attributes exist before they are accessed, making the code more robust.
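For context, a minimal sketch of the lazy-initialization guard pattern under discussion; the class shape, cache math, and fallback are assumptions for illustration, not the actual vllm-ascend implementation:

import torch

class RotaryEmbeddingSketch:

    def _build_cache(self, positions: torch.Tensor) -> None:
        # Hypothetical lazy initialization of the cos/sin caches,
        # normally triggered on the first layer's forward pass.
        angles = positions.float().unsqueeze(-1)
        self.cos = torch.cos(angles)
        self.sin = torch.sin(angles)

    def _rope_forward_oot(self, x: torch.Tensor) -> torch.Tensor:
        # Guard: the caches may not exist yet if this path runs before
        # the first-layer initialization, so check both presence and value.
        if hasattr(self, "cos") and hasattr(self, "sin") and \
                self.cos is not None and self.sin is not None:
            return x * self.cos + x * self.sin  # placeholder for the real rotation
        return x  # sketch-only fallback: pass input through unrotated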

Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
@github-actions (Bot) added the documentation (Improvements or additions to documentation) and module:tests labels on Nov 29, 2025
zhoux77899 and others added 3 commits November 29, 2025 15:03
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
@github-actions
Contributor

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Signed-off-by: Ruri <33858552+zhoux77899@users.noreply.github.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Review thread on tests/e2e/multicard/test_offline_inference_distributed.py
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
@MengqingCao added the ready (read for review) and ready-for-test (start test by label for PR) labels on Dec 2, 2025
zhoux77899 and others added 6 commits December 3, 2025 09:10
…ze` attr

Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Signed-off-by: zhoux77899 <zhouxiang100@huawei.com>
Review threads on vllm_ascend/quantization/w4a16.py (four threads, one outdated)
@wangxiyuan wangxiyuan merged commit ce58727 into vllm-project:main Dec 10, 2025
27 checks passed
@zhoux77899 zhoux77899 deleted the k2-thinking branch December 10, 2025 07:59
Comment on lines +272 to +274
def _is_w4a16(self, weight_quant: QuantizationArgs) -> bool:
is_4_bits = weight_quant.num_bits == 4
return is_4_bits
Contributor


The W4A16 quantization check is incomplete:

  1. Verify the weight QuantizationArgs strategy (per-group quantization is expected for W4A16).
  2. Confirm that the activation QuantizationArgs is empty, since W4A16 leaves activations in 16-bit.

A sketch of the fuller check follows this list.
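For illustration, a minimal sketch of the fuller check the reviewer is asking for; the strategy value and the input_quant parameter are assumptions based on common compressed-tensors conventions, not the PR's final code:

def _is_w4a16(self, weight_quant: QuantizationArgs,
              input_quant: QuantizationArgs | None) -> bool:
    # W4: weights are 4-bit and quantized per group.
    is_int4_weight = (weight_quant is not None
                      and weight_quant.num_bits == 4
                      and weight_quant.strategy == "group")
    # A16: activations are left unquantized, i.e. kept in 16-bit.
    is_16bit_activation = input_quant is None
    return is_int4_weight and is_16bit_activation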

Comment on lines +157 to +160
if isinstance(layer, FusedMoE):
layer.ascend_quant_method = COMPRESSED_TENSORS_METHOD
# collect schemes
quant_scheme = self.get_scheme(layer=layer, layer_name=prefix)
Contributor


The target_scheme_map only contains the "Linear" key; how can you obtain a scheme specific to FusedMoE? (See the sketch below.)
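For illustration, one way the lookup could be made explicit; this is a sketch only, and the map contents and fallback behavior are assumptions, not the PR's actual resolution of this question:

def get_scheme_for_layer(target_scheme_map: dict, layer_name: str):
    # The checkpoint's scheme map is keyed by target patterns such as
    # "Linear"; a FusedMoE layer has no dedicated key, so the lookup
    # must either fall back to the "Linear" scheme or fail loudly.
    if "FusedMoE" in target_scheme_map:
        return target_scheme_map["FusedMoE"]
    if "Linear" in target_scheme_map:
        # MoE expert weights are linear projections, so reusing the
        # "Linear" scheme is a plausible (but implicit) fallback.
        return target_scheme_map["Linear"]
    raise ValueError(f"No quantization scheme found for {layer_name}")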


Labels

  • documentation (Improvements or additions to documentation)
  • module:ops
  • module:quantization
  • module:tests
  • ready (read for review)
  • ready-for-test (start test by label for PR)

6 participants