
[Bugfix] [NPU] bugfixes for running deepseek w4a8 quantization#14542

Closed
iforgetmyname wants to merge 2 commits into sgl-project:main from iforgetmyname:modelslim_fix

Conversation

@iforgetmyname (Collaborator) commented Dec 6, 2025

Motivation

Fixed in #14806; this PR is closed.

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @iforgetmyname, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers essential bugfixes aimed at enhancing the stability and correctness of DeepSeek W4A8 quantization when running on NPU hardware. The changes primarily refine the attention mechanism's core operations and logic, and correct how weights are accessed within the fused Mixture of Experts (MoE) method, which collectively ensure more accurate and efficient model execution.

Highlights

  • NPU Attention Core Optimization: The batch matrix multiplication operation in deepseek_v2_attention_mla_npu.py has been optimized for NPU by adjusting the output tensor shape and replacing torch.bmm with the NPU-specific torch.ops.npu.batch_matmul_transpose for improved efficiency and compatibility.
  • Expanded Attention Forward Mode Logic: The conditional logic for transposing attention output in deepseek_v2_attention_mla_npu.py has been refined to correctly handle decode, draft_extend, and target_verify forward modes, ensuring proper data flow in various DeepSeek V2 attention scenarios.
  • Fused MoE Weight Access Correction: A bug in fused_moe_method_npu.py was fixed where MoE layer weights, scales, and biases were incorrectly accessed using self instead of the layer object, ensuring that the correct parameters are used during DeepSeek W4A8 quantization.
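
The MoE weight-access fix can be illustrated with a minimal, hypothetical sketch in plain Python. The class and attribute names (W4A8MoEMethod, w13_weight, and the scalar values) are illustrative stand-ins, not the actual sgl-project code; the point is only that a quantization method object must read parameters from the layer it is applied to, not from itself:

```python
class MoELayer:
    """Owns the quantized parameters (weights, scales, biases)."""
    def __init__(self):
        self.w13_weight = 2.0        # illustrative scalar stand-ins
        self.w13_weight_scale = 0.5  # for real tensor parameters

class W4A8MoEMethod:
    """Stateless method object. Pre-fix code read `self.w13_weight`,
    which would fail (or use the wrong object) because the parameters
    live on the layer passed into apply(), not on the method itself."""
    def apply(self, layer, x):
        # Post-fix: access parameters via `layer`, the object that owns them.
        return x * layer.w13_weight * layer.w13_weight_scale

print(W4A8MoEMethod().apply(MoELayer(), 3.0))  # 3.0 * 2.0 * 0.5 = 3.0
```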

@gemini-code-assist bot left a comment


Code Review

This pull request introduces bugfixes for running DeepSeek models with W4A8 quantization on NPU hardware. The changes primarily involve replacing torch.bmm with a more efficient NPU-specific operator and correcting weight access in the MoE method. My review confirms that most changes are correct and beneficial. However, I've identified a critical issue in forward_mla_core_npu where a necessary reshape operation is missing after the batch_matmul_transpose call, which would lead to a shape mismatch for the subsequent projection layer. I've provided a code suggestion to address this.

)
torch.ops.npu.batch_matmul_transpose(attn_output, m.w_vc, attn_bmm_output)

output, _ = m.o_proj(attn_bmm_output)

critical

The attn_bmm_output tensor has a 3D shape of (num_tokens, num_local_heads, v_head_dim), but m.o_proj (a RowParallelLinear layer) expects a 2D input where the last dimension is m.num_local_heads * m.v_head_dim. You should reshape attn_bmm_output before passing it to m.o_proj. This is consistent with how it's handled in forward_dsa_core_npu and forward_mha_core_npu.

Suggested change
output, _ = m.o_proj(attn_bmm_output)
attn_bmm_output = attn_bmm_output.reshape(-1, m.num_local_heads * m.v_head_dim)
output, _ = m.o_proj(attn_bmm_output)
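
The shape mismatch behind this suggestion can be sketched with NumPy as a stand-in for torch tensors (reshape semantics are the same; the concrete sizes below are illustrative, not taken from the model config):

```python
import numpy as np

# Illustrative sizes; num_local_heads and v_head_dim are stand-ins.
num_tokens, num_local_heads, v_head_dim = 4, 8, 16

# batch_matmul_transpose writes a 3D result into attn_bmm_output:
attn_bmm_output = np.zeros((num_tokens, num_local_heads, v_head_dim))

# o_proj (a RowParallelLinear) expects a 2D input whose last dimension
# is num_local_heads * v_head_dim, so flatten the head dimensions first:
flat = attn_bmm_output.reshape(-1, num_local_heads * v_head_dim)
print(flat.shape)  # (4, 128)
```

Passing the 3D tensor directly would make the projection's matrix multiply see the wrong trailing dimension, which is why the review flags the missing reshape as critical.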

@ping1jing2 ping1jing2 self-assigned this Dec 8, 2025
@iforgetmyname iforgetmyname deleted the modelslim_fix branch December 12, 2025 08:31
