[Ascend] optimize Qwen3 on Ascend #10574
Conversation
Summary of Changes
Hello @ping1jing2, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a series of performance optimizations specifically tailored for running the Qwen3 model on Ascend NPUs. The changes aim to significantly enhance inference speed and efficiency by leveraging high-performance NPU-specific operations for attention, implementing intelligent weight prefetching, and optimizing data formats for matrix multiplications. These improvements collectively contribute to a more efficient utilization of Ascend hardware resources.
Highlights
- Optimized Attention Mechanism: Switched to `torch_npu._npu_paged_attention` for improved performance in the attention layer on Ascend NPUs, replacing the previous fused attention operator.
- Weight Prefetching: Implemented `torch_npu.npu_prefetch` to proactively load MLP weights (`gate_up`, `down_proj`) into cache, aiming to overlap memory access with computation and reduce latency.
- Data Format Acceleration: Utilized the `FRACTAL_NZ` data format via `torch_npu.npu_format_cast` to accelerate General Matrix Multiply (GEMM) operations, particularly for quantized weights.
- NPU Configuration Enhancements: Configured `torch.npu` to allow internal data formats and disabled JIT compilation, which are crucial for better NPU integration and performance. (A minimal configuration sketch follows this list.)
- Dynamic NPUGraphRunner Updates: Adapted the NPUGraphRunner to dynamically handle input updates based on the specific attention architecture (MLA vs. paged attention) being used, ensuring compatibility and efficiency.
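As context for the configuration highlight, here is a minimal sketch of what enabling internal data formats and disabling JIT compilation typically looks like with torch_npu; the attribute and function names are assumptions based on common torch_npu usage, not code taken from this PR.

```python
# Minimal sketch, assuming a standard torch_npu installation; attribute names
# follow common torch_npu usage and may differ between torch_npu releases.
import torch
import torch_npu  # registers the torch.npu backend

# Allow NPU-internal (private) weight layouts such as FRACTAL_NZ.
torch.npu.config.allow_internal_format = True

# Disable operator JIT compilation so prebuilt binary kernels are used,
# avoiding first-run compilation overhead on Ascend.
torch_npu.npu.set_compile_mode(jit_compile=False)
```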
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Summary of Changes
Hello @ping1jing2, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces significant optimizations for the Qwen3 model when running on Ascend NPUs. By integrating NPU-specific high-performance operations for attention, implementing weight prefetching, and leveraging specialized data formats, the changes aim to boost the model's inference speed and overall efficiency on Ascend hardware.
Highlights
- Attention Mechanism Optimization: Switched to `torch_npu._npu_paged_attention` for improved performance in the attention layer on Ascend NPUs, replacing the older `npu_fused_infer_attention_score` API.
- Weight Prefetching: Implemented `torch_npu.npu_prefetch` to overlap memory access time by prefetching weights for `matmul` operations, specifically for the `gate_up` and `down_proj` layers in the MLP.
- Data Format Acceleration: Utilized the `FRACTAL_NZ` data format (format 29) via `torch_npu.npu_format_cast` to accelerate General Matrix Multiply (GEMM) operations, particularly for quantized linear layers on Ascend. (A hedged sketch of the prefetch and format-cast calls follows this list.)
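To make the prefetch and format-cast highlights concrete, below is a hedged sketch; the helper names, the prefetch budget, and the assumed `torch_npu.npu_prefetch(input, dependency, max_size)` signature are illustrative assumptions, not code from this PR.

```python
# Illustrative sketch only: constants and helper names are hypothetical, and the
# assumed torch_npu signatures should be checked against the torch_npu release in use.
import torch
import torch_npu

ACL_FORMAT_FRACTAL_NZ = 29                    # NPU-internal format id ("format 29")
MLP_WEIGHT_PREFETCH_BYTES = 18 * 1024 * 1024  # hypothetical prefetch budget

def cast_weight_to_nz(linear: torch.nn.Linear) -> None:
    # One-time layout conversion of a linear weight to FRACTAL_NZ so subsequent
    # GEMMs avoid runtime format conversion.
    linear.weight.data = torch_npu.npu_format_cast(
        linear.weight.data, ACL_FORMAT_FRACTAL_NZ
    )

def prefetch_mlp_weight(weight: torch.Tensor, hidden_states: torch.Tensor) -> None:
    # Asynchronously stage the weight into cache; the dependency tensor lets the
    # runtime overlap the copy with the computation that produces hidden_states.
    torch_npu.npu_prefetch(weight, hidden_states, MLP_WEIGHT_PREFETCH_BYTES)
```

In a Qwen3 MLP, calls like these would typically sit just before the corresponding `gate_up` and `down_proj` matmuls, which is also where the synchronization concern raised in the review below applies.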
Code Review
This pull request introduces several performance optimizations for Qwen3 models on Ascend NPUs, including using a high-performance paged attention kernel, prefetching MLP weights to overlap memory access with computation, and using the FRACTAL_NZ data format for weights to accelerate GEMM operations. The changes are mostly sound and target specific Ascend hardware features. However, I've found a critical race condition in the weight prefetching logic where weights might be used before they are fully prefetched. I've also pointed out a potential bug in the usage of the npu_prefetch API and a minor issue with a magic number. Addressing these points will ensure the correctness and robustness of these optimizations.
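For readers unfamiliar with the kernel under review, here is a hedged sketch of a decode-time paged-attention call; `torch_npu._npu_paged_attention` is a private op, and the argument names below are assumptions drawn from how it is commonly invoked in Ascend inference backends rather than from this PR's diff.

```python
# Hedged sketch of the decode path; argument names are assumptions and the
# private _npu_paged_attention op can change between torch_npu releases.
import torch
import torch_npu

def paged_attention_decode(
    query: torch.Tensor,         # [num_tokens, num_heads, head_dim]
    key_cache: torch.Tensor,     # paged KV cache blocks
    value_cache: torch.Tensor,
    block_table: torch.Tensor,   # [num_seqs, max_blocks_per_seq]
    context_lens: torch.Tensor,  # [num_seqs]
    num_heads: int,
    num_kv_heads: int,
    scale: float,
) -> torch.Tensor:
    # The kernel writes its result into a preallocated output tensor.
    output = torch.empty_like(query)
    torch_npu._npu_paged_attention(
        query=query,
        key_cache=key_cache,
        value_cache=value_cache,
        num_kv_heads=num_kv_heads,
        num_heads=num_heads,
        scale_value=scale,
        block_table=block_table,
        context_lens=context_lens,
        out=output,
    )
    return output
```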
* origin/qwen3: (30 commits)
  - chore: bump sgl-kernel 0.3.11 (sgl-project#10630)
  - feat: add fused moe config for Qwen3-Next-80B-A3B-Instruct on B200 (sgl-project#10631)
  - model support: Sarashina2VisionForCausalLM (sgl-project#10632)
  - [Performance] Qwen3-Next: speed up update_mamba_state_after_mtp_verify by 10x; e2e up to 3.54% faster (sgl-project#10586)
  - [Performance] Qwen3-Next: replace arange to cached query_start_loc_li… (sgl-project#10553)
  - [Feature] Speculative decoding support lookahead (sgl-project#9873)
  - refactor: use registry for _get_attention_backend_from_str (sgl-project#10629)
  - [router] refactor worker to builder pattern 1/n (sgl-project#10628)
  - Garbage collector regression in the online server (sgl-project#10621)
  - feat: Add FlexAttention Backend for Efficient Sparse Attention (sgl-project#9947)
  - Fix bias handling in TritonMoeQuantInfo within quantization/mxfp4.py (sgl-project#10579)
  - [Performance] qwen3-next improve causal conv1d in prefill phase (sgl-project#10595)
  - Fix sgl_kernel import failure on devices other than CUDA (sgl-project#10610)
  - support qwen3-next-fp8 deepep (sgl-project#10622)
  - update deepep version for qwen3-next deepep moe (sgl-project#10624)
  - Feat/add heartbeat mechanism for nixl conn (sgl-project#10222)
  - [RL] Add destroy process group api (sgl-project#9979)
  - fix deepep assert when PD disaggregation == null (sgl-project#8274)
  - Scale kkt after reduction (sgl-project#10604)
  - [improvement] add average input/output token length for hicache benchmark stats output (sgl-project#10525)
  - ...
Co-authored-by: c30031083 <chenxu140@huawei.com>
Motivation
related to #10337
Modifications
- Use the high-performance attention op `torch_npu._npu_paged_attention` in ACLGraph.
  - The internal testing is ready; however, the relevant software packages have not been released, so they are temporarily deleted.

Accuracy Tests
Benchmarking and Profiling
before
(profiling screenshot)
after
(profiling screenshot)
Checklist