
[Ascend] optimize Qwen3 on Ascend #10574

Merged

zhyncs merged 9 commits into sgl-project:main from ping1jing2:qwen3 on Sep 23, 2025

Conversation

@ping1jing2 (Collaborator) commented on Sep 17, 2025

Motivation

Related to #10337.

Modifications

  1. Use the high-performance attention op torch_npu._npu_paged_attention inside ACLGraph. Internal testing is complete; however, the required software packages have not been released yet, so the related changes are removed from this PR for now. (A hedged call sketch follows this list.)
  2. Cache Management Operation (CMO): use torch_npu.npu_prefetch to prefetch the matmul weights (gate_up and down proj) while other AIV kernels are running, aiming to overlap the memory-access time with computation. (See the second sketch below.)
  3. Cast weights to FRACTAL_NZ, a private Ascend data format described in the hiascend docs, via torch_npu.npu_format_cast to accelerate GEMM. (See the third sketch below.)
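
The following is a minimal sketch of a decode-time call to torch_npu._npu_paged_attention (item 1). The keyword arguments follow the signature this private op exposes elsewhere in the Ascend ecosystem; the helper name, tensor layouts, and metadata variables are illustrative assumptions, not code from this PR:

```python
import torch_npu

def paged_attention_decode(query, key_cache, value_cache, block_table,
                           context_lens, num_heads, num_kv_heads, scale, out):
    # Assumed layouts (illustrative):
    #   query:        [num_seqs, num_heads, head_size]
    #   key_cache:    [num_blocks, block_size, num_kv_heads, head_size]
    #   value_cache:  same layout as key_cache
    #   block_table:  [num_seqs, max_blocks_per_seq], int32 KV-block indices
    #   context_lens: [num_seqs], int32 current sequence lengths
    torch_npu._npu_paged_attention(
        query=query,
        key_cache=key_cache,
        value_cache=value_cache,
        num_kv_heads=num_kv_heads,
        num_heads=num_heads,
        scale_value=scale,
        block_table=block_table,
        context_lens=context_lens,
        out=out,  # written in place, which lets ACLGraph capture the call
    )
    return out
```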
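
Item 2 amounts to issuing torch_npu.npu_prefetch on a weight just before the kernels whose execution can hide the fetch latency. Below is a hedged sketch against a Qwen3-style MLP; the module names (gate_up_proj, act_fn, down_proj) mirror Qwen3's MLP, and the byte cap is an arbitrary illustrative value rather than the PR's tuned number:

```python
import torch_npu

PREFETCH_MAX_BYTES = 18 * 1024 * 1024  # illustrative cap, not the tuned value

def mlp_forward(self, hidden_states):
    # Pull the gate_up weight toward the cache while earlier AIV kernels are
    # still running; hidden_states is the dependency tensor that determines
    # when the asynchronous prefetch may start.
    torch_npu.npu_prefetch(self.gate_up_proj.weight, hidden_states,
                           PREFETCH_MAX_BYTES)
    gate_up, _ = self.gate_up_proj(hidden_states)
    x = self.act_fn(gate_up)
    # Likewise overlap the down_proj weight fetch with the activation kernel.
    torch_npu.npu_prefetch(self.down_proj.weight, x, PREFETCH_MAX_BYTES)
    out, _ = self.down_proj(x)
    return out
```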
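
Item 3 is a one-time, post-load conversion: casting a linear weight from the default ND layout to FRACTAL_NZ so later GEMMs skip the per-call layout transformation. A minimal sketch, assuming a module with a plain weight attribute (the format ID 29 for FRACTAL_NZ comes from the Ascend ACL format table):

```python
import torch
import torch_npu

ACL_FORMAT_FRACTAL_NZ = 29  # Ascend's private NZ layout (see the hiascend docs)

def cast_weight_to_nz(module: torch.nn.Module) -> None:
    # Convert in place after weight loading; GEMMs consuming this weight can
    # then read the NZ layout directly instead of converting ND -> NZ each call.
    module.weight.data = torch_npu.npu_format_cast(
        module.weight.data, ACL_FORMAT_FRACTAL_NZ)
```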

Accuracy Tests

(accuracy results attached as a screenshot in the original PR)

Benchmarking and Profiling

Before: (profiling screenshot in the original PR)

After: (performance profiling screenshot in the original PR)


@gemini-code-assist (bot) left a comment

Summary of Changes

This pull request introduces a series of performance optimizations specifically tailored for running the Qwen3 model on Ascend NPUs. The changes aim to significantly enhance inference speed and efficiency by leveraging high-performance NPU-specific operations for attention, implementing intelligent weight prefetching, and optimizing data formats for matrix multiplications. These improvements collectively contribute to a more efficient utilization of Ascend hardware resources.

Highlights

  • Optimized Attention Mechanism: Switched to torch_npu._npu_paged_attention for improved performance in the attention layer on Ascend NPUs, replacing the previous fused attention operator.
  • Weight Prefetching: Implemented torch_npu.npu_prefetch to proactively load MLP weights (gate_up, down proj) into cache, aiming to overlap memory access with computation and reduce latency.
  • Data Format Acceleration: Utilized the FRACTAL_NZ data format via torch_npu.npu_format_cast to accelerate General Matrix Multiply (GEMM) operations, particularly for quantized weights.
  • NPU Configuration Enhancements: Configured torch.npu to allow internal data formats and disabled JIT compilation, which are crucial for better NPU integration and performance (a configuration sketch follows this list).
  • Dynamic NPUGraphRunner Updates: Adapted the NPUGraphRunner to dynamically handle input updates based on the specific attention architecture (MLA vs. paged attention) being used, ensuring compatibility and efficiency.
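
The NPU configuration bullet maps onto two public torch_npu switches. A minimal sketch of the usual pattern, assuming these are the settings meant (not the exact lines from the PR):

```python
import torch
import torch_npu

# Let operators keep Ascend-private internal layouts (e.g. FRACTAL_NZ)
# instead of forcing outputs back to the generic ND format.
torch.npu.config.allow_internal_format = True

# Prefer pre-built binary kernels over runtime JIT compilation.
torch_npu.npu.set_compile_mode(jit_compile=False)
```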

@gemini-code-assist (bot) left a comment

Summary of Changes

This pull request introduces significant optimizations for the Qwen3 model when running on Ascend NPUs. By integrating NPU-specific high-performance operations for attention, implementing weight prefetching, and leveraging specialized data formats, the changes aim to boost the model's inference speed and overall efficiency on Ascend hardware.

Highlights

  • Attention Mechanism Optimization: Switched to torch_npu._npu_paged_attention for improved performance in the attention layer on Ascend NPUs, replacing the older npu_fused_infer_attention_score API.
  • Weight Prefetching: Implemented torch_npu.npu_prefetch to overlap memory access time by prefetching weights for matmul operations, specifically for the gate_up and down_proj layers in the MLP.
  • Data Format Acceleration: Utilized the FRACTAL_NZ data format (format 29) via torch_npu.npu_format_cast to accelerate General Matrix Multiply (GEMM) operations, particularly for quantized linear layers on Ascend.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces several performance optimizations for Qwen3 models on Ascend NPUs, including using a high-performance paged attention kernel, prefetching MLP weights to overlap communication and computation, and using the FRACTAL_NZ data format for weights to accelerate GEMM operations. The changes are mostly sound and target specific Ascend hardware features. However, I've found a critical race condition in the weight prefetching logic where weights might be used before they are fully prefetched. I've also pointed out a potential bug in the usage of the npu_prefetch API and a minor issue with a magic number. Addressing these points will ensure the correctness and robustness of these optimizations.

ping1jing2 and others added 4 commits September 19, 2025 04:54
* origin/qwen3: (30 commits)
  chore: bump sgl-kernel 0.3.11 (sgl-project#10630)
  feat: add fused moe config for Qwen3-Next-80B-A3B-Instruct on B200 (sgl-project#10631)
  model support: Sarashina2VisionForCausalLM (sgl-project#10632)
  [Performance] Qwen3-Next: speed up update_mamba_state_after_mtp_verify by 10x; e2e up to 3.54% faster (sgl-project#10586)
  [Performance] Qwen3-Next: replace arange to cached query_start_loc_li… (sgl-project#10553)
  [Feature] Speculative decoding support lookahead (sgl-project#9873)
  refactor: use registry for _get_attention_backend_from_str (sgl-project#10629)
  [router] refactor worker to builder pattern 1/n (sgl-project#10628)
  Garbage collector regression in the online server (sgl-project#10621)
  feat: Add FlexAttention Backend for Efficient Sparse Attention (sgl-project#9947)
  Fix bias handling in TritonMoeQuantInfo within quantization/mxfp4.py (sgl-project#10579)
  [Performance] qwen3-next improve causal conv1d in prefill phase (sgl-project#10595)
  Fix sgl_kernel import failure on devices other than CUDA (sgl-project#10610)
  support qwen3-next-fp8 deepep (sgl-project#10622)
  update deepep version for qwen3-next deepep moe (sgl-project#10624)
  Feat/add heartbeat mechanism for nixl conn (sgl-project#10222)
  [RL] Add destroy process group api (sgl-project#9979)
  fix deepep assert when PD disaggregation == null (sgl-project#8274)
  Scale kkt after reduction (sgl-project#10604)
  [improvement] add average input/output token length for hicache benchmark stats output (sgl-project#10525)
  ...
@zhyncs zhyncs merged commit e22f3a5 into sgl-project:main Sep 23, 2025
28 of 66 checks passed
HanHan009527 pushed a commit to HanHan009527/sglang that referenced this pull request Oct 9, 2025
Co-authored-by: c30031083 <chenxu140@huawei.com>