
[Build] Add support for Ascend950 chip #7151

Merged
MengqingCao merged 6 commits into vllm-project:main from linfeng-yuan:ascend950_chipname_suppport
Mar 12, 2026

Conversation

@linfeng-yuan
Collaborator

@linfeng-yuan linfeng-yuan commented Mar 11, 2026

What this PR does / why we need it?

This PR adds support for the Ascend950 chip. This includes:

  • Updating build scripts (CMakeLists.txt and setup.py) to recognize the Ascend950 chip and set appropriate compilation flags.
  • Disabling a set of custom operators that are not yet supported on the Ascend950 hardware target.
  • Performing a codebase-wide refactoring of pipe_barrier() calls to the namespaced AscendC::PipeBarrier<>() for improved code consistency and adherence to the latest API standards (see the sketch below).
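
For illustration, here is a minimal before/after sketch of that refactoring. PIPE_V and PIPE_ALL are the pipes touched in this PR; the exact call sites vary per kernel:

```cpp
// Before: legacy free-function barrier.
pipe_barrier(PIPE_V);

// After: explicit, namespaced template from the AscendC kernel API.
// PipeBarrier<pipe> synchronizes instructions issued to the given
// hardware pipe (PIPE_V = vector pipe; PIPE_ALL = all pipes).
AscendC::PipeBarrier<PIPE_V>();
AscendC::PipeBarrier<PIPE_ALL>();
```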

Does this PR introduce any user-facing change?

Yes, this PR adds support for a new hardware target, the Ascend950.

How was this patch tested?

Ascend950DT e2e passed (Qwen3-32B-MXFP8) and CI passed

…dC::PipeBarrier<PIPE_ENUM>()

Signed-off-by: linfeng-yuan <1102311262@qq.com>
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily extends hardware support to include the Ascend950 chip. This involves modifications to the build configuration to correctly identify the new chip and manage operator compatibility. Additionally, a widespread refactoring was performed to update the pipe_barrier synchronization primitive to use the more explicit AscendC::PipeBarrier API within custom kernel implementations, enhancing consistency and potentially future-proofing the code.

Highlights

  • Ascend950 Chip Support: Introduced build system logic to recognize and configure for the Ascend950 chip, including specific operator exclusions and compile definitions in CMake.
  • Pipe Barrier API Standardization: Refactored numerous custom operator kernels to standardize the usage of pipe barriers, replacing direct pipe_barrier calls with the more explicit AscendC::PipeBarrier API.
  • Conditional Operator Compilation: Implemented conditional compilation for mla_preprocess and moe_grouped_matmul operators in the PyTorch binding, ensuring they are excluded when targeting Ascend950 (see the sketch after this list).
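
As a rough sketch of what such a guard can look like (the library name and operator schemas here are hypothetical placeholders, not the actual registrations in csrc/torch_binding.cpp):

```cpp
#include <torch/library.h>

// _ASCEND950 is the compile definition CMake adds when SOC_VERSION
// matches "ascend950"; guarding the registrations keeps the PyTorch
// binding in sync with the kernels CMake excludes from the build.
TORCH_LIBRARY_FRAGMENT(npu, ops) {  // hypothetical library name
#ifndef _ASCEND950
  // Not yet supported on Ascend950, so only registered on targets
  // where the corresponding kernels are actually compiled.
  ops.def("mla_preprocess(Tensor x) -> Tensor");      // hypothetical schema
  ops.def("moe_grouped_matmul(Tensor x) -> Tensor");  // hypothetical schema
#endif
}
```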


Changelog
  • CMakeLists.txt
    • Renamed VLLM_ASCEND_CUSTOM_OP_EXCLUDE to VLLM_ASCEND_CUSTOM_OP_EXCLUDE_310P.
    • Introduced VLLM_ASCEND_CUSTOM_OP_EXCLUDE_ASCEND950 to exclude specific operators for Ascend950.
    • Added conditional logic to set _ASCEND950 compile definition when SOC_VERSION matches "ascend950".
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_fullload_dynamic_quant.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_fullload_quant.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_fullload_quant_base.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_gather_dynamic_quant.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_gather_quant.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_init_routing_fullload.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_mrgsort_out.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_sort_multi_core.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_sort_one_core.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_src_to_dst_and_gather.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/moe_init_routing_quant_v2/moe_v2_src_to_dst_op.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_ffn_combine/op_kernel/utils/hccl_shmem.hpp
    • Replaced pipe_barrier(PIPE_ALL) with AscendC::PipeBarrier<PIPE_ALL>().
  • csrc/dispatch_gmm_combine_decode/op_kernel/dispatch_gmm_combine_decode/raw_distributed/cam_moe_distribute_combine.h
    • Replaced pipe_barrier(PIPE_ALL) with AscendC::PipeBarrier<PIPE_ALL>().
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/dispatch_gmm_combine_decode/op_kernel/dispatch_gmm_combine_decode/raw_distributed/cam_moe_distribute_dispatch.h
    • Replaced pipe_barrier(PIPE_ALL) with AscendC::PipeBarrier<PIPE_ALL>().
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/kernels/bgmv_expand.cpp
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/kernels/bgmv_shrink.cpp
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/kernels/sgmv_expand.cpp
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/kernels/sgmv_shrink.cpp
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/notify_dispatch/op_kernel/notify_dispatch.h
    • Replaced pipe_barrier(PIPE_ALL) with AscendC::PipeBarrier<PIPE_ALL>().
  • csrc/sparse_flash_attention/op_kernel/sparse_flash_attention_service_vector_mla.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • csrc/torch_binding.cpp
    • Added conditional compilation (#ifndef _ASCEND950 and #endif) around mla_preprocess and moe_grouped_matmul operator definitions.
  • csrc/utils/inc/kernel/pse.h
    • Replaced pipe_barrier(PIPE_V) with AscendC::PipeBarrier<PIPE_V>().
  • setup.py
    • Updated get_chip_type function to correctly identify "950" in chip_name.
    • Modified gen_build_info to set device_type to "A5" if soc_version contains "ascend950".


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the Ascend950 chip and performs a large-scale refactoring of pipe_barrier calls.

The key changes are:

  • Ascend950 Support: Build scripts (CMakeLists.txt, setup.py) are updated to recognize the Ascend950 chip, setting the device type to 'A5' and adding a _ASCEND950 compile definition.
  • Operator Disabling: A number of operators are disabled for the Ascend950 target by using the _ASCEND950 preprocessor directive in csrc/torch_binding.cpp. This is consistent with the build script changes that exclude the corresponding kernels from compilation.
  • Refactoring: All occurrences of pipe_barrier() have been refactored to AscendC::PipeBarrier<>() across numerous kernel files. This appears to be a modernization effort to use a namespaced and templated version of the API, which improves code clarity and consistency.

The changes are logical and well-executed. The new chip support is implemented cleanly, and the refactoring is applied consistently.

As per the repository's style guide, I'm suggesting an updated Pull Request title and summary:

Suggested PR Title:

[Build][Feature] Add support for Ascend950 chip

Suggested PR Summary:

### What this PR does / why we need it?

This PR adds support for the Ascend950 chip. This includes:
- Updating build scripts (`CMakeLists.txt` and `setup.py`) to recognize the Ascend950 chip and set appropriate compilation flags.
- Disabling a set of custom operators that are not yet supported on the Ascend950 hardware target.
- Performing a codebase-wide refactoring of `pipe_barrier()` calls to the namespaced `AscendC::PipeBarrier<>()` for improved code consistency and adherence to the latest API standards.

### Does this PR introduce _any_ user-facing change?

Yes, this PR adds support for a new hardware target, the Ascend950.

### How was this patch tested?

CI should be run to ensure all existing tests pass on supported platforms. Specific tests should be added and executed on an Ascend950 device to validate the new support and ensure that the enabled operators function correctly.

@linfeng-yuan linfeng-yuan changed the title from "Ascend950 chipname suppport" to "[Build] Add support for Ascend950 chip" on Mar 11, 2026
Signed-off-by: linfeng-yuan <1102311262@qq.com>
@linfeng-yuan linfeng-yuan force-pushed the ascend950_chipname_suppport branch from dc80348 to d6280bd on March 11, 2026 08:38
Comment thread on vllm_ascend/utils.py:
```diff
 # There are some customed operators which aren't implemented
 # with batch invariant in vllm-ascend, we need to disable them.
-if vllm_is_batch_invariant():
+if vllm_is_batch_invariant() or get_ascend_device_type() == AscendDeviceType.A5:
```
Collaborator Author


Currently, some ops do not check enable_custom_op() and support falling back to torch_npu (e.g., init_routing_custom, gating_top_k_custom, etc.). With CANN 8.5.1, we can use the pta API instead. We plan to investigate the effects and delete these unnecessary custom ops ASAP.

Collaborator Author


OP_LIST: [enabled, disabled] — we should add comments here.

Collaborator Author

@linfeng-yuan linfeng-yuan Mar 11, 2026


We've added comments referring readers to #7157 now.

…op support link

Signed-off-by: linfeng-yuan <1102311262@qq.com>
@MengqingCao MengqingCao merged commit 5f3826b into vllm-project:main Mar 12, 2026
36 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 12, 2026
…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (88 commits)
  [main][bugfix] Fixed the problem of speculative decoding in FULL mode (vllm-project#7148)
  fixed fia pad logic in graph mode. (vllm-project#7144)
  [Doc] fix DSV3.1 PD configs (vllm-project#7187)
  refactor: add a check before layer_sharding logging (vllm-project#7186)
  [Build] Add support for Ascend950 chip (vllm-project#7151)
  Revert "[CI] fix skiped e2e test when upgrade vllm version  (vllm-project#6654)" (vllm-project#7166)
  [MODELRUNNERV2]fix penality ops (vllm-project#7013)
  [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (vllm-project#6650)
  [KV Pool]get_num_new_matched_tokens return 0 if token length < block_size (vllm-project#7146)
  [CI] Build Image for v0.16.0rc1 (vllm-project#7155)
  [CI] Skip `test_mooncake_layerwise_connector.py` in `ut` (vllm-project#7147)
  [BugFix]Fix recomputed scheduler bug (vllm-project#7137)
  [Model] Support Minimax-m2.5 on NPU (vllm-project#7105)
  [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (vllm-project#7022)
  Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule (vllm-project#7109)
  [Doc][ReleaseNote] Add release notes for v0.16.0rc1 (vllm-project#7067)
  [Misc] Download on both hk and guiyang region (vllm-project#7129)
  [bugdix] The problem that the w4a8 weight fails to be loaded when the EP is not enabled is resolved. (vllm-project#7090)
  [eagle][cp] fix eagle_cp enable bug2 (vllm-project#7079)
  [CI]Upgrade niglty multi-node-tests max-parallel to 2 (vllm-project#7035)
  ...
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026