
[Refactor] refactor 310p attention impl and add ut #6579

Merged
wangxiyuan merged 5 commits into vllm-project:main from pu-zhe:refact_attn
Feb 7, 2026

Conversation

@pu-zhe
Contributor

@pu-zhe pu-zhe commented Feb 5, 2026

What this PR does / why we need it?

This pull request significantly refactors the attention mechanism for the Ascend 310P hardware, enhancing its architecture by separating mask generation concerns from the core attention implementation. It introduces a dedicated mask builder class capable of handling various mask types, including causal, splitfuse, and sliding window attention masks, all optimized for the NPU's fractal data format. This change not only cleans up the codebase but also lays the groundwork for more robust and feature-rich attention operations on Ascend devices, backed by new, extensive unit tests.
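For readers less familiar with additive masks, here is a minimal sketch of what a causal additive mask can look like. The function name mirrors the `gen_causal_additive_mask` method mentioned in the changelog below, but the body and signature are illustrative assumptions, not the PR's actual implementation:

```python
import torch

def gen_causal_additive_mask(max_seq_len: int, dtype: torch.dtype = torch.float16) -> torch.Tensor:
    """Illustrative causal additive mask: 0 for visible positions, a large
    negative value (added to the attention scores before softmax) for future ones."""
    mask_value = torch.finfo(dtype).min
    mask = torch.zeros(max_seq_len, max_seq_len, dtype=dtype)
    return mask.masked_fill(
        torch.ones(max_seq_len, max_seq_len, dtype=torch.bool).triu(diagonal=1),
        mask_value,
    )
```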

Does this PR introduce any user-facing change?

No

How was this patch tested?

E2E test with qwen3 and qwen3-moe

@pu-zhe pu-zhe requested a review from wangxiyuan as a code owner February 5, 2026 12:44
Signed-off-by: pu-zhe <zpuaa@outlook.com>
Signed-off-by: pu-zhe <zpuaa@outlook.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @pu-zhe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the attention mechanism for the Ascend 310P hardware, enhancing its architecture by separating mask generation concerns from the core attention implementation. It introduces a dedicated mask builder class capable of handling various mask types, including causal, splitfuse, and sliding window attention masks, all optimized for the NPU's fractal data format. This change not only cleans up the codebase but also lays the groundwork for more robust and feature-rich attention operations on Ascend devices, backed by new, extensive unit tests.

Highlights

  • Refactored 310P Attention Mask Logic: The attention mask generation and management for the Ascend 310P has been centralized into a new AttentionMaskBuilder310 class, significantly improving modularity and clarity within the attention mechanism.
  • Introduced Sliding Window Attention (SWA) Mask Support: New functionality has been added to generate and cache masks specifically for Sliding Window Attention, enabling more efficient processing for models that utilize this technique on the 310P (a rough sketch of such a mask follows this list).
  • Streamlined Attention Backend Implementation: The AscendAttentionBackendImpl310 class has been refactored to delegate all mask generation responsibilities to the new builder, simplifying its forward_prefill_310 and forward_chunked_prefill_310 methods and focusing them on core attention operations.
  • Comprehensive Unit Test Coverage: Extensive new unit tests have been added for both the AttentionMaskBuilder310 and the AscendAttentionBackend310 implementations, ensuring the correctness and stability of the refactored components.
  • Updated Copyright Year: The copyright year in several modified files has been updated from 2025 to 2026.
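
As referenced in the SWA highlight above, here is a minimal sketch of a sliding-window additive mask, assuming the common "causal plus bounded lookback" semantics; the function name and signature are illustrative, not the builder's actual API:

```python
import torch

def gen_sliding_window_additive_mask(max_seq_len: int, window_size: int,
                                     dtype: torch.dtype = torch.float16) -> torch.Tensor:
    """Position i may attend only to positions j with i - window_size < j <= i,
    i.e. the causal constraint plus a bounded lookback window."""
    mask_value = torch.finfo(dtype).min
    idx = torch.arange(max_seq_len)
    # visible[i, j] is True when j lies inside i's causal sliding window.
    visible = (idx[None, :] <= idx[:, None]) & (idx[None, :] > idx[:, None] - window_size)
    mask = torch.full((max_seq_len, max_seq_len), mask_value, dtype=dtype)
    mask[visible] = 0
    return mask
```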


Changelog
  • tests/ut/_310p/attention/test_attention_mask_310.py
    • Added new unit tests for AttentionMaskBuilder310, covering causal, SWA, and splitfuse mask generation.
  • tests/ut/_310p/attention/test_attention_v1_310.py
    • Added new unit tests for AscendAttentionBackend310 and AscendAttentionBackendImpl310, verifying get_impl_cls, get_builder_cls, get_kv_cache_shape, and the forward methods for prefill, chunked prefill, and paged attention.
  • vllm_ascend/_310p/attention/attention_mask.py
    • Refactored mask generation logic into a new AttentionMaskBuilder310 class.
    • Implemented gen_causal_additive_mask, get_splitfuse_mask, and get_swa_mask methods.
    • Removed old _AttentionMaskBuilder310P class and related helper functions.
    • Updated copyright year to 2026.
  • vllm_ascend/_310p/attention/attention_v1.py
    • Integrated the new AttentionMaskBuilder310 into AscendAttentionBackend310.
    • Renamed and simplified _forward_prefill_310p_fallback to forward_prefill_310 and _forward_chunked_prefill_310p to forward_chunked_prefill_310, delegating mask logic to the builder (see the delegation sketch after this changelog).
    • Registered AscendAttentionBackend310 as a custom attention backend.
    • Updated copyright year to 2026.
  • vllm_ascend/_310p/attention/metadata_builder.py
    • Renamed AscendAttentionMetadataBuilder310P to AscendAttentionMetadataBuilder310.
    • Updated to use the new AttentionMaskBuilder310.
    • Updated copyright year to 2026.
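
To make the "delegating mask logic" change above concrete, here is a minimal sketch of the delegation pattern, assuming a builder object with a `gen_causal_additive_mask`-style method; this is not the PR's actual `forward_prefill_310` code:

```python
import torch

class PrefillAttentionSketch:
    """Illustrative only: the attention impl asks a mask-builder object for the
    mask instead of constructing it inline."""

    def __init__(self, mask_builder):
        # Assumed to expose something like gen_causal_additive_mask(seq_len, dtype=...).
        self.mask_builder = mask_builder

    def forward_prefill(self, query, key, value):
        # Mask construction is the builder's concern...
        seq_len = query.shape[-2]
        mask = self.mask_builder.gen_causal_additive_mask(seq_len, dtype=query.dtype)
        # ...so the forward path only runs the attention computation itself.
        scale = query.shape[-1] ** -0.5
        scores = torch.matmul(query, key.transpose(-1, -2)) * scale + mask
        return torch.matmul(torch.softmax(scores, dim=-1), value)
```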
Activity
  • The pull request was initiated by pu-zhe with the stated intent to refactor the 310P attention implementation and add unit tests.
  • New test files were added to cover the refactored attention mask builder and the attention backend implementation.
  • Core attention logic files for the 310P were modified to introduce a new, modular mask building class and streamline existing attention forward passes (a chunked-prefill mask sketch follows below).
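
For the chunked-prefill (splitfuse) path noted above, here is a minimal sketch of what such a mask can look like, under the usual assumption that a chunk of query tokens sees the whole cached context and is causal within itself; names and shapes are illustrative, not the builder's actual `get_splitfuse_mask` contract:

```python
import torch

def gen_splitfuse_additive_mask(chunk_len: int, context_len: int,
                                dtype: torch.dtype = torch.float16) -> torch.Tensor:
    """Mask for one chunked-prefill step: the chunk's query tokens see the whole
    cached context (left block of zeros) and are causal among themselves
    (right block has its strict upper triangle masked)."""
    mask_value = torch.finfo(dtype).min
    context_part = torch.zeros(chunk_len, context_len, dtype=dtype)
    chunk_part = torch.zeros(chunk_len, chunk_len, dtype=dtype).masked_fill(
        torch.ones(chunk_len, chunk_len, dtype=torch.bool).triu(1), mask_value)
    return torch.cat([context_part, chunk_part], dim=-1)
```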

Signed-off-by: pu-zhe <zpuaa@outlook.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the attention implementation for the Ascend 310P backend to improve code structure and adds unit tests. The main changes include consolidating attention mask logic into a new AttentionMaskBuilder310 class and renaming methods in AscendAttentionBackendImpl310 for clarity.

I've found a critical bug that will cause a runtime error. Please see the detailed comment.

Following the repository's style guide, here are suggestions for the pull request title and summary:

Suggested PR Title:

[Attention][Refactor] Refactor 310P attention implementation and add unit tests

Suggested PR Summary:

### What this PR does / why we need it?
This PR refactors the attention mechanism implementation for the Ascend 310P backend. Key changes include:
- Consolidating all attention mask generation logic into a new, cleaner `AttentionMaskBuilder310` class.
- Replacing the previous `_AttentionMaskBuilder310P` wrapper and helper functions with this unified class, improving code organization and maintainability.
- Renaming internal methods in `AscendAttentionBackendImpl310` for better readability (e.g., `_forward_prefill_310p_fallback` is now `forward_prefill_310`).
- Adding comprehensive unit tests for the new `AttentionMaskBuilder310` and the refactored `AscendAttentionBackendImpl310` to improve test coverage and ensure correctness.

### Does this PR introduce _any_ user-facing change?
No. This is an internal refactoring and does not change any user-facing APIs or behavior.

### How was this patch tested?
New unit tests have been added in `tests/ut/_310p/attention/test_attention_mask_310.py` and `tests/ut/_310p/attention/test_attention_v1_310.py`. These tests cover the new attention mask builder and the different attention forward paths (prefill, chunked prefill, and paged attention). CI is expected to pass with these new tests.
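
Since the summary calls out the new mask-builder tests, here is a minimal sketch of what one such property-style test could look like; the `build_causal_mask` helper below is a stand-in, not the actual `AttentionMaskBuilder310` API:

```python
import torch

def build_causal_mask(n: int, dtype: torch.dtype = torch.float16) -> torch.Tensor:
    # Stand-in for whatever the real builder exposes; signature and values assumed.
    mask = torch.zeros(n, n, dtype=dtype)
    return mask.masked_fill(torch.ones(n, n, dtype=torch.bool).triu(1), torch.finfo(dtype).min)

def test_causal_mask_properties():
    n = 4
    mask = build_causal_mask(n)
    upper = torch.ones(n, n, dtype=torch.bool).triu(1)

    assert mask.shape == (n, n)
    assert torch.all(mask[~upper] == 0)      # visible positions contribute nothing
    assert torch.all(mask[upper] <= -1e4)    # masked positions are pushed toward -inf
```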

Comment thread: vllm_ascend/_310p/attention/attention_mask.py
Signed-off-by: pu-zhe <zpuaa@outlook.com>
@github-actions
Contributor

github-actions bot commented Feb 5, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description thoroughly when writing the commit message so reviewers and future developers can understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

Signed-off-by: pu-zhe <zpuaa@outlook.com>
@wangxiyuan wangxiyuan merged commit 4f33e25 into vllm-project:main Feb 7, 2026
24 checks passed
@pu-zhe pu-zhe deleted the refact_attn branch February 7, 2026 10:22
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Feb 9, 2026
…to qwen3next_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [Patch] Remove the patch of MiniCPM (vllm-project#5975)
  [P/D] layerwise connector support recompute scheduler (vllm-project#5900)
  [CI] Add workflow support for lint image build (vllm-project#6489)
  [Bugfix] Fix problematic dummy_run & improper input_batch_size in eagle (vllm-project#6517)
  [Refactor]310p_e2e test case update (vllm-project#6539)
  [Refactor]refactor p2p connector (vllm-project#6551)
  [Refactor]refactor 310p attention impl and add ut (vllm-project#6579)
  [Refactor]refactor 310p ops and add ut (vllm-project#6591)
  [Ops][Refactor] Remove custom rotary_embedding operator (vllm-project#6523)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(new Batch vllm-project#8) (vllm-project#6604)
  [Test] Add initial multi modal cases of Qwen2.5-VL-7B-Instruct for disaggregated encoder  (vllm-project#5301)
  [CI] Fix broken CI (vllm-project#6599)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#10) (vllm-project#6173)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#11) (vllm-project#6176)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#8) (vllm-project#6129)
  [Lint]Style: Convert `vllm-ascend/` to ruff format(Batch vllm-project#7) (vllm-project#6023)
  [CI][Misc] Some improvement for github action (vllm-project#6587)
  [Image] Bump mooncake version to v0.3.8.post1 (vllm-project#6428)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Feb 12, 2026
@wangxiyuan wangxiyuan mentioned this pull request Feb 24, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026