
Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule #7109

Merged
wangxiyuan merged 7 commits into vllm-project:main from ppppeng:Qwen3_5_patch_recurrent
Mar 10, 2026

Conversation

@ppppeng
Collaborator

@ppppeng ppppeng commented Mar 10, 2026

What this PR does / why we need it?

The op torch_npu.npu_recurrent_gated_delta_rule currently does not support ssm_state inputs in float32 format, so we temporarily retain the Triton-based _forward_core implementation for Qwen3_5.
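For illustration only (this code is not part of the PR's diff), a minimal sketch of the constraint being worked around; the dispatch function below is hypothetical, whereas the patch itself simply keeps the Triton path for Qwen3_5:

```python
import torch

# Hypothetical illustration: the NPU fused op cannot consume a float32 ssm_state yet,
# so the Triton kernel (which does accept a float32 initial state) is used instead.
def choose_recurrent_backend(ssm_state: torch.Tensor) -> str:
    if ssm_state.dtype == torch.float32:
        # torch_npu.npu_recurrent_gated_delta_rule rejects float32 state inputs,
        # hence Qwen3_5 retains the Triton fused_recurrent_gated_delta_rule path.
        return "triton"
    return "torch_npu"
```

In the PR itself there is no runtime dispatch; the patched _forward_core always takes the Triton route.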

Does this PR introduce any user-facing change?

How was this patch tested?

@ppppeng ppppeng requested a review from wangxiyuan as a code owner March 10, 2026 06:49
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the vllm-ascend framework by adding a dedicated patch for the Qwen3.5 model. The primary goal is to ensure compatibility and optimize performance for Qwen3.5 on Ascend hardware, specifically by providing an alternative implementation for core attention computations that circumvents current limitations with float32 state inputs in NPU operations. This change allows the model to function correctly and efficiently within the Ascend ecosystem.

Highlights

  • New Qwen3.5 Patch: Introduced a new patch file, patch_qwen3_5.py, specifically for the Qwen3.5 model within the vllm-ascend project.
  • Float32 SSM State Compatibility: Implemented AscendQwen3_5GatedDeltaNet to override the _forward_core method of Qwen3_5GatedDeltaNet, addressing a current limitation where torch_npu.npu_recurrent_gated_delta_rule does not support ssm_state inputs in float32 format.
  • Triton Operations Integration: The new _forward_core implementation leverages Triton-based fused operations, including fused_recurrent_gated_delta_rule and fused_sigmoid_gating_delta_rule_update, for efficient gated delta rule computations on Ascend hardware.
  • Module Import: The newly added patch_qwen3_5 module is imported in vllm_ascend/patch/worker/__init__.py to ensure its activation.


Changelog
  • vllm_ascend/patch/worker/__init__.py
    • Imported the new patch_qwen3_5 module to enable Qwen3.5 model-specific optimizations.
  • vllm_ascend/patch/worker/patch_qwen3_5.py
    • Added a new module containing AscendQwen3_5GatedDeltaNet which overrides the _forward_core method of Qwen3_5GatedDeltaNet.
    • Implemented logic to handle ssm_state inputs, utilizing Triton-based fused operations for gated delta rule computations to ensure compatibility with Ascend NPUs.
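For readers unfamiliar with the worker patch mechanism, below is a minimal, self-contained sketch of the monkey-patch pattern described above; the stand-in base class and method bodies are assumptions for illustration, not the PR's actual code:

```python
# Sketch of the worker patch pattern (illustrative stand-ins, not verbatim PR code).

class Qwen3_5GatedDeltaNet:  # stand-in for the upstream vLLM class
    def _forward_core(self, *args, **kwargs):
        raise NotImplementedError("NPU op path; cannot take float32 ssm_state")


class AscendQwen3_5GatedDeltaNet(Qwen3_5GatedDeltaNet):
    def _forward_core(self, *args, **kwargs):
        # Triton-based path: fused_recurrent_gated_delta_rule /
        # fused_sigmoid_gating_delta_rule_update instead of the NPU op.
        return "triton path"


# Importing vllm_ascend.patch.worker.patch_qwen3_5 from the package __init__.py
# applies the override once at import time, conceptually:
Qwen3_5GatedDeltaNet._forward_core = AscendQwen3_5GatedDeltaNet._forward_core

assert Qwen3_5GatedDeltaNet()._forward_core() == "triton path"
```

Because the override is applied at import time, call sites do not change; any code that constructs Qwen3_5GatedDeltaNet picks up the Ascend-compatible _forward_core automatically.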


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a patch for Qwen3.5's GatedDeltaNet to enable it on Ascend hardware. The patch introduces a workaround for an operator limitation by using a Triton kernel for the decode path. My review identifies a critical bug where a variable could be used before initialization, potentially causing a crash. I've provided a code suggestion to fix this issue.

Comment on lines +138 to +253
        if attn_metadata.num_prefills > 0 or spec_sequence_masks is not None:
            g, beta = fused_gdn_gating_patch(self.A_log, a, b, self.dt_bias)
            if spec_sequence_masks is not None:
                if attn_metadata.num_prefills == 0 and attn_metadata.num_decodes == 0:
                    g_spec = g
                    beta_spec = beta
                    g_non_spec = None
                    beta_non_spec = None
                else:
                    g_spec = g.index_select(1, spec_token_indx)
                    beta_spec = beta.index_select(1, spec_token_indx)
                    g_non_spec = g.index_select(1, non_spec_token_indx)
                    beta_non_spec = beta.index_select(1, non_spec_token_indx)
            else:
                g_spec = None
                beta_spec = None
                g_non_spec = g
                beta_non_spec = beta

            # 2. Recurrent attention

            # 2.1: Process the multi-query part
            if spec_sequence_masks is not None:
                core_attn_out_spec, last_recurrent_state = fused_recurrent_gated_delta_rule(
                    q=query_spec,
                    k=key_spec,
                    v=value_spec,
                    g=g_spec,
                    beta=beta_spec,
                    initial_state=ssm_state,
                    inplace_final_state=True,
                    cu_seqlens=spec_query_start_loc[: attn_metadata.num_spec_decodes + 1],
                    ssm_state_indices=spec_state_indices_tensor,
                    num_accepted_tokens=num_accepted_tokens,
                    use_qk_l2norm_in_kernel=True,
                )
            else:
                core_attn_out_spec, last_recurrent_state = None, None

            # 2.2: Process the remaining part
            if attn_metadata.num_prefills > 0:
                initial_state = ssm_state[non_spec_state_indices_tensor].contiguous()
                initial_state[~has_initial_state, ...] = 0
                (
                    core_attn_out_non_spec,
                    last_recurrent_state,
                ) = chunk_gated_delta_rule(
                    q=query_non_spec,
                    k=key_non_spec,
                    v=value_non_spec,
                    g=g_non_spec,
                    beta=beta_non_spec,
                    initial_state=initial_state,
                    output_final_state=True,
                    cu_seqlens=non_spec_query_start_loc,
                    head_first=False,
                    use_qk_l2norm_in_kernel=True,
                )
                # Init cache
                ssm_state[non_spec_state_indices_tensor] = last_recurrent_state.to(ssm_state.dtype)
            elif attn_metadata.num_decodes > 0:
                core_attn_out_non_spec, last_recurrent_state = fused_recurrent_gated_delta_rule(
                    q=query_non_spec,
                    k=key_non_spec,
                    v=value_non_spec,
                    g=g_non_spec,
                    beta=beta_non_spec,
                    initial_state=ssm_state,
                    inplace_final_state=True,
                    cu_seqlens=non_spec_query_start_loc[: attn_metadata.num_decodes + 1],
                    ssm_state_indices=non_spec_state_indices_tensor,
                    use_qk_l2norm_in_kernel=True,
                )
            else:
                core_attn_out_non_spec, last_recurrent_state = None, None

        elif attn_metadata.num_decodes > 0:
            core_attn_out_non_spec = fused_sigmoid_gating_delta_rule_update(
                A_log=self.A_log.contiguous(),
                dt_bias=self.dt_bias.contiguous(),
                q=query_non_spec.contiguous(),
                k=key_non_spec.contiguous(),
                v=value_non_spec.contiguous(),
                a=a.contiguous(),
                b=b.contiguous(),
                initial_state_source=ssm_state,
                initial_state_indices=non_spec_state_indices_tensor,
                cu_seqlens=non_spec_query_start_loc,
                use_qk_l2norm_in_kernel=True,
                softplus_beta=1.0,
                softplus_threshold=20.0,
            )

        # 3. Merge core attention output
        if spec_sequence_masks is not None and core_attn_out_non_spec is not None:
            merged_out = torch.empty(
                (1, num_actual_tokens, *core_attn_out_spec.shape[2:]),
                dtype=core_attn_out_non_spec.dtype,
                device=core_attn_out_non_spec.device,
            )
            merged_out.index_copy_(1, spec_token_indx, core_attn_out_spec)
            merged_out.index_copy_(1, non_spec_token_indx, core_attn_out_non_spec)
            if not enable_sp():
                core_attn_out[:num_actual_tokens] = merged_out.squeeze(0)
            else:
                core_attn_out[:num_actual_tokens] = merged_out.squeeze(0)[:num_actual_tokens]
        elif spec_sequence_masks is not None:
            if not enable_sp():
                core_attn_out[:num_actual_tokens] = core_attn_out_spec.squeeze(0)
            else:
                core_attn_out[:num_actual_tokens] = core_attn_out_spec.squeeze(0)[:num_actual_tokens]
        else:
            if not enable_sp():
                core_attn_out[:num_actual_tokens] = core_attn_out_non_spec.squeeze(0)
            else:
                core_attn_out[:num_actual_tokens] = core_attn_out_non_spec.squeeze(0)[:num_actual_tokens]

critical

The variable core_attn_out_non_spec is not initialized on all code paths, which can lead to an UnboundLocalError. Specifically, if num_prefills == 0, spec_sequence_masks is None, and num_decodes == 0, the variable is never assigned a value, but it is accessed in the final else block of the merge logic. This is a critical issue that could cause a runtime crash.
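A standalone illustration of the failure mode (variable and branch names mirror the review, but this is not the PR's code):

```python
# Minimal repro: when every assigning branch is skipped, the name is never bound,
# and the later read raises UnboundLocalError.
def merge(num_prefills: int, num_decodes: int, spec_masks=None):
    if num_prefills > 0 or spec_masks is not None:
        core_attn_out_non_spec = "prefill/spec result"
    elif num_decodes > 0:
        core_attn_out_non_spec = "decode result"
    # num_prefills == 0, num_decodes == 0, spec_masks is None -> nothing assigned
    return core_attn_out_non_spec


try:
    merge(0, 0, None)
except UnboundLocalError as exc:
    print("crash:", exc)
```

The suggested replacement below initializes both core_attn_out_spec and core_attn_out_non_spec to None up front and guards the final merge branch on core_attn_out_non_spec, which removes this path entirely: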

        core_attn_out_non_spec = None
        core_attn_out_spec = None
        if attn_metadata.num_prefills > 0 or spec_sequence_masks is not None:
            g, beta = fused_gdn_gating_patch(self.A_log, a, b, self.dt_bias)
            if spec_sequence_masks is not None:
                if attn_metadata.num_prefills == 0 and attn_metadata.num_decodes == 0:
                    g_spec = g
                    beta_spec = beta
                    g_non_spec = None
                    beta_non_spec = None
                else:
                    g_spec = g.index_select(1, spec_token_indx)
                    beta_spec = beta.index_select(1, spec_token_indx)
                    g_non_spec = g.index_select(1, non_spec_token_indx)
                    beta_non_spec = beta.index_select(1, non_spec_token_indx)
            else:
                g_spec = None
                beta_spec = None
                g_non_spec = g
                beta_non_spec = beta

            # 2. Recurrent attention

            # 2.1: Process the multi-query part
            if spec_sequence_masks is not None:
                core_attn_out_spec, last_recurrent_state = fused_recurrent_gated_delta_rule(
                    q=query_spec,
                    k=key_spec,
                    v=value_spec,
                    g=g_spec,
                    beta=beta_spec,
                    initial_state=ssm_state,
                    inplace_final_state=True,
                    cu_seqlens=spec_query_start_loc[: attn_metadata.num_spec_decodes + 1],
                    ssm_state_indices=spec_state_indices_tensor,
                    num_accepted_tokens=num_accepted_tokens,
                    use_qk_l2norm_in_kernel=True,
                )
            else:
                core_attn_out_spec, last_recurrent_state = None, None

            # 2.2: Process the remaining part
            if attn_metadata.num_prefills > 0:
                initial_state = ssm_state[non_spec_state_indices_tensor].contiguous()
                initial_state[~has_initial_state, ...] = 0
                (
                    core_attn_out_non_spec,
                    last_recurrent_state,
                ) = chunk_gated_delta_rule(
                    q=query_non_spec,
                    k=key_non_spec,
                    v=value_non_spec,
                    g=g_non_spec,
                    beta=beta_non_spec,
                    initial_state=initial_state,
                    output_final_state=True,
                    cu_seqlens=non_spec_query_start_loc,
                    head_first=False,
                    use_qk_l2norm_in_kernel=True,
                )
                # Init cache
                ssm_state[non_spec_state_indices_tensor] = last_recurrent_state.to(ssm_state.dtype)
            elif attn_metadata.num_decodes > 0:
                core_attn_out_non_spec, last_recurrent_state = fused_recurrent_gated_delta_rule(
                    q=query_non_spec,
                    k=key_non_spec,
                    v=value_non_spec,
                    g=g_non_spec,
                    beta=beta_non_spec,
                    initial_state=ssm_state,
                    inplace_final_state=True,
                    cu_seqlens=non_spec_query_start_loc[: attn_metadata.num_decodes + 1],
                    ssm_state_indices=non_spec_state_indices_tensor,
                    use_qk_l2norm_in_kernel=True,
                )
            else:
                core_attn_out_non_spec, last_recurrent_state = None, None

        elif attn_metadata.num_decodes > 0:
            core_attn_out_non_spec = fused_sigmoid_gating_delta_rule_update(
                A_log=self.A_log.contiguous(),
                dt_bias=self.dt_bias.contiguous(),
                q=query_non_spec.contiguous(),
                k=key_non_spec.contiguous(),
                v=value_non_spec.contiguous(),
                a=a.contiguous(),
                b=b.contiguous(),
                initial_state_source=ssm_state,
                initial_state_indices=non_spec_state_indices_tensor,
                cu_seqlens=non_spec_query_start_loc,
                use_qk_l2norm_in_kernel=True,
                softplus_beta=1.0,
                softplus_threshold=20.0,
            )

        # 3. Merge core attention output
        if spec_sequence_masks is not None and core_attn_out_non_spec is not None:
            merged_out = torch.empty(
                (1, num_actual_tokens, *core_attn_out_spec.shape[2:]),
                dtype=core_attn_out_non_spec.dtype,
                device=core_attn_out_non_spec.device,
            )
            merged_out.index_copy_(1, spec_token_indx, core_attn_out_spec)
            merged_out.index_copy_(1, non_spec_token_indx, core_attn_out_non_spec)
            if not enable_sp():
                core_attn_out[:num_actual_tokens] = merged_out.squeeze(0)
            else:
                core_attn_out[:num_actual_tokens] = merged_out.squeeze(0)[:num_actual_tokens]
        elif spec_sequence_masks is not None:
            if not enable_sp():
                core_attn_out[:num_actual_tokens] = core_attn_out_spec.squeeze(0)
            else:
                core_attn_out[:num_actual_tokens] = core_attn_out_spec.squeeze(0)[:num_actual_tokens]
        elif core_attn_out_non_spec is not None:
            if not enable_sp():
                core_attn_out[:num_actual_tokens] = core_attn_out_non_spec.squeeze(0)
            else:
                core_attn_out[:num_actual_tokens] = core_attn_out_non_spec.squeeze(0)[:num_actual_tokens]

Signed-off-by: pppeng <zepengliu912@qq.com>
@ppppeng ppppeng force-pushed the Qwen3_5_patch_recurrent branch from 0e7c5c2 to 6688359 on March 10, 2026 06:57
@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

ppppeng and others added 5 commits March 10, 2026 15:18
Signed-off-by: pppeng <zepengliu912@qq.com>
Signed-off-by: pppeng <zepengliu912@qq.com>
Signed-off-by: pppeng <zepengliu912@qq.com>
Signed-off-by: pppeng <60355449+ppppeng@users.noreply.github.com>
Signed-off-by: pppeng <zepengliu912@qq.com>
from tests.e2e.conftest import VllmRunner


def test_qwen3_5_27b_distributed_mp_tp4():

done

@ppppeng ppppeng requested a review from Yikun as a code owner March 10, 2026 11:41
@ppppeng ppppeng force-pushed the Qwen3_5_patch_recurrent branch from 477c589 to 3999080 on March 10, 2026 12:00
@MengqingCao MengqingCao added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 10, 2026
Signed-off-by: pppeng <zepengliu912@qq.com>
@ppppeng ppppeng force-pushed the Qwen3_5_patch_recurrent branch from 002dfa5 to 91c51f3 on March 10, 2026 15:00
@wangxiyuan wangxiyuan merged commit 0f289fa into vllm-project:main Mar 10, 2026
35 of 38 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 12, 2026
…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (88 commits)
  [main][bugfix] Fixed the problem of speculative decoding in FULL mode (vllm-project#7148)
  fixed fia pad logic in graph mode. (vllm-project#7144)
  [Doc] fix DSV3.1 PD configs (vllm-project#7187)
  refactor: add a check before layer_sharding logging (vllm-project#7186)
  [Build] Add support for Ascend950 chip (vllm-project#7151)
  Revert "[CI] fix skiped e2e test when upgrade vllm version  (vllm-project#6654)" (vllm-project#7166)
  [MODELRUNNERV2]fix penality ops (vllm-project#7013)
  [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (vllm-project#6650)
  [KV Pool]get_num_new_matched_tokens return 0 if token length < block_size (vllm-project#7146)
  [CI] Build Image for v0.16.0rc1 (vllm-project#7155)
  [CI] Skip `test_mooncake_layerwise_connector.py` in `ut` (vllm-project#7147)
  [BugFix]Fix recomputed scheduler bug (vllm-project#7137)
  [Model] Support Minimax-m2.5 on NPU (vllm-project#7105)
  [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (vllm-project#7022)
  Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule (vllm-project#7109)
  [Doc][ReleaseNote] Add release notes for v0.16.0rc1 (vllm-project#7067)
  [Misc] Download on both hk and guiyang region (vllm-project#7129)
  [bugdix] The problem that the w4a8 weight fails to be loaded when the EP is not enabled is resolved. (vllm-project#7090)
  [eagle][cp] fix eagle_cp enable bug2 (vllm-project#7079)
  [CI]Upgrade niglty multi-node-tests max-parallel to 2 (vllm-project#7035)
  ...
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026
…lm-project#7109)

### What this PR does / why we need it?

The ops `torch_npu.npu_recurrent_gated_delta_rule` currently does not
support `ssm_state` inputs in float32 format,
we temporarily retain the _forward_core implementation with triton for
Qwen3_5

---------

Signed-off-by: pppeng <zepengliu912@qq.com>
Signed-off-by: pppeng <60355449+ppppeng@users.noreply.github.com>

Labels

ready (read for review), ready-for-test (start test by label for PR)


5 participants