
[Feat][Qwen3.5 Mamba] Prefix caching all-mode + operator state writeback + runtime unblock fixes#8

Closed
lHrHenry233 wants to merge 4 commits into main from qwen35-all-mode-prefix-caching-clean

Conversation


@lHrHenry233 lHrHenry233 commented Apr 1, 2026

What this PR does / why we need it?

This PR contains the full difference from main for the Qwen3.5 Mamba prefix-caching workstream, including:

  • scheduler/platform/worker integration for mamba_cache_mode=all
  • operator-layer token/state writeback support in causal conv path
  • regression/unit/e2e tests for the new behavior
  • runtime compatibility fixes discovered during real 9B validation

The goal is to make all mode actually executable and verifiable end-to-end on Ascend, not only at config level but down to operator state handling.

Scope (main -> this branch)

Commits in this PR

  1. feat: add qwen3.5 all-mode prefix caching and token-state writeback
  2. fix(qwen3.5): unblock 9b perf validation in align/all modes

Changed files

  • csrc/causal_conv1d/op_host/causal_conv1d_tiling.cpp
  • csrc/causal_conv1d/op_host/causal_conv1d_tiling.h
  • csrc/causal_conv1d/op_kernel/causal_conv1d.h
  • tests/e2e/nightly/single_node/ops/singlecard_ops/triton/test_causal_conv1d.py
  • tests/ut/core/test_recompute_scheduler.py
  • tests/ut/patch/platform/test_patch_mamba_config.py
  • tests/ut/patch/worker/patch_common/test_patch_qwen3_5.py
  • vllm_ascend/core/recompute_scheduler.py
  • vllm_ascend/patch/platform/patch_mamba_config.py
  • vllm_ascend/patch/worker/patch_qwen3_5.py
  • vllm_ascend/patch/worker/patch_qwen3_next.py
  • vllm_ascend/worker/model_runner_v1.py

Detailed design and motivation

1) Prefix caching all-mode integration across scheduler/platform/worker

  • Add and wire Qwen3.5 all-mode behavior so scheduling and runtime metadata remain consistent with Mamba cache semantics.
  • Motivation:
    • Prior behavior could not fully represent all-mode state progression under prefix-caching paths.
    • Need deterministic scheduling + correct cache/state evolution for long-running decode.
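
The mode selection described above can be sketched as follows. This is a minimal illustration, assuming a dict-style config surface; only the `mamba_cache_mode` flag name and its `align`/`all` values come from this PR, the rest is hypothetical.

```python
# Hypothetical config sketch: the mamba_cache_mode flag and its values are
# from this PR; the exact config surface is an assumption for illustration.
align_cfg = {"enable_prefix_caching": True, "mamba_cache_mode": "align"}

# all-mode keeps everything else identical, so the two runs are comparable.
all_cfg = dict(align_cfg, mamba_cache_mode="all")
```

Keeping every other setting identical between the two configs is what makes the align-vs-all latency comparison later in this PR meaningful.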

2) Operator-level token/state writeback support (causal_conv1d)

  • Extend causal conv host/kernel interfaces and logic to support 2D state indices and token-level writeback behavior required by all-mode.
  • Motivation:
    • Without operator-side writeback, upper-layer all-mode logic is incomplete and can diverge from expected state transitions.
    • This closes the gap between framework intent and kernel execution.
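
The 2D-index writeback idea can be sketched in plain Python. This is not the kernel API; function and variable names are illustrative. With 1D indices there is one conv state slot per sequence, while 2D indices let the operator write back a state snapshot per (sequence, token block), which is what all-mode prefix caching needs.

```python
# Illustrative sketch only -- not the causal_conv1d host/kernel interface.
def writeback_2d(state_cache, state_indices_2d, new_states):
    """state_indices_2d[seq][blk] is the cache slot for that token block;
    a negative slot marks padding that must not be written."""
    for seq, rows in enumerate(state_indices_2d):
        for blk, slot in enumerate(rows):
            if slot >= 0:  # skip padding slots
                state_cache[slot] = new_states[seq][blk]
    return state_cache

cache = [None] * 4
cache = writeback_2d(cache,
                     [[0, 2], [3, -1]],          # seq 1, block 1 is padding
                     [["s0b0", "s0b1"], ["s1b0", "s1b1"]])
```

Guarding on `slot >= 0` mirrors the slot-validation fixes mentioned in the follow-up commit message below.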

3) Runtime unblock fixes from real service validation

3.1 Optional draft proposer import

  • File: vllm_ascend/worker/model_runner_v1.py
  • Handle absent draft_proposer module gracefully using optional import and dynamic type tuple checks.
  • Motivation: decouple non-target MTP draft dependency from this perf validation flow.
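
The optional-import pattern described above looks roughly like this. Module and class names are placeholders, not the actual symbols in `vllm_ascend/worker/model_runner_v1.py`.

```python
# Sketch of the optional-import + dynamic type-tuple pattern (names are
# illustrative placeholders for the real draft proposer symbols).
try:
    from draft_proposer import DraftProposer  # may be absent on this branch
except ImportError:
    DraftProposer = None

# Build the isinstance() tuple dynamically so checks work either way.
_PROPOSER_TYPES = tuple(t for t in (DraftProposer,) if t is not None)

def is_draft_proposer(obj):
    """True only when the draft module is importable and obj matches."""
    return bool(_PROPOSER_TYPES) and isinstance(obj, _PROPOSER_TYPES)
```

When the module is missing, the type tuple is empty and the check degrades to `False` instead of raising at import time.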

3.2 Remove stale NPUInputBatch argument

  • File: vllm_ascend/worker/model_runner_v1.py
  • Remove max_num_blocks_per_req from constructor call to match branch signature.
  • Motivation: fix startup TypeError caused by branch API mismatch.
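
The failure mode is the standard keyword-mismatch TypeError, sketched here with an illustrative stand-in class (the real `NPUInputBatch` signature lives in the branch code):

```python
class NPUInputBatch:
    """Illustrative stand-in; the branch signature no longer accepts
    max_num_blocks_per_req."""
    def __init__(self, max_num_reqs):
        self.max_num_reqs = max_num_reqs

try:
    # Old call site: passes a kwarg the constructor no longer takes.
    batch = NPUInputBatch(max_num_reqs=8, max_num_blocks_per_req=16)
except TypeError:
    # Fixed call site matches the current signature.
    batch = NPUInputBatch(max_num_reqs=8)
```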

3.3 Enforce BF16 recurrent state

  • File: vllm_ascend/patch/worker/patch_qwen3_next.py
  • Cast recurrent state to BF16 before torch_npu.npu_recurrent_gated_delta_rule in both spec/non-spec paths.
  • Motivation: satisfy NPU op dtype constraints and remove runtime 500.
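
The dtype guard amounts to a conditional cast before the op call. Since `torch_npu` is not assumed here, the sketch uses a tiny stand-in exposing only the tensor surface the guard touches; in the real patch the cast is `state.to(torch.bfloat16)` immediately before `torch_npu.npu_recurrent_gated_delta_rule`.

```python
class FakeTensor:
    """Tiny stand-in exposing the .dtype / .to() surface the guard uses."""
    def __init__(self, dtype):
        self.dtype = dtype
    def to(self, dtype):
        return FakeTensor(dtype)

def ensure_bf16(state):
    # Cast only when needed so already-bf16 states pass through untouched.
    return state if state.dtype == "bfloat16" else state.to("bfloat16")

casted = ensure_bf16(FakeTensor("float32"))
```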

Testing and validation

Added/updated tests in this PR

  • UT:
    • tests/ut/core/test_recompute_scheduler.py
    • tests/ut/patch/platform/test_patch_mamba_config.py
    • tests/ut/patch/worker/patch_common/test_patch_qwen3_5.py
  • E2E/nightly:
    • tests/e2e/nightly/single_node/ops/singlecard_ops/triton/test_causal_conv1d.py

Runtime validation performed

  • Model: Qwen/Qwen3.5-9B
  • Settings: prefix caching enabled, compare mamba_cache_mode=align vs all
  • Outcome:
    • Service and /v1/completions succeed after fixes.
    • A small-sample latency check indicates `all` mode outperforms `align` in this environment.
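
A small-sample latency comparison like the one above can be harnessed as follows. The `/v1/completions` route is the standard OpenAI-compatible vLLM endpoint and the model name is from this validation; the timing harness itself is an assumption, and a no-op stand-in replaces the real HTTP POST to keep the sketch self-contained.

```python
import json
import statistics
import time

def median_latency(send, payload, n=5):
    """Median wall-clock latency (seconds) over n identical requests."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        send(json.dumps(payload))
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

payload = {"model": "Qwen/Qwen3.5-9B", "prompt": "hi", "max_tokens": 32}
# In the real run, `send` POSTs the body to the server's /v1/completions
# route once per cache mode; here a no-op keeps the sketch runnable.
latency = median_latency(lambda body: None, payload)
```

Running the same harness against an `align` server and an `all` server gives the two medians being compared.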

Does this PR introduce any user-facing change?

Yes.

  • Users can run and validate Qwen3.5 all-mode prefix-caching path more reliably on Ascend.
  • Fixes eliminate startup/runtime failures in the verified scenario.

Risk assessment

  • Changes include scheduler + worker + operator surfaces, but are covered by targeted UT/E2E additions.

  • Runtime unblock fixes are minimal and localized to compatibility/dtype enforcement.

  • vLLM version: v0.16.0

  • vLLM main: vllm-project/vllm@4034c3d

lHrenry and others added 4 commits March 11, 2026 08:48
- add worker patch module for mamba batch memcpy kernel override
- register patch import in worker patch init to apply override at runtime

Signed-off-by: lHrenry <luohairui@luohairuideMacBook-Air.local>
- support mamba all-mode across config/scheduler/model runner and qwen3.5 patch flow
- extend causal_conv1d host/kernel to handle 2D token-level cache indices
- add UT/e2e coverage for prefill/decode all-mode paths and token snapshot writeback

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
- make draft proposer import optional to avoid hard dependency when MTP draft module is absent
- remove stale NPUInputBatch argument max_num_blocks_per_req for branch compatibility
- cast recurrent state to bf16 before npu_recurrent_gated_delta_rule to satisfy NPU op dtype constraints

These fixes are required to keep Qwen3.5-9B prefix-caching perf runs stable and comparable between mamba_cache_mode=align and all.

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
@github-actions

github-actions bot commented Apr 1, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message following the PR description to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@github-actions

github-actions bot commented Apr 7, 2026

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@lHrHenry233 lHrHenry233 deleted the qwen35-all-mode-prefix-caching-clean branch April 10, 2026 03:49
lHrHenry233 pushed a commit that referenced this pull request Apr 10, 2026
…(v3.1)

- Port upstream _causal_conv1d_fwd_kernel as NPU Triton kernel
  - Handles initial/final/intermediate conv state in-kernel
  - Supports APC block boundary state writes
  - NPU adaptations: removed .cache_modifier, kept debug_barrier
- Rewrite causal_conv1d_fn to dispatch to new Triton kernel
- Rewrite gdn.py conv1d path: split decode/prefill like upstream
  - Decode: causal_conv1d_update_npu with block params
  - Prefill: causal_conv1d_fn with APC params (new kernel)
- Fix SSM #6: _build_initial_state only zeros prefill sequences
- Fix SSM #7: _write_final_states adds slot >= 0 validation
- Fix SSM #8: _scatter_intermediate_states adds unaligned offset
- Update all 36 UTs to pass with new num_computed_tokens_all field

Alignment status vs upstream #26807:
  #1 conv1d prefill kernel:     FIXED (kernel ported)
  #3 causal_conv1d_fn params:   FIXED (rewritten)
  #4 intermediate conv state:   FIXED (kernel internal)
  #6 SSM zeroing scope:         FIXED
  #7 _write_final_states guard: FIXED
  #8 SSM scatter alignment:     FIXED
  #9 causal_conv1d_fn signature: FIXED
  #2 decode pre-copy:           KEEP (NPU needs it)
  #5 SSM decode index:          OK (correct approach)
  #10 conv layout hardcoded:    DEFERRED

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
