
[Bugfix][Worker] Unblock Qwen3.5-9B align/all perf validation#7

Closed
lHrHenry233 wants to merge 2 commits into main from
qwen35-all-mode-prefix-caching-clean

Conversation


@lHrHenry233 lHrHenry233 commented Apr 1, 2026

What this PR does / why we need it?

This PR addresses two runtime blockers found while validating Qwen3.5 Mamba prefix-caching performance under mamba_cache_mode=align vs all on single-card Ascend (32G HBM).

Background and motivation

During end-to-end perf runs on Qwen/Qwen3.5-9B, the service hit two production-path failures:

  1. A hard dependency on the spec-decode draft proposer caused a startup/import failure when the draft-model module was absent from the runtime package layout.
  2. An NPU recurrent-op dtype mismatch (state passed as FP32) caused a runtime 500 in torch_npu.npu_recurrent_gated_delta_rule, which expects BF16 on this path.

Additionally, there was one constructor compatibility mismatch in this branch:

  • NPUInputBatch.__init__() does not accept max_num_blocks_per_req, but the caller still passed it.

These issues blocked stable apples-to-apples perf comparison of align/all mode, so this PR focuses on minimal, targeted fixes.

Code changes

1) Make draft proposer import optional (decouple from mandatory MTP draft module)

  • File: vllm_ascend/worker/model_runner_v1.py
  • Change:
    • Wrap AscendDraftModelProposer import in try/except ModuleNotFoundError.
    • Build draft_proposer_types tuple dynamically in isinstance/assert checks.
  • Motivation:
    • Keep speculative path functional without hard crash when draft proposer module is absent.
    • Preserve existing behavior when module is present.
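
The optional-import pattern described above can be sketched as follows. The module path here is a deliberate stand-in for illustration, not the real vllm_ascend import path:

```python
# Sketch of the optional-import pattern; the module name below is a
# stand-in that is expected NOT to exist, so the except branch runs.
try:
    from nonexistent_draft_module import AscendDraftModelProposer
except ModuleNotFoundError:
    AscendDraftModelProposer = None

# Build the proposer-type tuple dynamically so isinstance/assert checks
# stay valid whether or not the draft module could be imported.
draft_proposer_types = tuple(
    t for t in (AscendDraftModelProposer,) if t is not None
)

def assert_known_proposer(proposer) -> None:
    # Only enforce the type check when at least one proposer type loaded;
    # with the module absent this is a no-op instead of a hard crash.
    if draft_proposer_types:
        assert isinstance(proposer, draft_proposer_types)
```

When the module is present the tuple is non-empty and the original isinstance checks behave exactly as before.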

2) Remove stale constructor arg for branch compatibility

  • File: vllm_ascend/worker/model_runner_v1.py
  • Change:
    • Remove max_num_blocks_per_req=max_num_blocks from NPUInputBatch(...) call.
  • Motivation:
    • Current NPUInputBatch.__init__ signature in this branch does not accept that argument.
    • Prevent startup-time TypeError and restore request execution path.
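
The mismatch can be reproduced with a simplified stand-in constructor; the PR itself simply drops the stale keyword, but a signature-based filter (shown here purely as an illustration, not what the PR does) makes the failure mode clear:

```python
import inspect

class NPUInputBatch:
    # Simplified stand-in for the branch's constructor, which no longer
    # accepts max_num_blocks_per_req.
    def __init__(self, max_num_reqs: int, max_model_len: int):
        self.max_num_reqs = max_num_reqs
        self.max_model_len = max_model_len

# Passing the stale kwarg would raise TypeError at startup.  Filtering
# kwargs against the actual signature shows which arguments survive:
kwargs = {"max_num_reqs": 8, "max_model_len": 4096,
          "max_num_blocks_per_req": 64}
accepted = set(inspect.signature(NPUInputBatch.__init__).parameters) - {"self"}
batch = NPUInputBatch(**{k: v for k, v in kwargs.items() if k in accepted})
```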

3) Enforce BF16 recurrent state for NPU gated delta rule

  • File: vllm_ascend/patch/worker/patch_qwen3_next.py
  • Change:
    • Before calling torch_npu.npu_recurrent_gated_delta_rule, cast ssm_state to BF16 when needed in both spec and non-spec decode branches.
  • Motivation:
    • Match NPU operator dtype constraints and avoid runtime 500.
    • Keep change localized to callsite with minimal behavior impact.
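
A minimal sketch of the callsite-local cast (the helper name is an illustration; the actual NPU op requires Ascend hardware and is not called here):

```python
import torch

def ensure_bf16(state: torch.Tensor) -> torch.Tensor:
    # Cast only when needed, so the common already-BF16 case stays zero-copy.
    if state.dtype != torch.bfloat16:
        state = state.to(torch.bfloat16)
    return state

# Callsite sketch: the real call goes to
# torch_npu.npu_recurrent_gated_delta_rule, which expects a BF16 state
# on this path.
ssm_state = torch.zeros(2, 4, dtype=torch.float32)
ssm_state = ensure_bf16(ssm_state)
```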

Does this PR introduce any user-facing change?

  • User-visible behavior change is limited to robustness:
    • Service no longer fails in the above runtime scenarios.
    • Enables stable Qwen3.5-9B perf validation flow for align/all cache mode.

How was this patch tested?

Local verification

  • Syntax check:
    • python3 -m py_compile vllm_ascend/worker/model_runner_v1.py vllm_ascend/patch/worker/patch_qwen3_next.py

Runtime validation (Ascend NPU)

  • Model: Qwen/Qwen3.5-9B
  • Server mode: --enable-prefix-caching --enforce-eager
  • Compared:
    • --mamba-cache-mode align
    • --mamba-cache-mode all
  • Result:
    • Both modes serve /v1/completions successfully after fixes.
    • A small-sample latency run showed all faster than align in this environment.

Risk assessment

  • Scope is narrow and targeted to error paths uncovered in real runtime.

  • No architectural behavior changes beyond compatibility and dtype safety.

  • vLLM version: v0.16.0

  • vLLM main: vllm-project/vllm@4034c3d

- support mamba all-mode across config/scheduler/model runner and qwen3.5 patch flow
- extend causal_conv1d host/kernel to handle 2D token-level cache indices
- add UT/e2e coverage for prefill/decode all-mode paths and token snapshot writeback

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
- make draft proposer import optional to avoid hard dependency when MTP draft module is absent
- remove stale NPUInputBatch argument max_num_blocks_per_req for branch compatibility
- cast recurrent state to bf16 before npu_recurrent_gated_delta_rule to satisfy NPU op dtype constraints

These fixes are required to keep Qwen3.5-9B prefix-caching perf runs stable and comparable between mamba_cache_mode=align and all.

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>

github-actions bot commented Apr 1, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by fulfilling the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@lHrHenry233 lHrHenry233 closed this Apr 1, 2026
@lHrHenry233 lHrHenry233 deleted the qwen35-all-mode-prefix-caching-clean branch April 10, 2026 03:49
lHrHenry233 pushed a commit that referenced this pull request Apr 10, 2026
…(v3.1)

- Port upstream _causal_conv1d_fwd_kernel as NPU Triton kernel
  - Handles initial/final/intermediate conv state in-kernel
  - Supports APC block boundary state writes
  - NPU adaptations: removed .cache_modifier, kept debug_barrier
- Rewrite causal_conv1d_fn to dispatch to new Triton kernel
- Rewrite gdn.py conv1d path: split decode/prefill like upstream
  - Decode: causal_conv1d_update_npu with block params
  - Prefill: causal_conv1d_fn with APC params (new kernel)
- Fix SSM #6: _build_initial_state only zeros prefill sequences
- Fix SSM #7: _write_final_states adds slot >= 0 validation
- Fix SSM #8: _scatter_intermediate_states adds unaligned offset
- Update all 36 UTs to pass with new num_computed_tokens_all field

Alignment status vs upstream #26807:
  #1 conv1d prefill kernel:     FIXED (kernel ported)
  #3 causal_conv1d_fn params:   FIXED (rewritten)
  #4 intermediate conv state:   FIXED (kernel internal)
  #6 SSM zeroing scope:         FIXED
  #7 _write_final_states guard: FIXED
  #8 SSM scatter alignment:     FIXED
  #9 causal_conv1d_fn signature: FIXED
  #2 decode pre-copy:           KEEP (NPU needs it)
  #5 SSM decode index:          OK (correct approach)
  #10 conv layout hardcoded:    DEFERRED

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
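
The slot >= 0 guard mentioned for _write_final_states (#7 above) can be sketched as follows; tensor names and shapes are assumptions for illustration, not the actual implementation:

```python
import torch

def write_final_states(cache: torch.Tensor,
                       states: torch.Tensor,
                       slots: torch.Tensor) -> None:
    # Hypothetical sketch of the guard: sequences whose cache slot is
    # negative (e.g. padding or unassigned) are skipped instead of
    # scattering into an invalid cache index.
    valid = slots >= 0
    cache[slots[valid]] = states[valid]
```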
