
[Feat][Mamba] Minimal all-mode core validation for prefix caching#9

Closed
lHrHenry233 wants to merge 3 commits into main from smallpr/mamba-all-core

Conversation


lHrHenry233 (Owner) commented Apr 7, 2026

What this PR does

This PR is a focused, minimal validation of the core all-mode behavior under prefix caching on Qwen3.5/Qwen3Next, so the community can review and merge a small, low-risk change first.

Changed files (4):

  • vllm_ascend/patch/platform/patch_mamba_config.py
  • vllm_ascend/core/recompute_scheduler.py
  • tests/ut/patch/platform/test_patch_mamba_config.py
  • tests/ut/core/test_recompute_scheduler.py

Main changes

  1. Config path: when prefix caching is enabled, both mamba_cache_mode values ("align" and "all") use the block-level mamba_block_size.
  2. Scheduler path: disable the align-only block-aligned split in all mode; prefer has_mamba_layers when available, with a model-type fallback.
  3. Tests: add focused UT coverage for all/align/none behavior and the has_mamba_layers branch.
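The config-path change (item 1) can be sketched roughly as follows. This is a hedged illustration only: the helper name `resolve_mamba_block_size` and the `CacheConfig` fields are stand-ins, not the actual `patch_mamba_config.py` API.

```python
# Illustrative sketch, not the real vllm_ascend code: when prefix caching is
# enabled, both "align" and "all" cache modes reuse the KV-cache block size so
# mamba state snapshots line up with cache blocks.
from dataclasses import dataclass


@dataclass
class CacheConfig:
    enable_prefix_caching: bool
    block_size: int
    mamba_cache_mode: str  # "none", "align", or "all"


def resolve_mamba_block_size(cfg: CacheConfig, default: int = 0) -> int:
    """Pick the mamba block size used for state caching."""
    if cfg.enable_prefix_caching and cfg.mamba_cache_mode in ("align", "all"):
        return cfg.block_size
    return default
```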

Why this split

A smaller PR is easier to review, safer to merge, and keeps the core behavior discussion independent from runtime and kernel optimizations.

Validation

  • python3 -m py_compile vllm_ascend/patch/platform/patch_mamba_config.py vllm_ascend/core/recompute_scheduler.py tests/ut/patch/platform/test_patch_mamba_config.py tests/ut/core/test_recompute_scheduler.py
  • python3 -m pytest -sv --noconftest tests/ut/core/test_recompute_scheduler.py tests/ut/patch/platform/test_patch_mamba_config.py
  • Result: 8 passed

- set mamba_block_size to block_size for align/all when prefix caching is enabled
- disable block-aligned split for hybrid qwen3.5/qwen3_next only in all mode
- add focused UT for config and scheduler behavior

This intentionally keeps scope minimal for easier review and mentor-first validation before larger operator/worker changes.
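The scheduler-path decision described above (disable the align-only split in all mode, prefer has_mamba_layers, fall back to model type) could look roughly like this. The function name and the exact model-type strings are illustrative assumptions, not the recompute_scheduler.py implementation.

```python
# Hedged sketch of the scheduler-path decision. Attribute names
# (has_mamba_layers, model_type) follow the PR description; everything else
# is an illustrative stand-in.
from types import SimpleNamespace


def should_block_align_split(mamba_cache_mode: str, model_config) -> bool:
    """Whether the scheduler should split chunks on mamba block boundaries."""
    if mamba_cache_mode != "align":
        # "all" mode caches every block, so the align-only split is disabled;
        # "none" never splits.
        return False
    has_mamba = getattr(model_config, "has_mamba_layers", None)
    if has_mamba is not None:
        # Prefer the explicit capability flag when the model exposes it.
        return bool(has_mamba)
    # Fallback: match the hybrid model types on the current ascend paths
    # (illustrative strings).
    return getattr(model_config, "model_type", "") in ("qwen3_next", "qwen3_5")
```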

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>

github-actions bot commented Apr 7, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a commit message that fulfills the PR description, to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

- decouple the two new UT files from TestBase side effects
- use unittest.TestCase so the tests can run independently
- validated with: python3 -m pytest -sv --noconftest tests/ut/patch/platform/test_patch_mamba_config.py tests/ut/core/test_recompute_scheduler.py
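The decoupling described above amounts to writing the UTs against plain `unittest.TestCase` so they carry no fixture dependencies. A minimal sketch, with an illustrative stand-in for the function under test (the real UTs exercise patch_mamba_config and recompute_scheduler):

```python
import unittest


# Illustrative stand-in for the behavior under test; not the actual
# vllm_ascend function.
def resolve_mamba_block_size(enable_prefix_caching, block_size, mode, default=0):
    if enable_prefix_caching and mode in ("align", "all"):
        return block_size
    return default


class TestMambaBlockSize(unittest.TestCase):
    # Plain unittest.TestCase, so the file runs under `pytest --noconftest`
    # without inheriting side effects from a project-wide TestBase.
    def test_all_mode_uses_block_size(self):
        self.assertEqual(resolve_mamba_block_size(True, 128, "all"), 128)

    def test_none_mode_keeps_default(self):
        self.assertEqual(resolve_mamba_block_size(True, 128, "none"), 0)
```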

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
- prefer has_mamba_layers (when available) to decide all-mode split disabling
- keep hybrid model-name fallback for current ascend paths
- add UT covering has_mamba_layers-driven behavior

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
@lHrHenry233 lHrHenry233 closed this Apr 7, 2026
@lHrHenry233 lHrHenry233 deleted the smallpr/mamba-all-core branch April 7, 2026 13:13
lHrHenry233 pushed a commit that referenced this pull request Apr 10, 2026
…(v3.1)

- Port upstream _causal_conv1d_fwd_kernel as NPU Triton kernel
  - Handles initial/final/intermediate conv state in-kernel
  - Supports APC block boundary state writes
  - NPU adaptations: removed .cache_modifier, kept debug_barrier
- Rewrite causal_conv1d_fn to dispatch to new Triton kernel
- Rewrite gdn.py conv1d path: split decode/prefill like upstream
  - Decode: causal_conv1d_update_npu with block params
  - Prefill: causal_conv1d_fn with APC params (new kernel)
- Fix SSM #6: _build_initial_state only zeros prefill sequences
- Fix SSM #7: _write_final_states adds slot >= 0 validation
- Fix SSM #8: _scatter_intermediate_states adds unaligned offset
- Update all 36 UTs to pass with new num_computed_tokens_all field

Alignment status vs upstream #26807:
  #1 conv1d prefill kernel:     FIXED (kernel ported)
  #3 causal_conv1d_fn params:   FIXED (rewritten)
  #4 intermediate conv state:   FIXED (kernel internal)
  #6 SSM zeroing scope:         FIXED
  #7 _write_final_states guard: FIXED
  #8 SSM scatter alignment:     FIXED
  #9 causal_conv1d_fn signature: FIXED
  #2 decode pre-copy:           KEEP (NPU needs it)
  #5 SSM decode index:          OK (correct approach)
  #10 conv layout hardcoded:    DEFERRED

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>