[Feat][Mamba] Minimal all-mode core validation for prefix caching #9
Closed
lHrHenry233 wants to merge 3 commits into main from
Conversation
- set mamba_block_size to block_size for align/all when prefix caching is enabled
- disable block-aligned split for hybrid qwen3.5/qwen3_next only in all mode
- add focused UT for config and scheduler behavior

This intentionally keeps scope minimal for easier review and mentor-first validation before larger operator/worker changes.

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
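The first bullet can be sketched as a small config patch. This is a hypothetical illustration, not the actual code in `vllm_ascend/patch/platform/patch_mamba_config.py`; the field names `mamba_cache_mode` and `mamba_block_size` follow the PR description, but the real config class differs.

```python
# Hedged sketch of the described behavior: when prefix caching is enabled
# and the mamba cache mode is "align" or "all", force mamba_block_size to
# match the KV-cache block_size so cached prefixes stay block-aligned.
from dataclasses import dataclass
from typing import Optional


@dataclass
class CacheConfig:  # stand-in for the real vLLM cache config
    block_size: int
    enable_prefix_caching: bool
    mamba_cache_mode: str = "none"  # "none" | "align" | "all"
    mamba_block_size: Optional[int] = None


def patch_mamba_block_size(cfg: CacheConfig) -> CacheConfig:
    if cfg.enable_prefix_caching and cfg.mamba_cache_mode in ("align", "all"):
        cfg.mamba_block_size = cfg.block_size
    return cfg


cfg = patch_mamba_block_size(
    CacheConfig(block_size=128, enable_prefix_caching=True, mamba_cache_mode="all")
)
print(cfg.mamba_block_size)  # 128
```

Without prefix caching (or in other cache modes) the patch leaves `mamba_block_size` untouched.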
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
- decouple the two new UT files from TestBase side effects
- use unittest.TestCase so the tests can run independently
- validated with: python3 -m pytest -sv --noconftest tests/ut/patch/platform/test_patch_mamba_config.py tests/ut/core/test_recompute_scheduler.py

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
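The decoupling described above amounts to writing the UTs directly against `unittest.TestCase` with no shared base class. A minimal sketch (the helper `resolve_mamba_block_size` is a hypothetical stand-in for the patched config logic, not the real function under test):

```python
# Self-contained unittest-style UT, mirroring the "no TestBase side effects"
# approach: the file imports only stdlib unittest, so it runs standalone
# under `python3 -m pytest --noconftest` or plain unittest discovery.
import unittest


def resolve_mamba_block_size(block_size, mode, prefix_caching):
    # Hypothetical helper standing in for the patched config behavior.
    return block_size if prefix_caching and mode in ("align", "all") else None


class TestMambaConfigPatch(unittest.TestCase):
    def test_align_mode_uses_block_size(self):
        self.assertEqual(resolve_mamba_block_size(64, "align", True), 64)

    def test_disabled_without_prefix_caching(self):
        self.assertIsNone(resolve_mamba_block_size(64, "all", False))
```

Because nothing here touches a conftest or a shared fixture, the test module has no import-order dependency on the rest of the suite.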
- prefer has_mamba_layers (when available) to decide all-mode split disabling
- keep hybrid model-name fallback for current ascend paths
- add UT covering has_mamba_layers-driven behavior

Signed-off-by: lHrHenry233 <lHrHenry233@users.noreply.github.com>
lHrHenry233 pushed a commit that referenced this pull request on Apr 10, 2026
…(v3.1)

- Port upstream _causal_conv1d_fwd_kernel as NPU Triton kernel
  - Handles initial/final/intermediate conv state in-kernel
  - Supports APC block boundary state writes
  - NPU adaptations: removed .cache_modifier, kept debug_barrier
- Rewrite causal_conv1d_fn to dispatch to new Triton kernel
- Rewrite gdn.py conv1d path: split decode/prefill like upstream
  - Decode: causal_conv1d_update_npu with block params
  - Prefill: causal_conv1d_fn with APC params (new kernel)
- Fix SSM #6: _build_initial_state only zeros prefill sequences
- Fix SSM #7: _write_final_states adds slot >= 0 validation
- Fix SSM #8: _scatter_intermediate_states adds unaligned offset
- Update all 36 UTs to pass with new num_computed_tokens_all field

Alignment status vs upstream #26807:
- #1 conv1d prefill kernel: FIXED (kernel ported)
- #3 causal_conv1d_fn params: FIXED (rewritten)
- #4 intermediate conv state: FIXED (kernel internal)
- #6 SSM zeroing scope: FIXED
- #7 _write_final_states guard: FIXED
- #8 SSM scatter alignment: FIXED
- #9 causal_conv1d_fn signature: FIXED
- #2 decode pre-copy: KEEP (NPU needs it)
- #5 SSM decode index: OK (correct approach)
- #10 conv layout hardcoded: DEFERRED

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
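The "slot >= 0 validation" fix (#7) mentioned above can be illustrated with a small sketch. The function and variable names below are assumptions for illustration; the real `_write_final_states` operates on tensors inside the gdn/conv1d path:

```python
# Illustrative sketch of the fix-#7 guard: when scattering final conv states
# back into the state cache, skip any sequence whose cache slot is negative
# (a common sentinel for "no slot assigned"), instead of writing out of range.
def write_final_states(cache, states, slots):
    for seq_idx, slot in enumerate(slots):
        if slot < 0:  # guard added by fix #7: ignore unassigned slots
            continue
        cache[slot] = states[seq_idx]


cache = [[0.0] * 3 for _ in range(4)]           # 4 cache slots, state dim 3
states = [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]     # final states for 2 sequences
write_final_states(cache, states, slots=[2, -1])  # second sequence skipped
print(cache[2])  # [5.0, 5.0, 5.0]
```

Without the guard, a -1 slot would silently write to the last cache row (Python negative indexing) and corrupt another sequence's state.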
What this PR does
This PR is a focused, minimal validation for the core all-mode behavior under prefix caching on Qwen3.5/Qwen3Next, so the community can review and merge a small, low-risk change first.
Changed files (4):
- vllm_ascend/patch/platform/patch_mamba_config.py
- vllm_ascend/core/recompute_scheduler.py
- tests/ut/patch/platform/test_patch_mamba_config.py
- tests/ut/core/test_recompute_scheduler.py

Main changes
- When prefix caching is enabled and mamba_cache_mode in ("align", "all"), use block-level mamba_block_size.
- Prefer has_mamba_layers when available, with model-type fallback.
- Add UT covering the has_mamba_layers branch.

Why this split
A smaller PR is easier to review, safer to merge, and keeps the core behavior discussion independent from runtime and kernel optimizations.
Validation
python3 -m py_compile vllm_ascend/patch/platform/patch_mamba_config.py vllm_ascend/core/recompute_scheduler.py tests/ut/patch/platform/test_patch_mamba_config.py tests/ut/core/test_recompute_scheduler.py

python3 -m pytest -sv --noconftest tests/ut/core/test_recompute_scheduler.py tests/ut/patch/platform/test_patch_mamba_config.py

8 passed