[P/D] Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups #7022

Merged
wangxiyuan merged 8 commits into vllm-project:main from nwpu-zxr:rfc_pd on Mar 10, 2026
Conversation

@nwpu-zxr (Contributor) commented Mar 5, 2026

What this PR does / why we need it?

Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups.

Does this PR introduce any user-facing change?

Yes.

How was this patch tested?

By CI.

@github-actions
Copy link
Copy Markdown
Contributor

github-actions bot commented Mar 5, 2026

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill out the PR description when writing the commit message to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

@gemini-code-assist bot (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Mooncake Layerwise Connector by introducing comprehensive support for hybrid attention managers that utilize multiple KV cache groups. The changes involve a fundamental restructuring of how KV cache metadata, block identifiers, and transfer operations are managed, allowing the system to adapt to diverse KV cache configurations across different model layers. This enables more complex and efficient attention mechanisms, particularly for models with varying attention patterns or specialized architectures like Mamba.

Highlights

  • Multi-Group KV Cache Support: Refactored the Mooncake Layerwise Connector to support multiple KV cache groups, allowing for different KV cache specifications (e.g., Mamba, Full Attention, Sliding Window) across various layers or groups of layers.
  • Enhanced Metadata Handling: Introduced a new LayerMetadata dataclass and updated MooncakeAgentMetadata and ReqMeta to store and transfer layer-specific KV cache base addresses, block lengths, and tensor group indices, enabling fine-grained control over KV cache management (sketched below).
  • Flexible Block ID Management: Modified block ID structures (local_block_ids, remote_block_ids) from single lists to lists of lists (list[list[int]]) to accommodate block IDs for each KV cache group, improving flexibility for hybrid attention mechanisms.
  • Dynamic KV Cache Resharding: Updated the KV cache sending and transfer logic to dynamically handle resharding for different attention types (e.g., Full Attention, Sliding Window) based on pd_head_ratio and KV cache specifications, including dedicated logic for Mamba models.
  • Improved Initialization Flow: Adjusted the initialization sequence for KV transfer, moving ensure_kv_transfer_initialized to initialize_from_config in the worker, ensuring KV transfer is set up with the correct KVCacheConfig.
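To make the new layout concrete, here is a minimal sketch of the per-layer metadata and per-group block IDs described above. The field names are illustrative assumptions inferred from this summary, not the exact definitions in mooncake_layerwise_connector.py.

    from dataclasses import dataclass, field

    # Hypothetical sketch: field names are assumptions inferred from the
    # PR summary, not the actual definitions in the connector.
    @dataclass
    class LayerMetadata:
        kv_cache_base_addr: int  # base address of this layer's KV cache region
        block_len: int           # byte length of one KV cache block in this layer
        tensor_group_idx: int    # KV cache group this layer belongs to

    @dataclass
    class ReqMeta:
        # One list of block IDs per KV cache group (list[list[int]]),
        # rather than a single flat list as before.
        local_block_ids: list[list[int]] = field(default_factory=list)
        remote_block_ids: list[list[int]] = field(default_factory=list)
        remote_layer_metadata: list[LayerMetadata] = field(default_factory=list)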


Changelog
  • vllm_ascend/distributed/kv_transfer/kv_p2p/mooncake_layerwise_connector.py
    • Added SupportsHMA and various KVCacheSpec types to imports.
    • Introduced LayerMetadata dataclass for layer-specific KV cache information.
    • Updated MooncakeAgentMetadata to use layer_metadata instead of kv_caches_base_addr.
    • Modified ReqMeta to use list[list[int]] for block IDs and remote_layer_metadata.
    • Extended SendTask with layer_name and group-indexed block transfer information.
    • Updated SendReqInfo to handle list[list[int]] for local block IDs.
    • Refactored KVCacheSendingLayerThread constructor to accept kv_cache_config, kv_cache_specs, attn_resharding_group_idx, tp_size, and layer_metadata.
    • Rewrote get_transfer_meta to support multi-group KV caches and Mamba-specific transfer logic.
    • Adjusted _transfer_kv_cache and callback_func calls to incorporate layer group indices.
    • Made MooncakeLayerwiseConnector inherit from SupportsHMA.
    • Modified constructors for MooncakeLayerwiseConnectorScheduler and MooncakeLayerwiseConnectorWorker to accept kv_cache_config.
    • Added request_finished_all_groups method to MooncakeLayerwiseConnector and MooncakeLayerwiseConnectorScheduler.
    • Updated MooncakeLayerwiseConnectorScheduler._reqs_need_recv to handle list[list[int]].
    • Adjusted update_state_after_alloc to use blocks.get_block_ids() for multi-group block IDs.
    • Refactored MooncakeLayerwiseConnectorWorker initialization to manage multiple KV cache groups and specs.
    • Replaced remote_kv_caches_base_addr with remote_layer_metadata in MooncakeLayerwiseConnectorWorker.
    • Completely revised register_kv_caches to register KV caches based on LayerMetadata and handle resharding buffers.
    • Updated _get_kv_split_metadata to accept group_idx and use group-specific block sizes.
    • Added _get_kv_split_metadata_for_mamba for Mamba-specific KV split metadata.
    • Modified start_load_kv to iterate through KV cache specs and apply group-specific transfer logic.
    • Updated save_kv_layer to handle attn_metadata as a dictionary and use group-indexed send task attributes.
    • Adjusted update_decoder_info to use remote_layer_metadata and updated logging.
    • Modified send_done_send_signal to include group_idx and use group-indexed trans_count.
  • vllm_ascend/distributed/kv_transfer/utils/utils.py
    • Added block_group_idx parameter to get_transfer_mappings.
    • Updated get_transfer_mappings to use group-indexed local_block_ids and remote_block_ids from req_meta (see the sketch after this changelog).
  • vllm_ascend/distributed/parallel_state.py
    • Incorporated pcp_size into the calculation of group_ranks for init_ascend_model_parallel when num_head_replica > 1.
  • vllm_ascend/worker/worker.py
    • Moved ensure_kv_transfer_initialized call from _init_worker_distributed_environment to initialize_from_config and passed kv_cache_config.
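As an example of the group-indexed plumbing, here is a sketch of how get_transfer_mappings might consume the new block_group_idx parameter; the body is an assumption based on the changelog entry above, not the actual implementation in utils/utils.py.

    # Hypothetical sketch, assuming ReqMeta carries per-group block ID lists
    # (list[list[int]]); the real function in utils/utils.py may differ.
    def get_transfer_mappings(req_meta, block_group_idx: int) -> list[tuple[int, int]]:
        """Pair remote (prefill-side) and local (decode-side) block IDs
        for one KV cache group."""
        remote_ids = req_meta.remote_block_ids[block_group_idx]
        local_ids = req_meta.local_block_ids[block_group_idx]
        return list(zip(remote_ids, local_ids))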
Activity
  • The pull request was tested by CI.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces layer-wise KV cache transfer for the Mooncake architecture, enabling heterogeneous memory architecture (HMA) support. It includes changes to metadata structures, data transfer logic, and parallel processing groups. The code introduces a LayerMetadata dataclass and modifies ReqMeta and MooncakeAgentMetadata to support layer-specific information. The KVCacheSendingLayerThread is updated to handle different KV cache specifications, including Mamba models, and to manage head resharding. The MooncakeLayerwiseConnector class is modified to support HMA and to manage KV cache transfer between scheduler and worker nodes. Review comments suggest improvements to the logic for populating layer2group_ids to prevent errors when a layer name appears in multiple KV cache groups, and to simplify the logic for obtaining reshape_cache_event by directly inspecting the type and attributes of attn_metadata.

Comment on lines +964 to +966:

    if layer_name not in layer2group_ids.keys():
        layer2group_ids[layer_name] = []
    layer2group_ids[layer_name] = i

Severity: high

The logic for populating layer2group_ids is confusing and potentially buggy. Line 965 is redundant: the empty list assigned to layer2group_ids[layer_name] is immediately overwritten on the next line. More importantly, if a layer name appears in multiple kv_cache_group_specs, its group index is silently overwritten, which could lead to incorrect behavior if a layer is not supposed to belong to multiple groups. It would be safer to raise an error in such cases to prevent subtle bugs.

Suggested change:

    if layer_name in layer2group_ids:
        raise ValueError(
            f"Layer '{layer_name}' is defined in multiple KV cache groups.")
    layer2group_ids[layer_name] = i
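For context, a runnable sketch of the population loop with the suggested guard in place; the KVCacheGroupSpec stub and sample layer names below are illustrative assumptions, and only the guard-and-assign logic mirrors the suggestion above.

    from dataclasses import dataclass

    # Stub standing in for the real KV cache group spec type.
    @dataclass
    class KVCacheGroupSpec:
        layer_names: list[str]

    # Hypothetical input: two KV cache groups covering disjoint layers.
    kv_cache_group_specs = [
        KVCacheGroupSpec(layer_names=["model.layers.0.attn", "model.layers.1.attn"]),
        KVCacheGroupSpec(layer_names=["model.layers.2.attn"]),
    ]

    layer2group_ids: dict[str, int] = {}
    for i, group_spec in enumerate(kv_cache_group_specs):
        for layer_name in group_spec.layer_names:
            # Raise instead of silently overwriting, per the review suggestion.
            if layer_name in layer2group_ids:
                raise ValueError(
                    f"Layer '{layer_name}' is defined in multiple KV cache groups.")
            layer2group_ids[layer_name] = i

    print(layer2group_ids)
    # {'model.layers.0.attn': 0, 'model.layers.1.attn': 0, 'model.layers.2.attn': 1}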

Comment on lines 1262 to 1269:

    if (type(attn_metadata) == dict and not getattr(attn_metadata[layer_name], "reshape_cache_event", None)) or \
            (not getattr(attn_metadata, "reshape_cache_event", None)):
        reshape_cache_event = torch.npu.Event()
        reshape_cache_event.record()
    elif self.use_mla:
        reshape_cache_event = attn_metadata[layer_name].reshape_cache_event
    else:
        reshape_cache_event = attn_metadata.reshape_cache_event

Severity: high

The logic to get reshape_cache_event is complex and brittle. It uses type() == dict, which does not handle dict subclasses, and it relies on self.use_mla to infer the structure of attn_metadata. This indirect dependency makes the code hard to understand and maintain, and could cause runtime errors if attn_metadata's structure does not match the use_mla flag. A more robust approach is to inspect the type and attributes of attn_metadata directly.

Suggested change:

    reshape_cache_event = None
    if isinstance(attn_metadata, dict):
        if layer_name in attn_metadata:
            reshape_cache_event = getattr(attn_metadata[layer_name],
                                          "reshape_cache_event", None)
    else:
        reshape_cache_event = getattr(attn_metadata, "reshape_cache_event", None)
    if reshape_cache_event is None:
        reshape_cache_event = torch.npu.Event()
        reshape_cache_event.record()
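As a quick illustration of the review's point about type() == dict: an exact-type check rejects dict subclasses, while isinstance accepts them. The subclass name below is made up for the demo.

    class AttnMetadataDict(dict):
        """Made-up dict subclass standing in for a per-layer metadata mapping."""

    attn_metadata = AttnMetadataDict(layer0=object())

    print(type(attn_metadata) == dict)      # False: exact-type check misses subclasses
    print(isinstance(attn_metadata, dict))  # True: isinstance handles subclasses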

@ppppeng requested a review from wangxiyuan as a code owner on March 9, 2026
@zzzzwwjj added the ready (read for review) and ready-for-test (start test by label for PR) labels on Mar 10, 2026
Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com> (repeated on all 8 commits)
@wangxiyuan merged commit 239683c into vllm-project:main on Mar 10, 2026
34 of 38 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Mar 12, 2026

…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (88 commits)
  [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (vllm-project#7022)
  ...
Nagisa125 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 17, 2026
…ith multiple kvcache groups (vllm-project#7022)

### What this PR does / why we need it?
Mooncake Layerwise Connector supports hybrid attention manager with
multiple kvcache groups.

### Does this PR introduce _any_ user-facing change?
Yes.

### How was this patch tested?
By CI.

- vLLM version: v0.16.0
- vLLM main:
vllm-project/vllm@15d76f7

---------

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
@nwpu-zxr deleted the rfc_pd branch on March 24, 2026 07:47
@nwpu-zxr restored the rfc_pd branch on March 24, 2026 07:47
@nwpu-zxr deleted the rfc_pd branch on April 1, 2026 03:06

Labels

ready (read for review), ready-for-test (start test by label for PR)