[P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups #7022
Conversation
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

- If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the Mooncake Layerwise Connector by introducing comprehensive support for hybrid attention managers that utilize multiple KV cache groups. The changes involve a fundamental restructuring of how KV cache metadata, block identifiers, and transfer operations are managed, allowing the system to adapt to diverse KV cache configurations across different model layers. This enables more complex and efficient attention mechanisms, particularly for models with varying attention patterns or specialized architectures like Mamba.
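As background for readers unfamiliar with hybrid attention managers, here is a minimal sketch of the idea: layers with different cache layouts are partitioned into separate KV cache groups, and a layerwise connector has to resolve each layer's group (and layout) before registering memory and issuing transfers. All names and shapes below are invented for illustration and are not the actual vLLM Ascend definitions.

```python
# Hypothetical hybrid-model cache layout (names and shapes invented for illustration):
# full-attention layers store paged KV blocks, Mamba layers store fixed-size state,
# so each KV cache group carries its own per-block layout.
kv_cache_groups = [
    {
        "kind": "full_attention",
        "layer_names": ["layers.0.attn", "layers.2.attn"],
        "block_shape": (2, 128, 8, 64),   # (k/v, block_size, num_kv_heads, head_dim)
    },
    {
        "kind": "mamba",
        "layer_names": ["layers.1.mamba", "layers.3.mamba"],
        "block_shape": (1, 16, 4096),      # conv/ssm state, not token-paged KV
    },
]

# A layerwise connector must pick the right group (and hence layout) per layer
# before transferring that layer's cache.
for group_id, group in enumerate(kv_cache_groups):
    for layer_name in group["layer_names"]:
        print(f"{layer_name}: group {group_id}, block shape {group['block_shape']}")
```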
Activity
Code Review
This pull request introduces layer-wise KV cache transfer for the Mooncake architecture, enabling heterogeneous memory architecture (HMA) support. It includes changes to metadata structures, data transfer logic, and parallel processing groups. The code introduces a LayerMetadata dataclass and modifies ReqMeta and MooncakeAgentMetadata to support layer-specific information. The KVCacheSendingLayerThread is updated to handle different KV cache specifications, including Mamba models, and to manage head resharding. The MooncakeLayerwiseConnector class is modified to support HMA and to manage KV cache transfer between scheduler and worker nodes. Review comments suggest improvements to the logic for populating layer2group_ids to prevent errors when a layer name appears in multiple KV cache groups, and to simplify the logic for obtaining reshape_cache_event by directly inspecting the type and attributes of attn_metadata.
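The review summary above mentions a new LayerMetadata dataclass without showing its definition. A hypothetical sketch of the kind of per-layer information such a structure would carry follows; the field names are assumptions made for illustration, not the connector's actual fields.

```python
from dataclasses import dataclass, field


@dataclass
class LayerMetadata:
    """Hypothetical sketch of per-layer transfer metadata; the real fields live in
    the Mooncake layerwise connector source."""
    layer_name: str                     # e.g. "model.layers.0.self_attn"
    group_id: int                       # index of the KV cache group this layer belongs to
    block_ids: list[int] = field(default_factory=list)  # cache blocks to transfer for this layer
    is_mamba: bool = False              # Mamba layers carry state caches, not paged KV blocks
```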
```python
if layer_name not in layer2group_ids.keys():
    layer2group_ids[layer_name] = []
layer2group_ids[layer_name] = i
```
The logic for populating layer2group_ids is confusing and potentially buggy. Line 965 is redundant as the list assigned to layer2group_ids[layer_name] is immediately overwritten on the next line. More importantly, if a layer name appears in multiple kv_cache_group_specs, the group index will be silently overwritten. This could lead to incorrect behavior if a layer is not supposed to be in multiple groups. It would be safer to raise an error in such cases to prevent subtle bugs.
Suggested change:

```diff
-if layer_name not in layer2group_ids.keys():
-    layer2group_ids[layer_name] = []
-layer2group_ids[layer_name] = i
+if layer_name in layer2group_ids:
+    raise ValueError(f"Layer '{layer_name}' is defined in multiple KV cache groups.")
+layer2group_ids[layer_name] = i
```
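To make the difference concrete, here is a small self-contained sketch (with made-up layer and group data) of the silent-overwrite behavior versus the suggested fail-fast check:

```python
# Hypothetical input where one layer is (incorrectly) listed in two groups.
kv_cache_group_specs = [
    ["layers.0.attn", "layers.1.attn"],
    ["layers.1.attn", "layers.2.mamba"],  # "layers.1.attn" duplicated by mistake
]

# Original logic: the second assignment silently wins, hiding the bad config.
layer2group_ids = {}
for i, layer_names in enumerate(kv_cache_group_specs):
    for layer_name in layer_names:
        layer2group_ids[layer_name] = i
print(layer2group_ids["layers.1.attn"])  # 1 -- membership in group 0 is silently lost

# Suggested logic: surface the misconfiguration immediately.
try:
    layer2group_ids = {}
    for i, layer_names in enumerate(kv_cache_group_specs):
        for layer_name in layer_names:
            if layer_name in layer2group_ids:
                raise ValueError(
                    f"Layer '{layer_name}' is defined in multiple KV cache groups.")
            layer2group_ids[layer_name] = i
except ValueError as e:
    print(e)  # Layer 'layers.1.attn' is defined in multiple KV cache groups.
```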
```python
if (type(attn_metadata) == dict and not getattr(attn_metadata[layer_name], "reshape_cache_event", None)) or \
        (not getattr(attn_metadata, "reshape_cache_event", None)):
    reshape_cache_event = torch.npu.Event()
    reshape_cache_event.record()
elif self.use_mla:
    reshape_cache_event = attn_metadata[layer_name].reshape_cache_event
else:
    reshape_cache_event = attn_metadata.reshape_cache_event
```
The logic to get reshape_cache_event is complex and brittle. It uses type() == dict which is not robust against subclasses, and relies on self.use_mla to infer the structure of attn_metadata. This indirect dependency makes the code hard to understand and maintain, and could lead to runtime errors if attn_metadata's structure doesn't align with the use_mla flag. A more robust approach would be to directly inspect the type and attributes of attn_metadata.
Suggested change:

```diff
-if (type(attn_metadata) == dict and not getattr(attn_metadata[layer_name], "reshape_cache_event", None)) or \
-        (not getattr(attn_metadata, "reshape_cache_event", None)):
-    reshape_cache_event = torch.npu.Event()
-    reshape_cache_event.record()
-elif self.use_mla:
-    reshape_cache_event = attn_metadata[layer_name].reshape_cache_event
-else:
-    reshape_cache_event = attn_metadata.reshape_cache_event
+reshape_cache_event = None
+if isinstance(attn_metadata, dict):
+    if layer_name in attn_metadata:
+        reshape_cache_event = getattr(attn_metadata[layer_name], "reshape_cache_event", None)
+else:
+    reshape_cache_event = getattr(attn_metadata, "reshape_cache_event", None)
+if reshape_cache_event is None:
+    reshape_cache_event = torch.npu.Event()
+    reshape_cache_event.record()
```
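A short illustration of why `isinstance` plus `getattr` is more robust than `type() == dict`, using a toy dict subclass and a stand-in metadata object invented for this example:

```python
from types import SimpleNamespace


class PerLayerMetadata(dict):  # toy subclass standing in for a per-layer attn_metadata mapping
    pass


meta = PerLayerMetadata(layer0=SimpleNamespace(reshape_cache_event=None))

print(type(meta) == dict)      # False -- the subclass falls through the original check
print(isinstance(meta, dict))  # True  -- the suggested check still recognizes it

# getattr with a default also tolerates objects that simply lack the attribute,
# instead of raising AttributeError when the metadata shape and use_mla disagree.
event = getattr(meta.get("layer0"), "reshape_cache_event", None)
print(event)                   # None -> the caller falls back to recording a fresh event
```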
Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
…to qwen3next_graph

* 'main' of https://github.com/vllm-project/vllm-ascend: (88 commits)
  - [main][bugfix] Fixed the problem of speculative decoding in FULL mode (vllm-project#7148)
  - fixed fia pad logic in graph mode. (vllm-project#7144)
  - [Doc] fix DSV3.1 PD configs (vllm-project#7187)
  - refactor: add a check before layer_sharding logging (vllm-project#7186)
  - [Build] Add support for Ascend950 chip (vllm-project#7151)
  - Revert "[CI] fix skiped e2e test when upgrade vllm version (vllm-project#6654)" (vllm-project#7166)
  - [MODELRUNNERV2]fix penality ops (vllm-project#7013)
  - [Bugfix][LoRA] Fix the issue when enable LoRA + tp + fully_sharded_loras (vllm-project#6650)
  - [KV Pool]get_num_new_matched_tokens return 0 if token length < block_size (vllm-project#7146)
  - [CI] Build Image for v0.16.0rc1 (vllm-project#7155)
  - [CI] Skip `test_mooncake_layerwise_connector.py` in `ut` (vllm-project#7147)
  - [BugFix]Fix recomputed scheduler bug (vllm-project#7137)
  - [Model] Support Minimax-m2.5 on NPU (vllm-project#7105)
  - [P/D]Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups (vllm-project#7022)
  - Add patch_qwen3_5 for triton ops fused_recurrent_gated_delta_rule (vllm-project#7109)
  - [Doc][ReleaseNote] Add release notes for v0.16.0rc1 (vllm-project#7067)
  - [Misc] Download on both hk and guiyang region (vllm-project#7129)
  - [bugdix] The problem that the w4a8 weight fails to be loaded when the EP is not enabled is resolved. (vllm-project#7090)
  - [eagle][cp] fix eagle_cp enable bug2 (vllm-project#7079)
  - [CI]Upgrade niglty multi-node-tests max-parallel to 2 (vllm-project#7035)
  - ...
…ith multiple kvcache groups (vllm-project#7022)

### What this PR does / why we need it?
Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups.

### Does this PR introduce _any_ user-facing change?
Yes.

### How was this patch tested?
By CI.

- vLLM version: v0.16.0
- vLLM main: vllm-project/vllm@15d76f7

---------

Signed-off-by: nwpu-zxr <zhouxuerong2@huawei.com>
What this PR does / why we need it?
Mooncake Layerwise Connector supports hybrid attention manager with multiple kvcache groups.
Does this PR introduce any user-facing change?
Yes.
How was this patch tested?
By CI.