Conversation
Signed-off-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
Summary of Changes (Gemini Code Assist): This pull request addresses critical issues in DeepSeek-V3.2's Context Parallelism (CP) and P/D disaggregation, which were introduced by a recent refactoring. The changes rectify inconsistent KV transfer states across distributed participants, specifically in scenarios involving the MLA backend. By refining prefill registration, streamlining non-authoritative rank behavior, and enhancing KV manager state transitions, this PR aims to restore stability and correct functionality for these distributed configurations.
Code Review
This pull request introduces support for Context Parallelism (CP) within the disaggregation framework, specifically for prefill operations in MLA backends. Key changes include adding CP rank and size tracking, and designating only one CP rank per attention-TP rank as 'authoritative' for prefill registration, with non-authoritative ranks acting as dummy participants that skip registration and start in a `WaitingForInput` state. A new `poll_and_all_reduce_attn_groups` function is implemented to synchronize request statuses hierarchically across both CP and attention-TP groups, and dummy ranks are now explicitly marked as `Success` upon completion of the last transfer chunk to ensure global consensus. Review comments point out a type hinting issue with `ProcessGroup` in the new `poll_and_all_reduce_attn_groups` function and question the initialization state of dummy ranks in `add_transfer_request`, specifically regarding the `bootstrap_room in self.request_status` check.
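The hierarchical status synchronization described in the review can be sketched as follows. This is a minimal, framework-free simulation, not the PR's actual code: it assumes statuses are encoded as ordered integers and that consensus is taken as the minimum across ranks (mirroring what a `dist.all_reduce` with `ReduceOp.MIN` would compute); all names here are hypothetical.

```python
from enum import IntEnum

class KVPoll(IntEnum):
    # Hypothetical status codes, ordered so that a MIN reduction
    # yields the most conservative state across ranks.
    Failed = 0
    Bootstrapping = 1
    WaitingForInput = 2
    Transferring = 3
    Success = 4

def all_reduce_min(values):
    # Stand-in for a distributed all-reduce with ReduceOp.MIN:
    # every rank in the group ends up with the group-wide minimum.
    return min(values)

def poll_and_all_reduce_attn_groups(cp_group_statuses, tp_group_statuses):
    """Reduce per-request statuses first across the CP group, then
    across the attention-TP group, so every participant agrees on a
    single conservative status per request."""
    cp_consensus = all_reduce_min(cp_group_statuses)
    return all_reduce_min([cp_consensus, *tp_group_statuses])

# A dummy CP rank still in WaitingForInput holds everyone at the
# most conservative state until it is explicitly marked Success.
status = poll_and_all_reduce_attn_groups(
    [KVPoll.Success, KVPoll.WaitingForInput],  # CP group
    [KVPoll.Success],                          # attention-TP group
)
print(status.name)  # prints "WaitingForInput"
```

This also illustrates why the PR forces dummy ranks to `Success` on the last chunk: without that transition, the MIN-style consensus could never reach `Success` globally.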
@xu-yfei can you take a look?
Accuracy test after merging main (fixed conflicts with the new DP routing):
LGTM |
I also added the same logic for nixl and mori. The changes look safe and should not affect any configurations other than CP x P/D. Additionally, in nixl I introduced an explicit runtime error instead of the KeyError that would occur a couple of lines below.
@vladnosiv Did you test pp2 cp8tp8? I found it was broken.
@whybeyoung no, I haven't tried the configuration with PP. But to be honest, right now I still see some problems using CP + P/D in long load tests. In addition, those problems are also observed on commits before the refactoring. I will try to understand a little more in this PR.
upd: as a result, there are no problems with the setup itself. There are problems when enabling the additional hicache storage from my other PR; it is clearly incompatible with CP in its current form, and I will continue there. In addition, I was unable to make a stable setup with MTP with CP, and even in setup
But this also looks out of scope here because it is confirmed with DP8 EP8. cc @whybeyoung
Signed-off-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
I also noticed that CP + P/D seemed to have issues under load before the CP refactoring, but after the refactoring even single queries stopped working.
The CP group poll part LGTM, but the transfer part might need a general refactor design (so that we can support a hetero setup for CP). I am working on it now. We also want to support KV transfer without a dummy CP rank, so that we can speed up the transfer.
Co-authored-by: Vladislav Nosivskoy <vladnosiv@gmail.com> Signed-off-by: Shangming Cai <csmthu@gmail.com>
@vladnosiv Can you check #19504? I made a PR for general CP support and cleaned up the logic a little bit.
Cherry-picked to #19504 |
Motivation
After PR #17213, DeepSeek-V3.2 with MLA + Context Parallelism in P/D disaggregation could stall or fail due to inconsistent KV transfer state across CP/TP participants.
UPD: there were also problems under load even before the refactoring.
Observed bugs:
This PR makes rank ownership and status propagation deterministic, so that KV transfer works consistently across TP-only, CP-enabled, and mixed TP-attn x CP configurations.
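The deterministic ownership rule can be sketched as a pure function. This is a minimal illustration under the assumption (stated in the review summary) that only CP rank 0 of each attention-TP shard registers with bootstrap; the function name and signature are hypothetical, not the PR's actual identifiers.

```python
def is_authoritative(attn_cp_rank: int, attn_cp_size: int) -> bool:
    """Return True if this CP rank owns prefill registration for its
    attention-TP shard. With CP disabled (size 1) every rank behaves
    as before; with CP enabled, only CP rank 0 registers and the rest
    join as dummy participants that skip bootstrap."""
    return attn_cp_size == 1 or attn_cp_rank == 0

# With attn_cp_size == 4, exactly one rank per shard registers.
roles = ["register" if is_authoritative(r, 4) else "dummy" for r in range(4)]
print(roles)  # prints "['register', 'dummy', 'dummy', 'dummy']"
```

Making ownership a pure function of the rank layout is what removes the ambiguity: every participant can compute locally, without communication, whether it should register or act as a no-op.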
Modifications
- When `attn_cp_size > 1`, only CP rank 0 registers to bootstrap per attention-TP shard
- Non-authoritative CP ranks start in `WaitingForInput` (instead of bootstrapping), matching their no-op role
- On the last transfer chunk (`is_last=True`), the request state on dummy ranks is forced to `Success`
- Added `poll_and_all_reduce_attn_groups` in the prefill flow for all-reduce over CP/TP groups

Accuracy Tests
Setup:
Bench (adapted to the OpenAI-compatible API: #19231)
Results: