
[DSv32] Support CP + P/D#19119

Closed
vladnosiv wants to merge 8 commits into sgl-project:main from vladnosiv:fix-cp-and-pd

Conversation

@vladnosiv
Contributor

@vladnosiv vladnosiv commented Feb 21, 2026

Motivation

After PR #17213, DeepSeek-V3.2 with MLA + Context Parallelism in P/D disaggregation could stall or fail due to inconsistent KV-transfer state across CP/TP participants.
Update: there were already problems under load before the refactoring.

Observed bugs:

  • bootstrap registration collisions: multiple CP ranks overwrote the same bootstrap entry, since they all have attn_tp_rank = 0
  • status desynchronization across CP/attention-TP participants

This PR makes rank ownership and status propagation deterministic, so KV transfer works consistently across TP-only, CP-enabled, and mixed attention-TP x CP configurations.
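The ownership rule can be sketched as a small predicate. This is a minimal sketch, assuming the behavior described above; the name should_register_prefill and the argument names are illustrative, not the exact SGLang attributes:

```python
def should_register_prefill(attn_cp_rank: int, attn_cp_size: int, is_mla: bool) -> bool:
    """Decide whether this rank registers its attention-TP shard to bootstrap.

    With MLA + CP, every CP rank in a shard reports attn_tp_rank == 0, so
    without a guard they would all race to write the same bootstrap entry.
    Only CP rank 0 (the authoritative rank) registers; the rest are no-ops.
    """
    if not is_mla or attn_cp_size <= 1:
        return True  # no CP involved: register exactly as before
    return attn_cp_rank == 0
```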

Modifications

  • Authoritative prefill registration for MLA+CP:
    • In prefill disaggregation mode with MLA backend and attn_cp_size > 1, only CP rank 0 registers to bootstrap per attention-TP shard
    • Non-authoritative CP ranks are treated as explicit no-op participants
  • Non-authoritative prefill senders start from WaitingForInput (instead of bootstrapping), matching their no-op role
  • In Mooncake KV manager, when a rank has no transfer chunks but receives is_last=True, request state is forced to Success
  • Added poll_and_all_reduce_attn_groups in prefill flow for all-reduce over CP/TP groups
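A torch-free sketch of the hierarchical status reduction may help. Assumptions: the real poll_and_all_reduce_attn_groups presumably uses torch.distributed.all_reduce with a MIN-style reduction over actual process groups, and the KVPoll ordering below is illustrative; here the CP and attention-TP groups are simulated as nested lists of per-rank poll values:

```python
from enum import IntEnum

class KVPoll(IntEnum):
    # Illustrative ordering: lower values are "less done", so a MIN
    # reduction yields the most conservative status across ranks.
    Failed = 0
    Bootstrapping = 1
    WaitingForInput = 2
    Transferring = 3
    Success = 4

def reduce_attn_groups(status_grid):
    """status_grid[tp][cp] -> single agreed status.

    Stage 1 reduces within each CP group; stage 2 reduces across the
    attention-TP group, so no rank reports Success until every
    participant has reached it.
    """
    per_tp = [min(cp_statuses) for cp_statuses in status_grid]  # CP stage
    return min(per_tp)                                          # TP stage
```

In the real code each stage would be an all-reduce over the corresponding process group; the simulation only shows why a two-stage MIN gives every rank the same, most conservative answer.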

Accuracy Tests

Setup:

  • Nvidia Dynamo
  • 1 Prefill: CP8 EP1
  • 1 Decode: DP8 EP8

Bench (adapted to openai-compatible api: #19231)

python benchmark/gsm8k/bench_sglang.py --port 8000 --num-questions 500 --num-shots 48 --parallel 100

Results:

Accuracy: 0.960
Invalid: 0.000
Latency: 53.732 s

Signed-off-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @vladnosiv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses critical issues in DeepSeek-V3.2's Context Parallelism (CP) and P/D disaggregation, which were introduced by a recent refactoring. The changes focus on rectifying inconsistent KV transfer states across distributed participants, specifically in scenarios involving the MLA backend. By refining prefill registration, streamlining non-authoritative rank behavior, and enhancing KV manager state transitions, this PR aims to restore stability and correct functionality for these distributed configurations.

Highlights

  • Authoritative Prefill Registration: Implemented logic to ensure only the primary Context Parallelism (CP) rank (rank 0) registers for prefill bootstrapping when using MLA backend with CP disaggregation, preventing registration collisions.
  • Non-Authoritative Rank Handling: Configured non-authoritative CP ranks to start in a 'WaitingForInput' state, aligning with their non-participatory role in the prefill process.
  • KV Transfer State Synchronization: Added a mechanism in the Mooncake KV manager to force a 'Success' state for requests when 'is_last=True' is received, even if no transfer chunks are present, resolving potential stalling issues.
  • Cross-Group All-Reduce: Introduced a new utility function, 'poll_and_all_reduce_attn_groups', to perform all-reduce operations across both Context Parallelism (CP) and Tensor Parallelism (TP) groups, ensuring consistent status synchronization.


Changelog
  • python/sglang/srt/disaggregation/common/conn.py
    • Imported 'get_attention_cp_rank' and 'get_attention_cp_size' for Context Parallelism awareness.
    • Added 'attn_cp_size' and 'attn_cp_rank' attributes to 'KVSender'.
    • Introduced 'should_register_prefill' flag to control bootstrap registration, allowing only CP rank 0 to register for MLA+CP prefill.
    • Modified '_register_to_bootstrap' call to be conditional on 'should_register_prefill'.
    • Updated 'KVSender.init' to initialize non-authoritative CP ranks directly to 'KVPoll.WaitingForInput'.
  • python/sglang/srt/disaggregation/mooncake/conn.py
    • Modified 'add_transfer_request' to set 'KVPoll.Success' if 'is_last' is true and the rank has no transfer chunks, preventing indefinite waiting.
  • python/sglang/srt/disaggregation/prefill.py
    • Imported 'torch.distributed' for distributed operations.
    • Added 'poll_and_all_reduce_attn_groups' function to perform all-reduce across CP and then TP groups.
    • Replaced existing 'poll_and_all_reduce' calls with 'poll_and_all_reduce_attn_groups' in 'pop_bootstrapped', 'process_disagg_prefill_inflight_queue', and 'get_transferred_rids' to ensure proper synchronization across attention groups.
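The Mooncake change can be illustrated with a simplified stand-in. The function name matches the PR description, but the signature and status values here are illustrative, not the exact SGLang API:

```python
def add_transfer_request(request_status: dict, bootstrap_room: int,
                         chunks: list, is_last: bool) -> None:
    """Record transfer work for a request on this rank.

    Dummy (non-authoritative) CP ranks can receive is_last=True while
    holding no chunks to send; marking them Success immediately keeps the
    later status all-reduce from stalling on a rank that never transfers.
    """
    if is_last and not chunks:
        request_status[bootstrap_room] = "Success"
        return
    request_status[bootstrap_room] = "Transferring"
    # ...real code would enqueue `chunks` with the transfer engine here...
```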

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for Context Parallelism (CP) within the disaggregation framework, specifically for prefill operations in MLA backends. Key changes include adding CP rank and size tracking, and designating only one CP rank per attention-TP rank as 'authoritative' for prefill registration, with non-authoritative ranks acting as dummy participants that skip registration and start in a WaitingForInput state. A new poll_and_all_reduce_attn_groups function is implemented to synchronize request statuses hierarchically across both CP and attention-TP groups, and dummy ranks are now explicitly marked as Success upon completion of the last transfer chunk to ensure global consensus. Review comments point out a type hinting issue with ProcessGroup in the new poll_and_all_reduce_attn_groups function and question the initialization state of dummy ranks in add_transfer_request, specifically regarding the bootstrap_room in self.request_status check.

@vladnosiv vladnosiv changed the title [DSv32] Fix CP + P/D broken by CP refactoring [DSv32] Fix CP+P/D and support mixed TP/CP Feb 21, 2026
@vladnosiv vladnosiv changed the title [DSv32] Fix CP+P/D and support mixed TP/CP [DSv32] Fix CP+P/D and support mixed TP/CP+P/D Feb 21, 2026
@whybeyoung
Collaborator

@xu-yfei can you take a look?

@vladnosiv
Contributor Author

Accuracy test after merging main (fixed conflicts with the new DP routing):

Accuracy: 0.958
Invalid: 0.000
Latency: 53.436 s

@whybeyoung
Collaborator

LGTM

@vladnosiv
Contributor Author

I also added the same logic for nixl and mori. The changes look safe and should not affect any configurations other than CP x P/D. Additionally, in nixl I introduced an explicit runtime error in place of the KeyError a couple of lines below.

@whybeyoung
Collaborator

@vladnosiv Did you test pp2 cp8tp8? I found it was broken.

@vladnosiv
Contributor Author

@whybeyoung no, I haven't tried the configuration with PP.

But to be honest, I still see some problems with CP+P/D under long load tests. These problems are also observed on commits from before the refactoring.

I will try to investigate this further in this PR.

@vladnosiv
Contributor Author

Update:

In the end, there are no problems with the setup
* P: CP8
* D: DP8 EP8

There are problems when enabling the additional hicache storage from my other PR; it is clearly incompatible with CP in its current form, and I will continue there.

In addition, I was unable to get a stable setup with MTP and CP. Even with the setup

  • P: DP8 EP8
  • D: DP8 EP8

with SpecV1 I see something similar to deadlocks on the decode side, and with SpecV2 I see device-side asserts.

But this also looks out of scope here, since it reproduces even with DP8 EP8 (without CP).

cc @whybeyoung

Signed-off-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
@vladnosiv
Contributor Author

I also noticed that CP + P/D seemed to have issues under load even before the CP refactoring, but after the refactoring even single queries stopped working.

@vladnosiv vladnosiv changed the title [DSv32] Fix CP+P/D and support mixed TP/CP+P/D [DSv32] Support CP + P/D Feb 26, 2026
@ShangmingCai
Collaborator

ShangmingCai commented Feb 26, 2026

The CP-group poll part LGTM, but the transfer part might need a more general refactored design (so that we can support heterogeneous setups for CP). I am working on it now. We also want to support KV transfer without dummy CP ranks, so that we can speed up the transfer.

ShangmingCai added a commit that referenced this pull request Feb 27, 2026
Co-authored-by: Vladislav Nosivskoy <vladnosiv@gmail.com>
Signed-off-by: Shangming Cai <csmthu@gmail.com>
@ShangmingCai
Collaborator

@vladnosiv Can you check #19504? I made a PR for general CP support and cleaned up the logic a little.

@vladnosiv
Contributor Author

Cherry-picked to #19504

@vladnosiv vladnosiv closed this Feb 28, 2026