
feat: add prefill_data_parallel_rank for external dp dispatch in P/D disaggregation#16059

Open
qy-seu wants to merge 2 commits into sgl-project:main from qy-seu:feat/decouple-pd-dprank

Conversation

@qy-seu
Contributor

@qy-seu qy-seu commented Dec 29, 2025

Motivation

Currently, the data_parallel_rank field in the request body sent to the Decode instance is overloaded with a dual purpose:

  1. It identifies the Decode instance's own DP group (Self-Identity).
  2. It implicitly specifies the target Prefill DP group to connect to for KV cache transfer (Target-Binding).

This coupling forces the Prefill and Decode instances to share the exact same DP rank. It is therefore impossible to schedule a request to a Decode instance in DP group i while fetching the KV cache from a Prefill instance in DP group j (where i != j), which limits the flexibility of external schedulers.

Modifications

  • Interface Protocol:

    • Added an optional field prefill_data_parallel_rank to the request interface.
  • Logic in P/D disaggregation:

    • Decode Dispatch: The request is still routed to the specific Decode DP group based on the existing data_parallel_rank.
    • KV Cache Connection: When running in Prefill/Decode (P/D) disaggregation mode, the Decode instance now uses prefill_data_parallel_rank to explicitly identify and connect to the target Prefill DP group.
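The dispatch rules above can be sketched as follows. This is an illustrative sketch only: the helper name `resolve_prefill_dp_rank` is hypothetical and does not correspond to the actual sglang implementation, but it captures the intended fallback semantics (the new optional field wins when set, otherwise behavior is unchanged).

```python
from typing import Optional

def resolve_prefill_dp_rank(
    data_parallel_rank: Optional[int],
    prefill_data_parallel_rank: Optional[int],
) -> Optional[int]:
    """Pick the Prefill DP group to connect to for KV cache transfer.

    If the new optional field is provided, it explicitly identifies the
    target Prefill DP group. Otherwise, fall back to the legacy behavior,
    where the Decode instance's own DP rank doubles as the target.
    """
    if prefill_data_parallel_rank is not None:
        return prefill_data_parallel_rank
    return data_parallel_rank

# Legacy request: Decode DP group 2 also connects to Prefill DP group 2.
assert resolve_prefill_dp_rank(2, None) == 2

# Decoupled request: routed to Decode DP group 2, KV cache fetched
# from Prefill DP group 5.
assert resolve_prefill_dp_rank(2, 5) == 5
```

Note that a request carrying only `data_parallel_rank` behaves exactly as before, so the new field is backward compatible.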

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @qianyang01, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the flexibility of data parallel dispatch in Prefill/Decode (P/D) disaggregation mode. Previously, the data_parallel_rank field served a dual, coupled purpose, limiting how requests could be routed and KV cache transferred between different data parallel groups. By introducing a dedicated prefill_data_parallel_rank field, the system can now explicitly define which Prefill data parallel group a Decode instance should connect to for KV cache, independent of its own data parallel rank. This change empowers external schedulers with greater control and optimization possibilities for distributed inference workloads.

Highlights

  • Decoupled DP Ranks: Introduced prefill_data_parallel_rank to explicitly specify the target Prefill Data Parallel (DP) group for KV cache transfer, separating it from the Decode instance's own data_parallel_rank.
  • Enhanced Flexibility: Allows external schedulers to connect a Decode instance in DP group i to a Prefill instance in DP group j (where i != j), removing previous coupling constraints and enabling more flexible resource allocation.
  • Interface Update: Added prefill_data_parallel_rank as an optional field to the request interface across various components, including EngineBase, engine, OpenAI protocols, and internal I/O structures.


Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces the prefill_data_parallel_rank field to decouple the Prefill and Decode data parallelism ranks, allowing for more flexible scheduling in P/D disaggregation mode. The changes correctly propagate this new parameter through the request lifecycle, from the API entrypoints down to the core logic in the decode server. The implementation is straightforward and addresses the motivation well. I have one minor suggestion to improve code readability.

I am having trouble creating individual review comments, so my feedback is included below.

python/sglang/srt/managers/io_struct.py (644-648)

medium

The conditional assignment here is redundant. Since self.prefill_data_parallel_rank is already of type Optional[int], it will be either an integer or None. The expression x if x is not None else None is equivalent to just x. This can be simplified for better readability. A similar pattern is also present for data_parallel_rank above.

            prefill_data_parallel_rank=self.prefill_data_parallel_rank,
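The reviewer's point rests on a simple identity: for a value of type `Optional[int]`, the expression `x if x is not None else None` always evaluates to `x` itself. A minimal illustration of the equivalence (the function names here are for demonstration and do not appear in io_struct.py):

```python
from typing import Optional

def redundant(x: Optional[int]) -> Optional[int]:
    # The pattern flagged in the review: the conditional is a no-op,
    # since both branches yield the original value.
    return x if x is not None else None

def simplified(x: Optional[int]) -> Optional[int]:
    # The suggested replacement: pass the value through directly.
    return x

# Both forms agree for every Optional[int] value, including None and 0.
for value in (None, 0, 7, -3):
    assert redundant(value) == simplified(value)
```

The same simplification applies to the analogous `data_parallel_rank` assignment mentioned in the comment.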

@ShangmingCai
Collaborator

Can you check this PR: #14726? There are some similar efforts.

@qy-seu
Contributor Author

qy-seu commented Feb 5, 2026

Can you check this PR: #14726? There are some similar efforts.

The implementation in #14726 introduces a dependency on a Bootstrap server for DP rank synchronization, which seems prone to race conditions if the Decode node runs ahead of the Prefill node?

