
bugfix(MC2): refactor the comm group of MC2 to be compatible with PP#7291

Merged
MengqingCao merged 7 commits into vllm-project:main from pisceskkk:pp/dp_bugfix
Mar 23, 2026
Conversation

@pisceskkk
Contributor

@pisceskkk pisceskkk commented Mar 16, 2026

What this PR does / why we need it?

This PR refactors the communication group of MC2 to keep it consistent with vllm's EP group, making it compatible with PP.

How was this patch tested?

vllm serve deepseek/DeepSeek-V2-Lite-Chat -dp 2 -pp 4 -ep --trust-remote-code --enforce-eager

Prior to this fix, starting the service with the above command would cause an immediate crash; after the modification, the parallel combination of DP+EP+PP now works properly.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the MC2 communication group setup within the vllm_ascend distributed module. By refactoring how parallel groups are initialized and how ranks are assigned, it ensures that the system can properly handle a combination of Data Parallelism, Expert Parallelism, and Pipeline Parallelism, resolving crashes encountered in such configurations.

Highlights

  • Prefill Context Parallelism Integration: Introduced global_pcp_size (prefill context parallel size) as a distinct variable to manage parallelism configurations more explicitly.
  • MC2 Communication Group Refactoring: Modified the calculation of _MC2 communication group ranks to correctly incorporate global_dp_size (data parallel size) and global_pcp_size, ensuring compatibility with Pipeline Parallelism (PP).
  • Parallel Group Initialization Update: Updated the init_ascend_model_parallel function to use the newly defined global_pcp_size when initializing parallel groups, specifically for _P_TP.
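The grouping described in the highlights above can be sketched as follows. All sizes and the rank-layout formula here are illustrative assumptions for the sketch, not the PR's actual code: the point is that once PP is in the mix, the MC2 (EP-like) group for a given pipeline stage must collect every (DP, PCP, TP) rank of that stage rather than a contiguous slice of ranks.

```python
# Hypothetical sketch of MC2 group construction with PP in the layout.
# Sizes and the (DP, PCP, PP, TP) rank ordering are assumptions for
# illustration, not taken from vllm_ascend/distributed/parallel_state.py.
global_dp_size = 2    # data parallel
global_pcp_size = 1   # prefill context parallel
pp_size = 4           # pipeline parallel
tp_size = 2           # tensor parallel

# One MC2 group per pipeline stage: it gathers all DP * PCP * TP ranks
# that belong to that stage.
mc2_groups = []
for pp_rank in range(pp_size):
    group = []
    for dp_rank in range(global_dp_size):
        for pcp_rank in range(global_pcp_size):
            for tp_rank in range(tp_size):
                # Flat rank under an assumed (DP, PCP, PP, TP) layout.
                rank = ((dp_rank * global_pcp_size + pcp_rank) * pp_size
                        + pp_rank) * tp_size + tp_rank
                group.append(rank)
    mc2_groups.append(group)

# With these sizes, stage 0 groups ranks [0, 1, 8, 9]: the TP ranks of
# both DP replicas at pipeline stage 0, which is what MC2 needs.
```

A naive contiguous split of ranks would instead mix ranks from different pipeline stages into one MC2 group, which is consistent with the crash the PR description reports for DP+EP+PP.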


Changelog
  • vllm_ascend/distributed/parallel_state.py
    • Introduced global_pcp_size variable for prefill context parallel size.
    • Updated the init_ascend_model_parallel function to use global_pcp_size in group initialization.
    • Refactored the _MC2 communication group rank calculation to correctly account for data and prefill context parallelism dimensions.
Activity
  • No review activity or comments have been recorded on this pull request yet.

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by other future PRs.
  • Write a clear commit message and PR description to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request refactors the creation of the _MC2 communication group to make it compatible with pipeline parallelism (PP). The change adjusts the rank grouping logic to be based on pipeline stages by leveraging vllm_all_ranks and a transpose operation. This is intended to resolve a crash when using a combination of data, expert, and pipeline parallelism. The changes are focused on this refactoring. After a thorough review, I have not identified any high or critical severity issues.

@pisceskkk pisceskkk force-pushed the pp/dp_bugfix branch 2 times, most recently from 4ca0b21 to f825e20 on March 16, 2026 at 02:54
Comment thread vllm_ascend/distributed/parallel_state.py
@pisceskkk pisceskkk requested a review from wangxiyuan as a code owner March 16, 2026 11:47
@pisceskkk pisceskkk force-pushed the pp/dp_bugfix branch 2 times, most recently from 578446f to 13fe30f on March 16, 2026 at 12:04
@zhenwenqi2024 zhenwenqi2024 added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 16, 2026
@zhenwenqi2024 zhenwenqi2024 requested a review from liziyu179 March 16, 2026 12:27
Comment thread vllm_ascend/distributed/parallel_state.py Outdated
Comment thread tests/e2e/multicard/4-cards/test_pipeline_parallel.py Outdated
@pisceskkk pisceskkk force-pushed the pp/dp_bugfix branch 2 times, most recently from c52171e to 5e19e12 on March 18, 2026 at 07:52
@pisceskkk pisceskkk requested a review from MengqingCao March 18, 2026 08:09
@github-actions
Contributor

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@pisceskkk
Contributor Author

https://github.com/vllm-project/vllm-ascend/actions/runs/23234531290/job/67535195887
CI failure details from before the rebase, caused by .github/workflows/scripts/config.yaml

@pisceskkk pisceskkk force-pushed the pp/dp_bugfix branch 2 times, most recently from 3f6f672 to 49b2c25 on March 23, 2026 at 01:51
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
…X to be compatible with PP

@MengqingCao MengqingCao merged commit 71df17f into vllm-project:main Mar 23, 2026
38 checks passed
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Mar 25, 2026
…llm-project#7291)

### What this PR does / why we need it?
This PR refactors the communication group of MC2 to keep it consistent
with vllm's EP group, making it compatible with PP.

- vLLM version: v0.17.0
- vLLM main:
vllm-project/vllm@4034c3d
---------
Signed-off-by: QiuChunshuo <qiuchunshuo@huawei.com>
lihaokun-2026 pushed a commit to lihaokun-2026/vllm-ascend that referenced this pull request Mar 29, 2026 (same commit message as above)
chenchuw886 pushed a commit to chenchuw886/vllm-ascend that referenced this pull request Apr 1, 2026 (same commit message as above)
SlightwindSec added a commit to SlightwindSec/vllm-ascend that referenced this pull request Apr 14, 2026
1. Fix intermediate tensor shape in dummy_run when SP+PP enabled
   (from b390e0e, PR vllm-project#5416):
   - Divide intermediate tensor tokens by tp_size when enable_sp()
   - Allocate self.intermediate_tensors with max_tokens / tp_size
   - Prevents extra all-gather on non-first PP ranks causing OOM

2. Refactor MC2 communication group for PP compatibility
   (adapted from 71df17f, PR vllm-project#7291):
   - Reshape all_ranks to 4D: (ExternalDP, DP, PP, TP)
   - MC2 group_ranks now uses EP-like layout via transpose(1,2)
     so ranks within the same PP stage are grouped together
   - Update P_TP alltoall group construction for PP dimension

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
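The rank regrouping described in item 2 of the commit message above (reshape to a 4-D (ExternalDP, DP, PP, TP) layout, then a transpose(1, 2) so ranks within the same PP stage are grouped together) can be sketched with NumPy. The sizes below are illustrative assumptions; the actual implementation lives in vllm_ascend/distributed/parallel_state.py.

```python
import numpy as np

# Illustrative sizes (assumptions for the sketch, not the PR's code):
external_dp, dp, pp, tp = 1, 2, 4, 2
world_size = external_dp * dp * pp * tp

# Reshape the flat rank list into the 4-D (ExternalDP, DP, PP, TP) layout
# the commit message describes.
all_ranks = np.arange(world_size).reshape(external_dp, dp, pp, tp)

# EP-like layout via a transpose of the DP and PP axes: this makes each
# (DP, TP) slice correspond to a single pipeline stage, so flattening it
# yields one MC2 group per (ExternalDP, PP) pair.
mc2_group_ranks = (
    all_ranks.transpose(0, 2, 1, 3)   # -> (ExternalDP, PP, DP, TP)
    .reshape(-1, dp * tp)             # one row per pipeline stage
    .tolist()
)

# Each row now contains the DP and TP ranks of one PP stage, e.g. stage 0
# groups [0, 1, 8, 9] with these sizes.
```

Without the transpose, a plain `reshape(-1, dp * tp)` over the original layout would slice across pipeline stages instead of within one, which matches the incompatibility with PP that this PR fixes.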

Labels

ready (read for review), ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants