[Bugfix] Refactor to support DP parallel in R3#32306
youkaichao merged 1 commit into vllm-project:main
Conversation
Code Review
This pull request effectively addresses a bug in data parallelism for routed experts. The changes correctly introduce rank-specific shared memory resources, preventing race conditions between data-parallel ranks. The logic for handling sharded data within the capture method is sound and properly slices tensors based on the data-parallel rank. Additionally, the refactoring to use the more general VllmConfig is a good improvement for code clarity and maintainability. Overall, the changes are correct and well-implemented.
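The rank-based tensor slicing the review describes can be sketched as follows. This is an illustrative Python sketch only, not vLLM's actual implementation: the names `capture_shard`, `dp_rank`, and `dp_size` are hypothetical, and the real code operates on device tensors rather than lists. The idea is that each data-parallel rank works on a disjoint shard of the global batch, so concurrent ranks never touch overlapping state.

```python
# Hypothetical sketch of per-rank shard slicing (names are illustrative,
# not vLLM APIs): each data-parallel rank owns a disjoint, contiguous
# slice of the global batch, avoiding races between ranks.

def capture_shard(global_batch, dp_rank, dp_size):
    """Return the contiguous slice of `global_batch` owned by `dp_rank`.

    Assumes the batch length is divisible by `dp_size`, as capture-time
    batches are typically padded to a fixed per-rank size.
    """
    if len(global_batch) % dp_size != 0:
        raise ValueError("batch length must be divisible by dp_size")
    per_rank = len(global_batch) // dp_size
    start = dp_rank * per_rank
    return global_batch[start:start + per_rank]

batch = list(range(8))
shards = [capture_shard(batch, r, 4) for r in range(4)]
# The four shards are disjoint and together cover the whole batch.
```

The same pattern extends to shared-memory resources: keying each resource by the rank (rather than sharing one global buffer) is what prevents the race conditions noted above.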
@Mergifyio rebase
✅ Branch has been successfully rebased
Head branch was pushed to by a user without write access
Signed-off-by: xhx1022 <1737006628@qq.com>
Co-authored-by: arlenxu <arlenxu@tencent.com>
Fixes a bug introduced by the DP parallel support for R3 (#28284).