[Core] Avoid unnecessary coordination for non-MoE data parallel #24828
base: main
Conversation
Signed-off-by: zjy0516 <[email protected]>
CC @njhill
Code Review
This pull request introduces an optimization to avoid unnecessary dummy batch execution for non-MoE models in a data-parallel setup, which should reduce overhead. The changes are well-structured, adding a skip_dummy_batch path that is conditionally used based on whether expert parallelism is enabled. The implementation correctly preserves the necessary synchronization for data parallelism while skipping the actual model execution. I've added one comment regarding device placement to prevent potential future bugs. The changes look good overall.
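For illustration, a minimal sketch of the conditional path described above. The names `maybe_run_dummy_batch` and `execute_dummy_batch` here are hypothetical stand-ins, not the exact vLLM API:

```python
# Illustrative sketch only -- function and attribute names are hypothetical.
def maybe_run_dummy_batch(model_runner, dp_size: int,
                          enable_expert_parallel: bool) -> None:
    """Execute a dummy batch only when other DP ranks depend on it."""
    if dp_size > 1 and enable_expert_parallel:
        # MoE layers issue cross-rank collectives (e.g. all-to-all) inside
        # the forward pass, so every rank must run it in lockstep, even
        # when it has no real requests.
        model_runner.execute_dummy_batch()
    # Non-MoE forward passes involve no cross-DP collectives, so an idle
    # rank can skip the execution entirely.
```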
Signed-off-by: zjy0516 <[email protected]>
@njhill Should I also consider the offline inference scenario?
Thanks for this @ZJY0516! I don't think it looks quite like what we had in mind though. We want to avoid doing the additional collectives altogether and ideally avoid initializing the associated torch distributed process groups. We should also be able to avoid the messaging done for coordination of request waves, etc.
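A sketch of that direction, assuming the default process group is already initialized; `dp_ranks` and the gating flags are illustrative, not vLLM's actual wiring:

```python
import torch.distributed as dist

# Illustrative sketch: only create the DP group when collectives will be used.
def maybe_init_dp_group(dp_ranks: list[int], dp_size: int,
                        enable_expert_parallel: bool):
    if dp_size > 1 and enable_expert_parallel:
        # MoE/EP needs cross-DP collectives (token-count sync, all-to-all).
        return dist.new_group(ranks=dp_ranks)
    # Non-MoE: no DP collectives are ever issued, so skip creating the
    # group and the associated coordination messaging altogether.
    return None
```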
@njhill But if self.get_dp_padding is not executed, a non-MoE model with dp > 1 will be blocked forever. Do you have any suggestion?
I'm not sure that I follow; the different ranks should be completely independent in the non-MoE case...
I think it's because there is a …
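(For context, a simplified sketch of why a padding sync like this blocks: `all_reduce` is a collective, so it only completes once every rank in the group calls it. This is not the exact `get_dp_padding` code.)

```python
import torch
import torch.distributed as dist

def get_dp_padding_sketch(num_tokens: int, dp_group) -> int:
    """Simplified version of a DP padding sync, for illustration only."""
    num_tokens_tensor = torch.tensor([num_tokens], dtype=torch.int32)
    # Collective call: blocks until *every* rank in dp_group participates.
    # If an idle rank skips this, the remaining ranks hang here forever.
    dist.all_reduce(num_tokens_tensor, op=dist.ReduceOp.MAX, group=dp_group)
    max_tokens = int(num_tokens_tensor.item())
    return max_tokens - num_tokens  # padding needed to match the largest rank
```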
@ZJY0516 sorry, I was thinking of something different for this. Basically most aspects should work like a non-DP deployment - we don't need the DP process group, etc. The DPCoordinator doesn't need to synchronize request waves, etc.
You mean that for non-MoE models we don't need the DP process group or request-wave synchronization?
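Presumably something along these lines — a sketch using a stub class, since the real DPCoordinator setup differs:

```python
from typing import Optional

class DPCoordinatorStub:
    """Stand-in for the real DPCoordinator, for illustration only."""
    def __init__(self, dp_size: int) -> None:
        self.dp_size = dp_size

def make_coordinator(dp_size: int,
                     enable_expert_parallel: bool) -> Optional[DPCoordinatorStub]:
    # Request-wave synchronization exists so that MoE ranks start and
    # finish forward passes together; without expert parallelism each
    # rank can schedule independently, just like a dp_size == 1 deployment.
    if dp_size > 1 and enable_expert_parallel:
        return DPCoordinatorStub(dp_size)
    return None
```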



Purpose
FIX #24461
Avoid unnecessary coordination for non-MoE data parallel
Test Plan
Test Result
Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.