EPLB: prefer to use physical experts in the same GPU or node #10874
zhyncs merged 6 commits into sgl-project:main
Conversation
Summary of Changes (Gemini Code Assist): This pull request enhances the Expert Parallel Load Balancing (EPLB) mechanism by introducing a preference for assigning physical experts on the same GPU or compute node as the requesting process. This optimization aims to reduce communication overhead and improve performance by keeping expert computation local whenever possible. The changes modify the expert mapping functions to consider GPU and node IDs and refactor the expert assignment logic for better modularity.
Code Review
This pull request introduces a mechanism to prefer physically local experts (same GPU, then same node) to reduce communication overhead. This is achieved by refactoring the expert selection logic into a _find_nearest_expert function and using it to optimize the expert mapping for each GPU rank. While the refactoring is a good improvement, I have a couple of concerns. First, the new logic forces the use of a single nearest expert rather than just preferring it, which could lead to performance bottlenecks by eliminating load balancing opportunities. Second, this new optimization is not applied consistently for all initialization paths, specifically for the 'trivial' expert location case. My review includes suggestions to address these points.
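The review refers to a `_find_nearest_expert` helper introduced by the PR. Below is a hedged, self-contained sketch, not the PR's actual code, contrasting the two behaviors the reviewer distinguishes: always forcing the single nearest replica versus preferring nearby replicas while still balancing load among equally-near candidates. The function names and the distance convention (0 = same GPU, 1 = same node, 2 = remote) are illustrative assumptions.

```python
# Hedged sketch (not the PR's actual implementation): "force the single nearest
# replica" vs. "prefer nearest, but load-balance among equally-near replicas".
import random


def replica_distance(replica_rank: int, my_rank: int, gpus_per_node: int) -> int:
    """Illustrative locality metric: 0 = same GPU, 1 = same node, 2 = remote."""
    if replica_rank == my_rank:
        return 0
    if replica_rank // gpus_per_node == my_rank // gpus_per_node:
        return 1
    return 2


def force_nearest(replicas, my_rank, gpus_per_node):
    """Always returns one fixed replica; every dispatch from this rank hits it."""
    return min(replicas, key=lambda r: replica_distance(r, my_rank, gpus_per_node))


def prefer_nearest(replicas, my_rank, gpus_per_node):
    """Keeps all replicas at the minimal distance and picks one at random,
    so locality is preferred without giving up balancing among local copies."""
    best = min(replica_distance(r, my_rank, gpus_per_node) for r in replicas)
    candidates = [r for r in replicas
                  if replica_distance(r, my_rank, gpus_per_node) == best]
    return random.choice(candidates)


replicas = [2, 3, 11]  # GPU ranks hosting copies of one logical expert
print(force_nearest(replicas, my_rank=1, gpus_per_node=8))   # always 2 (first tie)
print(prefer_nearest(replicas, my_rank=1, gpus_per_node=8))  # 2 or 3, at random
```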
The branch was force-pushed: 9c9e453 to 05f8ca4, 05f8ca4 to 3cfe192, and aa19935 to 56c9455.
Motivation
Similar to #9849, when it is possible to use physical experts on the same node, we prefer to route to those GPU ranks in dynamic mode.
Modifications
When setting `--ep-dispatch-algorithm dynamic`, select an expert on the same GPU first if possible, then an expert on the same node if possible, and finally select an expert at random as a fallback.
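A minimal sketch of this tiered fallback follows, assuming a hypothetical `pick_replica` helper that receives the GPU ranks hosting a logical expert's replicas; this is not the actual SGLang implementation, and deriving the node id from the GPU rank is only for illustration.

```python
# Illustrative sketch of the GPU-first, node-second, random-fallback selection
# described above; `pick_replica` is a hypothetical helper, not SGLang's API.
import random


def pick_replica(replica_gpu_ranks, my_gpu_rank, gpus_per_node):
    my_node = my_gpu_rank // gpus_per_node

    # 1. Prefer a replica hosted on the dispatching GPU itself.
    same_gpu = [r for r in replica_gpu_ranks if r == my_gpu_rank]
    if same_gpu:
        return random.choice(same_gpu)

    # 2. Otherwise prefer a replica on the same node (avoids cross-node traffic).
    same_node = [r for r in replica_gpu_ranks if r // gpus_per_node == my_node]
    if same_node:
        return random.choice(same_node)

    # 3. Fall back to a random replica anywhere in the cluster.
    return random.choice(replica_gpu_ranks)


# Example: replicas live on GPUs 1, 5 and 9; the caller is GPU 4 on an 8-GPU node.
print(pick_replica([1, 5, 9], my_gpu_rank=4, gpus_per_node=8))  # prints 1 or 5
```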
Accuracy Tests
Benchmarking and Profiling
Checklist