Introduce RayPPCommunicator for ray-based PP #21660
Conversation
Signed-off-by: Rui Qiao <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Code Review
This pull request introduces RayPPCommunicator to integrate vLLM's pipeline parallelism with Ray's Compiled DAG by wrapping the existing vLLM _PP communicator. The changes are logical and well-structured. My review focuses on the new ray_communicator.py file, where I've identified two high-severity maintainability issues: a copy-paste error in error messages, and the use of a private Ray API attribute, which could be brittle. Addressing these will improve the quality of this new component.
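The wrapping approach the review describes can be illustrated as a simple delegation pattern: the Ray-facing communicator reuses an already-initialized PP group instead of creating a second NCCL communicator. This is an illustrative sketch only — the class names, method names, and signatures below (`FakePPGroup`, `RayPPCommunicatorSketch`) are assumptions, not the actual vLLM or Ray APIs.

```python
class FakePPGroup:
    """Stand-in for vLLM's _PP group communicator (hypothetical)."""

    def __init__(self):
        self.sent = []

    def send(self, tensor, dst):
        # Record the send instead of doing real NCCL communication.
        self.sent.append((tensor, dst))

    def recv(self, src):
        return f"tensor-from-{src}"


class RayPPCommunicatorSketch:
    """Sketch of a Ray-facing communicator that delegates to an existing
    PP group, so the same underlying communicator is reused rather than
    a second one being created."""

    def __init__(self, pp_group):
        self._pp_group = pp_group  # reuse, don't re-initialize

    def send(self, tensor, dst_rank):
        self._pp_group.send(tensor, dst_rank)

    def recv(self, src_rank):
        return self._pp_group.recv(src_rank)


group = FakePPGroup()
comm = RayPPCommunicatorSketch(group)
comm.send("activations", dst_rank=1)
print(group.sent)    # [('activations', 1)]
print(comm.recv(0))  # tensor-from-0
```

The key design point is that the wrapper holds a reference to the existing group rather than constructing its own, which avoids the inconsistencies the PR description mentions.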
Force-pushed from 5be55e0 to 2cbfd7e.
Force-pushed from 2cbfd7e to 607b877.
@stephanie-wang @youkaichao Could you please help review? Thanks!
```python
lambda: bool(int(os.getenv("VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM", "0"))
),
```

```python
# If the env var is set, it uses a Ray Communicator wrapping
```
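The flag in the diff follows a lazy env-var pattern: wrapping the lookup in a lambda defers reading the variable until the flag is accessed, so a value set after import is still picked up. A minimal sketch of that pattern — the env var name comes from the diff, while the surrounding scaffolding is illustrative:

```python
import os

# Lazy flag: the environment variable is read each time the lambda is
# called, not once at import time. Defaults to "0" (disabled).
VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM = (
    lambda: bool(int(os.getenv("VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM", "0")))
)

# Setting the variable after the flag is defined still takes effect,
# because the lookup happens at call time.
os.environ["VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM"] = "1"
print(VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM())  # True
```

This is also why the guard is cheap to keep around during rollout: users can flip the behavior with an environment variable without a code change.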
Do we need this flag? Can we just always use vLLM's PP communicator?
The change in this PR is on by default. I wanted to keep this config guard for a while so that users can fall back if needed.
This will be cleaned up altogether after the change has been rolled out for a while.
Hi @stephanie-wang, I've addressed your comments and full CI passed. Could you take another look?
@ruisearch42 Is there any perf improvement with this PR? Thanks.
Purpose
Currently, ray-based PP uses a cupy/NCCL communicator created separately from the other communicators in vLLM. This causes inconsistencies and potential conflicts.
This PR introduces RayPPCommunicator, which wraps vLLM's _PP communicator so that the same underlying communicator is used. This is the major change needed for #19758, and an alternative to #19816.
Test Plan
Test with test_pipeline_parallel.py
Test Result
Passed