
[Core][Doc] Default to multiprocessing for single-node distributed case #5230

Merged
simon-mo merged 13 commits into vllm-project:main from njhill:default-dist-executor on Jun 11, 2024

Conversation

@njhill
Member

@njhill njhill commented Jun 3, 2024

Also update docs to reflect support for the multiprocessing distributed executor.

Resolves #4955
Resolves #5221

@youkaichao
Member

youkaichao left a comment

LGTM, thanks!

Comment thread vllm/config.py
@Yard1
Collaborator

Yard1 commented Jun 3, 2024

If we detect we are in a Ray placement group (ray.util.get_current_placement_group() is not None), we should still use Ray by default.
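The check Yard1 describes could be sketched roughly as follows. This is an illustrative heuristic only: the function name and the string labels are hypothetical, not vLLM's actual internals, though `ray.util.get_current_placement_group()` is the real Ray API being referenced.

```python
def choose_distributed_executor(world_size: int) -> str:
    """Pick a distributed executor backend (hypothetical sketch)."""
    if world_size == 1:
        return "uni"  # single GPU: no distributed executor needed
    try:
        import ray
        from ray.util import get_current_placement_group

        # Already running inside a Ray placement group: keep using Ray,
        # as suggested above.
        if ray.is_initialized() and get_current_placement_group() is not None:
            return "ray"
    except ImportError:
        pass  # ray not installed; fall through to multiprocessing
    return "mp"  # default to multiprocessing for the single-node case
```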

Comment thread vllm/config.py Outdated
Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
Comment thread docs/source/serving/distributed_serving.rst Outdated

@rcarrata rcarrata left a comment

LGTM, thanks @njhill !!

@vrdn-23
Contributor

vrdn-23 commented Jun 6, 2024

Would it be possible to also make ray optional, or an extra, when installing vllm as part of this PR? Currently it's installed by default as part of vllm's requirements, so it will always be present.

@njhill
Member Author

njhill commented Jun 7, 2024

@vrdn-23 I had considered this and actually have a change for it in a branch, since this is what we do in our own builds. But right now there are other included dependencies that could also be considered optional, such as pillow, outlines, and lm_format_enforcer, so we should perhaps make those optional extras too. The only one managed that way right now is tensorizer.

Would be good to get thoughts from others on this too.
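For illustration, the extras being discussed could be declared with setuptools' `extras_require` along these lines. This is a sketch, not the actual vllm packaging change; the grouping and version pins are hypothetical.

```python
# Hypothetical extras_require mapping for a setup.py; pins are illustrative.
extras_require = {
    "ray": ["ray >= 2.9"],
    "tensorizer": ["tensorizer >= 2.9.0"],
    "image": ["pillow"],
    "guided-decoding": ["outlines", "lm-format-enforcer"],
}
# Users would then opt in explicitly, e.g.:  pip install "vllm[ray]"
```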

@rcarrata

Since this touches all these dependencies (not only ray, but also other components that could cause issues), I'd suggest opening another PR from the branch @njhill mentioned, alongside a GitHub issue to track the effort.

@simon-mo
Collaborator

+1 I'm in favor of optional dependencies and trimming down our dependencies in general. Please open PRs!


@simon-mo simon-mo merged commit 99dac09 into vllm-project:main Jun 11, 2024
@njhill njhill deleted the default-dist-executor branch June 11, 2024 18:19
robertgshaw2-redhat pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 12, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
@robertgshaw2-redhat
Collaborator

+1

joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jun 27, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 8, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
…se (vllm-project#5230)

Co-authored-by: Antoni Baum <antoni.baum@protonmail.com>

Labels: None yet
Projects: None yet
7 participants