[Bugfix][WideEP] Apply TP Attn + EP MoE fix to other models #24982
Merged
tlrmchlsmth merged 39 commits into vllm-project:main on Sep 27, 2025
Conversation
Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com>
This pull request has merge conflicts that must be resolved before it can be merged.
Runs but wrong answer in this case. Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com>
xuechendi pushed a commit to vllm-project/vllm-gaudi that referenced this pull request on Sep 30, 2025
After vllm-project/vllm#24982 merged, sequence-parallel MoE will be turned on when `enable_expert_parallel=True`, `tp_size > 1`, and `dp_size > 1`. Since for Gaudi there is no choice of `VLLM_ALL2ALL_BACKEND`, we cannot easily bypass it, so this PR aims to support the feature.

```python
class ParallelConfig:
    @property
    def use_sequence_parallel_moe(self) -> bool:
        return (envs.VLLM_ALL2ALL_BACKEND
                in ("allgather_reducescatter", "naive",
                    "deepep_high_throughput", "deepep_low_latency")
                and self.enable_expert_parallel
                and self.tensor_parallel_size > 1
                and self.data_parallel_size > 1)
```

Update: No hard requirement on vllm-project/vllm#25828

---------

Signed-off-by: Wuxun Zhang <wuxun.zhang@intel.com>
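For illustration only, the gating condition quoted above can be exercised with a minimal standalone sketch. The `FakeParallelConfig` class and the concrete values below are assumptions made for the example (the backend name is passed in directly instead of being read from `envs`); this is not vLLM's own code.

```python
from dataclasses import dataclass


@dataclass
class FakeParallelConfig:
    # Hypothetical stand-in for the real ParallelConfig, used only to show
    # when the sequence-parallel MoE path described above would activate.
    all2all_backend: str
    enable_expert_parallel: bool
    tensor_parallel_size: int
    data_parallel_size: int

    @property
    def use_sequence_parallel_moe(self) -> bool:
        # Mirrors the condition quoted in the commit message above.
        return (self.all2all_backend
                in ("allgather_reducescatter", "naive",
                    "deepep_high_throughput", "deepep_low_latency")
                and self.enable_expert_parallel
                and self.tensor_parallel_size > 1
                and self.data_parallel_size > 1)


# TP=2, DP=2 with expert parallelism enabled: the feature is turned on.
cfg = FakeParallelConfig("naive", True, 2, 2)
print(cfg.use_sequence_parallel_moe)  # True

# With DP=1 the condition is not met, so the feature stays off.
cfg_dp1 = FakeParallelConfig("naive", True, 2, 1)
print(cfg_dp1.use_sequence_parallel_moe)  # False
```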
iboiko-habana pushed a commit to iboiko-habana/vllm-gaudi that referenced this pull request on Oct 2, 2025
Same commit message as above, with an additional Signed-off-by: Iryna Boiko <iboiko@habana.ai>
pdasigi pushed a commit to pdasigi/vllm that referenced this pull request on Oct 2, 2025
…ject#24982) Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com>
pdasigi pushed a commit to pdasigi/vllm that referenced this pull request on Oct 2, 2025
…t#25814) Signed-off-by: Roger Wang <hey@rogerw.io>
yewentao256 pushed a commit that referenced this pull request on Oct 3, 2025
Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com> Signed-off-by: yewentao256 <zhyanwentao@126.com>
yewentao256 pushed a commit that referenced this pull request on Oct 3, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request on Oct 11, 2025
…ject#24982) Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com> Signed-off-by: simon-mo <simon.mo@hey.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request on Oct 11, 2025
…t#25814) Signed-off-by: Roger Wang <hey@rogerw.io> Signed-off-by: simon-mo <simon.mo@hey.com>
shyeh25 pushed a commit to shyeh25/vllm that referenced this pull request on Oct 14, 2025
Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com> Signed-off-by: simon-mo <simon.mo@hey.com>
shyeh25 pushed a commit to shyeh25/vllm that referenced this pull request on Oct 14, 2025
Signed-off-by: Roger Wang <hey@rogerw.io> Signed-off-by: simon-mo <simon.mo@hey.com>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request on Oct 20, 2025
…ject#24982) Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request on Oct 20, 2025
…t#25814) Signed-off-by: Roger Wang <hey@rogerw.io>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request on Oct 24, 2025
…ject#24982) Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request on Oct 24, 2025
…t#25814) Signed-off-by: Roger Wang <hey@rogerw.io>
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request on Nov 10, 2025
…ject#24982) Signed-off-by: Tyler Michael Smith <tlrmchlsmth@gmail.com>
Purpose
Prior to this PR, in many cases, using TP Attn and EP MoEs with `--tensor-parallel-size N --data-parallel-size M --enable-expert-parallel` would result in a factor-N amount of redundant work in the MoE layers. This PR extends #24134 to other models, and to the `naive` and `allgather_reducescatter` All2All backends; a representative launch configuration is sketched below.

Test Plan
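As a sketch of the configuration described above, the flags from the Purpose section can be combined with one of the models listed under Test Result. This is illustrative only and not the author's actual test commands; the keyword names mirror the CLI flags, and exact argument support may vary by vLLM version.

```python
# Illustrative offline launch approximating the TP Attn + EP MoE setup:
# TP=2 attention, DP=2, expert-parallel MoE. Treat as a sketch, not the
# test script used in this PR.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-30B-A3B-FP8",  # one of the models listed in Test Result
    tensor_parallel_size=2,          # --tensor-parallel-size N
    data_parallel_size=2,            # --data-parallel-size M
    enable_expert_parallel=True,     # --enable-expert-parallel
)

outputs = llm.generate(
    ["The capital of France is"],
    SamplingParams(temperature=0.0, max_tokens=16),
)
print(outputs[0].outputs[0].text)
```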
Test Result
- Qwen/Qwen3-30B-A3B-FP8:
- Qwen/Qwen3-Next-80B-A3B-Instruct (with `--enforce-eager` due to #25437):
- meta-llama/Llama-4-Scout-17B-16E:
- ibm-granite/granite-4.0-tiny-preview (with `--enforce-eager` due to #25437 (comment)):
- openai/gpt-oss-20b (main at TP4 is almost the same):

Essential Elements of an Effective PR Description Checklist
`supported_models.md` and `examples` for a new model.