
enable ep32 for dispatch_ffn_combine #5787

Merged
wangxiyuan merged 3 commits into vllm-project:main from lhchg:ep32_dispatch_ffn_combine
Jan 13, 2026
Conversation

@lhchg (Contributor) commented Jan 12, 2026

What this PR does / why we need it?

To enable dispatch_ffn_combine for EP (expert parallelism) size 32.

Does this PR introduce any user-facing change?

N/A

How was this patch tested?

Tested with single-operator tests.

@github-actions (Contributor) commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling in the PR description to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot left a comment

Code Review

This pull request enables dispatch_ffn_combine for expert parallelism (EP) sizes up to 32 by updating the guard in select_moe_comm_method. The change is straightforward and aligns with the PR's goal. I've added a comment regarding an outdated comment in a related file that should be updated for consistency.
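The guard change described above can be sketched as follows. This is an illustrative reconstruction, not the actual vllm-ascend code: only `select_moe_comm_method` is named in the review, so the constant name, the function signature, and the fallback method name are assumptions.

```python
# Hypothetical sketch of the updated guard in select_moe_comm_method.
# All names other than select_moe_comm_method are illustrative assumptions.
MAX_DISPATCH_FFN_COMBINE_EP_SIZE = 32  # limit raised by this PR (assumed constant name)


def select_moe_comm_method(ep_size: int, supports_dispatch_ffn_combine: bool) -> str:
    """Pick a MoE communication method based on the expert-parallel (EP) size.

    With the raised limit, dispatch_ffn_combine is selected for EP sizes
    up to and including 32; larger sizes fall back to another method.
    """
    if supports_dispatch_ffn_combine and ep_size <= MAX_DISPATCH_FFN_COMBINE_EP_SIZE:
        return "dispatch_ffn_combine"
    return "all_gather"  # fallback method name is illustrative only
```

Under this sketch, `select_moe_comm_method(32, True)` now picks `dispatch_ffn_combine`, while an EP size above the limit still takes the fallback path.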

Comment thread: vllm_ascend/ascend_forward_context.py
lhchg added 3 commits January 12, 2026 16:44
Signed-off-by: lhchg <lhao_cheng@163.com>
Signed-off-by: lhchg <lhao_cheng@163.com>
Signed-off-by: lhchg <lhao_cheng@163.com>
@lhchg lhchg force-pushed the ep32_dispatch_ffn_combine branch from 4e52b8a to fe1d578 Compare January 12, 2026 08:44
@wangxiyuan wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels Jan 13, 2026
@wangxiyuan wangxiyuan merged commit 4b67998 into vllm-project:main Jan 13, 2026
39 checks passed
wangxiyuan pushed a commit that referenced this pull request Jan 13, 2026
### What this PR does / why we need it?
To support dispatch_ffn_combine ep32 enabled

pick-from: #5787
### Does this PR introduce _any_ user-facing change?
N/A

### How was this patch tested?
Single operator tested

---------

Signed-off-by: lhchg <lhao_cheng@163.com>
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Jan 14, 2026
…to eplb_refactor

* 'main' of https://github.com/vllm-project/vllm-ascend:
  [CI] Fix lint CI (vllm-project#5880)
  [Feature] implement eagle spec decoding for model runner v2 (vllm-project#5840)
  [Quantization] Support compressed tensors moe w8a8 int8 dynamic weight (vllm-project#5718)
  [EPLB][Bugfix] Get expert map from layers (vllm-project#5817)
  [Bugfix] Fixed an accuracy problem of sp with eagle3 (vllm-project#5816)
  [P/D] bugfix for p node force free requset (vllm-project#5431)
  [Lint]Style: Convert `example` to `ruff format` (vllm-project#5863)
  [Main2Main] Upgrade vllm commit to 0109 (vllm-project#5752)
  [Bugfix][P/D] fix layerwise connector for decoder tp size > num kv heads (vllm-project#5846)
  [Test][e2e][LoRA] Add more e2e tests to cover scenarios of LoRA (vllm-project#4075)
  [CustomOp][Perf] Merge Q/K split to simplify AscendApplyRotaryEmb for better performance (vllm-project#5799)
  [Lint]Style: Convert `root`, `benchmarks`, `tools` and `docs` to `ruff format` (vllm-project#5843)
  enable ep32 for dispatch_ffn_combine (vllm-project#5787)
aipaes pushed a commit to aipaes/vllm-ascend that referenced this pull request Jan 15, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026

Labels

module:core · ready (read for review) · ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants