
[Feature]Use DispatchGmmCombineDecode operator to replace MC2(Optional)#5040

Merged
wangxiyuan merged 1 commit into vllm-project:main from wangqiankun13:add_fused_mc2
Dec 21, 2025

Conversation

@wangqiankun13
Contributor

@wangqiankun13 wangqiankun13 commented Dec 15, 2025

What this PR does / why we need it?

This PR adds model-side integration for the previously introduced experimental AscendC fused operator DispatchGmmCombineDecode, used in MoE decoding.

The operator implementation itself was added in a prior PR, #4139.
This change only adapts the model execution path to optionally use the fused operator.

When the environment variable VLLM_ASCEND_ENABLE_FUSED_MC2=2 is set, the original MC2 path composed of multiple operators (A8W8 dispatch → GMM → SwiGLU → GMM → combine) is replaced by the single fused operator DispatchGmmCombineDecode.

By default, the existing multi-operator MC2 implementation is preserved.
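The gating described above can be sketched as follows. This is an illustrative Python sketch, not the actual vllm-ascend code: the helper name `select_moe_path` and the returned labels are hypothetical; only the environment variable name and the operator pipeline come from the PR description.

```python
import os

def select_moe_path(quant_type: str) -> str:
    """Pick the MoE decode path based on the env toggle and quantization."""
    fused_level = int(os.environ.get("VLLM_ASCEND_ENABLE_FUSED_MC2", "0"))
    if fused_level == 2 and quant_type == "w8a8_dynamic":
        # Single fused operator replaces dispatch -> GMM -> SwiGLU -> GMM -> combine.
        return "dispatch_gmm_combine_decode"
    # Default: the existing multi-operator MC2 pipeline is preserved.
    return "mc2_multi_op"

print(select_moe_path("w8a8_dynamic"))  # prints the selected path for this env
```

With the variable unset, the default multi-operator path is chosen, matching the opt-in behavior the PR describes.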

Does this PR introduce any user-facing change?

How was this patch tested?

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the DispatchGmmCombineDecode operator as a new fused MoE communication method (FUSED_MC2) to replace the existing MC2 implementation for w8a8_dynamic quantization on Ascend hardware. The changes include adding the necessary enum, environment variable, and implementation logic. My review focuses on improving code clarity and fixing a potential bug related to handling shared experts. I've identified a misleading docstring, hardcoded values that should be parameterized, and a complex conditional that could be simplified for better maintainability.

Comment on lines +382 to +383
shared_expert_num=1,
shared_expert_rank_num=0,
Contributor


critical

The parameters shared_expert_num and shared_expert_rank_num are hardcoded. This implementation ignores the shared_experts parameter passed to the function, which likely contains the necessary information for handling shared experts. This will lead to incorrect behavior when shared experts are used. The values should be derived from the function arguments to correctly support shared experts.
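A minimal sketch of what deriving these values could look like. The helper and the separate rank-count argument are hypothetical; only the parameter names come from the snippet above.

```python
# Hypothetical helper: derive shared-expert parameters from the function's
# arguments instead of hardcoding them. Passing `shared_expert_rank_num`
# explicitly is an assumption for illustration.
def resolve_shared_expert_args(shared_experts, shared_expert_rank_num=0):
    if shared_experts is None:
        # No shared experts configured: the hardcoded defaults are only valid here.
        return {"shared_expert_num": 1, "shared_expert_rank_num": 0}
    return {
        "shared_expert_num": len(shared_experts),
        "shared_expert_rank_num": shared_expert_rank_num,
    }
```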

Contributor Author


Done

Comment on lines +322 to +329
"""This implementation is for the scenarios listed below:
1. `enable_expert_parallel=True`.
2. `npu_moe_distribute_dispatch` and `npu_moe_distribute_combine` are available.
3. `enable_expert_parallel=False` is not supported.

This implementation uses the FusedMC2 communication method, which is optimized for
Communication and Computation parallelism on Ascend devices.
"""
Contributor


high

The docstring for FusedMC2CommImpl appears to be copied from MC2CommImpl and is misleading. It mentions npu_moe_distribute_dispatch and npu_moe_distribute_combine, but this implementation uses the dispatch_gmm_combine_decode operator. The docstring should be updated to reflect the actual implementation and its requirements, such as being specific to w8a8_dynamic quantization, to improve maintainability.

Suggested change
"""This implementation is for the scenarios listed below:
1. `enable_expert_parallel=True`.
2. `npu_moe_distribute_dispatch` and `npu_moe_distribute_combine` are available.
3. `enable_expert_parallel=False` is not supported.
This implementation uses the FusedMC2 communication method, which is optimized for
Communication and Computation parallelism on Ascend devices.
"""
"""This implementation is for the scenarios listed below:
1. `enable_expert_parallel=True`.
2. `VLLM_ASCEND_ENABLE_FUSED_MC2` is enabled.
3. `w8a8_dynamic` quantization is used.
This implementation uses the `dispatch_gmm_combine_decode` operator, which is a fused
operator for MoE decoding that combines communication and computation for optimization
on Ascend devices.
"""

Contributor Author


Done

Comment thread vllm_ascend/worker/model_runner_v1.py Outdated
Comment on lines +1421 to +1425
-        moe_comm_type = (
-            MoECommType.MC2 if num_tokens <= mc2_tokens_capacity else
-            MoECommType.FUSED_ALLTOALL if quant_type == "w8a8_dynamic"
-            and get_ep_group().world_size <= 16 else MoECommType.ALLTOALL)
+            (MoECommType.FUSED_MC2 if envs_ascend.VLLM_ASCEND_ENABLE_FUSED_MC2 and quant_type == "w8a8_dynamic"
+             else MoECommType.MC2) if num_tokens <= mc2_tokens_capacity
+            else MoECommType.FUSED_ALLTOALL if quant_type == "w8a8_dynamic" and get_ep_group().world_size <= 16
+            else MoECommType.ALLTOALL)
Contributor


high

This nested ternary expression is difficult to read and maintain. Consider refactoring it into a more explicit if/else structure to improve clarity.

Suggested change
-        moe_comm_type = (
-            MoECommType.MC2 if num_tokens <= mc2_tokens_capacity else
-            MoECommType.FUSED_ALLTOALL if quant_type == "w8a8_dynamic"
-            and get_ep_group().world_size <= 16 else MoECommType.ALLTOALL)
-            (MoECommType.FUSED_MC2 if envs_ascend.VLLM_ASCEND_ENABLE_FUSED_MC2 and quant_type == "w8a8_dynamic"
-             else MoECommType.MC2) if num_tokens <= mc2_tokens_capacity
-            else MoECommType.FUSED_ALLTOALL if quant_type == "w8a8_dynamic" and get_ep_group().world_size <= 16
-            else MoECommType.ALLTOALL)
+        if num_tokens <= mc2_tokens_capacity:
+            if envs_ascend.VLLM_ASCEND_ENABLE_FUSED_MC2 and quant_type == "w8a8_dynamic":
+                moe_comm_type = MoECommType.FUSED_MC2
+            else:
+                moe_comm_type = MoECommType.MC2
+        elif quant_type == "w8a8_dynamic" and get_ep_group().world_size <= 16:
+            moe_comm_type = MoECommType.FUSED_ALLTOALL
+        else:
+            moe_comm_type = MoECommType.ALLTOALL

Contributor Author


Done

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:‌‌

  • A PR should do only one thing, smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests ‌to ensure it works and is not broken by other future PRs.
  • Write the commit message by fulfilling the PR description to help reviewer and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@wangqiankun13 wangqiankun13 force-pushed the add_fused_mc2 branch 2 times, most recently from f0d7faa to a8ceb08 on December 15, 2025 16:01
@wangqiankun13 wangqiankun13 changed the title Use DispatchGmmCombineDecode operator to replace MC2 [Feature]Use DispatchGmmCombineDecode operator to replace MC2(Optional) Dec 16, 2025
@github-actions
Contributor

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Comment thread vllm_ascend/worker/model_runner_v1.py Outdated
            and get_ep_group().world_size <= 16 else MoECommType.ALLTOALL)
        if num_tokens <= mc2_tokens_capacity:
            if envs_ascend.VLLM_ASCEND_ENABLE_FUSED_MC2 and quant_type == "w8a8_dynamic":
                moe_comm_type = MoECommType.FUSED_MC2
Collaborator

@weijinqian0 weijinqian0 Dec 16, 2025


Remove this env variable.

Contributor Author


Following the new conclusion, we use the same env variable as the dispatch_fnn_combine operator.

Collaborator

@linfeng-yuan linfeng-yuan left a comment


Since the operator currently only supports w8a8_dynamic, it is necessary to disable the usage of fused_mc2 in mtp_proposer.py in case the dtype of mtp is bfloat16 (note that both _dummy_run and propose should be changed):
Option 1 (Recommended):
Recognize the quant_type of the mtp layer (e.g., through the instance class of FusedMoE) to decide the moe_comm_method of this layer.
Option 2:
Refer to #4751 and #4947, and use hard code to disable fused_op paths in mtp_proposer. But please add a note here and a future plan.
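Option 1 could be sketched roughly like this; the helper name and string labels are illustrative, not the merged code.

```python
# Sketch of Option 1: gate the fused path on the layer's actual quant type,
# so an MTP layer running in bfloat16 falls back to the unfused MC2 path.
def pick_comm_method(layer_quant_type, fused_mc2_enabled):
    if fused_mc2_enabled and layer_quant_type == "w8a8_dynamic":
        return "FUSED_MC2"
    # bfloat16 MTP layers (quant type != "w8a8_dynamic") land here.
    return "MC2"
```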


@wangqiankun13 wangqiankun13 force-pushed the add_fused_mc2 branch 5 times, most recently from 4294e32 to 7e0616c on December 20, 2025 10:14
…bineDecode.

This commit adds model-side integration for the previously introduced experimental AscendC fused operator DispatchGmmCombineDecode, used in MoE decoding.

The operator implementation itself was added in a prior PR vllm-project#4139.
This change only adapts the model execution path to optionally use the fused operator.

When the environment variable VLLM_ASCEND_ENABLE_FUSED_MC2=2 is set, the original MC2 path composed of multiple operators (A8W8 dispatch → GMM → SwiGLU → GMM → combine) is replaced by the single fused operator DispatchGmmCombineDecode.

By default, the existing multi-operator MC2 implementation is preserved.

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
@wangqiankun13
Contributor Author

Since the operator currently only supports w8a8_dynamic, it is necessary to disable the usage of fused_mc2 in mtp_proposer.py in case the dtype of mtp is bfloat16 (note that both _dummy_run and propose should be changed): Option 1 (Recommended): Recognize the quant_type of the mtp layer (e.g., through the instance class of FusedMoE) to decide the moe_comm_method of this layer. Option 2: Refer to #4751 and #4947, and use hard code to disable fused_op paths in mtp_proposer. But please add a note here and a future plan.

Since FUSED_MC2 must be enabled by an env variable for now, my operator will not cause issues by default. I will add the MTP guard later and have added a note and future plan here.

@kiscad
Contributor

kiscad commented Dec 20, 2025

LGTM

@wangxiyuan wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels Dec 21, 2025
@wangxiyuan wangxiyuan merged commit 904c18f into vllm-project:main Dec 21, 2025
48 checks passed
wangxiyuan pushed a commit that referenced this pull request Jan 21, 2026
…issue 5476] (#5932)

### What this PR does / why we need it?

In [PR 5040](#5040), the
`dispatch_gmm_combine_decode` operator was configured with an incorrect
global_bs parameter. This PR is to fix the bug.

The global_bs provided as input should have the same meaning as in the
`moe_distributed_dispatch` operator, specifically: (the maximum batch
size across all cards) * (expert parallel world size).
However, the implementation incorrectly used the variable
max_num_tokens, which does not account for tensor parallelism. This
error likely resulted in an unnecessarily large (overestimated) value.
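A worked example of that definition, with hypothetical numbers:

```python
# global_bs = (maximum batch size across all cards) * (expert-parallel world size),
# matching the moe_distributed_dispatch semantics described above.
per_card_batch_sizes = [12, 9, 16, 11]  # hypothetical per-card decode batch sizes
ep_world_size = 16                      # e.g. a single A3 node with ep16

global_bs = max(per_card_batch_sizes) * ep_world_size
print(global_bs)  # 256
```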

More info about this operator, please refer to RFC: issue
#5476

### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Accuracy test: qwen3-235b eplb on a single A3 node (ep16), with dispatch_gmm_combine_decode.

| dataset | version | metric | mode | vllm-api-stream-chat |
|----- | ----- | ----- | ----- | -----|
| aime2024 | 604a78 | accuracy | gen | 80.00 |
- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@11b6af5

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
huangfeifei1995 pushed a commit to huangfeifei1995/vllm-ascend that referenced this pull request Jan 21, 2026
…issue 5476] (vllm-project#5932)
wangxiyuan pushed a commit that referenced this pull request Jan 22, 2026
…m_combine_decode (#5931)

### What this PR does / why we need it?
This PR is cherry-picked from
[PR5932](#5932).

In #5040, the
dispatch_gmm_combine_decode operator was configured with an incorrect
global_bs parameter. This PR is to fix the bug.

The global_bs provided as input should have the same meaning as in the
moe_distributed_dispatch operator, specifically: (the maximum batch size
across all cards) * (expert parallel world size).
However, the implementation incorrectly used the variable
max_num_tokens, which does not account for tensor parallelism. This
error likely resulted in an unnecessarily large (overestimated) value.

More info about this operator, please refer to RFC: issue
#5476

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
tangtiangu pushed a commit to tangtiangu/jiusi-vllm-ascend that referenced this pull request Feb 24, 2026
…m_combine_decode (vllm-project#5931)
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
…l) (vllm-project#5040)
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
…issue 5476] (vllm-project#5932)
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
…issue 5476] (vllm-project#5932)
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
…l) (vllm-project#5040)
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
…issue 5476] (vllm-project#5932)
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
…issue 5476] (vllm-project#5932)