
[Kernel] add custom op DispatchGmmCombineDecode #4139

Merged
wangxiyuan merged 2 commits into vllm-project:main from GuoRen868:fused_pr on Dec 6, 2025

Conversation

@GuoRen868
Contributor

@GuoRen868 GuoRen868 commented Nov 12, 2025

What this PR does / why we need it?

Add a custom op API, DispatchGmmCombineDecode, for A3, including the kernel implementation, Python API, and pytest-based tests.

vLLM version: v0.11.0
vLLM main: vllm-project/vllm@24d6314

@github-actions
Contributor

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write a clear commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new custom operator DispatchGmmCombineDecode for the Ascend platform. The changes include the operator definition, kernel implementation, build scripts, and PyTorch bindings. My review has identified a few critical issues. There is a significant issue in the shell script csrc/build_aclnn.sh regarding environment variable setup which could cause silent failures. Another critical bug is in csrc/pytorch_npu_helper.hpp where tensor strides are calculated incorrectly, which will fail for non-contiguous tensors. Additionally, there's a confusing duplicated field in csrc/custom_ops/kernels/dispatch_gmm_combine_decode/op_kernel/dispatch_gmm_combine_decode_tiling.h that should be corrected to improve maintainability.

Comment thread csrc/build_aclnn.sh Outdated

# install custom ops
./build_out/custom_ops/run/CANN_ascend910_93_ubuntu_aarch64.run --install-path=/usr/local/Ascend/ascend-toolkit/latest/opp/
source /usr/local/Ascend/ascend-toolkit/latest/opp/vendors/customize/bin/set_env.bash
Contributor


critical

The source command on this line will only affect the environment of the script's execution shell. When this script is executed, it runs in a sub-shell, and any environment variables set within it are lost when the script finishes. If the intention is to modify the environment of the calling shell, this script should be sourced (e.g., source csrc/build_aclnn.sh) rather than executed. The #!/bin/bash shebang is misleading if the script is meant to be sourced. This can lead to silent failures in the environment setup.
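The subshell behavior can be demonstrated directly. The sketch below substitutes a child Python process for the child bash process, and `CUSTOM_OPP_PATH` is a hypothetical variable name chosen for illustration; the point is identical to executing `build_aclnn.sh`:

```python
import os
import subprocess
import sys

# A child process (here Python, but the same holds for an executed bash
# script) gets a *copy* of the environment; changes it makes die with it.
subprocess.run(
    [sys.executable, "-c", "import os; os.environ['CUSTOM_OPP_PATH'] = '/tmp/opp'"],
    check=True,
)

# The export happened only in the child; nothing propagates back.
propagated = "CUSTOM_OPP_PATH" in os.environ
print(propagated)
```

This is why the script must be `source`d (run in the caller's shell) for its `set_env.bash` line to have any lasting effect.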

Comment thread csrc/pytorch_npu_helper.hpp Outdated
Comment on lines +222 to +237

// 适配dispatch_gmm_combine_decode算子的weight入参
if (acl_data_type == ACL_INT8 && dimNum == 3) {
format = ACL_FORMAT_FRACTAL_NZ;
}

auto acl_tensor =
aclCreateTensor(at_tensor.sizes().data(), at_tensor.sizes().size(), acl_data_type, strides.data(),
0, format, at_tensor.sizes().data(), at_tensor.sizes().size(),
const_cast<void *>(at_tensor.storage().data()));

return acl_tensor;
}

inline aclScalar *ConvertType(const at::Scalar &at_scalar)
{
Contributor


critical

The calculation of tensor strides is incorrect as it assumes the tensor is contiguous. This will lead to incorrect memory access and data corruption for non-contiguous tensors. You should use the tensor's actual strides and storage offset provided by PyTorch via at_tensor.strides() and at_tensor.storage_offset().

    const auto dimNum = at_tensor.dim();
    aclFormat format = ACL_FORMAT_ND;

    // 适配dispatch_gmm_combine_decode算子的weight入参
    if (acl_data_type == ACL_INT8 && dimNum == 3) {
        format = ACL_FORMAT_FRACTAL_NZ;
    }

    auto acl_tensor =
        aclCreateTensor(at_tensor.sizes().data(), dimNum, acl_data_type, at_tensor.strides().data(),
                        at_tensor.storage_offset(), format, at_tensor.sizes().data(), dimNum,
                        const_cast<void *>(at_tensor.storage().data()));
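The failure mode is easy to reproduce outside ACL. The sketch below (using NumPy purely for illustration, since its stride model matches PyTorch's element-stride model up to the byte/element unit) compares the strides the buggy helper would compute against the real strides of a non-contiguous view:

```python
import numpy as np

# Recompute strides as if the tensor were contiguous -- the row-major
# layout the original helper assumed instead of asking the tensor.
def contiguous_strides(shape):
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return strides[::-1]

a = np.zeros((4, 8), dtype=np.int8)
view = a.T  # non-contiguous transpose, shape (8, 4)

assumed = contiguous_strides(view.shape)             # [4, 1]
actual = [s // view.itemsize for s in view.strides]  # [1, 8]
print(assumed, actual)  # addressing with `assumed` reads the wrong bytes
```

For the transposed view, element (i, j) lives at offset `i*1 + j*8`, so indexing with the assumed `[4, 1]` strides would silently read unrelated memory, which is why the fix passes `at_tensor.strides()` and `at_tensor.storage_offset()` through unchanged.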

Comment on lines +27 to +28
uint32_t aicNum; // aivNum
uint32_t aivNum; // aivNum
Contributor


high

The comments on these two fields are duplicated: aicNum carries the comment // aivNum, the same as aivNum. This is likely a copy-paste error and can lead to confusion and bugs. Please correct the comments so each field is self-describing: aicNum should denote the AI Core count and aivNum the AI Vector core count.

Suggested change
- uint32_t aicNum; // aivNum
+ uint32_t aicNum; // aicNum
  uint32_t aivNum; // aivNum

@github-actions
Contributor

github-actions bot commented Dec 1, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.


@github-actions
Contributor

github-actions bot commented Dec 3, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
@wangxiyuan wangxiyuan merged commit 4bd1030 into vllm-project:main Dec 6, 2025
19 checks passed
yuxingcyx pushed a commit to yuxingcyx/vllm-ascend that referenced this pull request Dec 8, 2025
#### What this PR does / why we need it?
Add a custom op API, DispatchGmmCombineDecode, for A3, including the kernel implementation, Python API, and pytest-based tests.

vLLM version: v0.11.0
vLLM main:
vllm-project/vllm@24d6314

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
Co-authored-by: wangqiankun <wangqiankun13@huawei.com>
Co-authored-by: wangxiyuan <wangxiyuan1007@gmail.com>
Signed-off-by: yuxingcyx <yuxingchen.math@gmail.com>
Clorist33 pushed a commit to Clorist33/vllm-ascend that referenced this pull request Dec 9, 2025
Clorist33 pushed a commit to Clorist33/vllm-ascend that referenced this pull request Dec 10, 2025
Mercykid-bash pushed a commit to Mercykid-bash/vllm-ascend that referenced this pull request Dec 10, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 15, 2025
… experimental AscendC fused operator DispatchGmmCombineDecode, used in MoE decoding.

The operator implementation itself was added in a prior PR vllm-project#4139.
This change only adapts the model execution path to optionally use the fused operator.

When the environment variable VLLM_ASCEND_ENABLE_FUSED_MC2=1 is set, the original MC2 path composed of multiple operators (A8W8 dispatch → GMM → SwiGLU → GMM → combine) is replaced by the single fused operator DispatchGmmCombineDecode.

By default, the existing multi-operator MC2 implementation is preserved.

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 15, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 16, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 16, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 17, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 18, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 18, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 19, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 19, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 19, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 19, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 20, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 20, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 20, 2025
wangqiankun13 added a commit to wangqiankun13/vllm-ascend that referenced this pull request Dec 20, 2025
wangxiyuan pushed a commit that referenced this pull request Dec 21, 2025
…l) (#5040)

### What this PR does / why we need it?

This PR adds model-side integration for the previously introduced
experimental AscendC fused operator DispatchGmmCombineDecode, used in
MoE decoding.

The operator implementation itself was added in a prior PR #4139.
This change only adapts the model execution path to optionally use the
fused operator.

When the environment variable VLLM_ASCEND_ENABLE_FUSED_MC2=2 is set, the
original MC2 path composed of multiple operators (A8W8 dispatch → GMM →
SwiGLU → GMM → combine) might be replaced by the single fused operator
DispatchGmmCombineDecode.

By default, the existing multi-operator MC2 implementation is preserved.
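The gating pattern described above can be sketched as follows. This is a hedged illustration only: the function names (`dispatch`, `gmm`, `swiglu`, `combine`, `fused_decode`, `moe_decode`) are placeholders, not the real vllm-ascend or torch_npu APIs; only the environment variable name and the operator ordering come from the description:

```python
import os

def dispatch(x):
    return f"dispatch({x})"

def gmm(x):
    return f"gmm({x})"

def swiglu(x):
    return f"swiglu({x})"

def combine(x):
    return f"combine({x})"

def fused_decode(x):
    # Stand-in for the single fused AscendC operator.
    return f"DispatchGmmCombineDecode({x})"

def moe_decode(x):
    # Opt-in: only VLLM_ASCEND_ENABLE_FUSED_MC2=2 selects the fused kernel.
    if os.getenv("VLLM_ASCEND_ENABLE_FUSED_MC2") == "2":
        return fused_decode(x)
    # Default multi-operator MC2 path: dispatch -> GMM -> SwiGLU -> GMM -> combine.
    return combine(gmm(swiglu(gmm(dispatch(x)))))

default_path = moe_decode("h")
os.environ["VLLM_ASCEND_ENABLE_FUSED_MC2"] = "2"
fused_path = moe_decode("h")
del os.environ["VLLM_ASCEND_ENABLE_FUSED_MC2"]
print(default_path, fused_path)
```

Keeping the fused path behind an opt-in environment variable means the experimental kernel can ship without changing behavior for existing deployments.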

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?

- vLLM version: v0.12.0
- vLLM main:
vllm-project/vllm@ad32e3e

Signed-off-by: wangqiankun <wangqiankun13@huawei.com>
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026

Labels

documentation (Improvements or additions to documentation), module:tests


4 participants