add dispath_ffn_combine_bf16 #5866

Merged
wangxiyuan merged 2 commits into vllm-project:main from guanguan0308:dispath_ffn_combine_bf16_3 on Jan 21, 2026

Conversation

@guanguan0308 (Contributor) commented on Jan 13, 2026

What this PR does / why we need it?

add dispath_ffn_combine_bf16
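
For context, here is an unfused reference sketch of the semantics a fused dispatch→FFN→combine MoE operator of this kind typically implements: route each token to its top-k experts, run each expert's FFN on its tokens, and combine the expert outputs with the routing weights. This is a minimal illustration only; the tensor names, shapes, and the two-matmul ReLU expert below are assumptions, not this kernel's actual interface.

```python
import torch

def dispatch_ffn_combine_bf16_ref(
    x: torch.Tensor,         # [num_tokens, hidden] activations (bf16)
    topk_ids: torch.Tensor,  # [num_tokens, k] expert id per token/slot
    topk_w: torch.Tensor,    # [num_tokens, k] routing weights
    w1: torch.Tensor,        # [num_experts, hidden, ffn] up projection
    w2: torch.Tensor,        # [num_experts, ffn, hidden] down projection
) -> torch.Tensor:
    out = torch.zeros(x.shape, dtype=torch.float32)
    for e in range(w1.shape[0]):
        # Dispatch: gather every (token, slot) routed to expert e.
        tok, slot = (topk_ids == e).nonzero(as_tuple=True)
        if tok.numel() == 0:
            continue
        # FFN: a simple two-matmul expert, accumulated in fp32.
        h = x[tok].float()
        h = torch.relu(h @ w1[e].float()) @ w2[e].float()
        # Combine: scatter-add the weighted expert output per token.
        out.index_add_(0, tok, h * topk_w[tok, slot].float().unsqueeze(-1))
    return out.to(torch.bfloat16)
```

A fused bf16 kernel performs these three phases in a single launch (and, in the distributed case, can overlap the dispatch/combine communication with the expert GEMMs), which is the main motivation for an operator like this.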

Does this PR introduce any user-facing change?

How was this patch tested?

@github-actions (Contributor) commented:

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill in the PR description and write a clear commit message to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new operator dispatch_ffn_combine_bf16 for Mixture-of-Experts models on the CANN platform. The changes are extensive, covering operator definition, host and device-side implementations, PyTorch bindings, and tests. However, I've identified several critical issues related to correctness and potential runtime failures, including incorrect template instantiations, wrong PyTorch bindings, potential buffer overflows due to fixed-size arrays, and incomplete operator prototype implementations. These issues must be addressed to ensure the operator functions correctly and safely.

Comment threads (all marked Outdated):
  • csrc/dispatch_ffn_combine_bf16/op_kernel/dispatch_ffn_combine_bf16_kernel.hpp
  • csrc/dispatch_ffn_combine_bf16/op_kernel/dispatch_ffn_combine_bf16.h
  • csrc/dispatch_ffn_combine_bf16/op_kernel/dispatch_ffn_combine_bf16_kernel.hpp
  • csrc/torch_binding_meta.cpp
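
On the bindings point: besides the device kernel, a custom op like this typically needs a meta (shape-only) implementation so fake-tensor tracing and torch.compile can infer output shapes without launching the kernel; getting it wrong fails in exactly the way flagged for torch_binding_meta.cpp above. Below is a hedged Python sketch of the general registration pattern; the namespace, op name, and signature are hypothetical and do not reflect what this PR actually registers in C++.

```python
import torch
from torch.library import Library

# Hypothetical namespace and schema; the real binding is in C++.
lib = Library("npu_demo", "DEF")
lib.define(
    "dispatch_ffn_combine_bf16(Tensor x, Tensor topk_ids, "
    "Tensor topk_w, Tensor w1, Tensor w2) -> Tensor"
)

def dispatch_ffn_combine_bf16_meta(x, topk_ids, topk_w, w1, w2):
    # Shape/dtype inference only: the fused op returns one combined
    # bf16 activation row per input token.
    return x.new_empty(x.shape, dtype=torch.bfloat16)

# Registering under the "Meta" dispatch key lets tracing see the
# correct output shape/dtype without running the device kernel.
lib.impl("dispatch_ffn_combine_bf16", dispatch_ffn_combine_bf16_meta, "Meta")
```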
@guanguan0308 changed the title from "fix" to "add dispath_ffn_combine_bf16" on Jan 13, 2026
@guanguan0308 force-pushed the dispath_ffn_combine_bf16_3 branch 3 times, most recently from e729497 to 0cb311a on January 16, 2026 02:27
@wangxiyuan added the ready (read for review) and ready-for-test (start test by label for PR) labels on Jan 16, 2026
Signed-off-by: guanguan0308 <1546542263@qq.com>
@guanguan0308 force-pushed the dispath_ffn_combine_bf16_3 branch from 175ce0b to d561470 on January 19, 2026 03:29
Signed-off-by: guanguan0308 <1546542263@qq.com>
@guanguan0308 force-pushed the dispath_ffn_combine_bf16_3 branch from 12d5c9d to e4e76cd on January 20, 2026 02:15
@wangxiyuan merged commit 1ed9524 into vllm-project:main on Jan 21, 2026
20 checks passed
845473182 pushed a commit to 845473182/vllm-ascend that referenced this pull request Jan 21, 2026
…to FIA_rebase

* 'main' of https://github.com/vllm-project/vllm-ascend: (24 commits)
  add dispath_ffn_combine_bf16 (vllm-project#5866)
  [BugFix] Fix input parameter bug of dispatch_gmm_combine_decode[RFC: issue 5476] (vllm-project#5932)
  [1/N][Feat] Xlite Qwen3 MoE Support (vllm-project#5951)
  [Bugfix] Fix setting of `speculative_config.enforce_eager` for dsv32 (vllm-project#5945)
  [bugfix][mm] change get_num_encoder_tokens to get_num_encoder_embeds in recompute_schedule.py (vllm-project#5132)
  [Bugfix] fix pcp qwen full graph FIA bug (vllm-project#6037)
  [Bugfix]Fixed precision issues caused by pooled request pooling (vllm-project#6049)
  【main】【bugfix】Resolved memory deallocation failure in the pooling layer under re-computation workloads. (vllm-project#6045)
  [main][Bugfix] Fixed an problem related to embeddings sharing (vllm-project#5967)
  [Feature]refactor the npugraph_ex config, support online-infer with static kernel (vllm-project#5775)
  [CI][Lint] Show lint diff on failure (vllm-project#5956)
  [CI] Add wait logic for each individual case (vllm-project#6036)
  [CI] Add DeepSeek-V3.2-W8A8 nightly ci test (vllm-project#4633)
  model runner v2 support triton of penalty (vllm-project#5854)
  [Docs][Model] Support Qwen3-VL-Embedding & Qwen3-VL-Reranker (vllm-project#6034)
  [Tests] move qwen3 performance test from nightly to e2e (vllm-project#5980)
  [Bugfix] fix bug of pcp+mtp+async scheduler (vllm-project#5994)
  [Main2Main] Upgrade vllm commit to releases/v0.14.0 (vllm-project#5988)
  [Ops] Add layernorm for qwen3Next (vllm-project#5765)
  [Doc] Add layer_sharding additional config for DeepSeek-V3.2-W8A8 (vllm-project#5921)
  ...
huangfeifei1995 pushed a commit to huangfeifei1995/vllm-ascend that referenced this pull request Jan 21, 2026
### What this PR does / why we need it?
add dispath_ffn_combine_bf16

- vLLM version: v0.13.0
- vLLM main:
vllm-project/vllm@bde38c1

---------

Signed-off-by: guanguan0308 <1546542263@qq.com>
Signed-off-by: huangning1995 <huangning12@huawei.com>
huangfeifei1995 added a commit to huangfeifei1995/vllm-ascend that referenced this pull request Jan 21, 2026
starmountain1997 pushed a commit to starmountain1997/vllm-ascend that referenced this pull request Jan 31, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Feb 28, 2026
maoxx241 pushed a commit to maoxx241/vllm-ascend that referenced this pull request Mar 2, 2026
ZRJ026 pushed a commit to ZRJ026/vllm-ascend that referenced this pull request Mar 4, 2026
LCAIZJ pushed a commit to LCAIZJ/vllm-ascend that referenced this pull request Mar 7, 2026
@guanguan0308 deleted the dispath_ffn_combine_bf16_3 branch on March 13, 2026 08:38

Labels

module:tests · ready (read for review) · ready-for-test (start test by label for PR)


3 participants