
[Bugfix] Fix DSV3 kernels breaking _C and _moe_C on unsupported arches #35123

Merged

vllm-bot merged 2 commits into vllm-project:main from neuralmagic:fix-dsv3-kernels-breaking-cuda-ops on Feb 24, 2026

Conversation

@mgoin (Member) commented Feb 23, 2026

Purpose

dsv3_fused_a_gemm and dsv3_router_gemm had their ops.impl() registrations in torch_bindings.cpp, creating a hard symbol dependency even when the .cu files are excluded by CMake arch filtering. This caused the entire _C/_moe_C extension to fail to link on architectures like SM121, taking down unrelated ops like topk_softmax.

See #34758 (comment) for a reference of the failure.

This PR moves the impl registrations into the source .cu files via TORCH_LIBRARY_IMPL_EXPAND, matching the existing pattern used by the marlin, cutlass, and MLA kernels.
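For illustration, a minimal sketch of the before/after pattern, using the op names from this description but with a paraphrased schema (not the exact diff):

```cpp
// Before (sketch): csrc/torch_bindings.cpp both declared the schema and
// bound the CUDA impl, so it always referenced the kernel symbol, even
// when CMake arch filtering excluded the .cu file that defines it:
//
//   ops.def("dsv3_router_gemm(Tensor! out, Tensor mat_a, Tensor mat_b) -> ()");
//   ops.impl("dsv3_router_gemm", torch::kCUDA, &dsv3_router_gemm);

// After (sketch): the schema stays in torch_bindings.cpp, while the impl
// binding moves to the end of csrc/dsv3_router_gemm.cu itself, so the
// symbol is only referenced from a translation unit that is actually
// compiled for supported architectures:
TORCH_LIBRARY_IMPL_EXPAND(TORCH_EXTENSION_NAME, CUDA, ops) {
  ops.impl("dsv3_router_gemm", &dsv3_router_gemm);
}
```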

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@mgoin changed the title from "[Build] Fix DSV3 kernels breaking _C and _moe_C on unsupported arches" to "[Bugfix] Fix DSV3 kernels breaking _C and _moe_C on unsupported arches" Feb 23, 2026
@mergify bot added the ci/build and bug (Something isn't working) labels Feb 23, 2026
@mgoin added the ready (ONLY add when PR is ready to merge/full CI is needed) label Feb 23, 2026
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request addresses build failures on unsupported architectures (e.g., SM121) by moving the ops.impl() registrations for DeepSeek V3 kernels from torch_bindings.cpp to their respective source .cu files. This change ensures that the implementation is only registered when the source file is actually compiled, removing hard symbol dependencies that previously caused link errors when CMake arch filtering excluded these files. The approach follows the established pattern in vLLM for architecture-specific kernels. I have identified one critical issue where a required header is missing in one of the modified source files, which would lead to compilation errors on supported platforms.

Comment on lines 26 to +27:

```cpp
#include "core/registration.h"
```

Severity: high

The addition of TORCH_LIBRARY_IMPL_EXPAND at the end of this file requires TORCH_LIBRARY_IMPL, which is defined in <torch/library.h>. While core/registration.h is included, it does not provide the underlying PyTorch macro definitions. Including <torch/all.h> (as seen in csrc/dsv3_fused_a_gemm.cu) or <torch/library.h> is necessary to avoid compilation errors on supported architectures.

```cpp
#include <cuda_runtime.h>
#include <torch/all.h>

#include "core/registration.h"
```
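The distinction the bot draws holds because vLLM's core/registration.h is only a thin macro wrapper. A sketch of the relevant definition (from memory, so verify against the actual header):

```cpp
// core/registration.h (sketch): the _EXPAND variant exists so that NAME can
// itself be a macro (e.g. TORCH_EXTENSION_NAME) and still be expanded before
// token pasting. It merely forwards to TORCH_LIBRARY_IMPL, so it assumes the
// including .cu file has already pulled in <torch/library.h> (or <torch/all.h>).
#define TORCH_LIBRARY_IMPL_EXPAND(NAME, DEVICE, MODULE) \
  TORCH_LIBRARY_IMPL(NAME, DEVICE, MODULE)
```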

@robertgshaw2-redhat robertgshaw2-redhat enabled auto-merge (squash) February 23, 2026 18:10
@stavinsky commented

loads fine:

```bash
VLLM_USE_FLASHINFER_MOE_FP4=0 vllm serve --host 0.0.0.0 --gpu-memory-utilization 0.4 --load-format fastsafetensors --max-num-seqs 1 --kv-cache-dtype fp8 nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4
```

fails on load (I think this is not related to this PR):

```bash
VLLM_USE_FLASHINFER_MOE_FP4=1 vllm serve --host 0.0.0.0 --gpu-memory-utilization 0.4 --load-format fastsafetensors --max-num-seqs 1 --kv-cache-dtype fp8 nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4
```

```text
(EngineCore_DP0 pid=111946)   File "/home/dev/dev/vllm_source/vllm/model_executor/layers/quantization/modelopt.py", line 1442, in process_weights_after_loading
(EngineCore_DP0 pid=111946)     assert self.experts_cls is not None
```

loads, but fails on inference (again a different story, AFAIK):

```bash
vllm serve --host 0.0.0.0 --gpu-memory-utilization 0.4 --load-format fastsafetensors --max-num-seqs 1 --kv-cache-dtype fp8 nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4
```

Signed-off-by: mgoin <mgoin64@gmail.com>
@vllm-bot vllm-bot merged commit 3ef9fd0 into vllm-project:main Feb 24, 2026
107 of 112 checks passed
@mgoin mgoin deleted the fix-dsv3-kernels-breaking-cuda-ops branch February 24, 2026 01:11
jjarquin added a commit to vistralis/vllm that referenced this pull request Feb 24, 2026
Cherry-picked from atalman/update_torch_211 (PR vllm-project#34644):
- Bump torch version pins: 2.10.0 → 2.11.0
- Update CUDA version: 12.9 → 13.0
- CPU API changes: torch._C._cpu → torch.cpu
- at::cpu::L2_cache_size() → get_cpu_capabilities()
- Update test/build requirements for CUDA 13.0
- Fix distributed test for torch 2.11
- NIXL updates

Rebased on synced main (includes upstream dsv3 fix vllm-project#35123).
llsj14 pushed a commit to llsj14/vllm that referenced this pull request Mar 1, 2026
tunglinwood pushed a commit to tunglinwood/vllm that referenced this pull request Mar 4, 2026
askliar pushed a commit to askliar/vllm that referenced this pull request Mar 9, 2026
Copilot AI pushed a commit to machov/vllm that referenced this pull request Mar 10, 2026
EricccYang pushed a commit to EricccYang/vllm that referenced this pull request Apr 1, 2026