Fix MoE backend selection for LoRA (unquantized MoE) #40273

Merged: robertgshaw2-redhat merged 9 commits into vllm-project:main from de-inf:fix_lora_moe_backend on Apr 19, 2026.
Conversation

@danisereb (Contributor) commented on Apr 19, 2026:

Purpose

When using LoRA adapters with Nemotron Nano BF16:
https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16

The following error was raised:

Using FlashInfer CUTLASS Unquantized MoE backend out of potential backends: ['FlashInfer TRTLLM', 'FlashInfer CUTLASS', 'TRITON', 'BATCHED_TRITON'].
...
File "/my_home/workspace/my_vllm/vllm/lora/layers/fused_moe.py", line 164, in _inject_lora_into_fused_moe
assert isinstance(m_fused_moe_fn.impl.fused_experts, TritonExperts)

In previous vLLM versions, the default backend for unquantized MoE was TritonExperts.
The new default backend, FlashInfer CUTLASS, does not support LoRA (see the FlashInferExperts class and its moe_sum function).

This PR selects TritonExperts when LoRA is enabled.

This change aligns with select_fp8_moe_backend, select_mxfp8_moe_backend, and select_gpt_oss_mxfp4_moe_backend.
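
For reference, a minimal sketch of the intended behavior, assuming a simplified selector; the real logic lives in vllm/model_executor/layers/fused_moe/oracle/unquantized.py and uses vLLM's own config and backend types, so the function name, the MoEConfig stand-in, and the string return values below are illustrative only:

    from dataclasses import dataclass

    # Simplified stand-in for vLLM's MoE config; only the field relevant here.
    @dataclass
    class MoEConfig:
        is_lora_enabled: bool = False

    def select_unquantized_moe_backend(moe_config: MoEConfig) -> str:
        # With LoRA enabled, pick the Triton backend up front: the LoRA
        # injection path asserts TritonExperts, and the FlashInfer CUTLASS
        # experts implementation does not support LoRA.
        if moe_config.is_lora_enabled:
            return "TRITON"
        # Otherwise fall through to the usual priority order
        # (FlashInfer TRTLLM, FlashInfer CUTLASS, Triton, batched Triton).
        return "FlashInfer CUTLASS"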

Test Plan

Add new tests in tests/kernels/moe/test_unquantized_backend_selection.py and run them with pytest; a sketch of the test shape is included after this plan.

Check that LoRA works with Nemotron Nano BF16.
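
For reference, a sketch of the shape such a test might take; the real test file drives vLLM's actual backend oracle and fixtures, so the select_backend helper and the test name below are hypothetical stand-ins:

    import pytest

    # Stand-in selector so the example is self-contained; the real test calls
    # into vLLM's unquantized MoE backend oracle instead.
    def select_backend(is_lora_enabled: bool) -> str:
        return "TRITON" if is_lora_enabled else "FlashInfer CUTLASS"

    @pytest.mark.parametrize(
        "is_lora_enabled, expected",
        [(True, "TRITON"), (False, "FlashInfer CUTLASS")],
    )
    def test_backend_selection_with_lora(is_lora_enabled, expected):
        # LoRA must force a Triton backend; without LoRA the default applies.
        assert select_backend(is_lora_enabled) == expected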

Test Result

All tests in test_unquantized_backend_selection.py passed.

LoRA adapters now work with Nemotron Nano BF16 (TP1/2/4):

Using TRITON Unquantized MoE backend out of potential backends: ['TRITON', 'BATCHED_TRITON'].

When running without LoRA adapters, the expected default backend is selected:

Using FlashInfer CUTLASS Unquantized MoE backend out of potential backends: ['FlashInfer TRTLLM', 'FlashInfer CUTLASS', 'TRITON', 'BATCHED_TRITON'].


@gemini-code-assist (Bot) left a comment:

Code Review

This pull request introduces logic to force the Triton backend for unquantized MoE when LoRA is enabled and adds a corresponding test case. Feedback indicates that the current early return implementation is problematic because it bypasses support for BATCHED_TRITON (required for models like DeepSeek-V3), skips backend selection logging, and overrides user-specified backend preferences. A suggestion was provided to filter the available backends instead of returning early.
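
For illustration, a minimal sketch of the filtering approach the review suggests; the UnquantizedMoeBackend stand-in enum (member names inferred from the backend log lines quoted above) and the filter_backends_for_lora helper are assumptions for this example, not the actual oracle code:

    from enum import Enum, auto

    # Stand-in for vLLM's UnquantizedMoeBackend enum; member names are
    # inferred from the "potential backends" log lines quoted in this PR.
    class UnquantizedMoeBackend(Enum):
        FLASHINFER_TRTLLM = auto()
        FLASHINFER_CUTLASS = auto()
        TRITON = auto()
        BATCHED_TRITON = auto()

    # Backends assumed to be LoRA-compatible: the LoRA injection path expects
    # TritonExperts, and FlashInferExperts lacks the moe_sum support it needs.
    LORA_COMPATIBLE = {
        UnquantizedMoeBackend.TRITON,
        UnquantizedMoeBackend.BATCHED_TRITON,
    }

    def filter_backends_for_lora(candidates, is_lora_enabled):
        # Filtering (rather than returning early) keeps BATCHED_TRITON in
        # play, still exercises the normal selection logging, and lets an
        # explicit, LoRA-compatible user preference win.
        if not is_lora_enabled:
            return list(candidates)
        return [b for b in candidates if b in LORA_COMPATIBLE]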

Review comment thread on vllm/model_executor/layers/fused_moe/oracle/unquantized.py.
danisereb force-pushed the fix_lora_moe_backend branch from 9056200 to 0168ea9 on April 19, 2026 07:48.
danisereb changed the title from "Fix backend selection for LoRA (unquantized MoE)" to "Fix MoE backend selection for LoRA (unquantized MoE)" on Apr 19, 2026.
danisereb force-pushed the fix_lora_moe_backend branch from 4f5f7fe to d57b541 on April 19, 2026 08:19.
Review comment thread on tests/kernels/moe/test_unquantized_backend_selection.py (outdated).
Two review comment threads on vllm/model_executor/layers/fused_moe/oracle/unquantized.py (outdated).
danisereb marked this pull request as ready for review on April 19, 2026 09:36.
@claude (Bot) left a comment:

Claude Code Review

This pull request is from a fork, so automated review is disabled. A repository maintainer can comment "@claude review" to run a one-time review.

danisereb force-pushed the fix_lora_moe_backend branch from 51f70ad to 4c0c318 on April 19, 2026 10:13.
Commit message: Should keep ROCm behavior unchanged. Also update tests.
Signed-off-by: Daniel Serebrenik <daserebrenik@nvidia.com>
danisereb force-pushed the fix_lora_moe_backend branch from 2c410cb to cdf3444 on April 19, 2026 11:15.
@tomeras91 (Member) left a comment:

Thanks @danisereb! Added a few suggestions.

Two review comment threads on tests/kernels/moe/test_unquantized_backend_selection.py and one on vllm/model_executor/layers/fused_moe/oracle/unquantized.py (all outdated).
Commit message:
- Use Triton for both CUDA and ROCm (aligned with select_fp8_moe_backend).
- Update tests accordingly.
Signed-off-by: Daniel Serebrenik <daserebrenik@nvidia.com>
danisereb force-pushed the fix_lora_moe_backend branch from 82f0ef3 to 6b852cd on April 19, 2026 14:25.
Inline comment on vllm/model_executor/layers/fused_moe/oracle/unquantized.py, on the lines:

    if current_platform.is_out_of_tree():
        return UnquantizedMoeBackend.OOT, None

    if moe_config.is_lora_enabled:

@danisereb (Contributor, Author) commented on Apr 19, 2026:

This logic is now aligned with select_fp8_moe_backend (early exit if LoRA is enabled).

@tomeras91 (Member) left a comment:

Much better now. Left a small nit.



Inline comment on tests/kernels/moe/test_unquantized_backend_selection.py, on the lines:

    @skipif_not_cuda_rocm
    def test_select_explicit_triton_ignores_flashinfer_env(monkeypatch):

@tomeras91 (Member) commented:

nit: This test can run on all platforms; nothing CUDA/ROCm specific about it.

@danisereb (Contributor, Author) replied:

Wasn't sure about XPUs.

tomeras91 added the "ready" label (ONLY add when PR is ready to merge/full CI is needed) on Apr 19, 2026.
robertgshaw2-redhat enabled auto-merge (squash) on April 19, 2026 15:45.
robertgshaw2-redhat merged commit d1135a5 into vllm-project:main on Apr 19, 2026. 68 checks passed.

vllm-agent pushed a commit to vllm-agent/vllm that referenced this pull request on Apr 20, 2026.
bnellnm pushed a commit to neuralmagic/vllm that referenced this pull request on Apr 20, 2026.
baonudesifeizhai pushed a commit to baonudesifeizhai/vllm that referenced this pull request on Apr 23, 2026.
avinashsingh77 pushed a commit to avinashsingh77/vllm that referenced this pull request on Apr 27, 2026.
Lafunamor pushed a commit to Lafunamor/vllm that referenced this pull request on May 1, 2026.
mystous pushed a commit to mystous/vllm_hybrid that referenced this pull request on May 10, 2026.

Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
Projects: None yet
Participants: 5