[NemotronH] Do not force router to run in fp32 #34582

Merged
vllm-bot merged 4 commits into vllm-project:main from roikoren755:feat/nemotronh-bf16-router
Feb 16, 2026

Conversation


@roikoren755 (Contributor) commented Feb 15, 2026

Purpose

The current code forces the MoE router computation to run in FP32, even though checkpoints store it in bfloat16. Under normal workloads this takes up about 40% of the forward pass, and it does not provide an accuracy boost.

This PR removes this limitation.
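
A minimal sketch of the idea (module and attribute names are illustrative, not the actual nemotron_h.py code): the router gate previously upcast its input to FP32 before the matmul; after this change it simply runs in the checkpoint dtype.

```python
# Illustrative sketch only; names do not match the actual vllm code.
import torch
import torch.nn as nn

class Router(nn.Module):
    def __init__(self, hidden_size: int, n_experts: int, dtype: torch.dtype):
        super().__init__()
        # The gate weight stays in the checkpoint dtype (e.g. torch.bfloat16).
        self.gate = nn.Linear(hidden_size, n_experts, bias=False, dtype=dtype)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Before: router_logits = self.gate(hidden_states.float())  # forced FP32
        # After: no upcast; logits come out in hidden_states.dtype.
        return self.gate(hidden_states)
```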

Test Plan

No additional tests; all existing tests pass and accuracy does not degrade.

Test Result

All tests pass; accuracy did not degrade.

Running GSM8K gave the following results.

PR:

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5671|±  |0.0136|
|     |       |strict-match    |     5|exact_match|↑  |0.8431|±  |0.0100|

Main:

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5572|±  |0.0137|
|     |       |strict-match    |     5|exact_match|↑  |0.8453|±  |0.0100|
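
For reference, results like the tables above can be produced with lm-evaluation-harness; a hedged sketch using its Python API follows, where the checkpoint id and tensor_parallel_size are placeholders, not values taken from this PR.

```python
# Sketch of a GSM8K run via lm-evaluation-harness; the checkpoint id is a
# placeholder, and tensor_parallel_size should match your setup.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="vllm",
    model_args="pretrained=<nemotron-h-checkpoint>,tensor_parallel_size=8",
    tasks=["gsm8k"],
    num_fewshot=5,  # matches the n-shot column above
)
print(results["results"]["gsm8k"])
```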

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Roi Koren <roik@nvidia.com>
@roikoren755 force-pushed the feat/nemotronh-bf16-router branch from 2fb9690 to 48ab68e on February 15, 2026 13:02
@mergify bot added the nvidia label Feb 15, 2026

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable performance optimization by removing the forced casting of MoE router logits to float32. The changes in nemotron_h.py correctly implement this, and the special case for DeepSeekV3 is properly handled in flashinfer_trtllm_moe.py. I've found one minor issue: a leftover debug print statement that should be removed.

```python
from vllm.utils.flashinfer import flashinfer_trtllm_fp8_per_tensor_scale_moe

# The DeepSeekV3 routing method requires float32 router logits.
print(routing_method_type)
```
Severity: high

This print statement appears to be for debugging purposes and should be removed before merging to avoid polluting logs.

```python
routing_logits = routing_logits.to(torch.float32)

if routing_bias is not None:
    routing_bias = routing_bias.to(hidden_states.dtype)
```
Member

I remember something about it being important that the bias is in FP32... I understand that in this case we first cast the logits to FP32 (since we're using DS routing), so the bias is actually in FP32, but doesn't it make more sense to cast the logits to the bias dtype instead of the other way around?
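
A hedged sketch of that suggestion, using the variable names from the snippet above (this is not the merged code):

```python
# Sketch only: cast the logits to the bias dtype rather than casting the
# bias to the activation dtype, so an FP32 bias keeps the logits in FP32.
if routing_bias is not None:
    routing_logits = routing_logits.to(routing_bias.dtype)
```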

@mgoin (Member) left a comment


LGTM, nice find! We should maybe add an assert on the bias, but this roughly matches other trtllm MoE impls. Do you have any perf results? You mentioned this takes 40% of the time, which doesn't make sense to me.

@github-project-automation bot moved this to Ready in NVIDIA Feb 16, 2026
@mgoin added the performance (Performance-related issues) and ready (ONLY add when PR is ready to merge/full CI is needed) labels Feb 16, 2026
@roikoren755 (Author) commented Feb 16, 2026

> LGTM, nice find! We should maybe add an assert on the bias, but this roughly matches other trtllm MoE impls. Do you have any perf results? You mentioned this takes 40% of the time, which doesn't make sense to me.

40% might have been an exaggeration 😅
But in an example workload with TP8, we saw the following kernel distribution:

[Profiler screenshot: kernel time distribution]

Keep in mind this is with an older commit, from before the TRTLLM-Gen kernels were merged. The second kernel, taking ~19% of the profile, is the FP32 router.
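
For context, a kernel-time breakdown like the one referenced above can be collected with torch.profiler; a generic sketch follows, where model and input_ids are placeholders for the actual workload, not part of this PR.

```python
# Generic profiling sketch; `model` and `input_ids` stand in for the
# real workload.
import torch
from torch.profiler import ProfilerActivity, profile

with profile(activities=[ProfilerActivity.CUDA]) as prof:
    with torch.no_grad():
        model(input_ids)  # one forward pass under the profiler

# Rank CUDA kernels by total time to see which ones dominate the profile.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```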

@vllm-bot merged commit 3b30e61 into vllm-project:main Feb 16, 2026
62 of 68 checks passed
@github-project-automation bot moved this from Ready to Done in NVIDIA Feb 16, 2026
@roikoren755 deleted the feat/nemotronh-bf16-router branch February 17, 2026 14:09
roikoren755 added a commit to roikoren755/vllm that referenced this pull request Feb 18, 2026

This reverts commit 3b30e61.

Labels

nvidia · performance (Performance-related issues) · ready (ONLY add when PR is ready to merge/full CI is needed)

Projects

Status: Done

4 participants