
[EPLB] Support EPLB w/ NVFP4 #29804

Merged

pavanimajety merged 10 commits into vllm-project:main from andrewbriand:abriand_eplb_nvfp4_2 on Dec 11, 2025

Conversation

@andrewbriand (Contributor) commented on Dec 1, 2025:

Purpose

Support EPLB in combination with NVFP4.

Test Plan

Added a test, test_eplb_fused_moe_layer_dep_nvfp4.py, which verifies that the NVFP4 backends route tokens to physical experts according to their logical expert ids (a sketch of the property under test follows).
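To make the routing property concrete, here is a minimal sketch of the invariant the test exercises; the toy mapping table, shapes, and replica-selection policy below are illustrative assumptions, not the actual test code:

```python
import torch

# Toy EPLB setup (hypothetical): 4 logical experts, each replicated onto
# 2 of the 8 physical experts, so logical expert i owns physical experts
# 2*i and 2*i + 1. The real mapping is maintained by vLLM's EPLB state.
num_logical, replicas = 4, 2
logical_to_physical = torch.arange(num_logical * replicas).view(num_logical, replicas)

# Router output: top-2 logical expert ids for two tokens.
topk_ids = torch.tensor([[0, 3], [2, 1]])

# Dispatch picks one replica per (token, slot) and translates to physical ids.
replica_idx = torch.randint(0, replicas, topk_ids.shape)
physical_ids = logical_to_physical[topk_ids, replica_idx]

# The invariant the test checks for NVFP4 backends: every token lands on a
# physical expert that hosts its logical expert.
assert torch.equal(physical_ids // replicas, topk_ids)
```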

Test Result

Tests pass on GB200.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@chatgpt-codex-connector commented:

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.

@gemini-code-assist (bot) left a comment:

Code Review

This pull request adds support for Expert Parallel Load Balancing (EPLB) with NVFP4 quantization. The changes include a new test case for this functionality and modifications to ModelOptNvFp4FusedMoE to handle the EPLB path, along with a new kernel wrapper flashinfer_trtllm_fp4_routed_moe. The implementation is largely correct, but I've identified a critical issue where the routing method type is hardcoded in the new kernel wrapper. This would lead to incorrect behavior for MoE models that use different routing mechanisms. I have provided comments with suggestions to address this issue by dynamically determining the routing method.
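For illustration, dynamic selection might look roughly like the sketch below. The helper name and branch conditions are assumptions; RoutingMethodType is FlashInfer's enum for its TRT-LLM MoE kernels, and the members used here are my understanding of that enum:

```python
from flashinfer import RoutingMethodType

def pick_routing_method(use_grouped_topk: bool, renormalize: bool) -> RoutingMethodType:
    # Hypothetical dispatch: derive the enum member from the layer's routing
    # configuration instead of hardcoding a single value in the wrapper.
    if use_grouped_topk:
        return RoutingMethodType.DeepSeekV3  # grouped top-k routing
    if renormalize:
        return RoutingMethodType.Renormalize  # renormalized top-k weights
    return RoutingMethodType.Default
```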

@github-actions bot commented on Dec 1, 2025:

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only fastcheck CI, which exercises a small, essential subset of CI tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

Signed-off-by: Andrew Briand <abriand@nvidia.com>
@JaheimLee commented:

Does this support the Marlin kernel?

@andrewbriand (Contributor, Author) replied:

> Does this support the Marlin kernel?

Yes, this should work, since Marlin accepts topk_ids from select_experts, which handles the mapping of logical experts to physical experts:

# Pack top k ids and expert weights into a single int32 tensor, as
# required by TRT-LLM
packed_tensor = (topk_ids.to(torch.int32) << 16) | topk_weights.to(
    torch.bfloat16
).view(torch.int16)
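To make the packed layout concrete, here is a hedged sketch of the inverse operation; it is illustrative only, with the field layout inferred from the packing expression above:

```python
import torch

def unpack_routing(packed: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # High 16 bits hold the expert id; ids are non-negative, so the
    # arithmetic right shift recovers them exactly.
    topk_ids = packed >> 16
    # Low 16 bits hold the bfloat16 weight's bit pattern: truncate int32 to
    # int16, then reinterpret the bits as bfloat16.
    topk_weights = (packed & 0xFFFF).to(torch.int16).view(torch.bfloat16)
    return topk_ids, topk_weights.float()
```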
@IwakuraRein (Contributor) commented on Dec 2, 2025:

Maybe hide this packing operation inside flashinfer_trtllm_fp4_routed_moe; i.e., let flashinfer_trtllm_fp4_routed_moe take topk_ids and topk_weights directly, making its interface closer to Marlin's (see the sketch below).

Additionally, the packing will be removed from the FlashInfer API in the near future, so we will then be able to pass topk_ids and topk_weights to FlashInfer directly.
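A rough sketch of the suggested interface shape; the parameter list is heavily abbreviated and partly hypothetical (the real wrapper also takes the quantized weights, block scales, and kernel configuration):

```python
import torch

def flashinfer_trtllm_fp4_routed_moe(
    hidden_states: torch.Tensor,
    topk_ids: torch.Tensor,
    topk_weights: torch.Tensor,
    # ... quantized weights, block scales, etc. elided in this sketch
) -> torch.Tensor:
    # Callers pass the same tensors they would hand to the Marlin path; the
    # TRT-LLM packing becomes an internal detail that can simply be deleted
    # once FlashInfer accepts topk_ids and topk_weights directly.
    packed = (topk_ids.to(torch.int32) << 16) | topk_weights.to(
        torch.bfloat16
    ).view(torch.int16)
    raise NotImplementedError("kernel invocation elided in this sketch")
```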

@andrewbriand (Contributor, Author) replied:
Done

Andrew Briand added 2 commits December 2, 2025 17:32
Signed-off-by: Andrew Briand <abriand@nvidia.com>
Signed-off-by: Andrew Briand <abriand@nvidia.com>
@heheda12345 (Collaborator) commented:

CC @tlrmchlsmth

@IwakuraRein IwakuraRein moved this to Ready in NVIDIA Dec 9, 2025
@pavanimajety pavanimajety added the "ready" label (ONLY add when PR is ready to merge/full CI is needed) on Dec 9, 2025
@IwakuraRein (Contributor) left a comment:
LGTM. Thanks for the contribution

Andrew Briand added 2 commits December 9, 2025 16:29
…re comms

Signed-off-by: Andrew Briand <abriand@nvidia.com>
Signed-off-by: Andrew Briand <abriand@nvidia.com>
@mergify bot commented on Dec 10, 2025:

Hi @andrewbriand, the pre-commit checks have failed. Please run:

uv pip install pre-commit
pre-commit install
pre-commit run --all-files

Then, commit the changes and push to your branch.

For future commits, pre-commit will run automatically on changed files before each commit.

Tip

Is mypy or markdownlint failing?
mypy and markdownlint are run differently in CI. If the failure is related to either of these checks, please use the following commands to run them locally:
# For mypy (substitute "3.10" with the failing version if needed)
pre-commit run --hook-stage manual mypy-3.10
# For markdownlint
pre-commit run --hook-stage manual markdownlint

Andrew Briand and others added 2 commits December 9, 2025 17:16
Signed-off-by: Andrew Briand <abriand@nvidia.com>
Comment on lines +183 to +184:

-            weight[src],
+            # Move to device in case the weights have been offloaded to CPU
+            weight[src].to(torch.cuda.current_device()),
A reviewer (Member) commented:
Can we submit this change separately? I don't see the need to prioritize supporting cpu offloading with eplb and this may have complications

@andrewbriand (Contributor, Author) replied:
Sure, I will revert this for now

@andrewbriand (Contributor, Author) followed up:
Done

@abmfy (Member) left a comment:
LGTM, thanks!

…GPU before comms"

This reverts commit c3a7ea1.

Signed-off-by: Andrew Briand <abriand@nvidia.com>
@andrewbriand andrewbriand requested a review from mgoin December 11, 2025 19:32
@pavanimajety pavanimajety enabled auto-merge (squash) December 11, 2025 19:44
@pavanimajety (Collaborator) left a comment:
LGTM, thanks for the PR!

@github-project-automation github-project-automation bot moved this from Ready to In review in NVIDIA Dec 11, 2025
@pavanimajety pavanimajety merged commit a00d889 into vllm-project:main Dec 11, 2025
59 checks passed
@github-project-automation github-project-automation bot moved this from In review to Done in NVIDIA Dec 11, 2025
Lucaskabela pushed a commit to Lucaskabela/vllm that referenced this pull request Dec 15, 2025
Signed-off-by: Andrew Briand <abriand@nvidia.com>
Co-authored-by: Andrew Briand <abriand@nvidia.com>
Majid-Taheri pushed a commit to Majid-Taheri/vllm that referenced this pull request Dec 23, 2025
Signed-off-by: Andrew Briand <abriand@nvidia.com>
Co-authored-by: Andrew Briand <abriand@nvidia.com>
Signed-off-by: Ubuntu <mjtaheri68@gmail.com>
dsuhinin pushed a commit to dsuhinin/vllm that referenced this pull request Jan 21, 2026
Signed-off-by: Andrew Briand <abriand@nvidia.com>
Co-authored-by: Andrew Briand <abriand@nvidia.com>
Signed-off-by: dsuhinin <suhinin.dmitriy@gmail.com>

Labels

nvidia, ready (ONLY add when PR is ready to merge/full CI is needed)

Projects

Status: Done


7 participants