
Use maximum number of batched tokens to autotune MoE #28106

Open

nvjullin wants to merge 2 commits into vllm-project:main from nvjullin:fix-moe-tune-tokens

Conversation

@nvjullin (Contributor) commented Nov 5, 2025

Purpose

Follow-up to #27904.
CC @varun-sundar-rabindranath.

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@gemini-code-assist (bot) left a comment

Code Review

This pull request updates the Mixture-of-Experts (MoE) autotuning logic to use the maximum number of batched tokens from the scheduler configuration, which is a more appropriate parameter for this purpose than the CUDA graph capture size. The changes are logical, but the refactoring is incomplete, leading to a critical issue where a removed attribute is still being accessed in the code.
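To make the change concrete, the following is a minimal sketch (not vLLM's actual API; `SchedulerConfig` here is a stand-in dataclass and `moe_tune_num_tokens` is a hypothetical helper) of deriving MoE autotune batch sizes from the scheduler's `max_num_batched_tokens` rather than from the CUDA graph capture sizes:

```python
# Illustrative sketch only: names and structure are assumptions, not
# the real vLLM implementation discussed in this PR.
from dataclasses import dataclass


@dataclass
class SchedulerConfig:
    # Upper bound on the number of tokens scheduled in a single batch.
    max_num_batched_tokens: int = 8192


def moe_tune_num_tokens(config: SchedulerConfig) -> list[int]:
    """Return power-of-two token counts up to the scheduler's batch limit.

    The limit itself is always included, so the largest batch the
    scheduler can actually produce is covered by the tuning sweep.
    """
    sizes: list[int] = []
    n = 1
    while n < config.max_num_batched_tokens:
        sizes.append(n)
        n *= 2
    sizes.append(config.max_num_batched_tokens)
    return sizes


if __name__ == "__main__":
    print(moe_tune_num_tokens(SchedulerConfig(max_num_batched_tokens=8192)))
```

The point of tuning against `max_num_batched_tokens` is that it bounds the real token batches the MoE layers will see at runtime, whereas CUDA graph capture sizes are chosen for a different purpose and may not cover the largest batches.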

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


@nvjullin nvjullin force-pushed the fix-moe-tune-tokens branch from 2773c72 to ce4f6a9 Compare November 5, 2025 06:55
@mgoin mgoin added the nvidia label Nov 5, 2025
Signed-off-by: Julien Lin <jullin@nvidia.com>
@nvpohanh (Contributor) commented:

@nvjullin could you rebase so that we can keep driving this PR? Thanks!

@nvjullin nvjullin force-pushed the fix-moe-tune-tokens branch from 5e1d4ea to 0bdb4c5 Compare December 16, 2025 10:01

3 participants