fix moe benchmark#2886

Open
nv-yunzheq wants to merge 4 commits into flashinfer-ai:main from nv-yunzheq:fix_moe_benchmark

Conversation

@nv-yunzheq
Collaborator

@nv-yunzheq nv-yunzheq commented Mar 24, 2026

📌 Description

Fix an inconsistency between autotune and the real benchmark in the DeepSeek MoE benchmark script.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or via your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes

    • Fixed autotune log interleaving with benchmark results output
    • Improved benchmark accuracy via one-time pre-warm and GPU synchronization
    • Safer memory handling to avoid buffer overflow when larger token batches are selected
  • Refactor

    • Integrated autotuning into the main benchmark workflow so autotune runs during benchmarking rather than beforehand

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves inconsistencies between autotuning and actual benchmark results in the DeepSeek MoE benchmark script. The changes ensure that the autotuning process is correctly executed on the default CUDA stream and is seamlessly integrated into the benchmark's execution flow, leading to more accurate and reliable performance measurements.

Highlights

  • Autotuning Consistency Fix: Implemented pre-warming steps in the run functions for moe_deepseek_fwd, moe_deepseek_bwd_w1_w2, and moe_deepseek_bwd_x_router to ensure autotuning completes on the default CUDA stream before bench_gpu_time moves execution to a side stream, preventing illegal memory access errors.
  • Integrated Autotuning Workflow: Removed the standalone run_autotune function and refactored run_benchmark to integrate autotuning directly into the benchmark loop. This ensures that autotuning uses the same API, expert parallelism configuration, and weight shapes as the timed runs, avoiding cache-key mismatches.
  • Improved Benchmark Output: Updated the run_benchmark function's documentation to clarify that autotuning is now merged into the benchmark runs and that all output is buffered to prevent autotuner log messages from interleaving with the results table.
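The pre-warm-then-synchronize pattern described in the first highlight can be sketched roughly as follows. This is a minimal stand-in, not the benchmark's actual code: `make_run` and the lambda kernel are hypothetical, and a plain flag replaces the real script's `torch.cuda.synchronize()` after the default-stream warm-up so the sketch runs on any machine.

```python
def make_run(kernel):
    """Wrap a kernel so its first invocation acts as a one-time pre-warm.

    In the real script the pre-warm executes on the default CUDA stream
    (triggering autotune there) and is followed by
    torch.cuda.synchronize() before bench_gpu_time moves work to a side
    stream; here a boolean flag stands in for that sequencing.
    """
    warmed = False

    def run(**kwargs):
        nonlocal warmed
        if not warmed:
            kernel(**kwargs)  # pre-warm: triggers tuning/compilation once
            # torch.cuda.synchronize()  # real script syncs the default stream here
            warmed = True
        return kernel(**kwargs)  # the call that actually gets timed

    return run


calls = []
run = make_run(lambda **kw: calls.append(kw))
run(n=1)  # first call: pre-warm + timed, so the kernel fires twice
run(n=2)  # later calls: timed only
print(len(calls))  # -> 3
```

The point of the wrapper is only ordering: tuning work happens once, on the stream where it is safe, before any timed invocation.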


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@coderabbitai
Contributor

coderabbitai bot commented Mar 24, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bb9e8036-fe6a-4de8-96ee-7bbf26eeda2a

📥 Commits

Reviewing files that changed from the base of the PR and between 5677a08 and c0b80b6.

📒 Files selected for processing (1)
  • flashinfer/fused_moe/cute_dsl/fused_moe.py

📝 Walkthrough

Walkthrough

Autotuning now occurs during the benchmark loop via an autotune(True) context; per-backend pre-warm runs with torch.cuda.synchronize() were added. A prior dedicated run_autotune() helper was removed. Result printing is deferred until after the autotune context to avoid interleaved logs.

Changes

Cohort / File(s) Summary
Benchmark Autotuning Refactor
benchmarks/bench_moe_deepseek.py
Removed the separate run_autotune() helper. Wrapped per-token benchmark loop with with autotune(True) when do_autotune is enabled, added one-time pre-warm run(**input_kwargs) + torch.cuda.synchronize() for CuteDSL/CUTLASS/TRTLLM backends, and deferred printing of rows/histograms until after the autotune context. Docstring updated to reflect autotune-during-benchmark behavior.
CuteDSL MoE buffer safety
flashinfer/fused_moe/cute_dsl/fused_moe.py
CuteDslMoEWrapper._forward_with_tactic now checks num_tokens = x.shape[0] <= self.max_num_tokens before using pre-allocated CUDA-graph buffers; if exceeded, passes None for pre-allocated buffers to avoid overflow.
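The buffer-safety check summarized above can be sketched as follows. Names mirror the summary (`max_num_tokens`, pre-allocated CUDA-graph buffers), but the class is a simplified stand-in with a plain list for the buffer, not the real `CuteDslMoEWrapper` from `flashinfer/fused_moe/cute_dsl/fused_moe.py`:

```python
class CuteDslMoEWrapperSketch:
    """Simplified stand-in for the buffer selection inside
    CuteDslMoEWrapper._forward_with_tactic."""

    def __init__(self, max_num_tokens):
        self.max_num_tokens = max_num_tokens
        # Stand-in for the pre-allocated CUDA-graph workspace buffer.
        self.workspace = [0] * max_num_tokens

    def select_buffers(self, num_tokens):
        # Reuse the pre-allocated buffers only when the batch fits;
        # otherwise return None so the kernel allocates fresh storage
        # instead of overflowing the captured buffer.
        if num_tokens <= self.max_num_tokens:
            return self.workspace
        return None


w = CuteDslMoEWrapperSketch(max_num_tokens=128)
print(w.select_buffers(64) is not None)  # fits: reuse pre-allocated buffer -> True
print(w.select_buffers(256) is None)     # too large: fall back to None -> True
```

Passing `None` for oversized batches trades some allocation overhead for correctness, which matters once the benchmark sweeps token counts above the tuned maximum.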

Sequence Diagram(s)

sequenceDiagram
    participant Bench as Benchmark Runner
    participant Auto as Autotune Context
    participant Backend as Backend (CuteDSL/CUTLASS/TRTLLM)
    participant CUDA as CUDA/GPU

    Bench->>Auto: enter with autotune(True) (if enabled)
    Bench->>Backend: pre-warm run(**input_kwargs)
    Backend->>CUDA: dispatch kernels
    CUDA-->>Bench: torch.cuda.synchronize()
    loop per-token benchmark
        Bench->>Backend: run(token)
        Backend->>Auto: first calls trigger tactic profiling
        Backend->>CUDA: execute kernels (use cached tactics after profiling)
        Backend-->>Bench: timing/result
        Bench->>Bench: collect rows_and_histograms
    end
    Bench->>Auto: exit autotune context
    Bench->>Bench: print rows_and_histograms (after autotune logs)
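The control flow in the diagram can be sketched in Python. The `autotune` context manager here is a noisy stand-in for `flashinfer.autotuner.autotune`, not the real one, and the row strings are placeholders; the structural point is that rows are buffered and printed only after the context exits, so tuner logs cannot interleave with the results table:

```python
import contextlib


@contextlib.contextmanager
def autotune(enabled):
    # Stand-in for flashinfer.autotuner.autotune: emits tuner chatter
    # while active, which is exactly what must stay out of the table.
    if enabled:
        print("[autotune] profiling tactics...")
    yield


def run_benchmark(tokens, do_autotune=True):
    rows = []
    ctx = autotune(True) if do_autotune else contextlib.nullcontext()
    with ctx:
        for t in tokens:
            rows.append(f"tokens={t} time=...")  # timed run happens here
    # Deferred printing: only after the autotune context has exited.
    for row in rows:
        print(row)


run_benchmark([1, 8])
```

Running this prints the tuner line first and the two result rows afterwards, mirroring the "print rows_and_histograms (after autotune logs)" step above.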

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested labels

run-ci, op: moe, cute-dsl

Suggested reviewers

  • yzh119
  • cyx-6
  • sricketts
  • aleozlx
  • yongwww
  • bkryu

Poem

🐰 I hopped in to warm the GPU’s heart,
I nudged autotune to play its part.
No pre-phase fuss, the loop now sings,
Results printed clean—fresh benchmarking springs! 🎛️✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: The title 'fix moe benchmark' is generic and does not clearly describe the main change. Resolution: provide a more descriptive title such as 'Fix autotune-benchmark inconsistency in DeepSeek MoE benchmark'.
✅ Passed checks (1 passed)
  • Description check — ✅ Passed: The description identifies a core issue ('Fix inconsistent between autotune and real benchmark for deepseek moe benchmark script'), but lacks detail about what the inconsistency is, how it is fixed, or why the changes were necessary.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the DeepSeek-V3 MoE benchmarks by removing the standalone run_autotune function. Autotuning is now integrated directly into the run_benchmark function, with pre-warmup steps added to individual run functions and the benchmark loop wrapped in an autotune context manager. This ensures autotuning completes on the default stream before CUDA-graph capture and guarantees the autotuner sees the correct API/config/weight shapes, preventing cache-key mismatches. The docstring for run_benchmark has been updated to reflect these changes. A review comment suggests moving the contextlib and autotune imports to the top of the file for better code organization and PEP 8 adherence.

Comment on lines +668 to +670
import contextlib

from flashinfer.autotuner import autotune


Severity: medium

For better code organization and adherence to Python's style guide (PEP 8), it's recommended to move these imports to the top of the file. Placing all imports at the beginning of a module makes it easier to see its dependencies at a glance.

Please move import contextlib and from flashinfer.autotuner import autotune to the top of the file with the other imports.
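The suggested layout can be sketched as follows. The `flashinfer.autotuner` import is left as a comment so the sketch stays self-contained, and `run_benchmark` here is a trivial placeholder, not the benchmark's real function body:

```python
# Preferred: all imports at module top (PEP 8), so the file's
# dependencies are visible at a glance, rather than inside run_benchmark.
import contextlib
# from flashinfer.autotuner import autotune  # real import, shown for placement only


def run_benchmark(do_autotune=False):
    # With imports at module scope, the function only chooses a context:
    # autotune(True) when tuning is enabled, nullcontext otherwise.
    ctx = contextlib.nullcontext()  # stands in for the disabled case
    with ctx:
        return "benchmark ran"


print(run_benchmark())  # -> benchmark ran
```

Function-local imports are occasionally used to break import cycles or defer heavy dependencies; absent such a reason, module-level placement is the idiomatic choice.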

