Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves inconsistencies between autotuning and actual benchmark results in the DeepSeek MoE benchmark script. The changes ensure that the autotuning process is correctly executed on the default CUDA stream and is seamlessly integrated into the benchmark's execution flow, leading to more accurate and reliable performance measurements.

Highlights
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration: defaults · Review profile: CHILL · Plan: Pro · Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Autotuning now occurs during the benchmark loop via an autotune context manager rather than in a separate standalone step.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Bench as Benchmark Runner
    participant Auto as Autotune Context
    participant Backend as Backend (CuteDSL/CUTLASS/TRTLLM)
    participant CUDA as CUDA/GPU
    Bench->>Auto: enter with autotune(True) (if enabled)
    Bench->>Backend: pre-warm run(**input_kwargs)
    Backend->>CUDA: dispatch kernels
    CUDA-->>Bench: torch.cuda.synchronize()
    loop per-token benchmark
        Bench->>Backend: run(token)
        Backend->>Auto: first calls trigger tactic profiling
        Backend->>CUDA: execute kernels (use cached tactics after profiling)
        Backend-->>Bench: timing/result
        Bench->>Bench: collect rows_and_histograms
    end
    Bench->>Auto: exit autotune context
    Bench->>Bench: print rows_and_histograms (after autotune logs)
```
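The flow above can be sketched in plain Python with hypothetical stand-ins. The real script imports `autotune` from `flashinfer.autotuner` and calls backend-specific run functions; here both are replaced by dummies so the control flow (pre-warm inside the autotune context, then the per-token loop) is visible without a GPU:

```python
import contextlib

calls = []  # records the order of operations, for illustration only

@contextlib.contextmanager
def autotune(enabled):
    # Stand-in for flashinfer.autotuner.autotune: first runs inside this
    # context profile tactics; later runs reuse the cached choice.
    calls.append("enter_autotune")
    yield
    calls.append("exit_autotune")

def run(token):
    # Stand-in for a backend run(**input_kwargs); returns a fake result.
    calls.append(f"run({token})")
    return token * 2

def run_benchmark(tokens, enable_autotune=True):
    rows = []
    # Fall back to a no-op context when autotuning is disabled.
    ctx = autotune(True) if enable_autotune else contextlib.nullcontext()
    with ctx:
        run(tokens[0])       # pre-warm: triggers tactic profiling
        # torch.cuda.synchronize() would follow here on a real GPU
        for t in tokens:     # per-token loop; cached tactics are reused
            rows.append(run(t))
    return rows              # printed after the autotune logs in the real script

rows = run_benchmark([1, 2, 3])
```

The key ordering property is that both the pre-warm call and every benchmark iteration happen strictly between entering and exiting the autotune context.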
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Code Review
This pull request refactors the DeepSeek-V3 MoE benchmarks by removing the standalone run_autotune function. Autotuning is now integrated directly into the run_benchmark function, with pre-warmup steps added to individual run functions and the benchmark loop wrapped in an autotune context manager. This ensures autotuning completes on the default stream before CUDA-graph capture and guarantees the autotuner sees the correct API/config/weight shapes, preventing cache-key mismatches. The docstring for run_benchmark has been updated to reflect these changes. A review comment suggests moving the contextlib and autotune imports to the top of the file for better code organization and PEP 8 adherence.
```python
import contextlib

from flashinfer.autotuner import autotune
```
For better code organization and adherence to Python's style guide (PEP 8), it's recommended to move these imports to the top of the file. Placing all imports at the beginning of a module makes it easier to see its dependencies at a glance.

Please move `import contextlib` and `from flashinfer.autotuner import autotune` to the top of the file with the other imports.
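As a hedged sketch of the suggested layout (the grouping follows PEP 8; the `nullcontext` fallback shown at the end is an assumption about why `contextlib` is needed, based on the conditional autotune context described in the review summary):

```python
# Top-of-file import layout per PEP 8: standard library first,
# then third-party packages, each group separated by a blank line.

# --- standard library ---
import contextlib

# --- third-party (from the diff; commented out here so the sketch
# runs without flashinfer installed) ---
# from flashinfer.autotuner import autotune

# contextlib provides the no-op fallback context manager used when
# autotuning is disabled:
fallback = contextlib.nullcontext()
with fallback:
    entered = True
```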
📌 Description
Fix the inconsistency between autotuning and the actual benchmark in the DeepSeek MoE benchmark script
🔍 Related Issues
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [ ] I have installed pre-commit by running `pip install pre-commit` (or used your preferred method).
- [ ] I have installed the hooks with `pre-commit install`.
- [ ] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- [ ] All tests are passing (`unittest`, etc.).

Reviewer Notes
Summary by CodeRabbit
Bug Fixes
Refactor