Patch sm103 for 3xfp4 moe generation #2082
Conversation
Walkthrough

The PR adds SM103 (Blackwell) architecture support to the fused MoE module generation pipeline by introducing a dedicated module generator function with SM103-specific compilation flags and integrating it into the existing JIT, core dispatch, and AOT layers.
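For a concrete picture, here is a minimal, hypothetical sketch of such a generator. Only the function name `gen_cutlass_fused_moe_sm103_module`, the `JitSpec` return type, and the `COMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS` macro come from this PR; the placeholder `JitSpec` dataclass, the source path, and the remaining flags are illustrative assumptions, not FlashInfer's actual API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class JitSpec:
    """Placeholder stand-in for the spec object handed back to the dispatcher."""
    name: str
    sources: List[str] = field(default_factory=list)
    nvcc_flags: List[str] = field(default_factory=list)


def gen_cutlass_fused_moe_sm103_module() -> JitSpec:
    # Common MoE macros shared by all architecture-specific modules (illustrative).
    nvcc_flags = [
        "-DENABLE_BF16",
        "-DENABLE_FP8",
        # SM103-specific grouped-GEMM path named in the PR.
        "-DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS",
        # Target arch; the exact gencode value depends on the CUDA toolkit in use.
        "-gencode=arch=compute_103a,code=sm_103a",
    ]
    return JitSpec(
        name="fused_moe_sm103",
        sources=["csrc/fused_moe_cutlass_sm103.cu"],  # placeholder path
        nvcc_flags=nvcc_flags,
    )
```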
Sequence Diagram

```mermaid
sequenceDiagram
    participant AOT as AOT Pipeline
    participant Dispatch as Backend Dispatch<br/>(fused_moe/core.py)
    participant SM103 as gen_cutlass_fused_moe_<br/>sm103_module

    AOT->>Dispatch: Request MoE module for SM103
    rect rgb(200, 230, 255)
        Note over Dispatch: Backend "103" detected
        Dispatch->>SM103: Route to SM103-specific generator
    end
    SM103->>SM103: Build nvcc_flags<br/>+ MOE macros<br/>+ COMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS
    SM103-->>Dispatch: Return JitSpec for SM103
    Dispatch-->>AOT: Compiled MoE module (SM103)
```
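The routing step in the diagram could look roughly like this, reusing the placeholder `JitSpec` and generator from the sketch above; the backend-string convention and the dispatch function name are assumptions for illustration.

```python
def get_cutlass_fused_moe_spec(backend: str) -> JitSpec:
    """Route a backend key to its architecture-specific module generator."""
    if backend == "103":
        # Backend "103" detected: use the SM103-specific generator added by this PR.
        return gen_cutlass_fused_moe_sm103_module()
    # Other architectures would keep their existing generators here.
    raise NotImplementedError(f"no generator wired up for backend {backend!r}")


# Example: the AOT pipeline requesting the SM103 module spec.
spec = get_cutlass_fused_moe_spec("103")
print(spec.name, spec.nvcc_flags)
```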
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
Files selected for processing (4):

- flashinfer/aot.py
- flashinfer/fused_moe/core.py
- flashinfer/fused_moe/__init__.py
- flashinfer/jit/fused_moe.py
Summary of Changes

Hello @aleozlx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances FlashInfer by adding explicit support for the SM103 GPU architecture within its fused Mixture-of-Experts (MoE) generation pipeline. By providing a dedicated CUTLASS module with tailored compilation flags, the changes aim to optimize performance and ensure compatibility for models utilizing MoE operations, particularly with 3xFP4 quantization, on SM103-based hardware. This allows the system to better leverage the specific capabilities of the SM103 architecture for improved efficiency.
Code Review
This pull request adds support for the sm103 architecture for 3xfp4 MoE generation. The changes are well-contained, introducing a new gen_cutlass_fused_moe_sm103_module and wiring it into the AOT build system and runtime dispatch logic. The implementation looks correct. I have one suggestion regarding the nvcc_flags to improve consistency with other architecture-specific modules and potentially reduce compilation time. Overall, this is a good patch that achieves its goal.
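To make that suggestion concrete, the kind of per-architecture flag scoping being hinted at might look like the sketch below. Only COMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS is taken from the PR; the "other architecture" macro names are assumed placeholders, since the review comment does not spell out which flags it means.

```python
# Assumed illustration: scope nvcc flags per architecture so the SM103 module
# compiles only the grouped-GEMM variant it actually needs.
SM103_FLAGS = [
    "-DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS",  # named in the PR
]

# Placeholder names standing in for macros owned by other architectures;
# leaving them out keeps the SM103 module consistent with the other
# architecture-specific modules and can shorten compile time.
OTHER_ARCH_FLAGS = [
    "-DCOMPILE_HOPPER_TMA_GROUPED_GEMMS",     # assumed placeholder
    "-DCOMPILE_BLACKWELL_TMA_GROUPED_GEMMS",  # assumed placeholder
]


def sm103_nvcc_flags(include_other_arch: bool = False) -> list:
    """Narrower flag lists mean fewer kernel instantiations to build."""
    return SM103_FLAGS + (OTHER_ARCH_FLAGS if include_other_arch else [])
```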
/bot run
bkryu left a comment
LGTM, looks straightforward. Let's wait for the CI results to come back before merging.
[FAILED] Pipeline #38366909: 12/17 passed
There is an output mismatch in one of the unit tests.
ok looking |
wenscarl left a comment
LGTM
/bot run
As far as we can tell, the error was a glitch. We'll keep monitoring the unit tests and, if it shows up again, investigate and patch it. Merging with multiple approvals.
📌 Description

Patch sm103 for 3xfp4 moe generation.

🔍 Related Issues

Follow-up of #2020 and #1925.

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the pre-commit documentation](https://pre-commit.com/).

🧪 Tests

- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).

Reviewer Notes

```
$ ls csrc/nv_internal/tensorrt_llm/cutlass_instantiations/103/gemm_grouped
100  103  80
$ pytest tests/moe/test_trtllm_cutlass_fused_moe.py
22 passed, 3 skipped, 1 warning in 771.89s (0:12:51)
```

Summary by CodeRabbit

- New Features
  - Added support for the Blackwell (SM103) GPU architecture in MOE (Mixture of Experts) operations with specialized CUTLASS-optimized modules.