
Conversation

@aleozlx
Collaborator

@aleozlx aleozlx commented Nov 12, 2025

📌 Description

Patch SM103 for 3xFP4 MoE generation

🔍 Related Issues

Follow-up to #2020 and #1925

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

```
$ ls csrc/nv_internal/tensorrt_llm/cutlass_instantiations/103/gemm_grouped
100  103  80

$ pytest tests/moe/test_trtllm_cutlass_fused_moe.py
22 passed, 3 skipped, 1 warning in 771.89s (0:12:51)
```

Summary by CodeRabbit

  • New Features
    • Added support for Blackwell (SM103) GPU architecture in MOE (Mixture of Experts) operations with specialized CUTLASS-optimized modules.

@coderabbitai
Contributor

coderabbitai bot commented Nov 12, 2025

Walkthrough

The PR adds SM103 (Blackwell) architecture support to the fused MoE module generation pipeline by introducing a dedicated module generator function with SM103-specific compilation flags and integrating it into the existing JIT, core dispatch, and AOT layers.

Changes

  • Module Implementation (flashinfer/jit/fused_moe.py): Added gen_cutlass_fused_moe_sm103_module(), which defines the nvcc compilation flags for SM103, including the new -DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS flag, and delegates to gen_cutlass_fused_moe_module() with backend identifier "103" (a sketch of this generator follows after this list).
  • Module Export & Routing (flashinfer/fused_moe/__init__.py, flashinfer/fused_moe/core.py): Imported and exported gen_cutlass_fused_moe_sm103_module as a public symbol; updated the backend selection logic to route backend "103" to the new SM103-specific module generator.
  • AOT Integration (flashinfer/aot.py): Imported gen_cutlass_fused_moe_sm103_module and integrated it into the JIT specs list when SM103 is detected.
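
For reviewers new to this pattern, here is a minimal sketch of what the new generator looks like, based only on the summary above. The -DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS define, the "103" backend identifier, and the JitSpec / get_nvcc_flags_list helpers are taken from this PR; the remaining flag placeholders, the current_compilation_context object, the supported_major_versions keyword, and the use_fast_build parameter are illustrative assumptions, so refer to flashinfer/jit/fused_moe.py for the actual code.

```python
# Illustrative sketch only -- not the verbatim implementation.
from flashinfer.jit.core import JitSpec
from flashinfer.compilation_context import current_compilation_context  # assumed name


def gen_cutlass_fused_moe_sm103_module(use_fast_build: bool = False) -> JitSpec:
    # SM103-specific define layered on top of the shared MoE macros
    # (the shared macro names are omitted here).
    nvcc_flags = [
        "-DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS",
        # ... shared MoE / dtype-enable macros ...
    ]
    # Limit -gencode targets to compute capability 10.x (SM103 is 10.3).
    nvcc_flags += current_compilation_context.get_nvcc_flags_list(
        supported_major_versions=[10]
    )
    # gen_cutlass_fused_moe_module is the existing shared generator in the
    # same file; "103" selects the SM103 cutlass_instantiations backend.
    return gen_cutlass_fused_moe_module(nvcc_flags, "103", use_fast_build)
```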

Sequence Diagram

```mermaid
sequenceDiagram
    participant AOT as AOT Pipeline
    participant Dispatch as Backend Dispatch<br/>(fused_moe/core.py)
    participant SM103 as gen_cutlass_fused_moe_<br/>sm103_module

    AOT->>Dispatch: Request MoE module for SM103
    rect rgb(200, 230, 255)
    Note over Dispatch: Backend "103" detected
    Dispatch->>SM103: Route to SM103-specific generator
    end
    SM103->>SM103: Build nvcc_flags<br/>+ MOE macros<br/>+ COMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS
    SM103-->>Dispatch: Return JitSpec for SM103
    Dispatch-->>AOT: Compiled MoE module (SM103)
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

  • Changes follow an established pattern already present for SM100 and SM120 backends
  • New function is straightforward with no complex logic
  • Modifications are homogeneous across files (consistent dispatch routing)
  • Primary focus: verify SM103 flag correctness and compatibility with Blackwell architecture

Possibly related PRs

Suggested reviewers

  • djmmoss
  • yzh119
  • cyx-6
  • wenscarl
  • nvmbreughe
  • jiahanc

Poem

🐰 Blackwell hops into view,
SM103 shines so bright and true,
With flags set just right,
The fused MoE takes flight,
Another architecture, and we're through!

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The PR title accurately summarizes the main change: adding SM103 support for 3xfp4 MoE generation, which aligns with the code changes introducing gen_cutlass_fused_moe_sm103_module.
  • Description check: ✅ Passed. The PR description follows the template with completed sections: description, related issues, pre-commit checks marked complete, tests marked complete, and reviewer notes showing test results.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6765cad and eb2b392.

📒 Files selected for processing (4)
  • flashinfer/aot.py (2 hunks)
  • flashinfer/fused_moe/__init__.py (2 hunks)
  • flashinfer/fused_moe/core.py (2 hunks)
  • flashinfer/jit/fused_moe.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
flashinfer/aot.py (1)
flashinfer/jit/fused_moe.py (1)
  • gen_cutlass_fused_moe_sm103_module (50-65)
flashinfer/fused_moe/core.py (2)
flashinfer/jit/fused_moe.py (1)
  • gen_cutlass_fused_moe_sm103_module (50-65)
flashinfer/jit/core.py (1)
  • build_and_load (300-312)
flashinfer/fused_moe/__init__.py (1)
flashinfer/jit/fused_moe.py (1)
  • gen_cutlass_fused_moe_sm103_module (50-65)
flashinfer/jit/fused_moe.py (2)
flashinfer/jit/core.py (1)
  • JitSpec (213-312)
flashinfer/compilation_context.py (1)
  • get_nvcc_flags_list (50-68)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (6)
flashinfer/fused_moe/__init__.py (1)

24-24: LGTM! SM103 module generator properly exposed.

The new gen_cutlass_fused_moe_sm103_module is correctly imported and added to the public API, following the established pattern for other architecture-specific module generators.

Also applies to: 43-43

flashinfer/aot.py (2)

46-46: LGTM! SM103 module generator imported for AOT build.

The import correctly brings in the new SM103 module generator alongside existing architecture-specific generators.


498-498: LGTM! SM103 module properly integrated into AOT build pipeline.

The conditional generation of the SM103 fused MoE module follows the established pattern for other architectures and is correctly gated by the has_sm103 capability check.
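
As a rough illustration of that wiring (the jit_specs list name is an assumption; the import path and the has_sm103 gate are stated in this review):

```python
# Sketch of the AOT integration in flashinfer/aot.py.
from flashinfer.jit.fused_moe import gen_cutlass_fused_moe_sm103_module

if has_sm103:
    # Ahead-of-time compile the SM103 fused MoE module alongside the
    # existing architecture-specific specs.
    jit_specs.append(gen_cutlass_fused_moe_sm103_module())
```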

flashinfer/fused_moe/core.py (2)

37-37: LGTM! SM103 module generator imported into core dispatch.

The import correctly brings in the new SM103-specific module generator for backend routing.


319-321: LGTM! Backend "103" correctly routed to SM103 module.

The conditional routing properly directs SM103 architecture to its dedicated module generator. The separation from SM100/SM110 is appropriate since SM103 requires distinct compilation flags (-DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS).
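
A sketch of how that routing plausibly reads in flashinfer/fused_moe/core.py; only the "103" branch and the build_and_load() call are confirmed above, while the neighboring branches and generator names are assumptions mirroring the existing SM100/SM120 pattern:

```python
# Sketch of the backend dispatch in flashinfer/fused_moe/core.py.
if backend == "103":
    # New in this PR: SM103 needs -DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS,
    # so it gets its own generator instead of sharing the SM100 path.
    module = gen_cutlass_fused_moe_sm103_module().build_and_load()
elif backend in ("100", "110"):
    module = gen_cutlass_fused_moe_sm100_module().build_and_load()  # assumed name
elif backend == "120":
    module = gen_cutlass_fused_moe_sm120_module().build_and_load()  # assumed name
```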

flashinfer/jit/fused_moe.py (1)

50-66: LGTM! SM103 module generator correctly implemented.

The new function properly defines the SM103 fused MoE module generator with:

  • Appropriate compilation flags including the SM103-specific -DCOMPILE_BLACKWELL_SM103_TMA_GROUPED_GEMMS
  • Correct major version constraint [10] for SM103 (compute capability 10.3)
  • Consistent pattern following existing SM120 and SM100 implementations
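
As a usage note, a caller can build the module through the public export roughly as follows; the import path comes from the __init__.py change in this PR, while the capability check shown here is just one way to pick the backend:

```python
# Usage sketch: JIT-build the SM103 fused MoE module on a matching device.
import torch
from flashinfer.fused_moe import gen_cutlass_fused_moe_sm103_module

major, minor = torch.cuda.get_device_capability()
if (major, minor) == (10, 3):  # SM103 corresponds to compute capability 10.3
    spec = gen_cutlass_fused_moe_sm103_module()
    moe_module = spec.build_and_load()  # compiles with the SM103 nvcc flags
```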


@gemini-code-assist
Contributor

Summary of Changes

Hello @aleozlx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances FlashInfer by adding explicit support for the SM103 GPU architecture within its fused Mixture-of-Experts (MoE) generation pipeline. By providing a dedicated CUTLASS module with tailored compilation flags, the changes aim to optimize performance and ensure compatibility for models utilizing MoE operations, particularly with 3xFP4 quantization, on SM103-based hardware. This allows the system to better leverage the specific capabilities of the SM103 architecture for improved efficiency.

Highlights

  • SM103 Architecture Support: Introduced a dedicated CUTLASS fused Mixture-of-Experts (MoE) module specifically for the SM103 architecture, enabling optimized performance for this hardware.
  • Advanced Compilation Flags: Configured specific NVCC flags for SM103, including support for BF16, FP8, FP4 data types, and Blackwell TMA grouped GEMMs, to leverage the architecture's capabilities.
  • Integration into Build System: Integrated the new SM103 MoE module into the ahead-of-time (AOT) compilation process and the runtime selection logic, ensuring it's built and used when targeting SM103.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for the sm103 architecture for 3xfp4 MoE generation. The changes are well-contained, introducing a new gen_cutlass_fused_moe_sm103_module and wiring it into the AOT build system and runtime dispatch logic. The implementation looks correct. I have one suggestion regarding the nvcc_flags to improve consistency with other architecture-specific modules and potentially reduce compilation time. Overall, this is a good patch that achieves its goal.

@aleozlx
Collaborator Author

aleozlx commented Nov 12, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !132 has been created, and the CI pipeline #38366909 is currently running. I'll report back once the pipeline job completes.

Collaborator

@bkryu bkryu left a comment


LGTM, looks straightforward. Let's wait for the CI results to come back before merging.

@flashinfer-bot
Collaborator

[FAILED] Pipeline #38366909: 12/17 passed

@yzh119
Collaborator

yzh119 commented Nov 13, 2025

There is an output mismatch in test_groupwise_scaled_gemm_mxfp4 in the B200 and GB300 UTs. @aleozlx, would you mind taking a look?

@aleozlx
Collaborator Author

aleozlx commented Nov 13, 2025

ok looking

Collaborator

@wenscarl wenscarl left a comment


LGTM

@aleozlx
Collaborator Author

aleozlx commented Nov 14, 2025

```
$ pytest tests/gemm/test_groupwise_scaled_gemm_mxfp4.py::test_mxfp8_mxfp4_groupwise_group_gemm[out_dtype1-fp8_dtype1-8-8192-4096-8192]

1 passed in 204.18s (0:03:24)
1 passed in 2.75s

$ pytest tests/gemm/test_groupwise_scaled_gemm_mxfp4.py::test_mxfp8_mxfp4_groupwise_group_gemm

3456 passed in 60.88s (0:01:00)
```

@aleozlx
Collaborator Author

aleozlx commented Nov 14, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !132 has been created, and the CI pipeline #38503080 is currently running. I'll report back once the pipeline job completes.

@aleozlx
Collaborator Author

aleozlx commented Nov 14, 2025

As far as we can tell, the error was a glitch. We'll keep monitoring the UTs and, if it shows up again, investigate and patch it then.

Merging with multiple approvals.

@aleozlx aleozlx merged commit 37434ed into flashinfer-ai:main Nov 14, 2025
4 checks passed
qsang-nv pushed a commit to qsang-nv/flashinfer that referenced this pull request Nov 18, 2025