
fix: add SM121 support to SM120 version guards #2631

Merged
yzh119 merged 2 commits into flashinfer-ai:main from Yuening-wa:add_sm121 on Feb 25, 2026

Conversation

@Yuening-wa
Contributor

@Yuening-wa Yuening-wa commented Feb 24, 2026

📌 Description

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • New Features
    • Broadened hardware compatibility to include additional NVIDIA Blackwell GPUs (SM121) and added an explicit configuration option for the newer scheduler on those devices.
  • Documentation
    • Updated user-facing messages, docs and comments to reflect SM121 support.
  • Tests
    • Adjusted test skip messages to include SM121 where applicable.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Yuening-wa, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances FlashInfer's compatibility with newer NVIDIA GPU architectures by integrating SM121 into existing SM120-specific logic. This ensures that advanced features like FP4 GEMM and optimized Cutlass configurations are correctly applied to SM121 devices, maintaining performance and functionality across a broader range of hardware.

Highlights

  • SM121 Support: Extended existing SM120 version guards and configurations to include support for SM121, ensuring compatibility with newer NVIDIA architectures.
  • FP4 GEMM Requirements: Updated the _cudnn_gemm_fp4_requirement function in Python to check for SM121 alongside SM120 for CUDNN FP4 GEMM operations.
  • Cutlass GEMM Configurations: Modified the C++ Cutlass GEMM configuration logic to apply SM120-specific tile configurations and StreamK scheduler options to SM121 devices.


Changelog
  • flashinfer/gemm/gemm_base.py
    • Modified _cudnn_gemm_fp4_requirement to include SM121 in the version check for FP4 GEMM.
  • include/flashinfer/gemm/cutlass_gemm_configs.h
    • Updated getTileConfigAsInt to return SM120 tile configurations for SM121.
    • Extended the StreamK scheduler option to apply for SM121 devices.
  • include/flashinfer/trtllm/common.h
    • Added a new constant kSM_121 for the SM121 architecture.
Activity
  • No human activity (comments, reviews) has been recorded on this pull request yet.

@coderabbitai
Contributor

coderabbitai bot commented Feb 24, 2026

📝 Walkthrough

Walkthrough

This PR expands SM120-specific checks, messages, and config paths to also include SM121 across GEMM, Cutlass config, XQA, and tests; it adds a new kSM_121 constant and a CutlassGemmConfig constructor for SM120/SM121 scheduler choice.

Changes

  • GEMM core & cuDNN checks — flashinfer/gemm/gemm_base.py, include/flashinfer/gemm/fp4_gemm_template_sm120.h, include/flashinfer/gemm/fp4_gemm_cutlass_template_sm120.h, include/flashinfer/gemm/gemm_groupwise_sm120.cuh, include/flashinfer/gemm/group_gemm_fp8_groupwise_sm120.cuh: Broadened SM120-only checks, messages, and comments to include SM121 for cuDNN FP4/MXFP4 requirements and GEMM groupwise/FP4 paths; no control-flow changes.
  • Cutlass GEMM config API — include/flashinfer/gemm/cutlass_gemm_configs.h: Added a constructor accepting use_stream_k for SM120/SM121; updated getTileConfigAsInt() and toString() to map/report SM121 like SM120.
  • SM constant & small docs/messages — include/flashinfer/trtllm/common.h, flashinfer/decode.py, flashinfer/mla.py, flashinfer/xqa.py, flashinfer/jit/gemm/core.py, flashinfer/jit/gemm/cutlass/generate_kernels.py, include/flashinfer/gemm/cutlass_gemm_configs.h (call sites): Added kSM_121 and updated docstrings, comments, and user-facing error/skip messages to reference SM121 alongside SM120.
  • Device-specific C/CUDA sources — csrc/gemm_groupwise_sm120.cu, csrc/group_gemm_fp8_groupwise_sm120.cu, csrc/trtllm_fmha_v2_binding.cu: Updated comments and error text to include SM121 where previously SM120-only; no logic changes.
  • Tests — tests/attention/test_trtllm_gen_mla.py, tests/attention/test_xqa.py, tests/attention/test_xqa_batch_decode.py, tests/attention/test_xqa_mla_batch_decode.py, tests/moe/test_trtllm_cutlass_fused_moe.py: Adjusted skip/expectation messages to include SM121 in supported GPU lists; test logic unchanged.


Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

run-ci

Suggested reviewers

  • nvmbreughe
  • yzh119
  • cyx-6
  • yongwww
  • aleozlx

Poem

🐰 I hopped from SM120 to SM121's land,
Updated constants, messages, and plan,
Tiles and schedulers now aligned,
A tiny tweak, compatibility signed. 🥕

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 12.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check — ⚠️ Warning: The PR description uses only the template with no actual content filled in; the Description and Related Issues sections are empty, and reviewer notes are blank. Resolution: fill in the Description section with what this PR does and why it's needed, link any related issues, and add reviewer notes if applicable.

✅ Passed checks (1 passed)
  • Title check — ✅ Passed: The title clearly and concisely describes the main change (adding SM121 support to existing SM120 version guards), which aligns with the comprehensive file-level changes.



Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for the SM121 architecture by including it in version guards that previously only checked for SM120. The changes are straightforward and correctly extend support across the Python and C++ codebases. I've identified a couple of minor issues where comments and error messages should be updated to reflect the inclusion of SM121, ensuring code clarity and accurate user feedback. Overall, the changes are good and align with the PR's goal.

  if (
      not use_nvfp4
-     and _match_sm_version(a.device, ["120"])
+     and _match_sm_version(a.device, ["120", "121"])

Severity: medium

While this change correctly adds support for SM121, the error message raised on line 3044, which is controlled by the CUDNN_FP4_MXFP4_SM120_CUDNN_VERSION_ERROR constant, is now potentially misleading as it only mentions SM120. To avoid confusion for users on SM121 devices, please consider updating the error message to include SM121.

      << "\n\tenable cuda kernel: " << (enableCudaKernel ? "true" : "false");
  // SM120 specific: StreamK scheduler option
- if (sm_version == 120) {
+ if (sm_version == 120 || sm_version == 121) {

Severity: medium

With this change, the check now includes SM121, but the comment on the preceding line (416) still says 'SM120 specific'. Please update the comment to reflect this change for better code clarity. For example: // SM120/SM121 specific: StreamK scheduler option.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@flashinfer/gemm/gemm_base.py`:
- Around line 3039-3043: The conditional checks use_nvfp4,
_match_sm_version(a.device, ["120", "121"]), and cudnn.backend_version() < 91400
but the raised error string only mentions “SM120”; update the error message
produced in this branch to mention both SM120 and SM121 (or use a combined
phrase like “SM120/SM121”) so users on SM121 see the correct guidance; locate
the raise or processLogger.error call inside the block guarded by
use_nvfp4/_match_sm_version and change the literal to include SM121.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41545e5 and 06da444.

📒 Files selected for processing (3)
  • flashinfer/gemm/gemm_base.py
  • include/flashinfer/gemm/cutlass_gemm_configs.h
  • include/flashinfer/trtllm/common.h

@yzh119
Collaborator

yzh119 commented Feb 24, 2026

/bot run

@flashinfer-bot
Collaborator

GitLab MR !347 has been created, and the CI pipeline #44685986 is currently running. I'll report back once the pipeline job completes.

@yzh119
Collaborator

yzh119 commented Feb 24, 2026

@flashinfer-bot run

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
include/flashinfer/gemm/fp4_gemm_template_sm120.h (1)

254-259: ⚠️ Potential issue | 🟡 Minor

Stale inline comment at line 259 not updated to match the SM120/SM121 change.

Lines 254–255 were updated to reference "SM120/SM121", but the adjacent inline comment on line 259 still only mentions "SM120":

cutlass::gemm::collective::StageCount<2>, /* Fixed 2 stages for SM120 */
✏️ Proposed fix
-        cutlass::gemm::collective::StageCount<2>, /* Fixed 2 stages for SM120 */
+        cutlass::gemm::collective::StageCount<2>, /* Fixed 2 stages for SM120/SM121 */
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@include/flashinfer/gemm/fp4_gemm_template_sm120.h` around lines 254 - 259,
Update the stale inline comment that mentions only "SM120" next to
cutlass::gemm::collective::StageCount<2> to reference "SM120/SM121" (or make it
generic) so it matches the surrounding lines that were changed; locate the
CollectiveMainloop typedef in fp4_gemm_template_sm120.h and modify the comment
after StageCount<2> to reflect SM120/SM121 consistency.
🧹 Nitpick comments (1)
include/flashinfer/gemm/fp4_gemm_template_sm120.h (1)

52-73: SMTypeAdapter definitions are unused in SM120 — remove or document to avoid confusion.

The SMTypeAdapter<_1SM> and SMTypeAdapter<_2SM> structs define SM100-specific schedules (KernelTmaWarpSpecialized{1,2}SmNvf4Sm100) that are never referenced in the SM120 kernel macro. SM120 hardcodes EpilogueScheduleAuto and KernelScheduleAuto directly (lines 244–245), bypassing SMTypeAdapter entirely.

In contrast, SM100 (fp4_gemm_template_sm100.h) actively uses these fields at lines 129–130 within its macro definition. The SM120 definitions appear to be an artifact of copying from SM100 and represent dead code that risks future misinterpretation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@include/flashinfer/gemm/fp4_gemm_template_sm120.h` around lines 52 - 73,
SMTypeAdapter<_1SM> and SMTypeAdapter<_2SM> declare SM100-specific
EpilogueSchedule and MainloopSchedule values that are never used by the SM120
kernel (which uses EpilogueScheduleAuto and KernelScheduleAuto); remove these
dead structs or clearly document them as SM100-only artifacts to avoid
confusion. Locate the template specializations SMTypeAdapter<_1SM> and
SMTypeAdapter<_2SM> and either delete them or add a comment indicating they are
intentionally present for SM100 compatibility only, ensuring references to
EpilogueSchedule and MainloopSchedule and the fact SM120 uses
EpilogueScheduleAuto/KernelScheduleAuto are mentioned.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@include/flashinfer/gemm/fp4_gemm_template_sm120.h`:
- Around line 254-259: Update the stale inline comment that mentions only
"SM120" next to cutlass::gemm::collective::StageCount<2> to reference
"SM120/SM121" (or make it generic) so it matches the surrounding lines that were
changed; locate the CollectiveMainloop typedef in fp4_gemm_template_sm120.h and
modify the comment after StageCount<2> to reflect SM120/SM121 consistency.

---

Nitpick comments:
In `@include/flashinfer/gemm/fp4_gemm_template_sm120.h`:
- Around line 52-73: SMTypeAdapter<_1SM> and SMTypeAdapter<_2SM> declare
SM100-specific EpilogueSchedule and MainloopSchedule values that are never used
by the SM120 kernel (which uses EpilogueScheduleAuto and KernelScheduleAuto);
remove these dead structs or clearly document them as SM100-only artifacts to
avoid confusion. Locate the template specializations SMTypeAdapter<_1SM> and
SMTypeAdapter<_2SM> and either delete them or add a comment indicating they are
intentionally present for SM100 compatibility only, ensuring references to
EpilogueSchedule and MainloopSchedule and the fact SM120 uses
EpilogueScheduleAuto/KernelScheduleAuto are mentioned.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 06da444 and f86d2df.

📒 Files selected for processing (19)
  • csrc/gemm_groupwise_sm120.cu
  • csrc/group_gemm_fp8_groupwise_sm120.cu
  • csrc/trtllm_fmha_v2_binding.cu
  • flashinfer/decode.py
  • flashinfer/gemm/gemm_base.py
  • flashinfer/jit/gemm/core.py
  • flashinfer/jit/gemm/cutlass/generate_kernels.py
  • flashinfer/mla.py
  • flashinfer/xqa.py
  • include/flashinfer/gemm/cutlass_gemm_configs.h
  • include/flashinfer/gemm/fp4_gemm_cutlass_template_sm120.h
  • include/flashinfer/gemm/fp4_gemm_template_sm120.h
  • include/flashinfer/gemm/gemm_groupwise_sm120.cuh
  • include/flashinfer/gemm/group_gemm_fp8_groupwise_sm120.cuh
  • tests/attention/test_trtllm_gen_mla.py
  • tests/attention/test_xqa.py
  • tests/attention/test_xqa_batch_decode.py
  • tests/attention/test_xqa_mla_batch_decode.py
  • tests/moe/test_trtllm_cutlass_fused_moe.py
✅ Files skipped from review due to trivial changes (6)
  • include/flashinfer/gemm/fp4_gemm_cutlass_template_sm120.h
  • csrc/trtllm_fmha_v2_binding.cu
  • tests/attention/test_xqa_batch_decode.py
  • include/flashinfer/gemm/gemm_groupwise_sm120.cuh
  • tests/attention/test_trtllm_gen_mla.py
  • csrc/gemm_groupwise_sm120.cu

@johnnynunez
Contributor

Thanks, mates! Let's keep improving DGX Spark.

@yzh119 yzh119 added the run-ci label Feb 24, 2026
@yzh119 yzh119 enabled auto-merge (squash) February 24, 2026 15:25
@flashinfer-bot
Collaborator

[FAILED] Pipeline #44685986: 9/20 passed

@eugr

eugr commented Feb 24, 2026

keeping fingers crossed :)

@yzh119 yzh119 disabled auto-merge February 25, 2026 03:13
@yzh119 yzh119 merged commit dd417a5 into flashinfer-ai:main Feb 25, 2026
45 checks passed
ameynaik-hub pushed a commit to ameynaik-hub/flashinfer that referenced this pull request Mar 18, 2026
Signed-off-by: Amey Naik <212485788+ameynaik-hub@users.noreply.github.com>

5 participants