
feat: add SM120 fmha_v2 kernels to AOT pip wheel builds #2885

Open

blake-snc wants to merge 2 commits into flashinfer-ai:main from blake-snc:feat/enable-sm120-default

Conversation

@blake-snc (Contributor) commented Mar 24, 2026

Summary

`gen_trtllm_fmha_v2_sm120_module()` exists in `jit/attention/modules.py`, and the JIT runtime path (`generate_kernels.py`) already dispatches to it correctly. However, `aot.py`'s `gen_all_modules()`, which drives the pip wheel AOT build, was missing it from the `has_sm120 or has_sm121` section.

This means SM120/SM121 devices using a pip wheel would never get the fmha_v2 SM120 kernels compiled into the wheel, and would have to fall back to slower paths.

Fix: add `gen_trtllm_fmha_v2_sm120_module()` to the `has_sm120 or has_sm121` block in `aot.py`, alongside the other SM120 modules (fused MOE, GEMM, FP4 quantization).
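As an illustration of the gap, here is a minimal, self-contained sketch of the registration pattern described above. The generator functions are stubs standing in for the real flashinfer module builders, and the `gen_all_modules()` signature is simplified; only the function names come from the PR, not the actual implementation.

```python
# Stub generators mirroring the names of the real flashinfer SM120
# module builders; each returns a placeholder "JIT spec" string.
def gen_cutlass_fused_moe_sm120_module():
    return "cutlass_fused_moe_sm120"

def gen_gemm_sm120_module():
    return "gemm_sm120"

def gen_gemm_sm120_module_cutlass_fp4():
    return "gemm_sm120_cutlass_fp4"

def gen_trtllm_fmha_v2_sm120_module():
    return "trtllm_fmha_v2_sm120"

def gen_all_modules(has_sm120: bool, has_sm121: bool) -> list:
    """Simplified stand-in for the AOT build driver in aot.py."""
    jit_specs = []
    if has_sm120 or has_sm121:
        # Previously only the MOE/GEMM/FP4 generators were registered here.
        jit_specs.append(gen_cutlass_fused_moe_sm120_module())
        jit_specs.append(gen_gemm_sm120_module())
        jit_specs.append(gen_gemm_sm120_module_cutlass_fp4())
        # The fix: also register the fmha_v2 attention module so it is
        # compiled into the AOT pip wheel for SM12x targets.
        jit_specs.append(gen_trtllm_fmha_v2_sm120_module())
    return jit_specs
```

With the last `append` missing, SM120/SM121 wheel builds would compile every SM120 module except the attention kernels, which is exactly the gap this PR closes.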

No behavior change for JIT users; only affects AOT pip wheel builds.

Addresses the AOT gap noted in #2555.

Contributed by Second Nature Computing (https://joinsecondnature.com)

Summary by CodeRabbit

  • Chores
    • Expanded optimized inference module generation for SM120 and SM121 GPUs to include attention kernels in addition to existing fused MOE/GEMM coverage.

`gen_trtllm_fmha_v2_sm120_module()` was already callable via JIT
(generate_kernels.py dispatches to it at runtime), but was never
registered in gen_all_modules() in aot.py. SM120/SM121 devices
getting flashinfer from a pip wheel would skip the fmha_v2 SM120
kernels entirely during the AOT build step, falling back to slower
paths or missing support.

Add it to the `has_sm120 or has_sm121` section alongside the other
SM120 modules (fused MOE, GEMM, FP4 quantization).

Contributed by Second Nature Computing (https://joinsecondnature.com)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai bot commented Mar 24, 2026

No actionable comments were generated in the recent review. 🎉

📝 Walkthrough

This change imports the SM120 FMHA_V2 attention module generator and appends its JIT spec into gen_all_modules() when has_sm120 or has_sm121 is true, expanding SM12x shared-kernel coverage to include attention.

Changes

  • FMHA_V2 SM120 Module Integration (flashinfer/aot.py): added the gen_trtllm_fmha_v2_sm120_module import, appended its JIT spec in gen_all_modules() under the has_sm120 or has_sm121 branch, and updated the inline comment to mention attention alongside fused MOE/GEMM.


🚥 Pre-merge checks (1 passed, 1 warning, 1 inconclusive)

❌ Flagged checks
  • Docstring Coverage (Warning): docstring coverage is 0.00%, below the required 80.00% threshold. Write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check (Inconclusive): the description clearly summarizes the problem, solution, and impact, but does not follow the template structure with explicit 'Description', 'Related Issues', and checklist sections. Consider restructuring it to match the template.
✅ Passed checks
  • Title check: the title accurately summarizes the main change, adding SM120 fmha_v2 kernels to AOT pip wheel builds.


@gemini-code-assist bot commented

Summary of Changes

This pull request enhances the AOT compilation process for FlashInfer's pip wheel builds by integrating the fmha_v2 kernels specifically designed for SM120/SM121 GPU architectures. This ensures that users deploying with pre-compiled wheels on compatible hardware will benefit from optimized performance, addressing a previously identified gap where these kernels were not included, leading to suboptimal execution paths. The change is isolated to AOT builds and does not impact JIT compilation workflows.

Highlights

  • AOT Compilation for SM120/SM121: Added the fmha_v2 kernels for SM120/SM121 architectures to the Ahead-Of-Time (AOT) pip wheel build process.
  • Performance Improvement: Resolved an issue where SM120/SM121 devices using pip wheels would fall back to slower paths due to missing compiled fmha_v2 kernels.



@gemini-code-assist bot left a comment

Code Review

This pull request integrates a new TensorRT-LLM Flash Attention v2 module for SM120 architectures into the AOT compilation process. A review comment suggests updating an existing code comment to accurately reflect the inclusion of attention kernels alongside fused MOE and GEMM, improving clarity and maintainability.

jit_specs.append(gen_cutlass_fused_moe_sm120_module())
jit_specs.append(gen_gemm_sm120_module())
jit_specs.append(gen_gemm_sm120_module_cutlass_fp4())
jit_specs.append(gen_trtllm_fmha_v2_sm120_module())
Severity: medium

With the addition of this fmha_v2 module, the comment on lines 525-527 is now slightly outdated as it only mentions 'fused MOE and GEMM'. For better maintainability, please consider updating it to include attention kernels for clarity.

For example:

-            # SM120 and SM121 share the same CUTLASS kernels for fused MOE and GEMM.
+            # SM120 and SM121 share the same kernels for fused MOE, GEMM, and attention.

@coderabbitai bot left a comment

🧹 Nitpick comments (1)
flashinfer/aot.py (1)

524-531: Consider decoupling FMHA v2 from the add_moe gate.

At Line 531, this is an attention kernel but it is only emitted when add_moe is True. For custom AOT configs (--add-moe false), that can unexpectedly drop FMHA v2.

♻️ Suggested placement change
@@
-    if add_moe:
+    if has_sm120 or has_sm121:
+        jit_specs.append(gen_trtllm_fmha_v2_sm120_module())
+
+    if add_moe:
@@
         if has_sm120 or has_sm121:
             # SM120 and SM121 share the same CUTLASS kernels for fused MOE and GEMM.
             # The SM120 module generators use supported_major_versions=[12] which
             # compiles for all SM12x targets.
             jit_specs.append(gen_cutlass_fused_moe_sm120_module())
             jit_specs.append(gen_gemm_sm120_module())
             jit_specs.append(gen_gemm_sm120_module_cutlass_fp4())
-            jit_specs.append(gen_trtllm_fmha_v2_sm120_module())
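The placement concern above can be made concrete with a small sketch. The flag names and stub generators below are simplified stand-ins for the real aot.py logic; they only illustrate how nesting the SM12x block under an add_moe gate can silently drop the attention module, and how the suggested decoupling avoids it.

```python
# Stub generators standing in for the real flashinfer module builders.
def gen_trtllm_fmha_v2_sm120_module():
    return "trtllm_fmha_v2_sm120"

def gen_cutlass_fused_moe_sm120_module():
    return "cutlass_fused_moe_sm120"

def gen_specs_gated(add_moe: bool, has_sm12x: bool) -> list:
    # Current placement: the attention module lives inside the add_moe
    # branch, so --add-moe false also drops FMHA v2.
    jit_specs = []
    if add_moe and has_sm12x:
        jit_specs.append(gen_cutlass_fused_moe_sm120_module())
        jit_specs.append(gen_trtllm_fmha_v2_sm120_module())
    return jit_specs

def gen_specs_decoupled(add_moe: bool, has_sm12x: bool) -> list:
    # Suggested placement: attention is emitted for SM12x targets
    # regardless of the MOE flag.
    jit_specs = []
    if has_sm12x:
        jit_specs.append(gen_trtllm_fmha_v2_sm120_module())
    if add_moe and has_sm12x:
        jit_specs.append(gen_cutlass_fused_moe_sm120_module())
    return jit_specs
```

Running both variants with the MOE flag disabled shows the difference: the gated version omits the attention spec entirely, while the decoupled version still emits it.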

