
Add gen_gemm_sm100_module_cutlass_mxfp8 to jit-cache #2549

Merged
yongwww merged 1 commit into flashinfer-ai:main from yongwww:aot-gen_gemm_sm100_module_cutlass_mxfp8
Feb 13, 2026
Merged

Add gen_gemm_sm100_module_cutlass_mxfp8 to jit-cache#2549
yongwww merged 1 commit intoflashinfer-ai:mainfrom
yongwww:aot-gen_gemm_sm100_module_cutlass_mxfp8

Conversation

@yongwww
Member

@yongwww yongwww commented Feb 12, 2026

📌 Description

Follow-up to #2464.

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • New Features
    • Added mixed FP8 (mxfp8) precision support for SM100 GPUs, expanding available kernel variants for more flexible computation options.

@gemini-code-assist
Contributor

Summary of Changes

Hello @yongwww, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the JIT compilation system for the SM100 GPU architecture by incorporating a new GEMM module specifically designed for mixed-precision FP8 operations. This integration ensures that the system can dynamically compile and optimize kernels utilizing this new data type, potentially leading to improved performance and broader hardware compatibility for relevant computations.

Highlights

  • JIT Cache Extension: The gen_gemm_sm100_module_cutlass_mxfp8 function has been added to the JIT compilation cache for the SM100 architecture, expanding support for mixed-precision floating-point operations.


Changelog
  • flashinfer/aot.py
    • Imported gen_gemm_sm100_module_cutlass_mxfp8.
    • Registered gen_gemm_sm100_module_cutlass_mxfp8 within the gen_all_modules function for SM100 architecture.
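For illustration only (the actual contents of flashinfer/aot.py are not shown in this thread), the registration described in the changelog typically follows a pattern like the sketch below. Only the function names `gen_gemm_sm100_module_cutlass_mxfp8` and `gen_all_modules` come from this PR; the stub generators, the returned spec strings, and the `has_sm100` flag are hypothetical stand-ins:

```python
# Hypothetical sketch of extending an AOT module-generator list.
# The structure is illustrative, not the real flashinfer/aot.py.

def gen_gemm_sm100_module_cutlass_fp4():
    # Stand-in for an existing SM100 FP4 generator.
    return "sm100_cutlass_fp4"

def gen_gemm_sm100_module_cutlass_mxfp8():
    # Stand-in for the generator this PR registers.
    return "sm100_cutlass_mxfp8"

def gen_all_modules(has_sm100: bool) -> list:
    """Collect JIT specs; SM100-only generators are gated on the arch flag."""
    specs = []
    if has_sm100:
        specs.append(gen_gemm_sm100_module_cutlass_fp4())
        specs.append(gen_gemm_sm100_module_cutlass_mxfp8())  # added by this PR
    return specs

print(gen_all_modules(has_sm100=True))
# → ['sm100_cutlass_fp4', 'sm100_cutlass_mxfp8']
```

The point of registering the generator here is that the AOT build then compiles the mxfp8 variant ahead of time instead of deferring it to the first JIT call at runtime.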


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to add the gen_gemm_sm100_module_cutlass_mxfp8 module to the AOT build cache. While the changes correctly add the necessary import and function call, my review of the context files reveals a critical issue in the underlying implementation of this module. It appears to be restricted to CUDA versions 10 and 11, which is incompatible with the sm100 (Blackwell) architecture that requires CUDA 12.8+. Adding this module to the build will likely cause AOT compilation to fail for Blackwell GPUs. Therefore, I have recommended removing these changes until the module is fixed.
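The version incompatibility the review describes can be checked mechanically. The guard below is a hypothetical sketch (not code from this PR or from FlashInfer) of the kind of CUDA-version gate at issue, where an SM100 (Blackwell) build requires CUDA 12.8 or newer; the function names and version tuples are illustrative:

```python
# Hypothetical CUDA-version gates; names and values are illustrative.

def supports_sm100(cuda_version: tuple) -> bool:
    """Blackwell (sm100) kernels require CUDA 12.8 or newer."""
    return cuda_version >= (12, 8)

def broken_guard(cuda_version: tuple) -> bool:
    """A gate restricted to CUDA 10/11, as the review describes: it can
    never admit an sm100 build, since sm100 needs CUDA 12.8+."""
    return (10, 0) <= cuda_version < (12, 0)

print(supports_sm100((12, 8)))  # True: CUDA 12.8 satisfies the sm100 requirement
print(broken_guard((12, 8)))    # False: the mismatch the review flags
```

Whether the underlying module actually carries such a restriction is disputed in the next comment.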

Collaborator

@yzh119 yzh119 left a comment


Gemini's suggestion doesn't look reasonable to me.

@coderabbitai
Contributor

coderabbitai bot commented Feb 12, 2026

📝 Walkthrough

A new GEMM module generator for SM100 with mixed FP8 precision support is introduced. The generator function is imported from the gemm module and integrated into the AOT module generation flow, expanding the available kernel variants.

Changes

Cohort: SM100 GEMM mxfp8 Support
File(s): flashinfer/aot.py, flashinfer/jit/gemm
Summary: Adds the gen_gemm_sm100_module_cutlass_mxfp8 generator function and integrates it into the SM100 JIT spec generation sequence alongside the existing FP4/FP8 variants.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~5 minutes

Suggested labels

v0.6.3

Suggested reviewers

  • djmmoss
  • cyx-6
  • yzh119
  • wenscarl

Poem

🐰 A GEMM variant hops into view,
mxfp8 precision, shiny and new,
SM100 kernels multiply with delight,
Cutlass threads weaving math so bright! ✨

🚥 Pre-merge checks | ✅ 1 | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Merge Conflict Detection — ⚠️ Warning: merge conflicts detected in 2 files, flashinfer/aot.py (content) and flashinfer/utils.py (content). These conflicts must be resolved before merging into main; resolve them locally and push changes to this branch.
  • Description check — ❓ Inconclusive: the description includes the template structure but lacks meaningful content in the required Description section, only referencing a follow-up PR without explaining the actual changes or rationale. Resolution: expand the Description section to explain what changes are being made, why they are needed, and the purpose of adding mxfp8 support; consider also marking relevant checklist items.

✅ Passed checks (1 passed)

  • Title check — ✅ Passed: the title clearly and concisely describes the main change, adding a new GEMM module generator function (gen_gemm_sm100_module_cutlass_mxfp8) to the JIT cache.



No actionable comments were generated in the recent review. 🎉


@yongwww yongwww merged commit 2fff6b6 into flashinfer-ai:main Feb 13, 2026
44 of 61 checks passed
