Conversation

@bkryu (Collaborator) commented Dec 8, 2025

📌 Description

The CUDA release notes indicate full sm120 support in CUDA 12.8 and sm121 support in CUDA 12.9. This change updates the versions noted in aot.py accordingly.

The change also alters the coverage of flashinfer-jit-cache: today some sm120 kernels are not compiled and shipped for CUDA 12.9, though they are included for 13.0. This change ensures those kernels are included.
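To illustrate the kind of gating being adjusted, here is a minimal sketch of an SM-to-minimum-CUDA-version mapping. The names and structure here are hypothetical and do not reproduce the actual flashinfer/aot.py code; only the threshold values (sm110 → 13.0, sm120 → 12.8, sm121 → 12.9) come from this PR's description.

```python
# Hypothetical sketch: which SM architectures get kernels compiled for a given
# CUDA toolkit version, using the thresholds described in this PR. This is an
# illustration, not the actual flashinfer/aot.py implementation.

# Minimum CUDA toolkit version (major, minor) required per architecture.
SM_MIN_CUDA = {
    "sm110": (13, 0),
    "sm120": (12, 8),
    "sm121": (12, 9),
}

def supported_archs(cuda_version: tuple[int, int]) -> list[str]:
    """Return the architectures whose kernels should be built under this CUDA version."""
    return [sm for sm, min_ver in SM_MIN_CUDA.items() if cuda_version >= min_ver]

print(supported_archs((12, 9)))  # -> ['sm120', 'sm121']; sm110 still needs 13.0
```

Under the old thresholds (both sm120 and sm121 gated at 13.0), a CUDA 12.9 build would have returned an empty list here, which matches the missing-kernels symptom the PR describes.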

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes

    • Adjusted GPU capability detection and CUDA version requirements:
      • SM110 now requires CUDA 13.0 (raised from 12.9).
      • SM120 support expanded to CUDA 12.8 (lowered from 13.0).
      • SM121 support updated to CUDA 12.9 (from 13.0).
  • Impact

    • Broader compatibility for SM120/SM121 on slightly older CUDA; SM110 now requires a newer CUDA runtime.


@coderabbitai bot (Contributor) commented Dec 8, 2025

Walkthrough

Adjusts CUDA SM capability detection in flashinfer/aot.py by updating the version thresholds for SM110, SM120, and SM121 mappings (tuning which CUDA runtimes are considered compatible). No other logic or control-flow changes present.

Changes

Cohort / File(s) Summary
SM capability detection
flashinfer/aot.py
Update CUDA version thresholds for SM mappings: sm110 threshold changed from 12.9 → 13.0; sm120 threshold changed from 13.0 → 12.8; sm121 threshold changed from 13.0 → 12.9. No other code paths modified.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify the new numeric thresholds match intended CUDA release/version semantics.
  • Check any tests or runtime checks that assume previous thresholds.
  • Confirm downstream logic (feature gating or selection) still behaves correctly with updated mappings.

Possibly related PRs

Suggested reviewers

  • aleozlx
  • yzh119
  • cyx-6
  • wenscarl
  • nvmbreughe

Poem

🐰 I hopped through versions, nibbling fine,
Swapped some numbers — 12.9 to 13.0, align!
SM120 and 121 now dance in line,
A tiny tweak, a tidy sign,
Bright kernels hop and sing with brine. 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title check: ✅ Passed. The title clearly describes the main change (updating the sm12X minimum CUDA capability to 12.9 in aot.py), which matches the actual changes in the PR.
  • Description check: ✅ Passed. The description explains the changes with reference to the CUDA release notes and the impact on flashinfer-jit-cache coverage, with the pre-commit checklist completed and tests appropriately not marked.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 35c0347 and 1b95a22.

📒 Files selected for processing (1)
  • flashinfer/aot.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs


@gemini-code-assist bot (Contributor) commented
Summary of Changes

Hello @bkryu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refines the configuration for CUDA compute capabilities within the aot.py file. Specifically, it adjusts the minimum required CUDA version for sm120 and sm121 architectures to 12.9, potentially enhancing compatibility or correcting a previous version specification for these compute units.

Highlights

  • CUDA Capability Update: The minimum CUDA capability for sm120 and sm121 architectures has been updated from 13.0 to 12.9 in aot.py.

@gemini-code-assist bot (Contributor) left a comment
Code Review

This pull request updates the minimum required CUDA version for sm120 and sm121 architectures from 13.0 to 12.9 in flashinfer/aot.py. This change appears to be a correction to enable compilation for these architectures on CUDA 12.9, which is consistent with the versions for other newer SM architectures like sm100f, sm103, and sm110. The change is straightforward and I don't see any issues with it.

@bkryu (Collaborator, Author) commented Dec 8, 2025

/bot run

@flashinfer-bot (Collaborator) commented

GitLab MR !183 has been created, and the CI pipeline #39835910 is currently running. I'll report back once the pipeline job completes.

@bkryu (Collaborator, Author) commented Dec 8, 2025

/bot run

@flashinfer-bot (Collaborator) commented

GitLab MR !183 has been updated with latest changes, and the CI pipeline #39836651 is currently running. I'll report back once the pipeline job completes.

@yzh119 (Collaborator) left a comment

LGTM!

@yzh119 yzh119 merged commit 6bb01d1 into flashinfer-ai:main Dec 9, 2025
4 checks passed
@bkryu bkryu deleted the aot_sm12x_capability branch December 9, 2025 22:02