fix: compile flags for trtllm fmha_v2 #2175

Merged
yzh119 merged 2 commits into flashinfer-ai:main from jimmyzho:fmha
Dec 5, 2025

Conversation

@jimmyzho
Contributor

@jimmyzho jimmyzho commented Dec 4, 2025

📌 Description

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Chores

    • Removed noisy runtime console prints during build/generation.
    • Updated CUDA compiler requirements to target CUDA 12 and added a new compiler flag for compatibility.
  • Bug Fixes

    • Added an early check that raises a clear error on unsupported GPU devices (SM120a), preventing misruns.
  • Tests

    • Test now skips automatically when the required SM120a GPU support is not present.

✏️ Tip: You can customize this high-level summary in your review settings.

@jimmyzho jimmyzho requested a review from bkryu December 4, 2025 19:46
@coderabbitai
Contributor

coderabbitai bot commented Dec 4, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Suppressed two runtime prints in the FMHA code generator, restricted the supported NVCC major versions to only 12, added an NVCC flag, added an early device-capability ValueError guard to prefill, and added a runtime test skip when SM120a is unsupported.

Changes

  • Build / Generator output (flashinfer/jit/attention/fmha_v2/generator_utils.py): Removed two runtime print statements from generate_files (initial command invocation and pre-execution notification).
  • NVCC configuration (flashinfer/jit/attention/modules.py): Restricted supported NVCC major versions to [12] and appended "-Wno-deprecated-gpu-targets" to nvcc_flags (see the sketch after this list).
  • Runtime device guard (flashinfer/prefill.py, tests/attention/test_fmha_v2_prefill_deepseek.py): Added a pre-condition in fmha_v2_prefill_deepseek that raises ValueError when the active device is not SM120a-capable; the test now skips at runtime when is_sm120a_supported(cuda) is false.
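
Based only on the identifiers that appear in this review (current_compilation_context, get_nvcc_flags_list, nvcc_flags, gen_trtllm_fmha_v2_module), a minimal sketch of what the modules.py change could look like; the exact wiring inside the module is an assumption, not the actual diff:

def _trtllm_fmha_v2_nvcc_flags(compilation_context):
    # compilation_context stands in for the module's current_compilation_context object.
    flags = compilation_context.get_nvcc_flags_list(supported_major_versions=[12])
    flags.append("-Wno-deprecated-gpu-targets")  # silence deprecated-GPU-target warnings from nvcc
    return flags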

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

  • Inspect fmha_v2_prefill_deepseek for correct placement and wording of the early ValueError and ensure callers/tests expect this error path.
  • Verify tests' new runtime skip is correct and that CI will still exercise supported paths.
  • Confirm NVCC version restriction to 12 and the added flag don't break build permutations or packaging logic.
  • Quick check that removed prints don't remove needed diagnostic information for debugging builds.

Possibly related PRs

Suggested reviewers

  • cyx-6
  • wenscarl

Poem

"I nibble bytes where compilers play,
I hush a print and hop away.
NVCC now favors twelve so bright,
I guard the chip for safe delight.
A tiny rabbit, builds at night 🐇"

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check ⚠️ Warning: The PR description is entirely a template with no actual content filled in; the Description, Related Issues, and Reviewer Notes sections are all empty placeholders. Resolution: fill in the Description section with what the PR does and why, link any related issues, and add reviewer notes if applicable. Check the pre-commit and test checklist items as appropriate.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 16.67%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title mentions fixing compile flags for trtllm fmha_v2, which aligns with the NVCC version changes and the deprecation flag added in modules.py, but it doesn't capture all changes, including the device-support guard and test updates.

@gemini-code-assist
Contributor

Summary of Changes

Hello @jimmyzho, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the compatibility of the trtllm fmha_v2 module by enabling support for a broader range of NVIDIA GPU architectures, including Ampere and Hopper. This is achieved through adjustments to the NVCC compilation flags, specifically by incorporating older CUDA major versions and suppressing potential deprecation warnings, thereby allowing the module to be built and utilized on more diverse hardware environments.

Highlights

  • Expanded GPU Architecture Support: The trtllm fmha_v2 module now supports older NVIDIA GPU architectures, specifically Ampere (compute capability 8) and Hopper (compute capability 9), by extending the list of supported compute-capability major versions used for NVCC compilation.
  • Compiler Warning Suppression: A -Wno-deprecated-gpu-targets flag has been added to the NVCC compilation flags to suppress warnings that may arise when compiling for these older GPU targets.
  • Minor Code Cleanup: Two print statements related to the bin/print_traits.exe utility were commented out in generator_utils.py, likely for cleaner output or debugging purposes.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for Ampere and Hopper architectures for the trtllm_fmha_v2 module by updating the supported major versions for nvcc and adding a flag to suppress warnings about deprecated GPU targets. The changes look correct and align with the goal of extending hardware support. I've also noticed that some print statements in the kernel generation utility have been commented out. While this reduces verbosity, I've suggested using Python's logging module as a more maintainable and flexible approach for controlling debug output.

Comment on lines +3714 to +3717
# print('Running command "{}" to build "bin/print_traits.exe":'.format(" ".join(cmd)))
process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
output, error = process.communicate()
print('Running "bin/print_traits.exe":')
# print('Running "bin/print_traits.exe":')
Contributor


medium

Instead of commenting out these print statements, consider using the logging module. This allows for more flexible control over verbosity (e.g., via log levels like INFO or DEBUG) and is a better practice for maintainability. The information about the commands being run is valuable for debugging the kernel generation process.
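
A sketch of that suggestion, assuming cmd is the compiler command list that generate_files assembles just before this point; the logger name and the helper function are illustrative, not the library's API:

import logging
import subprocess

logger = logging.getLogger(__name__)

def run_print_traits(cmd):
    # cmd is the command assembled earlier in generate_files (passed in here for illustration).
    logger.debug('Running command "%s" to build "bin/print_traits.exe"', " ".join(cmd))
    process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    output, error = process.communicate()
    logger.debug('Running "bin/print_traits.exe"')
    return output, error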

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
flashinfer/jit/attention/fmha_v2/generator_utils.py (1)

3714-3718: Silencing debug prints in generate_files is reasonable

Commenting out these two debug print calls reduces log noise without changing behavior. If you need these diagnostics later, consider wiring them through a configurable logger or an env-guarded debug path instead of unconditional prints.
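
For the env-guarded variant, a minimal sketch; the FLASHINFER_FMHA_GEN_VERBOSE variable name is made up for illustration and is not an existing flag:

import os

_VERBOSE = os.environ.get("FLASHINFER_FMHA_GEN_VERBOSE", "0") == "1"

def _debug_print(msg):
    # Only emit generator diagnostics when explicitly requested via the environment.
    if _VERBOSE:
        print(msg)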

flashinfer/jit/attention/modules.py (1)

1726-1728: Verify NVCC compatibility after widening supported majors and adding -Wno-deprecated-gpu-targets

Both gen_fmha_cutlass_sm100a_module and gen_trtllm_fmha_v2_module now accept NVCC major versions 8–12, and the TRT-LLM FMHA path additionally always appends -Wno-deprecated-gpu-targets. That’s aligned with the goal of broader support and silencing deprecation noise on newer toolchains, but it’s worth double-checking that:

  • current_compilation_context.get_nvcc_flags_list(supported_major_versions=[8, 9, 10, 11, 12]) will never select an NVCC that (a) doesn’t understand -Wno-deprecated-gpu-targets or (b) can’t compile the targeted SMs (Ampere/Hopper/Blackwell) without failing the build, and
  • If very old NVCCs (8/9) are still in play for some users, either gate the addition of -Wno-deprecated-gpu-targets on the detected NVCC major, or constrain supported_major_versions here to the range you've actually validated for these kernels.

If current_compilation_context already enforces a minimum NVCC version for these architectures, then this change looks fine as-is; otherwise, a small guard around the warning-suppression flag would make this more robust.

Also applies to: 1903-1908
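
If such a guard were wanted, one possible sketch, with the version-detection helper and the >= 11 threshold both being assumptions rather than anything in this PR:

import re
import subprocess

def _nvcc_major_version(nvcc="nvcc"):
    # Parse the release number out of `nvcc --version`; returns e.g. 12, or 0 if not found.
    out = subprocess.check_output([nvcc, "--version"], text=True)
    match = re.search(r"release (\d+)\.(\d+)", out)
    return int(match.group(1)) if match else 0

nvcc_flags = []  # stands in for the flag list assembled in gen_trtllm_fmha_v2_module
if _nvcc_major_version() >= 11:  # threshold is an assumption, not validated here
    nvcc_flags.append("-Wno-deprecated-gpu-targets")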

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 40bc6e1 and bbfd407.

📒 Files selected for processing (2)
  • flashinfer/jit/attention/fmha_v2/generator_utils.py (1 hunks)
  • flashinfer/jit/attention/modules.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
flashinfer/jit/attention/fmha_v2/generator_utils.py

3715-3715: subprocess call: check for execution of untrusted input

(S603)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs

@bkryu
Copy link
Collaborator

bkryu commented Dec 4, 2025

/bot run

if "CUDA_PATH" in os.environ:
cmd[0] = os.environ["CUDA_PATH"] + "/bin/" + cmd[0]
print('Running command "{}" to build "bin/print_traits.exe":'.format(" ".join(cmd)))
# print('Running command "{}" to build "bin/print_traits.exe":'.format(" ".join(cmd)))
Collaborator


Are these changes relevant to the PR?

Contributor Author


not really, just commenting these out from the original trtllm script to clean up the stdout

@flashinfer-bot
Collaborator

GitLab MR !177 has been created, and the CI pipeline #39627205 is currently running. I'll report back once the pipeline job completes.

@jimmyzho jimmyzho changed the title from "fix: ampere and hopper flags for trtllm fmha_v2" to "fix: compile flags trtllm fmha_v2" on Dec 4, 2025
@jimmyzho jimmyzho changed the title from "fix: compile flags trtllm fmha_v2" to "fix: compile flags for trtllm fmha_v2" on Dec 4, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
tests/attention/test_fmha_v2_prefill_deepseek.py (1)

60-61: Good defensive test skip for unsupported hardware.

The test correctly skips on devices that don't support SM120a. The skip message could optionally be more specific about the requirements (SM 12.0 + CUDA >= 12.8), but the current message is acceptable.

If you want to be more specific, consider:

-        pytest.skip("fmha_v2_prefill_deepseek is only supported on SM120 GPUs.")
+        pytest.skip("fmha_v2_prefill_deepseek requires SM 12.0 GPU with CUDA >= 12.8")
flashinfer/prefill.py (1)

3606-3607: Good early validation for device capability.

The device check correctly prevents execution on unsupported hardware. The error message could optionally be more specific about the full requirements.

Consider making the error message more informative about both the GPU architecture and CUDA version requirements:

-        raise ValueError("fmha_v2_prefill_deepseek is only supported on SM120 GPUs.")
+        raise ValueError(
+            "fmha_v2_prefill_deepseek requires SM 12.0 GPU with CUDA >= 12.8. "
+            f"Current device: {query.device}"
+        )

Note: The static analysis hint about using custom exception classes (Ruff TRY003) is a style preference and not critical for this use case.
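
For reference, the SM 12.0 plus CUDA >= 12.8 requirement mentioned above maps to a capability check roughly like the following; this is a hedged sketch only, not the actual flashinfer.utils.is_sm120a_supported implementation:

import torch

def is_sm120a_supported_sketch(device: torch.device) -> bool:
    # Requires an SM 12.0 device and a CUDA toolkit of at least 12.8 (per the review notes above).
    if torch.version.cuda is None:
        return False
    major, minor = torch.cuda.get_device_capability(device)
    cuda_major, cuda_minor = map(int, torch.version.cuda.split(".")[:2])
    return (major, minor) == (12, 0) and (cuda_major, cuda_minor) >= (12, 8)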

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bbfd407 and 1e7fe36.

📒 Files selected for processing (3)
  • flashinfer/jit/attention/modules.py (1 hunks)
  • flashinfer/prefill.py (1 hunks)
  • tests/attention/test_fmha_v2_prefill_deepseek.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • flashinfer/jit/attention/modules.py
🧰 Additional context used
🧬 Code graph analysis (2)
flashinfer/prefill.py (3)
flashinfer/utils.py (1)
  • is_sm120a_supported (546-548)
include/flashinfer/trtllm/common.h (1)
  • device (83-90)
flashinfer/logits_processor/types.py (1)
  • device (119-123)
tests/attention/test_fmha_v2_prefill_deepseek.py (1)
flashinfer/utils.py (1)
  • is_sm120a_supported (546-548)
🪛 Ruff (0.14.7)
flashinfer/prefill.py

3607-3607: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (2)
tests/attention/test_fmha_v2_prefill_deepseek.py (1)

8-8: LGTM!

The import is correctly added and used in the test guard below.

flashinfer/prefill.py (1)

60-60: LGTM!

The import is correctly added and used in the device capability check.

@jimmyzho
Contributor Author

jimmyzho commented Dec 4, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !177 has been updated with latest changes, and the CI pipeline #39635306 is currently running. I'll report back once the pipeline job completes.

@yzh119 yzh119 merged commit cc50469 into flashinfer-ai:main Dec 5, 2025
4 checks passed
BingooYang pushed a commit to BingooYang/flashinfer that referenced this pull request Mar 13, 2026