
[Spark bug] Fix arch 12.1 -> "sm120a" flag for Spark, CUDA 12.9#2839

Merged
aleozlx merged 2 commits into flashinfer-ai:main from kahyunnam:knam/spark-compilation-fix
Mar 21, 2026

Conversation

@kahyunnam
Collaborator

@kahyunnam kahyunnam commented Mar 20, 2026

📌 Description

A bug was found in the nightly [Spark, 12.9] matrix https://gitlab-master.nvidia.com/dl/flashinfer/flashinfer-ci/-/jobs/285092631, where Spark compiles to "120a" (see the "/tmp/.cache/flashinfer/0.6.6/120a/" path in the log below).

E   RuntimeError: Check failed: (status == cudaSuccess) is false: SingleDecodeWithKVCache kernel launch failed, error: no kernel image is available for execution on the device
/tmp/.cache/flashinfer/0.6.6/120a/generated/single_decode_with_kv_cache_dtype_q_f16_dtype_kv_f16_dtype_o_f16_head_dim_qk_128_head_dim_vo_128_posenc_2_use_swa_False_use_logits_cap_False/single_decode.cu:100: RuntimeError: Check failed: (status == cudaSuccess) is false: SingleDecodeWithKVCache kernel launch failed, error: no kernel image is available for execution on the device

The root cause was #2725, which added logic to compile both Spark and RTX Pro 6000 to 120f, but only on the condition that the CUDA version is 13.0 or higher. Lower versions (12.9) fell back to the 'a' suffix, producing 120a.
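The gating change described above can be sketched as follows. This is an illustrative, simplified version of the suffix-selection logic, not FlashInfer's actual code: the real function is _normalize_cuda_arch in flashinfer/compilation_context.py and checks the toolkit via is_cuda_version_at_least; the cuda_version tuple parameter here is a stand-in for that check.

```python
# Illustrative sketch (not FlashInfer's actual code) of the suffix fix.
# The real logic lives in _normalize_cuda_arch in
# flashinfer/compilation_context.py; cuda_version stands in for
# is_cuda_version_at_least().

def normalize_cuda_arch(major: int, minor: int,
                        cuda_version: tuple) -> str:
    """Map an SM version to the compute-arch flag used for compilation."""
    if major == 12:
        # Spark (SM 12.1) and RTX Pro 6000 (SM 12.0) both compile to the
        # family-specific "120f" image. CUDA 12.9 already supports the
        # "f" suffix, so the pre-fix gate of ">= 13.0" wrongly sent
        # CUDA 12.9 builds down the "120a" path.
        if cuda_version >= (12, 9):
            return f"{major}0f"  # e.g. "120f"
        # A "120a" image fails at runtime with "no kernel image is
        # available for execution on the device", so fail fast instead
        # of silently producing an unusable build.
        raise RuntimeError("SM 12.x requires CUDA >= 12.9")
    # Other architectures keep the default arch-specific 'a' suffix.
    return f"{major}{minor}a"  # e.g. "90a" for SM 9.0
```

Under these assumptions, normalize_cuda_arch(12, 1, (12, 9)) yields "120f", whereas the pre-fix ">= 13.0" gate would have fallen through to "120a" for the same inputs.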

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Bug Fixes
    • Strengthened CUDA validation for SM 12.x GPUs: now requires CUDA 12.9 or newer and emits a clear error if unmet, replacing the previous silent fallback behavior.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request resolves a critical compilation issue affecting Spark GPUs (SM 12.1) when used with CUDA 12.9. The change ensures that the correct architecture flag ('120f') is applied during compilation, preventing runtime failures caused by incompatible kernel images. This update improves the robustness of the compilation process for specific GPU architectures and CUDA versions.

Highlights

  • Bug Fix: Spark GPU Architecture Flag: Addressed a bug where Spark (SM 12.1) with CUDA 12.9 was incorrectly compiling to the '120a' architecture flag, leading to runtime errors due to missing kernel images.
  • CUDA Version Logic Update: Modified the _normalize_cuda_arch function to correctly assign the '120f' architecture flag for SM 12.x devices when the CUDA version is 12.9 or higher, instead of requiring CUDA 13.0+.
  • Improved Error Handling: Introduced a RuntimeError for SM 12.x if the CUDA version is below 12.9, ensuring that unsupported configurations are explicitly flagged.



@coderabbitai
Contributor

coderabbitai bot commented Mar 20, 2026

📝 Walkthrough

Walkthrough

Adjusted SM 12.x CUDA architecture normalization to require CUDA >= 12.9 by checking version directly and raising a RuntimeError when unmet; removed previous exception-based fallback path and simplified import/usage of is_cuda_version_at_least.

Changes

CUDA SM 12.x Architecture Normalization — flashinfer/compilation_context.py
Replaced the fallback import/try-except path for SM 12.x with a direct is_cuda_version_at_least("12.9") check; now raises RuntimeError("SM 12.x requires CUDA >= 12.9") when the requirement is not satisfied, and removed the prior '0a' suffix fallback.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested reviewers

  • bkryu
  • yzh119
  • cyx-6
  • yongwww
  • aleozlx
  • nv-yunzheq

Poem

🐰 I hopped through code with nimble feet,
SM 12.x now asks for twelve point nine, neat.
No fallbacks hiding in try/except light,
Just a clear check to keep builds right.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Title check — Passed: The title clearly and concisely describes the main fix: addressing the incorrect "sm120a" architecture flag for Spark on CUDA 12.9.
  • Docstring Coverage — Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Description check — Passed: The PR description includes a problem statement, root cause analysis, and solution context, but lacks specific implementation details and test coverage information.



Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses a bug related to CUDA architecture flags for Spark on CUDA 12.9 by lowering the required version from 13.0. However, the change introduces a critical circular import by moving an import statement to the top level of flashinfer/compilation_context.py. This will cause the application to fail on startup with an ImportError. I've added comments with suggestions to resolve this by reverting to a local import within the function where it's used, which is a common pattern for breaking such dependency cycles.
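The cycle Gemini describes, and the function-local import pattern it recommends, can be demonstrated with a runnable toy example. The module names cycle_a and cycle_b are invented for the demo; in the PR the cycle involves flashinfer/compilation_context.py and the module providing is_cuda_version_at_least.

```python
# Toy demo of breaking a circular import with a function-local import.
# Two mutually importing modules are written to a temp directory; the
# deferred import inside helper() is what makes the cycle harmless.
import sys
import tempfile
from pathlib import Path

pkg_dir = Path(tempfile.mkdtemp())

(pkg_dir / "cycle_a.py").write_text(
    "import cycle_b  # top-level import: a -> b\n"
    "\n"
    "def value():\n"
    "    return cycle_b.helper() + 1\n"
)

(pkg_dir / "cycle_b.py").write_text(
    "def helper():\n"
    "    # Function-local import: by the time helper() runs, cycle_a\n"
    "    # is fully initialized, so the a <-> b cycle is harmless. A\n"
    "    # top-level `from cycle_a import value` here would instead\n"
    "    # raise ImportError at startup, because cycle_a is only\n"
    "    # partially initialized while still executing its own\n"
    "    # `import cycle_b` line.\n"
    "    import cycle_a\n"
    "    return 41 if hasattr(cycle_a, 'value') else 0\n"
)

sys.path.insert(0, str(pkg_dir))
import cycle_a

print(cycle_a.value())  # prints 42
```

This is why the review suggests reverting to a local import inside the function rather than keeping it at module top level.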

Comment thread flashinfer/compilation_context.py Outdated
Comment thread flashinfer/compilation_context.py
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
flashinfer/compilation_context.py (1)

48-51: Correct fix for the CUDA 12.9/Spark compilation issue.

The threshold change from "13.0" to "12.9" correctly ensures Spark (SM 12.1) and SM 12.0 get the "0f" suffix on CUDA 12.9. The fail-fast RuntimeError is better than producing non-functional "120a" binaries that fail at runtime with "no kernel image is available".

Consider making the error message slightly more actionable:

💡 Optional: More descriptive error message
             if is_cuda_version_at_least("12.9"):
                 return (major, "0f")
             else:
-                raise RuntimeError("SM 12.x requires CUDA >= 12.9")
+                raise RuntimeError(
+                    "SM 12.x (Spark/Thor) requires CUDA >= 12.9. "
+                    "Please upgrade your CUDA toolkit."
+                )


📥 Commits

Reviewing files that changed from the base of the PR and between 7cb016d and fb30b7d.

📒 Files selected for processing (1)
  • flashinfer/compilation_context.py

@kahyunnam
Collaborator Author

/bot run

@kahyunnam kahyunnam self-assigned this Mar 20, 2026
@flashinfer-bot
Collaborator

GitLab MR !442 has been created, and the CI pipeline #46625067 is currently running. I'll report back once the pipeline job completes.

@kahyunnam kahyunnam added the v0.6.7 release blocker label Mar 20, 2026
@flashinfer-bot
Collaborator

[FAILED] Pipeline #46625067: 14/20 passed

@aleozlx
Collaborator

aleozlx commented Mar 20, 2026

@kahyunnam if you'd like to correct the PR desc about Thor

@aleozlx aleozlx enabled auto-merge (squash) March 20, 2026 23:18
@aleozlx aleozlx merged commit e2f821b into flashinfer-ai:main Mar 21, 2026
32 of 54 checks passed

Labels

v0.6.7 release blocker

3 participants