refactor: reduce hopper's gdn prefill compilation time and fix docstring. #2422
yzh119 merged 17 commits into flashinfer-ai:main from
Conversation
…lation

Split the 32 template instantiations (2 dtypes × 16 boolean combinations) into separate .cu files to enable parallel compilation with ninja. This significantly reduces build time on multi-core machines.

Changes:
- Add Jinja template for generating kernel instantiation files
- Add extern template declarations to prevent implicit instantiation
- Update JIT module to generate 32 separate kernel files
- Keep original source files for launcher (relative includes work)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
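The mechanics behind the commit are standard C++ separate compilation. Below is a toy sketch of the pattern, not flashinfer's actual sources: the identifiers (`launch_kernel`, `kernel.cuh`, the parameter lists) are all illustrative. The launcher translation unit includes extern declarations so it never instantiates the kernels itself, and each generated .cu file owns exactly one instantiation, which ninja can build in parallel.

```cpp
// --- kernel.cuh (toy stand-in for the real kernel header) ----------------
// The template definition is visible to every translation unit.
template <typename T, bool kCausal>
void launch_kernel(T* data, int n) {
    // ... configure and launch the CUDA kernel for this variant ...
}

// --- kernel_extern.inc: included by the launcher TU ----------------------
// "extern template" tells the compiler NOT to instantiate these variants
// here; some other TU owns each instantiation.
extern template void launch_kernel<float, true>(float*, int);
extern template void launch_kernel<float, false>(float*, int);

// --- kernel_inst_float_causal.cu (one generated file per variant) --------
// #include "kernel.cuh"
// Explicit instantiation: the single TU that compiles this combination.
template void launch_kernel<float, true>(float*, int);
```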
Summary of Changes

Hello @yzh119, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the development workflow by significantly reducing the compilation time of GDN prefill kernels on the Hopper architecture through a new split-compilation approach. It also corrects a documentation inaccuracy in the GDN prefill kernel's docstring by fixing the specified state layout.
📝 Walkthrough

This PR introduces template-driven separate compilation for GDN prefill kernels with SM90 support. It adds a Jinja template for kernel instantiation, generates 32 kernel variants via a new JIT module function, introduces extern template declarations to prevent implicit instantiation, and standardizes include paths across the codebase to use absolute project-scoped paths. A conditional C++ standard flag mechanism was also added to the core JIT spec generator.
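To make the generation step concrete, here is a minimal sketch of what a Jinja-driven instantiation generator can look like. Everything in it — the template text, file names, dtype strings, and flag count — is a placeholder for illustration, not flashinfer's actual code:

```python
import itertools
import pathlib

import jinja2

# Hypothetical instantiation template: one explicit instantiation per file.
KERNEL_INST_TEMPLATE = jinja2.Template(
    '#include "prefill_kernel_delta_rule_sm90.cuh"\n'
    "template void launch_kernel<{{ dtype }}, {{ flags | join(', ') }}>(Params, cudaStream_t);\n"
)


def generate_instantiation_files(out_dir: pathlib.Path) -> list[pathlib.Path]:
    """Write 32 .cu files (2 dtypes x 16 boolean combos) for ninja to build in parallel."""
    out_dir.mkdir(parents=True, exist_ok=True)
    sources = []
    for i, (dtype, flags) in enumerate(
        itertools.product(
            ["cutlass::half_t", "cutlass::bfloat16_t"],
            itertools.product(["true", "false"], repeat=4),  # 2**4 = 16 combos
        )
    ):
        path = out_dir / f"gdn_prefill_inst_{i}.cu"
        path.write_text(KERNEL_INST_TEMPLATE.render(dtype=dtype, flags=list(flags)))
        sources.append(path)
    return sources
```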
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Code Review
This pull request effectively reduces the compilation time for the GDN prefill kernel by splitting the template instantiations into separate compilation units, which is a great improvement for developer productivity. The use of Jinja2 to generate the instantiation files is a clean solution. The docstring fix for the state layout is also a welcome correction. I've added one suggestion to improve the maintainability of the new extern template declaration file by using macros to reduce code duplication. Overall, this is a solid contribution.
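As a sketch of the macro-based cleanup Gemini is suggesting for the extern declaration file — the function name, template parameters, and argument list below are placeholders, and the primary template is assumed to be declared by the kernel header — each variant collapses to one table-like line:

```cpp
// Assumes the kernel header has already declared the primary template:
// template <typename T, bool, bool, bool, bool> void launch_kernel(T*, int);

// One macro expands to the extern declaration for a single variant, so the
// 32 combinations become a compact table instead of 32 hand-written lines.
#define DECLARE_EXTERN_PREFILL(DTYPE, B0, B1, B2, B3)                        \
    extern template void launch_kernel<DTYPE, B0, B1, B2, B3>(DTYPE*, int);

DECLARE_EXTERN_PREFILL(float, true, true, true, true)
DECLARE_EXTERN_PREFILL(float, true, true, true, false)
// ... the remaining combinations ...

#undef DECLARE_EXTERN_PREFILL
```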
/bot run
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@flashinfer/jit/gdn.py`:
- Around lines 81-86: Replace the hard-coded sm90a_nvcc_flags usage with a
CompilationContext-derived nvcc flags list: create a CompilationContext(), call
CompilationContext.get_nvcc_flags_list(supported_major_versions=[9]) to get
nvcc_flags, append ["-DFLAT_SM90A_ENABLED", "-std=c++20"], and pass that as
extra_cuda_cflags to gen_jit_spec (keep
extra_include_paths=[jit_env.FLASHINFER_CSRC_DIR] unchanged); update references
to sm90a_nvcc_flags in this function to use the new nvcc_flags variable and
ensure CompilationContext is imported or available.
🧹 Nitpick comments (2)
flashinfer/jit/gdn.py (2)
33-38: Optional: Replace Unicode multiplication sign with ASCII `x`.

The linter flags `×` (U+00D7 MULTIPLICATION SIGN) as ambiguous. While readable, using ASCII `x` improves portability across different editors and terminal encodings.

Suggested fix:

- """Generate JIT module for GDN prefill kernel with separate compilation.
-
- This generates 32 separate kernel instantiation files (2 dtypes × 16 boolean combinations)
+ """Generate JIT module for GDN prefill kernel with separate compilation.
+
+ This generates 32 separate kernel instantiation files (2 dtypes x 16 boolean combinations)
46-47: Optional: Specify explicit encoding when opening files.

Adding `encoding="utf-8"` ensures consistent behavior across platforms with different default encodings.

Suggested fix:

- with open(jit_env.FLASHINFER_CSRC_DIR / "gdn_prefill_sm90_kernel_inst.jinja") as f:
+ with open(jit_env.FLASHINFER_CSRC_DIR / "gdn_prefill_sm90_kernel_inst.jinja", encoding="utf-8") as f:
      kernel_inst_templ = jinja2.Template(f.read())
 return gen_jit_spec(
-    name="gdn_prefill_launcher",
-    sources=[
-        jit_env.FLASHINFER_CSRC_DIR / "gdn_prefill_launcher.cu",
-        jit_env.FLASHINFER_CSRC_DIR
-        / "flat"
-        / "prefill"
-        / "prefill_kernel_delta_rule_sm90.cu",
-    ],
+    uri,
+    source_paths,
     extra_cuda_cflags=sm90a_nvcc_flags + ["-DFLAT_SM90A_ENABLED", "-std=c++20"],
     extra_include_paths=[jit_env.FLASHINFER_CSRC_DIR],
 )
🧩 Analysis chain
🏁 Script executed:
# Check gen_jit_spec function signature and supported_major_versions usage
rg -n "supported_major_versions" --type py -B 2 -A 2Repository: flashinfer-ai/flashinfer
Length of output: 7679
🏁 Script executed:
# Find gen_jit_spec definition
rg -n "def gen_jit_spec" --type py -A 10Repository: flashinfer-ai/flashinfer
Length of output: 829
🏁 Script executed:
# Check for other JIT module generators to see usage patterns
fd "\.py$" flashinfer/jit/ | head -10 | xargs grep -l "gen_jit_spec" | xargs -I {} sh -c 'echo "=== {} ===" && grep -A 5 "gen_jit_spec(" {}'Repository: flashinfer-ai/flashinfer
Length of output: 4164
🏁 Script executed:
cat -n flashinfer/jit/gdn.pyRepository: flashinfer-ai/flashinfer
Length of output: 3736
🏁 Script executed:
grep -n "sm90a_nvcc_flags" flashinfer/jit/core.py | head -20Repository: flashinfer-ai/flashinfer
Length of output: 92
🏁 Script executed:
# Get context around the definition
rg -n "sm90a_nvcc_flags\s*=" flashinfer/jit/core.py -B 2 -A 2Repository: flashinfer-ai/flashinfer
Length of output: 238
🏁 Script executed:
sed -n '120,135p' flashinfer/jit/core.pyRepository: flashinfer-ai/flashinfer
Length of output: 810
🏁 Script executed:
# Check how other SM90-specific generators handle the architecture restriction
rg -n "SM90\|sm_90" flashinfer/jit/ --type py -B 3 -A 3 | grep -E "(supported_major_versions|sm90)" | head -30Repository: flashinfer-ai/flashinfer
Length of output: 50
🏁 Script executed:
head -30 flashinfer/jit/gdn.pyRepository: flashinfer-ai/flashinfer
Length of output: 839
🏁 Script executed:
# Check if CompilationContext is available and how it's used
rg -n "CompilationContext\|current_compilation_context" flashinfer/jit/ --type py | head -20Repository: flashinfer-ai/flashinfer
Length of output: 50
🏁 Script executed:
# Check what common_nvcc_flags contains
sed -n '1,130p' flashinfer/jit/core.py | grep -A 10 "common_nvcc_flags"Repository: flashinfer-ai/flashinfer
Length of output: 567
Refactor to use CompilationContext.get_nvcc_flags_list(supported_major_versions=[9]) pattern.
Per coding guidelines for JIT modules, supported_major_versions should be specified when creating nvcc flags. Other JIT modules (fused_moe.py, xqa.py, mamba, gemm, comm, attention) consistently use CompilationContext().get_nvcc_flags_list(supported_major_versions=[...]) before passing flags to gen_jit_spec(). This module should follow the same pattern instead of using the pre-defined sm90a_nvcc_flags constant:
compilation_context = CompilationContext()
nvcc_flags = compilation_context.get_nvcc_flags_list(supported_major_versions=[9])
nvcc_flags += ["-DFLAT_SM90A_ENABLED", "-std=c++20"]
return gen_jit_spec(
    uri,
    source_paths,
    extra_cuda_cflags=nvcc_flags,
    extra_include_paths=[jit_env.FLASHINFER_CSRC_DIR],
)

🧰 Tools
🪛 Ruff (0.14.14)
84-84: Consider iterable unpacking instead of concatenation
Replace with iterable unpacking
(RUF005)
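For reference, the RUF005 hint is mechanical: use iterable unpacking instead of `+` concatenation. A small before/after, using an example value for `sm90a_nvcc_flags` (the real value lives in flashinfer's JIT core):

```python
sm90a_nvcc_flags = ["-gencode=arch=compute_90a,code=sm_90a"]  # example value

# Flagged form: builds the list by concatenating two lists.
extra_cuda_cflags = sm90a_nvcc_flags + ["-DFLAT_SM90A_ENABLED", "-std=c++20"]

# Preferred form: iterable unpacking.
extra_cuda_cflags = [*sm90a_nvcc_flags, "-DFLAT_SM90A_ENABLED", "-std=c++20"]
```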
[FAILED] Pipeline #42635345: 11/20 passed
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@csrc/flat/prefill/prefill_kernel_delta_rule_sm90_extern.inc`:
- Around lines 1-64: This file uses cudaStream_t and fixed-width integer types in
the extern template declarations for launch_delta_rule_prefill_kernel_gbai but
doesn't include the headers that define them; add explicit includes for the CUDA
stream type and integer types (e.g., <cuda_runtime.h> or <cuda.h> for
cudaStream_t and <cstdint> for int32_t/int64_t) at the top of this .inc so the
extern template declarations are self-contained and no longer rely on
transitive includes.
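A sketch of the fix the comment asks for. Only the function name comes from the actual file; the template arguments and parameter list below are illustrative, and the primary template is assumed to be declared by the kernel header the including translation unit pulls in:

```cpp
// prefill_kernel_delta_rule_sm90_extern.inc -- make the file self-contained
// instead of relying on whatever the including TU happened to pull in.
#include <cstdint>         // int32_t / int64_t used in the parameter lists
#include <cuda_runtime.h>  // cudaStream_t

// Illustrative declaration; the real template arguments and parameters
// live in the kernel header.
extern template void launch_delta_rule_prefill_kernel_gbai<float, true, true>(
    float* q, float* k, float* v, int32_t num_tokens, cudaStream_t stream);
```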
In `@flashinfer/aot.py`:
- Around lines 446-447: The early return that unconditionally returns only
gen_gdn_prefill_sm90_module() must be gated behind an explicit flag so normal
AOT generation still runs; update the code around the temporary return in aot.py
to check a configuration or environment variable (e.g.,
FLASHINFER_ONLY_GDN_PREFILL or a passed-in option) and only return
[gen_gdn_prefill_sm90_module()] when that flag is truthy, otherwise fall through
to the existing full generation path that builds all AOT kernels; ensure the
flag defaults to false and document/rename any helper variable so callers can
opt into the temporary behavior without shipping a crippled build.
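A sketch of the gating this comment describes, with the env var name taken from the comment's own suggestion; the surrounding function name and import path are assumptions for illustration:

```python
import os

# Assumed import path for illustration; the generator lives in the JIT module.
from flashinfer.jit.gdn import gen_gdn_prefill_sm90_module


def gen_all_modules() -> list:
    """Full AOT generation, with an explicit opt-in escape hatch."""
    # Temporary debug path, defaulting to off: a normal build never ships
    # with only the GDN prefill kernel.
    if os.environ.get("FLASHINFER_ONLY_GDN_PREFILL", "0") == "1":
        return [gen_gdn_prefill_sm90_module()]

    modules = []
    # ... existing full generation path appends every AOT kernel here ...
    return modules
```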
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@flashinfer/jit/gdn.py`:
- Around line 32-48: The docstring/comment in this module uses the Unicode
multiplication sign "×" (e.g., "2 dtypes × 16 boolean combinations") which
triggers lint/confusable warnings; replace that character with the ASCII letter
"x" (so it reads "2 dtypes x 16 boolean combinations") to avoid Ruff warnings.
Update the text near the top of the file where uri = "gdn_prefill_sm90",
gen_directory is created and the comment about generating "32 separate instance
files" so the change is adjacent to the kernel instantiation template loading
(kernel_inst_templ) and the surrounding docstring/comment.
Force-pushed from 1322d60 to e5236d7
The PR should be ready now.
📌 Description
This PR implements these features:

1. accelerate hopper's gdn prefill compilation time by split compilation
2. fix the docstring of gdn prefill kernel: instead of [N, H, K, V], it expects [N, H, V, K]
🔍 Related Issues
#2276
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests

- [ ] Tests have been added or updated as needed.
- [ ] All tests are passing (`unittest`, etc.).

Reviewer Notes
cc @guangyunh-nv
Summary by CodeRabbit

Release Notes

New Features

- Enhanced JIT module generation for GDN prefill kernels with template-driven compilation and separate kernel instantiation.

Improvements

- JIT specification now intelligently handles C++ standard flags, applying defaults only when not already specified (see the sketch below).

Documentation

- Clarified final state memory layout description for GDN prefill operations.
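The C++ standard flags item refers to the conditional default mentioned in the walkthrough. A minimal sketch of that behavior follows; the helper name, signature, and default value are illustrative only, since the real logic lives inside flashinfer's JIT spec generator:

```python
def apply_default_cpp_standard(
    cuda_cflags: list[str], default: str = "-std=c++17"
) -> list[str]:
    """Append the default -std flag only when the caller has not set one."""
    if any(flag.startswith("-std=") for flag in cuda_cflags):
        return list(cuda_cflags)  # caller's choice (e.g. -std=c++20) wins
    return [*cuda_cflags, default]


# A GDN-style flag list passes through untouched:
assert apply_default_cpp_standard(["-O3", "-std=c++20"]) == ["-O3", "-std=c++20"]
# A list without any -std= entry picks up the default:
assert apply_default_cpp_standard(["-O3"]) == ["-O3", "-std=c++17"]
```

This lets a module such as gdn_prefill request `-std=c++20` without conflicting with the generator's default standard.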