Allow BatchDecodeWithPagedKVCacheWrapper for GQA ratio 16 and 32 #2895

bkryu wants to merge 2 commits into flashinfer-ai:main from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the FlashInfer library by extending support for Grouped Query Attention (GQA) to a wider range of group sizes. This change specifically addresses and prevents crashes in batch decode operations for models with high GQA ratios, ensuring greater compatibility and stability. Additionally, the associated tests have been updated to cover these new configurations, reinforcing the robustness of the implementation.
/bot run
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

Added compile-time dispatch branches for GQA group sizes 16 and 32 in a CUDA header; updated four decode kernel tests to parametrize `num_kv_heads` (adding 2) so the new ratios are exercised.
Code Review
This pull request expands the supported group sizes in flashinfer/utils.cuh to include 16, 32, and 64, in addition to the existing 8. It also enhances test coverage for batch decode kernels by adding num_kv_heads = 2 to the parameterized tests in test_batch_decode_kernels.py. There is no feedback to provide.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `include/flashinfer/utils.cuh` (around lines 153-161): the GROUP_SIZE=64 branch can produce illegal thread-block sizes for some HEAD_DIM values. Add a guard that rejects invalid GROUP_SIZE/HEAD_DIM combinations: a compile-time static_assert in the kernel template (where GROUP_SIZE and HEAD_DIM are template parameters) that computes threads_per_block from HEAD_DIM and GROUP_SIZE and asserts it is <= 1024, plus a runtime check in the kernel launcher that returns an error if the GROUP_SIZE chosen by the dispatch macro together with the provided HEAD_DIM would create threads_per_block > 1024. Keep the checks colocated with the branch in include/flashinfer/utils.cuh that sets GROUP_SIZE and with the kernel launch path.
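To make the suggestion concrete, here is a minimal sketch of such a guard. It assumes a hypothetical thread-block layout of (HEAD_DIM / vec_size) × GROUP_SIZE threads with vec_size = 8; the kernel, helper, and function names are illustrative and are not FlashInfer's actual internals:

```cuda
#include <cstdint>

// Hypothetical helper: threads a block would need for a given head dimension
// and GQA group size. The layout (vec_size = 8) is an assumption for this sketch.
template <uint32_t HEAD_DIM, uint32_t GROUP_SIZE>
constexpr uint32_t ThreadsPerBlock() {
  constexpr uint32_t vec_size = 8;
  return (HEAD_DIM / vec_size) * GROUP_SIZE;
}

template <uint32_t HEAD_DIM, uint32_t GROUP_SIZE>
__global__ void DecodeKernelSketch(float* out) {
  // Compile-time guard: reject GROUP_SIZE/HEAD_DIM combinations that exceed
  // CUDA's limit of 1024 threads per block.
  static_assert(ThreadsPerBlock<HEAD_DIM, GROUP_SIZE>() <= 1024,
                "GROUP_SIZE x HEAD_DIM yields an illegal thread-block size");
  out[blockIdx.x * blockDim.x + threadIdx.x] = 0.0f;  // placeholder body
}

// Runtime guard for the launch path, where head_dim and group_size are still
// runtime values (i.e., before a DISPATCH_* macro picks compile-time constants).
inline bool IsLaunchableGqaConfig(uint32_t head_dim, uint32_t group_size) {
  const uint32_t vec_size = 8;  // must match the kernel's assumed layout
  return (head_dim / vec_size) * group_size <= 1024;
}
```

Under these assumed numbers, HEAD_DIM = 128 with GROUP_SIZE = 64 requests 16 × 64 = 1024 threads (exactly the limit), while a larger head dimension under the same layout would exceed it; the guard turns that into a clear compile-time or launch-time error rather than an illegal kernel launch.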
/bot stop

/bot run

The GitLab CI pipeline #47010006 has been cancelled.

[FAILED] Pipeline #47010961: 11/20 passed
yzh119 left a comment
It's recommended to enable tensor cores (which do not rely on this macro) for large GQA shapes; when the group size is 16/32, the CUDA-cores implementation is very slow, see #2684 (review).
Right. I left a comment about this in #2849 that on SM121 Waiting for a response from @TrevorS, who filed the issue.
📌 Description
- Added GQA group sizes 16 and 32 to `DISPATCH_GQA_GROUP_SIZE` in `include/flashinfer/utils.cuh`, fixing the `BatchDecodeWithPagedKVCacheWrapper` crash for models with high GQA ratios (e.g., Nemotron: 32 QO / 2 KV = ratio 16).
- Added `num_kv_heads=2` to the batch decode test parametrization to cover GQA ratio 16.
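For context, here is a minimal sketch of what a group-size dispatch macro with the new 16 and 32 branches can look like. It is an illustrative reconstruction under assumed case sets and error handling, not the actual contents of `include/flashinfer/utils.cuh` (hence the `_SKETCH` suffix):

```cuda
#include <cstdint>
#include <sstream>
#include <stdexcept>

// Illustrative reconstruction of a compile-time group-size dispatch macro with
// 16/32 branches added. Case set and error handling are assumptions for this sketch.
#define DISPATCH_GQA_GROUP_SIZE_SKETCH(group_size, GROUP_SIZE, ...) \
  switch (group_size) {                                             \
    case 1: {                                                       \
      constexpr uint32_t GROUP_SIZE = 1;                            \
      __VA_ARGS__                                                   \
      break;                                                        \
    }                                                               \
    case 8: {                                                       \
      constexpr uint32_t GROUP_SIZE = 8;                            \
      __VA_ARGS__                                                   \
      break;                                                        \
    }                                                               \
    case 16: { /* new: e.g. Nemotron, 32 QO heads / 2 KV heads */   \
      constexpr uint32_t GROUP_SIZE = 16;                           \
      __VA_ARGS__                                                   \
      break;                                                        \
    }                                                               \
    case 32: { /* new: GQA ratio 32 */                              \
      constexpr uint32_t GROUP_SIZE = 32;                           \
      __VA_ARGS__                                                   \
      break;                                                        \
    }                                                               \
    default: {                                                      \
      std::ostringstream err;                                       \
      err << "Unsupported GQA group size: " << group_size;          \
      throw std::invalid_argument(err.str());                       \
    }                                                               \
  }
```

Before this change, a runtime group size of 16 or 32 would fall through to the default branch and error out; with the added cases the dispatch reaches a kernel instantiation, which is the crash path for high-GQA-ratio models reported in #2849.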
🔍 Related Issues

#2849
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed pre-commit by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- All tests are passing (`unittest`, etc.).

Reviewer Notes
Summary by CodeRabbit

New Features
- Expanded supported GQA group sizes (now including 16 and 32) for batch decode with paged KV cache.

Tests
- Added `num_kv_heads=2` to the batch decode test parametrization to cover GQA ratio 16.