
[Minor] Reduce num blocks of qknorm in small batch size #2264

Merged
yzh119 merged 1 commit into flashinfer-ai:main from DarkSharpness:qknorm_decode
Dec 24, 2025

Conversation

@DarkSharpness
Contributor

@DarkSharpness DarkSharpness commented Dec 24, 2025

📌 Description

In the QKNorm kernel with a small batch size, we can reduce the number of blocks launched. This cuts block-launch overhead, especially in the decode stage.

An example result on a B200 with (batch_size, num_heads, head_dim) = (128, 8, 128), a configuration common in the Qwen3 decode stage:

Before this PR: 2.448us
After this PR: 1.584us
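
A minimal sketch of the launch-side idea (illustrative only; `ceil_div`, `num_warps`, `num_blocks_per_sm`, and `num_sms` follow the wording of this PR and its review, not the exact code in `include/flashinfer/norm.cuh`):

```cuda
#include <algorithm>
#include <cstdint>

// Illustrative helper (names mirror the PR discussion, not the exact norm.cuh code).
inline uint32_t ceil_div(uint32_t a, uint32_t b) { return (a + b - 1) / b; }

// Cap the grid at the number of blocks the workload actually needs instead of
// always launching num_blocks_per_sm * num_sms blocks.
inline uint32_t capped_grid_size(uint32_t batch_size, uint32_t num_heads,
                                 uint32_t num_warps, uint32_t num_blocks_per_sm,
                                 uint32_t num_sms) {
  // One (batch, head) pair is one job; each block handles num_warps jobs per
  // grid-stride iteration, so this many blocks already cover the whole workload.
  const uint32_t needed_blocks = ceil_div(batch_size * num_heads, num_warps);
  const uint32_t max_blocks = num_blocks_per_sm * num_sms;  // device-wide limit
  return std::min(needed_blocks, max_blocks);
}
```

For example, assuming num_warps = 8, the (128, 8, 128) decode case above needs only ceil_div(128 * 8, 8) = 128 blocks, well below the device maximum on a B200, which is where the launch-overhead saving comes from.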

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Refactor
    • Optimized GPU kernel grid size calculation to reduce unnecessary block launches and improve overall performance efficiency.


@coderabbitai
Contributor

coderabbitai bot commented Dec 24, 2025

📝 Walkthrough

This change optimizes kernel grid launching in a normalization operation by capping the block count at the computed minimum needed (based on batch size and heads) rather than always using the maximum available blocks, reducing over-subscription while maintaining per-block computation.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| Grid launch optimization<br>`include/flashinfer/norm.cuh` | Changed the kernel grid launch to cap blocks at the computed minimum needed (`ceil_div(batch_size * num_heads, num_warps)`) instead of always using the maximum available (`num_blocks_per_sm * num_sms`), reducing unnecessary block scheduling. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested reviewers

  • yzh119

Poem

🐰 Hop, skip, and launch with care—
No more blocks floating in thin air!
We cap the grid to what we need,
Efficiency's our newfound creed.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and specifically describes the main change: reducing the number of CUDA blocks in the QKNorm kernel for small batch sizes. |
| Description check | ✅ Passed | The description covers the key required sections with sufficient detail: it explains what the PR does, provides performance metrics, and includes pre-commit checklist items. The Related Issues section is appropriately left blank. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check. |

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 705205a and 359142b.

📒 Files selected for processing (1)
  • include/flashinfer/norm.cuh
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-12T03:35:17.583Z
Learnt from: raayandhar
Repo: flashinfer-ai/flashinfer PR: 2070
File: include/flashinfer/gemm/bf16_gemm_cutlass_template.h:145-160
Timestamp: 2025-11-12T03:35:17.583Z
Learning: In flashinfer GEMM implementations (e.g., include/flashinfer/gemm/bf16_gemm_cutlass_template.h, fp8_gemm_cutlass_template.h), it is acceptable to catch and silently ignore std::runtime_error exceptions in getWorkspaceSizeImpl when probing multiple GEMM configurations, as some configurations may legitimately fail due to SMEM constraints. This pattern should include a comment like "// Swallow errors when SMEM exceeds maximum allowed" to document the rationale.

Applied to files:

  • include/flashinfer/norm.cuh
🔇 Additional comments (1)
include/flashinfer/norm.cuh (1)

372-373: Optimization correctly caps blocks to workload requirements.

The calculation properly computes the minimum blocks needed to cover batch_size * num_heads jobs across warps, and std::min prevents launching idle blocks when the workload is smaller than device capacity. The kernel's grid-stride loop pattern (line 286) gracefully handles both capped and uncapped cases. The edge case where batch_size or num_heads is zero results in launching zero blocks, which is handled safely by the CUDA runtime with no threads executing.
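
For readers less familiar with the pattern, a hypothetical grid-stride kernel (not the actual norm.cuh kernel) illustrating why a capped grid still covers every (batch, head) job:

```cuda
#include <cstdint>

// Hypothetical sketch, NOT the flashinfer QKNorm code: each warp handles one
// (batch, head) row, and blocks stride over the job space, so a grid capped
// below the device maximum still visits every row.
__global__ void qknorm_like_kernel(float* x, uint32_t num_rows, uint32_t head_dim) {
  const uint32_t warps_per_block = blockDim.y;
  const uint32_t warp_id = blockIdx.x * warps_per_block + threadIdx.y;
  const uint32_t stride = gridDim.x * warps_per_block;
  // Grid-stride loop over rows: with ceil_div(num_rows, warps_per_block) blocks
  // each warp executes the body exactly once; with fewer blocks it loops again;
  // with num_rows == 0 no block does any work.
  for (uint32_t row = warp_id; row < num_rows; row += stride) {
    for (uint32_t i = threadIdx.x; i < head_dim; i += 32) {  // 32 lanes per warp
      x[row * head_dim + i] *= 1.0f;  // placeholder for the actual RMSNorm math
    }
  }
}
```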



@gemini-code-assist
Contributor

Summary of Changes

Hello @DarkSharpness, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a performance optimization to the QKNorm kernel by intelligently limiting the number of CUDA blocks launched. By calculating the minimum required blocks based on the workload and capping the launch configuration, it significantly reduces overhead, especially for smaller batch sizes frequently encountered during the decode phase of inference. This change aims to improve the overall efficiency and speed of operations within FlashInfer.

Highlights

  • Performance Optimization: The number of launched CUDA blocks for the QKNorm kernel has been reduced, specifically targeting scenarios with small batch sizes.
  • Targeted Efficiency: This optimization is particularly beneficial for the decode stage in models like Qwen3, where small batch sizes are common, leading to reduced block launching overhead.
  • Measured Speedup: An example on a B200 GPU showed a performance improvement from 2.448us to 1.584us for a (128, 8, 128) configuration.


@DarkSharpness DarkSharpness changed the title from "[Minor] Redeuce num blocks of qknorm in small batch size" to "[Minor] Reduce num blocks of qknorm in small batch size" on Dec 24, 2025
@yzh119
Collaborator

yzh119 commented Dec 24, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !215 has been created, and the CI pipeline #40736015 is currently running. I'll report back once the pipeline job completes.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable optimization by reducing the number of launched blocks for the QKNorm kernel when dealing with small batch sizes, which demonstrably improves performance in decode stages. My review identifies a potential integer overflow issue in the calculation of needed_blocks. This could lead to incorrect behavior or performance degradation with large batch_size or num_heads. I have provided a code suggestion to use 64-bit integers for this calculation to enhance robustness. Additionally, please note the typo in the pull request title ('Redeuce' should be 'Reduce').
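
A hedged sketch of the kind of 64-bit guard the bot is suggesting (`needed_blocks_64` and its signature are illustrative, not the suggested diff itself):

```cuda
#include <algorithm>
#include <cstdint>

// Sketch of an overflow-safe variant: do the product and the ceil-div in
// 64 bits, then clamp back down to the 32-bit launch-size type.
inline uint32_t needed_blocks_64(uint32_t batch_size, uint32_t num_heads,
                                 uint32_t num_warps, uint32_t max_blocks) {
  const uint64_t total_jobs = static_cast<uint64_t>(batch_size) * num_heads;
  const uint64_t needed = (total_jobs + num_warps - 1) / num_warps;
  // After clamping to the device limit the value always fits in 32 bits again.
  return static_cast<uint32_t>(std::min<uint64_t>(needed, max_blocks));
}
```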

@flashinfer-bot
Collaborator

[SUCCESS] Pipeline #40736015: 12/20 passed

@yzh119 yzh119 merged commit fe093d6 into flashinfer-ai:main Dec 24, 2025
3 checks passed
