Fast rotary embedding #10527
AlienKevin wants to merge 22 commits into sgl-project:main from AlienKevin:rotary_emb_kernel_mick
Conversation
… sgl-kernel (CMakeLists, common_extension.cc, sgl_kernel_ops.h, utils.h); keep rotary embedding kernel and new memory/kvcache sources; unify macros
…on VisionAttention
Summary of Changes
Hello @AlienKevin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical performance bottleneck in SGLang's VisionAttention layer, particularly for Vision-Language Models like Qwen 2.5 VL. The previous reliance on a slow PyTorch-native rotary embedding implementation and the inability to handle non-standard head sizes led to performance degradation and forced fallbacks to external kernels. The core of this change is the integration of a new, highly optimized CUDA rotary embedding kernel that supports flexible head sizes. This enhancement not only resolves the head size compatibility issue but also delivers a substantial throughput improvement, making SGLang more efficient for VLM workloads without compromising accuracy.
Highlights
- New Rotary Embedding Kernel: Introduced a new, optimized CUDA kernel for rotary embedding, specifically designed to address performance bottlenecks in Vision-Language Models (VLMs).
- Flexible Head Size Support: The new kernel supports flexible head sizes, such as 80, which were previously unsupported by SGLang's native kernels. This eliminates the need to fall back to vLLM's rotary embedding kernel for such configurations.
- Significant Performance Improvement: Achieved 12% higher throughput than vLLM on the MMMU benchmark, with SGLang's total throughput rising from 10,242 tokens/s (before) to 12,450 tokens/s (after).
- Accuracy Preservation: Verified that the new rotary embedding kernel preserves model accuracy on Qwen2.5-VL-7B-Instruct and Qwen2.5-VL-3B-Instruct models.
Code Review
This pull request introduces a new CUDA kernel for rotary embeddings to improve performance for Vision-Language Models, especially those with non-standard head sizes. The changes are well-supported by performance benchmarks and accuracy checks. My review focuses on ensuring the changes are robust and maintainable. I've found a critical regression that could affect text models, an issue with the new CUDA operator's schema definition, and a suggestion to improve the configurability of the vision attention layer. Overall, this is a great performance enhancement, and with these fixes, it will be a solid contribution.
```python
cos, sin = positions
assert cos.dtype == torch.float and cos.is_contiguous()
assert sin.dtype == torch.float and sin.is_contiguous()
orig_q_dtype = query.dtype
orig_k_dtype = key.dtype
query, key = query.float(), key.float()

self.sglang_rotary_embedding(
    cos,
    sin,
    query,
    key,
    self.head_size,
    self.cos_sin_cache,
    self.is_neox_style,
)

query = query.to(dtype=orig_q_dtype)
key = key.to(dtype=orig_k_dtype)
```
The `else` block in `forward_cuda` now assumes that the `positions` argument is a tuple of `(cos, sin)`, which is true for the new vision model use case. However, this breaks existing functionality for text-based models that have a `head_size` not in the optimized list `[64, 128, 256, 512]`. For those models, `positions` is a tensor of indices, and `cos, sin = positions` will raise an exception. This is a regression that needs to be fixed.
The logic should handle both cases: when `positions` is a tensor of indices and when it is a pre-computed `(cos, sin)` tuple.
```python
if isinstance(positions, torch.Tensor):
    # Handle tensor of position indices for text models
    if offsets is not None:
        positions = positions + offsets
    positions = positions.flatten()
    cos_sin = self.cos_sin_cache.index_select(0, positions)
    cos, sin = cos_sin.chunk(2, dim=-1)
else:
    # Handle pre-computed (cos, sin) tuple for vision models
    cos, sin = positions
# Ensure tensors are float and contiguous for the kernel
if cos.dtype != torch.float or not cos.is_contiguous():
    cos = cos.float().contiguous()
if sin.dtype != torch.float or not sin.is_contiguous():
    sin = sin.float().contiguous()
orig_q_dtype = query.dtype
orig_k_dtype = key.dtype
query, key = query.float(), key.float()
self.sglang_rotary_embedding(
    cos,
    sin,
    query,
    key,
    self.head_size,
    self.is_neox_style,
)
query = query.to(dtype=orig_q_dtype)
key = key.to(dtype=orig_k_dtype)
```

```python
self.rotary_emb = RotaryEmbedding(
    head_size=self.head_size,
    rotary_dim=self.head_size,
    max_position_embeddings=2048,
    base=10000,
    is_neox_style=False,
    dtype=torch.get_default_dtype(),
)
```
The RotaryEmbedding is initialized with hardcoded values for max_position_embeddings (2048) and base (10000). This reduces the reusability of the VisionAttention class for other models that may have different rotary embedding configurations. Consider passing these values as arguments to the __init__ method to make the component more flexible.
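A minimal sketch of that suggestion (the `rope_max_position_embeddings` and `rope_base` keyword names are illustrative, and the `RotaryEmbedding` import path is assumed, not taken from this PR):

```python
import torch
from torch import nn

from sglang.srt.layers.rotary_embedding import RotaryEmbedding  # assumed path

class VisionAttention(nn.Module):
    def __init__(
        self,
        head_size: int,
        rope_max_position_embeddings: int = 2048,  # hypothetical kwarg
        rope_base: float = 10000.0,                # hypothetical kwarg
    ) -> None:
        super().__init__()
        self.head_size = head_size
        # Same construction as the diff above, but configurable per model.
        self.rotary_emb = RotaryEmbedding(
            head_size=head_size,
            rotary_dim=head_size,
            max_position_embeddings=rope_max_position_embeddings,
            base=rope_base,
            is_neox_style=False,
            dtype=torch.get_default_dtype(),
        )
```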
Looks like some rope tests failed.
```cpp
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
```
We'd better add a reference:

```cpp
// Adapted from https://github.com/vllm-project/vllm/blob/main/csrc/pos_encoding_kernels.cu
```
@AlienKevin Do you have time to look into the failed Rope tests?
```python
    return q_embed, k_embed


class RotaryEmbedding(torch.nn.Module):
```
I don't think this file is for `mm_rotary_embedding`.
May I know why this PR was closed?
This PR updates @mickqian's #6530 with benchmarks on MMMU.
VisionAttention, used by most VLMs including Qwen 2.5 VL, relied on the slow `apply_rotary_pos_emb_native` implementation written in PyTorch. After a series of optimizations (#8484, #9661), rotary embedding became one of the largest GPU-side bottlenecks for Qwen 2.5 VL.
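For context, here is a minimal sketch of what such a PyTorch-native rotary application looks like (an illustration, not the actual `apply_rotary_pos_emb_native` source; neox-style rotation is shown for brevity, while the vision path in the diff uses the interleaved non-neox layout):

```python
import torch

def rotary_native_sketch(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # Assumed shapes: x is [tokens, heads, head_dim]; cos/sin broadcast
    # against the half head dim, e.g. [tokens, 1, head_dim // 2].
    # Each call launches several separate elementwise kernels, which is
    # why this path became a GPU-side bottleneck at high throughput.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((x1 * cos - x2 * sin, x2 * cos + x1 * sin), dim=-1)
```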
The main challenge in adapting existing SGL rotary kernels to VLMs was handling non-standard head sizes. Vision encoders often use head sizes like 80, which weren’t supported by the previous SGL rotary kernel. In those cases, the code had to fall back to vLLM’s kernel.
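Roughly, the old constraint amounted to a dispatch like the following (a sketch with illustrative names; the supported-size list comes from the review comment above):

```python
# Head sizes the previous SGL rotary kernel specialized for.
SGL_SUPPORTED_HEAD_SIZES = (64, 128, 256, 512)

def pick_rotary_backend(head_size: int) -> str:
    # Vision encoders like Qwen 2.5 VL's use head_size=80, so they
    # always took the fallback branch before this PR.
    if head_size in SGL_SUPPORTED_HEAD_SIZES:
        return "sgl-kernel"
    return "vllm-fallback"
```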
With @mickqian's improved rotary embedding kernel—now supporting flexible head sizes—SGL no longer needs to fall back, and performance has jumped significantly. On MMMU, SGL now outperforms vLLM by 12% in throughput 🥳
Run 1
Run 2
Run 3
Accuracy verified on 7B and 3B
Accuracy preserved on Qwen2.5-VL-7B-Instruct (before: 0.519, after: 0.516)
Before PR (python3 -m sglang.launch_server --model-path Qwen/Qwen2.5-VL-7B-Instruct --mm-attention-backend sdpa):
Accuracy preserved on Qwen2.5-VL-3B-Instruct (before: 0.454, after: 0.456)
Before PR (python3 -m sglang.launch_server --model-path Qwen/Qwen2.5-VL-3B-Instruct --mm-attention-backend sdpa):
Tested using #9812
Server cmd:
Client cmd: