
[VLM] Support cos sin cache for Qwen3-VL & GLM-4.1V #15205

Merged
yuan-luo merged 1 commit into sgl-project:main from antgroup:cos_sin_cache
Dec 18, 2025

[VLM] Support cos sin cache for Qwen3-VL & GLM-4.1V#15205
yuan-luo merged 1 commit intosgl-project:mainfrom
antgroup:cos_sin_cache

Conversation

@yuan-luo
Collaborator

@yuan-luo yuan-luo commented Dec 15, 2025

Motivation

Support cos sin cache for Qwen3-VL & GLM-4.1V.

This PR refactors the rotary positional embedding (RoPE) implementation to expose an explicit cosine/sine cache interface and reuse it across the 2D vision RoPE code path. Rather than recomputing frequencies and repeatedly calling cos()/sin(), we precompute and cache the 1D cosine/sine tables once, then index into this cache for both text RoPE and the 2D grid RoPE used by the vision encoder.
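The caching scheme described above can be sketched roughly as follows. This is a minimal illustration assuming the standard RoPE frequency computation; `build_cos_sin_cache` is a hypothetical helper name, not SGLang's actual API:

```python
import torch

def build_cos_sin_cache(head_dim: int, max_pos: int, base: float = 10000.0):
    # Precompute the 1D cos/sin tables once (hypothetical helper, not the
    # exact SGLang implementation). Standard RoPE inverse frequencies:
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    t = torch.arange(max_pos).float()
    freqs = torch.outer(t, inv_freq)  # [max_pos, head_dim // 2]
    return freqs.cos(), freqs.sin()

cos, sin = build_cos_sin_cache(head_dim=128, max_pos=4096)

# Later code paths index into the cache instead of re-deriving frequencies
# and calling cos()/sin() per forward pass:
positions = torch.tensor([0, 1, 2, 5])
cos_sel, sin_sel = cos[positions], sin[positions]  # [4, 64] each
```

The win comes from replacing per-call trigonometric kernels with a single precomputation plus cheap gather operations, which both text RoPE and the vision encoder's 2D grid RoPE can share.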

This PR is step 1, which refactors Qwen3-VL and GLM-4.1V.

Before PR: 490us (profiler screenshot)

After PR: 186us (profiler screenshot)

Inspired by vllm-project/vllm#28798 & vllm-project/vllm#28962.

$SGLANG_USE_CUDA_IPC_TRANSPORT=1 SGLANG_VLM_CACHE_SIZE_MB=512 python -m sglang.launch_server --model-path /home/admin/Qwen3-VL-8B-Instruct/ --host 0.0.0.0 --port 30000 --trust-remote-code --tp-size 2 --enable-cache-report --log-level info --max-running-requests 48 --mem-fraction-static 0.7 --chunked-prefill-size 8192  --attention-backend flashinfer --mm-attention-backend fa3 --log-level debug --log-requests --log-requests-level 1
$bash bench_local_video.sh 
{"id":"06642dd08e3542bdb47dff2ec8609978","object":"chat.completion","created":1765823917,"model":"auto","choices":[{"index":0,"message":{"role":"assistant","content":"视频里的招牌上写着“小鞋匠洗鞋”,并附有服务内容“修复 上色 保养”和手机号码“15295211190”。","reasoning_content":null,"tool_calls":null},"logprobs":null,"finish_reason":"stop","matched_stop":151645}],"usage":{"prompt_tokens":12009,"total_tokens":12050,"completion_tokens":41,"prompt_tokens_details":{"cached_tokens":3},"reasoning_tokens":0},"metadata":{"weight_version":"default","e2e_latency":2586.2960815429688,"ttft_latency":2586.3027572631836,"queue_latency":1.3520624488592148}}
real    0m2.593s
user    0m0.001s
sys     0m0.003s
{"id":"e108ca7921db464990ad4cf7365d86cc","object":"chat.completion","created":1765823918,"model":"auto","choices":[{"index":0,"message":{"role":"assistant","content":"视频里的招牌上写着:\n\n**小鞋匠洗鞋**\n\n此外,招牌上还有以下信息:\n\n- **修复 上色 保养**\n- **手机号码:15295211190**\n\n招牌的背景是黄色的,文字是黑色的,顶部有一个白色鞋子的图案。招牌旁边还有一个红色的波浪形遮阳篷。","reasoning_content":null,"tool_calls":null},"logprobs":null,"finish_reason":"stop","matched_stop":151645}],"usage":{"prompt_tokens":12009,"total_tokens":12087,"completion_tokens":78,"prompt_tokens_details":{"cached_tokens":12008},"reasoning_tokens":0},"metadata":{"weight_version":"default","e2e_latency":868.2491779327393,"ttft_latency":723.18434715271,"queue_latency":1.3202261179685593}}
real    0m0.875s
user    0m0.000s
sys     0m0.004s

Modifications

Accuracy Tests

$python -m sglang.launch_server --model-path Qwen/Qwen3-VL-8B-Instruct --host 0.0.0.0 --port 30000 --trust-remote-code --tp-size 2 --enable-cache-report --log-level info --max-running-requests 48 --mem-fraction-static 0.7 --chunked-prefill-size 8192 --attention-backend fa3 --mm-attention-backend fa3 
$python3 -m lmms_eval --model openai_compatible --model_args model_version=Qwen/Qwen3-VL-8B-Instruct   --tasks mmmu_val   --batch_size 16

Baseline:

openai_compatible (model_version=Qwen/Qwen3-VL-8B-Instruct), gen_kwargs: (), limit: None, num_fewshot: None, batch_size: 16
| Tasks  |Version|Filter|n-shot| Metric |   |Value |   |Stderr|
|--------|------:|------|-----:|--------|---|-----:|---|------|
|mmmu_val|      0|none  |     0|mmmu_acc|↑  |0.5122|±  |   N/A|

PR:

openai_compatible (model_version=Qwen/Qwen3-VL-8B-Instruct), gen_kwargs: (), limit: None, num_fewshot: None, batch_size: 16
| Tasks  |Version|Filter|n-shot| Metric |   |Value |   |Stderr|
|--------|------:|------|-----:|--------|---|-----:|---|------|
|mmmu_val|      0|none  |     0|mmmu_acc|↑  |0.5156|±  |   N/A|

Benchmarking and Profiling

8xH20
Qwen3-VL-8B-Instruct
TTFT speedup ~2% (mean TTFT 2475.22 ms → 2429.13 ms).

Server:

$python -m sglang.launch_server \
  --model-path /home/admin/Qwen3-VL-8B-Instruct \
  --host 0.0.0.0 \
  --port 30000 \
  --trust-remote-code \
  --tp-size 1 \
  --enable-cache-report \
  --max-running-requests 128 \
  --mem-fraction-static 0.7 \
  --chunked-prefill-size 8192 \
  --attention-backend fa3 \
  --mm-attention-backend fa3 \
  --log-level debug \
  --log-requests \
  --log-requests-level 1

Client:

$python3 -m sglang.bench_serving \
  --backend sglang-oai-chat \
  --dataset-name image \
  --num-prompts 256 \
  --apply-chat-template \
  --random-input-len 128 \
  --random-output-len 1 \
  --image-resolution 560x560 \
  --image-format jpeg \
  --image-count 1 \
  --image-content random \
  --random-range-ratio 0.1 \
  --port 30000 \
  --max-concurrency 32
Baseline
============ Serving Benchmark Result ============
Backend:                                 sglang-oai-chat
Traffic request rate:                    inf       
Max request concurrency:                 32        
Successful requests:                     256       
Benchmark duration (s):                  20.42     
Total input tokens:                      104053    
Total input text tokens:                 20597     
Total input vision tokens:               83456     
Total generated tokens:                  256       
Total generated tokens (retokenized):    256       
Request throughput (req/s):              12.54     
Input token throughput (tok/s):          5094.98   
Output token throughput (tok/s):         12.54     
Peak output token throughput (tok/s):    32.00     
Peak concurrent requests:                64        
Total token throughput (tok/s):          5107.51   
Concurrency:                             31.03     
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   2475.23   
Median E2E Latency (ms):                 2485.64   
---------------Time to First Token----------------
Mean TTFT (ms):                          2475.22   
Median TTFT (ms):                        2485.63   
P99 TTFT (ms):                           3237.06   
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          0.00      
Median TPOT (ms):                        0.00      
P99 TPOT (ms):                           0.00      
---------------Inter-Token Latency----------------
Mean ITL (ms):                           0.00      
Median ITL (ms):                         0.00      
P95 ITL (ms):                            0.00      
P99 ITL (ms):                            0.00      
Max ITL (ms):                            0.00      
==================================================

PR:
============ Serving Benchmark Result ============
Backend:                                 sglang-oai-chat
Traffic request rate:                    inf       
Max request concurrency:                 32        
Successful requests:                     256       
Benchmark duration (s):                  20.24     
Total input tokens:                      104050    
Total input text tokens:                 20593     
Total input vision tokens:               83457     
Total generated tokens:                  256       
Total generated tokens (retokenized):    256       
Request throughput (req/s):              12.65     
Input token throughput (tok/s):          5139.84   
Output token throughput (tok/s):         12.65     
Peak output token throughput (tok/s):    32.00     
Peak concurrent requests:                64        
Total token throughput (tok/s):          5152.49   
Concurrency:                             30.72     
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   2429.14   
Median E2E Latency (ms):                 2453.28   
---------------Time to First Token----------------
Mean TTFT (ms):                          2429.13   
Median TTFT (ms):                        2453.27   
P99 TTFT (ms):                           3259.75   
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          0.00      
Median TPOT (ms):                        0.00      
P99 TPOT (ms):                           0.00      
---------------Inter-Token Latency----------------
Mean ITL (ms):                           0.00      
Median ITL (ms):                         0.00      
P95 ITL (ms):                            0.00      
P99 ITL (ms):                            0.00      
Max ITL (ms):                            0.00      
==================================================

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @yuan-luo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant optimization to the Rotary Positional Embedding (RoPE) computation within the Qwen3-VL model. By implementing a caching mechanism for cosine and sine tables, the system avoids redundant calculations, particularly for the 2D vision RoPE, leading to more efficient processing and potentially improved inference speed for multimodal tasks.

Highlights

  • Rotary Positional Embedding (RoPE) Refactoring: The core change involves refactoring the RoPE implementation to introduce an explicit cosine/sine cache interface, inspired by vLLM's approach.
  • Performance Optimization: Instead of recomputing frequencies and repeatedly calling cos()/sin(), the system now precomputes and caches 1D cosine/sine tables once, which are then reused for both text RoPE and the 2D grid RoPE used by the vision encoder.
  • Qwen3-VL Model Integration: The Qwen3-VL model has been refactored to utilize this new caching mechanism, specifically in its vision attention layers, to improve efficiency.
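The 2D grid reuse in the highlights above can be sketched as follows. This is illustrative only; the function name and the half-row/half-column layout are assumptions, not SGLang's exact implementation:

```python
import torch

# Sketch: reuse a 1D cos/sin cache for the 2D vision RoPE. Each image patch
# gathers its rotary angles from its row and column indices into the same
# precomputed 1D tables, instead of recomputing cos()/sin() on a 2D grid.
def vision_rope_from_cache(cos, sin, h, w):
    hpos = torch.arange(h).repeat_interleave(w)  # row index of each patch
    wpos = torch.arange(w).repeat(h)             # column index of each patch
    cos_2d = torch.cat([cos[hpos], cos[wpos]], dim=-1)  # [h*w, head_dim]
    sin_2d = torch.cat([sin[hpos], sin[wpos]], dim=-1)
    return cos_2d, sin_2d

# Tiny 1D cache covering max(h, w) positions, head_dim = 32:
inv_freq = 1.0 / (10000.0 ** (torch.arange(0, 32, 2).float() / 32))
freqs = torch.outer(torch.arange(8).float(), inv_freq)  # [8, 16]
cos_2d, sin_2d = vision_rope_from_cache(freqs.cos(), freqs.sin(), h=4, w=6)
```

Because `cos[hpos]` and `cos[wpos]` are plain gathers, the 2D tables for any grid size (up to the cached maximum) come almost for free once the 1D cache exists.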

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the rotary positional embedding (RoPE) implementation for Qwen3-VL to use a cosine/sine cache, which is a good performance optimization. The changes are mostly correct and achieve the intended goal. I have a few suggestions for code cleanup and refactoring to improve maintainability and remove dead code.

@yuan-luo yuan-luo force-pushed the cos_sin_cache branch 2 times, most recently from 42fabc7 to da0c9f4 on December 16, 2025 at 06:13
@yuan-luo
Collaborator Author

/tag-and-rerun-ci

@yuan-luo yuan-luo changed the title [VLM] Support cos sin cache for Qwen3-VL [VLM] Support cos sin cache for Qwen3-VL & GLM-4.1V Dec 16, 2025
Collaborator

@BBuf BBuf left a comment


Good job. LGTM.

Collaborator

@JustinTong0323 JustinTong0323 left a comment


Verified, LGTM

@JustinTong0323
Collaborator

/rerun-failed-ci

@yuan-luo yuan-luo merged commit 8fa3dc3 into sgl-project:main Dec 18, 2025
177 of 187 checks passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 19, 2025
@yuan-luo yuan-luo deleted the cos_sin_cache branch December 22, 2025 10:52
Prozac614 pushed a commit to Prozac614/sglang that referenced this pull request Dec 23, 2025
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
jiaming1130 pushed a commit to zhuyijie88/sglang that referenced this pull request Dec 25, 2025
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>