
Revise MRotaryEmbedding's forward #11859

Merged
hnyls2002 merged 1 commit into sgl-project:main from antgroup:revise_rotary_embedding_fwd
Oct 21, 2025

Conversation

@yuan-luo yuan-luo commented Oct 20, 2025

Motivation

This PR is a follow-up to #11722.
It revises the forward method in MRotaryEmbedding so that it no longer breaks backends other than CUDA.
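For reference, the native fallback path computes the standard neox-style rotary transform, which pairs dimension i with dimension i + d/2. A minimal pure-Python sketch of that math (an illustration of the technique, not the sglang implementation):

```python
import math

def apply_neox_rotary(vec, position, base=10000.0):
    """Apply a neox-style rotary embedding to a single head vector.

    Pairs element i with element i + d/2 and rotates each pair by an
    angle that depends on the token position and the pair index.
    """
    d = len(vec)
    half = d // 2
    out = [0.0] * d
    for i in range(half):
        # Per-pair rotation frequency: lower i rotates faster.
        theta = position / (base ** (2 * i / d))
        c, s = math.cos(theta), math.sin(theta)
        x1, x2 = vec[i], vec[i + half]
        out[i] = x1 * c - x2 * s
        out[i + half] = x1 * s + x2 * c
    return out
```

At position 0 every rotation angle is zero, so the transform is the identity; this is a convenient sanity check for any optimized kernel that replaces the native path.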

Accuracy result:
➜  sglang_dev2 git:(revise_rotary_embedding_fwd) ✗ python benchmark/mmmu/bench_sglang.py --port 30000 --concurrency 16

Preparing samples...
Loading datasets for 30 subjects...
Loading datasets: 100%|██████████| 30/30 [00:06<00:00,  4.79it/s]
Saving images to: /root/.cache/mmmu/images
Processing samples...
Processing samples: 100%|██████████| 900/900 [00:00<00:00, 201541.57it/s]
Skipping 0 samples with large images, 0.0% of dataset
Samples have been prepared
100%|██████████| 900/900 [01:54<00:00,  7.85it/s]
Benchmark time: 114.69285167008638
answers saved to: ./answer_sglang.json
Evaluating...
answers saved to: ./answer_sglang.json
{'Accounting': {'acc': 0.433, 'num': 30},
 'Agriculture': {'acc': 0.533, 'num': 30},
 'Architecture_and_Engineering': {'acc': 0.467, 'num': 30},
 'Art': {'acc': 0.667, 'num': 30},
 'Art_Theory': {'acc': 0.833, 'num': 30},
 'Basic_Medical_Science': {'acc': 0.6, 'num': 30},
 'Biology': {'acc': 0.367, 'num': 30},
 'Chemistry': {'acc': 0.367, 'num': 30},
 'Clinical_Medicine': {'acc': 0.633, 'num': 30},
 'Computer_Science': {'acc': 0.433, 'num': 30},
 'Design': {'acc': 0.7, 'num': 30},
 'Diagnostics_and_Laboratory_Medicine': {'acc': 0.467, 'num': 30},
 'Economics': {'acc': 0.5, 'num': 30},
 'Electronics': {'acc': 0.3, 'num': 30},
 'Energy_and_Power': {'acc': 0.267, 'num': 30},
 'Finance': {'acc': 0.4, 'num': 30},
 'Geography': {'acc': 0.4, 'num': 30},
 'History': {'acc': 0.667, 'num': 30},
 'Literature': {'acc': 0.767, 'num': 30},
 'Manage': {'acc': 0.333, 'num': 30},
 'Marketing': {'acc': 0.5, 'num': 30},
 'Materials': {'acc': 0.433, 'num': 30},
 'Math': {'acc': 0.533, 'num': 30},
 'Mechanical_Engineering': {'acc': 0.5, 'num': 30},
 'Music': {'acc': 0.433, 'num': 30},
 'Overall': {'acc': 0.517, 'num': 900},
 'Overall-Art and Design': {'acc': 0.658, 'num': 120},
 'Overall-Business': {'acc': 0.433, 'num': 150},
 'Overall-Health and Medicine': {'acc': 0.593, 'num': 150},
 'Overall-Humanities and Social Science': {'acc': 0.667, 'num': 120},
 'Overall-Science': {'acc': 0.427, 'num': 150},
 'Overall-Tech and Engineering': {'acc': 0.419, 'num': 210},
 'Pharmacy': {'acc': 0.633, 'num': 30},
 'Physics': {'acc': 0.467, 'num': 30},
 'Psychology': {'acc': 0.7, 'num': 30},
 'Public_Health': {'acc': 0.633, 'num': 30},
 'Sociology': {'acc': 0.533, 'num': 30}}
eval out saved to ./val_sglang.json
Overall accuracy: 0.517

Benchmark:

➜  python git:(revise_rotary_embedding_fwd) ✗ python -m sglang.launch_server \
    --model-path Qwen/Qwen2.5-VL-7B-Instruct \
    --mem-fraction-static 0.8 \
    --chat-template 'qwen2-vl' \
    --tp 1 \
    --disable-radix-cache \
    --cuda-graph-bs 256 \
    --cuda-graph-max-bs 256 \
    --chunked-prefill-size 8192 \
    --max-prefill-tokens 8192 \
    --max-running-requests 256 \
    --enable-multimodal

➜  sglang_dev2 git:(revise_rotary_embedding_fwd) ✗ python3 -m sglang.bench_serving \
    --backend sglang-oai-chat \
    --dataset-name mmmu \
    --num-prompts 200 \
    --flush-cache \
    --max-concurrency 16


Before:
============ Serving Benchmark Result ============
Backend:                                 sglang-oai-chat
Traffic request rate:                    inf
Max request concurrency:                 16
Successful requests:                     200
Benchmark duration (s):                  107.82
Total input tokens:                      86435
Total input text tokens:                 17454
Total input vision tokens:               68981
Total generated tokens:                  204800
Total generated tokens (retokenized):    138695
Request throughput (req/s):              1.85
Input token throughput (tok/s):          801.63
Output token throughput (tok/s):         1899.39
Total token throughput (tok/s):          2701.02
Concurrency:                             15.45
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   8331.75
Median E2E Latency (ms):                 8067.46
---------------Time to First Token----------------
Mean TTFT (ms):                          1279.81
Median TTFT (ms):                        1247.15
P99 TTFT (ms):                           2157.51
---------------Inter-Token Latency----------------
Mean ITL (ms):                           8.86
Median ITL (ms):                         6.77
P95 ITL (ms):                            15.32
P99 ITL (ms):                            18.89
Max ITL (ms):                            1369.59
==================================================

After:
============ Serving Benchmark Result ============
Backend:                                 sglang-oai-chat
Traffic request rate:                    inf
Max request concurrency:                 16
Successful requests:                     200
Benchmark duration (s):                  104.98
Total input tokens:                      86435
Total input text tokens:                 17454
Total input vision tokens:               68981
Total generated tokens:                  204800
Total generated tokens (retokenized):    138619
Request throughput (req/s):              1.91
Input token throughput (tok/s):          823.38
Output token throughput (tok/s):         1950.93
Total token throughput (tok/s):          2774.32
Concurrency:                             15.45
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   8111.52
Median E2E Latency (ms):                 7695.90
---------------Time to First Token----------------
Mean TTFT (ms):                          1058.12
Median TTFT (ms):                        996.66
P99 TTFT (ms):                           1556.17
---------------Inter-Token Latency----------------
Mean ITL (ms):                           8.80
Median ITL (ms):                         6.62
P95 ITL (ms):                            13.57
P99 ITL (ms):                            18.20
Max ITL (ms):                            1872.27
==================================================

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Copy link
Copy Markdown
Contributor

Summary of Changes

Hello @yuan-luo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the MRotaryEmbedding's forward pass to enhance its robustness and performance across different execution environments. By introducing a conditional dispatch mechanism, the system can now leverage optimized Triton kernels on CUDA-enabled setups for multimodal inputs, while maintaining full functionality and correctness on other backends. This change ensures broader compatibility without compromising on potential speedups where specialized hardware is available.

Highlights

  • Backend Compatibility: The MRotaryEmbedding's forward method has been revised to ensure compatibility with non-CUDA backends, addressing a potential breakage identified in a previous pull request.
  • Method Refactoring: The original forward_native method was renamed to _forward_native and a new private method, _forward_triton, was introduced to encapsulate Triton kernel-accelerated computations.
  • Conditional Dispatch: The main forward method now acts as a dispatcher, intelligently routing calls to _forward_triton when specific conditions (multimodal input, CUDA availability, neox-style, and matching rotary/head dimensions) are met, otherwise falling back to the _forward_native implementation.
  • Performance Improvement: Serving benchmarks indicate minor performance improvements, including increased request throughput (1.85 req/s to 1.91 req/s) and reduced mean end-to-end latency (8331.75 ms to 8111.52 ms).
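The conditional dispatch described in the bullets above can be sketched as follows. The predicate details (in particular, treating a 2-D mrope positions tensor of shape (3, seq_len) as the multimodal signal) are assumptions based on this summary, and the class and method names mirror the PR description rather than the exact sglang code:

```python
# Hypothetical sketch of the dispatch logic; not the sglang implementation.

def use_triton_path(is_multimodal, device_type, is_neox_style,
                    rotary_dim, head_size):
    """Return True only when the Triton kernel path is safe to take."""
    return (
        is_multimodal
        and device_type == "cuda"   # non-CUDA backends fall back to native
        and is_neox_style
        and rotary_dim == head_size
    )

class MRotaryEmbeddingSketch:
    def __init__(self, head_size, rotary_dim, is_neox_style=True):
        self.head_size = head_size
        self.rotary_dim = rotary_dim
        self.is_neox_style = is_neox_style

    def forward(self, positions_ndim, device_type):
        # Assumption: multimodal (mrope) positions are 2-D, shape (3, seq_len),
        # while text-only positions are a flat 1-D tensor.
        is_multimodal = positions_ndim == 2
        if use_triton_path(is_multimodal, device_type, self.is_neox_style,
                           self.rotary_dim, self.head_size):
            return "_forward_triton"
        return "_forward_native"
```

The point of routing through a single predicate is that every condition that the Triton kernel assumes is checked in one place, so a CPU or ROCm backend simply falls through to the native path instead of hitting a CUDA-only kernel.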


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively refactors the MRotaryEmbedding to support non-CUDA backends by dispatching to either a Triton or a native PyTorch implementation. This is a clean solution that improves code clarity and maintainability. The included benchmarks also show a nice performance improvement, especially in Time to First Token (TTFT). I have one suggestion to improve argument handling and prevent potential silent failures.
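The argument-handling suggestion presumably concerns a dispatcher silently dropping keyword arguments that only one of the two paths supports. A hedged sketch of a guard that fails loudly instead (hypothetical names, not taken from the PR diff):

```python
# Hypothetical illustration of the "silent failure" concern: validate
# keyword arguments before dispatching instead of discarding extras.

def dispatch_forward(native_fn, triton_fn, use_triton, **kwargs):
    allowed = {"positions", "query", "key"}  # assumed signature
    unexpected = set(kwargs) - allowed
    if unexpected:
        # Raising here surfaces a mismatched call site immediately,
        # rather than letting one path quietly ignore an argument.
        raise TypeError(f"unexpected arguments: {sorted(unexpected)}")
    fn = triton_fn if use_triton else native_fn
    return fn(**kwargs)
```

This is the general pattern the review comment points at: when two implementations share a public entry point, the entry point should reject arguments that either implementation would otherwise drop on the floor.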

@yuan-luo yuan-luo force-pushed the revise_rotary_embedding_fwd branch from adf9936 to 6f3caa3 Compare October 20, 2025 11:01

@BBuf BBuf left a comment


LGTM.

@yuan-luo yuan-luo force-pushed the revise_rotary_embedding_fwd branch from 6f3caa3 to ff53417 Compare October 20, 2025 12:36
@yuan-luo yuan-luo enabled auto-merge (squash) October 20, 2025 12:55
@JustinTong0323 JustinTong0323 self-assigned this Oct 20, 2025
@hnyls2002 hnyls2002 disabled auto-merge October 21, 2025 02:38
@hnyls2002 hnyls2002 merged commit 74de76c into sgl-project:main Oct 21, 2025
160 of 176 checks passed
xjpang pushed a commit to xjpang/sglang that referenced this pull request Oct 22, 2025
Co-authored-by: luoyuan.luo <luoyuan.luo@antgroup.com>
Co-authored-by: 羽癫 <yudian.zy@antgroup.com>
Co-authored-by: b8zhong <b8zhong@uwaterloo.ca>