
Conversation


@JaceyShao (Contributor) commented Aug 7, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as the test command(s) to run.
  • The test results, such as a before/after comparison or e2e results.

Purpose

The H20-3e is an H20 GPU with 141 GB of memory. This PR adds MoE tuning configs for GLM-4.5.
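
The config the tuner emits is a JSON file checked into vllm/model_executor/layers/fused_moe/configs/, named after the expert count, intermediate size, and device (here device_name=NVIDIA_H20-3e). Below is a minimal sketch of its shape, rendered as a Python dict, with illustrative tile sizes rather than this PR's actual tuned values:

# Shape of a fused-MoE tuning config as vLLM consumes it: keys are
# batch sizes (M), values are Triton kernel launch parameters.
# The tile sizes below are illustrative, not this PR's tuned values.
moe_config = {
    "1":  {"BLOCK_SIZE_M": 16, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128,
           "GROUP_SIZE_M": 1, "num_warps": 4, "num_stages": 3},
    "64": {"BLOCK_SIZE_M": 32, "BLOCK_SIZE_N": 128, "BLOCK_SIZE_K": 128,
           "GROUP_SIZE_M": 8, "num_warps": 4, "num_stages": 4},
    # ...one entry per tuned batch size
}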

Test Plan

Tuning command:

python3 /mnt/workspace/repos/vllm/benchmarks/kernels/benchmark_moe.py --model /mnt/models/ZhipuAI/GLM-4.5 -tp 8 --dtype auto --tune
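
The tuner benchmarks candidate Triton tile configurations per batch size and writes the best ones into the JSON file described above. At serve time, vLLM picks the entry whose batch-size key is closest to the actual token count; the helper below is a simplified, hypothetical standalone sketch of that selection, not the exact implementation in fused_moe.py:

# Simplified sketch of runtime config selection: load the tuned JSON
# for this device and return the entry whose batch-size key is
# closest to the current M. Hypothetical helper, for illustration.
import json

def pick_moe_config(config_path: str, m: int) -> dict:
    with open(config_path) as f:
        configs = json.load(f)
    closest = min(configs, key=lambda key: abs(int(key) - m))
    return configs[closest]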

Deploy GLM-4.5:

vllm serve /mnt/workspace/GLM-4.5 --host 0.0.0.0 --port 8000 --root '/' --no-enable-prefix-caching --trust-remote-code --tensor-parallel-size 8 --served-model-name GLM-4.5

Benchmark:

python3 -m sglang.bench_serving --tokenizer /mnt/models/ZhipuAI/GLM-4.5 --host 0.0.0.0 --port 8000 --backend vllm --dataset-name random --random-input 1024 --random-output 512 --max-concurrency 8 --num-prompt 200

Test Results

Results without MoE tuning:

============ Serving Benchmark Result ============
Backend:                                 vllm      
Traffic request rate:                    inf       
Max request concurrency:                 8         
Successful requests:                     200       
Benchmark duration (s):                  188.34    
Total input tokens:                      103005    
Total generated tokens:                  53590     
Total generated tokens (retokenized):    53521     
Request throughput (req/s):              1.06      
Input token throughput (tok/s):          546.92    
Output token throughput (tok/s):         284.54    
Total token throughput (tok/s):          831.46    
Concurrency:                             7.80      
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   7342.70   
Median E2E Latency (ms):                 7227.67   
---------------Time to First Token----------------
Mean TTFT (ms):                          132.61    
Median TTFT (ms):                        130.02    
P99 TTFT (ms):                           467.44    
---------------Inter-Token Latency----------------
Mean ITL (ms):                           27.05     
Median ITL (ms):                         25.35     
P95 ITL (ms):                            25.84     
P99 ITL (ms):                            108.98    
Max ITL (ms):                            413.55    
==================================================

Results with MoE tuning:

============ Serving Benchmark Result ============
Backend:                                 vllm      
Traffic request rate:                    inf       
Max request concurrency:                 8         
Successful requests:                     200       
Benchmark duration (s):                  174.84    
Total input tokens:                      103005    
Total generated tokens:                  53590     
Total generated tokens (retokenized):    53521     
Request throughput (req/s):              1.14      
Input token throughput (tok/s):          589.13    
Output token throughput (tok/s):         306.51    
Total token throughput (tok/s):          895.64    
Concurrency:                             7.79      
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   6811.83   
Median E2E Latency (ms):                 6722.36   
---------------Time to First Token----------------
Mean TTFT (ms):                          162.75    
Median TTFT (ms):                        129.51    
P99 TTFT (ms):                           1099.86   
---------------Inter-Token Latency----------------
Mean ITL (ms):                           24.95     
Median ITL (ms):                         22.98     
P95 ITL (ms):                            23.65     
P99 ITL (ms):                            111.87    
Max ITL (ms):                            1048.71   
==================================================
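
Summary (untuned → tuned): request throughput 1.06 → 1.14 req/s (+7.5%), total token throughput 831.46 → 895.64 tok/s (+7.7%), mean E2E latency 7342.70 → 6811.83 ms (-7.2%), and mean ITL 27.05 → 24.95 ms (-7.8%). P99 TTFT and max ITL are higher in the tuned run (1099.86 ms and 1048.71 ms), which may simply be run-to-run variance.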


@gemini-code-assist bot left a comment


Code Review

This pull request adds a new fused MoE kernel tuning configuration for the NVIDIA H20-3e GPU, specifically for the GLM-4.5 model. The new configuration file is generated from a tuning process, and the included benchmark results clearly demonstrate a performance improvement with increased request and token throughput. The changes are well-documented, and the new configuration file adheres to the project's existing format and naming conventions. The pull request is a valuable performance enhancement and is ready for merging.


github-actions bot commented Aug 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only fastcheck CI, a small and essential subset of tests that quickly catches errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


@jeejeelee (Collaborator) left a comment


Thank you

@jeejeelee

@DarkLight1337 The CI documentation failure seems unrelated to this PR.

@JaceyShao force-pushed the dev/add_glm4.5_moe_config branch from 57f4e1d to 8173fd5 on August 7, 2025 at 07:12
@vllm-bot merged commit c2dba2d into vllm-project:main on Aug 7, 2025 (5 of 8 checks passed)
@JaceyShao deleted the dev/add_glm4.5_moe_config branch on August 7, 2025 at 07:31
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025