
Faster weight processing (trtllm-gen moe nvfp4) #9162

Merged: zhyncs merged 6 commits into sgl-project:main from aleozlx:feature/weight_proc_cache on Aug 14, 2025

Conversation

@aleozlx (Contributor) commented Aug 13, 2025

Motivation

Reduce server start-up time spent on weight processing for trtllm-gen MoE.

Modifications

Speed up weight processing by caching permutation indices. The caching utilities were integrated into FlashInfer in flashinfer-ai/flashinfer#1412; this PR now uses them inside SGLang.
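
As a rough illustration of the idea (a minimal sketch, not the FlashInfer API; the helper and cache names here are hypothetical), permutation indices can be keyed by weight shape and computed once, so every expert with the same shape reuses the same index tensor:

from typing import Callable, Dict, Tuple

import torch

# Hypothetical cache: permutation indices keyed by weight shape, computed once.
_permute_indices_cache: Dict[Tuple[int, ...], torch.Tensor] = {}

def maybe_get_cached_permute_indices(
    weight: torch.Tensor,
    compute_indices: Callable[[torch.Tensor], torch.Tensor],
) -> torch.Tensor:
    # All experts whose weights share a shape reuse one index tensor
    # instead of recomputing it per expert.
    key = tuple(weight.shape)
    if key not in _permute_indices_cache:
        _permute_indices_cache[key] = compute_indices(weight)
    return _permute_indices_cache[key]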

Accuracy Tests

$ python3 benchmark/gsm8k/bench_sglang.py \
  --num-questions 900 \
  --parallel 32 \
  --num-shots 8

Accuracy: 0.961
Invalid: 0.000
Latency: 176.088 s
Output throughput: 515.702 token/s

Benchmarking and Profiling

✅ Weight processing is ~12x faster (from ~7 min 18 s to ~37 s in the logs below)

(Before)
[2025-08-13 14:27:06 TP0] Load weight begin. avail mem=175.34 GB
[2025-08-13 14:27:52 TP2] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:27:52 TP3] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:27:52 TP0] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:27:52 TP1] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:34:24 TP0] Load weight end. type=DeepseekV3ForCausalLM, dtype=torch.bfloat16, avail mem=80.84 GB, mem usage=94.50 GB.

(After)
[2025-08-13 14:47:38 TP0] Load weight begin. avail mem=175.34 GB
[2025-08-13 14:48:10 TP3] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:48:12 TP1] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:48:13 TP0] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:48:14 TP2] Applied flashinfer weight processing for both w13 and w2
[2025-08-13 14:48:15 TP0] Load weight end. type=DeepseekV3ForCausalLM, dtype=torch.bfloat16, avail mem=80.83 GB, mem usage=94.51 GB.

Forward-pass performance is unaffected (see further perf testing results in the comments below).

Thanks to @azhurkevich for the development/testing instructions.

Followed the same benchmark as #8552 (comment).


@gemini-code-assist bot commented:

Summary of Changes

Hello @aleozlx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant performance enhancement by optimizing the weight processing pipeline for trtllm-gen Mixture-of-Experts (MoE) models. The main objective is to drastically cut down server startup times. This is achieved by integrating and leveraging caching utilities from the FlashInfer library, which efficiently stores and reuses permutation indices during the preparation of quantized weights. The changes lead to a much faster initialization process without affecting the forward pass performance, as validated by benchmarks showing a reduction in weight loading time from minutes to seconds.

Highlights

  • Performance Improvement: The primary goal of this pull request is to significantly reduce the server startup time by optimizing the weight processing phase for trtllm-gen MoE models.
  • Caching Mechanism: The core of the optimization involves integrating and utilizing caching mechanisms from the FlashInfer library to store and reuse permutation indices during weight processing.
  • Code Refactoring and Integration: The prepare_static_weights_for_kernel function has been refactored to use _maybe_get_cached_w2_permute_indices and _maybe_get_cached_w3_w1_permute_indices from FlashInfer, replacing the previous manual reordering and shuffling logic (see the sketch after this list).
  • Quantifiable Impact: Benchmarking results show a substantial reduction in weight loading time, from several minutes to under one minute, demonstrating the effectiveness of the caching approach.
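
Continuing the sketch from the Modifications section above (expert count and shapes are made up for illustration), the payoff is that the index computation runs once per shape rather than once per expert, and each expert's processing reduces to a cheap gather:

import torch

num_experts = 8  # illustrative; real MoE models have many more experts
w2_weights = [torch.randn(128, 64) for _ in range(num_experts)]

for w2 in w2_weights:
    # First call computes the indices; subsequent calls hit the cache.
    indices = maybe_get_cached_permute_indices(
        w2, lambda w: torch.randperm(w.shape[0])  # stand-in for the real index computation
    )
    w2_shuffled = w2[indices]  # apply the cached permutation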

@gemini-code-assist bot commented:
Code Review

This pull request introduces a performance optimization for MoE weight processing by caching permutation indices. The changes in python/sglang/srt/layers/quantization/modelopt_quant.py effectively replace repeated computations with a cached lookup, which should significantly reduce server start-up time as shown in the benchmarks. My main feedback is on improving code structure to reduce repetition, which will enhance maintainability.
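
For instance (a hypothetical refactor building on the sketch above, not code from this PR), the repeated lookup-then-gather steps for w13 and w2 could be folded into a single helper:

def permute_with_cache(weight, compute_indices):
    # Shared path for both the w13 and w2 branches: fetch (or compute)
    # the cached indices, then gather.
    indices = maybe_get_cached_permute_indices(weight, compute_indices)
    return weight[indices]

# w13_processed = permute_with_cache(w13, compute_w3_w1_indices)  # names hypothetical
# w2_processed  = permute_with_cache(w2, compute_w2_indices)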

@aleozlx (Contributor, Author) commented Aug 13, 2025

Perf testing results (B200)

✅ TL;DR: no regression

For instructions, see #8552 (comment).

Benchmark --max-concurrency 1

(Before)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 1
Successful requests:                     5
Benchmark duration (s):                  43.44
Total input tokens:                      5000
Total generated tokens:                  5000
Total generated tokens (retokenized):    4996
Request throughput (req/s):              0.12
Input token throughput (tok/s):          115.11
Output token throughput (tok/s):         115.11
Total token throughput (tok/s):          230.22
Concurrency:                             1.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   8685.19
Median E2E Latency (ms):                 8523.30
---------------Time to First Token----------------
Mean TTFT (ms):                          242.79
Median TTFT (ms):                        81.27
P99 TTFT (ms):                           762.93
---------------Inter-Token Latency----------------
Mean ITL (ms):                           8.45
Median ITL (ms):                         8.45
P95 ITL (ms):                            8.61
P99 ITL (ms):                            8.78
Max ITL (ms):                            25.44
==================================================

(After)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 1
Successful requests:                     5
Benchmark duration (s):                  44.04
Total input tokens:                      5000
Total generated tokens:                  5000
Total generated tokens (retokenized):    4997
Request throughput (req/s):              0.11
Input token throughput (tok/s):          113.53
Output token throughput (tok/s):         113.53
Total token throughput (tok/s):          227.06
Concurrency:                             1.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   8805.77
Median E2E Latency (ms):                 8661.03
---------------Time to First Token----------------
Mean TTFT (ms):                          223.18
Median TTFT (ms):                        78.45
P99 TTFT (ms):                           746.50
---------------Inter-Token Latency----------------
Mean ITL (ms):                           8.59
Median ITL (ms):                         8.56
P95 ITL (ms):                            8.90
P99 ITL (ms):                            9.07
Max ITL (ms):                            25.82
==================================================

Benchmark --max-concurrency 4

(Before)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 4
Successful requests:                     20
Benchmark duration (s):                  55.39
Total input tokens:                      20000
Total generated tokens:                  20000
Total generated tokens (retokenized):    19962
Request throughput (req/s):              0.36
Input token throughput (tok/s):          361.08
Output token throughput (tok/s):         361.08
Total token throughput (tok/s):          722.16
Concurrency:                             4.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   11075.18
Median E2E Latency (ms):                 11280.21
---------------Time to First Token----------------
Mean TTFT (ms):                          587.04
Median TTFT (ms):                        814.76
P99 TTFT (ms):                           1058.22
---------------Inter-Token Latency----------------
Mean ITL (ms):                           10.52
Median ITL (ms):                         10.50
P95 ITL (ms):                            11.10
P99 ITL (ms):                            11.46
Max ITL (ms):                            810.01
==================================================

(After)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 4
Successful requests:                     20
Benchmark duration (s):                  55.18
Total input tokens:                      20000
Total generated tokens:                  20000
Total generated tokens (retokenized):    19945
Request throughput (req/s):              0.36
Input token throughput (tok/s):          362.48
Output token throughput (tok/s):         362.48
Total token throughput (tok/s):          724.96
Concurrency:                             4.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   11032.34
Median E2E Latency (ms):                 11289.03
---------------Time to First Token----------------
Mean TTFT (ms):                          567.49
Median TTFT (ms):                        878.50
P99 TTFT (ms):                           911.96
---------------Inter-Token Latency----------------
Mean ITL (ms):                           10.50
Median ITL (ms):                         10.49
P95 ITL (ms):                            11.17
P99 ITL (ms):                            11.49
Max ITL (ms):                            781.61
==================================================

Benchmark --max-concurrency 16

(Before)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 16
Successful requests:                     80
Benchmark duration (s):                  90.68
Total input tokens:                      80000
Total generated tokens:                  80000
Total generated tokens (retokenized):    79766
Request throughput (req/s):              0.88
Input token throughput (tok/s):          882.26
Output token throughput (tok/s):         882.26
Total token throughput (tok/s):          1764.53
Concurrency:                             16.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   18130.56
Median E2E Latency (ms):                 17772.65
---------------Time to First Token----------------
Mean TTFT (ms):                          1575.72
Median TTFT (ms):                        1283.64
P99 TTFT (ms):                           2639.64
---------------Inter-Token Latency----------------
Mean ITL (ms):                           16.61
Median ITL (ms):                         16.47
P95 ITL (ms):                            16.91
P99 ITL (ms):                            17.12
Max ITL (ms):                            1139.41
==================================================

(After)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 16
Successful requests:                     80
Benchmark duration (s):                  91.35
Total input tokens:                      80000
Total generated tokens:                  80000
Total generated tokens (retokenized):    79736
Request throughput (req/s):              0.88
Input token throughput (tok/s):          875.74
Output token throughput (tok/s):         875.74
Total token throughput (tok/s):          1751.48
Concurrency:                             16.00
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   18266.05
Median E2E Latency (ms):                 18400.96
---------------Time to First Token----------------
Mean TTFT (ms):                          1708.28
Median TTFT (ms):                        1937.41
P99 TTFT (ms):                           2604.37
---------------Inter-Token Latency----------------
Mean ITL (ms):                           16.62
Median ITL (ms):                         16.49
P95 ITL (ms):                            16.93
P99 ITL (ms):                            17.17
Max ITL (ms):                            1151.90
==================================================

Benchmark --max-concurrency 32

(Before)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 32
Successful requests:                     160
Benchmark duration (s):                  119.00
Total input tokens:                      160000
Total generated tokens:                  160000
Total generated tokens (retokenized):    159450
Request throughput (req/s):              1.34
Input token throughput (tok/s):          1344.56
Output token throughput (tok/s):         1344.56
Total token throughput (tok/s):          2689.12
Concurrency:                             31.99
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   23793.76
Median E2E Latency (ms):                 24046.88
---------------Time to First Token----------------
Mean TTFT (ms):                          2532.76
Median TTFT (ms):                        2841.09
P99 TTFT (ms):                           3164.95
---------------Inter-Token Latency----------------
Mean ITL (ms):                           21.34
Median ITL (ms):                         21.17
P95 ITL (ms):                            21.60
P99 ITL (ms):                            21.79
Max ITL (ms):                            2359.45
==================================================

(After)

============ Serving Benchmark Result ============
Backend:                                 sglang-oai
Traffic request rate:                    inf
Max request concurrency:                 32
Successful requests:                     160
Benchmark duration (s):                  119.05
Total input tokens:                      160000
Total generated tokens:                  160000
Total generated tokens (retokenized):    159417
Request throughput (req/s):              1.34
Input token throughput (tok/s):          1344.03
Output token throughput (tok/s):         1344.03
Total token throughput (tok/s):          2688.05
Concurrency:                             31.99
----------------End-to-End Latency----------------
Mean E2E Latency (ms):                   23802.94
Median E2E Latency (ms):                 24079.03
---------------Time to First Token----------------
Mean TTFT (ms):                          2520.77
Median TTFT (ms):                        2838.14
P99 TTFT (ms):                           3152.32
---------------Inter-Token Latency----------------
Mean ITL (ms):                           21.37
Median ITL (ms):                         21.18
P95 ITL (ms):                            21.56
P99 ITL (ms):                            21.75
Max ITL (ms):                            2351.39
==================================================

@aleozlx marked this pull request as ready for review August 13, 2025 22:27
@azhurkevich (Collaborator) commented Aug 13, 2025

LGTM. 12x is awesome. Thank you @aleozlx for the FlashInfer and SGLang integration. Thank you @rosenrodt for the original implementation.

CC @zhyncs

@zhyncs self-assigned this Aug 13, 2025
@zhyncs (Collaborator) commented Aug 14, 2025

➜  sglang git:(feature/weight_proc_cache) python3 -m sglang.launch_server --model-path nvidia/DeepSeek-V3-0324-FP4 --trust-remote-code --tp-size 8 --quantization modelopt_fp4 --enable-flashinfer-cutlass-moe
W0814 00:23:28.742000 1672851 torch/utils/cpp_extension.py:2425] TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
W0814 00:23:28.742000 1672851 torch/utils/cpp_extension.py:2425] If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'] to specific architectures.
/sgl-workspace/sglang/python/sglang/srt/managers/session_controller.py:57: SyntaxWarning: invalid escape sequence '\-'
  prefix = " " * len(origin_prefix) + " \- " + child.req.rid
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/sgl-workspace/sglang/python/sglang/launch_server.py", line 11, in <module>
    server_args = prepare_server_args(sys.argv[1:])
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/server_args.py", line 2213, in prepare_server_args
    server_args = ServerArgs.from_cli_args(raw_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/server_args.py", line 1969, in from_cli_args
    return cls(**{attr: getattr(args, attr) for attr in attrs})
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 193, in __init__
  File "/sgl-workspace/sglang/python/sglang/srt/server_args.py", line 361, in __post_init__
    model_config = ModelConfig.from_server_args(self)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 292, in from_server_args
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 277, in __init__
    self._verify_quantization()
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 475, in _verify_quantization
    raise ValueError(
ValueError: Quantization method specified in the model config (fp8) does not match the quantization method specified in the `quantization` argument (modelopt_fp4).
➜  sglang git:(feature/weight_proc_cache) python3 -m sglang.launch_server --model-path nvidia/DeepSeek-V3-0324-FP4 --trust-remote-code --tp-size 8 --quantization modelopt_fp4 --enable-flashinfer-trtllm-moe
W0814 00:24:19.053000 1673501 torch/utils/cpp_extension.py:2425] TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
W0814 00:24:19.053000 1673501 torch/utils/cpp_extension.py:2425] If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'] to specific architectures.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/sgl-workspace/sglang/python/sglang/launch_server.py", line 11, in <module>
    server_args = prepare_server_args(sys.argv[1:])
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/server_args.py", line 2213, in prepare_server_args
    server_args = ServerArgs.from_cli_args(raw_args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/server_args.py", line 1969, in from_cli_args
    return cls(**{attr: getattr(args, attr) for attr in attrs})
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 193, in __init__
  File "/sgl-workspace/sglang/python/sglang/srt/server_args.py", line 361, in __post_init__
    model_config = ModelConfig.from_server_args(self)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 292, in from_server_args
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 277, in __init__
    self._verify_quantization()
  File "/sgl-workspace/sglang/python/sglang/srt/configs/model_config.py", line 475, in _verify_quantization
    raise ValueError(
ValueError: Quantization method specified in the model config (fp8) does not match the quantization method specified in the `quantization` argument (modelopt_fp4).

@azhurkevich (Collaborator) commented:

@zhyncs we will take a look

@zhyncs merged commit 1bc183c into sgl-project:main Aug 14, 2025
84 of 90 checks passed
narutolhy pushed a commit to narutolhy/sglang that referenced this pull request Aug 17, 2025
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025