
[Blackwell] Make mxint4 flashinfer_trtllm moe gemm set by default on blackwell #18136

Merged
b8zhong merged 1 commit into sgl-project:main from bzhng-development:vz/default-flashinfer-mxint4-blackwell on Mar 1, 2026

Conversation

@vincentzed (Contributor) commented Feb 3, 2026

Motivation

After #16892, we found it suitable to enable this by default.

Input lens: [8192]. Output lens: [1024].
|   batch size |   input len |   latency (s) |   input throughput (tok/s) |   output throughput (tok/s) | acc length   |   ITL (ms) |   input cost ($/1M) |   output cost ($/1M) | cache hit rate   |
|--------------|-------------|---------------|----------------------------|-----------------------------|--------------|------------|---------------------|----------------------|------------------|
|            1 |        8192 |          7.36 |                    40647.1 |                      143.04 | n/a          |       6.99 |                0.04 |                 7.77 | n/a              |
|            4 |        8192 |          9.01 |                    42463.8 |                      497.41 | n/a          |       8.04 |                0.04 |                 2.23 | n/a              |
|            8 |        8192 |         12.04 |                    43786.8 |                      777.19 | n/a          |      10.29 |                0.04 |                 1.43 | n/a              |
|           16 |        8192 |         17.2  |                    44599.6 |                     1148.9  | n/a          |      13.93 |                0.04 |                 0.97 | n/a              |
|           32 |        8192 |         26.53 |                    44599.8 |                     1586.41 | n/a          |      20.17 |                0.04 |                 0.7  | n/a              |
|           64 |        8192 |         41.54 |                    44598.8 |                     2200.14 | n/a          |      29.09 |                0.04 |                 0.51 | n/a              |
|          128 |        8192 |         66.24 |                    44598.5 |                     3067.66 | n/a          |      41.73 |                0.04 |                 0.36 | n/a              |
|          256 |        8192 |        113.05 |                    44728.8 |                     3961.87 | n/a          |      64.62 |                0.04 |                 0.28 | n/a              |

Modifications

python3 -m sglang.launch_server --model-path moonshotai/Kimi-K2-Thinking --tp 8 --trust-remote-code --tool-call-parser kimi_k2 --reasoning-parser kimi_k2

python3 -m sglang.bench_one_batch_server --model None --tokenizer-path xxx --base-url http://localhost:30000 --batch-size 1 4 8 16 32 64 128 256 --input-len 8192 --output-len 1024 --show-report

https://huggingface.co/moonshotai/Kimi-K2-Thinking/blob/main/config.json
Important: neither regular K2 nor K2 0905 uses Marlin.
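
For context, a minimal sketch of how one might check whether a downloaded checkpoint's config.json matches the int4 recipe this PR keys on. The nested config_groups/group_0 path is an assumption about the compressed-tensors layout, not code from this PR; the field names mirror the detection logic in the diff.

```python
import json


def matches_int4_group32(config_path: str) -> bool:
    """Return True if config.json advertises compressed-tensors int4 with group size 32."""
    with open(config_path) as f:
        cfg = json.load(f)
    quant_cfg = cfg.get("quantization_config") or cfg.get("compression_config") or {}
    if quant_cfg.get("quant_method") != "compressed-tensors":
        return False
    # Assumed layout: the weight scheme sits under config_groups/group_0/weights.
    weights = quant_cfg.get("config_groups", {}).get("group_0", {}).get("weights", {})
    return (
        weights.get("num_bits") == 4
        and weights.get("group_size") == 32
        and weights.get("strategy") == "group"
        and weights.get("type") == "int"
    )


if __name__ == "__main__":
    print(matches_int4_group32("config.json"))
```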

Accuracy Tests

Benchmarking and Profiling

@gemini-code-assist (Contributor)

Summary of Changes


This pull request optimizes the performance of certain Kimi K2/K2.5 models on Blackwell GPUs by automatically configuring the flashinfer_trtllm MoE runner backend. This change streamlines the setup process for these models, leveraging specialized hardware capabilities and specific quantization schemes to achieve better inference efficiency, as supported by the provided benchmark data.

Highlights

  • Automatic MoE Backend Activation: The pull request introduces logic to automatically enable flashinfer_trtllm as the Mixture of Experts (MoE) runner backend for specific Kimi K2/K2.5 models that utilize int4 compressed tensor quantization on Blackwell GPUs.
  • Kimi K2/K2.5 Model Identification: New code has been added to accurately identify Kimi K2/K2.5 models based on their quantization configuration, specifically looking for compressed-tensors with 4-bit integer weights, a group size of 32, and a group strategy.
  • Enhanced Logging: The logging mechanism has been updated to provide more specific information, indicating when flashinfer_trtllm is activated for Kimi K2/K2.5 models on Blackwell, distinguishing it from other models like DeepSeekV3ForCausalLM.


Changelog
  • python/sglang/srt/server_args.py
    • Added a new boolean variable is_kimi_k2_k25_thinking_int4 to detect if the model's quantization configuration matches the criteria for Kimi K2/K2.5 models using int4 compressed tensors.
    • Modified the condition for setting self.moe_runner_backend to flashinfer_trtllm to include the newly defined is_kimi_k2_k25_thinking_int4 flag, alongside the existing fp8, modelopt_fp8, and modelopt_fp4 quantizations (a simplified sketch of this rule follows the changelog).
    • Updated the logger.info message to provide a more specific context when flashinfer_trtllm is enabled for Kimi K2/K2.5 models on Blackwell, differentiating it from DeepSeekV3ForCausalLM.
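
A simplified sketch of that selection rule. The helper name, the is_blackwell flag, and the "auto" check below are assumptions; the merged logic inside python/sglang/srt/server_args.py checks more conditions (model architecture, kernel availability) and only the new int4 branch alongside the existing fp8/fp4 cases is illustrated here.

```python
import logging

logger = logging.getLogger(__name__)


def pick_moe_runner_backend(
    requested: str,
    quantization: str | None,
    is_blackwell: bool,
    is_kimi_k2_k25_thinking_int4: bool,
) -> str:
    """Hypothetical standalone version of the default described in the changelog."""
    if requested != "auto":
        return requested  # an explicit user choice always wins
    if is_blackwell and (
        quantization in ("fp8", "modelopt_fp8", "modelopt_fp4")
        or is_kimi_k2_k25_thinking_int4
    ):
        logger.info(
            "Using flashinfer_trtllm MoE runner backend on Blackwell "
            "(fp8/fp4 DeepSeekV3 or Kimi K2/K2.5 int4)."
        )
        return "flashinfer_trtllm"
    return requested
```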
Activity
  • The author has provided benchmark results in the pull request description, demonstrating the performance benefits of the proposed changes.
  • This pull request is a follow-up to a previous investigation or change (PR Support mxint4 flashinfer_trtllm moe gemm #16892), indicating iterative development and refinement based on prior findings.

@gemini-code-assist (Bot) left a comment

Code Review

This pull request enables the flashinfer_trtllm MoE backend by default for Kimi int4 models on Blackwell GPUs. The changes introduce logic to detect this specific quantization configuration. My review identified a potential critical issue where this detection might fail if the model's quantization information is stored under compression_config instead of quantization_config. I've provided a code suggestion to make the detection more robust by checking both locations. Additionally, I've included a medium-severity suggestion to improve maintainability by replacing magic values in the detection logic with named constants.

Comment on lines 1279 to +1280
quant_method = get_quantization_config(hf_config)
quant_cfg = getattr(hf_config, "quantization_config", None) or {}

Severity: critical

The logic here for getting quant_method and quant_cfg only considers hf_config.quantization_config. However, as seen in the _parse_quant_hf_config method within ModelConfig, models using the compressed-tensors format might store their configuration under hf_config.compression_config. If the Kimi model uses compression_config, this detection logic will fail as both quant_method and quant_cfg will be None or empty. The logic should be updated to check for compression_config as a fallback, similar to _parse_quant_hf_config.

Suggested change:
- quant_method = get_quantization_config(hf_config)
- quant_cfg = getattr(hf_config, "quantization_config", None) or {}
+ quant_cfg = getattr(hf_config, "quantization_config", None) or getattr(hf_config, "compression_config", None) or {}
+ quant_method = quant_cfg.get("quant_method")

Comment on lines +1287 to +1293
is_kimi_k2_k25_thinking_int4 = (
    quant_method == "compressed-tensors"
    and weights_cfg.get("num_bits") == 4
    and weights_cfg.get("group_size") == 32
    and weights_cfg.get("strategy") == "group"
    and weights_cfg.get("type") == "int"
)

Severity: medium

The conditions for detecting the Kimi int4 model use several magic strings and numbers. It would be more maintainable to define these as constants at the top of the file or within the class. This makes the code easier to read and modify if the quantization recipe changes in the future.
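
An illustrative sketch of that suggestion. The constant names are hypothetical and not from the PR; the example quant_method and weights_cfg values stand in for the fields parsed from the HF quantization config above.

```python
# Hypothetical constants illustrating the reviewer's suggestion.
KIMI_INT4_QUANT_METHOD = "compressed-tensors"
KIMI_INT4_NUM_BITS = 4
KIMI_INT4_GROUP_SIZE = 32
KIMI_INT4_STRATEGY = "group"
KIMI_INT4_WEIGHT_TYPE = "int"

# Example stand-ins for the values parsed from the HF config.
quant_method = "compressed-tensors"
weights_cfg = {"num_bits": 4, "group_size": 32, "strategy": "group", "type": "int"}

is_kimi_k2_k25_thinking_int4 = (
    quant_method == KIMI_INT4_QUANT_METHOD
    and weights_cfg.get("num_bits") == KIMI_INT4_NUM_BITS
    and weights_cfg.get("group_size") == KIMI_INT4_GROUP_SIZE
    and weights_cfg.get("strategy") == KIMI_INT4_STRATEGY
    and weights_cfg.get("type") == KIMI_INT4_WEIGHT_TYPE
)
```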

@vincentzed (Contributor, Author)

Gemini is right

@vincentzed vincentzed changed the title Make mxint4 flashinfer_trtllm moe gemm set by default on blackwell [Blackwell] Make mxint4 flashinfer_trtllm moe gemm set by default on blackwell Feb 3, 2026
Signed-off-by: vincentzed <207368749+vincentzed@users.noreply.github.com>
@vincentzed force-pushed the vz/default-flashinfer-mxint4-blackwell branch from 8f46313 to 9e84744 on February 28, 2026 01:59
@b8zhong (Collaborator) commented Feb 28, 2026

/tag-and-rerun-ci

@b8zhong b8zhong enabled auto-merge (squash) February 28, 2026 02:14
@vincentzed (Contributor, Author)

/rerun-failed-ci

@b8zhong b8zhong merged commit 894e887 into sgl-project:main Mar 1, 2026
237 of 264 checks passed
@JustinTong0323 (Collaborator)

@vincentzed @b8zhong I'm hitting this error:

  File "/root/xinyuan/sglang/python/sglang/srt/models/deepseek_v2.py", line 1778, in <lambda>
    lambda idx, prefix: DeepseekV2DecoderLayer(
                        ^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/xinyuan/sglang/python/sglang/srt/models/deepseek_v2.py", line 1522, in __init__
    self.mlp = DeepseekV2MoE(
               ^^^^^^^^^^^^^^
  File "/root/xinyuan/sglang/python/sglang/srt/models/deepseek_v2.py", line 386, in __init__
    self.experts = get_moe_impl_class(quant_config)(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/xinyuan/sglang/python/sglang/srt/layers/moe/fused_moe_triton/layer.py", line 277, in __init__
    self.quant_method = quant_config.get_quant_method(self, prefix)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/xinyuan/sglang/python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors.py", line 179, in get_quant_method
    layer.scheme = self.get_moe_scheme(layer=layer, layer_name=prefix)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/xinyuan/sglang/python/sglang/srt/layers/quantization/compressed_tensors/compressed_tensors.py", line 680, in get_moe_scheme
    return CompressedTensorsMxInt4MoE(self)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Can't instantiate abstract class CompressedTensorsMxInt4MoE without an implementation for abstract method 'apply_weights'

This happens when launching Kimi-K2.5 with:

python3 -m sglang.launch_server --model-path moonshotai/Kimi-K2.5 --tp 8 --trust-remote-code --tool-call-parser kimi_k2 --reasoning-parser kimi_k2
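
For readers unfamiliar with the failure mode: the TypeError is standard Python ABC behavior when a subclass is instantiated without implementing every abstract method. A minimal reproduction of the pattern (the class names below are illustrative, not the real sglang classes):

```python
from abc import ABC, abstractmethod


class MoEMethodBase(ABC):
    @abstractmethod
    def apply_weights(self, layer, x):
        ...


class IncompleteMoEMethod(MoEMethodBase):
    # No apply_weights override, so instantiation fails at runtime.
    pass


IncompleteMoEMethod()  # TypeError: Can't instantiate abstract class ...
```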

JustinTong0323 added a commit to JustinTong0323/sglang that referenced this pull request on Mar 2, 2026.
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request on Mar 9, 2026.
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request on Mar 21, 2026.
JustinTong0323 pushed a commit to JustinTong0323/sglang that referenced this pull request on Apr 7, 2026.