
[Fix] GLM 4.7 + NVFP4 + MTP#17166

Merged
Fridge003 merged 5 commits into sgl-project:main from bzhng-development:brayden/fix-glm-47-fp4-mtp
Jan 21, 2026

Conversation

@b8zhong
Collaborator

@b8zhong b8zhong commented Jan 15, 2026

Motivation

A few issues were reported by @ynwang007 and @JustinTong0323.

Modifications

  1. We will face an error when loading the draft model with NVFP4 due to an inheritance relationship. Instead of deleting the modelopt -> modelopt_fp4 detection, as was done in fix: GLM4.7-FP4 usage #16581, I think keeping this detection is the better method.
  2. Hardcode a fix for the GLM 4.7 FP4 checkpoint + MTP. Essentially, model.safetensors.index.json should map (you can think of it like a symlink) the layer 92 weights → mtp.safetensors. Models like DeepSeek store these weights alongside the rest as an additional layer, but GLM apparently decided to ship them in a separate mtp.safetensors file; during checkpoint creation with the ModelOpt library, that file was not added to the index. Instead, we can automatically detect this case, warn the user about the invalid checkpoint, and still do the remapping online if mtp.safetensors is present.
  3. Enable trtllm-gen automatically, which has much better performance than the CUTLASS backend (requires FlashInfer 0.6.2, which will be released later), after a small fix in tiny support glm routing flashinfer-ai/flashinfer#2313.
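The online remapping in item 2 can be sketched roughly as follows. This is an illustrative helper, not the actual SGLang implementation; the function and variable names are assumptions:

```python
def patch_missing_mtp_entry(index: dict, mtp_tensor_names: list) -> bool:
    """Add tensors from an unreferenced mtp.safetensors shard to the
    index's weight_map so the standard loader picks them up.

    Returns True if the index was patched, False if mtp.safetensors
    was already referenced (i.e. the checkpoint is well-formed).
    """
    weight_map = index.setdefault("weight_map", {})
    if "mtp.safetensors" in weight_map.values():
        return False  # index already references the MTP shard
    for name in mtp_tensor_names:
        weight_map[name] = "mtp.safetensors"
    return True
```

In a real loader, the tensor names would come from reading the shard's header with the safetensors library, and the patch would be applied in memory with a warning rather than rewriting the checkpoint on disk.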

Accuracy Tests

python3 benchmark/gsm8k/bench_sglang.py --num-questions 1319 --parallel 1319
Downloading from https://raw.githubusercontent.com/openai/grade-school-math/master/grade_school_math/data/test.jsonl to /tmp/test.jsonl
/tmp/test.jsonl: 732kB [00:00, 33.7MB/s]                                                                                                                                                                                                                              
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1319/1319 [00:55<00:00, 23.94it/s]
Accuracy: 0.951
Invalid: 0.000
Latency: 57.876 s
Output throughput: 2323.323 token/s

Benchmarking and Profiling

SGLANG_ENABLE_SPEC_V2=1 python3 -m sglang.launch_server --model-path baseten-admin/glm-4.7-fp8-attn-fp4-mlp --trust-remote-code --tp 8 --quantization modelopt_fp4 --moe-runner-backend flashinfer_trtllm --speculative-algorithm EAGLE --attention-backend trtllm_mha

This is an example of a "broken" checkpoint. After our fix, it will run fine automatically:

[2026-01-15 18:38:23 TP0] Found mtp.safetensors but it's not referenced in model.safetensors.index.json. This is a checkpoint packaging bug. Auto-adding it for loading. Please report this to the checkpoint provider.
[2026-01-15 18:40:04 TP0] Decode batch, #running-req: 1, #token: 320, token usage: 0.00, accept len: 3.33, accept rate: 0.83, cuda graph: True, gen throughput (token/s): 323.35, #queue-req: 0, 
[2026-01-15 18:40:05 TP0] Decode batch, #running-req: 1, #token: 448, token usage: 0.00, accept len: 3.62, accept rate: 0.91, cuda graph: True, gen throughput (token/s): 353.81, #queue-req: 0, 
[2026-01-15 18:40:05 TP0] Decode batch, #running-req: 1, #token: 576, token usage: 0.00, accept len: 3.50, accept rate: 0.88, cuda graph: True, gen throughput (token/s): 340.39, #queue-req: 0, 
[2026-01-15 18:40:05 TP0] Decode batch, #running-req: 1, #token: 704, token usage: 0.00, accept len: 3.33, accept rate: 0.83, cuda graph: True, gen throughput (token/s): 328.84, #queue-req: 0, 
+-------------+--------+------------+-----------------+
| Latency (s) | Tokens | Acc Length | Speed (token/s) |
+-------------+--------+------------+-----------------+
|    2.449    |  767   |   3.335    |     313.24      |
+-------------+--------+------------+-----------------+

@gemini-code-assist
Contributor

Summary of Changes

Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses several critical issues related to model loading and performance for specific GLM models, particularly when using NVFP4 quantization and MTP (Multi-Token Prediction) checkpoints. The changes improve the robustness of the model loading process by handling quantization method compatibility more intelligently and fixing a common checkpoint packaging bug. Additionally, it optimizes performance for GLM4Moe models on compatible hardware by automatically enabling a faster MoE runner backend.

Highlights

  • Improved Quantization Compatibility: Enhanced the logic for handling quantization method discrepancies, especially for draft models, allowing for more flexible loading of models with modelopt_fp4 even if the Hugging Face config specifies modelopt.
  • GLM 4.7 MTP Checkpoint Fix: Implemented an automatic detection and remapping mechanism for GLM 4.7 FP4 checkpoints that incorrectly package mtp.safetensors outside the model.safetensors.index.json, ensuring these models can load correctly without manual intervention.
  • Automatic trtllm-gen Enablement: Automatically enables the flashinfer_trtllm backend for Glm4MoeForCausalLM models using modelopt_fp4 quantization on SM100-supported GPUs, leveraging its superior performance.
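The compatibility relaxation in the first highlight can be illustrated with a small sketch. The helper name and override table below are assumptions for illustration, not SGLang's actual code:

```python
# Hypothetical sketch: accept modelopt_fp4 as a refinement of a generic
# "modelopt" quant_method in the HF config instead of treating it as a
# mismatch when loading the draft model.
COMPATIBLE_OVERRIDES = {"modelopt": {"modelopt_fp4", "modelopt_fp8"}}


def resolve_quantization(hf_quant_method, requested):
    """Reconcile the HF config's quant_method with the user-requested one."""
    if requested is None:
        return hf_quant_method
    if hf_quant_method in (None, requested):
        return requested
    if requested in COMPATIBLE_OVERRIDES.get(hf_quant_method, set()):
        # e.g. config says "modelopt", user passes --quantization modelopt_fp4
        return requested
    raise ValueError(
        f"Quantization mismatch: config={hf_quant_method!r}, requested={requested!r}"
    )
```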



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces several important fixes and improvements, primarily for GLM 4.7 models with NVFP4 quantization and MTP. The changes include a more robust quantization method detection, a workaround for incorrectly packaged checkpoints by auto-detecting and including mtp.safetensors, and performance enhancement by automatically enabling the flashinfer_trtllm MoE runner backend under specific conditions. The code is well-structured and addresses the issues effectively. I have one suggestion to refactor some conditional checks for improved conciseness and readability.

@b8zhong b8zhong added the run-ci label Jan 21, 2026
b8zhong and others added 5 commits January 20, 2026 16:40
Co-Authored-By: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>

Co-Authored-By: Warp <agent@warp.dev>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@b8zhong b8zhong force-pushed the brayden/fix-glm-47-fp4-mtp branch from a33c14f to 8f8fd35 on January 21, 2026 00:40
@ConnorLi96
Contributor

Hi @b8zhong, did you try a quality sanity test for this model? I used your launch command and got very bad output. Even a very simple request can trigger it, like

curl -X POST "http://localhost:22345/v1/completions"   -H "Content-Type: application/json"   -d '{
    "model": "zai-org/GLM-4.7",
    "prompt": "What are the top 3 things to do in New York?",
    "max_tokens": 100
  }'

but if we specify --moe-runner-backend flashinfer_cutlass, the output is good.

@b8zhong
Collaborator Author

b8zhong commented Jan 21, 2026

@ConnorLi96 You need nightly FlashInfer for this fix; if you did not install that, it will still be broken.
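A quick way to check whether the installed flashinfer-python meets the >= 0.6.2 requirement is sketched below. This is a standalone illustration; SGLang's own check_pkg_version_at_least helper may be implemented differently:

```python
from importlib.metadata import PackageNotFoundError, version


def parse_version(v):
    """Parse a dotted version string into a tuple of leading integers."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break  # stop at suffixes like "rc1" or "dev"
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def pkg_at_least(pkg, required):
    """True if `pkg` is installed and its version is >= `required`."""
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        return False
    return parse_version(installed) >= parse_version(required)
```

For example, `pkg_at_least("flashinfer-python", "0.6.2")` returns False both when the package is missing and when an older release is installed.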

@Fridge003 Fridge003 merged commit 2ff0880 into sgl-project:main Jan 21, 2026
255 of 277 checks passed
@b8zhong b8zhong deleted the brayden/fix-glm-47-fp4-mtp branch January 21, 2026 13:51
@ConnorLi96
Contributor

Yep, this does work, very fresh update, lol

Comment on lines +1513 to +1533
elif model_arch in ["Glm4MoeForCausalLM"]:
    if is_sm100_supported():
        quantization_config = getattr(hf_config, "quantization_config", None)
        quant_method = (
            quantization_config.get("quant_method")
            if quantization_config is not None
            else None
        )
        if self.quantization is None and quant_method is not None:
            self.quantization = quant_method
        if (
            self.quantization == "modelopt_fp4"
            and self.moe_a2a_backend == "none"
            and self.moe_runner_backend == "auto"
        ):
            # Only enable flashinfer_trtllm if flashinfer-python version is >= 0.6.2
            if check_pkg_version_at_least("flashinfer-python", "0.6.2"):
                self.moe_runner_backend = "flashinfer_trtllm"
                logger.info(
                    "Use flashinfer_trtllm as MoE runner backend on sm100 for Glm4MoeForCausalLM"
                )
Contributor

This if branch has been inserted in the wrong place!

Qwen3NextForCausalLM's Mamba radix cache v2 handling is inside the Glm4MoeForCausalLM branch now.

Collaborator Author

Hi @jimmy-evo, sorry about breaking this code. It looks like it has been fixed in main, though. I will be more careful next time.
