[Glm46v] Bug fix for accuracy drop and unable to launch server #14585
Merged
Changes from all commits (21 commits):
- b67a5a9 byjiang1996: Fix all glm46v & transformer 5.x release issues
- bdeb447 yhyang201: remove flash-attn
- 698d44c yhyang201: fix
- 8ca9cf8 yhyang201: remove rope_config_validation
- edc4668 zRzRzRzRzRzRzR: Update custom_logit_processor.py
- 1552677 zRzRzRzRzRzRzR: add doc
- fb65da2 zRzRzRzRzRzRzR: Update glm4v_moe.py
- c5a56f4 zRzRzRzRzRzRzR: Update glm4v_moe.py
- 966378b zRzRzRzRzRzRzR: Update glm4v_moe.py
- c97a165 zRzRzRzRzRzRzR: Update glm4v_moe.py
- 6bcca63 zRzRzRzRzRzRzR: Update glm4v_moe.py
- 6d2f096 JustinTong0323: Merge branch 'main' into glm
- 66703a1 byjiang1996: Merge branch 'main' into glm46v
- 79887d4 byjiang1996: Fix/Disable broken pp_group use_data_parallel num_fused_shared_expert…
- ffa7fa1 byjiang1996: Fix glm4vmoe get_video_feature
- b18dd0b byjiang1996: Add Glm4vForConditionalGeneration to fa3 default arch
- 04c6f42 byjiang1996: Fix self.num_fused_shared_experts hack in glm45v
- b1a7779 zRzRzRzRzRzRzR: Merge branch 'sgl-project:main' into glm
- e45a481 byjiang1996: disable-shared-experts-fusion for GLM4v
- 9b417b1 byjiang1996: Merge branch 'glm' into glm46v
- 0224b17 zminglei: fix load_weights for glm4v_moe with shared_experts fusion (#14610)
File: `glm45.md` (new, +70 lines)
## Launch GLM-4.5 / GLM-4.6 with SGLang

To serve GLM-4.5 / GLM-4.6 FP8 models on 8xH100/H200 GPUs:

```bash
python3 -m sglang.launch_server --model zai-org/GLM-4.6-FP8 --tp 8
```
### Configuration Tips

- `--max-mamba-cache-size`: Increase this to enlarge the mamba cache and raise the maximum number of concurrently running requests; the trade-off is less KV cache space. Tune it to your workload (see the example below).
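For example, a launch command with an explicit mamba cache size (the value here is illustrative, not a recommendation):

```bash
python3 -m sglang.launch_server --model zai-org/GLM-4.6-FP8 --tp 8 \
    --max-mamba-cache-size 512  # illustrative value; tune to your workload
```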
### EAGLE Speculative Decoding

**Description**: SGLang supports GLM-4.5 / GLM-4.6 with [EAGLE speculative decoding](https://docs.sglang.io/advanced_features/speculative_decoding.html#EAGLE-Decoding).

**Usage**:
Add the arguments `--speculative-algorithm`, `--speculative-num-steps`, `--speculative-eagle-topk`, and `--speculative-num-draft-tokens` to enable this feature. For example:
```bash
python3 -m sglang.launch_server \
    --model-path zai-org/GLM-4.6-FP8 \
    --tp-size 8 \
    --tool-call-parser glm45 \
    --reasoning-parser glm45 \
    --speculative-algorithm EAGLE \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --mem-fraction-static 0.9 \
    --served-model-name glm-4.6-fp8 \
    --enable-custom-logit-processor
```

With `--speculative-eagle-topk 1` the draft is a single chain, so `--speculative-num-draft-tokens` is typically set to `--speculative-num-steps + 1` (here, 3 + 1 = 4).
### Thinking Budget for GLM-4.5 / GLM-4.6

In SGLang, a thinking budget can be implemented with a `CustomLogitProcessor`.

Launch the server with the `--enable-custom-logit-processor` flag.

Sample request:
```python
import openai
from rich.pretty import pprint
from sglang.srt.sampling.custom_logit_processor import Glm4MoeThinkingBudgetLogitProcessor

client = openai.Client(base_url="http://127.0.0.1:30000/v1", api_key="*")
response = client.chat.completions.create(
    model="zai-org/GLM-4.6",
    messages=[
        {
            "role": "user",
            "content": "Question: Is Paris the Capital of France?",
        }
    ],
    max_tokens=1024,
    extra_body={
        # Serialize the processor and cap the thinking phase at 512 tokens.
        "custom_logit_processor": Glm4MoeThinkingBudgetLogitProcessor().to_str(),
        "custom_params": {
            "thinking_budget": 512,
        },
    },
)
pprint(response)
```
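For intuition, here is a minimal, framework-agnostic sketch of the idea behind a thinking-budget processor. This is not SGLang's `Glm4MoeThinkingBudgetLogitProcessor`; the interface and the token ID below are assumptions. Once the thinking budget is spent, the processor masks the logits so that only the end-of-thinking token can be sampled:

```python
import torch

END_OF_THINKING_TOKEN_ID = 12345  # assumption: placeholder ID for the </think> token

def apply_thinking_budget(
    logits: torch.Tensor,       # (vocab_size,) next-token logits
    num_thinking_tokens: int,   # thinking tokens generated so far
    thinking_budget: int,
) -> torch.Tensor:
    """Force the end-of-thinking token once the budget is exhausted."""
    if num_thinking_tokens >= thinking_budget:
        masked = torch.full_like(logits, float("-inf"))
        masked[END_OF_THINKING_TOKEN_ID] = 0.0  # only </think> remains sampleable
        return masked
    return logits
```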
File: GLM-4.6V / GLM-4.5V usage doc (new, +136 lines)
# GLM-4.6V / GLM-4.5V Usage

## Launch commands for SGLang

Below are suggested launch commands tailored to different hardware / precision modes.

### FP8 (quantized) mode

For memory-efficient, latency-optimized deployments (e.g., on H100 or H200) where the FP8 checkpoint is supported:
```bash
python3 -m sglang.launch_server \
    --model-path zai-org/GLM-4.6V-FP8 \
    --tp 2 \
    --ep 2 \
    --host 0.0.0.0 \
    --port 30000 \
    --keep-mm-feature-on-device
```
### Non-FP8 (BF16 / full precision) mode

For deployments on A100/H100 using BF16 (or where the FP8 checkpoint is not used):

```bash
python3 -m sglang.launch_server \
    --model-path zai-org/GLM-4.6V \
    --tp 4 \
    --ep 4 \
    --host 0.0.0.0 \
    --port 30000
```
## Hardware-specific notes / recommendations

- On H100 with FP8: use the FP8 checkpoint for best memory efficiency.
- On A100 / H100 with BF16 (non-FP8): use `--mm-max-concurrent-calls` to control parallel throughput and GPU memory usage during image/video inference (see the sketch after this list).
- On H200 and B200: the model runs out of the box, supporting the full context length plus concurrent image + video processing.
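A hypothetical BF16 launch that caps concurrent multimodal preprocessing calls (the value is illustrative):

```bash
python3 -m sglang.launch_server \
    --model-path zai-org/GLM-4.6V \
    --tp 4 \
    --ep 4 \
    --mm-max-concurrent-calls 8  # illustrative value
```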
## Sending Image/Video Requests

### Image input
```python
import requests

url = "http://localhost:30000/v1/chat/completions"

data = {
    "model": "zai-org/GLM-4.6V",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://github.com/sgl-project/sglang/blob/main/examples/assets/example_image.png?raw=true"
                    },
                },
            ],
        }
    ],
    "max_tokens": 300,
}

response = requests.post(url, json=data)
print(response.text)
```
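Local images can also be embedded inline as a base64 data URL, the standard OpenAI-compatible format (a sketch; `example.png` is a hypothetical local file):

```python
import base64

import requests

# Read a local file and embed it as a base64 data URL.
with open("example.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

data = {
    "model": "zai-org/GLM-4.6V",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }
    ],
    "max_tokens": 300,
}
print(requests.post("http://localhost:30000/v1/chat/completions", json=data).text)
```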
### Video input
```python
import requests

url = "http://localhost:30000/v1/chat/completions"

data = {
    "model": "zai-org/GLM-4.6V",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's happening in this video?"},
                {
                    "type": "video_url",
                    "video_url": {
                        "url": "https://github.com/sgl-project/sgl-test-files/raw/refs/heads/main/videos/jobs_presenting_ipod.mp4"
                    },
                },
            ],
        }
    ],
    "max_tokens": 300,
}

response = requests.post(url, json=data)
print(response.text)
```
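To extract only the generated text from the OpenAI-compatible response:

```python
reply = response.json()["choices"][0]["message"]["content"]
print(reply)
```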
## Important Server Parameters and Flags

When launching the model server for **multimodal support**, the following command-line arguments can be used to fine-tune performance and behavior:

- `--mm-attention-backend`: Specifies the multimodal attention backend, e.g. `fa3` (FlashAttention 3).
- `--mm-max-concurrent-calls <value>`: Sets the **maximum number of concurrent asynchronous multimodal data processing calls** allowed on the server. Use this to control parallel throughput and GPU memory usage during image/video inference.
- `--mm-per-request-timeout <seconds>`: Defines the **timeout (in seconds)** for each multimodal request. A request that exceeds this limit (e.g., a very large video input) is automatically terminated.
- `--keep-mm-feature-on-device`: Instructs the server to **retain multimodal feature tensors on the GPU** after processing. This avoids device-to-host (D2H) memory copies and improves performance for repeated or high-frequency inference workloads.
- `--mm-enable-dp-encoder`: Places the ViT encoder in data parallel while keeping the LLM in tensor parallel, which consistently lowers TTFT and boosts end-to-end throughput.
- `SGLANG_USE_CUDA_IPC_TRANSPORT=1`: Enables shared-memory-pool-based CUDA IPC for multimodal data transport, significantly improving end-to-end latency.
### Example usage with the above optimizations

```bash
SGLANG_USE_CUDA_IPC_TRANSPORT=1 \
SGLANG_VLM_CACHE_SIZE_MB=0 \
python -m sglang.launch_server \
    --model-path zai-org/GLM-4.6V \
    --host 0.0.0.0 \
    --port 30000 \
    --trust-remote-code \
    --tp-size 8 \
    --enable-cache-report \
    --log-level info \
    --max-running-requests 64 \
    --mem-fraction-static 0.65 \
    --chunked-prefill-size 8192 \
    --attention-backend fa3 \
    --mm-attention-backend fa3 \
    --mm-enable-dp-encoder \
    --enable-metrics
```
### Thinking Budget for GLM-4.5V / GLM-4.6V

In SGLang, a thinking budget can be implemented with a `CustomLogitProcessor`.

Launch the server with the `--enable-custom-logit-processor` flag and use `Glm4MoeThinkingBudgetLogitProcessor` in the request, as in the `GLM-4.6` example in [glm45.md](./glm45.md).