
Fix GLM-4.6 tool calls don't support streaming output for arguments i… #13989

Merged
Kangyan-Zhou merged 6 commits into sgl-project:main from cynial:feat/glm-xml2json-tool-call-streaming on Dec 13, 2025

Conversation

@cynial
Contributor

@cynial cynial commented Nov 26, 2025

Motivation

The original glm_moe_detector_old_normal.py implementation was unable to produce character-level streaming output, resulting in a suboptimal user experience.

Modifications

1. Refactored parse_streaming_increment() Method

  • Old Version Problem:

```python
# Old: Complete parsing
result = self.detect_and_parse(
    current_text[: end + len(self.eot_token)], tools=tools
)
```

  • New Version Optimization:

```python
# New: Incremental processing
raw_increment = func_args_raw[self._streamed_raw_length:]
json_increment = self._process_xml_to_json_streaming(
    raw_increment, func_name, tools
)
```
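The idea behind the new version can be illustrated with a minimal sketch (`IncrementalSketch` is a hypothetical stand-in for the detector; the point is that only the suffix beyond `_streamed_raw_length` is ever reprocessed, never the whole buffer):

```python
# Minimal sketch of the incremental idea: instead of re-parsing the whole
# buffered text on every chunk, remember how much raw text was already
# consumed and process only the newly arrived suffix.
class IncrementalSketch:
    def __init__(self):
        self._streamed_raw_length = 0  # raw chars already converted

    def feed(self, func_args_raw: str) -> str:
        # Only the suffix that arrived since the last call is new work.
        raw_increment = func_args_raw[self._streamed_raw_length:]
        self._streamed_raw_length = len(func_args_raw)
        return raw_increment  # a real detector would convert this to JSON

sketch = IncrementalSketch()
print(sketch.feed("<arg_key>width"))            # prints the full first chunk
print(sketch.feed("<arg_key>width</arg_key>"))  # prints only "</arg_key>"
```

This keeps each call O(new text) instead of O(total buffer), which is what makes character-level streaming practical.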

2. Introduced Streaming State Machine

Added a complete state machine to handle real-time XML to JSON conversion:

State Transition Flow:

  • INIT → IN_KEY → WAITING_VALUE → IN_VALUE → BETWEEN → ...
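A hedged sketch of how these states and their forward transitions might be modeled (the tag triggers are assumptions inferred from the `<arg_key>`/`<arg_value>` format; the actual detector implementation differs in detail):

```python
from enum import Enum, auto

class State(Enum):
    INIT = auto()           # before the first <arg_key>
    IN_KEY = auto()         # collecting the parameter name
    WAITING_VALUE = auto()  # key closed, <arg_value> not yet seen
    IN_VALUE = auto()       # collecting the parameter value
    BETWEEN = auto()        # value closed; next <arg_key> or </tool_call>

# Which tag moves the machine forward from each state (illustrative).
TRANSITIONS = {
    (State.INIT, "<arg_key>"): State.IN_KEY,
    (State.IN_KEY, "</arg_key>"): State.WAITING_VALUE,
    (State.WAITING_VALUE, "<arg_value>"): State.IN_VALUE,
    (State.IN_VALUE, "</arg_value>"): State.BETWEEN,
    (State.BETWEEN, "<arg_key>"): State.IN_KEY,
}

state = State.INIT
for tag in ["<arg_key>", "</arg_key>", "<arg_value>", "</arg_value>"]:
    state = TRANSITIONS[(state, tag)]
print(state)  # State.BETWEEN
```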

Flow Comparison Diagrams:

  • Old Version Flow (Batch Parsing)

```mermaid
graph TD
    A[Receive new text] --> B[Append to buffer]
    B --> C{Found </tool_call>?}
    C -->|Yes| D[Call detect_and_parse<br/>Re-parse entire tool call]
    C -->|No| E[Return empty result]
    D --> F[Parse complete XML]
    F --> G[Extract function name and parameters]
    G --> H[Return complete result]
    H --> I[Update tool_id]

    style D fill:#ffcccc
    style F fill:#ffcccc
```
  • New Version Flow (Incremental Streaming Parsing)

```mermaid
graph TD
    A[Receive new text] --> B[Append to buffer]
    B --> C{Found <tool_call>?}
    C -->|No| D[Check if potential start]
    C -->|Yes| E{Tool name sent?}

    E -->|No| F[Send tool name]
    E -->|Yes| G[Get new XML fragment]

    G --> H[State machine processing]
    H --> I{Current state?}

    I -->|INIT/BETWEEN| J[Detect arg_key tag]
    I -->|IN_KEY| K[Collect key name]
    I -->|WAITING_VALUE| L[Detect arg_value tag]
    I -->|IN_VALUE| M[Process value content]

    M --> N{Value type?}
    N -->|string| O[Output quoted JSON string]
    N -->|number| P[Output number]
    N -->|object| Q[Output object/array]

    O --> R[Real-time JSON increment output]
    P --> R
    Q --> R

    R --> S{Encountered </tool_call>?}
    S -->|No| T[Continue waiting]
    S -->|Yes| U[Complete JSON structure]
    U --> V[Update tool_id and reset state]

    style H fill:#ccffcc
    style R fill:#ccffcc
    style N fill:#ffffcc
```

Verification

================================================================================
Test 1: Streaming with complex string containing quotes
================================================================================

--- Increment 1: '<tool_call>img_gen\n'
  Tool name: img_gen

--- Increment 2: '<arg_key>prompt</arg_key>\n'
  Parameters increment: '{"prompt": '

--- Increment 3: '<arg_value>A vibrant cartoon illustration titled "Youth Eco-Tech Team Debugging Smart Sorting Trash Cans".  and teamwork.</arg_value>\n'
  Parameters increment: '"A vibrant cartoon illustration titled \\"Youth Eco-Tech Team Debugging Smart Sorting Trash Cans\\".  and teamwork."'

--- Increment 4: '<arg_key>width</arg_key>\n'
  Parameters increment: ', "width": '

--- Increment 5: '<arg_value>1024</arg_value>\n'
  Parameters increment: '1024'

--- Increment 6: '<arg_key>height</arg_key>\n'
  Parameters increment: ', "height": '

--- Increment 7: '<arg_value>768</arg_value>\n'
  Parameters increment: '768'

--- Increment 8: '</tool_call>'
  Parameters increment: '}'

--- Accumulated JSON: {"prompt": "A vibrant cartoon illustration titled \"Youth Eco-Tech Team Debugging Smart Sorting Trash Cans\".  and teamwork.", "width": 1024, "height": 768}
✅ JSON is valid!
   Parsed: {
  "prompt": "A vibrant cartoon illustration titled \"Youth Eco-Tech Team Debugging Smart Sorting Trash Cans\".  and teamwork.",
  "width": 1024,
  "height": 768
}
✅ Prompt value is correct!

================================================================================
Test 2: Parameter without type definition (should default to string)
================================================================================

--- Increment 1: '<tool_call>unknown_func\n'
  Tool name: unknown_func

--- Increment 2: '<arg_key>param1</arg_key>\n'
  Parameters increment: '{"param1": '

--- Increment 3: '<arg_value>This should be treated as a string</arg_value>\n'
  Parameters increment: '"This should be treated as a string"'

--- Increment 4: '</tool_call>'
  Parameters increment: '}'

--- Accumulated JSON: {"param1": "This should be treated as a string"}
✅ JSON is valid!
   Parsed: {
  "param1": "This should be treated as a string"
}
✅ Parameter correctly treated as string!
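The transcripts above can be reproduced with a minimal, self-contained sketch of the XML-to-JSON conversion (`Xml2JsonSketch` is illustrative only: it assumes each increment carries complete tags, whereas the real detector also handles tags split across chunk boundaries and uses the tool schema for value types):

```python
import json
import re

class Xml2JsonSketch:
    """Toy converter reproducing the Test 2 transcript above.

    Assumes each increment contains complete tags; without a type hint
    from the tool schema, values default to JSON strings.
    """

    def __init__(self):
        self.tool_name = None
        self.first_key = True

    def feed(self, chunk: str) -> str:
        out = []
        if self.tool_name is None:
            m = re.match(r"<tool_call>(\w+)\n?", chunk)
            if m:
                self.tool_name = m.group(1)
                chunk = chunk[m.end():]
        for key in re.findall(r"<arg_key>(.*?)</arg_key>", chunk, re.S):
            prefix = "{" if self.first_key else ", "
            self.first_key = False
            out.append(f"{prefix}{json.dumps(key)}: ")
        for val in re.findall(r"<arg_value>(.*?)</arg_value>", chunk, re.S):
            # Default to string; json.dumps handles quote/backslash escaping.
            out.append(json.dumps(val))
        if "</tool_call>" in chunk:
            out.append("}")
        return "".join(out)

conv = Xml2JsonSketch()
parts = [conv.feed(c) for c in [
    "<tool_call>unknown_func\n",
    "<arg_key>param1</arg_key>\n",
    "<arg_value>This should be treated as a string</arg_value>\n",
    "</tool_call>",
]]
print("".join(parts))
# {"param1": "This should be treated as a string"}
```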

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @cynial, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a limitation in GLM-4.6 tool calls where streaming output for arguments was not supported at a character level, leading to a suboptimal user experience. The changes introduce a robust streaming mechanism that incrementally processes XML tool call fragments and converts them into JSON, enabling real-time output and improving the responsiveness of tool argument parsing.

Highlights

  • Refactored parse_streaming_increment() Method: The method was optimized from complete batch parsing to incremental processing, allowing for character-level streaming output.
  • Introduced Streaming State Machine: A comprehensive state machine was added to handle real-time XML to JSON conversion, managing states like INIT, IN_KEY, WAITING_VALUE, IN_VALUE, and BETWEEN.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant and well-executed refactoring to enable character-level streaming for GLM-4.6 tool call arguments. The implementation of a state machine for converting the XML-like format to JSON is a robust solution for incremental parsing. The code quality is high, with improved modularity through new helper functions, better error handling with specific exceptions, and comprehensive docstrings. The associated test cases have also been effectively updated to validate the new streaming behavior. I have one suggestion for a minor performance optimization.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@Kangyan-Zhou
Collaborator

/tag-and-rerun-ci

@JustinTong0323 JustinTong0323 self-assigned this Nov 27, 2025
@cynial
Contributor Author

cynial commented Dec 1, 2025

@JustinTong0323 Hi, could you please review this PR when you have a chance? Thanks

@cynial
Contributor Author

cynial commented Dec 12, 2025

@CatherineSue @JustinTong0323 Let me know if there's anything else I should do, or if you'd like me to make any other changes. I've been running this patch in production for a week now, and everything seems to be working well. If anything comes up, I'll post updates here.

@Kangyan-Zhou Kangyan-Zhou merged commit 8055459 into sgl-project:main Dec 13, 2025
90 of 96 checks passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 13, 2025
…n_eagle3_npu

* 'main' of https://github.com/sgl-project/sglang: (25 commits)
  [NPU] perf update with kvcache nz & w4a8 quant (sgl-project#14423)
  [PP Prefill][NIXL] Fix PP mode transfer completion tracking to wait for all ranks (sgl-project#15027)
  Fix GLM-4.6 tool calls don't support streaming output for arguments i… (sgl-project#13989)
  feature: adding nightly wheel workflow and indexer (sgl-project#14924)
  [diffusion] feat: Improve LoRA compatibility by adding unified format detection and diffusers-based normalization (sgl-project#14659)
  [Fix] Disable trtllm moe backend for draft model for a qucik fix (sgl-project#15002)
  [diffusion] fix: use NDRotaryEmbedding in flux_2   (sgl-project#15034)
  Mistral Large 3 NVFP4 support (sgl-project#14485)
  call check_quantized_moe_compatibility after initialize (sgl-project#13876)
  Add sgl_router_attempt_http_responses_total for single attempt information (sgl-project#15037)
  Add error code in prometheus metrics and add X-SMG-Error-Code header (sgl-project#15036)
  Provide more fine grained error reason for reqwest error (sgl-project#15032)
  Tiny change http router response format to unify (sgl-project#15031)
  Tiny unify grpc existing error responses into new format (sgl-project#15030)
  Add `code` field and unify error responses for router (sgl-project#15028)
  Super tiny remove unused log_request (sgl-project#15035)
  Fix decode OOM caused by retraction (sgl-project#14939)
  [CI]Add gb200 runner back (sgl-project#15024)
  Add a special label for b200 CI runner that can run kernel tests (sgl-project#15033)
  Fix regression caused by fa3 block_table (sgl-project#15009)
  ...

# Conflicts:
#	python/sglang/srt/hardware_backend/npu/attention/ascend_backend.py
@gaoganlsz
Copy link

@cynial Can the content of <arg_value> be modified to return a stream? Because in scenarios like file editing tools or file creation tools, the content that <arg_value> needs to return is often very large, leading to long wait times and a poor user experience. If streaming returns could be supported, we could show the user the process of file creation, for example:
--- Increment 3: '<arg_value>A vibrant'
Parameters increment: '"A vibrant'
--- Increment 4: 'cartoon illustration'
Parameters increment: 'cartoon illustration'
--- Increment 5: 'titled "Youth Eco-Tech'
Parameters increment: '"titled "Youth Eco-Tech'
--- Increment 6: 'Team Debugging'
Parameters increment: 'Team Debugging'
--- Increment 7: 'Smart Sorting'
Parameters increment: '"Smart Sorting'
--- Increment 8: 'Trash Cans'
Parameters increment: 'Trash Cans'
--- Increment 9: '". and teamwork.</arg_value>\n'
Parameters increment: '"". and teamwork."'
--- Increment 10: '<arg_key>width</arg_key>\n'
Parameters increment: ', "width": '
--- Increment 11: '<arg_value>1024</arg_value>\n'
Parameters increment: '1024'
--- Increment 12: '<arg_key>height</arg_key>\n'
Parameters increment: ', "height": '
--- Increment 13: '<arg_value>768</arg_value>\n'
Parameters increment: '768'
--- Increment 14: '</tool_call>'
Parameters increment: '}'
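A hedged sketch of what streaming inside `<arg_value>` could look like (`stream_string_value` is a hypothetical helper, not part of the detector): the tricky part is opening the JSON quote once, escaping each piece as it arrives, and closing the quote only when the value ends.

```python
import json

def stream_string_value(chunks):
    """Emit JSON string increments for a value that arrives in pieces.

    Illustrative only: the real detector would also need to buffer
    partially received closing tags before deciding the value is done.
    """
    yield '"'
    for piece in chunks:
        # json.dumps escapes quotes/backslashes; strip its outer quotes
        # so increments concatenate into one valid JSON string.
        yield json.dumps(piece)[1:-1]
    yield '"'

increments = list(stream_string_value(['A vibrant ', 'cartoon "art"']))
print("".join(increments))  # prints "A vibrant cartoon \"art\""
```

Concatenating all increments always yields a valid JSON string, so clients can render the value progressively and still parse the final result.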

@cynial
Contributor Author

cynial commented Dec 15, 2025

@gaoganlsz You can try this patch; it should work as you expect (perhaps my test case wasn't clear enough).

Prozac614 pushed a commit to Prozac614/sglang that referenced this pull request Dec 17, 2025
sgl-project#13989)

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026
sgl-project#13989)

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>