[Frontend] [Bugfix] Refactor tool parsers and simplify the tool parsing interface. #11554
elementary-particle wants to merge 1 commit into vllm-project:main
Conversation
…for streaming outputs. Only `delta_token_ids` and `delta_text` are used in the streaming code, which avoids overhead and errors. Note that finish reasons other than end-of-stream aren't handled yet.

Signed-off-by: elementary-particle <quantum.field@outlook.com>
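A minimal sketch of the reduced streaming interface described above. The class and method names here are hypothetical illustrations, not the PR's actual code; the point is that each step receives only the delta, not the full generation history:

```python
# Hypothetical sketch: a streaming tool parser that is fed only the
# per-step delta (delta_text, delta_token_ids) instead of the full
# previous/current text and token-id histories.
from dataclasses import dataclass


@dataclass
class StreamingToolParser:
    """Accumulates deltas; a real parser would feed each fragment to
    an incremental JSON parser rather than just buffering it."""
    buffer: str = ""

    def parse_delta(self, delta_text: str, delta_token_ids: list[int]) -> str:
        # Only the newly generated fragment is needed per step, so the
        # parser never re-scans the full output on every token.
        self.buffer += delta_text
        return delta_text


p = StreamingToolParser()
for chunk in ['{"name": "get_w', 'eather", "argu', 'ments": {}}']:
    p.parse_delta(chunk, [])
```

Keeping the per-step signature down to the delta is what removes the repeated full-text scans the PR description calls out as overhead.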
@elementary-particle I have been testing your version of the Hermes tool parser with ijson, and it is working nicely, great job! It solved the issue I was facing in #11279. I tested it on v0.7.0 and in V1 mode. I also occasionally hit a JSON-decode issue with the arguments in the postprocessing step, but the small modification proposed by wangluyi fixed it: #9874 (comment). I pushed a v0.7.0 Docker image with ijson installed and your commit 2f77b7b here: https://hub.docker.com/repository/docker/marcelodiaz/vllm-openai-hermes-fix.

07-02 EDIT: While testing a few models, I discovered that the issue I was facing may not be directly related to the Hermes tool parser; instead, it happens whenever I set the `tool_choice` parameter to any value other than "auto", and it is reproducible with commit 2f77b7b of this PR. The issue was introduced in v0.6.5; I assume it is related to the guided decoding used to implement `tool_choice`. Nevertheless, the Hermes tool parser in this PR works nicely and looks much cleaner than the one currently on the main branch.
Hi @elementary-particle,

It would be really awesome if this could be merged into main 🙏. I don't know what's blocking it, but in this PR's current state the strange (malformed) function calls we receive during streaming are resolved. I'm surprised this isn't affecting a lot more people; is no one else using Qwen with streaming + function calling?
This pull request has merge conflicts that must be resolved before it can be merged.
@Endebert Same issue here when trying to use vLLM + Qwen with LangGraph. The args in tool_calls become None, which makes the whole chain fail to use tools.
I ran into a number conversion problem when using this feature, and here's how I fixed it:

```python
import decimal

def decimal_default(obj):
    # Convert decimal.Decimal (as produced by ijson for JSON numbers)
    # into a plain float so the stdlib json encoder can serialize it.
    if isinstance(obj, decimal.Decimal):
        return float(obj)
    raise TypeError
```

and change
Hi @wangtingshuai, thank you for your input; I have incorporated it.
Closing this PR as stale. It has had unresolved merge conflicts since April 2025 with no author activity. A continuation was attempted in #16096, but that was also closed without merging. The tool parser codebase has evolved significantly since then, with many individual fixes merged, so this would need to be reimplemented against the current codebase. If the broader refactoring is still desired, please open a fresh PR.
This is the PR for RFC #11522. It builds a draft of simpler tool parsers using streaming JSON parsing libraries to reduce overhead and avoid bugs. Tests and commits will be added gradually.
FIX #11392.
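As a rough illustration of the incremental-parsing idea behind this refactor: the PR itself relies on a dedicated streaming JSON library (ijson, per the discussion above), but the core concept can be sketched with only the stdlib by repeatedly attempting `json.JSONDecoder.raw_decode` on a growing buffer and emitting each value as soon as it is complete. This is a conceptual sketch, not the PR's implementation:

```python
import json


class IncrementalJSONParser:
    """Feed text fragments; returns each complete top-level JSON value
    as soon as enough text has arrived. A stdlib-only sketch of
    streaming parsing; the PR uses a dedicated library (ijson)."""

    def __init__(self):
        self._buf = ""
        self._dec = json.JSONDecoder()

    def feed(self, fragment: str) -> list:
        self._buf += fragment
        results = []
        while True:
            stripped = self._buf.lstrip()
            if not stripped:
                break
            try:
                obj, end = self._dec.raw_decode(stripped)
            except json.JSONDecodeError:
                break  # value still incomplete; wait for more text
            results.append(obj)
            self._buf = stripped[end:]
        return results


p = IncrementalJSONParser()
out = []
# Fragments arrive as a model streams them, split mid-token.
for chunk in ['{"name": "get_wea', 'ther"} {"x": ', '1}']:
    out.extend(p.feed(chunk))
```

The sketch re-scans the buffer on failed attempts, which a true event-based parser like ijson avoids; it is only meant to show why a streaming parser lets tool calls be emitted before generation finishes.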