fix(openai): handle server error chunks in streaming responses #7663
Closed
fresh3nough wants to merge 1 commit into
Conversation
Force-pushed from 0089069 to d67a313
When an OpenAI-compatible server (e.g. llama.cpp) returns an error during streaming, it sends a JSON chunk with an 'error' field instead of the expected 'choices' field. Previously, the StreamingChunk deserialization would fail with 'missing field choices', producing a confusing error. This is particularly triggered by subagents/summon, which create concurrent streaming sessions that can overwhelm local LLM servers.

Changes:
- Add #[serde(default)] to StreamingChunk.choices so error-only chunks can be deserialized
- Add optional 'error' field to StreamingChunk to capture server errors
- Add check_streaming_error() that propagates server errors with clear messages including the original error code, type, and message
- Check for errors in both the main streaming loop and the inner tool-call accumulation loop

Fixes aaif-goose#7645

Signed-off-by: Ubuntu <ubuntu@ip-172-31-31-131.us-east-2.compute.internal>
Signed-off-by: fre <anonwurcod@proton.me>
Force-pushed from d67a313 to 0583ef7
Collaborator
Thanks for the fix @fresh3nough! Unfortunately this has been superseded by #8031, which landed the same fix with a slightly different approach.
Problem
When an OpenAI-compatible server (e.g. llama.cpp) returns an error during streaming, it sends a JSON chunk with an `error` field instead of the expected `choices` field. The `StreamingChunk` deserialization fails with `missing field choices`, producing a confusing error. This is particularly triggered by subagents/summon, which create concurrent streaming sessions that can overwhelm local LLM servers (a regression since the summon extension was introduced).
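For illustration, here is a minimal, self-contained sketch of the failure mode. The struct is a simplified stand-in rather than goose's actual `StreamingChunk` definition, and the error payload is a made-up example of what a server like llama.cpp might send:

```rust
use serde::Deserialize;

// Simplified stand-in for the real type; the field layout is illustrative.
#[derive(Debug, Deserialize)]
struct StreamingChunk {
    choices: Vec<serde_json::Value>,
}

fn main() {
    // A normal streaming chunk deserializes fine...
    let ok = r#"{"choices":[{"delta":{"content":"hi"}}]}"#;
    assert!(serde_json::from_str::<StreamingChunk>(ok).is_ok());

    // ...but an error-only chunk has no `choices` field, so serde fails
    // with "missing field `choices`" -- hiding the server's real error.
    let bad = r#"{"error":{"code":500,"type":"server_error","message":"slot unavailable"}}"#;
    let err = serde_json::from_str::<StreamingChunk>(bad).unwrap_err();
    assert!(err.to_string().contains("missing field `choices`"));
}
```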
Fix
- Add `#[serde(default)]` to `StreamingChunk.choices` so error-only chunks can be deserialized without crashing
- Add an optional `error` field to `StreamingChunk` to capture server error responses
- Add a `check_streaming_error()` helper that propagates server errors with clear messages, including the original error code, type, and message

This follows the same pattern used by the Google provider (`formats/google.rs`, lines 469-479), which already handles streaming errors correctly.
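For concreteness, here is a minimal sketch of what that shape could look like. The field and helper names follow this description, but the types are simplified stand-ins, not the actual definitions in goose's `formats/openai.rs`:

```rust
use serde::Deserialize;

// Simplified stand-ins; the real goose types carry more fields.
#[derive(Debug, Deserialize)]
struct StreamingChunk {
    // With #[serde(default)], an error-only chunk (no `choices` key)
    // deserializes to an empty Vec instead of failing outright.
    #[serde(default)]
    choices: Vec<serde_json::Value>,
    // Captures the server's error payload when one is present.
    error: Option<StreamingError>,
}

#[derive(Debug, Deserialize)]
struct StreamingError {
    #[serde(default)]
    code: Option<serde_json::Value>,
    #[serde(default, rename = "type")]
    error_type: Option<String>,
    #[serde(default)]
    message: Option<String>,
}

// Propagates a server-sent error with its original code, type, and
// message, so the user sees the real cause instead of a serde failure.
fn check_streaming_error(chunk: &StreamingChunk) -> Result<(), String> {
    if let Some(err) = &chunk.error {
        return Err(format!(
            "server returned an error during streaming: code={:?}, type={:?}, message={:?}",
            err.code, err.error_type, err.message
        ));
    }
    Ok(())
}
```

With this shape, both the main streaming loop and the inner tool-call accumulation loop can call `check_streaming_error()` on each decoded chunk and bail out early with the server's own code, type, and message, which is what the commit above does.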
Testing

- Existing `formats::openai::tests` pass (no regressions)
- `test_streaming_error_chunk_returns_server_error` - reproduces the exact error from "Stream decode error when using subagents/summon extension (regression in v1.25.0+)" #7645 (a hedged sketch of this test appears at the end of this description)
- `test_streaming_error_chunk_during_tool_calls` - error mid-tool-call accumulation
- `test_streaming_error_chunk_with_no_choices_no_crash` - rate limit errors
- `cargo clippy` and `cargo fmt` clean

Fixes #7645
Related: #7364, #7570
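For reference, a hypothetical reconstruction of the first test listed above, written against the sketched types from the Fix section (the real test in this PR will differ in its payload and assertions):

```rust
#[test]
fn test_streaming_error_chunk_returns_server_error() {
    // An error-only chunk: no `choices`, only `error` (payload made up).
    let raw = r#"{"error":{"code":500,"type":"server_error","message":"failed to process request"}}"#;
    let chunk: StreamingChunk =
        serde_json::from_str(raw).expect("error-only chunk should still deserialize");
    // `choices` falls back to its default (empty), and the server
    // error is surfaced with its original details.
    assert!(chunk.choices.is_empty());
    let msg = check_streaming_error(&chunk).unwrap_err();
    assert!(msg.contains("failed to process request"));
}
```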