fix(openai): guard against AttributeError on LegacyAPIResponse in streaming helpers #5913
Closed
NIK-TIGER-BILL wants to merge 1 commit into getsentry:master from
Conversation
… _iterator

When third-party libraries (e.g. LiteLLM) use the openai client with openai >= 2.x, the streaming response object can be a LegacyAPIResponse instead of a Stream. LegacyAPIResponse has no _iterator attribute, which causes the OpenAIIntegration to raise an unhandled AttributeError and breaks the caller's streaming.

Guard both _set_streaming_completions_api_output_data and _set_streaming_responses_api_output_data: if the response object does not have _iterator, close the span and return early so that the original (unmodified) response is returned to the caller.

Fixes getsentry#5890

Signed-off-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
This PR has been automatically closed. The referenced issue does not show a discussion between you and a maintainer. To avoid wasted effort on both sides, please discuss your proposed approach in the issue first and wait for a maintainer to respond before opening a PR. Please review our contributing guidelines for more details.
Semver Impact of This PR: 🟢 Patch (bug fixes)

📋 Changelog Preview (this is how your changes will appear in the changelog):

New Features ✨: Langchain
Bug Fixes 🐛: Ci, Openai, Other
Documentation 📚
Internal Changes 🔧: Langchain, Openai, Other
Other

🤖 This preview updates automatically when you update the PR.
Summary
Fixes #5890
Problem
When a third-party library (LiteLLM, and potentially others) uses the `openai` Python SDK through sentry-python's `OpenAIIntegration`, and the OpenAI library returns a `LegacyAPIResponse` object instead of a `Stream` (observed with `openai >= 2.x`), both `_set_streaming_completions_api_output_data` and `_set_streaming_responses_api_output_data` raise, and the exception propagates unhandled to the caller.
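The failure has the following shape (an illustrative message; the exact traceback depends on the call path):

```
AttributeError: 'LegacyAPIResponse' object has no attribute '_iterator'
```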
This breaks streaming completely when Sentry is initialised. The only workaround is to explicitly disable `OpenAIIntegration`.

Root Cause
Both streaming helper functions unconditionally access `response._iterator`:
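A simplified sketch of the pattern in both helpers; this is illustrative, not the verbatim integration code, and the signature is reduced (the real helpers also accumulate the streamed chunks onto the span):

```python
def _set_streaming_completions_api_output_data(span, response):
    # Fine for Stream / AsyncStream, but LegacyAPIResponse has no
    # _iterator attribute, so this attribute access raises AttributeError.
    old_iterator = response._iterator

    def new_iterator():
        for event in old_iterator:
            # ... capture the streamed chunk onto the span ...
            yield event

    response._iterator = new_iterator()
```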
Fix

Add a `hasattr(response, "_iterator")` guard at the start of both functions. When the attribute is absent, the span is closed and the function returns early, leaving the original (unmodified) response untouched so the caller can iterate it normally. This is a purely defensive change: it does not affect the normal `Stream`/`AsyncStream` path.
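A sketch of the guard, following the behaviour described above; the span-closing call shown is illustrative (`span.finish()` is one way to close a span in the SDK, and the PR's exact call may differ):

```python
def _set_streaming_completions_api_output_data(span, response):
    if not hasattr(response, "_iterator"):
        # LegacyAPIResponse (seen e.g. when LiteLLM drives openai >= 2.x):
        # close the span and return early, leaving the response untouched.
        span.finish()
        return
    # ... normal Stream / AsyncStream instrumentation continues here ...
```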
Testing

Reproducer (requires `sentry-sdk`, `litellm`, `openai >= 2.x`):
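A minimal reproducer sketch along these lines; the DSN and model name are placeholders:

```python
import sentry_sdk
import litellm

sentry_sdk.init(dsn="https://public@example.ingest.sentry.io/0")  # placeholder DSN

# LiteLLM drives the openai client; with openai >= 2.x the streaming
# response can be a LegacyAPIResponse rather than a Stream.
response = litellm.completion(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Say hello"}],
    stream=True,
)

# Before the fix, iterating raised:
# AttributeError: 'LegacyAPIResponse' object has no attribute '_iterator'
for chunk in response:
    print(chunk)
```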