
fix(openai): guard against AttributeError on LegacyAPIResponse in streaming helpers #5913

Closed

NIK-TIGER-BILL wants to merge 1 commit into getsentry:master from NIK-TIGER-BILL:fix/openai-integration-stream-no-iterator-attribute

Conversation

@NIK-TIGER-BILL

Summary

Fixes #5890

Problem

When a third-party library (LiteLLM, and potentially others) uses the openai Python SDK with sentry-python's OpenAIIntegration enabled, and the SDK returns a LegacyAPIResponse object instead of a Stream (observed with openai >= 2.x), both _set_streaming_completions_api_output_data and _set_streaming_responses_api_output_data raise:

AttributeError: 'LegacyAPIResponse' object has no attribute '_iterator'

This exception propagates to the caller as:

litellm.InternalServerError: OpenAIException - 'LegacyAPIResponse' object has no attribute '_iterator'

This breaks streaming completely whenever Sentry is initialised; the only workaround is to disable OpenAIIntegration explicitly.

Root Cause

Both streaming helper functions unconditionally access response._iterator:

old_iterator = response._iterator  # AttributeError when response is LegacyAPIResponse
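
A minimal stand-in (a hypothetical class, not the real openai type) reproduces the attribute access failure in isolation:

class FakeLegacyAPIResponse:
    # Stand-in for openai's LegacyAPIResponse: defines no _iterator attribute.
    pass

old_iterator = FakeLegacyAPIResponse()._iterator  # raises AttributeError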

Fix

Add a hasattr(response, "_iterator") guard at the start of both functions. When the attribute is absent, the span is closed and the function returns early, leaving the original response untouched so the caller can iterate it normally.

if not hasattr(response, "_iterator"):
    if finish_span:
        span.__exit__(None, None, None)
    return

This is a purely defensive change — it does not affect the normal Stream/AsyncStream path.
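
For context, a simplified sketch of where the guard sits in each helper (illustrative only; the real functions in sentry-python also record chunk contents and token counts on the span):

def _set_streaming_completions_api_output_data(span, response, finish_span):
    # LegacyAPIResponse (or any other non-Stream object) has no _iterator,
    # so leave the response untouched and let the caller iterate it normally.
    if not hasattr(response, "_iterator"):
        if finish_span:
            span.__exit__(None, None, None)
        return

    old_iterator = response._iterator  # safe: only Stream/AsyncStream reach here

    def new_iterator():
        for chunk in old_iterator:
            # ... accumulate output data on the span ...
            yield chunk
        if finish_span:
            span.__exit__(None, None, None)

    response._iterator = new_iterator()

The same guard is applied in _set_streaming_responses_api_output_data.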

Testing

Reproducer (requires sentry-sdk, litellm, openai >= 2.x):

import sentry_sdk
import litellm

sentry_sdk.init(dsn="...")

response = litellm.completion(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)
for chunk in response:
    print(chunk)  # Previously raised InternalServerError
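
A litellm-free reproducer may also be possible through the openai SDK's raw-response interface, which returns a LegacyAPIResponse wrapper directly (untested sketch; assumes the integration's patched create call sees the wrapper):

import sentry_sdk
from openai import OpenAI

sentry_sdk.init(dsn="...")  # OpenAIIntegration is enabled by default

client = OpenAI()
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4.1-nano",
    messages=[{"role": "user", "content": "hello"}],
    stream=True,
)
# raw is a LegacyAPIResponse; parse() returns the underlying Stream
for chunk in raw.parse():
    print(chunk)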

… _iterator

When third-party libraries (e.g. LiteLLM) use the openai client with
openai >= 2.x, the streaming response object can be a LegacyAPIResponse
instead of a Stream.  LegacyAPIResponse has no _iterator attribute, which
causes the OpenAIIntegration to raise an unhandled AttributeError and
breaks the caller's streaming.

Guard both _set_streaming_completions_api_output_data and
_set_streaming_responses_api_output_data: if the response object does
not have _iterator, close the span and return early so that the original
(unmodified) response is returned to the caller.

Fixes getsentry#5890

Signed-off-by: NIK-TIGER-BILL <nik.tiger.bill@github.com>
sdk-maintainer-bot added the missing-maintainer-discussion and violating-contribution-guidelines labels on Mar 30, 2026
@sdk-maintainer-bot

This PR has been automatically closed. The referenced issue does not show a discussion between you and a maintainer.

To avoid wasted effort on both sides, please discuss your proposed approach in the issue first and wait for a maintainer to respond before opening a PR.

Please review our contributing guidelines for more details.

@github-actions

github-actions bot commented Mar 30, 2026

Semver Impact of This PR

🟢 Patch (bug fixes)

📋 Changelog Preview

This is how your changes will appear in the changelog.
Entries from this PR are highlighted with a left border (blockquote style).


New Features ✨

Langchain

  • Set gen_ai.operation.name and gen_ai.pipeline.name on LLM spans by ericapisani in #5849
  • Broaden AI provider detection beyond OpenAI and Anthropic by ericapisani in #5707
  • Update LLM span operation to gen_ai.generate_text by ericapisani in #5796

Bug Fixes 🐛

Ci

  • Use gh CLI to convert PR to draft by stephanie-anderson in #5874
  • Use GitHub App token for draft PR enforcement by stephanie-anderson in #5871

Openai

  • Guard against AttributeError on LegacyAPIResponse in streaming helpers by NIK-TIGER-BILL in #5913
  • Always set gen_ai.response.streaming for Responses by alexander-alderman-webb in #5697
  • Simplify Responses input handling by alexander-alderman-webb in #5695
  • Use max_output_tokens for Responses API by alexander-alderman-webb in #5693
  • Always set gen_ai.response.streaming for Completions by alexander-alderman-webb in #5692
  • Simplify Completions input handling by alexander-alderman-webb in #5690
  • Simplify embeddings input handling by alexander-alderman-webb in #5688

Other

  • (google-genai) Guard response extraction by alexander-alderman-webb in #5869
  • (workflow) Fix permission issue with github app and PR draft graphql endpoint by Jeffreyhung in #5887

Documentation 📚

  • Update CONTRIBUTING.md with contribution requirements and TOC by stephanie-anderson in #5896

Internal Changes 🔧

Langchain

  • Add text completion test by alexander-alderman-webb in #5740
  • Add tool execution test by alexander-alderman-webb in #5739
  • Add basic agent test with Responses call by alexander-alderman-webb in #5726
  • Replace mocks with httpx types by alexander-alderman-webb in #5724
  • Consolidate span origin assertion by alexander-alderman-webb in #5723
  • Consolidate available tools assertion by alexander-alderman-webb in #5721

Openai

  • Replace mocks with httpx types for streaming Responses by alexander-alderman-webb in #5882
  • Replace mocks with httpx types for streaming Completions by alexander-alderman-webb in #5879
  • Move input handling code into API-specific functions by alexander-alderman-webb in #5687

Other

  • (ai) Rename generate_text to text_completion by ericapisani in #5885
  • (asyncpg) Normalize query whitespace in integration by ericapisani in #5855
  • Merge PR validation workflows and add reason-specific labels by stephanie-anderson in #5898
  • Add workflow to close unvetted non-maintainer PRs by stephanie-anderson in #5895
  • Exclude compromised litellm versions by alexander-alderman-webb in #5876
  • Reactivate litellm tests by alexander-alderman-webb in #5853
  • Add note to coordinate with assignee before PR submission by sentrivana in #5868
  • Temporarily stop running litellm tests by alexander-alderman-webb in #5851

Other

  • ci+docs: Add draft PR enforcement by stephanie-anderson in #5867

🤖 This preview updates automatically when you update the PR.


Development

Successfully merging this pull request may close these issues.

OpenAIIntegration breaks litellm streaming — LegacyAPIResponse wraps Stream objects
