fix(otel): guard against None message in set_attributes tool_calls#20339

Merged
krrishdholakia merged 584 commits into litellm_oss_staging_01_24_2026 from litellm_otel_fix_none_message on Feb 5, 2026

Conversation

Harshit28j (Collaborator) commented Feb 3, 2026

Relevant issues

fixes AttributeError: 'NoneType' object has no attribute 'get'
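For context, the crash class here is subtle: Python's dict.get(key, default) only falls back to the default when the key is absent, not when the key is present with a None value. A minimal standalone illustration (the choice dict below is a simplified stand-in for a real response choice):

```python
# dict.get's default only applies when the key is MISSING, not when its
# value is None -- so a choice carrying "message": None still crashes a
# naive chain like choice.get("message", {}).get("tool_calls").
choice = {"finish_reason": "stop", "message": None}

print(choice.get("message", {}))    # None: key exists, so the default is unused
print(choice.get("message") or {})  # {}: `or` also converts None to the fallback
```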

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement (see details)
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable with minor issues.
  • 45-49 passing tests: acceptable, but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:
  • CI run for the last commit
    Link:
  • Merge / cherry-pick CI run
    Links:

Type

🐛 Bug Fix

Changes

(image: screenshot of the code change)

Sameerlite and others added 30 commits January 30, 2026 16:06
fix(gemini): support file retrieval in GoogleAIStudioFilesHandle
…t-correctly-extracted

fix(ResponseAPILoggingUtils): extract input tokens details as dict
…ens-for-gpt-5.2-codex

Fix `max_input_tokens` for `gpt-5.2-codex`
…_support

feat: add /delete endpoint support for gemini
Fix: Batch and File user level permissions
[Feat]Add cost tracking and usage object in aretrieve_batch call type
Add routing of xai chat completions to responses when web search options is present
Add disable flag for anthropic gemini cache translation
…ng-scope-2026-01-05,

Fix: remove unsupported prompt-caching-scope-2026-01-05 header for vertex ai
[Feature] UI - Usage: Model Breakdown Per Key
…e, and Braintrust integrations (#19707)

* Add LangSmith mock client support

- Create langsmith_mock_client.py following GCS and Langfuse patterns
- Add mock mode detection via LANGSMITH_MOCK environment variable
- Intercept LangSmith API calls via AsyncHTTPHandler.post patching
- Add verbose logging throughout mock implementation
- Update LangsmithLogger to initialize mock client when mock mode enabled
- Supports configurable mock latency via LANGSMITH_MOCK_LATENCY_MS

* Add Datadog mock client support

- Create datadog_mock_client.py following GCS, Langfuse, and LangSmith patterns
- Add mock mode detection via DATADOG_MOCK environment variable
- Intercept Datadog API calls via AsyncHTTPHandler.post and httpx.Client.post patching
- Add verbose logging throughout mock implementation
- Update DataDogLogger and DataDogLLMObsLogger to initialize mock client when mock mode enabled
- Supports both async and sync logging paths
- Supports configurable mock latency via DATADOG_MOCK_LATENCY_MS

* refactor: consolidate mock client logic into factory pattern

- Create mock_client_factory.py to centralize common mock HTTP client logic
- Refactor GCS, Langfuse, LangSmith, and Datadog mock clients to use factory
- Improve GET/DELETE mock accuracy for GCS (return valid StandardLoggingPayload)
- Fix DELETE mock to return empty body (204 No Content) instead of JSON
- Reduce code duplication across integration mock clients

* feat: add PostHog mock client support

- Create posthog_mock_client.py using factory pattern
- Integrate mock client into PostHogLogger with mock mode detection
- Add verbose logging for mock mode initialization and batch operations
- Enable mock mode via POSTHOG_MOCK environment variable

* Add Helicone mock client support

- Created helicone_mock_client.py using factory pattern (similar to GCS)
- Integrated mock mode detection and initialization in HeliconeLogger
- Mock client patches HTTPHandler.post to intercept Helicone API calls
- Uses factory pattern for should_use_mock and MockResponse utilities
- Custom HTTPHandler.post patching required since HTTPHandler uses self.client.send()

* Add mock support for Braintrust integration and extend mock client factory

- Add braintrust_mock_client.py with mock HTTP client for Braintrust integration testing
- Integrate mock client into BraintrustLogger with mock mode detection
- Refactor Helicone mock client to fully utilize factory's HTTPHandler.post patching
- Extend mock_client_factory to support patching HTTPHandler.post for sync calls
- Enable endpoint-specific mock responses for Braintrust (/project vs /project_logs)
- All mock clients now properly handle both async (AsyncHTTPHandler) and sync (HTTPHandler) calls

* Fix linter errors: remove unused imports and suppress complexity warning

- Remove unused imports from gcs_bucket_mock_client.py (httpx, json, timedelta, Dict, Optional)
- Remove unused Callable import from mock_client_factory.py
- Add noqa comment to suppress PLR0915 complexity warning for create_mock_client_factory function

* Document mock environment variables for PostHog, Helicone, Braintrust, Datadog, and Langsmith integrations

- Add POSTHOG_MOCK and POSTHOG_MOCK_LATENCY_MS documentation
- Add HELICONE_MOCK and HELICONE_MOCK_LATENCY_MS documentation
- Add BRAINTRUST_MOCK and BRAINTRUST_MOCK_LATENCY_MS documentation
- Add DATADOG_MOCK and DATADOG_MOCK_LATENCY_MS documentation
- Add LANGSMITH_MOCK and LANGSMITH_MOCK_LATENCY_MS documentation

All mock env vars follow the same pattern: enable mock mode for integration testing by intercepting API calls and returning mock responses without making actual network calls.
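The shared env-var pattern described above can be sketched as follows. This is a hedged illustration only: the variable names (LANGSMITH_MOCK, LANGSMITH_MOCK_LATENCY_MS) come from the commit messages, but the helper functions here are illustrative, not LiteLLM's actual factory API.

```python
import os

def should_use_mock(env_var: str) -> bool:
    # Treat common truthy strings as "mock mode enabled".
    return os.getenv(env_var, "").strip().lower() in ("1", "true", "yes")

def mock_latency_ms(env_var: str) -> int:
    # Fall back to 0 ms if the variable is unset or not an integer.
    try:
        return int(os.getenv(env_var, "0"))
    except ValueError:
        return 0

# Example: enabling mock mode for the LangSmith integration.
os.environ["LANGSMITH_MOCK"] = "true"
os.environ["LANGSMITH_MOCK_LATENCY_MS"] = "250"
print(should_use_mock("LANGSMITH_MOCK"),
      mock_latency_ms("LANGSMITH_MOCK_LATENCY_MS"))  # True 250
```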

* Fix security issue
* Add /realtime API benchmarks to Benchmarks documentation

- Added new section showing performance improvements for /realtime endpoint
- Included before/after metrics showing 182× faster p99 latency
- Added test setup specifications and key optimizations
- Referenced from v1.80.5-stable release notes

Co-authored-by: ishaan <ishaan@berri.ai>

* Update /realtime benchmarks to show current performance only

- Removed before/after comparison, showing only current metrics
- Clarified that benchmarks are e2e latency against fake realtime endpoint
- Simplified table format for better readability

Co-authored-by: ishaan <ishaan@berri.ai>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: ishaan <ishaan@berri.ai>

* Add async_post_call_response_headers_hook to CustomLogger (#20070)

Allow CustomLogger callbacks to inject custom HTTP response headers
into streaming, non-streaming, and failure responses via a new
async_post_call_response_headers_hook method.

* async_post_call_response_headers_hook

---------

Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
…EGISTRY.collect() in PrometheusServicesLogger (#20087)
This reverts commits:
- 437e9e2 fix drawer
- 61bb51d complete v2 viewer
- 2014bcf fixes ui
- 5f07635 fix ui
- f07ef8a refactored code
- 8b7a925 v0 - looks decen view

Will create a new clean PR with the original changes.
yuneng-jiang and others added 19 commits February 2, 2026 20:09
[Feature] UI - Default Team Settings: Migrate Default Team Settings to use Reusable Model Select
[Feature] UI - Navbar: Option to Hide Community Engagement Buttons
…icd_release

Revert "Litellm tuesday cicd release"

vercel bot commented Feb 3, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm · Deployment: Ready · Actions: Preview, Comment · Updated (UTC): Feb 3, 2026 0:32am



greptile-apps bot commented Feb 3, 2026

Greptile Overview

Greptile Summary

This PR adds defensive null guards in the OpenTelemetry integration's set_attributes method to prevent AttributeError when processing response choices with None messages.

Key changes:

  • Line 1627: Changed choice.get("message") to choice.get("message") or {} to default to empty dict
  • Line 1628: Changed message.get("tool_calls") to message.get("tool_calls") or [] to default to empty list

The fix is simple and effective - when a choice has a finish_reason but a None message (which can occur in certain edge cases), the code now safely handles this by using empty defaults instead of crashing. The _tool_calls_kv_pair method can safely handle an empty list, and the if tool_calls: check will correctly skip empty lists.
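The two guarded lines can be sketched in isolation as follows. This is a simplified illustration: the real change sits inside OpenTelemetry.set_attributes in litellm/integrations/opentelemetry.py, and extract_tool_calls is an illustrative helper name, not LiteLLM's API.

```python
# Guarded extraction: `or` supplies a safe fallback for both the missing-key
# case and the present-but-None case described in the summary above.
def extract_tool_calls(choice: dict) -> list:
    message = choice.get("message") or {}         # None -> {}
    tool_calls = message.get("tool_calls") or []  # None -> []
    return tool_calls

# A choice with a finish_reason but a None message no longer raises
# AttributeError; it simply yields no tool calls.
print(extract_tool_calls({"finish_reason": "stop", "message": None}))  # []
print(extract_tool_calls({"message": {"tool_calls": [{"id": "call_1"}]}}))
```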

Note: The PR checklist indicates tests were added, but no test files are included in this commit. Verify that tests exist elsewhere or were added in a separate commit.

Confidence Score: 4/5

  • This PR is safe to merge with minimal risk - it's a defensive bug fix that prevents crashes
  • The fix properly addresses a real crash (AttributeError) with a simple, defensive coding pattern. The changes use Python's or operator to provide safe defaults (empty dict and empty list) when values are None. The logic is sound and the fix is minimal. Score is 4 instead of 5 because: (1) the PR claims tests were added but they're not visible in the commit, and (2) the fix is in a critical observability integration where any issues could affect production monitoring.
  • No files require special attention - the change is straightforward and localized

Important Files Changed

Filename Overview
litellm/integrations/opentelemetry.py Added null guards for message and tool_calls to prevent AttributeError when response choices contain None message

Sequence Diagram

sequenceDiagram
    participant Client as LLM Client
    participant OTEL as OpenTelemetry.set_attributes
    participant Response as response_obj
    participant Choice as choice.message
    participant ToolCalls as _tool_calls_kv_pair
    
    Client->>OTEL: log_success_event(response_obj)
    OTEL->>Response: get("choices")
    Response-->>OTEL: choices array
    
    loop For each choice with finish_reason
        OTEL->>Choice: choice.get("message")
        alt message is None (Edge Case)
            Choice-->>OTEL: None
            Note over OTEL: Before fix: AttributeError<br/>After fix: or {} returns empty dict
            OTEL->>OTEL: message = {}
        else message exists
            Choice-->>OTEL: message dict
        end
        
        OTEL->>Choice: message.get("tool_calls")
        alt tool_calls is None
            Choice-->>OTEL: None
            Note over OTEL: After fix: or [] returns empty list
            OTEL->>OTEL: tool_calls = []
        else tool_calls exists
            Choice-->>OTEL: tool_calls array
        end
        
        alt tool_calls is not empty
            OTEL->>ToolCalls: _tool_calls_kv_pair(tool_calls)
            ToolCalls-->>OTEL: key-value pairs
            OTEL->>OTEL: safe_set_attribute(span, key, value)
        end
    end
    
    OTEL-->>Client: attributes set successfully


greptile-apps bot left a comment


1 file reviewed, 1 comment


@krrishdholakia krrishdholakia changed the base branch from main to litellm_oss_staging_02_04_2026 February 4, 2026 06:33
@krrishdholakia krrishdholakia changed the base branch from litellm_oss_staging_02_04_2026 to main February 4, 2026 06:34
@krrishdholakia
Copy link
Member

@Harshit28j can you please fix the linting error?

@krrishdholakia krrishdholakia changed the base branch from main to litellm_oss_staging_01_24_2026 February 5, 2026 07:10
@krrishdholakia krrishdholakia changed the base branch from litellm_oss_staging_01_24_2026 to main February 5, 2026 07:11
@krrishdholakia krrishdholakia changed the base branch from main to litellm_oss_staging_01_24_2026 February 5, 2026 07:13
@krrishdholakia krrishdholakia merged commit bf214c1 into litellm_oss_staging_01_24_2026 Feb 5, 2026
60 of 65 checks passed
@krrishdholakia krrishdholakia deleted the litellm_otel_fix_none_message branch February 5, 2026 07:15