fix(otel): guard against None message in set_attributes tool_calls #20339
krrishdholakia merged 584 commits into litellm_oss_staging_01_24_2026 from
Conversation
fix(gemini): support file retrieval in GoogleAIStudioFilesHandle
…t-correctly-extracted fix(ResponseAPILoggingUtils): extract input tokens details as dict
…ens-for-gpt-5.2-codex Fix `max_input_tokens` for `gpt-5.2-codex`
Litellm oss staging 01 29 2026
…_support feat: add /delete endpoint support for gemini
Fix: Batch and File user level permissions
[Feat] Add cost tracking and usage object in aretrieve_batch call type
Add routing of xai chat completions to responses when web search options is present
Add disable flag for anthropic gemini cache translation
fix aspectRatio mapping in image edit
Fix: vllm embedding format
…ng-scope-2026-01-05, Fix: remove unsupported prompt-caching-scope-2026-01-05 header for vertex ai
[Feature] UI - Usage: Model Breakdown Per Key
…e, and Braintrust integrations (#19707)
* Add LangSmith mock client support
  - Create langsmith_mock_client.py following GCS and Langfuse patterns
  - Add mock mode detection via LANGSMITH_MOCK environment variable
  - Intercept LangSmith API calls via AsyncHTTPHandler.post patching
  - Add verbose logging throughout mock implementation
  - Update LangsmithLogger to initialize mock client when mock mode enabled
  - Supports configurable mock latency via LANGSMITH_MOCK_LATENCY_MS
* Add Datadog mock client support
  - Create datadog_mock_client.py following GCS, Langfuse, and LangSmith patterns
  - Add mock mode detection via DATADOG_MOCK environment variable
  - Intercept Datadog API calls via AsyncHTTPHandler.post and httpx.Client.post patching
  - Add verbose logging throughout mock implementation
  - Update DataDogLogger and DataDogLLMObsLogger to initialize mock client when mock mode enabled
  - Supports both async and sync logging paths
  - Supports configurable mock latency via DATADOG_MOCK_LATENCY_MS
* refactor: consolidate mock client logic into factory pattern
  - Create mock_client_factory.py to centralize common mock HTTP client logic
  - Refactor GCS, Langfuse, LangSmith, and Datadog mock clients to use factory
  - Improve GET/DELETE mock accuracy for GCS (return valid StandardLoggingPayload)
  - Fix DELETE mock to return empty body (204 No Content) instead of JSON
  - Reduce code duplication across integration mock clients
* feat: add PostHog mock client support
  - Create posthog_mock_client.py using factory pattern
  - Integrate mock client into PostHogLogger with mock mode detection
  - Add verbose logging for mock mode initialization and batch operations
  - Enable mock mode via POSTHOG_MOCK environment variable
* Add Helicone mock client support
  - Created helicone_mock_client.py using factory pattern (similar to GCS)
  - Integrated mock mode detection and initialization in HeliconeLogger
  - Mock client patches HTTPHandler.post to intercept Helicone API calls
  - Uses factory pattern for should_use_mock and MockResponse utilities
  - Custom HTTPHandler.post patching required since HTTPHandler uses self.client.send()
* Add mock support for Braintrust integration and extend mock client factory
  - Add braintrust_mock_client.py with mock HTTP client for Braintrust integration testing
  - Integrate mock client into BraintrustLogger with mock mode detection
  - Refactor Helicone mock client to fully utilize factory's HTTPHandler.post patching
  - Extend mock_client_factory to support patching HTTPHandler.post for sync calls
  - Enable endpoint-specific mock responses for Braintrust (/project vs /project_logs)
  - All mock clients now properly handle both async (AsyncHTTPHandler) and sync (HTTPHandler) calls
* Fix linter errors: remove unused imports and suppress complexity warning
  - Remove unused imports from gcs_bucket_mock_client.py (httpx, json, timedelta, Dict, Optional)
  - Remove unused Callable import from mock_client_factory.py
  - Add noqa comment to suppress PLR0915 complexity warning for create_mock_client_factory function
* Document mock environment variables for PostHog, Helicone, Braintrust, Datadog, and Langsmith integrations
  - Add POSTHOG_MOCK and POSTHOG_MOCK_LATENCY_MS documentation
  - Add HELICONE_MOCK and HELICONE_MOCK_LATENCY_MS documentation
  - Add BRAINTRUST_MOCK and BRAINTRUST_MOCK_LATENCY_MS documentation
  - Add DATADOG_MOCK and DATADOG_MOCK_LATENCY_MS documentation
  - Add LANGSMITH_MOCK and LANGSMITH_MOCK_LATENCY_MS documentation
  All mock env vars follow the same pattern: enable mock mode for integration testing by intercepting API calls and returning mock responses without making actual network calls.
* Fix security issue
* Add /realtime API benchmarks to Benchmarks documentation
  - Added new section showing performance improvements for /realtime endpoint
  - Included before/after metrics showing 182× faster p99 latency
  - Added test setup specifications and key optimizations
  - Referenced from v1.80.5-stable release notes
  Co-authored-by: ishaan <ishaan@berri.ai>
* Update /realtime benchmarks to show current performance only
  - Removed before/after comparison, showing only current metrics
  - Clarified that benchmarks are e2e latency against fake realtime endpoint
  - Simplified table format for better readability
  Co-authored-by: ishaan <ishaan@berri.ai>
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: ishaan <ishaan@berri.ai>
* Add async_post_call_response_headers_hook to CustomLogger (#20070)
  Allow CustomLogger callbacks to inject custom HTTP response headers into streaming, non-streaming, and failure responses via a new async_post_call_response_headers_hook method.
* async_post_call_response_headers_hook
---------
Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
…EGISTRY.collect() in PrometheusServicesLogger (#20087)
[Feature] UI - Default Team Settings: Migrate Default Team Settings to use Reusable Model Select
[Feature] UI - Navbar: Option to Hide Community Engagement Buttons
…mantic_tool_filter.py tests
This reverts commit 1e8848c.
Litellm tuesday cicd release
…icd_release Revert "Litellm tuesday cicd release"
…mantic_tool_filter.py tests
This reverts commit 1e8848c.
…inal Litellm tuesday cicd release final
bump litellm 1.81.7
Greptile Summary
This PR adds defensive null guards in the OpenTelemetry integration's `set_attributes` tool-call handling.
Key changes:
The fix is simple and effective: when a choice has a finish_reason but a None message, the code now falls back to an empty dict (and an empty list for tool_calls) instead of raising AttributeError.
Note: The PR checklist indicates tests were added, but no test files are included in this commit. Verify that tests exist elsewhere or were added in a separate commit.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/integrations/opentelemetry.py | Added null guards for message and tool_calls to prevent AttributeError when response choices contain a None message |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as LLM Client
    participant OTEL as OpenTelemetry.set_attributes
    participant Response as response_obj
    participant Choice as choice.message
    participant ToolCalls as _tool_calls_kv_pair
    Client->>OTEL: log_success_event(response_obj)
    OTEL->>Response: get("choices")
    Response-->>OTEL: choices array
    loop For each choice with finish_reason
        OTEL->>Choice: choice.get("message")
        alt message is None (Edge Case)
            Choice-->>OTEL: None
            Note over OTEL: Before fix: AttributeError<br/>After fix: or {} returns empty dict
            OTEL->>OTEL: message = {}
        else message exists
            Choice-->>OTEL: message dict
        end
        OTEL->>Choice: message.get("tool_calls")
        alt tool_calls is None
            Choice-->>OTEL: None
            Note over OTEL: After fix: or [] returns empty list
            OTEL->>OTEL: tool_calls = []
        else tool_calls exists
            Choice-->>OTEL: tool_calls array
        end
        alt tool_calls is not empty
            OTEL->>ToolCalls: _tool_calls_kv_pair(tool_calls)
            ToolCalls-->>OTEL: key-value pairs
            OTEL->>OTEL: safe_set_attribute(span, key, value)
        end
    end
    OTEL-->>Client: attributes set successfully
```
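The guard pattern in the diagram can be sketched as follows. This is a hedged illustration only: the function name `set_tool_call_attributes`, the dict used as a stand-in for the span, and the attribute key format are assumptions for demonstration, not the actual code in litellm/integrations/opentelemetry.py.

```python
# Illustrative sketch of the None-message guard; names and the dict-based
# "span" are stand-ins, not the real OpenTelemetry integration code.
def set_tool_call_attributes(span: dict, response_obj: dict) -> None:
    for choice in response_obj.get("choices") or []:
        if not choice.get("finish_reason"):
            continue
        # Before the fix, a None message made the .get("tool_calls") call
        # below raise AttributeError; "or {}" degrades it to a no-op instead.
        message = choice.get("message") or {}
        # Likewise, "or []" guards against tool_calls being explicitly None.
        tool_calls = message.get("tool_calls") or []
        for i, tool_call in enumerate(tool_calls):
            function = tool_call.get("function") or {}
            span[f"gen_ai.completion.tool_calls.{i}.name"] = function.get("name")
```

With the guards in place, a choice whose `message` is None simply sets no tool-call attributes rather than crashing the success-logging path.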
@Harshit28j can you please fix the linting error?
Merged commit bf214c1 into litellm_oss_staging_01_24_2026
Relevant issues
fixes AttributeError: 'NoneType' object has no attribute 'get'
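The failure mode named in the issue can be reproduced minimally; the `choice` dict below is a hypothetical stand-in for a response choice, not taken from the PR itself:

```python
# Minimal illustration of the bug this PR fixes: calling .get on a None
# message raises AttributeError before any guard is applied.
choice = {"finish_reason": "stop", "message": None}

try:
    choice["message"].get("tool_calls")
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'get'
```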
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- I have added testing in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- My PR passes `make test-unit`
- CI (LiteLLM team)
Link:
Link:
Links:
Type
🐛 Bug Fix
Changes