
fix: preserve metadata for custom callbacks on codex/responses path (#21204) #21243

Merged
2 commits merged into BerriAI:litellm_oss_staging_02_16_2026 from saneroen:fix/21204-metadata-codex-callback
Feb 16, 2026

fix: preserve metadata for custom callbacks on codex/responses path (…#21243
2 commits merged intoBerriAI:litellm_oss_staging_02_16_2026from
saneroen:fix/21204-metadata-codex-callback

Conversation

@saneroen
Contributor

Description

fix: preserve metadata for custom callbacks on codex/responses path (#21204)

  • Use metadata or litellm_metadata when calling update_environment_variables in responses/main.py so metadata is not overwritten by None on the bridge path (completion -> responses API).
  • Add tests for metadata in custom callback for codex models and for litellm_metadata in aresponses().
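The fallback described above can be sketched as a small helper (a minimal sketch; the real change lives inline in responses/main.py before the update_environment_variables call, and resolve_metadata_for_callbacks is a hypothetical name used only for illustration):

```python
def resolve_metadata_for_callbacks(metadata, **kwargs):
    # Direct callers pass `metadata`; the completion -> responses bridge
    # moves it into kwargs["litellm_metadata"] and passes metadata=None.
    # Fall back so callbacks never receive None on the bridge path.
    return metadata or kwargs.get("litellm_metadata") or {}

# Direct responses call: metadata passed through unchanged.
assert resolve_metadata_for_callbacks({"foo": "bar"}) == {"foo": "bar"}

# Bridge path: metadata is None, payload lives in litellm_metadata.
assert resolve_metadata_for_callbacks(
    None, litellm_metadata={"foo": "bar"}
) == {"foo": "bar"}

# Neither present: empty dict, never None.
assert resolve_metadata_for_callbacks(None) == {}
```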

Relevant issues

Fixes #21204

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement; see details)
  • My PR passes all unit tests when run with make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem
  • I have requested a Greptile review by commenting @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable with minor issues.
  • 45-49 passing tests: acceptable, but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:

  • CI run for the last commit
    Link:

  • Merge / cherry-pick CI run
    Links:

Type

🐛 Bug Fix

Changes

  • litellm/responses/main.py: Preserve metadata (or litellm_metadata) when calling update_environment_variables on the responses API bridge path so it is not overwritten by None.
  • tests/test_litellm/responses/test_metadata_codex_callback.py: Add tests that ensure metadata is passed to custom callbacks for codex models via acompletion() and for direct aresponses() with litellm_metadata.
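The shape of the new tests can be sketched without litellm itself. This is a hedged stand-in: CapturingLogger and fake_bridge_call are hypothetical names that only mimic litellm's CustomLogger and the mocked bridge call, to show what the assertion checks.

```python
class CapturingLogger:
    """Stand-in for a litellm CustomLogger that records metadata it sees."""
    def __init__(self):
        self.seen_metadata = None

    def log_success(self, kwargs):
        # The real callback reads kwargs["litellm_params"]["metadata"].
        self.seen_metadata = kwargs["litellm_params"]["metadata"]


def fake_bridge_call(logger, metadata=None, **kwargs):
    """Mimics the fixed responses path: fall back to litellm_metadata."""
    resolved = metadata or kwargs.get("litellm_metadata") or {}
    logger.log_success({"litellm_params": {"metadata": resolved}})


logger = CapturingLogger()
# Bridge path: metadata=None, payload carried in litellm_metadata.
fake_bridge_call(logger, litellm_metadata={"foo": "bar"})
assert logger.seen_metadata == {"foo": "bar"}
```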

fix: preserve metadata for custom callbacks on codex/responses path (BerriAI#21204)

- Use metadata or litellm_metadata when calling update_environment_variables
  in responses/main.py so metadata is not overwritten by None on the
  bridge path (completion -> responses API).
- Add tests for metadata in custom callback for codex models and for
  litellm_metadata in aresponses().

Co-authored-by: Cursor <cursoragent@cursor.com>
@vercel

vercel bot commented Feb 15, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm | Deployment: Ready | Actions: Preview, Comment | Updated (UTC): Feb 16, 2026 4:41pm


@greptile-apps
Contributor

greptile-apps bot commented Feb 15, 2026

Greptile Summary

Fixes a regression where user-supplied metadata was not reaching custom callbacks for codex models (e.g., gpt-5.1-codex) that route through the responses API bridge. The bridge converts metadata to litellm_metadata in kwargs, but responses/main.py only passed the metadata parameter (which is None on the bridge path) to update_environment_variables. The fix falls back to kwargs.get("litellm_metadata") when metadata is None.

  • litellm/responses/main.py: Added metadata_for_callbacks = metadata or kwargs.get("litellm_metadata") or {} before the update_environment_variables call, ensuring the bridge path preserves metadata for callbacks.
  • tests/test_litellm/responses/test_metadata_codex_callback.py: Added two mock-based async tests covering the codex acompletion() bridge path and the direct aresponses() path with litellm_metadata.
  • Tests use proper HTTP mocking (no real network calls) but should clean up litellm.callbacks global state after each test to avoid interference in parallel test runs.
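The cleanup Greptile suggests can be sketched as a snapshot-and-restore around each test. This is a hedged illustration only: litellm.callbacks is simulated by a plain list so the sketch is self-contained, and with_callback_cleanup is a hypothetical helper (in the real tests this would typically be a pytest fixture restoring the actual litellm.callbacks attribute).

```python
class FakeLiteLLM:
    """Stand-in for the litellm module's global callback registry."""
    def __init__(self):
        self.callbacks = []

litellm = FakeLiteLLM()

def with_callback_cleanup(test_fn, callback):
    """Register a callback for one test, always restoring global state."""
    saved = list(litellm.callbacks)   # snapshot before the test
    litellm.callbacks.append(callback)
    try:
        test_fn()
    finally:
        litellm.callbacks = saved     # restore even if the test fails

seen = []
with_callback_cleanup(lambda: seen.append(list(litellm.callbacks)), "my_logger")
assert seen == [["my_logger"]]    # callback was active during the test
assert litellm.callbacks == []    # global state restored afterwards
```

Without this restore, a callback registered in one test stays in the global list and fires in later (or parallel) tests, which is the isolation issue flagged above.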

Confidence Score: 4/5

  • This PR is safe to merge — it fixes a clear metadata propagation bug with a minimal, well-scoped change.
  • The core fix is a single line of fallback logic that correctly handles the metadata flow through the codex bridge path. The change is low-risk since it only adds a fallback when metadata is None, and doesn't alter behavior when metadata is already set. Tests cover both paths. Minor concern is that tests don't clean up global litellm.callbacks state.
  • tests/test_litellm/responses/test_metadata_codex_callback.py — needs litellm.callbacks cleanup to avoid test isolation issues.

Important Files Changed

  • litellm/responses/main.py: Adds fallback logic to preserve metadata from litellm_metadata kwargs when the metadata parameter is None (codex bridge path). The fix is correct and well-targeted.
  • tests/test_litellm/responses/test_metadata_codex_callback.py: New test file with two async tests verifying metadata preservation for codex models and direct responses API calls. Tests use mocks (no real network calls) but don't clean up litellm.callbacks global state.

Sequence Diagram

sequenceDiagram
    participant User
    participant acompletion as litellm.acompletion()
    participant Bridge as responses_api_bridge
    participant Responses as responses/main.py
    participant Logging as LiteLLMLoggingObj
    participant Callback as CustomLogger

    User->>acompletion: metadata={"foo": "bar"}
    acompletion->>Bridge: litellm_params={metadata: {"foo": "bar"}}
    Bridge->>Bridge: _build_sanitized_litellm_params()
    Note over Bridge: Merges metadata → litellm_metadata<br/>Strips metadata from params
    Bridge->>Responses: responses(**request_data)<br/>litellm_metadata={"foo": "bar"}, metadata=None
    Responses->>Responses: metadata_for_callbacks = metadata or kwargs["litellm_metadata"]
    Note over Responses: FIX: Falls back to litellm_metadata<br/>when metadata is None
    Responses->>Logging: update_environment_variables(metadata=metadata_for_callbacks)
    Logging->>Callback: async_log_success_event(kwargs)<br/>kwargs["litellm_params"]["metadata"] = {"foo": "bar"}

Last reviewed commit: fb2ee17

@greptile-apps bot left a comment

2 files reviewed, 1 comment

@jeromeroussin

@saneroen Can we expect the metadata in kwargs['standard_logging_object'] as well?

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
@ghost ghost changed the base branch from main to litellm_oss_staging_02_16_2026 February 16, 2026 16:44
@ghost

ghost commented Feb 16, 2026

Can we expect the metadata in kwargs['standard_logging_object'] as well?

Since the SLO (standard_logging_object) is built in the logging object, I believe this should work.

@ghost ghost merged commit dcff326 into BerriAI:litellm_oss_staging_02_16_2026 Feb 16, 2026
11 of 18 checks passed
This pull request was closed.

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: Metadata is no longer passed to custom callback during chat completion calls to codex models

2 participants