
Add Noma guardrails v2 based on custom guardrails#21400

Merged
krrishdholakia merged 13 commits into BerriAI:main from Noma-Security:tom/NOM-6904-support-unified-guardrails-in-litellm
Feb 23, 2026

Conversation

@TomAlon
Contributor

@TomAlon TomAlon commented Feb 17, 2026

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • [x] I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • [x] My PR passes all unit tests on make test-unit
  • [x] My PR's scope is as isolated as possible; it only solves 1 specific problem

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable with minor issues.
  • 45-49 passing tests: acceptable but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:

  • CI run for the last commit
    Link:

  • Merge / cherry-pick CI run
    Links:

Type

🆕 New Feature

Changes

Adds Noma guardrails v2, based on custom guardrails.
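The two ways this PR enables v2 (direct `noma_v2` registration, or the `use_v2` migration toggle on an existing `noma` entry) can be sketched in proxy config. This follows the standard LiteLLM guardrails config shape; treat the exact guardrail names and mode values here as illustrative assumptions:

```yaml
guardrails:
  # Option 1: reference the new integration directly
  - guardrail_name: noma-guard
    litellm_params:
      guardrail: noma_v2
      mode: pre_call

  # Option 2: migrate an existing legacy config via the use_v2 toggle
  - guardrail_name: noma-legacy-migrated
    litellm_params:
      guardrail: noma
      use_v2: true
      mode: pre_call
```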

@vercel

vercel bot commented Feb 17, 2026

The latest updates on your projects.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| litellm | Ready | Preview, Comment | Feb 23, 2026 8:31am |


@greptile-apps
Contributor

greptile-apps bot commented Feb 17, 2026

Greptile Summary

This PR adds a new noma_v2 guardrail integration based on the CustomGuardrail.apply_guardrail pattern, deprecating the legacy noma guardrail. The v2 implementation sends the full request payload (including model_call_details from the logging object) to a new Noma endpoint (/litellm/guardrail) and supports three response actions: NONE, BLOCKED, and GUARDRAIL_INTERVENED. A migration path is provided via a use_v2: true flag on the existing guardrail: noma configuration.

  • Security concern: _build_scan_payload includes model_call_details (which contains the LLM provider api_key) in the payload sent to the external Noma API — this is a key leak risk that should be addressed before merging.
  • Documentation mismatch: The docs claim application_id falls back to x-noma-application-id header and user_api_key_alias, but the code only resolves from dynamic params and the configured value. The test suite explicitly confirms these fallbacks are absent.
  • Good test coverage with mock-only tests covering configuration, all three action types, fail-open/fail-closed behavior, monitor mode, and application ID resolution.
  • Clean integration with the existing guardrail registry and type system.
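The three-action contract described above can be sketched as follows. This is a hedged illustration of the response handling, not the actual `noma_v2.py` code; the names `Action`, `BlockedError`, and `handle_response` are hypothetical stand-ins (the real integration raises `NomaBlockedMessage` as an HTTP 400):

```python
from enum import Enum


class Action(str, Enum):
    # The three response actions named in the PR summary
    NONE = "NONE"
    BLOCKED = "BLOCKED"
    GUARDRAIL_INTERVENED = "GUARDRAIL_INTERVENED"


class BlockedError(Exception):
    """Stand-in for NomaBlockedMessage (surfaced as HTTP 400)."""


def handle_response(response_json: dict, inputs: dict) -> dict:
    action = Action(response_json["action"])
    if action is Action.NONE:
        return inputs  # pass-through: request proceeds unchanged
    if action is Action.BLOCKED:
        raise BlockedError(response_json.get("reason", "blocked by guardrail"))
    # GUARDRAIL_INTERVENED: adopt whichever fields the API rewrote
    modified = dict(inputs)
    for key in ("texts", "images", "tools", "tool_calls"):
        if key in response_json:
            modified[key] = response_json[key]
    return modified
```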

Confidence Score: 2/5

  • This PR has a security concern where LLM provider API keys may be leaked to the external Noma API endpoint via model_call_details.
  • The core guardrail logic is well-structured and follows existing patterns, but the API key leakage via model_call_details sent to an external endpoint is a significant security concern that should be resolved before merging. The documentation also inaccurately describes application_id fallback behavior that doesn't exist in the code.
  • Pay close attention to litellm/proxy/guardrails/guardrail_hooks/noma/noma_v2.py (API key leakage) and docs/my-website/docs/proxy/guardrails/noma_security.md (inaccurate application_id fallback docs).

Important Files Changed

| Filename | Overview |
| --- | --- |
| litellm/proxy/guardrails/guardrail_hooks/noma/noma_v2.py | New Noma v2 guardrail implementation. Security concern: _build_scan_payload includes model_call_details (which contains the provider api_key) in the payload sent to the external Noma API. |
| docs/my-website/docs/proxy/guardrails/noma_security.md | Documentation for Noma v2 guardrails. The application_id fallback description doesn't match the v2 implementation behavior. |
| litellm/proxy/guardrails/guardrail_hooks/noma/__init__.py | Adds v2 initialization, use_v2 migration toggle, and registry entries. Clean implementation following existing patterns. |
| litellm/proxy/guardrails/guardrail_hooks/noma/noma.py | Minor change adding a deprecation warning for the legacy Noma guardrail. Clean implementation. |
| litellm/types/guardrails.py | Adds NOMA_V2 enum and use_v2 field to NomaGuardrailConfigModel. Consistent with existing patterns. |
| litellm/types/proxy/guardrails/guardrail_hooks/noma.py | Adds NomaV2GuardrailConfigModel and use_v2 field to existing config model. Clean implementation. |
| tests/test_litellm/proxy/guardrails/guardrail_hooks/test_noma_v2.py | Comprehensive mock-only test suite for Noma v2 guardrail. Good coverage of configuration, actions, failure modes, and application ID resolution. |
| tests/test_litellm/proxy/guardrails/guardrail_hooks/test_noma.py | Adds deprecation warning test and v2 migration routing test to existing Noma test suite. Tests are mock-only. |
| tests/test_litellm/proxy/guardrails/test_guardrail_registry.py | Adds registry resolution test for noma/noma_v2 entries. Clean mock-only test. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant LiteLLM Proxy
    participant NomaV2Guardrail
    participant Noma API

    Client->>LiteLLM Proxy: Chat completion request
    LiteLLM Proxy->>NomaV2Guardrail: apply_guardrail(inputs, request_data, "request")
    NomaV2Guardrail->>NomaV2Guardrail: Resolve application_id (dynamic > configured)
    NomaV2Guardrail->>NomaV2Guardrail: _build_scan_payload()
    NomaV2Guardrail->>NomaV2Guardrail: _sanitize_payload_for_transport()
    NomaV2Guardrail->>Noma API: POST /litellm/guardrail (sanitized payload)
    Noma API-->>NomaV2Guardrail: {action: NONE|BLOCKED|GUARDRAIL_INTERVENED, ...}

    alt action = NONE
        NomaV2Guardrail-->>LiteLLM Proxy: Return original inputs
    else action = BLOCKED
        NomaV2Guardrail-->>LiteLLM Proxy: Raise NomaBlockedMessage (HTTP 400)
    else action = GUARDRAIL_INTERVENED
        NomaV2Guardrail->>NomaV2Guardrail: Update texts/images/tools/tool_calls from response
        NomaV2Guardrail-->>LiteLLM Proxy: Return modified inputs
    end

    NomaV2Guardrail->>NomaV2Guardrail: _add_guardrail_observability()
    LiteLLM Proxy-->>Client: Response (or 400 if blocked)
```

Last reviewed commit: 19d8ea2

@greptile-apps greptile-apps bot left a comment

9 files reviewed, 2 comments

@shivamrawat1
Collaborator

@greptile re-review after the fixes

@greptile-apps
Contributor

greptile-apps bot commented Feb 17, 2026

Greptile Summary

This PR adds a new noma_v2 guardrail integration built on the CustomGuardrail.apply_guardrail interface, alongside a migration path (use_v2: true) from the legacy noma guardrail. The v2 implementation communicates with a new Noma endpoint (/litellm/guardrail), supports three response actions (NONE, BLOCKED, GUARDRAIL_INTERVENED), and includes proper observability, fail-open/closed semantics, and monitor mode.

  • Adds NomaV2Guardrail class implementing apply_guardrail with action-based response handling, payload sanitization, and configurable monitor_mode / block_failures
  • Deprecates legacy noma guardrail with a runtime DeprecationWarning and provides use_v2: true migration toggle
  • Registers noma_v2 in the guardrail initializer and class registries
  • Adds NomaV2GuardrailConfigModel for UI config and use_v2 field to NomaGuardrailConfigModel / LitellmParams
  • Comprehensive mock-only test suite (481 lines) covering configuration, auth paths, action handling, application ID resolution, and error scenarios
  • Prior review flagged: model_call_details (containing LLM provider api_key) is included in the payload sent to the external Noma endpoint — the author should address this
  • Prior review flagged: Docs claim application_id falls back to x-noma-application-id header and user_api_key_alias, but the code does not implement these fallbacks

Confidence Score: 3/5

  • This PR is generally safe to merge but has an unresolved security concern about API key exposure in payloads sent to an external endpoint.
  • The code is well-structured, follows existing guardrail patterns, and has thorough test coverage. However, the previously flagged API key leak in model_call_details remains unresolved, and the docs-vs-code mismatch for application_id fallbacks needs attention. The core guardrail logic is sound with proper error handling and fail-open/closed support.
  • Pay close attention to litellm/proxy/guardrails/guardrail_hooks/noma/noma_v2.py (API key in model_call_details sent externally) and docs/my-website/docs/proxy/guardrails/noma_security.md (inaccurate fallback documentation).

Important Files Changed

| Filename | Overview |
| --- | --- |
| litellm/proxy/guardrails/guardrail_hooks/noma/noma_v2.py | New Noma v2 guardrail implementation. Well-structured with proper error handling, observability, and sanitization. Sends model_call_details (including api_key) to external endpoint (flagged in prior thread). One potential issue with _sanitize_payload_for_transport silently returning an empty dict on failure. |
| litellm/proxy/guardrails/guardrail_hooks/noma/__init__.py | Adds v2 initialization and registry entries. Clean migration path with use_v2 toggle. Correct routing logic for both direct noma_v2 and migration noma + use_v2=True. |
| litellm/proxy/guardrails/guardrail_hooks/noma/noma.py | Minimal changes: adds deprecation warning on legacy guardrail construction. Clean implementation with global flag to avoid repeated warnings. |
| litellm/types/guardrails.py | Adds NOMA_V2 to SupportedGuardrailIntegrations enum and use_v2 field to NomaGuardrailConfigModel. The use_v2 field propagates to all guardrails via LitellmParams inheritance but is harmless due to the extra="allow" pattern. |
| litellm/types/proxy/guardrails/guardrail_hooks/noma.py | Adds NomaV2GuardrailConfigModel for UI config and use_v2 field to existing NomaGuardrailConfigModel. Properly inherits from GuardrailConfigModel. Clean and consistent with other guardrail patterns. |
| docs/my-website/docs/proxy/guardrails/noma_security.md | Documentation for v2 guardrails with migration guide. Contains inaccurate description of application_id fallback behavior (flagged in prior thread). |
| tests/test_litellm/proxy/guardrails/guardrail_hooks/test_noma_v2.py | Comprehensive test suite covering configuration, authentication, action handling, application ID resolution, monitor mode, and fail-open/closed behavior. All tests use mocks, no real network calls. |
| tests/test_litellm/proxy/guardrails/guardrail_hooks/test_noma.py | Adds tests for deprecation warning and v2 migration routing via use_v2=True. Uses mocks appropriately, no real network calls. |
| tests/test_litellm/proxy/guardrails/test_guardrail_registry.py | Adds registry resolution test for noma/noma_v2. Removes unnecessary print statements. Clean and focused. |

Flowchart

```mermaid
flowchart TD
    A[Incoming Request] --> B{guardrail config}
    B -->|"guardrail: noma_v2"| C[initialize_guardrail_v2]
    B -->|"guardrail: noma + use_v2: true"| C
    B -->|"guardrail: noma"| D[initialize_guardrail legacy]

    C --> E[NomaV2Guardrail.apply_guardrail]
    E --> F[Resolve application_id<br>dynamic_params → config → None]
    F --> G[_build_scan_payload<br>inputs + request_data + logging_obj]
    G --> H[_sanitize_payload_for_transport]
    H --> I[POST /litellm/guardrail]
    I --> J{Response action}
    J -->|NONE| K[Return original inputs<br>status: success]
    J -->|GUARDRAIL_INTERVENED| L[Update texts/images/tools/tool_calls<br>status: guardrail_intervened]
    J -->|BLOCKED| M[Raise NomaBlockedMessage<br>status: guardrail_intervened]
    J -->|monitor_mode=true| N[Force action=NONE<br>Return original inputs]
    I -->|Error + block_failures=true| O[Raise exception]
    I -->|Error + block_failures=false| P[Return original inputs<br>fail-open]

    D --> Q[NomaGuardrail legacy flow<br>DeprecationWarning emitted]
```

Last reviewed commit: 19d8ea2

@greptile-apps greptile-apps bot left a comment

9 files reviewed, 2 comments

@TomAlon
Contributor Author

TomAlon commented Feb 18, 2026

@greptile re-review

@greptile-apps
Contributor

greptile-apps bot commented Feb 18, 2026

Greptile Summary

This PR adds a new noma_v2 guardrail integration based on the CustomGuardrail.apply_guardrail pattern, alongside a deprecation path for the legacy noma guardrail.

  • Introduces NomaV2Guardrail class that implements apply_guardrail with three action modes: NONE (pass-through), BLOCKED (raise NomaBlockedMessage), and GUARDRAIL_INTERVENED (modify input fields like texts, images, tools, tool_calls)
  • Adds noma_v2 as a new SupportedGuardrailIntegrations enum value with its own initializer and class registry entries
  • Provides a use_v2: true migration toggle on the legacy noma guardrail for gradual migration
  • Adds deprecation warning to the legacy NomaGuardrail constructor
  • Adds NomaV2GuardrailConfigModel for UI configuration and deepcopy-based request data isolation
  • Comprehensive mock-only test suite covering configuration, all action paths, application ID resolution, monitor mode, and fail-open/closed behavior
  • Documentation updated with v2 section and deprecation notice, though legacy heading hierarchy needs attention
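The monitor-mode and fail-open/fail-closed semantics listed above reduce to a small decision. A hedged sketch under assumed names (`run_guardrail` and `scan` are illustrative, not the actual PR identifiers):

```python
def run_guardrail(scan, inputs, *, monitor_mode=False, block_failures=True):
    """Illustrative wrapper: scan() calls the external guardrail API."""
    try:
        result = scan(inputs)
    except Exception:
        if block_failures:
            raise          # fail-closed: surface the error to the caller
        return inputs      # fail-open: let the request through unchanged
    if monitor_mode:
        return inputs      # log-only: observe verdicts, never mutate or block
    return result
```

Monitor mode observes the scan result without acting on it, which is why the flowcharts in this thread show monitor_mode forcing action=NONE.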

Confidence Score: 4/5

  • This PR is safe to merge with minor documentation fixes recommended.
  • The v2 implementation follows established patterns (CustomGuardrail.apply_guardrail), has comprehensive mock-only tests, clean error handling with configurable fail-open/closed behavior, and proper registry integration. The main prior concern (API key leakage via model_call_details) was already flagged. Remaining issues are stylistic (doc heading hierarchy, action error distinguishability).
  • docs/my-website/docs/proxy/guardrails/noma_security.md has a heading hierarchy issue. litellm/proxy/guardrails/guardrail_hooks/noma/noma_v2.py has the previously-flagged API key leakage concern and a minor error-distinguishability suggestion.

Important Files Changed

| Filename | Overview |
| --- | --- |
| litellm/proxy/guardrails/guardrail_hooks/noma/noma_v2.py | New v2 guardrail implementation with clean architecture using the apply_guardrail pattern. Uses deepcopy for request data isolation, has proper error handling with fail-open/fail-closed modes, and sanitization for transport. Previous review threads flagged API key leakage via model_call_details — still present. |
| litellm/proxy/guardrails/guardrail_hooks/noma/noma.py | Adds deprecation warning for legacy noma guardrail using a module-level flag. Clean implementation with proper stacklevel and one-time warning behavior. |
| litellm/proxy/guardrails/guardrail_hooks/noma/__init__.py | Adds noma_v2 registration in both initializer and class registries. Includes use_v2 migration toggle routing from legacy noma to v2. Clean implementation. |
| litellm/types/guardrails.py | Adds NOMA_V2 enum value and use_v2 field to NomaGuardrailConfigModel. Changes are minimal and correctly integrated into the LitellmParams MRO chain. |
| litellm/types/proxy/guardrails/guardrail_hooks/noma.py | Adds NomaV2GuardrailConfigModel for UI config, adds use_v2 to legacy config. Clean type definitions inheriting from GuardrailConfigModel. |
| tests/test_litellm/proxy/guardrails/guardrail_hooks/test_noma_v2.py | Comprehensive mock-only tests covering configuration, action behaviors (NONE, BLOCKED, GUARDRAIL_INTERVENED), application ID resolution, monitor mode, fail-open/closed, and serialization. All tests properly mock network calls. |
| tests/test_litellm/proxy/guardrails/guardrail_hooks/test_noma.py | Adds deprecation warning test and use_v2 routing test to existing legacy guardrail tests. All properly mocked. |
| tests/test_litellm/proxy/guardrails/test_guardrail_registry.py | Adds registry resolution test verifying both noma and noma_v2 entries exist. Removes print statements from existing tests. |
| docs/my-website/docs/proxy/guardrails/noma_security.md | Adds deprecation notice and a v2 documentation section with config examples and environment variables. Has a heading hierarchy issue where legacy subsections are at the same level as the legacy section header. |

Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[LiteLLM Proxy Request] --> B{Guardrail Config}
    B -->|"guardrail: noma_v2"| C[NomaV2Guardrail.apply_guardrail]
    B -->|"guardrail: noma + use_v2: true"| C
    B -->|"guardrail: noma"| D[Legacy NomaGuardrail]

    C --> E[Build Scan Payload]
    E --> F["deepcopy(request_data) + sanitize"]
    F --> G["POST /litellm/guardrail → Noma API"]

    G --> H{monitor_mode?}
    H -->|true| I["action = NONE (pass-through)"]
    H -->|false| J{Response Action}

    J -->|NONE| I
    J -->|BLOCKED| K[Raise NomaBlockedMessage]
    J -->|GUARDRAIL_INTERVENED| L[Update inputs: texts, images, tools, tool_calls]
    J -->|Invalid/Missing| M{block_failures?}

    G -->|Network Error| M
    M -->|true| N[Raise Exception]
    M -->|false| O[Return original inputs]

    I --> P[Return processed inputs]
    L --> P
    P --> Q[Add observability logging]
    K --> Q
    N --> Q
    O --> Q
```

Last reviewed commit: 8ad877c

@greptile-apps greptile-apps bot left a comment

9 files reviewed, 2 comments

Comment on lines +16 to 18
```markdown
## Noma guardrails (Legacy)

## Quick Start
```

Broken heading hierarchy for legacy section

The new ## Noma guardrails (Legacy) header on line 16 is immediately followed by the pre-existing ## Quick Start on line 18. Since both are ## headings, the rendered docs sidebar will show "Quick Start" as a sibling of "Noma guardrails (Legacy)" rather than a child. All legacy subsections (Quick Start, Supported Params, Environment Variables, etc.) should be demoted by one level so they nest under the legacy header.

Suggested change

```diff
 ## Noma guardrails (Legacy)
-## Quick Start
+### Quick Start
```

Comment on lines +108 to +119
```python
def _resolve_action_from_response(
    self,
    response_json: dict,
) -> _Action:
    action = response_json.get("action")
    if isinstance(action, str):
        try:
            return _Action(action)
        except ValueError:
            pass

    raise ValueError("Noma v2 response missing valid action")
```

Invalid action silently passes through when block_failures=False

When the Noma API returns a response with an unrecognized or missing action field, _resolve_action_from_response raises ValueError("Noma v2 response missing valid action"). This ValueError is caught by the generic except Exception branch in apply_guardrail (line 296), which — when block_failures=False — silently returns the original inputs, logging only a generic error.

This means an API contract change (e.g., Noma starts returning "action": "REDACTED") would be indistinguishable from a network failure. Consider either: (a) logging the unexpected action value at warning/error level inside _resolve_action_from_response before raising, or (b) using a distinct exception type so apply_guardrail can differentiate contract violations from transport errors.
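Option (b) above could be sketched as follows. This is an illustrative suggestion, not code from the PR; `NomaContractError` and `resolve_action` are hypothetical names chosen so the caller can catch contract violations separately from transport errors:

```python
class NomaContractError(ValueError):
    """The Noma API responded, but with an action outside the known set."""


# The three actions the v2 contract defines, per this PR
_VALID_ACTIONS = {"NONE", "BLOCKED", "GUARDRAIL_INTERVENED"}


def resolve_action(response_json: dict) -> str:
    action = response_json.get("action")
    if isinstance(action, str) and action in _VALID_ACTIONS:
        return action
    # Log-worthy: a new action value (e.g. "REDACTED") lands here, and the
    # distinct type lets apply_guardrail treat it unlike a network failure
    raise NomaContractError(f"unexpected Noma v2 action: {action!r}")
```

The caller could then catch `NomaContractError` before the generic `except Exception`, logging the offending value even when failing open.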

Member

why change the name, instead of just rewriting the noma.py?

this adds additional code to maintain

@krrishdholakia krrishdholakia merged commit 99184c4 into BerriAI:main Feb 23, 2026
30 of 31 checks passed
krrishdholakia added a commit that referenced this pull request Feb 24, 2026
…21970)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes #21544

* feat(proxy): add custom favicon support (#21653)

Add ability to configure a custom favicon for the litellm proxy UI.

- Add favicon_url field to UIThemeConfig model
- Add LITELLM_FAVICON_URL env var support
- Add /get_favicon endpoint to serve custom favicons
- Update ThemeContext to dynamically set favicon
- Add favicon URL input to UI theme settings page
- Add comprehensive tests

Closes #8323

* fix(bedrock): prevent double UUID in create_file S3 key (#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes #21546

* feat(semantic-cache): support configurable vector dimensions for Qdrant (#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes #9377

* fix(utils): normalize camelCase thinking param keys to snake_case (#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().

* feat: add optional digest mode for Slack alert types (#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (#21871)

* feat(ui/): new guardrails monitor 'demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: #8304

* fix: add ruff PLR0915 noqa for files with too many statements

* Add gollem Go agent framework cookbook example (#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request #21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes #21861

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)" (#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues:

* Add Noma guardrails v2 based on custom guardrails (#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(tests): use counter-based mock for time.time in prisma self-heal test

The test used a fixed side_effect list for time.time(), but the number
of calls varies by Python version, causing StopIteration on 3.12 and
AssertionError on 3.14. Replace with an infinite counter-based callable
and assert the timestamp was updated rather than checking for an exact
value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tests): use absolute path for model_prices JSON in validation test

The test used a relative path 'litellm/model_prices_and_context_window.json'
which only works when pytest runs from a specific working directory.
Use os.path based on __file__ to resolve the path reliably.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update tests/test_litellm/test_utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(tests): use os.path instead of Path to avoid NameError

Path is not imported at module level. Use os.path.join which is already
available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* clean up mock transport: remove streaming, add defensive parsing

* docs: add Google GenAI SDK tutorial (JS & Python) (#21885)

* docs: add Google GenAI SDK tutorial for JS and Python

Add tutorial for using Google's official GenAI SDK (@google/genai for JS,
google-genai for Python) with LiteLLM proxy. Covers pass-through and
native router endpoints, streaming, multi-turn chat, and multi-provider
routing via model_group_alias. Also updates pass-through docs to use the
new SDK replacing the deprecated @google/generative-ai.

* fix(docs): correct Python SDK env var name in GenAI tutorial

GOOGLE_GENAI_API_KEY does not exist in the google-genai SDK.
The correct env var is GEMINI_API_KEY (or GOOGLE_API_KEY).
Also note that the Python SDK has no base URL env var.

* fix(docs): replace non-existent GOOGLE_GENAI_BASE_URL env var in interactions.md

The Python google-genai SDK does not read GOOGLE_GENAI_BASE_URL.
Use http_options={"base_url": "..."} in code instead.

* docs: add network mock benchmarking section

* docs: tweak benchmarks wording

* fix: add auth headers and empty latencies guard to benchmark script

* refactor: use method-level import for MockOpenAITransport

* fix: guard print_aggregate against empty latencies

* fix: add INCOMPLETE status to Interactions API enum and test

Google added INCOMPLETE to the Interactions API OpenAPI spec status enum.
Update both the Status3 enum in the SDK types and the test's expected
values to match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Guardrail Monitor - measure guardrail reliability in prod (#21944)


* fix: fix log viewer for guardrail monitoring

* feat(ui/): fix rendering logs per guardrail

* fix: fix viewing logs on overview tab of guardrail

* fix: log viewer

* fix: fix naming to align with metric

* docs: add performance & reliability section to v1.81.14 release notes

* fix(tests): make RPM limit test sequential to avoid race condition

Concurrent requests via run_in_executor + asyncio.gather caused a race
condition where more requests slipped through the rate limiter than
expected, leading to flaky test failures (e.g. 3 successes instead of 2
with rpm_limit=2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: Singapore guardrail policies (PDPA + MAS AI Risk Management) (#21948)

* feat: Singapore PDPA PII protection guardrail policy template

Add Singapore Personal Data Protection Act (PDPA) guardrail support:

Regex patterns (patterns.json):
- sg_nric: NRIC/FIN detection ([STFGM] + 7 digits + checksum letter)
- sg_phone: Singapore phone numbers (+65/0065/65 prefix)
- sg_postal_code: 6-digit postal codes (contextual)
- passport_singapore: Passport numbers (E/K + 7 digits, contextual)
- sg_uen: Unique Entity Numbers (3 formats)
- sg_bank_account: Bank account numbers (dash format, contextual)

YAML policy templates (5 sub-guardrails):
- sg_pdpa_personal_identifiers: s.13 Consent
- sg_pdpa_sensitive_data: Advisory Guidelines
- sg_pdpa_do_not_call: Part IX DNC Registry
- sg_pdpa_data_transfer: s.26 overseas transfers
- sg_pdpa_profiling_automated_decisions: Model AI Governance Framework

Policy template entry in policy_templates.json with 9 guardrail definitions
(4 regex-based + 5 YAML conditional keyword matching).

Tests:
- test_sg_patterns.py: regex pattern unit tests
- test_sg_pdpa_guardrails.py: conditional keyword matching tests (100+ cases)
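The NRIC shape described above ([STFGM] + 7 digits + checksum letter) can be sketched as a regex; this is illustrative only and does not validate the checksum the way the shipped patterns.json template does:

```python
import re

# Shape-only sketch of the sg_nric pattern: one of S/T/F/G/M,
# seven digits, then a checksum letter. No checksum validation here.
SG_NRIC = re.compile(r"\b[STFGM]\d{7}[A-Z]\b")

assert SG_NRIC.search("My NRIC is S1234567D, please redact it.")
assert SG_NRIC.search("no identifiers here") is None
```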

* feat: MAS AI Risk Management Guidelines guardrail policy template

Add Monetary Authority of Singapore (MAS) AI Risk Management Guidelines
guardrail support for financial institutions:

YAML policy templates (5 sub-guardrails):
- sg_mas_fairness_bias: Blocks discriminatory financial AI (credit/loans/insurance by protected attributes)
- sg_mas_transparency_explainability: Blocks opaque/unexplainable AI for consequential financial decisions
- sg_mas_human_oversight: Blocks fully automated financial decisions without human-in-the-loop
- sg_mas_data_governance: Blocks unauthorized sharing/mishandling of financial customer data
- sg_mas_model_security: Blocks adversarial attacks, model poisoning, inversion on financial AI

Policy template entry in policy_templates.json with 5 guardrail definitions.
Aligned with MAS FEAT Principles, Project MindForge, and NIST AI RMF.

Tests:
- test_sg_mas_ai_guardrails.py: conditional keyword matching tests (100+ cases)

* fix: address SG pattern review feedback

- Update NRIC lowercase test for IGNORECASE runtime behavior
- Add keyword context guard to sg_uen pattern to reduce false positives

* docs: clarify MAS AIRM timeline references

- Explicitly mark MAS AIRM as Nov 2025 consultation draft
- Add 2018 qualifier for FEAT principles in MAS policy descriptions
- Update MAS guardrail wording to avoid release-year ambiguity

* chore: commit resolved MAS policy conflicts

* test:

* chore:

* Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs (#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.

* adjust blog posts to fetch from github first

* feat(videos): add variant parameter to video content download (#21955)

openai videos models support the features to download variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.
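The URL handling described above can be sketched as a small helper; the function name and base URL are illustrative, not the actual transform code:

```python
from typing import Optional
from urllib.parse import urlencode

def build_video_content_url(base_url: str, video_id: str, variant: Optional[str] = None) -> str:
    # OpenAI appends ?variant=<value> to the content GET URL;
    # when variant is None the plain content URL is returned.
    url = f"{base_url}/videos/{video_id}/content"
    if variant is not None:
        url = f"{url}?{urlencode({'variant': variant})}"
    return url

assert build_video_content_url("https://api.openai.com/v1", "vid_123") == \
    "https://api.openai.com/v1/videos/vid_123/content"
assert build_video_content_url("https://api.openai.com/v1", "vid_123", "thumbnail").endswith("?variant=thumbnail")
```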

* fixing path

* adjust blog post path

* Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.

* ui changes

* adding tests

* adjust default aggregation threshold

* fix(videos): pass api_key from litellm_params to video remix handlers (#21965)

video_remix_handler and async_video_remix_handler were not falling back
to litellm_params.api_key when the api_key parameter was None, causing
Authorization: Bearer None to be sent to the provider. This matches the
pattern already used by async_video_generation_handler.
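The fallback pattern described above is, in essence (names illustrative, not the handler's real signature):

```python
from typing import Any, Dict, Optional

def resolve_api_key(api_key: Optional[str], litellm_params: Dict[str, Any]) -> Optional[str]:
    # Fall back to litellm_params["api_key"] when the explicit argument is
    # None, so "Authorization: Bearer None" is never sent to the provider.
    return api_key if api_key is not None else litellm_params.get("api_key")

assert resolve_api_key(None, {"api_key": "sk-test"}) == "sk-test"
assert resolve_api_key("sk-explicit", {"api_key": "sk-test"}) == "sk-explicit"
```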

* adding testing coverage + fixing flaky tests

* fix(ollama): thread api_base through get_model_info and add graceful fallback

When users pass api_base to litellm.completion() for Ollama, the model
info fetch (context window, function_calling support) was ignoring the
user's api_base and only reading OLLAMA_API_BASE env var or defaulting
to localhost:11434. This caused confusing errors in logs when Ollama
runs on a remote server.

Thread api_base from litellm_params through the get_model_info call
chain so OllamaConfig.get_model_info() uses the correct server. Also
return safe defaults instead of raising when the server is unreachable.

Fixes #21967
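The graceful-fallback shape described above, as a minimal sketch (the fetch callable, default values, and names are assumptions for illustration, not the Ollama config's real API):

```python
from typing import Any, Callable, Dict

# Illustrative safe defaults returned when the server cannot be reached.
SAFE_DEFAULTS = {"max_tokens": 4096, "supports_function_calling": False}

def get_model_info(fetch: Callable[[str], Dict[str, Any]], api_base: str) -> Dict[str, Any]:
    # Use the caller's api_base (threaded through from litellm_params)
    # and return safe defaults instead of raising when unreachable.
    try:
        return fetch(api_base)
    except Exception:
        return dict(SAFE_DEFAULTS)

def unreachable(_base: str) -> Dict[str, Any]:
    raise ConnectionError("server unreachable")

assert get_model_info(unreachable, "http://remote:11434") == SAFE_DEFAULTS
```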

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>
krrishdholakia added a commit that referenced this pull request Feb 24, 2026
…voke (#21964)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes #21544
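The priority rule above can be sketched as a selection function (names illustrative, not the endpoint's real code):

```python
from typing import Any, Dict, List, Optional

def select_deployments(
    deployments: List[Dict[str, Any]],
    model_names: Optional[List[str]] = None,
    model_ids: Optional[List[str]] = None,
) -> List[Dict[str, Any]]:
    # model_ids, when provided, takes priority and targets exact deployments;
    # otherwise fall back to matching every deployment sharing a model_name.
    if model_ids:
        return [d for d in deployments if d["model_id"] in model_ids]
    return [d for d in deployments if d["model_name"] in (model_names or [])]

fleet = [
    {"model_id": "id-1", "model_name": "gpt-4"},
    {"model_id": "id-2", "model_name": "gpt-4"},
]
assert select_deployments(fleet, model_names=["gpt-4"]) == fleet    # tags both
assert select_deployments(fleet, model_ids=["id-2"]) == [fleet[1]]  # tags exactly one
```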

* feat(proxy): add custom favicon support (#21653)

Add ability to configure a custom favicon for the litellm proxy UI.

- Add favicon_url field to UIThemeConfig model
- Add LITELLM_FAVICON_URL env var support
- Add /get_favicon endpoint to serve custom favicons
- Update ThemeContext to dynamically set favicon
- Add favicon URL input to UI theme settings page
- Add comprehensive tests

Closes #8323


* fix(bedrock): prevent double UUID in create_file S3 key (#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes #21546
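The "only set when unset" fix is essentially `dict.setdefault`; a minimal sketch with illustrative values:

```python
def merge_upload_url(litellm_params: dict, api_base: str) -> dict:
    # Only fall back to api_base when the provider's transform has not
    # already set upload_url, preserving the Bedrock-generated S3 key.
    litellm_params.setdefault("upload_url", api_base)
    return litellm_params

# Provider already set upload_url (UUID-2): preserved, not overwritten.
assert merge_upload_url({"upload_url": "s3://bucket/uuid-2"}, "s3://bucket/uuid-1")["upload_url"] == "s3://bucket/uuid-2"
# Nothing set yet: api_base is used.
assert merge_upload_url({}, "s3://bucket/uuid-1")["upload_url"] == "s3://bucket/uuid-1"
```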

* feat(semantic-cache): support configurable vector dimensions for Qdrant (#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes #9377
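The default-resolution order described above (explicit argument, then env var, then 1536) can be sketched as:

```python
import os

def resolve_vector_size(vector_size=None) -> int:
    # Explicit argument wins; else the QDRANT_VECTOR_SIZE env var; else 1536
    # (the previous hardcoded constant), keeping old configs working.
    if vector_size is not None:
        return int(vector_size)
    return int(os.environ.get("QDRANT_VECTOR_SIZE", 1536))

assert resolve_vector_size(1024) == 1024  # e.g. Stella embeddings
os.environ.pop("QDRANT_VECTOR_SIZE", None)
assert resolve_vector_size() == 1536      # backward-compatible default
```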

* fix(utils): normalize camelCase thinking param keys to snake_case (#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().
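A minimal sketch of the camelCase-to-snake_case normalization (the function name is illustrative, not the code added to `completion()`):

```python
import re

def normalize_thinking_keys(thinking: dict) -> dict:
    # Convert camelCase keys like budgetTokens to snake_case budget_tokens,
    # as sent by some OpenAI-compatible clients.
    return {re.sub(r"(?<!^)(?=[A-Z])", "_", k).lower(): v for k, v in thinking.items()}

assert normalize_thinking_keys({"budgetTokens": 1024, "type": "enabled"}) == {
    "budget_tokens": 1024,
    "type": "enabled",
}
```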

* feat: add optional digest mode for Slack alert types (#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval
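The aggregation described above, keyed on (alert_type, request_model, api_base), can be sketched as a small in-memory digest; class and field names are illustrative, not the shipped alerting code:

```python
from collections import defaultdict

class AlertDigest:
    def __init__(self, interval_seconds: int = 86400):
        self.interval = interval_seconds  # fixed window length
        self.buckets: dict = defaultdict(lambda: {"count": 0, "start": None, "end": None})

    def record(self, alert_type: str, request_model: str, api_base: str, now: float) -> None:
        # Duplicate alerts within the window update one bucket per digest key.
        b = self.buckets[(alert_type, request_model, api_base)]
        if b["start"] is None:
            b["start"] = now
        b["end"] = now
        b["count"] += 1

    def flush(self) -> list:
        # Emit one summary per key (count + start/end timestamps), then reset.
        summaries = [{"key": k, **v} for k, v in self.buckets.items()]
        self.buckets.clear()
        return summaries

d = AlertDigest()
for t in (0.0, 10.0, 20.0):
    d.record("llm_requests_hanging", "gpt-4", "https://api.openai.com", now=t)
(summary,) = d.flush()
assert summary["count"] == 3 and summary["start"] == 0.0 and summary["end"] == 20.0
```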

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (#21871)

* feat(ui/): new guardrails monitor demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: #8304

* fix: add ruff PLR0915 noqa for files with too many statements

* Add gollem Go agent framework cookbook example (#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request #21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes #21861

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)" (#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues:

* Add Noma guardrails v2 based on custom guardrails (#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(anthropic): sanitize tool_use IDs in assistant messages

Apply _sanitize_anthropic_tool_use_id to tool_use blocks in
convert_to_anthropic_tool_invoke, not just tool_result blocks.
IDs from external frameworks (e.g. MiniMax) may contain characters
like colons that violate Anthropic's ^[a-zA-Z0-9_-]+$ pattern.

Adds test for invalid ID sanitization in tool_use blocks.
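The sanitization described above can be sketched as follows; an illustrative stand-in for `_sanitize_anthropic_tool_use_id`, replacing disallowed characters with underscores:

```python
import re

# Anthropic's allowed tool_use ID pattern, per the commit message above.
VALID_ID = re.compile(r"^[a-zA-Z0-9_-]+$")

def sanitize_tool_use_id(tool_use_id: str) -> str:
    # Replace any character outside the allowed set (e.g. colons from
    # external frameworks like MiniMax) with an underscore.
    return re.sub(r"[^a-zA-Z0-9_-]", "_", tool_use_id)

assert VALID_ID.match(sanitize_tool_use_id("call:minimax:123"))
assert sanitize_tool_use_id("call_abc-1") == "call_abc-1"  # already valid, unchanged
```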

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Chesars <cesarponce19544@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>
damhau pushed a commit to damhau/litellm that referenced this pull request Feb 26, 2026
…erriAI#21970)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (BerriAI#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (BerriAI#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (BerriAI#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes BerriAI#21544

* feat(proxy): add custom favicon support\n\nAdd ability to configure a custom favicon for the litellm proxy UI.\n\n- Add favicon_url field to UIThemeConfig model\n- Add LITELLM_FAVICON_URL env var support\n- Add /get_favicon endpoint to serve custom favicons\n- Update ThemeContext to dynamically set favicon\n- Add favicon URL input to UI theme settings page\n- Add comprehensive tests\n\nCloses BerriAI#8323 (BerriAI#21653)

* fix(bedrock): prevent double UUID in create_file S3 key (BerriAI#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes BerriAI#21546

* feat(semantic-cache): support configurable vector dimensions for Qdrant (BerriAI#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes BerriAI#9377

* fix(utils): normalize camelCase thinking param keys to snake_case (BerriAI#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().

* feat: add optional digest mode for Slack alert types (BerriAI#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (BerriAI#21871)

* feat(ui/): new guardrails monitor demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (BerriAI#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (BerriAI#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: BerriAI#8304

* fix: add ruff PLR0915 noqa for files with too many statements
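
The annotated iterator protocol looks like this in miniature — a toy stand-in for CustomStreamWrapper (yielding `str` instead of ModelResponseStream) just to show the signatures:

```python
from typing import Iterator

class StreamWrapper:
    """Minimal illustration of the annotated __iter__/__next__ protocol methods."""

    def __init__(self, chunks):
        self._chunks = iter(chunks)

    def __iter__(self) -> Iterator[str]:
        return self

    def __next__(self) -> str:
        return next(self._chunks)  # raises StopIteration when exhausted
```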

* Add gollem Go agent framework cookbook example (BerriAI#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (BerriAI#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (BerriAI#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (BerriAI#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request BerriAI#21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes BerriAI#21861

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (BerriAI#21870)" (BerriAI#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues

* Add Noma guardrails v2 based on custom guardrails (BerriAI#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(tests): use counter-based mock for time.time in prisma self-heal test

The test used a fixed side_effect list for time.time(), but the number
of calls varies by Python version, causing StopIteration on 3.12 and
AssertionError on 3.14. Replace with an infinite counter-based callable
and assert the timestamp was updated rather than checking for an exact
value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
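
A counter-based stand-in for `time.time()` like the one described above can be built from `itertools.count`; the factory name is illustrative:

```python
import itertools

def make_fake_time(start=1_000_000.0, step=1.0):
    """Return a time.time() stand-in that never exhausts, unlike a fixed side_effect list."""
    counter = itertools.count()
    return lambda: start + step * next(counter)
```

Because the callable never runs out, the test no longer depends on how many times the code under test calls `time.time()` on a given Python version.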

* fix(tests): use absolute path for model_prices JSON in validation test

The test used a relative path 'litellm/model_prices_and_context_window.json'
which only works when pytest runs from a specific working directory.
Use os.path based on __file__ to resolve the path reliably.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update tests/test_litellm/test_utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(tests): use os.path instead of Path to avoid NameError

Path is not imported at module level. Use os.path.join which is already
available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* clean up mock transport: remove streaming, add defensive parsing

* docs: add Google GenAI SDK tutorial (JS & Python) (BerriAI#21885)

* docs: add Google GenAI SDK tutorial for JS and Python

Add tutorial for using Google's official GenAI SDK (@google/genai for JS,
google-genai for Python) with LiteLLM proxy. Covers pass-through and
native router endpoints, streaming, multi-turn chat, and multi-provider
routing via model_group_alias. Also updates pass-through docs to use the
new SDK replacing the deprecated @google/generative-ai.

* fix(docs): correct Python SDK env var name in GenAI tutorial

GOOGLE_GENAI_API_KEY does not exist in the google-genai SDK.
The correct env var is GEMINI_API_KEY (or GOOGLE_API_KEY).
Also note that the Python SDK has no base URL env var.

* fix(docs): replace non-existent GOOGLE_GENAI_BASE_URL env var in interactions.md

The Python google-genai SDK does not read GOOGLE_GENAI_BASE_URL.
Use http_options={"base_url": "..."} in code instead.

* docs: add network mock benchmarking section

* docs: tweak benchmarks wording

* fix: add auth headers and empty latencies guard to benchmark script

* refactor: use method-level import for MockOpenAITransport

* fix: guard print_aggregate against empty latencies

* fix: add INCOMPLETE status to Interactions API enum and test

Google added INCOMPLETE to the Interactions API OpenAPI spec status enum.
Update both the Status3 enum in the SDK types and the test's expected
values to match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Guardrail Monitor - measure guardrail reliability in prod  (BerriAI#21944)

* fix: fix log viewer for guardrail monitoring

* feat(ui/): fix rendering logs per guardrail

* fix: fix viewing logs on overview tab of guardrail

* fix: log viewer

* fix: fix naming to align with metric

* docs: add performance & reliability section to v1.81.14 release notes

* fix(tests): make RPM limit test sequential to avoid race condition

Concurrent requests via run_in_executor + asyncio.gather caused a race
condition where more requests slipped through the rate limiter than
expected, leading to flaky test failures (e.g. 3 successes instead of 2
with rpm_limit=2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
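
Why concurrency leaks requests past the limit is easiest to see against a naive fixed-window limiter like the sketch below (illustrative, not LiteLLM's limiter): the check and the increment are separate steps, so interleaved callers can both pass the check.

```python
import time

class RpmLimiter:
    """Naive fixed-window RPM limiter; not safe under concurrent access."""

    def __init__(self, rpm_limit):
        self.rpm_limit = rpm_limit
        self.window_start = time.time()
        self.count = 0

    def allow(self, now=None):
        now = now if now is not None else time.time()
        if now - self.window_start >= 60:
            self.window_start, self.count = now, 0  # start a new window
        if self.count >= self.rpm_limit:
            return False            # check ...
        self.count += 1             # ... and increment are not atomic
        return True
```

Running requests sequentially in the test sidesteps the check/increment race entirely.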

* feat: Singapore guardrail policies (PDPA + MAS AI Risk Management) (BerriAI#21948)

* feat: Singapore PDPA PII protection guardrail policy template

Add Singapore Personal Data Protection Act (PDPA) guardrail support:

Regex patterns (patterns.json):
- sg_nric: NRIC/FIN detection ([STFGM] + 7 digits + checksum letter)
- sg_phone: Singapore phone numbers (+65/0065/65 prefix)
- sg_postal_code: 6-digit postal codes (contextual)
- passport_singapore: Passport numbers (E/K + 7 digits, contextual)
- sg_uen: Unique Entity Numbers (3 formats)
- sg_bank_account: Bank account numbers (dash format, contextual)

YAML policy templates (5 sub-guardrails):
- sg_pdpa_personal_identifiers: s.13 Consent
- sg_pdpa_sensitive_data: Advisory Guidelines
- sg_pdpa_do_not_call: Part IX DNC Registry
- sg_pdpa_data_transfer: s.26 overseas transfers
- sg_pdpa_profiling_automated_decisions: Model AI Governance Framework

Policy template entry in policy_templates.json with 9 guardrail definitions
(4 regex-based + 5 YAML conditional keyword matching).

Tests:
- test_sg_patterns.py: regex pattern unit tests
- test_sg_pdpa_guardrails.py: conditional keyword matching tests (100+ cases)

* feat: MAS AI Risk Management Guidelines guardrail policy template

Add Monetary Authority of Singapore (MAS) AI Risk Management Guidelines
guardrail support for financial institutions:

YAML policy templates (5 sub-guardrails):
- sg_mas_fairness_bias: Blocks discriminatory financial AI (credit/loans/insurance by protected attributes)
- sg_mas_transparency_explainability: Blocks opaque/unexplainable AI for consequential financial decisions
- sg_mas_human_oversight: Blocks fully automated financial decisions without human-in-the-loop
- sg_mas_data_governance: Blocks unauthorized sharing/mishandling of financial customer data
- sg_mas_model_security: Blocks adversarial attacks, model poisoning, inversion on financial AI

Policy template entry in policy_templates.json with 5 guardrail definitions.
Aligned with MAS FEAT Principles, Project MindForge, and NIST AI RMF.

Tests:
- test_sg_mas_ai_guardrails.py: conditional keyword matching tests (100+ cases)

* fix: address SG pattern review feedback

- Update NRIC lowercase test for IGNORECASE runtime behavior
- Add keyword context guard to sg_uen pattern to reduce false positives

* docs: clarify MAS AIRM timeline references

- Explicitly mark MAS AIRM as Nov 2025 consultation draft
- Add 2018 qualifier for FEAT principles in MAS policy descriptions
- Update MAS guardrail wording to avoid release-year ambiguity

* chore: commit resolved MAS policy conflicts

* test:

* chore:
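
The regex patterns listed above can be approximated as below. These are illustrative simplifications, not the shipped patterns.json: the real sg_nric pattern additionally validates the checksum letter, and several patterns are gated on context keywords.

```python
import re

# [STFGM] prefix + 7 digits + trailing letter (checksum NOT validated here)
SG_NRIC = re.compile(r"\b[STFGM]\d{7}[A-Z]\b")

# 6-digit postal code; the shipped pattern requires surrounding context keywords
SG_POSTAL = re.compile(r"\b\d{6}\b")
```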

* Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs  (BerriAI#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.

* adjust blog posts to fetch from github first

* feat(videos): add variant parameter to video content download (BerriAI#21955)

OpenAI video models support downloading variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.
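
The URL construction described above (OpenAI appends `?variant=<value>` to the GET URL) can be sketched like this; the function name and path shape are illustrative, not LiteLLM's transform code:

```python
from urllib.parse import urlencode

def build_video_content_url(base_url, video_id, variant=None):
    """Append ?variant=<value> (e.g. thumbnail, spritesheet) when requested."""
    url = f"{base_url}/videos/{video_id}/content"
    if variant is not None:
        url += "?" + urlencode({"variant": variant})
    return url
```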

* fixing path

* adjust blog post path

* Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.

* ui changes

* adding tests

* adjust default aggregation threshold

* fix(videos): pass api_key from litellm_params to video remix handlers (BerriAI#21965)

video_remix_handler and async_video_remix_handler were not falling back
to litellm_params.api_key when the api_key parameter was None, causing
Authorization: Bearer None to be sent to the provider. This matches the
pattern already used by async_video_generation_handler.

* adding testing coverage + fixing flaky tests

* fix(ollama): thread api_base through get_model_info and add graceful fallback

When users pass api_base to litellm.completion() for Ollama, the model
info fetch (context window, function_calling support) was ignoring the
user's api_base and only reading OLLAMA_API_BASE env var or defaulting
to localhost:11434. This caused confusing errors in logs when Ollama
runs on a remote server.

Thread api_base from litellm_params through the get_model_info call
chain so OllamaConfig.get_model_info() uses the correct server. Also
return safe defaults instead of raising when the server is unreachable.

Fixes BerriAI#21967
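
The resolution order described above — explicit param, then `OLLAMA_API_BASE`, then the localhost default — reduces to a one-liner; the function name is illustrative:

```python
import os

def resolve_ollama_api_base(api_base=None):
    """Explicit api_base wins, then the OLLAMA_API_BASE env var, then localhost."""
    return api_base or os.getenv("OLLAMA_API_BASE") or "http://localhost:11434"
```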

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>
damhau pushed a commit to damhau/litellm that referenced this pull request Feb 26, 2026
…voke (BerriAI#21964)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (BerriAI#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (BerriAI#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (BerriAI#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes BerriAI#21544

* feat(proxy): add custom favicon support (BerriAI#21653)

Add ability to configure a custom favicon for the litellm proxy UI.

- Add favicon_url field to UIThemeConfig model
- Add LITELLM_FAVICON_URL env var support
- Add /get_favicon endpoint to serve custom favicons
- Update ThemeContext to dynamically set favicon
- Add favicon URL input to UI theme settings page
- Add comprehensive tests

Closes BerriAI#8323

* fix(bedrock): prevent double UUID in create_file S3 key (BerriAI#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes BerriAI#21546

* feat(semantic-cache): support configurable vector dimensions for Qdrant (BerriAI#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes BerriAI#9377

* fix(utils): normalize camelCase thinking param keys to snake_case (BerriAI#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().

* feat: add optional digest mode for Slack alert types (BerriAI#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (BerriAI#21871)

* feat(ui/): new guardrails monitor 'demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (BerriAI#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (BerriAI#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: BerriAI#8304

* fix: add ruff PLR0915 noqa for files with too many statements

* Add gollem Go agent framework cookbook example (BerriAI#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (BerriAI#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (BerriAI#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (BerriAI#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request BerriAI#21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes BerriAI#21861

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (BerriAI#21870)" (BerriAI#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues:

* Add Noma guardrails v2 based on custom guardrails (BerriAI#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(tests): use counter-based mock for time.time in prisma self-heal test

The test used a fixed side_effect list for time.time(), but the number
of calls varies by Python version, causing StopIteration on 3.12 and
AssertionError on 3.14. Replace with an infinite counter-based callable
and assert the timestamp was updated rather than checking for an exact
value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(tests): use absolute path for model_prices JSON in validation test

The test used a relative path 'litellm/model_prices_and_context_window.json'
which only works when pytest runs from a specific working directory.
Use os.path based on __file__ to resolve the path reliably.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update tests/test_litellm/test_utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(tests): use os.path instead of Path to avoid NameError

Path is not imported at module level. Use os.path.join which is already
available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* clean up mock transport: remove streaming, add defensive parsing

* docs: add Google GenAI SDK tutorial (JS & Python) (BerriAI#21885)

* docs: add Google GenAI SDK tutorial for JS and Python

Add tutorial for using Google's official GenAI SDK (@google/genai for JS,
google-genai for Python) with LiteLLM proxy. Covers pass-through and
native router endpoints, streaming, multi-turn chat, and multi-provider
routing via model_group_alias. Also updates pass-through docs to use the
new SDK replacing the deprecated @google/generative-ai.

* fix(docs): correct Python SDK env var name in GenAI tutorial

GOOGLE_GENAI_API_KEY does not exist in the google-genai SDK.
The correct env var is GEMINI_API_KEY (or GOOGLE_API_KEY).
Also note that the Python SDK has no base URL env var.

* fix(docs): replace non-existent GOOGLE_GENAI_BASE_URL env var in interactions.md

The Python google-genai SDK does not read GOOGLE_GENAI_BASE_URL.
Use http_options={"base_url": "..."} in code instead.

* docs: add network mock benchmarking section

* docs: tweak benchmarks wording

* fix: add auth headers and empty latencies guard to benchmark script

* refactor: use method-level import for MockOpenAITransport

* fix: guard print_aggregate against empty latencies

* fix: add INCOMPLETE status to Interactions API enum and test

Google added INCOMPLETE to the Interactions API OpenAPI spec status enum.
Update both the Status3 enum in the SDK types and the test's expected
values to match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Guardrail Monitor - measure guardrail reliability in prod  (BerriAI#21944)

* fix: fix log viewer for guardrail monitoring

* feat(ui/): fix rendering logs per guardrail

* fix: fix viewing logs on overview tab of guardrail

* fix: log viewer

* fix: fix naming to align with metric

* docs: add performance & reliability section to v1.81.14 release notes

* fix(tests): make RPM limit test sequential to avoid race condition

Concurrent requests via run_in_executor + asyncio.gather caused a race
condition where more requests slipped through the rate limiter than
expected, leading to flaky test failures (e.g. 3 successes instead of 2
with rpm_limit=2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: Singapore guardrail policies (PDPA + MAS AI Risk Management) (BerriAI#21948)

* feat: Singapore PDPA PII protection guardrail policy template

Add Singapore Personal Data Protection Act (PDPA) guardrail support:

Regex patterns (patterns.json):
- sg_nric: NRIC/FIN detection ([STFGM] + 7 digits + checksum letter)
- sg_phone: Singapore phone numbers (+65/0065/65 prefix)
- sg_postal_code: 6-digit postal codes (contextual)
- passport_singapore: Passport numbers (E/K + 7 digits, contextual)
- sg_uen: Unique Entity Numbers (3 formats)
- sg_bank_account: Bank account numbers (dash format, contextual)

YAML policy templates (5 sub-guardrails):
- sg_pdpa_personal_identifiers: s.13 Consent
- sg_pdpa_sensitive_data: Advisory Guidelines
- sg_pdpa_do_not_call: Part IX DNC Registry
- sg_pdpa_data_transfer: s.26 overseas transfers
- sg_pdpa_profiling_automated_decisions: Model AI Governance Framework

Policy template entry in policy_templates.json with 9 guardrail definitions
(4 regex-based + 5 YAML conditional keyword matching).

Tests:
- test_sg_patterns.py: regex pattern unit tests
- test_sg_pdpa_guardrails.py: conditional keyword matching tests (100+ cases)

* feat: MAS AI Risk Management Guidelines guardrail policy template

Add Monetary Authority of Singapore (MAS) AI Risk Management Guidelines
guardrail support for financial institutions:

YAML policy templates (5 sub-guardrails):
- sg_mas_fairness_bias: Blocks discriminatory financial AI (credit/loans/insurance by protected attributes)
- sg_mas_transparency_explainability: Blocks opaque/unexplainable AI for consequential financial decisions
- sg_mas_human_oversight: Blocks fully automated financial decisions without human-in-the-loop
- sg_mas_data_governance: Blocks unauthorized sharing/mishandling of financial customer data
- sg_mas_model_security: Blocks adversarial attacks, model poisoning, inversion on financial AI

Policy template entry in policy_templates.json with 5 guardrail definitions.
Aligned with MAS FEAT Principles, Project MindForge, and NIST AI RMF.

Tests:
- test_sg_mas_ai_guardrails.py: conditional keyword matching tests (100+ cases)

* fix: address SG pattern review feedback

- Update NRIC lowercase test for IGNORECASE runtime behavior
- Add keyword context guard to sg_uen pattern to reduce false positives

* docs: clarify MAS AIRM timeline references

- Explicitly mark MAS AIRM as Nov 2025 consultation draft
- Add 2018 qualifier for FEAT principles in MAS policy descriptions
- Update MAS guardrail wording to avoid release-year ambiguity

* chore: commit resolved MAS policy conflicts

* test:

* chore:

* Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs  (BerriAI#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.

* adjust blog posts to fetch from github first

* feat(videos): add variant parameter to video content download (BerriAI#21955)

openai videos models support the features to download variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.
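As a rough illustration of the URL-building step described above (function and parameter names here are illustrative sketches, not the actual LiteLLM signatures):

```python
from typing import Optional
from urllib.parse import urlencode

def build_video_content_url(base_url: str, variant: Optional[str] = None) -> str:
    """Append ?variant=<value> to the GET URL only when a variant is
    requested, mirroring the OpenAI behavior described above (sketch)."""
    if variant is None:
        return base_url
    return f"{base_url}?{urlencode({'variant': variant})}"
```

Providers that ignore the parameter would simply never read `variant` in their own transform.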

* fixing path

* adjust blog post path

* Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.

* ui changes

* adding tests

* fix(anthropic): sanitize tool_use IDs in assistant messages

Apply _sanitize_anthropic_tool_use_id to tool_use blocks in
convert_to_anthropic_tool_invoke, not just tool_result blocks.
IDs from external frameworks (e.g. MiniMax) may contain characters
like colons that violate Anthropic's ^[a-zA-Z0-9_-]+$ pattern.

Adds test for invalid ID sanitization in tool_use blocks.
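A standalone sketch of the sanitization idea: any character outside Anthropic's allowed `^[a-zA-Z0-9_-]+$` set is replaced. The real helper is `_sanitize_anthropic_tool_use_id` inside LiteLLM; the function name and the `_` replacement character below are assumptions for illustration.

```python
import re

def sanitize_tool_use_id(tool_id: str) -> str:
    # Replace anything outside [a-zA-Z0-9_-] (e.g. colons from MiniMax IDs)
    # so the result satisfies Anthropic's ^[a-zA-Z0-9_-]+$ pattern.
    return re.sub(r"[^a-zA-Z0-9_-]", "_", tool_id)
```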

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Chesars <cesarponce19544@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>
Sameerlite added a commit that referenced this pull request Mar 3, 2026
…21970)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes #21544
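The priority rule described above can be sketched as follows; the function and field names are illustrative assumptions, not the actual endpoint internals:

```python
def resolve_target_deployments(deployments, model_names=None, model_ids=None):
    """Sketch of the selection rule: model_ids, when provided, takes
    priority over model_names; model_names alone keeps the old behavior."""
    if model_ids:
        wanted = set(model_ids)
        return [d for d in deployments if d["model_id"] in wanted]
    if model_names:
        wanted = set(model_names)
        return [d for d in deployments if d["model_name"] in wanted]
    return []
```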

* feat(proxy): add custom favicon support (#21653)

Add ability to configure a custom favicon for the litellm proxy UI.

- Add favicon_url field to UIThemeConfig model
- Add LITELLM_FAVICON_URL env var support
- Add /get_favicon endpoint to serve custom favicons
- Update ThemeContext to dynamically set favicon
- Add favicon URL input to UI theme settings page
- Add comprehensive tests

Closes #8323

* fix(bedrock): prevent double UUID in create_file S3 key (#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes #21546
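The "only set when not already set" fix reduces to a `setdefault`; this is a minimal sketch with assumed names, not the actual handler code:

```python
def finalize_upload_url(litellm_params: dict, api_base: str) -> dict:
    """Keep the provider's upload_url (UUID-2) when the transform already
    set one; otherwise fall back to api_base (UUID-1), per the fix above."""
    litellm_params.setdefault("upload_url", api_base)
    return litellm_params
```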

* feat(semantic-cache): support configurable vector dimensions for Qdrant (#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes #9377

* fix(utils): normalize camelCase thinking param keys to snake_case (#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().
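A minimal sketch of such normalization (the real change lives in `completion()`; this generic camelCase-to-snake_case converter is an assumption for illustration):

```python
import re

def normalize_thinking_param(thinking: dict) -> dict:
    """camelCase keys such as budgetTokens become snake_case
    (budget_tokens); already-snake keys pass through unchanged."""
    def camel_to_snake(key: str) -> str:
        return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", key).lower()
    return {camel_to_snake(k): v for k, v in thinking.items()}
```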

* feat: add optional digest mode for Slack alert types (#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
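The digest key and fixed-interval window above can be sketched like this (a hypothetical standalone function, not the shipped implementation):

```python
def digest_key_and_window(alert_type, request_model, api_base, timestamp, digest_interval=86400):
    """Alerts sharing (alert_type, request_model, api_base) within the same
    fixed interval collapse into one digest entry keyed by the window start."""
    window_start = int(timestamp // digest_interval) * digest_interval
    return (alert_type, request_model, api_base, window_start)
```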

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (#21871)

* feat(ui/): new guardrails monitor demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: #8304

* fix: add ruff PLR0915 noqa for files with too many statements

* Add gollem Go agent framework cookbook example (#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request #21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes #21861

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)" (#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues:

* Add Noma guardrails v2 based on custom guardrails (#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(tests): use counter-based mock for time.time in prisma self-heal test

The test used a fixed side_effect list for time.time(), but the number
of calls varies by Python version, causing StopIteration on 3.12 and
AssertionError on 3.14. Replace with an infinite counter-based callable
and assert the timestamp was updated rather than checking for an exact
value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
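The counter-based approach can be sketched as a small factory (an illustrative sketch, not the test's exact code), usable with `unittest.mock.patch("time.time", new=make_monotonic_clock())`:

```python
import itertools

def make_monotonic_clock(start=1000.0, step=1.0):
    """Stand-in for time.time(): every call advances by `step`, so the mock
    never raises StopIteration no matter how many times the code under test
    reads the clock, across Python versions with different call counts."""
    counter = itertools.count()
    return lambda: start + step * next(counter)
```

The test then asserts the recorded timestamp moved forward rather than pinning an exact value.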

* fix(tests): use absolute path for model_prices JSON in validation test

The test used a relative path 'litellm/model_prices_and_context_window.json'
which only works when pytest runs from a specific working directory.
Use os.path based on __file__ to resolve the path reliably.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Update tests/test_litellm/test_utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(tests): use os.path instead of Path to avoid NameError

Path is not imported at module level. Use os.path.join which is already
available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* clean up mock transport: remove streaming, add defensive parsing

* docs: add Google GenAI SDK tutorial (JS & Python) (#21885)

* docs: add Google GenAI SDK tutorial for JS and Python

Add tutorial for using Google's official GenAI SDK (@google/genai for JS,
google-genai for Python) with LiteLLM proxy. Covers pass-through and
native router endpoints, streaming, multi-turn chat, and multi-provider
routing via model_group_alias. Also updates pass-through docs to use the
new SDK replacing the deprecated @google/generative-ai.

* fix(docs): correct Python SDK env var name in GenAI tutorial

GOOGLE_GENAI_API_KEY does not exist in the google-genai SDK.
The correct env var is GEMINI_API_KEY (or GOOGLE_API_KEY).
Also note that the Python SDK has no base URL env var.

* fix(docs): replace non-existent GOOGLE_GENAI_BASE_URL env var in interactions.md

The Python google-genai SDK does not read GOOGLE_GENAI_BASE_URL.
Use http_options={"base_url": "..."} in code instead.

* docs: add network mock benchmarking section

* docs: tweak benchmarks wording

* fix: add auth headers and empty latencies guard to benchmark script

* refactor: use method-level import for MockOpenAITransport

* fix: guard print_aggregate against empty latencies

* fix: add INCOMPLETE status to Interactions API enum and test

Google added INCOMPLETE to the Interactions API OpenAPI spec status enum.
Update both the Status3 enum in the SDK types and the test's expected
values to match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Guardrail Monitor - measure guardrail reliability in prod  (#21944)

* fix: fix log viewer for guardrail monitoring

* feat(ui/): fix rendering logs per guardrail

* fix: fix viewing logs on overview tab of guardrail

* fix: log viewer

* fix: fix naming to align with metric

* docs: add performance & reliability section to v1.81.14 release notes

* fix(tests): make RPM limit test sequential to avoid race condition

Concurrent requests via run_in_executor + asyncio.gather caused a race
condition where more requests slipped through the rate limiter than
expected, leading to flaky test failures (e.g. 3 successes instead of 2
with rpm_limit=2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: Singapore guardrail policies (PDPA + MAS AI Risk Management) (#21948)

* feat: Singapore PDPA PII protection guardrail policy template

Add Singapore Personal Data Protection Act (PDPA) guardrail support:

Regex patterns (patterns.json):
- sg_nric: NRIC/FIN detection ([STFGM] + 7 digits + checksum letter)
- sg_phone: Singapore phone numbers (+65/0065/65 prefix)
- sg_postal_code: 6-digit postal codes (contextual)
- passport_singapore: Passport numbers (E/K + 7 digits, contextual)
- sg_uen: Unique Entity Numbers (3 formats)
- sg_bank_account: Bank account numbers (dash format, contextual)

YAML policy templates (5 sub-guardrails):
- sg_pdpa_personal_identifiers: s.13 Consent
- sg_pdpa_sensitive_data: Advisory Guidelines
- sg_pdpa_do_not_call: Part IX DNC Registry
- sg_pdpa_data_transfer: s.26 overseas transfers
- sg_pdpa_profiling_automated_decisions: Model AI Governance Framework

Policy template entry in policy_templates.json with 9 guardrail definitions
(4 regex-based + 5 YAML conditional keyword matching).

Tests:
- test_sg_patterns.py: regex pattern unit tests
- test_sg_pdpa_guardrails.py: conditional keyword matching tests (100+ cases)
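A rough approximation of the sg_nric shape described above: a prefix letter in [STFGM], seven digits, and a trailing letter. The shipped pattern is stricter (checksum and context handling); this sketch only checks the shape, compiled with IGNORECASE to mirror the runtime behavior:

```python
import re

# Shape-only approximation of the sg_nric pattern (no checksum validation).
SG_NRIC_RE = re.compile(r"\b[STFGM]\d{7}[A-Z]\b", re.IGNORECASE)

def contains_nric(text: str) -> bool:
    return SG_NRIC_RE.search(text) is not None
```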

* feat: MAS AI Risk Management Guidelines guardrail policy template

Add Monetary Authority of Singapore (MAS) AI Risk Management Guidelines
guardrail support for financial institutions:

YAML policy templates (5 sub-guardrails):
- sg_mas_fairness_bias: Blocks discriminatory financial AI (credit/loans/insurance by protected attributes)
- sg_mas_transparency_explainability: Blocks opaque/unexplainable AI for consequential financial decisions
- sg_mas_human_oversight: Blocks fully automated financial decisions without human-in-the-loop
- sg_mas_data_governance: Blocks unauthorized sharing/mishandling of financial customer data
- sg_mas_model_security: Blocks adversarial attacks, model poisoning, inversion on financial AI

Policy template entry in policy_templates.json with 5 guardrail definitions.
Aligned with MAS FEAT Principles, Project MindForge, and NIST AI RMF.

Tests:
- test_sg_mas_ai_guardrails.py: conditional keyword matching tests (100+ cases)

* fix: address SG pattern review feedback

- Update NRIC lowercase test for IGNORECASE runtime behavior
- Add keyword context guard to sg_uen pattern to reduce false positives

* docs: clarify MAS AIRM timeline references

- Explicitly mark MAS AIRM as Nov 2025 consultation draft
- Add 2018 qualifier for FEAT principles in MAS policy descriptions
- Update MAS guardrail wording to avoid release-year ambiguity

* chore: commit resolved MAS policy conflicts

* test:

* chore:

* Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs  (#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.

* adjust blog posts to fetch from github first

* feat(videos): add variant parameter to video content download (#21955)

OpenAI video models support downloading variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.

* fixing path

* adjust blog post path

* Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.

* ui changes

* adding tests

* adjust default aggregation threshold

* fix(videos): pass api_key from litellm_params to video remix handlers (#21965)

video_remix_handler and async_video_remix_handler were not falling back
to litellm_params.api_key when the api_key parameter was None, causing
Authorization: Bearer None to be sent to the provider. This matches the
pattern already used by async_video_generation_handler.

* adding testing coverage + fixing flaky tests

* fix(ollama): thread api_base through get_model_info and add graceful fallback

When users pass api_base to litellm.completion() for Ollama, the model
info fetch (context window, function_calling support) was ignoring the
user's api_base and only reading OLLAMA_API_BASE env var or defaulting
to localhost:11434. This caused confusing errors in logs when Ollama
runs on a remote server.

Thread api_base from litellm_params through the get_model_info call
chain so OllamaConfig.get_model_info() uses the correct server. Also
return safe defaults instead of raising when the server is unreachable.

Fixes #21967

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>
Sameerlite added a commit that referenced this pull request Mar 3, 2026
…voke (#21964)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes #21544

* feat(proxy): add custom favicon support\n\nAdd ability to configure a custom favicon for the litellm proxy UI.\n\n- Add favicon_url field to UIThemeConfig model\n- Add LITELLM_FAVICON_URL env var support\n- Add /get_favicon endpoint to serve custom favicons\n- Update ThemeContext to dynamically set favicon\n- Add favicon URL input to UI theme settings page\n- Add comprehensive tests\n\nCloses #8323 (#21653)

* fix(bedrock): prevent double UUID in create_file S3 key (#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes #21546

* feat(semantic-cache): support configurable vector dimensions for Qdrant (#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes #9377

* fix(utils): normalize camelCase thinking param keys to snake_case (#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().

* feat: add optional digest mode for Slack alert types (#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (#21871)

* feat(ui/): new guardrails monitor 'demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: #8304

* fix: add ruff PLR0915 noqa for files with too many statements

* Add gollem Go agent framework cookbook example (#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request #21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes #21861

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (#21870)" (#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues

* Add Noma guardrails v2 based on custom guardrails (#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(tests): use counter-based mock for time.time in prisma self-heal test

The test used a fixed side_effect list for time.time(), but the number
of calls varies by Python version, causing StopIteration on 3.12 and
AssertionError on 3.14. Replace with an infinite counter-based callable
and assert the timestamp was updated rather than checking for an exact
value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
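
The counter-based fix above can be sketched as follows — a minimal, hypothetical illustration (not the repo's actual test) of replacing a fixed `side_effect` list with an infinite callable so the mock survives any number of `time.time()` calls:

```python
import time
from itertools import count
from unittest.mock import patch

# Infinite, monotonically increasing clock: never raises StopIteration
# no matter how many times the code under test calls time.time().
_tick = count(start=1_000_000)

def fake_time() -> float:
    return float(next(_tick))

with patch("time.time", side_effect=fake_time):
    t1 = time.time()
    t2 = time.time()

# Assert the timestamp advanced rather than checking an exact value.
assert t2 > t1
```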

* fix(tests): use absolute path for model_prices JSON in validation test

The test used a relative path 'litellm/model_prices_and_context_window.json'
which only works when pytest runs from a specific working directory.
Use os.path based on __file__ to resolve the path reliably.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
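
A minimal sketch of the path fix, assuming a hypothetical file layout — resolve the JSON relative to the test module itself instead of the current working directory:

```python
import os

# Resolve relative to this file, not os.getcwd(), so the test passes
# regardless of where pytest is invoked from. The relative segments
# below are illustrative, not the repository's actual layout.
HERE = os.path.dirname(os.path.abspath(__file__))
json_path = os.path.join(HERE, "..", "litellm", "model_prices_and_context_window.json")

assert os.path.isabs(json_path)
```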

* Update tests/test_litellm/test_utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(tests): use os.path instead of Path to avoid NameError

Path is not imported at module level. Use os.path.join which is already
available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* clean up mock transport: remove streaming, add defensive parsing

* docs: add Google GenAI SDK tutorial (JS & Python) (#21885)

* docs: add Google GenAI SDK tutorial for JS and Python

Add tutorial for using Google's official GenAI SDK (@google/genai for JS,
google-genai for Python) with LiteLLM proxy. Covers pass-through and
native router endpoints, streaming, multi-turn chat, and multi-provider
routing via model_group_alias. Also updates pass-through docs to use the
new SDK replacing the deprecated @google/generative-ai.

* fix(docs): correct Python SDK env var name in GenAI tutorial

GOOGLE_GENAI_API_KEY does not exist in the google-genai SDK.
The correct env var is GEMINI_API_KEY (or GOOGLE_API_KEY).
Also note that the Python SDK has no base URL env var.

* fix(docs): replace non-existent GOOGLE_GENAI_BASE_URL env var in interactions.md

The Python google-genai SDK does not read GOOGLE_GENAI_BASE_URL.
Use http_options={"base_url": "..."} in code instead.

* docs: add network mock benchmarking section

* docs: tweak benchmarks wording

* fix: add auth headers and empty latencies guard to benchmark script

* refactor: use method-level import for MockOpenAITransport

* fix: guard print_aggregate against empty latencies

* fix: add INCOMPLETE status to Interactions API enum and test

Google added INCOMPLETE to the Interactions API OpenAPI spec status enum.
Update both the Status3 enum in the SDK types and the test's expected
values to match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Guardrail Monitor - measure guardrail reliability in prod  (#21944)

* fix: fix log viewer for guardrail monitoring

* feat(ui/): fix rendering logs per guardrail

* fix: fix viewing logs on overview tab of guardrail

* fix: log viewer

* fix: fix naming to align with metric

* docs: add performance & reliability section to v1.81.14 release notes

* fix(tests): make RPM limit test sequential to avoid race condition

Concurrent requests via run_in_executor + asyncio.gather caused a race
condition where more requests slipped through the rate limiter than
expected, leading to flaky test failures (e.g. 3 successes instead of 2
with rpm_limit=2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
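
The sequential pattern can be sketched like this — a hypothetical illustration (names are not from the repo) of awaiting requests one at a time so the rate limiter sees a deterministic order instead of racing concurrent tasks:

```python
import asyncio

async def fake_request(i: int) -> int:
    # Stand-in for a proxied chat completion call.
    return i

async def main() -> list:
    # Sequential awaits: each request completes before the next starts,
    # so exactly rpm_limit requests succeed before the limiter trips.
    results = []
    for i in range(3):
        results.append(await fake_request(i))
    return results

out = asyncio.run(main())
assert out == [0, 1, 2]
```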

* feat: Singapore guardrail policies (PDPA + MAS AI Risk Management) (#21948)

* feat: Singapore PDPA PII protection guardrail policy template

Add Singapore Personal Data Protection Act (PDPA) guardrail support:

Regex patterns (patterns.json):
- sg_nric: NRIC/FIN detection ([STFGM] + 7 digits + checksum letter)
- sg_phone: Singapore phone numbers (+65/0065/65 prefix)
- sg_postal_code: 6-digit postal codes (contextual)
- passport_singapore: Passport numbers (E/K + 7 digits, contextual)
- sg_uen: Unique Entity Numbers (3 formats)
- sg_bank_account: Bank account numbers (dash format, contextual)
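
The NRIC/FIN shape described above can be sketched as a regex — a hedged illustration, not the template's actual pattern, and with the checksum-letter validation omitted:

```python
import re

# Prefix letter S/T/F/G/M, seven digits, one trailing letter.
# Real NRIC validation would also verify the checksum letter.
SG_NRIC = re.compile(r"\b[STFGM]\d{7}[A-Z]\b", re.IGNORECASE)

assert SG_NRIC.search("My NRIC is S1234567D")
assert SG_NRIC.search("t7654321z")       # IGNORECASE runtime behavior
assert not SG_NRIC.search("S123D")       # too few digits
```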

YAML policy templates (5 sub-guardrails):
- sg_pdpa_personal_identifiers: s.13 Consent
- sg_pdpa_sensitive_data: Advisory Guidelines
- sg_pdpa_do_not_call: Part IX DNC Registry
- sg_pdpa_data_transfer: s.26 overseas transfers
- sg_pdpa_profiling_automated_decisions: Model AI Governance Framework

Policy template entry in policy_templates.json with 9 guardrail definitions
(4 regex-based + 5 YAML conditional keyword matching).

Tests:
- test_sg_patterns.py: regex pattern unit tests
- test_sg_pdpa_guardrails.py: conditional keyword matching tests (100+ cases)

* feat: MAS AI Risk Management Guidelines guardrail policy template

Add Monetary Authority of Singapore (MAS) AI Risk Management Guidelines
guardrail support for financial institutions:

YAML policy templates (5 sub-guardrails):
- sg_mas_fairness_bias: Blocks discriminatory financial AI (credit/loans/insurance by protected attributes)
- sg_mas_transparency_explainability: Blocks opaque/unexplainable AI for consequential financial decisions
- sg_mas_human_oversight: Blocks fully automated financial decisions without human-in-the-loop
- sg_mas_data_governance: Blocks unauthorized sharing/mishandling of financial customer data
- sg_mas_model_security: Blocks adversarial attacks, model poisoning, inversion on financial AI

Policy template entry in policy_templates.json with 5 guardrail definitions.
Aligned with MAS FEAT Principles, Project MindForge, and NIST AI RMF.

Tests:
- test_sg_mas_ai_guardrails.py: conditional keyword matching tests (100+ cases)

* fix: address SG pattern review feedback

- Update NRIC lowercase test for IGNORECASE runtime behavior
- Add keyword context guard to sg_uen pattern to reduce false positives

* docs: clarify MAS AIRM timeline references

- Explicitly mark MAS AIRM as Nov 2025 consultation draft
- Add 2018 qualifier for FEAT principles in MAS policy descriptions
- Update MAS guardrail wording to avoid release-year ambiguity

* chore: commit resolved MAS policy conflicts

* test:

* chore:

* Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs  (#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.

* adjust blog posts to fetch from github first

* feat(videos): add variant parameter to video content download (#21955)

openai videos models support the features to download variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.
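
The URL plumbing can be sketched as below — a hypothetical helper (the function name and URL are illustrative, not LiteLLM's API) that appends `?variant=<value>` only when a variant is requested:

```python
from typing import Optional
from urllib.parse import urlencode

def build_content_url(base_url: str, variant: Optional[str] = None) -> str:
    # OpenAI-style providers get ?variant=<value>; when no variant is
    # given the base URL passes through unchanged.
    if variant is None:
        return base_url
    return f"{base_url}?{urlencode({'variant': variant})}"

assert build_content_url("https://api.example/videos/v1/content") == \
    "https://api.example/videos/v1/content"
assert build_content_url("https://api.example/videos/v1/content", "thumbnail") == \
    "https://api.example/videos/v1/content?variant=thumbnail"
```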

* fixing path

* adjust blog post path

* Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.

* ui changes

* adding tests

* fix(anthropic): sanitize tool_use IDs in assistant messages

Apply _sanitize_anthropic_tool_use_id to tool_use blocks in
convert_to_anthropic_tool_invoke, not just tool_result blocks.
IDs from external frameworks (e.g. MiniMax) may contain characters
like colons that violate Anthropic's ^[a-zA-Z0-9_-]+$ pattern.

Adds test for invalid ID sanitization in tool_use blocks.
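
The sanitization can be sketched as a one-liner — a hedged illustration (not the repo's `_sanitize_anthropic_tool_use_id`) that maps every character outside `^[a-zA-Z0-9_-]+$` to an underscore:

```python
import re

def sanitize_tool_use_id(tool_id: str) -> str:
    # Replace any character Anthropic disallows in tool_use IDs
    # (anything outside [a-zA-Z0-9_-]) with an underscore.
    return re.sub(r"[^a-zA-Z0-9_-]", "_", tool_id)

# e.g. MiniMax-style IDs containing colons or slashes:
assert sanitize_tool_use_id("call:abc/123") == "call_abc_123"
assert sanitize_tool_use_id("already_valid-id") == "already_valid-id"
```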

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Chesars <cesarponce19544@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>