
fix: add case-insensitive support for guardrail mode and actions #19480

Merged
krrishdholakia merged 1 commit into BerriAI:litellm_staging_01_22_2026 from Harshit28j:fix/prisma-migration-issues
Jan 22, 2026

Conversation


@Harshit28j Harshit28j commented Jan 21, 2026

Relevant issues

Follow-up to #19281.
Make sure #19281 is merged first.

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement (see details).
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable, with only minor issues.
  • 45-49 passing tests: acceptable, but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:
  • CI run for the last commit
    Link:
  • Merge / cherry-pick CI run
    Links:

Type

🐛 Bug Fix
✅ Test

Changes

Implemented case-normalized (case-insensitive) validation for guardrail mode and action values.

[screenshot]

[screenshot]
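A minimal sketch of the change, assuming illustrative names (`ALLOWED_GUARDRAIL_MODES` and `normalize_guardrail_mode` are not the actual LiteLLM identifiers):

```python
# Illustrative sketch of case-insensitive guardrail mode validation.
# The mode values mirror LiteLLM's documented guardrail modes; the
# constant and function names here are assumptions, not the real code.
ALLOWED_GUARDRAIL_MODES = {"pre_call", "post_call", "during_call", "logging_only"}

def normalize_guardrail_mode(mode: str) -> str:
    """Accept 'PRE_CALL', 'Pre_Call', etc. by normalizing case first."""
    normalized = mode.strip().lower()
    if normalized not in ALLOWED_GUARDRAIL_MODES:
        raise ValueError(f"Invalid guardrail mode: {mode!r}")
    return normalized
```

The same normalize-then-validate step would apply to guardrail action values.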


vercel bot commented Jan 21, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm | Deployment: Error | Review: Error | Updated (UTC): Jan 21, 2026 9:51am


@krrishdholakia krrishdholakia changed the base branch from main to litellm_staging_01_22_2026 January 22, 2026 04:51
@krrishdholakia krrishdholakia merged commit 22000f3 into BerriAI:litellm_staging_01_22_2026 Jan 22, 2026
3 of 7 checks passed
shriharsha98 added a commit to juspay/litellm that referenced this pull request Feb 13, 2026
* [Fix] LiteLLM VertexAI Pass through - ensuring incoming headers are forwarded down to target  (BerriAI#19524)

* test_vertex_passthrough_forwards_anthropic_beta_header

* add_incoming_headers

* fix linting errors

* fix lint

* fix: Send litellm_trace_id to Langfuse to link LiteLLM logs with Langfuse logs

* test: update langfuse trace_id tests to use litellm_trace_id

* Fix virtual keys table sorting

* Adding tests

* feat: add GMI Cloud provider support (BerriAI#19376)

* feat: add GMI Cloud provider support

Add GMI Cloud as an OpenAI-compatible provider with:
- Provider configuration in providers.json
- Documentation page with usage examples
- Model pricing for 16 models (Claude, GPT, DeepSeek, Gemini, etc.)
- Sidebar entry for docs navigation

* Add gmi_cloud to provider_endpoints_support.json

Add provider entry to pass CI validation check that ensures all
providers in openai_like/providers.json are documented.

* Fix provider key: gmi_cloud -> gmi

Match the provider key with providers.json

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* Cut chat_completion latency by ~21% by reducing pre-call processing time (BerriAI#19535)

* Adding scope to /models

* e2e test internal viewer sidebar

* Model Select for Create Team

* create team model select

* fixing build

* [Fix] VertexAI Pass through - Ensure only anthropic betas are forwarded down to LLM API (BerriAI#19542)

* fix ALLOWED_VERTEX_AI_PASSTHROUGH_HEADERS

* test_vertex_passthrough_forwards_anthropic_beta_header

* fix test_vertex_passthrough_forwards_anthropic_beta_header

* test_vertex_passthrough_does_not_forward_litellm_auth_token

* fix utils

* Using Anthropic Beta Features on Vertex AI

* test_forward_headers_from_request_x_pass_prefix

* [Fix] VertexAI Pass through - Ensure only anthropic betas are forwarded down to LLM API (BerriAI#19542)

* fix ALLOWED_VERTEX_AI_PASSTHROUGH_HEADERS

* test_vertex_passthrough_forwards_anthropic_beta_header

* fix test_vertex_passthrough_forwards_anthropic_beta_header

* test_vertex_passthrough_does_not_forward_litellm_auth_token

* fix utils

* Using Anthropic Beta Features on Vertex AI

* test_forward_headers_from_request_x_pass_prefix

* fix(mcp): forward static_headers to MCP servers (BerriAI#19341) (BerriAI#19366)

Forward static_headers from /mcp-rest/test/* routes into the MCP client so headers are present during session.initialize() and tool discovery.

Also add a shared merge_mcp_headers() helper to keep header precedence consistent and ensure OpenAPI-to-MCP generated tools include static_headers.

Tests:
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_rest_endpoints.py
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_mcp_server_manager.py -k register_openapi_tools_includes_static_headers

Fixes BerriAI#19341

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
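The shared helper described in that commit can be sketched as follows; the precedence shown (request-specific headers override server-level static_headers) is an assumption, not confirmed by the commit message:

```python
# Sketch of a header-merge helper for MCP requests. The PR adds a
# shared merge_mcp_headers(); this body and precedence are illustrative.
def merge_mcp_headers(static_headers, request_headers):
    merged = dict(static_headers or {})   # server-level defaults first
    merged.update(request_headers or {})  # request-specific values win
    return merged
```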

* fix(azure): preserve content_policy_violation details for images (BerriAI#19328) (BerriAI#19372)

Azure OpenAI Images (DALL·E 3) returns policy violations as a structured payload under body["error"], including inner_error.content_filter_results and revised_prompt.

LiteLLM previously:
- Failed to extract nested error messages (get_error_message only handled body["message"])
- Missed policy violation detection when error strings were generic
- Dropped inner_error details when raising ContentPolicyViolationError

This change:
- Extracts nested Azure error fields (code/type/message + inner_error)
- Detects policy violations via structured error codes
- Passes an OpenAI-style error body + provider_specific_fields to preserve details

Tests:
- python3 -m pytest tests/test_litellm/llms/azure/test_azure_exception_mapping.py
- python3 -m pytest tests/test_litellm/litellm_core_utils/test_exception_mapping_utils.py

Fixes BerriAI#19328
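The nested-field extraction described above can be sketched like this; the key names follow the payload shape the commit describes, while the function name is hypothetical:

```python
# Hypothetical sketch: pull Azure's nested image-error fields out of
# body["error"], including inner_error details that were previously dropped.
def extract_azure_image_error(body: dict) -> dict:
    err = body.get("error", {}) if isinstance(body, dict) else {}
    inner = err.get("inner_error", {}) or {}
    return {
        "code": err.get("code"),
        "message": err.get("message"),
        "content_filter_results": inner.get("content_filter_results"),
        "revised_prompt": inner.get("revised_prompt"),
    }
```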

* [Feat] Add Structured output for /v1/messages with Anthropic API, Azure Anthropic API, Bedrock Converse  (BerriAI#19545)

* fix: add AnthropicMessagesRequestOptionalParams

* add _update_headers_with_anthropic_beta

* fix output format tests

* test_structured_output_e2e

* TestAnthropicAPIStructuredOutput

* test_structured_output_e2e

* fix BASE

* TestAzureAnthropicStructuredOutput

* fix: Bedrock Converse

* add Anthropic Messages Pass-Through Architecture

* fix: bedrock invoke output_format

* fix: transform_anthropic_messages_request for vertex anthropic

* TestBedrockInvokeStructuredOutput

* docs anthropic vertex

* docs fix

* docs fix

* fixing prompt-security's guardrail implementation (BerriAI#19374)

* Consolidated change

* fix(prompt_security): update message processing to persist sanitized files and filter for API calls

* fix per krrishdholakia suggestion

* Fix/per service ssl override v2 (BerriAI#19538)

* refactor(ssl): support per-service SSL verification overrides

* add test cases for ssl

* docs: update Claude Code integration guides (BerriAI#19415)

* docs: document Claude Code default models and env var overrides

- Update config example with current Claude Code 2.1.x model names
- Add section documenting default models (sonnet/haiku) that Claude Code requests
- Document env var overrides (ANTHROPIC_DEFAULT_SONNET_MODEL, etc.)
- Show how model_name alias can route to any provider (Bedrock, Vertex, etc.)

* Update docs

Removed warning about changing model names in Claude Code versions.

* docs: add 1M context support and improve Claude Code quickstart guide

- Add comprehensive 1M context window documentation
- Document [1m] suffix usage and shell escaping requirements
- Clarify that LiteLLM config should NOT include [1m] in model names
- Add standalone claude_code_1m_context.md guide
- Improve model selection documentation with environment variables
- Add section on default models used by Claude Code v2.1.14
- Add troubleshooting for 1M context issues
- Reorganize to emphasize environment variables approach

Addresses GitHub issue BerriAI#14444

* docs: reorder model selection options - prioritize --model over env vars

- Move command line/session model selection to Option 1 (most reliable)
- Move environment variables to Option 2
- Add note that env vars may be cached from previous session
- Emphasize that --model always uses exact model specified

* docs: reorganize 1M context section - separate command line from env vars

- Split 1M context examples into two clear sections
- Show command line usage first (--model and /model)
- Show environment variables as alternative approach
- Improves readability and emphasizes most reliable method

* docs: remove misleading default models section from website tutorial

- Remove 'Default Models Used by Claude Code' section (misleading)
- Remove claim that config must match exact default model names
- Update config comment to be more general
- Add claude-opus-4-5-20251101 to example config
- Keep authentication section as-is

* docs: correct model selection in website tutorial

- Remove incorrect claim that Claude Code automatically uses proxy models
- Add explicit model selection examples with --model and /model
- Show environment variables as alternative approach
- Remove misleading comment about 'multiple configured'

* docs: add 1M context section to website tutorial

- Add section on using [1m] suffix for 1 million token context
- Include warning about shell escaping (quotes required)
- Explain how Claude Code handles [1m] internally
- Add /context verification command
- Note that LiteLLM config should NOT include [1m]

* docs: add tip about using .env for API keys

- Add note that ANTHROPIC_API_KEY can be stored in .env file
- Clarifies alternative to exporting environment variables

* add redisvl dependency to the root requirements.txt (BerriAI#19417)

* [Fix] UI Cost Estimator - Fix model dropdown (BerriAI#19529)

* add cost estimator

* ui fix show errors

* test_estimate_cost_resolves_router_model_alias

* fix: UI 404 error when SERVER_ROOT_PATH is set (BerriAI#19467)

* fix: add case-insensitive support for guardrail mode and actions (BerriAI#19480)

* fix(bedrock): correct streaming choice index for tool calls (BerriAI#19506)

Bedrock's contentBlockIndex identifies content blocks within a message
(text=0, tool_call=1), not OpenAI's choice index (which varies with n>1).
This caused OpenAI SDK's ChatCompletionAccumulator to fail when tool call
chunks arrived on index 1 while finish_reason arrived on index 0.

Bedrock doesn't support n>1 (no such parameter exists):
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

OpenAI choice index spec:
https://platform.openai.com/docs/api-reference/chat/streaming
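Because Bedrock never returns multiple choices, the remap described above amounts to pinning every streamed chunk to choice index 0. A sketch, with an assumed function name:

```python
# Illustrative sketch: Bedrock's contentBlockIndex numbers blocks within
# one message, so every chunk belongs to OpenAI choice 0.
def pin_choice_index(chunk: dict) -> dict:
    for choice in chunk.get("choices", []):
        choice["index"] = 0
    return chunk
```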

* Fix Azure RPM calculation formula (BerriAI#19513)

* Fix Azure RPM calculation formula

* updated test

* fix(azure response api): flatten tools for responses api to support nested definitions (BerriAI#19526)

The Azure Responses API uses a different schema (flattened) for tools compared to the standard OpenAI/Azure Chat Completions API (nested). This caused a `BadRequestError` when users passed standard tool definitions.

Changes:
- Implemented tool flattening logic in `AzureOpenAIResponsesAPIConfig.transform_responses_api_request`.
- Added comprehensive unit tests in test_azure_transformation.py to verify nested-to-flat transformation, pass-through of flat tools, and immutability.
- Ensures cross-provider compatibility for tool definitions.

Fixes BerriAI#19523

* Fix date overflow/division by zero in proxy utils (BerriAI#19527)

* Fix date overflow/division by zero in proxy utils

* Fix projected spend calculation

* Strengthen projected spend tests

* Fix Azure AI costs for Anthropic models (BerriAI#19530)

* Fix Azure AI cost calculation

* fixup

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the play ground

* Fix: generation config empty for batch

* Add custom vertex ai mapping to the output

* Add support for output format for bedrock invoke via v1/messages

* feat: Limit stop sequence as per openai spec

* Fix mypy error in litellm_staging_01_21_2026

* Fix: imagegeneration@006 has been deprecated

* Fix : test_anthropic_via_responses_api

* Fix: Responses API usage field type mismatch

* Fix: Httpx timeout test failures

* Fix: generationConfig removal from tests

* fix: mypy error

* comment code not used

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the play ground

* Fix batch tests

* fix: mypy error

* fix: mypy error

* Fix:test_multiple_function_call

* build(deps): bump lodash from 4.17.21 to 4.17.23 in /docs/my-website

Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](lodash/lodash@4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Metrics prometheus user team count (BerriAI#19520)

* add user count and team count prometheus metrics

* rebase

* revert mistaken deletion

* fix ui build and mypy lint

* Adding python3-dev to non root

* adding node-tar cve allowlist

* fix(websearch_interception): filter internal kwargs before follow-up request (BerriAI#19577)

The websearch interception handler was passing internal flags like
`_websearch_interception_converted_stream` to the follow-up LLM request.
This caused "Extra inputs are not permitted" errors from providers like
Bedrock that use strict Pydantic validation.

Fix: Filter out all kwargs starting with `_websearch_interception` prefix
before making the follow-up anthropic_messages.acreate() call.
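The filtering step can be sketched as a prefix-based dict comprehension (function name illustrative):

```python
# Drop every kwarg carrying the internal interception prefix before
# the follow-up LLM request is issued.
INTERNAL_PREFIX = "_websearch_interception"

def filter_internal_kwargs(kwargs: dict) -> dict:
    return {k: v for k, v in kwargs.items() if not k.startswith(INTERNAL_PREFIX)}
```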

* skip brave tests

* Fix unsafe access to request attribute (BerriAI#19573)

* updating prometheus tests

* Fix non-root proxy tests

* Adding lodash-es to allowlist

* attempt fix translation tests

* fix: change oss staging branch name to reflect they're oss

* Revert "[Infra] UI - E2E Tests: Internal Viewer Sidebar"

* Overriding lodash-es with version 4.17.23 in docs

* updating lodash for dashboard

* bump: version 1.81.1 → 1.81.2

* Add reusable model select to update organization page

* Fixing tests

* Adding EOS to finish reasons

* Adding retries to flaky tests

* add opencode tutorial (BerriAI#19602)

* Fix org all proxy model case

* adjust opencode tutorial (BerriAI#19605)

* Add OSS Adopters section to README

* fix: completions mcp output ordering

* feat(helm): Enable PreStop hook configuration in values.yaml (BerriAI#19613)

* Fix: litellm/tests/test_proxy_server_non_root.py

* Update README.md

* Update README.md

* [Feat] New LiteLLM Policy engine - create policies to manage guardrails, conditions - permissions per Key, Team (BerriAI#19612)

* init PolicyMatcher

* TestPolicyMatcherGetMatchingPolicies

* TestPolicyMatcherGetMatchingPolicies

* feat: init PolicyResolver

* init resolver types

* init policy from config

* init PolicyValidator

* validate policy

* init Architecture Diagram

* test_add_guardrails_from_policy_engine

* init _init_policy_engine

* test updates

* test fixes

* new attachment config

* simplify types

* TestPolicyResolverInheritance

* fix policy resolver

* fix policies

* fix applied policy

* docs fix

* docs fix

* fix linting + QA checks

* fix linting + QA fixes

* test fixes

* docs fix

* fix: pass through endpoints update registry (BerriAI#19420)

* fix: pass through endpoints update registry

* add test case, fix lint error and comment to avoid confusion

* fix pass through endpoints test case

* [Fix] Anthropic models on Azure AI cache pricing (BerriAI#19532) (BerriAI#19614)

* Update README.md

* fix: for test

* All Models Backend Search

* adding test

* test: completions mcp output test

* chore: fix lint error

* test: Skip anthropic model test when ANTHROPIC_API_KEY is not set

* fix: include tool arguments in proxy_server_request for spend logs callbacks

* feat: hashicorp vault rotate support

* Add tool choice mapping for giga chat

* Fix: Responses API logging error for StopIteration

* Fix: test_nova_invoke_streaming_chunk_parsing

* Remove f string

* fix BerriAI#19620: SSO user roles are not updated for existing users (BerriAI#19621)

* Fix: SSO user roles are not updated for existing users
Fixes BerriAI#19620

* Refactor: Remove redundant user_info retrieval in SSOAuthenticationHandler

* Test: add new tests for user creation and updates in get_user_info_from_db

* ci cd fixes - linting security

* resetting poetry and requirements

* fixing security checks

* docs fix

* fixing config

* skipping flaky tests

* skipping non root tests entirely

* security scan

* attempt fix flaky tests

* fixing flaky tests

* [Feat] Guardrail Policy Management - Allow using UI to manage guardrail policies  (BerriAI#19668)

* init UI

* init schema.prisma

* fix: policy_crud_router

* UI fixes

* update gitignore

* working v0 for policy mgmt

* fix: endpoints to resolve guardrails

* fix code QA checks

* ui build issues

* schema fixes

* fix checks

* docs fix

* remove imports from functions

* add schema.prisma

* add migration

* fix schema.prisma

* remove imports from functions

* fix lint

* BUMP pyproject

* add spend-queue-troubleshooting docs (BerriAI#19659)

* add spend-queue-troubleshooting docs

* adjust spend-queue-troubleshooting docs

* fix linting

* New add fallbacks modal

* adding tests

* Add Langfuse mock mode for testing without API calls (BerriAI#19676)

* Add GCS mock mode for testing without API calls (BerriAI#19683)

* Adding router settings to create team and key

* fixing build

* fixing tests

* perf: Optimize strip_trailing_slash with O(1) index check (BerriAI#19679)

* perf: Optimize strip_trailing_slash with O(1) index check

Replace rstrip("/") with direct index check for O(1) performance
instead of O(n) string scanning.

Results:
- strip_trailing_slash: 311ms → 13ms (96% faster)
- get_standard_logging_object_payload: 6.11s → 5.80s (5% faster)

* Handle multiple trailing slashes in strip_trailing_slash

Use rstrip for correctness when URL ends with "//" or more,
otherwise use O(1) index check for single trailing slash.
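The two-path approach described in those commits can be sketched as (a simplified stand-in, not the actual LiteLLM function body):

```python
# Sketch: a single index check handles the common single-slash case in
# O(1); rstrip() is kept only for the rare multi-slash case.
def strip_trailing_slash(url):
    if not url:
        return url
    if url.endswith("//"):       # rare: several trailing slashes
        return url.rstrip("/")
    if url[-1] == "/":           # common: one trailing slash, O(1)
        return url[:-1]
    return url
```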

* Fixing tests

* perf: Optimize use_custom_pricing_for_model with set intersection (BerriAI#19677)

* perf: Optimize use_custom_pricing_for_model with set intersection

Cache CustomPricingLiteLLMParams.model_fields.keys() as a module-level
frozenset and use set intersection to reduce loop iterations from 882k
to 90k (only iterating over keys that exist in both sets).

Performance improvement: 84% faster (6.3x speedup)
- Before: 1.17s total, 65µs per call
- After: 0.19s total, 10µs per call

* Use .get() for defensive dictionary access
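The set-intersection trick can be sketched as follows; the field list here is an illustrative subset, since the real set comes from CustomPricingLiteLLMParams.model_fields:

```python
# Hoist the pricing field names into a module-level frozenset and only
# iterate over keys present in both the params dict and the field set.
CUSTOM_PRICING_FIELDS = frozenset(
    {"input_cost_per_token", "output_cost_per_token", "input_cost_per_second"}
)

def uses_custom_pricing(litellm_params: dict) -> bool:
    return any(
        litellm_params.get(k) is not None
        for k in litellm_params.keys() & CUSTOM_PRICING_FIELDS
    )
```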

* perf: skip pattern_router.route() for non-wildcard models (BerriAI#19664)

Check "*" in model before calling pattern_router.route() to avoid
unnecessary pattern matching for non-wildcard model configurations.
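The guard can be sketched with a stand-in router that counts how often it is consulted:

```python
# Illustrative pattern router; the real one does regex-based matching.
class CountingPatternRouter:
    def __init__(self):
        self.calls = 0
    def route(self, model: str) -> str:
        self.calls += 1
        return f"matched:{model}"

def maybe_route(model: str, pattern_router):
    # A cheap substring scan avoids the full pattern-matching pass
    # for the overwhelmingly common non-wildcard case.
    if "*" not in model:
        return None
    return pattern_router.route(model)
```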

* perf: Add LRU caching to get_model_info for faster cost lookups (BerriAI#19606)

- Add @lru_cache decorator to get_model_info() and _cached_get_model_info_helper()
- Update _invalidate_model_cost_lowercase_map() to clear these caches when model_cost changes
- Update test to call cache invalidation after modifying litellm.model_cost

Reduces get_model_cost_information from 46% to <1% of request handling time.
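The caching-plus-invalidation pairing can be sketched like this; `lookup_model_info` is a stand-in for the expensive model_cost lookup, not the real function:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def lookup_model_info(model: str) -> tuple:
    # stand-in for an expensive model_cost lookup
    return (model, "expensive-lookup-result")

def invalidate_model_info_cache():
    # Must run whenever the underlying cost map changes, or callers
    # keep seeing stale pricing.
    lookup_model_info.cache_clear()
```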

* UI: new build

* redirect to login on expired jwt

* [Feat] UI + Backend - Allow adding policies on Keys/Teams  + Viewing on Info panels  (BerriAI#19688)

* ui for policy mgmt

* test_add_guardrails_from_policy_engine_accepts_dynamic_policies_and_pops_from_data

* docs: add litellm-enterprise requirement for managed files (BerriAI#19689)

* Update Gemini 2.0 Flash deprecation dates to March 31, 2026 (BerriAI#19592)

Google announced that Gemini 2.0 Flash and Flash Lite models will be discontinued on March 31, 2026. Updated deprecation_date field for all affected model variants across different providers (vertex_ai, gemini, deepinfra, openrouter, vercel_ai_gateway).

Models updated:
- gemini-2.0-flash (added deprecation date)
- gemini-2.0-flash-001 (updated from 2026-02-05)
- gemini-2.0-flash-lite (added deprecation date)
- gemini-2.0-flash-lite-001 (updated from 2026-02-25)

All variants now correctly reflect the March 31, 2026 shutdown date.

* fixing build

* Fixing failing tests

* deactivating non root tests

* fixing arize tests

* cache tests serial

* fixing circleci config

* fixing circleci config

* Update OSS Adopters section with new table format

* Fixing ruff check

* bump: version 1.81.2 → 1.81.3

* chore: update Next.js build artifacts (2026-01-24 17:18 UTC, node v22.16.0)

* CI/CD fixes  - split local testing

* fix: _apply_search_filter_to_models mypy linting

* test_partner_models_httpx_streaming

* test_web_search

* Fix: log duplication when json_logs is enabled (BerriAI#19705)

* fix: FLAKY tests

* fix unstable tests

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* test_get_default_unvicorn_init_args

* fix flaky tests

* test_hanging_request_azure

* test_team_update_sc_2

* BUMP extras

* test fixes

* test fixes

* test_retrieve_container_basic

* Model and Team filtering

* TestBedrockInvokeToolSearch

* fix(presidio): resolve runtime error by handling asyncio loops in bac… (BerriAI#19714)

* fix(presidio): resolve runtime error by handling asyncio loops in background threads

* add test case for thread safety

* UI Keys Teams Router Settings docs

* chore: update Next.js build artifacts (2026-01-25 00:27 UTC, node v22.16.0)

* test_stream_transformation_error_sync

* fix patch reliability mock tests

* fix MCP tests

* fix: server rooth path (BerriAI#19790)

* feat: tpm-rpm limit in prometheus metrics (BerriAI#19725)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(proxy): support slashes in google generateContent model names (BerriAI#19737)

* fix(proxy): support slashes in google route params

* fix(proxy): extract google model ids with slashes

* test(proxy): cover google model ids with slashes

* fix(vertex_ai): support model names with slashes in passthrough URLs (BerriAI#19944)

The regex in get_vertex_model_id_from_url() was using [^/:]+
which stopped at the first slash, truncating model names like
'gcp/google/gemini-2.5-flash' to just 'gcp'. This caused
access_groups checks to fail for custom model names.

Changed the pattern to [^:]+ to allow slashes in model names,
only stopping at the colon before the action (e.g., :generateContent).
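The character-class change can be demonstrated with simplified patterns (the real regex in get_vertex_model_id_from_url() is more involved):

```python
import re

OLD_PATTERN = re.compile(r"/models/(?P<model>[^/:]+)")   # stops at first '/'
NEW_PATTERN = re.compile(r"/models/(?P<model>[^:]+)")    # stops only at ':'

URL = ("/v1/projects/p/locations/l/publishers/google"
       "/models/gcp/google/gemini-2.5-flash:generateContent")
```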

* [Fix] VertexAI Pass through - fix regression that caused vertex ai passthroughs to stop working for router models (BerriAI#19967)

* fix(vertex_ai): replace custom model names with actual Vertex AI model names in passthrough URLs (BerriAI#19948)

When the passthrough URL already contains project and location, the code
was skipping the deployment lookup and forwarding the URL as-is to Vertex AI.
For custom model names like gcp/google/gemini-2.5-flash, Vertex AI returned
404 because it only knows the actual model name (gemini-2.5-flash).

The fix makes the deployment lookup always run, so the custom model name
gets replaced with the actual Vertex AI model name before forwarding.

* add _resolve_vertex_model_from_router

* fix: get_llm_provider

* Potential fix for code scanning alert no. 4020: Clear-text logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

---------

Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* [Feat] - Search API add /list endpoint to list what search tools exist in router  (BerriAI#19969)

* feat: List all available search tools configured in the router.

* add debugging search API

* add debugging search API

* perf(prometheus): parallelize budget metrics, fix caching bug, reduce CPU by ~40% (BerriAI#20544)

* fix: revert httpx client caching that caused closed client errors

AsyncHTTPHandler.__del__ was closing httpx clients still in use by
AsyncOpenAI/AsyncAzureOpenAI due to independent cache lifecycles.
Restores standalone httpx client creation for OpenAI/Azure providers.

* Revert "Merge pull request BerriAI#18790 from BerriAI/litellm_key_team_routing_3"

This reverts commit ae26d8e, reversing
changes made to 864e8c6.

* fix MYPY lint

* fixed build errors after merge

* least busy debug logs

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: mubashir1osmani <mubashir.osmani777@gmail.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: YutaSaito <36355491+uc4w6c@users.noreply.github.com>
Co-authored-by: Yuta Saito <uc4w6c@bma.biglobe.ne.jp>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Alexsander Hamir <alexsanderhamirgomesbaptista@gmail.com>
Co-authored-by: jay prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: davida-ps <david.a@prompt.security>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: houdataali <84786211+houdataali@users.noreply.github.com>
Co-authored-by: João Dinis Ferreira <hello@joaof.eu>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Yogeshwaran Ravichandran <96047771+yogeshwaran10@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Eric Cao <ecao310@gmail.com>
Co-authored-by: mpcusack-altos <mcusack@altoslabs.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: John Greek <2006605+jgreek@users.noreply.github.com>
Co-authored-by: xqe2011 <gz923553148@gmail.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
shriharsha98 added a commit to juspay/litellm that referenced this pull request Feb 19, 2026
* Fix virtual keys table sorting

* Adding tests

* feat: add GMI Cloud provider support (BerriAI#19376)

* feat: add GMI Cloud provider support

Add GMI Cloud as an OpenAI-compatible provider with:
- Provider configuration in providers.json
- Documentation page with usage examples
- Model pricing for 16 models (Claude, GPT, DeepSeek, Gemini, etc.)
- Sidebar entry for docs navigation

* Add gmi_cloud to provider_endpoints_support.json

Add provider entry to pass CI validation check that ensures all
providers in openai_like/providers.json are documented.

* Fix provider key: gmi_cloud -> gmi

Match the provider key with providers.json

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* Cut chat_completion latency by ~21% by reducing pre-call processing time (BerriAI#19535)

* Adding scope to /models

* e2e test internal viewer sidebar

* Model Select for Create Team

* create team model select

* fixing build

* [Fix] VertexAI Pass through - Ensure only anthropic betas are forwarded down to LLM API (BerriAI#19542)

* fix ALLOWED_VERTEX_AI_PASSTHROUGH_HEADERS

* test_vertex_passthrough_forwards_anthropic_beta_header

* fix test_vertex_passthrough_forwards_anthropic_beta_header

* test_vertex_passthrough_does_not_forward_litellm_auth_token

* fix utils

* Using Anthropic Beta Features on Vertex AI

* test_forward_headers_from_request_x_pass_prefix

* [Fix] VertexAI Pass through - Ensure only anthropic betas are forwarded down to LLM API (BerriAI#19542)

* fix ALLOWED_VERTEX_AI_PASSTHROUGH_HEADERS

* test_vertex_passthrough_forwards_anthropic_beta_header

* fix test_vertex_passthrough_forwards_anthropic_beta_header

* test_vertex_passthrough_does_not_forward_litellm_auth_token

* fix utils

* Using Anthropic Beta Features on Vertex AI

* test_forward_headers_from_request_x_pass_prefix

* fix(mcp): forward static_headers to MCP servers (BerriAI#19341) (BerriAI#19366)

Forward static_headers from /mcp-rest/test/* routes into the MCP client so headers are present during session.initialize() and tool discovery.

Also add a shared merge_mcp_headers() helper to keep header precedence consistent and ensure OpenAPI-to-MCP generated tools include static_headers.

Tests:
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_rest_endpoints.py
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_mcp_server_manager.py -k register_openapi_tools_includes_static_headers

Fixes BerriAI#19341

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(azure): preserve content_policy_violation details for images (BerriAI#19328) (BerriAI#19372)

Azure OpenAI Images (DALL·E 3) returns policy violations as a structured payload under body["error"], including inner_error.content_filter_results and revised_prompt.

LiteLLM previously:
- Failed to extract nested error messages (get_error_message only handled body["message"])
- Missed policy violation detection when error strings were generic
- Dropped inner_error details when raising ContentPolicyViolationError

This change:
- Extracts nested Azure error fields (code/type/message + inner_error)
- Detects policy violations via structured error codes
- Passes an OpenAI-style error body + provider_specific_fields to preserve details

Tests:
- python3 -m pytest tests/test_litellm/llms/azure/test_azure_exception_mapping.py
- python3 -m pytest tests/test_litellm/litellm_core_utils/test_exception_mapping_utils.py

Fixes BerriAI#19328

* [Feat] Add Structured output for /v1/messages with Anthropic API, Azure Anthropic API, Bedrock Converse  (BerriAI#19545)

* fix: add AnthropicMessagesRequestOptionalParams

* add _update_headers_with_anthropic_beta

* fix output format tests

* test_structured_output_e2e

* TestAnthropicAPIStructuredOutput

* test_structured_output_e2e

* fix BASE

* TestAzureAnthropicStructuredOutput

* fix: Bedrock Converse

* add nthropic Messages Pass-Through Architecture

* fix: bedrock invoke output_format

* fix: transform_anthropic_messages_request for vertex anthropic

* TestBedrockInvokeStructuredOutput

* docs anthropic vertex

* docs fix

* docs fix

* fixing prompt-security's guardrail implementation (BerriAI#19374)

* Consolidated change

* fix(prompt_security): update message processing to persist sanitized files and filter for API calls

* fix per krrishdholakia suggestion

* Fix/per service ssl override v2 (BerriAI#19538)

* refactor(ssl): support per-service SSL verification overrides

* add test cases for ssl

* docs: update Claude Code integration guides (BerriAI#19415)

* docs: document Claude Code default models and env var overrides

- Update config example with current Claude Code 2.1.x model names
- Add section documenting default models (sonnet/haiku) that Claude Code requests
- Document env var overrides (ANTHROPIC_DEFAULT_SONNET_MODEL, etc.)
- Show how model_name alias can route to any provider (Bedrock, Vertex, etc.)

* Update docs

Removed warning about changing model names in Claude Code versions.

* docs: add 1M context support and improve Claude Code quickstart guide

- Add comprehensive 1M context window documentation
- Document [1m] suffix usage and shell escaping requirements
- Clarify that LiteLLM config should NOT include [1m] in model names
- Add standalone claude_code_1m_context.md guide
- Improve model selection documentation with environment variables
- Add section on default models used by Claude Code v2.1.14
- Add troubleshooting for 1M context issues
- Reorganize to emphasize environment variables approach

Addresses GitHub issue BerriAI#14444

* docs: reorder model selection options - prioritize --model over env vars

- Move command line/session model selection to Option 1 (most reliable)
- Move environment variables to Option 2
- Add note that env vars may be cached from previous session
- Emphasize that --model always uses exact model specified

* docs: reorganize 1M context section - separate command line from env vars

- Split 1M context examples into two clear sections
- Show command line usage first (--model and /model)
- Show environment variables as alternative approach
- Improves readability and emphasizes most reliable method

* docs: remove misleading default models section from website tutorial

- Remove 'Default Models Used by Claude Code' section (misleading)
- Remove claim that config must match exact default model names
- Update config comment to be more general
- Add claude-opus-4-5-20251101 to example config
- Keep authentication section as-is

* docs: correct model selection in website tutorial

- Remove incorrect claim that Claude Code automatically uses proxy models
- Add explicit model selection examples with --model and /model
- Show environment variables as alternative approach
- Remove misleading comment about 'multiple configured'

* docs: add 1M context section to website tutorial

- Add section on using [1m] suffix for 1 million token context
- Include warning about shell escaping (quotes required)
- Explain how Claude Code handles [1m] internally
- Add /context verification command
- Note that LiteLLM config should NOT include [1m]

* docs: add tip about using .env for API keys

- Add note that ANTHROPIC_API_KEY can be stored in .env file
- Clarifies alternative to exporting environment variables

* add redisvl dependency to the root requirements.txt (BerriAI#19417)

* [Fix] UI Cost Estimator - Fix model dropdown (BerriAI#19529)

* add cost estimator

* ui fix show errors

* test_estimate_cost_resolves_router_model_alias

* fix: UI 404 error when SERVER_ROOT_PATH is set (BerriAI#19467)

* fix: add case-insensitive support for guardrail mode and actions (BerriAI#19480)
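The case-insensitive guardrail fix above can be sketched as a small normalization step before validation. This is a hypothetical illustration, not the actual litellm implementation; the `ALLOWED_MODES` values and function name are assumptions:

```python
# Hypothetical sketch: accept "PRE_CALL", "Pre_Call", etc. by lowercasing
# the incoming mode string before validating it against allowed values.
ALLOWED_MODES = {"pre_call", "post_call", "during_call", "logging_only"}  # assumed set


def normalize_guardrail_mode(mode: str) -> str:
    """Return the lowercased mode, raising if it is not a known mode."""
    normalized = mode.lower()
    if normalized not in ALLOWED_MODES:
        raise ValueError(f"Invalid guardrail mode: {mode}")
    return normalized


print(normalize_guardrail_mode("PRE_CALL"))  # pre_call
```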

* fix(bedrock): correct streaming choice index for tool calls (BerriAI#19506)

Bedrock's contentBlockIndex identifies content blocks within a message
(text=0, tool_call=1), not OpenAI's choice index (which varies with n>1).
This caused OpenAI SDK's ChatCompletionAccumulator to fail when tool call
chunks arrived on index 1 while finish_reason arrived on index 0.

Bedrock doesn't support n>1 (no such parameter exists):
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

OpenAI choice index spec:
https://platform.openai.com/docs/api-reference/chat/streaming
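The index remapping described in this commit can be sketched as follows. Since Bedrock has no `n>1`, every streamed chunk belongs to choice 0; this is an illustrative helper, not litellm's actual code:

```python
# Hypothetical sketch: remap Bedrock's contentBlockIndex (text=0, tool
# call=1) onto OpenAI's choice index, which must be 0 when n=1.

def normalize_choice_index(chunk: dict) -> dict:
    """Force all streamed choices onto index 0, since Bedrock has no n>1."""
    for choice in chunk.get("choices", []):
        choice["index"] = 0
    return chunk


chunk = {"choices": [{"index": 1, "delta": {"tool_calls": [{"id": "t1"}]}}]}
print(normalize_choice_index(chunk)["choices"][0]["index"])  # 0
```

With this, tool-call deltas and the final `finish_reason` chunk accumulate on the same choice, which is what OpenAI SDK accumulators expect.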

* Fix Azure RPM calculation formula (BerriAI#19513)

* Fix Azure RPM calculation formula

* updated test

* fix(azure response api): flatten tools for responses api to support nested definitions (BerriAI#19526)

The Azure Responses API uses a different schema (flattened) for tools compared to the standard OpenAI/Azure Chat Completions API (nested). This caused a `BadRequestError` when users passed standard tool definitions.

Changes:
- Implemented tool flattening logic in `AzureOpenAIResponsesAPIConfig.transform_responses_api_request`.
- Added comprehensive unit tests in test_azure_transformation.py to verify nested-to-flat transformation, pass-through of flat tools, and immutability.
- Ensures cross-provider compatibility for tool definitions.

Fixes BerriAI#19523
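The nested-to-flat transformation described above can be sketched like this. The flat shape follows the Responses API convention of putting `name`/`description`/`parameters` at the top level; the helper name is illustrative:

```python
# Sketch: Chat Completions nests function details under "function"; the
# Responses API expects them flattened at the top level of the tool dict.

def flatten_tool(tool: dict) -> dict:
    if tool.get("type") == "function" and "function" in tool:
        fn = tool["function"]
        return {
            "type": "function",
            "name": fn["name"],
            "description": fn.get("description"),
            "parameters": fn.get("parameters"),
        }
    return tool  # already flat: pass through unchanged


nested = {
    "type": "function",
    "function": {"name": "get_weather", "parameters": {"type": "object"}},
}
print(flatten_tool(nested)["name"])  # get_weather
```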

* Fix date overflow/division by zero in proxy utils (BerriAI#19527)

* Fix date overflow/division by zero in proxy utils

* Fix projected spend calculation

* Strengthen projected spend tests

* Fix Azure AI costs for Anthropic models (BerriAI#19530)

* Fix Azure AI cost calculation

* fixup

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the playground

* Fix: generation config empty for batch

* Add custom vertex ai mapping to the output

* Add support for output format for bedrock invoke via v1/messages

* feat: Limit stop sequence as per openai spec

* Fix mypy error in litellm_staging_01_21_2026

* Fix: imagegeneration@006 has been deprecated

* Fix: test_anthropic_via_responses_api

* Fix: Responses API usage field type mismatch

* Fix: Httpx timeout test failures

* Fix: generationConfig removal from tests

* fix: mypy error

* comment code not used

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the playground

* Fix batch tests

* fix: mypy error

* fix: mypy error

* Fix: test_multiple_function_call

* build(deps): bump lodash from 4.17.21 to 4.17.23 in /docs/my-website

Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](lodash/lodash@4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Metrics prometheus user team count (BerriAI#19520)

* add user count and team count prometheus metrics

* rebase

* revert mistaken deletion

* fix ui build and mypy lint

* Adding python3-dev to non root

* adding node-tar cve allowlist

* fix(websearch_interception): filter internal kwargs before follow-up request (BerriAI#19577)

The websearch interception handler was passing internal flags like
`_websearch_interception_converted_stream` to the follow-up LLM request.
This caused "Extra inputs are not permitted" errors from providers like
Bedrock that use strict Pydantic validation.

Fix: Filter out all kwargs starting with `_websearch_interception` prefix
before making the follow-up anthropic_messages.acreate() call.
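The prefix filter described in this commit amounts to a one-line dict comprehension. A minimal sketch (function name is illustrative):

```python
# Sketch: drop internal bookkeeping kwargs before re-issuing the follow-up
# request, so strict Pydantic providers never see unknown fields.
WEBSEARCH_INTERCEPTION_PREFIX = "_websearch_interception"


def filter_internal_kwargs(kwargs: dict) -> dict:
    return {
        k: v
        for k, v in kwargs.items()
        if not k.startswith(WEBSEARCH_INTERCEPTION_PREFIX)
    }


kwargs = {"model": "bedrock/claude", "_websearch_interception_converted_stream": True}
print(filter_internal_kwargs(kwargs))  # {'model': 'bedrock/claude'}
```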

* skip brave tests

* Fix unsafe access to request attribute (BerriAI#19573)

* updating prometheus tests

* Fix non-root proxy tests

* Adding lodash-es to allowlist

* attempt fix translation tests

* fix: change oss staging branch name to reflect they're oss

* Revert "[Infra] UI - E2E Tests: Internal Viewer Sidebar"

* Overriding lodash-es with version 4.17.23 in docs

* updating lodash for dashboard

* bump: version 1.81.1 → 1.81.2

* Add reusable model select to update organization page

* Fixing tests

* Adding EOS to finish reasons

* Adding retries to flaky tests

* add opencode tutorial (BerriAI#19602)

* Fix org all proxy model case

* adjust opencode tutorial (BerriAI#19605)

* Add OSS Adopters section to README

* fix: completions mcp output ordering

* feat(helm): Enable PreStop hook configuration in values.yaml (BerriAI#19613)

* Fix: litellm/tests/test_proxy_server_non_root.py

* Update README.md

* Update README.md

* [Feat] New LiteLLM Policy engine - create policies to manage guardrails, conditions - permissions per Key, Team (BerriAI#19612)

* init PolicyMatcher

* TestPolicyMatcherGetMatchingPolicies

* TestPolicyMatcherGetMatchingPolicies

* feat: init PolicyResolver

* init resolver types

* init policy from config

* init PolicyValidator

* validate policy

* init Architecture Diagram

* test_add_guardrails_from_policy_engine

* init _init_policy_engine

* test updates

* test fixes

* new attachment config

* simplify types

* TestPolicyResolverInheritance

* fix policy resolver

* fix policies

* fix applied policy

* docs fix

* docs fix

* fix linting + QA checks

* fix linting + QA fixes

* test fixes

* docs fix

* fix: pass through endpoints update registry (BerriAI#19420)

* fix: pass through endpoints update registry

* add test case, fix lint error and comment to avoid confusion

* fix pass through endpoints test case

* [Fix] Anthropic models on Azure AI cache pricing (BerriAI#19532) (BerriAI#19614)

* Update README.md

* fix: for test

* All Models Backend Search

* adding test

* test: completions mcp output test

* chore: fix lint error

* test: Skip anthropic model test when ANTHROPIC_API_KEY is not set

* fix: include tool arguments in proxy_server_request for spend logs callbacks

* feat: hashicorp vault rotate support

* Add tool choice mapping for giga chat

* Fix: Responses API logging error for StopIteration

* Fix: test_nova_invoke_streaming_chunk_parsing

* Remove f string

* fix BerriAI#19620: SSO user roles are not updated for existing users (BerriAI#19621)

* Fix: SSO user roles are not updated for existing users
Fixes BerriAI#19620

* Refactor: Remove redundant user_info retrieval in SSOAuthenticationHandler

* Test: add new tests for user creation and updates in get_user_info_from_db

* ci cd fixes - linting security

* resetting poetry and requirements

* fixing security checks

* docs fix

* fixing config

* skipping flaky tests

* skipping non root tests entirely

* security scan

* attempt fix flaky tests

* fixing flaky tests

* [Feat] Guardrail Policy Management - Allow using UI to manage guardrail policies  (BerriAI#19668)

* init UI

* init schema.prisma

* fix: policy_crud_router

* UI fixes

* update gitignore

* working v0 for policy mgmt

* fix: endpoints to resolve guardrails

* fix code QA checks

* ui build issues

* schema fixes

* fix checks

* docs fix

* remove imports from functions

* add schema.prisma

* add migration

* fix schema.prisma

* remove imports from functions

* fix lint

* BUMP pyproject

* add spend-queue-troubleshooting docs (BerriAI#19659)

* add spend-queue-troubleshooting docs

* adjust spend-queue-troubleshooting docs

* fix linting

* New add fallbacks modal

* adding tests

* Add Langfuse mock mode for testing without API calls (BerriAI#19676)

* Add GCS mock mode for testing without API calls (BerriAI#19683)

* Adding router settings to create team and key

* fixing build

* fixing tests

* perf: Optimize strip_trailing_slash with O(1) index check (BerriAI#19679)

* perf: Optimize strip_trailing_slash with O(1) index check

Replace rstrip("/") with direct index check for O(1) performance
instead of O(n) string scanning.

Results:
- strip_trailing_slash: 311ms → 13ms (96% faster)
- get_standard_logging_object_payload: 6.11s → 5.80s (5% faster)

* Handle multiple trailing slashes in strip_trailing_slash

Use rstrip for correctness when URL ends with "//" or more,
otherwise use O(1) index check for single trailing slash.
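The two commits above combine into roughly this shape: an O(1) index check for the common single-slash case, with `rstrip` reserved for the rare multi-slash case. A minimal sketch, not litellm's exact code:

```python
# Sketch: avoid O(n) rstrip scanning for the common case of zero or one
# trailing slash; fall back to rstrip only when the URL ends in "//".

def strip_trailing_slash(url: str) -> str:
    if not url.endswith("/"):
        return url
    if len(url) >= 2 and url[-2] == "/":
        return url.rstrip("/")  # rare multi-slash case
    return url[:-1]  # common case: drop one char, no scan


print(strip_trailing_slash("https://api.example.com/"))   # https://api.example.com
print(strip_trailing_slash("https://api.example.com//"))  # https://api.example.com
```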

* Fixing tests

* perf: Optimize use_custom_pricing_for_model with set intersection (BerriAI#19677)

* perf: Optimize use_custom_pricing_for_model with set intersection

Cache CustomPricingLiteLLMParams.model_fields.keys() as a module-level
frozenset and use set intersection to reduce loop iterations from 882k
to 90k (only iterating over keys that exist in both sets).

Performance improvement: 84% faster (6.3x speedup)
- Before: 1.17s total, 65µs per call
- After: 0.19s total, 10µs per call

* Use .get() for defensive dictionary access
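The set-intersection idea above can be sketched as follows. The field names here are illustrative stand-ins for `CustomPricingLiteLLMParams.model_fields.keys()`:

```python
# Sketch: cache the pricing field names once as a frozenset, then iterate
# only over keys present in BOTH the params dict and the pricing fields.
CUSTOM_PRICING_FIELDS = frozenset(
    {"input_cost_per_token", "output_cost_per_token", "input_cost_per_second"}
)  # illustrative subset


def uses_custom_pricing(litellm_params: dict) -> bool:
    overlapping = CUSTOM_PRICING_FIELDS & litellm_params.keys()
    return any(litellm_params.get(k) is not None for k in overlapping)


print(uses_custom_pricing({"model": "gpt-x", "input_cost_per_token": 1e-6}))  # True
print(uses_custom_pricing({"model": "gpt-x"}))  # False
```

The intersection shrinks the loop from "every param key" down to only the keys that could matter, which is where the reported speedup comes from.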

* perf: skip pattern_router.route() for non-wildcard models (BerriAI#19664)

Check "*" in model before calling pattern_router.route() to avoid
unnecessary pattern matching for non-wildcard model configurations.
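That short-circuit is a one-line guard. A sketch with an illustrative router API (the real `pattern_router` interface may differ):

```python
# Sketch: only invoke the (comparatively expensive) pattern router when the
# model string can actually contain a wildcard.

def match_model(model, pattern_router):
    if "*" not in model:
        return None  # O(1) skip for the common non-wildcard case
    return pattern_router.route(model)


class FakeRouter:
    def route(self, model):
        return f"matched:{model}"


print(match_model("gpt-4o", FakeRouter()))   # None
print(match_model("azure/*", FakeRouter()))  # matched:azure/*
```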

* perf: Add LRU caching to get_model_info for faster cost lookups (BerriAI#19606)

- Add @lru_cache decorator to get_model_info() and _cached_get_model_info_helper()
- Update _invalidate_model_cost_lowercase_map() to clear these caches when model_cost changes
- Update test to call cache invalidation after modifying litellm.model_cost

Reduces get_model_cost_information from 46% to <1% of request handling time.
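The caching-plus-invalidation pattern above can be sketched like this. Function bodies are stand-ins; the key point is pairing `@lru_cache` with an explicit `cache_clear()` whenever the underlying cost map changes:

```python
# Sketch: memoize model-info lookups, and clear the cache whenever the
# underlying cost map is mutated (stale entries would otherwise persist).
from functools import lru_cache

MODEL_COST = {"gpt-x": {"input_cost_per_token": 1e-6}}


@lru_cache(maxsize=None)
def get_model_info(model: str) -> tuple:
    # Return a hashable snapshot; real code would return a richer object.
    return tuple(sorted(MODEL_COST.get(model, {}).items()))


def invalidate_model_cost_cache() -> None:
    get_model_info.cache_clear()  # must run whenever MODEL_COST changes
```

Forgetting the invalidation hook is the classic failure mode here, which is why the commit also updates tests to call it after modifying `litellm.model_cost`.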

* UI: new build

* redirect to login on expired jwt

* [Feat] UI + Backend - Allow adding policies on Keys/Teams  + Viewing on Info panels  (BerriAI#19688)

* ui for policy mgmt

* test_add_guardrails_from_policy_engine_accepts_dynamic_policies_and_pops_from_data

* docs: add litellm-enterprise requirement for managed files (BerriAI#19689)

* Update Gemini 2.0 Flash deprecation dates to March 31, 2026 (BerriAI#19592)

Google announced that Gemini 2.0 Flash and Flash Lite models will be discontinued on March 31, 2026. Updated deprecation_date field for all affected model variants across different providers (vertex_ai, gemini, deepinfra, openrouter, vercel_ai_gateway).

Models updated:
- gemini-2.0-flash (added deprecation date)
- gemini-2.0-flash-001 (updated from 2026-02-05)
- gemini-2.0-flash-lite (added deprecation date)
- gemini-2.0-flash-lite-001 (updated from 2026-02-25)

All variants now correctly reflect the March 31, 2026 shutdown date.

* fixing build

* Fixing failing tests

* deactivating non root tests

* fixing arize tests

* cache tests serial

* fixing circleci config

* fixing circleci config

* Update OSS Adopters section with new table format

* Fixing ruff check

* bump: version 1.81.2 → 1.81.3

* chore: update Next.js build artifacts (2026-01-24 17:18 UTC, node v22.16.0)

* CI/CD fixes  - split local testing

* fix: _apply_search_filter_to_models mypy linting

* test_partner_models_httpx_streaming

* test_web_search

* Fix: log duplication when json_logs is enabled (BerriAI#19705)

* fix: FLAKY tests

* fix unstable tests

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* test_get_default_unvicorn_init_args

* fix flaky tests

* test_hanging_request_azure

* test_team_update_sc_2

* BUMP extras

* test fixes

* test fixes

* test_retrieve_container_basic

* Model and Team filtering

* TestBedrockInvokeToolSearch

* fix(presidio): resolve runtime error by handling asyncio loops in bac… (BerriAI#19714)

* fix(presidio): resolve runtime error by handling asyncio loops in background threads

* add test case for thread safety

* UI Keys Teams Router Settings docs

* chore: update Next.js build artifacts (2026-01-25 00:27 UTC, node v22.16.0)

* test_stream_transformation_error_sync

* fix patch reliability mock tests

* fix MCP tests

* fix: server rooth path (BerriAI#19790)

* feat: tpm-rpm limit in prometheus metrics (BerriAI#19725)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(proxy): support slashes in google generateContent model names (BerriAI#19737)

* fix(proxy): support slashes in google route params

* fix(proxy): extract google model ids with slashes

* test(proxy): cover google model ids with slashes

* fix(vertex_ai): support model names with slashes in passthrough URLs (BerriAI#19944)

The regex in get_vertex_model_id_from_url() was using [^/:]+
which stopped at the first slash, truncating model names like
'gcp/google/gemini-2.5-flash' to just 'gcp'. This caused
access_groups checks to fail for custom model names.

Changed the pattern to [^:]+ to allow slashes in model names,
only stopping at the colon before the action (e.g., :generateContent).
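The regex change can be illustrated directly (the surrounding pattern here is simplified; only the character classes mirror the commit):

```python
# Sketch of the fix: the old character class [^/:]+ stops at the first
# slash; the new [^:]+ allows slashes and stops only at the action colon.
import re

OLD = re.compile(r"/models/([^/:]+)")  # truncates at the first slash
NEW = re.compile(r"/models/([^:]+):")  # stops only at the action colon

url = "/models/gcp/google/gemini-2.5-flash:generateContent"
print(OLD.search(url).group(1))  # gcp
print(NEW.search(url).group(1))  # gcp/google/gemini-2.5-flash
```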

* [Fix] VertexAI Pass through - fix regression that caused vertex ai passthroughs to stop working for router models (BerriAI#19967)

* fix(vertex_ai): replace custom model names with actual Vertex AI model names in passthrough URLs (BerriAI#19948)

When the passthrough URL already contains project and location, the code
was skipping the deployment lookup and forwarding the URL as-is to Vertex AI.
For custom model names like gcp/google/gemini-2.5-flash, Vertex AI returned
404 because it only knows the actual model name (gemini-2.5-flash).

The fix makes the deployment lookup always run, so the custom model name
gets replaced with the actual Vertex AI model name before forwarding.

* add _resolve_vertex_model_from_router

* fix: get_llm_provider

* Potential fix for code scanning alert no. 4020: Clear-text logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

---------

Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* [Feat] - Search API add /list endpoint to list what search tools exist in router  (BerriAI#19969)

* feat: List all available search tools configured in the router.

* add debugging search API

* add debugging search API

* perf(prometheus): parallelize budget metrics, fix caching bug, reduce CPU by ~40% (BerriAI#20544)

* fix: revert httpx client caching that caused closed client errors

AsyncHTTPHandler.__del__ was closing httpx clients still in use by
AsyncOpenAI/AsyncAzureOpenAI due to independent cache lifecycles.
Restores standalone httpx client creation for OpenAI/Azure providers.

* Revert "Merge pull request BerriAI#18790 from BerriAI/litellm_key_team_routing_3"

This reverts commit ae26d8e, reversing
changes made to 864e8c6.

* fix MYPY lint

* fixed build errors after merge

* added sandbox branch for gcr push (#61)

* added sandbox branch for gcr push

* jenkins setup for sbx

* build fix

* adding sync/v[0-9] branches for gcr push

* build fix

* least busy debug logs

* Fix: remove x-anthropic-billing block

* added back anthropic envs

* merge fixes

* least busy router changes

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: YutaSaito <36355491+uc4w6c@users.noreply.github.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Alexsander Hamir <alexsanderhamirgomesbaptista@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: jay prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: davida-ps <david.a@prompt.security>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: houdataali <84786211+houdataali@users.noreply.github.com>
Co-authored-by: João Dinis Ferreira <hello@joaof.eu>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Yogeshwaran Ravichandran <96047771+yogeshwaran10@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
Co-authored-by: Yuta Saito <uc4w6c@bma.biglobe.ne.jp>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Eric Cao <ecao310@gmail.com>
Co-authored-by: mpcusack-altos <mcusack@altoslabs.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: John Greek <2006605+jgreek@users.noreply.github.com>
Co-authored-by: xqe2011 <gz923553148@gmail.com>
Co-authored-by: mubashir1osmani <mubashir.osmani777@gmail.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: pramodp-dotcom <pramod.p@juspay.in>
shriharsha98 added a commit to juspay/litellm that referenced this pull request Feb 23, 2026
* added sandbox branch for gcr push

* jenkins setup for sbx

* build fix

* adding sync/v[0-9] branches for gcr push

* build fix

* Feature/upgrade to v1.81.3 stable (#63)

* [Fix] LiteLLM VertexAI Pass through - ensuring incoming headers are forwarded down to target  (BerriAI#19524)

* test_vertex_passthrough_forwards_anthropic_beta_header

* add_incoming_headers

* fix linting errors

* fix lint

* fix: Send litellm_trace_id to Langfuse to link LiteLLM logs with Langfuse logs

* test: update langfuse trace_id tests to use litellm_trace_id

* Fix virtual keys table sorting

* Adding tests

* feat: add GMI Cloud provider support (BerriAI#19376)

* feat: add GMI Cloud provider support

Add GMI Cloud as an OpenAI-compatible provider with:
- Provider configuration in providers.json
- Documentation page with usage examples
- Model pricing for 16 models (Claude, GPT, DeepSeek, Gemini, etc.)
- Sidebar entry for docs navigation

* Add gmi_cloud to provider_endpoints_support.json

Add provider entry to pass CI validation check that ensures all
providers in openai_like/providers.json are documented.

* Fix provider key: gmi_cloud -> gmi

Match the provider key with providers.json

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* Cut chat_completion latency by ~21% by reducing pre-call processing time (BerriAI#19535)

* Adding scope to /models

* e2e test internal viewer sidebar

* Model Select for Create Team

* create team model select

* fixing build

* [Fix] VertexAI Pass through - Ensure only anthropic betas are forwarded down to LLM API (BerriAI#19542)

* fix ALLOWED_VERTEX_AI_PASSTHROUGH_HEADERS

* test_vertex_passthrough_forwards_anthropic_beta_header

* fix test_vertex_passthrough_forwards_anthropic_beta_header

* test_vertex_passthrough_does_not_forward_litellm_auth_token

* fix utils

* Using Anthropic Beta Features on Vertex AI

* test_forward_headers_from_request_x_pass_prefix


* fix(mcp): forward static_headers to MCP servers (BerriAI#19341) (BerriAI#19366)

Forward static_headers from /mcp-rest/test/* routes into the MCP client so headers are present during session.initialize() and tool discovery.

Also add a shared merge_mcp_headers() helper to keep header precedence consistent and ensure OpenAPI-to-MCP generated tools include static_headers.

Tests:
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_rest_endpoints.py
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_mcp_server_manager.py -k register_openapi_tools_includes_static_headers

Fixes BerriAI#19341

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(azure): preserve content_policy_violation details for images (BerriAI#19328) (BerriAI#19372)

Azure OpenAI Images (DALL·E 3) returns policy violations as a structured payload under body["error"], including inner_error.content_filter_results and revised_prompt.

LiteLLM previously:
- Failed to extract nested error messages (get_error_message only handled body["message"])
- Missed policy violation detection when error strings were generic
- Dropped inner_error details when raising ContentPolicyViolationError

This change:
- Extracts nested Azure error fields (code/type/message + inner_error)
- Detects policy violations via structured error codes
- Passes an OpenAI-style error body + provider_specific_fields to preserve details

Tests:
- python3 -m pytest tests/test_litellm/llms/azure/test_azure_exception_mapping.py
- python3 -m pytest tests/test_litellm/litellm_core_utils/test_exception_mapping_utils.py

Fixes BerriAI#19328

* [Feat] Add Structured output for /v1/messages with Anthropic API, Azure Anthropic API, Bedrock Converse  (BerriAI#19545)

* fix: add AnthropicMessagesRequestOptionalParams

* add _update_headers_with_anthropic_beta

* fix output format tests

* test_structured_output_e2e

* TestAnthropicAPIStructuredOutput

* test_structured_output_e2e

* fix BASE

* TestAzureAnthropicStructuredOutput

* fix: Bedrock Converse

* add nthropic Messages Pass-Through Architecture

* fix: bedrock invoke output_format

* fix: transform_anthropic_messages_request for vertex anthropic

* TestBedrockInvokeStructuredOutput

* docs anthropic vertex

* docs fix

* docs fix

* fixing prompt-security's guardrail implementation (BerriAI#19374)

* Consolidated change

* fix(prompt_security): update message processing to persist sanitized files and filter for API calls

* fix per krrishdholakia suggestion

* Fix/per service ssl override v2 (BerriAI#19538)

* refactor(ssl): support per-service SSL verification overrides

* add test cases for ssl

* docs: update Claude Code integration guides (BerriAI#19415)

* docs: document Claude Code default models and env var overrides

- Update config example with current Claude Code 2.1.x model names
- Add section documenting default models (sonnet/haiku) that Claude Code requests
- Document env var overrides (ANTHROPIC_DEFAULT_SONNET_MODEL, etc.)
- Show how model_name alias can route to any provider (Bedrock, Vertex, etc.)

* Update docs

Removed warning about changing model names in Claude Code versions.

* docs: add 1M context support and improve Claude Code quickstart guide

- Add comprehensive 1M context window documentation
- Document [1m] suffix usage and shell escaping requirements
- Clarify that LiteLLM config should NOT include [1m] in model names
- Add standalone claude_code_1m_context.md guide
- Improve model selection documentation with environment variables
- Add section on default models used by Claude Code v2.1.14
- Add troubleshooting for 1M context issues
- Reorganize to emphasize environment variables approach

Addresses GitHub issue BerriAI#14444

* docs: reorder model selection options - prioritize --model over env vars

- Move command line/session model selection to Option 1 (most reliable)
- Move environment variables to Option 2
- Add note that env vars may be cached from previous session
- Emphasize that --model always uses exact model specified

* docs: reorganize 1M context section - separate command line from env vars

- Split 1M context examples into two clear sections
- Show command line usage first (--model and /model)
- Show environment variables as alternative approach
- Improves readability and emphasizes most reliable method

* docs: remove misleading default models section from website tutorial

- Remove 'Default Models Used by Claude Code' section (misleading)
- Remove claim that config must match exact default model names
- Update config comment to be more general
- Add claude-opus-4-5-20251101 to example config
- Keep authentication section as-is

* docs: correct model selection in website tutorial

- Remove incorrect claim that Claude Code automatically uses proxy models
- Add explicit model selection examples with --model and /model
- Show environment variables as alternative approach
- Remove misleading comment about 'multiple configured'

* docs: add 1M context section to website tutorial

- Add section on using [1m] suffix for 1 million token context
- Include warning about shell escaping (quotes required)
- Explain how Claude Code handles [1m] internally
- Add /context verification command
- Note that LiteLLM config should NOT include [1m]

* docs: add tip about using .env for API keys

- Add note that ANTHROPIC_API_KEY can be stored in .env file
- Clarifies alternative to exporting environment variables

* add redisvl dependency to the root requiremnts.tx (BerriAI#19417)

* [Fix] UI Cost Estimator - Fix model dropdown (BerriAI#19529)

* add cost estimator

* ui fix show errors

* test_estimate_cost_resolves_router_model_alias

* fix: UI 404 error when SERVER_ROOT_PATH is set (BerriAI#19467)

* fix: add case-insensitive support for guardrail mode and actions (BerriAI#19480)

* fix(bedrock): correct streaming choice index for tool calls (BerriAI#19506)

Bedrock's contentBlockIndex identifies content blocks within a message
(text=0, tool_call=1), not OpenAI's choice index (which varies with n>1).
This caused OpenAI SDK's ChatCompletionAccumulator to fail when tool call
chunks arrived on index 1 while finish_reason arrived on index 0.

Bedrock doesn't support n>1 (no such parameter exists):
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

OpenAI choice index spec:
https://platform.openai.com/docs/api-reference/chat/streaming

* Fix Azure RPM calculation formula (BerriAI#19513)

* Fix Azure RPM calculation formula

* updated test

* fix(azure response api): flatten tools for responses api to support nested definitions (BerriAI#19526)

The Azure Responses API uses a different schema (flattened) for tools compared to the standard OpenAI/Azure Chat Completions API (nested). This caused a `BadRequestError` when users passed standard tool definitions.

Changes:
- Implemented tool flattening logic in `AzureOpenAIResponsesAPIConfig.transform_responses_api_request`.
- Added comprehensive unit tests in test_azure_transformation.py to verify nested-to-flat transformation, pass-through of flat tools, and immutability.
- Ensures cross-provider compatibility for tool definitions.

Fixes BerriAI#19523

* Fix date overflow/division by zero in proxy utils (BerriAI#19527)

* Fix date overflow/division by zero in proxy utils

* Fix projected spend calculation

* Strengthen projected spend tests

* Fix Azure AI costs for Anthropic models (BerriAI#19530)

* Fix Azure AI cost calculation

* fixup

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the play ground

* Fix: generation config empty for batch

* Add custom vertex ai mapping to the output

* Add support for output formatfor bedrock invoke via v1/messages

* feat: Limit stop sequence as per openai spec

* Fix mypy error in litellm_staging_01_21_2026

* Fix: imagegeneration@006 has been deprecated

* Fix : test_anthropic_via_responses_api

* Fix: Responses API usage field type mismatch

* Fix: Httpx timeout test failures

* Fix: generationConfig removal from tests

* fix: mypy error

* comment code not used

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the play ground

* Fix batch tests

* fix: mypy error

* fix: mypy error

* Fix:test_multiple_function_call

* build(deps): bump lodash from 4.17.21 to 4.17.23 in /docs/my-website

Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](lodash/lodash@4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Metrics prometheus user team count (BerriAI#19520)

* add user count and team count prometheus metrics

* rebase

* revert mistaken deletion

* fix ui build and mypy lint

* Adding python3-dev to non root

* adding node-tar cve allowlist

* fix(websearch_interception): filter internal kwargs before follow-up request (BerriAI#19577)

The websearch interception handler was passing internal flags like
`_websearch_interception_converted_stream` to the follow-up LLM request.
This caused "Extra inputs are not permitted" errors from providers like
Bedrock that use strict Pydantic validation.

Fix: Filter out all kwargs starting with `_websearch_interception` prefix
before making the follow-up anthropic_messages.acreate() call.

* skip brave tests

* Fix unsafe access to request attribute (BerriAI#19573)

* updating promethus tests

* Fix non-root proxy tests

* Adding lodash-es to allowlist

* attempt fix translation tests

* fix: change oss staging branch name to reflect they're oss

* Revert "[Infra] UI - E2E Tests: Internal Viewer Sidebar"

* Overriding lodash-es with version 4.17.23 in docs

* updating lodash for dashboard

* bump: version 1.81.1 → 1.81.2

* Add reusable model select to update organization page

* Fixing tests

* Adding EOS to finish reasons

* Adding retries to flaky tests

* add opencode tutorial (BerriAI#19602)

* Fix org all proxy model case

* adjust opencode tutorial (BerriAI#19605)

* Add OSS Adopters section to README

* fix: completions mcp output ordering

* feat(helm): Enable PreStop hook configuration in values.yaml (BerriAI#19613)

* Fix: litellm/tests/test_proxy_server_non_root.py

* Update README.md

* Update README.md

* [Feat] New LiteLLM Policy engine - create policies to manage guardrails, conditions - permissions per Key, Team (BerriAI#19612)

* init PolicyMatcher

* TestPolicyMatcherGetMatchingPolicies

* TestPolicyMatcherGetMatchingPolicies

* feat: init PolicyResolver

* init resolver types

* init policy from config

* init PolicyValidator

* validate policy

* init Architecture Diagram

* test_add_guardrails_from_policy_engine

* init _init_policy_engine

* test updates

* test fixes

* new attachment config

* simplify types

* TestPolicyResolverInheritance

* fix policy resolver

* fix policies

* fix applied policy

* docs fix

* docs fix

* fix linting + QA checks

* fix linting + QA fixes

* test fixes

* docs fix

* fix: pass through endpoints update registry (BerriAI#19420)

* fix: pass through endpoints update registry

* add test case, fix lint error and comment to avoid confusion

* fix pass through endpoints test case

* [Fix] Anthropic models on Azure AI cache pricing (BerriAI#19532) (BerriAI#19614)

* Update README.md

* fix: for test

* All Models Backend Search

* adding test

* test: completions mcp output test

* chore: fix lint error

* test: Skip anthropic model test when ANTHROPIC_API_KEY is not set

* fix: include tool arguments in proxy_server_request for spend logs callbacks

* feat: hashicorp vault rotate support

* Add tool choice mapping for giga chat

* Fix: Responses API logging error for StopIteration

* Fix: test_nova_invoke_streaming_chunk_parsing

* Remove f string

* fix BerriAI#19620: SSO user roles are not updated for existing users (BerriAI#19621)

* Fix: SSO user roles are not updated for existing users
Fixes BerriAI#19620

* Refactor: Remove redundant user_info retrieval in SSOAuthenticationHandler

* Test: add new tests for user creation and updates in get_user_info_from_db

* ci cd fixes - linting security

* resetting poetry and requirements

* fixing security checks

* docs fix

* fixing config

* skipping flaky tests

* skipping non root tests entirely

* security scan

* attempt fix flaky tests

* fixing flaky tests

* [Feat] Guardrail Policy Management - Allow using UI to manage guardrail policies  (BerriAI#19668)

* init UI

* init schema.prisma

* fix: policy_crud_router

* UI fixes

* update gitignore

* working v0 for policy mgmt

* fix: endpoints to resolve guardrails

* fix code QA checks

* ui build issues

* schema fixes

* fix checks

* docs fix

* remove imports from functions

* add schema.prisma

* add migration

* fix schema.prisma

* remove imports from functions

* fix lint

* BUMP pyproject

* add spend-queue-troubleshooting docs (BerriAI#19659)

* add spend-queue-troubleshooting docs

* adjust spend-queue-troubleshooting docs

* fix linting

* New add fallbacks modal

* adding tests

* Add Langfuse mock mode for testing without API calls (BerriAI#19676)

* Add GCS mock mode for testing without API calls (BerriAI#19683)

* Adding router settings to create team and key

* fixing build

* fixing tests

* perf: Optimize strip_trailing_slash with O(1) index check (BerriAI#19679)

* perf: Optimize strip_trailing_slash with O(1) index check

Replace rstrip("/") with direct index check for O(1) performance
instead of O(n) string scanning.

Results:
- strip_trailing_slash: 311ms → 13ms (96% faster)
- get_standard_logging_object_payload: 6.11s → 5.80s (5% faster)

* Handle multiple trailing slashes in strip_trailing_slash

Use rstrip for correctness when URL ends with "//" or more,
otherwise use O(1) index check for single trailing slash.
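
The two-tier approach described above can be sketched like this (illustrative only, not the exact litellm implementation):

```python
def strip_trailing_slash(url: str) -> str:
    # Fast path: no trailing slash at all.
    if not url.endswith("/"):
        return url
    # Multiple trailing slashes: fall back to rstrip for correctness.
    if url.endswith("//"):
        return url.rstrip("/")
    # Single trailing slash (the common case): O(1) slice.
    return url[:-1]
```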

* Fixing tests

* perf: Optimize use_custom_pricing_for_model with set intersection (BerriAI#19677)

* perf: Optimize use_custom_pricing_for_model with set intersection

Cache CustomPricingLiteLLMParams.model_fields.keys() as a module-level
frozenset and use set intersection to reduce loop iterations from 882k
to 90k (only iterating over keys that exist in both sets).

Performance improvement: 84% faster (6.3x speedup)
- Before: 1.17s total, 65µs per call
- After: 0.19s total, 10µs per call

* Use .get() for defensive dictionary access
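
A self-contained sketch of the set-intersection idea from this commit (the field names are listed inline here as a stand-in for the real `CustomPricingLiteLLMParams.model_fields.keys()`):

```python
# Stand-in for the cached pydantic field names; built once at module import.
_CUSTOM_PRICING_FIELDS = frozenset(
    {"input_cost_per_token", "output_cost_per_token", "input_cost_per_second"}
)


def use_custom_pricing_for_model(litellm_params: dict) -> bool:
    # Intersect first so the loop only visits keys present in BOTH sets,
    # instead of checking every param against every pricing field.
    for key in _CUSTOM_PRICING_FIELDS & litellm_params.keys():
        # .get() for defensive access, per the follow-up commit.
        if litellm_params.get(key) is not None:
            return True
    return False
```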

* perf: skip pattern_router.route() for non-wildcard models (BerriAI#19664)

Check "*" in model before calling pattern_router.route() to avoid
unnecessary pattern matching for non-wildcard model configurations.
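
The guard reads roughly like this (the `PatternRouter` class here is a stand-in for litellm's pattern router, shown only to make the sketch runnable):

```python
class PatternRouter:
    """Illustrative stand-in; the real router does regex matching over
    wildcard deployments, which is the expensive work being skipped."""

    def route(self, model):
        return model if model.startswith("openai/") else None


def resolve(model: str, pattern_router: PatternRouter):
    # Cheap guard from the commit above: only wildcard model names can
    # match a pattern, so skip the expensive route() call otherwise.
    if "*" not in model:
        return None
    return pattern_router.route(model)
```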

* perf: Add LRU caching to get_model_info for faster cost lookups (BerriAI#19606)

- Add @lru_cache decorator to get_model_info() and _cached_get_model_info_helper()
- Update _invalidate_model_cost_lowercase_map() to clear these caches when model_cost changes
- Update test to call cache invalidation after modifying litellm.model_cost

Reduces get_model_cost_information from 46% to <1% of request handling time.
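
The cache-plus-invalidation pattern can be sketched as follows (the data and function bodies are illustrative; only the `@lru_cache` / `cache_clear()` mechanics mirror the commit):

```python
from functools import lru_cache

# Illustrative stand-in for litellm.model_cost.
model_cost = {"gpt-4o": {"input_cost_per_token": 2.5e-06}}


@lru_cache(maxsize=None)
def get_model_info(model: str) -> tuple:
    # Expensive lookup in the real code; memoized per model name here.
    return tuple(sorted(model_cost.get(model, {}).items()))


def invalidate_model_cost_caches() -> None:
    # Mirrors _invalidate_model_cost_lowercase_map(): clear the lru caches
    # whenever model_cost changes so stale entries are never served.
    get_model_info.cache_clear()
```

This is also why the test had to call the invalidation hook after mutating `litellm.model_cost`: without it, the cached entry wins.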

* UI: new build

* redirect to login on expired jwt

* [Feat] UI + Backend - Allow adding policies on Keys/Teams  + Viewing on Info panels  (BerriAI#19688)

* ui for policy mgmt

* test_add_guardrails_from_policy_engine_accepts_dynamic_policies_and_pops_from_data

* docs: add litellm-enterprise requirement for managed files (BerriAI#19689)

* Update Gemini 2.0 Flash deprecation dates to March 31, 2026 (BerriAI#19592)

Google announced that Gemini 2.0 Flash and Flash Lite models will be discontinued on March 31, 2026. Updated deprecation_date field for all affected model variants across different providers (vertex_ai, gemini, deepinfra, openrouter, vercel_ai_gateway).

Models updated:
- gemini-2.0-flash (added deprecation date)
- gemini-2.0-flash-001 (updated from 2026-02-05)
- gemini-2.0-flash-lite (added deprecation date)
- gemini-2.0-flash-lite-001 (updated from 2026-02-25)

All variants now correctly reflect the March 31, 2026 shutdown date.

* fixing build

* Fixing failing tests

* deactivating non root tests

* fixing arize tests

* cache tests serial

* fixing circleci config

* fixing circleci config

* Update OSS Adopters section with new table format

* Fixing ruff check

* bump: version 1.81.2 → 1.81.3

* chore: update Next.js build artifacts (2026-01-24 17:18 UTC, node v22.16.0)

* CI/CD fixes  - split local testing

* fix: _apply_search_filter_to_models mypy linting

* test_partner_models_httpx_streaming

* test_web_search

* Fix: log duplication when json_logs is enabled (BerriAI#19705)

* fix: FLAKY tests

* fix unstable tests

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* test_get_default_unvicorn_init_args

* fix flaky tests

* test_hanging_request_azure

* test_team_update_sc_2

* BUMP extras

* test fixes

* test fixes

* test_retrieve_container_basic

* Model and Team filtering

* TestBedrockInvokeToolSearch

* fix(presidio): resolve runtime error by handling asyncio loops in bac… (BerriAI#19714)

* fix(presidio): resolve runtime error by handling asyncio loops in background threads

* add test case for thread safety

* UI Keys Teams Router Settings docs

* chore: update Next.js build artifacts (2026-01-25 00:27 UTC, node v22.16.0)

* test_stream_transformation_error_sync

* fix patch reliability mock tests

* fix MCP tests

* fix: server root path (BerriAI#19790)

* feat: tpm-rpm limit in prometheus metrics (BerriAI#19725)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(proxy): support slashes in google generateContent model names (BerriAI#19737)

* fix(proxy): support slashes in google route params

* fix(proxy): extract google model ids with slashes

* test(proxy): cover google model ids with slashes

* fix(vertex_ai): support model names with slashes in passthrough URLs (BerriAI#19944)

The regex in get_vertex_model_id_from_url() was using [^/:]+
which stopped at the first slash, truncating model names like
'gcp/google/gemini-2.5-flash' to just 'gcp'. This caused
access_groups checks to fail for custom model names.

Changed the pattern to [^:]+ to allow slashes in model names,
only stopping at the colon before the action (e.g., :generateContent).
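
The effect of the character-class change can be shown with a small extraction sketch (the URL template is illustrative; the real pattern lives in `get_vertex_model_id_from_url()` and may differ slightly):

```python
import re

# Illustrative passthrough URL with a slashed custom model name.
url = ".../publishers/google/models/gcp/google/gemini-2.5-flash:generateContent"


def extract_model_id(u):
    # Fixed pattern: [^:]+ lets "/" through and only stops at the ":"
    # separating the model id from the action verb.
    m = re.search(r"models/(?P<model>[^:]+):", u)
    return m.group("model") if m else None
```

With the old `[^/:]+` class, the match would have stopped at the first slash, yielding just `gcp`.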

* [Fix] VertexAI Pass through - fix regression that caused vertex ai passthroughs to stop working for router models (BerriAI#19967)

* fix(vertex_ai): replace custom model names with actual Vertex AI model names in passthrough URLs (BerriAI#19948)

When the passthrough URL already contains project and location, the code
was skipping the deployment lookup and forwarding the URL as-is to Vertex AI.
For custom model names like gcp/google/gemini-2.5-flash, Vertex AI returned
404 because it only knows the actual model name (gemini-2.5-flash).

The fix makes the deployment lookup always run, so the custom model name
gets replaced with the actual Vertex AI model name before forwarding.

* add _resolve_vertex_model_from_router

* fix: get_llm_provider

* Potential fix for code scanning alert no. 4020: Clear-text logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

---------

Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* [Feat] - Search API add /list endpoint to list what search tools exist in router  (BerriAI#19969)

* feat: List all available search tools configured in the router.

* add debugging search API

* add debugging search API

* perf(prometheus): parallelize budget metrics, fix caching bug, reduce CPU by ~40% (BerriAI#20544)

* fix: revert httpx client caching that caused closed client errors

AsyncHTTPHandler.__del__ was closing httpx clients still in use by
AsyncOpenAI/AsyncAzureOpenAI due to independent cache lifecycles.
Restores standalone httpx client creation for OpenAI/Azure providers.

* Revert "Merge pull request BerriAI#18790 from BerriAI/litellm_key_team_routing_3"

This reverts commit ae26d8e, reversing
changes made to 864e8c6.

* fix MYPY lint

* fixed build errors after merge

* least busy debug logs

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: mubashir1osmani <mubashir.osmani777@gmail.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: YutaSaito <36355491+uc4w6c@users.noreply.github.com>
Co-authored-by: Yuta Saito <uc4w6c@bma.biglobe.ne.jp>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Alexsander Hamir <alexsanderhamirgomesbaptista@gmail.com>
Co-authored-by: jay prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: davida-ps <david.a@prompt.security>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: houdataali <84786211+houdataali@users.noreply.github.com>
Co-authored-by: João Dinis Ferreira <hello@joaof.eu>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Yogeshwaran Ravichandran <96047771+yogeshwaran10@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Eric Cao <ecao310@gmail.com>
Co-authored-by: mpcusack-altos <mcusack@altoslabs.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: John Greek <2006605+jgreek@users.noreply.github.com>
Co-authored-by: xqe2011 <gz923553148@gmail.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* Sync/v1.81.3 stable (#67)

* Fix virtual keys table sorting

* Adding tests

* feat: add GMI Cloud provider support (BerriAI#19376)

* feat: add GMI Cloud provider support

Add GMI Cloud as an OpenAI-compatible provider with:
- Provider configuration in providers.json
- Documentation page with usage examples
- Model pricing for 16 models (Claude, GPT, DeepSeek, Gemini, etc.)
- Sidebar entry for docs navigation

* Add gmi_cloud to provider_endpoints_support.json

Add provider entry to pass CI validation check that ensures all
providers in openai_like/providers.json are documented.

* Fix provider key: gmi_cloud -> gmi

Match the provider key with providers.json

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* Cut chat_completion latency by ~21% by reducing pre-call processing time (BerriAI#19535)

* Adding scope to /models

* e2e test internal viewer sidebar

* Model Select for Create Team

* create team model select

* fixing build

* [Fix] VertexAI Pass through - Ensure only anthropic betas are forwarded down to LLM API (BerriAI#19542)

* fix ALLOWED_VERTEX_AI_PASSTHROUGH_HEADERS

* test_vertex_passthrough_forwards_anthropic_beta_header

* fix test_vertex_passthrough_forwards_anthropic_beta_header

* test_vertex_passthrough_does_not_forward_litellm_auth_token

* fix utils

* Using Anthropic Beta Features on Vertex AI

* test_forward_headers_from_request_x_pass_prefix

* fix(mcp): forward static_headers to MCP servers (BerriAI#19341) (BerriAI#19366)

Forward static_headers from /mcp-rest/test/* routes into the MCP client so headers are present during session.initialize() and tool discovery.

Also add a shared merge_mcp_headers() helper to keep header precedence consistent and ensure OpenAPI-to-MCP generated tools include static_headers.

Tests:
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_rest_endpoints.py
- pytest tests/test_litellm/proxy/_experimental/mcp_server/test_mcp_server_manager.py -k register_openapi_tools_includes_static_headers

Fixes BerriAI#19341

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
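
A sketch of the shared merge helper named in this commit (the function name comes from the commit; the signature and precedence order, request headers winning over static ones, are assumptions):

```python
def merge_mcp_headers(static_headers=None, request_headers=None):
    """Merge MCP server headers with consistent precedence.

    Static headers configured on the server are the base; per-request
    headers override them (assumed precedence, for illustration).
    """
    merged = dict(static_headers or {})
    merged.update(request_headers or {})
    return merged
```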

* fix(azure): preserve content_policy_violation details for images (BerriAI#19328) (BerriAI#19372)

Azure OpenAI Images (DALL·E 3) returns policy violations as a structured payload under body["error"], including inner_error.content_filter_results and revised_prompt.

LiteLLM previously:
- Failed to extract nested error messages (get_error_message only handled body["message"])
- Missed policy violation detection when error strings were generic
- Dropped inner_error details when raising ContentPolicyViolationError

This change:
- Extracts nested Azure error fields (code/type/message + inner_error)
- Detects policy violations via structured error codes
- Passes an OpenAI-style error body + provider_specific_fields to preserve details

Tests:
- python3 -m pytest tests/test_litellm/llms/azure/test_azure_exception_mapping.py
- python3 -m pytest tests/test_litellm/litellm_core_utils/test_exception_mapping_utils.py

Fixes BerriAI#19328
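
The nested-field extraction and structured-code detection described above can be sketched like this (field paths follow the Azure payload shape described in the commit; the helper names are illustrative, not litellm's actual API):

```python
def get_error_message(body):
    """Extract nested Azure error fields instead of only body['message']."""
    error = body.get("error") or {}
    inner = error.get("inner_error") or {}
    return {
        "code": error.get("code"),
        "message": error.get("message") or body.get("message"),
        "content_filter_results": inner.get("content_filter_results"),
    }


def is_content_policy_violation(body):
    # Detect via the structured error code, not brittle string matching.
    return (body.get("error") or {}).get("code") == "content_policy_violation"
```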

* [Feat] Add Structured output for /v1/messages with Anthropic API, Azure Anthropic API, Bedrock Converse  (BerriAI#19545)

* fix: add AnthropicMessagesRequestOptionalParams

* add _update_headers_with_anthropic_beta

* fix output format tests

* test_structured_output_e2e

* TestAnthropicAPIStructuredOutput

* test_structured_output_e2e

* fix BASE

* TestAzureAnthropicStructuredOutput

* fix: Bedrock Converse

* add Anthropic Messages Pass-Through Architecture

* fix: bedrock invoke output_format

* fix: transform_anthropic_messages_request for vertex anthropic

* TestBedrockInvokeStructuredOutput

* docs anthropic vertex

* docs fix

* docs fix

* fixing prompt-security's guardrail implementation (BerriAI#19374)

* Consolidated change

* fix(prompt_security): update message processing to persist sanitized files and filter for API calls

* fix per krrishdholakia suggestion

* Fix/per service ssl override v2 (BerriAI#19538)

* refactor(ssl): support per-service SSL verification overrides

* add test cases for ssl

* docs: update Claude Code integration guides (BerriAI#19415)

* docs: document Claude Code default models and env var overrides

- Update config example with current Claude Code 2.1.x model names
- Add section documenting default models (sonnet/haiku) that Claude Code requests
- Document env var overrides (ANTHROPIC_DEFAULT_SONNET_MODEL, etc.)
- Show how model_name alias can route to any provider (Bedrock, Vertex, etc.)

* Update docs

Removed warning about changing model names in Claude Code versions.

* docs: add 1M context support and improve Claude Code quickstart guide

- Add comprehensive 1M context window documentation
- Document [1m] suffix usage and shell escaping requirements
- Clarify that LiteLLM config should NOT include [1m] in model names
- Add standalone claude_code_1m_context.md guide
- Improve model selection documentation with environment variables
- Add section on default models used by Claude Code v2.1.14
- Add troubleshooting for 1M context issues
- Reorganize to emphasize environment variables approach

Addresses GitHub issue BerriAI#14444

* docs: reorder model selection options - prioritize --model over env vars

- Move command line/session model selection to Option 1 (most reliable)
- Move environment variables to Option 2
- Add note that env vars may be cached from previous session
- Emphasize that --model always uses exact model specified

* docs: reorganize 1M context section - separate command line from env vars

- Split 1M context examples into two clear sections
- Show command line usage first (--model and /model)
- Show environment variables as alternative approach
- Improves readability and emphasizes most reliable method

* docs: remove misleading default models section from website tutorial

- Remove 'Default Models Used by Claude Code' section (misleading)
- Remove claim that config must match exact default model names
- Update config comment to be more general
- Add claude-opus-4-5-20251101 to example config
- Keep authentication section as-is

* docs: correct model selection in website tutorial

- Remove incorrect claim that Claude Code automatically uses proxy models
- Add explicit model selection examples with --model and /model
- Show environment variables as alternative approach
- Remove misleading comment about 'multiple configured'

* docs: add 1M context section to website tutorial

- Add section on using [1m] suffix for 1 million token context
- Include warning about shell escaping (quotes required)
- Explain how Claude Code handles [1m] internally
- Add /context verification command
- Note that LiteLLM config should NOT include [1m]

* docs: add tip about using .env for API keys

- Add note that ANTHROPIC_API_KEY can be stored in .env file
- Clarifies alternative to exporting environment variables

* add redisvl dependency to the root requirements.txt (BerriAI#19417)

* [Fix] UI Cost Estimator - Fix model dropdown (BerriAI#19529)

* add cost estimator

* ui fix show errors

* test_estimate_cost_resolves_router_model_alias

* fix: UI 404 error when SERVER_ROOT_PATH is set (BerriAI#19467)

* fix: add case-insensitive support for guardrail mode and actions (BerriAI#19480)
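
The case-normalized validation this PR describes amounts to lowercasing the incoming value before checking it against the allowed set, e.g. (a minimal sketch; the allowed-mode set here is illustrative, not litellm's full list):

```python
# Illustrative subset of guardrail modes; the real list lives in litellm.
ALLOWED_GUARDRAIL_MODES = {"pre_call", "post_call", "during_call", "logging_only"}


def normalize_guardrail_mode(mode):
    """Accept 'PRE_CALL', 'Pre_Call', etc. by normalizing case before
    validating, instead of rejecting anything that isn't lowercase."""
    normalized = mode.lower()
    if normalized not in ALLOWED_GUARDRAIL_MODES:
        raise ValueError(f"invalid guardrail mode: {mode}")
    return normalized
```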

* fix(bedrock): correct streaming choice index for tool calls (BerriAI#19506)

Bedrock's contentBlockIndex identifies content blocks within a message
(text=0, tool_call=1), not OpenAI's choice index (which varies with n>1).
This caused OpenAI SDK's ChatCompletionAccumulator to fail when tool call
chunks arrived on index 1 while finish_reason arrived on index 0.

Bedrock doesn't support n>1 (no such parameter exists):
https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InferenceConfiguration.html

OpenAI choice index spec:
https://platform.openai.com/docs/api-reference/chat/streaming
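
The index mapping from this commit can be sketched as follows (the chunk shape is illustrative; only the "pin the choice index to 0, keep the block index separately" behavior mirrors the fix):

```python
def normalize_stream_chunk(chunk):
    """Bedrock's contentBlockIndex numbers content blocks *within* one
    message (text=0, tool_call=1); it is not an OpenAI choice index.
    Since Bedrock has no n>1, every chunk belongs to choice 0."""
    return {
        "index": 0,  # OpenAI choice index
        "content_block_index": chunk.get("contentBlockIndex", 0),
        "delta": chunk.get("delta", {}),
    }
```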

* Fix Azure RPM calculation formula (BerriAI#19513)

* Fix Azure RPM calculation formula

* updated test

* fix(azure response api): flatten tools for responses api to support nested definitions (BerriAI#19526)

The Azure Responses API uses a different schema (flattened) for tools compared to the standard OpenAI/Azure Chat Completions API (nested). This caused a `BadRequestError` when users passed standard tool definitions.

Changes:
- Implemented tool flattening logic in `AzureOpenAIResponsesAPIConfig.transform_responses_api_request`.
- Added comprehensive unit tests in test_azure_transformation.py to verify nested-to-flat transformation, pass-through of flat tools, and immutability.
- Ensures cross-provider compatibility for tool definitions.

Fixes BerriAI#19523

* Fix date overflow/division by zero in proxy utils (BerriAI#19527)

* Fix date overflow/division by zero in proxy utils

* Fix projected spend calculation

* Strengthen projected spend tests

* Fix Azure AI costs for Anthropic models (BerriAI#19530)

* Fix Azure AI cost calculation

* fixup

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the play ground

* Fix: generation config empty for batch

* Add custom vertex ai mapping to the output

* Add support for output format for bedrock invoke via v1/messages

* feat: Limit stop sequence as per openai spec

* Fix mypy error in litellm_staging_01_21_2026

* Fix: imagegeneration@006 has been deprecated

* Fix : test_anthropic_via_responses_api

* Fix: Responses API usage field type mismatch

* Fix: Httpx timeout test failures

* Fix: generationConfig removal from tests

* fix: mypy error

* comment code not used

* feat: Add MCP tools response to chat completions

* feat: display mcp output on the play ground

* Fix batch tests

* fix: mypy error

* fix: mypy error

* Fix:test_multiple_function_call

* build(deps): bump lodash from 4.17.21 to 4.17.23 in /docs/my-website

Bumps [lodash](https://github.com/lodash/lodash) from 4.17.21 to 4.17.23.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](lodash/lodash@4.17.21...4.17.23)

---
updated-dependencies:
- dependency-name: lodash
  dependency-version: 4.17.23
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Metrics prometheus user team count (BerriAI#19520)

* add user count and team count prometheus metrics

* rebase

* revert mistaken deletion

* fix ui build and mypy lint

* Adding python3-dev to non root

* adding node-tar cve allowlist

* fix(websearch_interception): filter internal kwargs before follow-up request (BerriAI#19577)

The websearch interception handler was passing internal flags like
`_websearch_interception_converted_stream` to the follow-up LLM request.
This caused "Extra inputs are not permitted" errors from providers like
Bedrock that use strict Pydantic validation.

Fix: Filter out all kwargs starting with `_websearch_interception` prefix
before making the follow-up anthropic_messages.acreate() call.

* skip brave tests

* Fix unsafe access to request attribute (BerriAI#19573)

* updating promethus tests

* Fix non-root proxy tests

* Adding lodash-es to allowlist

* attempt fix translation tests

* fix: change oss staging branch name to reflect they're oss

* Revert "[Infra] UI - E2E Tests: Internal Viewer Sidebar"

* Overriding lodash-es with version 4.17.23 in docs

* updating lodash for dashboard

* bump: version 1.81.1 → 1.81.2

* Add reusable model select to update organization page

* Fixing tests

* Adding EOS to finish reasons

* Adding retries to flaky tests

* add opencode tutorial (BerriAI#19602)

* Fix org all proxy model case

* adjust opencode tutorial (BerriAI#19605)

* Add OSS Adopters section to README

* fix: completions mcp output ordering

* feat(helm): Enable PreStop hook configuration in values.yaml (BerriAI#19613)

* Fix: litellm/tests/test_proxy_server_non_root.py

* Update README.md

* Update README.md

* [Feat] New LiteLLM Policy engine - create policies to manage guardrails, conditions - permissions per Key, Team (BerriAI#19612)

* init PolicyMatcher

* TestPolicyMatcherGetMatchingPolicies

* TestPolicyMatcherGetMatchingPolicies

* feat: init PolicyResolver

* init resolver types

* init policy from config

* inint PolicyValidator

* validate policy

* init Architecture Diagram

* test_add_guardrails_from_policy_engine

* init _init_policy_engine

* test updates

* test fixws

* new attachment config

* simplify types

* TestPolicyResolverInheritance

* fix policy resolver

* fix policies

* fix applied policy

* docs fix

* docs fix

* fix linting + QA checks

* fix linting + QA fixes

* test fixes

* docs fix

* fix: pass through endpoints update registry (BerriAI#19420)

* fix: pass through endpoints update registry

* add test case, fix lint error and comment to avoid confusion

* fix pass through endpoints test case

* [Fix] Anthropic models on Azure AI cache pricing (BerriAI#19532) (BerriAI#19614)

* Update README.md

* fix: for test

* All Models Backend Search

* adding test

* test: completions mcp output test

* chore: fix lint error

* test: Skip anthropic model test when ANTHROPIC_API_KEY is not set

* fix: include tool arguments in proxy_server_request for spend logs callbacks

* feat: hashicorp vault rotate support

* Add tool choice mapping for giga chat

* Fix: Responses API logging error for StopIteration

* Fix: test_nova_invoke_streaming_chunk_parsing

* Remove f string

* fix BerriAI#19620: SSO user roles are not updated for existing users (BerriAI#19621)

* Fix: SSO user roles are not updated for existing users
Fixes BerriAI#19620

* Refactor: Remove redundant user_info retrieval in SSOAuthenticationHandler

* Test: add new tests for user creation and updates in get_user_info_from_db

* ci cd fixes - linting security

* resetting poetry and requirements

* fixing security checks

* docs fix

* fixing config

* skipping flaky tests

* skipping non root tests entirely

* security scan

* attempt fix flaky tests

* fixing flaky tests

* [Feat] Guardrail Policy Management - Allow using UI to manage guardrail policies  (BerriAI#19668)

* init UI

* init schema.prisma

* fix: policy_crud_router

* UI fixes

* update gitignore

* working v0 for policy mgmt

* fix: endpoints to resolve guardrails

* fix code QA checks

* ui build issues

* schema fixes

* fix checks

* docs fix

* remove imports from functions

* add schema.prisma

* add migrtion

* fix schema.prisma

* remove imports from functions

* fix lint

* BUMP pyproject

* add spend-queue-troubleshooting docs (BerriAI#19659)

* add spend-queue-troubleshooting docs

* adjust spend-queue-troubleshooting docs

* fix linting

* New add fallbacks modal

* adding tests

* Add Langfuse mock mode for testing without API calls (BerriAI#19676)

* Add GCS mock mode for testing without API calls (BerriAI#19683)

* Adding router settings to create team and key

* fixing build

* fixing tests

* perf: Optimize strip_trailing_slash with O(1) index check (BerriAI#19679)

* perf: Optimize strip_trailing_slash with O(1) index check

Replace rstrip("/") with direct index check for O(1) performance
instead of O(n) string scanning.

Results:
- strip_trailing_slash: 311ms → 13ms (96% faster)
- get_standard_logging_object_payload: 6.11s → 5.80s (5% faster)

* Handle multiple trailing slashes in strip_trailing_slash

Use rstrip for correctness when URL ends with "//" or more,
otherwise use O(1) index check for single trailing slash.

* Fixing tests

* perf: Optimize use_custom_pricing_for_model with set intersection (BerriAI#19677)

* perf: Optimize use_custom_pricing_for_model with set intersection

Cache CustomPricingLiteLLMParams.model_fields.keys() as a module-level
frozenset and use set intersection to reduce loop iterations from 882k
to 90k (only iterating over keys that exist in both sets).

Performance improvement: 84% faster (6.3x speedup)
- Before: 1.17s total, 65µs per call
- After: 0.19s total, 10µs per call

* Use .get() for defensive dictionary access

* perf: skip pattern_router.route() for non-wildcard models (BerriAI#19664)

Check "*" in model before calling pattern_router.route() to avoid
unnecessary pattern matching for non-wildcard model configurations.

* perf: Add LRU caching to get_model_info for faster cost lookups (BerriAI#19606)

- Add @lru_cache decorator to get_model_info() and _cached_get_model_info_helper()
- Update _invalidate_model_cost_lowercase_map() to clear these caches when model_cost changes
- Update test to call cache invalidation after modifying litellm.model_cost

Reduces get_model_cost_information from 46% to <1% of request handling time.

* UI: new build

* redirect to login on expired jwt

* [Feat] UI + Backend - Allow adding policies on Keys/Teams  + Viewing on Info panels  (BerriAI#19688)

* ui for policy mgmt

* test_add_guardrails_from_policy_engine_accepts_dynamic_policies_and_pops_from_data

* docs: add litellm-enterprise requirement for managed files (BerriAI#19689)

* Update Gemini 2.0 Flash deprecation dates to March 31, 2026 (BerriAI#19592)

Google announced that Gemini 2.0 Flash and Flash Lite models will be discontinued on March 31, 2026. Updated deprecation_date field for all affected model variants across different providers (vertex_ai, gemini, deepinfra, openrouter, vercel_ai_gateway).

Models updated:
- gemini-2.0-flash (added deprecation date)
- gemini-2.0-flash-001 (updated from 2026-02-05)
- gemini-2.0-flash-lite (added deprecation date)
- gemini-2.0-flash-lite-001 (updated from 2026-02-25)

All variants now correctly reflect the March 31, 2026 shutdown date.

* fixing build

* Fixing failing tests

* deactivating non root tests

* fixing arize tests

* cache tests serial

* fixing circleci config

* fixing circleci config

* Update OSS Adopters section with new table format

* Fixing ruff check

* bump: version 1.81.2 → 1.81.3

* chore: update Next.js build artifacts (2026-01-24 17:18 UTC, node v22.16.0)

* CI/CD fixes  - split local testing

* fix: _apply_search_filter_to_models mypy linting

* test_partner_models_httpx_streaming

* test_web_search

* Fix: log duplication when json_logs is enabled (BerriAI#19705)

* fix: FLAKY tests

* fix unstable tests

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* test_get_default_unvicorn_init_args

* fix flaky tests

* test_hanging_request_azure

* test_team_update_sc_2

* BUMP extras

* test fixes

* test fixes

* test_retrieve_container_basic

* Model and Team filtering

* TestBedrockInvokeToolSearch

* fix(presidio): resolve runtime error by handling asyncio loops in bac… (BerriAI#19714)

* fix(presidio): resolve runtime error by handling asyncio loops in background threads

* add test case for thread safety

* UI Keys Teams Router Settings docs

* chore: update Next.js build artifacts (2026-01-25 00:27 UTC, node v22.16.0)

* test_stream_transformation_error_sync

* fix patch reliability mock tests

* fix MCP tests

* fix: server rooth path (BerriAI#19790)

* feat: tpm-rpm limit in prometheus metrics (BerriAI#19725)

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix(proxy): support slashes in google generateContent model names (BerriAI#19737)

* fix(proxy): support slashes in google route params

* fix(proxy): extract google model ids with slashes

* test(proxy): cover google model ids with slashes

* fix(vertex_ai): support model names with slashes in passthrough URLs (BerriAI#19944)

The regex in get_vertex_model_id_from_url() was using [^/:]+
which stopped at the first slash, truncating model names like
'gcp/google/gemini-2.5-flash' to just 'gcp'. This caused
access_groups checks to fail for custom model names.

Changed the pattern to [^:]+ to allow slashes in model names,
only stopping at the colon before the action (e.g., :generateContent).
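
The regex change described above can be sketched as follows (the pattern and function names here are illustrative assumptions based on the commit description, not the actual LiteLLM source):

```python
import re
from typing import Optional

# Old pattern: [^/:]+ stops at the first slash, so a model id like
# "gcp/google/gemini-2.5-flash" is truncated to "gcp".
OLD_PATTERN = re.compile(r"/models/(?P<model>[^/:]+)")

# New pattern: [^:]+ only stops at the colon before the action
# (e.g. ":generateContent"), so slashes in the model id survive.
NEW_PATTERN = re.compile(r"/models/(?P<model>[^:]+)")

def extract_model_id(url: str, pattern: re.Pattern) -> Optional[str]:
    match = pattern.search(url)
    return match.group("model") if match else None

url = (
    "https://us-central1-aiplatform.googleapis.com/v1/projects/p/"
    "locations/l/publishers/google/models/"
    "gcp/google/gemini-2.5-flash:generateContent"
)
print(extract_model_id(url, OLD_PATTERN))  # gcp (truncated)
print(extract_model_id(url, NEW_PATTERN))  # gcp/google/gemini-2.5-flash
```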

* [Fix] VertexAI Pass through - fix regression that caused vertex ai passthroughs to stop working for router models (BerriAI#19967)

* fix(vertex_ai): replace custom model names with actual Vertex AI model names in passthrough URLs (BerriAI#19948)

When the passthrough URL already contains project and location, the code
was skipping the deployment lookup and forwarding the URL as-is to Vertex AI.
For custom model names like gcp/google/gemini-2.5-flash, Vertex AI returned
404 because it only knows the actual model name (gemini-2.5-flash).

The fix makes the deployment lookup always run, so the custom model name
gets replaced with the actual Vertex AI model name before forwarding.
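
A minimal sketch of the lookup-and-replace behavior described above (the mapping and function names are hypothetical, not the actual LiteLLM implementation):

```python
# Hypothetical router mapping from custom model names to the
# actual Vertex AI model names they deploy.
ROUTER_MODELS = {
    "gcp/google/gemini-2.5-flash": "gemini-2.5-flash",
}

def resolve_actual_model(custom_name: str) -> str:
    # Fall back to the given name when no deployment is configured.
    return ROUTER_MODELS.get(custom_name, custom_name)

def rewrite_passthrough_url(url: str, custom_name: str) -> str:
    # Always run the lookup, even when the URL already contains
    # project and location, so Vertex AI sees a model name it knows.
    return url.replace(custom_name, resolve_actual_model(custom_name))

url = (
    "https://us-central1-aiplatform.googleapis.com/v1/projects/p/"
    "locations/us-central1/publishers/google/models/"
    "gcp/google/gemini-2.5-flash:generateContent"
)
print(rewrite_passthrough_url(url, "gcp/google/gemini-2.5-flash"))
```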

* add _resolve_vertex_model_from_router

* fix: get_llm_provider

* Potential fix for code scanning alert no. 4020: Clear-text logging of sensitive information

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

---------

Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* [Feat] - Search API add /list endpoint to list what search tools exist in router  (BerriAI#19969)

* feat: List all available search tools configured in the router.

* add debugging search API

* add debugging search API

* perf(prometheus): parallelize budget metrics, fix caching bug, reduce CPU by ~40% (BerriAI#20544)

* fix: revert httpx client caching that caused closed client errors

AsyncHTTPHandler.__del__ was closing httpx clients still in use by
AsyncOpenAI/AsyncAzureOpenAI due to independent cache lifecycles.
Restores standalone httpx client creation for OpenAI/Azure providers.
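
The lifecycle bug behind this revert can be reproduced generically (this is an illustrative sketch, not the LiteLLM or httpx code):

```python
class FakeHTTPClient:
    """Stand-in for an httpx client with a close() method."""
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

class CachingHandler:
    """Mimics the reverted AsyncHTTPHandler behavior: the handler
    closes its cached client in __del__, even though other objects
    (e.g. an AsyncOpenAI instance) may still hold a reference."""
    def __init__(self, client: FakeHTTPClient) -> None:
        self.client = client

    def __del__(self) -> None:
        # Unsafe: the client's lifetime is not tied to this handler.
        self.client.close()

shared = FakeHTTPClient()
handler = CachingHandler(shared)
consumer_client = shared   # another consumer holding the same client
del handler                # handler's __del__ closes the shared client
print(consumer_client.closed)  # True: consumer is left with a closed client
```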

* Revert "Merge pull request BerriAI#18790 from BerriAI/litellm_key_team_routing_3"

This reverts commit ae26d8e, reversing
changes made to 864e8c6.

* fix MYPY lint

* fixed build errors after merge

* added sandbox branch for gcr push (#61)

* added sandbox branch for gcr push

* jenkins setup for sbx

* build fix

* adding sync/v[0-9] branches for gcr push

* build fix

* least busy debug logs

* Fix: remove x-anthropic-billing block

* added back anthropic envs

* merge fixes

* least busy router changes

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: YutaSaito <36355491+uc4w6c@users.noreply.github.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Alexsander Hamir <alexsanderhamirgomesbaptista@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: jay prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: davida-ps <david.a@prompt.security>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: houdataali <84786211+houdataali@users.noreply.github.com>
Co-authored-by: João Dinis Ferreira <hello@joaof.eu>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Yogeshwaran Ravichandran <96047771+yogeshwaran10@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
Co-authored-by: Yuta Saito <uc4w6c@bma.biglobe.ne.jp>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Eric Cao <ecao310@gmail.com>
Co-authored-by: mpcusack-altos <mcusack@altoslabs.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: John Greek <2006605+jgreek@users.noreply.github.com>
Co-authored-by: xqe2011 <gz923553148@gmail.com>
Co-authored-by: mubashir1osmani <mubashir.osmani777@gmail.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: pramodp-dotcom <pramod.p@juspay.in>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Pramod P <pramod.p@juspay.in>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: mubashir1osmani <mubashir.osmani777@gmail.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: YutaSaito <36355491+uc4w6c@users.noreply.github.com>
Co-authored-by: Yuta Saito <uc4w6c@bma.biglobe.ne.jp>
Co-authored-by: Cesar Garcia <128240629+Chesars@users.noreply.github.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Alexsander Hamir <alexsanderhamirgomesbaptista@gmail.com>
Co-authored-by: jay prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: davida-ps <david.a@prompt.security>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: houdataali <84786211+houdataali@users.noreply.github.com>
Co-authored-by: João Dinis Ferreira <hello@joaof.eu>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Yogeshwaran Ravichandran <96047771+yogeshwaran10@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Eric Cao <ecao310@gmail.com>
Co-authored-by: mpcusack-altos <mcusack@altoslabs.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: John Greek <2006605+jgreek@users.noreply.github.com>
Co-authored-by: xqe2011 <gz923553148@gmail.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: michelligabriele <gabriele.michelli@icloud.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>