
[Feat] - Ishaan main merge branch #23596

Merged
ishaan-jaff merged 4 commits into main from litellm_ishaan_march_13
Mar 14, 2026

Conversation

@ishaan-jaff
Member

  • fix(bedrock): respect s3_region_name for batch file uploads (GovCloud fix)

  • fix: s3_region_name always wins over aws_region_name for S3 signing (Greptile feedback)

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/test_litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem
  • I have requested a Greptile review by commenting @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable with minor issues.
  • 45-49 passing tests: acceptable but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:

  • CI run for the last commit
    Link:

  • Merge / cherry-pick CI run
    Links:

Type

🆕 New Feature
🐛 Bug Fix
🧹 Refactoring
📖 Documentation
🚄 Infrastructure
✅ Test

Changes

* fix(bedrock): respect s3_region_name for batch file uploads (GovCloud fix)

* fix: s3_region_name always wins over aws_region_name for S3 signing (Greptile feedback)

Comment on lines +176 to +181
s3_region_name = litellm_params.get("s3_region_name") or optional_params.get(
    "s3_region_name"
)
aws_region_name = s3_region_name or self._get_aws_region_name(
    optional_params, model
)
Contributor

Duplicated s3_region_name extraction logic

The same region resolution block (lines 176-181 and lines 409-413) is duplicated across two methods in the same class. If the priority logic ever needs to change (e.g., adding an AWS_S3_REGION env-var fallback), it must be updated in both places, which is a maintenance hazard.

Consider extracting a small private helper:

def _get_s3_region_name(
    self, litellm_params: dict, optional_params: dict
) -> Optional[str]:
    return litellm_params.get("s3_region_name") or optional_params.get(
        "s3_region_name"
    )

Then both call-sites become a single line:

s3_region_name = self._get_s3_region_name(litellm_params, optional_params)


* fix: _filter_headers_for_aws_signature

* fix: filter None header values in all post-signing re-merge paths

Addresses Greptile feedback: None-valued headers were being filtered
during SigV4 signing but re-merged back into the final headers dict
afterward, which would cause downstream HTTP client failures.

Made-with: Cursor
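The fix above can be sketched as a post-signing merge that drops None-valued headers before they reach the HTTP client. This is an illustrative reconstruction of the bug class described in the commit message, not LiteLLM's actual code; the function name is hypothetical.

```python
def merge_headers(signed_headers: dict, original_headers: dict) -> dict:
    """Merge original headers back after SigV4 signing, dropping None values.

    None-valued headers that were filtered out for signing must not be
    re-merged, or the downstream HTTP client will fail on them.
    """
    merged = dict(signed_headers)
    merged.update({k: v for k, v in original_headers.items() if v is not None})
    return merged
```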
…er-developer tag config (#23594)

* feat(router): add tag_regex support for header-based routing

Adds a new `tag_regex` field to litellm_params that lets operators route
requests based on regex patterns matched against request headers — primarily
User-Agent — without requiring per-developer tag configuration.

Use case: route all Claude Code traffic (User-Agent: claude-code/x.y.z) to
a dedicated deployment by setting:

  tag_regex:
    - "^User-Agent: claude-code\\/"

in the deployment's litellm_params. Works alongside existing `tags` routing;
exact tag match takes precedence over regex match. Unmatched requests fall
through to deployments tagged `default`.

The matched deployment, pattern, and user_agent are recorded in
`metadata["tag_routing"]` so they flow through to SpendLogs automatically.
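The routing priority described above (exact tag match wins over regex, unmatched requests fall through to `default`) can be sketched roughly as follows. This is a simplified, hypothetical helper based only on the commit message; names and deployment shapes are illustrative, not LiteLLM's internals.

```python
import re


def pick_deployment(deployments, request_tags, header_strings):
    """Illustrative priority: exact tag > tag_regex > 'default' tag."""
    # 1. Exact tag match takes precedence over any regex match.
    for d in deployments:
        if request_tags and set(request_tags) & set(d.get("tags", [])):
            return d
    # 2. Regex match against header strings, e.g. "User-Agent: claude-code/1.2.3".
    for d in deployments:
        if any(
            re.search(pattern, header)
            for pattern in d.get("tag_regex", [])
            for header in header_strings
        ):
            return d
    # 3. Unmatched requests fall through to the "default"-tagged deployment.
    for d in deployments:
        if "default" in d.get("tags", []):
            return d
    return None
```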

* fix(tag_regex): address backwards-compat, metadata overwrite, and warning noise

Three issues from code review:

1. Backwards-compat: `has_tag_filter` was widened to activate on any non-empty
   User-Agent, which would raise ValueError for existing deployments using plain
   tags without a `default` fallback. Fix: only activate header-based regex
   filtering when at least one candidate deployment has `tag_regex` configured.

2. Metadata overwrite: `metadata["tag_routing"]` was overwritten for every
   matching deployment in the loop, leaving inaccurate provenance when multiple
   deployments match. Fix: write only for the first match.

3. Warning noise: an invalid regex pattern logged one warning per header string
   rather than once per pattern. Fix: compile first (catching re.error once),
   then iterate over header strings.

Also adds two new tests covering these cases, and adds docs page for
tag_regex routing with a Claude Code walk-through.
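The "compile first, then iterate" fix from point 3 above can be sketched like this: an invalid pattern produces exactly one warning per pattern, regardless of how many header strings are checked. The function name is illustrative, not the actual LiteLLM helper.

```python
import logging
import re

logger = logging.getLogger(__name__)


def first_regex_match(patterns, header_strings):
    """Return the first pattern that matches any header string.

    Compiling each pattern once (and catching re.error once) avoids
    emitting one warning per header string for a single bad pattern.
    """
    for pattern in patterns:
        try:
            compiled = re.compile(pattern)
        except re.error as e:
            # One warning per invalid pattern, not per header string.
            logger.warning("invalid tag_regex %r: %s", pattern, e)
            continue
        for header in header_strings:
            if compiled.search(header):
                return pattern
    return None
```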

* refactor(tag_regex): remove unnecessary _healthy_list copy

* docs: merge tag_regex section into tag_routing.md, remove standalone page

- Add ## Regex-based tag routing (tag_regex) section to existing
  tag_routing.md instead of a separate page
- Remove tag_regex_routing.md standalone doc (odd UX to have a separate
  page for a sub-feature)
- Remove proxy/tag_regex_routing from sidebars.js
- Add match_any=False debug warning in tag_based_routing.py when regex
  routing fires under strict mode (regex always uses OR semantics)

* fix(tag_regex): address greptile review - security docs, strict-mode enforcement, validation order

- Strengthen security note in tag_routing.md: explicitly state User-Agent
  is client-supplied and can be set to any value; frame tag_regex as a
  traffic classification hint, not an access-control mechanism
- Move tag_regex startup validation before _add_deployment() so an invalid
  pattern never leaves partial router state
- Enforce match_any=False strict-tag policy: when a deployment has both
  tags and tag_regex and the strict tag check fails, skip the regex fallback
  rather than silently bypassing the operator's intent
- Extract per-deployment match logic into _match_deployment() helper to
  keep get_deployments_for_tag() readable
- Add two new tests: strict-mode blocks regex fallback, regex-only
  deployment still matches under match_any=False
Comment on lines +106 to +119
# 2. Regex match against request headers.
# When match_any=False and the deployment has both plain tags and tag_regex,
# the strict tag check has already failed (step 1 returned None). Allow
# the regex to fire only when the deployment has NO plain tags, so we never
# use regex as a backdoor around the operator's strict-tag policy.
strict_tag_check_failed = (
    not match_any
    and bool(deployment_tags)
    and bool(request_tags)
)
if deployment_tag_regex and header_strings and not strict_tag_check_failed:
    regex_match = _is_valid_deployment_tag_regex(deployment_tag_regex, header_strings)
    if regex_match is not None:
        return {"matched_via": "tag_regex", "matched_value": regex_match}
Contributor

Regex can bypass tag requirements when match_any=True

The strict_tag_check_failed guard only activates when match_any=False. When match_any=True (the default), if a deployment has both tags: ["premium"] and a tag_regex, a request with tags: ["free"] (which fails the tag check) and a matching User-Agent will still be routed to that deployment via the regex path:

  • Step 1: is_valid_deployment_tag(["premium"], ["free"], match_any=True) → False, falls through
  • strict_tag_check_failed = not True and ... = False
  • Step 2: regex matches → deployment matched

This may be intentional (both tags and tag_regex act as independent OR conditions), but it creates a surprising asymmetry: in match_any=True mode a "wrong-tagged" request can still hit a deployment just because its User-Agent matches. Operators who configure tags: ["premium"] expecting it to restrict access will find that any client with the right UA bypasses the restriction.

If this is by design, it should be explicitly stated in the documentation (the current docs only note that tag_filtering_match_any=False doesn't apply to regex, not that regex overrides failed tag checks in match_any=True mode).
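The asymmetry described above can be condensed into a small sketch (names illustrative, not the actual router code): with match_any=True a failed tag check still falls through to the regex path, while match_any=False blocks it.

```python
def matched(deployment_tags, tag_regex_hit, request_tags, match_any):
    """Illustrative reduction of the two-step match logic under review."""
    tag_hit = bool(set(deployment_tags) & set(request_tags))
    if tag_hit:
        return True
    # Strict mode only: a failed tag check suppresses the regex fallback.
    strict_tag_check_failed = (
        not match_any and bool(deployment_tags) and bool(request_tags)
    )
    return tag_regex_hit and not strict_tag_check_failed
```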

…g tests

- Run Black formatter on 14 files that were failing the lint check
- Replace caplog-based assertions in TestAliasConflicts with
  unittest.mock.patch on verbose_logger.warning for xdist compatibility
- The caplog fixture can produce empty text in pytest-xdist workers
  in certain CI environments, causing flaky test failures

Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ ishaan-jaff
❌ cursoragent

Comment on lines +174 to +176
has_tag_filter = bool(request_tags) or (
    bool(header_strings) and has_regex_deployments
)
Contributor

Tagless requests enter filtering loop when any tag_regex deployment exists

Because the LiteLLM proxy unconditionally writes metadata["user_agent"] from the HTTP User-Agent header for every request (see litellm_pre_call_utils.py:1201), header_strings is almost always non-empty. This means has_tag_filter evaluates to True for every request whenever even one deployment in the pool has tag_regex configured.

For a tagless request (no metadata["tags"]) hitting a mixed pool like:

Deployment A: tags: ["team-a"]        # no tag_regex, no "default"
Deployment B: tag_regex: ["^User-Agent: claude-code\/"]

The filtering loop runs, but:

  • Deployment A: _match_deployment → None (step 1 skips because request_tags=None, step 2 skips because no tag_regex)
  • Deployment B: _match_deployment → None (regex doesn't match, e.g. Mozilla/5.0)

Result: new_healthy_deployments=[], default_deployments=[] → ValueError raised.

Before this PR, has_tag_filter = bool(request_tags) = False for tagless requests, so all healthy deployments were returned. Adding any tag_regex deployment to an existing pool (without ensuring a tags: ["default"] fallback) is a silent backwards-incompatible change for tagless clients.

Consider adding a safeguard — e.g. fall through to healthy_deployments instead of raising when request_tags is None, or document clearly that a tags: ["default"] deployment is required when using tag_regex.

if len(new_healthy_deployments) == 0 and len(default_deployments) == 0:
    # If the request had no explicit tags, fall through to the unfiltered path
    # rather than raising, to avoid breaking tagless clients when tag_regex
    # deployments are mixed with plain-tag deployments.
    if not request_tags:
        pass  # fall through to the bottom default/all-deployments path
    else:
        raise ValueError(
            f"{RouterErrors.no_deployments_with_tag_routing.value}. Passed model={model} and tags={request_tags}"
        )

@BerriAI BerriAI deleted a comment from greptile-apps bot Mar 14, 2026
@yuneng-jiang
Collaborator

lgtm

@yuneng-jiang yuneng-jiang self-requested a review March 14, 2026 16:35
@ishaan-jaff ishaan-jaff merged commit b87d1f8 into main Mar 14, 2026
94 of 98 checks passed
RheagalFire pushed a commit that referenced this pull request Mar 15, 2026
…der (#23663)

* fix: forward extra_headers to HuggingFace embedding calls (#23525)

Fixes #23502

The huggingface_embed.embedding() call was not receiving the headers
parameter, causing extra_headers (e.g., X-HF-Bill-To) to be silently
dropped. Other providers (openrouter, vercel_ai_gateway, bedrock) already
pass headers correctly. This fix adds headers=headers to match the
behavior of other providers.

Co-authored-by: Jah-yee <sparklab@outlook.com>

* fix: add getPopupContainer to Select components in fallback modal to fix z-index issue (#23516)

The model dropdown menus in the Add Fallbacks modal were rendering behind
the modal overlay because Ant Design portals Select dropdowns to document.body
by default. By setting getPopupContainer to attach the dropdown to its parent
element, the dropdown inherits the modal's stacking context and renders above
the modal.

Fixes #17895

* PR #22867 added _remove_scope_from_cache_control for Bedrock and Azur… (#23183)

* PR #22867 added _remove_scope_from_cache_control for Bedrock and Azure AI but omitted Vertex AI. This applies the same pattern to VertexAIPartnerModelsAnthropicMessagesConfig.

* PR #22867 added _remove_scope_from_cache_control to AzureAnthropicMessagesConfig but missed VertexAIPartnerModelsAnthropicMessagesConfig. Rather than duplicating the method again, moved it up to the base AnthropicMessagesConfig so all providers inherit it, and removed the now-redundant copy from the Azure AI subclass.

---------

Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>

* fix: auto-fill reasoning_content for moonshot kimi reasoning models in multi-turn tool calling (#23580)

* Handle response.failed, response.incomplete, and response.cancelled (#23492)

* Handle response.failed, response.incomplete, and response.cancelled terminal events in background streaming

Previously the background streaming task only handled response.completed and
hardcoded the final status to "completed". This missed three other terminal
event types from the OpenAI streaming spec, causing failed/incomplete/cancelled
responses to be incorrectly marked as completed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Committed-By-Agent: claude

* Remove unused terminal_response_data variable

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Committed-By-Agent: claude

* Address code review: derive fallback status from event type, rewrite tests as integration tests

1. Replace hardcoded "completed" fallback in response_data.get("status")
   with _event_to_status lookup so that response.incomplete and
   response.cancelled events get the correct fallback if the response
   body ever omits the status field.

2. Replace duplicated-logic unit tests with integration tests that
   exercise background_streaming_task directly using mocked streaming
   responses and assert on the final update_state call arguments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Committed-By-Agent: claude
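The fallback-status derivation described in point 1 above can be sketched as an event-type lookup. The mapping and function name below are assumptions reconstructed from the commit message, not the exact LiteLLM implementation.

```python
# Terminal streaming event types from the OpenAI Responses streaming spec,
# mapped to the status used when the response body omits "status".
_EVENT_TO_STATUS = {
    "response.completed": "completed",
    "response.failed": "failed",
    "response.incomplete": "incomplete",
    "response.cancelled": "cancelled",
}


def final_status(event_type: str, response_data: dict) -> str:
    # Prefer the status in the response body; otherwise derive it from
    # the terminal event type instead of hardcoding "completed".
    return response_data.get("status") or _EVENT_TO_STATUS.get(event_type, "failed")
```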

* Remove dead mock_processor and unused mock_response parameter from test helper

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Committed-By-Agent: claude

* Remove FastAPI and UserAPIKeyAuth imports from test file

These types were only used as Mock(spec=...) arguments. Drop the spec
constraints and remove the top-level imports to avoid pulling FastAPI
into test files outside litellm/proxy/.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Committed-By-Agent: claude

* Log warning when streaming response has no body_iterator

If base_process_llm_request returns a non-streaming response (no
body_iterator), log a warning since this likely indicates a
misconfiguration or provider error rather than a successful completion.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Committed-By-Agent: claude

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix(security): bump tar to 7.5.11 and tornado to 6.5.5 (#23602)

* fix(security): bump tar to 7.5.11 and tornado to 6.5.5

- tar >=7.5.11: fixes CVE-2026-31802 (HIGH) in node-pkg
- tornado >=6.5.5: fixes CVE-2026-31958 (HIGH) and GHSA-78cv-mqj4-43f7 (MEDIUM) in python-pkg

Addresses vulnerabilities found in ghcr.io/berriai/litellm:main-v1.82.0-stable Trivy scan.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: document tar override is enforced via Dockerfile, not npm

* fix: revert invalid JSON comment in package.json tar override

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* [Feat] - Ishaan main merge branch  (#23596)

* fix(bedrock): respect s3_region_name for batch file uploads (#23569)

* fix(bedrock): respect s3_region_name for batch file uploads (GovCloud fix)

* fix: s3_region_name always wins over aws_region_name for S3 signing (Greptile feedback)

* fix: _filter_headers_for_aws_signature - Bedrock KB (#23571)

* fix: _filter_headers_for_aws_signature

* fix: filter None header values in all post-signing re-merge paths


* feat(router): tag_regex routing — route by User-Agent regex without per-developer tag config (#23594)

* feat(router): add tag_regex support for header-based routing

* fix(tag_regex): address backwards-compat, metadata overwrite, and warning noise

* refactor(tag_regex): remove unnecessary _healthy_list copy

* docs: merge tag_regex section into tag_routing.md, remove standalone page

* fix(tag_regex): address greptile review - security docs, strict-mode enforcement, validation order

* fix(ci): apply Black formatting to 14 files and stabilize flaky caplog tests

Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

* fix: tiktoken cache nonroot offline (#23498)

* fix: restore offline tiktoken cache for non-root envs

Made-with: Cursor

* chore: mkdir for custom tiktoken cache dir

Made-with: Cursor

* test: patch tiktoken.get_encoding in custom-dir test to avoid network

Made-with: Cursor

* test: clear CUSTOM_TIKTOKEN_CACHE_DIR in helper for test isolation

Made-with: Cursor

* test: restore default_encoding module state after custom-dir test

Made-with: Cursor
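The custom cache-dir handling in the commits above can be sketched roughly as follows. The env-var name CUSTOM_TIKTOKEN_CACHE_DIR follows the commit messages, but the function and exact behavior are assumptions, not the actual LiteLLM code.

```python
import os


def ensure_tiktoken_cache_dir():
    """Point tiktoken at a pre-populated offline cache directory.

    Creating the directory up front (mkdir) lets non-root environments
    use a custom cache path without write access to the default location.
    """
    cache_dir = os.environ.get("CUSTOM_TIKTOKEN_CACHE_DIR")
    if cache_dir:
        os.makedirs(cache_dir, exist_ok=True)
        # tiktoken reads TIKTOKEN_CACHE_DIR to locate cached encodings.
        os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
    return cache_dir
```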

* fix: normalize content_filtered finish_reason (#23564)

Map provider finish_reason "content_filtered" to the OpenAI-compatible "content_filter" and extend core_helpers tests to cover this case.

Made-with: Cursor
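The mapping described above is a one-line normalization; a minimal sketch (helper name illustrative):

```python
def normalize_finish_reason(finish_reason: str) -> str:
    """Map the provider-specific "content_filtered" finish_reason to the
    OpenAI-compatible "content_filter"; pass all other values through."""
    if finish_reason == "content_filtered":
        return "content_filter"
    return finish_reason
```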

* fix: Fixes #23185 (#23647)

* fix: merge annotations from all streaming chunks in stream_chunk_builder

Previously, stream_chunk_builder only took annotations from the first
chunk that contained them, losing any annotations from later chunks.

This is a problem because providers like Gemini/Vertex AI send grounding
metadata (converted to annotations) in the final streaming chunk, while
other providers may spread annotations across multiple chunks.

Changes:
- Collect and merge annotations from ALL annotation-bearing chunks
  instead of only using the first one
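The change above can be sketched as collecting annotations from every annotation-bearing chunk rather than stopping at the first. Chunk shape is simplified and illustrative, not the actual stream_chunk_builder types.

```python
def merge_annotations(chunks):
    """Merge annotations from ALL chunks, preserving arrival order.

    Providers like Gemini/Vertex AI send grounding metadata in the final
    chunk, so taking only the first annotation-bearing chunk loses data.
    """
    merged = []
    for chunk in chunks:
        merged.extend(chunk.get("annotations") or [])
    return merged
```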

---------

Co-authored-by: RoomWithOutRoof <166608075+Jah-yee@users.noreply.github.com>
Co-authored-by: Jah-yee <sparklab@outlook.com>
Co-authored-by: Ethan T. <ethanchang32@gmail.com>
Co-authored-by: Awais Qureshi <awais.qureshi@arbisoft.com>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Pradyumna Yadav <pradyumna.aky@gmail.com>
Co-authored-by: xianzongxie-stripe <87151258+xianzongxie-stripe@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Joe Reyna <joseph.reyna@gmail.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>