
Conversation

@tlecomte
Contributor

Title

Read the custom llm provider option from the request header "custom-llm-provider".

Relevant issues

Fixes #15522

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory; adding at least 1 test is a hard requirement (see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

Several LiteLLM operations require passing a custom_llm_provider entry in the request. Some endpoints expect this parameter in the request body, others in the query string. This PR adds the ability to pass it in an HTTP header instead.

Passing this parameter in a header is easier than passing it in the request body or as a query parameter:

  1. It is consistent across all request types
  2. The Python and .NET OpenAI clients both make it easy to add headers (see the sketch below)
  3. Headers are easy to manipulate in a proxy
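
As a minimal sketch, here is how the header could be attached with the OpenAI Python client pointed at a LiteLLM proxy. The header name comes from this PR's title; the proxy URL, API key, model name, and provider value are placeholder assumptions:

```python
from openai import OpenAI

# Standard OpenAI client pointed at a LiteLLM proxy
# (base_url and api_key below are placeholders)
client = OpenAI(
    base_url="http://localhost:4000",
    api_key="sk-litellm-proxy-key",
    # Send the provider hint as an HTTP header on every request
    default_headers={"custom-llm-provider": "bedrock"},  # provider value assumed
)

# The header travels with the call, so custom_llm_provider no longer
# needs to appear in the request body or query string
response = client.chat.completions.create(
    model="my-model",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

For a one-off request, the OpenAI Python client also accepts an extra_headers keyword argument on individual calls, which would carry the same header without setting it client-wide.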

@vercel

vercel bot commented Oct 14, 2025

@tlecomte is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.

@krrishdholakia krrishdholakia merged commit 3ef9b20 into BerriAI:main Oct 19, 2025
7 of 10 checks passed
@krrishdholakia
Contributor

can you document this flow? @tlecomte

@tlecomte tlecomte deleted the custom-llm-provider-header branch October 20, 2025 07:48


Development

Successfully merging this pull request may close these issues.

[Feature]: pass custom_llm_provider in a http header instead of body or query string
