Merged
18 changes: 18 additions & 0 deletions kubernetes/apps/ai/litellm/app/configmap.yaml
@@ -227,6 +227,24 @@ data:
# Required to configure self-hosted SearXNG endpoint
allow_client_side_credentials: true

# =====================================================
# STORY-063: Anthropic Pass-Through for Claude Max OAuth
# Workaround for LiteLLM bug (Issue #19618) where Authorization
# header is stripped before forwarding to Anthropic.
# Pass-through with forward_headers: true preserves OAuth token.
# Reference: https://docs.litellm.ai/docs/pass_through/anthropic_completion
# =====================================================
pass_through_endpoints:
  - path: "/anthropic"
    target: "https://api.anthropic.com"
    forward_headers: true
Comment on lines +234 to +240

Copilot AI Jan 29, 2026
forward_headers: true forwards all incoming headers to https://api.anthropic.com, which can leak LiteLLM authentication material (e.g., Authorization: Bearer <master/virtual key> from clients using bearer auth, or x-litellm-api-key) to the upstream provider. Since the litellm ingress allows traffic from any pod in the ai namespace (and from the internal gateway), this is a real credential-exfiltration path. Prefer forwarding only the minimal header(s) required for Claude Max OAuth (typically Authorization), and explicitly ensure LiteLLM auth headers are never forwarded upstream (e.g., via an allowlist/denylist mechanism, or a dedicated LiteLLM auth header separate from Authorization).

Suggested change
-# Pass-through with forward_headers: true preserves OAuth token.
-# Reference: https://docs.litellm.ai/docs/pass_through/anthropic_completion
-# =====================================================
-pass_through_endpoints:
-  - path: "/anthropic"
-    target: "https://api.anthropic.com"
-    forward_headers: true
+# Pass-through with a restricted forward_headers allowlist preserves
+# the OAuth token without leaking LiteLLM-specific auth headers.
+# Reference: https://docs.litellm.ai/docs/pass_through/anthropic_completion
+# =====================================================
+pass_through_endpoints:
+  - path: "/anthropic"
+    target: "https://api.anthropic.com"
+    forward_headers:
+      - "authorization"
+      - "anthropic-version"
+      - "anthropic-beta"

auth: true
Comment on lines +238 to +241

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

LiteLLM forward_headers allowlist header filtering pass-through proxy

💡 Result:

In LiteLLM pass-through endpoints, header forwarding is currently all-or-nothing:

  • forward_headers: true will forward all incoming request headers to the target endpoint. (docs.litellm.ai)
  • There is no documented “allowlist” / per-header filter option for pass-through routes (the pass-through config spec only exposes headers: (static headers) and forward_headers: (boolean)). (docs.litellm.ai)

Example (forwards everything):

general_settings:
  pass_through_endpoints:
    - path: "/v1/rerank"
      target: "https://api.cohere.com/v1/rerank"
      headers:
        Authorization: "bearer os.environ/COHERE_API_KEY"
        content-type: application/json
        accept: application/json
      forward_headers: true

(docs.litellm.ai)

If you need an allowlist (only some headers)

You’ll need to do it outside of pass-through, e.g.:

  • Put an upstream proxy (NGINX/Envoy/API Gateway) in front of LiteLLM to strip/allow only specific headers before they reach LiteLLM, or
  • Write a custom adapter/callback that explicitly constructs the outbound request/headers, instead of using a pure pass-through route. (docs.litellm.ai)
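The upstream-proxy option from the list above could be sketched as follows for a cluster fronted by ingress-nginx (the Ingress name, host, and stripped headers are illustrative assumptions, not this repo's actual manifests). Note this only strips headers that LiteLLM itself doesn't need; it does not solve the leak of LiteLLM's own auth header, which still requires one of the other mitigations.

```yaml
# Sketch only: in nginx, proxy_set_header with an empty value removes the
# header from the proxied request, so Cookie (and similar sensitive client
# headers) never reach LiteLLM and thus can never be forwarded to Anthropic.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: litellm          # hypothetical
  namespace: ai
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Cookie "";
spec:
  ingressClassName: nginx
  rules:
    - host: litellm.internal.example   # hypothetical
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: litellm
                port:
                  number: 4000
```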

Related (non pass-through) header forwarding controls

For normal model calls (not pass-through), LiteLLM provides forward_client_headers_to_llm_api, which enables forwarding for specific model groups, but it still isn’t a per-header allowlist; it forwards certain categories (notably x-* headers, with some exclusions) when enabled. (docs.litellm.ai)

Also note there’s been historical confusion/bugs around pass-through header behavior (see GitHub issue). (github.com)

🌐 Web query:

LiteLLM pass-through proxy configuration header forwarding documentation

💡 Result:

LiteLLM “pass-through proxy” header forwarding is documented under Pass Through Endpoints:

  • Config key: general_settings.pass_through_endpoints[].forward_headers: true
    This enables forwarding the original incoming request headers to the upstream “target” endpoint (in addition to any static headers: you define). Example in docs shows headers: plus forward_headers: true. [1]

  • Where it’s documented: “Create Pass Through Endpoints” → Config.yaml Setup / Configuration Reference (includes headers and forward_headers). [1]

Related (non-pass-through) header forwarding docs (for forwarding client headers to LLM provider APIs, not arbitrary pass-through targets):

  • general_settings.forward_client_headers_to_llm_api: true (global) or enable per model group via litellm_settings.model_group_settings.forward_client_headers_to_llm_api. [3]
  • Overview of supported/special request headers is listed under “Request Headers”. [2]

Sources:
[1] LiteLLM Docs — Create Pass Through Endpoints (pass-through + forward_headers) (docs.litellm.ai)
[2] LiteLLM Docs — Request Headers (docs.litellm.ai)
[3] LiteLLM Docs — Forward Client Headers to LLM API (docs.litellm.ai)
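The non-pass-through alternative from [3] could be sketched like this (key names are as reported in the doc summary above, not verified against this cluster's LiteLLM version):

```yaml
# Sketch: for normal model routes (not pass-through), enable client-header
# forwarding globally. This forwards certain categories (notably x-* headers),
# not an arbitrary per-header allowlist; per-model-group control exists via
# litellm_settings.model_group_settings.forward_client_headers_to_llm_api.
general_settings:
  forward_client_headers_to_llm_api: true
```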

🌐 Web query:

LiteLLM "forward_headers" allowed_headers whitelist configuration

💡 Result:

LiteLLM has two different “forward headers” features, and neither currently supports an allowed_headers (whitelist) config knob:

1) Proxy → LLM provider header forwarding (model-group controlled)

This is configured via forward_client_headers_to_llm_api (globally or per model group). When enabled, LiteLLM forwards only specific categories (effectively a built-in allowlist), e.g.:

  • Custom x-... headers (with exceptions like x-stainless-*)
  • Some provider-specific headers (e.g., Anthropic beta headers; OpenAI org header when separately enabled)
    There is no documented allowed_headers whitelist you can set—control is via enabling/disabling forwarding per model group. [1]

2) Pass-through endpoints header forwarding

For pass_through_endpoints, the per-route flag is:

general_settings:
  pass_through_endpoints:
    - path: "/v1/..."
      target: "https://..."
      forward_headers: true

forward_headers: true forwards all incoming request headers to the target endpoint; docs do not mention any allowed_headers filtering/whitelist option. [2]

If you need a whitelist today

  • For pass-through endpoints, you generally have to not use forward_headers: true and instead explicitly set/override the needed headers under headers: (but that won’t dynamically “pass through” arbitrary client headers). Lack of fine-grained “forward only these headers” control has been raised as a feature request/limitation. [3]

Sources: Forward client headers docs [1], pass-through endpoint docs [2], related GitHub issue discussing header pass-through control [3].


Remove forward_headers: true or implement header filtering to prevent credential leakage.

forward_headers: true forwards all incoming client headers to Anthropic, including internal LiteLLM auth credentials and cookies. LiteLLM's pass-through endpoint configuration does not support an allowlist, so you must do one of the following:

  1. Remove forward_headers: true and explicitly define only required headers under headers:
  2. Deploy an upstream proxy (NGINX/Envoy) in front of LiteLLM to strip sensitive headers before reaching the pass-through endpoint
  3. Replace the pass-through route with a custom adapter that constructs requests with only safe headers

Without one of these mitigations, all client headers are forwarded to the external Anthropic API.
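Option 1 could look like the sketch below, modeled on the Cohere pass-through example earlier in this thread. The ANTHROPIC_OAUTH_TOKEN env var name is an assumption, and the trade-off is that a static headers: block can no longer dynamically pass through each client's own OAuth token.

```yaml
# Sketch: with forward_headers removed, only the explicitly listed
# headers ever reach Anthropic; LiteLLM auth headers stay internal.
general_settings:
  pass_through_endpoints:
    - path: "/anthropic"
      target: "https://api.anthropic.com"
      auth: true
      headers:
        Authorization: "bearer os.environ/ANTHROPIC_OAUTH_TOKEN"  # assumed env var
        content-type: application/json
        accept: application/json
```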

🤖 Prompt for AI Agents
In `@kubernetes/apps/ai/litellm/app/configmap.yaml` around lines 238 - 241, the
pass-through route for path "/anthropic" currently uses forward_headers: true,
which leaks all client headers; remove forward_headers: true from the /anthropic
route in configmap.yaml and either add an explicit headers: list containing only
the headers Anthropic requires, or implement an upstream proxy (NGINX/Envoy) to
strip sensitive headers before requests reach the /anthropic target, or replace
the pass-through with a custom adapter that constructs outgoing requests with
only safe headers; update the configuration for the "/anthropic" route (path
"/anthropic", target "https://api.anthropic.com") accordingly.

guardrails:
  lakera-guard:
    request_fields: ["messages[*].content"]
  openai-mod:
    request_fields: ["messages[*].content"]

litellm_settings:
  # Drop unsupported params instead of erroring
  drop_params: true
7 changes: 3 additions & 4 deletions kubernetes/apps/ai/moltbot/app/helmrelease.yaml
@@ -101,11 +101,10 @@ spec:
       name: moltbot-secrets
       key: ANTHROPIC_OAUTH_TOKEN
   # STORY-063: Route Claude Code through LiteLLM for guardrails
-  # OAuth token forwarded via per-model forward_client_headers_to_llm_api
+  # Uses pass-through endpoint with forward_headers: true to preserve OAuth token
+  # Workaround for LiteLLM bug #19618 (clean_headers strips Authorization)
   ANTHROPIC_BASE_URL:
-    value: "http://litellm.ai.svc.cluster.local:4000"
-  ANTHROPIC_MODEL:
-    value: "claude-default"
+    value: "http://litellm.ai.svc.cluster.local:4000/anthropic"
   # x-litellm-api-key header for LiteLLM authentication/tracking
   # Pre-composed in 1Password (K8s $(VAR) substitution doesn't work with secretKeyRef)
   ANTHROPIC_CUSTOM_HEADERS:
Expand Down