Add full support for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI) #20551
Sameerlite merged 28 commits into main
Conversation
Greptile Summary

This PR adds support for adaptive thinking for Claude Opus 4.6 across multiple providers (Anthropic, Azure AI, Bedrock, Vertex AI).

Key Changes:
Issues Found:
Confidence Score: 3/5
| Filename | Overview |
|---|---|
| litellm/llms/anthropic/chat/transformation.py | Added adaptive thinking support for Opus 4.6, but AnthropicThinkingParam type definition doesn't include "adaptive" as a valid literal type |
| litellm/llms/bedrock/chat/converse_transformation.py | Updated to pass model parameter to _map_reasoning_effort method, correctly forwards adaptive thinking support |
| litellm/llms/databricks/chat/transformation.py | Updated to pass model parameter to _map_reasoning_effort method for Databricks-hosted Claude models |
| tests/test_litellm/llms/anthropic/chat/test_anthropic_chat_transformation.py | Added comprehensive tests for adaptive thinking mapping for Opus 4.6 and budget-based thinking for other models |
Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant LiteLLM
    participant AnthropicConfig
    participant BedrockConverse
    participant DatabricksConfig
    participant AnthropicAPI
    User->>LiteLLM: completion(model="claude-opus-4-6", reasoning_effort="high")
    alt Anthropic Direct API
        LiteLLM->>AnthropicConfig: map_openai_params(reasoning_effort="high")
        AnthropicConfig->>AnthropicConfig: _is_claude_opus_4_6(model)
        AnthropicConfig-->>AnthropicConfig: True
        AnthropicConfig->>AnthropicConfig: _map_reasoning_effort(reasoning_effort, model)
        Note over AnthropicConfig: For Opus 4.6: return {type: "adaptive"}
        AnthropicConfig-->>LiteLLM: optional_params["thinking"] = {type: "adaptive"}
        LiteLLM->>AnthropicAPI: POST /messages with thinking={type: "adaptive"}
    else Bedrock Converse
        LiteLLM->>BedrockConverse: map_reasoning_effort(reasoning_effort, model)
        BedrockConverse->>AnthropicConfig: _map_reasoning_effort(reasoning_effort, model)
        AnthropicConfig->>AnthropicConfig: _is_claude_opus_4_6(model)
        AnthropicConfig-->>AnthropicConfig: True
        AnthropicConfig-->>BedrockConverse: {type: "adaptive"}
        BedrockConverse-->>LiteLLM: optional_params["thinking"] = {type: "adaptive"}
        LiteLLM->>AnthropicAPI: Bedrock Converse with thinking={type: "adaptive"}
    else Databricks Claude
        LiteLLM->>DatabricksConfig: map_openai_params(reasoning_effort="high")
        DatabricksConfig->>AnthropicConfig: _map_reasoning_effort(reasoning_effort, model)
        AnthropicConfig->>AnthropicConfig: _is_claude_opus_4_6(model)
        AnthropicConfig-->>AnthropicConfig: True
        AnthropicConfig-->>DatabricksConfig: {type: "adaptive"}
        DatabricksConfig-->>LiteLLM: optional_params["thinking"] = {type: "adaptive"}
        LiteLLM->>AnthropicAPI: Databricks API with thinking={type: "adaptive"}
    end
    AnthropicAPI-->>User: Response with adaptive thinking
```
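The mapping the diagram walks through can be sketched in isolation. The helper names and budget values below are illustrative stand-ins, not LiteLLM's actual implementation:

```python
from typing import Optional, TypedDict


class ThinkingParam(TypedDict, total=False):
    type: str  # "enabled" or "adaptive"
    budget_tokens: int


# Hypothetical budgets; the real DEFAULT_REASONING_EFFORT_*_THINKING_BUDGET
# constants live in litellm and may differ.
EFFORT_BUDGETS = {"minimal": 1024, "low": 2048, "medium": 4096, "high": 8192}


def is_claude_opus_4_6(model: str) -> bool:
    # Match bare and provider-prefixed names, e.g. "bedrock/anthropic.claude-opus-4-6-..."
    return "claude-opus-4-6" in model or "claude-opus-4.6" in model


def map_reasoning_effort(
    reasoning_effort: Optional[str], model: str
) -> Optional[ThinkingParam]:
    if reasoning_effort is None:
        return None
    if is_claude_opus_4_6(model):
        # Opus 4.6: adaptive thinking, no explicit token budget
        return {"type": "adaptive"}
    # Other Claude models: fixed thinking budget per effort level
    return {"type": "enabled", "budget_tokens": EFFORT_BUDGETS[reasoning_effort]}
```

Because Bedrock and Databricks both delegate to the same helper, the model check only has to live in one place.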
```python
            return AnthropicThinkingParam(
                type="enabled",
                budget_tokens=DEFAULT_REASONING_EFFORT_HIGH_THINKING_BUDGET,
            )
        elif reasoning_effort == "minimal":
            return AnthropicThinkingParam(
                type="enabled",
                budget_tokens=DEFAULT_REASONING_EFFORT_MINIMAL_THINKING_BUDGET,
            )
        return AnthropicThinkingParam(
            type="adaptive",
        )
```
The `AnthropicThinkingParam` type definition in `litellm/types/llms/anthropic.py:616` only allows `type: Literal["enabled"]`, but this code sets `type="adaptive"`. Update the type definition to include `"adaptive"`:
```suggestion
            return AnthropicThinkingParam(
                type="adaptive",  # type: ignore[typeddict-item]
            )
```
Or better, update the TypedDict in litellm/types/llms/anthropic.py:
```python
class AnthropicThinkingParam(TypedDict, total=False):
    type: Literal["enabled", "adaptive"]
    budget_tokens: int
```
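As a quick sanity check, the widened definition can be exercised standalone. This snippet mirrors the suggested TypedDict rather than importing the one in litellm:

```python
from typing import Literal, TypedDict, get_args, get_type_hints


class AnthropicThinkingParam(TypedDict, total=False):
    type: Literal["enabled", "adaptive"]
    budget_tokens: int


# Both construction forms now satisfy the widened Literal under a type checker,
# so the "# type: ignore" workaround becomes unnecessary.
enabled = AnthropicThinkingParam(type="enabled", budget_tokens=1024)
adaptive = AnthropicThinkingParam(type="adaptive")
```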
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Add unsupported Claude Code beta headers in JSON
```python
            # Skip if header is unsupported
            if header in unsupported_headers:
                verbose_logger.debug(
                    f"Dropping unsupported beta header '{header}' for provider '{provider}'"
                )
```
Check failure: Code scanning / CodeQL: Clear-text logging of sensitive information (High)
Copilot Autofix
General approach: avoid logging potentially sensitive user‑controlled header values directly. Instead, log only non‑sensitive context (e.g., provider name, counts) or a redacted/hashed representation. This preserves debuggability (you can still see that a header was dropped) while ensuring no clear‑text data from headers flows to logs.
Best targeted fix here: change the debug log inside filter_and_transform_beta_headers so it no longer interpolates the actual header value. Since CodeQL’s variants all point to the same formatted string, a single change resolves all of them. We keep the provider name and a generic message; that’s enough for debugging configuration issues and doesn’t materially alter functionality (filtering behavior is unchanged).
Concretely in litellm/anthropic_beta_headers_manager.py:
- In `filter_and_transform_beta_headers`, replace:

```python
verbose_logger.debug(
    f"Dropping unsupported beta header '{header}' for provider '{provider}'"
)
```

with something like:

```python
verbose_logger.debug(
    "Dropping unsupported beta header for provider '%s'", provider
)
```

or a similar message that omits the raw header value. The logger in this codebase is already a standard `logging.Logger`-style object, so using parameterized logging (no f-string) avoids any accidental concatenation of tainted data.
No additional imports or helper functions are needed. All other code paths remain identical, so the behavior of beta header filtering and provider logic is unchanged; only the log content changes.
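The difference is easy to demonstrate in isolation. The snippet below is a standalone sketch with a throwaway logger, not litellm's logging setup:

```python
import io
import logging

# Route debug output into a buffer so we can inspect exactly what is logged
stream = io.StringIO()
handler = logging.StreamHandler(stream)
logger = logging.getLogger("beta_header_demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

header = "hypothetical-beta-value-123"  # stand-in for a user-controlled header
provider = "bedrock"

# Parameterized logging: only the provider reaches the log, never the header
logger.debug("Dropping unsupported beta header for provider '%s'", provider)

handler.flush()
logged = stream.getvalue()
```

Since the header value is never passed to the logger at all, no formatter or filter downstream can accidentally re-expose it.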
```diff
@@ -109,8 +109,9 @@
             # Skip if header is unsupported
             if header in unsupported_headers:
+                # Avoid logging raw beta header values to prevent leaking user-controlled data
                 verbose_logger.debug(
-                    f"Dropping unsupported beta header '{header}' for provider '{provider}'"
+                    "Dropping unsupported beta header for provider '%s'", provider
                 )
                 continue
```
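Putting the commit's two pieces together (a JSON list of unsupported headers, plus the filtering loop), the step might look roughly like this. The JSON layout, header values, and function name are hypothetical, not litellm's actual config:

```python
import json
from typing import List

# Hypothetical JSON config of beta headers each provider rejects;
# in litellm the real mapping is shipped separately.
UNSUPPORTED_BETA_HEADERS_JSON = """
{
  "bedrock": ["example-claude-code-beta"],
  "vertex_ai": ["example-claude-code-beta"]
}
"""

UNSUPPORTED = json.loads(UNSUPPORTED_BETA_HEADERS_JSON)


def filter_beta_headers(beta_values: List[str], provider: str) -> List[str]:
    """Drop beta header values the target provider does not support."""
    unsupported = set(UNSUPPORTED.get(provider, []))
    kept = []
    for value in beta_values:
        if value in unsupported:
            # Dropped silently here; the real code logs provider-only context
            continue
        kept.append(value)
    return kept
```

Keeping the per-provider lists in data rather than code means adding a newly unsupported header is a config change, not a logic change.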
@Sameerlite Not sure if this was an oversight, but this PR neglected to add support for structured outputs on Opus 4.6. The PR you closed in favor of this one (#20518) had the relevant change, which was not replicated in this PR. Structured outputs on Opus 4.6 are a critical feature. PR to fix the omission is here.

It was an oversight. Thanks for the PR!
Add full support for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI)
Relevant issues
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- Add testing in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- Run `make test-unit`

CI (LiteLLM team)
Branch creation CI run
Link:
CI run for the last commit
Link:
Merge / cherry-pick CI run
Links:
Type
🆕 New Feature
Changes