Add full support for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI)#20551

Merged
Sameerlite merged 28 commits into main from litellm_opus_4.6_thinking on Feb 6, 2026
Conversation

@Sameerlite
Contributor

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement; see details.
  • My PR passes all unit tests with make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

CI (LiteLLM team)

CI status guideline:

  • 50-55 passing tests: main is stable, with minor issues.
  • 45-49 passing tests: acceptable, but needs attention.
  • <= 40 passing tests: unstable; be careful with your merges and assess the risk.
  • Branch creation CI run
    Link:

  • CI run for the last commit
    Link:

  • Merge / cherry-pick CI run
    Links:

Type

🆕 New Feature

Changes

@vercel

vercel bot commented Feb 6, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
|---|---|---|---|
| litellm | Ready | Preview, Comment | Feb 6, 2026 1:42pm |


@greptile-apps
Contributor

greptile-apps bot commented Feb 6, 2026

Greptile Overview

Greptile Summary

This PR adds support for adaptive thinking for Claude Opus 4.6 across multiple providers (Anthropic, Azure AI, Bedrock, Vertex AI).

Key Changes:

  • Modified _map_reasoning_effort to detect Opus 4.6 models and return adaptive thinking type instead of budget-based thinking
  • Updated Bedrock and Databricks transformations to pass model parameter to _map_reasoning_effort
  • Fixed Bedrock model names by removing trailing :0 version suffix for consistency
  • Added comprehensive tests for the new adaptive thinking behavior

Issues Found:

  • AnthropicThinkingParam type definition only allows type: Literal["enabled"] but code now uses type="adaptive" - type definition needs updating
  • Minor docstring inconsistency (says "Opus 4.5" but should say "Opus 4.6")
  • Logic question: adaptive thinking is returned even when reasoning_effort is None for Opus 4.6 - verify if this is intended behavior

Confidence Score: 3/5

  • This PR has type definition issues that will cause MyPy errors and should be fixed before merging
  • Score of 3 reflects that while the core logic for adaptive thinking is sound and well-tested, there's a critical type definition mismatch where AnthropicThinkingParam doesn't include "adaptive" as a valid literal type. This will cause type checking failures. Additionally, the behavior when reasoning_effort is None for Opus 4.6 needs clarification.
  • Pay close attention to litellm/llms/anthropic/chat/transformation.py - the type definition for AnthropicThinkingParam needs to be updated in litellm/types/llms/anthropic.py to include "adaptive" as a valid type literal

Important Files Changed

| Filename | Overview |
|---|---|
| litellm/llms/anthropic/chat/transformation.py | Added adaptive thinking support for Opus 4.6, but AnthropicThinkingParam type definition doesn't include "adaptive" as a valid literal type |
| litellm/llms/bedrock/chat/converse_transformation.py | Updated to pass model parameter to _map_reasoning_effort method, correctly forwards adaptive thinking support |
| litellm/llms/databricks/chat/transformation.py | Updated to pass model parameter to _map_reasoning_effort method for Databricks-hosted Claude models |
| tests/test_litellm/llms/anthropic/chat/test_anthropic_chat_transformation.py | Added comprehensive tests for adaptive thinking mapping for Opus 4.6 and budget-based thinking for other models |

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant LiteLLM
    participant AnthropicConfig
    participant BedrockConverse
    participant DatabricksConfig
    participant AnthropicAPI

    User->>LiteLLM: completion(model="claude-opus-4-6", reasoning_effort="high")

    alt Anthropic Direct API
        LiteLLM->>AnthropicConfig: map_openai_params(reasoning_effort="high")
        AnthropicConfig->>AnthropicConfig: _is_claude_opus_4_6(model)
        AnthropicConfig-->>AnthropicConfig: True
        AnthropicConfig->>AnthropicConfig: _map_reasoning_effort(reasoning_effort, model)
        Note over AnthropicConfig: For Opus 4.6: return {type: "adaptive"}
        AnthropicConfig-->>LiteLLM: optional_params["thinking"] = {type: "adaptive"}
        LiteLLM->>AnthropicAPI: POST /messages with thinking={type: "adaptive"}
    else Bedrock Converse
        LiteLLM->>BedrockConverse: map_reasoning_effort(reasoning_effort, model)
        BedrockConverse->>AnthropicConfig: _map_reasoning_effort(reasoning_effort, model)
        AnthropicConfig->>AnthropicConfig: _is_claude_opus_4_6(model)
        AnthropicConfig-->>AnthropicConfig: True
        AnthropicConfig-->>BedrockConverse: {type: "adaptive"}
        BedrockConverse-->>LiteLLM: optional_params["thinking"] = {type: "adaptive"}
        LiteLLM->>AnthropicAPI: Bedrock Converse with thinking={type: "adaptive"}
    else Databricks Claude
        LiteLLM->>DatabricksConfig: map_openai_params(reasoning_effort="high")
        DatabricksConfig->>AnthropicConfig: _map_reasoning_effort(reasoning_effort, model)
        AnthropicConfig->>AnthropicConfig: _is_claude_opus_4_6(model)
        AnthropicConfig-->>AnthropicConfig: True
        AnthropicConfig-->>DatabricksConfig: {type: "adaptive"}
        DatabricksConfig-->>LiteLLM: optional_params["thinking"] = {type: "adaptive"}
        LiteLLM->>AnthropicAPI: Databricks API with thinking={type: "adaptive"}
    end

    AnthropicAPI-->>User: Response with adaptive thinking
```

@greptile-apps bot left a comment


4 files reviewed, 3 comments


Comment on lines 667 to 669

```python
            return AnthropicThinkingParam(
                type="enabled",
                budget_tokens=DEFAULT_REASONING_EFFORT_HIGH_THINKING_BUDGET,
            )
        elif reasoning_effort == "minimal":
            return AnthropicThinkingParam(
                type="enabled",
                budget_tokens=DEFAULT_REASONING_EFFORT_MINIMAL_THINKING_BUDGET,
            )
        return AnthropicThinkingParam(
            type="adaptive",
        )
```
The AnthropicThinkingParam type definition in litellm/types/llms/anthropic.py:616 only allows type: Literal["enabled"], but this code sets type="adaptive". Update the type definition to include "adaptive":

Suggested change:

```python
            return AnthropicThinkingParam(
                type="adaptive",  # type: ignore[typeddict-item]
            )
```

Or better, update the TypedDict in litellm/types/llms/anthropic.py:

```python
class AnthropicThinkingParam(TypedDict, total=False):
    type: Literal["enabled", "adaptive"]
    budget_tokens: int
```
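
For what it's worth, widening the literal is purely a static-typing change; at runtime a TypedDict is a plain dict, so both construction sites behave identically. A minimal sketch:

```python
from typing import Literal, TypedDict


class AnthropicThinkingParam(TypedDict, total=False):
    type: Literal["enabled", "adaptive"]
    budget_tokens: int


# Both construction sites from the review now type-check under the widened literal
enabled = AnthropicThinkingParam(type="enabled", budget_tokens=2048)
adaptive = AnthropicThinkingParam(type="adaptive")  # adaptive mode takes no budget

# At runtime these are ordinary dicts
assert enabled == {"type": "enabled", "budget_tokens": 2048}
assert adaptive == {"type": "adaptive"}
```

With total=False, budget_tokens can simply be omitted for the adaptive case rather than set to a sentinel value.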

Sameerlite and others added 6 commits February 6, 2026 11:31
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
@Sameerlite Sameerlite changed the title Add support for adaptive thinking for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI) Add full support for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI) Feb 6, 2026
Add unsupported claude code beta headers in json
```python
# Skip if header is unsupported
if header in unsupported_headers:
    verbose_logger.debug(
        f"Dropping unsupported beta header '{header}' for provider '{provider}'"
    )
```
Check failure

Code scanning / CodeQL

Clear-text logging of sensitive information (High)

This expression logs sensitive data (password) as clear text.
This expression logs sensitive data (secret) as clear text.

Copilot Autofix (AI)

General approach: avoid logging potentially sensitive user-controlled header values directly. Instead, log only non-sensitive context (e.g., provider name, counts) or a redacted/hashed representation. This preserves debuggability (you can still see that a header was dropped) while ensuring no clear-text data from headers flows to logs.

Best targeted fix here: change the debug log inside filter_and_transform_beta_headers so it no longer interpolates the actual header value. Since CodeQL's variants all point to the same formatted string, a single change resolves all of them. We keep the provider name and a generic message; that's enough for debugging configuration issues and doesn't materially alter functionality (filtering behavior is unchanged).

Concretely, in litellm/anthropic_beta_headers_manager.py:

  • In filter_and_transform_beta_headers, replace:

```python
verbose_logger.debug(
    f"Dropping unsupported beta header '{header}' for provider '{provider}'"
)
```

with something like:

```python
verbose_logger.debug(
    "Dropping unsupported beta header for provider '%s'", provider
)
```

or a similar message that omits the raw header value. The logger in this codebase is already a standard logging.Logger-style object, so using parameterized logging (no f-string) avoids any accidental concatenation of tainted data.

No additional imports or helper functions are needed. All other code paths remain identical, so the behavior of beta header filtering and provider logic is unchanged; only the log content changes.
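
The redaction approach can be sketched end to end as below. The function name and signature here are illustrative (the real helper in the repo is filter_and_transform_beta_headers with its own signature); only the logging pattern mirrors the autofix.

```python
import logging

verbose_logger = logging.getLogger("litellm")


def filter_beta_headers(headers, unsupported_headers, provider):
    """Drop unsupported beta headers without logging their raw values."""
    kept = []
    for header in headers:
        if header in unsupported_headers:
            # Parameterized logging: the tainted header value never reaches the log,
            # only the non-sensitive provider name does
            verbose_logger.debug(
                "Dropping unsupported beta header for provider '%s'", provider
            )
            continue
        kept.append(header)
    return kept
```

Note that `%s`-style arguments are only formatted if the debug level is enabled, which is also marginally cheaper than an f-string on hot paths.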


Suggested changeset 1: litellm/anthropic_beta_headers_manager.py

Autofix patch. Run the following command in your local git repository to apply this patch:

```shell
cat << 'EOF' | git apply
diff --git a/litellm/anthropic_beta_headers_manager.py b/litellm/anthropic_beta_headers_manager.py
--- a/litellm/anthropic_beta_headers_manager.py
+++ b/litellm/anthropic_beta_headers_manager.py
@@ -109,8 +109,9 @@
 
         # Skip if header is unsupported
         if header in unsupported_headers:
+            # Avoid logging raw beta header values to prevent leaking user-controlled data
             verbose_logger.debug(
-                f"Dropping unsupported beta header '{header}' for provider '{provider}'"
+                "Dropping unsupported beta header for provider '%s'", provider
             )
             continue
 
EOF
```
@kelvin-tran
Contributor

@Sameerlite Not sure if this was an oversight, but this PR neglected to add support for structured outputs on Opus 4.6. The PR you closed in favor of this one (#20518) had the relevant change, which was not replicated in this PR.

Structured outputs on Opus 4.6 are a critical feature. PR to fix the omission is here

@Sameerlite
Contributor Author

It was an oversight. Thanks for the PR!

ishaan-jaff pushed a commit that referenced this pull request Feb 11, 2026
Add full support for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI)
Sameerlite added a commit that referenced this pull request Feb 13, 2026
Add full support for Opus 4.6 (Anthropic, Azure AI, Bedrock, Vertex AI)
ishaan-jaff pushed a commit that referenced this pull request Feb 18, 2026
@ishaan-berri ishaan-berri deleted the litellm_opus_4.6_thinking branch March 26, 2026 22:29