feat(bedrock): support native structured outputs API (outputConfig.textFormat) #21222
Greptile Summary: Adds native structured outputs support for Bedrock models via the Converse API's `outputConfig.textFormat` field, with automatic fallback to the synthetic tool-call path for unsupported models.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/llms/bedrock/chat/converse_transformation.py | Core implementation of native structured outputs via outputConfig.textFormat. Clean branching between native and tool-call fallback paths. Minor gaps: schema normalization misses definitions keyword; empty schema for json_object type may cause issues. |
| litellm/types/llms/bedrock.py | Adds well-structured TypedDict definitions (OutputConfigBlock, OutputFormat, OutputFormatStructure, JsonSchemaDefinition) matching the AWS Converse API schema. Clean addition to CommonRequestObject. |
| tests/test_litellm/llms/bedrock/chat/test_converse_transformation.py | Comprehensive test suite (12 tests) covering model detection, schema normalization, outputConfig creation, request building, response handling, and streaming behavior. All tests are pure unit tests with no network calls. |
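The response-handling split the review describes (fallback path extracts JSON from the synthetic tool call; native path passes text content through) can be sketched as follows. This is a hedged approximation: the constant and function names here are illustrative, not litellm's actual implementation.

```python
import json

# Illustrative sketch of the response-handling split; names are assumptions,
# not litellm's real code.
RESPONSE_FORMAT_TOOL = "json_tool_call"

def extract_structured_content(message: dict) -> str:
    """Return the structured-output text from a Converse response message."""
    for block in message.get("content", []):
        tool_use = block.get("toolUse")
        if tool_use and tool_use.get("name") == RESPONSE_FORMAT_TOOL:
            # Fallback path: the synthetic tool's arguments ARE the output.
            return json.dumps(tool_use["input"])
    # Native path (or real tool calls): pass text content through directly.
    return "".join(block.get("text", "") for block in message.get("content", []))
```

Real tool calls keep their own names, so only the synthetic `RESPONSE_FORMAT` tool is intercepted; everything else reaches the caller unchanged.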
Flowchart

```mermaid
flowchart TD
    A["response_format param received"] --> B{"Model supports native\nstructured outputs?"}
    B -->|Yes| C["_create_output_config_for_response_format()"]
    C --> D["Normalize schema:\nadditionalProperties: false"]
    D --> E["Set outputConfig.textFormat\nwith json_schema type"]
    E --> F["json_mode = True\nNo fake_stream needed"]
    B -->|No| G["_create_json_tool_call_for_response_format()"]
    G --> H["Inject synthetic tool +\ntool_choice forced"]
    H --> I["json_mode = True\nfake_stream = True if streaming"]
    F --> J["_prepare_request_params()"]
    I --> J
    J --> K["_transform_request_helper()"]
    K -->|Native path| L["outputConfig added as\ntop-level request field"]
    K -->|Fallback path| M["toolConfig contains\nsynthetic tool"]
    L --> N["Bedrock Converse API"]
    M --> N
    N -->|Response| O{"Tool call with\nRESPONSE_FORMAT name?"}
    O -->|Yes: fallback path| P["Extract JSON from\ntool arguments"]
    O -->|No tools / real tools| Q["Pass text content\nthrough directly"]
    P --> R["Return ModelResponse"]
    Q --> R
```
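The branch at the top of the flowchart can be sketched roughly as follows. Everything here is illustrative: the set entries and field shapes are inferred from this PR's description and the new type names (`OutputConfigBlock`, `OutputFormat`), not copied from the implementation or verified against the AWS wire format.

```python
# Illustrative sketch of the flowchart's branch; names and field shapes are
# assumptions, not the actual litellm implementation.
NATIVE_STRUCTURED_OUTPUT_MODELS = {
    "claude-haiku-4-5", "claude-sonnet-4-5", "claude-opus-4-5",
}

def supports_native_structured_outputs(model_id: str) -> bool:
    # Substring matching, so region/version-prefixed Bedrock IDs still match.
    return any(name in model_id for name in NATIVE_STRUCTURED_OUTPUT_MODELS)

def translate_response_format(model_id: str, json_schema: dict) -> dict:
    """Return the extra Converse request fields for a json_schema response_format."""
    if supports_native_structured_outputs(model_id):
        # Native path: top-level outputConfig, no synthetic tool, no fake_stream.
        return {
            "outputConfig": {
                "textFormat": {
                    "type": "json_schema",
                    "structure": {"jsonSchema": {"schema": json_schema}},
                }
            }
        }
    # Fallback path: inject a synthetic tool and force the model to call it.
    return {
        "toolConfig": {
            "tools": [{
                "toolSpec": {
                    "name": "json_tool_call",
                    "inputSchema": {"json": json_schema},
                }
            }],
            "toolChoice": {"tool": {"name": "json_tool_call"}},
        }
    }
```

Note how the two paths never mix: a model either gets `outputConfig` or a forced synthetic tool, which is what lets the response handler decide purely from the presence of the synthetic tool call.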
Last reviewed commit: 2129061
@ndgigliotti unable to merge due to conflicts, can you resolve and bump me please?

@jquinter for monitoring
Resolve conflict in test_converse_transformation.py by keeping both the structured outputs tests (from this branch) and the TestBedrockMinThinkingBudgetTokens tests (from main).
@krrishdholakia Conflicts resolved and pushed. The 6 failing checks are unrelated; all checks relevant to this PR are passing.
Keep both structured output tests (ours) and min thinking budget tests (staging). Accept staging poetry.lock.
Resolved merge conflicts with the staging branch.

Merged 3a34b63 into BerriAI:litellm_oss_staging_02_16_2026
…de 4.5+) For Bedrock InvokeModel Claude models that support native structured outputs (Haiku 4.5, Sonnet 4.5, Opus 4.5, Opus 4.6), use output_config.format with json_schema instead of the synthetic json_tool_call workaround. Unsupported models automatically fall back to the existing tool-call approach. Completes the Invoke API portion of BerriAI#21208 (Converse was merged in BerriAI#21222).

Relevant issues
Fixes #21208
Pre-Submission checklist
- Added testing in the `tests/litellm/` directory (adding at least 1 test is a hard requirement - see details)
- Unit tests pass via `make test-unit`
- Tagged `@greptileai` and received a Confidence Score of at least 4/5 before requesting a maintainer review

Type
🆕 New Feature
Changes
For Bedrock models that support native structured outputs, use the Converse API's `outputConfig.textFormat` field (true constrained decoding) instead of the synthetic `json_tool_call` workaround. Unsupported models automatically fall back to the existing tool-call approach.

What changed
- `litellm/types/llms/bedrock.py` — adds `OutputConfigBlock`, `OutputFormat`, `OutputFormatStructure`, and `JsonSchemaDefinition` matching the AWS Converse API schema
- `BEDROCK_NATIVE_STRUCTURED_OUTPUT_MODELS` set with substring matching (same pattern as Anthropic's direct API)
- `_add_additional_properties_to_schema()` recursively ensures `additionalProperties: false` on all object types, which Bedrock requires
- `_translate_response_format_param` — supported models get `outputConfig` (no synthetic tool, no `fake_stream`); unsupported models use the existing tool-call path unchanged
- `outputConfig` flows through `_prepare_request_params` → `_transform_request_helper` → the final API request; response handling passes JSON text content through directly when no synthetic tool call is present

Supported models (native path)
Anthropic Claude 4.5+, Qwen3, DeepSeek V3.1, Gemma 3, MiniMax M2, Mistral Large 3, Ministral, Voxtral, Moonshot Kimi K2, NVIDIA Nemotron Nano.
GPT-OSS and Magistral Small are excluded despite AWS listing them — their constrained decoding is broken on Bedrock (produces invalid JSON). They fall back to tool-call which works reliably for GPT-OSS.
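Backing up to the schema-normalization step: a rough re-implementation of the recursion might look like the sketch below. This is an approximation, not the PR's actual `_add_additional_properties_to_schema()`, and, per the review note, it shares the reviewed code's gap of not traversing the `definitions` keyword.

```python
# Approximate sketch of recursive schema normalization; not the PR's actual
# helper. Like the reviewed implementation, it does not traverse `definitions`
# (a gap the Greptile review flagged).
def add_additional_properties(schema: dict) -> dict:
    """Recursively ensure additionalProperties: false on every object type,
    which Bedrock's constrained decoding requires."""
    if schema.get("type") == "object":
        # setdefault preserves an explicit value if one is already present.
        schema.setdefault("additionalProperties", False)
        for prop in schema.get("properties", {}).values():
            if isinstance(prop, dict):
                add_additional_properties(prop)
    if schema.get("type") == "array" and isinstance(schema.get("items"), dict):
        add_additional_properties(schema["items"])
    return schema
```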
Why
- True constrained decoding: `response_format` and `tools` work together cleanly (helps with [Bug]: Bedrock returns fake json_tool_call tool when using response_format + tools together, breaking OpenAI Agents SDK #18381)
- No `fake_stream` needed for the native path

AWS docs: https://docs.aws.amazon.com/bedrock/latest/userguide/structured-output.html
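On the native path, real tools and the structured-output constraint occupy separate request fields, which is why `response_format` + `tools` stop colliding. A minimal sketch, assuming the field shapes this PR describes (not the verified AWS wire format):

```python
# Sketch: native structured outputs keep real user tools in toolConfig and the
# schema constraint in outputConfig, so no synthetic json_tool_call displaces
# user tools or forces tool_choice. Field shapes are assumptions.
request = {
    "toolConfig": {
        "tools": [{
            "toolSpec": {
                "name": "get_weather",  # a real user-supplied tool
                "inputSchema": {"json": {"type": "object"}},
            }
        }],
    },
    "outputConfig": {
        "textFormat": {
            "type": "json_schema",
            "structure": {"jsonSchema": {"schema": {
                "type": "object",
                "additionalProperties": False,
            }}},
        }
    },
}
# Under the old fallback, a synthetic tool would have to share toolConfig with
# get_weather and force tool_choice -- the collision reported in #18381.
tool_names = [t["toolSpec"]["name"] for t in request["toolConfig"]["tools"]]
```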