Conversation
Caution: Review failed. The pull request is closed.
Summary by CodeRabbit

Walkthrough

This PR refactors Bedrock-Anthropic structured output handling to use a unified synthetic tool-based approach instead of provider-native output configuration, adds comprehensive test coverage for this behavior, and bumps the Helm chart version with new authentication configuration options.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: 4 passed, 1 failed (inconclusive).
Confidence Score: 4/5

Core fix is correct and well-tested; the open Helm values.yaml behavioral change (enforceAuthOnInference: true) from a prior thread is the remaining concern before merging. No new P0/P1 issues were found in this pass. The previously flagged P1 (enforceAuthOnInference: true rendering a live config change on Helm upgrade) is still unresolved, which keeps the ceiling at 4/5. In helm-charts/bifrost/values.yaml, the enforceAuthOnInference: true field is active and will render into deployed configs on chart upgrade.
Reviews (2). Last reviewed commit: "framework: bump core to v1.5.7 --skip-ci".
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/bedrock/bedrock_test.go`:
- Around line 3444-3565: The test suite now enforces Route A (Anthropic on
Bedrock must use the synthetic bf_so_* tool and must NOT set output_config), but
several legacy tests still assert the old behavior; update or remove those tests
so they match Route A: locate the legacy tests
TestToBedrockResponsesRequest_AnthropicTextFormatUsesOutputConfig,
TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice,
TestAnthropicStructuredOutputAcceptsOrderedMaps, and
TestAnthropicStructuredOutputMergesAdditionalModelRequestFieldPaths and change
their assertions to the Route A contract — i.e., assert
AdditionalModelRequestFields does NOT contain "output_config" or the
structured-outputs beta tunnel and assert ToolConfig contains a bf_so_* tool
with ToolChoice forcing the specific tool (e.g., "bf_so_classification") — or
remove the obsolete tests entirely if they duplicate coverage already provided
by TestBedrockAnthropicChatStructuredOutputUsesSyntheticTool and
TestToBedrockResponsesRequest_AnthropicStructuredOutputUsesSyntheticTool.
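The Route A contract in the comment above can be reduced to a small assertion sketch. Note that `bedrockRequest`, `checkRouteA`, and the field names below are illustrative stand-ins, not the actual types in `core/providers/bedrock`:

```go
package main

import (
	"fmt"
	"strings"
)

// bedrockRequest is a stand-in for the real converted-request type.
type bedrockRequest struct {
	AdditionalModelRequestFields map[string]any
	ToolNames                    []string
	ForcedToolChoice             string
}

// checkRouteA returns one message per violated Route A rule:
// no output_config tunnel, a synthetic bf_so_* tool present,
// and tool_choice forcing that synthetic tool.
func checkRouteA(req bedrockRequest) []string {
	var violations []string
	if _, ok := req.AdditionalModelRequestFields["output_config"]; ok {
		violations = append(violations, "output_config must be absent")
	}
	hasSynthetic := false
	for _, name := range req.ToolNames {
		if strings.HasPrefix(name, "bf_so_") {
			hasSynthetic = true
		}
	}
	if !hasSynthetic {
		violations = append(violations, "missing synthetic bf_so_* tool")
	}
	if !strings.HasPrefix(req.ForcedToolChoice, "bf_so_") {
		violations = append(violations, "tool_choice must force the bf_so_* tool")
	}
	return violations
}

func main() {
	good := bedrockRequest{
		AdditionalModelRequestFields: map[string]any{},
		ToolNames:                    []string{"bf_so_classification"},
		ForcedToolChoice:             "bf_so_classification",
	}
	fmt.Println(len(checkRouteA(good))) // 0 when the contract holds
}
```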
In `@core/providers/bedrock/responses.go`:
- Around line 1842-1847: Update the outdated comment to reflect the new
behavior: note that convertTextFormatToTool can return a synthetic "bf_so_*"
tool for Anthropic (and non-Anthropic) models, that this value is captured in
responseFormatTool, and that the code later injects it via a forced tool_choice
rather than dropping it; mention that output_config.format is still handled
accordingly and that streaming-json-parser enforces JSON shape when no synthetic
tool is present. Reference convertTextFormatToTool, responseFormatTool, and the
synthetic bf_so_* tool in the comment so future readers don't assume a (nil,
nil) return for Anthropic.
In `@core/providers/bedrock/utils.go`:
- Around line 80-86: Update the outdated helper comment around the
convertResponseFormatToTool call: remove references to the old Anthropic path
that returned (nil, nil) and to dropping output_config.format, and instead state
that convertResponseFormatToTool now consistently synthesizes a bf_so_* tool for
all models (so responseFormatTool is always used) and that downstream JSON-shape
guidance/validation is handled by the streaming-json-parser plugin; update the
same wording at the other occurrences referenced (around lines where
responseFormatTool is used with bifrostReq.Model and bifrostReq.Params).
In `@helm-charts/bifrost/values.yaml`:
- Around line 205-209: The values file is actively setting
enforceAuthOnInference: true which forces rendering due to the template's hasKey
logic and changes upgrade behavior; remove or comment out the
enforceAuthOnInference key (leave enforceGovernanceHeader comment as-is or
adjust its note) so the key is absent and the template falls back to
authConfig.disableAuthOnInference as intended; ensure references to
enforceAuthOnInference in the Helm templates that use hasKey continue to rely on
absence rather than a default true value.
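A sketch of the intended values.yaml shape under that fix. The `authConfig` nesting and the `disableAuthOnInference` default shown here are assumptions based on the comment above, not the chart's actual layout:

```yaml
# helm-charts/bifrost/values.yaml (sketch)
authConfig:
  # Deprecated; kept only as a commented note.
  # enforceGovernanceHeader: false

  # Leave enforceAuthOnInference unset (absent, not `true`) so templates
  # gated on `hasKey` fall back to authConfig.disableAuthOnInference.
  # enforceAuthOnInference: true

  disableAuthOnInference: false
```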
In `@helm-charts/index.yaml`:
- Around line 6-8: The index.yaml entry for the helm-charts repository contains
stale/missing metadata (empty digest for version 2.1.13 and a future created
timestamp), so regenerate the chart index using the standard Helm workflow:
re-package any changed charts (helm package) then run helm repo index . with
--merge to update the existing index (and --url set to the chart hosting URL) so
that the generated fields "digest", "created" (per-release), and top-level
"generated" are recomputed consistently; verify the 2.1.13 entry now has a
non-empty digest and no future timestamp, then commit the updated
helm-charts/index.yaml.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c06c528c-9ecd-48e4-9f80-9c59b09542d7
📒 Files selected for processing (10)
- .gitignore
- .infisical.json
- core/changelog.md
- core/providers/bedrock/bedrock_test.go
- core/providers/bedrock/responses.go
- core/providers/bedrock/utils.go
- helm-charts/bifrost/Chart.yaml
- helm-charts/bifrost/README.md
- helm-charts/bifrost/values.yaml
- helm-charts/index.yaml
Force-pushed from 6237130 to 46c7ef2.
…output tool
Bedrock Converse rejects toolConfig.toolChoice.tool on Meta Llama variants
with HTTP 400 ("This model doesn't support the toolConfig.toolChoice.tool
field. Remove toolConfig.toolChoice.tool and try again."). The synthetic
bf_so_* tool path used for structured output unconditionally pinned the
synthetic tool via toolChoice.tool, which broke every with_structured_output
caller against Llama 4 Maverick / Scout / Llama 3.x on Bedrock.
This patch adds an IsLlamaModel helper next to the existing IsNovaModel /
IsAnthropicModel family and gates the forced-tool emission off it in three
places:
1. core/providers/bedrock/utils.go: ChatCompletions synthetic-tool injection
(response_format=json_schema -> bf_so_* tool path).
2. core/providers/bedrock/responses.go: OpenAI Responses API mirror path
(text.format=json_schema -> bf_so_* tool path).
3. core/providers/bedrock/utils.go: convertToolConfigFromFiltered's explicit
tool_choice path (defense-in-depth for callers using bind_tools(tool_choice=
{"type": "function", "function": {"name": "X"}}) directly).
With one synthetic tool bound and Bedrock's default "auto" behavior, omitting
tool_choice yields the same outcome on Llama because there's exactly one tool
the model can call. This mirrors the gate langchain-aws ships
(supports_tool_choice_values for llama4 / llama3-1 / llama3-3 sets only
"auto", causing with_structured_output to emit no tool_choice at all).
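The gate described above can be sketched as follows. `isLlamaModel` and `applyForcedToolChoice` are illustrative names; the real helpers (`IsLlamaModel` and the injection sites in `core/providers/bedrock`) may match model IDs more strictly:

```go
package main

import (
	"fmt"
	"strings"
)

// isLlamaModel sketches the IsLlamaModel helper named in the commit
// message; the real matching logic may be stricter.
func isLlamaModel(model string) bool {
	return strings.Contains(strings.ToLower(model), "llama")
}

// applyForcedToolChoice shows the gate: families that accept
// toolConfig.toolChoice.tool get the synthetic bf_so_* tool pinned;
// Llama is left on Bedrock's default "auto", which still calls the
// single bound tool.
func applyForcedToolChoice(model, syntheticToolName string) map[string]any {
	toolConfig := map[string]any{
		"tools": []any{
			map[string]any{"toolSpec": map[string]any{"name": syntheticToolName}},
		},
	}
	if !isLlamaModel(model) {
		toolConfig["toolChoice"] = map[string]any{
			"tool": map[string]any{"name": syntheticToolName},
		}
	}
	return toolConfig
}

func main() {
	_, forced := applyForcedToolChoice("meta.llama3-1-70b-instruct-v1:0", "bf_so_result")["toolChoice"]
	fmt.Println("llama forced tool_choice:", forced) // prints false
}
```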
Tests added (mirroring PR #3184's TestBedrockAnthropicChatStructuredOutputUsesSyntheticTool
naming and structure):
- TestBedrockLlamaChatStructuredOutputOmitsForcedToolChoice — synthetic tool
present, ToolChoice nil for Llama on the ChatCompletions path.
- TestBedrockNonLlamaChatStructuredOutputForcesToolChoice — regression guard:
Nova / Anthropic still get the forced tool_choice (gate doesn't over-fire).
- TestToBedrockResponsesRequest_LlamaStructuredOutputOmitsForcedToolChoice —
same negative+positive on the Responses API path.
- TestBedrockLlamaConvertToolConfigOmitsForcedToolChoice — defense-in-depth
coverage for explicit tool_choice struct callers.
Refs:
- https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ToolChoice.html
- https://github.com/langchain-ai/langchain-aws (bedrock_converse.py
supports_tool_choice_values family-detection lines 678-711).
- PR #3184 (Anthropic structured-output fallback to synthetic tool path) —
immediate predecessor; this PR completes the per-family gating story.
…output tool
Bedrock Converse rejects toolConfig.toolChoice.tool on Meta Llama variants
with HTTP 400 ("This model doesn't support the toolConfig.toolChoice.tool
field. Remove toolConfig.toolChoice.tool and try again."). The synthetic
bf_so_* tool path used for structured output unconditionally pinned the
synthetic tool via toolChoice.tool, which broke every with_structured_output
caller against Llama 4 Maverick / Scout / Llama 3.x on Bedrock.
This patch adds an IsLlamaModel helper next to the existing IsNovaModel /
IsAnthropicModel family and gates the forced-tool emission off it in four
places (covering both API surfaces and both the synthetic and explicit
tool_choice paths on each):
1. core/providers/bedrock/utils.go: ChatCompletions synthetic-tool injection
(response_format=json_schema -> bf_so_* tool path).
2. core/providers/bedrock/responses.go: OpenAI Responses API mirror of #1
(text.format=json_schema -> bf_so_* tool path).
3. core/providers/bedrock/utils.go: convertToolConfigFromFiltered's explicit
tool_choice path (defense-in-depth for ChatCompletions callers using
bind_tools(tool_choice={"type": "function", "function": {"name": "X"}})
directly).
4. core/providers/bedrock/responses.go: Responses API mirror of #3 — the
explicit tool_choice path runs convertResponsesToolChoice and writes
BedrockToolChoice{Tool: ...} straight onto the request, which trips the
same Llama 400 without the gate.
With one synthetic tool bound and Bedrock's default "auto" behavior, omitting
tool_choice yields the same outcome on Llama because there's exactly one tool
the model can call. This mirrors the gate langchain-aws ships
(supports_tool_choice_values for llama4 / llama3-1 / llama3-3 sets only
"auto", causing with_structured_output to emit no tool_choice at all).
Tests added (mirroring PR #3184's TestBedrockAnthropicChatStructuredOutputUsesSyntheticTool
naming and structure):
- TestBedrockLlamaChatStructuredOutputOmitsForcedToolChoice — synthetic tool
present, ToolChoice nil for Llama on the ChatCompletions path.
- TestBedrockNonLlamaChatStructuredOutputForcesToolChoice — regression guard:
Nova / Anthropic still get the forced tool_choice (gate doesn't over-fire).
- TestToBedrockResponsesRequest_LlamaStructuredOutputOmitsForcedToolChoice —
same negative+positive on the Responses API synthetic-tool path.
- TestBedrockLlamaConvertToolConfigOmitsForcedToolChoice — defense-in-depth
coverage for ChatCompletions explicit tool_choice struct callers.
- TestToBedrockResponsesRequest_LlamaConvertResponsesToolChoiceOmitsForcedToolChoice
— defense-in-depth coverage for Responses API explicit tool_choice
(function-typed) struct callers — the gap surfaced in code review.
- TestToBedrockResponsesRequest_NonLlamaConvertResponsesToolChoiceForcesToolChoice
— Anthropic regression guard for #5 so the Responses-side Llama gate
doesn't over-fire.
Refs:
- https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ToolChoice.html
- https://github.com/langchain-ai/langchain-aws (bedrock_converse.py
supports_tool_choice_values family-detection lines 678-711).
- PR #3184 (Anthropic structured-output fallback to synthetic tool path) —
immediate predecessor; this PR completes the per-family gating story.
…output tool (#3196)

Summary
Fixes a user-reported regression where Claude Opus 4.7 structured output requests on Bedrock Converse failed with `output_config.format: Extra inputs are not permitted`. The root cause was that PR #3053 routed Anthropic-on-Bedrock structured outputs through `output_config.format` + `anthropic_beta: structured-outputs-2025-11-13` in `additionalModelRequestFields`, which Bedrock Converse rejects for certain Claude variants (including Opus 4.7). The fix routes all Bedrock + Anthropic structured output requests through the synthetic `bf_so_*` tool path (the same path used by non-Anthropic Bedrock models), which is a standard Converse tool call accepted by all Claude variants.

Changes

- `convertResponseFormatToTool` and `convertTextFormatToTool` in `core/providers/bedrock/utils.go` no longer branch on `IsAnthropicModel` to emit `output_config.format`; both now fall through to the `bf_so_*` synthetic-tool synthesis for all Bedrock models.
- `ToBedrockResponsesRequest` in `core/providers/bedrock/responses.go` drops the `output_config.format` + `anthropic_beta` injection block added in #3053 (bedrock cli compatibility changes).
- `convertChatParameters` in `core/providers/bedrock/utils.go` similarly drops the `output_config.format` + `anthropic_beta` injection block.
- New `TestBedrockAnthropicChatStructuredOutputUsesSyntheticTool` and `TestToBedrockResponsesRequest_AnthropicStructuredOutputUsesSyntheticTool` unit tests assert that `output_config` and the structured-outputs beta header are absent, and that the `bf_so_*` tool is injected and forced via `tool_choice`.
- New `BedrockOpus47Tests/TestBedrockOpus47StructuredOutputRegression` (skipped unless `BEDROCK_OPUS_47_MODEL_ID` is set) mirrors the exact user-reported Python snippet through the same wire path.
- Helm chart bumped to `2.1.13`; `enforceAuthOnInference` surfaced as a commented default in `values.yaml`; `enforceGovernanceHeader` marked deprecated.

Type of change
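The unified `bf_so_*` routing described in the Summary can be sketched as follows. `toSyntheticTool` is an illustrative name and the map shapes are assumptions; the real synthesis lives in `convertResponseFormatToTool` / `convertTextFormatToTool`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toSyntheticTool sketches the unified path: instead of emitting
// output_config.format for Anthropic models, every Bedrock model gets a
// synthetic Converse tool whose input schema is the requested JSON schema.
// The bf_so_ prefix comes from the PR; everything else here is a stand-in.
func toSyntheticTool(schemaName string, schema map[string]any) map[string]any {
	return map[string]any{
		"toolSpec": map[string]any{
			"name":        "bf_so_" + schemaName,
			"description": "Emit the structured output matching the schema.",
			"inputSchema": map[string]any{"json": schema},
		},
	}
}

func main() {
	tool := toSyntheticTool("classification", map[string]any{
		"type":       "object",
		"properties": map[string]any{"label": map[string]any{"type": "string"}},
		"required":   []any{"label"},
	})
	out, _ := json.Marshal(tool)
	fmt.Println(string(out))
}
```

The resulting tool call is ordinary Converse `toolConfig` traffic, which is why it is accepted by Claude variants that reject the `output_config.format` tunnel.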
Affected areas
How to test
The live repro test sends a structured output request with `output_config.format` (json_schema with `anyOf`-style nullable fields) and no outer `anthropic-beta` header, the exact shape of the user's failing request. A passing run logs `Bedrock Opus 4.7 structured-output request SUCCEEDED`.

Breaking changes
Related issues
Regression introduced in #3053 (commit 7df13ab45).

Security considerations
No auth, secrets, or PII changes. The `.infisical` directory is now gitignored to prevent accidental credential leakage.

Checklist
I have read `docs/contributing/README.md` and followed the guidelines.