
fix: add support for Anthropic structured output and response format #1972

Merged
akshaydeo merged 14 commits into maximhq:main from emirhanmutlu-natuvion:fix/anthropic-formats
Apr 17, 2026

Conversation

@emirhanmutlu-natuvion
Contributor

Summary

Fix Bedrock Anthropic structured output conversion so it uses native Anthropic Bedrock request fields instead of synthetic tool forcing.
This resolves the incompatibility where structured output could force tool use and break thinking/reasoning in the same call.

Changes

  • Updated Bedrock chat parameter conversion for Anthropic models:
    • response_format (json_schema) now maps to native additionalModelRequestFields.output_config.format
    • no synthetic response-format tool is created for Anthropic
    • no forced toolChoice is injected for Anthropic structured output
  • Added safe merge logic for output_config:
    • keeps both output_config.format and reasoning-related output_config.effort when both are present
  • Added regression test:
    • TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice
    • verifies:
      • output_config.format exists
      • thinking remains present when reasoning is set
      • structured output does not force ToolChoice

Notable design decision:

  • Behavior remains unchanged for non-Anthropic Bedrock models (still uses existing response_format->tool conversion where applicable).

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (Next.js)
  • Docs

How to test

go version
go test ./providers/bedrock -run "TestAnthropicReasoningConfigUsesThinkingField|TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice"
go test ./...

@coderabbitai
Contributor

coderabbitai Bot commented Mar 6, 2026

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • Support for requesting multiple chat completions (n parameter).
  • Bug Fixes

    • Improved structured-output handling: Anthropic-native output_config is preserved and deep-merged; non-Anthropic requests still use the legacy tool-based path with synthetic tooling when required.
    • Better preservation and mapping of reasoning/effort and other additional model request fields across conversions.
  • Tests

    • Added unit tests covering structured-output conversions, reasoning mapping, merge behavior, and chat-N preservation.

Walkthrough

Preserve Anthropic-style ordered output_config.format, add OrderedMap deep-merge/upsert utilities for AdditionalModelRequestFields, defer synthetic structured-output tool injection for non‑Anthropic flows, extend converters to return Anthropic output-format data, add tests, and add N *int to chat parameters.

Changes

Cohort / File(s) Summary
Bedrock tests
core/providers/bedrock/bedrock_test.go
Updated and added tests validating Anthropic vs non‑Anthropic structured-output handling, ordered output_config.format preservation, deep-merge semantics for additionalModelRequestFieldPaths, reasoning/effort mapping, synthetic tool forcing, and minor JSON alignment fixes.
Bedrock utils & helpers
core/providers/bedrock/utils.go
Changed convertResponseFormatToTool and convertTextFormatToTool to return (*BedrockTool, any) allowing return of an Anthropic output_config.format. Added helpers: setOutputConfigField, mergeAdditionalModelRequestFields, mergeOrderedMapInto, toOrderedMap, newAnthropicOutputFormatOrderedMap. convertChatParameters now upserts/merges Anthropic output_config fields.
Bedrock responses
core/providers/bedrock/responses.go
Made thinking.output_config.effort extraction OrderedMap-aware; replaced direct output_config writes with setOutputConfigField; deep-merged incoming AdditionalModelRequestFields; deferred synthetic structured-output tool injection and prepend/force it after conversion pass when required.
OpenAI tests
core/providers/openai/chat_test.go
Added TestToOpenAIChatRequest_PreservesN to assert Params.N is preserved through ToOpenAIChatRequest.
Schemas
core/schemas/chatcompletions.go
Added `N *int` (tagged `json:"n,omitempty"`) to `ChatParameters` to support requesting multiple chat completions.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Bifrost as BifrostReq
    participant Converter
    participant Merger as MergeHelpers
    participant Bedrock as BedrockReq

    Client->>Bifrost: submit Bifrost request (model, params, additionalPaths)
    Bifrost->>Converter: convertResponseFormatToTool(ctx, model, params)
    alt Anthropic json_schema (ordered)
        Converter-->>Bifrost: (nil, anthropicOutputFormat)
        Bifrost->>Merger: setOutputConfigField("format", anthropicOutputFormat)
        Merger-->>Bedrock: upsert/merge into AdditionalModelRequestFields
    else Non-Anthropic or tool-needed
        Converter-->>Bifrost: (syntheticBedrockTool, nil)
        Bifrost->>Bedrock: defer synthetic tool (prepend later) + mark forced ToolChoice
    end
    Bifrost->>Merger: mergeAdditionalModelRequestFields(existing, incoming)
    Merger-->>Bedrock: deep-merged AdditionalModelRequestFields
    Bifrost->>Bedrock: emit final Bedrock request (with synthetic tool prepended if applicable)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through ordered maps at night,
Merged keys with care and set them right.
Anthropic schemas kept their gentle form,
Synthetic tools lined up in a warm swarm.
A tiny n made completions bright. 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 53.57%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name | Status | Explanation
Title check | ✅ Passed | The title accurately describes the main change: adding support for Anthropic structured output and response format to Bedrock, which aligns with the core problem solved.
Description check | ✅ Passed | The description follows the template structure with all required sections completed: Summary, Changes, Type of change, Affected areas, How to test, and includes the breaking-changes checkbox. All sections are substantive and relevant.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/bedrock/utils.go`:
- Around line 77-91: The responses.go adaptive-thinking path currently calls
Set("output_config", {"effort": effort}) which overwrites any previously merged
output_config (losing structured output.format set by setOutputConfigField);
replace that direct Set call with a merge using
setOutputConfigField(bedrockReq.AdditionalModelRequestFields, "effort", effort)
so the effort key is added into the existing output_config map instead of
replacing it, ensuring both format and effort coexist for Anthropic/Opus
requests.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: be3124c6-f9e5-45c3-9960-1f57ddb9b318

📥 Commits

Reviewing files that changed from the base of the PR and between 615b702 and 64c3046.

📒 Files selected for processing (2)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/utils.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/bedrock/utils.go`:
- Around line 77-86: The code currently sets anthropic output format into
bedrockReq.AdditionalModelRequestFields via setOutputConfigField after calling
convertResponseFormatToTool, but later logic (the code that applies
extra_params.additionalModelRequestFieldPaths and assigns
bedrockReq.AdditionalModelRequestFields) overwrites the map and drops those
keys; change the merge behavior so that when applying
extra_params.additionalModelRequestFieldPaths you merge into the existing
bedrockReq.AdditionalModelRequestFields (not replace it), and perform a
deep-merge for nested keys under "output_config" so existing entries like the
"format" and any "thinking" fields added by setOutputConfigField are preserved
and merged rather than overwritten; update the code paths around
convertResponseFormatToTool, setOutputConfigField and the place that assigns
bedrockReq.AdditionalModelRequestFields to perform this merge.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c76cbb82-8fdc-4305-92f1-224aec9a3c1d

📥 Commits

Reviewing files that changed from the base of the PR and between ae56245 and af10021.

📒 Files selected for processing (1)
  • core/providers/bedrock/utils.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
core/providers/bedrock/utils.go (1)

352-366: Consider handling the error returned by koanfmaps.Merge.

koanfmaps.Merge returns an error that is currently ignored. While unlikely to fail for simple map structures, errors from deep copy operations could be silently swallowed.

♻️ Optional: log merge errors for observability
 func setOutputConfigField(fields *schemas.OrderedMap, key string, value any) {
 	if fields == nil {
 		return
 	}
 	current := map[string]interface{}{}
 	if existing, ok := fields.Get("output_config"); ok {
 		if m, ok := toStringInterfaceMap(existing); ok && m != nil {
 			current = m
 		}
 	}
-	koanfmaps.Merge(map[string]interface{}{key: value}, current)
+	if err := koanfmaps.Merge(map[string]interface{}{key: value}, current); err != nil {
+		// Log but don't fail - merge errors are unlikely for simple map structures
+		// Consider using the package-level logger if available
+	}
 	fields.Set("output_config", current)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/utils.go` around lines 352 - 366, The call to
koanfmaps.Merge in setOutputConfigField ignores its returned error; update
setOutputConfigField to capture the error (err := koanfmaps.Merge(...)) and
handle it (e.g., log a warning/error with context including key and current
output_config, and abort or fallback to a safe merge) before calling
fields.Set("output_config", current); ensure you reference the existing symbols
(setOutputConfigField, koanfmaps.Merge, fields.Set, "output_config") so the
change is applied in the same function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/go.mod`:
- Around line 56-62: The go.mod currently pins
github.com/mitchellh/copystructure v1.2.0 (archived) and
github.com/knadh/koanf/maps v0.1.2; while no advisories are reported, treat
copystructure as a supply-chain risk—run govulncheck ./... locally to confirm no
reachable vulnerabilities, and either upgrade to a maintained release or replace
the dependency (e.g., a maintained fork or alternate library) if available;
update go.mod/go.sum accordingly and add a short PR note documenting the audit
results and remediation decision referencing the modules
github.com/mitchellh/copystructure and github.com/knadh/koanf/maps.

---

Nitpick comments:
In `@core/providers/bedrock/utils.go`:
- Around line 352-366: The call to koanfmaps.Merge in setOutputConfigField
ignores its returned error; update setOutputConfigField to capture the error
(err := koanfmaps.Merge(...)) and handle it (e.g., log a warning/error with
context including key and current output_config, and abort or fallback to a safe
merge) before calling fields.Set("output_config", current); ensure you reference
the existing symbols (setOutputConfigField, koanfmaps.Merge, fields.Set,
"output_config") so the change is applied in the same function.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c4c0f1fa-ff14-4c65-9eab-e463a8f12a75

📥 Commits

Reviewing files that changed from the base of the PR and between af10021 and d3a57fa.

⛔ Files ignored due to path filters (1)
  • core/go.sum is excluded by !**/*.sum
📒 Files selected for processing (4)
  • core/go.mod
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/providers/bedrock/responses.go

@CLAassistant

CLAassistant commented Mar 7, 2026

CLA assistant check
All committers have signed the CLA.

@emirhanmutlu-natuvion
Contributor Author

@akshaydeo I'd appreciate it if you could have another look at the updated implementation. Thanks a lot in advance!

@akshaydeo
Contributor

@emirhanmutlu-natuvion can you also run the tests on non-Anthropic models once and share the results? 🙇

Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
core/providers/bedrock/utils.go (1)

441-445: ⚠️ Potential issue | 🟠 Major

Preserve ordered response_format values instead of downcasting them to plain maps.

This new path still only accepts map[string]interface{} at every level. If ResponseFormat, json_schema, or schema is supplied as schemas.OrderedMap, convertResponseFormatToTool now returns (nil, nil) and the Anthropic structured-output branch is skipped. Even on the success path, forcing schema back to a plain map throws away the ordered representation this PR is trying to preserve.

Suggested direction
-func newAnthropicOutputFormatOrderedMap(schemaObj map[string]interface{}) *schemas.OrderedMap {
+func newAnthropicOutputFormatOrderedMap(schemaObj any) *schemas.OrderedMap {
 	return schemas.NewOrderedMapFromPairs(
 		schemas.KV("type", "json_schema"),
 		schemas.KV("schema", schemaObj),
 	)
 }
@@
-	responseFormatMap, ok := (*params.ResponseFormat).(map[string]interface{})
+	responseFormatMap, ok := schemas.SafeExtractOrderedMap(*params.ResponseFormat)
 	if !ok {
 		return nil, nil
 	}
@@
-	formatType, ok := responseFormatMap["type"].(string)
+	formatTypeValue, ok := responseFormatMap.Get("type")
+	formatType, ok := formatTypeValue.(string)
 	if !ok || formatType != "json_schema" {
 		return nil, nil
 	}
@@
-	jsonSchemaObj, ok := responseFormatMap["json_schema"].(map[string]interface{})
+	jsonSchemaValue, ok := responseFormatMap.Get("json_schema")
+	if !ok {
+		return nil, nil
+	}
+	jsonSchemaObj, ok := schemas.SafeExtractOrderedMap(jsonSchemaValue)
 	if !ok {
 		return nil, nil
 	}
@@
-	schemaObj, ok := jsonSchemaObj["schema"].(map[string]interface{})
+	schemaObj, ok := jsonSchemaObj.Get("schema")
 	if !ok {
 		return nil, nil
 	}

The later name/description reads should switch to Get(...) as well.

Also applies to: 925-960

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/utils.go` around lines 441 - 445, The code currently
downcasts ordered response_format structures to plain map[string]interface{} in
newAnthropicOutputFormatOrderedMap and in convertResponseFormatToTool, which
causes branches to return nil when an input is a schemas.OrderedMap and discards
ordering; update newAnthropicOutputFormatOrderedMap and the response-format
handling in convertResponseFormatToTool to accept and propagate
*schemas.OrderedMap (or detect and convert when necessary) so the returned
structure preserves ordering instead of forcing map types, and change any direct
map indexing for name/description to use the OrderedMap Get(...) accessor so
those reads work with both OrderedMap and plain maps (apply same fix for the
other similar block around the convertResponseFormatToTool logic).
🧹 Nitpick comments (1)
core/providers/openai/chat_test.go (1)

10-23: Use schemas.Ptr(2) in this fixture.

The test is fine, but this repo consistently prefers the pointer helper over taking the address of a local, even in tests.

Suggested cleanup
-	n := 2
 	req := &schemas.BifrostChatRequest{
 		Provider: schemas.OpenAI,
 		Model:    "gpt-4.1",
@@
 		Params: &schemas.ChatParameters{
-			N: &n,
+			N: schemas.Ptr(2),
 		},
 	}

Based on learnings: "In the maximhq/bifrost repository, prefer using bifrost.Ptr() to create pointers instead of the address operator (&) even when & would be valid syntactically."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chat_test.go` around lines 10 - 23, The test currently
creates a local variable n := 2 and assigns its address to Params.N; replace
that with the pointer helper by setting Params.N to schemas.Ptr(2) instead.
Locate the fixture that builds the schemas.BifrostChatRequest (the
ChatParameters.N field) and change the code to use schemas.Ptr(2) to follow the
repository convention for pointer values.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@core/providers/bedrock/utils.go`:
- Around line 441-445: The code currently downcasts ordered response_format
structures to plain map[string]interface{} in newAnthropicOutputFormatOrderedMap
and in convertResponseFormatToTool, which causes branches to return nil when an
input is a schemas.OrderedMap and discards ordering; update
newAnthropicOutputFormatOrderedMap and the response-format handling in
convertResponseFormatToTool to accept and propagate *schemas.OrderedMap (or
detect and convert when necessary) so the returned structure preserves ordering
instead of forcing map types, and change any direct map indexing for
name/description to use the OrderedMap Get(...) accessor so those reads work
with both OrderedMap and plain maps (apply same fix for the other similar block
around the convertResponseFormatToTool logic).

---

Nitpick comments:
In `@core/providers/openai/chat_test.go`:
- Around line 10-23: The test currently creates a local variable n := 2 and
assigns its address to Params.N; replace that with the pointer helper by setting
Params.N to schemas.Ptr(2) instead. Locate the fixture that builds the
schemas.BifrostChatRequest (the ChatParameters.N field) and change the code to
use schemas.Ptr(2) to follow the repository convention for pointer values.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 17612654-2e1c-4e0d-9fa6-e72711827c49

📥 Commits

Reviewing files that changed from the base of the PR and between ef65fed and 161f19f.

📒 Files selected for processing (4)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/providers/bedrock/bedrock_test.go

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
core/providers/bedrock/bedrock_test.go (1)

2843-2998: Add a case that exercises the adaptive output_config.effort branch.

Both new tests stay on Sonnet + Reasoning.MaxTokens, so they never hit the SupportsAdaptiveThinking path in core/providers/bedrock/utils.go that writes output_config.effort. That leaves the exact merge this PR is protecting — output_config.format coexisting with output_config.effort — unguarded by regression coverage.

🧪 Suggested regression case
+func TestAnthropicStructuredOutputPreservesAdaptiveEffort(t *testing.T) {
+	responseFormat := any(map[string]any{
+		"type": "json_schema",
+		"json_schema": map[string]any{
+			"name": "classification",
+			"schema": map[string]any{
+				"type": "object",
+			},
+		},
+	})
+
+	req := &schemas.BifrostChatRequest{
+		Model: "anthropic.claude-opus-4-6-v1",
+		Input: []schemas.ChatMessage{
+			{
+				Role: schemas.ChatMessageRoleUser,
+				Content: &schemas.ChatMessageContent{
+					ContentStr: schemas.Ptr("Classify this"),
+				},
+			},
+		},
+		Params: &schemas.ChatParameters{
+			ResponseFormat: &responseFormat,
+			Reasoning: &schemas.ChatReasoning{
+				Effort: schemas.Ptr("high"),
+			},
+		},
+	}
+
+	result, err := bedrock.ToBedrockChatCompletionRequest(
+		schemas.NewBifrostContext(context.Background(), schemas.NoDeadline),
+		req,
+	)
+	require.NoError(t, err)
+	require.NotNil(t, result)
+	require.NotNil(t, result.AdditionalModelRequestFields)
+
+	outputConfigRaw, _ := result.AdditionalModelRequestFields.Get("output_config")
+	outputConfig := outputConfigRaw.(*schemas.OrderedMap)
+	_, hasFormat := outputConfig.Get("format")
+	effort, hasEffort := outputConfig.Get("effort")
+	require.True(t, hasFormat)
+	require.True(t, hasEffort)
+	assert.Equal(t, "high", effort)
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 2843 - 2998, Add a new
test case similar to
TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice/TestAnthropicStructuredOutputMergesAdditionalModelRequestFieldPaths
that forces the adaptive-thinking branch so output_config.effort is emitted and
validated; specifically, call bedrock.ToBedrockChatCompletionRequest with a
model that triggers SupportsAdaptiveThinking (or mock/override
SupportsAdaptiveThinking to return true) and include Reasoning.MaxTokens so the
code path in core/providers/bedrock/utils.go that writes output_config.effort
runs, then assert that result.AdditionalModelRequestFields contains
output_config.format (json_schema) and output_config.effort simultaneously and
that merging behavior still deep-merges format without being overwritten. Ensure
the new test references the existing helpers and symbols
(ToBedrockChatCompletionRequest, SupportsAdaptiveThinking, output_config.effort,
output_config.format) so the adaptive effort branch is covered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@core/providers/bedrock/bedrock_test.go`:
- Around line 2843-2998: Add a new test case similar to
TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice/TestAnthropicStructuredOutputMergesAdditionalModelRequestFieldPaths
that forces the adaptive-thinking branch so output_config.effort is emitted and
validated; specifically, call bedrock.ToBedrockChatCompletionRequest with a
model that triggers SupportsAdaptiveThinking (or mock/override
SupportsAdaptiveThinking to return true) and include Reasoning.MaxTokens so the
code path in core/providers/bedrock/utils.go that writes output_config.effort
runs, then assert that result.AdditionalModelRequestFields contains
output_config.format (json_schema) and output_config.effort simultaneously and
that merging behavior still deep-merges format without being overwritten. Ensure
the new test references the existing helpers and symbols
(ToBedrockChatCompletionRequest, SupportsAdaptiveThinking, output_config.effort,
output_config.format) so the adaptive effort branch is covered.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: fc9d6493-0a77-49cf-928f-09547391eaf3

📥 Commits

Reviewing files that changed from the base of the PR and between 161f19f and 52a9f21.

📒 Files selected for processing (5)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • core/providers/bedrock/responses.go
  • core/schemas/chatcompletions.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
core/providers/bedrock/responses.go (1)

1766-1840: ⚠️ Potential issue | 🟠 Major

Anthropic Responses structured output is still on the forced-tool path.

Lines 1827-1839 still run Params.Text.Format through convertTextFormatToTool() and force ToolChoice. That means Anthropic Responses requests with reasoning + structured output still take the same path this PR just removed for chat. Please emit native output_config.format here as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/responses.go` around lines 1766 - 1840, The code still
forces structured-output through convertTextFormatToTool() and ToolChoice for
requests that should use Anthropic native output_config.format; update the
branch that handles bifrostReq.Params.Text.Format so that for Anthropic
responses (use schemas.IsAnthropicModel(bifrostReq.Model) or
anthropic.SupportsAdaptiveThinking checks as appropriate) you do not
create/append a response tool or set ToolConfig.ToolChoice, and instead set
bedrockReq.AdditionalModelRequestFields.Set("output_config",
map[string]any{"format": <converted format object>}) (mirroring how chat was
changed), while retaining the existing tool-based behavior for non-Anthropic
models; use convertTextFormatToTool only for non-Anthropic paths and ensure
ToolConfig.ToolChoice is not set for Anthropic models.
core/providers/bedrock/utils.go (1)

1003-1005: ⚠️ Potential issue | 🔴 Critical

Return both values from convertResponseFormatToTool.

Line 1005 returns only nil, but the function signature declares 2 return values: (*BedrockTool, any). This will cause a Go compilation error. Change to return nil, nil.

Proposed fix
 	schemaObjBytes, err := providerUtils.MarshalSorted(schemaObj)
 	if err != nil {
-		return nil
+		return nil, nil
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/utils.go` around lines 1003 - 1005, The return in
convertResponseFormatToTool is missing the second value: when
providerUtils.MarshalSorted(schemaObj) errors the code currently does "return
nil" but the function signature is "(*BedrockTool, any)"; update that error path
to "return nil, nil" so both return values are provided (locate the error
handling after "schemaObjBytes, err := providerUtils.MarshalSorted(schemaObj)"
inside convertResponseFormatToTool and change the single-value return to return
nil, nil).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/openai/chat_test.go`:
- Around line 99-101: The test function TestToOpenAIChatRequest_PreservesN is
missing the closing braces for its if block and the function; add a closing
brace to end the if (after the t.Fatalf call) and another to close the
TestToOpenAIChatRequest_PreservesN function so it compiles (ensure the braces
match the opening ones in TestToOpenAIChatRequest_PreservesN and the inner if).
After fixing, run go vet/go test to verify compilation.

---

Outside diff comments:
In `@core/providers/bedrock/responses.go`:
- Around line 1766-1840: The code still forces structured-output through
convertTextFormatToTool() and ToolChoice for requests that should use Anthropic
native output_config.format; update the branch that handles
bifrostReq.Params.Text.Format so that for Anthropic responses (use
schemas.IsAnthropicModel(bifrostReq.Model) or anthropic.SupportsAdaptiveThinking
checks as appropriate) you do not create/append a response tool or set
ToolConfig.ToolChoice, and instead set
bedrockReq.AdditionalModelRequestFields.Set("output_config",
map[string]any{"format": <converted format object>}) (mirroring how chat was
changed), while retaining the existing tool-based behavior for non-Anthropic
models; use convertTextFormatToTool only for non-Anthropic paths and ensure
ToolConfig.ToolChoice is not set for Anthropic models.

In `@core/providers/bedrock/utils.go`:
- Around line 1003-1005: The return in convertResponseFormatToTool is missing
the second value: when providerUtils.MarshalSorted(schemaObj) errors the code
currently does "return nil" but the function signature is "(*BedrockTool, any)";
update that error path to "return nil, nil" so both return values are provided
(locate the error handling after "schemaObjBytes, err :=
providerUtils.MarshalSorted(schemaObj)" inside convertResponseFormatToTool and
change the single-value return to return nil, nil).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: db74be37-f4fa-449f-8a78-b648c0c01f96

📥 Commits

Reviewing files that changed from the base of the PR and between c5d132d and 21500f9.

📒 Files selected for processing (5)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (1)
  • core/schemas/chatcompletions.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/providers/bedrock/bedrock_test.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
core/providers/bedrock/responses.go (1)

1772-1772: ⚠️ Potential issue | 🟠 Major

output_config still won’t round-trip through ToBifrostResponsesRequest.

After this line writes output_config via setOutputConfigField, ToBifrostResponsesRequest still only reads output_config as map[string]interface{} on Line 1505. When the stored value is *schemas.OrderedMap, Anthropic reasoning settings are skipped during Bedrock→Bifrost conversion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/responses.go` at line 1772, The conversion path loses
Anthropic reasoning fields because setOutputConfigField stores output_config as
a *schemas.OrderedMap but ToBifrostResponsesRequest only expects
map[string]interface{} and thus skips those fields; update
ToBifrostResponsesRequest to detect and handle *schemas.OrderedMap (and other
possible container types) by converting it to a map[string]interface{}
(iterating keys/values and preserving nested structures) before reading keys
like "effort", or add a small helper that normalizes output_config to
map[string]interface{} and use it where output_config is read so Anthropic
settings round-trip correctly; reference functions: setOutputConfigField,
ToBifrostResponsesRequest, and the output_config value stored in
bedrockReq.AdditionalModelRequestFields.
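The round-trip fix this prompt describes can be sketched with a small normalization helper. The snippet below is a self-contained illustration, not the project's code: `OrderedMap` is a minimal stub standing in for `schemas.OrderedMap`, and `normalizeToMap` shows the shape of a helper that lets `ToBifrostResponsesRequest` read `output_config` regardless of whether it was stored as a plain map or an ordered map.

```go
package main

import "fmt"

// OrderedMap is a minimal stand-in for schemas.OrderedMap: Set and Range
// preserve insertion order.
type OrderedMap struct {
	keys   []string
	values map[string]interface{}
}

func NewOrderedMap() *OrderedMap {
	return &OrderedMap{values: map[string]interface{}{}}
}

func (m *OrderedMap) Set(k string, v interface{}) {
	if _, ok := m.values[k]; !ok {
		m.keys = append(m.keys, k)
	}
	m.values[k] = v
}

func (m *OrderedMap) Range(f func(k string, v interface{}) bool) {
	for _, k := range m.keys {
		if !f(k, m.values[k]) {
			return
		}
	}
}

// normalizeToMap converts an output_config value to map[string]interface{}
// whether it was stored as a plain map or an *OrderedMap, recursing into
// nested containers so nested values (e.g. format.schema) survive the
// Bedrock→Bifrost round-trip.
func normalizeToMap(v interface{}) (map[string]interface{}, bool) {
	switch val := v.(type) {
	case map[string]interface{}:
		out := make(map[string]interface{}, len(val))
		for k, nested := range val {
			if m, ok := normalizeToMap(nested); ok {
				out[k] = m
			} else {
				out[k] = nested
			}
		}
		return out, true
	case *OrderedMap:
		out := make(map[string]interface{}, len(val.keys))
		val.Range(func(k string, nested interface{}) bool {
			if m, ok := normalizeToMap(nested); ok {
				out[k] = m
			} else {
				out[k] = nested
			}
			return true
		})
		return out, true
	default:
		return nil, false
	}
}

func main() {
	oc := NewOrderedMap()
	oc.Set("effort", "high")
	format := NewOrderedMap()
	format.Set("type", "json_schema")
	oc.Set("format", format)

	plain, _ := normalizeToMap(oc)
	fmt.Println(plain["effort"]) // prints "high"
}
```

With such a helper in place, the reads of `output_config` (and similarly `thinking`/`reasoningConfig`) only need the normalized map, so reasoning settings are not silently skipped when the stored value is an ordered map.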
🧹 Nitpick comments (1)
core/providers/bedrock/bedrock_test.go (1)

2941-2946: Make the thinking assertion representation-agnostic.

This assertion is currently tied to map[string]any; if converter internals return *schemas.OrderedMap with identical semantics, the test fails unnecessarily.

♻️ Proposed test-hardening change
-	thinkingRaw, hasThinking := result.AdditionalModelRequestFields.Get("thinking")
-	require.True(t, hasThinking, "expected thinking field for anthropic reasoning")
-	thinking, ok := thinkingRaw.(map[string]any)
-	require.True(t, ok, "expected thinking to be a map")
-	assert.Equal(t, "enabled", thinking["type"])
+	thinkingRaw, hasThinking := result.AdditionalModelRequestFields.Get("thinking")
+	require.True(t, hasThinking, "expected thinking field for anthropic reasoning")
+	switch v := thinkingRaw.(type) {
+	case map[string]any:
+		assert.Equal(t, "enabled", v["type"])
+	case *schemas.OrderedMap:
+		typ, ok := v.Get("type")
+		require.True(t, ok, "expected thinking.type")
+		assert.Equal(t, "enabled", typ)
+	default:
+		t.Fatalf("unexpected thinking type: %T", thinkingRaw)
+	}

Based on learnings: In core/providers/bedrock tests, unit tests should avoid brittle representation-specific checks and prefer structural/field validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 2941 - 2946, The test
assumes thinkingRaw is a map[string]any; make it representation-agnostic by
handling both map[string]any and pointer-based ordered maps: inspect thinkingRaw
(variable thinkingRaw) with a type switch (e.g., case map[string]any, case
*schemas.OrderedMap or any type that exposes a Get/key access method), extract
the "type" value into a local (currently named thinking or typeVal) and assert
it equals "enabled"; update the require/ok checks (currently using hasThinking
and ok) to reflect success/failure from the extraction instead of a single
concrete type assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/bedrock/utils.go`:
- Around line 932-936: The function convertResponseFormatToTool currently
declares two return values (*BedrockTool, any) but contains a bare "return nil"
in its error path which causes a compile error; update that error return (and
any other single-value returns) to return two values (e.g., "nil, nil" or "nil,
<appropriate error/value>") so the function's return arity matches its
signature, locating the change in convertResponseFormatToTool and the error
handling block around the current bare return.

---

Duplicate comments:
In `@core/providers/bedrock/responses.go`:
- Line 1772: The conversion path loses Anthropic reasoning fields because
setOutputConfigField stores output_config as a *schemas.OrderedMap but
ToBifrostResponsesRequest only expects map[string]interface{} and thus skips
those fields; update ToBifrostResponsesRequest to detect and handle
*schemas.OrderedMap (and other possible container types) by converting it to a
map[string]interface{} (iterating keys/values and preserving nested structures)
before reading keys like "effort", or add a small helper that normalizes
output_config to map[string]interface{} and use it where output_config is read
so Anthropic settings round-trip correctly; reference functions:
setOutputConfigField, ToBifrostResponsesRequest, and the output_config value
stored in bedrockReq.AdditionalModelRequestFields.

---

Nitpick comments:
In `@core/providers/bedrock/bedrock_test.go`:
- Around line 2941-2946: The test assumes thinkingRaw is a map[string]any; make
it representation-agnostic by handling both map[string]any and pointer-based
ordered maps: inspect thinkingRaw (variable thinkingRaw) with a type switch
(e.g., case map[string]any, case *schemas.OrderedMap or any type that exposes a
Get/key access method), extract the "type" value into a local (currently named
thinking or typeVal) and assert it equals "enabled"; update the require/ok
checks (currently using hasThinking and ok) to reflect success/failure from the
extraction instead of a single concrete type assertion.

📥 Commits

Reviewing files that changed from the base of the PR and between 21500f9 and d416167.

📒 Files selected for processing (5)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (1)
  • core/providers/openai/chat_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/schemas/chatcompletions.go

coderabbitai[bot]
coderabbitai Bot previously approved these changes Mar 23, 2026
@emirhanmutlu-natuvion
Contributor Author

@emirhanmutlu-natuvion also can you run tests on non-anthropic models once and share the results 🙇

Apologies for the late response (went on vacation).

  • Added a TestNonAnthropicStructuredOutputStillUsesToolConversion unit test covering non-Anthropic models
  • Ran manual tests on AWS Bedrock with models eu.amazon.nova-micro-v1:0 and eu.amazon.nova-lite-v1:0 and confirmed the results (please see attached artifact)

Looking forward to your feedback! @akshaydeo

non_anthropic_bedrock_results.txt

akshaydeo
akshaydeo previously approved these changes Mar 27, 2026
Contributor

@akshaydeo akshaydeo left a comment


looks good to me - will wait for @TejasGhatte 's approval too

@TejasGhatte
Collaborator

Hey @emirhanmutlu-natuvion, we would need to apply the same output_config.format conversion for Anthropic models' Responses API requests as well; currently we convert the text parameter from the Responses parameters to a tool.

@emirhanmutlu-natuvion
Contributor Author

Hey @emirhanmutlu-natuvion, we would need to apply the same output_config.format conversion for Anthropic models' Responses API requests as well; currently we convert the text parameter from the Responses parameters to a tool.

Hi, updated the implementation.

Anthropic Responses requests now use native output_config.format for params.text.format, while non-Anthropic models keep the existing tool-conversion path.

Added following regression tests for both paths and reran the focused Bedrock test set successfully:

0d50543#diff-8af780be45ba138fae87a5efca352a8fd35efe2492eab1de983ff6e20c17f858R2214

0d50543#diff-8af780be45ba138fae87a5efca352a8fd35efe2492eab1de983ff6e20c17f858R2273

Looking forward to your feedback @akshaydeo @TejasGhatte

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/bedrock/responses.go (1)

1491-1497: ⚠️ Potential issue | 🟡 Minor

Handle thinking / reasoningConfig as OrderedMap too.

Line 1496 and Line 1562 still hard-cast these blobs to map[string]interface{}. After mergeAdditionalModelRequestFields() preserves nested OrderedMap values, an OrderedMap-valued thinking or reasoningConfig will silently skip Params.Reasoning reconstruction even though output_config now supports both shapes. Using schemas.SafeExtractOrderedMap(...) for these reads would cover both paths.

Also applies to: 1509-1516, 1561-1563

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/responses.go` around lines 1491 - 1497, The code
currently assumes reasoningConfig/thinking is a map[string]interface{} and skips
OrderedMap values; update the reads around AdditionalModelRequestFields (the
reasoningConfig/thinking handling) to use schemas.SafeExtractOrderedMap(...)
(falling back to the existing map extraction) so both map[string]interface{} and
OrderedMap shapes are supported, then use schemas.SafeExtractString on the
OrderedMap entries (e.g., "type" and "output_config") to reconstruct
Params.Reasoning the same way as for the map path; apply this change to the
occurrences around the reasoningConfig extraction (the blocks using
reasoningConfig.(map[string]interface{})) and the similar block near line 1561
so both shapes are handled uniformly.
🧹 Nitpick comments (4)
core/providers/bedrock/utils.go (1)

351-364: Canonicalize output_config key order in the helper.

This helper preserves insertion order, so caller order becomes wire order. convertChatParameters writes format before effort, while ToBedrockResponsesRequest writes effort before format, which means the same Anthropic structured-output config serializes differently between Chat and Responses. Normalizing format first here would keep the payload stable.

♻️ One way to normalize it
 func setOutputConfigField(fields *schemas.OrderedMap, key string, value any) {
 	if fields == nil {
 		return
 	}
 	current := schemas.NewOrderedMap()
 	if existing, ok := fields.Get("output_config"); ok {
 		if om, ok := toOrderedMap(existing); ok && om != nil {
 			current = om
 		}
 	}
 	current.Set(key, value)
-	fields.Set("output_config", current)
+	if formatValue, ok := current.Get("format"); ok {
+		ordered := schemas.NewOrderedMap()
+		ordered.Set("format", formatValue)
+		current.Range(func(k string, v interface{}) bool {
+			if k != "format" {
+				ordered.Set(k, v)
+			}
+			return true
+		})
+		current = ordered
+	}
+	fields.Set("output_config", current)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/utils.go` around lines 351 - 364, setOutputConfigField
currently preserves caller insertion order for output_config causing differing
wire order between convertChatParameters (writes format then effort) and
ToBedrockResponsesRequest (writes effort then format); change
setOutputConfigField to ensure a canonical key order by always placing "format"
into the output_config before any other keys (e.g., when adding a new key, if
key != "format" ensure existing or newly created map has "format" set/ordered
first), so that subsequent Set(key, value) calls (for "effort" or others) append
after "format" and serialization is stable; update logic in setOutputConfigField
(and the helper toOrderedMap if needed) to reorder or reconstruct current so
"format" is first while preserving other entries and values.
core/providers/bedrock/bedrock_test.go (3)

3250-3269: Assert the preserved key sequence, not just OrderedMap type.

This currently passes as long as output_config.format.schema comes back as an OrderedMap. If the converter rebuilds that map with a different key order, the regression still stays green. Since ordered preservation is the behavior being protected here, compare the marshaled schema or explicit key sequence as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 3250 - 3269, The test
only asserts that output_config.format.schema is an OrderedMap type but not that
its key order was preserved; update the assertions after retrieving schemaRaw
(and converting to *schemas.OrderedMap) to assert the actual key sequence:
either marshal the OrderedMap to JSON and compare to the expected JSON string or
extract the OrderedMap key sequence (e.g., via its Keys()/iteration) and assert
it equals the expected []string order; keep the existing type assertions around
schemaRaw and add the key-sequence/marshaled-string equality check to protect
ordering regression.

3354-3413: Cover the format + effort merge path directly.

This only proves output_config.format survives alongside an arbitrary foo key on Claude 3.7. The risky path in this PR is Anthropic Opus 4.6, where reasoning writes output_config.effort; a regression that drops either field would still pass this suite. Please add a case with Model: "anthropic.claude-opus-4-6-v1" and Reasoning.Effort set, then assert that both output_config.format and output_config.effort are present after conversion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 3354 - 3413, Add a new
test case variant that covers the Anthropic Opus 4.6 merge path by copying the
existing bifrostReq setup but set Model to "anthropic.claude-opus-4-6-v1" and
set Params.Reasoning.Effort (e.g., schemas.Ptr(some int or value) on
bifrostReq.Params.Reasoning.Effort); call
bedrock.ToBedrockChatCompletionRequest(ctx, bifrostReq) and then assert that
result.AdditionalModelRequestFields contains output_config with both the
preserved output_config.format (check format/type/schema as in the existing
assertions) and an output_config.effort entry (lookup via
outputConfig.Get("effort") and assert its presence/value) while also verifying
top-level keys (thinking/customField) remain merged as before.

2372-2378: Avoid asserting tool slice order here.

This regression only needs to prove that the synthetic structured-output tool is selected and the user tool is preserved. Hard-coding Tools[0]/Tools[1] makes the test fail on harmless insertion-order changes.

Suggested assertion shape
 require.NotNil(t, bedrockReq.ToolConfig, "expected tool_config to be initialized")
 require.Len(t, bedrockReq.ToolConfig.Tools, 2, "expected synthetic structured output tool plus user tool")
 require.NotNil(t, bedrockReq.ToolConfig.ToolChoice, "expected structured output tool choice to be forced")
 require.NotNil(t, bedrockReq.ToolConfig.ToolChoice.Tool, "expected structured output tool choice to target the synthetic tool")
 assert.Equal(t, "bf_so_classification", bedrockReq.ToolConfig.ToolChoice.Tool.Name)
-assert.Equal(t, "bf_so_classification", bedrockReq.ToolConfig.Tools[0].ToolSpec.Name)
-assert.Equal(t, "get_weather", bedrockReq.ToolConfig.Tools[1].ToolSpec.Name)
+var sawStructuredTool, sawUserTool bool
+for _, tool := range bedrockReq.ToolConfig.Tools {
+	require.NotNil(t, tool.ToolSpec)
+	switch tool.ToolSpec.Name {
+	case "bf_so_classification":
+		sawStructuredTool = true
+	case "get_weather":
+		sawUserTool = true
+	}
+}
+assert.True(t, sawStructuredTool, "expected synthetic structured output tool to be present")
+assert.True(t, sawUserTool, "expected user tool to be preserved")

Based on learnings: In core/providers/bedrock tests, follow a layered testing approach: Unit tests should perform structural comparisons and type/field checks to avoid brittleness from dynamic fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 2372 - 2378, The test
currently assumes ordering of bedrockReq.ToolConfig.Tools by indexing
Tools[0]/[1]; instead, change assertions to verify presence and properties
without relying on slice order: assert bedrockReq.ToolConfig and ToolChoice are
non-nil, assert ToolChoice.Tool.Name == "bf_so_classification", then scan
bedrockReq.ToolConfig.Tools to confirm there exists a tool with ToolSpec.Name ==
"bf_so_classification" and another with ToolSpec.Name == "get_weather" (or
otherwise assert both names are present), ensuring the user tool is preserved
and the synthetic structured-output tool is selected regardless of insertion
order.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@core/providers/bedrock/responses.go`:
- Around line 1491-1497: The code currently assumes reasoningConfig/thinking is
a map[string]interface{} and skips OrderedMap values; update the reads around
AdditionalModelRequestFields (the reasoningConfig/thinking handling) to use
schemas.SafeExtractOrderedMap(...) (falling back to the existing map extraction)
so both map[string]interface{} and OrderedMap shapes are supported, then use
schemas.SafeExtractString on the OrderedMap entries (e.g., "type" and
"output_config") to reconstruct Params.Reasoning the same way as for the map
path; apply this change to the occurrences around the reasoningConfig extraction
(the blocks using reasoningConfig.(map[string]interface{})) and the similar
block near line 1561 so both shapes are handled uniformly.

---

Nitpick comments:
In `@core/providers/bedrock/bedrock_test.go`:
- Around line 3250-3269: The test only asserts that output_config.format.schema
is an OrderedMap type but not that its key order was preserved; update the
assertions after retrieving schemaRaw (and converting to *schemas.OrderedMap) to
assert the actual key sequence: either marshal the OrderedMap to JSON and
compare to the expected JSON string or extract the OrderedMap key sequence
(e.g., via its Keys()/iteration) and assert it equals the expected []string
order; keep the existing type assertions around schemaRaw and add the
key-sequence/marshaled-string equality check to protect ordering regression.
- Around line 3354-3413: Add a new test case variant that covers the Anthropic
Opus 4.6 merge path by copying the existing bifrostReq setup but set Model to
"anthropic.claude-opus-4-6-v1" and set Params.Reasoning.Effort (e.g.,
schemas.Ptr(some int or value) on bifrostReq.Params.Reasoning.Effort); call
bedrock.ToBedrockChatCompletionRequest(ctx, bifrostReq) and then assert that
result.AdditionalModelRequestFields contains output_config with both the
preserved output_config.format (check format/type/schema as in the existing
assertions) and an output_config.effort entry (lookup via
outputConfig.Get("effort") and assert its presence/value) while also verifying
top-level keys (thinking/customField) remain merged as before.
- Around line 2372-2378: The test currently assumes ordering of
bedrockReq.ToolConfig.Tools by indexing Tools[0]/[1]; instead, change assertions
to verify presence and properties without relying on slice order: assert
bedrockReq.ToolConfig and ToolChoice are non-nil, assert ToolChoice.Tool.Name ==
"bf_so_classification", then scan bedrockReq.ToolConfig.Tools to confirm there
exists a tool with ToolSpec.Name == "bf_so_classification" and another with
ToolSpec.Name == "get_weather" (or otherwise assert both names are present),
ensuring the user tool is preserved and the synthetic structured-output tool is
selected regardless of insertion order.

In `@core/providers/bedrock/utils.go`:
- Around line 351-364: setOutputConfigField currently preserves caller insertion
order for output_config causing differing wire order between
convertChatParameters (writes format then effort) and ToBedrockResponsesRequest
(writes effort then format); change setOutputConfigField to ensure a canonical
key order by always placing "format" into the output_config before any other
keys (e.g., when adding a new key, if key != "format" ensure existing or newly
created map has "format" set/ordered first), so that subsequent Set(key, value)
calls (for "effort" or others) append after "format" and serialization is
stable; update logic in setOutputConfigField (and the helper toOrderedMap if
needed) to reorder or reconstruct current so "format" is first while preserving
other entries and values.


📥 Commits

Reviewing files that changed from the base of the PR and between a7e9036 and 4b1fea7.

📒 Files selected for processing (3)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
core/providers/bedrock/bedrock_test.go (1)

3144-3217: Consider using SafeExtractOrderedMap for consistency with Responses API tests.

The test correctly validates the Chat Completion structured output behavior for Anthropic. However, the type assertion pattern differs from the Responses API tests:

  • Lines 3191-3192, 3197-3198: Direct type assertion to *schemas.OrderedMap
  • Lines 2260-2261, 2266-2267 (Responses tests): Uses schemas.SafeExtractOrderedMap

If the returned type ever changes between *schemas.OrderedMap and schemas.OrderedMap, the direct assertion could fail unexpectedly.

Optional: Align with SafeExtractOrderedMap pattern
-	outputConfig, ok := outputConfigRaw.(*schemas.OrderedMap)
-	require.True(t, ok, "expected output_config to be an ordered map")
+	outputConfig, ok := schemas.SafeExtractOrderedMap(outputConfigRaw)
+	require.True(t, ok, "expected output_config to be an ordered map")

-	format, ok := formatRaw.(*schemas.OrderedMap)
-	require.True(t, ok, "expected output_config.format to be an ordered map")
+	format, ok := schemas.SafeExtractOrderedMap(formatRaw)
+	require.True(t, ok, "expected output_config.format to be an ordered map")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 3144 - 3217, In
TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice, replace
the direct type assertions to *schemas.OrderedMap for the output_config and its
format with schemas.SafeExtractOrderedMap calls (use the values returned from
result.AdditionalModelRequestFields.Get("output_config") and then
SafeExtractOrderedMap on that result, and likewise on the formatRaw from
outputConfig.Get("format")); update the subsequent has/ok checks to use the
SafeExtractOrderedMap return values and keep the same assertions for format.type
and schema and the existing checks for thinking and ToolConfig.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@core/providers/bedrock/bedrock_test.go`:
- Around line 3144-3217: In
TestAnthropicStructuredOutputUsesOutputConfigWithoutForcedToolChoice, replace
the direct type assertions to *schemas.OrderedMap for the output_config and its
format with schemas.SafeExtractOrderedMap calls (use the values returned from
result.AdditionalModelRequestFields.Get("output_config") and then
SafeExtractOrderedMap on that result, and likewise on the formatRaw from
outputConfig.Get("format")); update the subsequent has/ok checks to use the
SafeExtractOrderedMap return values and keep the same assertions for format.type
and schema and the existing checks for thinking and ToolConfig.


📥 Commits

Reviewing files that changed from the base of the PR and between 4b1fea7 and 9fe71c7.

📒 Files selected for processing (5)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (2)
  • core/schemas/chatcompletions.go
  • core/providers/bedrock/utils.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • core/providers/openai/chat_test.go
  • core/providers/bedrock/responses.go

@emirhanmutlu-natuvion
Contributor Author

Hello again, just wanted to kindly bump now that the requested items are complete.

Looking forward to your feedback, thank you for your time!

Anthropic Responses requests now use native output_config.format for params.text.format, while non-Anthropic models keep the existing tool-conversion path.

Added following regression tests for both paths and reran the focused Bedrock test set successfully:

0d50543#diff-8af780be45ba138fae87a5efca352a8fd35efe2492eab1de983ff6e20c17f858R2214

0d50543#diff-8af780be45ba138fae87a5efca352a8fd35efe2492eab1de983ff6e20c17f858R2273

@akshaydeo @TejasGhatte

@greptile-apps
Contributor

greptile-apps Bot commented Apr 8, 2026

Tip:

Greploops — Automatically fix all review issues by running /greploops in Claude Code. It iterates: fix, push, re-review, repeat until 5/5 confidence.

Use the Greptile plugin for Claude Code to query reviews, search comments, and manage custom context directly from your terminal.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/bedrock/utils.go`:
- Around line 381-390: When merging the "output_config" ordered maps in the
incoming.Range block (the code that creates `current` from `merged.Get` and
calls `mergeOrderedMapInto(current, incomingMap)`), change the merge behavior so
that if the sibling key "format" exists in both `current` and `incomingMap` you
replace the entire `format` value from `current` with the `format` value from
`incomingMap` instead of recursively merging it; i.e., before calling
mergeOrderedMapInto (or inside it when handling ordered maps), detect
`incomingMap.Get("format")` and `current.Get("format")` and copy/overwrite
`current["format"] = incomingMap["format"]` (leave other sibling keys like
"effort" merged normally). Apply the same replace-on-conflict logic in the
analogous merging block around the other occurrence (the similar code at the
428-438 region) so `format` is treated as replace-on-conflict everywhere
`output_config` maps are merged.
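The replace-on-conflict semantics this prompt asks for can be sketched with plain maps. This is an illustration of the intended behavior only, not the project's helper (which operates on schemas.OrderedMap): `format` is taken wholesale from the incoming side so two different schemas are never deep-merged into a hybrid, while sibling keys such as `effort` still merge.

```go
package main

import "fmt"

// mergeOutputConfig merges an incoming output_config into dst.
// The "format" key is replace-on-conflict (the incoming value wins as a
// unit, so two schemas are never combined into a hybrid); other keys,
// e.g. "effort", merge recursively when both sides hold maps.
func mergeOutputConfig(dst, incoming map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(dst)+len(incoming))
	for k, v := range dst {
		out[k] = v
	}
	for k, v := range incoming {
		if k == "format" {
			out[k] = v // replace, never merge
			continue
		}
		dstNested, dok := out[k].(map[string]interface{})
		incNested, iok := v.(map[string]interface{})
		if dok && iok {
			out[k] = mergeOutputConfig(dstNested, incNested)
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	existing := map[string]interface{}{
		"format": map[string]interface{}{"type": "json_schema", "schema": "A"},
		"effort": "low",
	}
	incoming := map[string]interface{}{
		"format": map[string]interface{}{"type": "json_schema", "schema": "B"},
	}
	merged := mergeOutputConfig(existing, incoming)
	fmt.Println(merged["format"].(map[string]interface{})["schema"]) // prints "B"
	fmt.Println(merged["effort"])                                    // prints "low"
}
```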

📥 Commits

Reviewing files that changed from the base of the PR and between 9fe71c7 and 52da282.

📒 Files selected for processing (5)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (2)
  • core/schemas/chatcompletions.go
  • core/providers/bedrock/bedrock_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/providers/openai/chat_test.go

Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
core/providers/bedrock/utils.go (1)

390-390: ⚠️ Potential issue | 🟠 Major

Replace output_config.format atomically instead of recursively merging nested schema maps.

At Line 390 and Line 432, recursive deep merge can still combine two different output_config.format.schema objects into a hybrid schema when both sources provide format. format should be replace-on-conflict, while siblings like effort should still merge.

💡 Suggested fix
 func mergeAdditionalModelRequestFields(existing, incoming *schemas.OrderedMap) *schemas.OrderedMap {
@@
 	incoming.Range(func(key string, value interface{}) bool {
 		if key == "output_config" {
 			current := schemas.NewOrderedMap()
@@
 			if incomingMap, ok := toOrderedMap(value); ok && incomingMap != nil {
-				mergeOrderedMapInto(current, incomingMap)
+				mergeOutputConfigInto(current, incomingMap)
 				merged.Set(key, current)
 			} else {
 				merged.Set(key, value)
 			}
 			return true
 		}
@@
 	return merged
 }
 
+// mergeOutputConfigInto merges output_config with replace-on-conflict semantics
+// for "format" and recursive merge for other keys (e.g. "effort").
+func mergeOutputConfigInto(dst, src *schemas.OrderedMap) {
+	if dst == nil || src == nil {
+		return
+	}
+
+	if formatVal, ok := src.Get("format"); ok {
+		dst.Set("format", formatVal) // replace as a unit
+	}
+
+	src.Range(func(key string, srcVal interface{}) bool {
+		if key == "format" {
+			return true
+		}
+		if srcMap, ok := toOrderedMap(srcVal); ok && srcMap != nil {
+			if dstVal, exists := dst.Get(key); exists {
+				if dstMap, ok := toOrderedMap(dstVal); ok && dstMap != nil {
+					mergeOrderedMapInto(dstMap, srcMap)
+					dst.Set(key, dstMap)
+					return true
+				}
+			}
+		}
+		dst.Set(key, srcVal)
+		return true
+	})
+}

Also applies to: 428-433

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/utils.go` at line 390, The recursive merge in
mergeOrderedMapInto is combining distinct output_config.format.schema maps into
a hybrid; change the merge so that when merging a map entry with the key
"format" under the output_config context you perform a replace (assign incoming
value to destination) instead of recursively merging its nested map, while
leaving other sibling keys (e.g., "effort") to merge as before; implement this
by detecting the key name "format" (or by tracking the current parent path
includes "output_config") inside mergeOrderedMapInto and short-circuiting to
destination[key] = incoming[key] for that case, skipping further recursive merge
logic.
🧹 Nitpick comments (1)
core/providers/bedrock/bedrock_test.go (1)

2384-2389: Avoid asserting a fixed tool slice order here.

Line 2388 and Line 2389 assume the synthetic structured-output tool is always inserted before user tools. The contract being tested is presence plus forced ToolChoice, so pinning index order makes the test brittle.

♻️ Suggested assertion
 	require.Len(t, bedrockReq.ToolConfig.Tools, 2, "expected synthetic structured output tool plus user tool")
 	require.NotNil(t, bedrockReq.ToolConfig.ToolChoice, "expected structured output tool choice to be forced")
 	require.NotNil(t, bedrockReq.ToolConfig.ToolChoice.Tool, "expected structured output tool choice to target the synthetic tool")
 	assert.Equal(t, "bf_so_classification", bedrockReq.ToolConfig.ToolChoice.Tool.Name)
-	assert.Equal(t, "bf_so_classification", bedrockReq.ToolConfig.Tools[0].ToolSpec.Name)
-	assert.Equal(t, "get_weather", bedrockReq.ToolConfig.Tools[1].ToolSpec.Name)
+	toolNames := []string{
+		bedrockReq.ToolConfig.Tools[0].ToolSpec.Name,
+		bedrockReq.ToolConfig.Tools[1].ToolSpec.Name,
+	}
+	assert.ElementsMatch(t, []string{"bf_so_classification", "get_weather"}, toolNames)

Based on learnings: In core/providers/bedrock tests, follow a layered testing approach: unit tests should perform structural comparisons and type/field checks to avoid brittleness from dynamic fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/bedrock_test.go` around lines 2384 - 2389, The test
currently assumes a fixed order in bedrockReq.ToolConfig.Tools; instead keep the
length and forced choice assertions (require.Len(t, bedrockReq.ToolConfig.Tools,
2), require.NotNil(t, bedrockReq.ToolConfig.ToolChoice), require.NotNil(t,
bedrockReq.ToolConfig.ToolChoice.Tool)) and replace the index-based name checks
with presence checks: assert that bedrockReq.ToolConfig.ToolChoice.Tool.Name
equals "bf_so_classification" and that the Tools slice contains one
ToolSpec.Name "bf_so_classification" and one "get_weather" (e.g., iterate
bedrockReq.ToolConfig.Tools and collect ToolSpec.Name into a set and assert both
names are present), removing any assertions that rely on specific slice
ordering.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7aa2df0e-4e33-4576-b1ef-9aa2d793cf17

📥 Commits

Reviewing files that changed from the base of the PR and between 5089f8b and 083aaeb.

📒 Files selected for processing (5)
  • core/providers/bedrock/bedrock_test.go
  • core/providers/bedrock/responses.go
  • core/providers/bedrock/utils.go
  • core/providers/openai/chat_test.go
  • core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (2)
  • core/schemas/chatcompletions.go
  • core/providers/openai/chat_test.go


akshaydeo commented Apr 17, 2026

Merge activity

  • Apr 17, 1:58 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Apr 17, 1:58 PM UTC: Graphite couldn't merge this PR because it failed for an unknown reason (Fast-forward merges are not supported for forked repositories. Please create a branch in the target repository in order to merge).

@akshaydeo akshaydeo merged commit 8c11869 into maximhq:main Apr 17, 2026
3 of 4 checks passed
dwjwlxs added a commit to dwjwlxs/bifrost that referenced this pull request Apr 20, 2026
* fix: delete fallbacks from anthropic req (#2754)

## Summary

Remove the `fallbacks` field from Anthropic provider request bodies to ensure compatibility with the Anthropic API specification.

## Changes

- Added logic to delete the `fallbacks` field from JSON request bodies in the Anthropic provider's `getRequestBodyForResponses` function
- Implemented proper error handling for the field deletion operation with appropriate Bifrost error wrapping

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test Anthropic provider requests to ensure the `fallbacks` field is properly removed and requests succeed:

```sh
# Core/Transports
go version
go test ./...

# Test specific Anthropic provider functionality
go test ./core/providers/anthropic/...
```

Verify that requests to the Anthropic API no longer include the `fallbacks` field and complete successfully.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications - this change only removes an unsupported field from API requests.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* fix: preserve context values in async requests (#2703)

## Summary

Refactors async job execution to pass the full BifrostContext instead of just the virtual key value, enabling proper context preservation for background operations including virtual keys, tracing headers, and other request metadata.

## Changes

- Modified `AsyncJobExecutor.SubmitJob()` to accept `*schemas.BifrostContext` instead of `*string` for virtual key
- Updated `executeJob()` to restore all original request context values in the background goroutine
- Added `getVirtualKeyFromContext()` helper function to extract virtual key from BifrostContext
- Updated all async handler methods to pass BifrostContext directly to `SubmitJob()`
- Removed redundant virtual key extraction logic from HTTP handlers

## Type of change

- [x] Refactor

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)

## How to test

Verify async job execution preserves request context properly:

```sh
# Core/Transports
go version
go test ./...

# Test async endpoints with virtual keys and tracing headers
curl -X POST http://localhost:8080/v1/async/chat/completions \
  -H "Authorization: Bearer vk_test_key" \
  -H "X-Trace-Id: test-trace-123" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'

# Verify job execution maintains context
curl http://localhost:8080/v1/async/jobs/{job_id}
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

Improves security by ensuring proper context isolation and virtual key handling in async operations.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* [fix]: Gemini provider - handle content block tool outputs in Responses API path (#2692)

When function_call_output messages arrive via the Anthropic Responses API
format, their output is an array of content blocks (ResponsesFunctionToolCallOutputBlocks),
not a plain string (ResponsesToolCallOutputStr). The Gemini provider's
convertResponsesMessagesToGeminiContents only checked the string case,
silently dropping all tool result content and sending empty {} responses
to Gemini. This caused the model to loop endlessly retrying tool calls
it never saw results for.

Other providers (Bedrock, OpenAI, Cohere) already handle both output
formats. This aligns the Gemini provider with them.

Affected packages:
- core/providers/gemini/responses.go - Add ResponsesFunctionToolCallOutputBlocks handling
- core/providers/gemini/gemini_test.go - Add test for content block outputs

Co-authored-by: tom <tom@asteroid.ai>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Akshay Deo <akshay@akshaydeo.com>

* fix: gemini thinking level and finish reason round-trip preservation (#2697)

## Summary

Fixes two critical regressions in the Gemini provider's GenAI integration: preserves `thinkingLevel` parameters during round-trip conversions and ensures `MAX_TOKENS` finish reasons survive Bifrost transformations.

## Changes

- **Fixed thinking level preservation**: Modified `convertGenerationConfigToResponsesParameters()` to only set effort from `thinkingLevel` without deriving a `thinkingBudget`, preventing unwanted behavior changes in Gemini 3.x models
- **Enhanced finish reason handling**: Added bidirectional conversion between Gemini and Bifrost finish reasons, prioritizing `StopReason` over `IncompleteDetails` to preserve `MAX_TOKENS` finish reasons
- **Expanded finish reason support**: Added new Gemini finish reason constants for image generation, tool calls, and malformed responses
- **Improved response conversion**: Updated response conversion logic to properly handle error finish reasons and set appropriate status/error fields

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Validate the thinking level and finish reason preservation:

```sh
# Run Gemini provider tests
go test ./core/providers/gemini/... -v

# Specifically test the regression fixes
go test ./core/providers/gemini/... -run "TestGenAIThinkingLevel_RoundTripPreservesLevelNotBudget|TestGenAIFinishReasonMaxTokens_PersistsThroughBifrostRoundTrip" -v
```

Test with actual Gemini API calls using thinking levels and verify that:
- `thinkingLevel` parameters are preserved without generating unwanted `thinkingBudget` values
- Responses with `MAX_TOKENS` finish reason maintain that status through the conversion pipeline

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

Addresses regressions in GenAI path where thinking configuration and finish reasons were being incorrectly transformed during Bifrost conversions.

## Security considerations

No security implications - this change only affects internal data structure conversions and doesn't modify authentication, secrets handling, or data exposure.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* fix: remove cc user agent guard from streaming in anthropic (#2706)

## Summary

Fixes WebSearch tool argument handling for all clients by removing the Claude Code user agent restriction. Previously, only Claude Code clients received proper WebSearch query arguments in the streaming response, while other clients lost the query data due to skipped argument deltas.

## Changes

- Removed the `IsClaudeCodeRequest(ctx)` check that was restricting WebSearch argument sanitization and synthetic delta generation to only Claude Code clients
- WebSearch tool arguments are now sanitized and synthetic `input_json_delta` events are generated for all clients during `output_item.done` events
- Added comprehensive test coverage for the WebSearch tool flow including argument delta skipping, synthetic delta generation, and full end-to-end streaming scenarios
- Enhanced code comments to clarify the WebSearch tool handling logic

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Validate the WebSearch tool behavior with the new test suite:

```sh
# Run the new WebSearch tests
go test ./core/providers/anthropic -run TestWebSearch -v

# Run all provider tests to ensure no regressions
go test ./core/providers/anthropic/...

# Full test suite
go test ./...
```

Test with different user agents to verify WebSearch queries are properly streamed to all clients, not just Claude Code.

## Screenshots/Recordings

N/A - This is a backend streaming API fix.

## Breaking changes

- [ ] Yes
- [x] No

This change expands functionality to previously broken clients without affecting existing working behavior.

## Related issues

Fixes WebSearch tool argument streaming for non-Claude Code clients.

## Security considerations

The change maintains existing argument sanitization for WebSearch tools while expanding it to all clients, preserving the same security posture.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* remove unnecessary marshalling of payload (#2770)

## Summary

Optimized JSON parsing in the Anthropic integration by replacing full JSON unmarshaling with targeted field extraction using gjson for retrieving the "type" field from streaming responses.

## Changes

- Replaced `sonic.Unmarshal()` with `gjson.Get()` to extract only the "type" field from Anthropic stream events
- Eliminated the need to unmarshal the entire JSON response into an `AnthropicStreamEvent` struct
- Improved performance by avoiding unnecessary JSON parsing overhead

## Type of change

- [x] Refactor
- [ ] Bug fix
- [ ] Feature
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test streaming responses from the Anthropic integration to ensure the type field is correctly extracted:

```sh
# Core/Transports
go version
go test ./...

# Test specifically the Anthropic integration
go test ./transports/bifrost-http/integrations/
```

## Screenshots/Recordings

N/A

## Breaking changes

- [x] No
- [ ] Yes

## Related issues

N/A

## Security considerations

No security implications - this is a performance optimization that maintains the same functionality.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* feat: claude opus 4.7 compatibility (#2773)

## Summary

Adds support for Claude Opus 4.7 model with specific parameter handling and reasoning configuration changes. Opus 4.7 rejects temperature, top_p, and top_k parameters and only supports adaptive thinking mode without budget tokens.

## Changes

- Added `IsOpus47()` function to detect Claude Opus 4.7 models
- Modified parameter handling to skip temperature, top_p, and top_k for Opus 4.7 models
- Updated reasoning configuration to use adaptive thinking only for Opus 4.7 (no budget_tokens)
- Added support for `display` parameter in thinking configuration to control output visibility
- Extended adaptive thinking support to include Sonnet 4.6 models
- Added task budget support with new beta header `task-budgets-2026-03-13`
- Updated effort mapping to handle Opus 4.7's "xhigh" effort level
- Added comprehensive test coverage for Opus 4.7 specific behaviors
- Fixed OpenAI responses to filter out Anthropic-specific summary:"none" parameter

## Type of change

- [x] Feature
- [x] Bug fix

## Affected areas

- [x] Core (Go)
- [x] Providers/Integrations

## How to test

Validate the changes with the following tests:

```sh
# Core/Transports
go version
go test ./core/providers/anthropic/...

# Specific test cases for Opus 4.7
go test -run TestToAnthropicChatRequest_Opus47 ./core/providers/anthropic/
go test -run TestSupportsAdaptiveThinking ./core/providers/anthropic/
go test -run TestAddMissingBetaHeadersToContext_TaskBudgets ./core/providers/anthropic/
```

Test with Claude Opus 4.7 model requests to ensure:
- Temperature, top_p, top_k parameters are stripped
- Reasoning uses adaptive thinking without budget_tokens
- Task budget beta headers are properly added

## Breaking changes

- [ ] Yes
- [x] No

The changes maintain backward compatibility while adding new model support.

## Security considerations

No security implications. Changes only affect parameter handling and model-specific configurations for Anthropic's Claude models.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* docs: restructure helm guide into comprehensive multi-page reference (#2776)

* docs: restructure helm guide into comprehensive multi-page reference (#2771)

## Summary

Restructures the Helm deployment documentation into a comprehensive multi-page guide with dedicated sections for each configuration area. The main Helm page now provides quickstart instructions for both OSS and Enterprise deployments, while detailed configuration is split into focused sub-pages.

## Changes

- **Restructured main Helm page**: Condensed from 740+ lines to 103 lines with clear quickstart tabs for OSS vs Enterprise
- **Added 8 new dedicated configuration pages**:
  - `values.mdx` - Complete values reference with examples and common patterns
  - `client.mdx` - Client configuration (pool size, logging, CORS, auth, compat shims)
  - `providers.mdx` - Provider setup for all 23+ supported LLM providers with cloud-native auth
  - `storage.mdx` - Storage backends (SQLite, PostgreSQL, object storage, vector stores)
  - `plugins.mdx` - Plugin configuration (telemetry, logging, semantic cache, OTel, Datadog)
  - `governance.mdx` - Governance setup (budgets, rate limits, virtual keys, routing rules)
  - `cluster.mdx` - Multi-replica HA with gossip-based peer discovery
  - `troubleshooting.mdx` - Common issues and diagnostic commands
- **Updated chart version**: Bumped from 1.5.0 to 2.1.0
- **Enhanced navigation**: Added nested Helm section in docs.json with proper icons and organization

## Type of change

- [x] Documentation

## Affected areas

- [x] Docs

## How to test

Navigate through the new Helm documentation structure:

1. Visit the main Helm page for quickstart instructions
2. Follow the quickstart for either OSS or Enterprise deployment
3. Use the sub-pages for detailed configuration of specific areas
4. Verify all internal links work correctly
5. Test the troubleshooting commands on a real deployment

The documentation now provides both quick-start paths and comprehensive reference material for production deployments.

## Screenshots/Recordings

N/A - Documentation changes only

## Breaking changes

- [ ] Yes
- [x] No

This is purely a documentation restructure with no functional changes to the Helm chart itself.

## Related issues

Improves Helm documentation organization and usability for both new users and production deployments.

## Security considerations

The new documentation emphasizes security best practices:
- Kubernetes Secrets for all sensitive values
- Cloud-native authentication (IRSA, Workload Identity, Managed Identity)
- Proper RBAC setup for cluster mode
- Compliance considerations (HIPAA, PCI) for content logging

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* docs: update guardrails docs

* v1.4.23 cut (#2778)

## Summary

Cuts releases 1.4.20 through 1.4.23 with bug fixes for provider integrations, streaming error handling, and migration test improvements. This release addresses critical issues in Gemini, Bedrock, and Anthropic providers while adding support for Claude Opus 4.7.

## Changes

- **Provider Fixes**: Fixed Gemini tool outputs handling, Bedrock streaming events, and image content preservation in tool results
- **Streaming Improvements**: Added proper error capture for Responses streaming API to prevent silent failures
- **Migration Tests**: Added support for v1.4.22 governance model pricing flex tier columns in both PostgreSQL and SQLite migration tests
- **Anthropic Enhancements**: Removed fallback fields from outgoing requests and added Claude Opus 4.7 compatibility
- **Framework Fixes**: Improved async context propagation and custom provider model validation
- **Plugin Updates**: Enhanced OTEL metrics and configuration defaults

## Type of change

- [x] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Validate the migration test changes and provider fixes:

```sh
# Test migration scripts
./.github/workflows/scripts/run-migration-tests.sh

# Core/Transports
go version
go test ./...

# Test provider integrations
go test ./transports/...
go test ./plugins/...
```

Test the new governance model pricing columns are properly handled in migration scenarios.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

Addresses multiple provider integration issues and streaming API error handling improvements.

## Security considerations

No security implications - changes are focused on bug fixes and migration test improvements.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* validator fix (#2780)

## Summary

Enhanced GitHub Actions security by transitioning from audit-only to strict network egress control using step-security/harden-runner. This change blocks all outbound network traffic by default and explicitly allows only required endpoints for each workflow.

## Changes

- Changed `egress-policy` from `audit` to `block` across all GitHub Actions workflows
- Added comprehensive `allowed-endpoints` lists for each job, specifying only the necessary external services
- Updated step names from "Harden the runner (Audit all outbound calls)" to "Harden Runner" for consistency
- Fixed schema validation script to use correct JSON paths for concurrency and SCIM configuration validation
- Reformatted JSON schema file for improved readability (whitespace and formatting changes only)

## Type of change

- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs

## How to test

Verify that all GitHub Actions workflows continue to function properly with the new network restrictions:

```sh
# Trigger workflows by pushing to a branch or creating a PR
git push origin feature-branch

# Monitor workflow runs in GitHub Actions tab to ensure:
# - All jobs complete successfully
# - No network connectivity errors occur
# - All required external services remain accessible
```

Key endpoints that should remain accessible include:
- GitHub API and release assets
- Package registries (npm, PyPI, Go modules)
- Docker registries
- Cloud storage services
- External APIs used by tests and integrations

## Screenshots/Recordings

N/A - Infrastructure/CI changes only

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

This change significantly improves security posture by:
- Preventing unauthorized outbound network connections from CI runners
- Creating an explicit allowlist of required external services
- Reducing attack surface for supply chain attacks
- Providing better visibility into network dependencies

The transition from audit to block mode ensures that any new network dependencies must be explicitly approved and documented.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* fix: token usage for vllm --skip-pipeline (#2784)

## Summary

Fixed token usage attribution for vLLM by treating empty-string content the same as nil content in streaming responses. vLLM sends `delta.content=""` (instead of `delta: null`) in finish_reason chunks; these empty deltas were being forwarded, causing the synthesis chunk to lose its finish_reason and breaking usage attribution in logs and the UI.

## Changes

- Modified streaming content handling to check for both nil and empty string content before processing chunks
- This prevents empty content deltas from being forwarded, ensuring finish_reason is preserved for proper token usage tracking
- Removed extraneous whitespace and formatting inconsistencies throughout the OpenAI provider code

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test with vLLM provider to ensure token usage is properly attributed:

```sh
# Core/Transports
go version
go test ./...

# Test streaming chat completion with vLLM
# Verify that finish_reason is preserved in final chunks
# Check that token usage appears correctly in logs/UI
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

Fixes token usage tracking issues with vLLM provider.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* [fix]: OpenAI provider - flatten array-form tool_result output for Responses API (#2781) --skip-pipeline

When Anthropic tool_result blocks arrive with array-form content (the
standard shape for multi-turn tool exchanges), the OpenAI provider's
MarshalJSON emitted the output as a JSON array on the wire. The OpenAI
Responses API defines function_call_output.output as a string — strict
upstreams (Ollama Cloud, openai-go typed models) reject the array form
with HTTP 400.

Fix: before marshaling, collapse text-only
ResponsesFunctionToolCallOutputBlocks into a newline-joined string.
Non-text blocks (images, files) are left as-is. The schema type is
unchanged; the transformation lives in the OpenAI provider's outbound
marshaler only.

Closes #2779

Affected packages:
- core/providers/openai/types.go - Flatten text-only output blocks to string
- core/providers/openai/responses_marshal_test.go - Three regression tests
- core/changelog.md - Changelog entry

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: prevent send on closed channel panic in provider queue shutdown --skip-pipeline (#2725)

## Summary

Fixes a race condition in provider queue shutdown that caused "send on closed channel" panics in production. The issue occurred when producers passed the `isClosing()` check but then attempted to send to a queue that was closed before they reached the select statement.

## Changes

- **Removed queue channel closure**: Queue channels are never closed to prevent "send on closed channel" panics
- **Updated worker exit mechanism**: Workers now exit via the `done` channel signal instead of waiting for queue closure
- **Enhanced shutdown handling**: Workers drain remaining buffered requests and send shutdown errors when `done` is signaled
- **Added producer re-routing**: Stale producers can transparently re-route to new queues during `UpdateProvider`
- **Improved error handling**: Added rollback logic for failed provider updates with proper cleanup
- **Enhanced transfer logic**: Buffered requests are transferred before signaling shutdown to ensure they reach new workers
- **Added comprehensive tests**: Race condition demonstration and validation of the fix across multiple scenarios

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Run the new race condition test to verify the fix:

```sh
go test -run TestProviderQueue_SendOnClosedChannel_Race ./core -v
```

Run the comprehensive provider lifecycle tests:

```sh
go test -run TestProviderQueue ./core -v
go test -run TestUpdateProvider ./core -v
go test -run TestRemoveProvider ./core -v
```

Run the full test suite to ensure no regressions:

```sh
go test ./...
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

Fixes production panics related to concurrent provider queue operations during shutdown/updates.

## Security considerations

None - this is an internal concurrency fix that doesn't affect external interfaces or data handling.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: preserve MCP tool annotations in bidirectional conversion --skip-pipeline (#2746)

## Summary

Adds support for preserving MCP tool annotations when converting between MCP tools and Bifrost schemas. This enables MCP servers to provide behavioral hints (read-only, destructive, idempotent, open-world) that help agents make better reasoning decisions about tool usage.

## Changes

- Added `MCPToolAnnotations` struct to capture MCP spec hints including title, read-only, destructive, idempotent, and open-world indicators
- Modified `convertMCPToolToBifrostSchema` to preserve MCP tool annotations when converting from MCP tools to Bifrost chat tools
- Updated `ChatToolFunction` to include optional annotations field
- Enhanced MCP server sync logic to map Bifrost annotations back to MCP tool annotations for bidirectional compatibility

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test with an MCP server that provides tool annotations to verify they are preserved through the conversion process:

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

Verify that MCP tools with annotations maintain their behavioral hints when converted to Bifrost schemas and back.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications - this change only preserves metadata hints that help with tool behavior classification.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* fix: add support for Anthropic structured output and response format (#1972)

* fix: add support for Anthropic structured output and response format conversion

* fix: refactor output configuration setting in ToBedrockResponsesRequest

* run go fmt on responses.go

* fix: streamline response format conversion for Anthropic models

* fix: enhance merging of additional model request fields and output configuration

* fix: remove koanf/maps dependency and replace its usage with internal merge function

* preserve order in output_config

* update type casting

* add non-anthropic test-case

* check for output_config first

* diversify anthropic output formats

* move bifrost ctx update

* guard tested field

* guard format.jsonschema
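The "safe merge" of `output_config` that these commits describe can be sketched as a shallow map merge that keeps reasoning-related keys (e.g. `effort`) while adding the structured-output `format`. The function name and key layout here are illustrative, not the actual Bifrost implementation:

```go
package main

import "fmt"

// mergeOutputConfig merges a structured-output format into an existing
// output_config map without clobbering keys that are already present,
// such as the reasoning-related "effort" field.
func mergeOutputConfig(existing, incoming map[string]any) map[string]any {
	merged := make(map[string]any, len(existing)+len(incoming))
	for k, v := range existing {
		merged[k] = v
	}
	for k, v := range incoming {
		merged[k] = v
	}
	return merged
}

func main() {
	existing := map[string]any{"effort": "high"} // set by the reasoning config
	incoming := map[string]any{ // derived from response_format (json_schema)
		"format": map[string]any{"type": "json_schema"},
	}
	merged := mergeOutputConfig(existing, incoming)
	fmt.Println(merged["effort"], merged["format"]) // both survive the merge
}
```

Because the merge is additive rather than a replacement, structured output and thinking can coexist in one request, which is the incompatibility this PR fixes.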

* test fixes --skip-pipeline (#2782)

## Summary

Updates test configurations to align with current API specifications and replaces deprecated utility function usage.

## Changes

- Replaced `schemas.Ptr("test")` with `new("test")` in Anthropic chat test for string pointer creation
- Updated MCP client configuration tests to use `sse` connection type instead of `websocket` with simplified `connection_string` field
- Modified HTTP MCP client config to use `connection_string` instead of nested `http_config` object
- Changed OpenTelemetry plugin tests to use `genai_extension` trace type instead of `otel`

## Type of change

- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations

## How to test

Validate that all tests pass with the updated configurations:

```sh
# Core/Transports
go version
go test ./...
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications - these are test configuration updates only.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* anthropic container changes --skip-pipeline (#2783)

## Summary

Briefly explain the purpose of this PR and the problem it solves.

## Changes

- What was changed and why
- Any notable design decisions or trade-offs

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Describe the steps to validate this change. Include commands and expected outcomes.

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

If adding new configs or environment variables, document them here.

## Screenshots/Recordings

If UI changes, add before/after screenshots or short clips.

## Breaking changes

- [ ] Yes
- [ ] No

If yes, describe impact and migration instructions.

## Related issues

Link related issues and discussions. Example: Closes #123

## Security considerations

Note any security implications (auth, secrets, PII, sandboxing, etc.).

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* core schema changes --skip-pipeline (#2787)

## Summary

Promotes Anthropic-native parameters to the neutral ChatParameters layer, enabling direct access to advanced Anthropic features like containers, MCP servers, task budgets, and enhanced tool configurations without requiring ExtraParams.

## Changes

- Added neutral fields to `ChatParameters` for Anthropic-specific features: `TopK`, `Speed`, `InferenceGeo`, `MCPServers`, `Container`, `CacheControl`, `TaskBudget`, and `ContextManagement`
- Enhanced `ChatTool` with Anthropic tool flags: `DeferLoading`, `AllowedCallers`, `InputExamples`, and `EagerInputStreaming`
- Added `Display` field to `ChatReasoning` for Anthropic adaptive thinking control
- Implemented `StripUnsupportedAnthropicFields` function to remove unsupported features based on provider capabilities
- Updated parameter mapping logic to prefer neutral fields over ExtraParams with fallback support
- Added comprehensive JSON marshaling/unmarshaling for union types like `ChatContainer`

The design maintains backward compatibility by falling back to ExtraParams when neutral fields are not set, while providing type-safe access to advanced Anthropic features.
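The "prefer neutral field, fall back to ExtraParams" behavior can be sketched as follows. The struct is deliberately trimmed to two fields and `resolveTopK` is a hypothetical helper; only `TopK` and `ExtraParams` come from the description above:

```go
package main

import "fmt"

// ChatParameters is a trimmed illustration of the neutral parameter layer;
// the real struct carries many more fields (Speed, MCPServers, Container, ...).
type ChatParameters struct {
	TopK        *int
	ExtraParams map[string]any
}

// resolveTopK prefers the typed neutral field and falls back to ExtraParams,
// mirroring the backward-compatible mapping described above.
func resolveTopK(p ChatParameters) (int, bool) {
	if p.TopK != nil {
		return *p.TopK, true
	}
	if v, ok := p.ExtraParams["top_k"]; ok {
		if n, ok := v.(int); ok {
			return n, true
		}
	}
	return 0, false
}

func main() {
	k := 40
	fmt.Println(resolveTopK(ChatParameters{TopK: &k})) // typed field wins
	fmt.Println(resolveTopK(ChatParameters{ExtraParams: map[string]any{"top_k": 50}})) // fallback
}
```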

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Validate the new parameter handling and provider feature gating:

```sh
# Core/Transports
go version
go test ./...

# Test Anthropic provider parameter mapping
go test ./core/providers/anthropic/...

# Verify schema validation
go test ./core/schemas/...
```

Test with requests containing the new neutral fields to ensure proper mapping to Anthropic API format and appropriate stripping for unsupported providers.

## Screenshots/Recordings

N/A - Backend API changes only.

## Breaking changes

- [ ] Yes
- [x] No

This change is fully backward compatible. Existing ExtraParams usage continues to work, while new neutral fields provide enhanced type safety.

## Related issues

N/A

## Security considerations

The new MCP server configuration includes authorization tokens. Ensure proper handling of sensitive credentials in the `ChatMCPServer.AuthorizationToken` field.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* dependabot fixes --skip-pipeline (#2788)

## Summary

This PR adds the Hono web framework as a direct dependency to all MCP server examples and updates various dependencies across the project to their latest versions.

## Changes

- Added `hono@^4.12.14` as a direct dependency to all MCP server examples (edge-case-server, error-test-server, parallel-test-server, temperature, test-tools-server)
- Upgraded Hono from version 4.11.4 to 4.12.14 and changed it from a peer dependency to a direct dependency
- Updated Python dependencies including authlib (1.6.6 → 1.6.11), langchain-core (1.2.28 → 1.2.31), langchain-openai (1.1.4 → 1.1.14), langchain-text-splitters (1.1.0 → 1.1.2), langsmith (0.5.0 → 0.7.32), openai (2.13.0 → 2.32.0), and python-multipart (0.0.20 → 0.0.26)
- Updated TypeScript dependencies including langsmith (0.5.18 → 0.5.19) and added it as a direct dependency
- Added `github.com/tidwall/gjson v1.18.0` as a direct dependency in the Go transports module
- Updated UI dependencies including dompurify (3.3.3 → 3.4.0) and follow-redirects (1.15.11 → 1.16.0) via package overrides

## Type of change

- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] UI (Next.js)

## How to test

Validate that all dependencies are properly installed and examples still function:

```sh
# Test MCP server examples
cd examples/mcps/temperature
npm install
npm run build

# Test Go transports
cd transports
go mod tidy
go test ./...

# Test Python integrations
cd tests/integrations/python
uv sync
uv run python -m pytest

# Test TypeScript integrations
cd tests/integrations/typescript
npm install
npm test

# Test UI
cd ui
pnpm install
pnpm test
pnpm build
```

## Screenshots/Recordings

N/A - dependency updates only

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

The dependency updates include security patches, particularly for dompurify and follow-redirects which are explicitly overridden in the UI package.json for security reasons.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* move back go to 1.26.1 (#2792)

## Summary

Downgrade Go version from 1.26.2 to 1.26.1 across all GitHub Actions workflows, Go modules, and Docker images to address compatibility issues.

## Changes

- Downgraded Go version from 1.26.2 to 1.26.1 in all GitHub Actions workflows (e2e-tests, pr-tests, release-cli, release-pipeline, snyk)
- Updated go.mod files for core, CLI, examples, and test modules to use Go 1.26.1
- Updated Docker base images in transports/Dockerfile and transports/Dockerfile.local to use golang:1.26.1-alpine3.23
- Added stream cancellation safety improvements with guarded channel sends and finalizer protection to prevent goroutine leaks when clients disconnect
- Enhanced stream error checking with context cancellation support to properly drain upstream channels

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Validate the Go version downgrade and streaming improvements:

```sh
# Verify Go version
go version

# Core/Transports
go test ./...

# Test streaming endpoints with client disconnection scenarios
# to verify proper cleanup and no goroutine leaks
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

The streaming improvements enhance resource cleanup and prevent potential goroutine leaks when clients disconnect unexpectedly, improving overall system stability.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* temp gotoolchain auto (#2809)

* temp hack for tests (#2810)

## Summary

The Go workspace setup script was not specifying a `go` directive or toolchain version, which caused `GOTOOLCHAIN=auto` to select a Go version lower than what `core@v1.4.19` requires. This adds an explicit `go 1.26.2` and `toolchain go1.26.2` directive to the workspace so the correct toolchain is used automatically.

## Changes

- Added `go work edit -go=1.26.2 -toolchain=go1.26.2` to `setup-go-workspace.sh` so that `GOTOOLCHAIN=auto` selects Go >= 1.26.2, satisfying the minimum version required by the published `core@v1.4.19` module referenced in `transports/go.mod`.

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [ ] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
# Verify the workspace is initialized with the correct Go version
bash .github/workflows/scripts/setup-go-workspace.sh
grep -E "^go |^toolchain" go.work
# Expected output:
# go 1.26.2
# toolchain go1.26.2

go test ./...
```

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

## Security considerations

None.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* temp block docker build (#2811)

## Summary

Temporarily disables the `test-docker-image-amd64` and `test-docker-image-arm64` CI jobs in the release pipeline by commenting them out.

## Changes

- Both Docker image test jobs (`test-docker-image-amd64` and `test-docker-image-arm64`) have been commented out rather than removed, preserving the full job definitions for easy re-enablement later.

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

No functional code changes. Verify the release pipeline runs without executing the Docker image test jobs.

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* removed docker build steps (#2812)

## Summary

The `test-docker-image-amd64` and `test-docker-image-arm64` CI jobs have been removed from the release pipeline. These jobs were already commented out and non-functional, and all references to them as dependencies and gate conditions in downstream release jobs have been cleaned up.

## Changes

- Deleted the commented-out `test-docker-image-amd64` and `test-docker-image-arm64` job definitions from the release pipeline.
- Removed `test-docker-image-amd64` and `test-docker-image-arm64` from the `needs` arrays of `core-release`, `framework-release`, `plugins-release`, `bifrost-http-release`, the Docker build/push jobs, the manifest job, and the final notification job.
- Removed the corresponding result checks for those two jobs from all `if` conditions in the affected release jobs.

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Trigger the release pipeline and confirm that all release jobs proceed without waiting on or referencing the removed Docker image test jobs.

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

## Security considerations

None.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* moves tests to 1.26.2 and 1.26.1 (#2813)

## Summary

Bumps the Go version used across all release pipeline jobs from `1.26.1` to `1.26.2` to keep the CI environment on the latest patch release.

## Changes

- Updated Go version from `1.26.1` to `1.26.2` in all `setup-go` steps within the release pipeline workflow.

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

The release pipeline will use the updated Go version on the next run. No additional manual steps are required beyond verifying the CI pipeline passes.

```sh
go version
go test ./...
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

Patch releases often include security and bug fixes. Staying on the latest patch version reduces exposure to known vulnerabilities in the Go toolchain.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* ocr test fixes (#2814)

## Summary

Adds an operation-allowed check for OCR requests before they are dispatched to a provider, and fixes the Mistral provider to return its custom provider name when one is configured.

## Changes

- Added a `CheckOperationAllowed` guard for `OCRRequest` in `handleProviderRequest`, consistent with how other request types are gated. If the operation is not permitted, a `BifrostError` is returned with the provider key, request type, and requested model populated.
- Updated `MistralProvider.GetProviderKey()` to use `providerUtils.GetProviderName` so that custom provider configurations are respected, rather than always returning the hardcoded `schemas.Mistral` value.
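The shape of the new guard can be sketched as follows. `BifrostError` is trimmed to the three fields named above, and `checkOperationAllowed` is an illustrative stand-in for the real `CheckOperationAllowed`, not Bifrost's actual signature:

```go
package main

import (
	"errors"
	"fmt"
)

// BifrostError is a trimmed stand-in carrying the fields this PR populates.
type BifrostError struct {
	Provider       string
	RequestType    string
	ModelRequested string
}

func (e *BifrostError) Error() string {
	return fmt.Sprintf("%s not allowed for provider %s (model %s)",
		e.RequestType, e.Provider, e.ModelRequested)
}

// checkOperationAllowed gates a request type against a provider's allowed
// operations before dispatch, analogous to the OCRRequest guard added here.
func checkOperationAllowed(allowed map[string]bool, provider, reqType, model string) error {
	if !allowed[reqType] {
		return &BifrostError{Provider: provider, RequestType: reqType, ModelRequested: model}
	}
	return nil
}

func main() {
	allowed := map[string]bool{"chat": true} // OCR not permitted for this provider
	err := checkOperationAllowed(allowed, "mistral", "ocr", "mistral-ocr-latest")
	var be *BifrostError
	fmt.Println(errors.As(err, &be), be.RequestType) // error carries the request type
}
```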

## Type of change

- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
go version
go test ./...
```

- Configure a custom provider wrapping Mistral and verify that `GetProviderKey()` returns the custom provider name rather than `mistral`.
- Attempt an OCR request against a provider where the operation is not allowed and confirm a `BifrostError` is returned with the correct `Provider`, `RequestType`, and `ModelRequested` fields set.
- Attempt an OCR request against a provider where the operation is allowed and confirm the request proceeds normally.

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

## Security considerations

None.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* revert to old schema (#2815)

## Summary

This PR simplifies and consolidates the `config.schema.json` by removing several features, collapsing provider-specific schema variants, and restructuring key configuration definitions to reduce complexity and align with updated runtime semantics.

## Changes

- Removed the top-level `version` field that controlled allow-list semantics for empty arrays
- Removed the `compat` plugin configuration block (including `convert_text_to_chat`, `convert_chat_to_responses`, `should_drop_params`, `should_convert_params`)
- Replaced `compat` with a simpler `enable_litellm_fallbacks` boolean for Groq text completion fallbacks
- Removed `mcp_disable_auto_tool_inject` and `routing_chain_max_depth` from server config
- Collapsed `provider_with_ollama_config`, `provider_with_sgl_config`, and `provider_with_replicate_config` into the generic `provider` definition; removed their corresponding key types (`ollama_key`, `sgl_key`, `replicate_key`) and `network_config_without_base_url`
- Removed providers `nebius`, `xai`, and `runway` from the providers block
- Moved `calendar_aligned` from `virtual_key` to the `budget` object; removed `virtual_key_id` and `provider_config_id` from budget in favor of a standalone `budget_id` reference on virtual keys
- Removed `chain_rule` from routing rules and relaxed the `scope_id` conditional requirement
- Simplified `virtual_key_provider_config` to inline key definitions with full provider-specific key configs (Azure, Vertex, Bedrock, VLLM), replacing the separate `key_ids` and `keys` split
- Removed `mcp_client_name` and `allow_on_all_virtual_keys` from MCP configs; removed `allowed_extra_headers` and `disable_auto_tool_inject` from MCP client config
- Added `websocket` as a supported MCP connection type with a dedicated `websocket_config` block; removed `inprocess` connection type
- Removed `per_user_oauth` as an MCP auth type and dropped the conditional `oauth_config_id` requirement
- Renamed `concurrency_and_buffer_size` to `concurrency_config`; renamed `retry_backoff_initial`/`retry_backoff_max` to `retry_backoff_initial_ms`/`retry_backoff_max_ms`; removed `enforce_http2` and `openai_config` from network config
- Moved `pricing_overrides` from the top-level config into individual provider definitions
- Simplified `provider_pricing_override` schema, removing scoped fields (`scope_kind`, `virtual_key_id`, `provider_id`, `provider_key_id`) and replacing `pattern` with `model_pattern`; added `regex` as a valid `match_type`; expanded supported `request_types`
- Renamed `scim_config` to `saml_config` in the top-level schema
- Removed `apiToken` from Okta config and made `clientSecret` optional; updated required fields to only `issuerUrl` and `clientId`
- Removed `object_storage` and `retention_days` from the logs store config
- Removed `id` and `description` fields from provider config entries in the `provider_configs` array
- Removed `websocket_responses` and `realtime` from `custom_provider_config` allowed requests; removed the enum constraint on `base_provider_type`
- Removed `disable_auto_tool_inject` from `mcp_client_config` VFS settings
- Added `deployments` mapping to `azure_key_config` and `vertex_key_config`
- Updated `otel` plugin `trace_type` to only accept `"otel"` (removed `genai_extension`, `vercel`, `open_inference`)
- Removed `prompts` from the built-in plugin name list
- Removed `builtin` as a valid plugin `placement` value
- Changed `cluster_config` discovery `dial_timeout` from a Go duration string to an integer (nanoseconds)
- Reformatted many inline `required` arrays to multi-line style for readability

## Type of change

- [ ] Bug fix
- [ ] Feature
- [x] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Validate existing configs against the updated schema to confirm they parse correctly. Verify that configs using removed fields (`version`, `compat`, `mcp_disable_auto_tool_inject`, `chain_rule`, etc.) are rejected by the schema validator.

```sh
go test ./...
```

Confirm that provider configs for Ollama, SGL, and Replicate continue to work using the generic `provider` definition. Confirm MCP clients using `websocket` connection type validate correctly with a `websocket_config` block.

## Breaking changes

- [x] Yes
- [ ] No

The following fields have been removed and configs using them will fail schema validation:

- `version` (top-level)
- `compat` block under server config
- `mcp_disable_auto_tool_inject` and `routing_chain_max_depth` under server config
- `chain_rule` on routing rules
- `calendar_aligned` on virtual keys (now on budgets)
- `virtual_key_id` / `provider_config_id` on budgets
- `apiToken` on Okta config (now optional `clientSecret` only)
- `object_storage` and `retention_days` on logs store
- `id`, `description` on provider config entries
- `allow_on_all_virtual_keys`, `allowed_extra_headers`, `disable_auto_tool_inject` on MCP client config
- `inprocess` MCP connection type and `per_user_oauth` auth type
- `enforce_http2` and `openai_config` from network config
- `builtin` plugin placement value; `prompts` built-in plugin name
- `nebius`, `xai`, `runway` provider entries

Migrate by removing or replacing these fields according to the updated schema definitions.

## Related issues

## Security considerations

Removal of `per_user_oauth` as an MCP auth type should be reviewed to ensure no active integrations depend on it. The relaxed `scope_id` requirement on routing rules should be validated to confirm it does not inadvertently broaden access scope.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* reduced release pipeline for this cut for go downgrade (#2816)

## Summary

This PR removes all test jobs from the release pipeline and decouples them from the release gate conditions, allowing releases to proceed without waiting for (often flaky) provider API test results. It also significantly expands and restructures `config.schema.json` to reflect new features, provider support, and breaking semantic changes introduced in v1.5.0.

## Changes

- **Release pipeline**: Removed `test-core`, `approve-flaky-test-core`, `test-framework`, `test-plugins`, `test-bifrost-http`, `test-migrations`, and `test-e2e-ui` jobs entirely from `release-pipeline.yml`. All release jobs (`core-release`, `framework-release`, `plugins-release`, `bifrost-http-prep`, `docker-build-amd64`, `docker-build-arm64`, `push-mintlify-changelog`) now depend only on change detection and upstream release jobs, not on test outcomes.

- **Schema: deny-by-default semantics (v1.5.0)**: Empty arrays in `provider_configs`, `mcp_configs`, `allowed_models`, `key_ids`, and `tools_to_execute` now mean "deny all" rather than "allow all". Use `["*"]` to allow all. A top-level `version` field (enum `1` or `2`, default `2`) controls which semantic applies, with `1` restoring v1.4.x behavior.

- **Schema: new providers**: Added `nebius`, `xai`, and `runway` as first-class provider entries.

- **Schema: provider key restructuring**: Replaced the inline key object definition in `virtual_key_provider_config` with a flat `key_ids` string array. Introduced dedicated key types `ollama_key`, `sgl_key`, and `replicate_key` with their own `_key_config` blocks. Removed `deployments` from `azure_key_config` and `vertex_key_config` (replaced by `aliases` on `base_key`). Added `aliases` to `base_key` for model-to-deployment/inference-profile mappings.

- **Schema: provider variants**: `ollama` and `sgl` now reference `provider_with_ollama_config` and `provider_with_sgl_config` respectively, which use `network_config_without_base_url` (URL is per-key). `replicate` references `provider_with_replicate_config`. Added `openai_config` def with `disable_store` for the Responses API. Renamed `concurrency_config` to `concurrency_and_buffer_size`.

- **Schema: network config**: Split `network_config` into `network_config` (with `base_url`) and `network_config_without_base_url`. Added `enforce_http2`, `stream_idle_timeout_in_seconds`, `max_conns_per_host`, `beta_header_overrides`, and `ca_cert_pem` fields. Renamed `retry_backoff_initial_ms`/`retry_backoff_max_ms` to `retry_backoff_initial`/`retry_backoff_max`.

- **Schema: MCP changes**: Removed `websocket` connection type; added `inprocess`. Added `per_user_oauth` auth type. Added `mcp_client_name` for config-file resolution. Added `allowed_extra_headers` and `allow_on_all_virtual_keys` to `mcp_client_config`. Added `disable_auto_tool_inject` to MCP plugin config. Added global `mcp_disable_auto_tool_inject` and `routing_chain_max_depth` to server config.

- **Schema: routing rules**: Added `chain_rule` boolean to `routing_rule`. Made `scope_id` required (non-null string) when `scope` is `team`, `customer`, or `virtual_key`.

- **Schema: budgets**: Moved `calendar_aligned` from the budget object to the virtual key level. Replaced `budget_id` on virtual key with `virtual_key_id`/`provider_config_id` on the budget object itself. Removed `budget_id` from `virtual_key_provider_config`.

- **Schema: logs store**: Added `object_storage` (S3/GCS) and `retention_days` to the logs store config.

- **Schema: pricing overrides**: Moved `pricing_overrides` from per-provider to a top-level array with scoped `provider_pricing_override` objects supporting `scope_kind`, `virtual_key_id`, `provider_id`, `provider_key_id`, `match_type`, `pattern`, `request_types`, and `pricing_patch`.

- **Schema: compat plugin**: Replaced `enable_litellm_fallbacks` with a structured `compat` object supporting `convert_text_to_chat`, `convert_chat_to_responses`, `should_drop_params`, and `should_convert_params`.

- **Schema: OTEL plugin**: Expanded `trace_type` enum to `genai_extension`, `vercel`, `open_inference` (was only `otel`).

- **Schema: SCIM**: Renamed `saml_config` to `scim_config`. Added `apiToken` to `okta_config` and made `clientSecret` and `apiToken` required. Changed cluster `dial_timeout` from integer (nanoseconds) to Go duration string.

- **Schema: misc**: Added `prompts` and `builtin` to plugin name/placement enums. Added `provider_configs` fields `id`, `description`, `network_config`, `proxy_config`, `custom_provider_config`, `concurrency_and_buffer_size`, and `openai_config`. Added `scim_config` top-level ref. Normalized multi-item `required` arrays to single-line format throughout.
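The v2 deny-by-default semantics described above (empty array denies all, `["*"]` allows all) can be sketched as a small allow-list check. This is an illustrative helper, not Bifrost's actual implementation:

```go
package main

import "fmt"

// modelAllowed implements the v2 semantics for allowed_models: an empty
// allow-list denies everything, ["*"] allows everything, and otherwise the
// model must appear in the list explicitly.
func modelAllowed(allowedModels []string, model string) bool {
	for _, m := range allowedModels {
		if m == "*" || m == model {
			return true
		}
	}
	return false // empty list never enters the loop: deny all
}

func main() {
	fmt.Println(modelAllowed([]string{}, "gpt-4o"))           // false: deny by default
	fmt.Println(modelAllowed([]string{"*"}, "gpt-4o"))        // true: wildcard
	fmt.Println(modelAllowed([]string{"gpt-4o"}, "claude-3")) // false: not listed
}
```

Under v1 semantics (`"version": 1`) the empty-list branch would instead return true, which is why the migration note below suggests either setting the version or switching empty arrays to `["*"]`.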

## Type of change

- [ ] Bug fix
- [x] Feature
- [x] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
# Validate schema against existing configs
npx ajv validate -s transports/config.schema.json -d your-config.json

# Verify release pipeline runs without test gate
# Push a tagged commit and confirm release jobs trigger directly after detect-changes
```

If upgrading from v1.4.x, set `"version": 1` in your config to preserve allow-all semantics for empty arrays, or migrate empty arrays to `["*"]` and adopt v2 deny-by-default semantics.

## Breaking changes

- [x] Yes
- [ ] No

**Empty arrays in `allowed_models`, `key_ids`, `tools_to_execute`, `provider_configs`, and `mcp_configs` now deny all access by default (v2 semantics).** To allow all, use `["*"]`. To restore v1.4.x behavior, set `"version": 1` at the top level of your config.

`enable_litellm_fallbacks` has been removed; replace with the `compat` object. `saml_config` has been renamed to `scim_config`. `budget_id` has been removed from virtual keys and `virtual_key_provider_config`. `calendar_aligned` has moved from the budget object to the virtual key. `deployments` has been removed from `azure_key_config` and `vertex_key_config`; use `aliases` on the key instead. `retry_backoff_initial_ms`/`retry_backoff_max_ms` renamed to `retry_backoff_initial`/`retry_backoff_max`. The `websocket` MCP connection type has been removed; use `http` or `sse`. Okta SCIM config now requires `clientSecret` and `apiToken`.

## Related issues

N/A

## Security considerations

The `insecure_skip_verify` and `ca_cert_pem` fields on `network_config` expose TLS bypass options; these should only be used in controlled environments. The `per_user_oauth` auth type for MCP introduces per-user credential flows that require careful OAuth config management. Removal of test gates from the release pipeline means regressions from flaky provider APIs will no longer block releases, but also means real failures could ship if not caught by other means.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* force version back to go 1.26.1 (#2817)

## Summary

Bumps `core` to v1.4.21 and updates `transports` to depend on `core` v1.4.20, while removing a now-unnecessary workspace Go directive workaround that was previously required to satisfy the toolchain constraint introduced by `core` v1.4.19.

## Changes

- Incremented `core` version from `1.4.20` to `1.4.21`
- Updated `transports/go.mod` to reference `core` v1.4.20 (previously v1.4.19)
- Removed the `go work edit -go=1.26.2 -toolchain=go1.26.2` workaround from the workspace setup script, which was only needed to satisfy the toolchain requirement imposed by the published `core` v1.4.19

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
go work sync
go test ./...
```

Verify the workspace initializes without the explicit Go/toolchain directive and that all modules resolve correctly.

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

## Security considerations

None.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* revert everything to go1.26.1 (#2818)

## Summary

Bumps the core version to `1.4.22` and rolls back dependency versions across the framework, plugins, and transports to align with a prior stable set of releases. This resolves a version inconsistency introduced by forward-referencing newer module versions that were not yet intended to be consumed by downstream packages.

## Changes

- Incremented `core/version` from `1.4.21` to `1.4.22`
- Downgraded `bifrost/core` from `v1.4.19` → `v1.4.17` across `framework`, `governance`, `jsonparser`, `litellmcompat`, `logging`, `maxim`, `mocker`, `otel`, `semanticcache`, and `telemetry` plugins
- Downgraded `bifrost/framework` from `v1.2.38` → `v1.2.36` (or `v1.2.35` for `semanticcache`) across all dependent plugins
- Downgraded `bifrost/core` in `transports` from `v1.4.20` → `v1.4.19`
- Downgraded all plugin versions referenced in `transports` (governance, litellmcompat, logging, maxim, otel, semanticcache, telemetry) to their corresponding prior releases
- Downgraded `go.opentelemetry.io/otel/sdk` and `go.opentelemetry.io/otel/sdk/metric` from `v1.43.0` → `v1.40.0` in affected plugins
- Bumped Go toolchain version in `transports/go.mod` from `1.26.1` to `1.26.2`

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
go test ./...
```

Verify that all modules resolve correctly with the pinned dependency versions and that no import errors occur during build.

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

None. These are internal module version adjustments with no changes to auth, secrets, or data handling.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* bumped up hello-world dep (#2819)

## Summary

Pins the `bifrost/core` dependency in the example plugin modules to a consistent released version (`v1.4.17`), removing a `replace` directive that pointed at the local `core` module path.

## Changes

- Replaced the local `replace` directive in `hello-world-wasm-go/go.mod` with a direct reference to `github.com/maximhq/bifrost/core v1.4.17`
- Downgraded `hello-world/go.mod` from `v1.4.19` to `v1.4.17` to align both example plugins on the same released version
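The shape of the `go.mod` change can be sketched like this. Only the dependency path and version come from the PR; the module name and remaining file contents are illustrative:

```sh
# sketch: what the cleaned-up example go.mod looks like after the change
cat > go.mod.new <<'EOF'
module hello-world-wasm-go

go 1.26

require github.com/maximhq/bifrost/core v1.4.17
EOF
# the old file additionally carried a line like:
#   replace github.com/maximhq/bifrost/core => ../../core

# verify: pinned version present, no replace directive remaining
grep -q 'core v1.4.17' go.mod.new && ! grep -q '^replace' go.mod.new && echo ok
```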

## Type of change

- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
cd examples/plugins/hello-world-wasm-go
go mod tidy
go build ./...

cd ../hello-world
go mod tidy
go build ./...
```

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

## Security considerations

No security implications. This change only affects dependency resolution for example plugin modules.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* framework: bump core to v1.4.22 --skip-pipeline

* plugins/governance: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/jsonparser: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/litellmcompat: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/logging: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/maxim: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/mocker: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/otel: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/semanticcache: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* plugins/telemetry: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline

* enforce go 1.26.1 (#2820)

* transports: update dependencies --skip-pipeline

* Adds changelog for v1.4.23 --skip-pipeline

* V1.5.0 (#2245)

* refactor: standardize empty array conventions for VK Provider & MCP Configs, and makes Provider Config weight optional for routing (#1932)

## Summary

Changes Virtual Key provider and MCP configurations from an "allow-all by default" model to a "deny-by-default" security model. Virtual Keys now require explicit provider and MCP client configurations to grant access, improving the security posture.

## Changes

- **Provider Configs**: Empty `provider_configs` now blocks all providers instead of allowing all
- **MCP Configs**: Empty `mcp_configs` now blocks all MCP tools instead of allowing all  
- **Weight Field**: Changed provider `weight` from required `float64` to optional `*float64` - null weight excludes provider from weighted routin…
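The optional-weight semantics can be sketched in Go. Type and function names here are illustrative, not Bifrost's actual API: a nil `Weight` excludes a provider from weighted routing, and with deny-by-default an empty config set yields no candidates at all.

```go
package main

import "fmt"

// ProviderConfig is a minimal stand-in for a VK provider config entry.
type ProviderConfig struct {
	Name   string
	Weight *float64 // optional: nil means "exclude from weighted routing"
}

// weightedCandidates returns only providers that opted into weighted routing.
// With deny-by-default, an empty configs slice yields no candidates.
func weightedCandidates(configs []ProviderConfig) []string {
	var out []string
	for _, c := range configs {
		if c.Weight != nil {
			out = append(out, c.Name)
		}
	}
	return out
}

func main() {
	w := 0.7
	configs := []ProviderConfig{
		{Name: "openai", Weight: &w},
		{Name: "anthropic"}, // nil weight: excluded from weighted routing
	}
	fmt.Println(weightedCandidates(configs)) // only the weighted provider
	fmt.Println(weightedCandidates(nil))     // deny-by-default: empty
}
```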