anthropic container changes --skip-pipeline#2783
Conversation
📝 Walkthrough

Add Anthropic container union types, expand content-block and tool/result schemas, introduce per-tool eager input streaming with beta-header handling, convert sources to object form, and add tests plus a streaming test-harness integration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Bifrost
    participant Anthropic
    participant ToolContainer
    Client->>Bifrost: Send chat request (tool with eager_input_streaming)
    Bifrost->>Anthropic: Convert request (container/tool flags, beta headers)
    Anthropic->>ToolContainer: Invoke tool/container (eager streaming calls)
    ToolContainer-->>Anthropic: Stream tool-call deltas/results
    Anthropic-->>Bifrost: Stream deltas (content blocks with object-form sources)
    Bifrost-->>Client: Emit streaming events to caller
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Confidence Score: 5/5. Safe to merge; no P0/P1 issues. The EagerInputStreaming and AnthropicContainer implementations are complete and well-tested. All concerns from prior review rounds have been addressed. The EagerInputStreaming beta header is fully plumbed (constant, prefix, feature flags for all four providers, auto-injection, filter registration, neutral schema field, conversion in ToAnthropicChatRequest) and covered by unit and E2E tests. The AnthropicContainer union type has correct null-safe unmarshaling. The only remaining finding is a P2 test-fixture gap (allHeaders missing the new constant), which does not affect production behavior. No files require special attention.
Reviews (7): Last reviewed commit: "anthropic container changes"
15e2b48 to 313124f
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/anthropic/types.go`:
- Around line 1063-1069: The AllowedDomains and BlockedDomains fields on
AnthropicToolWebFetch currently use []string with `omitempty`, which cannot
represent "field present but empty"; change their types to pointer-to-slice
(*[]string) so a non-nil pointer to an empty slice marshals as [] while nil
still omits the field (e.g., AllowedDomains *[]string
`json:"allowed_domains,omitempty"` and BlockedDomains *[]string
`json:"blocked_domains,omitempty"`), and update any construction sites or tests
that build AnthropicToolWebFetch (places that currently set
AllowedDomains/BlockedDomains) to pass pointers (including &[]string{} to
represent an explicit empty list).
- Line 764: The new discriminator AnthropicContentBlockTypeSearchResult was
added but AnthropicContentBlock lacks a typed field and (un)marshalling logic to
preserve the search_result payload (its string-valued source), so parsed blocks
will lose data on round-trips; add a dedicated typed slot (e.g., Source string
or a SearchResult struct) to AnthropicContentBlock, implement custom
MarshalJSON/UnmarshalJSON for AnthropicContentBlock to handle the
"search_result" case (populate the typed slot and the generic payload
consistently), and update any request-side ExtraParams handling references so
response parsing preserves the original source string for search_result blocks.
- Around line 994-997: Add the four new enum constants
(AnthropicToolTypeBash20241022, AnthropicToolTypeBash20250124,
AnthropicToolTypeComputer20241022, AnthropicToolTypeComputer20250124) into the
same switch/case branches where other AnthropicToolType variants are handled:
include them in the truncation/beta-header handling branch and in the
tool-to-response conversion branches so they are treated like the existing
legacy tool variants rather than falling through to the generic path; update the
relevant response conversion helpers and utility switch lists to mirror the
handling for the other AnthropicToolType values so truncation and beta-header
behavior is applied to these new constants.
- Around line 221-230: In AnthropicContainer.UnmarshalJSON, when a string branch
successfully decodes into ContainerStr you must nil out ContainerObject (and
conversely, when decoding into AnthropicContainerObject sets ContainerObject you
must nil out ContainerStr) so the opposite union arm is cleared; update the
UnmarshalJSON implementation (referencing AnthropicContainer, ContainerStr,
ContainerObject, and MarshalJSON) to explicitly set the other field to nil after
a successful unmarshal to avoid leaving both populated.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f81df34b-4a8d-42a3-bcd1-2cfb3177c9ce
📒 Files selected for processing (1)
core/providers/anthropic/types.go
313124f to eb97daa
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/providers/anthropic/utils.go (1)
1158-1233: ⚠️ Potential issue | 🟠 Major: Add FileID handling to both Anthropic document block converters.
`ChatInputFile.FileID` (line 967 in schemas) and the `FileID` field on `ResponsesMessageContentBlock` (line 652) are not being wired through to Anthropic's `file` source type. Both `ConvertToAnthropicDocumentBlock` and `ConvertResponsesFileBlockToAnthropic` must check for FileID and set `SourceObj.Type = "file"` with `SourceObj.FileID` populated. Currently, only the `FileURL` and `FileData` branches are handled, which makes uploaded file references unusable.
For `ConvertToAnthropicDocumentBlock`: add a FileID check before the FileURL branch.
For `ConvertResponsesFileBlockToAnthropic`: update the signature to accept FileID and add the same check.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/anthropic/utils.go` around lines 1158 - 1233, ConvertToAnthropicDocumentBlock currently ignores ChatInputFile.FileID — add a check in ConvertToAnthropicDocumentBlock (before the FileURL branch) to see if file.FileID != nil/empty and, if present, set documentBlock.Source.SourceObj.Type = "file" and documentBlock.Source.SourceObj.FileID = file.FileID then return; likewise update the ConvertResponsesFileBlockToAnthropic function signature to accept a FileID parameter and add the same FileID-first branch (set SourceObj.Type="file" and SourceObj.FileID) so uploaded file references are wired through to Anthropic instead of falling back to URL/data/base64 handling.
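As a rough illustration of the FileID-first ordering this comment asks for, here is a hedged sketch. The struct shapes and the converter name are simplified assumptions, not the repository's actual `ChatInputFile`/`AnthropicSource` definitions.

```go
package main

import "fmt"

// Hypothetical, simplified mirrors of the types involved.
type chatInputFile struct {
	FileID   *string
	FileURL  *string
	FileData *string
}

type anthropicSource struct {
	Type   string
	FileID string
	URL    string
	Data   string
}

// convertDocumentBlock sketches the suggested branch order: an uploaded-file
// reference wins over URL or inline-data transport.
func convertDocumentBlock(f chatInputFile) anthropicSource {
	if f.FileID != nil && *f.FileID != "" {
		return anthropicSource{Type: "file", FileID: *f.FileID}
	}
	if f.FileURL != nil {
		return anthropicSource{Type: "url", URL: *f.FileURL}
	}
	if f.FileData != nil {
		return anthropicSource{Type: "base64", Data: *f.FileData}
	}
	return anthropicSource{}
}

func main() {
	id := "file_abc123"
	url := "https://example.com/doc.pdf"
	// Even when a URL is also present, the file reference takes precedence.
	src := convertDocumentBlock(chatInputFile{FileID: &id, FileURL: &url})
	fmt.Println(src.Type, src.FileID) // file file_abc123
}
```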
🧹 Nitpick comments (1)
core/providers/anthropic/utils.go (1)
1158-1233: Deduplicate the document-source builders. These two converters already drift on the same plain-text case: the Responses path emits `media_type: "text/plain"` on lines 1262-1264, while the Chat path on lines 1194-1195 omits it. Pull the `SourceObj` construction into a shared helper so Chat and Responses file blocks stay serialized the same way as more source variants get added. Also applies to: 1235-1310
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/anthropic/utils.go` around lines 1158 - 1233, Extract the repeated construction of AnthropicBlockSource.SourceObj into a shared helper (e.g., buildAnthropicSourceObj or NewAnthropicSourceFromFile) and call it from ConvertToAnthropicDocumentBlock and the Responses-file converter so both paths produce the same serialization; the helper should accept the file struct and encapsulate the logic for URL, data URL, base64, text/data handling, set SourceObj.Type/Data/URL/MediaType consistently, and ensure that plain-text branches always set MediaType to "text/plain" (falling back to file.FileType when present or to "application/pdf" for binary defaults).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/anthropic/types.go`:
- Around line 963-999: The UnmarshalJSON for AnthropicBlockSource can leave the
other union field populated when reusing the struct; update
AnthropicBlockSource.UnmarshalJSON so that when decoding into a string it
explicitly sets SourceObj = nil (and when decoding into an AnthropicSource it
sets SourceStr = nil) before assigning the decoded value, ensuring MarshalJSON
later sees only one non-nil arm (refer to AnthropicBlockSource, UnmarshalJSON,
MarshalJSON, SourceStr, and SourceObj).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: dc773e9b-6df2-49c1-a0ba-7188b9b6304e
📒 Files selected for processing (3)
- core/providers/anthropic/responses.go
- core/providers/anthropic/types.go
- core/providers/anthropic/utils.go
eb97daa to d56f182
afddd4c to 0484302
Actionable comments posted: 3
🧹 Nitpick comments (2)
core/internal/llmtests/account.go (1)
92: Consider making the scenario comment provider-agnostic. Line 92 mentions Anthropic specifically, but this scenario is now being enabled across multiple providers in this stack.
✏️ Suggested comment tweak

```diff
- EagerInputStreaming bool // Fine-grained tool input streaming (Anthropic fine-grained-tool-streaming-2025-05-14)
+ EagerInputStreaming bool // Fine-grained tool input streaming scenario (provider-specific capability-gated)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/internal/llmtests/account.go` at line 92, the comment on the EagerInputStreaming field is provider-specific; update it to be provider-agnostic by removing the "Anthropic" reference and date and instead describing the feature generically (e.g., "Fine-grained tool input streaming (feature: fine-grained-tool-streaming)"). Locate the EagerInputStreaming bool declaration and replace the inline comment so it documents the capability rather than a single provider or date, preserving clarity about what the flag enables.

core/internal/llmtests/eager_input_streaming.go (1)
42-43: Prefer `bifrost.Ptr(true)` for consistency. Line 54 correctly uses `bifrost.Ptr(200)`, but these lines create a local variable just to take its address. For consistency with repository conventions, use `bifrost.Ptr(true)` directly.
♻️ Suggested refactor

```diff
- eager := true
- chatTool.EagerInputStreaming = &eager
+ chatTool.EagerInputStreaming = bifrost.Ptr(true)
```

Based on learnings: "In the maximhq/bifrost repository, prefer using bifrost.Ptr() to create pointers instead of the address operator (&) even when & would be valid syntactically."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/internal/llmtests/eager_input_streaming.go` around lines 42 - 43, Replace the local boolean pointer pattern with the repository convention using bifrost.Ptr: instead of creating a local variable eager and assigning its address to chatTool.EagerInputStreaming, set chatTool.EagerInputStreaming = bifrost.Ptr(true) (consistent with existing use of bifrost.Ptr(200)); update the assignment where chatTool.EagerInputStreaming is set so it directly uses bifrost.Ptr(true).
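For context, the `bifrost.Ptr` convention referenced here is a one-line generic helper. This sketch reimplements it locally under that assumption (the real helper lives in the bifrost package, and the `chatTool` struct is illustrative):

```go
package main

import "fmt"

// Ptr mirrors the cited convention: a generic helper returning a pointer
// to any value, avoiding throwaway local variables.
func Ptr[T any](v T) *T { return &v }

type chatTool struct {
	EagerInputStreaming *bool
	MaxUses             *int
}

func main() {
	t := chatTool{
		EagerInputStreaming: Ptr(true), // instead of: eager := true; t.EagerInputStreaming = &eager
		MaxUses:             Ptr(200),
	}
	fmt.Println(*t.EagerInputStreaming, *t.MaxUses) // true 200
}
```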
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/anthropic/types.go`:
- Around line 909-916: Remove the stale workaround comment about
search_result/source and ExtraParams: the union is now modeled by Source
*AnthropicBlockSource on AnthropicContentBlock, so delete or replace the block
that claims the string-valued "source" has no typed slot and suggests using
ExtraParams; update any remaining explanatory text to state that
AnthropicContentBlock.Source (type *AnthropicBlockSource) handles the
string-or-object union for search_result blocks.
- Around line 94-120: Summary: The feature matrix conflates input_examples with
AdvancedToolUse; add a separate ProviderFeatureSupport.InputExamples bool so
Bedrock's standalone tool-examples header can be represented without enabling
the full advanced-tool-use bundle. Fix: in the ProviderFeatureSupport struct add
a new InputExamples bool field (separate from AdvancedToolUse), update any
references/consumers that check AdvancedToolUse to instead check InputExamples
where only tool examples are required (look for usages of
ProviderFeatureSupport.AdvancedToolUse and ProviderFeatureSupport in downstream
stripping/header logic), and update related documentation/comments to note
Bedrock supports InputExamples but not AdvancedToolUse; apply the same change to
the other feature-matrix declarations referenced alongside
ProviderFeatureSupport.
- Around line 950-962: The AnthropicSource struct is missing a field to hold the
nested "content" used by the "content_block" variant; add a Content field to
AnthropicSource (e.g., Content *json.RawMessage `json:"content,omitempty"`) so
the struct can hold either a string or an array of nested content blocks and
preserve round-trips; update any related code that marshals/unmarshals or
constructs DocumentBlockParam to use AnthropicSource.Content when Type ==
"content_block" and keep the json tag `content` to match the API.
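A minimal sketch of the feature-flag split the second comment proposes, so a provider can support standalone tool examples without the full advanced-tool-use bundle. All names are hypothetical simplifications; the `advanced-tool-use` header version string is an assumption, while `tool-examples-2025-10-29` is the header named elsewhere in this review.

```go
package main

import "fmt"

// providerFeatureSupport is a hypothetical slice of the feature matrix:
// InputExamples is split out from AdvancedToolUse.
type providerFeatureSupport struct {
	AdvancedToolUse bool
	InputExamples   bool
}

// betaHeaderForInputExamples sketches the gating: the full advanced bundle
// wins when supported, the standalone tool-examples header otherwise.
func betaHeaderForInputExamples(f providerFeatureSupport) string {
	switch {
	case f.AdvancedToolUse:
		return "advanced-tool-use-2025-11-20" // version string is an assumption
	case f.InputExamples:
		return "tool-examples-2025-10-29"
	default:
		return "" // provider supports neither; strip input_examples instead
	}
}

func main() {
	// e.g. a Bedrock-like provider: examples only, no advanced bundle.
	bedrockLike := providerFeatureSupport{InputExamples: true}
	fmt.Println(betaHeaderForInputExamples(bedrockLike)) // tool-examples-2025-10-29
}
```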
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 96be28a1-cf13-4956-b9d5-dadb133e0990
📒 Files selected for processing (14)
- core/internal/llmtests/account.go
- core/internal/llmtests/eager_input_streaming.go
- core/internal/llmtests/provider_feature_support_test.go
- core/internal/llmtests/tests.go
- core/providers/anthropic/anthropic_test.go
- core/providers/anthropic/chat.go
- core/providers/anthropic/responses.go
- core/providers/anthropic/types.go
- core/providers/anthropic/utils.go
- core/providers/anthropic/utils_test.go
- core/providers/azure/azure_test.go
- core/providers/bedrock/bedrock_test.go
- core/providers/vertex/vertex_test.go
- core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (2)
- core/providers/azure/azure_test.go
- core/providers/anthropic/utils_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- core/providers/anthropic/responses.go
d56f182 to 485a546
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
core/providers/anthropic/utils.go (2)
288-300: ⚠️ Potential issue | 🟠 Major: Restore standalone `tool-examples-*` handling in the same beta-header path.
`tool.InputExamples` is still effectively bundled under `advanced-tool-use-*`, and the prefix tables still don't know about `AnthropicToolExamplesBetaHeaderPrefix`. That keeps Bedrock on the wrong path: it supports input examples only via the standalone tool-examples beta, so passthrough filtering and auto-injection will still mis-handle requests that carry `input_examples`.
Suggested fix

```diff
-	if len(tool.InputExamples) > 0 {
-		headers = appendUniqueHeader(headers, AnthropicAdvancedToolUseBetaHeader)
-	}
+	if len(tool.InputExamples) > 0 {
+		switch {
+		case !hasProvider || features.AdvancedToolUse:
+			headers = appendUniqueHeader(headers, AnthropicAdvancedToolUseBetaHeader)
+		case features.InputExamples:
+			headers = appendUniqueHeader(headers, AnthropicToolExamplesBetaHeader)
+		}
+	}

 var betaHeaderPrefixKnown = []string{
 	"computer-use-",
 	AnthropicStructuredOutputsBetaHeaderPrefix,
 	AnthropicMCPClientBetaHeaderPrefix,
 	AnthropicPromptCachingScopeBetaHeaderPrefix,
 	"compact-",
 	"context-management-",
 	"files-api-",
 	AnthropicAdvancedToolUseBetaHeaderPrefix,
+	AnthropicToolExamplesBetaHeaderPrefix,
 	AnthropicInterleavedThinkingBetaHeaderPrefix,
 	AnthropicSkillsBetaHeaderPrefix,
 	AnthropicContext1MBetaHeaderPrefix,
 	AnthropicFastModeBetaHeaderPrefix,
 	AnthropicRedactThinkingBetaHeaderPrefix,
 	AnthropicTaskBudgetsBetaHeaderPrefix,
 	AnthropicEagerInputStreamingBetaHeaderPrefix,
 }

 var betaHeaderPrefixToFeature = map[string]func(ProviderFeatureSupport) bool{
 	"computer-use-":                                func(f ProviderFeatureSupport) bool { return f.ComputerUse },
 	AnthropicStructuredOutputsBetaHeaderPrefix:     func(f ProviderFeatureSupport) bool { return f.StructuredOutputs },
 	AnthropicMCPClientBetaHeaderPrefix:             func(f ProviderFeatureSupport) bool { return f.MCP },
 	AnthropicPromptCachingScopeBetaHeaderPrefix:    func(f ProviderFeatureSupport) bool { return f.PromptCachingScope },
 	"compact-":                                     func(f ProviderFeatureSupport) bool { return f.Compaction },
 	"context-management-":                          func(f ProviderFeatureSupport) bool { return f.ContextEditing },
 	"files-api-":                                   func(f ProviderFeatureSupport) bool { return f.FilesAPI },
 	AnthropicAdvancedToolUseBetaHeaderPrefix:       func(f ProviderFeatureSupport) bool { return f.AdvancedToolUse },
+	AnthropicToolExamplesBetaHeaderPrefix:          func(f ProviderFeatureSupport) bool { return f.InputExamples },
 	AnthropicInterleavedThinkingBetaHeaderPrefix:   func(f ProviderFeatureSupport) bool { return f.InterleavedThinking },
 	AnthropicSkillsBetaHeaderPrefix:                func(f ProviderFeatureSupport) bool { return f.Skills },
 	AnthropicContext1MBetaHeaderPrefix:             func(f ProviderFeatureSupport) bool { return f.Context1M },
 	AnthropicFastModeBetaHeaderPrefix:              func(f ProviderFeatureSupport) bool { return f.FastMode },
 	AnthropicRedactThinkingBetaHeaderPrefix:        func(f ProviderFeatureSupport) bool { return f.RedactThinking },
 	AnthropicTaskBudgetsBetaHeaderPrefix:           func(f ProviderFeatureSupport) bool { return f.TaskBudgets },
 	AnthropicEagerInputStreamingBetaHeaderPrefix:   func(f ProviderFeatureSupport) bool { return f.EagerInputStreaming },
 }
```

Based on learnings: Bedrock supports tool input examples via the standalone `tool-examples-2025-10-29` beta header, and `tool.InputExamples` must be gated on `ProviderFeatureSupport.InputExamples`, not `AdvancedToolUse`.
Also applies to: 417-432, 613-629
1169-1240: ⚠️ Potential issue | 🟠 Major: Uploaded-file references are being dropped in the document-block converters.
`ConvertToAnthropicDocumentBlock` and `ConvertResponsesFileBlockToAnthropic` initialize `AnthropicBlockSource{SourceObj: &AnthropicSource{}}` but only populate `Type`, `Data`, `URL`, and `MediaType`. They never check `block.File.FileID` or set `source.type = "file"` with `source.file_id`.
Since `ChatInputFile.FileID` and `ResponsesMessageContentBlock.FileID` now carry uploaded-file identifiers, and `AnthropicSource.FileID` (gated by the `files-api-2025-04-14` beta) is the documented way to reference them, this path silently falls back to incorrect transport (base64/url instead of the efficient file reference).
Add a check for `file.FileID` before the FileURL and FileData blocks to emit `type: "file"` and populate the `file_id` field.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/anthropic/utils.go` around lines 1169 - 1240, ConvertToAnthropicDocumentBlock currently never emits a "file" reference; before the FileURL and FileData branches check for file.FileID and if present set documentBlock.Source.SourceObj.Type = "file" and populate documentBlock.Source.SourceObj.FileID from file.FileID then return the documentBlock. Apply the same fix in ConvertResponsesFileBlockToAnthropic (ensure both functions use AnthropicBlockSource / AnthropicSource and set SourceObj.Type="file" and SourceObj.FileID when file.FileID is present so uploaded-file identifiers are preserved instead of falling back to URL/base64).
🧹 Nitpick comments (1)
core/providers/anthropic/types.go (1)
909-916: Stale workaround comment should be updated. The comment states the `search_result` block's source string "has no typed slot yet" and suggests using `ExtraParams`. However, `Source *AnthropicBlockSource` (line 898) now models the string-or-object union directly: `AnthropicBlockSource.SourceStr` handles the string form for `search_result` blocks. This comment is outdated and may mislead future maintainers.
Consider updating to reflect the current implementation:

```diff
-	// search_result block: the API uses the literal key "source" with a plain
-	// string value, which collides with the existing Source *AnthropicSource
-	// field (object form, used by image/document). Supporting both requires
-	// either (a) a string-or-object union type for Source, or (b) full custom
-	// Marshal/Unmarshal on AnthropicContentBlock. Deferred until we decide the
-	// representation — search_result block enum is present above but its
-	// source string has no typed slot yet. Callers needing it can use
-	// ExtraParams pass-through on the request side in the meantime.
+	// search_result block: uses a plain string "source" value (URL/path).
+	// This is modeled by Source *AnthropicBlockSource which is a union type
+	// supporting both the string form (search_result) and object form
+	// (image/document) under the single "source" JSON key.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/anthropic/types.go` around lines 909 - 916, The comment on AnthropicContentBlock is stale: it claims the search_result "source" string has no typed slot and suggests ExtraParams, but the code now models the string-or-object union via Source *AnthropicBlockSource and AnthropicBlockSource.SourceStr handles the string form; update the comment to reflect that the string form is supported through AnthropicBlockSource.SourceStr (and leave guidance about using Source for object form and SourceStr for literal string form) and remove the misleading note about needing custom Marshal/Unmarshal or using ExtraParams.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 76c33e3e-7191-409e-9b0f-45880f1171ca
📒 Files selected for processing (14)
- core/internal/llmtests/account.go
- core/internal/llmtests/eager_input_streaming.go
- core/internal/llmtests/provider_feature_support_test.go
- core/internal/llmtests/tests.go
- core/providers/anthropic/anthropic_test.go
- core/providers/anthropic/chat.go
- core/providers/anthropic/responses.go
- core/providers/anthropic/types.go
- core/providers/anthropic/utils.go
- core/providers/anthropic/utils_test.go
- core/providers/azure/azure_test.go
- core/providers/bedrock/bedrock_test.go
- core/providers/vertex/vertex_test.go
- core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (2)
- core/internal/llmtests/tests.go
- core/providers/bedrock/bedrock_test.go
🚧 Files skipped from review as they are similar to previous changes (5)
- core/providers/vertex/vertex_test.go
- core/providers/azure/azure_test.go
- core/providers/anthropic/responses.go
- core/schemas/chatcompletions.go
- core/internal/llmtests/eager_input_streaming.go
485a546 to 26a6e7f
0484302 to 9e337c9
26a6e7f to e2a8c35
Actionable comments posted: 1
🧹 Nitpick comments (1)
core/providers/anthropic/chat.go (1)
154-156: Clarify this beta-header comment to match actual behavior. The current wording says header injection happens "when this is set", but injection occurs only when `EagerInputStreaming` is explicitly `true`.
Proposed wording update

```diff
-// Anthropic auto-injects beta header fine-grained-tool-streaming-2025-05-14
-// via AddMissingBetaHeadersToContext when this is set.
+// Anthropic auto-injects beta header fine-grained-tool-streaming-2025-05-14
+// via AddMissingBetaHeadersToContext when this is explicitly true.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/anthropic/chat.go` around lines 154 - 156, Update the comment to accurately state that the beta header "fine-grained-tool-streaming-2025-05-14" is injected by AddMissingBetaHeadersToContext only when ChatTool.EagerInputStreaming is explicitly true (not merely set on the tool); reference ChatTool, EagerInputStreaming, and AddMissingBetaHeadersToContext in the comment so readers understand the exact condition that triggers header injection and mention the header name fine-grained-tool-streaming-2025-05-14.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/internal/llmtests/tests.go`:
- Line 123: The repository defines the config flag ServerToolsViaOpenAIEndpoint
(from account.go) but no test runner; add parity with EagerInputStreaming by
implementing a RunServerToolsViaOpenAIEndpointTest function and wiring it into
the test harness: create the runner (mirroring the pattern used by
RunEagerInputStreamingTest) that checks the ServerToolsViaOpenAIEndpoint config
and exercises the server-tools flow, then add a call to
RunServerToolsViaOpenAIEndpointTest inside RunAllComprehensiveTests so it runs
with other scenario tests; alternatively, if this flag is intentionally
config-only, add an in-code comment in tests.go explaining why no runner exists
instead of adding a test.
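The runner parity described above can be sketched like this; all names (`testConfig`, the runner functions) are hypothetical simplifications of the llmtests harness, shown only to illustrate the wiring pattern the comment asks for.

```go
package main

import "fmt"

// testConfig stands in for the scenario flags declared in account.go.
type testConfig struct {
	EagerInputStreaming          bool
	ServerToolsViaOpenAIEndpoint bool
}

// Each scenario flag gets a runner that no-ops when the flag is off.
func runEagerInputStreamingTest(cfg testConfig) string {
	if !cfg.EagerInputStreaming {
		return "skipped"
	}
	return "ran"
}

func runServerToolsViaOpenAIEndpointTest(cfg testConfig) string {
	if !cfg.ServerToolsViaOpenAIEndpoint {
		return "skipped"
	}
	return "ran"
}

func runAllComprehensiveTests(cfg testConfig) []string {
	return []string{
		runEagerInputStreamingTest(cfg),
		runServerToolsViaOpenAIEndpointTest(cfg), // the parity wiring requested above
	}
}

func main() {
	fmt.Println(runAllComprehensiveTests(testConfig{EagerInputStreaming: true}))
}
```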
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7bb018dc-084d-4b03-9197-90056c7410e6
📒 Files selected for processing (14)
- core/internal/llmtests/account.go
- core/internal/llmtests/eager_input_streaming.go
- core/internal/llmtests/provider_feature_support_test.go
- core/internal/llmtests/tests.go
- core/providers/anthropic/anthropic_test.go
- core/providers/anthropic/chat.go
- core/providers/anthropic/responses.go
- core/providers/anthropic/types.go
- core/providers/anthropic/utils.go
- core/providers/anthropic/utils_test.go
- core/providers/azure/azure_test.go
- core/providers/bedrock/bedrock_test.go
- core/providers/vertex/vertex_test.go
- core/schemas/chatcompletions.go
✅ Files skipped from review due to trivial changes (3)
- core/internal/llmtests/provider_feature_support_test.go
- core/providers/bedrock/bedrock_test.go
- core/internal/llmtests/eager_input_streaming.go
🚧 Files skipped from review as they are similar to previous changes (8)
- core/providers/anthropic/anthropic_test.go
- core/providers/vertex/vertex_test.go
- core/providers/anthropic/responses.go
- core/schemas/chatcompletions.go
- core/providers/azure/azure_test.go
- core/providers/anthropic/utils_test.go
- core/providers/anthropic/utils.go
- core/providers/anthropic/types.go
Merge activity
The base branch was changed.
* fix: delete fallbacks from anthropic req (#2754)
## Summary
Remove the `fallbacks` field from Anthropic provider request bodies to ensure compatibility with the Anthropic API specification.
## Changes
- Added logic to delete the `fallbacks` field from JSON request bodies in the Anthropic provider's `getRequestBodyForResponses` function
- Implemented proper error handling for the field deletion operation with appropriate Bifrost error wrapping
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test Anthropic provider requests to ensure the `fallbacks` field is properly removed and requests succeed:
```sh
# Core/Transports
go version
go test ./...
# Test specific Anthropic provider functionality
go test ./core/providers/anthropic/...
```
Verify that requests to the Anthropic API no longer include the `fallbacks` field and complete successfully.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications - this change only removes an unsupported field from API requests.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* fix: preserve context values in async requests (#2703)
## Summary
Refactors async job execution to pass the full BifrostContext instead of just the virtual key value, enabling proper context preservation for background operations including virtual keys, tracing headers, and other request metadata.
## Changes
- Modified `AsyncJobExecutor.SubmitJob()` to accept `*schemas.BifrostContext` instead of `*string` for virtual key
- Updated `executeJob()` to restore all original request context values in the background goroutine
- Added `getVirtualKeyFromContext()` helper function to extract virtual key from BifrostContext
- Updated all async handler methods to pass BifrostContext directly to `SubmitJob()`
- Removed redundant virtual key extraction logic from HTTP handlers
## Type of change
- [x] Refactor
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
## How to test
Verify async job execution preserves request context properly:
```sh
# Core/Transports
go version
go test ./...
# Test async endpoints with virtual keys and tracing headers
curl -X POST http://localhost:8080/v1/async/chat/completions \
-H "Authorization: Bearer vk_test_key" \
-H "X-Trace-Id: test-trace-123" \
-d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
# Verify job execution maintains context
curl http://localhost:8080/v1/async/jobs/{job_id}
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
Improves security by ensuring proper context isolation and virtual key handling in async operations.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* [fix]: Gemini provider - handle content block tool outputs in Responses API path (#2692)
When function_call_output messages arrive via the Anthropic Responses API
format, their output is an array of content blocks (ResponsesFunctionToolCallOutputBlocks),
not a plain string (ResponsesToolCallOutputStr). The Gemini provider's
convertResponsesMessagesToGeminiContents only checked the string case,
silently dropping all tool result content and sending empty {} responses
to Gemini. This caused the model to loop endlessly retrying tool calls
it never saw results for.
Other providers (Bedrock, OpenAI, Cohere) already handle both output
formats. This aligns the Gemini provider with them.
Affected packages:
- core/providers/gemini/responses.go - Add ResponsesFunctionToolCallOutputBlocks handling
- core/providers/gemini/gemini_test.go - Add test for content block outputs
Co-authored-by: tom <tom@asteroid.ai>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Akshay Deo <akshay@akshaydeo.com>
* fix: gemini thinking level and finish reason round-trip preservation (#2697)
## Summary
Fixes two critical regressions in the Gemini provider's GenAI integration: preserves `thinkingLevel` parameters during round-trip conversions and ensures `MAX_TOKENS` finish reasons survive Bifrost transformations.
## Changes
- **Fixed thinking level preservation**: Modified `convertGenerationConfigToResponsesParameters()` to only set effort from `thinkingLevel` without deriving a `thinkingBudget`, preventing unwanted behavior changes in Gemini 3.x models
- **Enhanced finish reason handling**: Added bidirectional conversion between Gemini and Bifrost finish reasons, prioritizing `StopReason` over `IncompleteDetails` to preserve `MAX_TOKENS` finish reasons
- **Expanded finish reason support**: Added new Gemini finish reason constants for image generation, tool calls, and malformed responses
- **Improved response conversion**: Updated response conversion logic to properly handle error finish reasons and set appropriate status/error fields
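The bidirectional conversion invariant can be sketched as a round trip; the constants below are illustrative stand-ins (the real mapping covers more cases, including image generation, tool calls, and malformed responses):

```go
package main

import "fmt"

// Illustrative subset of the Gemini-to-Bifrost finish-reason mapping.
var geminiToBifrost = map[string]string{
	"STOP":       "stop",
	"MAX_TOKENS": "length",
}

var bifrostToGemini = map[string]string{}

func init() {
	// Invert the forward mapping so conversions are bidirectional.
	for g, b := range geminiToBifrost {
		bifrostToGemini[b] = g
	}
}

// roundTrip sketches the invariant the fix restores: converting a Gemini
// finish reason to Bifrost and back must preserve it, notably MAX_TOKENS.
func roundTrip(g string) string {
	return bifrostToGemini[geminiToBifrost[g]]
}

func main() {
	fmt.Println(roundTrip("MAX_TOKENS"))
}
```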
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Validate the thinking level and finish reason preservation:
```sh
# Run Gemini provider tests
go test ./core/providers/gemini/... -v
# Specifically test the regression fixes
go test ./core/providers/gemini/... -run "TestGenAIThinkingLevel_RoundTripPreservesLevelNotBudget|TestGenAIFinishReasonMaxTokens_PersistsThroughBifrostRoundTrip" -v
```
Test with actual Gemini API calls using thinking levels and verify that:
- `thinkingLevel` parameters are preserved without generating unwanted `thinkingBudget` values
- Responses with `MAX_TOKENS` finish reason maintain that status through the conversion pipeline
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
Addresses regressions in GenAI path where thinking configuration and finish reasons were being incorrectly transformed during Bifrost conversions.
## Security considerations
No security implications - this change only affects internal data structure conversions and doesn't modify authentication, secrets handling, or data exposure.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* fix: remove cc user agent guard from streaming in anthropic (#2706)
## Summary
Fixes WebSearch tool argument handling for all clients by removing the Claude Code user agent restriction. Previously, only Claude Code clients received proper WebSearch query arguments in the streaming response, while other clients lost the query data due to skipped argument deltas.
## Changes
- Removed the `IsClaudeCodeRequest(ctx)` check that was restricting WebSearch argument sanitization and synthetic delta generation to only Claude Code clients
- WebSearch tool arguments are now sanitized and synthetic `input_json_delta` events are generated for all clients during `output_item.done` events
- Added comprehensive test coverage for the WebSearch tool flow including argument delta skipping, synthetic delta generation, and full end-to-end streaming scenarios
- Enhanced code comments to clarify the WebSearch tool handling logic
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Validate the WebSearch tool behavior with the new test suite:
```sh
# Run the new WebSearch tests
go test ./core/providers/anthropic -run TestWebSearch -v
# Run all provider tests to ensure no regressions
go test ./core/providers/anthropic/...
# Full test suite
go test ./...
```
Test with different user agents to verify WebSearch queries are properly streamed to all clients, not just Claude Code.
## Screenshots/Recordings
N/A - This is a backend streaming API fix.
## Breaking changes
- [ ] Yes
- [x] No
This change expands functionality to previously broken clients without affecting existing working behavior.
## Related issues
Fixes WebSearch tool argument streaming for non-Claude Code clients.
## Security considerations
The change maintains existing argument sanitization for WebSearch tools while expanding it to all clients, preserving the same security posture.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* remove unnecessary marshalling of payload (#2770)
## Summary
Optimized JSON parsing in the Anthropic integration by replacing full JSON unmarshaling with targeted field extraction using gjson for retrieving the "type" field from streaming responses.
## Changes
- Replaced `sonic.Unmarshal()` with `gjson.Get()` to extract only the "type" field from Anthropic stream events
- Eliminated the need to unmarshal the entire JSON response into an `AnthropicStreamEvent` struct
- Improved performance by avoiding unnecessary JSON parsing overhead
## Type of change
- [x] Refactor
- [ ] Bug fix
- [ ] Feature
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test streaming responses from the Anthropic integration to ensure the type field is correctly extracted:
```sh
# Core/Transports
go version
go test ./...
# Test specifically the Anthropic integration
go test ./transports/bifrost-http/integrations/
```
## Screenshots/Recordings
N/A
## Breaking changes
- [x] No
- [ ] Yes
## Related issues
N/A
## Security considerations
No security implications - this is a performance optimization that maintains the same functionality.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* feat: claude opus 4.7 compatibility (#2773)
## Summary
Adds support for Claude Opus 4.7 model with specific parameter handling and reasoning configuration changes. Opus 4.7 rejects temperature, top_p, and top_k parameters and only supports adaptive thinking mode without budget tokens.
## Changes
- Added `IsOpus47()` function to detect Claude Opus 4.7 models
- Modified parameter handling to skip temperature, top_p, and top_k for Opus 4.7 models
- Updated reasoning configuration to use adaptive thinking only for Opus 4.7 (no budget_tokens)
- Added support for `display` parameter in thinking configuration to control output visibility
- Extended adaptive thinking support to include Sonnet 4.6 models
- Added task budget support with new beta header `task-budgets-2026-03-13`
- Updated effort mapping to handle Opus 4.7's "xhigh" effort level
- Added comprehensive test coverage for Opus 4.7 specific behaviors
- Fixed OpenAI Responses handling to filter out the Anthropic-specific `summary: "none"` parameter
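The parameter gating can be sketched as below. `isOpus47` is an illustrative version of the `IsOpus47()` detection (the real matcher may be stricter about model-name formats):

```go
package main

import (
	"fmt"
	"strings"
)

// isOpus47 is a hypothetical stand-in for the real model detection.
func isOpus47(model string) bool {
	return strings.Contains(model, "opus-4-7") || strings.Contains(model, "opus-4.7")
}

// buildSampling sketches the gating: Opus 4.7 rejects temperature, top_p,
// and top_k, so those keys are simply skipped for it.
func buildSampling(model string, temperature float64) map[string]any {
	params := map[string]any{}
	if !isOpus47(model) {
		params["temperature"] = temperature
	}
	return params
}

func main() {
	fmt.Println(buildSampling("claude-opus-4-7", 0.7))
	fmt.Println(buildSampling("claude-sonnet-4-5", 0.7))
}
```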
## Type of change
- [x] Feature
- [x] Bug fix
## Affected areas
- [x] Core (Go)
- [x] Providers/Integrations
## How to test
Validate the changes with the following tests:
```sh
# Core/Transports
go version
go test ./core/providers/anthropic/...
# Specific test cases for Opus 4.7
go test -run TestToAnthropicChatRequest_Opus47 ./core/providers/anthropic/
go test -run TestSupportsAdaptiveThinking ./core/providers/anthropic/
go test -run TestAddMissingBetaHeadersToContext_TaskBudgets ./core/providers/anthropic/
```
Test with Claude Opus 4.7 model requests to ensure:
- Temperature, top_p, top_k parameters are stripped
- Reasoning uses adaptive thinking without budget_tokens
- Task budget beta headers are properly added
## Breaking changes
- [ ] Yes
- [x] No
The changes maintain backward compatibility while adding new model support.
## Security considerations
No security implications. Changes only affect parameter handling and model-specific configurations for Anthropic's Claude models.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* docs: restructure helm guide into comprehensive multi-page reference (#2776)
* docs: restructure helm guide into comprehensive multi-page reference (#2771)
## Summary
Restructures the Helm deployment documentation into a comprehensive multi-page guide with dedicated sections for each configuration area. The main Helm page now provides quickstart instructions for both OSS and Enterprise deployments, while detailed configuration is split into focused sub-pages.
## Changes
- **Restructured main Helm page**: Condensed from 740+ lines to 103 lines with clear quickstart tabs for OSS vs Enterprise
- **Added 8 new dedicated configuration pages**:
- `values.mdx` - Complete values reference with examples and common patterns
- `client.mdx` - Client configuration (pool size, logging, CORS, auth, compat shims)
- `providers.mdx` - Provider setup for all 23+ supported LLM providers with cloud-native auth
- `storage.mdx` - Storage backends (SQLite, PostgreSQL, object storage, vector stores)
- `plugins.mdx` - Plugin configuration (telemetry, logging, semantic cache, OTel, Datadog)
- `governance.mdx` - Governance setup (budgets, rate limits, virtual keys, routing rules)
- `cluster.mdx` - Multi-replica HA with gossip-based peer discovery
- `troubleshooting.mdx` - Common issues and diagnostic commands
- **Updated chart version**: Bumped from 1.5.0 to 2.1.0
- **Enhanced navigation**: Added nested Helm section in docs.json with proper icons and organization
## Type of change
- [x] Documentation
## Affected areas
- [x] Docs
## How to test
Navigate through the new Helm documentation structure:
1. Visit the main Helm page for quickstart instructions
2. Follow the quickstart for either OSS or Enterprise deployment
3. Use the sub-pages for detailed configuration of specific areas
4. Verify all internal links work correctly
5. Test the troubleshooting commands on a real deployment
The documentation now provides both quick-start paths and comprehensive reference material for production deployments.
## Screenshots/Recordings
N/A - Documentation changes only
## Breaking changes
- [ ] Yes
- [x] No
This is purely a documentation restructure with no functional changes to the Helm chart itself.
## Related issues
Improves Helm documentation organization and usability for both new users and production deployments.
## Security considerations
The new documentation emphasizes security best practices:
- Kubernetes Secrets for all sensitive values
- Cloud-native authentication (IRSA, Workload Identity, Managed Identity)
- Proper RBAC setup for cluster mode
- Compliance considerations (HIPAA, PCI) for content logging
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* docs: update guardrails docs
* v1.4.23 cut (#2778)
## Summary
Release version 1.4.20-1.4.23 with bug fixes for provider integrations, streaming error handling, and migration test improvements. This release addresses critical issues in Gemini, Bedrock, and Anthropic providers while adding support for Claude Opus 4.7.
## Changes
- **Provider Fixes**: Fixed Gemini tool outputs handling, Bedrock streaming events, and image content preservation in tool results
- **Streaming Improvements**: Added proper error capture for Responses streaming API to prevent silent failures
- **Migration Tests**: Added support for v1.4.22 governance model pricing flex tier columns in both PostgreSQL and SQLite migration tests
- **Anthropic Enhancements**: Removed fallback fields from outgoing requests and added Claude Opus 4.7 compatibility
- **Framework Fixes**: Improved async context propagation and custom provider model validation
- **Plugin Updates**: Enhanced OTEL metrics and configuration defaults
## Type of change
- [x] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Validate the migration test changes and provider fixes:
```sh
# Test migration scripts
./.github/workflows/scripts/run-migration-tests.sh
# Core/Transports
go version
go test ./...
# Test provider integrations
go test ./transports/...
go test ./plugins/...
```
Test the new governance model pricing columns are properly handled in migration scenarios.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
Addresses multiple provider integration issues and streaming API error handling improvements.
## Security considerations
No security implications - changes are focused on bug fixes and migration test improvements.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* validator fix (#2780)
## Summary
Enhanced GitHub Actions security by transitioning from audit-only to strict network egress control using step-security/harden-runner. This change blocks all outbound network traffic by default and explicitly allows only required endpoints for each workflow.
## Changes
- Changed `egress-policy` from `audit` to `block` across all GitHub Actions workflows
- Added comprehensive `allowed-endpoints` lists for each job, specifying only the necessary external services
- Updated step names from "Harden the runner (Audit all outbound calls)" to "Harden Runner" for consistency
- Fixed schema validation script to use correct JSON paths for concurrency and SCIM configuration validation
- Reformatted JSON schema file for improved readability (whitespace and formatting changes only)
## Type of change
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs
## How to test
Verify that all GitHub Actions workflows continue to function properly with the new network restrictions:
```sh
# Trigger workflows by pushing to a branch or creating a PR
git push origin feature-branch
# Monitor workflow runs in GitHub Actions tab to ensure:
# - All jobs complete successfully
# - No network connectivity errors occur
# - All required external services remain accessible
```
Key endpoints that should remain accessible include:
- GitHub API and release assets
- Package registries (npm, PyPI, Go modules)
- Docker registries
- Cloud storage services
- External APIs used by tests and integrations
## Screenshots/Recordings
N/A - Infrastructure/CI changes only
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
This change significantly improves security posture by:
- Preventing unauthorized outbound network connections from CI runners
- Creating an explicit allowlist of required external services
- Reducing attack surface for supply chain attacks
- Providing better visibility into network dependencies
The transition from audit to block mode ensures that any new network dependencies must be explicitly approved and documented.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* fix: token usage for vllm --skip-pipeline (#2784)
## Summary
Fixed token usage attribution for vLLM by treating empty-string content the same as nil content in streaming responses. vLLM sends `delta.content=""` (instead of `delta: null`) in finish_reason chunks, which was being forwarded and causing the synthesis chunk to lose its finish_reason, breaking usage attribution in logs and UI.
## Changes
- Modified streaming content handling to check for both nil and empty string content before processing chunks
- This prevents empty content deltas from being forwarded, ensuring finish_reason is preserved for proper token usage tracking
- Removed extraneous whitespace and formatting inconsistencies throughout the OpenAI provider code
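The nil-or-empty check can be sketched as follows; the `delta` type here is a stand-in for the relevant slice of the OpenAI provider's streaming chunk struct:

```go
package main

import "fmt"

type delta struct {
	Content *string
}

// hasForwardableContent sketches the fix: treat delta.content == "" the
// same as a nil delta, so vLLM's finish_reason chunks are not forwarded
// as empty content deltas that overwrite the synthesis chunk.
func hasForwardableContent(d delta) bool {
	return d.Content != nil && *d.Content != ""
}

func main() {
	empty := ""
	text := "hello"
	fmt.Println(hasForwardableContent(delta{Content: nil}))
	fmt.Println(hasForwardableContent(delta{Content: &empty}))
	fmt.Println(hasForwardableContent(delta{Content: &text}))
}
```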
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test with vLLM provider to ensure token usage is properly attributed:
```sh
# Core/Transports
go version
go test ./...
# Test streaming chat completion with vLLM
# Verify that finish_reason is preserved in final chunks
# Check that token usage appears correctly in logs/UI
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
Fixes token usage tracking issues with vLLM provider.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* [fix]: OpenAI provider - flatten array-form tool_result output for Responses API (#2781) --skip-pipeline
When Anthropic tool_result blocks arrive with array-form content (the
standard shape for multi-turn tool exchanges), the OpenAI provider's
MarshalJSON emitted the output as a JSON array on the wire. The OpenAI
Responses API defines function_call_output.output as a string — strict
upstreams (Ollama Cloud, openai-go typed models) reject the array form
with HTTP 400.
Fix: before marshaling, collapse text-only
ResponsesFunctionToolCallOutputBlocks into a newline-joined string.
Non-text blocks (images, files) are left as-is. The schema type is
unchanged; the transformation lives in the OpenAI provider's outbound
marshaler only.
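The flattening step can be sketched like this; `outputBlock` is a stand-in for a `ResponsesFunctionToolCallOutputBlocks` element, not the real schema type:

```go
package main

import (
	"fmt"
	"strings"
)

type outputBlock struct {
	Type string
	Text string
}

// flattenIfTextOnly sketches the marshaler fix: when every block is text,
// collapse the array into one newline-joined string (the shape strict
// Responses API upstreams require); otherwise leave the blocks untouched.
func flattenIfTextOnly(blocks []outputBlock) (string, bool) {
	parts := make([]string, 0, len(blocks))
	for _, b := range blocks {
		if b.Type != "text" {
			return "", false
		}
		parts = append(parts, b.Text)
	}
	return strings.Join(parts, "\n"), true
}

func main() {
	s, ok := flattenIfTextOnly([]outputBlock{{Type: "text", Text: "a"}, {Type: "text", Text: "b"}})
	fmt.Println(s, ok)
	_, ok = flattenIfTextOnly([]outputBlock{{Type: "image"}})
	fmt.Println(ok)
}
```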
Closes #2779
Affected packages:
- core/providers/openai/types.go - Flatten text-only output blocks to string
- core/providers/openai/responses_marshal_test.go - Three regression tests
- core/changelog.md - Changelog entry
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: prevent send on closed channel panic in provider queue shutdown --skip-pipeline (#2725)
## Summary
Fixes a race condition in provider queue shutdown that caused "send on closed channel" panics in production. The issue occurred when producers passed the `isClosing()` check but then attempted to send to a queue that was closed before they reached the select statement.
## Changes
- **Removed queue channel closure**: Queue channels are never closed to prevent "send on closed channel" panics
- **Updated worker exit mechanism**: Workers now exit via the `done` channel signal instead of waiting for queue closure
- **Enhanced shutdown handling**: Workers drain remaining buffered requests and send shutdown errors when `done` is signaled
- **Added producer re-routing**: Stale producers can transparently re-route to new queues during `UpdateProvider`
- **Improved error handling**: Added rollback logic for failed provider updates with proper cleanup
- **Enhanced transfer logic**: Buffered requests are transferred before signaling shutdown to ensure they reach new workers
- **Added comprehensive tests**: Race condition demonstration and validation of the fix across multiple scenarios
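The shutdown scheme above can be sketched in miniature. This is a simplified, single-goroutine illustration of the pattern, not the actual worker code:

```go
package main

import "fmt"

// worker sketches the fix: the queue channel is never closed; instead the
// worker exits when done is signalled, draining any buffered requests
// first, so producers can never hit a "send on closed channel" panic.
func worker(queue chan int, done chan struct{}, handled *[]int) {
	for {
		select {
		case <-done:
			// Drain what is already buffered before exiting.
			for {
				select {
				case req := <-queue:
					*handled = append(*handled, req)
				default:
					return
				}
			}
		case req := <-queue:
			*handled = append(*handled, req)
		}
	}
}

func main() {
	queue := make(chan int, 4)
	done := make(chan struct{})
	queue <- 1
	queue <- 2
	close(done) // signal shutdown; the queue itself is never closed
	var handled []int
	worker(queue, done, &handled)
	fmt.Println(handled)
}
```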
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Run the new race condition test to verify the fix:
```sh
go test -run TestProviderQueue_SendOnClosedChannel_Race ./core -v
```
Run the comprehensive provider lifecycle tests:
```sh
go test -run TestProviderQueue ./core -v
go test -run TestUpdateProvider ./core -v
go test -run TestRemoveProvider ./core -v
```
Run the full test suite to ensure no regressions:
```sh
go test ./...
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
Fixes production panics related to concurrent provider queue operations during shutdown/updates.
## Security considerations
None - this is an internal concurrency fix that doesn't affect external interfaces or data handling.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: preserve MCP tool annotations in bidirectional conversion --skip-pipeline (#2746)
## Summary
Adds support for preserving MCP tool annotations when converting between MCP tools and Bifrost schemas. This enables MCP servers to provide behavioral hints (read-only, destructive, idempotent, open-world) that help agents make better reasoning decisions about tool usage.
## Changes
- Added `MCPToolAnnotations` struct to capture MCP spec hints including title, read-only, destructive, idempotent, and open-world indicators
- Modified `convertMCPToolToBifrostSchema` to preserve MCP tool annotations when converting from MCP tools to Bifrost chat tools
- Updated `ChatToolFunction` to include optional annotations field
- Enhanced MCP server sync logic to map Bifrost annotations back to MCP tool annotations for bidirectional compatibility
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test with an MCP server that provides tool annotations to verify they are preserved through the conversion process:
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
Verify that MCP tools with annotations maintain their behavioral hints when converted to Bifrost schemas and back.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications - this change only preserves metadata hints that help with tool behavior classification.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* fix: add support for Anthropic structured output and response format (#1972)
* fix: add support for Anthropic structured output and response format conversion
* fix: refactor output configuration setting in ToBedrockResponsesRequest
* run go fmt on responses.go
* fix: streamline response format conversion for Anthropic models
* fix: enhance merging of additional model request fields and output configuration
* fix: remove koanf/maps dependency and replace its usage with internal merge function
* preserve order in output_config
* update type casting
* add non-anthropic test-case
* check for output_config first
* diversify anthropic output formats
* move bifrost ctx update
* guard tested field
* guard format.jsonschema
* test fixes --skip-pipeline (#2782)
## Summary
Updates test configurations to align with current API specifications and replaces deprecated utility function usage.
## Changes
- Replaced `schemas.Ptr("test")` with `new("test")` in Anthropic chat test for string pointer creation
- Updated MCP client configuration tests to use `sse` connection type instead of `websocket` with simplified `connection_string` field
- Modified HTTP MCP client config to use `connection_string` instead of nested `http_config` object
- Changed OpenTelemetry plugin tests to use `genai_extension` trace type instead of `otel`
## Type of change
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
## How to test
Validate that all tests pass with the updated configurations:
```sh
# Core/Transports
go version
go test ./...
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications - these are test configuration updates only.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* anthropic container changes --skip-pipeline (#2783)
## Summary
Adds Anthropic container union types, expands content-block and tool/result schemas, introduces per-tool eager input streaming with beta-header handling, and converts content sources to object form.
## Changes
- Added `AnthropicContainer` union types with null-safe unmarshaling
- Introduced per-tool `EagerInputStreaming` with full beta-header plumbing (constant, prefix, provider feature flags, auto-injection, filter registration, neutral schema field, and conversion in `ToAnthropicChatRequest`)
- Converted content-block sources to object form; added unit and E2E tests plus a streaming test harness integration
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Describe the steps to validate this change. Include commands and expected outcomes.
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
If adding new configs or environment variables, document them here.
## Screenshots/Recordings
If UI changes, add before/after screenshots or short clips.
## Breaking changes
- [ ] Yes
- [ ] No
If yes, describe impact and migration instructions.
## Related issues
Link related issues and discussions. Example: Closes #123
## Security considerations
Note any security implications (auth, secrets, PII, sandboxing, etc.).
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* core schema changes --skip-pipeline (#2787)
## Summary
Promotes Anthropic-native parameters to the neutral ChatParameters layer, enabling direct access to advanced Anthropic features like containers, MCP servers, task budgets, and enhanced tool configurations without requiring ExtraParams.
## Changes
- Added neutral fields to `ChatParameters` for Anthropic-specific features: `TopK`, `Speed`, `InferenceGeo`, `MCPServers`, `Container`, `CacheControl`, `TaskBudget`, and `ContextManagement`
- Enhanced `ChatTool` with Anthropic tool flags: `DeferLoading`, `AllowedCallers`, `InputExamples`, and `EagerInputStreaming`
- Added `Display` field to `ChatReasoning` for Anthropic adaptive thinking control
- Implemented `StripUnsupportedAnthropicFields` function to remove unsupported features based on provider capabilities
- Updated parameter mapping logic to prefer neutral fields over ExtraParams with fallback support
- Added comprehensive JSON marshaling/unmarshaling for union types like `ChatContainer`
The design maintains backward compatibility by falling back to ExtraParams when neutral fields are not set, while providing type-safe access to advanced Anthropic features.
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Validate the new parameter handling and provider feature gating:
```sh
# Core/Transports
go version
go test ./...
# Test Anthropic provider parameter mapping
go test ./core/providers/anthropic/...
# Verify schema validation
go test ./core/schemas/...
```
Test with requests containing the new neutral fields to ensure proper mapping to Anthropic API format and appropriate stripping for unsupported providers.
## Screenshots/Recordings
N/A - Backend API changes only.
## Breaking changes
- [ ] Yes
- [x] No
This change is fully backward compatible. Existing ExtraParams usage continues to work, while new neutral fields provide enhanced type safety.
## Related issues
N/A
## Security considerations
The new MCP server configuration includes authorization tokens. Ensure proper handling of sensitive credentials in the `ChatMCPServer.AuthorizationToken` field.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* dependabot fixes --skip-pipeline (#2788)
## Summary
This PR adds the Hono web framework as a direct dependency to all MCP server examples and updates various dependencies across the project to their latest versions.
## Changes
- Added `hono@^4.12.14` as a direct dependency to all MCP server examples (edge-case-server, error-test-server, parallel-test-server, temperature, test-tools-server)
- Upgraded Hono from version 4.11.4 to 4.12.14 and changed it from a peer dependency to a direct dependency
- Updated Python dependencies including authlib (1.6.6 → 1.6.11), langchain-core (1.2.28 → 1.2.31), langchain-openai (1.1.4 → 1.1.14), langchain-text-splitters (1.1.0 → 1.1.2), langsmith (0.5.0 → 0.7.32), openai (2.13.0 → 2.32.0), and python-multipart (0.0.20 → 0.0.26)
- Updated TypeScript dependencies including langsmith (0.5.18 → 0.5.19) and added it as a direct dependency
- Added `github.com/tidwall/gjson v1.18.0` as a direct dependency in Go transports module
- Updated UI dependencies including dompurify (3.3.3 → 3.4.0) and follow-redirects (1.15.11 → 1.16.0) via package overrides
## Type of change
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] UI (Next.js)
## How to test
Validate that all dependencies are properly installed and examples still function:
```sh
# Test MCP server examples
cd examples/mcps/temperature
npm install
npm run build
# Test Go transports
cd transports
go mod tidy
go test ./...
# Test Python integrations
cd tests/integrations/python
uv sync
uv run python -m pytest
# Test TypeScript integrations
cd tests/integrations/typescript
npm install
npm test
# Test UI
cd ui
pnpm install
pnpm test
pnpm build
```
## Screenshots/Recordings
N/A - dependency updates only
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
The dependency updates include security patches, particularly for dompurify and follow-redirects which are explicitly overridden in the UI package.json for security reasons.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* move back go to 1.26.1 (#2792)
## Summary
Downgrade Go version from 1.26.2 to 1.26.1 across all GitHub Actions workflows, Go modules, and Docker images to address compatibility issues.
## Changes
- Downgraded Go version from 1.26.2 to 1.26.1 in all GitHub Actions workflows (e2e-tests, pr-tests, release-cli, release-pipeline, snyk)
- Updated go.mod files for core, CLI, examples, and test modules to use Go 1.26.1
- Updated Docker base images in transports/Dockerfile and transports/Dockerfile.local to use golang:1.26.1-alpine3.23
- Added stream cancellation safety improvements with guarded channel sends and finalizer protection to prevent goroutine leaks when clients disconnect
- Enhanced stream error checking with context cancellation support to properly drain upstream channels
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Validate the Go version downgrade and streaming improvements:
```sh
# Verify Go version
go version
# Core/Transports
go test ./...
# Test streaming endpoints with client disconnection scenarios
# to verify proper cleanup and no goroutine leaks
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
The streaming improvements enhance resource cleanup and prevent potential goroutine leaks when clients disconnect unexpectedly, improving overall system stability.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* temp gotoolchain auto (#2809)
* temp hack for tests (#2810)
## Summary
The Go workspace setup script was not specifying a `go` directive or toolchain version, which caused `GOTOOLCHAIN=auto` to select a Go version lower than what `core@v1.4.19` requires. This adds an explicit `go 1.26.2` and `toolchain go1.26.2` directive to the workspace so the correct toolchain is used automatically.
## Changes
- Added `go work edit -go=1.26.2 -toolchain=go1.26.2` to `setup-go-workspace.sh` so that `GOTOOLCHAIN=auto` selects Go >= 1.26.2, satisfying the minimum version required by the published `core@v1.4.19` module referenced in `transports/go.mod`.
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [ ] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
# Verify the workspace is initialized with the correct Go version
bash .github/workflows/scripts/setup-go-workspace.sh
grep -E "^go |^toolchain" go.work
# Expected output:
# go 1.26.2
# toolchain go1.26.2
go test ./...
```
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
## Security considerations
None.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* temp block docker build (#2811)
## Summary
Temporarily disables the `test-docker-image-amd64` and `test-docker-image-arm64` CI jobs in the release pipeline by commenting them out.
## Changes
- Both Docker image test jobs (`test-docker-image-amd64` and `test-docker-image-arm64`) have been commented out rather than removed, preserving the full job definitions for easy re-enablement later.
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
No functional code changes. Verify the release pipeline runs without executing the Docker image test jobs.
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* removed docker build steps (#2812)
## Summary
The `test-docker-image-amd64` and `test-docker-image-arm64` CI jobs have been removed from the release pipeline. These jobs were already commented out and non-functional, and all references to them as dependencies and gate conditions in downstream release jobs have been cleaned up.
## Changes
- Deleted the commented-out `test-docker-image-amd64` and `test-docker-image-arm64` job definitions from the release pipeline.
- Removed `test-docker-image-amd64` and `test-docker-image-arm64` from the `needs` arrays of `core-release`, `framework-release`, `plugins-release`, `bifrost-http-release`, the Docker build/push jobs, the manifest job, and the final notification job.
- Removed the corresponding result checks for those two jobs from all `if` conditions in the affected release jobs.
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Trigger the release pipeline and confirm that all release jobs proceed without waiting on or referencing the removed Docker image test jobs.
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
## Security considerations
None.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* moves tests to 1.26.2 and 1.26.1 (#2813)
## Summary
Bumps the Go version used across all release pipeline jobs from `1.26.1` to `1.26.2` to keep the CI environment on the latest patch release.
## Changes
- Updated Go version from `1.26.1` to `1.26.2` in all `setup-go` steps within the release pipeline workflow.
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
The release pipeline will use the updated Go version on the next run. No additional manual steps are required beyond verifying the CI pipeline passes.
```sh
go version
go test ./...
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
Patch releases often include security and bug fixes. Staying on the latest patch version reduces exposure to known vulnerabilities in the Go toolchain.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* ocr test fixes (#2814)
## Summary
Adds an operation-allowed check for OCR requests before they are dispatched to a provider, and fixes the Mistral provider to return its custom provider name when one is configured.
## Changes
- Added a `CheckOperationAllowed` guard for `OCRRequest` in `handleProviderRequest`, consistent with how other request types are gated. If the operation is not permitted, a `BifrostError` is returned with the provider key, request type, and requested model populated.
- Updated `MistralProvider.GetProviderKey()` to use `providerUtils.GetProviderName` so that custom provider configurations are respected, rather than always returning the hardcoded `schemas.Mistral` value.
## Type of change
- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
go version
go test ./...
```
- Configure a custom provider wrapping Mistral and verify that `GetProviderKey()` returns the custom provider name rather than `mistral`.
- Attempt an OCR request against a provider where the operation is not allowed and confirm a `BifrostError` is returned with the correct `Provider`, `RequestType`, and `ModelRequested` fields set.
- Attempt an OCR request against a provider where the operation is allowed and confirm the request proceeds normally.
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
## Security considerations
None.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* revert to old schema (#2815)
## Summary
This PR simplifies and consolidates the `config.schema.json` by removing several features, collapsing provider-specific schema variants, and restructuring key configuration definitions to reduce complexity and align with updated runtime semantics.
## Changes
- Removed the top-level `version` field that controlled allow-list semantics for empty arrays
- Removed the `compat` plugin configuration block (including `convert_text_to_chat`, `convert_chat_to_responses`, `should_drop_params`, `should_convert_params`)
- Replaced `compat` with a simpler `enable_litellm_fallbacks` boolean for Groq text completion fallbacks
- Removed `mcp_disable_auto_tool_inject` and `routing_chain_max_depth` from server config
- Collapsed `provider_with_ollama_config`, `provider_with_sgl_config`, and `provider_with_replicate_config` into the generic `provider` definition; removed their corresponding key types (`ollama_key`, `sgl_key`, `replicate_key`) and `network_config_without_base_url`
- Removed providers `nebius`, `xai`, and `runway` from the providers block
- Moved `calendar_aligned` from `virtual_key` to the `budget` object; removed `virtual_key_id` and `provider_config_id` from budget in favor of a standalone `budget_id` reference on virtual keys
- Removed `chain_rule` from routing rules and relaxed the `scope_id` conditional requirement
- Simplified `virtual_key_provider_config` to inline key definitions with full provider-specific key configs (Azure, Vertex, Bedrock, VLLM), replacing the separate `key_ids` and `keys` split
- Removed `mcp_client_name` and `allow_on_all_virtual_keys` from MCP configs; removed `allowed_extra_headers` and `disable_auto_tool_inject` from MCP client config
- Added `websocket` as a supported MCP connection type with a dedicated `websocket_config` block; removed `inprocess` connection type
- Removed `per_user_oauth` as an MCP auth type and dropped the conditional `oauth_config_id` requirement
- Renamed `concurrency_and_buffer_size` to `concurrency_config`; renamed `retry_backoff_initial`/`retry_backoff_max` to `retry_backoff_initial_ms`/`retry_backoff_max_ms`; removed `enforce_http2` and `openai_config` from network config
- Moved `pricing_overrides` from the top-level config into individual provider definitions
- Simplified `provider_pricing_override` schema, removing scoped fields (`scope_kind`, `virtual_key_id`, `provider_id`, `provider_key_id`) and replacing `pattern` with `model_pattern`; added `regex` as a valid `match_type`; expanded supported `request_types`
- Renamed `scim_config` to `saml_config` in the top-level schema
- Removed `apiToken` from Okta config and made `clientSecret` optional; updated required fields to only `issuerUrl` and `clientId`
- Removed `object_storage` and `retention_days` from the logs store config
- Removed `id` and `description` fields from provider config entries in the `provider_configs` array
- Removed `websocket_responses` and `realtime` from `custom_provider_config` allowed requests; removed the enum constraint on `base_provider_type`
- Removed `disable_auto_tool_inject` from `mcp_client_config` VFS settings
- Added `deployments` mapping to `azure_key_config` and `vertex_key_config`
- Updated `otel` plugin `trace_type` to only accept `"otel"` (removed `genai_extension`, `vercel`, `open_inference`)
- Removed `prompts` from the built-in plugin name list
- Removed `builtin` as a valid plugin `placement` value
- Changed `cluster_config` discovery `dial_timeout` from a Go duration string to an integer (nanoseconds)
- Reformatted many inline `required` arrays to multi-line style for readability
## Type of change
- [ ] Bug fix
- [ ] Feature
- [x] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Validate existing configs against the updated schema to confirm they parse correctly. Verify that configs using removed fields (`version`, `compat`, `mcp_disable_auto_tool_inject`, `chain_rule`, etc.) are rejected by the schema validator.
```sh
go test ./...
```
Confirm that provider configs for Ollama, SGL, and Replicate continue to work using the generic `provider` definition. Confirm MCP clients using `websocket` connection type validate correctly with a `websocket_config` block.
## Breaking changes
- [x] Yes
- [ ] No
The following fields have been removed and configs using them will fail schema validation:
- `version` (top-level)
- `compat` block under server config
- `mcp_disable_auto_tool_inject` and `routing_chain_max_depth` under server config
- `chain_rule` on routing rules
- `calendar_aligned` on virtual keys (now on budgets)
- `virtual_key_id` / `provider_config_id` on budgets
- `apiToken` on Okta config (now optional `clientSecret` only)
- `object_storage` and `retention_days` on logs store
- `id`, `description` on provider config entries
- `allow_on_all_virtual_keys`, `allowed_extra_headers`, `disable_auto_tool_inject` on MCP client config
- `inprocess` MCP connection type and `per_user_oauth` auth type
- `enforce_http2` and `openai_config` from network config
- `builtin` plugin placement value; `prompts` built-in plugin name
- `nebius`, `xai`, `runway` provider entries
Migrate by removing or replacing these fields according to the updated schema definitions.
## Related issues
## Security considerations
Removal of `per_user_oauth` as an MCP auth type should be reviewed to ensure no active integrations depend on it. The relaxed `scope_id` requirement on routing rules should be validated to confirm it does not inadvertently broaden access scope.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* reduced release pipeline for this cut for go downgrade (#2816)
## Summary
This PR removes all test jobs from the release pipeline and decouples them from the release gate conditions, allowing releases to proceed without waiting for (often flaky) provider API test results. It also significantly expands and restructures `config.schema.json` to reflect new features, provider support, and breaking semantic changes introduced in v1.5.0.
## Changes
- **Release pipeline**: Removed `test-core`, `approve-flaky-test-core`, `test-framework`, `test-plugins`, `test-bifrost-http`, `test-migrations`, and `test-e2e-ui` jobs entirely from `release-pipeline.yml`. All release jobs (`core-release`, `framework-release`, `plugins-release`, `bifrost-http-prep`, `docker-build-amd64`, `docker-build-arm64`, `push-mintlify-changelog`) now depend only on change detection and upstream release jobs, not on test outcomes.
- **Schema: deny-by-default semantics (v1.5.0)**: Empty arrays in `provider_configs`, `mcp_configs`, `allowed_models`, `key_ids`, and `tools_to_execute` now mean "deny all" rather than "allow all". Use `["*"]` to allow all. A top-level `version` field (enum `1` or `2`, default `2`) controls which semantic applies, with `1` restoring v1.4.x behavior.
- **Schema: new providers**: Added `nebius`, `xai`, and `runway` as first-class provider entries.
- **Schema: provider key restructuring**: Replaced the inline key object definition in `virtual_key_provider_config` with a flat `key_ids` string array. Introduced dedicated key types `ollama_key`, `sgl_key`, and `replicate_key` with their own `_key_config` blocks. Removed `deployments` from `azure_key_config` and `vertex_key_config` (replaced by `aliases` on `base_key`). Added `aliases` to `base_key` for model-to-deployment/inference-profile mappings.
- **Schema: provider variants**: `ollama` and `sgl` now reference `provider_with_ollama_config` and `provider_with_sgl_config` respectively, which use `network_config_without_base_url` (URL is per-key). `replicate` references `provider_with_replicate_config`. Added `openai_config` def with `disable_store` for the Responses API. Renamed `concurrency_config` to `concurrency_and_buffer_size`.
- **Schema: network config**: Split `network_config` into `network_config` (with `base_url`) and `network_config_without_base_url`. Added `enforce_http2`, `stream_idle_timeout_in_seconds`, `max_conns_per_host`, `beta_header_overrides`, and `ca_cert_pem` fields. Renamed `retry_backoff_initial_ms`/`retry_backoff_max_ms` to `retry_backoff_initial`/`retry_backoff_max`.
- **Schema: MCP changes**: Removed `websocket` connection type; added `inprocess`. Added `per_user_oauth` auth type. Added `mcp_client_name` for config-file resolution. Added `allowed_extra_headers` and `allow_on_all_virtual_keys` to `mcp_client_config`. Added `disable_auto_tool_inject` to MCP plugin config. Added global `mcp_disable_auto_tool_inject` and `routing_chain_max_depth` to server config.
- **Schema: routing rules**: Added `chain_rule` boolean to `routing_rule`. Made `scope_id` required (non-null string) when `scope` is `team`, `customer`, or `virtual_key`.
- **Schema: budgets**: Moved `calendar_aligned` from the budget object to the virtual key level. Replaced `budget_id` on virtual key with `virtual_key_id`/`provider_config_id` on the budget object itself. Removed `budget_id` from `virtual_key_provider_config`.
- **Schema: logs store**: Added `object_storage` (S3/GCS) and `retention_days` to the logs store config.
- **Schema: pricing overrides**: Moved `pricing_overrides` from per-provider to a top-level array with scoped `provider_pricing_override` objects supporting `scope_kind`, `virtual_key_id`, `provider_id`, `provider_key_id`, `match_type`, `pattern`, `request_types`, and `pricing_patch`.
- **Schema: compat plugin**: Replaced `enable_litellm_fallbacks` with a structured `compat` object supporting `convert_text_to_chat`, `convert_chat_to_responses`, `should_drop_params`, and `should_convert_params`.
- **Schema: OTEL plugin**: Expanded `trace_type` enum to `genai_extension`, `vercel`, `open_inference` (was only `otel`).
- **Schema: SCIM**: Renamed `saml_config` to `scim_config`. Added `apiToken` to `okta_config` and made `clientSecret` and `apiToken` required. Changed cluster `dial_timeout` from integer (nanoseconds) to Go duration string.
- **Schema: misc**: Added `prompts` and `builtin` to plugin name/placement enums. Added `provider_configs` fields `id`, `description`, `network_config`, `proxy_config`, `custom_provider_config`, `concurrency_and_buffer_size`, and `openai_config`. Added `scim_config` top-level ref. Normalized multi-item `required` arrays to single-line format throughout.
## Type of change
- [ ] Bug fix
- [x] Feature
- [x] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
# Validate schema against existing configs
npx ajv validate -s transports/config.schema.json -d your-config.json
# Verify release pipeline runs without test gate
# Push a tagged commit and confirm release jobs trigger directly after detect-changes
```
If upgrading from v1.4.x, set `"version": 1` in your config to preserve allow-all semantics for empty arrays, or migrate empty arrays to `["*"]` and adopt v2 deny-by-default semantics.
## Breaking changes
- [x] Yes
- [ ] No
**Empty arrays in `allowed_models`, `key_ids`, `tools_to_execute`, `provider_configs`, and `mcp_configs` now deny all access by default (v2 semantics).** To allow all, use `["*"]`. To restore v1.4.x behavior, set `"version": 1` at the top level of your config.
- `enable_litellm_fallbacks` has been removed; replace it with the `compat` object.
- `saml_config` has been renamed to `scim_config`.
- `budget_id` has been removed from virtual keys and `virtual_key_provider_config`.
- `calendar_aligned` has moved from the budget object to the virtual key.
- `deployments` has been removed from `azure_key_config` and `vertex_key_config`; use `aliases` on the key instead.
- `retry_backoff_initial_ms`/`retry_backoff_max_ms` have been renamed to `retry_backoff_initial`/`retry_backoff_max`.
- The `websocket` MCP connection type has been removed; use `http` or `sse`.
- Okta SCIM config now requires `clientSecret` and `apiToken`.
## Related issues
N/A
## Security considerations
The `insecure_skip_verify` and `ca_cert_pem` fields on `network_config` expose TLS bypass options; these should only be used in controlled environments. The `per_user_oauth` auth type for MCP introduces per-user credential flows that require careful OAuth config management. Removal of test gates from the release pipeline means regressions from flaky provider APIs will no longer block releases, but also means real failures could ship if not caught by other means.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* force version back to go 1.26.1 (#2817)
## Summary
Bumps `core` to v1.4.21 and updates `transports` to depend on `core` v1.4.20, while removing a now-unnecessary workspace Go directive workaround that was previously required to satisfy the toolchain constraint introduced by `core` v1.4.19.
## Changes
- Incremented `core` version from `1.4.20` to `1.4.21`
- Updated `transports/go.mod` to reference `core` v1.4.20 (previously v1.4.19)
- Removed the `go work edit -go=1.26.2 -toolchain=go1.26.2` workaround from the workspace setup script, which was only needed to satisfy the toolchain requirement imposed by the published `core` v1.4.19
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
go work sync
go test ./...
```
Verify the workspace initializes without the explicit Go/toolchain directive and that all modules resolve correctly.
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
## Security considerations
None.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* revert everything to go1.26.1 (#2818)
## Summary
Bumps the core version to `1.4.22` and rolls back dependency versions across the framework, plugins, and transports to align with a prior stable set of releases. This resolves a version inconsistency introduced by forward-referencing newer module versions that were not yet intended to be consumed by downstream packages.
## Changes
- Incremented `core/version` from `1.4.21` to `1.4.22`
- Downgraded `bifrost/core` from `v1.4.19` → `v1.4.17` across `framework`, `governance`, `jsonparser`, `litellmcompat`, `logging`, `maxim`, `mocker`, `otel`, `semanticcache`, and `telemetry` plugins
- Downgraded `bifrost/framework` from `v1.2.38` → `v1.2.36` (or `v1.2.35` for `semanticcache`) across all dependent plugins
- Downgraded `bifrost/core` in `transports` from `v1.4.20` → `v1.4.19`
- Downgraded all plugin versions referenced in `transports` (governance, litellmcompat, logging, maxim, otel, semanticcache, telemetry) to their corresponding prior releases
- Downgraded `go.opentelemetry.io/otel/sdk` and `go.opentelemetry.io/otel/sdk/metric` from `v1.43.0` → `v1.40.0` in affected plugins
- Bumped Go toolchain version in `transports/go.mod` from `1.26.1` to `1.26.2`
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
go test ./...
```
Verify that all modules resolve correctly with the pinned dependency versions and that no import errors occur during build.
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
None. These are internal module version adjustments with no changes to auth, secrets, or data handling.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* bumped up hello-world dep (#2819)
## Summary
Pins the `bifrost/core` dependency in the example plugin modules to a consistent released version (`v1.4.17`), removing a local `replace` directive that was pointing to the local `core` module path.
## Changes
- Replaced the local `replace` directive in `hello-world-wasm-go/go.mod` with a direct reference to `github.com/maximhq/bifrost/core v1.4.17`
- Downgraded `hello-world/go.mod` from `v1.4.19` to `v1.4.17` to align both example plugins on the same released version
## Type of change
- [ ] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [x] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
cd examples/plugins/hello-world-wasm-go
go mod tidy
go build ./...
cd examples/plugins/hello-world
go mod tidy
go build ./...
```
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
## Security considerations
No security implications. This change only affects dependency resolution for example plugin modules.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* framework: bump core to v1.4.22 --skip-pipeline
* plugins/governance: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/jsonparser: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/litellmcompat: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/logging: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/maxim: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/mocker: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/otel: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/semanticcache: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* plugins/telemetry: bump core to v1.4.22 and framework to v1.2.39 --skip-pipeline
* enforce go 1.26.1 (#2820)
* transports: update dependencies --skip-pipeline
* Adds changelog for v1.4.23 --skip-pipeline
* V1.5.0 (#2245)
* refactor: standardize empty array conventions for VK Provider & MCP Configs, and makes Provider Config weight optional for routing (#1932)
## Summary
Changes Virtual Key provider and MCP configurations from "allow-all by default" to "deny-by-default" security model. Virtual Keys now require explicit provider and MCP client configurations to allow access, improving security posture.
## Changes
- **Provider Configs**: Empty `provider_configs` now blocks all providers instead of allowing all
- **MCP Configs**: Empty `mcp_configs` now blocks all MCP tools instead of allowing all
- **Weight Field**: Changed provider `weight` from required `float64` to optional `*float64` - null weight excludes provider from weighted routin…
