feat: VK MCP config now works as an AllowList #1940
Conversation
📝 Summary by CodeRabbit

Walkthrough

Switch MCP APIs from context.Context to *schemas.BifrostContext, record MCP-added tool names in the Bifrost context, and add governance controls: virtual-key MCP configs enforced as execution-time allow-lists, plus a DisableAutoToolInject toggle that conditions auto-injection.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Transport as Transport
    participant Governance as Governance
    participant MCP as MCPManager
    participant ToolRuntime as ToolRuntime
    Client->>Transport: send request (headers/body)
    Transport->>Transport: build *schemas.BifrostContext (NoDeadline)
    Transport->>Governance: HTTPTransportPreHook (check headers, DisableAutoToolInject)
    Governance-->>Transport: pre-hook decision (allow or skip auto-inject)
    Transport->>MCP: GetAvailableMCPTools(*schemas.BifrostContext)
    MCP->>MCP: enumerate clients, dedupe, mark MCP-added tool names in BifrostContext
    Transport->>MCP: ParseAndAddToolsToRequest(*schemas.BifrostContext, req)
    MCP->>ToolRuntime: ExecuteToolCall(*schemas.BifrostContext, tool)
    ToolRuntime-->>MCP: tool result
    MCP-->>Transport: aggregated tools/results or DecisionMCPToolBlocked
    Transport-->>Client: response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/mcp/toolmanager.go`:
- Around line 197-203: In GetAvailableTools, avoid dereferencing
tool.Function.Name before nil checks and make tool suppression deterministic:
first check tool.Function != nil && tool.Function.Name != "" and only then call
schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools,
tool.Function.Name); next, replace the current global gating on
includeCodeModeTools with a per-tool check (e.g., skip adding the tool only if
!includeCodeModeTools && tool.IsCodeMode or equivalent flag on the tool) and
then use seenToolNames[tool.Function.Name] to deduplicate and append to
availableTools; update references to clientTools, includeCodeModeTools,
seenToolNames, availableTools, and schemas.AppendToContextList accordingly.
In `@plugins/governance/main.go`:
- Around line 1202-1228: The current blind single-tool check using
p.store.GetVirtualKey(virtualKeyValue) lets execution proceed (return req, nil,
nil) if the virtual key is missing/inactive even though
evaluateGovernanceRequest already ran, which creates a race that can allow
unauthorized tool execution; change this to fail-closed: when GetVirtualKey
returns !ok || vk == nil || !vk.IsActive, return an MCPPluginShortCircuit
BifrostError (DecisionMCPToolBlocked, 403) denying execution, and keep the
existing handling for len(vk.MCPConfigs) == 0 and isMCPToolAllowedByVK(vk,
toolName) as-is so empty configs or disallowed tools also return the same
short-circuit error.
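The fail-closed shape described above can be sketched in Go. The `VirtualKey` type, its field names, and the error wording here are illustrative stand-ins for the plugin's real store types, not the actual API:

```go
package main

import "fmt"

// VirtualKey is a minimal stand-in for the store's virtual-key record;
// the field names are assumptions for illustration only.
type VirtualKey struct {
	IsActive   bool
	MCPConfigs []string
}

// checkMCPExecution denies tool execution unless an active virtual key
// explicitly allows the tool: a missing or inactive key fails closed,
// and empty configs or disallowed tools return the same denial.
func checkMCPExecution(vk *VirtualKey, ok bool, toolName string, allowed func(*VirtualKey, string) bool) error {
	if !ok || vk == nil || !vk.IsActive {
		return fmt.Errorf("mcp tool %q blocked: virtual key missing or inactive (403)", toolName)
	}
	if len(vk.MCPConfigs) == 0 {
		return fmt.Errorf("mcp tool %q blocked: no MCP configs on virtual key (403)", toolName)
	}
	if !allowed(vk, toolName) {
		return fmt.Errorf("mcp tool %q blocked: not in allow-list (403)", toolName)
	}
	return nil
}

func main() {
	allowAll := func(vk *VirtualKey, tool string) bool { return true }
	// Missing key: denied even though the allow func would pass.
	fmt.Println(checkMCPExecution(nil, false, "github-create-issue", allowAll))
	// Active key with configs and a passing allow-list check: permitted.
	vk := &VirtualKey{IsActive: true, MCPConfigs: []string{"github"}}
	fmt.Println(checkMCPExecution(vk, true, "github-create-issue", allowAll))
}
```

The key design point is the ordering: the absence checks run before any allow-list logic, so a race that evicts the key between evaluation and execution still resolves to a denial.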
In `@transports/changelog.md`:
- Around line 7-8: Remove the duplicated changelog entries by keeping only one
instance of the "feat: add option to disable automatic MCP tool injection per
request" entry and one instance of the "fix: preserve original audio filename in
transcription requests" entry; locate the duplicate lines in
transports/changelog.md (the exact strings above) and delete the repeated
occurrences so each change appears only once in the file.
In `@ui/app/workspace/config/views/mcpView.tsx`:
- Line 185: Update the explanatory sentence in the MCP view text (the JSX string
in mcpView.tsx where the paragraph about header-based tool inclusion is
rendered) to explicitly state that using the x-bf-mcp-include-tools header does
not bypass Virtual Key (VK) MCP execution-time allow-lists; tool injection via
the header is still subject to VK MCP allow-list checks and may be blocked by
them. Locate the paragraph around the existing wording "When enabled, MCP tools
are not automatically included..." and replace it with a clarified sentence that
mentions both the header and the VK MCP allow-lists (keep the existing header
code element <code className="text-xs">x-bf-mcp-include-tools</code> intact).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e14b7c87-859a-40e2-95da-57c4be9301a8
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
Actionable comments posted: 1
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
193-203: ⚠️ Potential issue | 🟠 Major: Fix nondeterministic tool availability and incorrect `MCPAddedTools` tracking in `GetAvailableTools`.

At line 200, the regular-tool append is globally gated by `includeCodeModeTools`, so once any code-mode client is encountered, regular tools may be dropped depending on map iteration order. At line 199, `BifrostContextKeyMCPAddedTools` is updated before confirming the tool was actually added. Also, code-mode tools appended at lines 212-217 are not tracked.

💡 Proposed fix

```diff
 for clientName, clientTools := range availableToolsPerClient {
 	client := m.clientManager.GetClientByName(clientName)
 	if client == nil {
 		m.logger.Warn("%s Client %s not found, skipping", MCPLogPrefix, clientName)
 		continue
 	}
-	if client.ExecutionConfig.IsCodeModeClient {
+	isCodeModeClient := client.ExecutionConfig.IsCodeModeClient
+	if isCodeModeClient {
 		includeCodeModeTools = true
+		// Code mode tools are injected from m.codeMode.GetTools() below.
+		continue
 	}
 	// Add tools from this client, checking for duplicates
 	for _, tool := range clientTools {
-		if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
-			schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-			if !includeCodeModeTools {
-				availableTools = append(availableTools, tool)
-				seenToolNames[tool.Function.Name] = true
-			}
-		}
+		if tool.Function == nil || tool.Function.Name == "" {
+			continue
+		}
+		if seenToolNames[tool.Function.Name] {
+			continue
+		}
+		availableTools = append(availableTools, tool)
+		seenToolNames[tool.Function.Name] = true
+		schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
 	}
 }
@@
 if includeCodeModeTools && m.codeMode != nil {
 	codeModeTools := m.codeMode.GetTools()
 	// Add code mode tools, checking for duplicates
 	for _, tool := range codeModeTools {
 		if tool.Function != nil && tool.Function.Name != "" {
 			if !seenToolNames[tool.Function.Name] {
 				availableTools = append(availableTools, tool)
 				seenToolNames[tool.Function.Name] = true
+				schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
 			}
 		}
 	}
 }
```

As per coding guidelines for core/mcp/*.go: "MCP tool access filtering follows a 4-level hierarchy: Global filter → Client-level filter → Tool-level filter → Per-request filter (HTTP headers). All four levels must agree for a tool to be available."

Also applies to: 212-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 193 - 203, The GetAvailableTools loop currently gates adding regular tools via a global includeCodeModeTools flag and updates schemas.BifrostContextKeyMCPAddedTools before confirming addition, which causes nondeterministic drops and missed tracking; update the logic inside the clientTools iteration (referencing client.ExecutionConfig.IsCodeModeClient, includeCodeModeTools, seenToolNames, availableTools, and schemas.AppendToContextList) so that: evaluate per-tool whether code-mode allows adding it (do not let a single client set includeCodeModeTools globally to veto other tools), only call schemas.AppendToContextList when the tool is actually appended to availableTools or otherwise accepted, mark seenToolNames only upon successful addition, and ensure the same tracking is applied for code-mode tool additions (the code-path that appends code-mode tools must also update seenToolNames and MCPAddedTools).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/governance/main.go`:
- Around line 405-407: The header presence check currently uses a case-sensitive
map lookup on req.Headers (e.g., _, headerPresent :=
req.Headers["x-bf-mcp-include-tools"]) which can miss differently-cased incoming
headers and cause unintended injection; change the logic in the handler that
decides whether to call p.addMCPIncludeTools (and the similar check around lines
412-415) to perform a case-insensitive lookup — either normalize keys with
http.CanonicalHeaderKey or iterate req.Headers and use strings.EqualFold to
detect "x-bf-mcp-include-tools" — and only call addMCPIncludeTools(nil,
virtualKey) when no matching header is found ignoring case.
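The case-insensitive presence check the comment asks for can be sketched as a standalone helper. The `headerPresent` name and the `map[string]string` header shape are illustrative assumptions, not the plugin's actual types:

```go
package main

import (
	"fmt"
	"strings"
)

// headerPresent reports whether name exists in headers, ignoring case.
// It tests key presence rather than non-empty value, so an explicitly
// empty "x-bf-mcp-include-tools:" header still counts as present and
// suppresses auto-injection.
func headerPresent(headers map[string]string, name string) bool {
	for k := range headers {
		if strings.EqualFold(k, name) {
			return true
		}
	}
	return false
}

func main() {
	headers := map[string]string{"X-BF-MCP-Include-Tools": ""}
	fmt.Println(headerPresent(headers, "x-bf-mcp-include-tools")) // true: found despite casing, despite empty value
	fmt.Println(headerPresent(headers, "x-bf-vk"))                // false
}
```

An alternative is normalizing keys with `http.CanonicalHeaderKey` at ingestion time, which avoids the linear scan on every lookup.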
🚧 Files skipped from review as they are similar to previous changes (6)
- ui/app/workspace/config/views/mcpView.tsx
- core/mcp/mcp.go
- plugins/governance/resolver.go
- core/bifrost.go
- core/changelog.md
- plugins/governance/changelog.md
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
core/bifrost.go (1)
3257-3261: ⚠️ Potential issue | 🟠 Major: Normalize `nil` context before delegating to the MCP manager.

Public APIs in this file consistently normalize `nil` contexts to `bifrost.ctx` (e.g., lines 387–389, 409–411, 683–685). `GetAvailableMCPTools` breaks that pattern by forwarding `nil` directly to the MCP manager. While downstream code is safe from panics due to defensive nil checks on `ctx.Value()`, a nil context will silently skip all request-scoped MCP filtering and tracking, defeating the governance purpose of the new `*schemas.BifrostContext` contract.

Suggested fix

```diff
 func (bifrost *Bifrost) GetAvailableMCPTools(ctx *schemas.BifrostContext) []schemas.ChatTool {
 	if bifrost.MCPManager == nil {
 		return nil
 	}
+	if ctx == nil {
+		ctx = bifrost.ctx
+	}
 	return bifrost.MCPManager.GetAvailableTools(ctx)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/bifrost.go` around lines 3257 - 3261, GetAvailableMCPTools currently forwards a nil *schemas.BifrostContext to MCPManager.GetAvailableTools which bypasses request-scoped filtering; update GetAvailableMCPTools to normalize a nil ctx to bifrost.ctx before delegating (i.e., if ctx == nil { ctx = bifrost.ctx }) so that MCPManager.GetAvailableTools always receives a valid context and request-scoped MCP filtering/tracking is enforced; locate this logic inside the GetAvailableMCPTools method and adjust the variable passed to MCPManager.GetAvailableTools accordingly.

plugins/governance/main.go (1)
467-479: ⚠️ Potential issue | 🟠 Major: Mirror the auto-inject guard in large-payload mode.

This path still always calls `addMCPIncludeTools`, so streamed/large-body requests ignore both `disable_auto_tool_inject` and a caller-supplied `x-bf-mcp-include-tools` header. That makes governance behavior depend on which request path handled the body.

Proposed fix

```diff
-	// MCP tool headers — header-only, no body needed
-	req.Headers, err = p.addMCPIncludeTools(req.Headers, virtualKey)
-	if err != nil {
-		p.logger.Error("failed to add MCP include tools: %v", err)
-		return nil, nil
-	}
+	// MCP tool headers — header-only, no body needed
+	p.cfgMutex.RLock()
+	autoInjectDisabled := p.disableAutoToolInject != nil && *p.disableAutoToolInject
+	p.cfgMutex.RUnlock()
+	headerPresent := false
+	for h := range req.Headers {
+		if strings.EqualFold(h, "x-bf-mcp-include-tools") {
+			headerPresent = true
+			break
+		}
+	}
+	if !autoInjectDisabled && !headerPresent {
+		req.Headers, err = p.addMCPIncludeTools(req.Headers, virtualKey)
+		if err != nil {
+			p.logger.Error("failed to add MCP include tools: %v", err)
+			return nil, nil
+		}
+	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/governance/main.go` around lines 467 - 479, The code path handling virtualKey calls addMCPIncludeTools unconditionally, which bypasses the same auto-inject guard used for large/streamed payloads; update the block around loadBalanceProvider and addMCPIncludeTools to first check the auto-inject guard (the disable_auto_tool_inject flag and presence of the x-bf-mcp-include-tools header on req.Headers) and only call p.addMCPIncludeTools when auto-injection is allowed, mirroring the existing logic used elsewhere so that addMCPIncludeTools, req.Headers modifications, and logging respect the same conditions; use the same variables/functions (virtualKey, loadBalanceProvider, addMCPIncludeTools, req.Headers, disable_auto_tool_inject, x-bf-mcp-include-tools) to locate and apply the change.
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
193-202: ⚠️ Potential issue | 🔴 Critical: Don't let code-mode clients suppress unrelated MCP tools.

Once any code-mode client flips `includeCodeModeTools` to `true`, every later regular tool hits `if !includeCodeModeTools` and gets dropped, so the returned tool set depends on map iteration order. `AppendToContextList` also runs before the append, so the context can record tools that were never actually exposed, which can falsely trigger the new governance check.

Proposed fix

```diff
 if client.ExecutionConfig.IsCodeModeClient {
 	includeCodeModeTools = true
+	continue // code mode tools are added below from m.codeMode.GetTools()
 }
-// Add tools from this client, checking for duplicates
 for _, tool := range clientTools {
-	if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
-		schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-		if !includeCodeModeTools {
-			availableTools = append(availableTools, tool)
-			seenToolNames[tool.Function.Name] = true
-		}
-	}
+	if tool.Function == nil || tool.Function.Name == "" || seenToolNames[tool.Function.Name] {
+		continue
+	}
+	availableTools = append(availableTools, tool)
+	seenToolNames[tool.Function.Name] = true
+	schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 193 - 202, The current logic uses a global includeCodeModeTools flag so once any client sets client.ExecutionConfig.IsCodeModeClient true all subsequent non-code tools are skipped and AppendToContextList runs for tools that may not be added; fix by treating tools per-client: for each tool from clientTools, check if this client is a code-mode client (client.ExecutionConfig.IsCodeModeClient) and only allow adding code-mode tools when includeCodeModeTools is true, otherwise always allow non-code tools; move the schemas.AppendToContextList call to run only when you actually append to availableTools and set seenToolNames[tool.Function.Name] only upon successful append; reference functions/vars: client.ExecutionConfig.IsCodeModeClient, includeCodeModeTools, clientTools loop, availableTools, seenToolNames, and schemas.AppendToContextList/BifrostContextKeyMCPAddedTools.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/features/governance/mcp-tools.mdx`:
- Around line 23-24: The docs incorrectly refer to the config key as
"mcp_disable_auto_tool_inject"; update the text and any notes to use the correct
key "disable_auto_tool_inject" (replace occurrences of
mcp_disable_auto_tool_inject with disable_auto_tool_inject) so the documentation
matches the actual config/json tag used by the plugin and update any surrounding
explanatory text as needed (references: the lines mentioning
x-bf-mcp-include-tools and the note that discusses auto-injection).
In `@plugins/governance/main.go`:
- Around line 405-410: The current check uses
req.CaseInsensitiveHeaderLookup("x-bf-mcp-include-tools") != "" which treats an
explicitly-present empty header as missing; change the logic to detect header
presence (key exists) rather than non-empty value so an explicit empty header
blocks injection. Locate the block around p.cfgMutex, p.disableAutoToolInject
and replace the headerPresent calculation to test for the header key’s existence
(e.g., check the request header map for "x-bf-mcp-include-tools") so that an
explicit header with an empty value is considered present and prevents
reinjection.
- Around line 1089-1110: The function isMCPToolAllowedByVK currently uses
strings.HasPrefix to match client prefixes which allows shorter client names to
incorrectly match longer client names; change the matching logic in
isMCPToolAllowedByVK to first parse the toolPattern into a client segment and an
optional tool suffix (e.g., split on the first '-' or find the first '-') and
then compare that parsed client segment for exact equality with
mcpConfig.MCPClient.Name; for the wildcard case treat pattern client+"-*" by
checking the parsed client equals mcpConfig.MCPClient.Name and then verifying
ToolsToExecute is non-empty, and for specific tools check ToolsToExecute for "*"
or the parsed suffix instead of using strings.HasPrefix.
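The exact-segment matching described above can be sketched with `strings.Cut`. The `isToolAllowed` signature, the `clientName`/`toolsToExecute` parameters standing in for `mcpConfig.MCPClient.Name` and `ToolsToExecute`, and the assumption that client names contain no hyphens are all illustrative, not the plugin's real API:

```go
package main

import (
	"fmt"
	"strings"
)

// isToolAllowed matches a "client-tool" pattern against one MCP config
// entry. Splitting on the first '-' and comparing the client segment for
// exact equality prevents a config for client "git" from accidentally
// matching tools of client "github" (the strings.HasPrefix bug).
func isToolAllowed(toolPattern, clientName string, toolsToExecute []string) bool {
	client, tool, found := strings.Cut(toolPattern, "-")
	if !found {
		client, tool = toolPattern, ""
	}
	if client != clientName { // exact match, not a prefix match
		return false
	}
	for _, allowed := range toolsToExecute {
		if allowed == "*" || allowed == tool {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isToolAllowed("github-create-issue", "github", []string{"*"}))            // true
	fmt.Println(isToolAllowed("github-create-issue", "git", []string{"*"}))               // false: "git" != "github"
	fmt.Println(isToolAllowed("github-create-issue", "github", []string{"create-issue"})) // true
}
```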
🚧 Files skipped from review as they are similar to previous changes (7)
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
- plugins/governance/changelog.md
- transports/changelog.md
- transports/bifrost-http/server/plugins.go
- core/changelog.md
- transports/bifrost-http/server/server.go
Actionable comments posted: 1
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
193-202: ⚠️ Potential issue | 🔴 Critical: Do not let a code-mode client suppress later regular tools.

Line 194 flips a global flag, so after iteration hits any code-mode client, every later regular client is skipped. Because this loop ranges over a map, that makes `availableTools` nondeterministic. Line 199 also records those skipped tools in `BifrostContextKeyMCPAddedTools`, so plugins/governance/main.go can reject requests for tools that were never actually injected.

💡 Proposed fix

```diff
 for clientName, clientTools := range availableToolsPerClient {
 	client := m.clientManager.GetClientByName(clientName)
 	if client == nil {
 		m.logger.Warn("%s Client %s not found, skipping", MCPLogPrefix, clientName)
 		continue
 	}
 	if client.ExecutionConfig.IsCodeModeClient {
 		includeCodeModeTools = true
+		// Code mode tools are sourced from m.codeMode.GetTools() below.
+		continue
 	}
 	// Add tools from this client, checking for duplicates
 	for _, tool := range clientTools {
-		if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
-			schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-			if !includeCodeModeTools {
-				availableTools = append(availableTools, tool)
-				seenToolNames[tool.Function.Name] = true
-			}
-		}
+		if tool.Function == nil || tool.Function.Name == "" || seenToolNames[tool.Function.Name] {
+			continue
+		}
+		availableTools = append(availableTools, tool)
+		seenToolNames[tool.Function.Name] = true
+		schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 193 - 202, The loop currently flips a global includeCodeModeTools when it sees a code-mode client, causing later regular clients to be skipped and their names still appended to BifrostContextKeyMCPAddedTools; change the logic so the "code-mode" decision is local to the current client iteration (e.g., compute a local bool like isThisClientCodeMode := client.ExecutionConfig.IsCodeModeClient) and use that to decide whether to append to availableTools, set seenToolNames[tool.Function.Name], and call schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, ...); do not mutate a global includeCodeModeTools that affects subsequent clients and only record a tool in BifrostContextKeyMCPAddedTools when it is actually added to availableTools.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/governance/main.go`:
- Around line 490-492: The large-payload path is always rewriting the
x-bf-mcp-include-tools header by calling p.addMCPIncludeTools(req.Headers,
virtualKey) even when auto-inject should be disabled; update that path to first
check the same guard used elsewhere (DisableAutoToolInject and whether the
caller provided a specific x-bf-mcp-include-tools value) and only call
p.addMCPIncludeTools when auto-inject is enabled and the header is not already
explicitly set, or alternatively make addMCPIncludeTools respect the
DisableAutoToolInject flag; ensure you reference and preserve req.Headers,
virtualKey, the DisableAutoToolInject flag, and the x-bf-mcp-include-tools
header when implementing the guard.
🚧 Files skipped from review as they are similar to previous changes (9)
- transports/bifrost-http/server/plugins.go
- plugins/governance/changelog.md
- core/changelog.md
- ui/components/sidebar.tsx
- docs/features/governance/mcp-tools.mdx
- core/bifrost.go
- core/mcp/mcp.go
- core/schemas/bifrost.go
- transports/changelog.md
Actionable comments posted: 2
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
193-203: ⚠️ Potential issue | 🟠 Major: Keep MCP discovery deterministic and only record tools that are actually injected.

Because `availableToolsPerClient` is a map, setting `includeCodeModeTools` here makes later regular tools disappear based on iteration order. Also, `BifrostContextKeyMCPAddedTools` is updated during discovery, so a read-only availability call marks tools as "added", duplicates skipped by `duplicateCheckMap` are still recorded, and code-mode tools appended in the later block are never tracked. In this stack that context key feeds MCP governance, so the bookkeeping can drift from the final request payload.

💡 Suggested direction

```diff
@@
-	if client.ExecutionConfig.IsCodeModeClient {
-		includeCodeModeTools = true
-	}
-	// Add tools from this client, checking for duplicates
-	for _, tool := range clientTools {
-		if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
-			schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-			if !includeCodeModeTools {
-				availableTools = append(availableTools, tool)
-				seenToolNames[tool.Function.Name] = true
-			}
-		}
-	}
+	if client.ExecutionConfig.IsCodeModeClient {
+		includeCodeModeTools = true
+		continue
+	}
+	for _, tool := range clientTools {
+		if tool.Function == nil || tool.Function.Name == "" || seenToolNames[tool.Function.Name] {
+			continue
+		}
+		availableTools = append(availableTools, tool)
+		seenToolNames[tool.Function.Name] = true
+	}
@@
-	if !duplicateCheckMap[toolName] {
-		tools = append(tools, mcpTool)
-		// Update the map to prevent duplicates within MCP tools as well
-		duplicateCheckMap[toolName] = true
-	}
+	if !duplicateCheckMap[toolName] {
+		tools = append(tools, mcpTool)
+		duplicateCheckMap[toolName] = true
+		schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, toolName)
+	}
@@
-	if !duplicateCheckMap[toolName] {
+	if !duplicateCheckMap[toolName] {
 		responsesTool := mcpTool.ToResponsesTool()
 		// Skip if the converted tool has nil Name
 		if responsesTool.Name == nil {
 			continue
 		}
 		tools = append(tools, *responsesTool)
-		// Update the map to prevent duplicates within MCP tools as well
 		duplicateCheckMap[toolName] = true
+		schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, toolName)
 	}
```

Also applies to: 208-218, 337-349, 387-396
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 193 - 203, The discovery mutates includeCodeModeTools and updates BifrostContextKeyMCPAddedTools during iteration over the map availableToolsPerClient which makes discovery order-dependent and records tools that may never be injected; fix by iterating clients in a deterministic order (e.g., sort the map keys), compute per-client includeCodeModeTools without changing global state for other clients (use a local variable scoped to each client loop), only call schemas.AppendToContextList(BifrostContextKeyMCPAddedTools, ...) at the moment you actually append the tool to availableTools (and when duplicateCheckMap indicates it's newly injected), and ensure code-mode tools are recorded the same way when they are finally injected (so duplicateCheckMap/seenToolNames logic and the code-mode branch both update the context consistently).
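The deterministic-iteration part of that fix is a standard Go idiom: sort the map keys before ranging. A minimal sketch, with `sortedKeys` and the sample data as hypothetical names:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns the map's keys in a stable order, so tool discovery
// does not depend on Go's randomized map iteration order.
func sortedKeys[T any](m map[string]T) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	toolsPerClient := map[string][]string{
		"github": {"create-issue"},
		"fs":     {"read"},
	}
	// Always visits "fs" before "github", run after run.
	for _, name := range sortedKeys(toolsPerClient) {
		fmt.Println(name, toolsPerClient[name])
	}
}
```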
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@plugins/governance/changelog.md`:
- Line 1: Changelog uses the transport/client config key
mcp_disable_auto_tool_inject but the governance plugin exposes it as
disable_auto_tool_inject; update the changelog entry in
plugins/governance/changelog.md to reference the plugin config key
disable_auto_tool_inject (and optionally note the transport mapping
mcp_disable_auto_tool_inject) so readers are pointed at the correct public
surface; look for the summary line mentioning mcp_disable_auto_tool_inject and
replace it with disable_auto_tool_inject while keeping the rest of the
description intact.
In `@plugins/governance/main.go`:
- Around line 155-159: The plugin must stop holding a live *bool pointer from
external config; instead snapshot the value into a plugin-owned boolean and
update it only via the plugin's reload/setter path. Replace uses of the shared
pointer (e.g. disableAutoToolInject stored on GovernancePlugin and any similar
fields at lines 267-271) with a plain bool field on GovernancePlugin, initialize
it by copying the dereferenced value under p.cfgMutex (or from the config
snapshot), and provide a dedicated update method invoked by
ReloadClientConfigFromConfigStore (or the config reload flow) to change that
bool under the plugin's mutex; remove any code that directly dereferences
s.Config.ClientConfig.DisableAutoToolInject at request time so concurrent
rewrites cannot race.
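The race the comment describes can be avoided with the pattern it suggests; a minimal sketch, assuming hypothetical names (`GovernancePlugin`, `ReloadClientConfig`, `AutoToolInjectDisabled`) that mirror but do not reproduce the plugin's real API:

```go
package main

import (
	"fmt"
	"sync"
)

// GovernancePlugin owns a plain bool guarded by a mutex, snapshotted from
// config at load/reload time, instead of holding a live *bool into shared
// config state. Names here are illustrative.
type GovernancePlugin struct {
	mu                    sync.RWMutex
	disableAutoToolInject bool
}

// ReloadClientConfig copies the dereferenced config value under the lock.
// A nil pointer means "no value supplied" and leaves the field unchanged.
func (p *GovernancePlugin) ReloadClientConfig(disable *bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if disable != nil {
		p.disableAutoToolInject = *disable
	}
}

// AutoToolInjectDisabled is what request-time code reads; it never touches
// the shared config pointer, so concurrent config rewrites cannot race.
func (p *GovernancePlugin) AutoToolInjectDisabled() bool {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return p.disableAutoToolInject
}

func main() {
	p := &GovernancePlugin{}
	v := true
	p.ReloadClientConfig(&v)
	fmt.Println(p.AutoToolInjectDisabled()) // prints true
}
```

Request-time readers go through the getter only; the config struct is never dereferenced outside the reload path.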
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f09783bc-41c4-4073-b060-4853a2109e47
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
🚧 Files skipped from review as they are similar to previous changes (6)
- plugins/governance/resolver.go
- core/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- core/schemas/bifrost.go
- ui/components/sidebar.tsx
- transports/changelog.md
237dc14 to 266c534
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/bifrost.go (1)
3257-3261: ⚠️ Potential issue | 🟠 Major

Default `nil` to `bifrost.ctx` before delegating.

Unlike the other public APIs in this file, this method now passes `ctx` straight through after the `*schemas.BifrostContext` migration. That makes `GetAvailableMCPTools(nil)` a breaking edge case and can crash once the MCP manager touches request-scoped context state for tool tracking/governance.

🛠️ Minimal fix
```diff
 func (bifrost *Bifrost) GetAvailableMCPTools(ctx *schemas.BifrostContext) []schemas.ChatTool {
+	if ctx == nil {
+		ctx = bifrost.ctx
+	}
 	if bifrost.MCPManager == nil {
 		return nil
 	}
 	return bifrost.MCPManager.GetAvailableTools(ctx)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/bifrost.go` around lines 3257 - 3261, The GetAvailableMCPTools method currently forwards the incoming *schemas.BifrostContext directly and can crash if ctx is nil; update Bifrost.GetAvailableMCPTools to default the parameter to bifrost.ctx when ctx == nil before calling bifrost.MCPManager.GetAvailableTools(ctx), preserving the existing nil-check for bifrost.MCPManager and using the internal bifrost.ctx (of type *schemas.BifrostContext) as the fallback context for request-scoped state.
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
193-203: ⚠️ Potential issue | 🟠 Major

`BifrostContextKeyMCPAddedTools` is recording the wrong tool set.

This loop still mixes "discovered" with "actually injected". Once any code-mode client is seen, `if !includeCodeModeTools` stops appending later regular MCP tools based on map iteration order, but Line 199 still records those skipped tools in the context. Then `plugins/governance/main.go` uses that context list as the exact allow-list input, so requests can be rejected for phantom tools while actual code-mode tools appended later are never validated at inference time.

💡 Suggested direction
```diff
 	for clientName, clientTools := range availableToolsPerClient {
 		client := m.clientManager.GetClientByName(clientName)
 		if client == nil {
 			m.logger.Warn("%s Client %s not found, skipping", MCPLogPrefix, clientName)
 			continue
 		}
 		if client.ExecutionConfig.IsCodeModeClient {
 			includeCodeModeTools = true
+			continue // code-mode tools come from m.codeMode.GetTools() below
 		}
 		// Add tools from this client, checking for duplicates
 		for _, tool := range clientTools {
-			if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
-				schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-				if !includeCodeModeTools {
-					availableTools = append(availableTools, tool)
-					seenToolNames[tool.Function.Name] = true
-				}
-			}
+			if tool.Function == nil || tool.Function.Name == "" || seenToolNames[tool.Function.Name] {
+				continue
+			}
+			availableTools = append(availableTools, tool)
+			seenToolNames[tool.Function.Name] = true
 		}
 	}
```

Then move `schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, ...)` to the two places in `ParseAndAddToolsToRequest` where a tool is actually appended to the outgoing request, and do the same for appended code-mode tools.

As per coding guidelines, MCP tool access filtering follows a 4-level hierarchy: Global filter → Client-level filter → Tool-level filter → Per-request filter (HTTP headers). All four levels must agree for a tool to be available.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 193 - 203, The context list BifrostContextKeyMCPAddedTools is being populated too early for every discovered tool regardless of whether it is actually appended to the outgoing request; update ParseAndAddToolsToRequest so schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, ...) is only called at the exact points where a tool is appended to availableTools (and similarly when code-mode tools are appended), instead of inside the discovery loop that iterates clientTools using seenToolNames and includeCodeModeTools; move the AppendToContextList calls into the two branches that push to availableTools (and the branch that appends code-mode tools when client.ExecutionConfig.IsCodeModeClient is true) so the context reflects the true injected tool set used for request validation.
🧹 Nitpick comments (1)
core/mcp/interface.go (1)
23-26: Avoid making callers preserve `DisableAutoToolInject` by convention.

This comment documents a brittle contract: any partial `UpdateToolManagerConfig` call can silently flip auto-injection back to `false` because `ToolsManager.UpdateConfig` stores the bool unconditionally. Prefer a dedicated setter or an optional field here so "unchanged" is representable and unrelated runtime updates can't widen tool exposure accidentally.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/interface.go` around lines 23 - 26, The current UpdateToolManagerConfig(config *schemas.MCPToolManagerConfig) API forces callers to preserve DisableAutoToolInject by convention, which is brittle; change the contract so “unchanged” is representable and unrelated updates cannot flip the flag. Modify the config type or API so DisableAutoToolInject is optional (e.g., make DisableAutoToolInject *bool on schemas.MCPToolManagerConfig or add a separate SetDisableAutoToolInject(bool) method) and update ToolsManager.UpdateConfig to only overwrite the existing setting when the incoming optional value is non-nil; alternatively add a dedicated ToolsManager.SetDisableAutoToolInject setter and document using it for toggles while UpdateConfig only updates other fields.
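A minimal sketch of the optional-field variant the nitpick proposes, with hypothetical local types (`toolManagerConfig`, `toolManagerConfigUpdate`) standing in for the real schemas:

```go
package main

import "fmt"

// toolManagerConfig is the stored state; toolManagerConfigUpdate is the
// update payload. A *bool makes "unchanged" representable: nil means
// "leave as-is", so partial updates cannot silently flip the flag.
type toolManagerConfig struct {
	disableAutoToolInject bool
}

type toolManagerConfigUpdate struct {
	DisableAutoToolInject *bool // nil means "leave as-is"
}

// apply only overwrites the flag when a value was explicitly supplied.
func (c *toolManagerConfig) apply(u toolManagerConfigUpdate) {
	if u.DisableAutoToolInject != nil {
		c.disableAutoToolInject = *u.DisableAutoToolInject
	}
}

func main() {
	cfg := toolManagerConfig{disableAutoToolInject: true}
	cfg.apply(toolManagerConfigUpdate{}) // unrelated update: flag preserved
	fmt.Println(cfg.disableAutoToolInject) // prints true
}
```

The same effect can be had with a dedicated setter, at the cost of a second update path; the pointer field keeps a single `UpdateConfig` entry point.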
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/features/governance/mcp-tools.mdx`:
- Line 24: The sentence about skipping auto-injection when the caller supplies
the x-bf-mcp-include-tools header is ambiguous about when the VK allow-list is
enforced; update the bullet so it explicitly states that the VK allow-list is
enforced both at inference time (when deciding whether to allow tool inclusion)
and again at MCP tool execution time, e.g., rephrase to: "If the caller already
includes an `x-bf-mcp-include-tools` header, auto-injection is skipped — but the
VK allow-list is enforced at inference time and still enforced again at MCP tool
execution time." Include the terms `x-bf-mcp-include-tools` and "VK allow-list"
exactly to match other docs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7be51428-8abc-46cb-a6f2-436c15fc4c81
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
✅ Files skipped from review due to trivial changes (2)
- ui/app/workspace/config/views/mcpView.tsx
- core/changelog.md
🚧 Files skipped from review as they are similar to previous changes (4)
- core/schemas/bifrost.go
- plugins/governance/resolver.go
- ui/components/sidebar.tsx
- core/mcp/mcp.go
21465cb to 2c57f2d
55c1ab7 to 1124ac4
1124ac4 to 3d47165
2c57f2d to 4fff2e0
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
194-204: ⚠️ Potential issue | 🟠 Major

Order-dependent tool suppression in `GetAvailableTools` can drop valid tools and overpopulate governance context.

`includeCodeModeTools` is toggled inside the same client loop and immediately used to gate `availableTools` appends. That makes regular MCP tool inclusion depend on client iteration order, while `BifrostContextKeyMCPAddedTools` is still populated for skipped tools. This can cause inconsistent tool injection and false governance denials downstream.

💡 Suggested fix
```diff
 func (m *ToolsManager) GetAvailableTools(ctx *schemas.BifrostContext) []schemas.ChatTool {
 	availableToolsPerClient := m.clientManager.GetToolPerClient(ctx)

 	// Flatten tools from all clients into a single slice, avoiding duplicates
 	var availableTools []schemas.ChatTool
 	var includeCodeModeTools bool

 	// Track tool names to prevent duplicates
 	seenToolNames := make(map[string]bool)

+	// Pass 1: detect if any client enables code mode.
+	for clientName := range availableToolsPerClient {
+		client := m.clientManager.GetClientByName(clientName)
+		if client != nil && client.ExecutionConfig.IsCodeModeClient {
+			includeCodeModeTools = true
+			break
+		}
+	}
+
+	// Pass 2: collect regular MCP tools deterministically.
 	for clientName, clientTools := range availableToolsPerClient {
 		client := m.clientManager.GetClientByName(clientName)
 		if client == nil {
 			m.logger.Warn("%s Client %s not found, skipping", MCPLogPrefix, clientName)
 			continue
 		}
 		if client.ExecutionConfig.IsCodeModeClient {
-			includeCodeModeTools = true
+			continue
 		}
 		// Add tools from this client, checking for duplicates
 		for _, tool := range clientTools {
 			if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
 				seenToolNames[tool.Function.Name] = true
 				schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-				if !includeCodeModeTools {
-					availableTools = append(availableTools, tool)
-				}
+				availableTools = append(availableTools, tool)
 			}
 		}
 	}
```

```bash
#!/bin/bash
# Verify the order-sensitive gating and downstream context consumption.
rg -nC3 --type go 'for clientName, clientTools := range availableToolsPerClient|includeCodeModeTools|AppendToContextList\(ctx, schemas\.BifrostContextKeyMCPAddedTools|if !includeCodeModeTools' core/mcp/toolmanager.go
rg -nC3 --type go 'BifrostContextKeyMCPAddedTools' core/mcp/toolmanager.go plugins/governance/main.go
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 194 - 204, GetAvailableTools currently toggles includeCodeModeTools inside the client loop and immediately uses it to decide whether to append to availableTools while always calling schemas.AppendToContextList with schemas.BifrostContextKeyMCPAddedTools, causing order-dependent suppression and stale governance context; fix by determining includeCodeModeTools for all clients before the per-client tool loop (e.g., a preliminary pass over availableToolsPerClient or using ExecutionConfig flags), then in the second pass only call schemas.AppendToContextList and append to availableTools based on that fully-resolved includeCodeModeTools value so that tool inclusion and BifrostContextKeyMCPAddedTools stay consistent regardless of client iteration order.
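The deterministic-iteration step the fix calls for is a standard Go idiom, since map iteration order is randomized between runs; a small self-contained sketch (`sortedClientNames` is illustrative, not the repo's helper):

```go
package main

import (
	"fmt"
	"sort"
)

// sortedClientNames returns the map's keys in a stable, sorted order so
// that per-client processing no longer depends on Go's randomized map
// iteration. The map type is a stand-in for availableToolsPerClient.
func sortedClientNames(availableToolsPerClient map[string][]string) []string {
	names := make([]string, 0, len(availableToolsPerClient))
	for name := range availableToolsPerClient {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	m := map[string][]string{"zeta": nil, "alpha": nil, "mid": nil}
	fmt.Println(sortedClientNames(m)) // prints [alpha mid zeta]
}
```

Iterating `for _, name := range sortedClientNames(m)` then makes both tool injection and any context recording reproducible across runs.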
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8f34d1ef-915a-4114-845f-689001f786fd
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
🚧 Files skipped from review as they are similar to previous changes (10)
- ui/app/workspace/config/views/mcpView.tsx
- docs/features/governance/mcp-tools.mdx
- plugins/governance/resolver.go
- core/bifrost.go
- core/mcp/interface.go
- transports/bifrost-http/server/plugins.go
- plugins/governance/changelog.md
- ui/components/sidebar.tsx
- core/changelog.md
- transports/changelog.md
4fff2e0 to fd79918
3d47165 to 10fab26
♻️ Duplicate comments (1)
core/mcp/toolmanager.go (1)
194-204: ⚠️ Potential issue | 🟠 Major

Code-mode detection is still suppressing regular MCP tools nondeterministically.
Line 194 flips a global flag, and Line 202 uses it to skip appending non-code tools. Because map iteration order is undefined, regular tools may disappear depending on which client is visited first.
💡 Proposed fix
```diff
 	for clientName, clientTools := range availableToolsPerClient {
 		client := m.clientManager.GetClientByName(clientName)
 		if client == nil {
 			m.logger.Warn("%s Client %s not found, skipping", MCPLogPrefix, clientName)
 			continue
 		}
-		if client.ExecutionConfig.IsCodeModeClient {
+		isCodeModeClient := client.ExecutionConfig.IsCodeModeClient
+		if isCodeModeClient {
 			includeCodeModeTools = true
+			// Code-mode tools are sourced from m.codeMode.GetTools() below.
+			continue
 		}
 		// Add tools from this client, checking for duplicates
 		for _, tool := range clientTools {
 			if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
 				seenToolNames[tool.Function.Name] = true
 				schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-				if !includeCodeModeTools {
-					availableTools = append(availableTools, tool)
-				}
+				availableTools = append(availableTools, tool)
 			}
 		}
 	}
```
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 194 - 204, The code flips the global includeCodeModeTools flag using client.ExecutionConfig.IsCodeModeClient and then uses it while iterating clientTools, which makes omission of non-code tools nondeterministic due to map iteration order; change the logic to compute a per-client boolean (e.g., perClientIsCodeMode := client.ExecutionConfig.IsCodeModeClient) inside the client loop and use that local flag when deciding whether to append to availableTools, leaving seenToolNames and schemas.AppendToContextList behavior unchanged so tools from non-code clients are never suppressed by a prior code-mode client.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 33940a4d-0f82-4952-b77f-159f0a72719c
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
✅ Files skipped from review due to trivial changes (1)
- plugins/governance/changelog.md
🚧 Files skipped from review as they are similar to previous changes (10)
- core/changelog.md
- transports/bifrost-http/server/server.go
- ui/app/workspace/config/views/mcpView.tsx
- docs/features/governance/mcp-tools.mdx
- core/schemas/bifrost.go
- ui/components/sidebar.tsx
- plugins/governance/resolver.go
- core/bifrost.go
- transports/changelog.md
- core/mcp/interface.go
fd79918 to e9a456d
10fab26 to 3148afc
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ui/components/sidebar.tsx`:
- Around line 243-255: Add stable data-testid attributes to the interactive
SidebarMenuButton instances so UI tests can target them reliably: update the
SidebarMenuButton at the first block (the button rendering that uses
hasSubItems, onClick={hasSubItems ? handleClick : item.hasAccess ? (e) =>
handleNavigation(item.url, e) : undefined}) to include a data-testid following
the pattern "data-testid=\"nav-button-<qualifier>\"" (use a stable qualifier
derived from the item, e.g., item.id or a slugified item.title); do the same for
the other interactive control in the later block (the submenu toggle / link
around lines 290-301) using a complementary testid like
"nav-submenu-toggle-<qualifier>" so both top-level buttons and submenu toggles
have predictable testids.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8385c914-aec6-43fb-bb9d-02c3a58b5cf9
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
🚧 Files skipped from review as they are similar to previous changes (10)
- ui/app/workspace/config/views/mcpView.tsx
- transports/bifrost-http/server/server.go
- docs/features/governance/mcp-tools.mdx
- core/bifrost.go
- transports/changelog.md
- plugins/governance/changelog.md
- core/schemas/bifrost.go
- core/mcp/mcp.go
- transports/bifrost-http/server/plugins.go
- core/changelog.md
3148afc to a736638
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/bifrost.go (1)
3408-3413: ⚠️ Potential issue | 🟡 Minor

Don't forward a `nil` `BifrostContext` here.

This public MCP entrypoint is now the only one in `core/bifrost.go` that delegates a possibly-`nil` context downstream instead of normalizing it first. That makes the `context.Context` → `*schemas.BifrostContext` migration easy to regress at runtime and leaves the behavior up to `MCPManager.GetAvailableTools`.

💡 Proposed fix
```diff
 func (bifrost *Bifrost) GetAvailableMCPTools(ctx *schemas.BifrostContext) []schemas.ChatTool {
 	if bifrost.MCPManager == nil {
 		return nil
 	}
+	if ctx == nil {
+		ctx = bifrost.ctx
+	}
 	return bifrost.MCPManager.GetAvailableTools(ctx)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/bifrost.go` around lines 3408 - 3413, GetAvailableMCPTools currently forwards a potentially nil *schemas.BifrostContext to MCPManager.GetAvailableTools; ensure the context is normalized first by guarding for nil in Bifrost.GetAvailableMCPTools (check ctx == nil) and creating a default &schemas.BifrostContext{} (or using the package's existing context-normalization helper if one exists) before calling bifrost.MCPManager.GetAvailableTools(ctx) so downstream code never receives a nil context.
♻️ Duplicate comments (2)
core/mcp/toolmanager.go (1)
194-203: ⚠️ Potential issue | 🟠 Major

Code-mode dedupe order can suppress code-mode tool injection and misreport MCP-added tools.

Line 200 marks code-mode client tools as seen and Line 201 records them in `BifrostContextKeyMCPAddedTools`, but Line 202 skips adding them to `availableTools`. Then Line 215 skips the same names from `m.codeMode.GetTools()`. This can drop code-mode tools entirely while still reporting them as "added."

♻️ Proposed fix
```diff
 		if client.ExecutionConfig.IsCodeModeClient {
 			includeCodeModeTools = true
+			// Code mode tools are sourced from m.codeMode.GetTools() below.
+			continue
 		}
 		// Add tools from this client, checking for duplicates
 		for _, tool := range clientTools {
 			if tool.Function != nil && tool.Function.Name != "" && !seenToolNames[tool.Function.Name] {
 				seenToolNames[tool.Function.Name] = true
 				schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name)
-				if !client.ExecutionConfig.IsCodeModeClient {
-					availableTools = append(availableTools, tool)
-				}
+				availableTools = append(availableTools, tool)
 			}
 		}
```

Also applies to: 210-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/mcp/toolmanager.go` around lines 194 - 203, The current loop marks code-mode client tool names as seen and logs them to BifrostContextKeyMCPAddedTools before adding them to availableTools, then skips adding code-mode tools and later skips m.codeMode.GetTools() by name, which can drop code-mode tools while still reporting them as added; fix by changing the logic in the clientTools loop (where client.ExecutionConfig.IsCodeModeClient is checked) so that you only mark seenToolNames[tool.Function.Name] = true and call schemas.AppendToContextList(ctx, schemas.BifrostContextKeyMCPAddedTools, tool.Function.Name) after you actually append the tool to availableTools (i.e., do not skip appending for code-mode tools or alternatively append them first when client.ExecutionConfig.IsCodeModeClient), and keep the m.codeMode.GetTools() filter using seenToolNames to avoid duplicates—this ensures availableTools and the MCP-added context list remain consistent with m.codeMode.GetTools() deduping.plugins/governance/main.go (1)
1022-1050: ⚠️ Potential issue | 🟠 Major

Inference-time MCP validation still ignores caller-supplied request tools.

At Line 1027, the check only inspects `BifrostContextKeyMCPAddedTools` (auto-added tools). Tools provided directly on the incoming request can bypass this inference-time check and only get blocked later in `PreMCPHook`, which weakens the stated inference-time + execution-time enforcement behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/governance/main.go` around lines 1022 - 1050, The inference-time MCP validation only checks BifrostContextKeyMCPAddedTools and ignores caller-supplied tools; update the block that builds and checks addedTools to also retrieve the caller-supplied request tools (e.g., from the incoming request object or a context key such as a request-tools key) and merge them into a single slice (e.g., toolsToValidate), then run the same validation loop using isMCPToolAllowedByVK against result.VirtualKey.MCPConfigs and produce DecisionMCPToolBlocked/Reason using result.VirtualKey when any tool in toolsToValidate is disallowed; ensure empty/missing combined lists are treated the same as the current logic.
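A hedged sketch of the merged validation the comment asks for; `firstBlockedTool` and the allow-list map are illustrative stand-ins for the plugin's real `isMCPToolAllowedByVK` helper and the VK's `MCPConfigs`:

```go
package main

import "fmt"

// firstBlockedTool merges auto-added tools (from the Bifrost context) with
// caller-supplied request tools and checks all of them against the virtual
// key's allow-list, returning the first disallowed tool if any.
func firstBlockedTool(contextAdded, requestSupplied []string, allowed map[string]bool) (string, bool) {
	toolsToValidate := append(append([]string{}, contextAdded...), requestSupplied...)
	for _, tool := range toolsToValidate {
		if !allowed[tool] {
			return tool, true // caller would emit DecisionMCPToolBlocked here
		}
	}
	return "", false
}

func main() {
	allowed := map[string]bool{"search": true}
	tool, blocked := firstBlockedTool([]string{"search"}, []string{"exec"}, allowed)
	fmt.Println(tool, blocked) // prints: exec true
}
```

Empty or missing lists fall through to "not blocked", matching the current behavior for requests without tools.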
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6f7d35dc-ff13-4f8a-945c-97116de5ced7
📒 Files selected for processing (15)
- core/bifrost.go
- core/changelog.md
- core/mcp/interface.go
- core/mcp/mcp.go
- core/mcp/toolmanager.go
- core/schemas/bifrost.go
- docs/features/governance/mcp-tools.mdx
- plugins/governance/changelog.md
- plugins/governance/main.go
- plugins/governance/resolver.go
- transports/bifrost-http/server/plugins.go
- transports/bifrost-http/server/server.go
- transports/changelog.md
- ui/app/workspace/config/views/mcpView.tsx
- ui/components/sidebar.tsx
🚧 Files skipped from review as they are similar to previous changes (6)
- core/changelog.md
- plugins/governance/resolver.go
- ui/app/workspace/config/views/mcpView.tsx
- plugins/governance/changelog.md
- transports/bifrost-http/server/server.go
- ui/components/sidebar.tsx
a736638 to
80cf3d6
Compare
e9a456d to
3a214d9
Compare
Merge activity
80cf3d6 to
1569c8c
Compare
* refactor: standardize empty array conventions for VK Provider & MCP Configs, and makes Provider Config weight optional for routing (#1932)
## Summary
Changes Virtual Key provider and MCP configurations from "allow-all by default" to "deny-by-default" security model. Virtual Keys now require explicit provider and MCP client configurations to allow access, improving security posture.
## Changes
- **Provider Configs**: Empty `provider_configs` now blocks all providers instead of allowing all
- **MCP Configs**: Empty `mcp_configs` now blocks all MCP tools instead of allowing all
- **Weight Field**: Changed provider `weight` from required `float64` to optional `*float64` - null weight excludes provider from weighted routing
- **Migration**: Added automatic backfill migration to preserve existing Virtual Key behavior by adding all available providers/MCP clients to VKs with empty configs
- **Documentation**: Updated all references to reflect new deny-by-default behavior
- **UI Updates**: Modified Virtual Key creation/editing interface to reflect new behavior and weight handling
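The deny-by-default semantics above can be illustrated with a small Go sketch. Names here are illustrative, not the real governance-plugin code: an empty config list falls through to deny, and only explicitly configured providers are allowed.

```go
package main

import "fmt"

// providerAllowed illustrates the new semantics: an empty provider_configs
// list denies every provider; a non-empty list only allows the providers it
// names. Previously an empty list meant "allow all".
func providerAllowed(providerConfigs []string, provider string) bool {
	for _, p := range providerConfigs {
		if p == provider {
			return true
		}
	}
	return false // empty list falls through to deny-all
}

func main() {
	fmt.Println(providerAllowed(nil, "openai"))                   // deny: empty config
	fmt.Println(providerAllowed([]string{"openai"}, "openai"))    // allow: listed
	fmt.Println(providerAllowed([]string{"openai"}, "anthropic")) // deny: not listed
}
```

The backfill migration preserves old behavior by populating every existing Virtual Key's config list, so only newly created empty configs hit the deny-all branch.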
## Type of change
- [x] Feature
- [x] Refactor
- [x] Documentation
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs
## How to test
Test Virtual Key creation and provider/MCP access:
```sh
# Core/Transports
go version
go test ./...
# Test Virtual Key with no provider configs blocks requests
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer sk-bf-empty-vk" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'
# Should return error about no providers configured
# Test Virtual Key with provider configs allows requests
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer sk-bf-configured-vk" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'
# Should work normally
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
## Breaking changes
- [x] Yes
**Impact**: Existing Virtual Keys with empty `provider_configs` or `mcp_configs` would be blocked after this change.
**Migration**: Automatic migration `migrationBackfillEmptyVirtualKeyConfigs` runs on startup to backfill existing Virtual Keys with all available providers/MCP clients, preserving current behavior. New Virtual Keys created after this change will use deny-by-default.
## Security considerations
This change significantly improves security posture by requiring explicit configuration of allowed providers and MCP tools for Virtual Keys. The automatic migration ensures no disruption to existing deployments while new Virtual Keys benefit from the more secure default behavior.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: add MCP auto tool injection toggle (#1933)
## Summary
Adds a new configuration option `DisableAutoToolInject` to the MCP (Model Context Protocol) system that allows disabling automatic tool injection into requests. When enabled, MCP tools are only included when explicitly requested via context headers or filters, providing more granular control over tool availability.
## Changes
- Added `DisableAutoToolInject` field to `MCPToolManagerConfig` schema with runtime update support
- Implemented atomic boolean storage in `ToolsManager` to safely handle concurrent access
- Added logic in `ParseAndAddToolsToRequest` to respect the disable flag and only inject tools when explicit context filters are present
- Extended configuration management with database migration, UI controls, and API endpoints
- Added hot-reload capability through `UpdateMCPDisableAutoToolInject` methods across the stack
- Updated UI with a toggle switch and clear documentation about the feature's behavior
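The atomic-toggle design described above can be sketched as follows. This is a minimal illustration, assuming hypothetical names (`toolInjector`, `shouldInject`); the real flag lives in `MCPToolManagerConfig` and is read in `ParseAndAddToolsToRequest`.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// toolInjector mirrors the described design: an atomic.Bool so the
// DisableAutoToolInject flag can be hot-reloaded while requests are in flight,
// without a mutex around every request.
type toolInjector struct {
	disableAutoInject atomic.Bool
}

// shouldInject reports whether MCP tools should be added to a request: always
// when auto-injection is on, and only when an explicit context filter (e.g. the
// x-bf-mcp-include-tools header) is present when it is off.
func (ti *toolInjector) shouldInject(hasExplicitFilter bool) bool {
	if ti.disableAutoInject.Load() {
		return hasExplicitFilter
	}
	return true
}

func main() {
	var ti toolInjector
	fmt.Println(ti.shouldInject(false)) // auto-injection enabled by default
	ti.disableAutoInject.Store(true)    // hot-reload the toggle, no restart
	fmt.Println(ti.shouldInject(false)) // no explicit filter: skip injection
	fmt.Println(ti.shouldInject(true))  // explicit filter: inject
}
```

`atomic.Bool` gives the zero-value default of `false` (auto-injection enabled), matching the backward-compatible default stated below.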
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [ ] Docs
## How to test
Validate the new MCP auto tool injection toggle:
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
Test the feature:
1. Configure MCP clients and tools
2. Enable "Disable Auto Tool Injection" in the MCP configuration UI
3. Make requests without explicit tool headers - tools should not be injected
4. Make requests with `x-bf-mcp-include-tools` header - tools should be injected
5. Verify hot-reload works by toggling the setting without server restart
## Screenshots/Recordings
UI changes include a new toggle switch in the MCP configuration view with descriptive text explaining when tools are injected based on explicit headers.
## Breaking changes
- [ ] Yes
- [x] No
This is a backward-compatible addition with a default value of `false` (auto injection enabled).
## Related issues
This addresses the need for more granular control over MCP tool injection behavior in request processing.
## Security considerations
The feature provides better control over tool exposure by allowing administrators to require explicit opt-in for tool injection, potentially reducing unintended tool access.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: VK MCP config now works as an AllowList (#1940)
## Summary
This PR implements MCP tool governance by enforcing virtual key MCP configurations as an execution-time allow-list. When virtual keys have empty MCPConfigs, all MCP tools are denied. When non-empty, each tool is validated against the configured allow-list at both inference time and MCP tool execution.
## Changes
- **Context parameter updates**: Changed MCP-related functions to use `*schemas.BifrostContext` instead of `context.Context` to enable tool tracking
- **Tool tracking**: Added `BifrostContextKeyMCPAddedTools` context key to track which MCP tools are added to requests
- **Governance enforcement**: Virtual key MCP configurations now act as execution-time allow-lists with validation in both `PreMCPHook` and `evaluateGovernanceRequest`
- **Auto-injection control**: The `DisableAutoToolInject` configuration option is now respected at request time, and auto-injection is skipped when the relevant headers are already set by callers
- **Decision type**: Added `DecisionMCPToolBlocked` for MCP tool governance violations
- **UI improvements**: Updated MCP view description and sidebar item naming for better clarity
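The execution-time allow-list can be sketched in a few lines of Go. The type and function names here are illustrative stand-ins (the real check is `isMCPToolAllowedByVK` against `VirtualKey.MCPConfigs`): empty MCPConfigs deny everything, and otherwise the tool's client must be configured with the tool listed or a `"*"` wildcard.

```go
package main

import "fmt"

// mcpConfig is a stand-in for one entry of a virtual key's MCPConfigs.
type mcpConfig struct {
	ClientName     string
	ToolsToExecute []string
}

// toolAllowed sketches the allow-list: empty configs deny every MCP tool;
// otherwise the tool's client must appear and the tool must be listed
// (or the client's list must contain the "*" wildcard).
func toolAllowed(configs []mcpConfig, client, tool string) bool {
	for _, c := range configs {
		if c.ClientName != client {
			continue
		}
		for _, t := range c.ToolsToExecute {
			if t == "*" || t == tool {
				return true
			}
		}
	}
	return false // empty configs, unknown client, or unlisted tool
}

func main() {
	cfgs := []mcpConfig{{ClientName: "gmail", ToolsToExecute: []string{"send_email"}}}
	fmt.Println(toolAllowed(nil, "gmail", "send_email"))    // deny: empty configs
	fmt.Println(toolAllowed(cfgs, "gmail", "send_email"))   // allow: listed
	fmt.Println(toolAllowed(cfgs, "gmail", "delete_email")) // deny: not listed
}
```

The same predicate runs twice per the PR: once at inference time in `evaluateGovernanceRequest` (over the tools recorded in `BifrostContextKeyMCPAddedTools`) and again in `PreMCPHook` at execution time.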
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Docs
## How to test
Test MCP tool governance with virtual keys:
```sh
# Core/Transports
go version
go test ./...
# Test with virtual key having empty MCPConfigs (should deny all MCP tools)
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "x-bf-virtual-key: test-vk-empty-mcp" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'
# Test with virtual key having specific MCP tool allowlist
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "x-bf-virtual-key: test-vk-with-mcp" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'
# Test disable auto tool inject configuration
curl -X PUT http://localhost:8080/v1/config/mcp/disable-auto-tool-inject \
  -H "Content-Type: application/json" \
  -d '{"disable": true}'
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
New configuration options:
- `disable_auto_tool_inject`: Boolean flag to disable automatic MCP tool injection
- Virtual key `MCPConfigs`: Array of MCP client configurations that act as allow-lists
## Screenshots/Recordings
UI changes include updated MCP configuration view with clearer descriptions for the disable auto tool injection toggle and improved sidebar navigation labels.
## Breaking changes
- [x] Yes
- [ ] No
**Impact**: MCP-related function signatures now require `*schemas.BifrostContext` instead of `context.Context`. Virtual keys with empty MCPConfigs will now deny all MCP tools by default.
**Migration**: Update any custom MCP integrations to use the new context parameter type. Configure MCPConfigs on virtual keys that need MCP tool access.
## Related issues
Implements MCP tool governance and execution-time validation for virtual key configurations.
## Security considerations
- **Access control**: Virtual key MCP configurations now enforce strict allow-lists for tool execution
- **Context isolation**: Tool tracking is isolated per request context to prevent cross-request leakage
- **Validation**: Both pre-execution and execution-time validation prevent unauthorized tool access
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* refactor: standardize empty array conventions for VK Provider Config Allowed Keys (#2006)
## Summary
Migrates VK provider config allowed keys from implicit allow-all semantics to explicit deny-by-default behavior. Adds `AllowAllKeys` boolean field to enable granular key access control while maintaining backward compatibility.
## Changes
- Added `AllowAllKeys` boolean field to `TableVirtualKeyProviderConfig` with database migration
- Backfilled existing configs with `allow_all_keys=true` to preserve current behavior
- Updated key resolution logic: empty keys now denies all access, `["*"]` wildcard allows all keys
- Modified governance resolver to set empty `includeOnlyKeys` slice when no keys are configured
- Enhanced HTTP handlers to recognize `["*"]` wildcard and set `AllowAllKeys` flag appropriately
- Updated UI to display "Allow All Keys" option and show deny-by-default messaging
- Added JSON unmarshaling support for `["*"]` wildcard in config files
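The resulting key-resolution semantics can be sketched as follows, with hypothetical names (the real logic spans the governance resolver and HTTP handlers): `AllowAllKeys` (set when the config used the `["*"]` wildcard) grants every key, an empty key list denies all, and otherwise only the listed key IDs pass.

```go
package main

import "fmt"

// resolveKeys sketches the new semantics: AllowAllKeys returns every candidate
// key; an empty keyIDs list yields an empty result (deny-by-default); otherwise
// only the candidates whose IDs are listed are returned.
func resolveKeys(allowAllKeys bool, keyIDs, candidates []string) []string {
	if allowAllKeys {
		return candidates
	}
	allowed := map[string]bool{}
	for _, id := range keyIDs {
		allowed[id] = true
	}
	var out []string
	for _, c := range candidates {
		if allowed[c] {
			out = append(out, c)
		}
	}
	return out // empty keyIDs falls through to an empty result
}

func main() {
	keys := []string{"k1", "k2", "k3"}
	fmt.Println(resolveKeys(true, nil, keys))             // all keys: wildcard config
	fmt.Println(resolveKeys(false, nil, keys))            // none: deny-by-default
	fmt.Println(resolveKeys(false, []string{"k2"}, keys)) // only k2
}
```

The backfill migration sets `allow_all_keys=true` for pre-existing configs, so only newly created empty key lists hit the deny-all branch.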
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Docs
## How to test
Validate the migration and new key access control behavior:
```sh
# Core/Transports
go version
go test ./...
# Test migration runs successfully
go run main.go migrate
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
Test scenarios:
1. Create VK with empty `key_ids` - should deny all keys
2. Create VK with `key_ids: ["*"]` - should allow all keys
3. Create VK with specific key IDs - should allow only those keys
4. Verify existing VKs maintain their current behavior after migration
## Screenshots/Recordings
UI now shows:
- "Allow All Keys" option in key selection dropdown
- "No keys allowed" vs "All keys allowed" status indicators
- "No providers configured (deny-by-default)" messaging
## Breaking changes
- [ ] Yes
- [x] No
The migration preserves existing behavior by setting `allow_all_keys=true` for configs that previously had no keys specified.
## Related issues
Part of VK access control enhancement initiative.
## Security considerations
Improves security posture by implementing deny-by-default semantics for key access. Existing deployments maintain current access patterns through automatic backfill migration.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* refactor: standardize empty array conventions for allowed models (#2113)
## Summary
Standardizes empty array conventions across Bifrost to implement deny-by-default security semantics. Previously, empty arrays for `allowed_models` and `Models` fields meant "allow all", creating potential security gaps. Now `["*"]` explicitly means "allow all" while empty arrays mean "deny all".
## Changes
- **Core Logic**: Updated model filtering in `bifrost.go` and `selectKeyFromProviderForModel` to treat empty `Models` arrays as deny-all and `["*"]` as allow-all
- **Database Migration**: Added `migrationBackfillAllowedModelsWildcard` to convert existing empty arrays to `["*"]` preserving current behavior for existing records
- **Model Catalog**: Updated `IsModelAllowedForProvider` to use wildcard semantics with deny-by-default fallback
- **Schema Defaults**: Changed default `Models` value from `[]` to `["*"]` in table definitions and form schemas
- **UI Components**: Enhanced `ModelMultiselect` with `allowAllOption` prop and updated virtual key forms to handle wildcard selection
- **Documentation**: Updated JSON schemas, comments, and tooltips to reflect new conventions
- **Governance**: Updated provider config filtering logic to use new wildcard semantics
- **Server Bootstrap**: Added wildcard filtering when loading models to prevent literal "*" from appearing as a model name
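Both sides of this change, the one-time backfill and the runtime check, can be sketched together. Function names are illustrative stand-ins for `migrationBackfillAllowedModelsWildcard` and the filtering in `IsModelAllowedForProvider`:

```go
package main

import "fmt"

// backfillWildcard sketches the migration: records whose allowed-models list is
// empty are rewritten to ["*"] so their existing "allow all" behavior is
// preserved, while empty lists created after the change mean deny-all.
func backfillWildcard(models []string) []string {
	if len(models) == 0 {
		return []string{"*"}
	}
	return models
}

// modelAllowed shows the runtime side: "*" allows every model, an empty list
// denies all, anything else is an explicit allow-list.
func modelAllowed(models []string, model string) bool {
	for _, m := range models {
		if m == "*" || m == model {
			return true
		}
	}
	return false
}

func main() {
	migrated := backfillWildcard(nil)
	fmt.Println(migrated, modelAllowed(migrated, "gpt-4")) // existing record keeps allow-all
	fmt.Println(modelAllowed([]string{}, "gpt-4"))         // new empty list denies
}
```

Note the server-bootstrap caveat above: because `"*"` is now a legal list entry, model loading must filter it out so a literal `*` never surfaces as a model name.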
## Type of change
- [x] Refactor
- [ ] Bug fix
- [ ] Feature
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs
## How to test
Validate the migration and new semantics:
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
Test scenarios:
1. Create new virtual keys - should default to `["*"]` for allowed models
2. Create new provider keys - should default to `["*"]` for models
3. Verify existing keys with empty arrays are migrated to `["*"]`
4. Test that empty arrays now deny all models/keys as expected
5. Verify UI shows "All models allowed" for wildcard and "No models (deny all)" for empty arrays
## Screenshots/Recordings
UI changes include:
- Model multiselect now shows "Allow All Models" option
- Virtual key details display "All Models" badge for wildcard vs "No models (deny all)" for empty
- Form placeholders updated to reflect new semantics
## Breaking changes
- [x] Yes
- [ ] No
**Migration Impact**: The database migration automatically converts existing empty `allowed_models` and `models_json` arrays to `["*"]`, preserving current behavior. However, any new configurations with empty arrays will now deny access instead of allowing all. Applications relying on "empty = allow all" semantics must be updated to use `["*"]` explicitly.
## Related issues
Part of security hardening initiative to implement explicit allow-lists and deny-by-default semantics across Bifrost configuration.
## Security considerations
This change significantly improves security posture by:
- Eliminating ambiguous "empty means allow all" semantics
- Implementing explicit deny-by-default for new configurations
- Requiring intentional wildcard usage via `["*"]` for broad access
- Maintaining backward compatibility through automatic migration
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* refactor: replace string slices with WhiteList for allowlist fields (#2125)
## Summary
Introduces a new `WhiteList` type to standardize whitelist behavior across the codebase, replacing manual slice operations and string comparisons with semantic methods for handling allow/deny lists.
## Changes
- Added `WhiteList` type with methods `IsAllowed()`, `IsUnrestricted()`, `IsEmpty()`, `Contains()`, and `Validate()`
- Replaced `[]string` fields with `WhiteList` for model restrictions, tool filtering, and key access controls
- Updated all whitelist logic to use semantic methods instead of manual `slices.Contains()` checks
- Added validation to ensure wildcards ("*") aren't mixed with specific values and prevent duplicates
- Improved case-insensitive matching for whitelist comparisons
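A minimal sketch of the `WhiteList` type follows, using the method names from the description (`IsAllowed`, `IsUnrestricted`, `IsEmpty`, `Contains`, `Validate`); the real implementation in the codebase may differ in detail.

```go
package main

import (
	"fmt"
	"strings"
)

// WhiteList is a []string with semantic helpers for allow/deny lists.
type WhiteList []string

func (w WhiteList) IsEmpty() bool        { return len(w) == 0 }
func (w WhiteList) IsUnrestricted() bool { return len(w) == 1 && w[0] == "*" }

// Contains does a case-insensitive membership check.
func (w WhiteList) Contains(v string) bool {
	for _, item := range w {
		if strings.EqualFold(item, v) {
			return true
		}
	}
	return false
}

// IsAllowed: empty denies all, ["*"] allows all, otherwise membership decides.
func (w WhiteList) IsAllowed(v string) bool {
	if w.IsEmpty() {
		return false
	}
	if w.IsUnrestricted() {
		return true
	}
	return w.Contains(v)
}

// Validate rejects wildcards mixed with specific values and duplicate entries.
func (w WhiteList) Validate() error {
	seen := map[string]bool{}
	for _, item := range w {
		if item == "*" && len(w) > 1 {
			return fmt.Errorf("wildcard cannot be mixed with specific values")
		}
		key := strings.ToLower(item)
		if seen[key] {
			return fmt.Errorf("duplicate entry: %s", item)
		}
		seen[key] = true
	}
	return nil
}

func main() {
	fmt.Println(WhiteList{"*"}.IsAllowed("gpt-4"))         // allow all
	fmt.Println(WhiteList{}.IsAllowed("gpt-4"))            // deny all
	fmt.Println(WhiteList{"GPT-4"}.IsAllowed("gpt-4"))     // case-insensitive match
	fmt.Println(WhiteList{"*", "gpt-4"}.Validate() != nil) // invalid mix
}
```

Because the type is just a named `[]string`, it serializes to the same JSON as before, which is why the PR reports no breaking change.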
## Type of change
- [x] Refactor
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Plugins
## How to test
Verify that whitelist behavior remains consistent across all affected components:
```sh
# Core/Transports
go version
go test ./...
# Test specific whitelist scenarios:
# - Empty lists deny all access
# - ["*"] allows all access
# - Specific lists only allow listed items
# - Mixed wildcards and specific items are rejected
# - Duplicate entries are rejected
```
Test key model filtering, MCP tool execution, and virtual key configurations to ensure whitelist logic works correctly.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
The `WhiteList` type maintains the same JSON serialization format as `[]string`, so existing configurations remain compatible.
## Related issues
N/A
## Security considerations
Improves security by standardizing deny-by-default behavior and adding validation to prevent misconfigured whitelists that could inadvertently grant excessive permissions.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: add request-level extra headers support for MCP tool execution (#2126)
## Summary
This PR adds support for request-level extra headers in MCP tool execution, allowing callers to forward specific headers to MCP servers at runtime based on a per-client allowlist configuration.
## Changes
- Added `AllowedExtraHeaders` field to MCP client configuration with allowlist semantics (empty array = deny all, `["*"]` = allow all)
- Introduced `BifrostContextKeyMCPExtraHeaders` context key to track headers forwarded to MCP tools
- Created `core/mcp/utils` package with `GetHeadersForToolExecution` function to merge static and dynamic headers
- Updated MCP tool execution in both regular tool manager and Starlark code mode to use the new header forwarding system
- Added database migration for `allowed_extra_headers_json` column in MCP client table
- Updated UI to include allowed extra headers configuration in MCP client management
- Enhanced auth demo server example to demonstrate tool-execution level authentication patterns
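The header-merging behavior can be sketched like this. The function name and denylist contents are illustrative, not the real `core/mcp/utils` API: static connection headers apply first, then request extra headers that pass the per-client allowlist, with a security denylist preventing auth overrides.

```go
package main

import "fmt"

// mergeHeaders sketches GetHeadersForToolExecution-style forwarding: static
// connection headers are copied first, then request extra headers that pass the
// per-client allowlist (["*"] = allow all, empty = deny all). A denylist keeps
// callers from overriding auth headers via extra headers.
func mergeHeaders(static map[string]string, allowed []string, extra map[string]string) map[string]string {
	denied := map[string]bool{"Authorization": true} // illustrative denylist entry
	allowAll := len(allowed) == 1 && allowed[0] == "*"
	allowSet := map[string]bool{}
	for _, h := range allowed {
		allowSet[h] = true
	}
	out := map[string]string{}
	for k, v := range static {
		out[k] = v
	}
	for k, v := range extra {
		if denied[k] {
			continue
		}
		if allowAll || allowSet[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	out := mergeHeaders(
		map[string]string{"X-API-Key": "connection-secret"}, // static connection auth
		[]string{"X-Tool-Token"},                            // per-client allowlist
		map[string]string{"X-Tool-Token": "t", "X-Other": "dropped"},
	)
	fmt.Println(out)
}
```

This mirrors the two-tier pattern in the auth demo server: `X-API-Key` is fixed at connection time, while `X-Tool-Token` is forwarded per request only because the client allowlists it.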
## Type of change
- [x] Feature
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] UI (Next.js)
## How to test
1. Configure an MCP client with allowed extra headers:
```json
{
"name": "test-client",
"connection_string": "http://localhost:3002/",
"auth_type": "headers",
"headers": {
"X-API-Key": "connection-secret"
},
"allowed_extra_headers": ["X-Tool-Token"],
"tools_to_execute": ["*"]
}
```
2. Make requests with extra headers that should be forwarded:
```bash
curl -X POST http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer your-key" \
-H "X-Tool-Token: tool-execution-secret" \
-d '{
"model": "gpt-4",
"messages": [{"role": "user", "content": "Use the secret_data tool"}],
"tools": [{"type": "function", "function": {"name": "secret_data"}}]
}'
```
3. Test the auth demo server:
```bash
cd examples/mcps/auth-demo-server
go run main.go
# Server demonstrates two-tier auth: connection-level (X-API-Key) and tool-level (X-Tool-Token)
```
4. Run tests:
```sh
go test ./core/mcp/...
go test ./transports/bifrost-http/...
cd ui
pnpm test
pnpm build
```
## Breaking changes
- [ ] Yes
- [x] No
This is a backward-compatible addition. Existing MCP clients will have empty `allowed_extra_headers` (deny all extra headers) which maintains current behavior.
## Security considerations
- Extra headers are filtered through a strict allowlist per MCP client
- Security denylist prevents auth header overrides via extra headers
- Two-tier authentication pattern demonstrated: connection-level + tool-execution level
- Headers are only forwarded to MCP servers that explicitly allow them
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* fix: apply MCP tool filtering headers to tools/list response when using bifrost as MCP gateway (#2127)
## Summary
Adds support for `x-bf-mcp-include-clients` and `x-bf-mcp-include-tools` request headers to filter MCP tools/list response when using Bifrost as an MCP gateway. This ensures that tool filtering is respected at the MCP protocol level, not just during inference.
## Changes
- Implemented dynamic tool filtering in MCP server handlers that respects per-request include headers
- Added `makeIncludeClientsFilter()` function that filters tools based on request context values
- Registered the tool filter on both global and virtual key MCP servers during initialization
- Updated documentation to clarify that `mcp-include-tools` requires `clientName-toolName` format
- Enhanced examples in documentation to show proper tool naming format
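The filtering applied to the tools/list response can be sketched as follows, assuming illustrative names (the real code is `makeIncludeClientsFilter()` reading request-context values). Tool names use the `clientName-toolName` format, so the client is the prefix before the first hyphen.

```go
package main

import (
	"fmt"
	"strings"
)

// filterTools narrows a tools/list result by the include headers: an
// include-clients list keeps only tools whose client prefix matches, and an
// include-tools list keeps only exact clientName-toolName matches. Empty
// filters leave the list unchanged.
func filterTools(tools, includeClients, includeTools []string) []string {
	clientSet := map[string]bool{}
	for _, c := range includeClients {
		clientSet[c] = true
	}
	toolSet := map[string]bool{}
	for _, t := range includeTools {
		toolSet[t] = true
	}
	var out []string
	for _, name := range tools {
		client, _, _ := strings.Cut(name, "-") // clientName-toolName format
		if len(clientSet) > 0 && !clientSet[client] {
			continue
		}
		if len(toolSet) > 0 && !toolSet[name] {
			continue
		}
		out = append(out, name)
	}
	return out
}

func main() {
	tools := []string{"gmail-send_email", "filesystem-read_file", "gmail-list_inbox"}
	fmt.Println(filterTools(tools, []string{"gmail"}, nil))
	fmt.Println(filterTools(tools, nil, []string{"filesystem-read_file"}))
}
```

Registering this filter on both the global and virtual key MCP servers is what makes the gateway honor the same headers the inference path already respected.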
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [x] Docs
## How to test
Test MCP gateway functionality with tool filtering:
```sh
# Test tools/list filtering with include-tools header
curl --location 'http://localhost:8080/mcp/tools/list' \
--header 'x-bf-mcp-include-tools: gmail-send_email,filesystem-read_file' \
--header 'Authorization: Bearer your-vk-here'
# Test tools/list filtering with include-clients header
curl --location 'http://localhost:8080/mcp/tools/list' \
--header 'x-bf-mcp-include-clients: gmail,filesystem' \
--header 'Authorization: Bearer your-vk-here'
# Verify chat completions still respect the same headers
curl --location 'http://localhost:8080/v1/chat/completions' \
--header 'x-bf-mcp-include-tools: gmail-send_email' \
--header 'Content-Type: application/json' \
--data '{
"model": "openai/gpt-4o-mini",
"messages": [{"role": "user", "content": "What tools are available?"}]
}'
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
The tool filtering mechanism ensures that virtual key restrictions are properly enforced at the MCP protocol level, preventing unauthorized access to tools that should be filtered out based on request headers.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* refactor: parallelize model listing for providers to speed up startup time (#2151)
## Summary
Parallelizes model listing operations for providers during server startup and provider reloading to significantly reduce initialization time. Previously, model listing was performed sequentially for each provider, causing slower startup times especially when multiple providers were configured.
## Changes
- Added concurrent execution using goroutines and sync.WaitGroup for model listing operations in three key functions: `ReloadProvider`, `ForceReloadPricing`, and `Bootstrap`
- In `ReloadProvider`, both filtered and unfiltered model listing requests now run concurrently for the same provider
- In `ForceReloadPricing` and `Bootstrap`, model listing for different providers now runs in parallel instead of sequentially
- Moved provider key retrieval earlier in `ReloadProvider` to ensure it happens before concurrent model listing
- Added proper context cancellation with defer statements for bifrost contexts
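The fan-out pattern described above can be sketched in a few lines; names are illustrative, not the actual `Bootstrap`/`ForceReloadPricing` code. Each goroutine writes into its own pre-allocated slot, so no mutex is needed.

```go
package main

import (
	"fmt"
	"sync"
)

// listAllModels sketches the parallelization: one goroutine per provider,
// coordinated with a sync.WaitGroup. Pre-allocating results and indexing by i
// keeps the writes race-free without locking.
func listAllModels(providers []string, list func(provider string) []string) [][]string {
	results := make([][]string, len(providers))
	var wg sync.WaitGroup
	for i, p := range providers {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			results[i] = list(p) // previously this ran sequentially per provider
		}(i, p)
	}
	wg.Wait()
	return results
}

func main() {
	fake := func(p string) []string { return []string{p + "/model-a"} }
	fmt.Println(listAllModels([]string{"openai", "anthropic"}, fake))
}
```

With N providers, total wall time drops from the sum of per-provider latencies to roughly the slowest one, which is why the gain is largest for high-latency providers.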
## Type of change
- [x] Refactor
## Affected areas
- [x] Transports (HTTP)
## How to test
Test server startup time with multiple providers configured to verify the performance improvement:
```sh
# Core/Transports
go version
go test ./...
# Test with multiple providers configured
# Measure startup time before and after the change
time go run main.go
```
Configure multiple providers in your bifrost configuration and observe faster startup times, especially noticeable when providers have high latency or many models.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications. The change maintains the same authentication and authorization patterns while improving performance through parallelization.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* fix: reorder migrations and set AllowAllKeys to true for virtual key provider configs (#2158)
## Summary
Fixes database migration ordering issue and ensures virtual key configurations are properly initialized with the AllowAllKeys field set to true.
## Changes
- Reordered database migrations to execute `migrationAddAllowAllKeysToProviderConfig` before `migrationBackfillEmptyVirtualKeyConfigs` to ensure the AllowAllKeys column exists before backfilling
- Added `AllowAllKeys: true` to provider configurations created during virtual key backfill migration to enable unrestricted key access by default
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Verify that database migrations run successfully and virtual key configurations are created with proper defaults:
```sh
# Core/Transports
go version
go test ./...
```
Test migration ordering by running against a fresh database to ensure no column reference errors occur.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
This change enables unrestricted key access by default for virtual key configurations, which may have security implications depending on the intended access control model.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* feat: implement scoped pricing override
* refactor: custom pricing refactor
* fix: resolve merge conflicts in config loading and governance functions (#2230)
## Summary
Resolves Git merge conflicts in the bifrost-http configuration loading code by cleaning up duplicate function definitions and consolidating the configuration initialization flow.
## Changes
- Removed Git merge conflict markers and duplicate code blocks from `LoadConfig` function
- Consolidated governance configuration loading by keeping both `loadGovernanceConfigFromFile` and `loadGovernanceConfig` functions with distinct purposes
- Removed duplicate `convertSchemasMCPClientConfigToTable` function definition
- Moved pricing overrides initialization logic to `initFrameworkConfig` function for better organization
- Cleaned up extensive duplicate default configuration loading code that was causing merge conflicts
- Changed error handling for pricing overrides from returning error to logging warning
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Verify that configuration loading works correctly without merge conflicts:
```sh
# Core/Transports
go version
go test ./...
go build ./transports/bifrost-http/...
```
Test configuration loading with various scenarios:
- Config file present
- Config file absent (default loading)
- Store-based configuration
- Governance and MCP configuration loading
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications - this is a merge conflict resolution that maintains existing functionality.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: add Stability AI model support for Bedrock image generation (#2180)
## Summary
Adds support for Stability AI image generation models (stability.stable-image-*) to the Bedrock provider, enabling text-to-image generation with models like stability.stable-image-core-v1:1 and stability.stable-image-ultra-v1:1.
## Changes
- Added `isStabilityAIModel()` function to detect Stability AI models by "stability." prefix
- Created `ToStabilityAIImageGenerationRequest()` to convert Bifrost requests to Stability AI's flat request format
- Implemented `StabilityAIImageGenerationRequest` type with support for prompt, mode, aspect_ratio, output_format, seed, and negative_prompt parameters
- Added conditional routing in `ImageGeneration()` to use Stability AI request format when appropriate
- Extended known fields for image generation parameters to include "aspect_ratio" and "input_images"
- Updated documentation comment to reflect Stability AI model support
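The routing decision is a simple prefix check, sketched below. The struct is a trimmed, hypothetical stand-in for the flat Stability AI request format the PR describes; only the prefix detection is taken directly from the description.

```go
package main

import (
	"fmt"
	"strings"
)

// isStabilityAIModel mirrors the described detection: Stability AI models on
// Bedrock are identified by the "stability." model-ID prefix, and
// ImageGeneration() routes them to the Stability AI request format.
func isStabilityAIModel(model string) bool {
	return strings.HasPrefix(model, "stability.")
}

// stabilityRequest is a trimmed stand-in for the flat request format
// (prompt, mode, aspect_ratio, output_format, seed, negative_prompt).
type stabilityRequest struct {
	Prompt      string `json:"prompt"`
	AspectRatio string `json:"aspect_ratio,omitempty"`
}

func main() {
	fmt.Println(isStabilityAIModel("stability.stable-image-core-v1:1")) // Stability route
	fmt.Println(isStabilityAIModel("amazon.titan-image-generator-v2"))  // existing route
}
```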
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test Stability AI image generation through the Bedrock provider:
```sh
# Core/Transports
go version
go test ./...
# Test with a Stability AI model
curl -X POST http://localhost:8080/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer your-key" \
-d '{
"model": "stability.stable-image-core-v1:1",
"prompt": "A beautiful sunset over mountains",
"aspect_ratio": "16:9",
"output_format": "PNG"
}'
```
Ensure AWS credentials are configured for Bedrock access and the Stability AI models are available in your region.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No additional security implications beyond existing Bedrock provider authentication and AWS credential handling.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: add Stability AI image edit models support to Bedrock provider (#2225)
## Summary
Adds support for Stability AI image editing models in the Bedrock provider, expanding image editing capabilities beyond the existing Titan and Nova Canvas models.
## Changes
- Added `getStabilityAIEditTaskType()` function to infer edit task types from Stability AI model names (inpaint, outpaint, recolor, search-replace, erase-object, remove-bg, control-sketch, control-structure, style-guide, style-transfer, upscale-creative, upscale-conservative, upscale-fast)
- Created `ToStabilityAIImageEditRequest()` function to convert Bifrost requests to Stability AI's flat JSON format, with task-specific field validation
- Added `StabilityAIImageEditRequest` struct with comprehensive field support for all Stability AI edit operations
- Enhanced `BedrockImageGenerationResponse` with Seeds and FinishReasons fields for Stability AI compatibility
- Modified `ImageEdit()` method to route requests to appropriate conversion function based on model type
- Updated documentation to reflect expanded model support
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test with various Stability AI edit models through the Bedrock provider:
```sh
# Core/Transports
go version
go test ./...
# Test image editing with Stability AI models
# Example: stable-image-inpaint, stable-outpaint, stable-creative-upscale, etc.
```
Verify that task-specific parameters are correctly mapped and invalid fields are filtered out based on the detected task type.
## Screenshots/Recordings
N/A - Backend functionality only
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
Image data is handled as base64-encoded strings. Mask and image parameters are properly validated before processing.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* fix: send back accumulated usage in MCP agent mode (#2246)
## Summary
This PR fixes token usage tracking in MCP agent mode by accumulating usage across all LLM calls in the agent loop and returning the total usage in the final response.
## Changes
- Added usage accumulation logic in the MCP agent execution loop to track token consumption across multiple LLM calls
- Implemented `mergeUsage` function to combine token counts and costs from multiple `BifrostLLMUsage` values, handling all detail sub-fields including prompt tokens, completion tokens, and cost breakdowns
- Extended agent API adapters with `extractUsage` and `applyUsage` methods to handle usage extraction and application for both Chat API and Responses API
- Applied accumulated usage to the final response before returning it to the client
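The accumulation logic can be sketched as field-by-field addition over a trimmed usage struct. The real `BifrostLLMUsage` also carries cost and per-detail breakdowns; this reduction is an assumption of the sketch:

```go
package main

import "fmt"

// BifrostLLMUsage is trimmed to three top-level counters for this sketch.
type BifrostLLMUsage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
}

// mergeUsage accumulates usage across agent-loop LLM calls, tolerating
// nil on either side so the first call seeds the accumulator.
func mergeUsage(acc, next *BifrostLLMUsage) *BifrostLLMUsage {
	if acc == nil {
		return next
	}
	if next == nil {
		return acc
	}
	return &BifrostLLMUsage{
		PromptTokens:     acc.PromptTokens + next.PromptTokens,
		CompletionTokens: acc.CompletionTokens + next.CompletionTokens,
		TotalTokens:      acc.TotalTokens + next.TotalTokens,
	}
}

func main() {
	total := mergeUsage(
		&BifrostLLMUsage{PromptTokens: 120, CompletionTokens: 30, TotalTokens: 150},
		&BifrostLLMUsage{PromptTokens: 200, CompletionTokens: 80, TotalTokens: 280},
	)
	fmt.Printf("%+v\n", *total) // sum across two agent-loop calls
}
```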
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test MCP agent mode with multiple tool calls to verify usage accumulation:
```sh
# Core/Transports
go version
go test ./...
# Test MCP agent mode with multiple LLM calls
# Verify that the returned usage reflects the sum of all calls in the agent loop
# Check that both token counts and cost details are properly accumulated
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications - this change only affects usage tracking and reporting.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* [codemode]: fixing string escape corruption, enable top-level control flow in starlark, refining the prompt of executecode tool (#2206)
## Changes
- **Enhanced Starlark dialect configuration**: Enabled top-level control flow statements (if/for/while), while loops, set() builtin, global variable reassignment, and recursive functions for a more Python-like experience
- **Improved string escape handling**: Removed automatic `\n` to newline conversion, allowing Starlark's native string escape processing to handle `\n`, `\t`, and other escape sequences correctly
- **Updated tool description**: Streamlined the executeToolCode tool description with clearer syntax notes, explicit documentation of Starlark differences from Python (no try/except, no classes, no imports, no f-strings), and emphasis on fresh isolated scope per execution
- **Enhanced error hints**: Added specific error messages for unsupported Python features like try/except/finally/raise, with guidance on alternative approaches and scope persistence warnings
- **Comprehensive test coverage**: Added tests for dialect options, string escape preservation, unsupported feature detection, and end-to-end JSON deserialization scenarios
## Type of change
- [ ] Feature
- [ ] Bug fix
- [x] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go) - Starlark CodeMode improvements
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Test the enhanced Starlark features with MCP CodeMode:
```sh
# Test dialect options (top-level control flow, while loops, etc.)
make test-mcp TESTCASE=TestStarlarkDialectOptions
# Test string escape handling
make test-mcp PATTERN=TestStarlarkStringEscape
# Test unsupported feature detection
make test-mcp PATTERN=TestStarlarkUnsupportedFeatures
```
## Breaking changes
- [ ] Yes
- [x] No
The Starlark changes are additive and maintain backward compatibility while enabling more Python-like syntax.
## Security considerations
Starlark CodeMode maintains its existing sandboxing with no additional network or filesystem access. The dialect enhancements only affect language features within the existing security boundary.
* logging in plugins (#2215)
## Summary
Reorders middleware initialization in the Bifrost HTTP server to ensure tracing middleware is added before transport interceptor middleware in the inference pipeline.
## Changes
- Moved tracing middleware initialization and setup earlier in the bootstrap process
- Reordered middleware registration so tracing middleware is prepended before transport interceptor middleware
- Updated comments to clarify the middleware ordering logic and rationale
The change ensures that tracing context and trace IDs are properly established before other middleware components process requests.
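Why ordering matters can be shown with a toy middleware chain: the first entry wraps, and therefore runs before, everything after it. This is a generic sketch, not the server's actual middleware types:

```go
package main

import "fmt"

type Middleware func(next func()) func()

// chain applies middlewares in slice order, so prepending tracing
// guarantees it executes before the transport interceptor.
func chain(mws []Middleware, final func()) func() {
	h := final
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

func run() []string {
	var order []string
	mk := func(name string) Middleware {
		return func(next func()) func() {
			return func() { order = append(order, name); next() }
		}
	}
	// tracing prepended before the interceptor, mirroring this PR
	h := chain(
		[]Middleware{mk("tracing"), mk("interceptor")},
		func() { order = append(order, "handler") },
	)
	h()
	return order
}

func main() {
	fmt.Println(run())
}
```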
## Type of change
- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Verify that tracing middleware executes before transport interceptor middleware by checking trace logs and middleware execution order.
```sh
# Core/Transports
go version
go test ./...
```
Test with tracing enabled to ensure trace IDs are properly set in context before subsequent middleware processing.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
No security implications - this is a middleware ordering change that affects observability components.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* fix: handling text, vtt, srt response format for transcriptions (#2102)
* feat: add virtual key access management for MCP clients (#2255)
## Summary
Adds virtual key access management to MCP client configuration, allowing administrators to control which virtual keys can access specific MCP servers and which tools they can execute on a per-VK basis.
## Changes
- Added `vk_configs` field to MCP client update API that accepts an array of virtual key configurations
- Each VK config specifies a virtual key ID and the tools it's allowed to execute on that MCP server
- When `vk_configs` is provided, it atomically replaces all existing VK assignments for the MCP client
- Added database method `GetVirtualKeyMCPConfigsByMCPClientID` to retrieve VK configs by MCP client
- Updated OpenAPI documentation to describe the new VK configuration functionality
- Enhanced UI with virtual key access management section in the MCP client sheet
- Added Go SDK context keys for MCP tool filtering: `MCPContextKeyIncludeClients`, `MCPContextKeyIncludeTools`, and `BifrostContextKeyMCPExtraHeaders`
- Updated context keys documentation with comprehensive MCP configuration examples
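From the Go SDK side, the new context keys are set on the request context before calling Bifrost. A minimal sketch with stdlib `context`; the real key constants live in the schemas package, so the local `ctxKey` type and values here are assumptions:

```go
package main

import (
	"context"
	"fmt"
)

type ctxKey string

// Names follow this PR; concrete types/values in the SDK may differ.
const (
	MCPContextKeyIncludeClients ctxKey = "mcp-include-clients"
	MCPContextKeyIncludeTools   ctxKey = "mcp-include-tools"
)

// buildMCPContext restricts a request to one MCP client and two tools.
func buildMCPContext() context.Context {
	ctx := context.WithValue(context.Background(),
		MCPContextKeyIncludeClients, []string{"filesystem"})
	return context.WithValue(ctx,
		MCPContextKeyIncludeTools, []string{"read_file", "write_file"})
}

func main() {
	ctx := buildMCPContext()
	tools, _ := ctx.Value(MCPContextKeyIncludeTools).([]string)
	fmt.Println(tools)
}
```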
## Type of change
- [x] Feature
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] UI (Next.js)
- [x] Docs
## How to test
1. Create an MCP client with tools available
2. Create virtual keys in the system
3. Update the MCP client with VK configurations:
```sh
curl -X PUT /api/mcp/client/{id} \
-H "Content-Type: application/json" \
-d '{
"name": "test-client",
"vk_configs": [
{
"virtual_key_id": "vk-123",
"tools_to_execute": ["*"]
},
{
"virtual_key_id": "vk-456",
"tools_to_execute": ["read_file", "write_file"]
}
]
}'
```
4. Verify VK assignments are created/updated in the database
5. Test the UI by opening an MCP client sheet and managing virtual key access
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
## Screenshots/Recordings
The UI now includes a "Virtual Key Access" section in the MCP client configuration sheet where administrators can:
- Add virtual keys to grant access to the MCP server
- Configure which specific tools each virtual key can execute
- Remove virtual key access entirely
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
This enables fine-grained access control for MCP servers at the virtual key level, complementing the existing governance and budgeting features.
## Security considerations
- VK access controls are enforced through the governance plugin during MCP tool execution
- The atomic replacement of VK assignments prevents partial updates that could leave the system in an inconsistent state
- Tool-level restrictions allow principle of least privilege by limiting which MCP tools each virtual key can access
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: adds option to allow MCP clients to run on all virtual keys (#2258)
## Summary
Adds a new `AllowOnAllVirtualKeys` configuration option for MCP clients that enables them to be accessible to all virtual keys without requiring explicit per-key assignment. When enabled, all tools from the MCP client are available to every virtual key.
## Changes
- Added `AllowOnAllVirtualKeys` boolean field to `MCPClientConfig` schema and database table
- Updated MCP client manager to handle the new field during client updates
- Modified governance plugin to check for clients with `AllowOnAllVirtualKeys` enabled and automatically include their tools for all virtual keys
- Added database migration to add the new column to `TableMCPClient`
- Updated UI to include a toggle for the new setting with tooltip explanation
- Added OpenAPI documentation for the new field
- Updated configuration store methods to persist and retrieve the new field
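The governance check reduces to a short-circuit: a client with the flag set is visible to every virtual key, otherwise the VK must have an explicit assignment. Field names follow the PR; the struct shape is an assumed sketch:

```go
package main

import "fmt"

type MCPClientConfig struct {
	Name                  string
	AllowOnAllVirtualKeys bool
	AssignedVKs           map[string]bool // per-VK assignments when flag is off
}

// clientAccessible sketches the check this PR adds to the governance plugin.
func clientAccessible(c MCPClientConfig, vkID string) bool {
	return c.AllowOnAllVirtualKeys || c.AssignedVKs[vkID]
}

func main() {
	open := MCPClientConfig{Name: "shared-tools", AllowOnAllVirtualKeys: true}
	scoped := MCPClientConfig{Name: "filesystem", AssignedVKs: map[string]bool{"vk-123": true}}
	fmt.Println(clientAccessible(open, "vk-999"))   // any VK may use it
	fmt.Println(clientAccessible(scoped, "vk-999")) // not assigned
}
```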
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs
## How to test
1. Create or update an MCP client with `allow_on_all_virtual_keys: true`
2. Verify that the client's tools are available to all virtual keys without explicit assignment
3. Test that the governance plugin correctly allows tools from such clients
4. Verify the UI toggle works correctly in the MCP client edit sheet
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
The new configuration field `allow_on_all_virtual_keys` defaults to `false` to maintain backward compatibility.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
This is a backward-compatible addition with the new field defaulting to `false`.
## Related issues
N/A
## Security considerations
This feature reduces access control granularity by allowing MCP clients to bypass virtual key restrictions when enabled. Administrators should carefully consider which MCP clients should have this permission as it grants broad access across all virtual keys.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* feat: add provider keys CRUD to configstore and in-memory store (#2159)
## Summary
Adds dedicated CRUD operations for individual provider keys at the data layer
(configstore interface + RDB implementation) and in-memory store. This enables
key-level operations without replacing the entire provider key set, which is
required for the new `/api/providers/{provider}/keys/*` endpoints.
## Changes
- Added `GetProviderKeys`, `GetProviderKey`, `CreateProviderKey`,
`UpdateProviderKey`, `DeleteProviderKey` to `ConfigStore` interface
- Implemented all five methods in `RDBConfigStore` with proper GORM queries,
error handling, and `ErrNotFound` propagation
- Extracted `schemaKeyFromTableKey` and `tableKeyFromSchemaKey` helpers to
deduplicate key conversion logic (previously inlined in `GetProvidersConfig`
and `GetProviderConfig`)
- Added `AddProviderKey`, `UpdateProviderKey`, `RemoveProviderKey` to in-memory
`Config` with mutex locking, DB persistence, and rollback on client update
failure
- Added `GetProviderKeysRaw`, `GetProviderKeysRedacted`, `GetProviderKeyRaw`,
`GetProviderKeyRedacted` read methods
- Added `TestProviderKeyCRUD` and `TestProviderKeyCRUD_ProviderMustExist`
integration tests
- Updated `MockConfigStore` with all five new interface methods
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
# Run configstore tests
go test ./framework/configstore/... -v -run TestProviderKeyCRUD
# Run config tests (mock store)
go test ./transports/bifrost-http/lib/... -v
```
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
Key values are handled through existing redaction infrastructure. No new secret
exposure paths introduced.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* feat: add provider keys HTTP handlers and refactor optional keys (#2160)
## Summary
Adds HTTP handlers for the dedicated provider keys CRUD endpoints and removes
`keys` from provider API responses and payloads. Keys are now exclusively
managed via `/api/providers/{provider}/keys/*`. Also fixes a context timeout bug
in `ReloadProvider` where model discovery could exhaust the shared context
budget, causing subsequent DB calls to fail.
## Changes
### Provider keys handlers (`provider_keys.go`)
- New file with five handlers: `listProviderKeys`, `getProviderKey`,
`createProviderKey`, `updateProviderKey`, `deleteProviderKey`
- Includes `mergeUpdatedKey` (redacted value preservation logic used by
`updateProviderKey`)
- Key handlers enforce keyless provider validation and trigger model discovery
after mutations
### Provider handlers cleanup (`providers.go`)
- Registered new key routes: `GET/POST /api/providers/{provider}/keys`,
`GET/PUT/DELETE /api/providers/{provider}/keys/{key_id}`
- Extracted inline anonymous structs into named `providerCreatePayload` and
`providerUpdatePayload` types (without `Keys` field)
- Removed `Keys` field from `ProviderResponse`
- Switched `addProvider` from `json.Unmarshal` to `sonic.Unmarshal`
- Removed `oldConfigRedacted` fetch and the entire key merge block
(`mergeKeys`, `hasKeys`, `slices` usage) from `updateProvider`
- Removed `Keys` from `getProviderResponseFromConfig` response builder
- Removed unused `encoding/json` import
### Context timeout fix (`server.go`)
- Split shared `bfCtx` in `ReloadProvider` into separate contexts:
`filteredBfCtx` (15s) for filtered `ListModelsRequest` and `unfilteredBfCtx`
(fresh 15s) for unfiltered `ListModelsRequest`, each cancelled after use
- Changed `GetKeysByProvider` to use `context.Background()` since it's a local
DB call that shouldn't be gated by model discovery timeouts
- Added `hasNoKeys` check to emit warn-level logs instead of errors when model
discovery fails because no keys are configured
- Read in-memory key count via `GetProviderKeysRaw` for the `hasNoKeys` check
### Tests (`providers_test.go`)
- Cleared file (contained only tests for removed inline struct decoding)
## Type of change
- [x] Feature
- [x] Bug fix
- [x] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
```sh
# Build
go build ./transports/bifrost-http/...
# Manual: start Bifrost, then test key CRUD
curl -X POST localhost:8080/api/providers/openai/keys -d '{"name":"test-key","value":"sk-test"}'
curl localhost:8080/api/providers/openai/keys
curl -X PUT localhost:8080/api/providers/openai/keys/{key_id} -d '{"name":"updated","value":"sk-new"}'
curl -X DELETE localhost:8080/api/providers/openai/keys/{key_id}
# Verify provider endpoints no longer return keys
curl localhost:8080/api/providers/openai | jq 'has("keys")' # should be false
```
## Screenshots/Recordings
N/A
## Breaking changes
- [x] Yes
- [ ] No
Provider API responses no longer include `keys` field. Provider create/update
payloads no longer accept `keys`. Clients must use the dedicated
`/api/providers/{provider}/keys/*` endpoints for key management.
## Related issues
N/A
## Security considerations
- Key handlers use existing redaction infrastructure (`GetProviderKeyRedacted`)
before returning responses
- Keyless provider validation prevents key creation on providers that don't
support keys
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* feat: migrate frontend to dedicated provider keys API (#2161)
## Summary
Migrates the frontend from reading provider keys via `provider.keys` (removed
from provider API response in PR #2160) to the dedicated `getProviderKeys`
query and `/api/keys` endpoint. Removes `keys` from all provider TypeScript
types. Key mutations patch caches from authoritative server responses; provider
updates invalidate the `ProviderKeys` tag to refresh key statuses after model
discovery. Also adds a read-only routing rule info sheet.
## Changes
### Types (`config.ts`, `schemas.ts`)
- Removed `keys` field from `ModelProviderConfig`, `AddProviderRequest`, and
`UpdateProviderRequest`
- Added `CreateProviderKeyRequest`, `UpdateProviderKeyRequest`,
`ListProviderKeysResponse` types
### Store (`providersApi.ts`, `baseApi.ts`)
- Added `ProviderKeys` tag type to `baseApi`
- Changed `getProviderKeys`/`getProviderKey` from `Providers` tag to
`ProviderKeys` tag (avoids invalidating provider cache on key changes)
- Added `invalidatesTags: [ProviderKeys, DBKeys]` on `updateProvider` mutation
(refreshes key statuses after model discovery)
- Removed `getProvider`/`getProviders` cache patches from `createProviderKey`,
`updateProviderKey`, `deleteProviderKey` (providers no longer carry keys)
- Added duplicate-check guards on `createProviderKey` cache patches to prevent
ghost keys
- Each key mutation patches `getProviderKeys` and `getAllKeys` caches from
authoritative server response
### Components
- **`modelProviderKeysTableView.tsx`**: Already uses `useGetProviderKeysQuery`;
formatting/indentation fixes
- **`page.tsx`**: Removed `keys: []` from fallback provider object and
`createProvider` call; simplified `KeyDiscoveryFailedBadge` to only check
provider-level status (removed per-key status check since keys are no longer
on provider)
- **`routingRuleSheet.tsx`**: `TargetRow` now receives `allKeys` prop (from
`useGetAllKeysQuery`) instead of `providersData` with `.keys`; filters keys
by target provider
- **`routingRuleInfoSheet.tsx`**: New read-only sheet component that displays
routing rule details (conditions, targets with provider icons and weight bars,
fallback chain, scope, priority, timestamps)
- **`settingsPanel.tsx`**: Uses `useGetAllKeysQuery` to determine configured
providers (replaces `p.keys.length > 0` check) and derive
`providerKeyConfigs` per provider
### Other frontend changes (from prior commit, unchanged)
- Added `getProviderKeys`, `getProviderKey` RTK Query endpoints
- Added `createProviderKey`, `updateProviderKey`, `deleteProviderKey` mutations
- Added `buildProviderUpdatePayload` utility for key-free provider updates
- Migrated `providerKeyForm.tsx` to separate create/update mutations
- Updated `addNewKeySheet.tsx` props from `keyIndex` to `keyId`
- Updated all 6 provider form fragments to use `buildProviderUpdatePayload`
- Removed dead `selectedProvider.keys` sync matchers from `providerSlice.ts`
## Type of change
- [x] Feature
- [x] Refactor
- [ ] Bug fix
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [ ] Docs
## How to test
```sh
cd ui
npm run build
npm run lint
```
Manual testing:
1. Navigate to Providers page, select a provider with keys
2. Verify keys table loads correctly from dedicated API
3. Create a new key — verify it appears immediately (no ghost/duplicate)
4. Toggle enable/disable — verify switch updates immediately
5. Edit a key — verify form pre-populates, save works
6. Delete a key — verify it disappears immediately
7. Update provider settings — verify key statuses refresh after save
8. Check sidebar badge shows provider-level discovery failures
9. Open Playground settings — verify provider/key dropdowns work
10. Open Routing Rules — verify target key selector works
11. Click a routing rule row — verify info sheet opens with correct details
(conditions, targets, fallbacks, scope, priority)
## Screenshots/Recordings
N/A — no visual changes to existing features; routing rule info sheet is new.
## Breaking changes
- [ ] Yes
- [x] No
Frontend-only changes consuming the new API shape from PR #2160.
## Related issues
N/A
## Security considerations
No new security considerations. Key values continue to be handled through
existing redaction on the backend.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* refactor: replace string slice with WhiteList type for model restrictions (#2282)
## Summary
Refactored model access control logic by replacing string slice with a dedicated `WhiteList` type for the `Models` field in `TableKey`. This change introduces a more structured approach to handling wildcard permissions and improves code readability.
## Changes
- Changed `Models` field type from `[]string` to `schemas.WhiteList` in `TableKey` struct
- Replaced manual wildcard checking (`model == "*"`) with `IsUnrestricted()` method calls across multiple functions
- Added missing mock method `GetVirtualKeyMCPConfigsByMCPClientIDs` to test configuration store
- Applied the refactoring consistently in `ReloadProvider`, `ForceReloadPricing`, and `Bootstrap` methods
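A minimal sketch of the `WhiteList` type and the `IsUnrestricted` check it replaces manual `model == "*"` comparisons with (the `Allows` helper is an assumed addition for illustration):

```go
package main

import "fmt"

// WhiteList replaces the raw []string used for key model restrictions.
type WhiteList []string

// IsUnrestricted reports whether the list contains the wildcard "*".
func (w WhiteList) IsUnrestricted() bool {
	for _, m := range w {
		if m == "*" {
			return true
		}
	}
	return false
}

// Allows applies the deny-by-default semantics described in the test
// scenarios: empty list permits nothing, wildcard permits everything.
func (w WhiteList) Allows(model string) bool {
	if w.IsUnrestricted() {
		return true
	}
	for _, m := range w {
		if m == model {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(WhiteList{"*"}.Allows("gpt-4o"))        // wildcard
	fmt.Println(WhiteList{"gpt-4o"}.Allows("claude-3")) // specific list
	fmt.Println(WhiteList{}.Allows("gpt-4o"))           // deny-by-default
}
```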
## Type of change
- [x] Refactor
- [ ] Bug fix
- [ ] Feature
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Verify that model access control continues to work correctly with both wildcard and specific model permissions:
```sh
# Core/Transports
go version
go test ./...
# Test specific areas affected by the changes
go test ./framework/configstore/tables/...
go test ./transports/bifrost-http/...
```
Test scenarios should include:
- Keys with wildcard permissions (`["*"]`)
- Keys with specific model restrictions
- Keys with empty model lists (deny-by-default behavior)
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
This refactoring maintains the existing security model for API key permissions. The deny-by-default behavior and wildcard functionality remain unchanged, just implemented through a more structured type system.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* feat: add Plus icon and responsive text to pricing override create button (#2285)
## Summary
Improves the visual design and mobile responsiveness of the pricing overrides section by adding a Plus icon to the create button and optimizing the button text for different screen sizes.
## Changes
- Added Plus icon import from lucide-react
- Enhanced the "Create Override" button with a Plus icon and responsive text that shows "New Override" on larger screens and hides text on mobile
- Adjusted container spacing by removing top margin and changing flex alignment from `items-start` to `items-center` for better visual balance
## Type of change
- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [ ] Docs
## How to test
Navigate to the custom pricing overrides page and verify:
1. The "New Override" button displays with a Plus icon
2. On mobile screens, only the Plus icon is visible
3. On larger screens (sm and above), both icon and "New Override" text are visible
4. The button functionality remains unchanged when clicked
```sh
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
## Screenshots/Recordings
Before/after screenshots showing the button design changes and responsive behavior would be helpful.
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
## Security considerations
No security implications - this is a purely visual enhancement.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* refactor: blacklist models on new convention (#2305)
## Summary
Implements comprehensive blacklist support for model filtering across all providers. This adds the ability to explicitly deny access to specific models at the key level, with blacklist rules taking precedence over allowlist rules.
## Changes
- Added `BlackList` type with semantic validation (supports wildcard "*" for block-all)
- Updated key selection logic to check both allowlist and blacklist constraints
- Modified all provider model listing functions to filter out blacklisted models
- Enhanced UI to support blacklist configuration with improved UX for wildcard selection
- Added blacklist filtering to model catalog and provider handlers
- Updated test cases to verify blacklist functionality
Key design decisions:
- Blacklist always wins over allowlist when conflicts occur
- Wildcard "*" in blacklist blocks all models for that key
- Empty blacklist blocks nothing (permissive default)
- Consistent filtering logic across all providers (Anthropic, Azure, Bedrock, Cohere, etc.)
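The design decisions above collapse into one predicate. The helper name and shape are illustrative, but the semantics follow the listed rules exactly: blacklist wins, "*" in the blacklist blocks everything, empty blacklist blocks nothing:

```go
package main

import "fmt"

// isModelAllowed checks the blacklist first so it always wins over the
// allowlist, then falls back to the allowlist (with its own wildcard).
func isModelAllowed(allow, deny []string, model string) bool {
	for _, d := range deny {
		if d == "*" || d == model {
			return false // blacklist wins, even against allow=["*"]
		}
	}
	for _, a := range allow {
		if a == "*" || a == model {
			return true
		}
	}
	return false
}

func main() {
	allow, deny := []string{"*"}, []string{"gpt-4", "claude-3"}
	fmt.Println(isModelAllowed(allow, deny, "gpt-4"))      // blocked despite wildcard allow
	fmt.Println(isModelAllowed(allow, deny, "gpt-4o-mini")) // allowed
}
```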
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [x] Docs
## How to test
Test blacklist functionality with provider keys:
```sh
# Core/Transports
go version
go test ./...
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```
Example configuration to test:
```json
{
"keys": [{
"id": "test-key",
"models": ["*"],
"blacklisted_models": ["gpt-4", "claude-3"]
}]
}
```
Verify that blacklisted models are excluded from model listings and key selection.
## Screenshots/Recordings
UI now shows "Blocked Models" field with improved tooltips and wildcard handling for denying access to specific models.
## Breaking changes
- [ ] Yes
- [x] No
The `blacklisted_models` field was already present in the schema but not fully implemented. This change makes it functional without breaking existing configurations.
## Related issues
Enhances model access control capabilities for fine-grained permission management.
## Security considerations
Improves security by allowing explicit denial of access to sensitive or expensive models at the key level. Blacklist rules cannot be bypassed by allowlist configurations.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable
* minor fix add blacklisted model field in tableKeyFromSchemaKey (#2324)
## Summary
This PR adds support for the `BlacklistedModels` field when converting schema keys to table keys in the configuration store's RDB implementation.
## Changes
- Added `BlacklistedModels: key.BlacklistedModels` field mapping in the `tableKeyFromSchemaKey` function
- Ensures that blacklisted model information is properly preserved when converting between schema and table representations
## Type of change
- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs
## How to test
Verify that configuration keys with blacklisted models are properly stored and retrieved from the RDB configstore.
```sh
# Core/Transports
go version
go test ./...
```
Test creating configuration entries with `BlacklistedModels` specified and ensure they persist correctly through the RDB layer.
## Screenshots/Recordings
N/A
## Breaking changes
- [ ] Yes
- [x] No
## Related issues
N/A
## Security considerations
None - this change only adds field mapping for existing blacklisted models functionality.
## Checklist
- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable
* feat: add image edit input view on logs (#2321)
## Summary
Adds support for logging image edit and image variation requests by introducing new database columns and UI components to track and display these image manipulation operations alongside existing image generation functionality.
## Changes
- Added `image_edit_input` and `image_variation_input` columns to the logs table with corresponding database migrations
- Extended the Log struct with new fields for storing and parsing image edit/variation input data
- Updated logging plugin to capture image edit and variation request data with large payload threshold handling
- Enhanced UI to display input images and prompts for image edit operations and input images for variation operations
- Added image MIME type detection for proper display of base64-encoded images in the UI
## Type of change
- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas
- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Do…

## Summary
This PR implements MCP tool governance by enforcing virtual key MCP configurations as an execution-time allow-list. When virtual keys have empty MCPConfigs, all MCP tools are denied. When non-empty, each tool is validated against the configured allow-list at both inference time and MCP tool execution.
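A minimal sketch of the allow-list semantics described above (names and shapes are illustrative, not the actual governance plugin API):

```go
package main

import "fmt"

// mcpConfigs maps an MCP client name to the tool names it allows.
// An empty map models a virtual key with no MCPConfigs, which denies
// every MCP tool; otherwise a tool must appear on its client's list.
func isToolAllowed(mcpConfigs map[string][]string, client, tool string) bool {
	if len(mcpConfigs) == 0 {
		return false // empty MCPConfigs: deny-by-default
	}
	for _, allowed := range mcpConfigs[client] {
		if allowed == tool {
			return true
		}
	}
	return false
}

func main() {
	cfg := map[string][]string{"filesystem": {"read_file"}}
	fmt.Println(isToolAllowed(nil, "filesystem", "read_file"))  // false: empty config denies all
	fmt.Println(isToolAllowed(cfg, "filesystem", "read_file"))  // true: tool on the allow-list
	fmt.Println(isToolAllowed(cfg, "filesystem", "write_file")) // false: not allow-listed
}
```

The same check runs at both inference time and tool execution, which is why an allow-list cannot be bypassed by injecting tools earlier in the pipeline.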
## Changes
- Changed MCP function signatures to take `*schemas.BifrostContext` instead of `context.Context` to enable tool tracking
- Added the `BifrostContextKeyMCPAddedTools` context key to track which MCP tools are added to requests
- Enforced virtual key `MCPConfigs` as execution-time allow-lists in `PreMCPHook` and `evaluateGovernanceRequest`
- Added a `DisableAutoToolInject` configuration option that respects the toggle and skips auto-injection when headers are already set by callers
- Added `DecisionMCPToolBlocked` for MCP tool governance violations

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI
## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Docs
## How to test
Test MCP tool governance with virtual keys:
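For example, a virtual key carrying an MCP allow-list might look like this (the `mcp_configs` and `disable_auto_tool_inject` fields are per this PR; the surrounding payload shape is illustrative):

```json
{
  "virtual_key": {
    "name": "vk-mcp-restricted",
    "mcp_configs": [
      {
        "client_name": "filesystem",
        "tools": ["read_file", "list_directory"]
      }
    ],
    "disable_auto_tool_inject": true
  }
}
```

With an empty `mcp_configs` array, the same key would deny every MCP tool.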
New configuration options:

- `disable_auto_tool_inject`: Boolean flag to disable automatic MCP tool injection
- `MCPConfigs`: Array of MCP client configurations that act as allow-lists

## Screenshots/Recordings
UI changes include updated MCP configuration view with clearer descriptions for the disable auto tool injection toggle and improved sidebar navigation labels.
## Breaking changes

- [x] Yes
- [ ] No

**Impact:** MCP-related function signatures now require `*schemas.BifrostContext` instead of `context.Context`. Virtual keys with empty `MCPConfigs` will now deny all MCP tools by default.

**Migration:** Update any custom MCP integrations to use the new context parameter type. Configure `MCPConfigs` on virtual keys that need MCP tool access.
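The context-type migration can be sketched as follows; `BifrostContext` and the key constant here are hypothetical local stand-ins for the real types in the Bifrost `schemas` package:

```go
package main

import "fmt"

// Hypothetical stand-in for schemas.BifrostContext: a context-like
// carrier that, unlike context.Context, lets the MCP layer record
// which tool names it injected into a request.
type BifrostContext struct {
	values map[string]any
}

const BifrostContextKeyMCPAddedTools = "mcp_added_tools"

// addMCPTools shows the tracking pattern enabled by the new signature:
// callers pass *BifrostContext instead of context.Context, and the MCP
// layer stores the injected tool names under a well-known key.
func addMCPTools(ctx *BifrostContext, tools []string) {
	if ctx.values == nil {
		ctx.values = map[string]any{}
	}
	ctx.values[BifrostContextKeyMCPAddedTools] = tools
}

func main() {
	ctx := &BifrostContext{}
	addMCPTools(ctx, []string{"read_file", "list_directory"})
	fmt.Println(ctx.values[BifrostContextKeyMCPAddedTools]) // [read_file list_directory]
}
```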
## Related issues
Implements MCP tool governance and execution-time validation for virtual key configurations.
## Security considerations

Tightens access control: virtual keys with empty `MCPConfigs` deny all MCP tools, and non-empty configs are enforced as allow-lists at both inference time and tool execution, so the restriction cannot be bypassed downstream.
## Checklist
- [x] I read `docs/contributing/README.md` and followed the guidelines