
refactor: blacklist models on new convention#2305

Merged
Pratham-Mishra04 merged 1 commit into v1.5.0 from 03-26-refactor_blacklist_models_on_new_convention on Mar 27, 2026

Conversation

Collaborator

@TejasGhatte TejasGhatte commented Mar 26, 2026

Summary

Implements comprehensive blacklist support for model filtering across all providers. This adds the ability to explicitly deny access to specific models at the key level, with blacklist rules taking precedence over allowlist rules.

Changes

  • Added BlackList type with semantic validation (supports wildcard "*" for block-all)
  • Updated key selection logic to check both allowlist and blacklist constraints
  • Modified all provider model listing functions to filter out blacklisted models
  • Enhanced UI to support blacklist configuration with improved UX for wildcard selection
  • Added blacklist filtering to model catalog and provider handlers
  • Updated test cases to verify blacklist functionality

Key design decisions:

  • Blacklist always wins over allowlist when conflicts occur
  • Wildcard "*" in blacklist blocks all models for that key
  • Empty blacklist blocks nothing (permissive default)
  • Consistent filtering logic across all providers (Anthropic, Azure, Bedrock, Cohere, etc.)

Type of change

  • Feature
  • Bug fix
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (Next.js)
  • Docs

How to test

Test blacklist functionality with provider keys:

# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build

Example configuration to test:

{
  "keys": [{
    "id": "test-key",
    "models": ["*"],
    "blacklisted_models": ["gpt-4", "claude-3"]
  }]
}

Verify that blacklisted models are excluded from model listings and key selection.

Screenshots/Recordings

UI now shows "Blocked Models" field with improved tooltips and wildcard handling for denying access to specific models.

Breaking changes

  • Yes
  • No

The blacklisted_models field was already present in the schema but not fully implemented. This change makes it functional without breaking existing configurations.

Related issues

Enhances model access control capabilities for fine-grained permission management.

Security considerations

Improves security by allowing explicit denial of access to sensitive or expensive models at the key level. Blacklist rules cannot be bypassed by allowlist configurations.

Checklist

  • I read docs/contributing/README.md and followed the guidelines
  • I added/updated tests where appropriate
  • I updated documentation where needed
  • I verified builds succeed (Go and UI)
  • I verified the CI pipeline passes locally if applicable

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


tejas ghatte seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have already signed the CLA but the status is still pending? Let us recheck it.

Contributor

coderabbitai Bot commented Mar 26, 2026

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • Model blocking (blacklist) is now enforced across providers and per-key; blocked models take precedence over allowed lists when filtering available models.
  • UI Updates

    • "Blocked Models" label and tooltip updated; "All Models" selection behaves as a global block sentinel with improved multiselect handling.
  • Validation

    • Blocked-model configurations are validated on key create/update to prevent invalid entries.
  • Behavior

    • Model listing and access logic now consistently exclude blocked models.

Walkthrough

Enforces per-key blacklist (denylist) semantics by adding a new BlackList type and making blacklist checks authoritative across key selection, provider model-list conversion, HTTP filtering, DB serialization, model-catalog upsert calls, and UI for blocked models.

Changes

Cohort / File(s) Summary
Core Key Selection & Tests
core/bifrost.go, core/bifrost_test.go
Key-selection now treats BlacklistedModels as authoritative; single- and batch-key selection exclude keys that blacklist the requested model. Test updated to make fallback key explicitly unrestricted.
Blacklist Type & DB Serialization
core/schemas/account.go, framework/configstore/tables/key.go
Added exported BlackList type with helpers (Contains, IsBlocked, IsEmpty, IsBlockAll, Validate). Switched Key.BlacklistedModels from WhiteListBlackList and updated JSON (BeforeSave/AfterFind) handling and validation.
Provider adapters (call sites)
core/providers/.../*.go (anthropic.go, azure.go, bedrock.go, cohere.go, elevenlabs.go, gemini.go, huggingface.go, mistral.go, openai/openai.go, openrouter.go, replicate/replicate.go, vertex/vertex.go, etc.)
All provider list-model call sites now pass key.BlacklistedModels into their response conversion routines. Some provider list flows also adjust early-return fast paths to consider block-all.
Provider converters (model transforms)
core/providers/*/models.go (openai, anthropic, azure, bedrock, cohere, gemini, huggingface, mistral, elevenlabs, replicate, vertex, etc.)
ToBifrostListModelsResponse signatures updated to accept blacklistedModels; conversion logic now early-returns on block-all, skips blacklisted models during iteration, and omits blacklisted entries during backfill. Azure/Bedrock added matchesBlacklist helpers for alias/base-model matching.
Vertex utilities
core/providers/vertex/utils.go
buildResponseFromConfig now accepts blacklistedModels and excludes blocked aliases/models when constructing responses.
Model Catalog & Call Sites
framework/modelcatalog/main.go, transports/bifrost-http/server/server.go
Removed deniedModels parameter from UpsertModelDataForProvider; updated all call sites to drop the extra nil argument and simplified allow-all logic.
HTTP handlers / filtering
transports/bifrost-http/handlers/providers.go, transports/bifrost-http/handlers/provider_keys.go
filterModelsByKeys refactored to evaluate each model across matched keys: include if at least one key does not block the model and allows it. Added blacklist validation on key create/update endpoints.
UI: Blocked models UX
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx, ui/components/ui/modelMultiselect.tsx
Label changed to “Blocked Models”, tooltip/placeholder updated, ModelMultiselect onChange now handles * sentinel as “all models blocked”; sentinel label/search behavior adjusted.
Tests & Misc
core/bifrost_test.go, plugins/governance/http_transport_prehook_test.go
Updated tests to match new UpsertModelDataForProvider signature and explicit model allowlist behavior. Minor trailing whitespace/test argument fixes.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant KeySelector as Key Selection<br/>(core/bifrost.go)
    participant Provider as Provider Adapter<br/>(e.g., core/providers/openai/openai.go)
    participant Converter as Response Converter<br/>(models.go)
    participant ModelCatalog as Model Catalog

    Client->>KeySelector: Request key for model
    Note over KeySelector: Require NOT key.BlacklistedModels.IsBlocked(model) AND key.Models allows model
    KeySelector-->>Client: Selected key

    Client->>Provider: List models for key
    Provider->>Provider: Call upstream API
    Provider->>Converter: Convert response (pass key.BlacklistedModels)
    Note over Converter: If blacklistedModels.IsBlockAll() → return empty
    Converter->>Converter: For each model: skip if blacklistedModels.IsBlocked(model)
    Converter-->>Provider: Filtered BifrostListModelsResponse

    Provider->>ModelCatalog: Upsert filtered models
    Provider-->>Client: Return available models

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

  • akshaydeo
  • danpiths

🐰 I hopped through code to block the sneaks,
Asterisk sentinel guards the peaks,
Models checked left, right, and center too,
No banned names slip — that's my cue!
Now Bifrost's garden grows safe and new.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 71.43%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: The title clearly describes the main change: implementing blacklist model filtering following a new convention. It directly summarizes the core objective of the PR.
  • Description check ✅ Passed: The description comprehensively covers all template sections: Summary, Changes, Type of change, Affected areas, How to test, Breaking changes, Related issues, Security considerations, and Checklist. All required information is present and well-structured.


Collaborator Author

This stack of pull requests is managed by Graphite. Learn more about stacking.

@TejasGhatte TejasGhatte marked this pull request as ready for review March 26, 2026 11:27
Contributor

greptile-apps Bot commented Mar 26, 2026

Confidence Score: 3/5

Not safe to merge as-is due to blacklist bypass in the OpenRouter model-listing path and a latent wildcard bug in dead code that needs cleanup before it becomes callable.

The core key-selection logic, the schema type, and 11 out of 12 provider implementations are correct. However, the OpenRouter model listing has a verified logic gap — the else branch for unrestricted allowlists never applies the blacklist — making it inconsistent with every other provider and breaking the stated contract that 'blacklist always wins.' The keyAllowsModelForList issue compounds this: even though it is dead code today, it carries a silent wildcard-bypass bug. These two items are concrete, targeted fixes.

core/providers/openrouter/openrouter.go (blacklist not applied in else branch) and transports/bifrost-http/handlers/providers.go (keyAllowsModelForList dead code with latent wildcard bug)

Important Files Changed

Filename Overview
core/schemas/account.go Adds the BlackList type with IsBlocked, IsBlockAll, IsEmpty, Contains, and Validate methods; changes Key.BlacklistedModels from WhiteList to BlackList. Well-structured and mirrors the existing WhiteList design.
framework/configstore/tables/key.go Updates TableKey.BlacklistedModels to BlackList; adds Models.Validate() call (previously missing) and BlacklistedModels.Validate() in BeforeSave; cleans up the old conditional JSON serialization path for blacklisted models. All validation paths are now consistent.
core/providers/openrouter/openrouter.go Adds blacklist early-exit and filters inside the allowedModels.IsRestricted() branch, but the else branch (taken when allowlist is unrestricted ["*"]) still prefixes model IDs without applying the blacklist — blacklisted models leak through when the allowlist is a wildcard.
transports/bifrost-http/handlers/providers.go Refactors filterModelsByKeys to a cleaner key-map + union-access model that correctly integrates the blacklist; however, keyAllowsModelForList (dead code) uses raw slices.Contains that misses the ["*"] wildcard case for both allowlist and blacklist.
transports/bifrost-http/handlers/provider_keys.go Adds BlacklistedModels.Validate() call in both create and update key handlers, mirroring the existing Models.Validate() pattern; correctly rejects malformed wildcard combinations via HTTP 400.
core/bifrost.go Updates both getKeysForBatchAndFileOps and selectKeyFromProviderForModel to reject blacklisted models via IsBlocked; logic is correct and blacklist correctly takes precedence over allowlist.
framework/modelcatalog/main.go Removes the now-redundant deniedModels parameter from UpsertModelDataForProvider since filtering is pushed down to the provider layer; all three call sites passed nil for this parameter, so behaviour is unchanged.
core/providers/vertex/models.go Blacklist filtering added consistently across the main loop, deployment-alias backfill, and allowed-model backfill; the deployment-alias and backfill blocks are already guarded by outer !unfiltered conditions, so no missing guard issue.
core/bifrost_test.go Existing blacklist test updated to add Models: ["*"] to the second key so that key selection logic correctly falls through to it; minor trailing-newline cleanup.
ui/components/ui/modelMultiselect.tsx Renames the wildcard option label from "Allow All Models" to "All Models" and updates the search prefix string to match; clean, low-risk UI change.
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx Adds wildcard UX logic for the blocked-models multiselect: selecting ["*"] collapses to the block-all sentinel, and adding further specific models after selecting ["*"] removes the wildcard. Label and tooltip updated to user-facing "Blocked Models" wording.

Comments Outside Diff (2)

  1. core/providers/openrouter/openrouter.go, line 228-232 (link)

    P1 Blacklist not applied when allowlist is unrestricted

    When allowedModels is ["*"] (unrestricted), allowedModels.IsRestricted() returns false, so the else if branch that applies blacklist filtering is never entered. The final else branch only prefixes model IDs without filtering, so blacklisted models are returned in the model listing.

    For example, with models: ["*"] and blacklisted_models: ["gpt-4o"], OpenRouter's listModelsByKey still returns gpt-4o.

    All other providers (OpenAI, Anthropic, Gemini, etc.) apply the blacklist check inside the main model loop unconditionally, regardless of the allowlist state. The else branch here needs a blacklist pass:

    } else {
        filteredData := make([]schemas.Model, 0, len(openrouterResponse.Data))
        for i := range openrouterResponse.Data {
            rawID := openrouterResponse.Data[i].ID
            if !request.Unfiltered && (blacklistedModels.IsBlocked(rawID) || blacklistedModels.IsBlocked(providerPrefix+rawID)) {
                continue
            }
            openrouterResponse.Data[i].ID = providerPrefix + rawID
            filteredData = append(filteredData, openrouterResponse.Data[i])
        }
        openrouterResponse.Data = filteredData
    }
  2. transports/bifrost-http/handlers/providers.go, line 705-713 (link)

    P1 keyAllowsModelForList doesn't handle ["*"] wildcard blacklist

    Now that the blacklist is functional, keyAllowsModelForList has two problems:

    1. slices.Contains(key.BlacklistedModels, model) does a case-sensitive equality check. It will never match "*", so a key with blacklisted_models: ["*"] (block-all) would never return false here — every model appears allowed, silently defeating the block-all intent.

    2. The same issue applies on the allowlist side: slices.Contains(key.Models, model) also treats ["*"] as a non-matching specific value, so a key with models: ["*"] would appear to deny every specific model.

    Both checks should use the typed BlackList/WhiteList methods instead of raw slices.Contains.

    Note: this function is not currently called anywhere in the codebase (no call sites found). It should either be removed or wired up — leaving it as dead code risks it being introduced later with the latent bug.

Reviews (3): Last reviewed commit: "refactor: blacklist models on new conven..."

Comment thread core/schemas/account.go

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
framework/modelcatalog/main.go (1)

650-719: ⚠️ Potential issue | 🟠 Major

This fallback undoes the provider-side blacklist filtering.

modelData.Data is now the authoritative filtered result from provider list-model handlers, but Lines 679-681 and Lines 711-718 still treat any missing entries as “fill from pricing data”. With allowedModels empty, that re-adds models the provider intentionally removed, including blacklisted_models: ["*"] and partial blacklist cases.

Please restore explicit fallback state in this API (for example, a usePricingFallback flag or the removed blocked-model input) before merging providerModels; otherwise mc.modelPool will advertise blocked models again.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/modelcatalog/main.go` around lines 650 - 719,
UpsertModelDataForProvider currently merges providerModels (built from
mc.pricingData) into finalModelList whenever allowedModels is empty, which
reintroduces provider-blacklisted models; change logic so merges from
providerModels only occur when an explicit fallback flag is set (e.g., add a
boolean parameter usePricingFallback to UpsertModelDataForProvider or honor an
existing removed blocked-model input) and otherwise respect modelData.Data as
authoritative; update the merge point that currently appends providerModels into
finalModelList to check usePricingFallback (or the restored blocked-model input)
before adding, and ensure mc.modelPool[provider] is only populated from
providerModels when that flag permits fallback.
core/providers/openrouter/openrouter.go (1)

192-238: ⚠️ Potential issue | 🟠 Major

Normalize blacklist matching the same way as the allowlist.

The allowlist path here accepts both rawID and openrouter/<rawID>, but the new blacklist path only checks rawID. A config like blacklisted_models: ["openrouter/openai/gpt-4o"] will still survive the API filter and the backfill path, even though that's the ID this provider returns to callers.

Suggested fix
 	allowedModels := key.Models
 	blacklistedModels := key.BlacklistedModels
 	providerPrefix := string(schemas.OpenRouter) + "/"
+	isBlacklisted := func(rawID string) bool {
+		return blacklistedModels.IsBlocked(rawID) || blacklistedModels.IsBlocked(providerPrefix+rawID)
+	}
 
 	if !request.Unfiltered && (allowedModels.IsEmpty() || blacklistedModels.IsBlockAll()) {
 		openrouterResponse.Data = make([]schemas.Model, 0)
 	} else if !request.Unfiltered && allowedModels.IsRestricted() {
@@
-			if blacklistedModels.IsBlocked(rawID) {
+			if isBlacklisted(rawID) {
 				continue
 			}
@@
-			if blacklistedModels.IsBlocked(rawID) {
+			if isBlacklisted(rawID) {
 				continue
 			}
@@
-			if !request.Unfiltered && blacklistedModels.IsBlocked(rawID) {
+			if !request.Unfiltered && isBlacklisted(rawID) {
 				continue
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openrouter/openrouter.go` around lines 192 - 238, The
blacklist checks only test rawID but must mirror the allowlist logic that
accepts both rawID and providerPrefix+rawID (case-insensitively); update all
blacklistedModels.IsBlocked checks (inside the restricted branch loop and the
backfill loop, and in the non-restricted branch) to test both forms (e.g.,
blacklistedModels.IsBlocked(rawID) ||
blacklistedModels.IsBlocked(providerPrefix+rawID) or compare lowercased
variants) so prefixed IDs like "openrouter/openai/gpt-4o" are correctly matched
and filtered; reference the variables/values openrouterResponse.Data[i].ID
(rawID), providerPrefix, allowedModels, blacklistedModels, includedModels, and
request.Unfiltered when locating where to change the checks.
core/providers/anthropic/models.go (1)

34-49: ⚠️ Potential issue | 🟠 Major

Apply the same alias normalization to blacklist checks that you use for the allowlist.

Line 38 can rewrite modelID to the allowlisted alias before the blacklist check. Since BlackList.IsBlocked() only does exact matching, a blacklist entry on the raw Anthropic ID stops matching after that remap, so a base-equivalent allowlisted alias can still be emitted here and reintroduced in the backfill path. Check blacklist entries against both IDs, or use schemas.SameBaseModel() for blacklist matching as well, so blacklist truly overrides allowlist.

Also applies to: 59-63

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/anthropic/models.go` around lines 34 - 49, The blacklist check
currently uses the possibly-remapped modelID, so after you rewrite modelID to an
allowed alias inside the allowedModels loop (using schemas.SameBaseModel) the
subsequent blacklistedModels.IsBlocked(modelID) can miss base-equivalent
blacklist entries; update the blacklist matching in the same code paths (the
block using allowedModels and the similar block later) to either compare against
both the original/raw Anthropic ID and the remapped modelID or replace the exact
IsBlocked check with a base-equivalence check using schemas.SameBaseModel for
each entry in blacklistedModels so blacklist entries truly override allowlist
aliases.
core/providers/vertex/models.go (1)

116-126: ⚠️ Potential issue | 🟠 Major

The configured Vertex fast path still bypasses deployment-value blacklists.

These changes fix the slow path, but core/providers/vertex/vertex.go:197-201 returns early to buildResponseFromConfig(...) whenever deployments or allowlisted models are configured, and that helper only blocks alias entries in core/providers/vertex/utils.go:186-249. A blacklist on the deployment value / underlying model ID still exposes the alias in the common configured path, so the two Vertex list-models paths are still inconsistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/vertex/models.go` around lines 116 - 126, The fast path
returns from buildResponseFromConfig without applying deployment/value
blacklists, so ToBifrostListModelsResponse (and the early return in
buildResponseFromConfig) can expose alias entries whose underlying
deployment/model IDs are blacklisted; update the fast path to apply the same
blocking logic used in the slow path: when building bifrostResponse in
ToBifrostListModelsResponse (and before the early return in
buildResponseFromConfig) filter out models whose deployment value or underlying
model ID appear in blacklistedModels or in the deployments blacklist, i.e.,
reuse the same alias/block checks implemented in the utils helper so both paths
consistently remove blacklisted deployment/model entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/bifrost.go`:
- Around line 6291-6292: The multi-key path still omits blacklist checks: update
getKeysForBatchAndFileOps (used by requestWorker) so that when evaluating each
APIKeyCandidate (k) you compute modelSupported the same way as in
selectKeyFromProviderForModel — require k.Models.IsAllowed(model) AND
!k.BlacklistedModels.IsBlocked(model) (and preserve the hasValue check if
present) — so blacklisted models are consistently rejected across
batch/file/container request routing.

In `@core/providers/azure/azure.go`:
- Line 369: The backfill branch in ToBifrostListModelsResponse
(core/providers/azure/models.go) currently only uses
blacklistedModels.Contains(alias) so aliases get re-added when Azure omits the
deployment; update that logic to use the blacklist matcher (call
blacklistedModels.Matches(alias)) and also check the base model/deployment value
with the same matcher before re-adding an alias. In short: replace the Contains
check with blacklistedModels.Matches(alias) and ensure you also call
blacklistedModels.Matches(baseModelOrDeployment) to skip adding the alias when
either matches the blacklist.

In `@core/providers/azure/models.go`:
- Around line 133-135: The code currently only checks model.ID against
blacklistedModels (via matchesBlacklist) but later emits other identifiers
(deploymentAlias, deploymentValue, allowedModel) and uses simple Contains checks
in other branches; update the filtering so every emitted identifier is validated
with matchesBlacklist(blacklistedModels, <identifier>) instead of or in addition
to Contains: replace the direct Contains checks at the backfill/emit paths with
calls to matchesBlacklist for deploymentAlias, deploymentValue and allowedModel,
and ensure the initial skip condition also tests deploymentAlias and
deploymentValue; keep using the same matchesBlacklist function and the
blacklistedModels value so alias-blocked API hits and deployment-value-blocked
backfills are uniformly rejected and add/adjust unit tests to cover alias and
deployment-value blacklist cases.

In `@core/providers/bedrock/models.go`:
- Around line 311-313: The blacklist check currently only runs
matchesBlacklist(blacklistedModels, model.ModelID), allowing aliases/deployment
IDs and code paths using strings.Contains(...) to bypass it; update all Bedrock
output paths to call matchesBlacklist against every identifier that can
represent the model (e.g., model.ModelID, model.DeploymentAlias/deploymentAlias
string, the raw ID value emitted, and any allowlist/backfill candidate IDs) and
replace any strings.Contains(...) checks with matchesBlacklist so blocked models
are consistently filtered in the model selection, backfill, and alias/deployment
branches (look for uses of matchesBlacklist, model.ModelID, deploymentAlias, and
any Contains(...) fallback logic and apply the same matcher to those variables).

In `@core/schemas/account.go`:
- Around line 104-118: The BlackList.Validate() validator is never invoked when
writing keys, so invalid arrays like ["*", "gpt-4"] can be persisted; call
BlackList.Validate() on the BlacklistedModels field before serializing/saving in
the key write path (the save/update function in the key table code that
currently serializes BlacklistedModels), return or propagate the validation
error to the caller to abort the write, and apply the same validation at the
other write location referenced (the second write site noted in the review).

In `@framework/configstore/tables/key.go`:
- Around line 99-103: When serializing blacklists, ensure nil slices are
normalized to an empty JSON array: before calling json.Marshal on
k.BlacklistedModels (the block that sets k.BlacklistedModelsJSON), treat a nil
slice as an empty slice so Marshal produces "[]" not "null"; likewise, ensure
the corresponding deserialization/AfterFind path resets k.BlacklistedModels to
an empty slice when the JSON is empty or missing to avoid reusing stale values.
Update the serialization site that assigns k.BlacklistedModelsJSON and the
AfterFind handler (referencing k.BlacklistedModels, k.BlacklistedModelsJSON, and
AfterFind) to consistently normalize to [].

In `@transports/bifrost-http/handlers/providers.go`:
- Around line 742-745: The code currently returns the full models slice when
len(matchedKeys) == 0; change this so that if a keys filter was supplied (the
incoming variable/param named keys or similar) and no matchedKeys are found, you
return an empty models slice instead of returning all models—i.e., in the
function that builds matchedKeys (look for variables matchedKeys, models and the
keys parameter in handlers/providers.go), detect the case where keys was
provided but matchedKeys is empty and return nil or an empty []Model{},
otherwise preserve the existing “no keys filter supplied” behavior that returns
models.

In `@ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx`:
- Around line 266-289: Add a data-testid prop to the ModelMultiselect instance
for the blocked-models control so tests can target it; update the JSX where
ModelMultiselect is rendered (the component using provider={providerName} and
value={field.value}) to include data-testid="blocked-models-multiselect"
following the ui/* test id pattern (entity-element-qualifier). Ensure the prop
is added alongside the existing props (onChange, placeholder, unfiltered) so the
multiselect becomes selectable in tests.

In `@ui/components/ui/modelMultiselect.tsx`:
- Around line 116-119: The prefix logic in modelMultiselect.tsx prevents typing
"allow" from matching the ALL_MODELS_OPTION because it only checks "all models";
update the condition that builds prefix (the const prefix assignment) to also
treat "allow" (and optionally "allow all models" or the ALL_MODELS_OPTION.label)
as valid search aliases: compute queryLower = query?.toLowerCase() and change
the matcher to something like checking query is empty OR ["all
models","allow","allow all models",
ALL_MODELS_OPTION.label.toLowerCase()].some(alias => alias.includes(queryLower))
so typing "allow" will still surface the wildcard option.

---

Outside diff comments:
In `@core/providers/anthropic/models.go`:
- Around line 34-49: The blacklist check currently uses the possibly-remapped
modelID, so after you rewrite modelID to an allowed alias inside the
allowedModels loop (using schemas.SameBaseModel) the subsequent
blacklistedModels.IsBlocked(modelID) can miss base-equivalent blacklist entries;
update the blacklist matching in the same code paths (the block using
allowedModels and the similar block later) to either compare against both the
original/raw Anthropic ID and the remapped modelID or replace the exact
IsBlocked check with a base-equivalence check using schemas.SameBaseModel for
each entry in blacklistedModels so blacklist entries truly override allowlist
aliases.

In `@core/providers/openrouter/openrouter.go`:
- Around line 192-238: The blacklist checks only test rawID but must mirror the
allowlist logic that accepts both rawID and providerPrefix+rawID
(case-insensitively); update all blacklistedModels.IsBlocked checks (inside the
restricted branch loop and the backfill loop, and in the non-restricted branch)
to test both forms (e.g., blacklistedModels.IsBlocked(rawID) ||
blacklistedModels.IsBlocked(providerPrefix+rawID) or compare lowercased
variants) so prefixed IDs like "openrouter/openai/gpt-4o" are correctly matched
and filtered; reference the variables/values openrouterResponse.Data[i].ID
(rawID), providerPrefix, allowedModels, blacklistedModels, includedModels, and
request.Unfiltered when locating where to change the checks.

In `@core/providers/vertex/models.go`:
- Around line 116-126: The fast path returns from buildResponseFromConfig
without applying deployment/value blacklists, so ToBifrostListModelsResponse
(and the early return in buildResponseFromConfig) can expose alias entries whose
underlying deployment/model IDs are blacklisted; update the fast path to apply
the same blocking logic used in the slow path: when building bifrostResponse in
ToBifrostListModelsResponse (and before the early return in
buildResponseFromConfig) filter out models whose deployment value or underlying
model ID appear in blacklistedModels or in the deployments blacklist, i.e.,
reuse the same alias/block checks implemented in the utils helper so both paths
consistently remove blacklisted deployment/model entries.

In `@framework/modelcatalog/main.go`:
- Around line 650-719: UpsertModelDataForProvider currently merges
providerModels (built from mc.pricingData) into finalModelList whenever
allowedModels is empty, which reintroduces provider-blacklisted models; change
logic so merges from providerModels only occur when an explicit fallback flag is
set (e.g., add a boolean parameter usePricingFallback to
UpsertModelDataForProvider or honor an existing removed blocked-model input) and
otherwise respect modelData.Data as authoritative; update the merge point that
currently appends providerModels into finalModelList to check usePricingFallback
(or the restored blocked-model input) before adding, and ensure
mc.modelPool[provider] is only populated from providerModels when that flag
permits fallback.
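The flag-gated merge could look roughly like this. The function and parameter names (`mergeModelLists`, `usePricingFallback`) are illustrative stand-ins for the suggested `UpsertModelDataForProvider` change, not the actual API:

```go
package main

// mergeModelLists sketches the fix: the pricing-data pool (providerModels)
// is merged only when the caller explicitly opts into the fallback, so
// provider-blacklisted models are not silently reintroduced.
func mergeModelLists(modelData, providerModels []string, usePricingFallback bool) []string {
	final := append([]string{}, modelData...) // modelData stays authoritative
	if !usePricingFallback {
		return final
	}
	seen := make(map[string]bool, len(final))
	for _, m := range final {
		seen[m] = true
	}
	for _, m := range providerModels {
		if !seen[m] {
			final = append(final, m)
		}
	}
	return final
}

func main() {
	println(len(mergeModelLists([]string{"gpt-4o"}, []string{"gpt-4", "gpt-4o"}, false))) // 1
}
```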
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 66e65119-d5e0-40bf-8bef-8d694bf730af

📥 Commits

Reviewing files that changed from the base of the PR and between 511804c and 76327ac.

📒 Files selected for processing (35)
  • core/bifrost.go
  • core/bifrost_test.go
  • core/providers/anthropic/anthropic.go
  • core/providers/anthropic/models.go
  • core/providers/azure/azure.go
  • core/providers/azure/models.go
  • core/providers/bedrock/bedrock.go
  • core/providers/bedrock/models.go
  • core/providers/cohere/cohere.go
  • core/providers/cohere/models.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/elevenlabs/models.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/models.go
  • core/providers/huggingface/huggingface.go
  • core/providers/huggingface/models.go
  • core/providers/mistral/mistral.go
  • core/providers/mistral/models.go
  • core/providers/openai/models.go
  • core/providers/openai/openai.go
  • core/providers/openrouter/openrouter.go
  • core/providers/replicate/models.go
  • core/providers/replicate/replicate.go
  • core/providers/vertex/models.go
  • core/providers/vertex/utils.go
  • core/providers/vertex/vertex.go
  • core/schemas/account.go
  • docs/features/keys-management.mdx
  • framework/configstore/tables/key.go
  • framework/modelcatalog/main.go
  • plugins/governance/http_transport_prehook_test.go
  • transports/bifrost-http/handlers/providers.go
  • transports/bifrost-http/server/server.go
  • ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
  • ui/components/ui/modelMultiselect.tsx

Comment thread core/bifrost.go
Comment thread core/providers/azure/azure.go
Comment thread core/providers/azure/models.go Outdated
Comment thread core/providers/bedrock/models.go Outdated
Comment thread core/schemas/account.go
Comment thread framework/configstore/tables/key.go
Comment thread transports/bifrost-http/handlers/providers.go
Comment thread ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
Comment thread ui/components/ui/modelMultiselect.tsx
@TejasGhatte TejasGhatte force-pushed the 03-26-refactor_blacklist_models_on_new_convention branch from 76327ac to 7ce882b Compare March 26, 2026 13:35
@coderabbitai coderabbitai Bot requested review from akshaydeo and danpiths March 26, 2026 13:40

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/vertex/models.go (1)

179-198: ⚠️ Potential issue | 🟠 Major

Blacklist check may miss models identified by deployment alias.

When a deployment match is found, the model ID in the response becomes deploymentAlias (line 193), but the blacklist check at line 179 only validates customModelID. If a user blacklists the alias (which is what appears in the response), the model would still be included.

Consider also checking the alias when a deployment match exists:

🐛 Proposed fix
 			if !unfiltered && blacklistedModels.IsBlocked(customModelID) {
 				continue
 			}
+			// Also check deployment alias if it will be used as the model ID
+			if !unfiltered && deploymentAlias != "" && blacklistedModels.IsBlocked(deploymentAlias) {
+				continue
+			}

 			modelID := customModelID
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/vertex/models.go` around lines 179 - 198, The blacklist check
currently only calls blacklistedModels.IsBlocked(customModelID) before building
modelEntry, which misses cases where you later set modelEntry.ID to the
deploymentAlias; update the logic in the loop (around customModelID,
deploymentValue, deploymentAlias, modelEntry.ID) to also check the alias when
deploymentAlias != "" — call blacklistedModels.IsBlocked(deploymentAlias) (or
check both IDs) and continue if either is blocked, and ensure addedModelIDs uses
the final modelEntry.ID when marking entries as added.
♻️ Duplicate comments (3)
core/providers/bedrock/models.go (1)

311-313: ⚠️ Potential issue | 🟠 Major

Also blacklist the raw deployment value.

Lines 311 and 355 only screen model.ModelID / deploymentAlias or alias. If the user blacklists the configured deployment value itself, a matched deployment or a backfilled deployment row can still be emitted. That breaks the “blacklist wins” rule for deployment-backed Bedrock entries.

Suggested fix
-		if !unfiltered && (matchesBlacklist(blacklistedModels, model.ModelID) ||
-			(deploymentAlias != "" && matchesBlacklist(blacklistedModels, deploymentAlias))) {
+		if !unfiltered && (matchesBlacklist(blacklistedModels, model.ModelID) ||
+			(deploymentAlias != "" && matchesBlacklist(blacklistedModels, deploymentAlias)) ||
+			(deploymentValue != "" && matchesBlacklist(blacklistedModels, deploymentValue))) {
 			continue
 		}
@@
-			if !unfiltered && matchesBlacklist(blacklistedModels, alias) {
+			if !unfiltered && (matchesBlacklist(blacklistedModels, alias) ||
+				matchesBlacklist(blacklistedModels, deploymentValue)) {
 				continue
 			}

Also applies to: 355-356

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/models.go` around lines 311 - 313, The blacklist check
currently only tests model.ModelID and deploymentAlias/alias, so raw deployment
values can slip through; update both conditional blocks that call
matchesBlacklist(blacklistedModels, model.ModelID) || (deploymentAlias != "" &&
matchesBlacklist(..., deploymentAlias)) to also test the raw deployment value
(e.g., matchesBlacklist(blacklistedModels, model.Deployment) or
matchesBlacklist(blacklistedModels, deployment) as appropriate) and likewise add
the same check where alias is used around lines 355-356 so any
configured/backfilled deployment string is also blocked.
core/providers/azure/models.go (1)

133-135: ⚠️ Potential issue | 🟠 Major

Screen deploymentValue before emitting alias-backed models.

Lines 133 and 172 still never check deploymentValue. If an alias resolves to a blacklisted base model, the alias survives both the API path and the deployment backfill because we only screen model.ID and deploymentAlias. Please add the same matchesBlacklist guard for deploymentValue and cover the alias-allowed / target-blacklisted case.

🔧 Proposed fix
-		if !unfiltered && (matchesBlacklist(blacklistedModels, model.ID) ||
-			(deploymentAlias != "" && matchesBlacklist(blacklistedModels, deploymentAlias))) {
+		if !unfiltered && (matchesBlacklist(blacklistedModels, model.ID) ||
+			(deploymentAlias != "" && matchesBlacklist(blacklistedModels, deploymentAlias)) ||
+			(deploymentValue != "" && matchesBlacklist(blacklistedModels, deploymentValue))) {
 			continue
 		}
@@
-			if !unfiltered && matchesBlacklist(blacklistedModels, alias) {
+			if !unfiltered && (matchesBlacklist(blacklistedModels, alias) ||
+				(deploymentValue != "" && matchesBlacklist(blacklistedModels, deploymentValue))) {
 				continue
 			}

Also applies to: 172-174

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/azure/models.go` around lines 133 - 135, The code filters only
model.ID and deploymentAlias against blacklistedModels but misses checking
deploymentValue, so add the same blacklist guard for deploymentValue wherever
models are emitted or backfilled (the conditional using unfiltered,
matchesBlacklist(blacklistedModels, model.ID) and
matchesBlacklist(blacklistedModels, deploymentAlias)); update both occurrences
(the API-emission branch and the deployment backfill branch) to include
matchesBlacklist(blacklistedModels, deploymentValue) so alias-backed models are
rejected if their resolved deploymentValue is blacklisted, ensuring the
alias-allowed/target-blacklisted case is covered.
transports/bifrost-http/handlers/providers.go (1)

742-745: ⚠️ Potential issue | 🟠 Major

Don't widen the result set when no requested key matches this provider.

If keys is present but every ID is stale or belongs to a different provider, this branch falls back to the full provider catalog. It should return an empty slice; only the no-keys-filter path should bypass filtering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/handlers/providers.go` around lines 742 - 745, In the
branch inside the provider filtering logic where it currently does "if
len(matchedKeys) == 0 { return models }", change behavior to return an empty
slice when a keys filter was actually supplied: detect whether the incoming
filter "keys" is present/non-empty and, if so, return an empty slice (not
models); only when no keys filter was provided should the code fall back to
returning the full "models" catalog. Update the logic around matchedKeys/models
to check keys (or its equivalent parameter) and return []Model{} (empty result)
when matchedKeys is empty but keys were supplied.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/bedrock/models.go`:
- Line 260: The early-return condition incorrectly uses (allowedModels.IsEmpty()
&& len(deployments) == 0 || blacklistedModels.IsBlockAll()) so a config that
only sets blacklistedModels will short-circuit and yield an empty catalog;
change the boolean logic so the function only returns immediately when
allowedModels is empty AND deployments is empty AND
blacklistedModels.IsBlockAll() (i.e., require all three before returning).
Update the conditional that references allowedModels, deployments and
blacklistedModels in models.go (preserving the surrounding !unfiltered check) so
blacklist-only keys do not bypass normal model enumeration and filtering.
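Since Go parses `a && b || c` as `(a && b) || c`, the corrected guard needs explicit conjunction of all three conditions. A tiny sketch of the fixed predicate (names mirror the prompt; the real check lives in bedrock's models.go alongside the `!unfiltered` guard):

```go
package main

// shouldReturnEarly bails out of model enumeration only when the allowlist
// is empty AND there are no deployments AND the blacklist blocks everything,
// so a key that merely sets a blacklist still enumerates and filters models.
func shouldReturnEarly(allowEmpty bool, numDeployments int, blockAll bool) bool {
	return allowEmpty && numDeployments == 0 && blockAll
}

func main() {
	println(shouldReturnEarly(true, 0, false)) // false: blacklist-only key keeps enumerating
}
```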

In `@core/providers/openrouter/openrouter.go`:
- Around line 189-190: Blacklisted model checks compare raw IDs but
allowedModels accepts both bare and prefixed forms, so normalize blacklist
lookups to handle "openrouter/foo" as well as "foo": update the logic that
builds/uses blacklistedModels (referencing blacklistedModels and providerPrefix)
to map all entries to a canonical form (e.g., strip the providerPrefix if
present or add it consistently) and use that canonical form when comparing
against allowedModels and incoming model IDs (also normalize incoming IDs
similarly in the same functions/blocks referenced around blacklistedModels,
providerPrefix, and any comparison points such as where allowedModels is
consulted) so that a key with "openrouter/foo" correctly matches and takes
blacklist precedence.

In `@transports/bifrost-http/handlers/provider_keys.go`:
- Around line 101-104: The AfterFind hook in framework/configstore/tables/key.go
currently deserializes BlacklistedModelsJSON but doesn't call Validate(), so
invalid legacy entries (e.g. ["*", "foo"]) slip through and break IsBlockAll()
assumptions; modify the AfterFind function to call
key.BlacklistedModels.Validate() immediately after deserialization and handle
failures: either normalize the list (if it contains "*" plus others, reduce to
["*"]) or log the error and return a wrapped error so the record fails to load,
or alternatively implement a DB migration to normalize existing rows; ensure you
reference and update BlacklistedModelsJSON, AfterFind, Validate(), and
IsBlockAll() behavior accordingly.
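The normalization half of that suggestion can be sketched as below. The hook name `AfterFind` comes from the prompt (a GORM-style hook); the surrounding `Key` type and JSON deserialization are omitted, so this shows only the assumed collapse rule:

```go
package main

// normalizeBlacklist collapses a legacy row like ["*", "foo"] to ["*"],
// so IsBlockAll()/IsBlocked() semantics hold for keys hydrated from the DB.
// An AfterFind hook would call this right after deserializing
// BlacklistedModelsJSON.
func normalizeBlacklist(models []string) []string {
	for _, m := range models {
		if m == "*" {
			return []string{"*"}
		}
	}
	return models
}

func main() {
	println(len(normalizeBlacklist([]string{"*", "foo"}))) // 1
}
```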

---

Outside diff comments:
In `@core/providers/vertex/models.go`:
- Around line 179-198: The blacklist check currently only calls
blacklistedModels.IsBlocked(customModelID) before building modelEntry, which
misses cases where you later set modelEntry.ID to the deploymentAlias; update
the logic in the loop (around customModelID, deploymentValue, deploymentAlias,
modelEntry.ID) to also check the alias when deploymentAlias != "" — call
blacklistedModels.IsBlocked(deploymentAlias) (or check both IDs) and continue if
either is blocked, and ensure addedModelIDs uses the final modelEntry.ID when
marking entries as added.

---

Duplicate comments:
In `@core/providers/azure/models.go`:
- Around line 133-135: The code filters only model.ID and deploymentAlias
against blacklistedModels but misses checking deploymentValue, so add the same
blacklist guard for deploymentValue wherever models are emitted or backfilled
(the conditional using unfiltered, matchesBlacklist(blacklistedModels, model.ID)
and matchesBlacklist(blacklistedModels, deploymentAlias)); update both
occurrences (the API-emission branch and the deployment backfill branch) to
include matchesBlacklist(blacklistedModels, deploymentValue) so alias-backed
models are rejected if their resolved deploymentValue is blacklisted, ensuring
the alias-allowed/target-blacklisted case is covered.

In `@core/providers/bedrock/models.go`:
- Around line 311-313: The blacklist check currently only tests model.ModelID
and deploymentAlias/alias, so raw deployment values can slip through; update
both conditional blocks that call matchesBlacklist(blacklistedModels,
model.ModelID) || (deploymentAlias != "" && matchesBlacklist(...,
deploymentAlias)) to also test the raw deployment value (e.g.,
matchesBlacklist(blacklistedModels, model.Deployment) or
matchesBlacklist(blacklistedModels, deployment) as appropriate) and likewise add
the same check where alias is used around lines 355-356 so any
configured/backfilled deployment string is also blocked.

In `@transports/bifrost-http/handlers/providers.go`:
- Around line 742-745: In the branch inside the provider filtering logic where
it currently does "if len(matchedKeys) == 0 { return models }", change behavior
to return an empty slice when a keys filter was actually supplied: detect
whether the incoming filter "keys" is present/non-empty and, if so, return an
empty slice (not models); only when no keys filter was provided should the code
fall back to returning the full "models" catalog. Update the logic around
matchedKeys/models to check keys (or its equivalent parameter) and return
[]Model{} (empty result) when matchedKeys is empty but keys were supplied.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ace2f4c9-2f96-462c-9181-5297e2e5ba7a

📥 Commits

Reviewing files that changed from the base of the PR and between 76327ac and 7ce882b.

📒 Files selected for processing (35)
  • core/bifrost.go
  • core/bifrost_test.go
  • core/providers/anthropic/anthropic.go
  • core/providers/anthropic/models.go
  • core/providers/azure/azure.go
  • core/providers/azure/models.go
  • core/providers/bedrock/bedrock.go
  • core/providers/bedrock/models.go
  • core/providers/cohere/cohere.go
  • core/providers/cohere/models.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/elevenlabs/models.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/models.go
  • core/providers/huggingface/huggingface.go
  • core/providers/huggingface/models.go
  • core/providers/mistral/mistral.go
  • core/providers/mistral/models.go
  • core/providers/openai/models.go
  • core/providers/openai/openai.go
  • core/providers/openrouter/openrouter.go
  • core/providers/replicate/models.go
  • core/providers/replicate/replicate.go
  • core/providers/vertex/models.go
  • core/providers/vertex/utils.go
  • core/providers/vertex/vertex.go
  • core/schemas/account.go
  • framework/configstore/tables/key.go
  • framework/modelcatalog/main.go
  • plugins/governance/http_transport_prehook_test.go
  • transports/bifrost-http/handlers/provider_keys.go
  • transports/bifrost-http/handlers/providers.go
  • transports/bifrost-http/server/server.go
  • ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
  • ui/components/ui/modelMultiselect.tsx
✅ Files skipped from review due to trivial changes (4)
  • plugins/governance/http_transport_prehook_test.go
  • core/bifrost_test.go
  • core/providers/gemini/gemini.go
  • core/providers/vertex/vertex.go
🚧 Files skipped from review as they are similar to previous changes (17)
  • ui/components/ui/modelMultiselect.tsx
  • framework/configstore/tables/key.go
  • core/providers/huggingface/huggingface.go
  • ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
  • core/providers/mistral/models.go
  • core/providers/bedrock/bedrock.go
  • core/providers/mistral/mistral.go
  • core/providers/elevenlabs/models.go
  • core/providers/anthropic/anthropic.go
  • core/providers/replicate/replicate.go
  • core/providers/azure/azure.go
  • core/providers/vertex/utils.go
  • core/schemas/account.go
  • framework/modelcatalog/main.go
  • core/providers/openai/openai.go
  • core/providers/anthropic/models.go
  • core/bifrost.go

Comment thread core/providers/bedrock/models.go
Comment thread core/providers/openrouter/openrouter.go
Comment thread transports/bifrost-http/handlers/provider_keys.go
@TejasGhatte TejasGhatte force-pushed the 03-26-refactor_blacklist_models_on_new_convention branch from 7ce882b to 1dd55e1 Compare March 26, 2026 14:27

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (3)
core/providers/bedrock/models.go (1)

311-313: ⚠️ Potential issue | 🟠 Major

Blacklist checks still miss deployment values.

Lines 311 and 355 only gate on model ID / alias. If blacklisted_models contains a deployment value, that entry can still be surfaced.

🔧 Suggested fix
-		if !unfiltered && (matchesBlacklist(blacklistedModels, model.ModelID) ||
-			(deploymentAlias != "" && matchesBlacklist(blacklistedModels, deploymentAlias))) {
+		if !unfiltered && (matchesBlacklist(blacklistedModels, model.ModelID) ||
+			(deploymentAlias != "" && matchesBlacklist(blacklistedModels, deploymentAlias)) ||
+			(deploymentValue != "" && matchesBlacklist(blacklistedModels, deploymentValue))) {
 			continue
 		}
@@
-			if !unfiltered && matchesBlacklist(blacklistedModels, alias) {
+			if !unfiltered && (matchesBlacklist(blacklistedModels, alias) ||
+				matchesBlacklist(blacklistedModels, deploymentValue)) {
 				continue
 			}

Also applies to: 355-357

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/bedrock/models.go` around lines 311 - 313, The blacklist check
currently only tests model.ModelID and deploymentAlias, so entries that match
the actual deployment identifier slip through; update the conditional around
matchesBlacklist to also test the model's deployment value (e.g., add
matchesBlacklist(blacklistedModels, model.Deployment) or
matchesBlacklist(blacklistedModels, model.DeploymentName) depending on the field
your struct uses) alongside model.ModelID and deploymentAlias, and make the same
change for the second occurrence near the later block (the other conditional
that uses unfiltered, matchesBlacklist, blacklistedModels and deploymentAlias)
so deployment values are correctly filtered everywhere.
transports/bifrost-http/handlers/provider_keys.go (1)

101-104: ⚠️ Potential issue | 🟠 Major

Runtime validation still misses legacy blacklisted_models rows.

Validate() here only protects HTTP create/update. Keys rehydrated from blacklisted_models_json can still enter memory with invalid wildcard combinations, so the new IsBlockAll/IsBlocked semantics remain bypassable until the DB load path validates or normalizes them too.

Verify that persisted keys are rejected or normalized on load. Expected: AfterFind (or a migration) calls BlacklistedModels.Validate() or collapses ["*", ...] to ["*"].

#!/bin/bash
set -euo pipefail

echo "== Blacklisted model validation call sites =="
rg -n --type=go '\.BlacklistedModels\.Validate\('

echo
echo "== Configstore key hooks =="
fd 'key.go$' framework/configstore --exec sed -n '470,520p' {}

echo
echo "== Configstore blacklist serialization / migration references =="
rg -n -C3 --type=go 'BlacklistedModelsJSON|blacklisted_models' framework/configstore

Also applies to: 197-200

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/handlers/provider_keys.go` around lines 101 - 104,
Persisted keys loaded from the DB can contain legacy/invalid blacklisted_models
(e.g. ["*", ...]) because Validate() is only called on HTTP create/update;
ensure DB-load path normalizes or rejects these by invoking
BlacklistedModels.Validate() (or collapsing wildcard lists to ["*"]) during
model hydration—add the call inside the key's AfterFind hook (or the equivalent
configstore deserialize path that references
BlacklistedModelsJSON/blacklisted_models_json) and return an error or normalize
when Validate() fails so IsBlockAll/IsBlocked semantics cannot be bypassed.
transports/bifrost-http/handlers/providers.go (1)

742-745: ⚠️ Potential issue | 🟠 Major

Returning all models when no keys match contradicts filter semantics.

This was flagged in a previous review. When a keys filter is supplied but none of the requested key IDs belong to this provider, returning the full models slice breaks the expected filter behavior. The function docstring states a model is included "if at least one of the specified keys grants access" — no matching keys means no models should be accessible.

🐛 Suggested fix
-	// If none of the requested key IDs exist in config, fall back to returning all models
+	// If a keys filter was supplied but none of the IDs belong to this provider,
+	// no model is accessible through the requested keys.
 	if len(matchedKeys) == 0 {
-		return models
+		return []string{}
 	}


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/handlers/providers.go` around lines 742 - 745, The
current branch returns the full models slice when len(matchedKeys) == 0 which
violates the filter semantics; change the behavior so that when no requested key
IDs match (matchedKeys is empty) the function returns no models (an empty slice)
instead of models, and update any related comment/docstring for the function
(the one that says "if at least one of the specified keys grants access") to
reflect this behavior; locate the logic using the matchedKeys and models
variables (in the function that applies the keys filter) and replace the
fallback return of models with an empty result.
🧹 Nitpick comments (1)
ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx (1)

271-280: Extract duplicated "*" normalization logic into a shared helper

The same wildcard-normalization branch logic now exists in both Allowed Models and Blocked Models handlers. Centralizing it reduces drift risk.

♻️ Suggested refactor
+const normalizeAllModelsSelection = (previous: string[] = [], next: string[]): string[] => {
+	const hadStar = previous.includes("*");
+	const hasStar = next.includes("*");
+	if (!hadStar && hasStar) return ["*"];
+	if (hadStar && hasStar && next.length > 1) return next.filter((m) => m !== "*");
+	return next;
+};
...
-									onChange={(models: string[]) => {
-										const hadStar = (field.value || []).includes("*");
-										const hasStar = models.includes("*");
-										if (!hadStar && hasStar) {
-											field.onChange(["*"]);
-										} else if (hadStar && hasStar && models.length > 1) {
-											field.onChange(models.filter((m: string) => m !== "*"));
-										} else {
-											field.onChange(models);
-										}
-									}}
+									onChange={(models: string[]) => {
+										field.onChange(normalizeAllModelsSelection(field.value || [], models));
+									}}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx` around lines
271 - 280, Extract the duplicated "*" normalization logic into a single helper
(e.g., normalizeModelSelection(models: string[], previous?: string[])) that
takes the new models array and optional previous value and returns the
normalized array following the existing rules (if previously no "*" and new
includes "*" -> ["*"]; if previously had "*" and new includes "*" and length>1
-> remove "*" from new; otherwise return new). Replace the identical onChange
branches in both the Allowed Models and Blocked Models handlers to call
normalizeModelSelection(models, field.value) and pass its return to
field.onChange; keep the helper colocated with the fragment or exported from a
small utils file so both handlers can import/use it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@core/providers/bedrock/models.go`:
- Around line 311-313: The blacklist check currently only tests model.ModelID
and deploymentAlias, so entries that match the actual deployment identifier slip
through; update the conditional around matchesBlacklist to also test the model's
deployment value (e.g., add matchesBlacklist(blacklistedModels,
model.Deployment) or matchesBlacklist(blacklistedModels, model.DeploymentName)
depending on the field your struct uses) alongside model.ModelID and
deploymentAlias, and make the same change for the second occurrence near the
later block (the other conditional that uses unfiltered, matchesBlacklist,
blacklistedModels and deploymentAlias) so deployment values are correctly
filtered everywhere.

In `@transports/bifrost-http/handlers/provider_keys.go`:
- Around line 101-104: Persisted keys loaded from the DB can contain
legacy/invalid blacklisted_models (e.g. ["*", ...]) because Validate() is only
called on HTTP create/update; ensure DB-load path normalizes or rejects these by
invoking BlacklistedModels.Validate() (or collapsing wildcard lists to ["*"])
during model hydration—add the call inside the key's AfterFind hook (or the
equivalent configstore deserialize path that references
BlacklistedModelsJSON/blacklisted_models_json) and return an error or normalize
when Validate() fails so IsBlockAll/IsBlocked semantics cannot be bypassed.

In `@transports/bifrost-http/handlers/providers.go`:
- Around line 742-745: The current branch returns the full models slice when
len(matchedKeys) == 0 which violates the filter semantics; change the behavior
so that when no requested key IDs match (matchedKeys is empty) the function
returns no models (an empty slice) instead of models, and update any related
comment/docstring for the function (the one that says "if at least one of the
specified keys grants access") to reflect this behavior; locate the logic using
the matchedKeys and models variables (in the function that applies the keys
filter) and replace the fallback return of models with an empty result.

---

Nitpick comments:
In `@ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx`:
- Around line 271-280: Extract the duplicated "*" normalization logic into a
single helper (e.g., normalizeModelSelection(models: string[], previous?:
string[])) that takes the new models array and optional previous value and
returns the normalized array following the existing rules (if previously no "*"
and new includes "*" -> ["*"]; if previously had "*" and new includes "*" and
length>1 -> remove "*" from new; otherwise return new). Replace the identical
onChange branches in both the Allowed Models and Blocked Models handlers to call
normalizeModelSelection(models, field.value) and pass its return to
field.onChange; keep the helper colocated with the fragment or exported from a
small utils file so both handlers can import/use it.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a892b043-770d-42f1-9a3f-108b40633672

📥 Commits

Reviewing files that changed from the base of the PR and between 7ce882b and 1dd55e1.

📒 Files selected for processing (35)
  • core/bifrost.go
  • core/bifrost_test.go
  • core/providers/anthropic/anthropic.go
  • core/providers/anthropic/models.go
  • core/providers/azure/azure.go
  • core/providers/azure/models.go
  • core/providers/bedrock/bedrock.go
  • core/providers/bedrock/models.go
  • core/providers/cohere/cohere.go
  • core/providers/cohere/models.go
  • core/providers/elevenlabs/elevenlabs.go
  • core/providers/elevenlabs/models.go
  • core/providers/gemini/gemini.go
  • core/providers/gemini/models.go
  • core/providers/huggingface/huggingface.go
  • core/providers/huggingface/models.go
  • core/providers/mistral/mistral.go
  • core/providers/mistral/models.go
  • core/providers/openai/models.go
  • core/providers/openai/openai.go
  • core/providers/openrouter/openrouter.go
  • core/providers/replicate/models.go
  • core/providers/replicate/replicate.go
  • core/providers/vertex/models.go
  • core/providers/vertex/utils.go
  • core/providers/vertex/vertex.go
  • core/schemas/account.go
  • framework/configstore/tables/key.go
  • framework/modelcatalog/main.go
  • plugins/governance/http_transport_prehook_test.go
  • transports/bifrost-http/handlers/provider_keys.go
  • transports/bifrost-http/handlers/providers.go
  • transports/bifrost-http/server/server.go
  • ui/app/workspace/providers/fragments/apiKeysFormFragment.tsx
  • ui/components/ui/modelMultiselect.tsx
✅ Files skipped from review due to trivial changes (2)
  • ui/components/ui/modelMultiselect.tsx
  • plugins/governance/http_transport_prehook_test.go
🚧 Files skipped from review as they are similar to previous changes (20)
  • core/providers/huggingface/huggingface.go
  • core/bifrost_test.go
  • core/providers/azure/azure.go
  • core/providers/bedrock/bedrock.go
  • transports/bifrost-http/server/server.go
  • core/providers/replicate/replicate.go
  • core/providers/mistral/mistral.go
  • framework/configstore/tables/key.go
  • core/providers/openrouter/openrouter.go
  • core/providers/openai/models.go
  • core/providers/elevenlabs/models.go
  • core/providers/mistral/models.go
  • core/providers/anthropic/models.go
  • framework/modelcatalog/main.go
  • core/providers/vertex/utils.go
  • core/schemas/account.go
  • core/providers/huggingface/models.go
  • core/providers/azure/models.go
  • core/providers/vertex/vertex.go
  • core/providers/replicate/models.go


Pratham-Mishra04 commented Mar 27, 2026

Merge activity

  • Mar 27, 6:59 AM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Mar 27, 6:59 AM UTC: @Pratham-Mishra04 merged this pull request with Graphite.

@Pratham-Mishra04 Pratham-Mishra04 merged commit e92af44 into v1.5.0 Mar 27, 2026
5 of 6 checks passed
@Pratham-Mishra04 Pratham-Mishra04 deleted the 03-26-refactor_blacklist_models_on_new_convention branch March 27, 2026 06:59
@coderabbitai coderabbitai Bot mentioned this pull request Apr 1, 2026
akshaydeo added a commit that referenced this pull request Apr 18, 2026
* refactor: standardize empty array conventions for VK Provider & MCP Configs, and makes Provider Config weight optional for routing (#1932)

## Summary

Changes Virtual Key provider and MCP configurations from "allow-all by default" to "deny-by-default" security model. Virtual Keys now require explicit provider and MCP client configurations to allow access, improving security posture.

## Changes

- **Provider Configs**: Empty `provider_configs` now blocks all providers instead of allowing all
- **MCP Configs**: Empty `mcp_configs` now blocks all MCP tools instead of allowing all  
- **Weight Field**: Changed provider `weight` from required `float64` to optional `*float64` - null weight excludes provider from weighted routing
- **Migration**: Added automatic backfill migration to preserve existing Virtual Key behavior by adding all available providers/MCP clients to VKs with empty configs
- **Documentation**: Updated all references to reflect new deny-by-default behavior
- **UI Updates**: Modified Virtual Key creation/editing interface to reflect new behavior and weight handling

## Type of change

- [x] Feature
- [x] Refactor
- [x] Documentation

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs

## How to test

Test Virtual Key creation and provider/MCP access:

```sh
# Core/Transports
go version
go test ./...

# Test Virtual Key with no provider configs blocks requests
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer sk-bf-empty-vk" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'
# Should return error about no providers configured

# Test Virtual Key with provider configs allows requests  
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer sk-bf-configured-vk" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'
# Should work normally

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

## Breaking changes

- [x] Yes

**Impact**: Existing Virtual Keys with empty `provider_configs` or `mcp_configs` would be blocked after this change.

**Migration**: Automatic migration `migrationBackfillEmptyVirtualKeyConfigs` runs on startup to backfill existing Virtual Keys with all available providers/MCP clients, preserving current behavior. New Virtual Keys created after this change will use deny-by-default.

## Security considerations

This change significantly improves security posture by requiring explicit configuration of allowed providers and MCP tools for Virtual Keys. The automatic migration ensures no disruption to existing deployments while new Virtual Keys benefit from the more secure default behavior.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: add MCP auto tool injection toggle (#1933)

## Summary

Adds a new configuration option `DisableAutoToolInject` to the MCP (Model Context Protocol) system that allows disabling automatic tool injection into requests. When enabled, MCP tools are only included when explicitly requested via context headers or filters, providing more granular control over tool availability.

## Changes

- Added `DisableAutoToolInject` field to `MCPToolManagerConfig` schema with runtime update support
- Implemented atomic boolean storage in `ToolsManager` to safely handle concurrent access
- Added logic in `ParseAndAddToolsToRequest` to respect the disable flag and only inject tools when explicit context filters are present
- Extended configuration management with database migration, UI controls, and API endpoints
- Added hot-reload capability through `UpdateMCPDisableAutoToolInject` methods across the stack
- Updated UI with a toggle switch and clear documentation about the feature's behavior
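
The atomic storage and injection decision described above can be sketched like this; type and method names are illustrative stand-ins for the real `ToolsManager` and `ParseAndAddToolsToRequest` logic.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// toolsManager sketches the hot-reloadable flag: an atomic.Bool lets the
// config handler flip the setting while request goroutines read it
// without locks.
type toolsManager struct {
	disableAutoToolInject atomic.Bool
}

// UpdateDisableAutoToolInject is the hot-reload entry point.
func (m *toolsManager) UpdateDisableAutoToolInject(disabled bool) {
	m.disableAutoToolInject.Store(disabled)
}

// shouldInjectTools mirrors the decision described above: when
// auto-injection is disabled, tools are only added if the caller supplied
// explicit include filters (e.g. via the x-bf-mcp-include-tools header).
func (m *toolsManager) shouldInjectTools(hasExplicitFilters bool) bool {
	if m.disableAutoToolInject.Load() {
		return hasExplicitFilters
	}
	return true
}

func main() {
	var m toolsManager
	fmt.Println(m.shouldInjectTools(false)) // default: auto-inject -> true
	m.UpdateDisableAutoToolInject(true)
	fmt.Println(m.shouldInjectTools(false)) // disabled, no filters -> false
	fmt.Println(m.shouldInjectTools(true))  // disabled, explicit filters -> true
}
```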

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [ ] Docs

## How to test

Validate the new MCP auto tool injection toggle:

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

Test the feature:
1. Configure MCP clients and tools
2. Enable "Disable Auto Tool Injection" in the MCP configuration UI
3. Make requests without explicit tool headers - tools should not be injected
4. Make requests with `x-bf-mcp-include-tools` header - tools should be injected
5. Verify hot-reload works by toggling the setting without server restart

## Screenshots/Recordings

UI changes include a new toggle switch in the MCP configuration view with descriptive text explaining when tools are injected based on explicit headers.

## Breaking changes

- [ ] Yes
- [x] No

This is a backward-compatible addition with a default value of `false` (auto injection enabled).

## Related issues

This addresses the need for more granular control over MCP tool injection behavior in request processing.

## Security considerations

The feature provides better control over tool exposure by allowing administrators to require explicit opt-in for tool injection, potentially reducing unintended tool access.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: VK MCP config now works as an AllowList (#1940)

## Summary

This PR implements MCP tool governance by enforcing virtual key MCP configurations as an execution-time allow-list. When virtual keys have empty MCPConfigs, all MCP tools are denied. When non-empty, each tool is validated against the configured allow-list at both inference time and MCP tool execution.

## Changes

- **Context parameter updates**: Changed MCP-related functions to use `*schemas.BifrostContext` instead of `context.Context` to enable tool tracking
- **Tool tracking**: Added `BifrostContextKeyMCPAddedTools` context key to track which MCP tools are added to requests
- **Governance enforcement**: Virtual key MCP configurations now act as execution-time allow-lists with validation in both `PreMCPHook` and `evaluateGovernanceRequest`
- **Auto-injection control**: auto-injection now respects the `DisableAutoToolInject` toggle and is skipped when tool headers are already set by callers
- **Decision type**: Added `DecisionMCPToolBlocked` for MCP tool governance violations
- **UI improvements**: Updated MCP view description and sidebar item naming for better clarity
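
The execution-time check can be sketched as below: an empty config list denies all MCP tools, and otherwise each tool is validated against its client's allow-list. Field names are illustrative, not the exact Bifrost schema.

```go
package main

import "fmt"

// vkMCPConfig sketches a virtual key's per-client tool allow-list.
type vkMCPConfig struct {
	ClientName string
	Tools      []string // "*" allows every tool on that client
}

// isMCPToolAllowed mirrors the governance enforcement described above:
// empty configs deny all MCP tools; otherwise the tool must appear in
// the allow-list of its client (or the client must allow "*").
func isMCPToolAllowed(configs []vkMCPConfig, client, tool string) bool {
	if len(configs) == 0 {
		return false // deny-by-default
	}
	for _, c := range configs {
		if c.ClientName != client {
			continue
		}
		for _, t := range c.Tools {
			if t == "*" || t == tool {
				return true
			}
		}
	}
	return false
}

func main() {
	configs := []vkMCPConfig{{ClientName: "gmail", Tools: []string{"send_email"}}}
	fmt.Println(isMCPToolAllowed(configs, "gmail", "send_email")) // true
	fmt.Println(isMCPToolAllowed(configs, "gmail", "delete_all")) // false
	fmt.Println(isMCPToolAllowed(nil, "gmail", "send_email"))     // false: empty denies all
}
```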

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Docs

## How to test

Test MCP tool governance with virtual keys:

```sh
# Core/Transports
go version
go test ./...

# Test with virtual key having empty MCPConfigs (should deny all MCP tools)
curl -X POST /v1/chat/completions \
  -H "x-bf-virtual-key: test-vk-empty-mcp" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'

# Test with virtual key having specific MCP tool allowlist
curl -X POST /v1/chat/completions \
  -H "x-bf-virtual-key: test-vk-with-mcp" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "test"}]}'

# Test disable auto tool inject configuration
curl -X PUT /v1/config/mcp/disable-auto-tool-inject \
  -d '{"disable": true}'

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

New configuration options:
- `disable_auto_tool_inject`: Boolean flag to disable automatic MCP tool injection
- Virtual key `MCPConfigs`: Array of MCP client configurations that act as allow-lists

## Screenshots/Recordings

UI changes include updated MCP configuration view with clearer descriptions for the disable auto tool injection toggle and improved sidebar navigation labels.

## Breaking changes

- [x] Yes
- [ ] No

**Impact**: MCP-related function signatures now require `*schemas.BifrostContext` instead of `context.Context`. Virtual keys with empty MCPConfigs will now deny all MCP tools by default.

**Migration**: Update any custom MCP integrations to use the new context parameter type. Configure MCPConfigs on virtual keys that need MCP tool access.

## Related issues

Implements MCP tool governance and execution-time validation for virtual key configurations.

## Security considerations

- **Access control**: Virtual key MCP configurations now enforce strict allow-lists for tool execution
- **Context isolation**: Tool tracking is isolated per request context to prevent cross-request leakage
- **Validation**: Both pre-execution and execution-time validation prevent unauthorized tool access

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* refactor: standardize empty array conventions for VK Provider Config Allowed Keys (#2006)

## Summary

Migrates VK provider config allowed keys from implicit allow-all semantics to explicit deny-by-default behavior. Adds `AllowAllKeys` boolean field to enable granular key access control while maintaining backward compatibility.

## Changes

- Added `AllowAllKeys` boolean field to `TableVirtualKeyProviderConfig` with database migration
- Backfilled existing configs with `allow_all_keys=true` to preserve current behavior
- Updated key resolution logic: an empty key list now denies all access, while the `["*"]` wildcard allows all keys
- Modified governance resolver to set empty `includeOnlyKeys` slice when no keys are configured
- Enhanced HTTP handlers to recognize `["*"]` wildcard and set `AllowAllKeys` flag appropriately
- Updated UI to display "Allow All Keys" option and show deny-by-default messaging
- Added JSON unmarshaling support for `["*"]` wildcard in config files
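
The resolution logic can be sketched as a single predicate; this is an illustrative simplification of the governance resolver, not the actual code.

```go
package main

import "fmt"

// resolveAllowedKeys sketches the new semantics: the AllowAllKeys flag
// (or the ["*"] wildcard) allows every key, an empty list denies all,
// and a specific list allows only those IDs.
func resolveAllowedKeys(allowAllKeys bool, keyIDs []string, candidate string) bool {
	if allowAllKeys {
		return true
	}
	for _, id := range keyIDs {
		if id == "*" || id == candidate {
			return true
		}
	}
	return false // empty list => deny-by-default
}

func main() {
	fmt.Println(resolveAllowedKeys(true, nil, "key-1"))                // true: AllowAllKeys set
	fmt.Println(resolveAllowedKeys(false, []string{"*"}, "key-1"))     // true: wildcard
	fmt.Println(resolveAllowedKeys(false, []string{"key-2"}, "key-1")) // false: not listed
	fmt.Println(resolveAllowedKeys(false, nil, "key-1"))               // false: empty denies
}
```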

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Docs

## How to test

Validate the migration and new key access control behavior:

```sh
# Core/Transports
go version
go test ./...

# Test migration runs successfully
go run main.go migrate

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

Test scenarios:
1. Create VK with empty `key_ids` - should deny all keys
2. Create VK with `key_ids: ["*"]` - should allow all keys  
3. Create VK with specific key IDs - should allow only those keys
4. Verify existing VKs maintain their current behavior after migration

## Screenshots/Recordings

UI now shows:
- "Allow All Keys" option in key selection dropdown
- "No keys allowed" vs "All keys allowed" status indicators
- "No providers configured (deny-by-default)" messaging

## Breaking changes

- [ ] Yes
- [x] No

The migration preserves existing behavior by setting `allow_all_keys=true` for configs that previously had no keys specified.

## Related issues

Part of VK access control enhancement initiative.

## Security considerations

Improves security posture by implementing deny-by-default semantics for key access. Existing deployments maintain current access patterns through automatic backfill migration.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* refactor: standardize empty array conventions for allowed models (#2113)

## Summary

Standardizes empty array conventions across Bifrost to implement deny-by-default security semantics. Previously, empty arrays for `allowed_models` and `Models` fields meant "allow all", creating potential security gaps. Now `["*"]` explicitly means "allow all" while empty arrays mean "deny all".

## Changes

- **Core Logic**: Updated model filtering in `bifrost.go` and `selectKeyFromProviderForModel` to treat empty `Models` arrays as deny-all and `["*"]` as allow-all
- **Database Migration**: Added `migrationBackfillAllowedModelsWildcard` to convert existing empty arrays to `["*"]` preserving current behavior for existing records
- **Model Catalog**: Updated `IsModelAllowedForProvider` to use wildcard semantics with deny-by-default fallback
- **Schema Defaults**: Changed default `Models` value from `[]` to `["*"]` in table definitions and form schemas
- **UI Components**: Enhanced `ModelMultiselect` with `allowAllOption` prop and updated virtual key forms to handle wildcard selection
- **Documentation**: Updated JSON schemas, comments, and tooltips to reflect new conventions
- **Governance**: Updated provider config filtering logic to use new wildcard semantics
- **Server Bootstrap**: Added wildcard filtering when loading models to prevent literal "*" from appearing as a model name
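
The filtering described above can be sketched as follows; a minimal illustration (not the real `IsModelAllowedForProvider`) that also strips the literal `"*"` so it never surfaces as a model name, matching the bootstrap behavior.

```go
package main

import "fmt"

// filterAllowedModels applies the new convention for a key's Models
// field: ["*"] allows every available model, an empty list allows none,
// and the literal "*" is never exposed as a model name.
func filterAllowedModels(allowed, available []string) []string {
	if len(allowed) == 0 {
		return nil // deny-by-default
	}
	allowAll := false
	set := map[string]bool{}
	for _, m := range allowed {
		if m == "*" {
			allowAll = true
			continue // never expose "*" as a model name
		}
		set[m] = true
	}
	var out []string
	for _, m := range available {
		if allowAll || set[m] {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	available := []string{"gpt-4", "claude-3", "gemini-pro"}
	fmt.Println(filterAllowedModels([]string{"*"}, available))     // all three
	fmt.Println(filterAllowedModels([]string{"gpt-4"}, available)) // [gpt-4]
	fmt.Println(filterAllowedModels(nil, available))               // [] (deny all)
}
```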

## Type of change

- [x] Refactor
- [ ] Bug fix
- [ ] Feature
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs

## How to test

Validate the migration and new semantics:

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

Test scenarios:
1. Create new virtual keys - should default to `["*"]` for allowed models
2. Create new provider keys - should default to `["*"]` for models
3. Verify existing keys with empty arrays are migrated to `["*"]`
4. Test that empty arrays now deny all models/keys as expected
5. Verify UI shows "All models allowed" for wildcard and "No models (deny all)" for empty arrays

## Screenshots/Recordings

UI changes include:
- Model multiselect now shows "Allow All Models" option
- Virtual key details display "All Models" badge for wildcard vs "No models (deny all)" for empty
- Form placeholders updated to reflect new semantics

## Breaking changes

- [x] Yes
- [ ] No

**Migration Impact**: The database migration automatically converts existing empty `allowed_models` and `models_json` arrays to `["*"]`, preserving current behavior. However, any new configurations with empty arrays will now deny access instead of allowing all. Applications relying on "empty = allow all" semantics must be updated to use `["*"]` explicitly.

## Related issues

Part of security hardening initiative to implement explicit allow-lists and deny-by-default semantics across Bifrost configuration.

## Security considerations

This change significantly improves security posture by:
- Eliminating ambiguous "empty means allow all" semantics
- Implementing explicit deny-by-default for new configurations
- Requiring intentional wildcard usage via `["*"]` for broad access
- Maintaining backward compatibility through automatic migration

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* refactor: replace string slices with WhiteList for allowlist fields (#2125)

## Summary

Introduces a new `WhiteList` type to standardize whitelist behavior across the codebase, replacing manual slice operations and string comparisons with semantic methods for handling allow/deny lists.

## Changes

- Added `WhiteList` type with methods `IsAllowed()`, `IsUnrestricted()`, `IsEmpty()`, `Contains()`, and `Validate()`
- Replaced `[]string` fields with `WhiteList` for model restrictions, tool filtering, and key access controls
- Updated all whitelist logic to use semantic methods instead of manual `slices.Contains()` checks
- Added validation to ensure wildcards ("*") aren't mixed with specific values and prevent duplicates
- Improved case-insensitive matching for whitelist comparisons
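
A minimal sketch of the `WhiteList` type with the methods named above; behavior follows the PR description (empty = deny all, `["*"]` = allow all, case-insensitive matching), but the exact implementation details are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// WhiteList serializes as a plain []string, so existing configurations
// stay compatible.
type WhiteList []string

func (w WhiteList) IsEmpty() bool        { return len(w) == 0 }
func (w WhiteList) IsUnrestricted() bool { return len(w) == 1 && w[0] == "*" }

// Contains does a case-insensitive membership check.
func (w WhiteList) Contains(v string) bool {
	for _, item := range w {
		if strings.EqualFold(item, v) {
			return true
		}
	}
	return false
}

// IsAllowed is the main entry point: deny-by-default on empty lists.
func (w WhiteList) IsAllowed(v string) bool {
	if w.IsEmpty() {
		return false
	}
	if w.IsUnrestricted() {
		return true
	}
	return w.Contains(v)
}

// Validate rejects duplicates and wildcards mixed with specific values.
func (w WhiteList) Validate() error {
	seen := map[string]bool{}
	for _, item := range w {
		key := strings.ToLower(item)
		if seen[key] {
			return fmt.Errorf("duplicate entry %q", item)
		}
		seen[key] = true
		if item == "*" && len(w) > 1 {
			return fmt.Errorf("wildcard cannot be mixed with specific values")
		}
	}
	return nil
}

func main() {
	wl := WhiteList{"gpt-4", "claude-3"}
	fmt.Println(wl.IsAllowed("GPT-4"))                  // true: case-insensitive
	fmt.Println(WhiteList{}.IsAllowed("gpt-4"))         // false: empty denies
	fmt.Println(WhiteList{"*"}.IsAllowed("gpt-4"))      // true: unrestricted
	fmt.Println(WhiteList{"*", "gpt-4"}.Validate())     // error: mixed wildcard
}
```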

## Type of change

- [x] Refactor

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Plugins

## How to test

Verify that whitelist behavior remains consistent across all affected components:

```sh
# Core/Transports
go version
go test ./...

# Test specific whitelist scenarios:
# - Empty lists deny all access
# - ["*"] allows all access  
# - Specific lists only allow listed items
# - Mixed wildcards and specific items are rejected
# - Duplicate entries are rejected
```

Test key model filtering, MCP tool execution, and virtual key configurations to ensure whitelist logic works correctly.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

The `WhiteList` type maintains the same JSON serialization format as `[]string`, so existing configurations remain compatible.

## Related issues

N/A

## Security considerations

Improves security by standardizing deny-by-default behavior and adding validation to prevent misconfigured whitelists that could inadvertently grant excessive permissions.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: add request-level extra headers support for MCP tool execution (#2126)

## Summary

This PR adds support for request-level extra headers in MCP tool execution, allowing callers to forward specific headers to MCP servers at runtime based on a per-client allowlist configuration.

## Changes

- Added `AllowedExtraHeaders` field to MCP client configuration with allowlist semantics (empty array = deny all, `["*"]` = allow all)
- Introduced `BifrostContextKeyMCPExtraHeaders` context key to track headers forwarded to MCP tools
- Created `core/mcp/utils` package with `GetHeadersForToolExecution` function to merge static and dynamic headers
- Updated MCP tool execution in both regular tool manager and Starlark code mode to use the new header forwarding system
- Added database migration for `allowed_extra_headers_json` column in MCP client table
- Updated UI to include allowed extra headers configuration in MCP client management
- Enhanced auth demo server example to demonstrate tool-execution level authentication patterns
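
The header merge can be sketched as below. This is an illustrative simplification of `GetHeadersForToolExecution`, not its real signature: static connection headers come first, request-level extras are layered on only if they pass the per-client allowlist, and a small denylist keeps callers from overriding auth headers.

```go
package main

import (
	"fmt"
	"strings"
)

// mergeToolHeaders merges static connection headers with request-level
// extra headers. Allowlist semantics: empty denies all extras, ["*"]
// allows all. Auth headers can never be overridden by extras.
func mergeToolHeaders(static, extra map[string]string, allowed []string) map[string]string {
	denied := map[string]bool{"authorization": true, "x-api-key": true}
	out := make(map[string]string, len(static))
	for k, v := range static {
		out[k] = v
	}
	allowAll := len(allowed) == 1 && allowed[0] == "*"
	for k, v := range extra {
		if denied[strings.ToLower(k)] {
			continue // never let extra headers override auth
		}
		ok := allowAll
		for _, a := range allowed {
			if strings.EqualFold(a, k) {
				ok = true
			}
		}
		if ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	static := map[string]string{"X-API-Key": "connection-secret"}
	extra := map[string]string{"X-Tool-Token": "tool-secret", "Authorization": "spoofed"}
	merged := mergeToolHeaders(static, extra, []string{"X-Tool-Token"})
	fmt.Println(merged["X-Tool-Token"]) // forwarded: on the allowlist
	fmt.Println(merged["X-API-Key"])    // static connection header preserved
	_, spoofed := merged["Authorization"]
	fmt.Println(spoofed) // false: denylist blocks auth override
}
```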

## Type of change

- [x] Feature

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] UI (Next.js)

## How to test

1. Configure an MCP client with allowed extra headers:
```json
{
  "name": "test-client",
  "connection_string": "http://localhost:3002/",
  "auth_type": "headers",
  "headers": {
    "X-API-Key": "connection-secret"
  },
  "allowed_extra_headers": ["X-Tool-Token"],
  "tools_to_execute": ["*"]
}
```

2. Make requests with extra headers that should be forwarded:
```bash
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer your-key" \
  -H "X-Tool-Token: tool-execution-secret" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Use the secret_data tool"}],
    "tools": [{"type": "function", "function": {"name": "secret_data"}}]
  }'
```

3. Test the auth demo server:
```bash
cd examples/mcps/auth-demo-server
go run main.go
# Server demonstrates two-tier auth: connection-level (X-API-Key) and tool-level (X-Tool-Token)
```

4. Run tests:
```sh
go test ./core/mcp/...
go test ./transports/bifrost-http/...

cd ui
pnpm test
pnpm build
```

## Breaking changes

- [ ] Yes
- [x] No

This is a backward-compatible addition. Existing MCP clients will have empty `allowed_extra_headers` (deny all extra headers) which maintains current behavior.

## Security considerations

- Extra headers are filtered through a strict allowlist per MCP client
- Security denylist prevents auth header overrides via extra headers
- Two-tier authentication pattern demonstrated: connection-level + tool-execution level
- Headers are only forwarded to MCP servers that explicitly allow them

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* fix: apply MCP tool filtering headers to tools/list response when using bifrost as MCP gateway (#2127)

## Summary

Adds support for `x-bf-mcp-include-clients` and `x-bf-mcp-include-tools` request headers to filter MCP tools/list response when using Bifrost as an MCP gateway. This ensures that tool filtering is respected at the MCP protocol level, not just during inference.

## Changes

- Implemented dynamic tool filtering in MCP server handlers that respects per-request include headers
- Added `makeIncludeClientsFilter()` function that filters tools based on request context values
- Registered the tool filter on both global and virtual key MCP servers during initialization
- Updated documentation to clarify that `mcp-include-tools` requires `clientName-toolName` format
- Enhanced examples in documentation to show proper tool naming format
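
The tools/list filter can be sketched like this, assuming the `clientName-toolName` naming convention described above; the function name and exact filtering details are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// filterToolsList narrows a tools/list result using the two include
// headers: x-bf-mcp-include-tools matches full "client-tool" names,
// x-bf-mcp-include-clients matches the client prefix. Empty filters
// leave the list unchanged.
func filterToolsList(tools, includeTools, includeClients []string) []string {
	var out []string
	for _, t := range tools {
		if len(includeTools) > 0 && !contains(includeTools, t) {
			continue
		}
		if len(includeClients) > 0 {
			client, _, _ := strings.Cut(t, "-")
			if !contains(includeClients, client) {
				continue
			}
		}
		out = append(out, t)
	}
	return out
}

func contains(list []string, v string) bool {
	for _, item := range list {
		if item == v {
			return true
		}
	}
	return false
}

func main() {
	tools := []string{"gmail-send_email", "gmail-read_email", "filesystem-read_file"}
	fmt.Println(filterToolsList(tools, []string{"gmail-send_email"}, nil)) // [gmail-send_email]
	fmt.Println(filterToolsList(tools, nil, []string{"filesystem"}))       // [filesystem-read_file]
}
```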

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [x] Docs

## How to test

Test MCP gateway functionality with tool filtering:

```sh
# Test tools/list filtering with include-tools header
curl --location 'http://localhost:8080/mcp/tools/list' \
--header 'x-bf-mcp-include-tools: gmail-send_email,filesystem-read_file' \
--header 'Authorization: Bearer your-vk-here'

# Test tools/list filtering with include-clients header  
curl --location 'http://localhost:8080/mcp/tools/list' \
--header 'x-bf-mcp-include-clients: gmail,filesystem' \
--header 'Authorization: Bearer your-vk-here'

# Verify chat completions still respect the same headers
curl --location 'http://localhost:8080/v1/chat/completions' \
--header 'x-bf-mcp-include-tools: gmail-send_email' \
--header 'Content-Type: application/json' \
--data '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "What tools are available?"}]
}'
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

The tool filtering mechanism ensures that virtual key restrictions are properly enforced at the MCP protocol level, preventing unauthorized access to tools that should be filtered out based on request headers.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* refactor: parallelize model listing for providers to speed up startup time (#2151)

## Summary

Parallelizes model listing operations for providers during server startup and provider reloading to significantly reduce initialization time. Previously, model listing was performed sequentially for each provider, causing slower startup times especially when multiple providers were configured.

## Changes

- Added concurrent execution using goroutines and sync.WaitGroup for model listing operations in three key functions: `ReloadProvider`, `ForceReloadPricing`, and `Bootstrap`
- In `ReloadProvider`, both filtered and unfiltered model listing requests now run concurrently for the same provider
- In `ForceReloadPricing` and `Bootstrap`, model listing for different providers now runs in parallel instead of sequentially
- Moved provider key retrieval earlier in `ReloadProvider` to ensure it happens before concurrent model listing
- Added proper context cancellation with defer statements for bifrost contexts
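
The fan-out pattern above can be sketched as follows: one goroutine per provider coordinated by a `sync.WaitGroup`, with each goroutine writing to its own slot so no mutex is needed. `listModels` stands in for the real per-provider network call.

```go
package main

import (
	"fmt"
	"sync"
)

// listAllModels runs the per-provider model listing concurrently instead
// of sequentially, mirroring the Bootstrap/ForceReloadPricing change.
func listAllModels(providers []string, listModels func(string) []string) map[string][]string {
	results := make([]struct {
		provider string
		models   []string
	}, len(providers))

	var wg sync.WaitGroup
	for i, p := range providers {
		wg.Add(1)
		go func(i int, p string) {
			defer wg.Done()
			results[i].provider = p
			results[i].models = listModels(p) // network call in real code
		}(i, p)
	}
	wg.Wait() // startup now waits for the slowest provider, not the sum

	out := make(map[string][]string, len(results))
	for _, r := range results {
		out[r.provider] = r.models
	}
	return out
}

func main() {
	fake := func(p string) []string { return []string{p + "-model-1"} }
	out := listAllModels([]string{"openai", "anthropic", "bedrock"}, fake)
	fmt.Println(len(out)) // 3
}
```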

## Type of change

- [x] Refactor

## Affected areas

- [x] Transports (HTTP)

## How to test

Test server startup time with multiple providers configured to verify the performance improvement:

```sh
# Core/Transports
go version
go test ./...

# Test with multiple providers configured
# Measure startup time before and after the change
time go run main.go
```

Configure multiple providers in your bifrost configuration and observe faster startup times, especially noticeable when providers have high latency or many models.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications. The change maintains the same authentication and authorization patterns while improving performance through parallelization.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* fix: reorder migrations and set AllowAllKeys to true for virtual key provider configs (#2158)

## Summary

Fixes database migration ordering issue and ensures virtual key configurations are properly initialized with the AllowAllKeys field set to true.

## Changes

- Reordered database migrations to execute `migrationAddAllowAllKeysToProviderConfig` before `migrationBackfillEmptyVirtualKeyConfigs` to ensure the AllowAllKeys column exists before backfilling
- Added `AllowAllKeys: true` to provider configurations created during virtual key backfill migration to enable unrestricted key access by default

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Verify that database migrations run successfully and virtual key configurations are created with proper defaults:

```sh
# Core/Transports
go version
go test ./...
```

Test migration ordering by running against a fresh database to ensure no column reference errors occur.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

This change enables unrestricted key access by default for virtual key configurations, which may have security implications depending on the intended access control model.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* feat: implement scoped pricing override

* refactor: custom pricing refactor

* fix: resolve merge conflicts in config loading and governance functions (#2230)

## Summary

Resolves Git merge conflicts in the bifrost-http configuration loading code by cleaning up duplicate function definitions and consolidating the configuration initialization flow.

## Changes

- Removed Git merge conflict markers and duplicate code blocks from `LoadConfig` function
- Consolidated governance configuration loading, keeping `loadGovernanceConfigFromFile` and `loadGovernanceConfig` as separate functions with distinct purposes
- Removed duplicate `convertSchemasMCPClientConfigToTable` function definition
- Moved pricing overrides initialization logic to `initFrameworkConfig` function for better organization
- Cleaned up extensive duplicate default configuration loading code that was causing merge conflicts
- Changed error handling for pricing overrides from returning error to logging warning

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Verify that configuration loading works correctly without merge conflicts:

```sh
# Core/Transports
go version
go test ./...
go build ./transports/bifrost-http/...
```

Test configuration loading with various scenarios:
- Config file present
- Config file absent (default loading)
- Store-based configuration
- Governance and MCP configuration loading

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications - this is a merge conflict resolution that maintains existing functionality.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: add Stability AI model support for Bedrock image generation (#2180)

## Summary

Adds support for Stability AI image generation models (stability.stable-image-*) to the Bedrock provider, enabling text-to-image generation with models like stability.stable-image-core-v1:1 and stability.stable-image-ultra-v1:1.

## Changes

- Added `isStabilityAIModel()` function to detect Stability AI models by "stability." prefix
- Created `ToStabilityAIImageGenerationRequest()` to convert Bifrost requests to Stability AI's flat request format
- Implemented `StabilityAIImageGenerationRequest` type with support for prompt, mode, aspect_ratio, output_format, seed, and negative_prompt parameters
- Added conditional routing in `ImageGeneration()` to use Stability AI request format when appropriate
- Extended known fields for image generation parameters to include "aspect_ratio" and "input_images"
- Updated documentation comment to reflect Stability AI model support

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test Stability AI image generation through the Bedrock provider:

```sh
# Core/Transports
go version
go test ./...

# Test with a Stability AI model
curl -X POST http://localhost:8080/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-key" \
  -d '{
    "model": "stability.stable-image-core-v1:1",
    "prompt": "A beautiful sunset over mountains",
    "aspect_ratio": "16:9",
    "output_format": "PNG"
  }'
```

Ensure AWS credentials are configured for Bedrock access and the Stability AI models are available in your region.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No additional security implications beyond existing Bedrock provider authentication and AWS credential handling.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: add Stability AI image edit models support to Bedrock provider (#2225)

## Summary

Adds support for Stability AI image editing models in the Bedrock provider, expanding image editing capabilities beyond the existing Titan and Nova Canvas models.

## Changes

- Added `getStabilityAIEditTaskType()` function to infer edit task types from Stability AI model names (inpaint, outpaint, recolor, search-replace, erase-object, remove-bg, control-sketch, control-structure, style-guide, style-transfer, upscale-creative, upscale-conservative, upscale-fast)
- Created `ToStabilityAIImageEditRequest()` function to convert Bifrost requests to Stability AI's flat JSON format, with task-specific field validation
- Added `StabilityAIImageEditRequest` struct with comprehensive field support for all Stability AI edit operations
- Enhanced `BedrockImageGenerationResponse` with Seeds and FinishReasons fields for Stability AI compatibility
- Modified `ImageEdit()` method to route requests to appropriate conversion function based on model type
- Updated documentation to reflect expanded model support
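
The task-type inference described above can be sketched as a substring match over the model ID (a minimal sketch; the exact model ID patterns and the full matching logic in the Bedrock provider may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// getStabilityAIEditTaskType sketches the inference described above: map a
// Stability AI model ID onto an edit task type by substring. The model IDs
// used below are illustrative.
func getStabilityAIEditTaskType(model string) string {
	tasks := []string{
		"inpaint", "outpaint", "recolor", "search-replace", "erase-object",
		"remove-bg", "control-sketch", "control-structure", "style-guide",
		"style-transfer", "upscale-creative", "upscale-conservative", "upscale-fast",
	}
	for _, t := range tasks {
		if strings.Contains(model, t) {
			return t
		}
	}
	return "" // unknown model: no task-specific field validation applied
}

func main() {
	fmt.Println(getStabilityAIEditTaskType("stability.stable-image-inpaint-v1:0")) // inpaint
}
```

The detected task type then drives which request fields are kept, so that e.g. an inpaint mask is filtered out of an upscale request.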

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test with various Stability AI edit models through the Bedrock provider:

```sh
# Core/Transports
go version
go test ./...

# Test image editing with Stability AI models
# Example: stable-image-inpaint, stable-outpaint, stable-creative-upscale, etc.
```

Verify that task-specific parameters are correctly mapped and invalid fields are filtered out based on the detected task type.

## Screenshots/Recordings

N/A - Backend functionality only

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

Image data is handled as base64-encoded strings. Mask and image parameters are properly validated before processing.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* fix: send back accumulated usage in MCP agent mode (#2246)

## Summary

This PR fixes token usage tracking in MCP agent mode by accumulating usage across all LLM calls in the agent loop and returning the total usage in the final response.

## Changes

- Added usage accumulation logic in the MCP agent execution loop to track token consumption across multiple LLM calls
- Implemented `mergeUsage` function to combine token counts and costs from multiple `BifrostLLMUsage` values, handling all detail sub-fields including prompt tokens, completion tokens, and cost breakdowns
- Extended agent API adapters with `extractUsage` and `applyUsage` methods to handle usage extraction and application for both Chat API and Responses API
- Applied accumulated usage to the final response before returning it to the client
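
The accumulation can be sketched as follows (a minimal sketch; the real `BifrostLLMUsage` struct carries many more detail sub-fields, including cost breakdowns, that `mergeUsage` also sums):

```go
package main

import "fmt"

// BifrostLLMUsage is a simplified stand-in for the real usage struct.
type BifrostLLMUsage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
}

// mergeUsage sketches the accumulation described above: fold the usage from
// each LLM call in the agent loop into a running total.
func mergeUsage(acc, next *BifrostLLMUsage) *BifrostLLMUsage {
	if acc == nil {
		return next
	}
	if next == nil {
		return acc
	}
	acc.PromptTokens += next.PromptTokens
	acc.CompletionTokens += next.CompletionTokens
	acc.TotalTokens += next.TotalTokens
	return acc
}

func main() {
	// Two LLM calls in one agent loop; the final response reports the sum.
	total := mergeUsage(
		&BifrostLLMUsage{PromptTokens: 100, CompletionTokens: 20, TotalTokens: 120},
		&BifrostLLMUsage{PromptTokens: 150, CompletionTokens: 30, TotalTokens: 180},
	)
	fmt.Println(total.TotalTokens) // 300
}
```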

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test MCP agent mode with multiple tool calls to verify usage accumulation:

```sh
# Core/Transports
go version
go test ./...

# Test MCP agent mode with multiple LLM calls
# Verify that the returned usage reflects the sum of all calls in the agent loop
# Check that both token counts and cost details are properly accumulated
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications - this change only affects usage tracking and reporting.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* [codemode]: fixing string escape corruption, enable top-level control flow in starlark, refining the prompt of executecode tool (#2206)

## Changes

- **Enhanced Starlark dialect configuration**: Enabled top-level control flow statements (if/for/while), while loops, set() builtin, global variable reassignment, and recursive functions for a more Python-like experience
- **Improved string escape handling**: Removed automatic `\n` to newline conversion, allowing Starlark's native string escape processing to handle `\n`, `\t`, and other escape sequences correctly
- **Updated tool description**: Streamlined the executeToolCode tool description with clearer syntax notes, explicit documentation of Starlark differences from Python (no try/except, no classes, no imports, no f-strings), and emphasis on fresh isolated scope per execution
- **Enhanced error hints**: Added specific error messages for unsupported Python features like try/except/finally/raise, with guidance on alternative approaches and scope persistence warnings
- **Comprehensive test coverage**: Added tests for dialect options, string escape preservation, unsupported feature detection, and end-to-end JSON deserialization scenarios
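
The enhanced error hints can be sketched as a trigger scan over the submitted code (a minimal sketch; the trigger list and the hint wording here are illustrative, not Bifrost's actual messages):

```go
package main

import (
	"fmt"
	"strings"
)

// hintForUnsupportedFeature sketches the logic described above: when
// execution fails, look for Python constructs that Starlark does not
// support and return targeted guidance instead of a bare syntax error.
func hintForUnsupportedFeature(code string) string {
	hints := map[string]string{
		"try:":    "Starlark has no try/except; check preconditions explicitly",
		"except":  "Starlark has no try/except; check preconditions explicitly",
		"raise ":  "Starlark has no raise; use fail(\"message\") to abort",
		"class ":  "Starlark has no classes; use dicts or structs instead",
		"import ": "Starlark has no imports; all builtins are preloaded",
	}
	for trigger, hint := range hints {
		if strings.Contains(code, trigger) {
			return hint
		}
	}
	return ""
}

func main() {
	fmt.Println(hintForUnsupportedFeature("class Foo:\n    pass"))
}
```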

## Type of change

- [ ] Feature
- [ ] Bug fix
- [x] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go) - Starlark CodeMode improvements
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Test the enhanced Starlark features with MCP CodeMode:

```sh
# Test dialect options (top-level control flow, while loops, etc.)
make test-mcp TESTCASE=TestStarlarkDialectOptions

# Test string escape handling
make test-mcp PATTERN=TestStarlarkStringEscape

# Test unsupported feature detection
make test-mcp PATTERN=TestStarlarkUnsupportedFeatures
```

## Breaking changes

- [ ] Yes
- [x] No

The Starlark changes are additive and maintain backward compatibility while enabling more Python-like syntax.

## Security considerations

Starlark CodeMode maintains its existing sandboxing with no additional network or filesystem access. The dialect enhancements only affect language features within the existing security boundary.

* logging in plugins (#2215)

## Summary

Reorders middleware initialization in the Bifrost HTTP server to ensure tracing middleware is added before transport interceptor middleware in the inference pipeline.

## Changes

- Moved tracing middleware initialization and setup earlier in the bootstrap process
- Reordered middleware registration so tracing middleware is prepended before transport interceptor middleware
- Updated comments to clarify the middleware ordering logic and rationale

The change ensures that tracing context and trace IDs are properly established before other middleware components process requests.

## Type of change

- [x] Bug fix
- [ ] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Verify that tracing middleware executes before transport interceptor middleware by checking trace logs and middleware execution order.

```sh
# Core/Transports
go version
go test ./...
```

Test with tracing enabled to ensure trace IDs are properly set in context before subsequent middleware processing.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

No security implications - this is a middleware ordering change that affects observability components.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* fix: handling text, vtt, srt response format for transcriptions (#2102)

* feat: add virtual key access management for MCP clients (#2255)

## Summary

Adds virtual key access management to MCP client configuration, allowing administrators to control which virtual keys can access specific MCP servers and which tools they can execute on a per-VK basis.

## Changes

- Added `vk_configs` field to MCP client update API that accepts an array of virtual key configurations
- Each VK config specifies a virtual key ID and the tools it's allowed to execute on that MCP server
- When `vk_configs` is provided, it atomically replaces all existing VK assignments for the MCP client
- Added database method `GetVirtualKeyMCPConfigsByMCPClientID` to retrieve VK configs by MCP client
- Updated OpenAPI documentation to describe the new VK configuration functionality
- Enhanced UI with virtual key access management section in the MCP client sheet
- Added Go SDK context keys for MCP tool filtering: `MCPContextKeyIncludeClients`, `MCPContextKeyIncludeTools`, and `BifrostContextKeyMCPExtraHeaders`
- Updated context keys documentation with comprehensive MCP configuration examples

## Type of change

- [x] Feature

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] UI (Next.js)
- [x] Docs

## How to test

1. Create an MCP client with tools available
2. Create virtual keys in the system
3. Update the MCP client with VK configurations:

```sh
curl -X PUT /api/mcp/client/{id} \
  -H "Content-Type: application/json" \
  -d '{
    "name": "test-client",
    "vk_configs": [
      {
        "virtual_key_id": "vk-123",
        "tools_to_execute": ["*"]
      },
      {
        "virtual_key_id": "vk-456", 
        "tools_to_execute": ["read_file", "write_file"]
      }
    ]
  }'
```

4. Verify VK assignments are created/updated in the database
5. Test the UI by opening an MCP client sheet and managing virtual key access

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

## Screenshots/Recordings

The UI now includes a "Virtual Key Access" section in the MCP client configuration sheet where administrators can:
- Add virtual keys to grant access to the MCP server
- Configure which specific tools each virtual key can execute
- Remove virtual key access entirely

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

This enables fine-grained access control for MCP servers at the virtual key level, complementing the existing governance and budgeting features.

## Security considerations

- VK access controls are enforced through the governance plugin during MCP tool execution
- The atomic replacement of VK assignments prevents partial updates that could leave the system in an inconsistent state
- Tool-level restrictions allow principle of least privilege by limiting which MCP tools each virtual key can access

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: adds option to allow MCP clients to run on all virtual keys (#2258)

## Summary

Adds a new `AllowOnAllVirtualKeys` configuration option for MCP clients that enables them to be accessible to all virtual keys without requiring explicit per-key assignment. When enabled, all tools from the MCP client are available to every virtual key.

## Changes

- Added `AllowOnAllVirtualKeys` boolean field to `MCPClientConfig` schema and database table
- Updated MCP client manager to handle the new field during client updates
- Modified governance plugin to check for clients with `AllowOnAllVirtualKeys` enabled and automatically include their tools for all virtual keys
- Added database migration to add the new column to `TableMCPClient`
- Updated UI to include a toggle for the new setting with tooltip explanation
- Added OpenAPI documentation for the new field
- Updated configuration store methods to persist and retrieve the new field

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [x] Docs

## How to test

1. Create or update an MCP client with `allow_on_all_virtual_keys: true`
2. Verify that the client's tools are available to all virtual keys without explicit assignment
3. Test that the governance plugin correctly allows tools from such clients
4. Verify the UI toggle works correctly in the MCP client edit sheet

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

The new configuration field `allow_on_all_virtual_keys` defaults to `false` to maintain backward compatibility.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

This is a backward-compatible addition with the new field defaulting to `false`.

## Related issues

N/A

## Security considerations

This feature reduces access control granularity by allowing MCP clients to bypass virtual key restrictions when enabled. Administrators should carefully consider which MCP clients should have this permission as it grants broad access across all virtual keys.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* feat: add provider keys CRUD to configstore and in-memory store (#2159)

## Summary

Adds dedicated CRUD operations for individual provider keys at the data layer
(configstore interface + RDB implementation) and in-memory store. This enables
key-level operations without replacing the entire provider key set, which is
required for the new `/api/providers/{provider}/keys/*` endpoints.

## Changes

- Added `GetProviderKeys`, `GetProviderKey`, `CreateProviderKey`,
  `UpdateProviderKey`, `DeleteProviderKey` to `ConfigStore` interface
- Implemented all five methods in `RDBConfigStore` with proper GORM queries,
  error handling, and `ErrNotFound` propagation
- Extracted `schemaKeyFromTableKey` and `tableKeyFromSchemaKey` helpers to
  deduplicate key conversion logic (previously inlined in `GetProvidersConfig`
  and `GetProviderConfig`)
- Added `AddProviderKey`, `UpdateProviderKey`, `RemoveProviderKey` to in-memory
  `Config` with mutex locking, DB persistence, and rollback on client update
  failure
- Added `GetProviderKeysRaw`, `GetProviderKeysRedacted`, `GetProviderKeyRaw`,
  `GetProviderKeyRedacted` read methods
- Added `TestProviderKeyCRUD` and `TestProviderKeyCRUD_ProviderMustExist`
  integration tests
- Updated `MockConfigStore` with all five new interface methods

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
# Run configstore tests
go test ./framework/configstore/... -v -run TestProviderKeyCRUD

# Run config tests (mock store)
go test ./transports/bifrost-http/lib/... -v
```

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

Key values are handled through existing redaction infrastructure. No new secret
exposure paths introduced.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* feat: add provider keys HTTP handlers and refactor optional keys (#2160)

## Summary

Adds HTTP handlers for the dedicated provider keys CRUD endpoints and removes
`keys` from provider API responses and payloads. Keys are now exclusively
managed via `/api/providers/{provider}/keys/*`. Also fixes a context timeout bug
in `ReloadProvider` where model discovery could exhaust the shared context
budget, causing subsequent DB calls to fail.

## Changes

### Provider keys handlers (`provider_keys.go`)
- New file with five handlers: `listProviderKeys`, `getProviderKey`,
  `createProviderKey`, `updateProviderKey`, `deleteProviderKey`
- Includes `mergeUpdatedKey` (redacted value preservation logic used by
  `updateProviderKey`)
- Key handlers enforce keyless provider validation and trigger model discovery
  after mutations
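
The redacted value preservation can be sketched as follows (a minimal sketch; the actual redaction marker and the full merge logic in `mergeUpdatedKey` may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// mergeUpdatedKey sketches the behavior described above: if the client
// echoes back a redacted key value (unchanged from what the API returned),
// keep the stored plaintext instead of overwriting it with the redaction.
// The "***" marker is an assumption for this sketch.
func mergeUpdatedKey(stored, submitted string) string {
	if strings.Contains(submitted, "***") {
		return stored // value untouched by the client; preserve the original
	}
	return submitted // client supplied a new plaintext value
}

func main() {
	fmt.Println(mergeUpdatedKey("sk-real-secret", "sk-re***"))      // sk-real-secret
	fmt.Println(mergeUpdatedKey("sk-real-secret", "sk-new-secret")) // sk-new-secret
}
```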

### Provider handlers cleanup (`providers.go`)
- Registered new key routes: `GET/POST /api/providers/{provider}/keys`,
  `GET/PUT/DELETE /api/providers/{provider}/keys/{key_id}`
- Extracted inline anonymous structs into named `providerCreatePayload` and
  `providerUpdatePayload` types (without `Keys` field)
- Removed `Keys` field from `ProviderResponse`
- Switched `addProvider` from `json.Unmarshal` to `sonic.Unmarshal`
- Removed `oldConfigRedacted` fetch and the entire key merge block
  (`mergeKeys`, `hasKeys`, `slices` usage) from `updateProvider`
- Removed `Keys` from `getProviderResponseFromConfig` response builder
- Removed unused `encoding/json` import

### Context timeout fix (`server.go`)
- Split shared `bfCtx` in `ReloadProvider` into separate contexts:
  `filteredBfCtx` (15s) for filtered `ListModelsRequest` and `unfilteredBfCtx`
  (fresh 15s) for unfiltered `ListModelsRequest`, each cancelled after use
- Changed `GetKeysByProvider` to use `context.Background()` since it's a local
  DB call that shouldn't be gated by model discovery timeouts
- Added `hasNoKeys` check to emit warn-level logs instead of errors when model
  discovery fails because no keys are configured
- Read in-memory key count via `GetProviderKeysRaw` for the `hasNoKeys` check

### Tests (`providers_test.go`)
- Cleared file (contained only tests for removed inline struct decoding)

## Type of change

- [x] Feature
- [x] Bug fix
- [x] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

```sh
# Build
go build ./transports/bifrost-http/...

# Manual: start Bifrost, then test key CRUD
curl -X POST localhost:8080/api/providers/openai/keys -d '{"name":"test-key","value":"sk-test"}'
curl localhost:8080/api/providers/openai/keys
curl -X PUT localhost:8080/api/providers/openai/keys/{key_id} -d '{"name":"updated","value":"sk-new"}'
curl -X DELETE localhost:8080/api/providers/openai/keys/{key_id}

# Verify provider endpoints no longer return keys
curl localhost:8080/api/providers/openai | jq 'has("keys")'  # should be false
```

## Screenshots/Recordings

N/A

## Breaking changes

- [x] Yes
- [ ] No

Provider API responses no longer include `keys` field. Provider create/update
payloads no longer accept `keys`. Clients must use the dedicated
`/api/providers/{provider}/keys/*` endpoints for key management.

## Related issues

N/A

## Security considerations

- Key handlers use existing redaction infrastructure (`GetProviderKeyRedacted`)
  before returning responses
- Keyless provider validation prevents key creation on providers that don't
  support keys

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* feat: migrate frontend to dedicated provider keys API (#2161)

## Summary

Migrates the frontend from reading provider keys via `provider.keys` (removed
from provider API response in PR #2160) to the dedicated `getProviderKeys`
query and `/api/keys` endpoint. Removes `keys` from all provider TypeScript
types. Key mutations patch caches from authoritative server responses; provider
updates invalidate the `ProviderKeys` tag to refresh key statuses after model
discovery. Also adds a read-only routing rule info sheet.

## Changes

### Types (`config.ts`, `schemas.ts`)
- Removed `keys` field from `ModelProviderConfig`, `AddProviderRequest`, and
  `UpdateProviderRequest`
- Added `CreateProviderKeyRequest`, `UpdateProviderKeyRequest`,
  `ListProviderKeysResponse` types

### Store (`providersApi.ts`, `baseApi.ts`)
- Added `ProviderKeys` tag type to `baseApi`
- Changed `getProviderKeys`/`getProviderKey` from `Providers` tag to
  `ProviderKeys` tag (avoids invalidating provider cache on key changes)
- Added `invalidatesTags: [ProviderKeys, DBKeys]` on `updateProvider` mutation
  (refreshes key statuses after model discovery)
- Removed `getProvider`/`getProviders` cache patches from `createProviderKey`,
  `updateProviderKey`, `deleteProviderKey` (providers no longer carry keys)
- Added duplicate-check guards on `createProviderKey` cache patches to prevent
  ghost keys
- Each key mutation patches `getProviderKeys` and `getAllKeys` caches from
  authoritative server response

### Components
- **`modelProviderKeysTableView.tsx`**: Already uses `useGetProviderKeysQuery`;
  formatting/indentation fixes
- **`page.tsx`**: Removed `keys: []` from fallback provider object and
  `createProvider` call; simplified `KeyDiscoveryFailedBadge` to only check
  provider-level status (removed per-key status check since keys are no longer
  on provider)
- **`routingRuleSheet.tsx`**: `TargetRow` now receives `allKeys` prop (from
  `useGetAllKeysQuery`) instead of `providersData` with `.keys`; filters keys
  by target provider
- **`routingRuleInfoSheet.tsx`**: New read-only sheet component that displays
  routing rule details (conditions, targets with provider icons and weight bars,
  fallback chain, scope, priority, timestamps)
- **`settingsPanel.tsx`**: Uses `useGetAllKeysQuery` to determine configured
  providers (replaces `p.keys.length > 0` check) and derive
  `providerKeyConfigs` per provider

### Other frontend changes (from prior commit, unchanged)
- Added `getProviderKeys`, `getProviderKey` RTK Query endpoints
- Added `createProviderKey`, `updateProviderKey`, `deleteProviderKey` mutations
- Added `buildProviderUpdatePayload` utility for key-free provider updates
- Migrated `providerKeyForm.tsx` to separate create/update mutations
- Updated `addNewKeySheet.tsx` props from `keyIndex` to `keyId`
- Updated all 6 provider form fragments to use `buildProviderUpdatePayload`
- Removed dead `selectedProvider.keys` sync matchers from `providerSlice.ts`

## Type of change

- [x] Feature
- [x] Refactor
- [ ] Bug fix
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [ ] Docs

## How to test

```sh
cd ui
npm run build
npm run lint
```

Manual testing:

1. Navigate to Providers page, select a provider with keys
2. Verify keys table loads correctly from dedicated API
3. Create a new key — verify it appears immediately (no ghost/duplicate)
4. Toggle enable/disable — verify switch updates immediately
5. Edit a key — verify form pre-populates, save works
6. Delete a key — verify it disappears immediately
7. Update provider settings — verify key statuses refresh after save
8. Check sidebar badge shows provider-level discovery failures
9. Open Playground settings — verify provider/key dropdowns work
10. Open Routing Rules — verify target key selector works
11. Click a routing rule row — verify info sheet opens with correct details
    (conditions, targets, fallbacks, scope, priority)

## Screenshots/Recordings

N/A — no visual changes to existing features; routing rule info sheet is new.

## Breaking changes

- [ ] Yes
- [x] No

Frontend-only changes consuming the new API shape from PR #2160.

## Related issues

N/A

## Security considerations

No new security considerations. Key values continue to be handled through
existing redaction on the backend.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* refactor: replace string slice with WhiteList type for model restrictions (#2282)

## Summary

Refactored model access control logic by replacing string slice with a dedicated `WhiteList` type for the `Models` field in `TableKey`. This change introduces a more structured approach to handling wildcard permissions and improves code readability.

## Changes

- Changed `Models` field type from `[]string` to `schemas.WhiteList` in `TableKey` struct
- Replaced manual wildcard checking (`model == "*"`) with `IsUnrestricted()` method calls across multiple functions
- Added missing mock method `GetVirtualKeyMCPConfigsByMCPClientIDs` to test configuration store
- Applied the refactoring consistently in `ReloadProvider`, `ForceReloadPricing`, and `Bootstrap` methods
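
The shape of the new type can be sketched as follows (a minimal sketch; the real `schemas.WhiteList` may carry additional methods or validation):

```go
package main

import "fmt"

// WhiteList sketches the type described above: a model allowlist where a
// single "*" entry means unrestricted access.
type WhiteList []string

// IsUnrestricted replaces the manual `model == "*"` checks previously
// inlined in ReloadProvider, ForceReloadPricing, and Bootstrap.
func (w WhiteList) IsUnrestricted() bool {
	for _, m := range w {
		if m == "*" {
			return true
		}
	}
	return false
}

// Allows applies the deny-by-default rule: an empty list permits nothing.
func (w WhiteList) Allows(model string) bool {
	if w.IsUnrestricted() {
		return true
	}
	for _, m := range w {
		if m == model {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(WhiteList{"*"}.IsUnrestricted())      // true
	fmt.Println(WhiteList{"gpt-4o"}.Allows("gpt-4o")) // true
	fmt.Println(WhiteList{}.Allows("gpt-4o"))         // false
}
```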

## Type of change

- [x] Refactor
- [ ] Bug fix
- [ ] Feature
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Verify that model access control continues to work correctly with both wildcard and specific model permissions:

```sh
# Core/Transports
go version
go test ./...

# Test specific areas affected by the changes
go test ./framework/configstore/tables/...
go test ./transports/bifrost-http/...
```

Test scenarios should include:
- Keys with wildcard permissions (`["*"]`)
- Keys with specific model restrictions
- Keys with empty model lists (deny-by-default behavior)

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

This refactoring maintains the existing security model for API key permissions. The deny-by-default behavior and wildcard functionality remain unchanged, just implemented through a more structured type system.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* feat: add Plus icon and responsive text to pricing override create button (#2285)

## Summary

Improves the visual design and mobile responsiveness of the pricing overrides section by adding a Plus icon to the create button and optimizing the button text for different screen sizes.

## Changes

- Added Plus icon import from lucide-react
- Enhanced the "Create Override" button with a Plus icon and responsive text that shows "New Override" on larger screens and hides text on mobile
- Adjusted container spacing by removing top margin and changing flex alignment from `items-start` to `items-center` for better visual balance

## Type of change

- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [ ] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [ ] Docs

## How to test

Navigate to the custom pricing overrides page and verify:
1. The "New Override" button displays with a Plus icon
2. On mobile screens, only the Plus icon is visible
3. On larger screens (sm and above), both icon and "New Override" text are visible
4. The button functionality remains unchanged when clicked

```sh
# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

## Screenshots/Recordings

Before/after screenshots showing the button design changes and responsive behavior would be helpful.

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

## Security considerations

No security implications - this is a purely visual enhancement.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* refactor: blacklist models on new convention (#2305)

## Summary

Implements comprehensive blacklist support for model filtering across all providers. This adds the ability to explicitly deny access to specific models at the key level, with blacklist rules taking precedence over allowlist rules.

## Changes

- Added `BlackList` type with semantic validation (supports wildcard "*" for block-all)
- Updated key selection logic to check both allowlist and blacklist constraints
- Modified all provider model listing functions to filter out blacklisted models
- Enhanced UI to support blacklist configuration with improved UX for wildcard selection
- Added blacklist filtering to model catalog and provider handlers
- Updated test cases to verify blacklist functionality

Key design decisions:
- Blacklist always wins over allowlist when conflicts occur
- Wildcard "*" in blacklist blocks all models for that key
- Empty blacklist blocks nothing (permissive default)
- Consistent filtering logic across all providers (Anthropic, Azure, Bedrock, Cohere, etc.)
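
The selection rule above can be sketched as a single predicate (a minimal sketch; Bifrost's real `BlackList`/`WhiteList` types and method names may differ):

```go
package main

import "fmt"

// isModelAllowed sketches the design decisions listed above: the blacklist
// always wins, "*" in the blacklist blocks everything, an empty blacklist
// blocks nothing, and the allowlist keeps its deny-by-default semantics
// ("*" = unrestricted, empty = nothing allowed).
func isModelAllowed(allow, deny []string, model string) bool {
	for _, m := range deny {
		if m == "*" || m == model {
			return false // blacklist takes precedence over any allowlist entry
		}
	}
	for _, m := range allow {
		if m == "*" || m == model {
			return true
		}
	}
	return false // empty allowlist: deny by default
}

func main() {
	deny := []string{"gpt-4", "claude-3"}
	fmt.Println(isModelAllowed([]string{"*"}, deny, "gpt-4"))  // false: blacklisted
	fmt.Println(isModelAllowed([]string{"*"}, deny, "gpt-4o")) // true: allowed, not blacklisted
}
```

The same predicate applies both during key selection and when filtering provider model listings, which keeps the behavior consistent across providers.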

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [x] Transports (HTTP)
- [x] Providers/Integrations
- [ ] Plugins
- [x] UI (Next.js)
- [x] Docs

## How to test

Test blacklist functionality with provider keys:

```sh
# Core/Transports
go version
go test ./...

# UI
cd ui
pnpm i || npm i
pnpm test || npm test
pnpm build || npm run build
```

Example configuration to test:
```json
{
  "keys": [{
    "id": "test-key",
    "models": ["*"],
    "blacklisted_models": ["gpt-4", "claude-3"]
  }]
}
```

Verify that blacklisted models are excluded from model listings and key selection.

## Screenshots/Recordings

UI now shows "Blocked Models" field with improved tooltips and wildcard handling for denying access to specific models.

## Breaking changes

- [ ] Yes
- [x] No

The `blacklisted_models` field was already present in the schema but not fully implemented. This change makes it functional without breaking existing configurations.

## Related issues

Enhances model access control capabilities for fine-grained permission management.

## Security considerations

Improves security by allowing explicit denial of access to sensitive or expensive models at the key level. Blacklist rules cannot be bypassed by allowlist configurations.

## Checklist

- [x] I read `docs/contributing/README.md` and followed the guidelines
- [x] I added/updated tests where appropriate
- [x] I updated documentation where needed
- [x] I verified builds succeed (Go and UI)
- [x] I verified the CI pipeline passes locally if applicable

* minor fix add blacklisted model field in tableKeyFromSchemaKey (#2324)

## Summary

This PR adds support for the `BlacklistedModels` field when converting schema keys to table keys in the configuration store's RDB implementation.

## Changes

- Added `BlacklistedModels: key.BlacklistedModels` field mapping in the `tableKeyFromSchemaKey` function
- Ensures that blacklisted model information is properly preserved when converting between schema and table representations
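
The fix can be sketched as follows (a minimal sketch; the real schema and table key structs carry many more fields than shown here):

```go
package main

import "fmt"

// SchemaKey and TableKey are simplified stand-ins for the real types.
type SchemaKey struct {
	ID                string
	Models            []string
	BlacklistedModels []string
}

type TableKey struct {
	ID                string
	Models            []string
	BlacklistedModels []string
}

// tableKeyFromSchemaKey sketches the conversion with the fix described
// above: BlacklistedModels is now carried across instead of being dropped.
func tableKeyFromSchemaKey(key SchemaKey) TableKey {
	return TableKey{
		ID:                key.ID,
		Models:            key.Models,
		BlacklistedModels: key.BlacklistedModels, // the previously missing mapping
	}
}

func main() {
	t := tableKeyFromSchemaKey(SchemaKey{
		ID:                "k1",
		Models:            []string{"*"},
		BlacklistedModels: []string{"gpt-4"},
	})
	fmt.Println(t.BlacklistedModels)
}
```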

## Type of change

- [ ] Bug fix
- [x] Feature
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [ ] Plugins
- [ ] UI (Next.js)
- [ ] Docs

## How to test

Verify that configuration keys with blacklisted models are properly stored and retrieved from the RDB configstore.

```sh
# Core/Transports
go version
go test ./...
```

Test creating configuration entries with `BlacklistedModels` specified and ensure they persist correctly through the RDB layer.

## Screenshots/Recordings

N/A

## Breaking changes

- [ ] Yes
- [x] No

## Related issues

N/A

## Security considerations

None - this change only adds field mapping for existing blacklisted models functionality.

## Checklist

- [ ] I read `docs/contributing/README.md` and followed the guidelines
- [ ] I added/updated tests where appropriate
- [ ] I updated documentation where needed
- [ ] I verified builds succeed (Go and UI)
- [ ] I verified the CI pipeline passes locally if applicable

* feat: add image edit input view on logs (#2321)

## Summary

Adds support for logging image edit and image variation requests by introducing new database columns and UI components to track and display these image manipulation operations alongside existing image generation functionality.

## Changes

- Added `image_edit_input` and `image_variation_input` columns to the logs table with corresponding database migrations
- Extended the Log struct with new fields for storing and parsing image edit/variation input data
- Updated logging plugin to capture image edit and variation request data with large payload threshold handling
- Enhanced UI to display input images and prompts for image edit operations and input images for variation operations
- Added image MIME type detection for proper display of base64-encoded images in the UI

## Type of change

- [x] Feature
- [ ] Bug fix
- [ ] Refactor
- [ ] Documentation
- [ ] Chore/CI

## Affected areas

- [x] Core (Go)
- [ ] Transports (HTTP)
- [ ] Providers/Integrations
- [x] Plugins
- [x] UI (Next.js)
- [ ] Do…