Conversation

ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
📝 Walkthrough

Introduces per-route request-model extractors and model-scoped provider discovery: integrations supply GetRequestModel callbacks, the router uses them to resolve providers via a model-aware HandlerStore/Config API, and tests/mocks were updated to the new GetAvailableProviders(model string) signature.
```mermaid
sequenceDiagram
    participant Client as Client
    participant Router as Router
    participant Integration as Integration
    participant Config as Config
    participant Provider as Provider
    Client->>Router: HTTP request (body/URL)
    Router->>Integration: Parse request → call GetRequestModel(ctx, parsedReq)
    Integration-->>Router: model string
    Router->>Config: GetAvailableProviders(model)
    Config-->>Router: model-scoped providers
    Router->>Provider: Forward converted request to chosen provider
    Provider-->>Client: Response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 2
🧹 Nitpick comments (2)
transports/bifrost-http/lib/config.go (1)
4915-4938: Consider consolidating provider lookup paths to avoid semantic drift.
`GetAvailableProviders` directly calls `c.ModelCatalog.GetProvidersForModel(model)` instead of reusing `GetProvidersForModel`. Centralizing this in one place will reduce divergence risk if fallback rules evolve.

♻️ Suggested refactor

```diff
 func (c *Config) GetAvailableProviders(model string) []schemas.ModelProvider {
 	c.Mu.RLock()
 	defer c.Mu.RUnlock()
-	availableProviders := []schemas.ModelProvider{}
-	if c.ModelCatalog != nil {
-		availableProviders = c.ModelCatalog.GetProvidersForModel(model)
-	} else {
+	availableProviders := c.GetProvidersForModel(model)
+	if len(availableProviders) == 0 {
 		// Return all providers that have at least one key with a non-empty value.
 		for provider, config := range c.Providers {
 			for _, key := range config.Keys {
 				if key.Value.GetValue() != "" || bifrost.CanProviderKeyValueBeEmpty(provider) {
 					if key.Enabled != nil && !*key.Enabled {
 						continue
 					}
 					availableProviders = append(availableProviders, provider)
 					break
 				}
 			}
 		}
 	}
 	return availableProviders
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/lib/config.go` around lines 4915 - 4938, GetAvailableProviders currently calls c.ModelCatalog.GetProvidersForModel(model) directly which duplicates logic and risks semantic drift; replace that direct call with the centralized method c.GetProvidersForModel(model) (or the existing GetProvidersForModel function on Config) so both code paths use the same fallback rules and provider resolution logic (update the branch in GetAvailableProviders to call GetProvidersForModel instead of c.ModelCatalog.GetProvidersForModel and keep existing locking/return behavior).

transports/bifrost-http/integrations/cohere.go (1)
74-86: Fail fast on unsupported Cohere request types in model getter.

Returning `("", nil)` for unknown request types can silently continue with an empty model. Prefer returning an explicit error to avoid hidden misrouting.

🛡️ Suggested hardening

```diff
 func cohereModelGetter(_ *fasthttp.RequestCtx, req interface{}) (string, error) {
 	switch r := req.(type) {
 	case *cohere.CohereChatRequest:
 		return r.Model, nil
 	case *cohere.CohereEmbeddingRequest:
 		return r.Model, nil
 	case *cohere.CohereRerankRequest:
 		return r.Model, nil
 	case *cohere.CohereCountTokensRequest:
 		return r.Model, nil
 	}
-	return "", nil
+	return "", errors.New("unsupported cohere request type for model extraction")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/cohere.go` around lines 74 - 86, The cohereModelGetter function currently returns ("", nil) for unknown request types which can let calls proceed with an empty model; change the default branch in cohereModelGetter to return a descriptive error (e.g., errors.New or fmt.Errorf) that includes the concrete type or a message like "unsupported cohere request type" so callers can fail fast; update any callers of cohereModelGetter (if necessary) to handle the non-nil error instead of assuming a model string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@transports/bifrost-http/integrations/openai.go`:
- Around line 305-329: openAIModelGetter can be called for large multipart video
uploads before the request parser has hydrated Model, causing
GetAvailableProviders("") misrouting; add a case for
*openai.OpenAIVideoGenerationRequest in
hydrateOpenAIRequestFromLargePayloadMetadata to populate r.Model from the
large-payload metadata (same approach used for other OpenAI request types), and
ensure openAIModelGetter will then see a non-empty Model; also verify
hydrateOpenAIRequestFromLargePayloadMetadata covers OpenAIVideoGenerationRequest
wherever it's invoked so large video requests are correctly routed.
In `@transports/bifrost-http/integrations/router.go`:
- Around line 738-773: The code logs that the integration route default provider
(RouteConfigTypeToProvider[config.Type]) was chosen but doesn't actually
reorder/enforce it before calling bifrostCtx.SetValue; update the
availableProviders slice returned by
g.handlerStore.GetAvailableProviders(extractedModel) so the preferred provider
(RouteConfigTypeToProvider[config.Type]) is moved to the front (or made the only
entry) before calling
bifrostCtx.SetValue(schemas.BifrostContextKeyAvailableProviders,
availableProviders), making the selection in config.GetRequestModel /
schemas.ParseModelString take effect for downstream code that reads
BifrostContextKeyAvailableProviders.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b46c7ce3-5693-4f33-a432-2e824eb06b03
📒 Files selected for processing (11)
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/handlers/wsresponses_test.go
- transports/bifrost-http/integrations/anthropic.go
- transports/bifrost-http/integrations/bedrock.go
- transports/bifrost-http/integrations/bedrock_test.go
- transports/bifrost-http/integrations/cohere.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/openai.go
- transports/bifrost-http/integrations/router.go
- transports/bifrost-http/lib/config.go
- transports/changelog.md
Confidence Score: 4/5. Safe to merge after the previously flagged issues are addressed. No new P0/P1 issues found beyond what was already identified in prior review rounds. The one P2 here (undocumented behavioral change to …) remains.
Reviews (7): Last reviewed commit: "feat: add default provider selection in ..."
Force-pushed 9bebc25 to 3017ffe
Force-pushed bad7b10 to 75a20e2
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
transports/bifrost-http/integrations/router.go (1)
707-710: ⚠️ Potential issue | 🟡 Minor: One early return still leaks the request-scoped context.

The `PreCallback` error branch returns without calling `cancel()`, so the cleanup fix added elsewhere in `createHandler` is still incomplete on this validation path.

💡 Proposed fix

```diff
 if config.PreCallback != nil {
 	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
+		defer cancel()
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 		return
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/router.go` around lines 707 - 710, In createHandler, the PreCallback error branch returns without calling cancel(), leaking the request-scoped context; update the block that handles if config.PreCallback != nil so that when config.PreCallback(ctx, bifrostCtx, req) returns an error you call cancel() (or invoke the same cleanup used elsewhere) before calling g.sendError(...) and returning, referencing the PreCallback call site, newBifrostError, config.ErrorConverter and g.sendError to ensure the cleanup path matches other early-return paths.
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/openai.go (1)
1160-1166: Missing `ExtractAndSetUserAgentFromHeaders` call in video generation PreCallback.

The video generation PreCallback now calls `hydrateOpenAIRequestFromLargePayloadMetadata`, but unlike other similar routes (e.g., responses at line 785, or `openAILargePayloadPreHook`), it does not call `schemas.ExtractAndSetUserAgentFromHeaders`. This could cause user agent headers to not be extracted for video generation requests.

♻️ Proposed fix for consistency

```diff
 PreCallback: func(ctx *fasthttp.RequestCtx, bifrostCtx *schemas.BifrostContext, req interface{}) error {
 	hydrateOpenAIRequestFromLargePayloadMetadata(ctx, bifrostCtx, req)
+	schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx)
 	if isAzureSDKRequest(ctx) {
 		bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true)
 	}
 	return nil
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/openai.go` around lines 1160 - 1166, The PreCallback for video generation is missing a call to schemas.ExtractAndSetUserAgentFromHeaders, so user-agent headers aren't being extracted consistently; update the PreCallback (the anonymous function assigned to PreCallback where hydrateOpenAIRequestFromLargePayloadMetadata is called) to call schemas.ExtractAndSetUserAgentFromHeaders(ctx, bifrostCtx) before or after hydrateOpenAIRequestFromLargePayloadMetadata, keeping the existing isAzureSDKRequest(ctx) check and bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true) behavior intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@transports/bifrost-http/integrations/router.go`:
- Around line 757-770: The code currently replaces the entire availableProviders
slice with only the native provider (using
RouteConfigTypeToProvider[config.Type]), dropping other compatible providers;
instead, keep all providers but prefer the native one by reordering
availableProviders so the preferred provider
(RouteConfigTypeToProvider[config.Type]) is moved to index 0 if present (or left
as-is otherwise), update the AppendRoutingEngineLog message to reflect that
we're preferring/reordering rather than restricting, and then call
bifrostCtx.SetValue(schemas.BifrostContextKeyAvailableProviders,
availableProviders) with the reordered full list so downstream
fallback/rerouting can still try other providers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 077c12bb-d512-42b2-b36a-05eb12f5ab9f
📒 Files selected for processing (12)
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/handlers/wsresponses_test.go
- transports/bifrost-http/integrations/anthropic.go
- transports/bifrost-http/integrations/bedrock.go
- transports/bifrost-http/integrations/bedrock_test.go
- transports/bifrost-http/integrations/cohere.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/openai.go
- transports/bifrost-http/integrations/router.go
- transports/bifrost-http/lib/config.go
- transports/bifrost-http/lib/ctx_test.go
- transports/changelog.md
✅ Files skipped from review due to trivial changes (1)
- transports/changelog.md
🚧 Files skipped from review as they are similar to previous changes (1)
- transports/bifrost-http/handlers/webrtc_realtime_test.go
Force-pushed 75a20e2 to ab521e0
Force-pushed 3017ffe to d058099
Actionable comments posted: 3
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/openai.go (1)
1160-1162: Reuse `openAILargePayloadPreHook` here.

This callback has already drifted from the shared helper and is now the odd OpenAI POST route that skips `ExtractAndSetUserAgentFromHeaders`. Calling the helper first keeps large-payload hydration and user-agent metadata aligned across routes.

♻️ Suggested cleanup

```diff
 PreCallback: func(ctx *fasthttp.RequestCtx, bifrostCtx *schemas.BifrostContext, req interface{}) error {
-	hydrateOpenAIRequestFromLargePayloadMetadata(ctx, bifrostCtx, req)
+	if err := openAILargePayloadPreHook(ctx, bifrostCtx, req); err != nil {
+		return err
+	}
 	if isAzureSDKRequest(ctx) {
 		bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true)
 	}
 	return nil
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/openai.go` around lines 1160 - 1162, Replace the ad-hoc PreCallback body with the shared openAILargePayloadPreHook: instead of directly calling hydrateOpenAIRequestFromLargePayloadMetadata and then isAzureSDKRequest, invoke openAILargePayloadPreHook so the route reuses ExtractAndSetUserAgentFromHeaders and the existing large-payload hydration logic; ensure openAILargePayloadPreHook is called with the same parameters (ctx, bifrostCtx, req) so user-agent metadata and large-payload handling remain consistent across OpenAI POST routes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/utils/utils.go`:
- Around line 2790-2791: The fallback chooses availableProviders[0] which is
nondeterministic because availableProviders originates from map iteration (see
GetProvidersForModel); sort the slice before selecting the first entry to ensure
stable selection — e.g., call slices.Sort on availableProviders (or apply
sorting inside GetProvidersForModel so all callers get a consistent order) and
then return the first element; update the selection in utils.go (and the other
occurrences in transports/bifrost-http/handlers/inference.go and
transports/bifrost-http/integrations/router.go) to use the sorted
availableProviders when returning the fallback provider.
In `@transports/bifrost-http/integrations/anthropic.go`:
- Around line 28-35: The current anthropicModelGetter strips or returns
unprefixed model names too early, allowing an explicit "anthropic/..." pin to be
lost; update anthropicModelGetter (used by GetRequestModel) to preserve any
"anthropic/"-prefixed r.Model values exactly as provided for
routing/provider-resolution, returning the prefixed value and not normalizing it
to a bare model name, and only perform any passthrough/native normalization
after provider resolution (e.g., in checkAnthropicPassthrough or a
post-resolution step); ensure both branches handling
*anthropic.AnthropicTextRequest and *anthropic.AnthropicMessageRequest return
r.Model unchanged when it already starts with "anthropic/" so the explicit
provider pin persists through provider selection.
In `@transports/bifrost-http/integrations/genai.go`:
- Around line 42-59: genAIModelGetter currently returns the bare "{model}" for
schemas.BifrostVideoRetrieveRequest which lets model-aware routing misdispatch
video-operation retrievals; change the BifrostVideoRetrieveRequest case to not
return the bare model — either return an empty string to skip GetRequestModel
for this route or derive a provider-scoped model from the operation id stored in
ctx.UserValue("operation_id") (split on ':' to get provider and build a
provider-scoped identifier) before returning; update genAIModelGetter (and
reference extractGeminiVideoOperationFromPath if you choose to parse
operation_id there) so routing uses the operation-scoped provider or skips model
resolution.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8d60533b-eb94-4c52-85ca-7f48bd0069ab
📒 Files selected for processing (13)
- core/providers/utils/utils.go
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/handlers/wsresponses_test.go
- transports/bifrost-http/integrations/anthropic.go
- transports/bifrost-http/integrations/bedrock.go
- transports/bifrost-http/integrations/bedrock_test.go
- transports/bifrost-http/integrations/cohere.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/openai.go
- transports/bifrost-http/integrations/router.go
- transports/bifrost-http/lib/config.go
- transports/bifrost-http/lib/ctx_test.go
- transports/changelog.md
✅ Files skipped from review due to trivial changes (3)
- transports/bifrost-http/integrations/bedrock_test.go
- transports/changelog.md
- transports/bifrost-http/handlers/wsresponses_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/lib/config.go
Force-pushed d058099 to 8847366
Force-pushed ab521e0 to 25b640f
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
transports/bifrost-http/integrations/router.go (1)
846-861: ⚠️ Potential issue | 🟠 Major: Cancel the derived context on `PreCallback` errors too.

These new `defer cancel()` calls fix several early returns, but the `PreCallback` branch at line 708 still exits without canceling the derived context. A failing `PreCallback` now leaks the same request-scoped context you just fixed elsewhere in this function.

🐛 Proposed fix

```go
if config.PreCallback != nil {
	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
		cancel()
		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
		return
	}
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/router.go` around lines 846 - 861, The PreCallback error path currently returns without cancelling the derived context, leaking it; update the PreCallback branch in the request handling function so that when config.PreCallback != nil and it returns an error you call cancel() before invoking g.sendError (using newBifrostError) and returning; ensure this mirrors the other early-return branches that call defer cancel()/cancel() so the derived context is always cancelled on errors (same area that calls g.extractAndParseFallbacks, SetRawRequestBody, etc.).
🧹 Nitpick comments (3)
transports/bifrost-http/integrations/cohere.go (1)
72-86: Return an explicit error on unsupported request types in `cohereModelGetter`.

The current default `return "", nil` can silently hide route/getter mismatches.

♻️ Proposed change

```diff
 func cohereModelGetter(_ *fasthttp.RequestCtx, req interface{}) (string, error) {
 	switch r := req.(type) {
 	case *cohere.CohereChatRequest:
 		return r.Model, nil
 	case *cohere.CohereEmbeddingRequest:
 		return r.Model, nil
 	case *cohere.CohereRerankRequest:
 		return r.Model, nil
 	case *cohere.CohereCountTokensRequest:
 		return r.Model, nil
 	}
-	return "", nil
+	return "", errors.New("invalid request type for Cohere model getter")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/cohere.go` around lines 72 - 86, cohereModelGetter currently returns ("", nil) for unsupported request types which hides mismatches; update cohereModelGetter to return a clear error instead of nil when the type assertion falls through: if req is not one of cohere.CohereChatRequest, cohere.CohereEmbeddingRequest, cohere.CohereRerankRequest, or cohere.CohereCountTokensRequest, return an explicit error (with context like "unsupported request type for cohereModelGetter") so callers can detect and log route/getter mismatches.

transports/bifrost-http/integrations/openai.go (1)
1160-1166: Consider adding `ExtractAndSetUserAgentFromHeaders` for consistency.

The video generation route's `PreCallback` now correctly calls `hydrateOpenAIRequestFromLargePayloadMetadata`, but unlike similar POST routes (e.g., the responses endpoint at line 785), it doesn't call `schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx)`. This may affect user-agent-based logging or observability for video generation requests.

♻️ Suggested fix for consistency

```diff
 PreCallback: func(ctx *fasthttp.RequestCtx, bifrostCtx *schemas.BifrostContext, req interface{}) error {
 	hydrateOpenAIRequestFromLargePayloadMetadata(ctx, bifrostCtx, req)
+	schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx)
 	if isAzureSDKRequest(ctx) {
 		bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true)
 	}
 	return nil
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/openai.go` around lines 1160 - 1166, The PreCallback for the video generation route currently calls hydrateOpenAIRequestFromLargePayloadMetadata and sets the Azure user-agent flag via isAzureSDKRequest, but it misses extracting the user-agent into the context; add a call to schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx) inside the same PreCallback (alongside hydrateOpenAIRequestFromLargePayloadMetadata and the isAzureSDKRequest check) so the bifrostCtx receives consistent user-agent info for logging/observability.

transports/bifrost-http/lib/config.go (1)
4929-4952: Narrow lock scope in the model-catalog path.

At line 4930, `c.Mu.RLock()` is held even when taking the model-catalog branch. Consider returning early in that branch to avoid calling into `ModelCatalog` under the config lock.

♻️ Proposed refactor

```diff
 func (c *Config) GetAvailableProviders(model string) []schemas.ModelProvider {
-	c.Mu.RLock()
-	defer c.Mu.RUnlock()
-	availableProviders := []schemas.ModelProvider{}
-	if c.ModelCatalog != nil {
-		availableProviders = c.ModelCatalog.GetProvidersForModel(model)
-	} else {
+	if c.ModelCatalog != nil {
+		return c.ModelCatalog.GetProvidersForModel(model)
+	}
+
+	c.Mu.RLock()
+	defer c.Mu.RUnlock()
+	availableProviders := []schemas.ModelProvider{}
+	{
 		// Return all providers that have at least one key with a non-empty value.
 		for provider, config := range c.Providers {
 			// Check if the provider has at least one key with a non-empty value. If so, add the provider to the list.
 			// If the provider allows empty keys, add the provider to the list.
 			for _, key := range config.Keys {
 				if key.Value.GetValue() != "" || bifrost.CanProviderKeyValueBeEmpty(provider) {
 					if key.Enabled != nil && !*key.Enabled {
 						continue
 					}
 					availableProviders = append(availableProviders, provider)
 					break
 				}
 			}
 		}
 	}
 	return availableProviders
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/lib/config.go` around lines 4929 - 4952, In GetAvailableProviders, avoid calling ModelCatalog.GetProvidersForModel while holding c.Mu; read c.ModelCatalog under the lock into a local variable (mc := c.ModelCatalog), if mc != nil then immediately c.Mu.RUnlock() and return mc.GetProvidersForModel(model); otherwise continue to acquire the read lock (or reuse it) to iterate c.Providers and compute availableProviders as before (checking config.Keys, key.Value.GetValue(), bifrost.CanProviderKeyValueBeEmpty, and key.Enabled) before unlocking and returning—this ensures the ModelCatalog path returns early without invoking ModelCatalog under the config lock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@transports/bifrost-http/integrations/genai.go`:
- Around line 53-60: The pre-callback currently takes the suffix of operation_id
(e.g., "gpt-4o" or "openai/gpt-4o") and writes it directly into
BifrostVideoRetrieveRequest.Provider, which is incorrect; update the logic so
the suffix is treated as the raw model string and parsed into provider + model
using the same parsing used elsewhere (i.e., make
extractGeminiVideoOperationFromPath parse the raw model string rather than
treating it as a provider). Specifically, in the case handling
*schemas.BifrostVideoRetrieveRequest use operationID :=
ctx.UserValue("operation_id").(string) -> rawModel := lastSuffix and then call
the shared parsing routine (or factor out parsing) to set r.Provider to the
canonical provider (e.g., "openai"/"azure"/default) and r.Model to the model
token; do not assign the raw suffix directly to
BifrostVideoRetrieveRequest.Provider so the OpenAI/Azure branch can match.
In `@transports/bifrost-http/integrations/router.go`:
- Around line 416-420: RegisterRoutes currently doesn't validate that inference
routes provide the GetRequestModel callback, so createHandler treats it as
optional and model-aware provider resolution can silently fail; add a guard in
RegisterRoutes (the same place RequestConverter is validated) to check for
non-nil RouteConfig.GetRequestModel on routes where RouteConfig.Type indicates
an inference route (match the existing inference RouteConfigType checks), and
return an error or panic if GetRequestModel is nil so missing getters fail fast
before handler creation.
---
Outside diff comments:
In `@transports/bifrost-http/integrations/router.go`:
- Around line 846-861: The PreCallback error path currently returns without
cancelling the derived context, leaking it; update the PreCallback branch in the
request handling function so that when config.PreCallback != nil and it returns
an error you call cancel() before invoking g.sendError (using newBifrostError)
and returning; ensure this mirrors the other early-return branches that call
defer cancel()/cancel() so the derived context is always cancelled on errors
(same area that calls g.extractAndParseFallbacks, SetRawRequestBody, etc.).
---
Nitpick comments:
In `@transports/bifrost-http/integrations/cohere.go`:
- Around line 72-86: cohereModelGetter currently returns ("", nil) for
unsupported request types which hides mismatches; update cohereModelGetter to
return a clear error instead of nil when the type assertion falls through —
e.g., if req is not one of cohere.CohereChatRequest,
cohere.CohereEmbeddingRequest, cohere.CohereRerankRequest, or
cohere.CohereCountTokensRequest, return an explicit error (with context like
"unsupported request type for cohereModelGetter") so callers can detect and log
route/getter mismatches.
In `@transports/bifrost-http/integrations/openai.go`:
- Around line 1160-1166: The PreCallback for the video generation route
currently calls hydrateOpenAIRequestFromLargePayloadMetadata and sets the Azure
user-agent flag via isAzureSDKRequest, but it misses extracting the user-agent
into the context; add a call to
schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx),
bifrostCtx) inside the same PreCallback (alongside
hydrateOpenAIRequestFromLargePayloadMetadata and the isAzureSDKRequest check) so
the bifrostCtx receives consistent user-agent info for logging/observability.
In `@transports/bifrost-http/lib/config.go`:
- Around line 4929-4952: In GetAvailableProviders, avoid calling
ModelCatalog.GetProvidersForModel while holding c.Mu; read c.ModelCatalog under
the lock into a local variable (mc := c.ModelCatalog), if mc != nil then
immediately c.Mu.RUnlock() and return mc.GetProvidersForModel(model); otherwise
continue to acquire the read lock (or reuse it) to iterate c.Providers and
compute availableProviders as before (checking config.Keys,
key.Value.GetValue(), bifrost.CanProviderKeyValueBeEmpty, and key.Enabled)
before unlocking and returning—this ensures the ModelCatalog path returns early
without invoking ModelCatalog under the config lock.
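The lock-scoping pattern that prompt describes can be sketched with simplified stand-in types (the real `Config` and `ModelCatalog` in `config.go` carry far more state; providers are reduced to strings here):

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-ins for the types in transports/bifrost-http/lib/config.go.
type ModelCatalog struct{ byModel map[string][]string }

func (m *ModelCatalog) GetProvidersForModel(model string) []string {
	return m.byModel[model]
}

type Config struct {
	Mu           sync.RWMutex
	ModelCatalog *ModelCatalog
	Providers    map[string][]string // provider -> key values
}

// GetAvailableProviders sketch: read the catalog pointer under the lock,
// release the lock, then call into the catalog, so catalog methods never
// run while c.Mu is held.
func (c *Config) GetAvailableProviders(model string) []string {
	c.Mu.RLock()
	mc := c.ModelCatalog
	if mc != nil {
		c.Mu.RUnlock() // drop the lock before invoking the catalog
		return mc.GetProvidersForModel(model)
	}
	defer c.Mu.RUnlock()
	available := []string{}
	for provider, keys := range c.Providers {
		for _, key := range keys {
			if key != "" { // provider has at least one usable key
				available = append(available, provider)
				break
			}
		}
	}
	return available
}

func main() {
	c := &Config{ModelCatalog: &ModelCatalog{byModel: map[string][]string{"gpt-4o": {"openai", "azure"}}}}
	fmt.Println(c.GetAvailableProviders("gpt-4o"))
}
```

Keeping catalog calls outside the config lock avoids lock-ordering hazards if the catalog ever takes its own locks or performs slow lookups.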
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7baddd48-589e-4a3a-b195-c1060450558c
📒 Files selected for processing (13)
- core/providers/utils/utils.go
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/handlers/wsresponses_test.go
- transports/bifrost-http/integrations/anthropic.go
- transports/bifrost-http/integrations/bedrock.go
- transports/bifrost-http/integrations/bedrock_test.go
- transports/bifrost-http/integrations/cohere.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/openai.go
- transports/bifrost-http/integrations/router.go
- transports/bifrost-http/lib/config.go
- transports/bifrost-http/lib/ctx_test.go
- transports/changelog.md
✅ Files skipped from review due to trivial changes (4)
- transports/changelog.md
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/integrations/bedrock.go
- transports/bifrost-http/lib/ctx_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- transports/bifrost-http/handlers/wsresponses_test.go
Force-pushed from 25b640f to d9c76a7 (Compare)
Force-pushed from 8847366 to 63ac49f (Compare)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
transports/bifrost-http/lib/config.go (1)
4938-4957: ⚠️ Potential issue | 🟠 Major

Make returned provider order deterministic before downstream default selection.

`availableProviders` order is not stabilized here. Since downstream selection can use the first element, provider choice can vary between runs (especially from map iteration in the fallback branch). Sort before returning.

♻️ Suggested fix

```diff
 func (c *Config) GetAvailableProviders(model string) []schemas.ModelProvider {
 	c.Mu.RLock()
 	defer c.Mu.RUnlock()
 	availableProviders := []schemas.ModelProvider{}
 	if c.ModelCatalog != nil {
 		availableProviders = c.ModelCatalog.GetProvidersForModel(model)
 	} else {
 		// Return all providers that have at least one key with a non-empty value.
 		for provider, config := range c.Providers {
 			// Check if the provider has at least one key with a non-empty value. If so, add the provider to the list.
 			// If the provider allows empty keys, add the provider to the list.
 			for _, key := range config.Keys {
 				if key.Value.GetValue() != "" || bifrost.CanProviderKeyValueBeEmpty(provider) {
 					if key.Enabled != nil && !*key.Enabled {
 						continue
 					}
 					availableProviders = append(availableProviders, provider)
 					break
 				}
 			}
 		}
 	}
+	slices.Sort(availableProviders)
 	return availableProviders
 }
```

Based on learnings: `CheckAndSetDefaultProvider` picks `availableProviders[0]` when the native provider is absent, so slice ordering directly affects routing outcome.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/lib/config.go` around lines 4938-4957: The returned availableProviders slice is not deterministic (map iteration order can vary) and CheckAndSetDefaultProvider may pick availableProviders[0], so sort the providers before returning; locate the function where c.ModelCatalog is checked and availableProviders is built (references: c.ModelCatalog, c.Providers, availableProviders, bifrost.CanProviderKeyValueBeEmpty) and add a deterministic sort (e.g., sort.Strings on availableProviders) just before the return to stabilize downstream default selection.

transports/bifrost-http/integrations/router.go (1)
707-711: ⚠️ Potential issue | 🟠 Major

Cancel the derived Bifrost context on `PreCallback` errors.

This is still an early-return path in `createHandler` that exits without calling `cancel()`. Repeated validation failures here will keep the derived context alive until timeout instead of releasing it immediately.

♻️ Proposed fix

```diff
 if config.PreCallback != nil {
 	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
+		cancel()
 		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 		return
 	}
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/router.go` around lines 707-711: The early return path in createHandler when config.PreCallback returns an error never calls the derived context's cancel(), leaking the context until its timeout; modify the error handling in the PreCallback branch to call the cancel function for the derived Bifrost context before invoking g.sendError/newBifrostError and returning so the context is released immediately (ensure cancel() is called prior to return even after sending the error).
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/router.go (1)
358-360: Make the `GetRequestModel` comments match the actual contract.

The type comment says this callback only takes `*fasthttp.RequestCtx`, and the field comment says it "SHOULD NOT BE NIL". The signature now also receives the parsed request, and the callback is intentionally optional, so these comments will steer future integrations the wrong way.

📝 Proposed wording

```diff
-// RequestModelGetter is a function type that accepts only a *fasthttp.RequestCtx and
-// returns a string indicating the model derived from the context.
+// RequestModelGetter extracts the model string from the HTTP context and/or
+// the parsed request object. It may return an error if extraction fails.
 type RequestModelGetter func(ctx *fasthttp.RequestCtx, req interface{}) (string, error)
@@
-	GetRequestModel RequestModelGetter // Function to get the model from the context (SHOULD NOT BE NIL)
+	GetRequestModel RequestModelGetter // Optional: extracts the model for model-aware provider resolution
```

Based on learnings, `RouteConfig.GetRequestModel` is intentionally optional and only used when a route wants model-aware provider resolution.

Also applies to: 419-420
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/integrations/router.go` around lines 358 - 360, The comment for RequestModelGetter and the RouteConfig.GetRequestModel field is inaccurate: update the type and field comments to reflect the actual function signature (RequestModelGetter(ctx *fasthttp.RequestCtx, req interface{}) (string, error)) and that the parsed request parameter is provided, and change the "SHOULD NOT BE NIL" phrasing to indicate the callback is optional—used only when a route requires model-aware provider resolution; reference RequestModelGetter and RouteConfig.GetRequestModel when making these comment edits so future integrators see the correct contract and optionality.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@transports/bifrost-http/integrations/router.go`:
- Around line 707-711: The early return path in createHandler when
config.PreCallback returns an error never calls the derived context's cancel(),
leaking the context until its timeout; modify the error handling in the
PreCallback branch to call the cancel function for the derived Bifrost context
before invoking g.sendError/newBifrostError and returning so the context is
released immediately (ensure cancel() is called prior to return even after
sending the error).
In `@transports/bifrost-http/lib/config.go`:
- Around line 4938-4957: The returned availableProviders slice is not
deterministic (map iteration order can vary) and CheckAndSetDefaultProvider may
pick availableProviders[0], so sort the providers before returning; locate the
function where c.ModelCatalog is checked and availableProviders is built
(references: c.ModelCatalog, c.Providers, availableProviders,
bifrost.CanProviderKeyValueBeEmpty) and add a deterministic sort (e.g.,
sort.Strings on availableProviders) just before the return to stabilize
downstream default selection.
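The determinism issue comes from Go's randomized map iteration order; a minimal sketch of the fix (provider names reduced to strings here):

```go
package main

import (
	"fmt"
	"sort"
)

// availableProviders collects providers with a non-empty key from a map,
// then sorts, because map iteration order varies between runs and the
// caller may pick element [0] as the default.
func availableProviders(providerKeys map[string]string) []string {
	out := []string{}
	for provider, key := range providerKeys {
		if key != "" {
			out = append(out, provider)
		}
	}
	sort.Strings(out) // stabilize ordering for downstream default selection
	return out
}

func main() {
	fmt.Println(availableProviders(map[string]string{"openai": "sk-1", "anthropic": "sk-2", "azure": ""}))
	// → [anthropic openai]
}
```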
---
Nitpick comments:
In `@transports/bifrost-http/integrations/router.go`:
- Around line 358-360: The comment for RequestModelGetter and the
RouteConfig.GetRequestModel field is inaccurate: update the type and field
comments to reflect the actual function signature (RequestModelGetter(ctx
*fasthttp.RequestCtx, req interface{}) (string, error)) and that the parsed
request parameter is provided, and change the "SHOULD NOT BE NIL" phrasing to
indicate the callback is optional—used only when a route requires model-aware
provider resolution; reference RequestModelGetter and
RouteConfig.GetRequestModel when making these comment edits so future
integrators see the correct contract and optionality.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4b7855a9-93e0-4696-be82-0b38feeac6de
📒 Files selected for processing (13)
- core/providers/utils/utils.go
- transports/bifrost-http/handlers/webrtc_realtime_test.go
- transports/bifrost-http/handlers/wsresponses_test.go
- transports/bifrost-http/integrations/anthropic.go
- transports/bifrost-http/integrations/bedrock.go
- transports/bifrost-http/integrations/bedrock_test.go
- transports/bifrost-http/integrations/cohere.go
- transports/bifrost-http/integrations/genai.go
- transports/bifrost-http/integrations/openai.go
- transports/bifrost-http/integrations/router.go
- transports/bifrost-http/lib/config.go
- transports/bifrost-http/lib/ctx_test.go
- transports/changelog.md
✅ Files skipped from review due to trivial changes (3)
- transports/changelog.md
- transports/bifrost-http/handlers/wsresponses_test.go
- transports/bifrost-http/integrations/genai.go
🚧 Files skipped from review as they are similar to previous changes (4)
- core/providers/utils/utils.go
- transports/bifrost-http/lib/ctx_test.go
- transports/bifrost-http/integrations/cohere.go
- transports/bifrost-http/integrations/openai.go
Merge activity
Force-pushed from 63ac49f to f8cb0c4 (Compare)
Force-pushed from d9c76a7 to 960f72c (Compare)
The base branch was changed.
Force-pushed from f8cb0c4 to 340185f (Compare)

Summary
When a request's model string has no provider prefix, Bifrost now resolves the correct provider by looking up the model in the model catalog (if configured) or falling back to the full provider key scan. Previously, `GetAvailableProviders` was called unconditionally before body parsing, meaning the model name was not yet available. This change defers provider resolution until after the request body is parsed and the model is known, passing the model name into `GetAvailableProviders` so the catalog can return only the relevant providers.

Changes

- `GetAvailableProviders` now accepts a `model string` parameter across the `HandlerStore` interface, `Config`, and all test stubs, enabling model-aware provider lookup.
- A `RequestModelGetter` function type is introduced on `RouteConfig`. Each integration (OpenAI, Anthropic, Bedrock, Cohere, GenAI) registers a typed model extractor that reads the model field from the parsed request struct (or from the URL path param for routes like Bedrock and GenAI video retrieve where the model comes from the path).
- Provider resolution now runs after request parsing and `PreCallback` execution, so the model is fully populated before the catalog is queried.
- A `RouteConfigTypeToProvider` map is added to translate route config types to their canonical `ModelProvider` values for this selection logic.
- When a catalog is configured, `GetAvailableProviders` delegates entirely to `ModelCatalog.GetProvidersForModel`; the existing full-provider-key scan is used only when no catalog is present.
- `defer cancel()` calls before early error returns in the request handler are added to prevent context leaks.

Type of change
Affected areas
How to test
- Configure a model catalog mapping a model (e.g. `gpt-4o`) to one or more providers.
- Send a request with `model: gpt-4o` (no provider prefix) to any supported integration endpoint.
- Check the `model-catalog` entry showing which provider was selected and why.
- Repeat with a prefixed model (e.g. `openai/gpt-4o`) and verify catalog lookup is skipped.

Breaking changes
`HandlerStore.GetAvailableProviders()` now requires a `model string` argument. Any custom implementation of `HandlerStore` must be updated to accept and handle this parameter.

Related issues
Security considerations
No new auth, secrets, or PII surface area introduced. Provider selection logic operates on model name strings only.
Checklist
- I have read `docs/contributing/README.md` and followed the guidelines