
feat: auto-resolve provider when no provider prefix is given on integration routes#3068

Merged
akshaydeo merged 1 commit into main from 04-27-feat_add_default_provider_selection_in_integration_paths
Apr 28, 2026

Conversation

@Pratham-Mishra04
Collaborator

Summary

When a request's model string has no provider prefix, Bifrost now resolves the correct provider by looking up the model in the model catalog (if configured) or falling back to the full provider key scan. Previously, GetAvailableProviders was called unconditionally before body parsing, meaning the model name was not yet available. This change defers provider resolution until after the request body is parsed and the model is known, passing the model name into GetAvailableProviders so the catalog can return only the relevant providers.

Changes

  • GetAvailableProviders now accepts a model string parameter across the HandlerStore interface, Config, and all test stubs, enabling model-aware provider lookup.
  • A new RequestModelGetter function type is introduced on RouteConfig. Each integration (OpenAI, Anthropic, Bedrock, Cohere, GenAI) registers a typed model extractor that reads the model field from the parsed request struct (or from the URL path param for routes like Bedrock and GenAI video retrieve where the model comes from the path).
  • Provider resolution is moved to after body parsing and PreCallback execution, so the model is fully populated before the catalog is queried.
  • When no provider prefix is present, the routing engine checks whether the integration's native provider is in the catalog result and prefers it; otherwise it selects the first available provider. Both decisions are recorded in the routing engine log.
  • A RouteConfigTypeToProvider map is added to translate route config types to their canonical ModelProvider values for this selection logic.
  • If a model catalog is configured, GetAvailableProviders delegates entirely to ModelCatalog.GetProvidersForModel; the existing full-provider-key scan is used only when no catalog is present.
  • defer cancel() calls are added before early error returns in the request handler, fixing missing cleanup that could leak request-scoped contexts.

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (React)
  • Docs

How to test

go test ./transports/bifrost-http/...
go test ./transports/bifrost-http/integrations/...
go test ./transports/bifrost-http/handlers/...
  1. Configure a model catalog that maps a model name (e.g. gpt-4o) to one or more providers.
  2. Send a request with model: gpt-4o (no provider prefix) to any supported integration endpoint.
  3. Verify the routing engine log contains a model-catalog entry showing which provider was selected and why.
  4. Confirm the request is routed to the expected provider.
  5. Send a request with a provider-prefixed model string (e.g. openai/gpt-4o) and verify catalog lookup is skipped.

Breaking changes

  • Yes
  • No

HandlerStore.GetAvailableProviders() now requires a model string argument. Any custom implementation of HandlerStore must be updated to accept and handle this parameter.

Related issues

Security considerations

No new auth, secrets, or PII surface area introduced. Provider selection logic operates on model name strings only.

Checklist

  • I read docs/contributing/README.md and followed the guidelines
  • I added/updated tests where appropriate
  • I updated documentation where needed
  • I verified builds succeed (Go and UI)
  • I verified the CI pipeline passes locally if applicable

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 26, 2026


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3959289f-a5bc-4231-baa4-30b8dd136fd3

📥 Commits

Reviewing files that changed from the base of the PR and between 63ac49f and 340185f.

📒 Files selected for processing (13)
  • core/providers/utils/utils.go
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/anthropic.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/lib/config.go
  • transports/bifrost-http/lib/ctx_test.go
  • transports/changelog.md
📝 Walkthrough


Introduces per-route request-model extractors and model-scoped provider discovery: integrations supply GetRequestModel callbacks, the router uses them to resolve providers via a model-aware HandlerStore/Config API, and tests/mocks were updated to the new GetAvailableProviders(model string) signature.

Changes

Cohort / File(s) — Summary

  • Router & Config API (transports/bifrost-http/integrations/router.go, transports/bifrost-http/lib/config.go): Added the RequestModelGetter type and RouteConfig.GetRequestModel; added RouteConfigTypeToProvider; changed HandlerStore.GetAvailableProviders and Config.GetAvailableProviders to accept a model string; the router now conditionally populates available providers from the model/catalog and defers cancels on early returns.
  • Integrations — model getters wired (transports/bifrost-http/integrations/anthropic.go, transports/bifrost-http/integrations/bedrock.go, transports/bifrost-http/integrations/cohere.go, transports/bifrost-http/integrations/genai.go, transports/bifrost-http/integrations/openai.go): Added per-integration GetRequestModel getters (anthropicModelGetter, bedrockModelGetter, cohereModelGetter, genAIModelGetter, openAIModelGetter) and attached them to route configs; adjusted OpenAI video hydration ordering and GenAI operation/model parsing.
  • Handlers & Tests — handler store stubs (transports/bifrost-http/handlers/webrtc_realtime_test.go, transports/bifrost-http/handlers/wsresponses_test.go, transports/bifrost-http/integrations/bedrock_test.go, transports/bifrost-http/lib/ctx_test.go): Updated test doubles/mocks to the new GetAvailableProviders(model string) signature; preserved return behavior (nil or existing slices); minor formatting/alignment tweaks.
  • Core provider selection logic (core/providers/utils/utils.go): CheckAndSetDefaultProvider now falls back to the first entry of availableProviders when the configured default is not present, instead of returning an empty provider.
  • Changelog (transports/changelog.md): Updated wording to reflect provider auto-resolution behavior when the model string lacks a provider prefix.
sequenceDiagram
    participant Client as Client
    participant Router as Router
    participant Integration as Integration
    participant Config as Config
    participant Provider as Provider

    Client->>Router: HTTP request (body/URL)
    Router->>Integration: Parse request → call GetRequestModel(ctx, parsedReq)
    Integration-->>Router: model string
    Router->>Config: GetAvailableProviders(model)
    Config-->>Router: model-scoped providers
    Router->>Provider: Forward converted request to chosen provider
    Provider-->>Client: Response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I sniff the model from the thread,

I hop between the routes ahead,
Providers line in tidy rows,
I nudge the request and off it goes,
A little hop — the server flows.

🚥 Pre-merge checks | ✅ 5 passed
  • Title check: Passed. The PR title clearly summarizes the main change: auto-resolving providers when no provider prefix is given on integration routes.
  • Description check: Passed. The PR description is comprehensive and well-structured, covering all template sections including summary, changes, type, affected areas, testing instructions, breaking changes, and checklist.
  • Docstring Coverage: Passed. Docstring coverage is 86.96%, which is sufficient; the required threshold is 80.00%.
  • Linked Issues check: Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: Passed. Check skipped because no linked issues were found for this pull request.


Contributor

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
transports/bifrost-http/lib/config.go (1)

4915-4938: Consider consolidating provider lookup paths to avoid semantic drift.

GetAvailableProviders directly calls c.ModelCatalog.GetProvidersForModel(model) instead of reusing GetProvidersForModel. Centralizing this in one place will reduce divergence risk if fallback rules evolve.

♻️ Suggested refactor
 func (c *Config) GetAvailableProviders(model string) []schemas.ModelProvider {
 	c.Mu.RLock()
 	defer c.Mu.RUnlock()
-	availableProviders := []schemas.ModelProvider{}
-	if c.ModelCatalog != nil {
-		availableProviders = c.ModelCatalog.GetProvidersForModel(model)
-	} else {
+	availableProviders := c.GetProvidersForModel(model)
+	if len(availableProviders) == 0 {
 		// Return all providers that have at least one key with a non-empty value.
 		for provider, config := range c.Providers {
 			for _, key := range config.Keys {
 				if key.Value.GetValue() != "" || bifrost.CanProviderKeyValueBeEmpty(provider) {
 					if key.Enabled != nil && !*key.Enabled {
 						continue
 					}
 					availableProviders = append(availableProviders, provider)
 					break
 				}
 			}
 		}
 	}
 	return availableProviders
 }
transports/bifrost-http/integrations/cohere.go (1)

74-86: Fail fast on unsupported Cohere request types in model getter.

Returning ("", nil) for unknown request types can silently continue with an empty model. Prefer returning an explicit error to avoid hidden misrouting.

🛡️ Suggested hardening
 func cohereModelGetter(_ *fasthttp.RequestCtx, req interface{}) (string, error) {
 	switch r := req.(type) {
 	case *cohere.CohereChatRequest:
 		return r.Model, nil
 	case *cohere.CohereEmbeddingRequest:
 		return r.Model, nil
 	case *cohere.CohereRerankRequest:
 		return r.Model, nil
 	case *cohere.CohereCountTokensRequest:
 		return r.Model, nil
 	}
-	return "", nil
+	return "", errors.New("unsupported cohere request type for model extraction")
 }
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@transports/bifrost-http/integrations/openai.go`:
- Around line 305-329: openAIModelGetter can be called for large multipart video
uploads before the request parser has hydrated Model, causing
GetAvailableProviders("") misrouting; add a case for
*openai.OpenAIVideoGenerationRequest in
hydrateOpenAIRequestFromLargePayloadMetadata to populate r.Model from the
large-payload metadata (same approach used for other OpenAI request types), and
ensure openAIModelGetter will then see a non-empty Model; also verify
hydrateOpenAIRequestFromLargePayloadMetadata covers OpenAIVideoGenerationRequest
wherever it's invoked so large video requests are correctly routed.

In `@transports/bifrost-http/integrations/router.go`:
- Around line 738-773: The code logs that the integration route default provider
(RouteConfigTypeToProvider[config.Type]) was chosen but doesn't actually
reorder/enforce it before calling bifrostCtx.SetValue; update the
availableProviders slice returned by
g.handlerStore.GetAvailableProviders(extractedModel) so the preferred provider
(RouteConfigTypeToProvider[config.Type]) is moved to the front (or made the only
entry) before calling
bifrostCtx.SetValue(schemas.BifrostContextKeyAvailableProviders,
availableProviders), making the selection in config.GetRequestModel /
schemas.ParseModelString take effect for downstream code that reads
BifrostContextKeyAvailableProviders.



📥 Commits

Reviewing files that changed from the base of the PR and between bad7b10 and 9bebc25.

📒 Files selected for processing (11)
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/anthropic.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/lib/config.go
  • transports/changelog.md

@Pratham-Mishra04 changed the title from "feat: auto-resolve provider when no provider prefix is given on integrstion routes" to "feat: auto-resolve provider when no provider prefix is given on integretion routes" on Apr 26, 2026
@Pratham-Mishra04 changed the title from "feat: auto-resolve provider when no provider prefix is given on integretion routes" to "feat: auto-resolve provider when no provider prefix is given on integration routes" on Apr 26, 2026
@greptile-apps
Copy link
Copy Markdown
Contributor

greptile-apps Bot commented Apr 26, 2026

Confidence Score: 4/5

Safe to merge after the previously flagged defer cancel() issues and empty-model catalog-query edge cases are resolved.

No new P0/P1 issues found beyond what was already identified in prior review rounds. The one P2 here (undocumented behavioral change to CheckAndSetDefaultProvider) is low-risk given the controlled call sites, but worth noting for embedders. Prior P1s (defer cancel() in error paths, empty-model catalog lookup) remain open in previous threads and are the main blockers.

transports/bifrost-http/integrations/router.go: defer cancel() error paths and empty-model guard; core/providers/utils/utils.go: CheckAndSetDefaultProvider behavioral change.

Important Files Changed

Filename / Overview
  • transports/bifrost-http/integrations/router.go: Core routing change: moves the GetAvailableProviders call to after body parsing and introduces model-aware provider resolution via the catalog; three new defer cancel() usages in error paths are inconsistent with surrounding cancel() calls (flagged in previous threads).
  • transports/bifrost-http/lib/config.go: GetAvailableProviders now accepts a model string and delegates to ModelCatalog.GetProvidersForModel when a catalog is present, falling back to the full key scan; the interface signature change is a documented breaking change.
  • core/providers/utils/utils.go: CheckAndSetDefaultProvider now returns availableProviders[0] instead of "" when defaultProvider is not in the list; this behavioral change enables the new "first available provider" fallback path.
  • transports/bifrost-http/integrations/openai.go: Adds openAIModelGetter covering all OpenAI request types; adds GetRequestModel to all route configs; fixes a missing hydrateOpenAIRequestFromLargePayloadMetadata call for video generation requests in PreCallback.
  • transports/bifrost-http/integrations/genai.go: Adds genAIModelGetter with special handling for BifrostVideoRetrieveRequest (model from the operation_id suffix); refactors extractGeminiVideoOperationFromPath to correctly split provider/model from the raw model string, fixing a prior bug where "openai/gpt-4o" would be used as the full provider name.
  • transports/bifrost-http/integrations/bedrock.go: Adds bedrockModelGetter; safely handles a nil Converse in BedrockCountTokensRequest by returning "" rather than panicking.
  • transports/bifrost-http/integrations/anthropic.go: Adds anthropicModelGetter and wires it into all three Anthropic route configs; straightforward and correct.
  • transports/bifrost-http/integrations/cohere.go: Adds cohereModelGetter covering all four Cohere request types and wires it into all route configs; no issues.
  • transports/bifrost-http/handlers/webrtc_realtime_test.go: Updates the testHandlerStore stub to match the new GetAvailableProviders(model string) interface; no logic changes.
  • transports/bifrost-http/handlers/wsresponses_test.go: Updates the testWSHandlerStore stub signature; no logic changes.
  • transports/bifrost-http/integrations/bedrock_test.go: Updates the mockHandlerStore stub for the new interface; no logic changes.
  • transports/bifrost-http/lib/ctx_test.go: Updates the testHandlerStore stub and reformats method declarations; no logic changes.
  • transports/changelog.md: Updates the changelog entry to reflect the feature more accurately.

Reviews (7): Last reviewed commit: "feat: add default provider selection in ..."

@Pratham-Mishra04 force-pushed the 04-27-feat_add_default_provider_selection_in_integration_paths branch from 9bebc25 to 3017ffe on April 27, 2026 08:49
@Pratham-Mishra04 force-pushed the 04-26-feat_added_default_provider_selection_on_inference_routes branch from bad7b10 to 75a20e2 on April 27, 2026 08:49
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
transports/bifrost-http/integrations/router.go (1)

707-710: ⚠️ Potential issue | 🟡 Minor

One early return still leaks the request-scoped context.

The PreCallback error branch returns without calling cancel(), so the cleanup fix added elsewhere in createHandler is still incomplete on this validation path.

💡 Proposed fix
 		if config.PreCallback != nil {
 			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
+				defer cancel()
 				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 				return
 			}
 		}
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/openai.go (1)

1160-1166: Missing ExtractAndSetUserAgentFromHeaders call in video generation PreCallback.

The video generation PreCallback now calls hydrateOpenAIRequestFromLargePayloadMetadata, but unlike other similar routes (e.g., responses at line 785, or openAILargePayloadPreHook), it does not call schemas.ExtractAndSetUserAgentFromHeaders. This could cause user agent headers to not be extracted for video generation requests.

♻️ Proposed fix for consistency
 			PreCallback: func(ctx *fasthttp.RequestCtx, bifrostCtx *schemas.BifrostContext, req interface{}) error {
 				hydrateOpenAIRequestFromLargePayloadMetadata(ctx, bifrostCtx, req)
+				schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx)
 				if isAzureSDKRequest(ctx) {
 					bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true)
 				}
 				return nil
 			},
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@transports/bifrost-http/integrations/router.go`:
- Around line 757-770: The code currently replaces the entire availableProviders
slice with only the native provider (using
RouteConfigTypeToProvider[config.Type]), dropping other compatible providers;
instead, keep all providers but prefer the native one by reordering
availableProviders so the preferred provider
(RouteConfigTypeToProvider[config.Type]) is moved to index 0 if present (or left
as-is otherwise), update the AppendRoutingEngineLog message to reflect that
we're preferring/reordering rather than restricting, and then call
bifrostCtx.SetValue(schemas.BifrostContextKeyAvailableProviders,
availableProviders) with the reordered full list so downstream
fallback/rerouting can still try other providers.



📥 Commits

Reviewing files that changed from the base of the PR and between 9bebc25 and 3017ffe.

📒 Files selected for processing (12)
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/anthropic.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/lib/config.go
  • transports/bifrost-http/lib/ctx_test.go
  • transports/changelog.md
✅ Files skipped from review due to trivial changes (1)
  • transports/changelog.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • transports/bifrost-http/handlers/webrtc_realtime_test.go

@Pratham-Mishra04 force-pushed the 04-26-feat_added_default_provider_selection_on_inference_routes branch from 75a20e2 to ab521e0 on April 27, 2026 09:57
@Pratham-Mishra04 force-pushed the 04-27-feat_add_default_provider_selection_in_integration_paths branch from 3017ffe to d058099 on April 27, 2026 09:57
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
transports/bifrost-http/integrations/openai.go (1)

1160-1162: Reuse openAILargePayloadPreHook here.

This callback has already drifted from the shared helper and is now the odd OpenAI POST route that skips ExtractAndSetUserAgentFromHeaders. Calling the helper first keeps large-payload hydration and user-agent metadata aligned across routes.

♻️ Suggested cleanup
 		PreCallback: func(ctx *fasthttp.RequestCtx, bifrostCtx *schemas.BifrostContext, req interface{}) error {
-			hydrateOpenAIRequestFromLargePayloadMetadata(ctx, bifrostCtx, req)
+			if err := openAILargePayloadPreHook(ctx, bifrostCtx, req); err != nil {
+				return err
+			}
 			if isAzureSDKRequest(ctx) {
 				bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true)
 			}
 			return nil
 		},
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/utils/utils.go`:
- Around line 2790-2791: The fallback chooses availableProviders[0] which is
nondeterministic because availableProviders originates from map iteration (see
GetProvidersForModel); sort the slice before selecting the first entry to ensure
stable selection — e.g., call slices.Sort on availableProviders (or apply
sorting inside GetProvidersForModel so all callers get a consistent order) and
then return the first element; update the selection in utils.go (and the other
occurrences in transports/bifrost-http/handlers/inference.go and
transports/bifrost-http/integrations/router.go) to use the sorted
availableProviders when returning the fallback provider.

In `@transports/bifrost-http/integrations/anthropic.go`:
- Around line 28-35: The current anthropicModelGetter strips or returns
unprefixed model names too early, allowing an explicit "anthropic/..." pin to be
lost; update anthropicModelGetter (used by GetRequestModel) to preserve any
"anthropic/"-prefixed r.Model values exactly as provided for
routing/provider-resolution, returning the prefixed value and not normalizing it
to a bare model name, and only perform any passthrough/native normalization
after provider resolution (e.g., in checkAnthropicPassthrough or a
post-resolution step); ensure both branches handling
*anthropic.AnthropicTextRequest and *anthropic.AnthropicMessageRequest return
r.Model unchanged when it already starts with "anthropic/" so the explicit
provider pin persists through provider selection.
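
A minimal sketch of the contract described above, with hypothetical helper names (`getRequestModel`, `normalizeForPassthrough`) standing in for the real type-switching getter:

```go
package main

import (
	"fmt"
	"strings"
)

// getRequestModel mimics the suggested contract: return the model string
// exactly as the client sent it, so an explicit "anthropic/" provider pin
// survives provider resolution. (Hypothetical helper; the real getter
// type-switches on the parsed Anthropic request structs.)
func getRequestModel(model string) string {
	return model // no early normalization
}

// normalizeForPassthrough strips the provider prefix only AFTER the
// provider has been resolved, e.g. in a passthrough/native step.
func normalizeForPassthrough(model string) string {
	return strings.TrimPrefix(model, "anthropic/")
}

func main() {
	pinned := getRequestModel("anthropic/claude-sonnet-4")
	fmt.Println(pinned)                          // prefix preserved for routing
	fmt.Println(normalizeForPassthrough(pinned)) // bare name for the upstream call
}
```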

In `@transports/bifrost-http/integrations/genai.go`:
- Around line 42-59: genAIModelGetter currently returns the bare "{model}" for
schemas.BifrostVideoRetrieveRequest which lets model-aware routing misdispatch
video-operation retrievals; change the BifrostVideoRetrieveRequest case to not
return the bare model — either return an empty string to skip GetRequestModel
for this route or derive a provider-scoped model from the operation id stored in
ctx.UserValue("operation_id") (split on ':' to get provider and build a
provider-scoped identifier) before returning; update genAIModelGetter (and
reference extractGeminiVideoOperationFromPath if you choose to parse
operation_id there) so routing uses the operation-scoped provider or skips model
resolution.
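
The suggested derivation can be sketched as follows; the `<provider>:<operation>` id format, `providerScopedModel`, and the model names are assumptions for illustration, not Bifrost's actual encoding:

```go
package main

import (
	"fmt"
	"strings"
)

// providerScopedModel derives a provider-scoped model from a video
// operation id assumed to look like "<provider>:<operation>", instead of
// returning the bare "{model}" path param. An empty result signals the
// route to skip model-aware provider resolution entirely.
func providerScopedModel(operationID, pathModel string) string {
	parts := strings.SplitN(operationID, ":", 2)
	if len(parts) == 2 && parts[0] != "" {
		return parts[0] + "/" + pathModel // e.g. "openai/sora-2"
	}
	return "" // no provider in the id: skip resolution for this route
}

func main() {
	fmt.Println(providerScopedModel("openai:video_abc123", "sora-2"))
	fmt.Println(providerScopedModel("video_abc123", "sora-2")) // prints empty line
}
```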

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8d60533b-eb94-4c52-85ca-7f48bd0069ab

📥 Commits

Reviewing files that changed from the base of the PR and between 3017ffe and d058099.

📒 Files selected for processing (13)
  • core/providers/utils/utils.go
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/anthropic.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/lib/config.go
  • transports/bifrost-http/lib/ctx_test.go
  • transports/changelog.md
✅ Files skipped from review due to trivial changes (3)
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/changelog.md
  • transports/bifrost-http/handlers/wsresponses_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/lib/config.go

Comment thread core/providers/utils/utils.go
Comment thread transports/bifrost-http/integrations/anthropic.go
Comment thread transports/bifrost-http/integrations/genai.go
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 04-27-feat_add_default_provider_selection_in_integration_paths branch from d058099 to 8847366 Compare April 27, 2026 11:33
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 04-26-feat_added_default_provider_selection_on_inference_routes branch from ab521e0 to 25b640f Compare April 27, 2026 11:33
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
transports/bifrost-http/integrations/router.go (1)

846-861: ⚠️ Potential issue | 🟠 Major

Cancel the derived context on PreCallback errors too.

These new defer cancel() calls fix several early returns, but the PreCallback branch at Line 708 still exits without canceling the derived context. A failing PreCallback now leaks the same request-scoped context you just fixed elsewhere in this function.

🐛 Proposed fix
if config.PreCallback != nil {
	if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
		cancel()
		g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
		return
	}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/integrations/router.go` around lines 846-861, the
PreCallback error path currently returns without cancelling the derived context,
leaking it; update the PreCallback branch in the request handling function so
that when config.PreCallback != nil and it returns an error you call cancel()
before invoking g.sendError (using newBifrostError) and returning; ensure this
mirrors the other early-return branches that call defer cancel()/cancel() so the
derived context is always cancelled on errors (same area that calls
g.extractAndParseFallbacks, SetRawRequestBody, etc.).
🧹 Nitpick comments (3)
transports/bifrost-http/integrations/cohere.go (1)

72-86: Return an explicit error on unsupported request types in cohereModelGetter.

The current default return "", nil can silently hide route/getter mismatches.

♻️ Proposed change
 func cohereModelGetter(_ *fasthttp.RequestCtx, req interface{}) (string, error) {
 	switch r := req.(type) {
 	case *cohere.CohereChatRequest:
 		return r.Model, nil
 	case *cohere.CohereEmbeddingRequest:
 		return r.Model, nil
 	case *cohere.CohereRerankRequest:
 		return r.Model, nil
 	case *cohere.CohereCountTokensRequest:
 		return r.Model, nil
 	}
-	return "", nil
+	return "", errors.New("invalid request type for Cohere model getter")
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/integrations/cohere.go` around lines 72 - 86,
cohereModelGetter currently returns ("", nil) for unsupported request types
which hides mismatches; update cohereModelGetter to return a clear error instead
of nil when the type assertion falls through — e.g., if req is not one of
cohere.CohereChatRequest, cohere.CohereEmbeddingRequest,
cohere.CohereRerankRequest, or cohere.CohereCountTokensRequest, return an
explicit error (with context like "unsupported request type for
cohereModelGetter") so callers can detect and log route/getter mismatches.
transports/bifrost-http/integrations/openai.go (1)

1160-1166: Consider adding ExtractAndSetUserAgentFromHeaders for consistency.

The video generation route's PreCallback now correctly calls hydrateOpenAIRequestFromLargePayloadMetadata, but unlike similar POST routes (e.g., responses endpoint at line 785), it doesn't call schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx). This may affect user-agent-based logging or observability for video generation requests.

♻️ Suggested fix for consistency
 		PreCallback: func(ctx *fasthttp.RequestCtx, bifrostCtx *schemas.BifrostContext, req interface{}) error {
 			hydrateOpenAIRequestFromLargePayloadMetadata(ctx, bifrostCtx, req)
+			schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx), bifrostCtx)
 			if isAzureSDKRequest(ctx) {
 				bifrostCtx.SetValue(schemas.BifrostContextKeyIsAzureUserAgent, true)
 			}
 			return nil
 		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/integrations/openai.go` around lines 1160-1166, the
PreCallback for the video generation route currently calls
hydrateOpenAIRequestFromLargePayloadMetadata and sets the Azure user-agent flag
via isAzureSDKRequest, but it misses extracting the user-agent into the context;
add a call to
schemas.ExtractAndSetUserAgentFromHeaders(extractHeadersFromRequest(ctx),
bifrostCtx) inside the same PreCallback (alongside
hydrateOpenAIRequestFromLargePayloadMetadata and the isAzureSDKRequest check) so
the bifrostCtx receives consistent user-agent info for logging/observability.
transports/bifrost-http/lib/config.go (1)

4929-4952: Narrow lock scope in the model-catalog path.

At line 4930, c.Mu.RLock() is held even when taking the model-catalog branch. Consider returning early in that branch to avoid calling into ModelCatalog under the config lock.

♻️ Proposed refactor
 func (c *Config) GetAvailableProviders(model string) []schemas.ModelProvider {
-	c.Mu.RLock()
-	defer c.Mu.RUnlock()
-	availableProviders := []schemas.ModelProvider{}
-	if c.ModelCatalog != nil {
-		availableProviders = c.ModelCatalog.GetProvidersForModel(model)
-	} else {
+	if c.ModelCatalog != nil {
+		return c.ModelCatalog.GetProvidersForModel(model)
+	}
+
+	c.Mu.RLock()
+	defer c.Mu.RUnlock()
+	availableProviders := []schemas.ModelProvider{}
+	{
 		// Return all providers that have at least one key with a non-empty value.
 		for provider, config := range c.Providers {
 			// Check if the provider has at least one key with a non-empty value. If so, add the provider to the list.
 			// If the provider allows empty keys, add the provider to the list.
 			for _, key := range config.Keys {
 				if key.Value.GetValue() != "" || bifrost.CanProviderKeyValueBeEmpty(provider) {
 					if key.Enabled != nil && !*key.Enabled {
 						continue
 					}
 					availableProviders = append(availableProviders, provider)
 					break
 				}
 			}
 		}
 	}
 	return availableProviders
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/lib/config.go` around lines 4929-4952, in
GetAvailableProviders, avoid calling ModelCatalog.GetProvidersForModel while
holding c.Mu; read c.ModelCatalog under the lock into a local variable (mc :=
c.ModelCatalog), if mc != nil then immediately c.Mu.RUnlock() and return
mc.GetProvidersForModel(model); otherwise continue to acquire the read lock (or
reuse it) to iterate c.Providers and compute availableProviders as before
(checking config.Keys, key.Value.GetValue(), bifrost.CanProviderKeyValueBeEmpty,
and key.Enabled) before unlocking and returning—this ensures the ModelCatalog
path returns early without invoking ModelCatalog under the config lock.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@transports/bifrost-http/integrations/genai.go`:
- Around line 53-60: The pre-callback currently takes the suffix of operation_id
(e.g., "gpt-4o" or "openai/gpt-4o") and writes it directly into
BifrostVideoRetrieveRequest.Provider, which is incorrect; update the logic so
the suffix is treated as the raw model string and parsed into provider + model
using the same parsing used elsewhere (i.e., make
extractGeminiVideoOperationFromPath parse the raw model string rather than
treating it as a provider). Specifically, in the case handling
*schemas.BifrostVideoRetrieveRequest use operationID :=
ctx.UserValue("operation_id").(string) -> rawModel := lastSuffix and then call
the shared parsing routine (or factor out parsing) to set r.Provider to the
canonical provider (e.g., "openai"/"azure"/default) and r.Model to the model
token; do not assign the raw suffix directly to
BifrostVideoRetrieveRequest.Provider so the OpenAI/Azure branch can match.

In `@transports/bifrost-http/integrations/router.go`:
- Around line 416-420: RegisterRoutes currently doesn't validate that inference
routes provide the GetRequestModel callback, so createHandler treats it as
optional and model-aware provider resolution can silently fail; add a guard in
RegisterRoutes (the same place RequestConverter is validated) to check for
non-nil RouteConfig.GetRequestModel on routes where RouteConfig.Type indicates
an inference route (match the existing inference RouteConfigType checks), and
return an error or panic if GetRequestModel is nil so missing getters fail fast
before handler creation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7baddd48-589e-4a3a-b195-c1060450558c

📥 Commits

Reviewing files that changed from the base of the PR and between d058099 and 8847366.

📒 Files selected for processing (13)
  • core/providers/utils/utils.go
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/anthropic.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/lib/config.go
  • transports/bifrost-http/lib/ctx_test.go
  • transports/changelog.md
✅ Files skipped from review due to trivial changes (4)
  • transports/changelog.md
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/lib/ctx_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • transports/bifrost-http/handlers/wsresponses_test.go

Comment thread transports/bifrost-http/integrations/genai.go
Comment thread transports/bifrost-http/integrations/router.go
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 04-26-feat_added_default_provider_selection_on_inference_routes branch from 25b640f to d9c76a7 Compare April 27, 2026 12:18
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 04-27-feat_add_default_provider_selection_in_integration_paths branch from 8847366 to 63ac49f Compare April 27, 2026 12:18
Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
transports/bifrost-http/lib/config.go (1)

4938-4957: ⚠️ Potential issue | 🟠 Major

Make returned provider order deterministic before downstream default selection.

availableProviders order is not stabilized here. Since downstream selection can use the first element, provider choice can vary between runs (especially from map iteration in the fallback branch). Sort before returning.

♻️ Suggested fix
 func (c *Config) GetAvailableProviders(model string) []schemas.ModelProvider {
 	c.Mu.RLock()
 	defer c.Mu.RUnlock()
 	availableProviders := []schemas.ModelProvider{}
 	if c.ModelCatalog != nil {
 		availableProviders = c.ModelCatalog.GetProvidersForModel(model)
 	} else {
 		// Return all providers that have at least one key with a non-empty value.
 		for provider, config := range c.Providers {
 			// Check if the provider has at least one key with a non-empty value. If so, add the provider to the list.
 			// If the provider allows empty keys, add the provider to the list.
 			for _, key := range config.Keys {
 				if key.Value.GetValue() != "" || bifrost.CanProviderKeyValueBeEmpty(provider) {
 					if key.Enabled != nil && !*key.Enabled {
 						continue
 					}
 					availableProviders = append(availableProviders, provider)
 					break
 				}
 			}
 		}
 	}
+	slices.Sort(availableProviders)
 	return availableProviders
 }

Based on learnings: CheckAndSetDefaultProvider picks availableProviders[0] when the native provider is absent, so slice ordering directly affects routing outcome.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/lib/config.go` around lines 4938-4957, the returned
availableProviders slice is not deterministic (map iteration order can vary) and
CheckAndSetDefaultProvider may pick availableProviders[0], so sort the providers
before returning; locate the function where c.ModelCatalog is checked and
availableProviders is built (references: c.ModelCatalog, c.Providers,
availableProviders, bifrost.CanProviderKeyValueBeEmpty) and add a deterministic
sort (e.g., sort.Strings on availableProviders) just before the return to
stabilize downstream default selection.
transports/bifrost-http/integrations/router.go (1)

707-711: ⚠️ Potential issue | 🟠 Major

Cancel the derived Bifrost context on PreCallback errors.

This is still an early-return path in createHandler that exits without calling cancel(). Repeated validation failures here will keep the derived context alive until timeout instead of releasing it immediately.

♻️ Proposed fix
 		if config.PreCallback != nil {
 			if err := config.PreCallback(ctx, bifrostCtx, req); err != nil {
+				cancel()
 				g.sendError(ctx, bifrostCtx, config.ErrorConverter, newBifrostError(err, "failed to execute pre-request callback: "+err.Error()))
 				return
 			}
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/integrations/router.go` around lines 707-711, the
early return path in createHandler when config.PreCallback returns an error
never calls the derived context's cancel(), leaking the context until its
timeout; modify the error handling in the PreCallback branch to call the cancel
function for the derived Bifrost context before invoking
g.sendError/newBifrostError and returning so the context is released immediately
(ensure cancel() is called prior to return even after sending the error).
🧹 Nitpick comments (1)
transports/bifrost-http/integrations/router.go (1)

358-360: Make the GetRequestModel comments match the actual contract.

The type comment says this callback only takes *fasthttp.RequestCtx, and the field comment says it "SHOULD NOT BE NIL". The signature now also receives the parsed request, and the callback is intentionally optional, so these comments will steer future integrations the wrong way.

📝 Proposed wording
-// RequestModelGetter is a function type that accepts only a *fasthttp.RequestCtx and
-// returns a string indicating the model derived from the context.
+// RequestModelGetter extracts the model string from the HTTP context and/or
+// the parsed request object. It may return an error if extraction fails.
 type RequestModelGetter func(ctx *fasthttp.RequestCtx, req interface{}) (string, error)
@@
-	GetRequestModel                        RequestModelGetter                     // Function to get the model from the context (SHOULD NOT BE NIL)
+	GetRequestModel                        RequestModelGetter                     // Optional: extracts the model for model-aware provider resolution

Based on learnings, RouteConfig.GetRequestModel is intentionally optional and only used when a route wants model-aware provider resolution.

Also applies to: 419-420

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/integrations/router.go` around lines 358-360, the
comment for RequestModelGetter and the RouteConfig.GetRequestModel field is
inaccurate: update the type and field comments to reflect the actual function
signature (RequestModelGetter(ctx *fasthttp.RequestCtx, req interface{})
(string, error)) and that the parsed request parameter is provided, and change
the "SHOULD NOT BE NIL" phrasing to indicate the callback is optional—used only
when a route requires model-aware provider resolution; reference
RequestModelGetter and RouteConfig.GetRequestModel when making these comment
edits so future integrators see the correct contract and optionality.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4b7855a9-93e0-4696-be82-0b38feeac6de

📥 Commits

Reviewing files that changed from the base of the PR and between 8847366 and 63ac49f.

📒 Files selected for processing (13)
  • core/providers/utils/utils.go
  • transports/bifrost-http/handlers/webrtc_realtime_test.go
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/anthropic.go
  • transports/bifrost-http/integrations/bedrock.go
  • transports/bifrost-http/integrations/bedrock_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/genai.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/router.go
  • transports/bifrost-http/lib/config.go
  • transports/bifrost-http/lib/ctx_test.go
  • transports/changelog.md
✅ Files skipped from review due to trivial changes (3)
  • transports/changelog.md
  • transports/bifrost-http/handlers/wsresponses_test.go
  • transports/bifrost-http/integrations/genai.go
🚧 Files skipped from review as they are similar to previous changes (4)
  • core/providers/utils/utils.go
  • transports/bifrost-http/lib/ctx_test.go
  • transports/bifrost-http/integrations/cohere.go
  • transports/bifrost-http/integrations/openai.go

coderabbitai[bot]
coderabbitai Bot previously approved these changes Apr 27, 2026
Contributor

akshaydeo commented Apr 28, 2026

Merge activity

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 04-27-feat_add_default_provider_selection_in_integration_paths branch from 63ac49f to f8cb0c4 Compare April 28, 2026 07:07
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 04-26-feat_added_default_provider_selection_on_inference_routes branch from d9c76a7 to 960f72c Compare April 28, 2026 07:07
Comment thread transports/bifrost-http/integrations/router.go
@akshaydeo akshaydeo changed the base branch from 04-26-feat_added_default_provider_selection_on_inference_routes to graphite-base/3068 April 28, 2026 07:13
@akshaydeo akshaydeo changed the base branch from graphite-base/3068 to main April 28, 2026 07:15
@akshaydeo akshaydeo dismissed coderabbitai[bot]’s stale review April 28, 2026 07:15

The base branch was changed.

@akshaydeo akshaydeo force-pushed the 04-27-feat_add_default_provider_selection_in_integration_paths branch from f8cb0c4 to 340185f Compare April 28, 2026 07:16
@akshaydeo akshaydeo merged commit 3e50ecb into main Apr 28, 2026
14 of 16 checks passed
@akshaydeo akshaydeo deleted the 04-27-feat_add_default_provider_selection_in_integration_paths branch April 28, 2026 07:18