
fix: default OpenAI provider on /openai/v1/models converter (#2993)#3007

Closed
thiscantbeserious wants to merge 1 commit into maximhq:main from thiscantbeserious:fix/2993-openai-v1-models-fanout

Conversation

@thiscantbeserious

Summary

  • GET /openai/v1/models was fanning out to every configured provider because the RequestConverter did not set Provider on the BifrostRequest.
  • The dispatch layer treats an empty Provider as "list all" and calls ListAllModels, which queries every provider. Each non-OpenAI provider rejected the caller's OpenAI token, and the aggregated error surfaced to the caller with a non-OpenAI provider name and message.
  • This PR adds a PreCallback to detect the Azure OpenAI SDK User-Agent, then defaults Provider to schemas.Azure (Azure SDK) or schemas.OpenAI (all other callers) in the RequestConverter.
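The defaulting rule described above can be sketched as a small standalone function. The type names and the "azsdk" User-Agent prefix check below are illustrative stand-ins, not the actual bifrost schemas types or detection logic:

```go
package main

import (
	"fmt"
	"strings"
)

// ModelProvider is an illustrative stand-in for the real
// schemas.ModelProvider type in bifrost.
type ModelProvider string

const (
	ProviderOpenAI ModelProvider = "openai"
	ProviderAzure  ModelProvider = "azure"
)

// defaultProvider applies the rule this PR introduces: an explicitly
// set provider always wins; otherwise Azure SDK callers default to
// Azure and everyone else to OpenAI. The prefix check is a guess at
// the detection heuristic, not the actual implementation.
func defaultProvider(explicit ModelProvider, userAgent string) ModelProvider {
	if explicit != "" {
		return explicit
	}
	if strings.HasPrefix(strings.ToLower(userAgent), "azsdk") {
		return ProviderAzure
	}
	return ProviderOpenAI
}

func main() {
	fmt.Println(defaultProvider("", "azsdk-go-azopenai/1.0")) // azure
	fmt.Println(defaultProvider("", "python-openai/1.30"))    // openai
	fmt.Println(defaultProvider(ProviderAzure, "curl/8.0"))   // azure
}
```

Because the empty-provider case is the only one that triggers defaulting, callers that set Provider explicitly keep their current behavior.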

Changes

  • transports/bifrost-http/integrations/openai.go: added PreCallback and updated RequestConverter in CreateOpenAIListModelsRouteConfigs. The pattern mirrors the existing /v1/realtime and /v1/responses routes in the same file.
  • transports/bifrost-http/integrations/openai_list_models_test.go: 8 new tests covering the converter, the PreCallback, the Azure SDK path, explicit-provider passthrough, and invalid-request error handling.

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (React)
  • Docs

How to test

cd transports
go test ./bifrost-http/integrations/... -run TestListModels -v
go build ./bifrost-http/...
go vet ./bifrost-http/...

Manual smoke test (two providers configured):

curl -sS "http://localhost:8080/openai/v1/models" \
  -H "Authorization: Bearer sk-any-openai-key"

Before this fix the response contained key_statuses entries for every provider. After the fix only the OpenAI provider is queried.

Screenshots/Recordings

N/A (transport-layer Go change only).

Breaking changes

  • Yes
  • No

Related issues

Closes #2993

Discovered and originally fixed as a side-effect of PR #2775 (thiscantbeserious/bifrost). Extracted here as a focused change against main.

Security considerations

No auth, secrets, or PII involved. The change narrows the blast radius of a list-models call by scoping it to the intended provider.

Checklist

  • I read docs/contributing/README.md and followed the guidelines
  • I added/updated tests where appropriate
  • I updated documentation where needed
  • I verified builds succeed (Go and UI)
  • I verified the CI pipeline passes locally if applicable

fix: default OpenAI provider on /openai/v1/models converter (#2993)

The RequestConverter for the /models route did not set Provider on the
BifrostRequest. The dispatch layer treated an empty Provider as "list
all", calling ListAllModels and fanning out to every configured provider
(including Anthropic, Bedrock, etc.). Each non-OpenAI provider rejected
the OpenAI token with its own 401, and the aggregated error surfaced to
the caller with a non-OpenAI provider name and message.

Fix: add a PreCallback that detects the Azure OpenAI SDK User-Agent and
stores the flag in BifrostContext. The RequestConverter then reads that
flag to default Provider to schemas.Azure for Azure SDK callers, or
schemas.OpenAI otherwise. This matches the pattern used by other routes
in the same file (e.g. /v1/realtime, /v1/responses).

Callers that already set Provider explicitly are unaffected.

Closes maximhq#2993
Extracted from maximhq#2775
@coderabbitai

coderabbitai Bot commented Apr 24, 2026

📝 Walkthrough

Summary by CodeRabbit

  • Bug Fixes
    • Fixed an issue where list models requests could return unexpected authorization errors by ensuring requests are properly routed to the correct provider based on the API client being used.

Walkthrough

This change fixes a bug where the /openai/v1/models endpoint was fanning out to all configured providers instead of defaulting to OpenAI. A pre-processing step now tags Azure SDK requests, and the request converter defaults the provider to OpenAI or Azure based on the request source, preventing multi-provider dispatch.

Changes

Cohort / File(s) Summary
Provider defaulting logic
transports/bifrost-http/integrations/openai.go
Adds a PreCallback that detects Azure OpenAI SDK requests via User-Agent inspection and sets a flag in BifrostContext. Modifies the RequestConverter for /models to default Provider to schemas.OpenAI for non-Azure callers and schemas.Azure for Azure SDK callers, preventing fan-out to all providers.
Integration tests
transports/bifrost-http/integrations/openai_list_models_test.go
New test suite validating provider selection logic: defaults to OpenAI when no provider is set and request is non-Azure, defaults to Azure for Azure OpenAI SDK requests, preserves explicit provider values, correctly detects Azure via User-Agent, errors on invalid request types, and confirms all route path variants are registered.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 A hop and a skip, the provider now sticks—
No more fanning out to each provider that tricks!
Azure and OpenAI, each knows its place,
One SDK, one route, no more messy race! 🎯

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately describes the main change: defaulting the OpenAI provider on the /openai/v1/models converter route.
Description check ✅ Passed The PR description is comprehensive, covering the bug, root cause, fix approach, tests added, and how to test. All key template sections are addressed.
Linked Issues check ✅ Passed The code changes directly address issue #2993 by adding a PreCallback for Azure SDK detection and defaulting Provider to OpenAI/Azure in the RequestConverter, eliminating unwanted fan-out.
Out of Scope Changes check ✅ Passed All changes are scoped to the /openai/v1/models route converter and its tests. No unrelated modifications to other routes, providers, or areas detected.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4)

level=error msg="[linters_context] typechecking error: pattern ./...: directory prefix . does not contain main module or its selected dependencies"

Warning

Review ran into problems

🔥 Problems

Timed out fetching pipeline failures after 30000ms



thiscantbeserious added a commit to thiscantbeserious/bifrost that referenced this pull request Apr 24, 2026
This fix for maximhq#2993 was originally included in this feature branch as a
side effect. It has been extracted into a focused PR maximhq#3007 against main.

Reverting the changes here so the feature branch no longer overlaps.
@thiscantbeserious thiscantbeserious marked this pull request as ready for review April 29, 2026 13:26
@greptile-apps

greptile-apps Bot commented Apr 29, 2026

Confidence Score: 5/5

Safe to merge — the fix is narrowly scoped, follows established patterns in the same file, and is covered by 8 new tests.

No P0 or P1 findings. The logic is simple and correct: the provider default only fires when Provider is empty, the Azure flag is stored/read via the existing BifrostContextKeyIsAzureUserAgent mechanism, and the closures in the loop do not capture the loop variable. The fix mirrors the /realtime and /responses patterns used throughout the file.

No files require special attention.

Important Files Changed

Filename Overview
transports/bifrost-http/integrations/openai.go Adds PreCallback + provider-defaulting guard to CreateOpenAIListModelsRouteConfigs; correctly mirrors the existing /realtime and /responses pattern. No closure-capture issue since the closures do not reference the loop variable path.
transports/bifrost-http/integrations/openai_list_models_test.go Eight tests cover the converter default, Azure SDK detection, explicit-provider passthrough, invalid request error, flag presence/absence, path registration, and BifrostRequest population. Coverage is thorough and tests are well-structured.

Reviews (1): Last reviewed commit: "fix: default OpenAI provider on /openai/..."


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
transports/bifrost-http/integrations/openai_list_models_test.go (1)

87-101: Add one more passthrough test with Azure UA (optional).

Consider a variant of this test that uses Azure SDK User-Agent and an explicit non-empty provider, to lock in “explicit provider wins” regardless of UA tagging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/integrations/openai_list_models_test.go` around lines
87 - 101, Add a second test variant that mirrors
TestListModelsRequestConverter_RespectsExplicitProvider but simulates an Azure
SDK User-Agent; call findListModelsRoute and applyPreCallback the same way,
build the request via makeListModelsReq with a non-empty explicit provider
(e.g., schemas.Azure) and invoke route.RequestConverter, then assert no error
and that bifrostReq.ListModelsRequest.Provider still equals the explicit
provider to ensure UA tagging does not overwrite an explicitly set provider.
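As a self-contained illustration of the property the suggested variant would lock in ("explicit provider wins regardless of UA tagging"), the check reduces to a table-driven sketch like the one below. The local resolveProvider function is a hypothetical stand-in; the real test would go through findListModelsRoute, applyPreCallback, makeListModelsReq, and route.RequestConverter instead:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveProvider is a local stand-in for the route's converter logic;
// the actual test would exercise route.RequestConverter directly.
func resolveProvider(explicit, userAgent string) string {
	if explicit != "" {
		return explicit
	}
	if strings.HasPrefix(userAgent, "azsdk") {
		return "azure"
	}
	return "openai"
}

func main() {
	cases := []struct {
		name, explicit, ua, want string
	}{
		{"azure UA defaults to azure", "", "azsdk-go-azopenai/1.0", "azure"},
		{"plain caller defaults to openai", "", "python-openai/1.30", "openai"},
		// The variant the review asks for: Azure SDK User-Agent plus an
		// explicit provider, asserting UA tagging does not overwrite it.
		{"explicit provider wins over azure UA", "openai", "azsdk-go-azopenai/1.0", "openai"},
	}
	for _, c := range cases {
		got := resolveProvider(c.explicit, c.ua)
		fmt.Printf("%s: got=%s want=%s\n", c.name, got, c.want)
	}
}
```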

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bb5778cd-8697-4c5d-a882-abb88baa34ff

📥 Commits

Reviewing files that changed from the base of the PR and between 5a2559c and 63541c8.

📒 Files selected for processing (2)
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/integrations/openai_list_models_test.go



Development

Successfully merging this pull request may close these issues.

[Bug]: GET /openai/v1/models fans out to all providers because RequestConverter omits Provider

1 participant