
[feature] OAuth passthrough for OpenAI provider (Codex CLI subscription support)#2775

Draft
thiscantbeserious wants to merge 38 commits into maximhq:main from thiscantbeserious:feature/openai-oauth-passthrough

Conversation


@thiscantbeserious thiscantbeserious commented Apr 16, 2026

Summary

An alternative, smaller implementation of #2715: an optional OAuth toggle in the OpenAI provider.

(Screenshot: 2026-04-16 at 21:18:51)

Motivation

Codex CLI users with a ChatGPT subscription want to route through Bifrost for logging/governance without paying separately for API credits. Previously, pointing Codex's openai_base_url at Bifrost failed because:

  • Codex sends a ChatGPT OAuth token (scoped openid profile email offline_access) which OpenAI's API gateway rejects with 401 "Missing scopes: api.responses.write" when received through a proxy
  • The actual endpoint Codex uses is chatgpt.com/backend-api/codex/responses, which expects different headers, body format, and paths

Prior art: the community openai-oauth project implements this as a standalone localhost proxy. This PR integrates the same approach natively into Bifrost.

For the Recommended Test-List for Reviewers see:

#2775 (comment)

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Chore/CI

Affected areas

  • Core (Go)
  • Transports (HTTP)
  • Providers/Integrations
  • Plugins
  • UI (Next.js)
  • Docs

How to test


Build local Docker:

docker build -f transports/Dockerfile.local -t bifrost:local .
docker run -p 8080:8080 -v $(pwd)/data:/app/data/ bifrost:local
  1. Add an OpenAI provider -> go to its Settings -> OpenAI Config
  2. Enable the new OAuth toggle
  3. Verify Codex CLI requests succeed end-to-end


Breaking changes

  • Yes
  • No

Related Issues

Changes

Adds a ChatGPT OAuth Passthrough toggle on the OpenAI provider that lets users route Codex CLI (or any ChatGPT OAuth client) through Bifrost to the ChatGPT backend API, using their subscription (Plus/Pro/Business/Enterprise) instead of a paid API key.

When the toggle is enabled on the OpenAI provider, Bifrost transparently:

  • Routes all requests from api.openai.com/v1/* to chatgpt.com/backend-api/codex/* (strips the /v1 prefix)
  • Extracts the OAuth access token from the inbound Authorization header and forwards it upstream (no allow_direct_keys required)
  • Injects the required chatgpt-account-id header (extracted from the JWT's https://api.openai.com/auth.chatgpt_account_id claim)
  • Injects OpenAI-Beta: responses=experimental
  • Mutates the /responses body: ensures instructions exists, forces store=false, removes max_output_tokens, forces stream=true (the ChatGPT backend rejects non-streaming)
  • Appends the required client_version query param on /models

Covers every OpenAI provider route:

  • /responses (HTTP + WebSocket)
  • /models
  • /chat/completions
  • /embeddings
  • /audio/*
  • /images/*
  • /files*
  • /batches*
  • /containers*
  • /videos*
  • /responses/compact
  • /responses/input_tokens
  • /realtime/*

Design

All ChatGPT-specific logic lives in a single self-contained module: core/providers/openai/chatgpt_oauth.go. Call sites in openai.go / websocket.go are one-liners that delegate to helpers — no ChatGPT-specific code leaks out of the module.

  • extractChatGPTAccountID(jwt) — base64url-decode JWT, extract account ID from https://api.openai.com/auth.chatgpt_account_id claim. Rejects newlines/CR (header injection defense).
  • chatGPTOAuthPath(standardPath) — strips /v1 prefix; appends ?client_version= to /models automatically.
  • transformChatGPTResponsesBody(body) — applies the four required body mutations.
  • chatGPTOAuthMergeHeaders(...) — merges OAuth headers into existing extra headers.
  • chatGPTOAuthApplyRequest(...) — one-call wrapper for HTTP /responses (headers + body transformer).
  • chatGPTOAuthWebSocketURL / chatGPTOAuthWebSocketHeaders — mirrors for upstream WebSocket.

Integration points (tiny):

  • OpenAIProvider.buildRequestURL auto-applies chatGPTOAuthPath when chatgptOAuth is true → every URL-using method is fixed in one place.
  • OpenAIProvider.effectiveExtraHeaders(key) auto-merges OAuth headers → replaces all 30+ direct references to provider.networkConfig.ExtraHeaders.
  • core/bifrost.go auto-injects the Bearer token as a direct key when chatgpt_oauth is enabled, so users don't need to toggle the global allow_direct_keys security setting.
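The path rewrite applied by buildRequestURL can be sketched as below. The function name and the client_version value are placeholders (the real constant lives in chatgpt_oauth.go and is not shown in this PR text); the shape follows the behavior described above:

```go
package main

import (
	"fmt"
	"strings"
)

// placeholderClientVersion stands in for the real client_version value,
// which is defined in chatgpt_oauth.go and not shown here.
const placeholderClientVersion = "0.0.0"

// oauthPath sketches chatGPTOAuthPath's described behavior: strip the /v1
// prefix and append the client_version query param on /models.
func oauthPath(p string) string {
	p = strings.TrimPrefix(p, "/v1")
	if p == "/models" {
		p += "?client_version=" + placeholderClientVersion
	}
	return p
}

func main() {
	fmt.Println(oauthPath("/v1/responses")) // → /responses
	fmt.Println(oauthPath("/v1/models"))    // → /models?client_version=0.0.0
}
```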

Configuration

Three new fields:

  • OpenAIConfig.ChatGPTOAuth bool (Go schema)
  • openai_config.chatgpt_oauth (JSON / DB JSON blob — no migration needed)
  • UI toggle at Providers → OpenAI → OpenAI Config → ChatGPT OAuth Passthrough

User config in ~/.codex/config.toml:

openai_base_url = "http://localhost:8080/openai/v1"
model = "gpt-5.3-codex"

No env_key, OPENAI_API_KEY, openai/ model prefix, or allow_direct_keys toggle required.

Extracted side-effect fixes

Side-effect fixes found during this work have been extracted into their own focused PRs and removed from this branch. See the triage table in the PR comment at #2775 (comment) for the full map of issue, extraction PR, and removal commit per finding.

Security

  • JWT is decoded only for claim extraction — no signature verification (consistent with the openai-oauth reference implementation; the token is forwarded as-is to the real backend).
  • Account IDs containing \r or \n are rejected to prevent HTTP header injection (3 dedicated test cases).
  • No new network egress: Bifrost only connects to whatever base_url is configured (default https://chatgpt.com/backend-api/codex).

Test Plan

Automated (already green on this branch)

  • Unit tests: go test ./providers/openai/... -count=1 (50+ test cases, 100% coverage on chatgpt_oauth.go — every public function)
  • Static analysis: go vet, gofmt — clean
  • Adversarial review: ran against hostile review agent; findings addressed (header injection fix, map mutation safety)

Manual — ChatGPT OAuth path (author verified)

  • Enable "ChatGPT OAuth Passthrough" toggle on the OpenAI provider via UI
  • Configure Codex CLI with openai_base_url = "http://localhost:8080/openai/v1", no env_key, no API key in env
  • Log in to Codex with a ChatGPT Plus/Pro/Business/Enterprise subscription (codex → OAuth browser flow)
  • Run /model in Codex — verify the model list populates from chatgpt.com/backend-api/codex/models
  • Select a model (e.g. gpt-5.3-codex) — verify inference works end-to-end via WebSocket
  • Select a model — verify inference works via HTTP SSE fallback (kill WS pool or block wss)

Files Changed

| Category | File | Change |
|---|---|---|
| Schema | core/schemas/provider.go | Add ChatGPTOAuth bool to OpenAIConfig |
| Module (new) | core/providers/openai/chatgpt_oauth.go | All ChatGPT-specific logic: JWT, body transform, path/URL/header helpers |
| Tests (new) | core/providers/openai/chatgpt_oauth_test.go | 50+ tests, 100% coverage |
| Provider | core/providers/openai/openai.go | Auto-apply path mapping + header merge across all routes |
| Provider (WS) | core/providers/openai/websocket.go | Delegate to module helpers for WS URL and headers |
| Core routing | core/bifrost.go | Auto-inject Bearer → direct key when chatgpt_oauth is on |
| Other providers | core/providers/{azure,fireworks,openrouter,xai}/*.go | Pass nil to new bodyTransformer param on shared streaming helper (no behavior change) |
| UI | ui/lib/types/config.ts, ui/lib/types/schemas.ts, ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx | New toggle + form wiring |
| Docs | docs/cli-agents/codex-cli.mdx | Setup guide with both auth methods |

Compatibility

  • Fully backward compatible. Toggle defaults to false; when off, provider behaves exactly as before.
  • No DB migration: OpenAIConfig is stored as a JSON blob (open_ai_config_json column) and new fields with zero-value defaults deserialize cleanly.
  • No breaking API changes. The shared HandleOpenAIResponsesRequest and HandleOpenAIResponsesStreaming helpers gained a trailing bodyTransformer func([]byte) ([]byte, error) parameter; all external callers pass nil.

🤖 Generated with Claude Code


Addendum, 2026-04-23 / 2026-04-24 (main merge + extraction of side-effect fixes)

Merged main, discovered and fixed several latent WS-pipeline bugs during live Codex OAuth testing, then extracted each latent fix into its own focused upstream PR so this branch stays scoped to the ChatGPT OAuth feature itself.

Merge

  • Merge commit 0d9e4130d, resolved conflicts in core/bifrost.go, core/providers/azure/azure.go, core/providers/openai/openai.go, core/schemas/provider.go, ui/.../openaiConfigFormFragment.tsx, ui/lib/types/schemas.ts. Main's refactors adopted, ChatGPT OAuth feature preserved.
  • Branch subsequently rebased over a second origin/main merge (a9acea33b).

What stays on this branch

OAuth-specific work only:

  • ChatGPT OAuth provider config toggle, JWT claim extraction, body transform, path/URL/header helpers, auto Bearer injection.
  • OAuth identity defaults (originator: codex_cli_rs, version fallback) injected only when the client sent none, and only on the chatgpt_oauth path. Standard api.openai.com WS is unaffected.
  • Debug gated raw frame logging (ws upstream frame, ws upstream stream ended) added for diagnosis during OAuth bring-up. Recommendation before merge: gate behind an env flag (BIFROST_WS_TRACE_FRAMES=1) or remove, because at LOG_LEVEL=debug it captures raw upstream payload which can be sensitive.

What was extracted and removed from this branch

Each of the following latent fixes was discovered while debugging the OAuth WS path but is orthogonal to the OAuth feature. Each has its own focused upstream PR, and a matching chore: remove ... (extracted to #NNNN) commit on this branch so the feature branch does not overlap the upstream PRs. Full mapping of issue, PR, and removal commit is in the triage comment at #2775 (comment).

Merge order dependency

Because the extracted fixes were removed from this branch, the OAuth feature cannot be demoed end-to-end against a real ChatGPT Codex client at the current tip. The Codex CLI sends bare model strings (e.g. gpt-5-codex), which requires PR #3006 to land on main first. After that, the WS-logging cluster (#3013, #3014, #3016, #3017, #3019, #3020) should land together to restore log finalization and the WS idle-timeout watchdog. Once those are on main, rebase this branch and re-run the manual Codex round trip before unmarking draft.

Previously verified end-to-end (before extraction)

These results were produced when the side-effect fixes were still present on this branch. They will hold again after the extracted PRs land on main and this branch is rebased.

  • Codex CLI 0.123.0 → Bifrost (chatgpt_oauth on, no allow_direct_keys) → chatgpt.com/backend-api/codex/responses via native WebSocket.
  • Response relayed to Codex immediately (seconds, not the pre-fix 2m+).
  • DB log entry transitions processing to success with proper latency, token counts, and request/response bodies when "Store raw request/response" is on.


CLAassistant commented Apr 16, 2026

CLA assistant check
All committers have signed the CLA.


coderabbitai Bot commented Apr 16, 2026

📝 Walkthrough

Walkthrough

Adds ChatGPT OAuth passthrough and routing: extracts Bearer tokens and injects a direct key into request context, rewrites OpenAI-style paths to ChatGPT backend (including WebSocket URLs), enforces Responses request transformations/streaming, exposes provider toggle and UI control, adjusts handlers to accept optional request-body transformers.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Core: ChatGPT OAuth implementation**<br>core/providers/openai/chatgpt_oauth.go, core/providers/openai/chatgpt_oauth_test.go | New module: Bearer extraction, unsigned-JWT claim parsing (chatgpt_account_id), path/URL/WebSocket mapping, OAuth header construction/merge, Responses body transformer, and comprehensive tests. |
| **OpenAI provider integration**<br>core/providers/openai/openai.go, core/providers/openai/websocket.go, core/providers/openai/types.go | Added chatgptOAuth flag, base URL defaulting to ChatGPT backend, centralized effective extra headers, path/websocket rewriting, optional request body transformer for Responses (request/stream), and a JSON unmarshaller for ChatGPT-style model lists. |
| **Request worker token injection**<br>core/bifrost.go | When ChatGPT OAuth is enabled, extracts the Bearer token from request headers and injects a direct key (chatgpt-oauth) into the request context if no direct key is present. |
| **Provider callsite updates**<br>core/providers/azure/azure.go, core/providers/fireworks/fireworks.go, core/providers/openrouter/openrouter.go, core/providers/xai/xai.go | Updated calls to HandleOpenAIResponsesRequest / HandleOpenAIResponsesStreaming to include the new optional bodyTransformer parameter (passed as nil in these callsites). |
| **Schemas / config / UI types**<br>core/schemas/provider.go, ui/lib/types/config.ts, ui/lib/types/schemas.ts | Added ChatGPTOAuth / chatgpt_oauth flags to the backend schema and TypeScript/Zod types. |
| **Frontend form**<br>ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx | Added chatgpt_oauth switch to the OpenAI provider form; wired into defaults, reset, submission payload, and validation trigger. |
| **Transports / routing / WS**<br>transports/bifrost-http/handlers/wsresponses.go, transports/bifrost-http/integrations/openai.go, transports/bifrost-http/websocket/pool.go | Default model parsing now assumes OpenAI for bare names; added Azure SDK detection PreCallback for model-list routes; improved WebSocket dial errors to include upstream response status and body snippet when available. |
| **Docs**<br>docs/cli-agents/codex-cli.mdx | Documented Codex CLI's two auth flows and added ChatGPT OAuth passthrough instructions; clarified API-key vs OAuth behavior. |

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant Bifrost as Bifrost\nRequest Worker
    participant KeyStore as Direct Key\nInjection
    participant Provider as OpenAI\nProvider
    participant Backend as ChatGPT\nBackend

    Client->>Bifrost: HTTP request (Authorization: Bearer <token>)
    Bifrost->>Bifrost: Check provider.ChatGPTOAuth enabled
    alt OAuth enabled and no direct key
        Bifrost->>Bifrost: Extract Bearer token
        Bifrost->>KeyStore: Inject direct key "chatgpt-oauth"
    end
    Bifrost->>Provider: Forward request (context includes key)
    Provider->>Provider: Extract chatgpt_account_id from JWT
    Provider->>Provider: Transform /v1/responses body (instructions, store=false, stream=true, remove max_output_tokens)
    Provider->>Provider: Map /v1/* path to ChatGPT backend path / build WS URL
    Provider->>Backend: Upstream request (transformed body + OAuth headers)
    Backend-->>Provider: Response / stream
    Provider-->>Bifrost: Response relayed
    Bifrost-->>Client: Response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐇 I hop on bytes and sniff the air,

A Bearer token tucked with care,
Paths I map, and headers I send,
Streams now flow from end to end,
Hooray — OAuth carrots everywhere! 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Docstring Coverage | ✅ Passed | Docstring coverage is 83.78%, which is sufficient. The required threshold is 80.00%. |
| Title check | ✅ Passed | The title accurately summarizes the main feature: OAuth passthrough for the OpenAI provider to support Codex CLI subscriptions, which aligns with the primary changes across the codebase. |
| Description check | ✅ Passed | The PR description is comprehensive and well-structured. It includes a clear summary, detailed motivation, changes, design decisions, configuration, security considerations, and extensive testing documentation. All key sections from the template are present and substantive. |



thiscantbeserious commented Apr 16, 2026

I have implemented this for seat-based subscriptions; however, I only tested it with my private account.

This was tested and developed against Anthropic's Claude Code (also OAuth-based) and Codex CLI with OAuth, both through a locally deployed Docker build.

The basis was a scan for all routes across the Codex codebase:

https://github.com/openai/codex

credits / idea taken from:

https://github.com/EvanZhouDev/openai-oauth

It's currently in manual-testing state (DRAFT) until I have verified all 400 errors are gone!

› Hello world how are you!?


■ {"type":"error","status":400,"error":
{"code":"server_error","message":"provider API error (status 400)"}}


› test


• test received. If you want, I can also run actual tests in the repo.


› Hello world how are you!?


• I’m fine. You?


› can you scan my dir ?


• I’ll scan the current workspace to see the top-level structure and key files,
  then I’ll summarize what’s here.

• Ran pwd
  └ /Users/xxx/git/yyy/zzz

• Explored
  └ List ls -la
    List rg --files -g AGENTS.md -g go.mod -g 'README*' -g Makefile -g
'.golangci*'
         -g .gitignore

────────────────────────────────────────────────────────────────────────────────

• I have the top-level layout. I’m

The target is to have this working for our Enterprise setup too ... (we might be contacting you soon about that :) )


thiscantbeserious commented Apr 16, 2026

@coderabbitai full review


coderabbitai Bot commented Apr 16, 2026

✅ Actions performed

Full review triggered.


@thiscantbeserious
Author

(Screenshots: 2026-04-16 at 21:18:08 and 21:18:51)


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/openai/openai.go (1)

119-149: ⚠️ Potential issue | 🟠 Major

Use the current key when deriving ChatGPT OAuth headers for /models.

HandleOpenAIListModelsRequest fans out per key, but this block builds extraHeaders once from keys[0] and reuses it for every request. In ChatGPT OAuth mode that means later keys send their own bearer token with the first key’s chatgpt-account-id, which can produce 4xxs or cross-account header leakage. Build the headers inside the per-key wrapper instead of sharing one headerKey across the whole call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/openai.go` around lines 119 - 149, The code builds a
single headerKey from keys[0] and calls
provider.effectiveExtraHeaders(headerKey) once, causing the same
chatgpt-account-id to be reused for every per-key request; change this so
headers are derived per key inside the per-key wrapper used by
HandleOpenAIListModelsRequest/ListModelsByKey instead of reusing headerKey:
remove the shared headerKey usage for the multi-key path and ensure
provider.effectiveExtraHeaders(...) is called with the current key inside the
per-key loop/wrapper (the same pattern already used in the keyless branch or
within ListModelsByKey), so each request uses the matching key's
chatgpt-account-id and bearer token.
🧹 Nitpick comments (2)
ui/lib/types/schemas.ts (1)

702-707: Consider defaulting chatgpt_oauth for backward compatibility with existing stored configs.

chatgpt_oauth: z.boolean() makes the field strictly required at parse time. Since this is a newly introduced toggle and the backend default is false (no DB migration per PR objectives), any persisted OpenAI config form state saved before this change won't include the key and will fail validation here. The pre-existing disable_store has the same shape, but that field has existed long enough for stored state to contain it.

Recommend making it tolerant of missing input:

♻️ Proposed change
 export const openaiConfigFormSchema = z.object({
 	disable_store: z.boolean(),
-	chatgpt_oauth: z.boolean(),
+	chatgpt_oauth: z.boolean().default(false),
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/lib/types/schemas.ts` around lines 702 - 707, The schema currently
requires chatgpt_oauth which breaks parsing of older persisted configs that lack
this new key; update openaiConfigFormSchema so chatgpt_oauth is tolerant of
missing values by making it optional and giving it a default of false (e.g.,
change the chatgpt_oauth entry in openaiConfigFormSchema to an optional boolean
with default false) so parsing/backwards compatibility is preserved; keep the
exported OpenAIConfigFormSchema type inference intact.
core/providers/openai/chatgpt_oauth_test.go (1)

341-383: Add a /v1/models query-preservation test case.

Current cases validate plain /v1/models, but not /v1/models?…. Adding one case here helps lock expected behavior if callers include query params.

Suggested test addition
 	// Models — appends required client_version query param
 	{"models listing", "/v1/models", "/models?client_version=" + ChatGPTOAuthClientVersion},
+	{"models listing with existing query", "/v1/models?limit=20", "/models?limit=20&client_version=" + ChatGPTOAuthClientVersion},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth_test.go` around lines 341 - 383, Add a
test case to TestChatGPTOAuthPath to verify query-preservation for /v1/models:
update the tests slice in TestChatGPTOAuthPath to include an entry like {"models
with query", "/v1/models?foo=bar", "/models?foo=bar&client_version=" +
ChatGPTOAuthClientVersion} so chatGPTOAuthPath is exercised when the incoming
/v1/models already has query params and the client_version param is appended
correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 15-21: The helper buildTestJWT currently ignores json.Marshal
errors; update buildTestJWT to check the error returned by json.Marshal(payload)
and fail fast (e.g., panic with a clear message including the error) instead of
proceeding, so malformed payloads produce an immediate, descriptive failure;
modify the code in buildTestJWT to capture (payloadBytes, err :=
json.Marshal(payload)), handle err (panic or return an explicit error/empty
token) and only continue to encode payloadBytes when err is nil.

In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 121-124: The code uses encoding/json on hot request paths (in
extractChatGPTAccountID and transformChatGPTResponsesBody) which is against
project convention; replace json.Unmarshal/json.Marshal calls with
github.com/bytedance/sonic's sonic.Unmarshal/sonic.Marshal in both functions,
update the import to include "github.com/bytedance/sonic" and remove the unused
encoding/json import, and ensure error handling stays the same (wrap/return
errors as before) since no custom marshaling is required.
- Around line 237-249: When enabled is true, chatGPTOAuthApplyRequest must not
return a bodyTransformer if extracting the ChatGPT account ID fails (to avoid
sending a mutated body without the required chatgpt-account-id header); change
chatGPTOAuthApplyRequest to detect extraction failure from
chatGPTOAuthMergeHeaders (or call extractChatGPTAccountID directly) and return a
sentinel (nil bodyTransformer or an explicit error indicator) instead of
transformChatGPTResponsesBody so the caller can surface a structured "invalid
ChatGPT OAuth token" error; reference chatGPTOAuthApplyRequest,
chatGPTOAuthMergeHeaders, extractChatGPTAccountID and
transformChatGPTResponsesBody when implementing this check and ensure any
logging stays but the function signals failure to callers.

In `@core/providers/openai/openai.go`:
- Around line 5183-5184: The code is manually concatenating
provider.networkConfig.BaseURL + "/v1/..." which bypasses chatgptOAuth path
rewriting; replace those manual concatenations with calls to
buildRequestURL(...) so the URL goes through the ChatGPT OAuth path
normalization (e.g., change the req.SetRequestURI(provider.networkConfig.BaseURL
+ "/v1/files/"+request.FileID) usage to req.SetRequestURI(buildRequestURL(ctx,
provider, "/v1/files/"+request.FileID, key))), keeping
providerUtils.SetExtraHeaders(...) as-is; apply the same change to the other
occurrences you noted (places around the symbols handling files, batch, and
passthrough routes) so chatGPTOAuthPath(...) sees the normalized path.

In `@transports/bifrost-http/handlers/wsresponses.go`:
- Around line 414-418: Update the errModelFormat error message to reflect that a
model name is required (provider prefix is optional) and adjust its declaration
referenced by handleResponseCreate and convertEventToRequest; specifically
change the errModelFormat string (used in wsresponses.go where
schemas.ParseModelString(event.Model, schemas.OpenAI) is called) from "model
should be in provider/model format" to something like "model is required" so
that a missing event.Model returns the correct, non-misleading error message.

---

Outside diff comments:
In `@core/providers/openai/openai.go`:
- Around line 119-149: The code builds a single headerKey from keys[0] and calls
provider.effectiveExtraHeaders(headerKey) once, causing the same
chatgpt-account-id to be reused for every per-key request; change this so
headers are derived per key inside the per-key wrapper used by
HandleOpenAIListModelsRequest/ListModelsByKey instead of reusing headerKey:
remove the shared headerKey usage for the multi-key path and ensure
provider.effectiveExtraHeaders(...) is called with the current key inside the
per-key loop/wrapper (the same pattern already used in the keyless branch or
within ListModelsByKey), so each request uses the matching key's
chatgpt-account-id and bearer token.

---

Nitpick comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 341-383: Add a test case to TestChatGPTOAuthPath to verify
query-preservation for /v1/models: update the tests slice in
TestChatGPTOAuthPath to include an entry like {"models with query",
"/v1/models?foo=bar", "/models?foo=bar&client_version=" +
ChatGPTOAuthClientVersion} so chatGPTOAuthPath is exercised when the incoming
/v1/models already has query params and the client_version param is appended
correctly.

In `@ui/lib/types/schemas.ts`:
- Around line 702-707: The schema currently requires chatgpt_oauth which breaks
parsing of older persisted configs that lack this new key; update
openaiConfigFormSchema so chatgpt_oauth is tolerant of missing values by making
it optional and giving it a default of false (e.g., change the chatgpt_oauth
entry in openaiConfigFormSchema to an optional boolean with default false) so
parsing/backwards compatibility is preserved; keep the exported
OpenAIConfigFormSchema type inference intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: fe98f130-54dd-4961-b421-6b4b6eaf13bd

📥 Commits

Reviewing files that changed from the base of the PR and between 83599a2 and a903ebb.

📒 Files selected for processing (17)
  • core/bifrost.go
  • core/providers/azure/azure.go
  • core/providers/fireworks/fireworks.go
  • core/providers/openai/chatgpt_oauth.go
  • core/providers/openai/chatgpt_oauth_test.go
  • core/providers/openai/openai.go
  • core/providers/openai/websocket.go
  • core/providers/openrouter/openrouter.go
  • core/providers/xai/xai.go
  • core/schemas/provider.go
  • docs/cli-agents/codex-cli.mdx
  • transports/bifrost-http/handlers/wsresponses.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/websocket/pool.go
  • ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx
  • ui/lib/types/config.ts
  • ui/lib/types/schemas.ts


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
core/providers/openai/openai.go (1)

119-148: ⚠️ Potential issue | 🟠 Major

Build ChatGPT OAuth headers from the current key, not keys[0].

The merged headers are computed once from headerKey and then reused across all HandleOpenAIListModelsRequest calls. For a provider configured with multiple OAuth tokens, the second and later requests can send bearer token B with chatgpt-account-id extracted from token A, which the ChatGPT backend will reject. This needs to be derived per key inside the list-models fanout path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/openai.go` around lines 119 - 148, The current code
builds merged ChatGPT OAuth headers once using headerKey (the first key) and
reuses them for all fanout requests, which can pair the wrong chatgpt-account-id
with a different bearer token; instead, remove the global headerKey-based header
reuse and derive extra headers per key inside the list-models fanout path:
update the call sites for HandleOpenAIListModelsRequest and ListModelsByKey to
stop passing provider.effectiveExtraHeaders(headerKey) (or
provider.effectiveExtraHeaders) precomputed from keys[0] and ensure the code
that iterates over keys computes provider.effectiveExtraHeaders(k) for each
specific schemas.Key k (so chatgpt-account-id is extracted from the same key
used for the bearer), leaving the keyless branch
(provider.customProviderConfig.IsKeyLess → ListModelsByKey with schemas.Key{})
unchanged.
♻️ Duplicate comments (3)
core/providers/openai/chatgpt_oauth_test.go (1)

15-21: ⚠️ Potential issue | 🟡 Minor

Handle json.Marshal failure in test JWT helper.

Line 17 ignores the marshal error, which can hide malformed fixture bugs and make failures harder to diagnose.

Proposed fix
 func buildTestJWT(payload map[string]interface{}) string {
 	header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"RS256","typ":"JWT"}`))
-	payloadBytes, _ := json.Marshal(payload)
+	payloadBytes, err := json.Marshal(payload)
+	if err != nil {
+		panic(err)
+	}
 	payloadB64 := base64.RawURLEncoding.EncodeToString(payloadBytes)
 	sig := base64.RawURLEncoding.EncodeToString([]byte("fake-signature"))
 	return header + "." + payloadB64 + "." + sig
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth_test.go` around lines 15 - 21, The test
helper buildTestJWT currently ignores the json.Marshal error for payload; update
buildTestJWT to check the error returned by json.Marshal(payload) and handle it
instead of discarding it—e.g., if err != nil then panic or fail the test with a
descriptive message including the error (so malformed fixtures surface clearly).
Ensure the error check occurs right after the json.Marshal call that assigns
payloadBytes and err, and include the error in the panic/failure message so
buildTestJWT surfaces marshal failures.
core/providers/openai/chatgpt_oauth.go (2)

244-249: ⚠️ Potential issue | 🟠 Major

chatGPTOAuthApplyRequest still returns the body transformer when extraction fails.

Per the function comment, when JWT extraction fails the headers pass through unchanged but transformChatGPTResponsesBody is still returned — so the upstream request goes out with a mutated body (store=false, stream=true, max_output_tokens dropped, instructions="") and no chatgpt-account-id. That guarantees an upstream 400 and is the primary failure mode called out in the PR description.

Recommend returning (existingExtraHeaders, nil) (or surfacing an error) when chatGPTOAuthMergeHeaders did not actually merge — either by having chatGPTOAuthMergeHeaders signal merge success, or by calling extractChatGPTAccountID directly here and short-circuiting.
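One way the suggested short-circuit could look (names mirror the review; the JWT extraction is faked with a trivial stub, so this is a sketch of the control flow rather than the PR's implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// extractAccountID is a stand-in for extractChatGPTAccountID: it fails on an
// empty token instead of decoding a real JWT.
func extractAccountID(token string) (string, error) {
	if token == "" {
		return "", errors.New("not a valid JWT")
	}
	return "acct-123", nil
}

// transformBody is a stand-in for transformChatGPTResponsesBody.
func transformBody(body []byte) []byte {
	return append(body, " /*transformed*/"...)
}

// applyRequest sketches the short-circuit for chatGPTOAuthApplyRequest: when
// account extraction fails, return the existing headers unchanged and a nil
// body transformer, so neither headers nor body are mutated.
func applyRequest(token string, existing map[string]string) (map[string]string, func([]byte) []byte) {
	accountID, err := extractAccountID(token)
	if err != nil {
		return existing, nil // pass the request through untouched rather than mutating the body
	}
	merged := map[string]string{"chatgpt-account-id": accountID}
	for k, v := range existing {
		merged[k] = v
	}
	return merged, transformBody
}

func main() {
	h, tr := applyRequest("", map[string]string{"X-Test": "1"})
	fmt.Println(h, tr == nil) // extraction failed: original headers, nil transformer
}
```

Callers then only apply the transformer when it is non-nil, which keeps the "headers and body mutate together or not at all" invariant.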

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 244 - 249,
chatGPTOAuthApplyRequest currently always returns transformChatGPTResponsesBody
even when JWT/account extraction failed, causing mutated bodies to be sent
without merged headers; modify chatGPTOAuthApplyRequest to detect whether
chatGPTOAuthMergeHeaders actually merged (either by updating
chatGPTOAuthMergeHeaders to return a success flag or by calling
extractChatGPTAccountID here) and if extraction/merge failed return
(existingExtraHeaders, nil) (or surface an error) instead of returning
transformChatGPTResponsesBody; reference chatGPTOAuthApplyRequest,
chatGPTOAuthMergeHeaders, extractChatGPTAccountID and
transformChatGPTResponsesBody when implementing the short-circuit.

79-100: ⚠️ Potential issue | 🟡 Minor

WebSocket headers drop OpenAI-Beta on JWT extraction failure — same pattern as the Responses path.

When extractChatGPTAccountID fails, the function early-returns with only Authorization + existing extras. The upstream WS connect will then go through with no chatgpt-account-id and no OpenAI-Beta: responses=experimental, resulting in an opaque upstream 400/handshake failure visible only as a Warn log — consistent with the unresolved 400s called out in the PR description.

Prefer failing fast by returning an error (or a sentinel nil) so the WS dial path surfaces a structured "invalid ChatGPT OAuth token" error rather than relying on the upstream to reject a malformed request.
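A sketch of the fail-fast variant (the `(map[string]string, error)` signature is the review's suggestion, and the extraction helper is stubbed out; header names match those described in the PR):

```go
package main

import (
	"errors"
	"fmt"
)

// extractAccountID stands in for extractChatGPTAccountID (no real JWT decoding here).
func extractAccountID(token string) (string, error) {
	if token == "" {
		return "", errors.New("not a valid JWT")
	}
	return "acct-123", nil
}

// webSocketHeaders returns a structured error on extraction failure instead of
// partial headers, so the WS dial path can abort before connecting upstream.
func webSocketHeaders(token string, extra map[string]string) (map[string]string, error) {
	accountID, err := extractAccountID(token)
	if err != nil {
		return nil, fmt.Errorf("invalid ChatGPT OAuth token: %w", err)
	}
	h := map[string]string{
		"Authorization":      "Bearer " + token,
		"chatgpt-account-id": accountID,
		"OpenAI-Beta":        "responses=experimental",
	}
	for k, v := range extra {
		h[k] = v
	}
	return h, nil
}

func main() {
	if _, err := webSocketHeaders("", nil); err != nil {
		fmt.Println("dial aborted:", err)
	}
}
```

The caller checks the error before dialing, turning the opaque upstream 400 into a local, diagnosable failure.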

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 79 - 100, Update
chatGPTOAuthWebSocketHeaders so it fails fast instead of returning partial
headers: when extractChatGPTAccountID(key.Value.GetValue()) returns an error, do
not return the headers map — return a sentinel nil (or change the function to
return (map[string]string, error) and return an explicit error like "invalid
ChatGPT OAuth token") so callers (the WS dial/connect path) can detect the
failure and surface a structured error; ensure callers that invoke
chatGPTOAuthWebSocketHeaders (and any logic expecting chatGPTOAuthExtraHeaders)
are adjusted to handle a nil/err return and avoid attempting the WebSocket
connect with incomplete headers.
🧹 Nitpick comments (3)
transports/bifrost-http/websocket/pool.go (1)

177-186: Use io.ReadAll(io.LimitReader(...)) for reliable body snippet capture.

A single resp.Body.Read(buf) is not guaranteed to fill the buffer even if more data is available — for chunked or buffered responses it commonly returns far fewer than 512 bytes on the first call, so the captured bodySnippet may be truncated well before the real upstream error message. Since this snippet is surfaced through logger.Warn at wsresponses.go:258, you want the first 512 bytes reliably, not whatever the first Read happens to hand back.

Also, the raw body can contain newlines / HTML / control chars that will be inlined into the error (and log line). Consider trimming and/or quoting it for log hygiene.

♻️ Proposed refactor
-		detail := ""
-		if resp != nil {
-			var bodySnippet string
-			if resp.Body != nil {
-				buf := make([]byte, 512)
-				n, _ := resp.Body.Read(buf)
-				_ = resp.Body.Close()
-				bodySnippet = string(buf[:n])
-			}
-			detail = fmt.Sprintf(" (upstream status %d: %s)", resp.StatusCode, bodySnippet)
-		}
+		detail := ""
+		if resp != nil {
+			var bodySnippet string
+			if resp.Body != nil {
+				body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
+				_ = resp.Body.Close()
+				bodySnippet = strings.TrimSpace(string(body))
+			}
+			detail = fmt.Sprintf(" (upstream status %d: %q)", resp.StatusCode, bodySnippet)
+		}

Remember to add io and strings to the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/websocket/pool.go` around lines 177 - 186, Replace
the single Read call that builds bodySnippet with a reliable read of up to 512
bytes using io.ReadAll(io.LimitReader(resp.Body, 512)), ensure resp.Body is
closed, then sanitize the result (trim whitespace and/or quote non-printable
chars using strings.TrimSpace or a quoting helper) before inserting into detail;
update imports to include io and strings and keep using resp.StatusCode and the
existing variables (resp.Body, bodySnippet, detail, key.Endpoint, err) so the
surrounding error formatting remains unchanged.
transports/bifrost-http/handlers/wsresponses.go (1)

164-181: Minor: new comment is interleaved with the unrelated Store‑override comment block.

Lines 164–166 document the store‑override logic at lines 176–181, and lines 167–169 document the ParseModelString call at line 170. Having both blocks glued together before ParseModelString makes it read as a single confusing preamble. Consider moving the Store‑override comment down adjacent to its code:

✏️ Proposed tweak
-	// Store override: default to store=true (Codex sends false by default but expects true).
-	// If DisableStore is set in provider config, force store=false.
-	// If client explicitly sets store, respect that value unless DisableStore overrides it.
-	// Default to OpenAI provider since /v1/responses is an OpenAI-format endpoint.
-	// Allows callers (e.g. Codex) to send the bare model name like "gpt-5.3-codex"
-	// without a provider prefix.
+	// Default to OpenAI provider since /v1/responses is an OpenAI-format endpoint.
+	// Allows callers (e.g. Codex) to send the bare model name like "gpt-5.3-codex"
+	// without a provider prefix.
 	provider, modelName := schemas.ParseModelString(event.Model, schemas.OpenAI)
 	if provider == "" || modelName == "" {
 		writeWSError(session, 400, "invalid_request_error", "failed to parse model string")
 		return
 	}
 
+	// Store override: default to store=true (Codex sends false by default but expects true).
+	// If DisableStore is set in provider config, force store=false.
 	if providerCfg, cfgErr := h.config.GetProviderConfigRaw(provider); cfgErr == nil &&

Separately (not part of this diff): the existing "respect client store unless DisableStore overrides" line no longer matches the branch logic below, which unconditionally rewrites event.Store. Worth revisiting when you clean up the comments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@transports/bifrost-http/handlers/wsresponses.go` around lines 164 - 181, Move
the Store-override comment block so it sits immediately above the branch that
sets event.Store (the code that calls h.config.GetProviderConfigRaw and checks
providerCfg.OpenAIConfig.DisableStore), and leave the ParseModelString comment
(about provider/model defaulting to OpenAI and bare model names) directly above
the provider, modelName := schemas.ParseModelString(...) call; also update the
Store comment text to accurately reflect the implementation (it unconditionally
sets event.Store to true or false based on DisableStore rather than “respecting
client store unless DisableStore overrides”), referencing
schemas.ParseModelString, providerCfg.OpenAIConfig.DisableStore,
h.config.GetProviderConfigRaw, and event.Store so reviewers can find the
relevant code.
core/providers/openai/chatgpt_oauth.go (1)

69-73: strings.Replace scheme swap is brittle for non-canonical base URLs.

If baseURL is missing the scheme, already uses ws(s)://, or has unusual casing (HTTPS://), the conversion silently degrades: a bare host stays bare, already-ws:// URLs pass through (fine), and uppercase schemes are not rewritten (breaks). Consider parsing with net/url and swapping Scheme explicitly, or at least documenting the canonical-input assumption.
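A sketch of the net/url-based swap (the real function in the PR is chatGPTOAuthWebSocketURL; erroring on a missing scheme is one possible policy, chosen here only to keep the example explicit):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// toWebSocketURL converts an http(s) base URL to its ws(s) equivalent using
// explicit scheme assignment instead of a string replace, so uppercase and
// already-websocket schemes are handled correctly.
func toWebSocketURL(baseURL string) (string, error) {
	u, err := url.Parse(baseURL)
	if err != nil {
		return "", err
	}
	switch strings.ToLower(u.Scheme) {
	case "https":
		u.Scheme = "wss"
	case "http":
		u.Scheme = "ws"
	case "ws", "wss":
		// already a WebSocket URL; nothing to do
	case "":
		return "", fmt.Errorf("base URL %q has no scheme", baseURL)
	default:
		return "", fmt.Errorf("unsupported scheme %q", u.Scheme)
	}
	return u.String(), nil
}

func main() {
	out, _ := toWebSocketURL("HTTPS://chatgpt.com/backend-api")
	fmt.Println(out) // uppercase HTTPS is still rewritten, unlike strings.Replace
}
```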

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 69 - 73, The current
chatGPTOAuthWebSocketURL function uses strings.Replace which is brittle for
non-canonical baseURL; change it to parse baseURL with net/url (use url.Parse),
then set u.Scheme to "wss" if original scheme is "https" (case-insensitive) or
to "ws" if "http", and handle missing or already-websocket schemes by defaulting
to "wss" for secure or returning an error/normalizing as appropriate; finally
rebuild the URL (u.String()) and append chatGPTOAuthPath(standardPath). Update
chatGPTOAuthWebSocketURL to use url.Parse/URL.Scheme assignment instead of
strings.Replace to ensure correct, case-insensitive scheme handling and to cover
absent schemes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 215-224: The test currently expects the incoming "store" value to
be preserved, but transformChatGPTResponsesBody must force store=false; update
the test in the t.Run block to assert that result["store"] is false (not true)
after calling transformChatGPTResponsesBody using the existing input
(`{"model":"gpt-5.4","store":true}`) and existing variables so the test verifies
the forced false behavior of transformChatGPTResponsesBody.

In `@core/providers/openai/openai.go`:
- Around line 1417-1432: HandleOpenAIResponsesRequest currently forwards large
requests through handleOpenAILargePayloadPassthrough before applying the
bodyTransformer, causing ChatGPT OAuth mutations to be skipped; update
HandleOpenAIResponsesRequest (or the call site in openai.go where you pass
bodyTransformer into HandleOpenAIResponsesRequest) so that the bodyTransformer
is applied to the request payload prior to any large-payload passthrough
decision (i.e., ensure bodyTransformer is invoked and its result used when
determining whether to call handleOpenAILargePayloadPassthrough), and propagate
the transformed body into handleOpenAILargePayloadPassthrough and subsequent
code paths so large /v1/responses requests receive the same ChatGPT OAuth
mutations (store=false, stream=true, etc.) as small requests.
- Around line 91-97: The merge helper chatGPTOAuthMergeHeaders currently copies
maps verbatim causing duplicate header keys with different casing; update
chatGPTOAuthMergeHeaders (and callers like effectiveExtraHeaders) to normalize
header names case-insensitively (e.g., use strings.ToLower) when merging
networkConfig.ExtraHeaders with the ChatGPT OAuth headers so that "openai-beta"
and "chatgpt-account-id" are deduplicated and deterministically overridden by
the OAuth values; ensure the resulting map uses a consistent canonical casing or
lower-case keys before returning so downstream SetExtraHeaders sees only one
entry per logical header.

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4b74ee08-817c-473f-9b9e-fb58e8eb3371

📥 Commits

Reviewing files that changed from the base of the PR and between 83599a2 and a903ebb.

📒 Files selected for processing (17)
  • core/bifrost.go
  • core/providers/azure/azure.go
  • core/providers/fireworks/fireworks.go
  • core/providers/openai/chatgpt_oauth.go
  • core/providers/openai/chatgpt_oauth_test.go
  • core/providers/openai/openai.go
  • core/providers/openai/websocket.go
  • core/providers/openrouter/openrouter.go
  • core/providers/xai/xai.go
  • core/schemas/provider.go
  • docs/cli-agents/codex-cli.mdx
  • transports/bifrost-http/handlers/wsresponses.go
  • transports/bifrost-http/integrations/openai.go
  • transports/bifrost-http/websocket/pool.go
  • ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx
  • ui/lib/types/config.ts
  • ui/lib/types/schemas.ts

@thiscantbeserious thiscantbeserious marked this pull request as ready for review April 17, 2026 01:38

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
core/providers/openai/chatgpt_oauth.go (1)

61-83: Nit: avoid allocating a lowercase copy on every request.

strings.ToLower(authHeader) allocates a new string just to prefix-check. Use strings.EqualFold on the first 7 bytes, and drop the redundant first-try map lookup (the loop below already handles both cases).

♻️ Proposed tweak
-	authHeader, ok := headers["authorization"]
-	if !ok {
-		// Try case-insensitive fallback since the caller may not lowercase.
-		for k, v := range headers {
-			if strings.EqualFold(k, "authorization") {
-				authHeader = v
-				ok = true
-				break
-			}
-		}
-	}
-	if !ok || authHeader == "" {
-		return ""
-	}
-	if !strings.HasPrefix(strings.ToLower(authHeader), "bearer ") {
-		return ""
-	}
-	return strings.TrimSpace(authHeader[7:])
+	var authHeader string
+	for k, v := range headers {
+		if strings.EqualFold(k, "authorization") {
+			authHeader = v
+			break
+		}
+	}
+	if len(authHeader) < 7 || !strings.EqualFold(authHeader[:7], "bearer ") {
+		return ""
+	}
+	return strings.TrimSpace(authHeader[7:])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 61 - 83, The
ExtractChatGPTOAuthBearerToken function currently allocates by calling
strings.ToLower(authHeader) for the prefix check and also does a redundant
direct map lookup; change it to perform only a single case-insensitive header
search (loop over headers and use strings.EqualFold to find the "authorization"
key) and then avoid allocating by checking the header length and using
strings.EqualFold on the first 7 bytes to compare the "Bearer " prefix; finally
return strings.TrimSpace(authHeader[7:]) (ensure you guard for header length >=
7 and keep the nil headers check).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 88-100: The path mapping in chatGPTOAuthPath currently only
appends ChatGPTOAuthClientVersionFallback when mapped == "/models", which misses
cases like "/models?foo=bar"; update chatGPTOAuthPath to parse the mapped path's
query (use net/url.Parse or url.ParseRequestURI on the mapped value), check the
query parameters for the presence of "client_version", and if absent add
client_version=ChatGPTOAuthClientVersionFallback to the query while preserving
any existing query params and the path; ensure the function returns the
reconstructed URL (path + updated query) instead of relying on exact string
equality.
- Around line 114-135: The chatGPTOAuthWebSocketHeaders function can produce
duplicate-case header keys because it only case-folds "Authorization" and then
blindly copies existingExtraHeaders and overlays chatGPTOAuthExtraHeaders;
update chatGPTOAuthWebSocketHeaders to reuse the existing
mergeHeadersCaseInsensitive helper (the same approach used by
chatGPTOAuthApplyRequest/chatGPTOAuthMergeHeaders): build the base map with
"Authorization" using key.Value.GetValue(), then call
mergeHeadersCaseInsensitive(base, existingExtraHeaders) and finally merge the
map returned by chatGPTOAuthExtraHeaders(accountID) through
mergeHeadersCaseInsensitive so all merges are case-insensitive and do not leave
duplicate-cased keys (use extractChatGPTAccountID to get accountID as before).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e60c384d-acd9-41ae-ab90-17ef7b18a547

📥 Commits

Reviewing files that changed from the base of the PR and between a903ebb and 37c7077.

📒 Files selected for processing (5)
  • core/bifrost.go
  • core/providers/openai/chatgpt_oauth.go
  • core/providers/openai/chatgpt_oauth_test.go
  • core/providers/openai/openai.go
  • transports/bifrost-http/handlers/wsresponses.go
✅ Files skipped from review due to trivial changes (1)
  • core/providers/openai/chatgpt_oauth_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • transports/bifrost-http/handlers/wsresponses.go
  • core/bifrost.go
  • core/providers/openai/openai.go

@greptile-apps

greptile-apps Bot commented Apr 17, 2026

Confidence Score: 4/5

Safe to merge with two minor URL-construction edge cases to address.

All remaining findings are P2: a malformed URL in Passthrough/PassthroughStream (only triggers on the very unlikely path of chatgptOAuth + passthrough route for /models + non-empty RawQuery) and a dead production function chatGPTOAuthPrepare. The main ChatGPT OAuth code path, the three side-effect bug fixes, and all 50+ unit tests are clean.

core/providers/openai/openai.go — Passthrough and PassthroughStream URL construction when chatgptOAuth is enabled and req.RawQuery is non-empty

Important Files Changed

| Filename | Overview |
| --- | --- |
| core/providers/openai/chatgpt_oauth.go | New module encapsulating all ChatGPT OAuth logic: JWT decoding, path/URL/header helpers, body transformer. Well-structured with header-injection guards. chatGPTOAuthPrepare is defined and tested but not called from production code. |
| core/providers/openai/openai.go | ChatGPT OAuth integrated via effectiveExtraHeaders and buildRequestURL/buildFullURL. Non-streaming Responses correctly returns an error; streaming path applies body transformer. Passthrough/PassthroughStream can produce a malformed URL for /models when RawQuery is non-empty with chatgptOAuth enabled. |
| core/bifrost.go | Injects Bearer token as a direct key when chatgpt_oauth is enabled, bypassing the allow_direct_keys gate. Logic is clean and idempotent (skips if direct key is already set). |
| core/providers/openai/types.go | Adds custom UnmarshalJSON on OpenAIListModelsResponse to handle the ChatGPT backend's {models:[{slug}]} shape alongside the standard {data:[...]} shape. Correct precedence: data wins if non-empty. |
| transports/bifrost-http/handlers/wsresponses.go | Correctly defaults ParseModelString to schemas.OpenAI so bare model names (e.g. gpt-5.3-codex) work without a provider prefix. Error message updated accordingly. |
| transports/bifrost-http/integrations/openai.go | Scopes /openai/v1/models to OpenAI (or Azure for Azure SDK callers) to prevent fan-out to every configured provider. Uses PreCallback to bridge fasthttp context for the Azure user-agent check, consistent with existing patterns. |
| transports/bifrost-http/websocket/pool.go | Improves WebSocket dial error messages to include upstream HTTP status and body snippet. Correctly closes response body after reading the snippet. |
| core/schemas/provider.go | Adds ChatGPTOAuth bool to OpenAIConfig. No migration needed (stored as JSON blob). Alignment-only changes to other constants. |
| core/providers/openai/websocket.go | Delegates WebSocket URL and headers to ChatGPT OAuth helpers when enabled. Clean and minimal. |
| core/providers/openai/chatgpt_oauth_test.go | Comprehensive unit tests covering JWT decoding, path mapping, header merging, body transformation, and header-injection defense. All edge cases well covered. |

Reviews (2): Last reviewed commit: "test: 100% coverage on OpenAIListModelsR..."


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (5)
core/providers/openai/chatgpt_oauth.go (3)

68-90: Bearer extraction looks correct; the case-insensitive fallback is well-scoped.

One micro-nit: strings.ToLower(authHeader) is applied to the full header value just for a 7-char prefix check. For long tokens this is unnecessary allocation; len(authHeader) >= 7 && strings.EqualFold(authHeader[:7], "bearer ") avoids it. Low impact since this runs once per request — take it or leave it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 68 - 90, Replace the
allocation-heavy prefix check in ExtractChatGPTOAuthBearerToken: instead of
strings.HasPrefix(strings.ToLower(authHeader), "bearer "), guard with
len(authHeader) >= 7 and use strings.EqualFold(authHeader[:7], "bearer ") to do
a case-insensitive prefix check without creating a lowercased copy; keep the
existing empty checks and return strings.TrimSpace(authHeader[7:]) as before.

171-214: JWT claim extraction is sound; consider a slightly stricter sanitization.

The core logic is correct: 3-segment check, base64.RawURLEncoding (unpadded JWT standard), sonic unmarshal, typed claim assertions, and explicit empty-string rejection. Not verifying the signature is explicitly called out as an intentional passthrough choice.

Minor hardening suggestion: strings.ContainsAny(accountID, "\r\n") catches the main HTTP header-injection vectors, but not other control characters (NUL, tabs, DEL, etc.) that some upstreams parse loosely. Since this value is ultimately emitted as an HTTP header, rejecting any byte outside the visible ASCII range (plus space) would be a more defense-in-depth choice.

🛡️ Optional hardening
-	// Sanitize: reject account IDs containing newlines or carriage returns
-	// to prevent HTTP header injection attacks via crafted JWTs.
-	if strings.ContainsAny(accountID, "\r\n") {
-		return "", fmt.Errorf("chatgpt_account_id contains invalid characters")
-	}
+	// Sanitize: reject account IDs containing any control characters to prevent
+	// HTTP header injection via crafted JWTs. HTTP header values should contain
+	// only visible ASCII (0x20-0x7E) plus horizontal tab.
+	for _, b := range []byte(accountID) {
+		if b < 0x20 || b == 0x7F {
+			return "", fmt.Errorf("chatgpt_account_id contains invalid characters")
+		}
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 171 - 214, The JWT
extraction works but the sanitization in extractChatGPTAccountID is too narrow;
replace the strings.ContainsAny(accountID, "\r\n") check with a stricter
validation that iterates the accountID runes and rejects any character outside
the visible ASCII range (allow only bytes 0x20 (space) through 0x7E '~'),
returning an error like "chatgpt_account_id contains invalid characters"; update
the check near the end of extractChatGPTAccountID to perform this
rune/byte-range validation on accountID instead of the current newline-only
check.

253-303: Remove chatGPTOAuthPrepare — it is dead code used only in tests.

chatGPTOAuthPrepare is never called from production code (openai.go). The comment claiming it is "the single entry point for all ChatGPT OAuth header/path logic" is inaccurate.

Production uses only chatGPTOAuthMergeHeaders (for headers-only routes) and chatGPTOAuthApplyRequest (for requests needing body transformation). Path mapping is handled separately by buildRequestURL/buildFullURL, which internally apply chatGPTOAuthPath. Remove chatGPTOAuthPrepare and its tests to eliminate dead code; the two active helpers have well-defined, non-overlapping responsibilities.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth.go` around lines 253 - 303, The
chatGPTOAuthPrepare function is dead code (only used in tests) and its comment
is inaccurate; remove the chatGPTOAuthPrepare function and any unit tests that
reference it, and update or remove the misleading comment that claims it is "the
single entry point for all ChatGPT OAuth header/path logic." Ensure existing
production helpers chatGPTOAuthMergeHeaders and chatGPTOAuthApplyRequest remain
unchanged and that any tests now exercise those two functions and
buildRequestURL/buildFullURL (which call chatGPTOAuthPath) instead of the
removed helper.
core/providers/openai/chatgpt_oauth_test.go (2)

35-37: captureLogger.Warn treats msg as a printf format string, which can break if production callers use structured key/value args.

schemas.Logger.Warn(msg string, args ...any) is commonly used in two ways: printf-style (logger.Warn("failed: %v", err)) or structured (logger.Warn("failed to extract account ID", "err", err)). fmt.Sprintf(msg, args...) works for the first, but for the second it produces ...%!(EXTRA ...) noise and makes the recorded string brittle to verb changes in the production code. Today the tests only Contains-check the literal prefix so they pass, but a future caller that adds a % in msg or relies on structured logging will silently garble the recorded l.warns entries.

🧹 Suggested simpler capture
 func (l *captureLogger) Warn(msg string, args ...any) {
-	l.warns = append(l.warns, fmt.Sprintf(msg, args...))
+	// Record the raw message; callers assert on substrings of msg only.
+	// Avoid fmt.Sprintf so structured key/value args don't produce %!(EXTRA ...) noise.
+	l.warns = append(l.warns, msg)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth_test.go` around lines 35 - 37,
captureLogger.Warn currently treats msg as a printf format and calls
fmt.Sprintf(msg, args...), which breaks structured calls; change
captureLogger.Warn to avoid treating msg as a format string: if len(args) == 0
append msg to l.warns, otherwise append msg + " " + fmt.Sprint(args...) (or join
key/value pairs via fmt.Sprint) so structured logger.Warn("msg","k",v) records a
stable, readable string; update the Warn method (and references to l.warns) to
use this non‑formatting behavior.

371-372: Redundant table entry — both cases exercise the exact same input and expected output.

"responses POST/SSE" and "responses WebSocket" both map /v1/responses/responses. The inline comment notes they differ only by "upgrade", but chatGPTOAuthPath doesn't take a scheme/upgrade argument, so the second row adds no coverage. Consider removing it (or actually exercising the WS variant via chatGPTOAuthWebSocketURL, which you already cover separately in TestChatGPTOAuthWebSocketURL).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@core/providers/openai/chatgpt_oauth_test.go` around lines 371 - 372, The
table contains a redundant entry ("responses WebSocket") that maps
"/v1/responses" → "/responses" but chatGPTOAuthPath does not accept an
upgrade/scheme so it adds no coverage; remove the second row {"responses
WebSocket", "/v1/responses", "/responses"} from the table in
chatgpt_oauth_test.go, or alternatively replace that row with a case that
actually exercises the WebSocket variant by calling chatGPTOAuthWebSocketURL
(which you already test in TestChatGPTOAuthWebSocketURL) so the test either
avoids duplication or explicitly validates the WS URL generation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 35-37: captureLogger.Warn currently treats msg as a printf format
and calls fmt.Sprintf(msg, args...), which breaks structured calls; change
captureLogger.Warn to avoid treating msg as a format string: if len(args) == 0
append msg to l.warns, otherwise append msg + " " + fmt.Sprint(args...) (or join
key/value pairs via fmt.Sprint) so structured logger.Warn("msg","k",v) records a
stable, readable string; update the Warn method (and references to l.warns) to
use this non‑formatting behavior.
- Around line 371-372: The table contains a redundant entry ("responses
WebSocket") that maps "/v1/responses" → "/responses" but chatGPTOAuthPath does
not accept an upgrade/scheme so it adds no coverage; remove the second row
{"responses WebSocket", "/v1/responses", "/responses"} from the table in
chatgpt_oauth_test.go, or alternatively replace that row with a case that
actually exercises the WebSocket variant by calling chatGPTOAuthWebSocketURL
(which you already test in TestChatGPTOAuthWebSocketURL) so the test either
avoids duplication or explicitly validates the WS URL generation.

In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 68-90: Replace the allocation-heavy prefix check in
ExtractChatGPTOAuthBearerToken: instead of
strings.HasPrefix(strings.ToLower(authHeader), "bearer "), guard with
len(authHeader) >= 7 and use strings.EqualFold(authHeader[:7], "bearer ") to do
a case-insensitive prefix check without creating a lowercased copy; keep the
existing empty checks and return strings.TrimSpace(authHeader[7:]) as before.
- Around line 171-214: The JWT extraction works but the sanitization in
extractChatGPTAccountID is too narrow; replace the
strings.ContainsAny(accountID, "\r\n") check with a stricter validation that
iterates the accountID runes and rejects any character outside the visible ASCII
range (allow only bytes 0x20 (space) through 0x7E '~'), returning an error like
"chatgpt_account_id contains invalid characters"; update the check near the end
of extractChatGPTAccountID to perform this rune/byte-range validation on
accountID instead of the current newline-only check.
- Around line 253-303: The chatGPTOAuthPrepare function is dead code (only used
in tests) and its comment is inaccurate; remove the chatGPTOAuthPrepare function
and any unit tests that reference it, and update or remove the misleading
comment that claims it is "the single entry point for all ChatGPT OAuth
header/path logic." Ensure existing production helpers chatGPTOAuthMergeHeaders
and chatGPTOAuthApplyRequest remain unchanged and that any tests now exercise
those two functions and buildRequestURL/buildFullURL (which call
chatGPTOAuthPath) instead of the removed helper.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 37c536ae-eea0-475d-9b57-790bb27b67d5

📥 Commits

Reviewing files that changed from the base of the PR and between 37c7077 and f699ab7.

📒 Files selected for processing (4)
  • core/providers/openai/chatgpt_oauth.go
  • core/providers/openai/chatgpt_oauth_test.go
  • core/providers/openai/openai.go
  • core/providers/openai/types.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • core/providers/openai/openai.go

thiscantbeserious commented Apr 17, 2026

In my opinion this should be ready for manual review.

I fixed everything CodeRabbit and Greptile found (while keeping it KISS) and double-checked it myself manually

(with my limited point of view as a new contributor).

Attached is a recommended verification task list that could be tackled in the review from the maintainer side!

Cheers!

Manual Confirmation Tasks — for Reviewers

The author does NOT have an OpenAI API key, nor keys for all supported providers - please run these before merging!

  • Re-Trigger Greptile
  • With toggle OFF (default): configure OpenAI provider with a real OpenAI API key, openai_base_url unset — verify /v1/responses, /v1/chat/completions, /v1/models, /v1/embeddings all work via api.openai.com as before
  • With toggle OFF: verify WebSocket /v1/responses still works against OpenAI
  • With toggle OFF: verify a client-supplied API key via Authorization: Bearer header still works when allow_direct_keys is enabled
  • Azure OpenAI SDK: verify /v1/models requests from the Azure OpenAI SDK (User-Agent contains AzureOpenAI) still route to Azure, not OpenAI
  • Other providers: run an existing test suite / integration test against Anthropic, Gemini, Bedrock, Fireworks, xAI, OpenRouter — confirm the added bodyTransformer parameter on shared HandleOpenAIResponsesStreaming (passed nil by all non-OpenAI callers) doesn't change behavior
  • Confirm no regression for other providers (anthropic, gemini, bedrock, etc.) routing through the OpenAI integration paths

coderabbitai Bot previously approved these changes Apr 17, 2026
@thiscantbeserious

@akshaydeo it would be cool if you could have a look - for us this is a hard blocker with regard to an

enterprise license

(I'm responsible for technical judgement in that regard - roughly 40 engineers and up)

@thiscantbeserious

After the recent merges this has conflicts - I'm working to resolve them and test this further until there is a response. I will also try to move any bugfixes out of this PR and create issues for them ... so at least those can be merged and don't block this by becoming outdated again.

@thiscantbeserious thiscantbeserious marked this pull request as draft April 23, 2026 09:05
@thiscantbeserious thiscantbeserious changed the title feat: ChatGPT OAuth passthrough for OpenAI provider (Codex CLI subscription support) [feature] OAuth passthrough for OpenAI provider (Codex CLI subscription support) Apr 23, 2026
@thiscantbeserious

thiscantbeserious commented Apr 23, 2026

Pushed - branch now at 9a2ea1d60 on top of a fresh main merge.

What was done in this pass:

  • Merged latest main (conflict resolution in 6 files; main's refactors adopted, ChatGPT OAuth feature preserved)
  • Fixed Codex WS stalling at chatgpt.com by forwarding first-party identity headers (originator, version, user-agent) and injecting sane defaults scoped to the OAuth path only; blocklist hardened (RFC 7230 hop-by-hop + Origin, Trailer, X-Forwarded-*, X-Real-IP)
  • Fixed the bigger latent issue: native-WS /responses requests never wrote a log row because the trace was never flushed, the accumulator didn't recognise WebSocketResponsesRequest, and the log stream type defaulted wrong — all wired up now
  • Added an idle-timeout watchdog (sourced from NetworkConfig.StreamIdleTimeoutInSeconds, default 60s) so silent upstream stalls finalize as 504 instead of hanging
  • Added clean-close and raw-bytes terminal-event detection fallbacks so logs finalize even when the upstream event shape shifts

Codex end-to-end verified: Codex CLI 0.123.0 → Bifrost (chatgpt_oauth on, no allow_direct_keys) → chatgpt.com → fast native-WS response; the log row transitions processing → success with full metrics.

Next up: evaluating whether the individual fixes above can be cleanly extracted into their own issues / separate PRs (log-finalize plumbing for WS, idle-timeout watchdog, blocklist hardening are independent of the OAuth feature itself). Will triage as we go.

@thiscantbeserious

thiscantbeserious commented Apr 23, 2026

Possible Issue extraction from this PR

Routing context learned from #2959: unified /v1/* is multi-provider by design and requires provider/model format. Integration paths /openai/v1/*, /anthropic/v1/* etc. are scoped to a single owning provider and default accordingly.

Update 2026-04-27: most extractions superseded upstream

Maintainer PR #3018 ("fix: native WebSocket /responses reliability issues" by danpiths, merged 2026-04-24) bundled fixes for 8 of the 11 filed issues. Those 8 issues are closed upstream and my corresponding draft PRs (#3012, #3013, #3016, #3017, #3019, #3020, #3021, #3022) are now closed as redundant.

Still open (not covered by #3018): #2995, #2993, #2996. My drafts #3006, #3007, #3014 remain the proposed fixes for these and are still in draft awaiting maintainer review.

Branch state

Build passes, tests pass on the affected packages, no dead code left over. #3018 is already merged to upstream main (2026-04-24). To absorb it, this branch needs a rebase onto upstream main, which will cancel the 8 removal commits against #3018's additions and restore the WS pipeline functionality on the feature branch. The 3 still-open issues remain blockers for the end-to-end OAuth Codex demo until they land. Recommendation: hold #2775 as a WIP draft until the 3 remaining drafts land or a maintainer addresses them in a follow-up, then rebase and re-verify.

| # | Finding | Type | On main | Severity | Issue | Fixed in PR | Draft PR | Removed from original PR |
|---|---------|------|---------|----------|-------|-------------|----------|--------------------------|
| 1a | Unified `/v1/responses` rejects bare model strings. Feature proposed on #2959: opt-in `ClientConfig.DefaultProvider` with a UI dropdown populated from configured providers | bug, feature | Yes | - | #2959 | - | - | - |
| 1b | Integration `/openai/v1/responses` HTTP path rejects bare model strings | bug, could-not-reproduce | No (integration converter already defaults) | - | #2992 | - | - | - |
| 1c | Integration `/openai/v1/responses` WebSocket path rejects bare model strings (handler reads `event.Model` directly, bypasses integration converter) | bug | Yes (live verified) | Medium | #2995 | - | #3006 | 7ad281b6b |
| 2 | Integration `/openai/v1/models` fans out to every provider; `RequestConverter` does not set `Provider` | bug | Yes (live verified) | Medium | #2993 | - | #3007 | d0a743e9b |
| 3 | WS pool `Dial` error discards upstream HTTP response (401 body invisible) | bug | Yes (live verified) | Low | #2994 | #3018 | #3012 | a5b7a4eff |
| 4a | `x-bf-eh-*` headers captured on WS upgrade but never forwarded to upstream (HTTP path works) | bug | Yes (live verified) | Medium | #2996 | - | #3014 | be767a6ac |
| 4b | OAuth identity defaults injection (`originator`, `version`) on ChatGPT OAuth path | feature | - | - | stays in this PR | - | #2775 | - |
| 5 | Native WS log never finalized when `parseUpstreamWSEvent` fails on large `response.completed` or on clean close without terminal event | bug | Yes (live verified) | High | #2997 | #3018 | #3016 | 870575d5d |
| 6 | `tryNativeWSUpstream` `ReadMessage` has no idle deadline; silent stalls block forever | bug | Yes (live verified) | High | #2998 | #3018 | #3013 | fbf395f28 |
| 7a | `postHookRunner` skips `tracer.CompleteAndFlushTrace()` for WS final chunks | bug | Yes (live verified) | High | #2999 | #3018 | #3017 | 5c4f9fc58 |
| 7b | `convertToProcessedStreamResponse` missing `WebSocketResponsesRequest` case | bug | Yes (code verified) | Medium | #3000 | #3018 | #3019 | 9a1771db6 |
| 7c | Accumulator `isResponsesStreaming` missing `WebSocketResponsesRequest`; chunks error out silently, token usage and output lost | bug | Yes (code verified) | High | #3001 | #3018 | #3020 | c85ea5d75 |
| 8 | Debug-gated raw frame logging ("ws upstream frame", "ws upstream stream ended") added in 6e4f4014f. Not a bug, PR-only diagnostic tooling. Concern: at LOG_LEVEL=debug it captures raw upstream payload (sensitive). Recommendation: env flag (`BIFROST_WS_TRACE_FRAMES=1`) or remove before merge cleanup | - | No (not on main) | - | stays in this PR, scope decision | - | #2775 | - |
| 9 | WS pool hands out stale session-pinned upstream connections after server-side close (session-pinned reuse, not idle-pool) | bug | Yes (live verified) | Medium | #3002 | #3018 | #3021 | d39ea5afe (no feature-branch duplicate) |
| 10 | TTL sweeper silently discards orphaned `pendingLogsEntries` with no DB row written (backstop defect, amplifies #2997 / #2999 / #3001) | bug | Yes (code verified) | Medium | #3003 | #3018 | #3022 | 1e22773d2 (no feature-branch duplicate) |

@thiscantbeserious force-pushed the feature/openai-oauth-passthrough branch from 9a2ea1d to b58517e on April 24, 2026 at 11:38
@thiscantbeserious

Fixed commit signing and re-pushed the existing commits with proper signatures in between

…passthrough

# Conflicts:
#	docs/cli-agents/codex-cli.mdx
@thiscantbeserious

thiscantbeserious commented Apr 24, 2026

Screenshot 2026-04-24 at 14 23 01

Further test evidence from working on this, at the current state of things.

No token count; not sure if that's due to Codex doing client-side encryption or not.

…ximhq#3006)

This fix for maximhq#2995 was originally included in this feature branch as a
side effect. It has been extracted into a focused PR maximhq#3006 against main
that implements the narrower path-sniff approach requested by the
maintainer on maximhq#2959 (only the integration path defaults to OpenAI, the
unified path stays strict).

Restoring the empty default at both call sites here so this branch no
longer overlaps with maximhq#3006.
This fix for maximhq#2993 was originally included in this feature branch as a
side effect. It has been extracted into a focused PR maximhq#3007 against main.

Reverting the changes here so the feature branch no longer overlaps.
thiscantbeserious and others added 2 commits April 24, 2026 15:56
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The client-upgrade header forwarding in tryNativeWSUpstream (passing
auth.clientHeaders to mergeClientWSHeaders) was the feature branch
approximation of the x-bf-eh-* forwarding mechanism described in
issue maximhq#2996. That forwarding has been extracted as a focused fix in
PR maximhq#3014 against main, which adds mergeWSExtraHeaders to read
BifrostContextKeyExtraHeaders from the request context and merge
those headers before pool.Get.

Remove auth.clientHeaders from the mergeClientWSHeaders call so the
feature branch no longer carries duplicate coverage for that path.
The chatgpt_oauth identity-defaults injection (originator, version)
via injectCodexDefaults is retained as it is specific to the OAuth
feature and not part of the extracted fix.
…ed to maximhq#3016)

Remove the terminalPostHookFired guard, extractStreamEventType fallback,
synthesizeTerminalStreamResponse helper, and associated tests from the
feature branch. These were extracted into focused PR maximhq#3016 against
maximhq/bifrost main.

The remaining diagnostic frame logging, wsFramePreview, and diag
variables are preserved as they belong to a separate extraction.
The WebSocketResponsesRequest classification in ProcessStreamingResponse
was extracted to maximhq/bifrost PR maximhq#3020 (fixes issue maximhq#3001).

Removing the duplicate from this feature branch to avoid a merge conflict
and keep the feature branch focused on OAuth passthrough concerns.

Extraction commit: 606f47b
…ure-branch duplicate)

The liveness probe for stale session-pinned WS pool connections (maximhq#3002)
was implemented directly on fix/3002-pool-stale-session-conn off main.
The feature branch never contained a parallel implementation of this probe,
so there is no code to remove. This commit marks the extraction as complete
and links the focused fix to this branch for traceability.

Related: maximhq#3021, maximhq#3002

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@thiscantbeserious

thiscantbeserious commented Apr 24, 2026

Extraction done, review done too - I will wait until the extracted issues have been resolved before continuing further.

I noticed @danpiths is working on most of them - hence I left all my PRs in draft status as duplicates, just in case.

This way this PR stays clean - I will then see if it can be made smaller, but looking at the code myself it is already going in a good, clean direction (from my limited perspective).

@thiscantbeserious

thiscantbeserious commented Apr 24, 2026

OAuth-path /models list is incomplete, fixing in this PR

Symptom observed during manual testing with Codex CLI on the chatgpt_oauth path: /openai/v1/models returned HTTP 200 with a non-empty list, but the list Codex displayed did not reflect the model set actually available to the signed-in ChatGPT OAuth user.

This is strictly an OAuth-path bug. It is unrelated to row 2 / PR #3007 (fanout). When chatgpt_oauth is on, the upstream for model discovery should be chatgpt.com/backend-api/codex/models with the user's OAuth bearer, chatgpt-account-id, and OpenAI-Beta headers, not api.openai.com/v1/models.

Scope decision

Fixing this in this PR rather than extracting. The OAuth feature owns model discovery on the OAuth path, so the fix belongs with the feature, not as a separate upstream PR.

Fix plan

  • Apply the same path mapping and header injection to the model-listing code that the Responses path already uses on chatgpt_oauth.
  • Verify the returned list matches chatgpt.com/backend-api/codex/models for the signed-in account.
  • Confirm no stale caching across bearer refresh.
  • Add tests on the OAuth provider covering the model-discovery path and header/URL mapping.

@thiscantbeserious

thiscantbeserious commented May 4, 2026

Bug: Compact is not working yet.

Error running remote compact task: unexpected status 405 Method Not Allowed: Method Not Allowed, url: http://localhost:8080/openai/v1/responses/compact
