[feature] OAuth passthrough for OpenAI provider (Codex CLI subscription support) #2775

thiscantbeserious wants to merge 38 commits into maximhq:main from
Conversation
📝 Walkthrough

Adds ChatGPT OAuth passthrough and routing: extracts Bearer tokens and injects a direct key into the request context, rewrites OpenAI-style paths to the ChatGPT backend (including WebSocket URLs), enforces Responses request transformations/streaming, exposes a provider toggle and UI control, and adjusts handlers to accept optional request-body transformers.
Sequence Diagram(s)

sequenceDiagram
participant Client as Client
participant Bifrost as Bifrost\nRequest Worker
participant KeyStore as Direct Key\nInjection
participant Provider as OpenAI\nProvider
participant Backend as ChatGPT\nBackend
Client->>Bifrost: HTTP request (Authorization: Bearer <token>)
Bifrost->>Bifrost: Check provider.ChatGPTOAuth enabled
alt OAuth enabled and no direct key
Bifrost->>Bifrost: Extract Bearer token
Bifrost->>KeyStore: Inject direct key "chatgpt-oauth"
end
Bifrost->>Provider: Forward request (context includes key)
Provider->>Provider: Extract chatgpt_account_id from JWT
Provider->>Provider: Transform /v1/responses body (instructions, store=false, stream=true, remove max_output_tokens)
Provider->>Provider: Map /v1/* path to ChatGPT backend path / build WS URL
Provider->>Backend: Upstream request (transformed body + OAuth headers)
Backend-->>Provider: Response / stream
Provider-->>Bifrost: Response relayed
Bifrost-->>Client: Response
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
I have implemented this for seat-based subscriptions; however, I have only tested it with my private account. It was tested and developed against both Anthropic's Claude Code (also OAuth-based) and Codex CLI with OAuth, through a locally deployed Docker build. The basis was a scan for all routes in the Codex codebase: https://github.com/openai/codex. Credits / idea taken from: https://github.com/EvanZhouDev/openai-oauth. It's currently in manual-testing state (DRAFT) until I have verified all 400 errors are gone! The target is to have this working for our enterprise setup too ... (which we might be contacting you about soon :) )
@coderabbitai full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/providers/openai/openai.go (1)
119-149: ⚠️ Potential issue | 🟠 Major

Use the current key when deriving ChatGPT OAuth headers for /models.

`HandleOpenAIListModelsRequest` fans out per key, but this block builds `extraHeaders` once from `keys[0]` and reuses it for every request. In ChatGPT OAuth mode that means later keys send their own bearer token with the first key's `chatgpt-account-id`, which can produce 4xxs or cross-account header leakage. Build the headers inside the per-key wrapper instead of sharing one `headerKey` across the whole call.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/openai.go` around lines 119 - 149, The code builds a single headerKey from keys[0] and calls provider.effectiveExtraHeaders(headerKey) once, causing the same chatgpt-account-id to be reused for every per-key request; change this so headers are derived per key inside the per-key wrapper used by HandleOpenAIListModelsRequest/ListModelsByKey instead of reusing headerKey: remove the shared headerKey usage for the multi-key path and ensure provider.effectiveExtraHeaders(...) is called with the current key inside the per-key loop/wrapper (the same pattern already used in the keyless branch or within ListModelsByKey), so each request uses the matching key's chatgpt-account-id and bearer token.
🧹 Nitpick comments (2)
ui/lib/types/schemas.ts (1)
702-707: Consider defaulting `chatgpt_oauth` for backward compatibility with existing stored configs.

`chatgpt_oauth: z.boolean()` makes the field strictly required at parse time. Since this is a newly introduced toggle and the backend default is `false` (no DB migration per PR objectives), any persisted OpenAI config form state saved before this change won't include the key and will fail validation here. The pre-existing `disable_store` has the same shape, but that field has existed long enough for stored state to contain it. Recommend making it tolerant of missing input:
♻️ Proposed change
 export const openaiConfigFormSchema = z.object({
 	disable_store: z.boolean(),
-	chatgpt_oauth: z.boolean(),
+	chatgpt_oauth: z.boolean().default(false),
 });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ui/lib/types/schemas.ts` around lines 702 - 707, The schema currently requires chatgpt_oauth which breaks parsing of older persisted configs that lack this new key; update openaiConfigFormSchema so chatgpt_oauth is tolerant of missing values by making it optional and giving it a default of false (e.g., change the chatgpt_oauth entry in openaiConfigFormSchema to an optional boolean with default false) so parsing/backwards compatibility is preserved; keep the exported OpenAIConfigFormSchema type inference intact.

core/providers/openai/chatgpt_oauth_test.go (1)
341-383: Add a `/v1/models` query-preservation test case.

Current cases validate plain `/v1/models`, but not `/v1/models?…`. Adding one case here helps lock expected behavior if callers include query params.
 // Models — appends required client_version query param
 {"models listing", "/v1/models", "/models?client_version=" + ChatGPTOAuthClientVersion},
+{"models listing with existing query", "/v1/models?limit=20", "/models?limit=20&client_version=" + ChatGPTOAuthClientVersion},

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth_test.go` around lines 341 - 383, Add a test case to TestChatGPTOAuthPath to verify query-preservation for /v1/models: update the tests slice in TestChatGPTOAuthPath to include an entry like {"models with query", "/v1/models?foo=bar", "/models?foo=bar&client_version=" + ChatGPTOAuthClientVersion} so chatGPTOAuthPath is exercised when the incoming /v1/models already has query params and the client_version param is appended correctly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 15-21: The helper buildTestJWT currently ignores json.Marshal
errors; update buildTestJWT to check the error returned by json.Marshal(payload)
and fail fast (e.g., panic with a clear message including the error) instead of
proceeding, so malformed payloads produce an immediate, descriptive failure;
modify the code in buildTestJWT to capture (payloadBytes, err :=
json.Marshal(payload)), handle err (panic or return an explicit error/empty
token) and only continue to encode payloadBytes when err is nil.
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 121-124: The code uses encoding/json on hot request paths (in
extractChatGPTAccountID and transformChatGPTResponsesBody) which is against
project convention; replace json.Unmarshal/json.Marshal calls with
github.com/bytedance/sonic's sonic.Unmarshal/sonic.Marshal in both functions,
update the import to include "github.com/bytedance/sonic" and remove the unused
encoding/json import, and ensure error handling stays the same (wrap/return
errors as before) since no custom marshaling is required.
- Around line 237-249: When enabled is true, chatGPTOAuthApplyRequest must not
return a bodyTransformer if extracting the ChatGPT account ID fails (to avoid
sending a mutated body without the required chatgpt-account-id header); change
chatGPTOAuthApplyRequest to detect extraction failure from
chatGPTOAuthMergeHeaders (or call extractChatGPTAccountID directly) and return a
sentinel (nil bodyTransformer or an explicit error indicator) instead of
transformChatGPTResponsesBody so the caller can surface a structured "invalid
ChatGPT OAuth token" error; reference chatGPTOAuthApplyRequest,
chatGPTOAuthMergeHeaders, extractChatGPTAccountID and
transformChatGPTResponsesBody when implementing this check and ensure any
logging stays but the function signals failure to callers.
In `@core/providers/openai/openai.go`:
- Around line 5183-5184: The code is manually concatenating
provider.networkConfig.BaseURL + "/v1/..." which bypasses chatgptOAuth path
rewriting; replace those manual concatenations with calls to
buildRequestURL(...) so the URL goes through the ChatGPT OAuth path
normalization (e.g., change the req.SetRequestURI(provider.networkConfig.BaseURL
+ "/v1/files/"+request.FileID) usage to req.SetRequestURI(buildRequestURL(ctx,
provider, "/v1/files/"+request.FileID, key))), keeping
providerUtils.SetExtraHeaders(...) as-is; apply the same change to the other
occurrences you noted (places around the symbols handling files, batch, and
passthrough routes) so chatGPTOAuthPath(...) sees the normalized path.
In `@transports/bifrost-http/handlers/wsresponses.go`:
- Around line 414-418: Update the errModelFormat error message to reflect that a
model name is required (provider prefix is optional) and adjust its declaration
referenced by handleResponseCreate and convertEventToRequest; specifically
change the errModelFormat string (used in wsresponses.go where
schemas.ParseModelString(event.Model, schemas.OpenAI) is called) from "model
should be in provider/model format" to something like "model is required" so
that a missing event.Model returns the correct, non-misleading error message.
---
Outside diff comments:
In `@core/providers/openai/openai.go`:
- Around line 119-149: The code builds a single headerKey from keys[0] and calls
provider.effectiveExtraHeaders(headerKey) once, causing the same
chatgpt-account-id to be reused for every per-key request; change this so
headers are derived per key inside the per-key wrapper used by
HandleOpenAIListModelsRequest/ListModelsByKey instead of reusing headerKey:
remove the shared headerKey usage for the multi-key path and ensure
provider.effectiveExtraHeaders(...) is called with the current key inside the
per-key loop/wrapper (the same pattern already used in the keyless branch or
within ListModelsByKey), so each request uses the matching key's
chatgpt-account-id and bearer token.
---
Nitpick comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 341-383: Add a test case to TestChatGPTOAuthPath to verify
query-preservation for /v1/models: update the tests slice in
TestChatGPTOAuthPath to include an entry like {"models with query",
"/v1/models?foo=bar", "/models?foo=bar&client_version=" +
ChatGPTOAuthClientVersion} so chatGPTOAuthPath is exercised when the incoming
/v1/models already has query params and the client_version param is appended
correctly.
In `@ui/lib/types/schemas.ts`:
- Around line 702-707: The schema currently requires chatgpt_oauth which breaks
parsing of older persisted configs that lack this new key; update
openaiConfigFormSchema so chatgpt_oauth is tolerant of missing values by making
it optional and giving it a default of false (e.g., change the chatgpt_oauth
entry in openaiConfigFormSchema to an optional boolean with default false) so
parsing/backwards compatibility is preserved; keep the exported
OpenAIConfigFormSchema type inference intact.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fe98f130-54dd-4961-b421-6b4b6eaf13bd
📒 Files selected for processing (17)
core/bifrost.go
core/providers/azure/azure.go
core/providers/fireworks/fireworks.go
core/providers/openai/chatgpt_oauth.go
core/providers/openai/chatgpt_oauth_test.go
core/providers/openai/openai.go
core/providers/openai/websocket.go
core/providers/openrouter/openrouter.go
core/providers/xai/xai.go
core/schemas/provider.go
docs/cli-agents/codex-cli.mdx
transports/bifrost-http/handlers/wsresponses.go
transports/bifrost-http/integrations/openai.go
transports/bifrost-http/websocket/pool.go
ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx
ui/lib/types/config.ts
ui/lib/types/schemas.ts
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/providers/openai/openai.go (1)
119-148: ⚠️ Potential issue | 🟠 Major

Build ChatGPT OAuth headers from the current key, not `keys[0]`.

The merged headers are computed once from `headerKey` and then reused across all `HandleOpenAIListModelsRequest` calls. For a provider configured with multiple OAuth tokens, the second and later requests can send bearer token B with a `chatgpt-account-id` extracted from token A, which the ChatGPT backend will reject. This needs to be derived per key inside the list-models fanout path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/openai.go` around lines 119 - 148, The current code builds merged ChatGPT OAuth headers once using headerKey (the first key) and reuses them for all fanout requests, which can pair the wrong chatgpt-account-id with a different bearer token; instead, remove the global headerKey-based header reuse and derive extra headers per key inside the list-models fanout path: update the call sites for HandleOpenAIListModelsRequest and ListModelsByKey to stop passing provider.effectiveExtraHeaders(headerKey) (or provider.effectiveExtraHeaders) precomputed from keys[0] and ensure the code that iterates over keys computes provider.effectiveExtraHeaders(k) for each specific schemas.Key k (so chatgpt-account-id is extracted from the same key used for the bearer), leaving the keyless branch (provider.customProviderConfig.IsKeyLess → ListModelsByKey with schemas.Key{}) unchanged.
♻️ Duplicate comments (3)
core/providers/openai/chatgpt_oauth_test.go (1)
15-21: ⚠️ Potential issue | 🟡 Minor

Handle `json.Marshal` failure in the test JWT helper.

Line 17 ignores the marshal error, which can hide malformed fixture bugs and make failures harder to diagnose.
Proposed fix
 func buildTestJWT(payload map[string]interface{}) string {
 	header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"RS256","typ":"JWT"}`))
-	payloadBytes, _ := json.Marshal(payload)
+	payloadBytes, err := json.Marshal(payload)
+	if err != nil {
+		panic(err)
+	}
 	payloadB64 := base64.RawURLEncoding.EncodeToString(payloadBytes)
 	sig := base64.RawURLEncoding.EncodeToString([]byte("fake-signature"))
 	return header + "." + payloadB64 + "." + sig
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth_test.go` around lines 15 - 21, The test helper buildTestJWT currently ignores the json.Marshal error for payload; update buildTestJWT to check the error returned by json.Marshal(payload) and handle it instead of discarding it—e.g., if err != nil then panic or fail the test with a descriptive message including the error (so malformed fixtures surface clearly). Ensure the error check occurs right after the json.Marshal call that assigns payloadBytes and err, and include the error in the panic/failure message so buildTestJWT surfaces marshal failures.

core/providers/openai/chatgpt_oauth.go (2)
244-249: ⚠️ Potential issue | 🟠 Major

`chatGPTOAuthApplyRequest` still returns the body transformer when extraction fails.

Per the function comment, when JWT extraction fails the headers pass through unchanged but `transformChatGPTResponsesBody` is still returned, so the upstream request goes out with a mutated body (`store=false`, `stream=true`, `max_output_tokens` dropped, `instructions=""`) and no `chatgpt-account-id`. That guarantees an upstream 400 and is the primary failure mode called out in the PR description.

Recommend returning `(existingExtraHeaders, nil)` (or surfacing an error) when `chatGPTOAuthMergeHeaders` did not actually merge, either by having `chatGPTOAuthMergeHeaders` signal merge success, or by calling `extractChatGPTAccountID` directly here and short-circuiting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 244 - 249, chatGPTOAuthApplyRequest currently always returns transformChatGPTResponsesBody even when JWT/account extraction failed, causing mutated bodies to be sent without merged headers; modify chatGPTOAuthApplyRequest to detect whether chatGPTOAuthMergeHeaders actually merged (either by updating chatGPTOAuthMergeHeaders to return a success flag or by calling extractChatGPTAccountID here) and if extraction/merge failed return (existingExtraHeaders, nil) (or surface an error) instead of returning transformChatGPTResponsesBody; reference chatGPTOAuthApplyRequest, chatGPTOAuthMergeHeaders, extractChatGPTAccountID and transformChatGPTResponsesBody when implementing the short-circuit.
79-100: ⚠️ Potential issue | 🟡 Minor

WebSocket headers drop `OpenAI-Beta` on JWT extraction failure, the same pattern as the Responses path.

When `extractChatGPTAccountID` fails, the function early-returns with only `Authorization` + existing extras. The upstream WS connect will then go through with no `chatgpt-account-id` and no `OpenAI-Beta: responses=experimental`, resulting in an opaque upstream 400/handshake failure visible only as a `Warn` log, consistent with the unresolved 400s called out in the PR description.

Prefer failing fast by returning an error (or a sentinel `nil`) so the WS dial path surfaces a structured "invalid ChatGPT OAuth token" error rather than relying on the upstream to reject a malformed request.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 79 - 100, Update chatGPTOAuthWebSocketHeaders so it fails fast instead of returning partial headers: when extractChatGPTAccountID(key.Value.GetValue()) returns an error, do not return the headers map — return a sentinel nil (or change the function to return (map[string]string, error) and return an explicit error like "invalid ChatGPT OAuth token") so callers (the WS dial/connect path) can detect the failure and surface a structured error; ensure callers that invoke chatGPTOAuthWebSocketHeaders (and any logic expecting chatGPTOAuthExtraHeaders) are adjusted to handle a nil/err return and avoid attempting the WebSocket connect with incomplete headers.
🧹 Nitpick comments (3)
transports/bifrost-http/websocket/pool.go (1)
177-186: Use `io.ReadAll(io.LimitReader(...))` for reliable body snippet capture.

A single `resp.Body.Read(buf)` is not guaranteed to fill the buffer even if more data is available; for chunked or buffered responses it commonly returns far fewer than 512 bytes on the first call, so the captured `bodySnippet` may be truncated well before the real upstream error message. Since this snippet is surfaced through `logger.Warn` at wsresponses.go:258, you want the first 512 bytes reliably, not whatever the first `Read` happens to hand back.

Also, the raw body can contain newlines / HTML / control chars that will be inlined into the error (and log line). Consider trimming and/or quoting it for log hygiene.
♻️ Proposed refactor
-	detail := ""
-	if resp != nil {
-		var bodySnippet string
-		if resp.Body != nil {
-			buf := make([]byte, 512)
-			n, _ := resp.Body.Read(buf)
-			_ = resp.Body.Close()
-			bodySnippet = string(buf[:n])
-		}
-		detail = fmt.Sprintf(" (upstream status %d: %s)", resp.StatusCode, bodySnippet)
-	}
+	detail := ""
+	if resp != nil {
+		var bodySnippet string
+		if resp.Body != nil {
+			body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
+			_ = resp.Body.Close()
+			bodySnippet = strings.TrimSpace(string(body))
+		}
+		detail = fmt.Sprintf(" (upstream status %d: %q)", resp.StatusCode, bodySnippet)
+	}

Remember to add `io` and `strings` to the imports.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/websocket/pool.go` around lines 177 - 186, Replace the single Read call that builds bodySnippet with a reliable read of up to 512 bytes using io.ReadAll(io.LimitReader(resp.Body, 512)), ensure resp.Body is closed, then sanitize the result (trim whitespace and/or quote non-printable chars using strings.TrimSpace or a quoting helper) before inserting into detail; update imports to include io and strings and keep using resp.StatusCode and the existing variables (resp.Body, bodySnippet, detail, key.Endpoint, err) so the surrounding error formatting remains unchanged.

transports/bifrost-http/handlers/wsresponses.go (1)
164-181: Minor: the new comment is interleaved with the unrelated store-override comment block.

Lines 164-166 document the store-override logic at lines 176-181, and lines 167-169 document the `ParseModelString` call at line 170. Having both blocks glued together before `ParseModelString` makes it read as a single confusing preamble. Consider moving the store-override comment down adjacent to its code:

✏️ Proposed tweak
-	// Store override: default to store=true (Codex sends false by default but expects true).
-	// If DisableStore is set in provider config, force store=false.
-	// If client explicitly sets store, respect that value unless DisableStore overrides it.
-	// Default to OpenAI provider since /v1/responses is an OpenAI-format endpoint.
-	// Allows callers (e.g. Codex) to send the bare model name like "gpt-5.3-codex"
-	// without a provider prefix.
+	// Default to OpenAI provider since /v1/responses is an OpenAI-format endpoint.
+	// Allows callers (e.g. Codex) to send the bare model name like "gpt-5.3-codex"
+	// without a provider prefix.
 	provider, modelName := schemas.ParseModelString(event.Model, schemas.OpenAI)
 	if provider == "" || modelName == "" {
 		writeWSError(session, 400, "invalid_request_error", "failed to parse model string")
 		return
 	}
+	// Store override: default to store=true (Codex sends false by default but expects true).
+	// If DisableStore is set in provider config, force store=false.
 	if providerCfg, cfgErr := h.config.GetProviderConfigRaw(provider); cfgErr == nil &&

Separately (not part of this diff): the existing "respect client store unless DisableStore overrides" line no longer matches the branch logic below, which unconditionally rewrites `event.Store`. Worth revisiting when you clean up the comments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@transports/bifrost-http/handlers/wsresponses.go` around lines 164 - 181, Move the Store-override comment block so it sits immediately above the branch that sets event.Store (the code that calls h.config.GetProviderConfigRaw and checks providerCfg.OpenAIConfig.DisableStore), and leave the ParseModelString comment (about provider/model defaulting to OpenAI and bare model names) directly above the provider, modelName := schemas.ParseModelString(...) call; also update the Store comment text to accurately reflect the implementation (it unconditionally sets event.Store to true or false based on DisableStore rather than "respecting client store unless DisableStore overrides"), referencing schemas.ParseModelString, providerCfg.OpenAIConfig.DisableStore, h.config.GetProviderConfigRaw, and event.Store so reviewers can find the relevant code.

core/providers/openai/chatgpt_oauth.go (1)
69-73: `strings.Replace` scheme swap is brittle for non-canonical base URLs.

If `baseURL` is missing the scheme, already uses `ws(s)://`, or has unusual casing (`HTTPS://`), the conversion silently degrades: a bare host stays bare, already-`ws://` URLs pass through (fine), and uppercase schemes are not rewritten (breaks). Consider parsing with `net/url` and swapping `Scheme` explicitly, or at least documenting the canonical-input assumption.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 69 - 73, The current chatGPTOAuthWebSocketURL function uses strings.Replace which is brittle for non-canonical baseURL; change it to parse baseURL with net/url (use url.Parse), then set u.Scheme to "wss" if original scheme is "https" (case-insensitive) or to "ws" if "http", and handle missing or already-websocket schemes by defaulting to "wss" for secure or returning an error/normalizing as appropriate; finally rebuild the URL (u.String()) and append chatGPTOAuthPath(standardPath). Update chatGPTOAuthWebSocketURL to use url.Parse/URL.Scheme assignment instead of strings.Replace to ensure correct, case-insensitive scheme handling and to cover absent schemes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 215-224: The test currently expects the incoming "store" value to
be preserved, but transformChatGPTResponsesBody must force store=false; update
the test in the t.Run block to assert that result["store"] is false (not true)
after calling transformChatGPTResponsesBody using the existing input
(`{"model":"gpt-5.4","store":true}`) and existing variables so the test verifies
the forced false behavior of transformChatGPTResponsesBody.
In `@core/providers/openai/openai.go`:
- Around line 1417-1432: HandleOpenAIResponsesRequest currently forwards large
requests through handleOpenAILargePayloadPassthrough before applying the
bodyTransformer, causing ChatGPT OAuth mutations to be skipped; update
HandleOpenAIResponsesRequest (or the call site in openai.go where you pass
bodyTransformer into HandleOpenAIResponsesRequest) so that the bodyTransformer
is applied to the request payload prior to any large-payload passthrough
decision (i.e., ensure bodyTransformer is invoked and its result used when
determining whether to call handleOpenAILargePayloadPassthrough), and propagate
the transformed body into handleOpenAILargePayloadPassthrough and subsequent
code paths so large /v1/responses requests receive the same ChatGPT OAuth
mutations (store=false, stream=true, etc.) as small requests.
- Around line 91-97: The merge helper chatGPTOAuthMergeHeaders currently copies
maps verbatim causing duplicate header keys with different casing; update
chatGPTOAuthMergeHeaders (and callers like effectiveExtraHeaders) to normalize
header names case-insensitively (e.g., use strings.ToLower) when merging
networkConfig.ExtraHeaders with the ChatGPT OAuth headers so that "openai-beta"
and "chatgpt-account-id" are deduplicated and deterministically overridden by
the OAuth values; ensure the resulting map uses a consistent canonical casing or
lower-case keys before returning so downstream SetExtraHeaders sees only one
entry per logical header.
---
Outside diff comments:
In `@core/providers/openai/openai.go`:
- Around line 119-148: The current code builds merged ChatGPT OAuth headers once
using headerKey (the first key) and reuses them for all fanout requests, which
can pair the wrong chatgpt-account-id with a different bearer token; instead,
remove the global headerKey-based header reuse and derive extra headers per key
inside the list-models fanout path: update the call sites for
HandleOpenAIListModelsRequest and ListModelsByKey to stop passing
provider.effectiveExtraHeaders(headerKey) (or provider.effectiveExtraHeaders)
precomputed from keys[0] and ensure the code that iterates over keys computes
provider.effectiveExtraHeaders(k) for each specific schemas.Key k (so
chatgpt-account-id is extracted from the same key used for the bearer), leaving
the keyless branch (provider.customProviderConfig.IsKeyLess → ListModelsByKey
with schemas.Key{}) unchanged.
---
Duplicate comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 15-21: The test helper buildTestJWT currently ignores the
json.Marshal error for payload; update buildTestJWT to check the error returned
by json.Marshal(payload) and handle it instead of discarding it—e.g., if err !=
nil then panic or fail the test with a descriptive message including the error
(so malformed fixtures surface clearly). Ensure the error check occurs right
after the json.Marshal call that assigns payloadBytes and err, and include the
error in the panic/failure message so buildTestJWT surfaces marshal failures.
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 244-249: chatGPTOAuthApplyRequest currently always returns
transformChatGPTResponsesBody even when JWT/account extraction failed, causing
mutated bodies to be sent without merged headers; modify
chatGPTOAuthApplyRequest to detect whether chatGPTOAuthMergeHeaders actually
merged (either by updating chatGPTOAuthMergeHeaders to return a success flag or
by calling extractChatGPTAccountID here) and if extraction/merge failed return
(existingExtraHeaders, nil) (or surface an error) instead of returning
transformChatGPTResponsesBody; reference chatGPTOAuthApplyRequest,
chatGPTOAuthMergeHeaders, extractChatGPTAccountID and
transformChatGPTResponsesBody when implementing the short-circuit.
- Around line 79-100: Update chatGPTOAuthWebSocketHeaders so it fails fast
instead of returning partial headers: when
extractChatGPTAccountID(key.Value.GetValue()) returns an error, do not return
the headers map — return a sentinel nil (or change the function to return
(map[string]string, error) and return an explicit error like "invalid ChatGPT
OAuth token") so callers (the WS dial/connect path) can detect the failure and
surface a structured error; ensure callers that invoke
chatGPTOAuthWebSocketHeaders (and any logic expecting chatGPTOAuthExtraHeaders)
are adjusted to handle a nil/err return and avoid attempting the WebSocket
connect with incomplete headers.
---
Nitpick comments:
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 69-73: The current chatGPTOAuthWebSocketURL function uses
strings.Replace which is brittle for non-canonical baseURL; change it to parse
baseURL with net/url (use url.Parse), then set u.Scheme to "wss" if original
scheme is "https" (case-insensitive) or to "ws" if "http", and handle missing or
already-websocket schemes by defaulting to "wss" for secure or returning an
error/normalizing as appropriate; finally rebuild the URL (u.String()) and
append chatGPTOAuthPath(standardPath). Update chatGPTOAuthWebSocketURL to use
url.Parse/URL.Scheme assignment instead of strings.Replace to ensure correct,
case-insensitive scheme handling and to cover absent schemes.
In `@transports/bifrost-http/handlers/wsresponses.go`:
- Around line 164-181: Move the Store-override comment block so it sits
immediately above the branch that sets event.Store (the code that calls
h.config.GetProviderConfigRaw and checks providerCfg.OpenAIConfig.DisableStore),
and leave the ParseModelString comment (about provider/model defaulting to
OpenAI and bare model names) directly above the provider, modelName :=
schemas.ParseModelString(...) call; also update the Store comment text to
accurately reflect the implementation (it unconditionally sets event.Store to
true or false based on DisableStore rather than “respecting client store unless
DisableStore overrides”), referencing schemas.ParseModelString,
providerCfg.OpenAIConfig.DisableStore, h.config.GetProviderConfigRaw, and
event.Store so reviewers can find the relevant code.
In `@transports/bifrost-http/websocket/pool.go`:
- Around line 177-186: Replace the single Read call that builds bodySnippet with
a reliable read of up to 512 bytes using io.ReadAll(io.LimitReader(resp.Body,
512)), ensure resp.Body is closed, then sanitize the result (trim whitespace
and/or quote non-printable chars using strings.TrimSpace or a quoting helper)
before inserting into detail; update imports to include io and strings and keep
using resp.StatusCode and the existing variables (resp.Body, bodySnippet,
detail, key.Endpoint, err) so the surrounding error formatting remains
unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4b74ee08-817c-473f-9b9e-fb58e8eb3371
📒 Files selected for processing (17)
- core/bifrost.go
- core/providers/azure/azure.go
- core/providers/fireworks/fireworks.go
- core/providers/openai/chatgpt_oauth.go
- core/providers/openai/chatgpt_oauth_test.go
- core/providers/openai/openai.go
- core/providers/openai/websocket.go
- core/providers/openrouter/openrouter.go
- core/providers/xai/xai.go
- core/schemas/provider.go
- docs/cli-agents/codex-cli.mdx
- transports/bifrost-http/handlers/wsresponses.go
- transports/bifrost-http/integrations/openai.go
- transports/bifrost-http/websocket/pool.go
- ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx
- ui/lib/types/config.ts
- ui/lib/types/schemas.ts
Actionable comments posted: 2
🧹 Nitpick comments (1)
core/providers/openai/chatgpt_oauth.go (1)
61-83: Nit: avoid allocating a lowercase copy on every request.
`strings.ToLower(authHeader)` allocates a new string just to prefix-check. Use `strings.EqualFold` on the first 7 bytes, and drop the redundant first-try map lookup (the loop below already handles both cases).
♻️ Proposed tweak
```diff
-	authHeader, ok := headers["authorization"]
-	if !ok {
-		// Try case-insensitive fallback since the caller may not lowercase.
-		for k, v := range headers {
-			if strings.EqualFold(k, "authorization") {
-				authHeader = v
-				ok = true
-				break
-			}
-		}
-	}
-	if !ok || authHeader == "" {
-		return ""
-	}
-	if !strings.HasPrefix(strings.ToLower(authHeader), "bearer ") {
-		return ""
-	}
-	return strings.TrimSpace(authHeader[7:])
+	var authHeader string
+	for k, v := range headers {
+		if strings.EqualFold(k, "authorization") {
+			authHeader = v
+			break
+		}
+	}
+	if len(authHeader) < 7 || !strings.EqualFold(authHeader[:7], "bearer ") {
+		return ""
+	}
+	return strings.TrimSpace(authHeader[7:])
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 61 - 83, The ExtractChatGPTOAuthBearerToken function currently allocates by calling strings.ToLower(authHeader) for the prefix check and also does a redundant direct map lookup; change it to perform only a single case-insensitive header search (loop over headers and use strings.EqualFold to find the "authorization" key) and then avoid allocating by checking the header length and using strings.EqualFold on the first 7 bytes to compare the "Bearer " prefix; finally return strings.TrimSpace(authHeader[7:]) (ensure you guard for header length >= 7 and keep the nil headers check).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 88-100: The path mapping in chatGPTOAuthPath currently only
appends ChatGPTOAuthClientVersionFallback when mapped == "/models", which misses
cases like "/models?foo=bar"; update chatGPTOAuthPath to parse the mapped path's
query (use net/url.Parse or url.ParseRequestURI on the mapped value), check the
query parameters for the presence of "client_version", and if absent add
client_version=ChatGPTOAuthClientVersionFallback to the query while preserving
any existing query params and the path; ensure the function returns the
reconstructed URL (path + updated query) instead of relying on exact string
equality.
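The query-preserving mapping the comment asks for can be sketched like this — the fallback constant is a placeholder, since the real value lives in the provider:

```go
package main

import (
	"fmt"
	"net/url"
)

// Placeholder for ChatGPTOAuthClientVersionFallback; illustrative only.
const clientVersionFallback = "0.0.0"

// ensureClientVersion adds client_version to the mapped path's query when it
// is absent, preserving any existing query parameters and the path.
func ensureClientVersion(mapped string) (string, error) {
	u, err := url.Parse(mapped)
	if err != nil {
		return "", err
	}
	q := u.Query()
	if q.Get("client_version") == "" {
		q.Set("client_version", clientVersionFallback)
		u.RawQuery = q.Encode()
	}
	return u.String(), nil
}

func main() {
	s, _ := ensureClientVersion("/models?foo=bar")
	fmt.Println(s) // /models?client_version=0.0.0&foo=bar
}
```

Because the decision is made on the parsed query rather than exact string equality, `/models?foo=bar` is handled the same way as a bare `/models`.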
- Around line 114-135: The chatGPTOAuthWebSocketHeaders function can produce
duplicate-case header keys because it only case-folds "Authorization" and then
blindly copies existingExtraHeaders and overlays chatGPTOAuthExtraHeaders;
update chatGPTOAuthWebSocketHeaders to reuse the existing
mergeHeadersCaseInsensitive helper (the same approach used by
chatGPTOAuthApplyRequest/chatGPTOAuthMergeHeaders): build the base map with
"Authorization" using key.Value.GetValue(), then call
mergeHeadersCaseInsensitive(base, existingExtraHeaders) and finally merge the
map returned by chatGPTOAuthExtraHeaders(accountID) through
mergeHeadersCaseInsensitive so all merges are case-insensitive and do not leave
duplicate-cased keys (use extractChatGPTAccountID to get accountID as before).
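The case-insensitive merge semantics described above can be sketched as follows — the helper name is illustrative; the PR's real helper is `mergeHeadersCaseInsensitive`:

```go
package main

import (
	"fmt"
	"strings"
)

// mergeCaseInsensitive overlays src onto dst, deleting any existing key that
// matches case-insensitively so no duplicate-cased header keys survive.
func mergeCaseInsensitive(dst, src map[string]string) map[string]string {
	if dst == nil {
		dst = map[string]string{}
	}
	for sk, sv := range src {
		for dk := range dst {
			if strings.EqualFold(dk, sk) {
				delete(dst, dk)
			}
		}
		dst[sk] = sv
	}
	return dst
}

func main() {
	h := mergeCaseInsensitive(
		map[string]string{"authorization": "Bearer old", "X-Foo": "1"},
		map[string]string{"Authorization": "Bearer new"},
	)
	fmt.Println(len(h), h["Authorization"])
}
```

Routing every merge through one helper like this keeps a lowercase `authorization` from a client and a canonical `Authorization` from the OAuth layer from coexisting in the same header map.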
---
Nitpick comments:
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 61-83: The ExtractChatGPTOAuthBearerToken function currently
allocates by calling strings.ToLower(authHeader) for the prefix check and also
does a redundant direct map lookup; change it to perform only a single
case-insensitive header search (loop over headers and use strings.EqualFold to
find the "authorization" key) and then avoid allocating by checking the header
length and using strings.EqualFold on the first 7 bytes to compare the "Bearer "
prefix; finally return strings.TrimSpace(authHeader[7:]) (ensure you guard for
header length >= 7 and keep the nil headers check).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e60c384d-acd9-41ae-ab90-17ef7b18a547
📒 Files selected for processing (5)
- core/bifrost.go
- core/providers/openai/chatgpt_oauth.go
- core/providers/openai/chatgpt_oauth_test.go
- core/providers/openai/openai.go
- transports/bifrost-http/handlers/wsresponses.go
✅ Files skipped from review due to trivial changes (1)
- core/providers/openai/chatgpt_oauth_test.go
🚧 Files skipped from review as they are similar to previous changes (3)
- transports/bifrost-http/handlers/wsresponses.go
- core/bifrost.go
- core/providers/openai/openai.go
Confidence Score: 4/5
Safe to merge with two minor URL-construction edge cases to address. All remaining findings are P2: a malformed URL in Passthrough/PassthroughStream (only triggers on the very unlikely path of chatgptOAuth + passthrough route for /models + non-empty RawQuery) and a dead production function.
Important Files Changed
core/providers/openai/openai.go — Passthrough and PassthroughStream URL construction when chatgptOAuth is enabled and req.RawQuery is non-empty
Reviews (2): Last reviewed commit: "test: 100% coverage on OpenAIListModelsR..."
🧹 Nitpick comments (5)
core/providers/openai/chatgpt_oauth.go (3)
68-90: Bearer extraction looks correct; the case-insensitive fallback is well-scoped.
One micro-nit: `strings.ToLower(authHeader)` is applied to the full header value just for a 7-char prefix check. For long tokens this is an unnecessary allocation; `len(authHeader) >= 7 && strings.EqualFold(authHeader[:7], "bearer ")` avoids it. Low impact since this runs once per request — take it or leave it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 68 - 90, Replace the allocation-heavy prefix check in ExtractChatGPTOAuthBearerToken: instead of strings.HasPrefix(strings.ToLower(authHeader), "bearer "), guard with len(authHeader) >= 7 and use strings.EqualFold(authHeader[:7], "bearer ") to do a case-insensitive prefix check without creating a lowercased copy; keep the existing empty checks and return strings.TrimSpace(authHeader[7:]) as before.
171-214: JWT claim extraction is sound; consider slightly stricter sanitization.
The core logic is correct: 3-segment check, `base64.RawURLEncoding` (unpadded JWT standard), sonic unmarshal, typed claim assertions, and explicit empty-string rejection. Not verifying the signature is explicitly called out as an intentional passthrough choice.
Minor hardening suggestion: `strings.ContainsAny(accountID, "\r\n")` catches the main HTTP header-injection vectors, but not other control characters (NUL, tabs, DEL, etc.) that some upstreams parse loosely. Since this value is ultimately emitted as an HTTP header, rejecting any byte outside the visible ASCII range (plus space) would be a more defense-in-depth choice.
🛡️ Optional hardening
```diff
-	// Sanitize: reject account IDs containing newlines or carriage returns
-	// to prevent HTTP header injection attacks via crafted JWTs.
-	if strings.ContainsAny(accountID, "\r\n") {
-		return "", fmt.Errorf("chatgpt_account_id contains invalid characters")
-	}
+	// Sanitize: reject account IDs containing any control characters to prevent
+	// HTTP header injection via crafted JWTs. HTTP header values should contain
+	// only visible ASCII (0x20-0x7E) plus horizontal tab.
+	for _, b := range []byte(accountID) {
+		if b < 0x20 || b == 0x7F {
+			return "", fmt.Errorf("chatgpt_account_id contains invalid characters")
+		}
+	}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 171 - 214, The JWT extraction works but the sanitization in extractChatGPTAccountID is too narrow; replace the strings.ContainsAny(accountID, "\r\n") check with a stricter validation that iterates the accountID runes and rejects any character outside the visible ASCII range (allow only bytes 0x20 (space) through 0x7E '~'), returning an error like "chatgpt_account_id contains invalid characters"; update the check near the end of extractChatGPTAccountID to perform this rune/byte-range validation on accountID instead of the current newline-only check.
253-303: Remove `chatGPTOAuthPrepare` — it is dead code used only in tests.
`chatGPTOAuthPrepare` is never called from production code (openai.go). The comment claiming it is "the single entry point for all ChatGPT OAuth header/path logic" is inaccurate.
Production uses only `chatGPTOAuthMergeHeaders` (for headers-only routes) and `chatGPTOAuthApplyRequest` (for requests needing body transformation). Path mapping is handled separately by `buildRequestURL`/`buildFullURL`, which internally apply `chatGPTOAuthPath`. Remove `chatGPTOAuthPrepare` and its tests to eliminate dead code; the two active helpers have well-defined, non-overlapping responsibilities.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth.go` around lines 253 - 303, The chatGPTOAuthPrepare function is dead code (only used in tests) and its comment is inaccurate; remove the chatGPTOAuthPrepare function and any unit tests that reference it, and update or remove the misleading comment that claims it is "the single entry point for all ChatGPT OAuth header/path logic." Ensure existing production helpers chatGPTOAuthMergeHeaders and chatGPTOAuthApplyRequest remain unchanged and that any tests now exercise those two functions and buildRequestURL/buildFullURL (which call chatGPTOAuthPath) instead of the removed helper.
core/providers/openai/chatgpt_oauth_test.go (2)
35-37: `captureLogger.Warn` treats `msg` as a printf format string, which can break if production callers use structured key/value args.
`schemas.Logger.Warn(msg string, args ...any)` is commonly used in two ways: printf-style (`logger.Warn("failed: %v", err)`) or structured (`logger.Warn("failed to extract account ID", "err", err)`). `fmt.Sprintf(msg, args...)` works for the first, but for the second it produces `%!(EXTRA ...)` noise and makes the recorded string brittle to verb changes in the production code. Today the tests only `Contains`-check the literal prefix so they pass, but a future caller that adds a `%` in `msg` or relies on structured logging will silently garble `l.warns`.
🧹 Suggested simpler capture
```diff
 func (l *captureLogger) Warn(msg string, args ...any) {
-	l.warns = append(l.warns, fmt.Sprintf(msg, args...))
+	// Record the raw message; callers assert on substrings of msg only.
+	// Avoid fmt.Sprintf so structured key/value args don't produce %!(EXTRA ...) noise.
+	l.warns = append(l.warns, msg)
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth_test.go` around lines 35 - 37, captureLogger.Warn currently treats msg as a printf format and calls fmt.Sprintf(msg, args...), which breaks structured calls; change captureLogger.Warn to avoid treating msg as a format string: if len(args) == 0 append msg to l.warns, otherwise append msg + " " + fmt.Sprint(args...) (or join key/value pairs via fmt.Sprint) so structured logger.Warn("msg","k",v) records a stable, readable string; update the Warn method (and references to l.warns) to use this non‑formatting behavior.
371-372: Redundant table entry — both cases exercise the exact same input and expected output.
`"responses POST/SSE"` and `"responses WebSocket"` both map `/v1/responses` → `/responses`. The inline comment notes they differ only by "upgrade", but `chatGPTOAuthPath` doesn't take a scheme/upgrade argument, so the second row adds no coverage. Consider removing it (or actually exercising the WS variant via `chatGPTOAuthWebSocketURL`, which you already cover separately in `TestChatGPTOAuthWebSocketURL`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@core/providers/openai/chatgpt_oauth_test.go` around lines 371 - 372, The table contains a redundant entry ("responses WebSocket") that maps "/v1/responses" → "/responses" but chatGPTOAuthPath does not accept an upgrade/scheme so it adds no coverage; remove the second row {"responses WebSocket", "/v1/responses", "/responses"} from the table in chatgpt_oauth_test.go, or alternatively replace that row with a case that actually exercises the WebSocket variant by calling chatGPTOAuthWebSocketURL (which you already test in TestChatGPTOAuthWebSocketURL) so the test either avoids duplication or explicitly validates the WS URL generation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@core/providers/openai/chatgpt_oauth_test.go`:
- Around line 35-37: captureLogger.Warn currently treats msg as a printf format
and calls fmt.Sprintf(msg, args...), which breaks structured calls; change
captureLogger.Warn to avoid treating msg as a format string: if len(args) == 0
append msg to l.warns, otherwise append msg + " " + fmt.Sprint(args...) (or join
key/value pairs via fmt.Sprint) so structured logger.Warn("msg","k",v) records a
stable, readable string; update the Warn method (and references to l.warns) to
use this non‑formatting behavior.
- Around line 371-372: The table contains a redundant entry ("responses
WebSocket") that maps "/v1/responses" → "/responses" but chatGPTOAuthPath does
not accept an upgrade/scheme so it adds no coverage; remove the second row
{"responses WebSocket", "/v1/responses", "/responses"} from the table in
chatgpt_oauth_test.go, or alternatively replace that row with a case that
actually exercises the WebSocket variant by calling chatGPTOAuthWebSocketURL
(which you already test in TestChatGPTOAuthWebSocketURL) so the test either
avoids duplication or explicitly validates the WS URL generation.
In `@core/providers/openai/chatgpt_oauth.go`:
- Around line 68-90: Replace the allocation-heavy prefix check in
ExtractChatGPTOAuthBearerToken: instead of
strings.HasPrefix(strings.ToLower(authHeader), "bearer "), guard with
len(authHeader) >= 7 and use strings.EqualFold(authHeader[:7], "bearer ") to do
a case-insensitive prefix check without creating a lowercased copy; keep the
existing empty checks and return strings.TrimSpace(authHeader[7:]) as before.
- Around line 171-214: The JWT extraction works but the sanitization in
extractChatGPTAccountID is too narrow; replace the
strings.ContainsAny(accountID, "\r\n") check with a stricter validation that
iterates the accountID runes and rejects any character outside the visible ASCII
range (allow only bytes 0x20 (space) through 0x7E '~'), returning an error like
"chatgpt_account_id contains invalid characters"; update the check near the end
of extractChatGPTAccountID to perform this rune/byte-range validation on
accountID instead of the current newline-only check.
- Around line 253-303: The chatGPTOAuthPrepare function is dead code (only used
in tests) and its comment is inaccurate; remove the chatGPTOAuthPrepare function
and any unit tests that reference it, and update or remove the misleading
comment that claims it is "the single entry point for all ChatGPT OAuth
header/path logic." Ensure existing production helpers chatGPTOAuthMergeHeaders
and chatGPTOAuthApplyRequest remain unchanged and that any tests now exercise
those two functions and buildRequestURL/buildFullURL (which call
chatGPTOAuthPath) instead of the removed helper.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 37c536ae-eea0-475d-9b57-790bb27b67d5
📒 Files selected for processing (4)
- core/providers/openai/chatgpt_oauth.go
- core/providers/openai/chatgpt_oauth_test.go
- core/providers/openai/openai.go
- core/providers/openai/types.go
🚧 Files skipped from review as they are similar to previous changes (1)
- core/providers/openai/openai.go
|
In my opinion should be ready for manual review. I fixed everything the Coderabbit and Greptile found (while keeping KISS) and double checked it myself manually (with my limited point of view as a new contributor) Attached is a recommended verification task list that could be tackled in the Review from the Maintainer side! Cheers! Manual Confirmation Tasks — for Reviewers
|
|
@akshaydeo would be cool if you could have a look - for us this is a hard blocker in regards to an enterprise license (I'm responsible for technical judgement in that regard - roundabout 40 engineers and up) |
|
After the recent merges this has conflicts - I'm working to resolve them and test this further until there is a response. Will also try to move any bugfixes out of this and creating issues for them ... so at least they can be merged and do not block this by becoming outdated again. |
|
Pushed - branch now at the tip below. What was done in this pass:
Codex end-to-end verified: Codex CLI 0.123.0 → Bifrost.
Next up: evaluating whether the individual fixes above can be cleanly extracted into their own issues / separate PRs (log-finalize plumbing for WS, idle-timeout watchdog, blocklist hardening are independent of the OAuth feature itself). Will triage as we go.
Possible Issue extraction from this PR
Routing context learned from #2959: unified
Update 2026-04-27: most extractions superseded upstream
Maintainer PR #3018 ("fix: native WebSocket /responses reliability issues" by
Still open (not covered by #3018): #2995, #2993, #2996. My drafts #3006, #3007, #3014 remain the proposed fixes for these and are still in draft awaiting maintainer review.
Branch state
Build passes, tests pass on affected packages, no dead code leftover. #3018 is already merged to upstream main (2026-04-24). To absorb it, this branch needs a rebase onto upstream main, which will cancel the 8 removal commits against #3018's additions and restore the WS pipeline functionality on the feature branch. The 3 still-open issues remain blockers for the end-to-end OAuth Codex demo until they land.
Recommendation: hold #2775 as a WIP draft until the 3 remaining drafts land or a maintainer addresses them in a follow-up, then rebase and re-verify.
Force-pushed 9a2ea1d to b58517e.
Fixed commit signing and pushed the existing commits with proper signatures on my side in between.
…passthrough # Conflicts: # docs/cli-agents/codex-cli.mdx
…ximhq#3006) This fix for maximhq#2995 was originally included in this feature branch as a side effect. It has been extracted into a focused PR maximhq#3006 against main that implements the narrower path-sniff approach requested by the maintainer on maximhq#2959 (only the integration path defaults to OpenAI, the unified path stays strict). Restoring the empty default at both call sites here so this branch no longer overlaps with maximhq#3006.
This fix for maximhq#2993 was originally included in this feature branch as a side effect. It has been extracted into a focused PR maximhq#3007 against main. Reverting the changes here so the feature branch no longer overlaps.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The client-upgrade header forwarding in tryNativeWSUpstream (passing auth.clientHeaders to mergeClientWSHeaders) was the feature branch approximation of the x-bf-eh-* forwarding mechanism described in issue maximhq#2996. That forwarding has been extracted as a focused fix in PR maximhq#3014 against main, which adds mergeWSExtraHeaders to read BifrostContextKeyExtraHeaders from the request context and merge those headers before pool.Get. Remove auth.clientHeaders from the mergeClientWSHeaders call so the feature branch no longer carries duplicate coverage for that path. The chatgpt_oauth identity-defaults injection (originator, version) via injectCodexDefaults is retained as it is specific to the OAuth feature and not part of the extracted fix.
…ed to maximhq#3016) Remove the terminalPostHookFired guard, extractStreamEventType fallback, synthesizeTerminalStreamResponse helper, and associated tests from the feature branch. These were extracted into focused PR maximhq#3016 against maximhq/bifrost main. The remaining diagnostic frame logging, wsFramePreview, and diag variables are preserved as they belong to a separate extraction.
The WebSocketResponsesRequest classification in ProcessStreamingResponse was extracted to maximhq/bifrost PR maximhq#3020 (fixes issue maximhq#3001). Removing the duplicate from this feature branch to avoid a merge conflict and keep the feature branch focused on OAuth passthrough concerns. Extraction commit: 606f47b
…ure-branch duplicate) The liveness probe for stale session-pinned WS pool connections (maximhq#3002) was implemented directly on fix/3002-pool-stale-session-conn off main. The feature branch never contained a parallel implementation of this probe, so there is no code to remove. This commit marks the extraction as complete and links the focused fix to this branch for traceability. Related: maximhq#3021, maximhq#3002 Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…-branch duplicate)
Extraction done, review done too. I will wait until the extracted issues have been resolved before continuing further. I noticed @danpiths is working on most of them, hence I left all the PRs in draft status as duplicates just in case. This way this PR stays clean; I will then see if it can be made smaller, but looking at the code myself it is already clean and heading in a good direction (from my limited perspective).
OAuth-path (screenshots attached)
Bug: Compact is not working yet.
Summary
Alternative / smaller implementation to #2715, as an optional OAuth toggle in the OpenAI provider.
Motivation
Codex CLI users with a ChatGPT subscription want to route through Bifrost for logging/governance without paying separately for API credits. Previously, pointing Codex's
`openai_base_url` at Bifrost failed because:
- the OAuth access token only carries ChatGPT scopes (`openid profile email offline_access`), which OpenAI's API gateway rejects with 401 "Missing scopes: api.responses.write" when received through a proxy
- the subscription backend is `chatgpt.com/backend-api/codex/responses`, which expects different headers, body format, and paths

Prior art: the community openai-oauth project implements this as a standalone localhost proxy. This PR integrates the same approach natively into Bifrost.
For the Recommended Test-List for Reviewers see:
#2775 (comment)
Type of change
Affected areas
How to test
Describe the steps to validate this change. Include commands and expected outcomes.
Build local Docker:
If adding new configs or environment variables, document them here.
Breaking changes
Related Issues
Changes
Adds a ChatGPT OAuth Passthrough toggle on the OpenAI provider that lets users route Codex CLI (or any ChatGPT OAuth client) through Bifrost to the ChatGPT backend API, using their subscription (Plus/Pro/Business/Enterprise) instead of a paid API key.
When the toggle is enabled on the OpenAI provider, Bifrost transparently:
- rewrites `api.openai.com/v1/*` → `chatgpt.com/backend-api/codex/*` (strips the `/v1` prefix)
- takes the client's `Authorization` header and forwards it upstream (no `allow_direct_keys` required)
- injects the `chatgpt-account-id` header (extracted from the JWT's `https://api.openai.com/auth.chatgpt_account_id` claim)
- adds `OpenAI-Beta: responses=experimental`
- transforms the `/responses` body: ensures `instructions` exists, forces `store=false`, removes `max_output_tokens`, forces `stream=true` (the ChatGPT backend rejects non-streaming)
- appends the `client_version` query param on `/models`
/responses(HTTP + WebSocket)/models/chat/completions/embeddings/audio/*/images/*/files*/batches*/containers*/videos*/responses/compact/responses/input_tokens/realtime/*.Design
All ChatGPT-specific logic lives in a single self-contained module:
core/providers/openai/chatgpt_oauth.go. Call sites inopenai.go/websocket.goare one-liners that delegate to helpers — no ChatGPT-specific code leaks out of the module.extractChatGPTAccountID(jwt)— base64url-decode JWT, extract account ID fromhttps://api.openai.com/auth.chatgpt_account_idclaim. Rejects newlines/CR (header injection defense).chatGPTOAuthPath(standardPath)— strips/v1prefix; appends?client_version=to/modelsautomatically.transformChatGPTResponsesBody(body)— applies the four required body mutations.chatGPTOAuthMergeHeaders(...)— merges OAuth headers into existing extra headers.chatGPTOAuthApplyRequest(...)— one-call wrapper for HTTP/responses(headers + body transformer).chatGPTOAuthWebSocketURL/chatGPTOAuthWebSocketHeaders— mirrors for upstream WebSocket.Integration points (tiny):
OpenAIProvider.buildRequestURLauto-applieschatGPTOAuthPathwhenchatgptOAuthis true → every URL-using method is fixed in one place.OpenAIProvider.effectiveExtraHeaders(key)auto-merges OAuth headers → replaces all 30+ direct references toprovider.networkConfig.ExtraHeaders.core/bifrost.goauto-injects the Bearer token as a direct key whenchatgpt_oauthis enabled, so users don't need to toggle the globalallow_direct_keyssecurity setting.Configuration
Three new fields:
OpenAIConfig.ChatGPTOAuth bool(Go schema)openai_config.chatgpt_oauth(JSON / DB JSON blob — no migration needed)User config in
~/.codex/config.toml:No
env_key,OPENAI_API_KEY,openai/model prefix, orallow_direct_keystoggle required.Extracted side-effect fixes
Side-effect fixes found during this work have been extracted into their own focused PRs and removed from this branch. See the triage table in the PR comment at #2775 (comment) for the full map of issue, extraction PR, and removal commit per finding.
Security
\ror\nare rejected to prevent HTTP header injection (3 dedicated test cases).base_urlis configured (defaulthttps://chatgpt.com/backend-api/codex).Test Plan
Automated (already green on this branch)
go test ./providers/openai/... -count=1(50+ test cases, 100% coverage onchatgpt_oauth.go— every public function)go vet,gofmt— cleanManual — ChatGPT OAuth path (author verified)
- `openai_base_url = "http://localhost:8080/openai/v1"`, no `env_key`, no API key in env
- log in via `codex` → OAuth browser flow
- run `/model` in Codex — verify the model list populates from `chatgpt.com/backend-api/codex/models`
- pick a model (e.g. `gpt-5.3-codex`) — verify inference works end-to-end via WebSocket

Files Changed
- `core/schemas/provider.go` — adds `ChatGPTOAuth bool` to `OpenAIConfig`
- `core/providers/openai/chatgpt_oauth.go`
- `core/providers/openai/chatgpt_oauth_test.go`
- `core/providers/openai/openai.go`
- `core/providers/openai/websocket.go`
- `core/bifrost.go` — direct-key injection when `chatgpt_oauth` is on
- `core/providers/{azure,fireworks,openrouter,xai}/*.go` — pass `nil` to the new `bodyTransformer` param on the shared streaming helper (no behavior change)
- `ui/lib/types/config.ts`, `ui/lib/types/schemas.ts`, `ui/app/workspace/providers/fragments/openaiConfigFormFragment.tsx`
- `docs/cli-agents/codex-cli.mdx`

Compatibility
- Defaults to `false`; when off, the provider behaves exactly as before.
- `OpenAIConfig` is stored as a JSON blob (`open_ai_config_json` column) and new fields with zero-value defaults deserialize cleanly.
- The `HandleOpenAIResponsesRequest` and `HandleOpenAIResponsesStreaming` helpers gained a trailing `bodyTransformer func([]byte) ([]byte, error)` parameter; all external callers pass `nil`.

🤖 Generated with Claude Code
Addendum, 2026-04-23 / 2026-04-24 (main merge + extraction of side-effect fixes)
Merged
`main`, discovered and fixed several latent WS-pipeline bugs during live Codex OAuth testing, then extracted each latent fix into its own focused upstream PR so this branch stays scoped to the ChatGPT OAuth feature itself.
Merge
`0d9e4130d`, resolved conflicts in `core/bifrost.go`, `core/providers/azure/azure.go`, `core/providers/openai/openai.go`, `core/schemas/provider.go`, `ui/.../openaiConfigFormFragment.tsx`, `ui/lib/types/schemas.ts`. Main's refactors adopted, ChatGPT OAuth feature preserved.
`origin/main` merge (`a9acea33b`).
What stays on this branch
OAuth-specific work only:
- Codex identity defaults (`originator: codex_cli_rs`, `version` fallback) injected only when the client sent none, and only on the `chatgpt_oauth` path. Standard `api.openai.com` WS is unaffected.
- Diagnostic WS frame logging (`ws upstream frame`, `ws upstream stream ended`) added for diagnosis during OAuth bring-up. Recommendation before merge: gate behind an env flag (`BIFROST_WS_TRACE_FRAMES=1`) or remove, because at `LOG_LEVEL=debug` it captures raw upstream payload, which can be sensitive.

What was extracted and removed from this branch
Each of the following latent fixes was discovered while debugging the OAuth WS path but is orthogonal to the OAuth feature. Each has its own focused upstream PR, and a matching
chore: remove ... (extracted to #NNNN) commit on this branch so the feature branch does not overlap the upstream PRs. Full mapping of issue, PR, and removal commit is in the triage comment at #2775 (comment).
- `/openai/v1/responses`
- `/openai/v1/models` provider fanout
- `x-bf-eh-*` header forwarding on the WS upgrade path
- `ReadMessage` idle timeout
- `tracer.CompleteAndFlushTrace()` on WS final chunks
- `convertToProcessedStreamResponse` WS case
- `isResponsesStreaming` WS case

Merge order dependency
Because the extracted fixes were removed from this branch, the OAuth feature cannot be demoed end-to-end against a real ChatGPT Codex client at the current tip. The Codex CLI sends bare model strings (e.g.
`gpt-5-codex`), which requires PR #3006 to land on main first. After that, the WS-logging cluster (#3013, #3014, #3016, #3017, #3019, #3020) should land together to restore log finalization and the WS idle-timeout watchdog. Once those are on main, rebase this branch and re-run the manual Codex round trip before unmarking draft.
Previously verified end-to-end (before extraction)
These results were produced when the side-effect fixes were still present on this branch. They will hold again after the extracted PRs land on main and this branch is rebased.
(`chatgpt_oauth` on, no `allow_direct_keys`) to chatgpt.com/backend-api/codex/responses via native WebSocket.
Log status moves from `processing` to `success` with proper latency, token counts, and request/response bodies when "Store raw request/response" is on.