feat: plugin interface and tests added #2
Closed
Pratham-Mishra04 wants to merge 1 commit into graphite-base/2 from
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite. This stack of pull requests is managed by Graphite.
thiscantbeserious added a commit to thiscantbeserious/bifrost that referenced this pull request on Apr 17, 2026:
CodeRabbit round 2:
10. /models query-param fallback now checks for existing client_version
before injecting. Preserves caller-supplied params like foo=bar and
picks up caller's own client_version when present. Uses simple
string split (no net/url allocation) since paths are well-formed.
11. chatGPTOAuthWebSocketHeaders now routes through
mergeHeadersCaseInsensitive so caller-supplied headers with
different casing (OPENAI-BETA, Openai-Beta, etc.) can't cause
duplicate-case headers reaching the wire — OAuth values and the
Authorization bearer always win deterministically.
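The case-insensitive merge in item 11 could be sketched as follows. `mergeHeadersCaseInsensitive` is named in the commit message, but this body is an illustrative assumption, using the standard library's canonical MIME key normalization so that differently-cased caller keys collapse into one entry and the OAuth/Authorization overrides are applied last:

```go
package main

import (
	"fmt"
	"net/textproto"
)

// mergeHeadersCaseInsensitive is the name from the commit message; this
// implementation is a sketch. Caller headers are normalized to canonical
// form first, then override values (e.g. OAuth headers and the
// Authorization bearer) are applied last so they win deterministically.
func mergeHeadersCaseInsensitive(caller, overrides map[string]string) map[string]string {
	merged := make(map[string]string, len(caller)+len(overrides))
	for k, v := range caller {
		// "OPENAI-BETA", "Openai-Beta", and "openai-beta" all collapse
		// to the single canonical key "Openai-Beta".
		merged[textproto.CanonicalMIMEHeaderKey(k)] = v
	}
	for k, v := range overrides {
		merged[textproto.CanonicalMIMEHeaderKey(k)] = v // override wins
	}
	return merged
}

func main() {
	out := mergeHeadersCaseInsensitive(
		map[string]string{"OPENAI-BETA": "caller", "X-Custom": "keep"},
		map[string]string{"Openai-Beta": "oauth", "Authorization": "Bearer token"},
	)
	fmt.Println(out["Openai-Beta"], out["X-Custom"], len(out))
}
```

Because both maps are normalized through the same canonicalizer, no duplicate-case keys can reach the wire regardless of how the caller spelled them.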
Greptile findings:
P1 maximhq#1 Non-streaming Responses() path: return a clear BifrostOperationError
pointing callers to ResponsesStream when chatgpt_oauth is on.
The ChatGPT backend only accepts stream=true (matching the
openai-oauth reference proxy behavior), so non-streaming cannot
work via Bifrost without server-side SSE aggregation which is
out of scope for this PR. In practice Codex always streams.
P1 maximhq#2 /models response format: OpenAIListModelsResponse now has a
UnmarshalJSON that accepts BOTH the standard OpenAI shape
({"data":[{id,...}]}) and the ChatGPT backend shape
({"models":[{slug}]}). ChatGPT entries are projected into
OpenAIModel with ID=slug, Object="model", OwnedBy="chatgpt-oauth".
P2 maximhq#3 client_version doc comment: corrected — the fallback only
applies when the inbound path reaches chatGPTOAuthPath without
a caller-supplied client_version (previously the doc implied
forwarding logic that didn't exist; it now does).
P2 maximhq#4 store doc comment: corrected from "sets store to false if not
already present" to "forces store to false" to match the code
and explanation.
P2 maximhq#5 Double JWT decode per Responses/ResponsesStream: fixed by
passing raw networkConfig.ExtraHeaders to chatGPTOAuthApplyRequest
instead of the already-merged effectiveExtraHeaders(key). The
merge now happens exactly once per call.
Test coverage: 100% on every function in chatgpt_oauth.go. Added tests
for query-param preservation, models dual-shape parsing, and the
streaming-only sentinel error.
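The `client_version` fallback from item 10 might look like the sketch below. The function name and fallback value are illustrative assumptions; only the technique (a plain string split instead of a `net/url` parse, since the proxied paths are well-formed) comes from the commit message:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureClientVersion is a hypothetical name for the fallback described in
// the commit message: inject client_version only when the caller did not
// supply one, preserving any existing query params. A strings split avoids
// a net/url allocation since the paths are well-formed.
func ensureClientVersion(path, fallback string) string {
	base, query, hasQuery := strings.Cut(path, "?")
	if hasQuery {
		for _, kv := range strings.Split(query, "&") {
			key, _, _ := strings.Cut(kv, "=")
			if key == "client_version" {
				return path // caller supplied their own; leave untouched
			}
		}
		// Preserve caller-supplied params like foo=bar, then append ours.
		return base + "?" + query + "&client_version=" + fallback
	}
	return base + "?client_version=" + fallback
}

func main() {
	fmt.Println(ensureClientVersion("/models", "1.0.0"))
	fmt.Println(ensureClientVersion("/models?foo=bar", "1.0.0"))
	fmt.Println(ensureClientVersion("/models?client_version=2.0", "1.0.0"))
}
```

The three cases in `main` correspond to the tests the commit message mentions: no query, caller params preserved, and a caller-supplied `client_version` left untouched.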
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
thiscantbeserious added a commit to thiscantbeserious/bifrost that referenced this pull request on Apr 24, 2026.

No description provided.