
Adds handlers for SDKs #60

Closed
akshaydeo wants to merge 4 commits into main from codex/add-middleware-for-openai-and-other-sdks

Conversation

@akshaydeo
Contributor

@akshaydeo akshaydeo commented Jun 5, 2025

Adds handler routes for the following SDKs:

  • OpenAI
  • Anthropic
  • Mistral
  • Langchain
  • Langgraph
  • Litellm

@coderabbitai
Contributor

coderabbitai Bot commented Jun 5, 2025

Summary by CodeRabbit

  • New Features

    • Added support for Anthropic, LangChain, LangGraph, LiteLLM, Mistral, and OpenAI chat completion API endpoints, enabling integration with multiple AI model providers.
    • Introduced request formats compatible with each provider, allowing flexible chat completion requests through standardized endpoints.
  • Tests

    • Added comprehensive tests for request conversion logic for each new provider integration to ensure correct request handling and compatibility.


Walkthrough

Multiple new HTTP routers and request types were added to support Anthropic, LangChain, LangGraph, LiteLLM, Mistral, and OpenAI chat completion APIs. Each router registers a POST endpoint, unmarshals JSON requests, converts them to a common Bifrost request format, invokes the Bifrost client, and returns JSON responses with appropriate error handling. Corresponding tests verify request conversion logic.
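The router pattern described above can be sketched as follows. This is a minimal illustration only, using the standard net/http package and hypothetical stand-in types; the names ChatCompletionRequest, BifrostRequest, and ConvertToBifrostRequest mirror the PR, but the real code uses the Bifrost schemas, client, and router framework.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// ChatCompletionRequest is a simplified stand-in for a provider-specific payload.
type ChatCompletionRequest struct {
	Model    string   `json:"model"`
	Messages []string `json:"messages"`
}

// BifrostRequest is a simplified stand-in for the common request format.
type BifrostRequest struct {
	Provider string   `json:"provider"`
	Model    string   `json:"model"`
	Messages []string `json:"messages"`
}

// ConvertToBifrostRequest maps the provider-specific payload to the common
// format; an empty model argument falls back to the request's own model field.
func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *BifrostRequest {
	if model == "" {
		model = r.Model
	}
	return &BifrostRequest{Provider: "openai", Model: model, Messages: r.Messages}
}

func main() {
	// Each integration registers one POST endpoint following this shape:
	// unmarshal JSON, convert to the common format, invoke the client, respond.
	http.HandleFunc("/openai/v1/chat/completions", func(w http.ResponseWriter, req *http.Request) {
		if req.Method != http.MethodPost {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		var in ChatCompletionRequest
		if err := json.NewDecoder(req.Body).Decode(&in); err != nil {
			http.Error(w, "invalid request body", http.StatusBadRequest)
			return
		}
		bfReq := in.ConvertToBifrostRequest("")
		// A real handler would invoke the Bifrost client here and return its response.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(bfReq)
	})
	fmt.Println("router registered") // a real server would then call http.ListenAndServe
}
```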

Changes

Files (Grouped by Integration) Change Summary
.../anthropic/router.go Added AnthropicRouter with route /anthropic/v1/messages handling chat completion requests.
.../anthropic/types.go, .../anthropic/types_test.go Defined ChatCompletionRequest struct and conversion method; added tests for conversion correctness.
.../langchain/router.go Added LangChainRouter with route /langchain/v1/chat/completions.
.../langchain/types.go, .../langchain/types_test.go Defined LangChain ChatCompletionRequest struct and conversion method; added conversion tests.
.../langgraph/router.go Added LangGraphRouter with route /langgraph/v1/chat/completions.
.../langgraph/types.go, .../langgraph/types_test.go Defined LangGraph ChatCompletionRequest struct and conversion method; added conversion tests.
.../litellm/router.go Added LiteLLMRouter with route /litellm/v1/chat/completions.
.../litellm/types.go, .../litellm/types_test.go Defined LiteLLM ChatCompletionRequest struct and conversion method; added conversion tests.
.../mistral/router.go Added MistralRouter with route /mistral/v1/chat/completions.
.../mistral/types.go, .../mistral/types_test.go Defined Mistral ChatCompletionRequest struct and conversion method; added conversion tests.
.../openai/router.go Added OpenAIRouter with route /openai/v1/chat/completions.
.../openai/types.go, .../openai/types_test.go Defined OpenAI ChatCompletionRequest struct and conversion method; added conversion tests.
transports/bifrost-http/main.go Extended extensions slice to register all new routers; added model provider constants.

Suggested reviewers

  • danpiths

Poem

🐇 Hopping through code with a joyful cheer,
New routers and types all appear!
Anthropic, LangChain, and LiteLLM too,
Mistral and OpenAI join the crew.
Messages flow, requests convert right,
Bifrost’s now ready to handle the fight!
🐰✨



@akshaydeo akshaydeo removed the codex label Jun 5, 2025
@akshaydeo akshaydeo changed the title from "Add tests for integration middleware conversions" to "Adds handlers for SDKs" Jun 5, 2025
@coderabbitai coderabbitai Bot requested a review from danpiths June 5, 2025 06:20
@akshaydeo akshaydeo marked this pull request as ready for review June 5, 2025 06:23
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 18

♻️ Duplicate comments (9)
transports/bifrost-http/integrations/mistral/types.go (1)

7-20: Duplicate comment - same code duplication issue as Anthropic.

This struct is identical to the Anthropic integration. Refer to the refactoring suggestion in the Anthropic file review.

transports/bifrost-http/integrations/litellm/types.go (1)

7-20: Duplicate comment - same code duplication issue.

This struct is identical to the other integrations. Refer to the refactoring suggestion in the Anthropic file review.

transports/bifrost-http/integrations/langgraph/types.go (1)

7-20: Duplicate comment - same code duplication issue.

This struct is identical to the other integrations. Refer to the refactoring suggestion in the Anthropic file review.

transports/bifrost-http/integrations/litellm/router.go (2)

30-34: Improve error handling to avoid exposing internal details.

Directly encoding Go errors to JSON responses can expose internal implementation details. Use structured error responses instead.


40-44: Apply consistent error handling for Bifrost client errors.

Same error handling issue - avoid exposing internal error details to clients.

transports/bifrost-http/integrations/langchain/router.go (2)

30-34: Improve error handling to avoid exposing internal details.

Directly encoding Go errors can expose implementation details to clients.


40-44: Apply consistent error handling for Bifrost client errors.

Avoid exposing internal error details in API responses.

transports/bifrost-http/integrations/anthropic/router.go (1)

30-34: Apply the same error handling improvements.

The same error response format issues identified in the Mistral router apply here.

Also applies to: 40-44

transports/bifrost-http/integrations/openai/router.go (1)

30-34: Apply consistent error handling across all routers.

The same error response format issues apply to this router as well.

Also applies to: 40-44

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ac54c86 and 996446b.

📒 Files selected for processing (19)
  • transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
  • transports/bifrost-http/integrations/anthropic/types_test.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/router.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/types.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/types_test.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/types_test.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/types.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/types_test.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/router.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/types.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/types_test.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/types.go (1 hunks)
  • transports/bifrost-http/integrations/openai/types_test.go (1 hunks)
  • transports/bifrost-http/main.go (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
transports/bifrost-http/integrations/openai/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#54
File: core/schemas/bifrost.go:46-49
Timestamp: 2025-06-04T09:22:18.093Z
Learning: In core/schemas/bifrost.go, the RequestInput struct uses ChatCompletionInput *[]BifrostMessage (pointer-to-slice) rather than []BifrostMessage to properly represent union type semantics. For text completion requests, ChatCompletionInput should be nil to indicate "no chat payload at all", while for chat completion requests it should be non-nil (even if empty slice). This distinguishes between different request types rather than just empty vs non-empty chat messages.
transports/bifrost-http/integrations/langgraph/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#54
File: core/schemas/bifrost.go:46-49
Timestamp: 2025-06-04T09:22:18.093Z
Learning: In core/schemas/bifrost.go, the RequestInput struct uses ChatCompletionInput *[]BifrostMessage (pointer-to-slice) rather than []BifrostMessage to properly represent union type semantics. For text completion requests, ChatCompletionInput should be nil to indicate "no chat payload at all", while for chat completion requests it should be non-nil (even if empty slice). This distinguishes between different request types rather than just empty vs non-empty chat messages.
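The pointer-to-slice union semantics described in these learnings can be shown with a self-contained sketch. The field names mirror the review text, but the types here are simplified stand-ins, not the real core/schemas definitions: a nil ChatCompletionInput means "no chat payload at all", while a non-nil (even empty) slice marks a chat completion request.

```go
package main

import "fmt"

// BifrostMessage is a simplified stand-in for the real schema type.
type BifrostMessage struct {
	Role    string
	Content string
}

// RequestInput uses *[]BifrostMessage rather than []BifrostMessage so that
// nil (not a chat request) is distinguishable from an empty slice (a chat
// request with zero messages).
type RequestInput struct {
	TextCompletionInput *string
	ChatCompletionInput *[]BifrostMessage
}

func kind(in RequestInput) string {
	if in.ChatCompletionInput != nil {
		return fmt.Sprintf("chat request with %d messages", len(*in.ChatCompletionInput))
	}
	return "text completion request"
}

func main() {
	prompt := "hello"
	text := RequestInput{TextCompletionInput: &prompt} // ChatCompletionInput left nil
	empty := RequestInput{ChatCompletionInput: &[]BifrostMessage{}}
	one := RequestInput{ChatCompletionInput: &[]BifrostMessage{{Role: "user", Content: "hi"}}}

	fmt.Println(kind(text))  // text completion request
	fmt.Println(kind(empty)) // chat request with 0 messages
	fmt.Println(kind(one))   // chat request with 1 messages
}
```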
🧬 Code Graph Analysis (5)
transports/bifrost-http/integrations/langgraph/types_test.go (7)
transports/bifrost-http/integrations/anthropic/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/langchain/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/litellm/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/openai/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/langgraph/types.go (1)
  • ChatCompletionRequest (7-20)
core/schemas/bifrost.go (3)
  • BifrostMessage (143-152)
  • ModelChatMessageRoleUser (24-24)
  • OpenAI (34-34)
core/utils.go (1)
  • Ptr (3-5)
transports/bifrost-http/integrations/mistral/types_test.go (6)
transports/bifrost-http/integrations/anthropic/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/langchain/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/openai/types_test.go (1)
  • TestConvertToBifrostRequest (10-34)
transports/bifrost-http/integrations/mistral/types.go (1)
  • ChatCompletionRequest (7-20)
core/schemas/bifrost.go (2)
  • BifrostMessage (143-152)
  • ModelChatMessageRoleUser (24-24)
core/utils.go (1)
  • Ptr (3-5)
transports/bifrost-http/integrations/litellm/router.go (2)
transports/bifrost-http/integrations/litellm/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/mistral/router.go (2)
transports/bifrost-http/integrations/mistral/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/langchain/router.go (3)
core/bifrost.go (1)
  • Bifrost (39-51)
transports/bifrost-http/integrations/langchain/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
🪛 golangci-lint (1.64.8)
transports/bifrost-http/main.go

37-37: could not import github.com/maximhq/bifrost/transports/bifrost-http/integrations/mistral (-: # github.com/maximhq/bifrost/transports/bifrost-http/integrations/mistral
bifrost-http/integrations/mistral/types.go:41:21: undefined: schemas.Mistral)

(typecheck)

🔇 Additional comments (9)
transports/bifrost-http/integrations/openai/types.go (1)

1-21: LGTM! Well-designed integration structure.

The ChatCompletionRequest struct appropriately mirrors the OpenAI API with comprehensive field coverage and proper JSON tags. The use of pointer types for optional fields follows Go best practices.

transports/bifrost-http/integrations/langgraph/types_test.go (1)

10-34:


✅ Verification successful

Verify the provider assignment for LangGraph.

The test structure and validation logic are well-implemented. However, please confirm that using schemas.OpenAI as the provider for LangGraph is intentional, as this differs from integrations like Anthropic which use their own provider constants.


🏁 Script executed:

#!/bin/bash
# Description: Check if LangGraph should use OpenAI provider or have its own provider constant
# Expected: Verification of provider constants available and their intended usage

# Search for provider constants in schemas
ast-grep --pattern 'const (
  $$$
)'

# Search for any LangGraph-specific provider constant
rg -A 3 -B 3 "LangGraph.*ModelProvider"

Length of output: 4795


🏁 Script executed:

#!/bin/bash
set -e

echo "===== LangGraph ConvertToBifrostRequest implementation ====="
rg -n "ConvertToBifrostRequest" -A3 -B3 transports/bifrost-http/integrations/langgraph

echo
echo "===== LangGraph provider field assignment ====="
rg -n "Provider" -A3 -B3 transports/bifrost-http/integrations/langgraph

echo
echo "===== Anthropic ConvertToBifrostRequest implementation (for comparison) ====="
rg -n "ConvertToBifrostRequest" -A3 -B3 transports/bifrost-http/integrations/anthropic

echo
echo "===== Anthropic provider field assignment ====="
rg -n "Provider" -A3 -B3 transports/bifrost-http/integrations/anthropic

Length of output: 8880


Provider assignment confirmed for LangGraph.

The ConvertToBifrostRequest implementation in transports/bifrost-http/integrations/langgraph/types.go explicitly sets Provider: schemas.OpenAI, matching the test in types_test.go. This aligns with LangGraph’s use of OpenAI under the hood and is consistent with how Anthropic sets its provider to schemas.Anthropic.

No changes needed.

transports/bifrost-http/main.go (1)

185-193: LGTM! Clean integration router registration.

The router registration pattern is consistent and well-structured, enabling unified handling of multiple AI provider integrations.

transports/bifrost-http/integrations/anthropic/types_test.go (1)

10-34: LGTM! Excellent test implementation with correct provider usage.

The test correctly uses schemas.Anthropic as the provider and thoroughly validates the conversion logic including model, temperature, and message handling. This demonstrates the proper pattern for provider-specific integrations.

transports/bifrost-http/integrations/mistral/types.go (1)

41-41: Correct provider mapping for Mistral.

Good choice mapping to schemas.Mistral provider for Mistral API requests.

transports/bifrost-http/integrations/litellm/types.go (1)

41-41: Appropriate provider mapping for LiteLLM.

Mapping to schemas.OpenAI is correct since LiteLLM mirrors the OpenAI API as mentioned in the comment.

transports/bifrost-http/integrations/langgraph/types.go (1)

41-44: Correct provider mapping and input handling.

Good choice mapping to schemas.OpenAI for LangGraph's OpenAI-compatible API. The usage of &r.Messages correctly follows the pointer-to-slice pattern for union type semantics as indicated in the retrieved learning.

transports/bifrost-http/integrations/langgraph/router.go (1)

36-36:


✅ Verification successful

Verify the empty model parameter usage.

The empty string passed to ConvertToBifrostRequest("") means the model will default to req.Model. Ensure this behavior aligns with LangGraph's expected model handling.


🏁 Script executed:

#!/bin/bash
# Check if there are any model override scenarios or configurations
rg -A 5 -B 5 "ConvertToBifrostRequest.*[^\"\"]\""

Length of output: 12679


🏁 Script executed:

#!/bin/bash
# Locate the definition of ConvertToBifrostRequest to verify its behavior with empty string
rg -n "ConvertToBifrostRequest" -C 3

Length of output: 16921


Empty‐model fallback is correct and covered by tests.

ConvertToBifrostRequest("") falls back to r.Model when the override is empty, as you saw in every provider’s types.go (including LangGraph’s) and confirmed by the existing tests in langgraph/types_test.go. No change needed here.

transports/bifrost-http/integrations/langchain/types.go (1)

23-27: Good model override implementation.

The model parameter override logic provides flexibility while maintaining a sensible default. This allows for runtime model specification while falling back to the request's model field.

Comment on lines +23 to +49
// ConvertToBifrostRequest converts the request to a BifrostRequest.
func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *schemas.BifrostRequest {
	if model == "" {
		model = r.Model
	}
	params := &schemas.ModelParameters{
		Temperature:       r.Temperature,
		TopP:              r.TopP,
		TopK:              r.TopK,
		MaxTokens:         r.MaxTokens,
		StopSequences:     r.StopSequences,
		PresencePenalty:   r.PresencePenalty,
		FrequencyPenalty:  r.FrequencyPenalty,
		ParallelToolCalls: r.ParallelToolCalls,
		Tools:             r.Tools,
		ToolChoice:        r.ToolChoice,
	}

	return &schemas.BifrostRequest{
		Provider: schemas.OpenAI,
		Model:    model,
		Input: schemas.RequestInput{
			ChatCompletionInput: &r.Messages,
		},
		Params: params,
	}
}
Contributor


🧹 Nitpick (assertive)

Add documentation for the model parameter.

The conversion logic is sound and correctly handles the model override functionality. However, the purpose of the model parameter isn't documented.

-// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// If model is provided and non-empty, it overrides the request's model field.
 func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *schemas.BifrostRequest {
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// ConvertToBifrostRequest converts the request to a BifrostRequest.
// If model is provided and non-empty, it overrides the request's model field.
func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *schemas.BifrostRequest {
	if model == "" {
		model = r.Model
	}
	params := &schemas.ModelParameters{
		Temperature:       r.Temperature,
		TopP:              r.TopP,
		TopK:              r.TopK,
		MaxTokens:         r.MaxTokens,
		StopSequences:     r.StopSequences,
		PresencePenalty:   r.PresencePenalty,
		FrequencyPenalty:  r.FrequencyPenalty,
		ParallelToolCalls: r.ParallelToolCalls,
		Tools:             r.Tools,
		ToolChoice:        r.ToolChoice,
	}

	return &schemas.BifrostRequest{
		Provider: schemas.OpenAI,
		Model:    model,
		Input: schemas.RequestInput{
			ChatCompletionInput: &r.Messages,
		},
		Params: params,
	}
}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go around lines 23 to 49,
add a comment above the ConvertToBifrostRequest function to document the model
parameter. Explain that the model parameter allows overriding the
ChatCompletionRequest's default model, and if left empty, the request's own
model field will be used. This clarifies the purpose of the parameter for future
readers.

Comment thread: transports/bifrost-http/main.go (outdated)
Comment on lines +10 to +34
func TestConvertToBifrostRequest(t *testing.T) {
	temp := 0.5
	req := ChatCompletionRequest{
		Model: "gpt-test",
		Messages: []schemas.BifrostMessage{
			{Role: schemas.ModelChatMessageRoleUser, Content: bifrost.Ptr("hi")},
		},
		Temperature: &temp,
	}

	bfReq := req.ConvertToBifrostRequest("override")

	if bfReq.Provider != schemas.OpenAI {
		t.Errorf("expected provider %s, got %s", schemas.OpenAI, bfReq.Provider)
	}
	if bfReq.Model != "override" {
		t.Errorf("expected model override, got %s", bfReq.Model)
	}
	if bfReq.Params == nil || bfReq.Params.Temperature == nil || *bfReq.Params.Temperature != temp {
		t.Errorf("temperature not copied")
	}
	if bfReq.Input.ChatCompletionInput == nil || len(*bfReq.Input.ChatCompletionInput) != 1 {
		t.Fatalf("expected 1 message, got %v", bfReq.Input.ChatCompletionInput)
	}
}
Contributor


🧹 Nitpick (assertive)

Expand test coverage beyond the basic happy path.

The test correctly validates the core conversion logic, but consider adding test cases for:

  • Edge cases (nil temperature, empty messages, etc.)
  • Multiple messages in the conversation
  • Different message roles and content types
  • Error conditions

Consider extracting the test validation logic into helper functions to reduce duplication across integration tests:

+func assertBifrostRequestDefaults(t *testing.T, bfReq *schemas.BifrostRequest, expectedProvider schemas.Provider, expectedModel string, expectedTemp float64) {
+    if bfReq.Provider != expectedProvider {
+        t.Errorf("expected provider %s, got %s", expectedProvider, bfReq.Provider)
+    }
+    if bfReq.Model != expectedModel {
+        t.Errorf("expected model %s, got %s", expectedModel, bfReq.Model)
+    }
+    if bfReq.Params == nil || bfReq.Params.Temperature == nil || *bfReq.Params.Temperature != expectedTemp {
+        t.Errorf("temperature not copied correctly")
+    }
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func TestConvertToBifrostRequest(t *testing.T) {
	temp := 0.5
	req := ChatCompletionRequest{
		Model: "gpt-test",
		Messages: []schemas.BifrostMessage{
			{Role: schemas.ModelChatMessageRoleUser, Content: bifrost.Ptr("hi")},
		},
		Temperature: &temp,
	}

	bfReq := req.ConvertToBifrostRequest("override")

	if bfReq.Provider != schemas.OpenAI {
		t.Errorf("expected provider %s, got %s", schemas.OpenAI, bfReq.Provider)
	}
	if bfReq.Model != "override" {
		t.Errorf("expected model override, got %s", bfReq.Model)
	}
	if bfReq.Params == nil || bfReq.Params.Temperature == nil || *bfReq.Params.Temperature != temp {
		t.Errorf("temperature not copied")
	}
	if bfReq.Input.ChatCompletionInput == nil || len(*bfReq.Input.ChatCompletionInput) != 1 {
		t.Fatalf("expected 1 message, got %v", bfReq.Input.ChatCompletionInput)
	}
}

func assertBifrostRequestDefaults(t *testing.T, bfReq *schemas.BifrostRequest, expectedProvider schemas.Provider, expectedModel string, expectedTemp float64) {
	if bfReq.Provider != expectedProvider {
		t.Errorf("expected provider %s, got %s", expectedProvider, bfReq.Provider)
	}
	if bfReq.Model != expectedModel {
		t.Errorf("expected model %s, got %s", expectedModel, bfReq.Model)
	}
	if bfReq.Params == nil || bfReq.Params.Temperature == nil || *bfReq.Params.Temperature != expectedTemp {
		t.Errorf("temperature not copied correctly")
	}
}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types_test.go around lines 10 to
34, the current test only covers the basic successful conversion scenario. To
improve coverage, add test cases for edge conditions such as nil temperature,
empty message slices, multiple messages with varied roles and content types, and
any error scenarios the conversion might encounter. Also, refactor the repeated
validation logic into helper functions to keep tests concise and maintainable.

Comment on lines +10 to +34
func TestConvertToBifrostRequest(t *testing.T) {
	temp := 0.5
	req := ChatCompletionRequest{
		Model: "mistral-test",
		Messages: []schemas.BifrostMessage{
			{Role: schemas.ModelChatMessageRoleUser, Content: bifrost.Ptr("hi")},
		},
		Temperature: &temp,
	}

	bfReq := req.ConvertToBifrostRequest("override")

	if bfReq.Provider != schemas.Mistral {
		t.Errorf("expected provider %s, got %s", schemas.Mistral, bfReq.Provider)
	}
	if bfReq.Model != "override" {
		t.Errorf("expected model override, got %s", bfReq.Model)
	}
	if bfReq.Params == nil || bfReq.Params.Temperature == nil || *bfReq.Params.Temperature != temp {
		t.Errorf("temperature not copied")
	}
	if bfReq.Input.ChatCompletionInput == nil || len(*bfReq.Input.ChatCompletionInput) != 1 {
		t.Fatalf("expected 1 message, got %v", bfReq.Input.ChatCompletionInput)
	}
}
Contributor


🧹 Nitpick (assertive)

Test correctly validates Mistral-specific conversion.

The test properly verifies that the provider is set to schemas.Mistral and the conversion logic works as expected. However, this test is nearly identical to other integration tests, indicating code duplication.

Consider creating a shared test helper to reduce duplication across integration test files:

+// In a shared test package
+func TestProviderConversion(t *testing.T, 
+    createRequest func() interface{}, 
+    converter func(interface{}, string) *schemas.BifrostRequest,
+    expectedProvider schemas.Provider,
+    modelOverride string) {
+    // Common test logic here
+}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/mistral/types_test.go around lines 10 to
34, the test for ConvertToBifrostRequest duplicates logic found in other
integration tests. Refactor by extracting common test setup and assertions into
a shared test helper function that can be reused across integration test files.
This helper should accept parameters like the request and expected model
override, perform the conversion, and run the common assertions to reduce code
duplication and improve maintainability.

Comment on lines +20 to +27
bfReq := req.ConvertToBifrostRequest("")

if bfReq.Provider != schemas.OpenAI {
	t.Errorf("expected provider %s, got %s", schemas.OpenAI, bfReq.Provider)
}
if bfReq.Model != "gpt-test" {
	t.Errorf("expected model gpt-test, got %s", bfReq.Model)
}
Contributor


🧹 Nitpick (assertive)

LiteLLM-specific behavior correctly implemented.

The test properly validates that:

  • LiteLLM requests map to schemas.OpenAI provider (acting as proxy)
  • Empty string model override preserves original model name
  • This differs from other integrations that use non-empty override strings

Add comments to clarify the LiteLLM-specific behavior:

+    // LiteLLM acts as a proxy to OpenAI, so no model override needed
     bfReq := req.ConvertToBifrostRequest("")

+    // LiteLLM maps to OpenAI provider
     if bfReq.Provider != schemas.OpenAI {
         t.Errorf("expected provider %s, got %s", schemas.OpenAI, bfReq.Provider)
     }
+    // Empty override string should preserve original model
     if bfReq.Model != "gpt-test" {
         t.Errorf("expected model gpt-test, got %s", bfReq.Model)
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

	bfReq := req.ConvertToBifrostRequest("")

	if bfReq.Provider != schemas.OpenAI {
		t.Errorf("expected provider %s, got %s", schemas.OpenAI, bfReq.Provider)
	}
	if bfReq.Model != "gpt-test" {
		t.Errorf("expected model gpt-test, got %s", bfReq.Model)
	}

After:

	// LiteLLM acts as a proxy to OpenAI, so no model override needed
	bfReq := req.ConvertToBifrostRequest("")

	// LiteLLM maps to OpenAI provider
	if bfReq.Provider != schemas.OpenAI {
		t.Errorf("expected provider %s, got %s", schemas.OpenAI, bfReq.Provider)
	}
	// Empty override string should preserve original model
	if bfReq.Model != "gpt-test" {
		t.Errorf("expected model gpt-test, got %s", bfReq.Model)
	}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types_test.go around lines 20 to
27, add comments explaining that the test verifies LiteLLM requests map to the
schemas.OpenAI provider as a proxy, and that an empty string model override
preserves the original model name, highlighting this behavior differs from other
integrations that use non-empty override strings. This will clarify the
LiteLLM-specific logic for future readers.

Comment thread transports/bifrost-http/integrations/mistral/router.go
return
}

bifrostReq := req.ConvertToBifrostRequest("")
💡 Verification agent

🧩 Analysis chain

Clarify the empty string parameter purpose.

The ConvertToBifrostRequest("") call uses an empty string parameter. Please verify this is intentional or document the purpose of this parameter.


🏁 Script executed:

#!/bin/bash
# Check the ConvertToBifrostRequest method signature and usage across integrations
ast-grep --pattern 'func ($$$) ConvertToBifrostRequest($$$) $$$'

Length of output: 17391


Clarify fallback behavior for the empty-model override

Passing "" into ConvertToBifrostRequest intentionally triggers the fallback to r.Model (see identical logic in all other integrations). To make this clear to future readers, please add a brief comment next to the call:

• File — transports/bifrost-http/integrations/mistral/router.go
Line 36

// empty string → use r.Model as the model override
bifrostReq := req.ConvertToBifrostRequest("")
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/mistral/router.go at line 36, the call
to ConvertToBifrostRequest("") uses an empty string parameter to trigger
fallback to r.Model as the model override. Add a brief comment above this line
explaining that passing an empty string causes the method to use r.Model as the
model override to clarify this behavior for future readers.
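The fallback the review describes can be isolated into a few lines. This is a minimal sketch: `ChatCompletionRequest` is a stand-in with only the `Model` field, and `resolveModel` is a hypothetical name for the logic that each integration's `ConvertToBifrostRequest` applies internally.

```go
package main

import "fmt"

// ChatCompletionRequest is a minimal stand-in for the integration request
// types; only the Model field matters for this sketch.
type ChatCompletionRequest struct {
	Model string
}

// resolveModel mirrors the fallback: a non-empty override wins, while ""
// falls back to the request's own model.
func resolveModel(r *ChatCompletionRequest, override string) string {
	if override != "" {
		return override
	}
	return r.Model
}

func main() {
	req := &ChatCompletionRequest{Model: "mistral-large-latest"}
	fmt.Println(resolveModel(req, ""))              // empty string → use r.Model
	fmt.Println(resolveModel(req, "mistral-small")) // explicit override wins
}
```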

Comment thread transports/bifrost-http/integrations/mistral/router.go
Comment on lines +1 to +49
package anthropic

import (
"encoding/json"

"github.com/fasthttp/router"
bifrost "github.com/maximhq/bifrost/core"
"github.com/maximhq/bifrost/transports/bifrost-http/lib"
"github.com/valyala/fasthttp"
)

// AnthropicRouter holds route registrations for anthropic endpoints.
type AnthropicRouter struct {
client *bifrost.Bifrost
}

// NewAnthropicRouter creates a new AnthropicRouter with the given bifrost client.
func NewAnthropicRouter(client *bifrost.Bifrost) *AnthropicRouter {
return &AnthropicRouter{client: client}
}

// RegisterRoutes registers all anthropic routes on the given router.
func (a *AnthropicRouter) RegisterRoutes(r *router.Router) {
r.POST("/anthropic/v1/messages", a.handleChatCompletion)
}

// handleChatCompletion handles POST /anthropic/v1/messages
func (a *AnthropicRouter) handleChatCompletion(ctx *fasthttp.RequestCtx) {
var req ChatCompletionRequest
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
ctx.SetStatusCode(fasthttp.StatusBadRequest)
json.NewEncoder(ctx).Encode(err)
return
}

bifrostReq := req.ConvertToBifrostRequest("")
bifrostCtx := lib.ConvertToBifrostContext(ctx)

result, err := a.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
if err != nil {
ctx.SetStatusCode(fasthttp.StatusInternalServerError)
json.NewEncoder(ctx).Encode(err)
return
}

ctx.SetStatusCode(fasthttp.StatusOK)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(result)
}
🛠️ Refactor suggestion

Address code duplication across router implementations.

This router implementation is nearly identical to the Mistral router, differing only in package name, struct name, and endpoint path. Consider creating a generic router pattern to reduce duplication.

Consider creating a common interface or base router:

type IntegrationRouter interface {
    RegisterRoutes(r *router.Router)
}

type BaseRouter struct {
    client *bifrost.Bifrost
    endpoint string
    name string
}

func (b *BaseRouter) handleChatCompletion(ctx *fasthttp.RequestCtx, converter func([]byte) (*schemas.BifrostRequest, error)) {
    // Common implementation
}

This would eliminate the duplicate code across all integration routers.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/anthropic/router.go lines 1 to 49, the
router implementation duplicates code found in other integration routers like
Mistral, differing only in package, struct, and endpoint path. Refactor by
creating a generic base router struct that holds the bifrost client, endpoint
path, and any identifying name. Implement a common handler method on this base
router that accepts a converter function to transform the request body into a
Bifrost request. Then, have specific routers embed or use this base router,
passing their unique endpoint and converter, and register routes using the
shared handler to eliminate duplication.

Comment on lines +1 to +49
package openai

import (
"encoding/json"

"github.com/fasthttp/router"
bifrost "github.com/maximhq/bifrost/core"
"github.com/maximhq/bifrost/transports/bifrost-http/lib"
"github.com/valyala/fasthttp"
)

// OpenAIRouter holds route registrations for openai endpoints.
type OpenAIRouter struct {
client *bifrost.Bifrost
}

// NewOpenAIRouter creates a new OpenAIRouter with the given bifrost client.
func NewOpenAIRouter(client *bifrost.Bifrost) *OpenAIRouter {
return &OpenAIRouter{client: client}
}

// RegisterRoutes registers all openai routes on the given router.
func (o *OpenAIRouter) RegisterRoutes(r *router.Router) {
r.POST("/openai/v1/chat/completions", o.handleChatCompletion)
}

// handleChatCompletion handles POST /openai/v1/chat/completions
func (o *OpenAIRouter) handleChatCompletion(ctx *fasthttp.RequestCtx) {
var req ChatCompletionRequest
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
ctx.SetStatusCode(fasthttp.StatusBadRequest)
json.NewEncoder(ctx).Encode(err)
return
}

bifrostReq := req.ConvertToBifrostRequest("")
bifrostCtx := lib.ConvertToBifrostContext(ctx)

result, err := o.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
if err != nil {
ctx.SetStatusCode(fasthttp.StatusInternalServerError)
json.NewEncoder(ctx).Encode(err)
return
}

ctx.SetStatusCode(fasthttp.StatusOK)
ctx.SetContentType("application/json")
json.NewEncoder(ctx).Encode(result)
}
🛠️ Refactor suggestion

Consolidate duplicated router implementations.

This is the third router with nearly identical implementation. The code duplication across Mistral, Anthropic, and OpenAI routers violates DRY principles and increases maintenance burden.

Implement a factory pattern or generic router to handle all integrations:

type RouterConfig struct {
    Name string
    Endpoint string
    RequestConverter func([]byte) (*schemas.BifrostRequest, error)
}

func NewIntegrationRouter(client *bifrost.Bifrost, config RouterConfig) *IntegrationRouter {
    // Single implementation for all providers
}

This approach would:

  • Eliminate code duplication
  • Ensure consistent error handling
  • Simplify testing and maintenance
  • Make adding new providers easier
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/router.go lines 1 to 49, the
OpenAIRouter implementation duplicates code found in other routers like Mistral
and Anthropic, violating DRY principles. Refactor by creating a generic
IntegrationRouter that accepts a RouterConfig struct containing the provider
name, endpoint, and a request conversion function. Implement a single router
that uses this config to handle all integrations uniformly, consolidating route
registration, request parsing, error handling, and response encoding. This will
reduce duplication, unify error handling, and simplify adding new providers.
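The config-driven variant could be sketched as below. Provider names, endpoints, and the `openAICompatible` helper are illustrative assumptions; the point is that a slice of `RouterConfig` values replaces six near-identical router files, and OpenAI-compatible providers (including LiteLLM, which proxies OpenAI) can share one converter builder.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// BifrostRequest stands in for schemas.BifrostRequest.
type BifrostRequest struct {
	Provider string
	Model    string
}

// RouterConfig captures the only things that differ between the providers.
type RouterConfig struct {
	Name      string
	Endpoint  string
	Converter func([]byte) (*BifrostRequest, error)
}

// openAICompatible builds a converter for providers that accept an
// OpenAI-style {"model": ...} body.
func openAICompatible(provider string) func([]byte) (*BifrostRequest, error) {
	return func(body []byte) (*BifrostRequest, error) {
		var raw struct {
			Model string `json:"model"`
		}
		if err := json.Unmarshal(body, &raw); err != nil {
			return nil, err
		}
		return &BifrostRequest{Provider: provider, Model: raw.Model}, nil
	}
}

func main() {
	configs := []RouterConfig{
		{Name: "openai", Endpoint: "/openai/v1/chat/completions", Converter: openAICompatible("openai")},
		{Name: "mistral", Endpoint: "/mistral/v1/chat/completions", Converter: openAICompatible("mistral")},
		// LiteLLM acts as a proxy, so it maps to the OpenAI provider
		{Name: "litellm", Endpoint: "/litellm/v1/chat/completions", Converter: openAICompatible("openai")},
	}
	for _, cfg := range configs {
		req, _ := cfg.Converter([]byte(`{"model":"test-model"}`))
		fmt.Println(cfg.Name, "maps to", req.Provider)
	}
}
```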

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 6

🔭 Outside diff range comments (7)
transports/bifrost-http/integrations/mistral/types.go (1)

49-50: ⚠️ Potential issue

Remove stray character.

There's an extraneous "n" character at line 49 that should be removed.

-n
-50
transports/bifrost-http/integrations/litellm/types.go (1)

49-50: ⚠️ Potential issue

Remove stray character.

There's an extraneous "n" character at line 49 that should be removed.

-n
-50
transports/bifrost-http/integrations/langgraph/types.go (1)

49-50: ⚠️ Potential issue

Remove stray character.

There's an extraneous "n" character at line 49 that should be removed.

-n
-50
transports/bifrost-http/integrations/openai/types.go (1)

50-51: ⚠️ Potential issue

Remove stray character.

There's an extraneous "n" character at line 50 that should be removed.

-n
-51
transports/bifrost-http/integrations/litellm/router.go (1)

50-51: 🧹 Nitpick (assertive)

Remove extraneous character.

There's a stray "n" character at the end of the file that should be removed.

-n
-
transports/bifrost-http/integrations/anthropic/router.go (1)

50-51: 🧹 Nitpick (assertive)

Remove extraneous character.

There's a stray "n" character at the end of the file that should be removed.

-n
-
transports/bifrost-http/integrations/openai/router.go (1)

50-51: 🧹 Nitpick (assertive)

Remove extraneous character.

There's a stray "n" character at the end of the file that should be removed.

-n
-
♻️ Duplicate comments (15)
transports/bifrost-http/integrations/anthropic/types.go (2)

7-20: 🛠️ Refactor suggestion

Consider refactoring to reduce code duplication.

This struct definition is identical across all AI provider integrations (Anthropic, Mistral, LiteLLM, LangGraph). Consider extracting a common BaseChatCompletionRequest struct to eliminate duplication and improve maintainability.


23-48: 🛠️ Refactor suggestion

Add validation for required fields.

The conversion method doesn't validate required fields like Model or Messages. Consider adding validation to prevent runtime errors downstream.

transports/bifrost-http/integrations/mistral/types.go (1)

22-23: Add documentation for the model parameter.

The conversion method lacks documentation for the model parameter's purpose.

-// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// If model is provided and non-empty, it overrides the request's model field.
 func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *schemas.BifrostRequest {
transports/bifrost-http/integrations/litellm/types.go (1)

22-23: Add documentation for the model parameter.

The conversion method lacks documentation for the model parameter's purpose.

-// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// If model is provided and non-empty, it overrides the request's model field.
 func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *schemas.BifrostRequest {
transports/bifrost-http/integrations/langgraph/types.go (1)

22-23: Add documentation for the model parameter.

The conversion method lacks documentation for the model parameter's purpose.

-// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// ConvertToBifrostRequest converts the request to a BifrostRequest.
+// If model is provided and non-empty, it overrides the request's model field.
 func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *schemas.BifrostRequest {
transports/bifrost-http/integrations/langchain/types.go (1)

40-42: ⚠️ Potential issue

Avoid hardcoding the provider to OpenAI for LangChain requests.

LangChain supports multiple providers (Anthropic, Azure, Google Vertex, Amazon Bedrock, etc.), but the provider is hardcoded to schemas.OpenAI. This will misidentify requests when LangChain is configured to use non-OpenAI models.

transports/bifrost-http/integrations/langgraph/router.go (2)

30-34: 🛠️ Refactor suggestion

Improve error handling to avoid exposing internal details.

Directly encoding Go errors to JSON responses can expose internal implementation details and potentially sensitive information to clients.


40-44: 🛠️ Refactor suggestion

Apply consistent error handling for Bifrost client errors.

Same issue as above - directly encoding internal errors can expose implementation details.

transports/bifrost-http/integrations/langchain/router.go (1)

12-49: 🛠️ Refactor suggestion

Significant code duplication across integration routers.

The router implementations for LangChain, LangGraph, and Mistral are nearly identical. This violates the DRY principle and makes maintenance more difficult.

transports/bifrost-http/integrations/mistral/router.go (3)

30-34: 🛠️ Refactor suggestion

Improve error response format.

Directly encoding error objects as JSON may expose internal error details to clients and produce inconsistent response formats.


36-36: 🧹 Nitpick (assertive)

Clarify fallback behavior for the empty-model override.

Passing "" into ConvertToBifrostRequest intentionally triggers the fallback to r.Model. To make this clear to future readers, please add a brief comment.


40-44: 🛠️ Refactor suggestion

Apply consistent error response format.

Similar to the JSON unmarshaling error, directly encoding the Bifrost client error may expose internal details.

transports/bifrost-http/integrations/litellm/router.go (1)

28-49: Consider refactoring to reduce code duplication.

This handler method is nearly identical to other integration routers. The code duplication violates DRY principles and increases maintenance burden.

transports/bifrost-http/integrations/anthropic/router.go (1)

1-49: Address code duplication across router implementations.

This router implementation is nearly identical to other integration routers, differing only in package name, struct name, and endpoint path.

transports/bifrost-http/integrations/openai/router.go (1)

1-49: Consolidate duplicated router implementations.

This is another router with nearly identical implementation to other integration routers. The code duplication violates DRY principles and increases maintenance burden.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 996446b and db694e4.

📒 Files selected for processing (15)
  • transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
  • transports/bifrost-http/integrations/genai/router.go (1 hunks)
  • transports/bifrost-http/integrations/genai/types.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/router.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/types.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/types.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/router.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/types.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/types.go (1 hunks)
  • transports/bifrost-http/integrations/utils.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
transports/bifrost-http/integrations/langgraph/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#54
File: core/schemas/bifrost.go:46-49
Timestamp: 2025-06-04T09:22:18.093Z
Learning: In core/schemas/bifrost.go, the RequestInput struct uses ChatCompletionInput *[]BifrostMessage (pointer-to-slice) rather than []BifrostMessage to properly represent union type semantics. For text completion requests, ChatCompletionInput should be nil to indicate "no chat payload at all", while for chat completion requests it should be non-nil (even if empty slice). This distinguishes between different request types rather than just empty vs non-empty chat messages.
transports/bifrost-http/integrations/openai/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#54
File: core/schemas/bifrost.go:46-49
Timestamp: 2025-06-04T09:22:18.093Z
Learning: In core/schemas/bifrost.go, the RequestInput struct uses ChatCompletionInput *[]BifrostMessage (pointer-to-slice) rather than []BifrostMessage to properly represent union type semantics. For text completion requests, ChatCompletionInput should be nil to indicate "no chat payload at all", while for chat completion requests it should be non-nil (even if empty slice). This distinguishes between different request types rather than just empty vs non-empty chat messages.
🧬 Code Graph Analysis (7)
transports/bifrost-http/integrations/langchain/types.go (2)
core/schemas/bifrost.go (7)
  • BifrostMessage (143-152)
  • Tool (108-112)
  • ToolChoice (137-140)
  • BifrostRequest (54-64)
  • ModelParameters (75-89)
  • OpenAI (34-34)
  • RequestInput (46-49)
core/schemas/provider.go (1)
  • Provider (142-149)
transports/bifrost-http/integrations/anthropic/router.go (2)
transports/bifrost-http/integrations/anthropic/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/langgraph/router.go (2)
transports/bifrost-http/integrations/langgraph/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/mistral/router.go (2)
transports/bifrost-http/integrations/mistral/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/langchain/router.go (3)
core/bifrost.go (1)
  • Bifrost (39-51)
transports/bifrost-http/integrations/langchain/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/litellm/router.go (3)
core/bifrost.go (1)
  • Bifrost (39-51)
transports/bifrost-http/integrations/litellm/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/openai/router.go (3)
core/bifrost.go (1)
  • Bifrost (39-51)
transports/bifrost-http/integrations/openai/types.go (1)
  • ChatCompletionRequest (8-21)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
🪛 golangci-lint (1.64.8)
transports/bifrost-http/integrations/genai/router.go

64-64: expected declaration, found n

(typecheck)

transports/bifrost-http/integrations/utils.go

8-8: expected declaration, found n

(typecheck)

transports/bifrost-http/integrations/genai/types.go

141-141: expected declaration, found n

(typecheck)

🔇 Additional comments (7)
transports/bifrost-http/integrations/mistral/types.go (1)

40-47: Verify correct provider assignment and ChatCompletionInput usage.

The implementation correctly uses schemas.Mistral as the provider and follows the established pattern for ChatCompletionInput pointer-to-slice semantics.

transports/bifrost-http/integrations/litellm/types.go (1)

40-47: Provider assignment appears correct for LiteLLM.

The use of schemas.OpenAI as the provider is appropriate since LiteLLM closely mirrors the OpenAI API as documented in the comments. The ChatCompletionInput pointer-to-slice usage is correct.

transports/bifrost-http/integrations/langgraph/types.go (1)

40-47: Provider assignment appears correct for LangGraph.

The use of schemas.OpenAI as the provider is appropriate since LangGraph is largely compatible with OpenAI's API as documented in the comments. The ChatCompletionInput pointer-to-slice usage is correct.

transports/bifrost-http/integrations/openai/types.go (1)

41-48: Implementation follows correct patterns.

The provider assignment and ChatCompletionInput pointer-to-slice usage are correct. The conversion logic properly handles all parameters.

transports/bifrost-http/integrations/litellm/router.go (1)

36-36: Verify the empty string parameter in ConvertToBifrostRequest.

The call to req.ConvertToBifrostRequest("") passes an empty string parameter. Verify if this parameter should contain a specific value (e.g., model name, provider identifier).

transports/bifrost-http/integrations/anthropic/router.go (1)

36-36: Verify the empty string parameter in ConvertToBifrostRequest.

The call to req.ConvertToBifrostRequest("") passes an empty string parameter. Ensure this parameter contains the appropriate value for the Anthropic integration.

transports/bifrost-http/integrations/openai/router.go (1)

36-36: Verify the empty string parameter in ConvertToBifrostRequest.

The call to req.ConvertToBifrostRequest("") passes an empty string parameter. Confirm if this should contain a specific value for the OpenAI integration context.

Comment thread transports/bifrost-http/integrations/genai/types.go Outdated
Comment thread transports/bifrost-http/integrations/genai/router.go Outdated
Comment thread transports/bifrost-http/integrations/utils.go Outdated
Comment thread transports/bifrost-http/integrations/anthropic/types.go Outdated
Comment on lines +39 to +44
result, err := l.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
if err != nil {
ctx.SetStatusCode(fasthttp.StatusInternalServerError)
json.NewEncoder(ctx).Encode(err)
return
}
🛠️ Refactor suggestion

Apply consistent error handling for Bifrost client errors.

Same issue as above - directly encoding internal errors can expose implementation details.

-	result, err := l.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
-	if err != nil {
-		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
-		json.NewEncoder(ctx).Encode(err)
-		return
-	}
+	result, err := l.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
+	if err != nil {
+		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
+		ctx.SetContentType("application/json")
+		json.NewEncoder(ctx).Encode(map[string]string{
+			"error": "Internal server error",
+		})
+		return
+	}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:

	result, err := l.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
	if err != nil {
		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
		json.NewEncoder(ctx).Encode(err)
		return
	}

After:

	result, err := l.client.ChatCompletionRequest(*bifrostCtx, bifrostReq)
	if err != nil {
		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
		ctx.SetContentType("application/json")
		json.NewEncoder(ctx).Encode(map[string]string{
			"error": "Internal server error",
		})
		return
	}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/router.go around lines 39 to
44, the error handling directly encodes the internal error which may expose
sensitive implementation details. Modify the error handling to log the detailed
error internally but respond to the client with a generic error message and
appropriate HTTP status code to avoid leaking internal information.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 10

♻️ Duplicate comments (13)
transports/bifrost-http/integrations/langchain/types.go (1)

40-42: Avoid hardcoding provider to OpenAI
Hardcoding Provider: schemas.OpenAI will misclassify all incoming LangChain requests. Map the provider dynamically—either from the new Provider field or by parsing a model‐prefix—into the appropriate schemas.Provider enum.

transports/bifrost-http/integrations/langchain/router.go (1)

39-44: Standardize internal error responses
Returning json.NewEncoder(ctx).Encode(err) exposes implementation details. Log the detailed error internally, then respond with ctx.SetContentType("application/json") and a generic payload (e.g. {"error":"internal server error"}).

transports/bifrost-http/integrations/langgraph/router.go (2)

30-34: Improve error handling to avoid exposing internal details.

Directly encoding Go errors to JSON responses can expose internal implementation details and potentially sensitive information to clients. Consider creating structured error responses instead.


40-44: Apply consistent error handling for Bifrost client errors.

Same issue as above - directly encoding internal errors can expose implementation details.

transports/bifrost-http/integrations/litellm/router.go (1)

28-49: Code duplication issue remains unaddressed.

This handler implementation is nearly identical to other integration routers, as previously noted. The duplication across multiple integration handlers reduces maintainability.

transports/bifrost-http/integrations/anthropic/types.go (2)

7-20: Consider refactoring to reduce code duplication.

This struct definition remains identical across all AI provider integrations, as noted in previous reviews. The duplication issue persists and should be addressed by extracting common fields into a shared base struct.


23-48: Add validation for required fields.

The conversion method still lacks validation for required fields like Model and Messages, as identified in previous reviews. This could lead to runtime errors downstream when empty or nil values are processed.

transports/bifrost-http/integrations/anthropic/router.go (1)

1-49: Address code duplication across router implementations.

This router implementation remains nearly identical to other integration routers, as noted in previous reviews. The duplication issue persists and should be addressed through a generic router pattern or base implementation.

transports/bifrost-http/integrations/openai/types.go (1)

23-24: Address the documentation issue from previous review.

The model parameter still lacks documentation as pointed out in the previous review. This is important for maintainability and developer understanding.

transports/bifrost-http/integrations/openai/router.go (1)

1-50: Address code duplication across router implementations.

This router implementation duplicates patterns found in other provider routers, violating DRY principles and increasing maintenance burden as noted in previous reviews.

transports/bifrost-http/integrations/mistral/router.go (3)

30-34: Improve error response format.

Directly encoding error objects as JSON may expose internal error details to clients and produce inconsistent response formats.

Consider creating a structured error response:

 	if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
 		ctx.SetStatusCode(fasthttp.StatusBadRequest)
-		json.NewEncoder(ctx).Encode(err)
+		json.NewEncoder(ctx).Encode(map[string]string{"error": "Invalid JSON request body"})
 		return
 	}

36-36: Clarify fallback behavior for the empty model override.

Passing "" into ConvertToBifrostRequest intentionally triggers the fallback to r.Model (see identical logic in all other integrations). To make this clear to future readers, please add a brief comment.

+	// empty string → use r.Model as the model override
 	bifrostReq := req.ConvertToBifrostRequest("")

40-44: Apply consistent error response format.

Similar to the JSON unmarshaling error, directly encoding the Bifrost client error may expose internal details.

Apply the same structured error response pattern:

 	if err != nil {
 		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
-		json.NewEncoder(ctx).Encode(err)
+		json.NewEncoder(ctx).Encode(map[string]string{"error": "Internal server error"})
 		return
 	}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between db694e4 and ecf5145.

📒 Files selected for processing (12)
  • transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/router.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/types.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/types.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/router.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/types.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/types.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
transports/bifrost-http/integrations/langgraph/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#54
File: core/schemas/bifrost.go:46-49
Timestamp: 2025-06-04T09:22:18.093Z
Learning: In core/schemas/bifrost.go, the RequestInput struct uses ChatCompletionInput *[]BifrostMessage (pointer-to-slice) rather than []BifrostMessage to properly represent union type semantics. For text completion requests, ChatCompletionInput should be nil to indicate "no chat payload at all", while for chat completion requests it should be non-nil (even if empty slice). This distinguishes between different request types rather than just empty vs non-empty chat messages.
transports/bifrost-http/integrations/openai/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#54
File: core/schemas/bifrost.go:46-49
Timestamp: 2025-06-04T09:22:18.093Z
Learning: In core/schemas/bifrost.go, the RequestInput struct uses ChatCompletionInput *[]BifrostMessage (pointer-to-slice) rather than []BifrostMessage to properly represent union type semantics. For text completion requests, ChatCompletionInput should be nil to indicate "no chat payload at all", while for chat completion requests it should be non-nil (even if empty slice). This distinguishes between different request types rather than just empty vs non-empty chat messages.
🧬 Code Graph Analysis (3)
transports/bifrost-http/integrations/langchain/router.go (2)
transports/bifrost-http/integrations/langchain/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/openai/router.go (3)
core/bifrost.go (1)
  • Bifrost (39-51)
transports/bifrost-http/integrations/openai/types.go (1)
  • ChatCompletionRequest (8-21)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/mistral/router.go (3)
core/bifrost.go (1)
  • Bifrost (39-51)
transports/bifrost-http/integrations/mistral/types.go (1)
  • ChatCompletionRequest (7-20)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
🪛 golangci-lint (1.64.8)
transports/bifrost-http/integrations/mistral/router.go

1-1: : # github.com/maximhq/bifrost/transports/bifrost-http/integrations/mistral [github.com/maximhq/bifrost/transports/bifrost-http/integrations/mistral.test]
bifrost-http/integrations/mistral/types.go:41:21: undefined: schemas.Mistral
bifrost-http/integrations/mistral/types_test.go:22:34: undefined: schemas.Mistral
bifrost-http/integrations/mistral/types_test.go:23:58: undefined: schemas.Mistral

(typecheck)

🔇 Additional comments (14)
transports/bifrost-http/integrations/langgraph/types.go (3)

7-20: LGTM! Struct definition follows established patterns.

The ChatCompletionRequest struct is well-defined with appropriate field types, JSON tags, and comprehensive coverage of LangGraph API parameters.


41-41: Verify the hardcoded OpenAI provider for LangGraph integration.

The provider is hardcoded to schemas.OpenAI for a LangGraph integration. Please confirm this is intentional - does LangGraph use OpenAI-compatible API format, or should this be a dedicated LangGraph provider type?


43-45: Correct usage of pointer-to-slice pattern.

The implementation correctly uses &r.Messages for ChatCompletionInput, which aligns with the established pattern for representing union type semantics where a non-nil pointer indicates chat completion requests.

transports/bifrost-http/integrations/langgraph/router.go (2)

12-25: LGTM! Router structure follows established patterns.

The LangGraphRouter implementation follows the consistent pattern used across other SDK integrations with proper dependency injection and route registration.


46-48: LGTM! Success response handling is appropriate.

The success path correctly sets status code, content type, and encodes the result properly.

transports/bifrost-http/integrations/litellm/types.go (1)

1-49: LGTM! Well-structured LiteLLM integration types.

The ChatCompletionRequest struct and ConvertToBifrostRequest method are well-implemented:

  • Comprehensive field coverage mirroring OpenAI API structure (appropriate for LiteLLM compatibility)
  • Proper use of pointer types for optional fields
  • Correct JSON field tags
  • Sound conversion logic with model override support
  • Appropriate provider mapping to schemas.OpenAI for LiteLLM compatibility
transports/bifrost-http/integrations/litellm/router.go (1)

17-25: LGTM! Clean router structure and route registration.

The constructor and route registration follow good patterns with appropriate endpoint naming for LiteLLM compatibility.

transports/bifrost-http/integrations/openai/types.go (2)

44-45: Correct usage of pointer-to-slice semantics.

The implementation correctly uses &r.Messages to create a pointer to the messages slice, which aligns with the union type semantics required by the BifrostRequest structure. This properly distinguishes between different request types.


8-21: Well-structured request type definition.

The struct properly mirrors the OpenAI Chat API with appropriate field types and JSON tags. The use of pointers for optional parameters correctly handles the omitempty semantics.

transports/bifrost-http/integrations/openai/router.go (1)

36-36: Model parameter usage is appropriate.

Passing an empty string to ConvertToBifrostRequest("") correctly defaults to using the request's own model field, which is the expected behavior for this endpoint.

transports/bifrost-http/integrations/mistral/types.go (2)

7-20: Well-designed struct following OpenAI API conventions.

The ChatCompletionRequest struct is properly designed with appropriate field types and JSON tags. Using pointers for optional parameters enables correct omitempty behavior.


23-48: Conversion logic is correct with proper parameter mapping.

The method correctly handles model override logic and maps all request parameters to the internal BifrostRequest format. The fallback behavior when an empty model string is provided is appropriate.

transports/bifrost-http/integrations/mistral/router.go (2)

12-25: Standard router implementation following consistent patterns.

The MistralRouter struct, constructor, and route registration follow the established patterns seen across other integrations in this codebase.


46-49: Proper success response handling.

The success path correctly sets the HTTP status, content type, and encodes the result as JSON.

Comment on lines +7 to +20
type ChatCompletionRequest struct {
Model string `json:"model"`
Messages []schemas.BifrostMessage `json:"messages"`
Temperature *float64 `json:"temperature,omitempty"`
TopP *float64 `json:"top_p,omitempty"`
TopK *int `json:"top_k,omitempty"`
MaxTokens *int `json:"max_tokens,omitempty"`
StopSequences *[]string `json:"stop_sequences,omitempty"`
PresencePenalty *float64 `json:"presence_penalty,omitempty"`
FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"`
ParallelToolCalls *bool `json:"parallel_tool_calls,omitempty"`
Tools *[]schemas.Tool `json:"tools,omitempty"`
ToolChoice *schemas.ToolChoice `json:"tool_choice,omitempty"`
}

🛠️ Refactor suggestion

Include explicit provider metadata in the request struct
ChatCompletionRequest lacks a field to carry the actual provider (e.g. OpenAI, Anthropic, Mistral, etc.) into ConvertToBifrostRequest. Without it, all LangChain calls are incorrectly tagged as OpenAI. Add a Provider field (or similar) to map it into the resulting schemas.BifrostRequest.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/types.go between lines 7 and
20, the ChatCompletionRequest struct is missing a field to specify the provider
(such as OpenAI, Anthropic, Mistral). Add a new string field named Provider (or
similar) to this struct to explicitly carry the provider metadata. This will
allow ConvertToBifrostRequest to correctly map and tag the request with the
actual provider instead of defaulting to OpenAI.
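A minimal sketch of this suggestion, with stand-in types: `ModelProvider`, `BifrostRequest`, and the fallback logic here are illustrative only and simplified from the actual core schemas in the PR.

```go
package main

import "fmt"

// Stand-ins for the core schemas types; names are illustrative only.
type ModelProvider string

const OpenAI ModelProvider = "openai"

type BifrostRequest struct {
	Provider ModelProvider
	Model    string
}

// ChatCompletionRequest carries an explicit Provider so the conversion
// no longer has to hard-code OpenAI for every LangChain call.
type ChatCompletionRequest struct {
	Provider ModelProvider `json:"provider,omitempty"`
	Model    string        `json:"model"`
}

// ConvertToBifrostRequest uses the request's provider when set and falls
// back to OpenAI otherwise, preserving the current default behavior.
func (r *ChatCompletionRequest) ConvertToBifrostRequest(model string) *BifrostRequest {
	provider := r.Provider
	if provider == "" {
		provider = OpenAI
	}
	if model == "" {
		model = r.Model
	}
	return &BifrostRequest{Provider: provider, Model: model}
}

func main() {
	req := ChatCompletionRequest{Provider: "anthropic", Model: "claude-3"}
	fmt.Println(req.ConvertToBifrostRequest("").Provider)
}
```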

Comment on lines +36 to +38
bifrostReq := req.ConvertToBifrostRequest("")
bifrostCtx := lib.ConvertToBifrostContext(ctx)


🧹 Nitpick (assertive)

Simplify context conversion signature
ConvertToBifrostContext returns *context.Context which you immediately dereference on use. Change it to return context.Context directly to remove unnecessary pointer indirection.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/router.go around lines 36 to
38, the function ConvertToBifrostContext currently returns a pointer to
context.Context which is immediately dereferenced. Modify
ConvertToBifrostContext to return context.Context directly instead of
*context.Context, and update all call sites accordingly to remove pointer usage
and simplify the code.

Comment on lines +12 to +25
// LangChainRouter holds route registrations for langchain endpoints.
type LangChainRouter struct {
client *bifrost.Bifrost
}

// NewLangChainRouter creates a new LangChainRouter with the given bifrost client.
func NewLangChainRouter(client *bifrost.Bifrost) *LangChainRouter {
return &LangChainRouter{client: client}
}

// RegisterRoutes registers all langchain routes on the given router.
func (l *LangChainRouter) RegisterRoutes(r *router.Router) {
r.POST("/langchain/v1/chat/completions", l.handleChatCompletion)
}

🧹 Nitpick (assertive)

Reduce duplication across SDK routers
The route registration and handler scaffolding for LangChain mirror the other integration routers verbatim. Extract common logic (parsing, error handling, response encoding) into a shared helper or base router to enforce consistency and DRY.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/router.go around lines 12 to
25, the LangChainRouter duplicates route registration and handler scaffolding
logic found in other SDK routers. Refactor by extracting common functionality
such as route registration, request parsing, error handling, and response
encoding into a shared helper or base router struct. Then have LangChainRouter
embed or use this shared component to reduce duplication and ensure consistent
behavior across all integration routers.

Comment on lines +30 to +34
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
ctx.SetStatusCode(fasthttp.StatusBadRequest)
json.NewEncoder(ctx).Encode(err)
return
}

🛠️ Refactor suggestion

Avoid exposing internal errors on malformed input
Directly encoding err can leak internal details and omits setting the response Content-Type. Instead, call ctx.SetContentType("application/json") and return a structured error payload (e.g. {"error":"invalid request format"}).

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/router.go around lines 30 to
34, the code directly encodes the internal error object on JSON unmarshal
failure, which can expose sensitive details and does not set the response
Content-Type. Fix this by setting the response Content-Type to
"application/json" using ctx.SetContentType, and return a structured JSON error
message like {"error":"invalid request format"} instead of encoding the raw
error.

var req ChatCompletionRequest
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
ctx.SetStatusCode(fasthttp.StatusBadRequest)
json.NewEncoder(ctx).Encode(err)

🛠️ Refactor suggestion

Improve error response encoding.

Using json.NewEncoder(ctx).Encode(err) may not produce meaningful JSON since Go errors don't implement JSON marshaling by default. Consider creating structured error responses.

-		json.NewEncoder(ctx).Encode(err)
+		json.NewEncoder(ctx).Encode(map[string]string{"error": err.Error()})

Apply the same fix to line 42.

Also applies to: 42-42

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/router.go at lines 32 and 42,
the code uses json.NewEncoder(ctx).Encode(err) which does not produce meaningful
JSON because Go errors do not implement JSON marshaling by default. To fix this,
create a structured error response type (e.g., a struct with a message field)
and encode that instead of the raw error. Replace the calls at both lines 32 and
42 to encode this structured error response for clearer and more consistent JSON
error output.

Comment on lines +40 to +44
if err != nil {
ctx.SetStatusCode(fasthttp.StatusInternalServerError)
json.NewEncoder(ctx).Encode(err)
return
}

🧹 Nitpick (assertive)

Improve error response consistency.

The error handling for internal server errors should also use structured error responses for consistency.

	if err != nil {
		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
-		json.NewEncoder(ctx).Encode(err)
+		ctx.SetContentType("application/json")
+		json.NewEncoder(ctx).Encode(map[string]string{"error": "Internal server error"})
		return
	}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/anthropic/router.go around lines 40 to
44, the error response for internal server errors is currently unstructured.
Modify the error handling to return a structured JSON error response, such as an
object with an "error" field containing the error message, to maintain
consistency with other error responses in the codebase.

Comment on lines +30 to +34
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
ctx.SetStatusCode(fasthttp.StatusBadRequest)
json.NewEncoder(ctx).Encode(err)
return
}

🧹 Nitpick (assertive)

Improve error response formatting.

The current error encoding directly serializes the unmarshaling error, which may expose internal implementation details. Consider using a structured error response format.

	if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
		ctx.SetStatusCode(fasthttp.StatusBadRequest)
-		json.NewEncoder(ctx).Encode(err)
+		ctx.SetContentType("application/json")
+		json.NewEncoder(ctx).Encode(map[string]string{"error": "Invalid request format"})
		return
	}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/anthropic/router.go around lines 30 to
34, the error response directly encodes the unmarshaling error, potentially
exposing internal details. Modify the error handling to return a structured JSON
response with a clear error message field instead of encoding the raw error.
This improves security and client-side error handling by providing a consistent
error format.

Comment on lines +30 to +34
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
ctx.SetStatusCode(fasthttp.StatusBadRequest)
json.NewEncoder(ctx).Encode(err)
return
}

🛠️ Refactor suggestion

Improve error response handling.

The error handling has inconsistent content type setting and may leak internal error details.

Apply this diff to improve error handling:

 	if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
 		ctx.SetStatusCode(fasthttp.StatusBadRequest)
+		ctx.SetContentType("application/json")
-		json.NewEncoder(ctx).Encode(err)
+		json.NewEncoder(ctx).Encode(map[string]string{"error": "Invalid request format"})
 		return
 	}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/router.go around lines 30 to 34,
the error handling after json.Unmarshal does not set the response content type
and directly encodes the error object, which may leak internal details. Fix this
by setting the response content type to "application/json" before writing the
error response, and replace the error encoding with a sanitized JSON error
message that does not expose internal error details.

Comment on lines +40 to +44
if err != nil {
ctx.SetStatusCode(fasthttp.StatusInternalServerError)
json.NewEncoder(ctx).Encode(err)
return
}

🧹 Nitpick (assertive)

Consistent content type setting for error responses.

Error responses should have consistent content type headers like success responses.

Apply this diff:

 	if err != nil {
 		ctx.SetStatusCode(fasthttp.StatusInternalServerError)
+		ctx.SetContentType("application/json")
 		json.NewEncoder(ctx).Encode(err)
 		return
 	}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/router.go around lines 40 to 44,
the error response does not set the content type header, causing inconsistency
with success responses. Fix this by setting the content type header to
"application/json" before encoding and sending the error response to ensure
consistent content type handling.

}

return &schemas.BifrostRequest{
Provider: schemas.Mistral,

💡 Verification agent

🧩 Analysis chain

Critical: Undefined constant will cause compilation failure.

The static analysis indicates schemas.Mistral is undefined, which will prevent the code from compiling.


🏁 Script executed:

#!/bin/bash
# Check if schemas.Mistral constant is defined in the schemas package
ast-grep --pattern 'const $_ = Mistral' || ast-grep --pattern 'Mistral $_ = $_'

Length of output: 41


🏁 Script executed:

#!/bin/bash
# Search for all occurrences of "Mistral" across the repo to locate its definition
rg -n "Mistral"

Length of output: 3155


🏁 Script executed:

#!/bin/bash
# Display imports in the Mistral types file to locate the schemas package
sed -n '1,50p' transports/bifrost-http/integrations/mistral/types.go

Length of output: 1949


🏁 Script executed:

#!/bin/bash
# Search for any definitions of "Mistral" within the core/schemas package
rg -n "Mistral" core/schemas

Length of output: 28


Critical: Undefined constant schemas.Mistral
The code references schemas.Mistral, but no such constant exists in the core/schemas package—this will cause a compile-time error. Please define it (for example, in your provider constants file):

// in core/schemas/provider.go (or similar)
const (
    // …
    Mistral Provider = "mistral"
)

Affected locations:

  • transports/bifrost-http/integrations/mistral/types.go:41
  • transports/bifrost-http/integrations/mistral/types_test.go:22-23
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/mistral/types.go at line 41, the code
references an undefined constant schemas.Mistral, causing a compilation error.
To fix this, define the Mistral constant in the core/schemas package, ideally in
the provider constants file (e.g., core/schemas/provider.go), by adding a line
like: const Mistral Provider = "mistral". This will ensure the reference is
valid and the code compiles successfully.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
transports/bifrost-http/main.go (1)

189-197: Verify that the Mistral provider constant is properly defined in core schemas.

The extension routers are correctly initialized following the established pattern. However, ensure that the schemas.Mistral constant referenced by the mistral integration is properly defined in core/schemas/bifrost.go to resolve the compilation error mentioned in previous reviews.

#!/bin/bash
# Description: Verify that schemas.Mistral constant is defined and all integration packages exist
# Expected: Find the Mistral constant definition and confirm package structure

# Check if Mistral provider constant is defined
echo "Checking for Mistral provider constant:"
rg "Mistral.*ModelProvider" core/schemas/

# Verify all integration packages exist
echo -e "\nChecking integration package structure:"
fd -t d . transports/bifrost-http/integrations/ | sort

# Check each integration has required router constructor
echo -e "\nChecking router constructors:"
for integration in openai anthropic litellm langchain langgraph mistral; do
    echo "=== $integration ==="
    ast-grep --pattern "func New${integration^}Router($client *bifrost.Bifrost) *${integration^}Router {
        $$$
    }"
done
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ecf5145 and d099427.

📒 Files selected for processing (1)
  • transports/bifrost-http/main.go (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
transports/bifrost-http/main.go (2)
core/schemas/bifrost.go (1)
  • ModelProvider (31-31)
transports/bifrost-http/integrations/utils.go (1)
  • ExtensionRouter (5-7)
🪛 golangci-lint (1.64.8)
transports/bifrost-http/main.go

34-34: missing import path

(typecheck)


36-36: import path must be a string

(typecheck)

Comment on lines +32 to +42
// File: core/schemas/bifrost.go

const (
OpenAI ModelProvider = "openai"
Azure ModelProvider = "azure"
Anthropic ModelProvider = "anthropic"
Bedrock ModelProvider = "bedrock"
Cohere ModelProvider = "cohere"
Vertex ModelProvider = "vertex"
Mistral ModelProvider = "mistral"
)

⚠️ Potential issue

Remove invalid Go code and fix the file structure.

Lines 32-42 contain invalid Go code that appears to be misplaced content from core/schemas/bifrost.go. This is causing compilation errors and needs to be removed.

-	"github.com/maximhq/bifrost/transports/bifrost-http/integrations"
-// File: core/schemas/bifrost.go
-
-const (
-    OpenAI    ModelProvider = "openai"
-    Azure     ModelProvider = "azure"
-    Anthropic ModelProvider = "anthropic"
-    Bedrock   ModelProvider = "bedrock"
-    Cohere    ModelProvider = "cohere"
-    Vertex    ModelProvider = "vertex"
-    Mistral   ModelProvider = "mistral"
-)
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/anthropic"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/genai"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/langchain"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/langgraph"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/litellm"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/mistral"
+	"github.com/maximhq/bifrost/transports/bifrost-http/integrations/openai"
🧰 Tools
🪛 golangci-lint (1.64.8)

34-34: missing import path

(typecheck)


36-36: import path must be a string

(typecheck)

🤖 Prompt for AI Agents
In transports/bifrost-http/main.go between lines 32 and 42, remove the invalid
Go code that belongs to core/schemas/bifrost.go, as it is misplaced and causing
compilation errors. Ensure that only relevant code for main.go remains in this
section to fix the file structure and allow successful compilation.

@akshaydeo akshaydeo deleted the codex/add-middleware-for-openai-and-other-sdks branch August 31, 2025 17:31