Conversation
Summary by CodeRabbit
Walkthrough

This change introduces comprehensive integration support for Bifrost as a unified proxy for multiple AI frameworks and provider SDKs. It adds new HTTP routers and bidirectional type conversion layers for OpenAI, Anthropic, Mistral, LiteLLM, LangChain, and LangGraph. Extensive documentation and example integration guides are also included.
Sequence Diagram(s)

sequenceDiagram
participant Client
participant Router
participant Bifrost
participant Provider
Client->>Router: POST /{provider}/v1/chat/completions (or similar)
Router->>Router: Parse & validate request
Router->>Router: Convert request to BifrostRequest
Router->>Bifrost: ChatCompletionRequest(BifrostRequest)
Bifrost->>Provider: Forward request (with fallback, load balancing, etc.)
Provider-->>Bifrost: Provider response
Bifrost-->>Router: BifrostResponse
Router->>Router: Convert response to provider format
Router-->>Client: JSON response
Actionable comments posted: 31
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (14)
docs/integrations.md (1 hunks)
transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
transports/bifrost-http/integrations/langchain/router.go (1 hunks)
transports/bifrost-http/integrations/langchain/types.go (1 hunks)
transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
transports/bifrost-http/integrations/litellm/router.go (1 hunks)
transports/bifrost-http/integrations/litellm/types.go (1 hunks)
transports/bifrost-http/integrations/mistral/router.go (1 hunks)
transports/bifrost-http/integrations/mistral/types.go (1 hunks)
transports/bifrost-http/integrations/openai/router.go (1 hunks)
transports/bifrost-http/integrations/openai/types.go (1 hunks)
transports/bifrost-http/main.go (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (5)
transports/bifrost-http/main.go (8)
transports/bifrost-http/integrations/utils.go (1)
  ExtensionRouter (5-7)
transports/bifrost-http/integrations/genai/router.go (1)
  NewGenAIRouter (18-20)
transports/bifrost-http/integrations/openai/router.go (1)
  NewOpenAIRouter (18-20)
transports/bifrost-http/integrations/anthropic/router.go (1)
  NewAnthropicRouter (18-20)
transports/bifrost-http/integrations/mistral/router.go (1)
  NewMistralRouter (18-20)
transports/bifrost-http/integrations/litellm/router.go (1)
  NewLiteLLMRouter (18-20)
transports/bifrost-http/integrations/langchain/router.go (1)
  NewLangChainRouter (19-21)
transports/bifrost-http/integrations/langgraph/router.go (1)
  NewLangGraphRouter (19-21)
transports/bifrost-http/integrations/openai/router.go (3)
core/bifrost.go (2)
  Bifrost (39-51), ChatCompletionRequest (24-24)
transports/bifrost-http/integrations/openai/types.go (2)
  OpenAIChatRequest (51-72), DeriveOpenAIFromBifrostResponse (261-324)
transports/bifrost-http/lib/ctx.go (1)
  ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/openai/types.go (2)
core/schemas/bifrost.go (18)
  ToolChoiceTypeFunction (126-126), FunctionCall (260-263), Function (101-105), ToolChoice (137-140), LogProbs (253-257), BifrostRequest (54-64), OpenAI (34-34), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRole (20-20), ToolCall (266-270), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), BifrostResponse (184-194), LLMUsage (197-203)
core/schemas/provider.go (1)
  Provider (142-149)
transports/bifrost-http/integrations/anthropic/types.go (2)
core/schemas/bifrost.go (19)
  ToolChoice (137-140), BifrostRequest (54-64), Anthropic (36-36), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRoleSystem (25-25), ModelChatMessageRole (20-20), ToolCall (266-270), Function (101-105), FunctionCall (260-263), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), ToolChoiceType (116-116), ToolChoiceFunction (132-134), BifrostResponse (184-194), LLMUsage (197-203)
core/schemas/provider.go (1)
  Provider (142-149)
transports/bifrost-http/integrations/mistral/types.go (2)
core/schemas/bifrost.go (16)
  Function (101-105), ToolChoice (137-140), BifrostRequest (54-64), OpenAI (34-34), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRole (20-20), ToolCall (266-270), FunctionCall (260-263), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), BifrostResponse (184-194), LLMUsage (197-203)
core/schemas/provider.go (1)
  Provider (142-149)
🪛 LanguageTool
docs/integrations.md
[grammar] ~390-~390: Possible subject-verb agreement error.
Context: ...oad Balancing:** Bifrost automatically load balances requests across multiple keys ...
(SINGULAR_NOUN_ADV_AGREEMENT)
🔇 Additional comments (12)
transports/bifrost-http/main.go (1)
185-193: Nice modularisation – new routers cleanly plugged in

The `extensions` slice pattern keeps `main` tidy and makes future integrations trivial to add.

transports/bifrost-http/integrations/openai/router.go (3)

12-25: LGTM on router structure and endpoint registration.

The router follows a clean, consistent pattern with proper dependency injection and clear endpoint registration.

36-46: LGTM on request validation and conversion logic.

Good validation of the required model parameter and proper use of conversion functions to transform between OpenAI and Bifrost formats.

32-32: ⚠️ Potential issue – Fix JSON encoding with fasthttp context.

Using `json.NewEncoder(ctx).Encode()` doesn't work correctly with `fasthttp.RequestCtx`, as it doesn't implement the `io.Writer` interface as expected by `json.NewEncoder`.

- json.NewEncoder(ctx).Encode(map[string]string{"error": err.Error()})
+ errorResp, _ := json.Marshal(map[string]string{"error": err.Error()})
+ ctx.SetBody(errorResp)
+ ctx.SetContentType("application/json")

Likely an incorrect or invalid review comment.
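As a standalone illustration of the marshal-first pattern the suggested diff uses: the sketch below stays stdlib-only, so the fasthttp-specific calls (`ctx.SetBody`, `ctx.SetContentType`) are only mentioned in comments; the helper name `jsonError` is an assumption, not code from this PR.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonError marshals the error payload up front, so the handler can set the
// response body and content type explicitly instead of streaming through an
// encoder whose writer semantics it does not control.
func jsonError(msg string) []byte {
	body, err := json.Marshal(map[string]string{"error": msg})
	if err != nil {
		// a map[string]string always marshals; this branch is defensive only
		return []byte(`{"error":"internal"}`)
	}
	return body
}

func main() {
	// In the actual router this byte slice would be handed to
	// ctx.SetBody(...) together with ctx.SetContentType("application/json").
	fmt.Println(string(jsonError("model is required"))) // {"error":"model is required"}
}
```

Marshalling first also makes the error path explicit: a failed `Encode` into a response writer can leave a half-written body, whereas `json.Marshal` either yields a complete payload or nothing.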
docs/integrations.md (2)
1-479: Excellent comprehensive integration documentation.This documentation provides thorough coverage of all integration patterns with clear examples, configuration guidance, and migration paths. The structure is logical and user-friendly, making it easy for developers to adopt Bifrost with their existing SDK usage.
390-390: 🧹 Nitpick (assertive) – Fix grammar: subject-verb agreement.

The phrase "load balances" should be "load-balances" (hyphenated) or restructured for proper grammar.

-Bifrost automatically load balances requests across multiple keys for the same provider based on the `weight` parameter and model availability.
+Bifrost automatically load-balances requests across multiple keys for the same provider based on the `weight` parameter and model availability.

Likely an incorrect or invalid review comment.
transports/bifrost-http/integrations/anthropic/router.go (2)
42-46: Good addition of Anthropic-specific validation.

The additional validation for `MaxTokens` is appropriate since it's a required parameter for Anthropic's API, unlike OpenAI where it's optional.

12-25: LGTM on consistent router structure.

The router maintains consistency with other integration routers while properly implementing Anthropic-specific endpoint patterns.
transports/bifrost-http/integrations/litellm/router.go (3)
22-28: Excellent endpoint versioning support.Good implementation of both
/litellmand/litellm/v1paths for each endpoint type, providing flexibility for different LiteLLM client configurations.
49-49: Correct Bifrost method selection.Properly uses
ChatCompletionRequestfor chat completions andTextCompletionRequestfor text completions, aligning with the different endpoint purposes.Also applies to: 81-81
12-20: LGTM on router structure and constructor.Clean, consistent implementation following the established pattern across integration routers.
transports/bifrost-http/integrations/mistral/types.go (1)
67-71: Details

❌ Incorrect review comment

Potential provider mapping confusion

The code sets `Provider: schemas.OpenAI` with a comment explaining that Mistral is OpenAI-compatible. However, if Bifrost has a specific Mistral provider type, it should be used instead to maintain clear provider distinction.

Let me check if there's a Mistral provider constant defined:

🏁 Script executed:

#!/bin/bash
# Search for Mistral provider definition in the schemas
ast-grep --pattern 'Mistral $_ ModelProvider = $_'

Length of output: 52

🏁 Script executed:

#!/bin/bash
# Also search more broadly for Mistral-related constants
rg -A 2 -B 2 "Mistral.*ModelProvider|ModelProvider.*Mistral"

Length of output: 62

🏁 Script executed:

#!/bin/bash
# Locate ModelProvider type definition
rg -n 'type ModelProvider' .
# List all provider constants of type ModelProvider
rg -n 'const .*ModelProvider =' .
# Check for any occurrences of "Mistral" anywhere in the repo
rg -n 'Mistral' .

Length of output: 8198

No dedicated Mistral provider in schemas — using OpenAI is correct

We inspected core/schemas/bifrost.go (where `ModelProvider` is defined) and found no `Mistral` constant, only existing providers like `OpenAI`, `Anthropic`, etc. Since Bifrost doesn't define a separate `Mistral` provider and Mistral is OpenAI-API-compatible, `Provider: schemas.OpenAI` is intentional and correct. No change required.

Likely an incorrect or invalid review comment.
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.PresencePenalty != nil ||
	r.FrequencyPenalty != nil || r.N != nil || r.LogProbs != nil || r.TopLogProbs != nil {
	params := &schemas.ModelParameters{
		ExtraParams: make(map[string]interface{}),
	}

	if r.MaxTokens != nil {
		params.MaxTokens = r.MaxTokens
	}
	if r.Temperature != nil {
		params.Temperature = r.Temperature
	}
	if r.TopP != nil {
		params.TopP = r.TopP
	}
	if r.PresencePenalty != nil {
		params.PresencePenalty = r.PresencePenalty
	}
	if r.FrequencyPenalty != nil {
		params.FrequencyPenalty = r.FrequencyPenalty
	}
	if r.N != nil {
		params.ExtraParams["n"] = r.N
	}
	if r.LogProbs != nil {
		params.ExtraParams["logprobs"] = r.LogProbs
	}
	if r.TopLogProbs != nil {
		params.ExtraParams["top_logprobs"] = r.TopLogProbs
	}

	bifrostReq.Params = params
🛠️ Refactor suggestion
stop and logit_bias parameters not propagated
stop and logit_bias are part of the OpenAI spec yet are omitted from the Bifrost parameters map.
@@
- if r.TopLogProbs != nil {
- params.ExtraParams["top_logprobs"] = r.TopLogProbs
- }
+ if r.TopLogProbs != nil {
+ params.ExtraParams["top_logprobs"] = r.TopLogProbs
+ }
+
+ if r.LogitBias != nil {
+ params.ExtraParams["logit_bias"] = r.LogitBias
+ }
+
+ // Normalise `stop` (string | []string) into []string
+ if r.Stop != nil {
+ switch v := r.Stop.(type) {
+ case string:
+ seq := []string{v}
+ params.StopSequences = &seq
+ case []interface{}:
+ seq := make([]string, 0, len(v))
+ for _, s := range v {
+ if str, ok := s.(string); ok {
+ seq = append(seq, str)
+ }
+ }
+ params.StopSequences = &seq
+ }
+ }Without this, callers relying on these knobs will see no effect.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go between lines 135 and
167, the code converts OpenAI request parameters but omits the 'stop' and
'logit_bias' fields. To fix this, add checks for r.Stop and r.LogitBias similar
to other parameters, and if they are not nil, include them in the
params.ExtraParams map with appropriate keys. This ensures these parameters are
propagated correctly in the Bifrost request.
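The `stop` normalisation from the suggested diff can be exercised in isolation. After generic JSON decoding, OpenAI's `stop` arrives as either a `string` or a `[]interface{}`; the helper name `normalizeStop` below is a hypothetical stand-in for the inline type switch in the diff, written as a minimal sketch.

```go
package main

import "fmt"

// normalizeStop converts OpenAI's `stop` field (string | []string, decoded
// generically as string | []interface{}) into a flat []string. Non-string
// array elements are skipped, mirroring the diff's defensive type assertion.
func normalizeStop(stop interface{}) []string {
	switch v := stop.(type) {
	case string:
		return []string{v}
	case []interface{}:
		seq := make([]string, 0, len(v))
		for _, s := range v {
			if str, ok := s.(string); ok {
				seq = append(seq, str)
			}
		}
		return seq
	}
	return nil // unsupported shape: leave StopSequences unset
}

func main() {
	fmt.Println(normalizeStop("END"))                         // [END]
	fmt.Println(normalizeStop([]interface{}{"Q:", 42, "A:"})) // non-strings skipped
}
```

In the converter itself, a non-nil result would then be assigned to `params.StopSequences` as in the diff.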
// Convert tools
if r.Tools != nil {
	tools := []schemas.Tool{}
	for _, tool := range *r.Tools {
		// Convert input_schema to FunctionParameters
		params := schemas.FunctionParameters{
			Type: "object",
		}
		if tool.InputSchema != nil {
			if schemaMap, ok := tool.InputSchema.(map[string]interface{}); ok {
				if typeVal, ok := schemaMap["type"].(string); ok {
					params.Type = typeVal
				}
				if desc, ok := schemaMap["description"].(string); ok {
					params.Description = &desc
				}
				if required, ok := schemaMap["required"].([]interface{}); ok {
					reqStrings := make([]string, len(required))
					for i, req := range required {
						if reqStr, ok := req.(string); ok {
							reqStrings[i] = reqStr
						}
					}
					params.Required = reqStrings
				}
				if properties, ok := schemaMap["properties"].(map[string]interface{}); ok {
					params.Properties = properties
				}
				if enum, ok := schemaMap["enum"].([]interface{}); ok {
					enumStrings := make([]string, len(enum))
					for i, e := range enum {
						if eStr, ok := e.(string); ok {
							enumStrings[i] = eStr
						}
					}
					params.Enum = &enumStrings
				}
			}
		}

		t := schemas.Tool{
			Type: "function",
			Function: schemas.Function{
				Name:        tool.Name,
				Description: tool.Description,
				Parameters:  params,
			},
		}
		tools = append(tools, t)
	}
	if bifrostReq.Params == nil {
		bifrostReq.Params = &schemas.ModelParameters{}
	}
	bifrostReq.Params.Tools = &tools

	// Convert tool choice
	if r.ToolChoice != nil {
		toolChoice := &schemas.ToolChoice{
			Type: schemas.ToolChoiceType(r.ToolChoice.Type),
		}
		if r.ToolChoice.Type == "tool" && r.ToolChoice.Name != "" {
			toolChoice.Function = schemas.ToolChoiceFunction{
				Name: r.ToolChoice.Name,
			}
		}
		bifrostReq.Params.ToolChoice = toolChoice
	}
}
💡 Verification agent
🧩 Analysis chain
stream flag, image blocks, and tool results are silently dropped
ConvertToBifrostRequest ignores:

- `r.Stream` – callers requesting streaming receive a non-streaming request.
- `image` content blocks – lost when forwarding to Bifrost.
- `tool_result` blocks – only the ID is forwarded; the actual result/content is discarded.
If these fields are relevant to downstream logic, propagate them or explicitly document that they are unsupported.
🏁 Script executed:
#!/bin/bash
# Locate the ConvertToBifrostRequest function and check handling of stream, image, and tool_result
rg -n "ConvertToBifrostRequest" -A200 -B20 transports/bifrost-http/integrations/anthropic/types.go
echo "----- Stream references -----"
rg -n "r\.Stream" -n transports/bifrost-http/integrations/anthropic/types.go || echo "No occurrences of r.Stream"
echo "----- Content block handling -----"
rg -n "case.*image" -n transports/bifrost-http/integrations/anthropic/types.go || echo "No image case in switch"
echo "----- tool_result handling -----"
rg -n "tool_result" -n transports/bifrost-http/integrations/anthropic/types.go || echo "No tool_result processing"

Length of output: 8221
Ensure streaming, image, and tool-result data are handled (or clearly documented)

ConvertToBifrostRequest currently:

- Never propagates `r.Stream` → no `Params.Stream` on the BifrostRequest, so streaming callers fall back to non-streaming.
- Lacks a `case "image"` branch → all image content blocks are silently dropped.
- In the `"tool_result"` branch, only appends the result text to `textContent` and sets `ToolMessage.ToolCallID`, but does not serialize the tool result into a structured field.

If your downstream logic relies on streaming, images, or structured tool results, update ConvertToBifrostRequest to:

- Map `r.Stream` into `bifrostReq.Params.Stream`.
- Add handling for `content.Type == "image"` (e.g. populate an `ImageURL` or `ImageData` field on `BifrostMessage`).
- Extend the `"tool_result"` case to include the result payload on `ToolMessage` (not just the ID).

Otherwise, document that streaming, images, and tool results are unsupported by this conversion.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/anthropic/types.go around lines 170 to
237, the ConvertToBifrostRequest function does not propagate the r.Stream flag
to bifrostReq.Params.Stream, causing streaming requests to be downgraded to
non-streaming. It also lacks handling for content blocks of type "image",
resulting in dropped image data, and the "tool_result" case only forwards the
tool call ID without including the actual result content. To fix this, update
the function to map r.Stream into bifrostReq.Params.Stream, add a case to handle
"image" content blocks by populating appropriate image fields on BifrostMessage,
and extend the "tool_result" handling to serialize and include the full tool
result payload in the ToolMessage structure. If these features are not
supported, add clear documentation stating that streaming, images, and tool
results are not handled by this conversion.
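A minimal sketch of the content-block handling the comment asks for. The struct shapes (`contentBlock`, `bifrostMessage`) and field names (`ImageData`, `ToolResult`) are simplified assumptions for illustration, not the project's actual schemas; only the three behaviours (keep images, serialize tool results, accumulate text) are the point.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical, flattened stand-ins for Anthropic content blocks and the
// Bifrost-side message; real names and shapes will differ.
type contentBlock struct {
	Type      string      // "text", "image", "tool_result"
	Text      string      // for "text"
	Source    interface{} // for "image"
	ToolUseID string      // for "tool_result"
	Content   interface{} // for "tool_result"
}

type bifrostMessage struct {
	Text       string
	ImageData  interface{} // assumed field: previously images were dropped
	ToolCallID string
	ToolResult string // serialized payload, not only the ID
}

func convertBlocks(blocks []contentBlock) bifrostMessage {
	var msg bifrostMessage
	for _, b := range blocks {
		switch b.Type {
		case "text":
			msg.Text += b.Text
		case "image":
			msg.ImageData = b.Source // keep the image instead of dropping it
		case "tool_result":
			msg.ToolCallID = b.ToolUseID
			if payload, err := json.Marshal(b.Content); err == nil {
				msg.ToolResult = string(payload) // full result payload
			}
		}
	}
	return msg
}

func main() {
	msg := convertBlocks([]contentBlock{
		{Type: "text", Text: "hi"},
		{Type: "tool_result", ToolUseID: "t1", Content: map[string]int{"rows": 3}},
	})
	fmt.Println(msg.ToolCallID, msg.ToolResult) // t1 {"rows":3}
}
```

The stream flag is orthogonal to this switch: it would be copied once, outside the loop, from the request into the Bifrost parameters.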
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.PresencePenalty != nil ||
	r.FrequencyPenalty != nil || r.N != nil {
	params := &schemas.ModelParameters{
		ExtraParams: make(map[string]interface{}),
	}

	if r.MaxTokens != nil {
		params.MaxTokens = r.MaxTokens
	}
	if r.Temperature != nil {
		params.Temperature = r.Temperature
	}
	if r.TopP != nil {
		params.TopP = r.TopP
	}
	if r.PresencePenalty != nil {
		params.PresencePenalty = r.PresencePenalty
	}
	if r.FrequencyPenalty != nil {
		params.FrequencyPenalty = r.FrequencyPenalty
	}
	if r.N != nil {
		params.ExtraParams["n"] = r.N
	}

	// Add LiteLLM-specific params
	if r.APIBase != nil {
		params.ExtraParams["api_base"] = r.APIBase
	}
	if r.APIVersion != nil {
		params.ExtraParams["api_version"] = r.APIVersion
	}
	if r.Drop_params != nil {
		params.ExtraParams["drop_params"] = r.Drop_params
	}
	if r.Metadata != nil {
		params.ExtraParams["metadata"] = r.Metadata
	}

	bifrostReq.Params = params
}
🧹 Nitpick (assertive)
Several LiteLLM fields silently dropped
The converter ignores Stop, ToolChoice, ResponseFormat, Stream, and Seed.
Unless there is a conscious product decision, omitting them changes request semantics (e.g., stop greatly affects generation output).
Recommend mapping them into schemas.ModelParameters.ExtraParams at minimum so that downstream providers still receive the intent.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 126 to
168, the conversion logic omits several LiteLLM fields such as Stop, ToolChoice,
ResponseFormat, Stream, and Seed, which can alter request semantics. To fix
this, add these missing fields into the params.ExtraParams map so they are
preserved and passed downstream. This ensures the full intent of the original
request is maintained without losing important parameters.
for _, toolCall := range *msg.ToolCalls {
	tc := schemas.ToolCall{
		Type: stringPtr(toolCall.Type),
		ID:   &toolCall.ID,
		Function: schemas.FunctionCall{
			Name:      &toolCall.Function.Name,
			Arguments: toolCall.Function.Arguments,
		},
Bug: captured pointers all reference the same loop variable instance
toolCall is re-used on every iteration of the for … range loop.
Taking the address of its fields (&toolCall.ID, &toolCall.Function.Name) stores identical memory addresses for every schemas.ToolCall, so after the loop all IDs / Names point to the values of the last element.
-for _, toolCall := range *msg.ToolCalls {
- tc := schemas.ToolCall{
- Type: stringPtr(toolCall.Type),
- ID: &toolCall.ID,
- Function: schemas.FunctionCall{
- Name: &toolCall.Function.Name,
- Arguments: toolCall.Function.Arguments,
- },
- }
+for _, toolCall := range *msg.ToolCalls {
+ // copy into new vars to obtain distinct addresses
+ idCopy := toolCall.ID
+ nameCopy := toolCall.Function.Name
+ tc := schemas.ToolCall{
+ Type: stringPtr(toolCall.Type),
+ ID: &idCopy,
+ Function: schemas.FunctionCall{
+ Name: &nameCopy,
+ Arguments: toolCall.Function.Arguments,
+ },
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 100 to
107, the loop variable toolCall is reused in each iteration, causing pointers to
its fields to reference the same memory address for all iterations. To fix this,
create a new local variable inside the loop to hold the current toolCall value,
then take pointers to that local variable's fields. This ensures each
schemas.ToolCall has unique pointers to the correct data.
// Helper function to determine provider from model name
func determineProviderFromModel(model string) schemas.ModelProvider {
	// LiteLLM uses prefixes or model names to determine provider
	// This is a simplified version - in production you'd have more sophisticated routing
	if contains(model, "gpt") || contains(model, "o1") {
		return schemas.OpenAI
	} else if contains(model, "claude") {
		return schemas.Anthropic
	} else if contains(model, "gemini") || contains(model, "vertex") {
		return schemas.Vertex
	} else if contains(model, "bedrock") {
		return schemas.Bedrock
	} else if contains(model, "cohere") {
		return schemas.Cohere
	}
	// Default to OpenAI for unknown models
	return schemas.OpenAI
}
There was a problem hiding this comment.
🛠️ Refactor suggestion
Provider detection is case-sensitive → mis-routes mixed-case model names

determineProviderFromModel checks substrings directly; models such as "GPT-4o" or "Claude-3" will slip through.

-func determineProviderFromModel(model string) schemas.ModelProvider {
-	if contains(model, "gpt") || contains(model, "o1") {
+func determineProviderFromModel(model string) schemas.ModelProvider {
+	m := strings.ToLower(model)
+	if strings.Contains(m, "gpt") || strings.Contains(m, "o1") {
…

Using strings.ToLower (or strings.EqualFold for exact matches) avoids brittle routing logic.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 325 to
342, the determineProviderFromModel function performs case-sensitive substring
checks, causing misrouting for mixed-case model names like "GPT-4o" or
"Claude-3". To fix this, convert the input model string to lowercase using
strings.ToLower before performing substring checks, ensuring case-insensitive
matching and more robust provider detection.
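A runnable reduction of the suggested fix: lowercase once up front, then match. The local `provider` type and constants are hypothetical stand-ins for `schemas.ModelProvider`, kept so the sketch compiles on its own.

```go
package main

import (
	"fmt"
	"strings"
)

// provider is a local stand-in for schemas.ModelProvider.
type provider string

const (
	openAI    provider = "openai"
	anthropic provider = "anthropic"
)

// determineProvider lowercases the model name once, so "GPT-4o" and
// "Claude-3" route the same as their lowercase forms.
func determineProvider(model string) provider {
	m := strings.ToLower(model)
	switch {
	case strings.Contains(m, "gpt"), strings.Contains(m, "o1"):
		return openAI
	case strings.Contains(m, "claude"):
		return anthropic
	}
	return openAI // default for unknown models, as in the original
}

func main() {
	fmt.Println(determineProvider("GPT-4o"))   // openai
	fmt.Println(determineProvider("Claude-3")) // anthropic
}
```

Lowercasing once in the caller (rather than inside each `contains` call) keeps the per-model cost to a single allocation.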
if choice.Message.AssistantMessage != nil && choice.Message.AssistantMessage.ToolCalls != nil {
	toolCalls := []LiteLLMToolCall{}
	for _, toolCall := range *choice.Message.AssistantMessage.ToolCalls {
		tc := LiteLLMToolCall{
			Type: *toolCall.Type,
			Function: LiteLLMFunctionCall{
				Name:      *toolCall.Function.Name,
				Arguments: toolCall.Function.Arguments,
			},
Potential nil-pointer dereference when building response
toolCall.Type and toolCall.Function.Name are *string.
Dereferencing without a nil-check (*toolCall.Type, *toolCall.Function.Name) will panic if the upstream response omits these fields.
Add defensive checks or default fallbacks:
var tc LiteLLMToolCall
-if toolCall.Type != nil {
- tc.Type = *toolCall.Type
-}
-if toolCall.Function.Name != nil {
- tc.Function.Name = *toolCall.Function.Name
-}
tc.Function.Arguments = toolCall.Function.ArgumentsFailing fast inside a production proxy will take the whole request down – please harden this part.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 429 to
437, the code dereferences pointers toolCall.Type and toolCall.Function.Name
without checking for nil, which can cause a panic if these fields are missing.
To fix this, add nil checks before dereferencing these pointers and provide
default fallback values (e.g., empty string) when they are nil to prevent
nil-pointer dereference and ensure the proxy remains stable.
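The guard pattern in compact form. `deref` is a hypothetical helper (not in the PR) that centralises the nil check with a fallback, so optional upstream fields can be read without panicking; the output struct is a simplified stand-in for `LiteLLMToolCall`.

```go
package main

import "fmt"

// deref returns the pointed-to string, or fallback when the pointer is nil.
func deref(s *string, fallback string) string {
	if s == nil {
		return fallback
	}
	return *s
}

type toolCallOut struct {
	Type string
	Name string
}

// buildToolCall mirrors the hardened response construction: missing optional
// fields fall back to defaults instead of crashing the proxy.
func buildToolCall(typ, name *string) toolCallOut {
	return toolCallOut{
		Type: deref(typ, "function"), // sensible default when omitted
		Name: deref(name, ""),
	}
}

func main() {
	name := "get_weather"
	fmt.Println(buildToolCall(nil, &name)) // no panic despite nil Type
}
```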
Actionable comments posted: 15
♻️ Duplicate comments (20)

transports/bifrost-http/integrations/langchain/types.go (2)

65-83: ⚠️ Potential issue – Persisting loop-variable pointer bug corrupts all message contents

`bifrostMsg.Content = &msg.Content` still takes the address of the loop variable `msg`. After the loop, every stored pointer references the same memory, containing whatever value `msg.Content` has in the final iteration.

This was already flagged in a previous review and must be fixed.

- bifrostMsg.Content = &msg.Content
+ contentCopy := msg.Content
+ bifrostMsg.Content = &contentCopy

87-104: ⚠️ Potential issue – Tool-call `Name` suffers from the same pointer-aliasing issue

`&toolCall.Name` captures the address of the loop variable, so every tool-call ends up with the name of the last iteration.

This repeats the earlier review comment and is still unresolved.

- tc := schemas.ToolCall{
+ nameCopy := toolCall.Name
+ tc := schemas.ToolCall{
  	Type: stringPtr("function"),
  	ID:   toolCall.ID,
  	Function: schemas.FunctionCall{
- 		Name:      &toolCall.Name,
+ 		Name:      &nameCopy,
  		Arguments: argsJSON,
  	},
  }

transports/bifrost-http/integrations/anthropic/router.go (1)

30-37: JSON encoding issues correctly resolved.

The implementation now properly uses `json.Marshal` and `ctx.SetBody` instead of the incompatible `json.NewEncoder(ctx).Encode()` pattern. This addresses all the previous fasthttp compatibility issues.

Also applies to: 56-62, 64-69
transports/bifrost-http/integrations/langchain/router.go (3)

34-38: Use fasthttp's idiomatic JSON write path

Same issue was raised previously: `json.NewEncoder(ctx).Encode(...)` allocates an extra buffer and writes via the slow path. Marshal first, then `ctx.SetBody`.

Also applies to: 52-54, 68-70, 87-88, 102-103

122-127: Batch error response still inconsistent

Only the error message string is returned, losing structured details (`code`, `type`, etc.). See prior review comment for a concrete patch.

140-145: Streaming endpoint advertises streaming but doesn't stream

Calling the non-streaming handler is misleading; either implement SSE/WebSockets or return 501 with an explanatory message (see earlier suggestion).
transports/bifrost-http/integrations/langgraph/router.go (3)

203-210: Non-idiomatic zero-value struct comparison

Duplicate of earlier feedback – compare a sentinel field such as `TotalTokens > 0` instead of `result.Usage != (schemas.LLMUsage{})`.

238-249: Hard-coded fallback content should be configurable

Same issue was raised previously; consider passing defaults via config instead of "Continue the conversation" / "Start conversation".

311-329: Predictable ID generation

`randomString` cycles through a fixed charset, producing non-random IDs. Replace with `crypto/rand` or UUID. (A previous comment already provided a patch.)

transports/bifrost-http/integrations/anthropic/types.go (3)
98-126: Stream flag, image blocks & full tool-result payloads are silently ignored

The conversion omits `r.Stream`, drops `"image"` content blocks, and flattens tool-result content to plain text. Behaviour was flagged earlier and is still present.

243-251: `jsonifyInput` hides marshal errors

Returning "{}" masks the real problem and makes debugging impossible. Propagate or log the error instead.

286-291: Usage struct zero-value check

Use `bifrostResp.Usage.TotalTokens > 0` rather than comparing the whole struct to its zero value.

transports/bifrost-http/integrations/mistral/types.go (2)
239-246: Non-idiomatic zero-value check for usage

Same concern as earlier files – check `TotalTokens` instead of struct equality.

262-276: Potential nil-pointer dereference when mapping tool calls

Pointer fields (`toolCall.Type`, `toolCall.Function.Name`) are dereferenced without checks, risking panics. Prior feedback already covered this.

transports/bifrost-http/integrations/langgraph/types.go (2)
381-404: `json.NewEncoder(nil).Encode(...)` still panics – replace with `fmt.Errorf`

Previous review already highlighted this. The call dereferences a nil writer and will panic on the first validation failure.

-import "encoding/json"
+import (
+	"encoding/json"
+	"fmt"
+)
...
-	return json.NewEncoder(nil).Encode(map[string]string{"error": "Start node not found"})
+	return fmt.Errorf("start node %q not found", g.StartNode)
...
-	return json.NewEncoder(nil).Encode(map[string]string{"error": "Edge references non-existent 'from' node: " + edge.From})
+	return fmt.Errorf("edge references non-existent 'from' node %q", edge.From)
...
-	return json.NewEncoder(nil).Encode(map[string]string{"error": "End node not found: " + endNodeID})
+	return fmt.Errorf("end node %q not found", endNodeID)

Missing this fix crashes every invalid-graph request.

186-197: Pointer to loop variable `content` escapes – messages share the last value

Addressing `content` inside the loop stores the same pointer in every `BifrostMessage`. Copy the value first:

-	if content, ok := msgMap["content"].(string); ok {
-		msg.Content = &content
-	}
+	if content, ok := msgMap["content"].(string); ok {
+		c := content
+		msg.Content = &c
+	}

Do the same for any other loop-scoped variables taken by address (`role`, etc.).

transports/bifrost-http/integrations/litellm/types.go (4)
100-110: Captured pointers all reference the same loop variable instanceThe previous review already covered this; the issue remains unaddressed. Create per-iteration copies before taking addresses to ensure distinct pointers.
126-167: Important LiteLLM fields still silently dropped
Stop,ToolChoice,ResponseFormat,Stream, andSeedare ignored during parameter translation, potentially altering request semantics. Map them at least intoparams.ExtraParamsso downstream providers receive the intent.
325-347: Provider detection remains case-sensitive → mis-routes mixed-case model names

The helper still compares raw substrings; convert to lowercase before matching.

```diff
-func contains(s, substr string) bool {
-	return strings.Contains(s, substr)
+func contains(s, substr string) bool {
+	return strings.Contains(strings.ToLower(s), substr)
 }
```

Then drop the redundant `strings.ToLower` calls in the caller if added.
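A standalone sketch of the case-insensitive helper; this version lowers both arguments (slightly stronger than the diff above, which lowers only the haystack), and the model names are purely illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// containsFold reports whether s contains substr, ignoring case on both sides.
func containsFold(s, substr string) bool {
	return strings.Contains(strings.ToLower(s), strings.ToLower(substr))
}

func main() {
	fmt.Println(containsFold("Claude-3-Opus", "claude")) // true
	fmt.Println(containsFold("gpt-4o", "claude"))        // false
}
```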
429-445: Nil-pointer dereference risk when building tool-call response

`*toolCall.Type` and `*toolCall.Function.Name` are dereferenced without checks. If upstream omits them the proxy panics. Guard with nil checks or default empty strings.
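One way to centralize the guard is a small deref helper with a default. The `ToolCall` shape below mirrors the schema's pointer fields but is simplified for illustration:

```go
package main

import "fmt"

type FunctionCall struct {
	Name      *string
	Arguments string
}

type ToolCall struct {
	Type     *string
	Function FunctionCall
}

// deref returns *s, or def when s is nil, so optional fields can be read
// without risking a nil-pointer panic.
func deref(s *string, def string) string {
	if s == nil {
		return def
	}
	return *s
}

func main() {
	var tc ToolCall // Type and Function.Name both nil, as an omitting upstream would send
	fmt.Println(deref(tc.Type, "function"))       // function
	fmt.Println(deref(tc.Function.Name, "") == "") // true
}
```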
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (15)
- docs/integrations.md (1 hunks)
- transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
- transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
- transports/bifrost-http/integrations/genai/router.go (2 hunks)
- transports/bifrost-http/integrations/langchain/router.go (1 hunks)
- transports/bifrost-http/integrations/langchain/types.go (1 hunks)
- transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
- transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
- transports/bifrost-http/integrations/litellm/router.go (1 hunks)
- transports/bifrost-http/integrations/litellm/types.go (1 hunks)
- transports/bifrost-http/integrations/mistral/router.go (1 hunks)
- transports/bifrost-http/integrations/mistral/types.go (1 hunks)
- transports/bifrost-http/integrations/openai/router.go (1 hunks)
- transports/bifrost-http/integrations/openai/types.go (1 hunks)
- transports/bifrost-http/main.go (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (7)
transports/bifrost-http/main.go (8)
- transports/bifrost-http/integrations/utils.go (1): ExtensionRouter (5-7)
- transports/bifrost-http/integrations/genai/router.go (1): NewGenAIRouter (18-20)
- transports/bifrost-http/integrations/openai/router.go (1): NewOpenAIRouter (18-20)
- transports/bifrost-http/integrations/anthropic/router.go (1): NewAnthropicRouter (18-20)
- transports/bifrost-http/integrations/mistral/router.go (1): NewMistralRouter (18-20)
- transports/bifrost-http/integrations/litellm/router.go (1): NewLiteLLMRouter (18-20)
- transports/bifrost-http/integrations/langchain/router.go (1): NewLangChainRouter (19-21)
- transports/bifrost-http/integrations/langgraph/router.go (1): NewLangGraphRouter (19-21)

transports/bifrost-http/integrations/openai/types.go (2)
- core/schemas/bifrost.go (22): ToolChoiceTypeFunction (126-126), FunctionCall (260-263), Function (101-105), ToolChoice (137-140), LogProbs (253-257), BifrostRequest (54-64), OpenAI (34-34), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRole (20-20), ToolCall (266-270), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), ToolChoiceTypeNone (120-120), ToolChoiceTypeAuto (122-122), ToolChoiceTypeRequired (128-128), ToolChoiceFunction (132-134), BifrostResponse (184-194), LLMUsage (197-203)
- core/schemas/provider.go (1): Provider (142-149)

transports/bifrost-http/integrations/mistral/router.go (2)
- transports/bifrost-http/integrations/mistral/types.go (2): MistralChatRequest (51-63), DeriveMistralFromBifrostResponse (226-285)
- transports/bifrost-http/lib/ctx.go (1): ConvertToBifrostContext (43-73)

transports/bifrost-http/integrations/anthropic/types.go (2)
- core/schemas/bifrost.go (20): ToolChoiceTypeFunction (126-126), ToolChoice (137-140), BifrostRequest (54-64), Anthropic (36-36), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRoleSystem (25-25), ModelChatMessageRole (20-20), ToolCall (266-270), Function (101-105), FunctionCall (260-263), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), ToolChoiceType (116-116), ToolChoiceFunction (132-134), BifrostResponse (184-194), LLMUsage (197-203)
- core/schemas/provider.go (1): Provider (142-149)

transports/bifrost-http/integrations/mistral/types.go (2)
- core/schemas/bifrost.go (16): Function (101-105), ToolChoice (137-140), BifrostRequest (54-64), OpenAI (34-34), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRole (20-20), ToolCall (266-270), FunctionCall (260-263), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), BifrostResponse (184-194), LLMUsage (197-203)
- core/schemas/provider.go (1): Provider (142-149)

transports/bifrost-http/integrations/langchain/types.go (2)
- core/schemas/provider.go (1): Provider (142-149)
- core/schemas/bifrost.go (19): BifrostRequest (54-64), OpenAI (34-34), ModelProvider (31-31), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRoleUser (24-24), ModelChatMessageRoleAssistant (23-23), ModelChatMessageRoleSystem (25-25), ModelChatMessageRoleTool (27-27), ToolCall (266-270), Function (101-105), FunctionCall (260-263), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), BifrostResponse (184-194), LLMUsage (197-203)

transports/bifrost-http/integrations/litellm/types.go (2)
- core/schemas/bifrost.go (21): Function (101-105), ToolChoice (137-140), BifrostRequest (54-64), RequestInput (46-49), BifrostMessage (143-152), ModelChatMessageRole (20-20), ToolCall (266-270), FunctionCall (260-263), AssistantMessage (166-171), ToolMessage (160-163), ModelParameters (75-89), Tool (108-112), FunctionParameters (92-98), ModelProvider (31-31), OpenAI (34-34), Anthropic (36-36), Vertex (39-39), Bedrock (37-37), Cohere (38-38), BifrostResponse (184-194), LLMUsage (197-203)
- core/schemas/provider.go (1): Provider (142-149)
🪛 LanguageTool
docs/integrations.md
[grammar] ~390-~390: Possible subject-verb agreement error.
Context: ...oad Balancing:** Bifrost automatically load balances requests across multiple keys ...
(SINGULAR_NOUN_ADV_AGREEMENT)
[uncategorized] ~423-~423: Possible missing preposition found.
Context: ...st for consistent logging and metrics - Track usage, costs, and performance across al...
(AI_HYDRA_LEO_MISSING_TO)
🔇 Additional comments (7)
transports/bifrost-http/main.go (1)
185-193: 🧹 Nitpick (assertive): Variable shadows imported package name `integrations`

Declaring `extensions := []integrations.ExtensionRouter{...}` inside `main` shadows the package-level import `integrations`, which can confuse readers and tooling.

```diff
-	extensions := []integrations.ExtensionRouter{
+	extRouters := []integrations.ExtensionRouter{
 	...
 	}
 	...
-	for _, extension := range extensions {
+	for _, extension := range extRouters {
 		extension.RegisterRoutes(r)
 	}
```

Likely an incorrect or invalid review comment.
docs/integrations.md (1)
1-479: Excellent comprehensive integration guide.

The documentation is well-structured and provides clear examples for all supported integrations. It effectively demonstrates how users can migrate to Bifrost with minimal code changes while gaining benefits like fallbacks, load balancing, and unified monitoring.
transports/bifrost-http/integrations/mistral/router.go (2)
51-67: Excellent error handling implementation.

The sophisticated error handling correctly distinguishes between client errors (4xx) and server errors (5xx) based on error properties. This provides a better client experience and follows HTTP semantics properly.
This error handling pattern should be considered the best practice for other integration routers in this PR.
30-36: Proper JSON encoding with fasthttp.

The JSON encoding correctly uses the `json.Marshal` and `ctx.SetBody` pattern, which is compatible with fasthttp. This addresses the previous compatibility issues.

Also applies to: 62-66, 70-74
transports/bifrost-http/integrations/litellm/router.go (1)
32-40: 👍 Correct switch to `json.Marshal` + `SetBody`

All JSON responses now use the idiomatic fasthttp pattern that was requested in earlier reviews.
Also applies to: 70-78
transports/bifrost-http/integrations/openai/types.go (1)
90-118: Merged tool-call handling looks good

The earlier overwrite bug has been fixed: both modern `tool_calls` and legacy `function_call` data are merged into a single slice before assignment.

transports/bifrost-http/integrations/langgraph/router.go (1)
65-78: `/stream` endpoint is synchronous – clarify or implement real streaming

The handler just delegates to `executeGraph` and returns a full JSON body, yet the route name implies server-sent streaming. This mismatch will confuse SDKs that expect chunked replies.

Either:
- Implement a proper SSE/websocket stream, or
- Rename the route and document that streaming is not yet supported.
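If real streaming is implemented later, each chunk must be framed as a server-sent event. A sketch of the framing only; the payloads are hypothetical and the real handler would write these frames to the response incrementally:

```go
package main

import "fmt"

// sseEvent frames one chunk as a server-sent event: a "data:" line
// terminated by a blank line ends the event.
func sseEvent(data string) string {
	return "data: " + data + "\n\n"
}

func main() {
	fmt.Print(sseEvent(`{"delta":"hel"}`))
	fmt.Print(sseEvent(`{"delta":"lo"}`))
}
```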
```go
bifrostReq := &schemas.BifrostRequest{
	Provider: provider,
	Model:    r.Model,
	Input: schemas.RequestInput{
		ChatCompletionInput: &[]schemas.BifrostMessage{},
	},
}
```
|
🧹 Nitpick (assertive)
Avoid taking a pointer to a composite literal
Using &[]schemas.BifrostMessage{} produces a pointer to a slice header that lives only on the heap for this expression; while legal, it’s harder to read and easy to misuse.
Prefer allocating the slice first and then referencing it ‑ keeps the code clear and symmetric with later appends.
- bifrostReq := &schemas.BifrostRequest{
- Provider: provider,
- Model: r.Model,
- Input: schemas.RequestInput{
- ChatCompletionInput: &[]schemas.BifrostMessage{},
- },
- }
+ chatInput := []schemas.BifrostMessage{}
+ bifrostReq := &schemas.BifrostRequest{
+ Provider: provider,
+ Model: r.Model,
+ Input: schemas.RequestInput{
+ ChatCompletionInput: &chatInput,
+ },
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
chatInput := []schemas.BifrostMessage{}
bifrostReq := &schemas.BifrostRequest{
	Provider: provider,
	Model:    r.Model,
	Input: schemas.RequestInput{
		ChatCompletionInput: &chatInput,
	},
}
```
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/types.go around lines 56 to
63, avoid taking a pointer directly to a composite literal slice. Instead,
declare the slice variable first, then assign its address to
ChatCompletionInput. This improves readability and aligns with common Go
practices for slice pointers.
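The readability point can be seen in isolation; the simplified types below stand in for `schemas.RequestInput` and `schemas.BifrostMessage`:

```go
package main

import "fmt"

type message struct{ Role, Content string }

type requestInput struct {
	ChatCompletionInput *[]message
}

// newRequestInput allocates the slice first and then takes its address –
// the clearer alternative to &[]message{} inside a composite literal.
func newRequestInput() requestInput {
	chatInput := []message{}
	return requestInput{ChatCompletionInput: &chatInput}
}

// appendMessage assigns the grown slice back through the pointer, so the
// update is visible to every holder of the same *[]message.
func appendMessage(in requestInput, m message) {
	*in.ChatCompletionInput = append(*in.ChatCompletionInput, m)
}

func main() {
	in := newRequestInput()
	appendMessage(in, message{Role: "user", Content: "hi"})
	fmt.Println(len(*in.ChatCompletionInput)) // 1
}
```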
```diff
 genAIResponse := DeriveGenAIFromBifrostResponse(result)
 ctx.SetStatusCode(fasthttp.StatusOK)
 ctx.SetContentType("application/json")
-json.NewEncoder(ctx).Encode(genAIResponse)
+jsonBytes, _ := json.Marshal(genAIResponse)
+ctx.SetBody(jsonBytes)
 }
```
🧹 Nitpick (assertive)
Success response: same unhandled marshal-error & unstructured pattern
Same concerns apply to the success path; additionally, for large responses pre-allocating a buffer via ctx.Write is marginally faster than SetBody.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai/router.go around lines 63 to 68,
the success response handling ignores errors from json.Marshal and uses
ctx.SetBody which is less efficient for large responses. Fix this by checking
and handling the error returned from json.Marshal properly, returning an
appropriate error response if marshalling fails. Also, replace ctx.SetBody with
ctx.Write using a pre-allocated buffer to improve performance for large JSON
responses.
```diff
 var req GeminiChatRequest
 if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
 	ctx.SetStatusCode(fasthttp.StatusBadRequest)
-	json.NewEncoder(ctx).Encode(err)
 	ctx.SetContentType("application/json")
+	jsonBytes, _ := json.Marshal(err)
+	ctx.SetBody(jsonBytes)
 	return
```
🧹 Nitpick (assertive)
Error body is marshalled but still unstructured
json.Marshal(err) encodes the error as a bare JSON string ("some error").
Consider wrapping it in a structured object (`{"error": "..."}`) to allow callers to reliably parse error details.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai/router.go around lines 41 to 47,
the error response body is marshalled directly from the error, resulting in an
unstructured JSON string. To fix this, wrap the error message in a structured
JSON object with a key like "error" before marshalling, so the response body is
a JSON object that clients can reliably parse.
```diff
 if err != nil {
 	ctx.SetStatusCode(fasthttp.StatusInternalServerError)
-	json.NewEncoder(ctx).Encode(err)
 	ctx.SetContentType("application/json")
+	jsonBytes, _ := json.Marshal(err)
+	ctx.SetBody(jsonBytes)
 	return
```
🧹 Nitpick (assertive)
Ignored marshal errors
Both places ignore the json.Marshal error with _. While unlikely, failed marshaling would leave the client with an empty body. Log or handle the error.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai/router.go around lines 55 to 60,
the error returned by json.Marshal is ignored, which could result in an empty
response body if marshaling fails. Modify the code to check the error returned
by json.Marshal, and if an error occurs, log it or handle it appropriately
before setting the response body. This ensures the client receives meaningful
feedback even if marshaling fails.
```diff
 jsonBytes, _ := json.Marshal(err)
 ctx.SetBody(jsonBytes)
 return
 }

 ctx.SetStatusCode(fasthttp.StatusOK)
 ctx.SetContentType("application/json")
-json.NewEncoder(ctx).Encode(resp)
+jsonBytes, _ := json.Marshal(resp)
+ctx.SetBody(jsonBytes)
```
🧹 Nitpick (assertive)
Repeated silent marshal failures
json.Marshal(err) and json.Marshal(resp) ignore the returned error; on the (rare) chance of failure you’ll emit an empty body with 200/500 status. Capture and fall back to ctx.SetBodyString(err.Error()).
🤖 Prompt for AI Agents
In transports/bifrost-http/main.go around lines 314 to 322, the json.Marshal
calls for err and resp ignore potential errors, which can lead to empty response
bodies on marshal failure. Modify the code to check the error returned by
json.Marshal; if an error occurs, use ctx.SetBodyString with err.Error() as a
fallback to ensure a meaningful response body is always sent.
```go
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.PresencePenalty != nil ||
	r.FrequencyPenalty != nil || r.N != nil || r.LogProbs != nil || r.TopLogProbs != nil ||
	r.Stop != nil || r.LogitBias != nil {
	params := &schemas.ModelParameters{
		ExtraParams: make(map[string]interface{}),
	}

	if r.MaxTokens != nil {
		params.MaxTokens = r.MaxTokens
	}
	if r.Temperature != nil {
		params.Temperature = r.Temperature
	}
	if r.TopP != nil {
		params.TopP = r.TopP
	}
	if r.PresencePenalty != nil {
		params.PresencePenalty = r.PresencePenalty
	}
	if r.FrequencyPenalty != nil {
		params.FrequencyPenalty = r.FrequencyPenalty
	}
	if r.N != nil {
		params.ExtraParams["n"] = r.N
	}
	if r.LogProbs != nil {
		params.ExtraParams["logprobs"] = r.LogProbs
	}
	if r.TopLogProbs != nil {
		params.ExtraParams["top_logprobs"] = r.TopLogProbs
	}
	if r.Stop != nil {
		params.ExtraParams["stop"] = r.Stop
	}
	if r.LogitBias != nil {
		params.ExtraParams["logit_bias"] = r.LogitBias
	}

	bifrostReq.Params = params
}
```
🧹 Nitpick (assertive)
Parameter propagation still omits response_format, seed, stream
Minor, but these user-facing knobs are silently discarded. Consider forwarding them via ExtraParams for completeness.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go around lines 137 to 178,
the code converts parameters but omits forwarding the user-facing parameters
response_format, seed, and stream. To fix this, add checks for these parameters
similar to the existing ones and include them in the ExtraParams map of params
if they are not nil, ensuring these parameters are propagated correctly.
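A hedged sketch of the pass-through; the struct is simplified and the key names follow the review's suggestion rather than a confirmed Bifrost schema:

```go
package main

import "fmt"

type modelParams struct {
	ExtraParams map[string]interface{}
}

// forwardOptional copies optional knobs into ExtraParams only when set, so
// absent fields stay absent downstream.
func forwardOptional(p *modelParams, stream *bool, seed *int, responseFormat *string) {
	if stream != nil {
		p.ExtraParams["stream"] = *stream
	}
	if seed != nil {
		p.ExtraParams["seed"] = *seed
	}
	if responseFormat != nil {
		p.ExtraParams["response_format"] = *responseFormat
	}
}

func main() {
	p := &modelParams{ExtraParams: map[string]interface{}{}}
	stream, seed := true, 42
	forwardOptional(p, &stream, &seed, nil)
	fmt.Println(p.ExtraParams["stream"], p.ExtraParams["seed"]) // true 42
	_, hasRF := p.ExtraParams["response_format"]
	fmt.Println(hasRF) // false
}
```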
```go
tc := OpenAIToolCall{
	Type: *toolCall.Type,
	Function: OpenAIFunctionCall{
		Name:      *toolCall.Function.Name,
		Arguments: toolCall.Function.Arguments,
	},
}
if toolCall.ID != nil {
	tc.ID = *toolCall.ID
}
toolCalls = append(toolCalls, tc)
```
Nil-pointer panic when Type or Function.Name is unset
*toolCall.Type and *toolCall.Function.Name are dereferenced without a nil check.
If either field is missing (both are pointers in the Bifrost schema), the server will panic.
```diff
-tc := OpenAIToolCall{
-	Type: *toolCall.Type,
-	Function: OpenAIFunctionCall{
-		Name: *toolCall.Function.Name,
-		Arguments: toolCall.Function.Arguments,
-	},
-}
+var t string
+if toolCall.Type != nil {
+	t = *toolCall.Type
+}
+fnName := ""
+if toolCall.Function.Name != nil {
+	fnName = *toolCall.Function.Name
+}
+tc := OpenAIToolCall{
+	Type: t,
+	Function: OpenAIFunctionCall{
+		Name: fnName,
+		Arguments: toolCall.Function.Arguments,
+	},
+}
```

Add identical nil-guards anywhere these pointers are dereferenced.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// Guard against nil pointers for Type and Function.Name
var t string
if toolCall.Type != nil {
	t = *toolCall.Type
}
fnName := ""
if toolCall.Function.Name != nil {
	fnName = *toolCall.Function.Name
}
tc := OpenAIToolCall{
	Type: t,
	Function: OpenAIFunctionCall{
		Name:      fnName,
		Arguments: toolCall.Function.Arguments,
	},
}
if toolCall.ID != nil {
	tc.ID = *toolCall.ID
}
toolCalls = append(toolCalls, tc)
```
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go around lines 449 to 459,
the code dereferences pointers toolCall.Type and toolCall.Function.Name without
checking if they are nil, which can cause a nil-pointer panic. Add nil checks
before dereferencing these pointers to ensure they are not nil, and handle the
nil case appropriately to prevent the server from panicking.
| if err := json.Unmarshal(ctx.PostBody(), &req); err != nil { | ||
| ctx.SetStatusCode(fasthttp.StatusBadRequest) | ||
| ctx.SetContentType("application/json") | ||
| errResponse := map[string]string{"error": err.Error()} | ||
| jsonBytes, _ := json.Marshal(errResponse) | ||
| ctx.SetBody(jsonBytes) | ||
| return | ||
| } |
🛠️ Refactor suggestion
Avoid repeated JSON marshalling & unchecked errors
json.Marshal is invoked for every 4xx/5xx reply and its error is silently discarded.
Use json.NewEncoder(ctx).Encode(errResponse) (no extra allocation) and handle/propagate the encode error if it ever happens.
```diff
-errResponse := map[string]string{"error": err.Error()}
-jsonBytes, _ := json.Marshal(errResponse)
-ctx.SetBody(jsonBytes)
+if encodeErr := json.NewEncoder(ctx).Encode(map[string]string{"error": err.Error()}); encodeErr != nil {
+	ctx.Error("failed to encode error response", fasthttp.StatusInternalServerError)
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
	ctx.SetStatusCode(fasthttp.StatusBadRequest)
	ctx.SetContentType("application/json")
	if encodeErr := json.NewEncoder(ctx).Encode(map[string]string{"error": err.Error()}); encodeErr != nil {
		ctx.Error("failed to encode error response", fasthttp.StatusInternalServerError)
	}
	return
}
```
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langgraph/router.go around lines 36 to
43, the code repeatedly calls json.Marshal to encode error responses and ignores
any errors from marshalling. Replace json.Marshal with
json.NewEncoder(ctx).Encode(errResponse) to write directly to the response body
without extra allocation, and properly check and handle any encoding errors that
occur.
```go
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.RandomSeed != nil || r.SafePrompt != nil {
	params := &schemas.ModelParameters{
		ExtraParams: make(map[string]interface{}),
	}

	if r.MaxTokens != nil {
		params.MaxTokens = r.MaxTokens
	}
	if r.Temperature != nil {
		params.Temperature = r.Temperature
	}
	if r.TopP != nil {
		params.TopP = r.TopP
	}
	if r.RandomSeed != nil {
		params.ExtraParams["random_seed"] = r.RandomSeed
	}
	if r.SafePrompt != nil {
		params.ExtraParams["safe_prompt"] = r.SafePrompt
	}

	bifrostReq.Params = params
}
```
🛠️ Refactor suggestion
Important parameters silently dropped
stream, response_format, and arbitrary tool_choice variants are accepted in the request but never forwarded to BifrostRequest.Params.
At minimum pass them through ExtraParams so callers don’t lose functionality:
```diff
 if r.Stream != nil {
-	// ignored
+	params.ExtraParams["stream"] = r.Stream
 }
 if r.ResponseFormat != nil {
-	// ignored
+	params.ExtraParams["response_format"] = r.ResponseFormat
 }
 // TODO: map tool_choice into ToolChoice when it's a structured object
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/mistral/types.go around lines 112 to
135, the parameters stream, response_format, and tool_choice variants are
accepted but not forwarded to BifrostRequest.Params, causing them to be silently
dropped. To fix this, add these parameters to the ExtraParams map within the
ModelParameters struct so they are passed through correctly and callers retain
full functionality.
```go
func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
	for _, node := range g.Nodes {
		if node.ID == g.StartNode {
			return &node
		}
	}
	return nil
}

// GetNode returns a node by ID
func (g *LangGraphDefinition) GetNode(id string) *LangGraphNode {
	for _, node := range g.Nodes {
		if node.ID == id {
			return &node
		}
	}
	return nil
}
```
Returning pointer to loop-scoped variable → all callers receive the same (corrupted) node
range re-uses the iteration variable; taking its address (&node) escapes the same memory location for every iteration. After the loop finishes, the pointer refers to the last element – not the intended one.
-func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
- for _, node := range g.Nodes {
- if node.ID == g.StartNode {
- return &node // <- points to loop var
- }
- }
- return nil
-}
+func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
+ for i := range g.Nodes {
+ if g.Nodes[i].ID == g.StartNode {
+ return &g.Nodes[i] // address of slice element
+ }
+ }
+ return nil
+}Apply the same pattern to GetNode and any other helpers that return a pointer to a ranged variable.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
	for i := range g.Nodes {
		if g.Nodes[i].ID == g.StartNode {
			return &g.Nodes[i] // address of the actual slice element
		}
	}
	return nil
}

// GetNode returns a node by ID
func (g *LangGraphDefinition) GetNode(id string) *LangGraphNode {
	for i := range g.Nodes {
		if g.Nodes[i].ID == id {
			return &g.Nodes[i]
		}
	}
	return nil
}
```
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langgraph/types.go around lines 338 to
355, the methods GetStartNode and GetNode return pointers to the loop variable
'node', which is reused in each iteration causing all returned pointers to
reference the same memory location. To fix this, instead of returning the
address of the loop variable, return the address of the slice element directly
by using the index to get a pointer to the actual element in the slice. Apply
this pattern to both GetStartNode and GetNode methods.
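Worth noting: because both helpers return from inside the loop, the original code never actually hands back the last element; the practical difference is that `&node` points at a copy, so mutations through the returned pointer never reach `g.Nodes` (and since Go 1.22 each iteration gets a fresh loop variable anyway). Indexing the slice removes the ambiguity either way. A standalone sketch with simplified types:

```go
package main

import "fmt"

type Node struct {
	ID    string
	Label string
}

// findNode indexes the slice so the returned pointer refers to the stored
// element itself; writes through it are visible to other readers of nodes.
func findNode(nodes []Node, id string) *Node {
	for i := range nodes {
		if nodes[i].ID == id {
			return &nodes[i]
		}
	}
	return nil
}

func main() {
	nodes := []Node{{ID: "start"}, {ID: "end"}}
	findNode(nodes, "start").Label = "entry" // mutation lands in the backing array
	fmt.Println(nodes[0].Label)              // entry
}
```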

Add Comprehensive Integration Support for AI Frameworks
This PR adds integration endpoints that make Bifrost compatible with popular AI framework SDKs including LangChain, LangGraph, LiteLLM, and direct provider SDKs (OpenAI, Anthropic, Mistral). Users can now leverage Bifrost's capabilities (fallbacks, load balancing, unified error handling) while using their preferred SDK by simply changing the base URL.
Features Added:
Support for `langchain-openai` and other LangChain providers

Implementation Details:
Documentation:
`integrations.md` guide showing how to use each framework with Bifrost

This integration approach allows users to adopt Bifrost with minimal code changes while maintaining compatibility with their existing AI framework code.