
[Integrations] SDK support for anthropic, langchain, langgraph, litellm, mistral, openai #61

Closed
danpiths wants to merge 1 commit into main from
feature/06-05-_integrations_sdk_supports_for_anthropic_langchain_langgraph_litellm_mistral_openai

Conversation

@danpiths
Collaborator

@danpiths danpiths commented Jun 5, 2025

Add Comprehensive Integration Support for AI Frameworks

This PR adds integration endpoints that make Bifrost compatible with popular AI framework SDKs including LangChain, LangGraph, LiteLLM, and direct provider SDKs (OpenAI, Anthropic, Mistral). Users can now leverage Bifrost's capabilities (fallbacks, load balancing, unified error handling) while using their preferred SDK by simply changing the base URL.

Features Added:

  • LangChain Integration: Compatible with langchain-openai and other LangChain providers
  • LangGraph Integration: Full support for LangGraph workflows and state management
  • LiteLLM Integration: Direct OpenAI-compatible proxy for 100+ models
  • Direct Provider SDKs: Support for OpenAI, Anthropic, Google, and Mistral SDKs

Implementation Details:

  • Added router implementations for each integration endpoint
  • Created type conversion logic between framework-specific formats and Bifrost's internal schemas
  • Implemented request/response handling for each integration
  • Added comprehensive documentation with examples for each integration

Documentation:

  • Added detailed integrations.md guide showing how to use each framework with Bifrost
  • Included code examples for all supported frameworks
  • Documented configuration options and benefits

This integration approach allows users to adopt Bifrost with minimal code changes while maintaining compatibility with their existing AI framework code.
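The "change the base URL" adoption path described above can be sketched with Go's standard net/http; the localhost address and the /openai route prefix below are assumptions for illustration (the PR's walkthrough shows endpoints of the form POST /{provider}/v1/chat/completions):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// newChatRequest builds the same OpenAI-style chat-completions request
// either direct to the provider or through Bifrost; only baseURL changes.
func newChatRequest(baseURL string) (*http.Request, error) {
	body := []byte(`{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}`)
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// Before: direct to the provider. After: routed through Bifrost.
	direct, _ := newChatRequest("https://api.openai.com")
	proxied, _ := newChatRequest("http://localhost:8080/openai")
	fmt.Println(direct.URL.String())
	fmt.Println(proxied.URL.String())
}
```

The payload and headers are untouched; only the host and path prefix move, which is why existing SDK code keeps working.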

@danpiths danpiths requested review from Tushkiz and akshaydeo June 5, 2025 07:34
@coderabbitai
Contributor

coderabbitai Bot commented Jun 5, 2025

Summary by CodeRabbit

  • New Features

    • Added support for integration with multiple AI frameworks and providers, including LangChain, LangGraph, LiteLLM, OpenAI, Anthropic, Google, and Mistral via unified proxy endpoints.
    • Introduced new HTTP endpoints for each supported framework and provider, enabling chat, completion, batch, streaming, and workflow graph operations.
    • Enabled seamless routing, fallback handling, load balancing, unified error handling, and monitoring across supported SDKs by updating API base URLs.
  • Documentation

    • Added a comprehensive integration guide detailing setup, configuration, and migration steps for using Bifrost with various AI SDKs and providers.
  • Style

    • Improved consistency and reliability of JSON response handling for all HTTP endpoints.


Walkthrough

This change introduces comprehensive integration support for Bifrost as a unified proxy for multiple AI frameworks and provider SDKs. It adds new HTTP routers and bidirectional type conversion layers for OpenAI, Anthropic, Mistral, LiteLLM, LangChain, and LangGraph. Extensive documentation and example integration guides are also included.

Changes

File(s) Change Summary
docs/integrations.md Added detailed integration documentation for Bifrost with guides, examples, migration instructions, and benefits.
.../integrations/openai/router.go
.../types.go
Added OpenAI-compatible HTTP router and type conversions for chat completions.
.../integrations/anthropic/router.go
.../types.go
Added Anthropic-compatible HTTP router and type conversions for messages API.
.../integrations/mistral/router.go
.../types.go
Added Mistral-compatible HTTP router and type conversions for chat completions.
.../integrations/litellm/router.go
.../types.go
Added LiteLLM-compatible HTTP router and type conversions for chat and text completions.
.../integrations/langchain/router.go
.../types.go
Added LangChain-compatible HTTP router and type conversions for chat, invoke, batch, and stream endpoints.
.../integrations/langgraph/router.go
.../types.go
Added LangGraph-compatible HTTP router and type conversions for graph execution and management.
transports/bifrost-http/main.go Registered all new integration routers in the main HTTP server initialization; updated JSON response encoding.
.../integrations/genai/router.go Changed JSON response and error encoding to marshal JSON and set the response body explicitly, instead of streaming through an encoder.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Router
    participant Bifrost
    participant Provider

    Client->>Router: POST /{provider}/v1/chat/completions (or similar)
    Router->>Router: Parse & validate request
    Router->>Router: Convert request to BifrostRequest
    Router->>Bifrost: ChatCompletionRequest(BifrostRequest)
    Bifrost->>Provider: Forward request (with fallback, load balancing, etc.)
    Provider-->>Bifrost: Provider response
    Bifrost-->>Router: BifrostResponse
    Router->>Router: Convert response to provider format
    Router-->>Client: JSON response


Suggested reviewers

  • akshaydeo

Poem

In tunnels deep, the routers grow,
OpenAI, Anthropic, and friends in tow.
With Bifrost bridges, requests now glide,
Through unified flows on every side.
SDKs rejoice, the docs are bright—
Integration magic, done just right!
🐇✨



Collaborator Author

danpiths commented Jun 5, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 31

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ac54c86 and c5efb2e.

📒 Files selected for processing (14)
  • docs/integrations.md (1 hunks)
  • transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/router.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/types.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/types.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/router.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/types.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/types.go (1 hunks)
  • transports/bifrost-http/main.go (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (5)
transports/bifrost-http/main.go (8)
transports/bifrost-http/integrations/utils.go (1)
  • ExtensionRouter (5-7)
transports/bifrost-http/integrations/genai/router.go (1)
  • NewGenAIRouter (18-20)
transports/bifrost-http/integrations/openai/router.go (1)
  • NewOpenAIRouter (18-20)
transports/bifrost-http/integrations/anthropic/router.go (1)
  • NewAnthropicRouter (18-20)
transports/bifrost-http/integrations/mistral/router.go (1)
  • NewMistralRouter (18-20)
transports/bifrost-http/integrations/litellm/router.go (1)
  • NewLiteLLMRouter (18-20)
transports/bifrost-http/integrations/langchain/router.go (1)
  • NewLangChainRouter (19-21)
transports/bifrost-http/integrations/langgraph/router.go (1)
  • NewLangGraphRouter (19-21)
transports/bifrost-http/integrations/openai/router.go (3)
core/bifrost.go (2)
  • Bifrost (39-51)
  • ChatCompletionRequest (24-24)
transports/bifrost-http/integrations/openai/types.go (2)
  • OpenAIChatRequest (51-72)
  • DeriveOpenAIFromBifrostResponse (261-324)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/openai/types.go (2)
core/schemas/bifrost.go (18)
  • ToolChoiceTypeFunction (126-126)
  • FunctionCall (260-263)
  • Function (101-105)
  • ToolChoice (137-140)
  • LogProbs (253-257)
  • BifrostRequest (54-64)
  • OpenAI (34-34)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
transports/bifrost-http/integrations/anthropic/types.go (2)
core/schemas/bifrost.go (19)
  • ToolChoice (137-140)
  • BifrostRequest (54-64)
  • Anthropic (36-36)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRoleSystem (25-25)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • Function (101-105)
  • FunctionCall (260-263)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • ToolChoiceType (116-116)
  • ToolChoiceFunction (132-134)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
transports/bifrost-http/integrations/mistral/types.go (2)
core/schemas/bifrost.go (16)
  • Function (101-105)
  • ToolChoice (137-140)
  • BifrostRequest (54-64)
  • OpenAI (34-34)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • FunctionCall (260-263)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
🪛 LanguageTool
docs/integrations.md

[grammar] ~390-~390: Possible subject-verb agreement error.
Context: ...oad Balancing:** Bifrost automatically load balances requests across multiple keys ...

(SINGULAR_NOUN_ADV_AGREEMENT)

🔇 Additional comments (12)
transports/bifrost-http/main.go (1)

185-193: Nice modularisation – new routers cleanly plugged in

The extensions slice pattern keeps main tidy and makes future integrations trivial to add.

transports/bifrost-http/integrations/openai/router.go (3)

12-25: LGTM on router structure and endpoint registration.

The router follows a clean, consistent pattern with proper dependency injection and clear endpoint registration.


36-46: LGTM on request validation and conversion logic.

Good validation of required model parameter and proper use of conversion functions to transform between OpenAI and Bifrost formats.


32-32: ⚠️ Potential issue

Fix JSON encoding with fasthttp context.

Using json.NewEncoder(ctx).Encode() doesn't work correctly with fasthttp.RequestCtx as it doesn't implement the io.Writer interface as expected by json.NewEncoder.

-		json.NewEncoder(ctx).Encode(map[string]string{"error": err.Error()})
+		errorResp, _ := json.Marshal(map[string]string{"error": err.Error()})
+		ctx.SetBody(errorResp)
+		ctx.SetContentType("application/json")

Likely an incorrect or invalid review comment.

docs/integrations.md (2)

1-479: Excellent comprehensive integration documentation.

This documentation provides thorough coverage of all integration patterns with clear examples, configuration guidance, and migration paths. The structure is logical and user-friendly, making it easy for developers to adopt Bifrost with their existing SDK usage.



390-390: 🧹 Nitpick (assertive)

Fix grammar: subject-verb agreement.

The phrase "load balances" should be "load-balances" (hyphenated) or restructured for proper grammar.

-Bifrost automatically load balances requests across multiple keys for the same provider based on the `weight` parameter and model availability.
+Bifrost automatically load-balances requests across multiple keys for the same provider based on the `weight` parameter and model availability.

Likely an incorrect or invalid review comment.


transports/bifrost-http/integrations/anthropic/router.go (2)

42-46: Good addition of Anthropic-specific validation.

The additional validation for MaxTokens is appropriate since it's a required parameter for Anthropic's API, unlike OpenAI where it's optional.


12-25: LGTM on consistent router structure.

The router maintains consistency with other integration routers while properly implementing Anthropic-specific endpoint patterns.

transports/bifrost-http/integrations/litellm/router.go (3)

22-28: Excellent endpoint versioning support.

Good implementation of both /litellm and /litellm/v1 paths for each endpoint type, providing flexibility for different LiteLLM client configurations.


49-49: Correct Bifrost method selection.

Properly uses ChatCompletionRequest for chat completions and TextCompletionRequest for text completions, aligning with the different endpoint purposes.

Also applies to: 81-81


12-20: LGTM on router structure and constructor.

Clean, consistent implementation following the established pattern across integration routers.

transports/bifrost-http/integrations/mistral/types.go (1)

67-71:


❌ Incorrect review comment

Potential provider mapping confusion

The code sets Provider: schemas.OpenAI with a comment explaining that Mistral is OpenAI-compatible. However, if Bifrost has a specific Mistral provider type, it should be used instead to maintain clear provider distinction.

Let me check if there's a Mistral provider constant defined:


🏁 Script executed:

#!/bin/bash
# Search for Mistral provider definition in the schemas
ast-grep --pattern 'Mistral $_ ModelProvider = $_'

Length of output: 52


🏁 Script executed:

#!/bin/bash
# Also search more broadly for Mistral-related constants
rg -A 2 -B 2 "Mistral.*ModelProvider|ModelProvider.*Mistral" 

Length of output: 62


🏁 Script executed:

#!/bin/bash
# Locate ModelProvider type definition
rg -n 'type ModelProvider' .

# List all provider constants of type ModelProvider
rg -n 'const .*ModelProvider =' .

# Check for any occurrences of "Mistral" anywhere in the repo
rg -n 'Mistral' .

Length of output: 8198


No dedicated Mistral provider in schemas—using OpenAI is correct
We inspected core/schemas/bifrost.go (where ModelProvider is defined) and found no Mistral constant—only existing providers like OpenAI, Anthropic, etc. Since Bifrost doesn’t define a separate Mistral provider and Mistral is OpenAI-API–compatible, Provider: schemas.OpenAI is intentional and correct. No change required.

Likely an incorrect or invalid review comment.

Comment thread transports/bifrost-http/integrations/openai/types.go
Comment on lines +135 to +176
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.PresencePenalty != nil ||
r.FrequencyPenalty != nil || r.N != nil || r.LogProbs != nil || r.TopLogProbs != nil {
params := &schemas.ModelParameters{
ExtraParams: make(map[string]interface{}),
}

if r.MaxTokens != nil {
params.MaxTokens = r.MaxTokens
}
if r.Temperature != nil {
params.Temperature = r.Temperature
}
if r.TopP != nil {
params.TopP = r.TopP
}
if r.PresencePenalty != nil {
params.PresencePenalty = r.PresencePenalty
}
if r.FrequencyPenalty != nil {
params.FrequencyPenalty = r.FrequencyPenalty
}
if r.N != nil {
params.ExtraParams["n"] = r.N
}
if r.LogProbs != nil {
params.ExtraParams["logprobs"] = r.LogProbs
}
if r.TopLogProbs != nil {
params.ExtraParams["top_logprobs"] = r.TopLogProbs
}

bifrostReq.Params = params
Contributor


🛠️ Refactor suggestion

stop and logit_bias parameters not propagated

stop and logit_bias are part of the OpenAI spec yet are omitted from the Bifrost parameters map.

@@
-        if r.TopLogProbs != nil {
-            params.ExtraParams["top_logprobs"] = r.TopLogProbs
-        }
+        if r.TopLogProbs != nil {
+            params.ExtraParams["top_logprobs"] = r.TopLogProbs
+        }
+
+        if r.LogitBias != nil {
+            params.ExtraParams["logit_bias"] = r.LogitBias
+        }
+
+        // Normalise `stop` (string | []string) into []string
+        if r.Stop != nil {
+            switch v := r.Stop.(type) {
+            case string:
+                seq := []string{v}
+                params.StopSequences = &seq
+            case []interface{}:
+                seq := make([]string, 0, len(v))
+                for _, s := range v {
+                    if str, ok := s.(string); ok {
+                        seq = append(seq, str)
+                    }
+                }
+                params.StopSequences = &seq
+            }
+        }

Without this, callers relying on these knobs will see no effect.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go between lines 135 and
167, the code converts OpenAI request parameters but omits the 'stop' and
'logit_bias' fields. To fix this, add checks for r.Stop and r.LogitBias similar
to other parameters, and if they are not nil, include them in the
params.ExtraParams map with appropriate keys. This ensures these parameters are
propagated correctly in the Bifrost request.
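The stop normalization in the suggested diff can be exercised standalone. This sketch assumes the field is decoded from JSON as interface{} (so a JSON array arrives as []interface{}), matching the diff's type switch:

```go
package main

import "fmt"

// normalizeStop converts OpenAI's `stop` field (string | []string, decoded
// from JSON as interface{}) into a flat []string, as the suggested diff does.
func normalizeStop(stop interface{}) []string {
	switch v := stop.(type) {
	case string:
		return []string{v}
	case []interface{}:
		seq := make([]string, 0, len(v))
		for _, s := range v {
			if str, ok := s.(string); ok {
				seq = append(seq, str)
			}
		}
		return seq
	}
	return nil
}

func main() {
	fmt.Println(normalizeStop("END"))
	fmt.Println(normalizeStop([]interface{}{"###", "---"}))
}
```

Non-string array elements are skipped rather than rejected, which mirrors the permissive behavior in the diff; a stricter converter could return an error instead.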

Comment thread transports/bifrost-http/integrations/openai/types.go Outdated
Comment thread transports/bifrost-http/integrations/mistral/router.go
Comment on lines +170 to +237
// Convert tools
if r.Tools != nil {
tools := []schemas.Tool{}
for _, tool := range *r.Tools {
// Convert input_schema to FunctionParameters
params := schemas.FunctionParameters{
Type: "object",
}
if tool.InputSchema != nil {
if schemaMap, ok := tool.InputSchema.(map[string]interface{}); ok {
if typeVal, ok := schemaMap["type"].(string); ok {
params.Type = typeVal
}
if desc, ok := schemaMap["description"].(string); ok {
params.Description = &desc
}
if required, ok := schemaMap["required"].([]interface{}); ok {
reqStrings := make([]string, len(required))
for i, req := range required {
if reqStr, ok := req.(string); ok {
reqStrings[i] = reqStr
}
}
params.Required = reqStrings
}
if properties, ok := schemaMap["properties"].(map[string]interface{}); ok {
params.Properties = properties
}
if enum, ok := schemaMap["enum"].([]interface{}); ok {
enumStrings := make([]string, len(enum))
for i, e := range enum {
if eStr, ok := e.(string); ok {
enumStrings[i] = eStr
}
}
params.Enum = &enumStrings
}
}
}

t := schemas.Tool{
Type: "function",
Function: schemas.Function{
Name: tool.Name,
Description: tool.Description,
Parameters: params,
},
}
tools = append(tools, t)
}
if bifrostReq.Params == nil {
bifrostReq.Params = &schemas.ModelParameters{}
}
bifrostReq.Params.Tools = &tools

// Convert tool choice
if r.ToolChoice != nil {
toolChoice := &schemas.ToolChoice{
Type: schemas.ToolChoiceType(r.ToolChoice.Type),
}
if r.ToolChoice.Type == "tool" && r.ToolChoice.Name != "" {
toolChoice.Function = schemas.ToolChoiceFunction{
Name: r.ToolChoice.Name,
}
}
bifrostReq.Params.ToolChoice = toolChoice
}
}
Contributor


💡 Verification agent

🧩 Analysis chain

stream flag, image blocks, and tool results are silently dropped

ConvertToBifrostRequest ignores:

  1. r.Stream – callers requesting streaming receive a non-streaming request.
  2. image content blocks – lost when forwarding to Bifrost.
  3. tool_result blocks – only the ID is forwarded; the actual result/content is discarded.

If these fields are relevant to downstream logic, propagate them or explicitly document that they are unsupported.


🏁 Script executed:

#!/bin/bash
# Locate the ConvertToBifrostRequest function and check handling of stream, image, and tool_result
rg -n "ConvertToBifrostRequest" -A200 -B20 transports/bifrost-http/integrations/anthropic/types.go
echo "----- Stream references -----"
rg -n "r\.Stream" -n transports/bifrost-http/integrations/anthropic/types.go || echo "No occurrences of r.Stream"
echo "----- Content block handling -----"
rg -n "case.*image" -n transports/bifrost-http/integrations/anthropic/types.go || echo "No image case in switch"
echo "----- tool_result handling -----"
rg -n "tool_result" -n transports/bifrost-http/integrations/anthropic/types.go || echo "No tool_result processing"

Length of output: 8221


Ensure streaming, image, and tool‐result data are handled (or clearly documented)

ConvertToBifrostRequest currently:

  • Never propagates r.Stream → no Params.Stream on the BifrostRequest, so streaming callers fall back to non‐streaming.
  • Lacks a case "image" branch → all image content blocks are silently dropped.
  • In the "tool_result" branch only appends the result text to textContent and sets ToolMessage.ToolCallID, but does not serialize the tool result into a structured field.

If your downstream logic relies on streaming, images, or structured tool results, update ConvertToBifrostRequest to:

  • Map r.Stream into bifrostReq.Params.Stream.
  • Add handling for content.Type == "image" (e.g. populate an ImageURL or ImageData field on BifrostMessage).
  • Extend the "tool_result" case to include the result payload on ToolMessage (not just the ID).

Otherwise, document that streaming, images, and tool results are unsupported by this conversion.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/anthropic/types.go around lines 170 to
237, the ConvertToBifrostRequest function does not propagate the r.Stream flag
to bifrostReq.Params.Stream, causing streaming requests to be downgraded to
non-streaming. It also lacks handling for content blocks of type "image",
resulting in dropped image data, and the "tool_result" case only forwards the
tool call ID without including the actual result content. To fix this, update
the function to map r.Stream into bifrostReq.Params.Stream, add a case to handle
"image" content blocks by populating appropriate image fields on BifrostMessage,
and extend the "tool_result" handling to serialize and include the full tool
result payload in the ToolMessage structure. If these features are not
supported, add clear documentation stating that streaming, images, and tool
results are not handled by this conversion.
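A structural sketch of the direction the prompt describes, using illustrative stand-in types (the actual BifrostMessage image fields are not shown in this PR excerpt, so the URL field below is an assumption):

```go
package main

import "fmt"

// contentBlock loosely mirrors an Anthropic message content block.
type contentBlock struct {
	Type string // "text", "image", or "tool_result"
	Text string
	URL  string // stand-in for an image source reference
}

// convertBlocks keeps image blocks and tool_result payloads instead of
// silently dropping them, as the review recommends.
func convertBlocks(blocks []contentBlock) (text string, images []string) {
	for _, b := range blocks {
		switch b.Type {
		case "text":
			text += b.Text
		case "image":
			images = append(images, b.URL)
		case "tool_result":
			text += b.Text // forward the payload, not just the tool-call ID
		}
	}
	return text, images
}

func main() {
	text, images := convertBlocks([]contentBlock{
		{Type: "text", Text: "what is in this picture?"},
		{Type: "image", URL: "https://example.com/cat.png"},
	})
	fmt.Println(text, len(images))
}
```

The stream flag is orthogonal to block conversion and would simply be copied into the request parameters before this loop runs.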

Comment on lines +126 to +168
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.PresencePenalty != nil ||
r.FrequencyPenalty != nil || r.N != nil {
params := &schemas.ModelParameters{
ExtraParams: make(map[string]interface{}),
}

if r.MaxTokens != nil {
params.MaxTokens = r.MaxTokens
}
if r.Temperature != nil {
params.Temperature = r.Temperature
}
if r.TopP != nil {
params.TopP = r.TopP
}
if r.PresencePenalty != nil {
params.PresencePenalty = r.PresencePenalty
}
if r.FrequencyPenalty != nil {
params.FrequencyPenalty = r.FrequencyPenalty
}
if r.N != nil {
params.ExtraParams["n"] = r.N
}

// Add LiteLLM-specific params
if r.APIBase != nil {
params.ExtraParams["api_base"] = r.APIBase
}
if r.APIVersion != nil {
params.ExtraParams["api_version"] = r.APIVersion
}
if r.Drop_params != nil {
params.ExtraParams["drop_params"] = r.Drop_params
}
if r.Metadata != nil {
params.ExtraParams["metadata"] = r.Metadata
}

bifrostReq.Params = params
}

Contributor


🧹 Nitpick (assertive)

Several LiteLLM fields silently dropped

The converter ignores Stop, ToolChoice, ResponseFormat, Stream, and Seed.
Unless there is a conscious product decision, omitting them changes request semantics (e.g., stop greatly affects generation output).

Recommend mapping them into schemas.ModelParameters.ExtraParams at minimum so that downstream providers still receive the intent.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 126 to
168, the conversion logic omits several LiteLLM fields such as Stop, ToolChoice,
ResponseFormat, Stream, and Seed, which can alter request semantics. To fix
this, add these missing fields into the params.ExtraParams map so they are
preserved and passed downstream. This ensures the full intent of the original
request is maintained without losing important parameters.

Comment on lines +100 to +107
for _, toolCall := range *msg.ToolCalls {
tc := schemas.ToolCall{
Type: stringPtr(toolCall.Type),
ID: &toolCall.ID,
Function: schemas.FunctionCall{
Name: &toolCall.Function.Name,
Arguments: toolCall.Function.Arguments,
},
Contributor


⚠️ Potential issue

Bug: captured pointers all reference the same loop variable instance

toolCall is re-used on every iteration of the for … range loop.
Taking the address of its fields (&toolCall.ID, &toolCall.Function.Name) stores identical memory addresses for every schemas.ToolCall, so after the loop all IDs / Names point to the values of the last element.

-for _, toolCall := range *msg.ToolCalls {
-    tc := schemas.ToolCall{
-        Type: stringPtr(toolCall.Type),
-        ID:   &toolCall.ID,
-        Function: schemas.FunctionCall{
-            Name:      &toolCall.Function.Name,
-            Arguments: toolCall.Function.Arguments,
-        },
-    }
+for _, toolCall := range *msg.ToolCalls {
+    // copy into new vars to obtain distinct addresses
+    idCopy   := toolCall.ID
+    nameCopy := toolCall.Function.Name
+    tc := schemas.ToolCall{
+        Type: stringPtr(toolCall.Type),
+        ID:   &idCopy,
+        Function: schemas.FunctionCall{
+            Name:      &nameCopy,
+            Arguments: toolCall.Function.Arguments,
+        },
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

for _, toolCall := range *msg.ToolCalls {
    // copy into new vars to obtain distinct addresses
    idCopy := toolCall.ID
    nameCopy := toolCall.Function.Name
    tc := schemas.ToolCall{
        Type: stringPtr(toolCall.Type),
        ID:   &idCopy,
        Function: schemas.FunctionCall{
            Name:      &nameCopy,
            Arguments: toolCall.Function.Arguments,
        },
    }
}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 100 to
107, the loop variable toolCall is reused in each iteration, causing pointers to
its fields to reference the same memory address for all iterations. To fix this,
create a new local variable inside the loop to hold the current toolCall value,
then take pointers to that local variable's fields. This ensures each
schemas.ToolCall has unique pointers to the correct data.
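For context: this aliasing hazard applies to Go versions before 1.22, where the range variable was a single reused location; Go 1.22 gave each iteration its own variable. The copy-before-address pattern in the suggestion is safe on both. A minimal illustration with loose stand-in types (these only mirror the shape of schemas.ToolCall):

```go
package main

import "fmt"

type functionCall struct{ Name *string }
type toolCall struct {
	ID       *string
	Function functionCall
}
type rawToolCall struct{ ID, Name string }

// convertToolCalls copies each field into a fresh variable before taking
// its address, so every element owns distinct pointers regardless of the
// Go version's loop-variable semantics.
func convertToolCalls(input []rawToolCall) []toolCall {
	out := make([]toolCall, 0, len(input))
	for _, tc := range input {
		idCopy, nameCopy := tc.ID, tc.Name
		out = append(out, toolCall{ID: &idCopy, Function: functionCall{Name: &nameCopy}})
	}
	return out
}

func main() {
	calls := convertToolCalls([]rawToolCall{{"call_1", "get_weather"}, {"call_2", "get_time"}})
	fmt.Println(*calls[0].ID, *calls[1].ID)
}
```

Each iteration's idCopy and nameCopy are new stack slots, so taking their addresses never aliases a previous element.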

Comment on lines +325 to +342
// Helper function to determine provider from model name
func determineProviderFromModel(model string) schemas.ModelProvider {
    // LiteLLM uses prefixes or model names to determine provider
    // This is a simplified version - in production you'd have more sophisticated routing
    if contains(model, "gpt") || contains(model, "o1") {
        return schemas.OpenAI
    } else if contains(model, "claude") {
        return schemas.Anthropic
    } else if contains(model, "gemini") || contains(model, "vertex") {
        return schemas.Vertex
    } else if contains(model, "bedrock") {
        return schemas.Bedrock
    } else if contains(model, "cohere") {
        return schemas.Cohere
    }
    // Default to OpenAI for unknown models
    return schemas.OpenAI
}
🛠️ Refactor suggestion

Provider detection is case-sensitive → mis-routes mixed-case model names

determineProviderFromModel checks substrings directly; models such as "GPT-4o" or "Claude-3" will slip through.

-func determineProviderFromModel(model string) schemas.ModelProvider {
-    if contains(model, "gpt") || contains(model, "o1") {
+func determineProviderFromModel(model string) schemas.ModelProvider {
+    m := strings.ToLower(model)
+    if strings.Contains(m, "gpt") || strings.Contains(m, "o1") {

Using strings.ToLower (or strings.EqualFold for exact matches) avoids brittle routing logic.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 325 to
342, the determineProviderFromModel function performs case-sensitive substring
checks, causing misrouting for mixed-case model names like "GPT-4o" or
"Claude-3". To fix this, convert the input model string to lowercase using
strings.ToLower before performing substring checks, ensuring case-insensitive
matching and more robust provider detection.
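A self-contained sketch of the case-insensitive matcher (plain strings stand in for the `schemas.ModelProvider` constants; the prefix list is taken from the snippet above):

```go
package main

import (
	"fmt"
	"strings"
)

// determineProvider lowercases the model name once, then runs the same
// substring checks as the original helper.
func determineProvider(model string) string {
	m := strings.ToLower(model)
	switch {
	case strings.Contains(m, "gpt"), strings.Contains(m, "o1"):
		return "openai"
	case strings.Contains(m, "claude"):
		return "anthropic"
	case strings.Contains(m, "gemini"), strings.Contains(m, "vertex"):
		return "vertex"
	case strings.Contains(m, "bedrock"):
		return "bedrock"
	case strings.Contains(m, "cohere"):
		return "cohere"
	default:
		return "openai" // default for unknown models, as in the original
	}
}

func main() {
	fmt.Println(determineProvider("GPT-4o"))   // openai
	fmt.Println(determineProvider("Claude-3")) // anthropic
}
```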

Comment on lines +429 to +437
if choice.Message.AssistantMessage != nil && choice.Message.AssistantMessage.ToolCalls != nil {
    toolCalls := []LiteLLMToolCall{}
    for _, toolCall := range *choice.Message.AssistantMessage.ToolCalls {
        tc := LiteLLMToolCall{
            Type: *toolCall.Type,
            Function: LiteLLMFunctionCall{
                Name:      *toolCall.Function.Name,
                Arguments: toolCall.Function.Arguments,
            },
⚠️ Potential issue

Potential nil-pointer dereference when building response

toolCall.Type and toolCall.Function.Name are *string.
Dereferencing without a nil-check (*toolCall.Type, *toolCall.Function.Name) will panic if the upstream response omits these fields.

Add defensive checks or default fallbacks:

 var tc LiteLLMToolCall
-if toolCall.Type != nil {
-    tc.Type = *toolCall.Type
-}
-if toolCall.Function.Name != nil {
-    tc.Function.Name = *toolCall.Function.Name
-}
 tc.Function.Arguments = toolCall.Function.Arguments

Failing fast inside a production proxy will take the whole request down – please harden this part.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/types.go around lines 429 to
437, the code dereferences pointers toolCall.Type and toolCall.Function.Name
without checking for nil, which can cause a panic if these fields are missing.
To fix this, add nil checks before dereferencing these pointers and provide
default fallback values (e.g., empty string) when they are nil to prevent
nil-pointer dereference and ensure the proxy remains stable.
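One compact way to harden every such site is a small deref helper; a sketch below, with an illustrative `toolCall` struct rather than the real schema types:

```go
package main

import "fmt"

// deref returns *s when s is non-nil, else the fallback, so response
// building never panics on an omitted optional field.
func deref(s *string, fallback string) string {
	if s == nil {
		return fallback
	}
	return *s
}

type toolCall struct {
	Type *string
	Name *string
}

func main() {
	name := "get_weather"
	c := toolCall{Type: nil, Name: &name} // upstream omitted Type; must not panic
	fmt.Printf("type=%q name=%q\n", deref(c.Type, "function"), deref(c.Name, ""))
}
```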

@danpiths force-pushed the feature/06-05-_integrations_sdk_supports_for_anthropic_langchain_langgraph_litellm_mistral_openai branch from c5efb2e to 728e2d4 on June 5, 2025 at 08:03
@coderabbitai Bot left a comment

Actionable comments posted: 15

♻️ Duplicate comments (20)
transports/bifrost-http/integrations/langchain/types.go (2)

65-83: ⚠️ Potential issue

Persisting loop-variable pointer bug corrupts all message contents

bifrostMsg.Content = &msg.Content still takes the address of the loop variable msg.
After the loop, every stored pointer references the same memory, containing whatever value msg.Content has in the final iteration.

This was already flagged in a previous review and must be fixed.

-        bifrostMsg.Content = &msg.Content
+        contentCopy := msg.Content
+        bifrostMsg.Content = &contentCopy

87-104: ⚠️ Potential issue

Tool-call Name suffers from the same pointer-aliasing issue

&toolCall.Name captures the address of the loop variable, so every tool-call ends up with the name of the last iteration.

This repeats the earlier review comment and is still unresolved.

-                tc := schemas.ToolCall{
+                nameCopy := toolCall.Name
+                tc := schemas.ToolCall{
                     Type: stringPtr("function"),
                     ID:   toolCall.ID,
                     Function: schemas.FunctionCall{
-                        Name:      &toolCall.Name,
+                        Name:      &nameCopy,
                         Arguments: argsJSON,
                     },
                 }
transports/bifrost-http/integrations/anthropic/router.go (1)

30-37: JSON encoding issues correctly resolved.

The implementation now properly uses json.Marshal and ctx.SetBody instead of the incompatible json.NewEncoder(ctx).Encode() pattern. This addresses all the previous fasthttp compatibility issues.

Also applies to: 56-62, 64-69

transports/bifrost-http/integrations/langchain/router.go (3)

34-38: Use fasthttp’s idiomatic JSON write path
Same issue was raised previously: json.NewEncoder(ctx).Encode(...) allocates an extra
buffer and writes via the slow path. Marshal first, then ctx.SetBody.

Also applies to: 52-54, 68-70, 87-88, 102-103


122-127: Batch error response still inconsistent

Only the error message string is returned, losing structured details (code, type, etc.). See prior review comment for a concrete patch.


140-145: Streaming endpoint advertises streaming but doesn’t stream

Calling the non-streaming handler is misleading; either implement SSE/WebSockets or return 501 with an explanatory message (see earlier suggestion).

transports/bifrost-http/integrations/langgraph/router.go (3)

203-210: Non-idiomatic zero-value struct comparison
Duplicate of earlier feedback – compare a sentinel field such as TotalTokens > 0 instead of result.Usage != (schemas.LLMUsage{}).


238-249: Hard-coded fallback content should be configurable
Same issue was raised previously; consider passing defaults via config instead of "Continue the conversation" / "Start conversation".


311-329: Predictable ID generation
randomString cycles through a fixed charset, producing non-random IDs. Replace with crypto/rand or UUID.
(Previous comment already provided a patch.)
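A sketch of the crypto/rand replacement (the `newID` name and 16-byte length are illustrative choices, not taken from the prior patch):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newID returns 16 bytes of cryptographically secure randomness as a
// 32-character hex string, unlike a charset-cycling helper whose output
// is predictable.
func newID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	id, err := newID()
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}
```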

transports/bifrost-http/integrations/anthropic/types.go (3)

98-126: Stream flag, image blocks & full tool-result payloads are silently ignored
The conversion omits r.Stream, drops "image" content blocks, and flattens tool-result content to plain text. Behaviour was flagged earlier and is still present.


243-251: jsonifyInput hides marshal errors
Returning "{}" masks the real problem and makes debugging impossible. Propagate or log the error instead.


286-291: Usage struct zero-value check
Use bifrostResp.Usage.TotalTokens > 0 rather than comparing the whole struct to its zero value.
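The suggested sentinel check can be sketched with an approximation of `schemas.LLMUsage` (field names assumed from the snippets above):

```go
package main

import "fmt"

// usage approximates schemas.LLMUsage for illustration only.
type usage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
}

// hasUsage checks a single sentinel field instead of comparing the whole
// struct to its zero value; it keeps compiling even if the struct later
// gains a non-comparable field such as a slice or map.
func hasUsage(u usage) bool {
	return u.TotalTokens > 0
}

func main() {
	fmt.Println(hasUsage(usage{PromptTokens: 12, CompletionTokens: 30, TotalTokens: 42}))
	fmt.Println(hasUsage(usage{}))
}
```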

transports/bifrost-http/integrations/mistral/types.go (2)

239-246: Non-idiomatic zero-value check for usage
Same concern as earlier files – check TotalTokens instead of struct equality.


262-276: Potential nil-pointer dereference when mapping tool calls
Pointer fields (toolCall.Type, toolCall.Function.Name) are dereferenced without checks, risking panics. Prior feedback already covered this.

transports/bifrost-http/integrations/langgraph/types.go (2)

381-404: json.NewEncoder(nil).Encode(...) still panics – replace with fmt.Errorf

Previous review already highlighted this. The call dereferences a nil writer and will panic on the first validation failure.

-import "encoding/json"
+import (
+    "encoding/json"
+    "fmt"
+)

...
-    return json.NewEncoder(nil).Encode(map[string]string{"error": "Start node not found"})
+    return fmt.Errorf("start node %q not found", g.StartNode)
...
-        return json.NewEncoder(nil).Encode(map[string]string{"error": "Edge references non-existent 'from' node: " + edge.From})
+        return fmt.Errorf("edge references non-existent 'from' node %q", edge.From)
...
-            return json.NewEncoder(nil).Encode(map[string]string{"error": "End node not found: " + endNodeID})
+            return fmt.Errorf("end node %q not found", endNodeID)

Missing this fix crashes every invalid-graph request.
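The replacement pattern in isolation: plain errors from validation, with a minimal `graph` stand-in for the real LangGraph types.

```go
package main

import "fmt"

// graph is a minimal stand-in for the LangGraph definition being validated.
type graph struct {
	StartNode string
	Nodes     map[string]bool
}

// validate returns ordinary errors; the json.NewEncoder(nil).Encode(...)
// pattern it replaces dereferences a nil writer and panics instead.
func (g *graph) validate() error {
	if !g.Nodes[g.StartNode] {
		return fmt.Errorf("start node %q not found", g.StartNode)
	}
	return nil
}

func main() {
	g := &graph{StartNode: "entry", Nodes: map[string]bool{"other": true}}
	if err := g.validate(); err != nil {
		fmt.Println("invalid graph:", err)
	}
}
```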


186-197: Pointer to loop variable content escapes – messages share the last value

Addressing content inside the loop stores the same pointer in every BifrostMessage. Copy the value first:

-   if content, ok := msgMap["content"].(string); ok {
-       msg.Content = &content
-   }
+   if content, ok := msgMap["content"].(string); ok {
+       c := content
+       msg.Content = &c
+   }

Do the same for any other loop-scoped variables taken by address (role, etc.).

transports/bifrost-http/integrations/litellm/types.go (4)

100-110: Captured pointers all reference the same loop variable instance

The previous review already covered this; the issue remains unaddressed. Create per-iteration copies before taking addresses to ensure distinct pointers.


126-167: Important LiteLLM fields still silently dropped

Stop, ToolChoice, ResponseFormat, Stream, and Seed are ignored during parameter translation, potentially altering request semantics. Map them at least into params.ExtraParams so downstream providers receive the intent.


325-347: Provider detection remains case-sensitive → mis-routes mixed-case model names

The helper still compares raw substrings; convert to lowercase before matching.

-func contains(s, substr string) bool {
-    return strings.Contains(s, substr)
+func contains(s, substr string) bool {
+    return strings.Contains(strings.ToLower(s), substr)
 }

Then drop the redundant strings.ToLower calls in the caller if added.


429-445: Nil-pointer dereference risk when building tool-call response

*toolCall.Type and *toolCall.Function.Name are dereferenced without checks. If upstream omits them the proxy panics. Guard with nil checks or default empty strings.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c5efb2e and 728e2d4.

📒 Files selected for processing (15)
  • docs/integrations.md (1 hunks)
  • transports/bifrost-http/integrations/anthropic/router.go (1 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (1 hunks)
  • transports/bifrost-http/integrations/genai/router.go (2 hunks)
  • transports/bifrost-http/integrations/langchain/router.go (1 hunks)
  • transports/bifrost-http/integrations/langchain/types.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/router.go (1 hunks)
  • transports/bifrost-http/integrations/langgraph/types.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/types.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/router.go (1 hunks)
  • transports/bifrost-http/integrations/mistral/types.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/types.go (1 hunks)
  • transports/bifrost-http/main.go (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (7)
transports/bifrost-http/main.go (8)
transports/bifrost-http/integrations/utils.go (1)
  • ExtensionRouter (5-7)
transports/bifrost-http/integrations/genai/router.go (1)
  • NewGenAIRouter (18-20)
transports/bifrost-http/integrations/openai/router.go (1)
  • NewOpenAIRouter (18-20)
transports/bifrost-http/integrations/anthropic/router.go (1)
  • NewAnthropicRouter (18-20)
transports/bifrost-http/integrations/mistral/router.go (1)
  • NewMistralRouter (18-20)
transports/bifrost-http/integrations/litellm/router.go (1)
  • NewLiteLLMRouter (18-20)
transports/bifrost-http/integrations/langchain/router.go (1)
  • NewLangChainRouter (19-21)
transports/bifrost-http/integrations/langgraph/router.go (1)
  • NewLangGraphRouter (19-21)
transports/bifrost-http/integrations/openai/types.go (2)
core/schemas/bifrost.go (22)
  • ToolChoiceTypeFunction (126-126)
  • FunctionCall (260-263)
  • Function (101-105)
  • ToolChoice (137-140)
  • LogProbs (253-257)
  • BifrostRequest (54-64)
  • OpenAI (34-34)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • ToolChoiceTypeNone (120-120)
  • ToolChoiceTypeAuto (122-122)
  • ToolChoiceTypeRequired (128-128)
  • ToolChoiceFunction (132-134)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
transports/bifrost-http/integrations/mistral/router.go (2)
transports/bifrost-http/integrations/mistral/types.go (2)
  • MistralChatRequest (51-63)
  • DeriveMistralFromBifrostResponse (226-285)
transports/bifrost-http/lib/ctx.go (1)
  • ConvertToBifrostContext (43-73)
transports/bifrost-http/integrations/anthropic/types.go (2)
core/schemas/bifrost.go (20)
  • ToolChoiceTypeFunction (126-126)
  • ToolChoice (137-140)
  • BifrostRequest (54-64)
  • Anthropic (36-36)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRoleSystem (25-25)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • Function (101-105)
  • FunctionCall (260-263)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • ToolChoiceType (116-116)
  • ToolChoiceFunction (132-134)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
transports/bifrost-http/integrations/mistral/types.go (2)
core/schemas/bifrost.go (16)
  • Function (101-105)
  • ToolChoice (137-140)
  • BifrostRequest (54-64)
  • OpenAI (34-34)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • FunctionCall (260-263)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
transports/bifrost-http/integrations/langchain/types.go (2)
core/schemas/provider.go (1)
  • Provider (142-149)
core/schemas/bifrost.go (19)
  • BifrostRequest (54-64)
  • OpenAI (34-34)
  • ModelProvider (31-31)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRoleUser (24-24)
  • ModelChatMessageRoleAssistant (23-23)
  • ModelChatMessageRoleSystem (25-25)
  • ModelChatMessageRoleTool (27-27)
  • ToolCall (266-270)
  • Function (101-105)
  • FunctionCall (260-263)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
transports/bifrost-http/integrations/litellm/types.go (2)
core/schemas/bifrost.go (21)
  • Function (101-105)
  • ToolChoice (137-140)
  • BifrostRequest (54-64)
  • RequestInput (46-49)
  • BifrostMessage (143-152)
  • ModelChatMessageRole (20-20)
  • ToolCall (266-270)
  • FunctionCall (260-263)
  • AssistantMessage (166-171)
  • ToolMessage (160-163)
  • ModelParameters (75-89)
  • Tool (108-112)
  • FunctionParameters (92-98)
  • ModelProvider (31-31)
  • OpenAI (34-34)
  • Anthropic (36-36)
  • Vertex (39-39)
  • Bedrock (37-37)
  • Cohere (38-38)
  • BifrostResponse (184-194)
  • LLMUsage (197-203)
core/schemas/provider.go (1)
  • Provider (142-149)
🪛 LanguageTool
docs/integrations.md

[grammar] ~390-~390: Possible subject-verb agreement error.
Context: ...oad Balancing:** Bifrost automatically load balances requests across multiple keys ...

(SINGULAR_NOUN_ADV_AGREEMENT)


[uncategorized] ~423-~423: Possible missing preposition found.
Context: ...st for consistent logging and metrics - Track usage, costs, and performance across al...

(AI_HYDRA_LEO_MISSING_TO)

🔇 Additional comments (7)
transports/bifrost-http/main.go (1)

185-193: 🧹 Nitpick (assertive)

Variable shadows imported package name integrations

Declaring extensions := []integrations.ExtensionRouter{...} inside main shadows the package-level import integrations, which can confuse readers and tooling.

-    extensions := []integrations.ExtensionRouter{
+    extRouters := []integrations.ExtensionRouter{
         ...
     }
 ...
-    for _, extension := range extensions {
+    for _, extension := range extRouters {
         extension.RegisterRoutes(r)
     }

Likely an incorrect or invalid review comment.

docs/integrations.md (1)

1-479: Excellent comprehensive integration guide.

The documentation is well-structured and provides clear examples for all supported integrations. It effectively demonstrates how users can migrate to Bifrost with minimal code changes while gaining benefits like fallbacks, load balancing, and unified monitoring.


transports/bifrost-http/integrations/mistral/router.go (2)

51-67: Excellent error handling implementation.

The sophisticated error handling correctly distinguishes between client errors (4xx) and server errors (5xx) based on error properties. This provides better client experience and follows HTTP semantics properly.

This error handling pattern should be considered the best practice for other integration routers in this PR.


30-36: Proper JSON encoding with fasthttp.

The JSON encoding correctly uses json.Marshal and ctx.SetBody pattern, which is compatible with fasthttp. This addresses the previous compatibility issues.

Also applies to: 62-66, 70-74

transports/bifrost-http/integrations/litellm/router.go (1)

32-40: 👍 Correct switch to json.Marshal + SetBody

All JSON responses now use the idiomatic fasthttp pattern that was requested in earlier reviews.

Also applies to: 70-78

transports/bifrost-http/integrations/openai/types.go (1)

90-118: Merged tool-call handling looks good

The earlier overwrite bug has been fixed: both modern tool_calls and legacy function_call data are merged into a single slice before assignment.

transports/bifrost-http/integrations/langgraph/router.go (1)

65-78: /stream endpoint is synchronous – clarify or implement real streaming

The handler just delegates to executeGraph and returns a full JSON body, yet the route name implies server-sent streaming. This mismatch will confuse SDKs that expect chunked replies.

Either:

  1. Implement a proper SSE/websocket stream, or
  2. Rename the route and document that streaming is not yet supported.

Comment on lines +56 to +63
bifrostReq := &schemas.BifrostRequest{
    Provider: provider,
    Model:    r.Model,
    Input: schemas.RequestInput{
        ChatCompletionInput: &[]schemas.BifrostMessage{},
    },
}

🧹 Nitpick (assertive)

Avoid taking a pointer to a composite literal

Using &[]schemas.BifrostMessage{} takes a pointer to a slice header created inline; while legal, this is harder to read and easy to misuse.

Prefer allocating the slice first and then referencing it; this keeps the code clear and symmetric with later appends.

-    bifrostReq := &schemas.BifrostRequest{
-        Provider: provider,
-        Model:    r.Model,
-        Input: schemas.RequestInput{
-            ChatCompletionInput: &[]schemas.BifrostMessage{},
-        },
-    }
+    chatInput := []schemas.BifrostMessage{}
+    bifrostReq := &schemas.BifrostRequest{
+        Provider: provider,
+        Model:    r.Model,
+        Input: schemas.RequestInput{
+            ChatCompletionInput: &chatInput,
+        },
+    }
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langchain/types.go around lines 56 to
63, avoid taking a pointer directly to a composite literal slice. Instead,
declare the slice variable first, then assign its address to
ChatCompletionInput. This improves readability and aligns with common Go
practices for slice pointers.

Comment on lines 63 to 68
 genAIResponse := DeriveGenAIFromBifrostResponse(result)
 ctx.SetStatusCode(fasthttp.StatusOK)
 ctx.SetContentType("application/json")
-json.NewEncoder(ctx).Encode(genAIResponse)
+jsonBytes, _ := json.Marshal(genAIResponse)
+ctx.SetBody(jsonBytes)
 }
🧹 Nitpick (assertive)

Success response: same unhandled marshal-error & unstructured pattern

Same concerns apply to the success path; additionally, for large responses pre-allocating a buffer via ctx.Write is marginally faster than SetBody.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai/router.go around lines 63 to 68,
the success response handling ignores errors from json.Marshal and uses
ctx.SetBody which is less efficient for large responses. Fix this by checking
and handling the error returned from json.Marshal properly, returning an
appropriate error response if marshalling fails. Also, replace ctx.SetBody with
ctx.Write using a pre-allocated buffer to improve performance for large JSON
responses.

Comment on lines 41 to 47
 var req GeminiChatRequest
 if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
     ctx.SetStatusCode(fasthttp.StatusBadRequest)
-    json.NewEncoder(ctx).Encode(err)
     ctx.SetContentType("application/json")
+    jsonBytes, _ := json.Marshal(err)
+    ctx.SetBody(jsonBytes)
     return
🧹 Nitpick (assertive)

Error body is marshalled but still unstructured

json.Marshal(err) encodes the error as a bare JSON string ("some error").
Consider wrapping it in a structured object (`{"error": "..."}`) to allow callers to reliably parse error details.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai/router.go around lines 41 to 47,
the error response body is marshalled directly from the error, resulting in an
unstructured JSON string. To fix this, wrap the error message in a structured
JSON object with a key like "error" before marshalling, so the response body is
a JSON object that clients can reliably parse.

Comment on lines 55 to 60
 if err != nil {
     ctx.SetStatusCode(fasthttp.StatusInternalServerError)
-    json.NewEncoder(ctx).Encode(err)
     ctx.SetContentType("application/json")
+    jsonBytes, _ := json.Marshal(err)
+    ctx.SetBody(jsonBytes)
     return
🧹 Nitpick (assertive)

Ignored marshal errors

Both places ignore the json.Marshal error with _. While unlikely, failed marshaling would leave the client with an empty body. Log or handle the error.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/genai/router.go around lines 55 to 60,
the error returned by json.Marshal is ignored, which could result in an empty
response body if marshaling fails. Modify the code to check the error returned
by json.Marshal, and if an error occurs, log it or handle it appropriately
before setting the response body. This ensures the client receives meaningful
feedback even if marshaling fails.

Comment on lines +314 to +322
 jsonBytes, _ := json.Marshal(err)
 ctx.SetBody(jsonBytes)
 return
 }

 ctx.SetStatusCode(fasthttp.StatusOK)
 ctx.SetContentType("application/json")
-json.NewEncoder(ctx).Encode(resp)
+jsonBytes, _ := json.Marshal(resp)
+ctx.SetBody(jsonBytes)
🧹 Nitpick (assertive)

Repeated silent marshal failures

json.Marshal(err) and json.Marshal(resp) ignore the returned error; on the (rare) chance of failure you’ll emit an empty body with 200/500 status. Capture and fall back to ctx.SetBodyString(err.Error()).

🤖 Prompt for AI Agents
In transports/bifrost-http/main.go around lines 314 to 322, the json.Marshal
calls for err and resp ignore potential errors, which can lead to empty response
bodies on marshal failure. Modify the code to check the error returned by
json.Marshal; if an error occurs, use ctx.SetBodyString with err.Error() as a
fallback to ensure a meaningful response body is always sent.

Comment on lines +137 to +178
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.PresencePenalty != nil ||
    r.FrequencyPenalty != nil || r.N != nil || r.LogProbs != nil || r.TopLogProbs != nil ||
    r.Stop != nil || r.LogitBias != nil {
    params := &schemas.ModelParameters{
        ExtraParams: make(map[string]interface{}),
    }

    if r.MaxTokens != nil {
        params.MaxTokens = r.MaxTokens
    }
    if r.Temperature != nil {
        params.Temperature = r.Temperature
    }
    if r.TopP != nil {
        params.TopP = r.TopP
    }
    if r.PresencePenalty != nil {
        params.PresencePenalty = r.PresencePenalty
    }
    if r.FrequencyPenalty != nil {
        params.FrequencyPenalty = r.FrequencyPenalty
    }
    if r.N != nil {
        params.ExtraParams["n"] = r.N
    }
    if r.LogProbs != nil {
        params.ExtraParams["logprobs"] = r.LogProbs
    }
    if r.TopLogProbs != nil {
        params.ExtraParams["top_logprobs"] = r.TopLogProbs
    }
    if r.Stop != nil {
        params.ExtraParams["stop"] = r.Stop
    }
    if r.LogitBias != nil {
        params.ExtraParams["logit_bias"] = r.LogitBias
    }

    bifrostReq.Params = params
}

🧹 Nitpick (assertive)

Parameter propagation still omits response_format, seed, stream

Minor, but these user-facing knobs are silently discarded. Consider forwarding them via ExtraParams for completeness.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go around lines 137 to 178,
the code converts parameters but omits forwarding the user-facing parameters
response_format, seed, and stream. To fix this, add checks for these parameters
similar to the existing ones and include them in the ExtraParams map of params
if they are not nil, ensuring these parameters are propagated correctly.

Comment on lines +449 to +459
tc := OpenAIToolCall{
    Type: *toolCall.Type,
    Function: OpenAIFunctionCall{
        Name:      *toolCall.Function.Name,
        Arguments: toolCall.Function.Arguments,
    },
}
if toolCall.ID != nil {
    tc.ID = *toolCall.ID
}
toolCalls = append(toolCalls, tc)
⚠️ Potential issue

Nil-pointer panic when Type or Function.Name is unset

*toolCall.Type and *toolCall.Function.Name are dereferenced without a nil check.
If either field is missing (both are pointers in the Bifrost schema), the server will panic.

-tc := OpenAIToolCall{
-    Type: *toolCall.Type,
-    Function: OpenAIFunctionCall{
-        Name:      *toolCall.Function.Name,
-        Arguments: toolCall.Function.Arguments,
-    },
-}
+var t string
+if toolCall.Type != nil {
+    t = *toolCall.Type
+}
+fnName := ""
+if toolCall.Function.Name != nil {
+    fnName = *toolCall.Function.Name
+}
+tc := OpenAIToolCall{
+    Type: t,
+    Function: OpenAIFunctionCall{
+        Name:      fnName,
+        Arguments: toolCall.Function.Arguments,
+    },
+}

Add identical nil-guards anywhere these pointers are dereferenced.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/openai/types.go around lines 449 to 459,
the code dereferences pointers toolCall.Type and toolCall.Function.Name without
checking if they are nil, which can cause a nil-pointer panic. Add nil checks
before dereferencing these pointers to ensure they are not nil, and handle the
nil case appropriately to prevent the server from panicking.

Comment on lines +36 to +43
if err := json.Unmarshal(ctx.PostBody(), &req); err != nil {
    ctx.SetStatusCode(fasthttp.StatusBadRequest)
    ctx.SetContentType("application/json")
    errResponse := map[string]string{"error": err.Error()}
    jsonBytes, _ := json.Marshal(errResponse)
    ctx.SetBody(jsonBytes)
    return
}
🛠️ Refactor suggestion

Avoid repeated JSON marshalling & unchecked errors

json.Marshal is invoked for every 4xx/5xx reply and its error is silently discarded.
Use json.NewEncoder(ctx).Encode(errResponse) (no extra allocation) and handle/propagate the encode error if it ever happens.

-errResponse := map[string]string{"error": err.Error()}
-jsonBytes, _ := json.Marshal(errResponse)
-ctx.SetBody(jsonBytes)
+if encodeErr := json.NewEncoder(ctx).Encode(map[string]string{"error": err.Error()}); encodeErr != nil {
+    ctx.Error("failed to encode error response", fasthttp.StatusInternalServerError)
+}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langgraph/router.go around lines 36 to
43, the code repeatedly calls json.Marshal to encode error responses and ignores
any errors from marshalling. Replace json.Marshal with
json.NewEncoder(ctx).Encode(errResponse) to write directly to the response body
without extra allocation, and properly check and handle any encoding errors that
occur.

Comment on lines +112 to +135
// Convert parameters
if r.MaxTokens != nil || r.Temperature != nil || r.TopP != nil || r.RandomSeed != nil || r.SafePrompt != nil {
params := &schemas.ModelParameters{
ExtraParams: make(map[string]interface{}),
}

if r.MaxTokens != nil {
params.MaxTokens = r.MaxTokens
}
if r.Temperature != nil {
params.Temperature = r.Temperature
}
if r.TopP != nil {
params.TopP = r.TopP
}
if r.RandomSeed != nil {
params.ExtraParams["random_seed"] = r.RandomSeed
}
if r.SafePrompt != nil {
params.ExtraParams["safe_prompt"] = r.SafePrompt
}

bifrostReq.Params = params
}

🛠️ Refactor suggestion

Important parameters silently dropped

stream, response_format, and arbitrary tool_choice variants are accepted in the request but never forwarded to BifrostRequest.Params.
At minimum pass them through ExtraParams so callers don’t lose functionality:

 if r.Stream != nil {
-    // ignored
+    params.ExtraParams["stream"] = r.Stream
 }
 if r.ResponseFormat != nil {
-    // ignored
+    params.ExtraParams["response_format"] = r.ResponseFormat
 }
 // TODO: map tool_choice into ToolChoice when it’s a structured object

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/mistral/types.go around lines 112 to
135, the parameters stream, response_format, and tool_choice variants are
accepted but not forwarded to BifrostRequest.Params, causing them to be silently
dropped. To fix this, add these parameters to the ExtraParams map within the
ModelParameters struct so they are passed through correctly and callers retain
full functionality.

Comment on lines +338 to +355
func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
for _, node := range g.Nodes {
if node.ID == g.StartNode {
return &node
}
}
return nil
}

// GetNode returns a node by ID
func (g *LangGraphDefinition) GetNode(id string) *LangGraphNode {
for _, node := range g.Nodes {
if node.ID == id {
return &node
}
}
return nil
}

⚠️ Potential issue

Returning a pointer to the range variable hands callers a detached copy

range copies each element into the iteration variable, so &node is the address of that copy, not of the slice element. Because the function returns immediately, the copy does hold the right field values, but any mutation through the returned pointer never reaches g.Nodes: callers silently edit throwaway memory. Return the address of the actual slice element instead.

-func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
-    for _, node := range g.Nodes {
-        if node.ID == g.StartNode {
-            return &node            // <- points to loop var
-        }
-    }
-    return nil
-}
+func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
+    for i := range g.Nodes {
+        if g.Nodes[i].ID == g.StartNode {
+            return &g.Nodes[i]      // address of slice element
+        }
+    }
+    return nil
+}

Apply the same pattern to GetNode and any other helpers that return a pointer to a ranged variable.

📝 Committable suggestion


Suggested change
func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
for _, node := range g.Nodes {
if node.ID == g.StartNode {
return &node
}
}
return nil
}
// GetNode returns a node by ID
func (g *LangGraphDefinition) GetNode(id string) *LangGraphNode {
for _, node := range g.Nodes {
if node.ID == id {
return &node
}
}
return nil
}
func (g *LangGraphDefinition) GetStartNode() *LangGraphNode {
for i := range g.Nodes {
if g.Nodes[i].ID == g.StartNode {
return &g.Nodes[i] // address of the actual slice element
}
}
return nil
}
// GetNode returns a node by ID
func (g *LangGraphDefinition) GetNode(id string) *LangGraphNode {
for i := range g.Nodes {
if g.Nodes[i].ID == id {
return &g.Nodes[i] // address of the actual slice element
}
}
return nil
}
🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/langgraph/types.go around lines 338 to
355, the methods GetStartNode and GetNode return pointers to the loop variable
'node', which is reused in each iteration causing all returned pointers to
reference the same memory location. To fix this, instead of returning the
address of the loop variable, return the address of the slice element directly
by using the index to get a pointer to the actual element in the slice. Apply
this pattern to both GetStartNode and GetNode methods.
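A self-contained demonstration of the difference between the two patterns, using an illustrative Node type rather than the actual LangGraphNode schema. Mutations through the pointer-to-copy are lost; mutations through the pointer-to-element stick:

```go
package main

import "fmt"

// Illustrative stand-in for the LangGraphNode type in the PR.
type Node struct {
	ID    string
	Label string
}

// byCopy returns the address of the range variable: a pointer to a copy of
// the element, so writes through it never reach the underlying slice.
func byCopy(nodes []Node, id string) *Node {
	for _, node := range nodes {
		if node.ID == id {
			return &node
		}
	}
	return nil
}

// byIndex returns the address of the slice element itself, so writes through
// the pointer are visible to every other holder of the slice.
func byIndex(nodes []Node, id string) *Node {
	for i := range nodes {
		if nodes[i].ID == id {
			return &nodes[i]
		}
	}
	return nil
}

func main() {
	nodes := []Node{{ID: "start"}, {ID: "end"}}

	byCopy(nodes, "start").Label = "lost" // mutates a detached copy
	byIndex(nodes, "start").Label = "kept" // mutates the slice element

	fmt.Println(nodes[0].Label) // kept
}
```

The Go 1.22 per-iteration loop-variable change does not alter this: the iteration variable is still a copy of the element, so only the index form yields a pointer into the slice.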

@akshaydeo deleted the feature/06-05-_integrations_sdk_supports_for_anthropic_langchain_langgraph_litellm_mistral_openai branch August 31, 2025 17:31