
feat: add LiteLLM router integration with model provider detection#67

Merged
akshaydeo merged 1 commit into main from 06-10-feat_add_litellm_integration_to_http_transport
Jun 10, 2025

Conversation

@Pratham-Mishra04
Collaborator

Add LiteLLM Integration Support

This PR adds LiteLLM integration support to the Bifrost HTTP transport. LiteLLM is a popular proxy that standardizes access to various LLM providers through an OpenAI-compatible interface.

Key changes:

  • Created a new LiteLLMRouter that handles requests to /litellm/chat/completions endpoint
  • Added model name pattern detection to automatically determine the appropriate provider based on the model name
  • Implemented comprehensive pattern matching for various AI providers (OpenAI, Azure, Anthropic, Google Vertex, AWS Bedrock, Cohere)
  • Registered the LiteLLM router in the main HTTP server

This integration allows users to route requests through Bifrost using LiteLLM's standardized interface while maintaining compatibility with existing OpenAI request/response formats.
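From a client's perspective, the flow above can be sketched with a small self-contained example. The handler below is a stand-in stub, not Bifrost's actual implementation; only the `/litellm/chat/completions` path and the OpenAI-style request/response shapes come from this PR.

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// newStub starts a server standing in for Bifrost's /litellm/chat/completions
// route; it echoes a fixed OpenAI-format response for illustration.
func newStub() *httptest.Server {
	return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req struct {
			Model string `json:"model"`
		}
		json.NewDecoder(r.Body).Decode(&req)
		resp := map[string]any{
			"model": req.Model,
			"choices": []map[string]any{
				{"message": map[string]any{"role": "assistant", "content": "hello"}},
			},
		}
		json.NewEncoder(w).Encode(resp)
	}))
}

// callChatCompletions posts an OpenAI-format chat request to the LiteLLM
// route and returns the first choice's message content.
func callChatCompletions(baseURL, model, prompt string) (string, error) {
	body, _ := json.Marshal(map[string]any{
		"model":    model,
		"messages": []map[string]string{{"role": "user", "content": prompt}},
	})
	resp, err := http.Post(baseURL+"/litellm/chat/completions", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)
	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.Unmarshal(raw, &out); err != nil {
		return "", err
	}
	if len(out.Choices) == 0 {
		return "", errors.New("no choices in response")
	}
	return out.Choices[0].Message.Content, nil
}

func main() {
	srv := newStub()
	defer srv.Close()
	content, err := callChatCompletions(srv.URL, "claude-3-opus", "hi")
	fmt.Println(content, err)
}
```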


coderabbitai Bot commented Jun 9, 2025

Caution

Review failed

The pull request is closed.

Summary by CodeRabbit

  • New Features
    • Added support for LiteLLM integration, enabling handling of multiple AI providers (OpenAI, Anthropic, Vertex, Azure) with OpenAI-compatible API routes.
  • Bug Fixes
    • Improved error handling for request and response conversions, returning appropriate HTTP status codes for invalid or failed conversions.
  • Refactor
    • Standardized response and request conversion functions across integrations for more robust and explicit error handling.
    • Enhanced provider detection logic for model names.
    • Improved handling and conversion of image content types for Anthropic and GenAI integrations.
  • Chores
    • Updated core dependency to the latest version.
    • Enhanced 404 error messages to include the requested path.


Walkthrough

A new LiteLLM router is introduced to the Bifrost HTTP transport, enabling OpenAI-compatible chat completion requests at dedicated endpoints. This integration uses a provider-detection utility to route requests based on model names and registers the LiteLLM router alongside existing provider routers in the main server initialization. The 404 handler was updated to include the requested path in its response. Additionally, request and response converter functions across multiple integrations were updated to return errors explicitly, improving robustness. Image content type handling was standardized using constants, and a core dependency was upgraded.
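The explicit error-returning converter pattern the walkthrough describes can be sketched as follows. The type names here are simplified stand-ins, not the actual Bifrost schemas; only the shape of the change (converters returning an error instead of silently returning nil) comes from this PR.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the integration types; names are illustrative.
type BifrostRequest struct{ Provider, Model string }
type OpenAIChatRequest struct{ Model string }

// RequestConverter mirrors the updated signature: converters now return
// (request, error) rather than a bare pointer the caller must nil-check.
type RequestConverter func(req interface{}) (*BifrostRequest, error)

// openAIConverter rejects unexpected input types with an explicit error
// instead of returning nil, so failures surface as HTTP error responses.
func openAIConverter(req interface{}) (*BifrostRequest, error) {
	chatReq, ok := req.(*OpenAIChatRequest)
	if !ok || chatReq == nil {
		return nil, errors.New("invalid request type: expected *OpenAIChatRequest")
	}
	return &BifrostRequest{Provider: "openai", Model: chatReq.Model}, nil
}

func main() {
	if r, err := openAIConverter(&OpenAIChatRequest{Model: "gpt-4o"}); err == nil {
		fmt.Println("converted:", r.Model)
	}
	if _, err := openAIConverter("not a request"); err != nil {
		fmt.Println("error:", err)
	}
}
```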

Changes

File(s) Change Summary
transports/bifrost-http/integrations/litellm/router.go Added LiteLLMRouter struct and NewLiteLLMRouter function; defines POST routes for chat completions and message endpoints; handles provider detection and request/response conversion with error handling.
transports/bifrost-http/integrations/utils.go Added GetProviderFromModel function with helpers for provider detection based on model name; updated RequestConverter and ResponseConverter function types and call sites to return errors; renamed RouteConfig.ResponseFunc to ResponseConverter.
transports/bifrost-http/main.go Imported LiteLLM integration; registered LiteLLM router in extension routers; enhanced 404 handler to include requested path.
transports/bifrost-http/integrations/anthropic/types.go Added helper convertAnthropicImageSource to convert Anthropic image sources to Bifrost format; standardized image content type handling using constants.
transports/bifrost-http/integrations/genai/types.go Replaced string pointer usage for image content types with predefined constants; updated assignments accordingly.
transports/bifrost-http/integrations/anthropic/router.go Updated RequestConverter and ResponseConverter signatures to return errors; added explicit error handling for invalid input types.
transports/bifrost-http/integrations/genai/router.go Updated RequestConverter and ResponseConverter signatures to return errors; added explicit error handling for invalid input types.
transports/bifrost-http/integrations/openai/router.go Updated RequestConverter and ResponseConverter signatures to return errors; added explicit error handling for invalid input types; imported errors package.
transports/go.mod Updated github.com/maximhq/bifrost/core dependency from v1.0.9 to v1.0.10.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant LiteLLMRouter
    participant Bifrost
    participant Provider

    Client->>LiteLLMRouter: POST /litellm/chat/completions (OpenAIChatRequest)
    LiteLLMRouter->>LiteLLMRouter: Validate model field\nDetect provider from model string\nUnmarshal request to provider-specific type
    LiteLLMRouter->>LiteLLMRouter: Convert to BifrostRequest
    LiteLLMRouter->>Bifrost: Forward BifrostRequest
    Bifrost->>Provider: Process chat completion
    Provider-->>Bifrost: Return response
    Bifrost-->>LiteLLMRouter: Return BifrostResponse
    LiteLLMRouter->>LiteLLMRouter: Convert response to provider-specific format
    LiteLLMRouter-->>Client: Return OpenAI-compatible response

Poem

In the warren of code, a new route appears,
LiteLLM hops in, allaying old fears.
With models detected by clever new art,
It sends chat completions with a rabbit’s swift heart.
Now Bifrost speaks OpenAI, Anthropic, and more—
The garden of AI grows richer than before!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a73e38 and c790b36.

⛔ Files ignored due to path filters (1)
  • transports/go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
  • transports/bifrost-http/integrations/anthropic/router.go (2 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (3 hunks)
  • transports/bifrost-http/integrations/genai/router.go (2 hunks)
  • transports/bifrost-http/integrations/genai/types.go (1 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (2 hunks)
  • transports/bifrost-http/integrations/utils.go (10 hunks)
  • transports/bifrost-http/main.go (3 hunks)
  • transports/go.mod (1 hunks)


Collaborator Author

Pratham-Mishra04 commented Jun 9, 2025

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2a21331 and 7f96552.

📒 Files selected for processing (3)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/utils.go (2 hunks)
  • transports/bifrost-http/main.go (2 hunks)
🔇 Additional comments (4)
transports/bifrost-http/main.go (2)

50-50: LGTM! Import follows established patterns.

The import is correctly placed in alphabetical order and follows the existing integration import pattern.


202-202: LGTM! Router registration follows established patterns.

The LiteLLM router is properly initialized and registered alongside existing routers, maintaining consistency with the extension architecture.

transports/bifrost-http/integrations/litellm/router.go (2)

31-31: ✅ Verification successful

Verify provider detection integration.

The provider detection logic looks correct, but ensure that the GetProviderFromModel function handles edge cases appropriately (empty strings, nil model names).

Verify the provider detection handles edge cases:


🏁 Script executed:

#!/bin/bash
# Description: Check if GetProviderFromModel handles edge cases properly

# Search for validation in GetProviderFromModel
rg -A 10 -B 5 "func GetProviderFromModel" --type go

# Look for potential nil or empty string handling
rg -A 5 -B 5 "strings\.TrimSpace\|strings\.ToLower" transports/bifrost-http/integrations/utils.go

Length of output: 1512


🏁 Script executed:

#!/bin/bash
# Retrieve the full GetProviderFromModel function to verify default behavior and edge-case handling
rg -n "func GetProviderFromModel" -A200 -B0 transports/bifrost-http/integrations/utils.go

Length of output: 3181


Provider detection handles empty and whitespace-only inputs

The GetProviderFromModel function trims whitespace and lowercases the input, then falls back to schemas.OpenAI for any model string that doesn't match a known pattern, including empty or whitespace-only strings. Since Go strings can't be nil, no additional nil-checking is required here. No changes needed.
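A minimal sketch of the normalize-then-fallback behavior described here. The pattern table below is illustrative; only the trim/lowercase normalization and the default-to-OpenAI fallback come from the verification above.

```go
package main

import (
	"fmt"
	"strings"
)

// providerFromModel sketches the described behavior: normalize the model
// name, match a few known patterns, and default to "openai" for anything
// else (including empty or whitespace-only input).
func providerFromModel(model string) string {
	m := strings.ToLower(strings.TrimSpace(model))
	switch {
	case strings.Contains(m, "claude"):
		return "anthropic"
	case strings.Contains(m, "gemini"):
		return "vertex"
	}
	// Empty, whitespace-only, and unknown names all land here.
	return "openai"
}

func main() {
	for _, m := range []string{"  CLAUDE-3-Haiku ", "", "gpt-4o", "gemini-1.5-pro"} {
		fmt.Printf("%q -> %s\n", m, providerFromModel(m))
	}
}
```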


22-22: ✅ Verification successful

Consider endpoint naming consistency.

The endpoint uses /litellm/chat/completions which follows the pattern but consider if it should align more closely with LiteLLM's actual endpoint structure.

Please verify LiteLLM's expected endpoint structure:


🌐 Web query:

What is the standard endpoint path for LiteLLM chat completions? Does it use /litellm/chat/completions or a different pattern?

💡 Result:

The standard endpoint path for LiteLLM chat completions is /chat/completions. LiteLLM follows the OpenAI API format for its endpoints, making it compatible with existing OpenAI API integrations[1][5].

When using LiteLLM, you can access the chat completions functionality through this endpoint regardless of which underlying LLM provider you're using. This standardization is part of LiteLLM's core functionality, which allows you to "call 100+ LLMs using the OpenAI Input/Output Format"[6][10].

For OpenAI-compatible endpoints specifically, you would use the same /chat/completions path, but you need to add an "openai/" prefix to your model name so LiteLLM knows you're trying to call an OpenAI-compatible endpoint[2].

The consistent endpoint structure helps developers maintain a unified interface when working with multiple LLM providers. Responses from this endpoint will always have text responses available at ['choices']['message']['content'], ensuring consistency across different model providers[6][10].

When making requests to this endpoint, you would typically structure your code like this:

from litellm import completion

response = completion(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "hi",
        }
    ],
)

This endpoint is one of several supported by LiteLLM, including others like /embeddings, /completions, and /images[5].



🏁 Script executed:

#!/bin/bash
set -e
echo "Listing all router.go files in integrations:"
find transports/bifrost-http/integrations -name router.go

echo -e "\nExtracting Path definitions across integrations:"
rg -n 'Path:' transports/bifrost-http/integrations

Length of output: 943


No changes needed for LiteLLM endpoint path

LiteLLM’s standard chat-completions endpoint is /chat/completions. Prefixing it with litellm in our router (i.e. /litellm/chat/completions) matches how we handle other integrations (e.g. /openai/v1/chat/completions, /anthropic/v1/messages) and aligns with LiteLLM’s OpenAI-compatible interface.

Comment thread transports/bifrost-http/integrations/utils.go
Comment thread transports/bifrost-http/integrations/utils.go
Comment thread transports/bifrost-http/integrations/litellm/router.go Outdated
@Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from 7f96552 to 2186c4b on June 9, 2025, 20:27
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
transports/bifrost-http/integrations/utils.go (3)

217-253: Address provider detection order and pattern specificity issues.

This implementation has the same concerns raised in previous reviews that remain unaddressed:

  1. Provider order conflict: Azure models containing "gpt" will be incorrectly detected as OpenAI because the OpenAI check occurs before Azure check
  2. Generic Bedrock patterns: Patterns like "amazon.", "ai21.", "meta." are too broad and may cause false positives

The ordering needs to be corrected to check Azure before OpenAI, and Bedrock patterns should be more specific to avoid unintended matches.
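The ordering concern can be demonstrated with a small self-contained example. The substring checks here are illustrative, not the actual utils.go helpers; the point is only that a model name matching both pattern sets is claimed by whichever check runs first.

```go
package main

import (
	"fmt"
	"strings"
)

// detect shows why check order matters: an Azure deployment name that
// contains "gpt" satisfies both the OpenAI and Azure pattern checks, so
// running the OpenAI check first misclassifies it.
func detect(model string, azureFirst bool) string {
	m := strings.ToLower(model)
	isOpenAI := strings.Contains(m, "gpt")
	isAzure := strings.Contains(m, "azure")
	if azureFirst {
		if isAzure {
			return "azure"
		}
		if isOpenAI {
			return "openai"
		}
	} else {
		if isOpenAI {
			return "openai"
		}
		if isAzure {
			return "azure"
		}
	}
	return "openai" // fallback
}

func main() {
	model := "azure-gpt-4-deployment" // hypothetical deployment name
	fmt.Println("openai checked first:", detect(model, false))
	fmt.Println("azure checked first: ", detect(model, true))
}
```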


266-272: 🛠️ Refactor suggestion

Azure patterns remain too generic.

The Azure detection patterns are still insufficient as noted in previous reviews. "model-router" and "computer-use-preview" may not cover comprehensive Azure OpenAI model detection.

Consider adding more specific Azure patterns like "azure-openai", "azureopenai", and ".openai.azure.com" to improve detection accuracy.


294-299: 🛠️ Refactor suggestion

Bedrock patterns remain overly generic.

The Bedrock pattern specificity issue from previous reviews hasn't been resolved. Patterns like "amazon.", "ai21.", "meta." are too broad and could match non-Bedrock models.

Make patterns more specific (e.g., "amazon.titan", "ai21.j2", "meta.llama", "bedrock.amazonaws.com") to reduce false positives.
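The false-positive risk can be shown concretely. Both pattern lists below are illustrative (taken from the suggestion above, not the actual utils.go tables), and the model name is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// matchesAny reports whether any pattern occurs in the lowercased model name,
// mirroring substring-based provider detection.
func matchesAny(model string, patterns []string) bool {
	m := strings.ToLower(model)
	for _, p := range patterns {
		if strings.Contains(m, p) {
			return true
		}
	}
	return false
}

func main() {
	generic := []string{"amazon.", "ai21.", "meta."}
	specific := []string{"amazon.titan", "ai21.j2", "meta.llama"}

	model := "meta.translation-v2" // hypothetical non-Bedrock model
	fmt.Println("generic patterns match:", matchesAny(model, generic))
	fmt.Println("specific patterns match:", matchesAny(model, specific))
}
```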

transports/bifrost-http/integrations/litellm/router.go (1)

31-38: 🛠️ Refactor suggestion

Add defensive validation for type assertion.

The type assertion lacks safety validation as noted in previous reviews. While the generic router should ensure type safety, defensive programming is recommended.

Consider adding validation:

 requestConverter := func(req interface{}) *schemas.BifrostRequest {
-	if litellmReq, ok := req.(*openai.OpenAIChatRequest); ok {
+	litellmReq, ok := req.(*openai.OpenAIChatRequest)
+	if !ok {
+		return nil
+	}
+	if litellmReq != nil {
 		bifrostReq := litellmReq.ConvertToBifrostRequest()
 		bifrostReq.Provider = integrations.GetProviderFromModel(litellmReq.Model)
 		return bifrostReq
+	}
+	return nil
-	}
-	return nil
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f96552 and 2186c4b.

📒 Files selected for processing (3)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/utils.go (2 hunks)
  • transports/bifrost-http/main.go (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
transports/bifrost-http/main.go (1)
transports/bifrost-http/integrations/litellm/router.go (1)
  • NewLiteLLMRouter (21-68)
transports/bifrost-http/integrations/utils.go (1)
core/schemas/bifrost.go (7)
  • ModelProvider (31-31)
  • OpenAI (34-34)
  • Azure (35-35)
  • Anthropic (36-36)
  • Vertex (39-39)
  • Bedrock (37-37)
  • Cohere (38-38)
🔇 Additional comments (2)
transports/bifrost-http/main.go (1)

50-50: LGTM! Clean integration of LiteLLM router.

The changes properly integrate the new LiteLLM router following the established patterns:

  • Import is correctly placed alongside other integration packages
  • Router registration matches the existing pattern for other AI providers
  • 404 handler enhancement provides better debugging information

Also applies to: 202-202, 223-223

transports/bifrost-http/integrations/litellm/router.go (1)

21-68: Well-structured LiteLLM router implementation.

The router implementation follows good patterns:

  • Reuses existing OpenAI types for compatibility
  • Properly configures routes with appropriate prefixes
  • Uses the new provider detection utility for automatic routing
  • Handles multiple response formats based on detected provider

Comment thread transports/bifrost-http/integrations/litellm/router.go Outdated
@Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from 2186c4b to a91771a on June 10, 2025, 06:57
@Pratham-Mishra04 force-pushed the 06-09-feat_anthropic_integration_added_in_http_transport branch from 2a21331 to 6e5e529 on June 10, 2025, 06:57
@Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from a91771a to 1bf95d0 on June 10, 2025, 07:50
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (5)
transports/bifrost-http/integrations/litellm/router.go (2)

97-127: 🧹 Nitpick (assertive)

Add nil check for type assertion safety.

While the pre-hook should ensure ActualRequest is populated, adding defensive validation would prevent potential panics.

Consider adding validation before the type switch:

 		if wrapper.ActualRequest == nil {
 			return nil, errors.New("request was not properly processed by pre-hook")
 		}
+
+		// Defensive check to ensure ActualRequest is valid
+		if wrapper.ActualRequest == nil {
+			return nil, errors.New("internal error: actual request is nil")
+		}

129-140: 🧹 Nitpick (assertive)

Consider explicit handling for all supported providers.

The response converter only handles OpenAI/Azure, Anthropic, and Vertex, falling back to raw BifrostResponse for others. This could cause format inconsistencies for Bedrock and Cohere providers detected by GetProviderFromModel.

Consider adding explicit cases or documenting the fallback behavior:

 		case schemas.Vertex:
 			return genai.DeriveGenAIFromBifrostResponse(resp), nil
+		case schemas.Bedrock, schemas.Cohere:
+			// TODO: Implement provider-specific response derivation
+			// Fallback to raw response format for now
+			return resp, nil
 		default:
 			return resp, nil
transports/bifrost-http/integrations/utils.go (3)

225-261: ⚠️ Potential issue

Reorder provider detection to prevent false positives.

Azure models often contain "gpt" patterns which would match OpenAI first. Move Azure detection before OpenAI to ensure correct provider identification.

Apply this diff to fix the detection order:

 	// Normalize model name for case-insensitive matching
 	modelLower := strings.ToLower(strings.TrimSpace(model))
 
-	// OpenAI Models - comprehensive pattern matching
-	if isOpenAIModel(modelLower) {
-		return schemas.OpenAI
-	}
-
 	// Azure OpenAI Models - specific Azure patterns
 	if isAzureModel(modelLower) {
 		return schemas.Azure
 	}
 
+	// OpenAI Models - comprehensive pattern matching
+	if isOpenAIModel(modelLower) {
+		return schemas.OpenAI
+	}
+
 	// Anthropic Models - Claude family
 	if isAnthropicModel(modelLower) {

273-280: 🛠️ Refactor suggestion

Enhance Azure pattern detection.

The current Azure patterns are too generic and might miss common Azure OpenAI deployments.

Apply this diff to improve Azure model detection:

 func isAzureModel(model string) bool {
 	azurePatterns := []string{
-		"azure", "model-router", "computer-use-preview",
+		"azure", "model-router", "computer-use-preview",
+		"azure-openai", "azureopenai", ".openai.azure.com",
+		"azure/", "azure-", "deployment-",
 	}

300-307: 🛠️ Refactor suggestion

Refine Bedrock patterns to reduce false positives.

Generic patterns like "amazon.", "ai21.", "meta." could match non-Bedrock models and cause incorrect provider routing.

Apply this diff to make patterns more specific:

 func isBedrockModel(model string) bool {
 	bedrockPatterns := []string{
-		"bedrock", "amazon.", "ai21.", "meta.", "stability.", "titan", "llama",
+		"bedrock", "amazon.titan", "amazon-titan", 
+		"ai21.j2", "ai21-j2", "meta.llama", "meta-llama",
+		"stability.stable", "anthropic.claude-bedrock",
+		"cohere.command-bedrock", "mistral.mistral-bedrock",
 	}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a91771a and 1bf95d0.

⛔ Files ignored due to path filters (1)
  • transports/go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
  • transports/bifrost-http/integrations/anthropic/router.go (2 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (2 hunks)
  • transports/bifrost-http/integrations/genai/router.go (2 hunks)
  • transports/bifrost-http/integrations/genai/types.go (3 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (2 hunks)
  • transports/bifrost-http/integrations/utils.go (8 hunks)
  • transports/bifrost-http/main.go (3 hunks)
  • transports/go.mod (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
transports/bifrost-http/integrations/anthropic/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#64
File: transports/bifrost-http/integrations/genai/types.go:383-395
Timestamp: 2025-06-09T16:26:05.777Z
Learning: In the Bifrost schema (core/schemas/bifrost.go), only UserMessage and ToolMessage structs have ImageContent fields. AssistantMessage does not have an ImageContent field and cannot contain images. The schema design intentionally limits image content to user and tool messages only.
transports/bifrost-http/integrations/utils.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#65
File: transports/bifrost-http/integrations/utils.go:169-173
Timestamp: 2025-06-09T17:33:52.234Z
Learning: The ChatCompletionRequest method in the Bifrost client follows a contract where the result parameter will never be nil if the error parameter is nil. This means when error checking passes (err == nil), the result is guaranteed to be valid and can be safely used without additional nil checks.
🧬 Code Graph Analysis (4)
transports/bifrost-http/integrations/anthropic/types.go (1)
core/schemas/bifrost.go (1)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/genai/types.go (1)
core/schemas/bifrost.go (3)
  • ImageContentTypeBase64 (176-176)
  • ImageContent (181-186)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/openai/router.go (3)
transports/bifrost-http/integrations/utils.go (2)
  • RequestConverter (23-23)
  • ResponseConverter (27-27)
core/schemas/bifrost.go (2)
  • BifrostRequest (54-64)
  • BifrostResponse (191-201)
transports/bifrost-http/integrations/openai/types.go (2)
  • OpenAIChatRequest (36-57)
  • DeriveOpenAIFromBifrostResponse (455-515)
transports/bifrost-http/integrations/utils.go (2)
core/schemas/bifrost.go (9)
  • BifrostRequest (54-64)
  • BifrostResponse (191-201)
  • ModelProvider (31-31)
  • OpenAI (34-34)
  • Azure (35-35)
  • Anthropic (36-36)
  • Vertex (39-39)
  • Bedrock (37-37)
  • Cohere (38-38)
core/bifrost.go (1)
  • ChatCompletionRequest (24-24)
🔇 Additional comments (11)
transports/go.mod (1)

7-7: ✅ Verification successful

Verify the core dependency version update.

The core dependency was bumped from v1.0.9 to v1.0.10. Please ensure this version exists and contains the necessary schema changes for the LiteLLM integration.

Run the following script to verify the version exists and check for any security advisories:


🏁 Script executed:

#!/bin/bash
# Description: Verify the core dependency version and check for security issues

# Check if the version exists on GitHub
curl -s https://api.github.com/repos/maximhq/bifrost/releases/tag/core/v1.0.10 | jq '.tag_name'

# Check the latest available version
curl -s https://api.github.com/repos/maximhq/bifrost/releases/latest | jq '.tag_name'

Length of output: 191


Verify Git tag for the bumped version

The GitHub Releases API didn’t return a release for core/v1.0.10. Let’s check the raw tags list to confirm whether the Go module tag actually exists:

Please share the output to confirm the tag’s presence before merging.


🏁 Script executed:

#!/bin/bash
# Check if a Git tag for core/v1.0.10 (or v1.0.10) is present
curl -s https://api.github.com/repos/maximhq/bifrost/git/refs/tags \
  | jq -r '.[].ref' \
  | grep -E 'refs/tags/(core/)?v1\.0\.10' \
  || echo "Tag not found"

Length of output: 149


Check for security advisories on the bifrost repo

Let’s verify that no known vulnerabilities have been reported against this module version:

Please share the output to confirm before merging.


🏁 Script executed:

#!/bin/bash
# Description: List any GitHub Security Advisories for this repository
curl -s \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/maximhq/bifrost/security/advisories \
  | jq 'if type=="array" and length==0 then "No advisories found" else . end'

Length of output: 297


Verified: core/v1.0.10 exists and no known security advisories

  • The Git tag core/v1.0.10 is present in the repository.
  • A check against GitHub’s Security Advisories API returned no advisories for this module.

You can safely proceed with the dependency bump.

transports/bifrost-http/integrations/anthropic/types.go (1)

139-139: LGTM! Standardized image content type assignment.

The changes replace local string pointers with the predefined constant schemas.ImageContentTypeURL, improving type safety and consistency across the codebase. This aligns well with the schema design and similar changes in other integrations.

Also applies to: 172-172

transports/bifrost-http/main.go (2)

223-223: LGTM! Improved 404 error message.

The updated 404 handler now includes the requested path in the response, which provides better debugging information for developers when routes are not found.


50-50: ✅ Verification successful

Verify the LiteLLM integration implementation.

The LiteLLM router is properly imported and registered with the extensions. This follows the same pattern as other provider integrations.

Run the following script to verify the LiteLLM package exists and is properly implemented:

Also applies to: 202-202


🏁 Script executed:

#!/bin/bash
# Description: Verify LiteLLM integration package exists and implements required interfaces

# Check if the litellm package exists
fd "router.go" transports/bifrost-http/integrations/litellm --exec cat {}

# Verify it implements ExtensionRouter interface
ast-grep --pattern 'func NewLiteLLMRouter($_) *LiteLLMRouter {
  $$$
}'

Length of output: 16011


Approve LiteLLM integration and 404 handler update

Verified that the transports/bifrost-http/integrations/litellm/router.go file exists and that NewLiteLLMRouter correctly implements the ExtensionRouter pattern, registering the /litellm prefixed routes with the shared GenericRouter. The 404 handler enhancement is also appropriate for improved debugging.

Points confirmed:

  • transports/bifrost-http/integrations/litellm/router.go defines NewLiteLLMRouter(client *bifrost.Bifrost) *LiteLLMRouter
  • All required routes (/litellm/chat/completions, /litellm/v1/messages) are configured with pre-hook, request/response converters
  • Matching pattern confirmed by ast-grep and file presence via fd
  • 404 handler update (line 223) includes the requested path for clearer error messages
transports/bifrost-http/integrations/genai/types.go (2)

175-175: LGTM! Standardized image content type constants.

The changes replace local string assignments with predefined constants schemas.ImageContentTypeBase64 and schemas.ImageContentTypeURL, improving type safety and consistency.

Also applies to: 184-184


464-464: LGTM! Simplified image content type comparisons.

The comparison logic is improved by using constants directly instead of pointer dereferencing, making the code more readable and removing unnecessary nil checks.

Also applies to: 496-496
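The constant-based pattern described in the two comments above can be sketched as follows. The type and constant names are hypothetical stand-ins mirroring `schemas.ImageContentTypeBase64`/`schemas.ImageContentTypeURL`, not the real definitions in `core/schemas/bifrost.go`:

```go
package main

import "fmt"

// Hypothetical stand-ins for the schema constants; the real ones live in
// core/schemas/bifrost.go.
type ImageContentType string

const (
	ImageContentTypeBase64 ImageContentType = "base64"
	ImageContentTypeURL    ImageContentType = "url"
)

type ImageContent struct {
	Type ImageContentType
}

func main() {
	img := ImageContent{Type: ImageContentTypeURL}
	// Direct comparison with a constant: no pointer dereference, no nil check.
	fmt.Println(img.Type == ImageContentTypeURL)
}
```

Because `Type` is a plain value rather than a `*string`, the comparison can never panic on a nil pointer, which is the robustness gain the review notes.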

transports/bifrost-http/integrations/openai/router.go (2)

4-4: LGTM! Added errors package for enhanced error handling.

The errors package import supports the new error-returning function signatures, improving robustness of the conversion pipeline.


26-34: LGTM! Enhanced error handling in converter functions.

The function signatures are updated to return errors explicitly:

  • RequestConverter now returns (*schemas.BifrostRequest, error)
  • ResponseConverter renamed from ResponseFunc and returns (interface{}, error)

This provides better error propagation and aligns with the updated function types in the utils package. The error handling is appropriate - invalid request types return an error while successful conversions return nil.
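The error-returning converter shape described above can be sketched like this. The struct fields and the `convertOpenAI` implementation are illustrative assumptions; only the signature pattern — `(*BifrostRequest, error)` — reflects the change under review:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical slimmed-down request types; the real ones carry many more fields.
type BifrostRequest struct{ Model string }
type OpenAIChatRequest struct{ Model string }

// RequestConverter mirrors the updated signature: it returns an explicit
// error instead of a bare pointer, so the generic router can surface a
// meaningful 400 Bad Request.
type RequestConverter func(req interface{}) (*BifrostRequest, error)

var convertOpenAI RequestConverter = func(req interface{}) (*BifrostRequest, error) {
	oaReq, ok := req.(*OpenAIChatRequest)
	if !ok {
		// Invalid request types now produce an error instead of a silent nil.
		return nil, errors.New("invalid request type")
	}
	return &BifrostRequest{Model: oaReq.Model}, nil
}

func main() {
	if _, err := convertOpenAI("not a request"); err != nil {
		fmt.Println("conversion error:", err)
	}
	if r, err := convertOpenAI(&OpenAIChatRequest{Model: "gpt-4o"}); err == nil {
		fmt.Println("converted model:", r.Model)
	}
}
```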

transports/bifrost-http/integrations/genai/router.go (1)

28-36: LGTM!

The error handling implementation is correct and follows the new converter interface pattern consistently.

transports/bifrost-http/integrations/anthropic/router.go (1)

26-34: LGTM!

The error handling implementation is correct and consistent with the updated converter interface.

transports/bifrost-http/integrations/utils.go (1)

146-154:

Details

✅ Verification successful

Verify error handling doesn't mask provider errors.

The updated error handling correctly propagates conversion errors. The Bifrost error handling also properly distinguishes between Bifrost internal errors and provider errors.

Let me verify that provider-specific error information is preserved:

Also applies to: 162-171, 187-192


🏁 Script executed:

#!/bin/bash
# Description: Check if provider errors are properly handled in the codebase

# Search for BifrostError structure and usage
ast-grep --pattern 'type BifrostError struct {
  $$$
}'

# Search for error handling in provider integrations
rg -A 5 "IsBifrostError" --type go

Length of output: 19272


Error handling verified: conversion errors and provider errors correctly distinguished

I confirmed that:

  • config.RequestConverter errors are surfaced as 400 Bad Request.
  • The BifrostError struct (in core/schemas/bifrost.go) includes the original provider error in ErrorField and an IsBifrostError flag.
  • Both transports/bifrost-http/main.go and transports/bifrost-http/integrations/utils.go inspect bifrostErr.IsBifrostError to set 500 Internal Server Error for internal/provider failures and 400 Bad Request otherwise.

No further changes needed.
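The verified status-code mapping can be sketched minimally as below. The struct here is a hypothetical reduction of the real `BifrostError` in `core/schemas/bifrost.go`, keeping only the flag the handlers inspect:

```go
package main

import "fmt"

// Hypothetical slimmed-down BifrostError; the real struct also carries the
// original provider error in ErrorField.
type BifrostError struct {
	IsBifrostError bool
	Message        string
}

// statusFor mirrors the verified behaviour: internal Bifrost/provider
// failures map to 500, everything else to 400.
func statusFor(e *BifrostError) int {
	if e.IsBifrostError {
		return 500
	}
	return 400
}

func main() {
	fmt.Println(statusFor(&BifrostError{IsBifrostError: true, Message: "provider failure"}))
	fmt.Println(statusFor(&BifrostError{IsBifrostError: false, Message: "bad request body"}))
}
```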

Comment thread transports/bifrost-http/integrations/litellm/router.go
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-09-feat_anthropic_integration_added_in_http_transport branch from 6e5e529 to e86af61 Compare June 10, 2025 10:06
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from 1bf95d0 to 794ac99 Compare June 10, 2025 10:06
Copy link
Copy Markdown
Contributor

@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

♻️ Duplicate comments (4)
transports/bifrost-http/integrations/litellm/router.go (1)

43-47: ⚠️ Potential issue

Azure requests are rejected due to missing entry in availableProviders

GetProviderFromModel may legitimately return schemas.Azure, but the validation slice omits it, causing the pre-hook to fail with “unsupported provider”.

 availableProviders := []schemas.ModelProvider{
 	schemas.OpenAI,
 	schemas.Anthropic,
 	schemas.Vertex,
+	schemas.Azure,
 }
transports/bifrost-http/integrations/utils.go (3)

274-280: Azure pattern list still quite generic – previous comment applies.


301-305: Bedrock patterns overly broad – previous comment applies.


229-237: ⚠️ Potential issue

Azure models mis-classified as OpenAI – check order

Because many Azure deployment names contain “gpt”, the OpenAI check fires first and the function returns schemas.OpenAI, never reaching the Azure block. Move the Azure test above the OpenAI test to restore accuracy.

-	// OpenAI Models
-	if isOpenAIModel(modelLower) {
-		return schemas.OpenAI
-	}
-
-	// Azure OpenAI Models
-	if isAzureModel(modelLower) {
-		return schemas.Azure
-	}
+	// Azure OpenAI deployments (detect before generic GPT patterns)
+	if isAzureModel(modelLower) {
+		return schemas.Azure
+	}
+
+	// OpenAI Models
+	if isOpenAIModel(modelLower) {
+		return schemas.OpenAI
+	}
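Why the ordering matters can be shown with a minimal sketch. The pattern lists are illustrative, not the real ones from `utils.go`:

```go
package main

import (
	"fmt"
	"strings"
)

// Minimal sketch of order-sensitive provider detection. An Azure deployment
// name like "azure/gpt-4o" also contains "gpt", so the Azure check must run
// before the OpenAI check or the model is misclassified.
func detectProvider(model string) string {
	m := strings.ToLower(model)
	if strings.Contains(m, "azure") {
		return "azure"
	}
	if strings.Contains(m, "gpt") {
		return "openai"
	}
	if strings.Contains(m, "claude") {
		return "anthropic"
	}
	return "unknown"
}

func main() {
	fmt.Println(detectProvider("azure/gpt-4o")) // azure, not openai
	fmt.Println(detectProvider("gpt-4o"))
	fmt.Println(detectProvider("claude-3-opus"))
}
```

Swapping the first two checks makes `detectProvider("azure/gpt-4o")` return `"openai"`, which is exactly the false positive the review flags.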
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1bf95d0 and 794ac99.

⛔ Files ignored due to path filters (1)
  • transports/go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
  • transports/bifrost-http/integrations/anthropic/router.go (2 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (2 hunks)
  • transports/bifrost-http/integrations/genai/router.go (2 hunks)
  • transports/bifrost-http/integrations/genai/types.go (3 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (2 hunks)
  • transports/bifrost-http/integrations/utils.go (8 hunks)
  • transports/bifrost-http/main.go (3 hunks)
  • transports/go.mod (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
transports/bifrost-http/integrations/anthropic/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#64
File: transports/bifrost-http/integrations/genai/types.go:383-395
Timestamp: 2025-06-09T16:26:05.777Z
Learning: In the Bifrost schema (core/schemas/bifrost.go), only UserMessage and ToolMessage structs have ImageContent fields. AssistantMessage does not have an ImageContent field and cannot contain images. The schema design intentionally limits image content to user and tool messages only.
transports/bifrost-http/integrations/utils.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#65
File: transports/bifrost-http/integrations/utils.go:169-173
Timestamp: 2025-06-09T17:33:52.234Z
Learning: The ChatCompletionRequest method in the Bifrost client follows a contract where the result parameter will never be nil if the error parameter is nil. This means when error checking passes (err == nil), the result is guaranteed to be valid and can be safely used without additional nil checks.
🧬 Code Graph Analysis (5)
transports/bifrost-http/integrations/anthropic/types.go (1)
core/schemas/bifrost.go (1)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/main.go (1)
transports/bifrost-http/integrations/litellm/router.go (1)
  • NewLiteLLMRouter (33-157)
transports/bifrost-http/integrations/genai/types.go (1)
core/schemas/bifrost.go (3)
  • ImageContentTypeBase64 (176-176)
  • ImageContent (181-186)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/genai/router.go (3)
transports/bifrost-http/integrations/utils.go (2)
  • RequestConverter (23-23)
  • ResponseConverter (27-27)
core/schemas/bifrost.go (2)
  • BifrostRequest (54-64)
  • BifrostResponse (191-201)
transports/bifrost-http/integrations/genai/types.go (2)
  • GeminiChatRequest (28-39)
  • DeriveGenAIFromBifrostResponse (340-459)
transports/bifrost-http/integrations/utils.go (1)
core/schemas/bifrost.go (9)
  • BifrostRequest (54-64)
  • BifrostResponse (191-201)
  • ModelProvider (31-31)
  • OpenAI (34-34)
  • Azure (35-35)
  • Anthropic (36-36)
  • Vertex (39-39)
  • Bedrock (37-37)
  • Cohere (38-38)
🔇 Additional comments (11)
transports/go.mod (1)

7-7: LGTM! Dependency update aligns with new features.

The core dependency bump to v1.0.10 is appropriate to support the LiteLLM integration and other enhancements introduced in this PR.

transports/bifrost-http/integrations/anthropic/types.go (1)

139-139: LGTM! Image content type standardization improves consistency.

The change from using string pointers (&content.Source.Type) to predefined constants (schemas.ImageContentTypeURL) standardizes image content type handling across integrations and improves type safety.

Also applies to: 172-172

transports/bifrost-http/main.go (3)

50-50: LGTM! LiteLLM integration import added correctly.

The import follows the established pattern for other integrations.


202-202: LGTM! LiteLLM router registered with extensions.

The router is added to the extensions list following the same pattern as other provider integrations (genai, openai, anthropic).


223-223: LGTM! Enhanced 404 handler provides better debugging information.

Including the requested path in the 404 response improves debugging and troubleshooting capabilities.

transports/bifrost-http/integrations/genai/types.go (2)

175-175: LGTM! Image content type assignment standardized.

The change from creating local string variables to using predefined constants (schemas.ImageContentTypeBase64, schemas.ImageContentTypeURL) improves consistency and type safety across integrations.

Also applies to: 184-184


468-468: LGTM! Image content type comparison logic simplified.

Direct comparison with constants eliminates the need for pointer dereferencing and nil checks, making the code cleaner and more robust.

Also applies to: 504-504

transports/bifrost-http/integrations/openai/router.go (3)

4-4: LGTM! Errors package import added for enhanced error handling.

The import supports the updated function signatures that now return explicit errors.


26-30: LGTM! Request converter updated with explicit error handling.

The function signature now returns (*schemas.BifrostRequest, error) which enables proper error reporting for invalid request types, improving robustness.


32-34: LGTM! Response converter updated with explicit error handling.

The function signature now returns (interface{}, error) which maintains consistency with the updated error handling pattern across integrations.

transports/bifrost-http/integrations/litellm/router.go (1)

75-83: Unreachable schemas.Azure branch without the fix above

The switch already handles schemas.Azure, but the earlier validation prevents it from ever being reached. Once the slice is fixed the branch will work as intended.
No action is needed beyond the previous change; this just highlights the tight coupling between the validation slice and the switch.

Comment thread transports/bifrost-http/integrations/genai/router.go
Comment thread transports/bifrost-http/integrations/anthropic/router.go
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-09-feat_anthropic_integration_added_in_http_transport branch from e86af61 to c24d907 Compare June 10, 2025 10:25
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from 794ac99 to df025e2 Compare June 10, 2025 10:25

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
transports/bifrost-http/integrations/genai/router.go (1)

28-33: ⚠️ Potential issue

Nil-safety still missing after ConvertToBifrostRequest
Prior feedback flagged the risk of returning a nil pointer from ConvertToBifrostRequest, which would panic downstream. The current code still bypasses that check.

-					return geminiReq.ConvertToBifrostRequest(), nil
+					bReq := geminiReq.ConvertToBifrostRequest()
+					if bReq == nil {
+						return nil, errors.New("failed to convert Gemini request to Bifrost format")
+					}
+					return bReq, nil
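The defensive pattern the diff applies can be sketched in isolation. `convertToBifrostRequest` here is a hypothetical stand-in for the integration's `ConvertToBifrostRequest` method:

```go
package main

import (
	"errors"
	"fmt"
)

type BifrostRequest struct{ Model string }

// Hypothetical converter that can return nil (e.g. for an empty model).
func convertToBifrostRequest(model string) *BifrostRequest {
	if model == "" {
		return nil
	}
	return &BifrostRequest{Model: model}
}

// safeConvert turns a nil conversion result into an explicit error instead
// of letting a nil pointer flow downstream and panic later.
func safeConvert(model string) (*BifrostRequest, error) {
	bReq := convertToBifrostRequest(model)
	if bReq == nil {
		return nil, errors.New("failed to convert request to Bifrost format")
	}
	return bReq, nil
}

func main() {
	if _, err := safeConvert(""); err != nil {
		fmt.Println("caught:", err)
	}
	if r, err := safeConvert("gemini-1.5-pro"); err == nil {
		fmt.Println("ok:", r.Model)
	}
}
```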
transports/bifrost-http/integrations/anthropic/router.go (1)

26-33: ⚠️ Potential issue

Repeat of the nil-safety issue
Same concern as in GenAI: ConvertToBifrostRequest() can return nil, yet no check is performed.

-					return anthropicReq.ConvertToBifrostRequest(), nil
+					bReq := anthropicReq.ConvertToBifrostRequest()
+					if bReq == nil {
+						return nil, errors.New("failed to convert Anthropic request to Bifrost format")
+					}
+					return bReq, nil
transports/bifrost-http/integrations/litellm/router.go (1)

43-48: ⚠️ Potential issue

Azure provider missing from availableProviders slice
preHook rejects Azure models as “unsupported” even though later logic (request/response converters) handles them.

 availableProviders := []schemas.ModelProvider{
 	schemas.OpenAI,
 	schemas.Anthropic,
 	schemas.Vertex,
+	schemas.Azure,
 }

Without this, every Azure deployment matched by GetProviderFromModel will hard-fail.
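The allow-list check the pre-hook performs can be sketched as follows; the provider constants are hypothetical stand-ins for the `schemas.ModelProvider` values:

```go
package main

import "fmt"

// Hypothetical stand-ins for schemas.ModelProvider and its constants.
type ModelProvider string

const (
	OpenAI    ModelProvider = "openai"
	Azure     ModelProvider = "azure"
	Anthropic ModelProvider = "anthropic"
	Vertex    ModelProvider = "vertex"
)

// Every provider the request/response converters can handle must appear
// here, or the pre-hook rejects valid requests before conversion runs.
var availableProviders = []ModelProvider{OpenAI, Azure, Anthropic, Vertex}

func isSupported(p ModelProvider) bool {
	for _, ap := range availableProviders {
		if ap == p {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isSupported(Azure))                   // true once Azure is listed
	fmt.Println(isSupported(ModelProvider("cohere"))) // false: not in the slice
}
```

The bug class is simply a slice and a switch that must be kept in sync by hand; omitting one entry silently disables a provider the rest of the code supports.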

transports/bifrost-http/integrations/utils.go (1)

235-244: 🛠️ Refactor suggestion

Azure detection should precede OpenAI to avoid false positives
Azure deployment names often contain “gpt-*”; with the current order they are classified as openai, breaking downstream handling. Re-order the checks:

-	// OpenAI Models
-	if isOpenAIModel(modelLower) {
-		return schemas.OpenAI
-	}
-
-	// Azure OpenAI Models
-	if isAzureModel(modelLower) {
-		return schemas.Azure
-	}
+	// Azure OpenAI deployments (detect first to avoid "gpt-*" collision)
+	if isAzureModel(modelLower) {
+		return schemas.Azure
+	}
+
+	// OpenAI Models
+	if isOpenAIModel(modelLower) {
+		return schemas.OpenAI
+	}

Reduces misclassification risk for Azure customers.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 794ac99 and df025e2.

⛔ Files ignored due to path filters (1)
  • transports/go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
  • transports/bifrost-http/integrations/anthropic/router.go (2 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (2 hunks)
  • transports/bifrost-http/integrations/genai/router.go (2 hunks)
  • transports/bifrost-http/integrations/genai/types.go (3 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (2 hunks)
  • transports/bifrost-http/integrations/utils.go (8 hunks)
  • transports/bifrost-http/main.go (3 hunks)
  • transports/go.mod (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
transports/bifrost-http/integrations/anthropic/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#64
File: transports/bifrost-http/integrations/genai/types.go:383-395
Timestamp: 2025-06-09T16:26:05.777Z
Learning: In the Bifrost schema (core/schemas/bifrost.go), only UserMessage and ToolMessage structs have ImageContent fields. AssistantMessage does not have an ImageContent field and cannot contain images. The schema design intentionally limits image content to user and tool messages only.
transports/bifrost-http/integrations/utils.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#65
File: transports/bifrost-http/integrations/utils.go:169-173
Timestamp: 2025-06-09T17:33:52.234Z
Learning: The ChatCompletionRequest method in the Bifrost client follows a contract where the result parameter will never be nil if the error parameter is nil. This means when error checking passes (err == nil), the result is guaranteed to be valid and can be safely used without additional nil checks.
🧬 Code Graph Analysis (5)
transports/bifrost-http/main.go (1)
transports/bifrost-http/integrations/litellm/router.go (1)
  • NewLiteLLMRouter (33-157)
transports/bifrost-http/integrations/anthropic/types.go (1)
core/schemas/bifrost.go (1)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/genai/types.go (1)
core/schemas/bifrost.go (3)
  • ImageContentTypeBase64 (176-176)
  • ImageContent (181-186)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/anthropic/router.go (2)
transports/bifrost-http/integrations/utils.go (2)
  • RequestConverter (23-23)
  • ResponseConverter (27-27)
transports/bifrost-http/integrations/anthropic/types.go (2)
  • AnthropicMessageRequest (90-102)
  • DeriveAnthropicFromBifrostResponse (325-400)
transports/bifrost-http/integrations/utils.go (2)
core/schemas/bifrost.go (9)
  • BifrostRequest (54-64)
  • BifrostResponse (191-201)
  • ModelProvider (31-31)
  • OpenAI (34-34)
  • Azure (35-35)
  • Anthropic (36-36)
  • Vertex (39-39)
  • Bedrock (37-37)
  • Cohere (38-38)
core/bifrost.go (1)
  • ChatCompletionRequest (24-24)
🔇 Additional comments (7)
transports/go.mod (1)

7-7: LGTM! Dependency update supports new features.

The core dependency update from v1.0.9 to v1.0.10 aligns with the new LiteLLM integration and enhanced error handling across the transport layer.

transports/bifrost-http/integrations/anthropic/types.go (1)

139-139: Excellent use of constants for type safety.

Replacing string pointers with the predefined schemas.ImageContentTypeURL constant improves type safety and consistency across the codebase. This eliminates potential nil pointer issues and ensures standardized image content type values.

Also applies to: 172-172

transports/bifrost-http/main.go (2)

50-50: LiteLLM integration properly implemented.

The LiteLLM router integration follows the established pattern used by other provider integrations and correctly implements the multi-provider functionality described in the PR objectives.

Also applies to: 202-202


223-223: Improved 404 handler with better debugging information.

Including the requested path in the 404 response provides valuable debugging information for developers, making it easier to identify incorrect endpoints.

transports/bifrost-http/integrations/genai/types.go (2)

175-175: Consistent use of image content type constants.

Using schemas.ImageContentTypeBase64 and schemas.ImageContentTypeURL constants ensures type safety and consistency with the pattern established across other integrations.

Also applies to: 184-184


468-468: Proper constant usage in comparison operations.

The comparisons correctly use the same predefined constants, maintaining consistency between assignment and comparison operations throughout the codebase.

Also applies to: 504-504

transports/bifrost-http/integrations/openai/router.go (1)

4-4: Enhanced error handling with updated function signatures.

The addition of error returns to both RequestConverter and ResponseConverter functions improves error handling consistency across all router integrations. This standardization enables better error propagation and debugging capabilities.

Also applies to: 26-34

Comment thread transports/bifrost-http/integrations/utils.go
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from df025e2 to da036a4 Compare June 10, 2025 11:04
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-09-feat_anthropic_integration_added_in_http_transport branch 2 times, most recently from 58bf3b3 to e186e57 Compare June 10, 2025 11:09
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from da036a4 to 1993787 Compare June 10, 2025 11:09

@coderabbitai coderabbitai Bot left a comment


Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 4

♻️ Duplicate comments (5)
transports/bifrost-http/integrations/genai/router.go (1)

28-33: ⚠️ Potential issue

Still missing nil-safety on ConvertToBifrostRequest return
Prior review already requested a nil-check to prevent panics. The converter still returns the pointer unvalidated.

-	return geminiReq.ConvertToBifrostRequest(), nil
+	bReq := geminiReq.ConvertToBifrostRequest()
+	if bReq == nil {
+		return nil, errors.New("failed to convert Gemini request to Bifrost format")
+	}
+	return bReq, nil
transports/bifrost-http/integrations/anthropic/router.go (1)

26-31: ⚠️ Potential issue

Nil-check omission identical to previous feedback
ConvertToBifrostRequest() may return nil; returning it unchecked can crash downstream. Same fix applies as for GenAI.

transports/bifrost-http/integrations/litellm/router.go (1)

43-48: ⚠️ Potential issue

Azure provider excluded → every Azure request fails pre-hook
availableProviders omits schemas.Azure, yet GetProviderFromModel and later switch blocks explicitly support Azure. The pre-hook rejects valid Azure requests as “unsupported”.

-	availableProviders := []schemas.ModelProvider{
-		schemas.OpenAI,
-		schemas.Anthropic,
-		schemas.Vertex,
-	}
+	availableProviders := []schemas.ModelProvider{
+		schemas.OpenAI,
+		schemas.Azure,
+		schemas.Anthropic,
+		schemas.Vertex,
+	}
transports/bifrost-http/integrations/utils.go (2)

162-170: ⚠️ Potential issue

Wrong variable passed to sendError hides real Bifrost error

err is always nil at this point; the call should pass bifrostErr instead.

-			}(), err)
+			}(), bifrostErr)
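The bug class behind this diff is two error variables in scope with the stale one forwarded. A minimal sketch, with hypothetical helpers (`parseBody`, `doRequest`, `sendError` are not the real Bifrost functions):

```go
package main

import (
	"errors"
	"fmt"
)

func parseBody() (string, error)            { return "{}", nil }
func doRequest(body string) (string, error) { return "", errors.New("provider timeout") }

func sendError(e error) string {
	if e == nil {
		return "error: <nil>" // what the client sees when the wrong variable is passed
	}
	return "error: " + e.Error()
}

// handle shows why forwarding the wrong variable hides the real failure:
// by the time doRequest fails, err (from the parse stage) is nil.
func handle() string {
	body, err := parseBody()
	if err != nil {
		return sendError(err)
	}
	result, bifrostErr := doRequest(body)
	if bifrostErr != nil {
		// BUG would be: return sendError(err) — err is nil here.
		return sendError(bifrostErr) // fix: forward the populated error
	}
	return result
}

func main() {
	fmt.Println(handle())
}
```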

235-243: 🛠️ Refactor suggestion

Provider detection order causes Azure misclassification

OpenAI check precedes Azure; many Azure deployment names contain “gpt” and will be mistaken for OpenAI.

-	if isOpenAIModel(modelLower) {
-		return schemas.OpenAI
-	}
-
-	// Azure OpenAI Models
-	if isAzureModel(modelLower) {
+	if isAzureModel(modelLower) {
+		return schemas.Azure
+	}
+
+	// OpenAI Models
+	if isOpenAIModel(modelLower) {
 		return schemas.OpenAI
 	}
🛑 Comments failed to post (4)
transports/bifrost-http/integrations/litellm/router.go (2)

129-139: 🧹 Nitpick (assertive)

Default response can silently break LiteLLM clients
Unrecognised providers fall back to raw BifrostResponse, which is not OpenAI-compatible. At minimum, log a warning; ideally handle Bedrock/Cohere explicitly.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/router.go around lines 129 to
139, the default case in the responseConverter function returns the raw
BifrostResponse without compatibility, which can break LiteLLM clients silently.
Modify the default case to log a warning about the unrecognized provider and
consider adding explicit handling for Bedrock and Cohere providers to ensure
compatibility. This will prevent silent failures and improve client robustness.

108-126: 🛠️ Refactor suggestion

Missing nil-check on per-provider ConvertToBifrostRequest results
If any provider implementation returns nil, the server will panic later. Add defensive checks after each conversion.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/litellm/router.go around lines 108 to
126, after calling ConvertToBifrostRequest for each provider type, add a nil
check on the returned bifrostReq. If bifrostReq is nil, return an error
indicating the conversion failed to prevent a later panic. This defensive check
ensures the server handles nil results gracefully.
transports/bifrost-http/integrations/utils.go (2)

280-286: 🧹 Nitpick (assertive)

Azure patterns still too generic / incomplete
Consider adding more specific substrings ("azure-openai", ".openai.azure.com") to improve accuracy.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/utils.go around lines 280 to 286, the
list of Azure model patterns is too generic and incomplete. To improve accuracy,
add more specific substrings such as "azure-openai" and ".openai.azure.com" to
the azurePatterns slice. This will help the isAzureModel function better
identify Azure models by matching these additional specific patterns.

308-310: 🧹 Nitpick (assertive)

Bedrock patterns risk false positives
Generic substrings like "amazon.", "meta." match plenty of non-Bedrock models. Tighten them (e.g., "amazon.titan-", "meta.llama-") to reduce mis-detection.

🤖 Prompt for AI Agents
In transports/bifrost-http/integrations/utils.go around lines 308 to 310, the
bedrockPatterns array uses generic substrings like "amazon." and "meta." which
can cause false positives by matching unrelated models. Refine these patterns to
be more specific, for example, change "amazon." to "amazon.titan-" and "meta."
to "meta.llama-" to reduce mis-detection and ensure only relevant Bedrock models
are matched.
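The effect of tightening the patterns can be sketched directly. The pattern slices below are illustrative, not the real lists in `utils.go`:

```go
package main

import (
	"fmt"
	"strings"
)

// Broad substrings like "amazon." or "meta." match unrelated model names;
// tighter prefixes reduce false positives.
var broadBedrock = []string{"amazon.", "meta."}
var tightBedrock = []string{"amazon.titan-", "meta.llama-"}

func matchesAny(model string, patterns []string) bool {
	m := strings.ToLower(model)
	for _, p := range patterns {
		if strings.Contains(m, p) {
			return true
		}
	}
	return false
}

func main() {
	// A hypothetical non-Bedrock model name that happens to contain "meta.":
	odd := "meta.some-internal-model"
	fmt.Println(matchesAny(odd, broadBedrock))                        // false positive
	fmt.Println(matchesAny(odd, tightBedrock))                        // correctly skipped
	fmt.Println(matchesAny("amazon.titan-text-express", tightBedrock)) // still matched
}
```

The trade-off is recall: any genuine Bedrock model family not covered by a tight prefix must be added explicitly, which is why the review frames this as a nitpick rather than a blocker.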

@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-09-feat_anthropic_integration_added_in_http_transport branch from e186e57 to c9f43b6 Compare June 10, 2025 11:12
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch 2 times, most recently from 0d3e6c5 to 236919a Compare June 10, 2025 11:19
@akshaydeo akshaydeo marked this pull request as ready for review June 10, 2025 11:39


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (3)
transports/bifrost-http/integrations/genai/router.go (1)

28-33: 🧹 Nitpick (assertive)

Propagate conversion failure instead of silently returning (nil, nil)

ConvertToBifrostRequest() can legally return nil (e.g., empty messages array).
Returning (nil, nil) here forces the Generic router to emit a generic “Invalid request” and loses the root-cause context. Bubble up an explicit error for easier client debugging.

-	if geminiReq, ok := req.(*GeminiChatRequest); ok {
-		return geminiReq.ConvertToBifrostRequest(), nil
-	}
-	return nil, errors.New("invalid request type")
+	if geminiReq, ok := req.(*GeminiChatRequest); ok {
+		if bifrostReq := geminiReq.ConvertToBifrostRequest(); bifrostReq != nil {
+			return bifrostReq, nil
+		}
+		return nil, errors.New("failed to convert Gemini request to Bifrost format")
+	}
+	return nil, errors.New("invalid request type")
transports/bifrost-http/integrations/anthropic/router.go (1)

26-34: 🧹 Nitpick (assertive)

Same silent-nil issue as GenAI converter

For parity with other routers and clearer diagnostics, return an explicit error when ConvertToBifrostRequest() yields nil.
The diff is analogous to the GenAI suggestion above.

transports/bifrost-http/integrations/utils.go (1)

162-170: ⚠️ Potential issue

Wrong variable passed to sendError – client gets empty error body

The handler still forwards err (nil here) instead of bifrostErr, so the HTTP response lacks the actual Bifrost failure details. This was flagged previously but remains unfixed.

-			}(), err)
+			}(), bifrostErr)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between da036a4 and 236919a.

⛔ Files ignored due to path filters (1)
  • transports/go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
  • transports/bifrost-http/integrations/anthropic/router.go (2 hunks)
  • transports/bifrost-http/integrations/anthropic/types.go (2 hunks)
  • transports/bifrost-http/integrations/genai/router.go (2 hunks)
  • transports/bifrost-http/integrations/genai/types.go (3 hunks)
  • transports/bifrost-http/integrations/litellm/router.go (1 hunks)
  • transports/bifrost-http/integrations/openai/router.go (2 hunks)
  • transports/bifrost-http/integrations/utils.go (8 hunks)
  • transports/bifrost-http/main.go (3 hunks)
  • transports/go.mod (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
transports/bifrost-http/integrations/anthropic/types.go (1)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#64
File: transports/bifrost-http/integrations/genai/types.go:383-395
Timestamp: 2025-06-09T16:26:05.777Z
Learning: In the Bifrost schema (core/schemas/bifrost.go), only UserMessage and ToolMessage structs have ImageContent fields. AssistantMessage does not have an ImageContent field and cannot contain images. The schema design intentionally limits image content to user and tool messages only.
transports/bifrost-http/integrations/genai/router.go (2)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#67
File: transports/bifrost-http/integrations/anthropic/router.go:26-34
Timestamp: 2025-06-10T11:19:29.578Z
Learning: The Generic router in transports/bifrost-http/integrations/utils.go already handles nil pointers from RequestConverter functions. When a RequestConverter returns a nil *schemas.BifrostRequest, the Generic router automatically returns an HTTP 400 error, making additional nil checks in individual router implementations redundant.
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#67
File: transports/bifrost-http/integrations/anthropic/router.go:26-34
Timestamp: 2025-06-10T11:19:29.578Z
Learning: The Generic router in transports/bifrost-http/integrations/utils.go already handles nil pointers from RequestConverter functions. When a RequestConverter returns a nil *schemas.BifrostRequest, the Generic router automatically returns an HTTP 400 error with "Invalid request" message, making additional nil checks in individual router implementations redundant.
transports/bifrost-http/integrations/anthropic/router.go (2)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#67
File: transports/bifrost-http/integrations/anthropic/router.go:26-34
Timestamp: 2025-06-10T11:19:29.578Z
Learning: The Generic router in transports/bifrost-http/integrations/utils.go already handles nil pointers from RequestConverter functions. When a RequestConverter returns a nil *schemas.BifrostRequest, the Generic router automatically returns an HTTP 400 error with "Invalid request" message, making additional nil checks in individual router implementations redundant.
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#67
File: transports/bifrost-http/integrations/anthropic/router.go:26-34
Timestamp: 2025-06-10T11:19:29.578Z
Learning: The Generic router in transports/bifrost-http/integrations/utils.go already handles nil pointers from RequestConverter functions. When a RequestConverter returns a nil *schemas.BifrostRequest, the Generic router automatically returns an HTTP 400 error, making additional nil checks in individual router implementations redundant.
transports/bifrost-http/integrations/utils.go (3)
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#67
File: transports/bifrost-http/integrations/anthropic/router.go:26-34
Timestamp: 2025-06-10T11:19:29.578Z
Learning: The Generic router in transports/bifrost-http/integrations/utils.go already handles nil pointers from RequestConverter functions. When a RequestConverter returns a nil *schemas.BifrostRequest, the Generic router automatically returns an HTTP 400 error, making additional nil checks in individual router implementations redundant.
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#67
File: transports/bifrost-http/integrations/anthropic/router.go:26-34
Timestamp: 2025-06-10T11:19:29.578Z
Learning: The Generic router in transports/bifrost-http/integrations/utils.go already handles nil pointers from RequestConverter functions. When a RequestConverter returns a nil *schemas.BifrostRequest, the Generic router automatically returns an HTTP 400 error with "Invalid request" message, making additional nil checks in individual router implementations redundant.
Learnt from: Pratham-Mishra04
PR: maximhq/bifrost#65
File: transports/bifrost-http/integrations/utils.go:169-173
Timestamp: 2025-06-09T17:33:52.234Z
Learning: The ChatCompletionRequest method in the Bifrost client follows a contract where the result parameter will never be nil if the error parameter is nil. This means when error checking passes (err == nil), the result is guaranteed to be valid and can be safely used without additional nil checks.
🧬 Code Graph Analysis (6)
transports/bifrost-http/main.go (1)
transports/bifrost-http/integrations/litellm/router.go (1)
  • NewLiteLLMRouter (33-160)
transports/bifrost-http/integrations/anthropic/types.go (1)
core/schemas/bifrost.go (1)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/openai/router.go (3)
transports/bifrost-http/integrations/utils.go (2)
  • RequestConverter (23-23)
  • ResponseConverter (27-27)
core/schemas/bifrost.go (2)
  • BifrostRequest (54-64)
  • BifrostResponse (191-201)
transports/bifrost-http/integrations/openai/types.go (2)
  • OpenAIChatRequest (36-57)
  • DeriveOpenAIFromBifrostResponse (452-507)
transports/bifrost-http/integrations/genai/types.go (1)
core/schemas/bifrost.go (3)
  • ImageContentTypeBase64 (176-176)
  • ImageContent (181-186)
  • ImageContentTypeURL (177-177)
transports/bifrost-http/integrations/genai/router.go (2)
transports/bifrost-http/integrations/utils.go (2)
  • RequestConverter (23-23)
  • ResponseConverter (27-27)
transports/bifrost-http/integrations/genai/types.go (1)
  • DeriveGenAIFromBifrostResponse (337-456)
transports/bifrost-http/integrations/utils.go (2)
core/schemas/bifrost.go (8)
  • BifrostRequest (54-64)
  • ModelProvider (31-31)
  • Azure (35-35)
  • OpenAI (34-34)
  • Anthropic (36-36)
  • Vertex (39-39)
  • Bedrock (37-37)
  • Cohere (38-38)
core/bifrost.go (1)
  • ChatCompletionRequest (24-24)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Graphite / mergeability_check
🔇 Additional comments (10)
transports/go.mod (1)

7-7: LGTM - Dependency update aligns with new functionality.

The core dependency upgrade to v1.0.10 supports the enhanced error handling capabilities being introduced across the integrations.

transports/bifrost-http/integrations/anthropic/types.go (1)

139-139: LGTM - Improved type safety with schema constants.

Using schemas.ImageContentTypeURL constant instead of pointer dereferencing improves type safety and consistency across integrations. This change aligns with similar updates in other integrations.

Also applies to: 172-172

transports/bifrost-http/main.go (3)

50-50: LGTM - Correct import for LiteLLM integration.

The import follows the established pattern for other integrations.


202-202: LGTM - Proper router registration.

The LiteLLM router is correctly added to the extensions list, following the same pattern as other integrations (GenAI, OpenAI, Anthropic).


223-223: LGTM - Enhanced error messaging for debugging.

Including the requested path in the 404 response improves debugging experience for API consumers.

transports/bifrost-http/integrations/genai/types.go (2)

175-175: LGTM - Consistent use of schema constants for image content types.

Replacing string literals with schemas.ImageContentTypeBase64 and schemas.ImageContentTypeURL constants improves type safety and consistency across integrations.

Also applies to: 184-184


465-465: LGTM - Type-safe comparisons with schema constants.

Using the predefined constants in comparisons ensures consistency and reduces the chance of typos in string literals.

Also applies to: 501-501
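The typed-constant pattern praised in these comments can be sketched as follows (constant names assumed from the review; the real definitions live in `core/schemas/bifrost.go`):

```go
package main

import "fmt"

// ImageContentType mirrors the pattern of the schemas.ImageContentType
// constants; the values here are illustrative, not the actual ones.
type ImageContentType string

const (
	ImageContentTypeBase64 ImageContentType = "base64"
	ImageContentTypeURL    ImageContentType = "url"
)

// classify compares against typed constants rather than raw string literals,
// so a typo becomes a compile-time error instead of a silent mismatch.
func classify(t ImageContentType) string {
	switch t {
	case ImageContentTypeBase64:
		return "inline image data"
	case ImageContentTypeURL:
		return "remote image reference"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(classify(ImageContentTypeURL))
}
```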

transports/bifrost-http/integrations/openai/router.go (3)

4-4: LGTM - Required import for error handling.

The errors package import is necessary for the updated error handling in converter functions.


26-30: LGTM - Proper error handling in request converter.

The updated function signature correctly returns both the converted request and an error. The type assertion properly returns an error when the request type is invalid.


32-34: LGTM - Consistent error handling in response converter.

The response converter follows the same pattern, returning both the converted response and a nil error on success. This maintains consistency with the new error handling approach across all integrations.
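The converter contract these comments describe can be illustrated with a small sketch (simplified stand-in types, not the actual `openai/router.go` code):

```go
package main

import (
	"errors"
	"fmt"
)

// OpenAIChatRequest and BifrostRequest are hypothetical stand-ins for the
// types referenced in the review.
type OpenAIChatRequest struct{ Model string }
type BifrostRequest struct{ Model string }

// convertRequest sketches the updated contract: on a failed type assertion it
// returns an explicit error rather than a bare nil result.
func convertRequest(raw any) (*BifrostRequest, error) {
	req, ok := raw.(*OpenAIChatRequest)
	if !ok {
		return nil, errors.New("invalid request type")
	}
	return &BifrostRequest{Model: req.Model}, nil
}

func main() {
	if _, err := convertRequest("not a request"); err != nil {
		fmt.Println("error:", err)
	}
	out, _ := convertRequest(&OpenAIChatRequest{Model: "gpt-4o"})
	fmt.Println("ok:", out.Model)
}
```

Returning `(result, error)` pairs keeps all integrations on one pattern, so the generic router can translate any converter error into an HTTP status in one place.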

Comment threads
  • transports/bifrost-http/integrations/litellm/router.go (×3)
  • transports/bifrost-http/integrations/utils.go
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from 236919a to c584545 Compare June 10, 2025 13:11
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-09-feat_anthropic_integration_added_in_http_transport branch from c9f43b6 to 4c58221 Compare June 10, 2025 13:39
@Pratham-Mishra04 Pratham-Mishra04 force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch 2 times, most recently from 260ff7b to 4a73e38 Compare June 10, 2025 13:54
Contributor

akshaydeo commented Jun 10, 2025

Merge activity

  • Jun 10, 4:28 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Jun 10, 4:34 PM UTC: Graphite rebased this pull request as part of a merge.
  • Jun 10, 4:36 PM UTC: @akshaydeo merged this pull request with Graphite.

@akshaydeo akshaydeo changed the base branch from 06-09-feat_anthropic_integration_added_in_http_transport to graphite-base/67 June 10, 2025 16:32
@akshaydeo akshaydeo changed the base branch from graphite-base/67 to main June 10, 2025 16:33
@akshaydeo akshaydeo force-pushed the 06-10-feat_add_litellm_integration_to_http_transport branch from 4a73e38 to c790b36 Compare June 10, 2025 16:34
@cursor Bot left a comment
✅ BugBot reviewed your changes and found no bugs!



@akshaydeo akshaydeo merged commit bc74f7d into main Jun 10, 2025
1 of 2 checks passed
@akshaydeo akshaydeo deleted the 06-10-feat_add_litellm_integration_to_http_transport branch August 31, 2025 17:31
akshaydeo added a commit that referenced this pull request Nov 17, 2025
# Add LiteLLM Integration Support

This PR adds support for LiteLLM integration to Bifrost HTTP transport. LiteLLM is a popular proxy that standardizes access to various LLM providers through an OpenAI-compatible interface.

Key changes:
- Created a new `LiteLLMRouter` that handles requests to the `/litellm/chat/completions` endpoint
- Added model name pattern detection to automatically determine the appropriate provider based on the model name
- Implemented comprehensive pattern matching for various AI providers (OpenAI, Azure, Anthropic, Google Vertex, AWS Bedrock, Cohere)
- Registered the LiteLLM router in the main HTTP server

This integration allows users to route requests through Bifrost using LiteLLM's standardized interface while maintaining compatibility with existing OpenAI request/response formats.
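The model-name pattern detection described above might be sketched like this — a hypothetical simplification, where the prefixes and the fallback behavior are assumptions; the actual patterns in `litellm/router.go` are more comprehensive:

```go
package main

import (
	"fmt"
	"strings"
)

// detectProvider maps a LiteLLM-style model name to a provider identifier.
// The prefixes below are illustrative guesses, not the router's real table.
func detectProvider(model string) string {
	switch {
	case strings.HasPrefix(model, "azure/"):
		return "azure"
	case strings.HasPrefix(model, "vertex_ai/") || strings.Contains(model, "gemini"):
		return "vertex"
	case strings.HasPrefix(model, "bedrock/"):
		return "bedrock"
	case strings.HasPrefix(model, "claude"):
		return "anthropic"
	case strings.HasPrefix(model, "command"):
		return "cohere"
	default:
		return "openai" // GPT-style names fall through to OpenAI
	}
}

func main() {
	for _, m := range []string{"gpt-4o", "claude-3-opus", "azure/gpt-4", "command-r"} {
		fmt.Printf("%s -> %s\n", m, detectProvider(m))
	}
}
```

With detection centralized like this, callers keep sending plain OpenAI-format requests and the router picks the backend from the model name alone.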