docs: add HTTP transport API reference documentation#59

Merged
Pratham-Mishra04 merged 1 commit into main from 06-04-feat_swagger_docs_and_references_added_for_http_transport
Jun 4, 2025

Conversation

@Pratham-Mishra04
Collaborator

Add HTTP Transport API Documentation

Added comprehensive API documentation for the Bifrost HTTP transport, including:

  • A detailed markdown reference guide (docs/http-transport-api.md) covering all endpoints, request/response formats, and schema definitions
  • A complete OpenAPI 3.0 specification file (docs/openapi.json) for machine-readable API documentation
  • Updated the transports README with a link to the new API documentation

The documentation provides detailed information about the chat completions and text completions endpoints, including request parameters, response formats, error handling, and examples in multiple programming languages. It also covers authentication, monitoring, and supported model providers.

@coderabbitai
Contributor

coderabbitai Bot commented Jun 4, 2025

Summary by CodeRabbit

  • Documentation
    • Introduced a comprehensive API reference for the Bifrost HTTP Transport, detailing available endpoints, request/response formats, error handling, configuration options, and usage examples.
    • Added a full OpenAPI 3.0 specification for integration and tooling support.
    • Updated the README to highlight the new API documentation and made a minor typographical correction.

Walkthrough

This update introduces a comprehensive API reference for the Bifrost HTTP transport, including a detailed Markdown document and a full OpenAPI 3.0 specification. The documentation covers endpoints for chat and text completions, monitoring, request/response schemas, error handling, usage examples, and server configuration. The transports README was also updated to reference this new documentation.

Changes

File(s) Change Summary
docs/http-transport-api.md Added a detailed API reference for Bifrost HTTP transport, including endpoints, schemas, and examples
docs/openapi.json Added a complete OpenAPI 3.0.3 specification for the Bifrost HTTP Transport API
transports/README.md Added prominent link to new API docs; fixed minor typographical error

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant BifrostAPI

    Client->>BifrostAPI: POST /v1/chat/completions or /v1/text/completions
    BifrostAPI-->>Client: JSON response (choices, usage, error info)

    Client->>BifrostAPI: GET /metrics
    BifrostAPI-->>Client: Prometheus metrics (plain text)
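The flow in the diagram above can be exercised with a minimal client. This is a sketch only: the payload shape follows the documented ChatCompletionRequest schema (required `provider`, `model`, `messages`), and the URL uses the local development server from the spec's `servers` entry.

```python
import json

# Build a chat completion request matching the documented schema.
# "provider", "model", and "messages" are the required fields;
# "fallbacks" is the optional fallback-provider list.
payload = {
    "provider": "openai",
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "fallbacks": [{"provider": "anthropic", "model": "claude-3-sonnet-20240229"}],
}

body = json.dumps(payload)
print(body)

# To actually send it (assumes a Bifrost server on the documented local port):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# resp = json.loads(urllib.request.urlopen(req).read())
```

On success the server responds with a BifrostResponse JSON body (choices, usage, extra_fields); on failure, a BifrostError.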

Suggested reviewers

  • danpiths
  • akshaydeo

Poem

In docs a new chapter, crisp and bright,
Bifrost’s API now shines in the light.
Endpoints and schemas, all neat in a row,
For chat, for text, for metrics to show.
With OpenAPI’s map and examples galore,
Integration’s a breeze—rabbits couldn’t ask for more!
🐇✨


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Collaborator Author

This stack of pull requests is managed by Graphite. Learn more about stacking.

@Pratham-Mishra04 Pratham-Mishra04 marked this pull request as ready for review June 4, 2025 13:25
@Pratham-Mishra04 Pratham-Mishra04 merged commit ac54c86 into main Jun 4, 2025
1 of 2 checks passed
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b090d64 and e32f74b.

📒 Files selected for processing (3)
  • docs/http-transport-api.md (1 hunks)
  • docs/openapi.json (1 hunks)
  • transports/README.md (2 hunks)
🧰 Additional context used
🪛 Checkov (3.2.334)
docs/openapi.json

[HIGH] 1-1139: Ensure that the global security field has rules defined

(CKV_OPENAPI_4)


[HIGH] 1-1139: Ensure that security operations is not empty.

(CKV_OPENAPI_5)


[MEDIUM] 355-361: Ensure that arrays have a maximum number of items

(CKV_OPENAPI_21)

🪛 LanguageTool
docs/http-transport-api.md

[uncategorized] ~209-~209: Possible missing comma found.
Context: ... | *Either messages or text is required depending on the endpoint. ### Bifrost...

(AI_HYDRA_LEO_MISSING_COMMA)

🪛 markdownlint-cli2 (0.17.2)
docs/http-transport-api.md

7-7: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

🔇 Additional comments (3)
docs/openapi.json (1)

1125-1139: The tags section cleanly groups endpoints. Consider adding a brief description to the “Monitoring” tag for consistency.

🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 1-1139: Ensure that the global security field has rules defined

(CKV_OPENAPI_4)


[HIGH] 1-1139: Ensure that security operations is not empty.

(CKV_OPENAPI_5)

transports/README.md (1)

5-5: Verify the relative link to HTTP API docs.

The link uses ../docs/http-transport-api.md from this file. Confirm it resolves correctly on GitHub; if not, switch to an absolute path (/docs/http-transport-api.md).

docs/http-transport-api.md (1)

200-210: 🧹 Nitpick (assertive)

Fix missing comma in table row.

In the schema definitions table, append a comma in the description cell to prevent Markdown parsing issues. For example:

| `messages`  | [`BifrostMessage[]`](#bifrostmessage) | ✅\*     | Array of chat messages (required for chat completions), |

Likely an incorrect or invalid review comment.

🧰 Tools
🪛 LanguageTool

[uncategorized] ~209-~209: Possible missing comma found.
Context: ... | *Either messages or text is required depending on the endpoint. ### Bifrost...

(AI_HYDRA_LEO_MISSING_COMMA)

Comment thread docs/openapi.json
Comment on lines +1 to +21
{
"openapi": "3.0.3",
"info": {
"title": "Bifrost HTTP Transport API",
"description": "A unified HTTP API for accessing multiple AI model providers including OpenAI, Anthropic, Azure, Bedrock, Cohere, and Vertex AI. Bifrost provides standardized endpoints for text and chat completions with built-in fallback support and comprehensive monitoring.",
"version": "1.0.0",
"contact": {
"name": "Bifrost API Support",
"url": "https://github.com/maximhq/bifrost"
},
"license": {
"name": "MIT",
"url": "https://opensource.org/licenses/MIT"
}
},
"servers": [
{
"url": "http://localhost:8080",
"description": "Local development server"
}
],
Contributor


⚠️ Potential issue

Add global security definitions and requirements.

The spec lacks a top-level security field and components/securitySchemes. Define an API-key scheme (or OAuth2/Bearer) and apply it globally to ensure clients know how to authenticate:

components:
  securitySchemes:
    ApiKeyAuth:
      type: apiKey
      in: header
      name: Authorization

security:
  - ApiKeyAuth: []
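Since the spec file is docs/openapi.json rather than YAML, the suggested change would land as JSON objects. A minimal sketch of applying it in Python (the scheme name ApiKeyAuth is the reviewer's suggestion, not an existing key in the spec):

```python
import json

# JSON equivalent of the YAML suggestion above.
security_patch = {
    "components": {
        "securitySchemes": {
            "ApiKeyAuth": {
                "type": "apiKey",
                "in": "header",
                "name": "Authorization",
            }
        }
    },
    "security": [{"ApiKeyAuth": []}],
}

def apply_patch(spec: dict) -> dict:
    """Merge the security scheme and global requirement into a spec dict."""
    spec.setdefault("components", {}).setdefault("securitySchemes", {}).update(
        security_patch["components"]["securitySchemes"]
    )
    spec["security"] = security_patch["security"]
    return spec

# Applied to a stub spec; in practice, load docs/openapi.json first.
spec = apply_patch({"openapi": "3.0.3", "components": {"schemas": {}}})
print(json.dumps(spec, indent=2))
```

The `setdefault` chain keeps any existing components (such as schemas) intact while adding the new security scheme.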
🧰 Tools
🪛 Checkov (3.2.334)

[HIGH] 1-1139: Ensure that the global security field has rules defined

(CKV_OPENAPI_4)


[HIGH] 1-1139: Ensure that security operations is not empty.

(CKV_OPENAPI_5)

🤖 Prompt for AI Agents
In docs/openapi.json lines 1 to 21, the OpenAPI spec is missing global security
definitions and requirements. Add a top-level "components" object with a
"securitySchemes" field defining an ApiKeyAuth scheme of type "apiKey" located
in the header with the name "Authorization". Then add a top-level "security"
field applying this ApiKeyAuth scheme globally to indicate how clients should
authenticate.

Comment thread docs/openapi.json
Comment on lines +341 to +1124
"components": {
"schemas": {
"ChatCompletionRequest": {
"type": "object",
"required": ["provider", "model", "messages"],
"properties": {
"provider": {
"$ref": "#/components/schemas/ModelProvider"
},
"model": {
"type": "string",
"description": "Model identifier (provider-specific)",
"example": "gpt-4o"
},
"messages": {
"type": "array",
"items": {
"$ref": "#/components/schemas/BifrostMessage"
},
"description": "Array of chat messages",
"minItems": 1
},
"params": {
"$ref": "#/components/schemas/ModelParameters"
},
"fallbacks": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Fallback"
},
"description": "Fallback providers and models"
}
}
},
"TextCompletionRequest": {
"type": "object",
"required": ["provider", "model", "text"],
"properties": {
"provider": {
"$ref": "#/components/schemas/ModelProvider"
},
"model": {
"type": "string",
"description": "Model identifier (provider-specific)",
"example": "gpt-3.5-turbo-instruct"
},
"text": {
"type": "string",
"description": "Text prompt for completion",
"example": "The benefits of artificial intelligence include"
},
"params": {
"$ref": "#/components/schemas/ModelParameters"
},
"fallbacks": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Fallback"
},
"description": "Fallback providers and models"
}
}
},
"ModelProvider": {
"type": "string",
"enum": ["openai", "anthropic", "azure", "bedrock", "cohere", "vertex"],
"description": "AI model provider",
"example": "openai"
},
"BifrostMessage": {
"type": "object",
"required": ["role"],
"properties": {
"role": {
"$ref": "#/components/schemas/MessageRole"
},
"content": {
"type": "string",
"description": "Text content of the message",
"example": "Hello, how are you?"
},
"tool_call_id": {
"type": "string",
"description": "ID of the tool call (for tool messages)"
},
"tool_calls": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ToolCall"
},
"description": "Tool calls made by assistant"
},
"image_content": {
"$ref": "#/components/schemas/ImageContent"
},
"refusal": {
"type": "string",
"description": "Refusal message from assistant"
},
"annotations": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Annotation"
},
"description": "Message annotations"
},
"thought": {
"type": "string",
"description": "Assistant's internal thought process"
}
}
},
"MessageRole": {
"type": "string",
"enum": ["user", "assistant", "system", "tool"],
"description": "Role of the message sender",
"example": "user"
},
"ModelParameters": {
"type": "object",
"properties": {
"temperature": {
"type": "number",
"minimum": 0.0,
"maximum": 2.0,
"description": "Controls randomness in the output",
"example": 0.7
},
"top_p": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "Nucleus sampling parameter",
"example": 0.9
},
"top_k": {
"type": "integer",
"minimum": 1,
"description": "Top-k sampling parameter",
"example": 40
},
"max_tokens": {
"type": "integer",
"minimum": 1,
"description": "Maximum number of tokens to generate",
"example": 1000
},
"stop_sequences": {
"type": "array",
"items": {
"type": "string"
},
"description": "Sequences that stop generation",
"example": ["\n\n", "END"]
},
"presence_penalty": {
"type": "number",
"minimum": -2.0,
"maximum": 2.0,
"description": "Penalizes repeated tokens",
"example": 0.0
},
"frequency_penalty": {
"type": "number",
"minimum": -2.0,
"maximum": 2.0,
"description": "Penalizes frequent tokens",
"example": 0.0
},
"tools": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Tool"
},
"description": "Available tools for the model"
},
"tool_choice": {
"$ref": "#/components/schemas/ToolChoice"
},
"parallel_tool_calls": {
"type": "boolean",
"description": "Enable parallel tool execution",
"example": true
}
}
},
"Tool": {
"type": "object",
"required": ["type", "function"],
"properties": {
"id": {
"type": "string",
"description": "Unique tool identifier"
},
"type": {
"type": "string",
"enum": ["function"],
"description": "Tool type",
"example": "function"
},
"function": {
"$ref": "#/components/schemas/Function"
}
}
},
"Function": {
"type": "object",
"required": ["name", "description", "parameters"],
"properties": {
"name": {
"type": "string",
"description": "Function name",
"example": "get_weather"
},
"description": {
"type": "string",
"description": "Function description",
"example": "Get current weather for a location"
},
"parameters": {
"$ref": "#/components/schemas/FunctionParameters"
}
}
},
"FunctionParameters": {
"type": "object",
"required": ["type"],
"properties": {
"type": {
"type": "string",
"description": "Parameter type",
"example": "object"
},
"description": {
"type": "string",
"description": "Parameter description"
},
"properties": {
"type": "object",
"additionalProperties": true,
"description": "Parameter properties (JSON Schema)"
},
"required": {
"type": "array",
"items": {
"type": "string"
},
"description": "Required parameter names"
},
"enum": {
"type": "array",
"items": {
"type": "string"
},
"description": "Enum values for parameters"
}
}
},
"ToolChoice": {
"type": "object",
"required": ["type"],
"properties": {
"type": {
"type": "string",
"enum": ["none", "auto", "any", "function", "required"],
"description": "How tools should be chosen",
"example": "auto"
},
"function": {
"$ref": "#/components/schemas/ToolChoiceFunction"
}
}
},
"ToolChoiceFunction": {
"type": "object",
"required": ["name"],
"properties": {
"name": {
"type": "string",
"description": "Name of the function to call",
"example": "get_weather"
}
}
},
"ToolCall": {
"type": "object",
"required": ["function"],
"properties": {
"id": {
"type": "string",
"description": "Unique tool call identifier",
"example": "call_123"
},
"type": {
"type": "string",
"enum": ["function"],
"description": "Tool call type",
"example": "function"
},
"function": {
"$ref": "#/components/schemas/FunctionCall"
}
}
},
"FunctionCall": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Function name",
"example": "get_weather"
},
"arguments": {
"type": "string",
"description": "JSON string of function arguments",
"example": "{\"location\": \"San Francisco, CA\"}"
}
}
},
"ImageContent": {
"type": "object",
"properties": {
"type": {
"type": "string",
"description": "Content type"
},
"url": {
"type": "string",
"description": "Image URL or data URI",
"example": "https://example.com/image.jpg"
},
"media_type": {
"type": "string",
"description": "MIME type of the image",
"example": "image/jpeg"
},
"detail": {
"type": "string",
"enum": ["low", "high", "auto"],
"description": "Image detail level",
"example": "auto"
}
}
},
"Annotation": {
"type": "object",
"required": ["type", "url_citation"],
"properties": {
"type": {
"type": "string",
"description": "Annotation type"
},
"url_citation": {
"$ref": "#/components/schemas/Citation"
}
}
},
"Citation": {
"type": "object",
"required": ["start_index", "end_index", "title"],
"properties": {
"start_index": {
"type": "integer",
"description": "Start index in the text"
},
"end_index": {
"type": "integer",
"description": "End index in the text"
},
"title": {
"type": "string",
"description": "Citation title"
},
"url": {
"type": "string",
"description": "Citation URL"
},
"sources": {
"description": "Citation sources"
},
"type": {
"type": "string",
"description": "Citation type"
}
}
},
"Fallback": {
"type": "object",
"required": ["provider", "model"],
"properties": {
"provider": {
"$ref": "#/components/schemas/ModelProvider"
},
"model": {
"type": "string",
"description": "Fallback model name",
"example": "claude-3-sonnet-20240229"
}
}
},
"BifrostResponse": {
"type": "object",
"properties": {
"id": {
"type": "string",
"description": "Unique response identifier",
"example": "chatcmpl-123"
},
"object": {
"type": "string",
"enum": ["chat.completion", "text.completion"],
"description": "Response type",
"example": "chat.completion"
},
"choices": {
"type": "array",
"items": {
"$ref": "#/components/schemas/BifrostResponseChoice"
},
"description": "Array of completion choices"
},
"model": {
"type": "string",
"description": "Model used for generation",
"example": "gpt-4o"
},
"created": {
"type": "integer",
"description": "Unix timestamp of creation",
"example": 1677652288
},
"service_tier": {
"type": "string",
"description": "Service tier used"
},
"system_fingerprint": {
"type": "string",
"description": "System fingerprint"
},
"usage": {
"$ref": "#/components/schemas/LLMUsage"
},
"extra_fields": {
"$ref": "#/components/schemas/BifrostResponseExtraFields"
}
}
},
"BifrostResponseChoice": {
"type": "object",
"required": ["index", "message"],
"properties": {
"index": {
"type": "integer",
"description": "Choice index",
"example": 0
},
"message": {
"$ref": "#/components/schemas/BifrostMessage"
},
"finish_reason": {
"type": "string",
"enum": [
"stop",
"length",
"tool_calls",
"content_filter",
"function_call"
],
"description": "Reason completion stopped",
"example": "stop"
},
"stop": {
"type": "string",
"description": "Stop sequence that ended generation"
},
"log_probs": {
"$ref": "#/components/schemas/LogProbs"
}
}
},
"LLMUsage": {
"type": "object",
"properties": {
"prompt_tokens": {
"type": "integer",
"description": "Tokens in the prompt",
"example": 56
},
"completion_tokens": {
"type": "integer",
"description": "Tokens in the completion",
"example": 31
},
"total_tokens": {
"type": "integer",
"description": "Total tokens used",
"example": 87
},
"completion_tokens_details": {
"$ref": "#/components/schemas/CompletionTokensDetails"
}
}
},
"CompletionTokensDetails": {
"type": "object",
"properties": {
"reasoning_tokens": {
"type": "integer",
"description": "Tokens used for reasoning"
},
"audio_tokens": {
"type": "integer",
"description": "Tokens used for audio"
},
"accepted_prediction_tokens": {
"type": "integer",
"description": "Accepted prediction tokens"
},
"rejected_prediction_tokens": {
"type": "integer",
"description": "Rejected prediction tokens"
}
}
},
"BifrostResponseExtraFields": {
"type": "object",
"properties": {
"provider": {
"$ref": "#/components/schemas/ModelProvider"
},
"model_params": {
"$ref": "#/components/schemas/ModelParameters"
},
"latency": {
"type": "number",
"description": "Request latency in seconds",
"example": 1.234
},
"chat_history": {
"type": "array",
"items": {
"$ref": "#/components/schemas/BifrostMessage"
},
"description": "Full conversation history"
},
"billed_usage": {
"$ref": "#/components/schemas/BilledLLMUsage"
},
"raw_response": {
"type": "object",
"description": "Raw provider response"
}
}
},
"BilledLLMUsage": {
"type": "object",
"properties": {
"prompt_tokens": {
"type": "number",
"description": "Billed prompt tokens"
},
"completion_tokens": {
"type": "number",
"description": "Billed completion tokens"
},
"search_units": {
"type": "number",
"description": "Billed search units"
},
"classifications": {
"type": "number",
"description": "Billed classifications"
}
}
},
"LogProbs": {
"type": "object",
"properties": {
"content": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ContentLogProb"
},
"description": "Log probabilities for content"
},
"refusal": {
"type": "array",
"items": {
"$ref": "#/components/schemas/LogProb"
},
"description": "Log probabilities for refusal"
}
}
},
"ContentLogProb": {
"type": "object",
"required": ["logprob", "token"],
"properties": {
"bytes": {
"type": "array",
"items": {
"type": "integer"
},
"description": "Byte representation"
},
"logprob": {
"type": "number",
"description": "Log probability",
"example": -0.123
},
"token": {
"type": "string",
"description": "Token",
"example": "hello"
},
"top_logprobs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/LogProb"
},
"description": "Top log probabilities"
}
}
},
"LogProb": {
"type": "object",
"required": ["logprob", "token"],
"properties": {
"bytes": {
"type": "array",
"items": {
"type": "integer"
},
"description": "Byte representation"
},
"logprob": {
"type": "number",
"description": "Log probability",
"example": -0.456
},
"token": {
"type": "string",
"description": "Token",
"example": "world"
}
}
},
"BifrostError": {
"type": "object",
"required": ["is_bifrost_error", "error"],
"properties": {
"event_id": {
"type": "string",
"description": "Unique error event ID",
"example": "evt_123"
},
"type": {
"type": "string",
"description": "Error type",
"example": "invalid_request_error"
},
"is_bifrost_error": {
"type": "boolean",
"description": "Whether error originated from Bifrost",
"example": true
},
"status_code": {
"type": "integer",
"description": "HTTP status code",
"example": 400
},
"error": {
"$ref": "#/components/schemas/ErrorField"
}
}
},
"ErrorField": {
"type": "object",
"required": ["message"],
"properties": {
"type": {
"type": "string",
"description": "Error type",
"example": "invalid_request_error"
},
"code": {
"type": "string",
"description": "Error code",
"example": "missing_required_parameter"
},
"message": {
"type": "string",
"description": "Human-readable error message",
"example": "Provider is required"
},
"param": {
"description": "Parameter that caused the error",
"example": "provider"
},
"event_id": {
"type": "string",
"description": "Error event ID",
"example": "evt_123"
}
}
}
},
"responses": {
"BadRequest": {
"description": "Bad Request - Invalid request format or missing required fields",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BifrostError"
},
"example": {
"is_bifrost_error": true,
"status_code": 400,
"error": {
"type": "invalid_request_error",
"code": "missing_required_parameter",
"message": "Provider is required",
"param": "provider"
}
}
}
}
},
"Unauthorized": {
"description": "Unauthorized - Invalid or missing API key",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BifrostError"
},
"example": {
"is_bifrost_error": true,
"status_code": 401,
"error": {
"type": "authentication_error",
"message": "Invalid API key provided"
}
}
}
}
},
"RateLimited": {
"description": "Too Many Requests - Rate limit exceeded",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BifrostError"
},
"example": {
"is_bifrost_error": false,
"status_code": 429,
"error": {
"type": "rate_limit_error",
"message": "Rate limit exceeded. Please try again later."
}
}
}
}
},
"InternalServerError": {
"description": "Internal Server Error - Server or provider error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BifrostError"
},
"example": {
"is_bifrost_error": true,
"status_code": 500,
"error": {
"type": "api_error",
"message": "Internal server error occurred"
}
}
}
}
}
}
},
Contributor


🧹 Nitpick (assertive)

Validate $ref resolution and promote reuse.

The components are exhaustive but quite large. Double-check that all $ref pointers resolve, and consider extracting repeated patterns (e.g. shared parameter definitions) into reusable schemas to reduce duplication and improve maintainability.

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 355-361: Ensure that arrays have a maximum number of items

(CKV_OPENAPI_21)

🤖 Prompt for AI Agents
In docs/openapi.json from lines 341 to 1124, the $ref pointers should be
verified to ensure they correctly resolve to existing schemas. Review the schema
definitions for repeated patterns, especially in parameter and property
definitions, and extract these common structures into separate reusable
component schemas. Replace duplicated inline definitions with $ref references to
these new shared schemas to reduce redundancy and improve maintainability.
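One way to act on the $ref verification suggestion is a small script that walks the parsed spec and checks every internal pointer. This sketch assumes only local `#/` references, which is all this spec uses:

```python
import json

def iter_refs(node, refs=None):
    """Collect every "$ref" string in a parsed OpenAPI document."""
    if refs is None:
        refs = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "$ref" and isinstance(value, str):
                refs.append(value)
            else:
                iter_refs(value, refs)
    elif isinstance(node, list):
        for item in node:
            iter_refs(item, refs)
    return refs

def resolve(spec, ref):
    """Follow a local JSON pointer like #/components/schemas/Tool."""
    if not ref.startswith("#/"):
        return False  # only local refs are expected in this spec
    node = spec
    for part in ref[2:].split("/"):
        if not isinstance(node, dict) or part not in node:
            return False
        node = node[part]
    return True

# Tiny stand-in spec; in practice, json.load docs/openapi.json here.
spec = {
    "components": {"schemas": {"Tool": {"type": "object"}}},
    "paths": {"/x": {"post": {"schema": {"$ref": "#/components/schemas/Tool"}}}},
}
broken = [r for r in iter_refs(spec) if not resolve(spec, r)]
print(broken)  # empty list when every pointer resolves
```

Running this over the real file would flag any dangling pointer before it reaches generated clients or tooling.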

Comment thread docs/openapi.json
Comment on lines +16 to +21
"servers": [
{
"url": "http://localhost:8080",
"description": "Local development server"
}
],
Contributor


🧹 Nitpick (assertive)

Consider adding a production servers entry.

Currently only a local server is listed. For completeness, include a placeholder for production, e.g.:

servers:
  - url: https://api.yourdomain.com
    description: Production server
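In the JSON spec file, the change amounts to appending one entry to the servers array; a sketch, where api.yourdomain.com is the reviewer's placeholder and not a real host:

```python
import json

# Current servers array from docs/openapi.json.
servers = [
    {"url": "http://localhost:8080", "description": "Local development server"}
]

# Append the suggested production placeholder.
servers.append(
    {"url": "https://api.yourdomain.com", "description": "Production server"}
)
print(json.dumps(servers, indent=2))
```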
🤖 Prompt for AI Agents
In docs/openapi.json around lines 16 to 21, only the local development server is
listed under the servers array. Add an additional entry for the production
server with a placeholder URL like "https://api.yourdomain.com" and a
description "Production server" to provide a complete server list for different
environments.

Comment thread docs/openapi.json
Comment on lines +22 to +340
"paths": {
"/v1/chat/completions": {
"post": {
"summary": "Create Chat Completion",
"description": "Creates a chat completion using conversational messages. Supports tool calling, image inputs, and multiple AI providers with automatic fallbacks.",
"operationId": "createChatCompletion",
"tags": ["Chat Completions"],
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ChatCompletionRequest"
},
"examples": {
"simple_chat": {
"summary": "Simple chat message",
"value": {
"provider": "openai",
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
]
}
},
"tool_calling": {
"summary": "Chat with tool calling",
"value": {
"provider": "openai",
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": "What's the weather in San Francisco?"
}
],
"params": {
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
},
"required": ["location"]
}
}
}
],
"tool_choice": {
"type": "function",
"function": {
"name": "get_weather"
}
}
}
}
},
"with_fallbacks": {
"summary": "Chat with fallback providers",
"value": {
"provider": "openai",
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": "Explain quantum computing"
}
],
"fallbacks": [
{
"provider": "anthropic",
"model": "claude-3-sonnet-20240229"
},
{
"provider": "cohere",
"model": "command"
}
]
}
}
}
}
}
},
"responses": {
"200": {
"description": "Successful chat completion",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BifrostResponse"
},
"examples": {
"simple_response": {
"summary": "Simple chat response",
"value": {
"id": "chatcmpl-123",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I'm doing well, thank you for asking. How can I help you today?"
},
"finish_reason": "stop"
}
],
"model": "gpt-4o",
"created": 1677652288,
"usage": {
"prompt_tokens": 12,
"completion_tokens": 19,
"total_tokens": 31
},
"extra_fields": {
"provider": "openai",
"model_params": {},
"latency": 1.234,
"raw_response": {}
}
}
},
"tool_response": {
"summary": "Tool calling response",
"value": {
"id": "chatcmpl-456",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": null,
"tool_calls": [
{
"id": "call_123",
"type": "function",
"function": {
"name": "get_weather",
"arguments": "{\"location\": \"San Francisco, CA\"}"
}
}
]
},
"finish_reason": "tool_calls"
}
],
"model": "gpt-4o",
"created": 1677652288,
"usage": {
"prompt_tokens": 45,
"completion_tokens": 12,
"total_tokens": 57
},
"extra_fields": {
"provider": "openai",
"model_params": {},
"latency": 0.856,
"raw_response": {}
}
}
}
}
}
}
},
"400": {
"$ref": "#/components/responses/BadRequest"
},
"401": {
"$ref": "#/components/responses/Unauthorized"
},
"429": {
"$ref": "#/components/responses/RateLimited"
},
"500": {
"$ref": "#/components/responses/InternalServerError"
}
}
}
},
"/v1/text/completions": {
"post": {
"summary": "Create Text Completion",
"description": "Creates a text completion from a prompt. Useful for text generation, summarization, and other non-conversational tasks.",
"operationId": "createTextCompletion",
"tags": ["Text Completions"],
"requestBody": {
"required": true,
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/TextCompletionRequest"
},
"examples": {
"simple_text": {
"summary": "Simple text completion",
"value": {
"provider": "openai",
"model": "gpt-3.5-turbo-instruct",
"text": "The future of artificial intelligence is",
"params": {
"max_tokens": 100,
"temperature": 0.7
}
}
},
"with_stop_sequences": {
"summary": "Text completion with stop sequences",
"value": {
"provider": "cohere",
"model": "command",
"text": "Write a short story about a robot:",
"params": {
"max_tokens": 200,
"temperature": 0.8,
"stop_sequences": ["\n\n", "THE END"]
}
}
}
}
}
}
},
"responses": {
"200": {
"description": "Successful text completion",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/BifrostResponse"
},
"examples": {
"text_response": {
"summary": "Text completion response",
"value": {
"id": "cmpl-789",
"object": "text.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The future of artificial intelligence is incredibly promising, with advances in machine learning, natural language processing, and robotics reshaping industries and daily life."
},
"finish_reason": "stop"
}
],
"model": "gpt-3.5-turbo-instruct",
"created": 1677652288,
"usage": {
"prompt_tokens": 8,
"completion_tokens": 32,
"total_tokens": 40
},
"extra_fields": {
"provider": "openai",
"model_params": {
"max_tokens": 100,
"temperature": 0.7
},
"latency": 0.654,
"raw_response": {}
}
}
}
}
}
}
},
"400": {
"$ref": "#/components/responses/BadRequest"
},
"401": {
"$ref": "#/components/responses/Unauthorized"
},
"429": {
"$ref": "#/components/responses/RateLimited"
},
"500": {
"$ref": "#/components/responses/InternalServerError"
}
}
}
},
"/metrics": {
"get": {
"summary": "Get Prometheus Metrics",
"description": "Returns Prometheus-compatible metrics for monitoring request counts, latency, token usage, and error rates.",
"operationId": "getMetrics",
"tags": ["Monitoring"],
"responses": {
"200": {
"description": "Prometheus metrics in text format",
"content": {
"text/plain": {
"schema": {
"type": "string"
},
"example": "# HELP http_requests_total Total number of HTTP requests\n# TYPE http_requests_total counter\nhttp_requests_total{method=\"POST\",handler=\"/v1/chat/completions\",code=\"200\"} 42\n"
}
}
}
}
}
}
},
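One subtlety worth calling out from the tool-calling response example in the spec above: `function.arguments` is a JSON-encoded *string*, not an object, so clients need a second decode pass. A minimal sketch of handling such a response (the response literal is abridged from the spec's own example):

```python
import json

# Tool-calling response, abridged from the spec example above.
response = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": None,
                "tool_calls": [
                    {
                        "id": "call_123",
                        "type": "function",
                        "function": {
                            "name": "get_weather",
                            "arguments": "{\"location\": \"San Francisco, CA\"}",
                        },
                    }
                ],
            },
            "finish_reason": "tool_calls",
        }
    ]
}

choice = response["choices"][0]
if choice["finish_reason"] == "tool_calls":
    for call in choice["message"]["tool_calls"]:
        # "arguments" is a JSON string, so it needs its own json.loads pass.
        args = json.loads(call["function"]["arguments"])
        print(call["function"]["name"], args["location"])
```

This mirrors the OpenAI-style convention the spec follows, where tool arguments are serialized as a string inside the response body.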
🧹 Nitpick (assertive)

Enforce maxItems on critical arrays.

Schemas like ChatCompletionRequest.messages (lines 355–361), fallbacks, tools, and tool_calls only specify minItems. To guard against oversized payloads, add reasonable maxItems constraints. Example:

"messages":
  type: array
  items:
    $ref: "#/components/schemas/BifrostMessage"
- minItems: 1
+ minItems: 1
+ maxItems: 100

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In docs/openapi.json around lines 355 to 361 and related schema definitions, the
arrays such as ChatCompletionRequest.messages, fallbacks, tools, and tool_calls
currently only have minItems constraints. To prevent excessively large payloads,
add appropriate maxItems constraints to these arrays in their schema
definitions, setting reasonable upper limits to enforce size restrictions.
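The suggested constraint can also be enforced server-side before dispatch. A minimal sketch of such a guard, where the cap of 100 is the reviewer's illustrative value, not a documented Bifrost limit:

```python
# Illustrative guard matching the suggested minItems/maxItems constraints.
# MAX_MESSAGES = 100 is the reviewer's example value, not a documented limit.
MAX_MESSAGES = 100

def validate_messages(messages):
    """Reject payloads that violate minItems: 1 or the proposed maxItems cap."""
    if not isinstance(messages, list) or len(messages) < 1:
        raise ValueError("messages must be a non-empty array (minItems: 1)")
    if len(messages) > MAX_MESSAGES:
        raise ValueError(
            f"messages exceeds maxItems ({len(messages)} > {MAX_MESSAGES})"
        )
    return messages

validate_messages([{"role": "user", "content": "hi"}])  # passes
try:
    validate_messages([{"role": "user", "content": "x"}] * 101)
except ValueError as exc:
    print(exc)
```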

Comment thread on `transports/README.md`
You need to pass all environment variables referenced in your `config.json` file that start with the prefix `env.` to the docker run command using the -e flag. This ensures Docker sets them correctly inside the container.

```diff
- example usage: Suppose your config.json only contains one environment variable placeholder, `env.COHERE_API_KEY`. Heres how you would run it:
+ example usage: Suppose your config.json only contains one environment variable placeholder, `env.COHERE_API_KEY`. Here's how you would run it:
```
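The `env.`-prefix convention described above lends itself to automation. A hypothetical helper that scans a `config.json` for `env.`-prefixed placeholders and emits the matching `docker run -e` flags (the config content below is an illustrative example, not from the actual repository):

```python
import json

# Illustrative config using the env.<NAME> placeholder convention.
config_text = json.dumps({
    "providers": {"cohere": {"keys": ["env.COHERE_API_KEY"]}}
})

def env_placeholders(text):
    """Collect variable names referenced as env.<NAME> in the config text."""
    names = set()
    for token in text.replace('"', " ").split():
        if token.startswith("env."):
            names.add(token[len("env."):])
    return sorted(names)

flags = " ".join(f"-e {name}" for name in env_placeholders(config_text))
print(f"docker run {flags} ... bifrost")
```

This keeps the `docker run` invocation in sync with whatever placeholders the config actually references.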
🧹 Nitpick (assertive)

Capitalize and format "Example usage".

Align with other headings—use bold or a heading style and backtick-wrap config.json:

```diff
- example usage: Suppose your config.json only contains...
+ **Example Usage:** Suppose your `config.json` only contains...
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
- example usage: Suppose your config.json only contains one environment variable placeholder, `env.COHERE_API_KEY`. Here's how you would run it:
+ **Example Usage:** Suppose your `config.json` only contains one environment variable placeholder, `env.COHERE_API_KEY`. Here's how you would run it:
```
🤖 Prompt for AI Agents
In transports/README.md at line 99, capitalize "Example usage" and format it
consistently with other headings by making it bold or using a heading style.
Also, wrap `config.json` in backticks to highlight it as code.

Comment on lines +7 to +9
```
http://localhost:8080
```

🧹 Nitpick (assertive)

Specify language for fenced code block.

Add a language label (e.g., text) to the Base URL code fence to satisfy markdown lint rules:

```text
http://localhost:8080
```
<details>
<summary>🧰 Tools</summary>

<details>
<summary>🪛 markdownlint-cli2 (0.17.2)</summary>

7-7: Fenced code blocks should have a language specified
null

(MD040, fenced-code-language)

</details>

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

In docs/http-transport-api.md at lines 7 to 9, the fenced code block showing the
Base URL lacks a language specifier, which violates markdown lint rules. Add a
language label such as "text" immediately after the opening triple backticks to
specify the code block language, ensuring proper markdown formatting and lint
compliance.


</details>

<!-- This is an auto-generated comment by CodeRabbit -->

@akshaydeo akshaydeo deleted the 06-04-feat_swagger_docs_and_references_added_for_http_transport branch August 31, 2025 17:30
akshaydeo pushed a commit that referenced this pull request Nov 17, 2025
# Add HTTP Transport API Documentation

Added comprehensive API documentation for the Bifrost HTTP transport, including:

- A detailed markdown reference guide (`docs/http-transport-api.md`) covering all endpoints, request/response formats, and schema definitions
- A complete OpenAPI 3.0 specification file (`docs/openapi.json`) for machine-readable API documentation
- Updated the transports README with a link to the new API documentation

The documentation provides detailed information about the chat completions and text completions endpoints, including request parameters, response formats, error handling, and examples in multiple programming languages. It also covers authentication, monitoring, and supported model providers.
