Merged
3 changes: 1 addition & 2 deletions README.md
@@ -75,8 +75,7 @@ curl -X POST http://localhost:8080/providers \
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "Hello from Bifrost! 🌈"}
     ]
3 changes: 1 addition & 2 deletions docs/mcp.md
@@ -33,8 +33,7 @@ Bifrost's Model Context Protocol integration enables AI models to seamlessly dis
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "List the files in the /tmp directory"}
     ]
21 changes: 8 additions & 13 deletions docs/quickstart/http-transport.md
@@ -132,8 +132,7 @@ docker run -p 8080:8080 maximhq/bifrost
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [{"role": "user", "content": "Hello, Bifrost!"}]
   }'
 ```
@@ -242,12 +241,12 @@ export ANTHROPIC_API_KEY="your-anthropic-key"
 # Use OpenAI
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
-  -d '{"provider": "openai", "model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello from OpenAI!"}]}'
+  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "Hello from OpenAI!"}]}'
 
 # Use Anthropic
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
-  -d '{"provider": "anthropic", "model": "claude-3-sonnet-20240229", "messages": [{"role": "user", "content": "Hello from Anthropic!"}]}'
+  -d '{"model": "anthropic/claude-3-sonnet-20240229", "messages": [{"role": "user", "content": "Hello from Anthropic!"}]}'
 ```
 
 ### **🔄 Add Automatic Fallbacks**
@@ -257,10 +256,9 @@ curl -X POST http://localhost:8080/v1/chat/completions \
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [{"role": "user", "content": "Hello!"}],
-    "fallbacks": [{"provider": "anthropic", "model": "claude-3-sonnet-20240229"}]
+    "fallbacks": ["anthropic/claude-3-sonnet-20240229"]
   }'
 ```
@@ -276,8 +274,7 @@ import requests
 response = requests.post(
     "http://localhost:8080/v1/chat/completions",
     json={
-        "provider": "openai",
-        "model": "gpt-4o-mini",
+        "model": "openai/gpt-4o-mini",
         "messages": [{"role": "user", "content": "Hello from Python!"}]
     }
 )
@@ -291,8 +288,7 @@ const response = await fetch("http://localhost:8080/v1/chat/completions", {
   method: "POST",
   headers: { "Content-Type": "application/json" },
   body: JSON.stringify({
-    provider: "openai",
-    model: "gpt-4o-mini",
+    model: "openai/gpt-4o-mini",
     messages: [{ role: "user", content: "Hello from Node.js!" }],
   }),
 });
@@ -306,8 +302,7 @@ response, err := http.Post(
   "http://localhost:8080/v1/chat/completions",
   "application/json",
   strings.NewReader(`{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [{"role": "user", "content": "Hello from Go!"}]
   }`)
 )
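For the Python snippet above, reading the reply follows the OpenAI-compatible response that the `/v1/chat/completions` endpoint returns. A minimal sketch, assuming the standard choices/message layout:

```python
import requests

# Same request as the Python example above, using the unified "provider/model" id
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello from Python!"}],
    },
)
response.raise_for_status()

# Assumed OpenAI-style layout: the reply text lives at choices[0].message.content
print(response.json()["choices"][0]["message"]["content"])
```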
3 changes: 1 addition & 2 deletions docs/usage/errors.md
@@ -221,8 +221,7 @@ client = BifrostClient("http://localhost:8080")
 
 try:
     response = client.chat_completion({
-        "provider": "openai",
-        "model": "gpt-4o-mini",
+        "model": "openai/gpt-4o-mini",
         "messages": [{"role": "user", "content": "Hello!"}]
     })
     print("Success:", response)
7 changes: 3 additions & 4 deletions docs/usage/http-transport/README.md
@@ -28,7 +28,7 @@ open http://localhost:8080
 # Make requests to any provider
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
-  -d '{"provider": "openai", "model": "gpt-4o-mini", "messages": [...]}'
+  -d '{"model": "openai/gpt-4o-mini", "messages": [...]}'
 ```
 
 ---
@@ -171,10 +171,9 @@ const response = await openai.chat.completions.create({
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [{"role": "user", "content": "Hello!"}],
-    "fallbacks": [{"provider": "anthropic", "model": "claude-3-sonnet-20240229"}]
+    "fallbacks": ["anthropic/claude-3-sonnet-20240229"]
   }'
 
 # OpenAI-compatible endpoint
21 changes: 7 additions & 14 deletions docs/usage/http-transport/configuration/mcp.md
@@ -209,8 +209,7 @@ Tools are automatically available in chat completions:
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "List the files in the current directory"}
     ]
@@ -269,8 +268,7 @@ When MCP is configured, Bifrost automatically adds available tools to requests.
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "Can you list the files in the /tmp directory?"}
     ]
@@ -337,8 +335,7 @@ curl -X POST http://localhost:8080/v1/mcp/tool/execute \
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "Can you list the files in the /tmp directory?"},
      {
@@ -394,8 +391,7 @@ Control which MCP tools are available per request using context:
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "List files and search web"}
     ],
@@ -407,8 +403,7 @@ curl -X POST http://localhost:8080/v1/chat/completions \
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "Help me with file operations"}
     ],
@@ -479,8 +474,7 @@ bifrost-http -config config.json -port 8080 -plugins maxim
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "What files are in this directory?"}
     ]
@@ -563,8 +557,7 @@ docker logs bifrost-container | grep MCP
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {
        "role": "user",
8 changes: 3 additions & 5 deletions docs/usage/http-transport/configuration/providers.md
@@ -497,20 +497,18 @@ docker run -p 8080:8080 \
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [{"role": "user", "content": "Test message"}]
   }'
 
 # Test with fallbacks
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [{"role": "user", "content": "Test message"}],
     "fallbacks": [
-      {"provider": "anthropic", "model": "claude-3-sonnet-20240229"}
+      "anthropic/claude-3-sonnet-20240229"
     ]
   }'
 ```
33 changes: 11 additions & 22 deletions docs/usage/http-transport/endpoints.md
@@ -31,8 +31,7 @@ Chat conversation endpoint supporting all providers.
 
 ```json
 {
-  "provider": "openai",
-  "model": "gpt-4o-mini",
+  "model": "openai/gpt-4o-mini",
   "messages": [
     {
       "role": "user",
@@ -43,12 +42,7 @@ Chat conversation endpoint supporting all providers.
     "temperature": 0.7,
     "max_tokens": 1000
   },
-  "fallbacks": [
-    {
-      "provider": "anthropic",
-      "model": "claude-3-sonnet-20240229"
-    }
-  ]
+  "fallbacks": ["anthropic/claude-3-sonnet-20240229"]
 }
 ```
 
@@ -83,8 +77,7 @@ Chat conversation endpoint supporting all providers.
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "What is the capital of France?"}
     ]
@@ -99,8 +92,7 @@ Text completion endpoint for simple text generation.
 
 ```json
 {
-  "provider": "openai",
-  "model": "gpt-4o-mini",
+  "model": "openai/gpt-4o-mini",
   "text": "The future of AI is",
   "params": {
     "temperature": 0.8,
@@ -254,12 +246,11 @@ bifrost_provider_errors_total{provider="openai",error_type="rate_limit"} 23
 
 ### **Common Parameters**
 
-| Parameter | Type | Description | Example |
-| ----------- | ------ | ----------------------- | ----------------------------- |
-| `provider` | string | AI provider to use | `"openai"` |
-| `model` | string | Model name | `"gpt-4o-mini"` |
-| `params` | object | Model parameters | `{"temperature": 0.7}` |
-| `fallbacks` | array | Fallback configurations | `[{"provider": "anthropic"}]` |
+| Parameter | Type | Description | Example |
+| ----------- | ------ | ----------------------- | ---------------------------------------- |
+| `model` | string | Provider and model name | `"openai/gpt-4o-mini"` |
+| `params` | object | Model parameters | `{"temperature": 0.7}` |
+| `fallbacks` | array | Fallback model names | `["anthropic/claude-3-sonnet-20240229"]` |
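Taken together, the unified fields combine into a single request body. A minimal Python sketch (the model ids and parameter values are illustrative):

```python
import requests

# One request body covering the common parameters from the table above:
# a "provider/model" id, optional params, and plain-string fallback models.
payload = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
    "params": {"temperature": 0.7, "max_tokens": 1000},
    "fallbacks": ["anthropic/claude-3-sonnet-20240229"],
}

response = requests.post("http://localhost:8080/v1/chat/completions", json=payload)
print(response.json())
```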

### **Model Parameters**

@@ -315,8 +306,7 @@ MCP tools are automatically available in chat completions:
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "List files in the current directory"}
     ]
@@ -354,8 +344,7 @@ curl -X POST http://localhost:8080/v1/chat/completions \
 # Initial request
 curl -X POST http://localhost:8080/v1/chat/completions \
   -d '{
-    "provider": "openai",
-    "model": "gpt-4o-mini",
+    "model": "openai/gpt-4o-mini",
     "messages": [
      {"role": "user", "content": "Read the README.md file"},
      {