feat: support 100k context window model
jtsang4 committed May 11, 2023
1 parent 8cfe749 commit fda2616
Showing 3 changed files with 30 additions and 5 deletions.
22 changes: 19 additions & 3 deletions README.md
@@ -4,19 +4,21 @@ This project converts the API of Anthropic's Claude model to the OpenAI Chat API

* ✨ Call Claude API like OpenAI ChatGPT API
* 💦 Support streaming response
-* 🐻 Only support `claude-v1.3` model currently
+* 🐻 Support `claude-v1.3` (gpt-3.5-turbo-0301) and `claude-v1.3-100k` (gpt-3.5-turbo, gpt-4, gpt-4-0314) models

## Getting Started

You can run this project using Docker or Docker Compose:

-### Using Docker
+### Deployment
+
+#### Using Docker

```bash
docker run -p 8000:8000 wtzeng/claude-to-chatgpt:latest
```

-### Using Docker Compose
+#### Using Docker Compose

```bash
docker-compose up
@@ -25,6 +27,20 @@ docker-compose up

The API will then be available at http://localhost:8000. API endpoint: `/v1/chat/completions`

+### Usage
+
+When you input the model parameter as `gpt-3.5-turbo-0301`, it will be substituted with `claude-v1.3`; otherwise, `claude-v1.3-100k` will be used.
+
+```bash
+curl http://localhost:8000/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $CLAUDE_API_KEY" \
+  -d '{
+    "model": "gpt-3.5-turbo",
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'
+```
+
## Conversion Details

The Claude Completion API has an endpoint `/v1/complete` which takes the following JSON request:
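For reference, the request added in the new Usage section can also be issued from Python. A minimal sketch, assuming the proxy is running locally on port 8000 and `CLAUDE_API_KEY` is exported in the environment (illustrative client code, not part of this commit):

```python
import os

import requests

# POST an OpenAI-style chat completion request to the local proxy.
# The proxy substitutes "gpt-3.5-turbo" with "claude-v1.3-100k" before calling Claude.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['CLAUDE_API_KEY']}",
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json())
```
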
6 changes: 4 additions & 2 deletions claude_to_chatgpt/adapter.py
@@ -5,6 +5,7 @@
from fastapi import Request
from claude_to_chatgpt.util import num_tokens_from_string
from claude_to_chatgpt.logger import logger
+from claude_to_chatgpt.models import model_map

role_map = {
"system": "Human",
@@ -41,14 +42,15 @@ def convert_messages_to_prompt(self, messages):
        return prompt

    def openai_to_claude_params(self, openai_params):
+        model = model_map.get(openai_params["model"], "claude-v1.3-100k")
        messages = openai_params["messages"]

        prompt = self.convert_messages_to_prompt(messages)

        claude_params = {
-            "model": "claude-v1.3",
+            "model": model,
            "prompt": prompt,
-            "max_tokens_to_sample": 9016,
+            "max_tokens_to_sample": 100000 if model == "claude-v1.3-100k" else 9016,
        }

        if openai_params.get("max_tokens"):
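Taken together, the adapter changes above mean the incoming OpenAI model name now drives both the Claude model choice and the sampling budget. A minimal standalone sketch of that selection logic, assuming the `claude_to_chatgpt` package is importable (`select_claude_model_and_budget` is an illustrative name, not a function in the repo):

```python
from claude_to_chatgpt.models import model_map  # mapping added in this commit (see below)


def select_claude_model_and_budget(openai_model: str):
    # Unknown OpenAI model names fall back to the 100k-context Claude model.
    model = model_map.get(openai_model, "claude-v1.3-100k")
    # The 100k model gets a matching 100000-token budget; claude-v1.3 keeps 9016.
    max_tokens_to_sample = 100000 if model == "claude-v1.3-100k" else 9016
    return model, max_tokens_to_sample


print(select_claude_model_and_budget("gpt-4"))               # ('claude-v1.3-100k', 100000)
print(select_claude_model_and_budget("gpt-3.5-turbo-0301"))  # ('claude-v1.3', 9016)
```
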
7 changes: 7 additions & 0 deletions claude_to_chatgpt/models.py
@@ -96,3 +96,10 @@
"parent": None,
},
]

+model_map = {
+    "gpt-3.5-turbo": "claude-v1.3-100k",
+    "gpt-3.5-turbo-0301": "claude-v1.3",
+    "gpt-4": "claude-v1.3-100k",
+    "gpt-4-0314": "claude-v1.3-100k",
+}
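Because the adapter calls `model_map.get(...)` with `"claude-v1.3-100k"` as the default, model names missing from this table also resolve to the 100k variant. A quick illustration (`gpt-4-32k` is just an arbitrary unmapped example, again assuming the package is importable):

```python
from claude_to_chatgpt.models import model_map

# A mapped alias resolves directly; an unmapped name falls back to the default.
assert model_map.get("gpt-3.5-turbo-0301", "claude-v1.3-100k") == "claude-v1.3"
assert model_map.get("gpt-4-32k", "claude-v1.3-100k") == "claude-v1.3-100k"
```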
