Custom Model Client docs follow-up (#1545)
* custom model client docs followup

* fix function name in docs

* Update website/docs/Use-Cases/enhanced_inference.md

Co-authored-by: Chi Wang <[email protected]>

* Update website/docs/Use-Cases/enhanced_inference.md

Co-authored-by: Chi Wang <[email protected]>

* Update website/docs/Use-Cases/enhanced_inference.md

Co-authored-by: Chi Wang <[email protected]>

* Update website/docs/Use-Cases/enhanced_inference.md

Co-authored-by: Chi Wang <[email protected]>

---------

Co-authored-by: Chi Wang <[email protected]>
olgavrou and sonichi authored Feb 5, 2024
1 parent d999b45 commit b1817ab
Showing 5 changed files with 20 additions and 15 deletions.
2 changes: 2 additions & 0 deletions autogen/oai/client.py
@@ -77,6 +77,8 @@ class Choice(Protocol):
             class Message(Protocol):
                 content: Optional[str]
 
+            message: Message
+
         choices: List[Choice]
         model: str
 
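For context, not part of this commit: the `message: Message` field added in the hunk above means a custom client's response object only needs the following shape to satisfy `ModelClient.ModelClientResponseProtocol`. A minimal sketch, where `SimpleNamespace` is just a stand-in for whatever object a custom client actually returns:

```python
from types import SimpleNamespace

# A duck-typed response: model, choices, and choices[i].message.content
# are the attributes the protocol above describes.
response = SimpleNamespace(
    model="my-model",
    choices=[SimpleNamespace(message=SimpleNamespace(content="Hello from my custom client"))],
)
print(response.choices[0].message.content)  # -> "Hello from my custom client"
```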
4 changes: 3 additions & 1 deletion notebook/agentchat_custom_model.ipynb
@@ -94,7 +94,9 @@
 "    class ModelClientResponseProtocol(Protocol):\n",
 "        class Choice(Protocol):\n",
 "            class Message(Protocol):\n",
-"                content: str | None\n",
+"                content: Optional[str]\n",
+"\n",
+"            message: Message\n",
 "\n",
 "        choices: List[Choice]\n",
 "        model: str\n",
4 changes: 3 additions & 1 deletion website/blog/2024-01-26-Custom-Models/index.mdx
@@ -122,7 +122,9 @@ class ModelClient(Protocol):
     class ModelClientResponseProtocol(Protocol):
         class Choice(Protocol):
             class Message(Protocol):
-                content: str | None
+                content: Optional[str]
+
+            message: Message
 
         choices: List[Choice]
         model: str
5 changes: 4 additions & 1 deletion website/docs/FAQ.md
@@ -89,7 +89,10 @@ In version >=1, OpenAI renamed their `api_base` parameter to `base_url`. So for
 
 ### Can I use non-OpenAI models?
 
-Yes. Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+Yes. You currently have two options:
+
+- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+- You can supply your own custom model implementation and use it with Autogen. Please check https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models for more information.
 
 ## Handle Rate Limit Error and Timeout Error
 
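To illustrate the first option above, a hedged sketch, not part of this commit, of a config entry pointing autogen at a local OpenAI-compatible server; the model name, port, and path are placeholders for whatever FastChat or LM Studio is actually serving:

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(
    config_list=[
        {
            "model": "chatglm2-6b",                  # placeholder: the model your local server hosts
            "base_url": "http://localhost:8000/v1",  # placeholder: the server's OpenAI-compatible endpoint
            "api_key": "NULL",                       # local servers typically ignore the key, but the field is expected
        }
    ]
)
response = client.create(messages=[{"role": "user", "content": "Hi"}])
print(client.extract_text_or_completion_object(response))
```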
20 changes: 8 additions & 12 deletions website/docs/Use-Cases/enhanced_inference.md
@@ -107,9 +107,6 @@ The tuned config can be used to perform inference.
 
 ## API unification
 
-<!-- `autogen.Completion.create` is compatible with both `openai.Completion.create` and `openai.ChatCompletion.create`, and both OpenAI API and Azure OpenAI API. So models such as "text-davinci-003", "gpt-3.5-turbo" and "gpt-4" can share a common API.
-When chat models are used and `prompt` is given as the input to `autogen.Completion.create`, the prompt will be automatically converted into `messages` to fit the chat completion API requirement. One advantage is that one can experiment with both chat and non-chat models for the same prompt in a unified API. -->
-
 `autogen.OpenAIWrapper.create()` can be used to create completions for both chat and non-chat models, and both OpenAI API and Azure OpenAI API.
 
 ```python
@@ -133,7 +130,7 @@ print(client.extract_text_or_completion_object(response))
 
 For local LLMs, one can spin up an endpoint using a package like [FastChat](https://github.com/lm-sys/FastChat), and then use the same API to send a request. See [here](/blog/2023/07/14/Local-LLMs) for examples on how to make inference with local LLMs.
 
-<!-- When only working with the chat-based models, `autogen.ChatCompletion` can be used. It also does automatic conversion from prompt to messages, if prompt is provided instead of messages. -->
+For custom model clients, one can register the client with `autogen.OpenAIWrapper.register_model_client` and then use the same API to send a request. See [here](/blog/2024/01/26/Custom-Models) for examples on how to make inference with custom model clients.
 
 ## Usage Summary
 
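To make the registration step above concrete, a minimal sketch of a custom client, not part of this commit and following the `ModelClient` protocol shown earlier in this diff; the canned reply stands in for a real model call, and the exact keys in `params` are an assumption:

```python
from types import SimpleNamespace

class CustomModelClient:
    def __init__(self, config, **kwargs):
        self.model = config["model"]

    def create(self, params):
        # Run your model here; a fixed echo keeps the sketch self-contained.
        text = f"echo from {self.model}: {params['messages'][-1]['content']}"
        return SimpleNamespace(model=self.model, choices=[SimpleNamespace(message=SimpleNamespace(content=text))])

    def message_retrieval(self, response):
        return [choice.message.content for choice in response.choices]

    def cost(self, response):
        return 0.0

    @staticmethod
    def get_usage(response):
        return {}  # no usage reported -> this client will not appear in the usage summary
```

It would then be activated with `client.register_model_client(model_client_cls=CustomModelClient)` before the first `create()` call, as in the sketch after the last hunk below.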
@@ -166,6 +163,8 @@ Total cost: 0.00027
 * Model 'gpt-3.5-turbo': cost: 0.00027, prompt_tokens: 50, completion_tokens: 100, total_tokens: 150
 ```
 
+Note: if using a custom model client (see [here](/blog/2024/01/26/Custom-Models) for details) and if usage summary is not implemented, then the usage summary will not be available.
+
 ## Caching
 
 API call results are cached locally and reused when the same request is issued.
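For context, the summary shown above is produced by the wrapper's usage-tracking helpers; a brief sketch, assuming the `OpenAIWrapper` usage-summary methods and a `client` that has already served some requests:

```python
# After one or more client.create(...) calls:
client.print_usage_summary()   # totals plus the per-model token/cost breakdown shown above
client.clear_usage_summary()   # reset the accumulated counts
```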
@@ -241,13 +240,6 @@ The differences between autogen's `cache_seed` and openai's `seed`:
 
 ### Runtime error
 
-<!-- It is easy to hit error when calling OpenAI APIs, due to connection, rate limit, or timeout. Some of the errors are transient. `autogen.Completion.create` deals with the transient errors and retries automatically. Request timeout, max retry period and retry wait time can be configured via `request_timeout`, `max_retry_period` and `retry_wait_time`.
-- `request_timeout` (int): the timeout (in seconds) sent with a single request.
-- `max_retry_period` (int): the total time (in seconds) allowed for retrying failed requests.
-- `retry_wait_time` (int): the time interval to wait (in seconds) before retrying a failed request.
-Moreover, -->
 One can pass a list of configurations of different models/endpoints to mitigate the rate limits and other runtime error. For example,
 
 ```python
@@ -268,12 +260,16 @@ client = OpenAIWrapper(
         {
             "model": "llama2-chat-7B",
             "base_url": "http://127.0.0.1:8080",
         },
+        {
+            "model": "microsoft/phi-2",
+            "model_client_cls": "CustomModelClient"
+        }
     ],
 )
 ```
 
-`client.create()` will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, and a locally hosted llama2-chat-7B one by one,
+`client.create()` will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, a locally hosted llama2-chat-7B, and phi-2 using a custom model client class named `CustomModelClient`, one by one,
 until a valid result is returned. This can speed up the development process where the rate limit is a bottleneck. An error will be raised if the last choice fails. So make sure the last choice in the list has the best availability.
 
 For convenience, we provide a number of utility functions to load config lists.
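For the config list in the hunk above, a hedged sketch, not part of this commit, of how the custom entry would be wired up before the fallback `create()` call; it continues the `client` built from that config list and the `CustomModelClient` sketch earlier:

```python
# Entries carrying "model_client_cls" stay inactive until the class is registered.
client.register_model_client(model_client_cls=CustomModelClient)

response = client.create(messages=[{"role": "user", "content": "2 + 2 = ?"}])
print(client.extract_text_or_completion_object(response))
```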
