
Commit 6e74e2d

headsup about dependency change (#378)
* headsup about dependency change
* more change
1 parent a8da385 · commit 6e74e2d

4 files changed, +10 -8 lines changed

README.md

+4-5
@@ -13,10 +13,9 @@ This project is a spinoff from [FLAML](https://github.com/microsoft/FLAML).
 <br>
 </p> -->

-:fire: autogen has graduated from [FLAML](https://github.com/microsoft/FLAML) into a new project.
-
-<!-- :fire: Heads-up: We're preparing to migrate [autogen](https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen) into a dedicated Github repository. Alongside this move, we'll also launch a dedicated Discord server and a website for comprehensive documentation.
+:fire: Heads-up: pyautogen v0.2 will switch to using openai v1.

+<!--
 :fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).

 :fire: [autogen](https://microsoft.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
@@ -33,7 +32,7 @@ AutoGen is a framework that enables the development of LLM applications using mu
 - It supports **diverse conversation patterns** for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
 the number of agents, and agent conversation topology.
 - It provides a collection of working systems with different complexities. These systems span a **wide range of applications** from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
-- AutoGen provides a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` as an **enhanced inference API**. It allows easy performance tuning, utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.
+- AutoGen provides **enhanced LLM inference**. It offers easy performance tuning, plus utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.

 AutoGen is powered by collaborative [research studies](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.


@@ -111,7 +110,7 @@ Please find more [code examples](https://microsoft.github.io/autogen/docs/Exampl

 ## Enhanced LLM Inferences

-Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
+Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.

 ```python
 # perform tuning
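# NOTE: illustrative sketch, not part of this commit. The README snippet above is
# cut off by the diff view; a pre-0.2 tuning call looks roughly like the following,
# and the argument names below are assumptions that may differ from the released API.
import autogen

# placeholder tuning data and metric; supply your own examples and scoring logic
tune_data = [{"problem": "What is 2 + 2?", "solution": "4"}]

def eval_func(responses, **data):
    # count a response as a success if it contains the reference solution
    return {"success": any(data["solution"] in r for r in responses)}

config, analysis = autogen.Completion.tune(
    data=tune_data,
    prompt="{problem}",        # prompt template; {problem} is filled from each record
    metric="success",          # the metric reported by eval_func to optimize
    mode="max",
    eval_func=eval_func,
    inference_budget=0.05,     # rough budget per inference instance (USD)
    optimization_budget=3,     # rough total budget for the tuning run (USD)
    num_samples=-1,            # try as many configurations as the budget allows
)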

website/docs/Getting-Started.md

+2-2
@@ -12,7 +12,7 @@ AutoGen is a framework that enables development of LLM applications using multip
 * It supports **diverse conversation patterns** for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
 the number of agents, and agent conversation topology.
 * It provides a collection of working systems with different complexities. These systems span a **wide range of applications** from various domains and complexities. They demonstrate how AutoGen can easily support different conversation patterns.
-* AutoGen provides a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` as an **enhanced inference API**. It allows easy performance tuning, utilities like API unification & caching, and advanced usage patterns, such as error handling, multi-config inference, context programming etc.
+* AutoGen provides **enhanced LLM inference**. It offers easy performance tuning, plus utilities like API unification & caching, and advanced usage patterns, such as error handling, multi-config inference, context programming etc.

 AutoGen is powered by collaborative [research studies](/docs/Research) from Microsoft, Penn State University, and University of Washington.

@@ -44,7 +44,7 @@ The figure below shows an example conversation flow with AutoGen.
 * [Documentation](/docs/Use-Cases/agent_chat).

 #### Enhanced LLM Inferences
-Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalites like tuning, caching, error handling, templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
+Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionalites like tuning, caching, error handling, templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
 ```python
 # perform tuning
 config, analysis = autogen.Completion.tune(

website/docs/Installation.md

+3
@@ -26,6 +26,9 @@ AutoGen requires **Python version >= 3.8**. It can be installed from pip:
 ```bash
 pip install pyautogen
 ```
+
+`pyautogen<0.2` requires `openai<1`. Starting from pyautogen v0.2, `openai>=1` is required.
+
 <!--
 or conda:
 ```
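The note added to Installation.md suggests a simple way to handle the transition: pin `pyautogen` and `openai` together until you are ready to move to the v1 OpenAI client, then upgrade both at once. The commands below are a sketch based on that note, not commands taken from this commit.

```bash
# stay on the pre-1.0 OpenAI client for now
pip install "pyautogen<0.2" "openai<1"

# once pyautogen v0.2 is released, move to openai v1 by upgrading both together
pip install --upgrade "pyautogen>=0.2" "openai>=1"
```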

website/docs/Use-Cases/enhanced_inference.md

+1-1
@@ -1,6 +1,6 @@
 # Enhanced Inference

-`autogen.Completion` is a drop-in replacement of `openai.Completion` and `openai.ChatCompletion` as an enhanced inference API.
+`autogen.Completion` is a drop-in replacement of `openai.Completion` and `openai.ChatCompletion` for enhanced LLM inference.
 There are a number of benefits of using `autogen` to perform inference: performance tuning, API unification, caching, error handling, multi-config inference, result filtering, templating and so on.

 ## Tune Inference Parameters
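As a concrete illustration of the "drop-in replacement" wording above (not code from this commit): with `pyautogen<0.2` and `openai<1`, an existing `openai.Completion.create` call can be pointed at `autogen.Completion.create` with the same generation arguments. The model name and prompt below are placeholders.

```python
import autogen

# before: response = openai.Completion.create(model=..., prompt=..., max_tokens=...)
# after: the same arguments go through autogen, which layers on tuned defaults,
# caching, error handling, templating, etc.
response = autogen.Completion.create(
    model="text-davinci-003",   # placeholder model name
    prompt="Translate 'Hello, world!' into French.",
    max_tokens=32,
)
# the return value follows the OpenAI response format
print(response)
```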
