Merge branch 'main' into fix/async-function-and-tool-execution
sonichi authored Oct 31, 2023
2 parents 9e3a1ff + b432c1b commit b27ff86
Showing 29 changed files with 3,793 additions and 264 deletions.
24 changes: 16 additions & 8 deletions .devcontainer/devcontainer.json
```diff
@@ -1,13 +1,21 @@
 {
-    "extensions": ["ms-python.python", "visualstudioexptteam.vscodeintellicode"],
-    "dockerFile": "Dockerfile",
-    "settings": {
-        "terminal.integrated.profiles.linux": {
-            "bash": {
-                "path": "/bin/bash"
-            }
-        },
-        "terminal.integrated.defaultProfile.linux": "bash"
-    },
+    "customizations": {
+        "vscode": {
+            "extensions": [
+                "ms-python.python",
+                "ms-toolsai.jupyter",
+                "visualstudioexptteam.vscodeintellicode"
+            ],
+            "settings": {
+                "terminal.integrated.profiles.linux": {
+                    "bash": {
+                        "path": "/bin/bash"
+                    }
+                },
+                "terminal.integrated.defaultProfile.linux": "bash"
+            }
+        }
+    },
+    "dockerFile": "Dockerfile",
     "updateContentCommand": "pip install -e . pre-commit && pre-commit install"
 }
```
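Since `devcontainer.json` must remain valid JSON after a restructuring like the one above, a quick sanity check is to parse the new content and probe the keys the Dev Containers tooling will read. A minimal sketch (the embedded string is the new file content from the diff):

```python
import json

# The new devcontainer.json content introduced by this commit.
new_config = """
{
    "customizations": {
        "vscode": {
            "extensions": [
                "ms-python.python",
                "ms-toolsai.jupyter",
                "visualstudioexptteam.vscodeintellicode"
            ],
            "settings": {
                "terminal.integrated.profiles.linux": {
                    "bash": {"path": "/bin/bash"}
                },
                "terminal.integrated.defaultProfile.linux": "bash"
            }
        }
    },
    "dockerFile": "Dockerfile",
    "updateContentCommand": "pip install -e . pre-commit && pre-commit install"
}
"""

cfg = json.loads(new_config)  # raises ValueError if the JSON is malformed
vscode = cfg["customizations"]["vscode"]
print("ms-toolsai.jupyter" in vscode["extensions"])  # → True
```

The editor-specific keys now live under `customizations.vscode`, while container-level keys (`dockerFile`, `updateContentCommand`) stay at the top level.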
5 changes: 5 additions & 0 deletions .github/workflows/openai.yml
```diff
@@ -56,6 +56,10 @@ jobs:
       - name: Install packages for Teachable when needed
         run: |
           pip install -e .[teachable]
+      - name: Install packages for RetrieveChat with QDrant when needed
+        if: matrix.python-version == '3.11'
+        run: |
+          pip install -e .[retrievechat] qdrant_client[fastembed]
       - name: Coverage
         if: matrix.python-version == '3.9'
         env:
@@ -76,6 +80,7 @@ jobs:
           OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
         run: |
           pip install nbconvert nbformat ipykernel
+          coverage run -a -m pytest test/agentchat/test_qdrant_retrievechat.py
           coverage run -a -m pytest test/test_with_openai.py
           coverage run -a -m pytest test/test_notebook.py
           coverage xml
```
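The `if: matrix.python-version == '3.11'` condition gates the new QDrant step so it runs for exactly one matrix entry. The per-version step selection is equivalent to this sketch (step names shortened, hypothetical helper, not part of the workflow itself):

```python
def conditional_steps(python_version):
    """Mirror the workflow's version-gated step selection (names shortened)."""
    steps = ["install-teachable"]          # unconditional step
    if python_version == "3.11":           # QDrant step added in this commit
        steps.append("install-retrievechat-qdrant")
    if python_version == "3.9":            # pre-existing coverage gate
        steps.append("coverage")
    return steps

print(conditional_steps("3.11"))  # → ['install-teachable', 'install-retrievechat-qdrant']
```

Pinning an optional dependency stack to a single matrix entry keeps the rest of the matrix fast while still exercising the new extra on one Python version.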
11 changes: 5 additions & 6 deletions README.md
```diff
@@ -13,10 +13,9 @@ This project is a spinoff from [FLAML](https://github.com/microsoft/FLAML).
 <br>
 </p> -->
 
-:fire: autogen has graduated from [FLAML](https://github.com/microsoft/FLAML) into a new project.
-
-<!-- :fire: Heads-up: We're preparing to migrate [autogen](https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen) into a dedicated Github repository. Alongside this move, we'll also launch a dedicated Discord server and a website for comprehensive documentation.
+:fire: Heads-up: pyautogen v0.2 will switch to using openai v1.
 
+<!--
 :fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
 :fire: [autogen](https://microsoft.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
```
```diff
@@ -33,7 +32,7 @@ AutoGen is a framework that enables the development of LLM applications using mu
 - It supports **diverse conversation patterns** for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
 the number of agents, and agent conversation topology.
 - It provides a collection of working systems with different complexities. These systems span a **wide range of applications** from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
-- AutoGen provides a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` as an **enhanced inference API**. It allows easy performance tuning, utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.
+- AutoGen provides **enhanced LLM inference**. It offers easy performance tuning, plus utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.
 
 AutoGen is powered by collaborative [research studies](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.
```

````diff
@@ -111,7 +110,7 @@ Please find more [code examples](https://microsoft.github.io/autogen/docs/Exampl
 
 ## Enhanced LLM Inferences
 
-Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` adding powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
+Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
 
 ```python
 # perform tuning
````
```diff
@@ -197,7 +196,7 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope
 For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
 contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
 
-## Contributers Wall
+## Contributors Wall
 <a href="https://github.com/microsoft/autogen/graphs/contributors">
   <img src="https://contrib.rocks/image?repo=microsoft/autogen" />
 </a>
```
2 changes: 1 addition & 1 deletion autogen/agentchat/contrib/math_user_proxy_agent.py
```diff
@@ -177,7 +177,7 @@ def __init__(
         self.last_reply = None
 
     def generate_init_message(self, problem, prompt_type="default", customized_prompt=None):
-        """Generate a prompt for the assitant agent with the given problem and prompt.
+        """Generate a prompt for the assistant agent with the given problem and prompt.
 
         Args:
             problem (str): the problem to be solved.
```