Commit

Dev/v0.2 (#393)
* api_base -> base_url (#383)

* InvalidRequestError -> BadRequestError (#389)

* remove api_key_path; close #388

* close #402 (#403)

* openai client (#419)

* openai client

* client test

* _client -> client

* _client -> client

* extra kwargs

* Completion -> client (#426)

* Completion -> client

* Completion -> client

* Completion -> client

* Completion -> client

* support aoai

* fix test error

* remove commented code

* support aoai

* annotations

* import

* reduce test

* skip test

* skip test

* skip test

* debug test

* rename test

* update workflow

* update workflow

* env

* py version

* doc improvement

* docstr update

* openai<1

* add tiktoken to dependency

* filter_func

* async test

* dependency

* migration guide (#477)

* migration guide

* change in kwargs

* simplify header

* update optiguide description

* deal with azure gpt-3.5

* add back test_eval_math_responses

* timeout

* Add back tests for RetrieveChat (#480)

* Add back tests for RetrieveChat

* Fix format

* Update dependencies order

* Fix path

* Fix path

* Fix path

* Fix tests

* Add not run openai on MacOS or Win

* Update skip openai tests

* Remove unnecessary dependencies, improve format

* Add py3.8 for testing qdrant

* Fix multiline error of windows

* Add openai tests

* Add dependency mathchat, remove unused envs

* retrieve chat is tested

* bump version to 0.2.0b1

---------
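The bullets above rename several v0.1 config keys for the v0.2 release (`api_base` → `base_url`, `api_key_path` removed to close #388). A minimal sketch of what upgrading an existing LLM config entry might look like — the helper name `migrate_llm_config` is a hypothetical illustration, not part of autogen:

```python
# Hypothetical helper illustrating the v0.1 -> v0.2 config-key changes in this
# commit: api_base becomes base_url (matching openai>=1), and api_key_path is
# dropped, so keys must be supplied directly or via environment variables.
def migrate_llm_config(cfg: dict) -> dict:
    migrated = dict(cfg)
    if "api_base" in migrated:
        migrated["base_url"] = migrated.pop("api_base")
    migrated.pop("api_key_path", None)  # removed in v0.2
    return migrated


old = {"model": "gpt-4", "api_base": "https://example.openai.azure.com", "api_key": "<key>"}
print(migrate_llm_config(old))
```

Error-handling code would need a similar rename, since `openai.error.InvalidRequestError` became `openai.BadRequestError` in the new client.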

Co-authored-by: Li Jiang <[email protected]>
sonichi and thinkall authored Nov 4, 2023
1 parent 5089525 commit c4f8b1c
Showing 71 changed files with 886 additions and 441 deletions.
5 changes: 3 additions & 2 deletions .github/workflows/build.yml
@@ -40,7 +40,7 @@ jobs:
python -m pip install --upgrade pip wheel
pip install -e .
python -c "import autogen"
pip install -e.[mathchat,retrievechat,test] datasets pytest
pip install -e. pytest
pip uninstall -y openai
- name: Test with pytest
if: matrix.python-version != '3.10'
@@ -49,7 +49,8 @@ jobs:
- name: Coverage
if: matrix.python-version == '3.10'
run: |
pip install coverage
pip install -e .[mathchat,test]
pip uninstall -y openai
coverage run -a -m pytest test
coverage xml
- name: Upload coverage to Codecov
58 changes: 58 additions & 0 deletions .github/workflows/contrib-openai.yml
@@ -0,0 +1,58 @@
# This workflow will install Python dependencies and run tests
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: OpenAI4ContribTests

on:
pull_request_target:
branches: ['main']
paths:
- 'autogen/**'
- 'test/agentchat/contrib/**'
- '.github/workflows/contrib-openai.yml'
- 'setup.py'

jobs:
RetrieveChatTest:
strategy:
matrix:
os: [ubuntu-latest]
python-version: ["3.10"]
runs-on: ${{ matrix.os }}
environment: openai1
steps:
# checkout to pr branch
- name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install packages and dependencies
run: |
docker --version
python -m pip install --upgrade pip wheel
pip install -e .
python -c "import autogen"
pip install coverage pytest-asyncio
- name: Install packages for test when needed
run: |
pip install docker
pip install qdrant_client[fastembed]
pip install -e .[retrievechat]
- name: Coverage
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
run: |
coverage run -a -m pytest test/agentchat/contrib/test_retrievechat.py test/agentchat/contrib/test_qdrant_retrievechat.py
coverage xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ./coverage.xml
flags: unittests
59 changes: 59 additions & 0 deletions .github/workflows/contrib-tests.yml
@@ -0,0 +1,59 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: ContribTests

on:
pull_request:
branches: ['main', 'dev/v0.2']
paths:
- 'autogen/**'
- 'test/agentchat/contrib/**'
- '.github/workflows/contrib-tests.yml'
- 'setup.py'

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}

jobs:
RetrieveChatTest:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest, windows-2019]
python-version: ["3.8", "3.9", "3.10", "3.11"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install packages and dependencies for all tests
run: |
python -m pip install --upgrade pip wheel
pip install pytest
- name: Install qdrant_client when python-version is 3.10
if: matrix.python-version == '3.10' || matrix.python-version == '3.8'
run: |
pip install qdrant_client[fastembed]
- name: Install packages and dependencies for RetrieveChat
run: |
pip install -e .[retrievechat]
pip uninstall -y openai
- name: Test RetrieveChat
run: |
pytest test/test_retrieve_utils.py test/agentchat/contrib/test_retrievechat.py test/agentchat/contrib/test_qdrant_retrievechat.py
- name: Coverage
if: matrix.python-version == '3.10'
run: |
pip install coverage>=5.3
coverage run -a -m pytest test/test_retrieve_utils.py test/agentchat/contrib
coverage xml
- name: Upload coverage to Codecov
if: matrix.python-version == '3.10'
uses: codecov/codecov-action@v3
with:
file: ./coverage.xml
flags: unittests
31 changes: 8 additions & 23 deletions .github/workflows/openai.yml
@@ -1,4 +1,4 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# This workflow will install Python dependencies and run tests with a variety of Python versions
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions

name: OpenAI
@@ -11,9 +11,6 @@ on:
- 'test/**'
- 'notebook/agentchat_auto_feedback_from_code_execution.ipynb'
- 'notebook/agentchat_function_call.ipynb'
- 'notebook/agentchat_MathChat.ipynb'
- 'notebook/oai_completion.ipynb'
- 'notebook/oai_chatgpt_gpt4.ipynb'
- '.github/workflows/openai.yml'

jobs:
@@ -23,7 +20,7 @@ jobs:
os: [ubuntu-latest]
python-version: ["3.9", "3.10", "3.11"]
runs-on: ${{ matrix.os }}
environment: openai
environment: openai1
steps:
# checkout to pr branch
- name: Checkout
@@ -38,28 +35,17 @@ jobs:
run: |
docker --version
python -m pip install --upgrade pip wheel
pip install -e.[blendsearch]
pip install -e.
python -c "import autogen"
pip install coverage pytest-asyncio datasets
pip install coverage pytest-asyncio
- name: Install packages for test when needed
if: matrix.python-version == '3.9'
run: |
pip install docker
- name: Install packages for MathChat when needed
if: matrix.python-version != '3.11'
- name: Install dependencies for test when needed
if: matrix.python-version == '3.10' # test_agentchat_function_call
run: |
pip install -e .[mathchat]
- name: Install packages for RetrieveChat when needed
if: matrix.python-version == '3.9'
run: |
pip install -e .[retrievechat]
- name: Install packages for Teachable when needed
run: |
pip install -e .[teachable]
- name: Install packages for RetrieveChat with QDrant when needed
if: matrix.python-version == '3.11'
run: |
pip install -e .[retrievechat] qdrant_client[fastembed]
pip install -e.[mathchat]
- name: Coverage
if: matrix.python-version == '3.9'
env:
@@ -80,8 +66,7 @@ jobs:
OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
run: |
pip install nbconvert nbformat ipykernel
coverage run -a -m pytest test/agentchat/test_qdrant_retrievechat.py
coverage run -a -m pytest test/test_with_openai.py
coverage run -a -m pytest test/agentchat/test_function_call_groupchat.py
coverage run -a -m pytest test/test_notebook.py
coverage xml
cat "$(pwd)/test/executed_openai_notebook_output.txt"
4 changes: 2 additions & 2 deletions OAI_CONFIG_LIST_sample
@@ -7,14 +7,14 @@
{
"model": "gpt-4",
"api_key": "<your Azure OpenAI API key here>",
"api_base": "<your Azure OpenAI API base here>",
"base_url": "<your Azure OpenAI API base here>",
"api_type": "azure",
"api_version": "2023-07-01-preview"
},
{
"model": "gpt-3.5-turbo",
"api_key": "<your Azure OpenAI API key here>",
"api_base": "<your Azure OpenAI API base here>",
"base_url": "<your Azure OpenAI API base here>",
"api_type": "azure",
"api_version": "2023-07-01-preview"
}
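The sample above shows the list-of-dicts shape a config list takes after the `base_url` rename. One commit bullet mentions a `filter_func`; a small self-contained sketch of filtering such a list down to the entries you want (the helper name `filter_config` and the filter-dict shape are assumptions for illustration):

```python
# Hypothetical filter over a config list shaped like OAI_CONFIG_LIST_sample:
# keep entries whose fields match one of the allowed values per key.
config_list = [
    {"model": "gpt-4", "api_key": "<key>", "api_type": "azure"},
    {"model": "gpt-3.5-turbo", "api_key": "<key>"},
]


def filter_config(config_list: list[dict], filter_dict: dict) -> list[dict]:
    # An entry survives only if, for every key in filter_dict, its value
    # is one of the allowed values listed for that key.
    return [
        cfg for cfg in config_list
        if all(cfg.get(k) in allowed for k, allowed in filter_dict.items())
    ]


gpt4_only = filter_config(config_list, {"model": ["gpt-4"]})
print([c["model"] for c in gpt4_only])  # prints ['gpt-4']
```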
24 changes: 13 additions & 11 deletions README.md
@@ -28,11 +28,11 @@ AutoGen is a framework that enables the development of LLM applications using mu

![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png)

- AutoGen enables building next-gen LLM applications based on **multi-agent conversations** with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
- It supports **diverse conversation patterns** for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
- AutoGen enables building next-gen LLM applications based on [multi-agent conversations](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
- It supports [diverse conversation patterns](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns) for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy,
the number of agents, and agent conversation topology.
- It provides a collection of working systems with different complexities. These systems span a **wide range of applications** from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
- AutoGen provides **enhanced LLM inference**. It offers easy performance tuning, plus utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.
- It provides a collection of working systems with different complexities. These systems span a [wide range of applications](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen) from various domains and complexities. This demonstrates how AutoGen can easily support diverse conversation patterns.
- AutoGen provides [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification). It offers utilities like API unification and caching, and advanced usage patterns, such as error handling, multi-config inference, context programming, etc.

AutoGen is powered by collaborative [research studies](https://microsoft.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.

@@ -42,14 +42,14 @@ The easiest way to start playing is

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/autogen?quickstart=1)

2. Copy OAI_CONFIG_LIST_sample to /notebook folder, name to OAI_CONFIG_LIST, and set the correct configuration.
2. Copy OAI_CONFIG_LIST_sample to ./notebook folder, name to OAI_CONFIG_LIST, and set the correct configuration.
3. Start playing with the notebooks!



## Installation

AutoGen requires **Python version >= 3.8**. It can be installed from pip:
AutoGen requires **Python version >= 3.8, < 3.12**. It can be installed from pip:

```bash
pip install pyautogen
@@ -72,7 +72,7 @@ For LLM inference configurations, check the [FAQs](https://microsoft.github.io/a

## Multi-Agent Conversation Framework

Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
Autogen enables the next-gen LLM applications with a generic [multi-agent conversation](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:
@@ -106,14 +106,16 @@ After the repo is cloned.
The figure below shows an example conversation flow with AutoGen.
![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png)

Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat) for this feature.
Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AgentChat) for this feature.

## Enhanced LLM Inferences

Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers enhanced LLM inference with powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.

<!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
```python
# perform tuning
# perform tuning for openai<1
config, analysis = autogen.Completion.tune(
data=tune_data,
metric="success",
@@ -127,7 +129,7 @@ config, analysis = autogen.Completion.tune(
response = autogen.Completion.create(context=test_instance, **config)
```
Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-Inference) for this feature.
Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/Inference) for this feature. -->

## Documentation

2 changes: 1 addition & 1 deletion autogen/agentchat/assistant_agent.py
@@ -43,7 +43,7 @@ def __init__(
system_message (str): system message for the ChatCompletion inference.
Please override this attribute if you want to reprogram the agent.
llm_config (dict): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
is_termination_msg (function): a function that takes a message in the form of a dictionary
and returns a boolean value indicating if this received message is a termination message.
15 changes: 7 additions & 8 deletions autogen/agentchat/contrib/teachable_agent.py
@@ -18,7 +18,7 @@ def colored(x, *args, **kwargs):


class TeachableAgent(ConversableAgent):
"""Teachable Agent, a subclass of ConversableAgent using a vector database to remember user teachings.
"""(Experimental) Teachable Agent, a subclass of ConversableAgent using a vector database to remember user teachings.
In this class, the term 'user' refers to any caller (human or not) sending messages to this agent.
Not yet tested in the group-chat setting."""

@@ -40,7 +40,7 @@ def __init__(
system_message (str): system message for the ChatCompletion inference.
human_input_mode (str): This agent should NEVER prompt the human for input.
llm_config (dict or False): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
To disable llm-based auto reply, set to False.
analyzer_llm_config (dict or False): llm inference configuration passed to TextAnalyzerAgent.
@@ -125,11 +125,8 @@ def _generate_teachable_assistant_reply(
messages = messages.copy()
messages[-1]["content"] = new_user_text

# Generate a response.
msgs = self._oai_system_message + messages
response = oai.ChatCompletion.create(messages=msgs, **self.llm_config)
response_text = oai.ChatCompletion.extract_text_or_function_call(response)[0]
return True, response_text
# Generate a response by reusing existing generate_oai_reply
return self.generate_oai_reply(messages, sender, config)

def learn_from_user_feedback(self):
"""Reviews the user comments from the last chat, and decides what teachings to store as memos."""
@@ -265,12 +262,14 @@ def analyze(self, text_to_analyze, analysis_instructions):
self.send(recipient=self.analyzer, message=analysis_instructions, request_reply=True) # Request the reply.
return self.last_message(self.analyzer)["content"]
else:
# TODO: This is not an encouraged usage pattern. It breaks the conversation-centric design.
# consider using the arg "silent"
# Use the analyzer's method directly, to leave analyzer message out of the printed chat.
return self.analyzer.analyze_text(text_to_analyze, analysis_instructions)


class MemoStore:
"""
"""(Experimental)
Provides memory storage and retrieval for a TeachableAgent, using a vector database.
Each DB entry (called a memo) is a pair of strings: an input text and an output text.
The input text might be a question, or a task to perform.
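The teachable_agent.py hunk above replaces a direct `oai.ChatCompletion.create` call with a call to the existing `generate_oai_reply`, so client handling lives in one place. A toy, self-contained sketch of that delegation pattern — the class bodies here are stand-ins, not the real autogen implementation:

```python
# Minimal sketch of the refactor: the subclass reuses the base class's single
# reply method instead of re-implementing the completion call itself.
class Agent:
    def __init__(self, system_message: str):
        self._oai_system_message = [{"role": "system", "content": system_message}]

    def generate_oai_reply(self, messages: list[dict]) -> tuple[bool, str]:
        # Stand-in for the real LLM call; in v0.2 this is where the
        # OpenAIWrapper client would be invoked.
        last = messages[-1]["content"]
        return True, f"reply to: {last}"


class TeachableAgent(Agent):
    def reply(self, messages: list[dict]) -> tuple[bool, str]:
        # Delegate rather than calling the completion API directly.
        return self.generate_oai_reply(self._oai_system_message + messages)


agent = TeachableAgent("be helpful")
print(agent.reply([{"role": "user", "content": "hi"}]))  # prints (True, 'reply to: hi')
```

The same pattern appears in the text_analyzer_agent.py hunk below, which collapses its own `oai.ChatCompletion.create` call into a `generate_oai_reply` call.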
10 changes: 3 additions & 7 deletions autogen/agentchat/contrib/text_analyzer_agent.py
@@ -10,7 +10,7 @@


class TextAnalyzerAgent(ConversableAgent):
"""Text Analysis agent, a subclass of ConversableAgent designed to analyze text as instructed."""
"""(Experimental) Text Analysis agent, a subclass of ConversableAgent designed to analyze text as instructed."""

def __init__(
self,
@@ -26,7 +26,7 @@ def __init__(
system_message (str): system message for the ChatCompletion inference.
human_input_mode (str): This agent should NEVER prompt the human for input.
llm_config (dict or False): llm inference configuration.
Please refer to [Completion.create](/docs/reference/oai/completion#create)
Please refer to [OpenAIWrapper.create](/docs/reference/oai/client#create)
for available options.
To disable llm-based auto reply, set to False.
teach_config (dict or None): Additional parameters used by TeachableAgent.
@@ -74,9 +74,5 @@ def analyze_text(self, text_to_analyze, analysis_instructions):
msg_text = "\n".join(
[analysis_instructions, text_to_analyze, analysis_instructions]
) # Repeat the instructions.
messages = self._oai_system_message + [{"role": "user", "content": msg_text}]

# Generate and return the analysis string.
response = oai.ChatCompletion.create(context=None, messages=messages, **self.llm_config)
output_text = oai.ChatCompletion.extract_text_or_function_call(response)[0]
return output_text
return self.generate_oai_reply([{"role": "user", "content": msg_text}], None, None)[1]
