Merge branch 'main' into sfblog
sonichi authored Mar 14, 2024
2 parents 44f094a + 77513f0 commit 5961b7f
Showing 12 changed files with 180 additions and 42 deletions.
69 changes: 69 additions & 0 deletions .github/workflows/dotnet-release.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,69 @@
# This workflow will build a .NET project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-net

name: dotnet-release

on:
workflow_dispatch:
push:
branches:
- dotnet/release

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ github.head_ref }}
cancel-in-progress: true

permissions:
contents: read
packages: write

jobs:
build:
name: Build and release
runs-on: ubuntu-latest
environment: dotnet
defaults:
run:
working-directory: dotnet
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v3
with:
global-json-file: dotnet/global.json
- name: Restore dependencies
run: |
dotnet restore -bl
- name: Build
run: |
echo "Build AutoGen"
dotnet build --no-restore --configuration Release -bl /p:SignAssembly=true
- name: Unit Test
run: dotnet test --no-build -bl --configuration Release
env:
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_ENDPOINT: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
AZURE_GPT_35_MODEL_ID: ${{ secrets.AZURE_GPT_35_MODEL_ID }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
- name: Pack
run: |
echo "Create release build package"
dotnet pack --no-build --configuration Release --output './output/release' -bl
echo "ls output directory"
ls -R ./output
- name: Publish package to Nuget
run: |
echo "Publish package to Nuget"
echo "ls output directory"
ls -R ./output/release
dotnet nuget push ./output/release/*.nupkg --skip-duplicate --api-key ${{ secrets.AUTOGEN_NUGET_API_KEY }}
- name: Tag commit
run: |
Write-Host "Tag commit"
# version = eng/MetaInfo.props.Project.PropertyGroup.VersionPrefix
$metaInfoContent = cat ./eng/MetaInfo.props
$version = $metaInfoContent | Select-String -Pattern "<VersionPrefix>(.*)</VersionPrefix>" | ForEach-Object { $_.Matches.Groups[1].Value }
git tag -a "$version" -m "AutoGen.Net release $version"
git push origin --tags
shell: pwsh
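The tagging step above pulls `VersionPrefix` out of `eng/MetaInfo.props` with a PowerShell regex. A minimal Python sketch of the same extraction, where the file contents and the version value are illustrative assumptions, not the real `MetaInfo.props`:

```python
import re

# Hypothetical contents of eng/MetaInfo.props; the version value is illustrative.
meta_info = """<Project>
  <PropertyGroup>
    <VersionPrefix>0.0.12</VersionPrefix>
  </PropertyGroup>
</Project>"""

# Capture the text between the VersionPrefix tags, as the pwsh step does.
match = re.search(r"<VersionPrefix>(.*?)</VersionPrefix>", meta_info)
version = match.group(1) if match else None
print(version)  # 0.0.12
```

The extracted value would then be used as the tag name, mirroring `git tag -a "$version"` in the workflow.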
5 changes: 4 additions & 1 deletion .github/workflows/type-check.yml
@@ -19,4 +19,7 @@ jobs:
- uses: actions/setup-python@v4
- run: pip install ".[jupyter-executor]" mypy
# As more modules are type check clean, add them here
- run: mypy --install-types --non-interactive autogen/logger
- run: |
mypy --install-types --non-interactive \
autogen/logger \
autogen/exception_utils.py
2 changes: 2 additions & 0 deletions .gitignore
@@ -178,5 +178,7 @@ test/agentchat/test_agent_scripts/*

# test cache
.cache_test
.db


notebook/result.png
6 changes: 6 additions & 0 deletions autogen/agentchat/conversable_agent.py
@@ -130,6 +130,12 @@ def __init__(
description (str): a short description of the agent. This description is used by other agents
(e.g. the GroupChatManager) to decide when to call upon this agent. (Default: system_message)
"""
# code_execution_config is modified below, so copy it first to avoid mutating the caller's input;
# in the UserProxyAgent case, without this copy even the shared default value {} could be mutated
code_execution_config = (
code_execution_config.copy() if hasattr(code_execution_config, "copy") else code_execution_config
)

self._name = name
# a dictionary of conversations, default value is list
self._oai_messages = defaultdict(list)
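The defensive `copy()` added above guards against a classic Python pitfall: a mutable default argument is created once and shared by every call. A minimal sketch of the pitfall and the fix, using hypothetical helper names:

```python
def make_config_unsafe(code_execution_config={}):
    # All calls share one default dict, so mutations leak across calls.
    code_execution_config["use_docker"] = True
    return code_execution_config


def make_config_safe(code_execution_config={}):
    # Copy first, as the guard in conversable_agent.py does, so neither the
    # caller's dict nor the shared default is ever mutated.
    code_execution_config = (
        code_execution_config.copy() if hasattr(code_execution_config, "copy") else code_execution_config
    )
    code_execution_config["use_docker"] = True
    return code_execution_config


first = make_config_unsafe()
second = make_config_unsafe()
print(first is second)  # True: both calls mutated the same default dict

caller_config = {}
make_config_safe(caller_config)
print(caller_config)  # {}: the caller's input is untouched
```

The `hasattr(..., "copy")` check mirrors the patch: `code_execution_config` may also be the literal `False`, which has no `copy` method.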
2 changes: 1 addition & 1 deletion autogen/agentchat/user_proxy_agent.py
@@ -30,7 +30,7 @@ def __init__(
max_consecutive_auto_reply: Optional[int] = None,
human_input_mode: Literal["ALWAYS", "TERMINATE", "NEVER"] = "ALWAYS",
function_map: Optional[Dict[str, Callable]] = None,
code_execution_config: Optional[Union[Dict, Literal[False]]] = None,
code_execution_config: Union[Dict, Literal[False]] = {},
default_auto_reply: Optional[Union[str, Dict, None]] = "",
llm_config: Optional[Union[Dict, Literal[False]]] = False,
system_message: Optional[Union[str, List]] = "",
3 changes: 2 additions & 1 deletion autogen/coding/local_commandline_code_executor.py
@@ -136,7 +136,8 @@ def execute_code_blocks(self, code_blocks: List[CodeBlock]) -> CommandLineCodeRe
filename = f"tmp_code_{code_hash}.{'py' if lang.startswith('python') else lang}"

written_file = (self._work_dir / filename).resolve()
written_file.open("w", encoding="utf-8").write(code)
with written_file.open("w", encoding="utf-8") as f:
f.write(code)
file_names.append(written_file)

program = sys.executable if lang.startswith("python") else _cmd(lang)
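The `with` block introduced above matters because the old `open("w").write(code)` form left closing (and flushing) the handle to the garbage collector, so the file could still be incomplete when the subprocess ran it. A self-contained sketch of the fixed pattern, with an illustrative temp directory standing in for `self._work_dir`:

```python
import tempfile
from pathlib import Path

work_dir = Path(tempfile.mkdtemp())  # stand-in for the executor's _work_dir
written_file = (work_dir / "tmp_code_example.py").resolve()

# The 'with' block closes (and flushes) the handle deterministically, so the
# file is guaranteed complete before a subprocess is asked to execute it.
with written_file.open("w", encoding="utf-8") as f:
    f.write("print('hello')\n")

print(written_file.read_text(encoding="utf-8"))  # print('hello')
```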
13 changes: 8 additions & 5 deletions autogen/exception_utils.py
@@ -1,20 +1,23 @@
from typing import Any


class AgentNameConflict(Exception):
def __init__(self, msg="Found multiple agents with the same name.", *args, **kwargs):
def __init__(self, msg: str = "Found multiple agents with the same name.", *args: Any, **kwargs: Any):
super().__init__(msg, *args, **kwargs)


class NoEligibleSpeaker(Exception):
"""Exception raised for early termination of a GroupChat."""

def __init__(self, message="No eligible speakers."):
def __init__(self, message: str = "No eligible speakers."):
self.message = message
super().__init__(self.message)


class SenderRequired(Exception):
"""Exception raised when the sender is required but not provided."""

def __init__(self, message="Sender is required but not provided."):
def __init__(self, message: str = "Sender is required but not provided."):
self.message = message
super().__init__(self.message)

@@ -23,7 +26,7 @@ class InvalidCarryOverType(Exception):
"""Exception raised when the carryover type is invalid."""

def __init__(
self, message="Carryover should be a string or a list of strings. Not adding carryover to the message."
self, message: str = "Carryover should be a string or a list of strings. Not adding carryover to the message."
):
self.message = message
super().__init__(self.message)
@@ -32,6 +35,6 @@ def __init__(
class UndefinedNextAgent(Exception):
"""Exception raised when the provided next agents list does not overlap with agents in the group."""

def __init__(self, message="The provided agents list does not overlap with agents in the group."):
def __init__(self, message: str = "The provided agents list does not overlap with agents in the group."):
self.message = message
super().__init__(self.message)
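The annotations added above are purely for mypy; the runtime behavior of these exceptions is unchanged. A usage sketch, with `NoEligibleSpeaker` redeclared here only so the snippet is self-contained:

```python
class NoEligibleSpeaker(Exception):
    """Exception raised for early termination of a GroupChat."""

    def __init__(self, message: str = "No eligible speakers."):
        self.message = message
        super().__init__(self.message)


# The annotated default keeps call sites terse while still allowing overrides.
try:
    raise NoEligibleSpeaker()
except NoEligibleSpeaker as e:
    caught = e.message

print(caught)  # No eligible speakers.
```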
4 changes: 2 additions & 2 deletions notebook/agentchat_RetrieveChat.ipynb
@@ -92,7 +92,7 @@
"\n",
"## Construct agents for RetrieveChat\n",
"\n",
"We start by initializing the `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for RetrieveAssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.generate_init_prompt` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant."
"We start by initializing the `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for RetrieveAssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.message_generator` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant."
]
},
{
@@ -3037,7 +3037,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.13"
},
"skip_test": "Requires interactive usage"
},
1 change: 1 addition & 0 deletions samples/apps/autogen-studio/autogenstudio/datamodel.py
@@ -93,6 +93,7 @@ class LLMConfig:
cache_seed: Optional[Union[int, None]] = None
timeout: Optional[int] = None
max_tokens: Optional[int] = None
extra_body: Optional[dict] = None

def dict(self):
result = asdict(self)
45 changes: 15 additions & 30 deletions test/agentchat/test_agent_setup_with_use_docker_settings.py
@@ -16,17 +16,6 @@
skip = False or skip_openai


def get_current_autogen_env_var():
return os.environ.get("AUTOGEN_USE_DOCKER", None)


def restore_autogen_env_var(current_env_value):
if current_env_value is None:
del os.environ["AUTOGEN_USE_DOCKER"]
else:
os.environ["AUTOGEN_USE_DOCKER"] = current_env_value


def docker_running():
return is_docker_running() or in_docker_container()

@@ -54,32 +43,36 @@ def test_agent_setup_with_use_docker_false():


@pytest.mark.skipif(skip, reason="openai not installed")
def test_agent_setup_with_env_variable_false_and_docker_running():
current_env_value = get_current_autogen_env_var()
def test_agent_setup_with_env_variable_false_and_docker_running(monkeypatch):
monkeypatch.setenv("AUTOGEN_USE_DOCKER", "False")

os.environ["AUTOGEN_USE_DOCKER"] = "False"
user_proxy = UserProxyAgent(
name="test_agent",
human_input_mode="NEVER",
)

assert user_proxy._code_execution_config["use_docker"] is False

restore_autogen_env_var(current_env_value)


@pytest.mark.skipif(skip or (not docker_running()), reason="openai not installed OR docker not running")
def test_agent_setup_with_default_and_docker_running():
def test_agent_setup_with_default_and_docker_running(monkeypatch):
monkeypatch.delenv("AUTOGEN_USE_DOCKER", raising=False)

assert os.getenv("AUTOGEN_USE_DOCKER") is None

user_proxy = UserProxyAgent(
name="test_agent",
human_input_mode="NEVER",
)

assert os.getenv("AUTOGEN_USE_DOCKER") is None

assert user_proxy._code_execution_config["use_docker"] is True


@pytest.mark.skipif(skip or (docker_running()), reason="openai not installed OR docker running")
def test_raises_error_agent_setup_with_default_and_docker_not_running():
def test_raises_error_agent_setup_with_default_and_docker_not_running(monkeypatch):
monkeypatch.delenv("AUTOGEN_USE_DOCKER", raising=False)
with pytest.raises(RuntimeError):
UserProxyAgent(
name="test_agent",
@@ -88,31 +81,23 @@ def test_raises_error_agent_setup_with_default_and_docker_not_running():


@pytest.mark.skipif(skip or (docker_running()), reason="openai not installed OR docker running")
def test_raises_error_agent_setup_with_env_variable_true_and_docker_not_running():
current_env_value = get_current_autogen_env_var()

os.environ["AUTOGEN_USE_DOCKER"] = "True"
def test_raises_error_agent_setup_with_env_variable_true_and_docker_not_running(monkeypatch):
monkeypatch.setenv("AUTOGEN_USE_DOCKER", "True")

with pytest.raises(RuntimeError):
UserProxyAgent(
name="test_agent",
human_input_mode="NEVER",
)

restore_autogen_env_var(current_env_value)


@pytest.mark.skipif(skip or (not docker_running()), reason="openai not installed OR docker not running")
def test_agent_setup_with_env_variable_true_and_docker_running():
current_env_value = get_current_autogen_env_var()

os.environ["AUTOGEN_USE_DOCKER"] = "True"
def test_agent_setup_with_env_variable_true_and_docker_running(monkeypatch):
monkeypatch.setenv("AUTOGEN_USE_DOCKER", "True")

user_proxy = UserProxyAgent(
name="test_agent",
human_input_mode="NEVER",
)

assert user_proxy._code_execution_config["use_docker"] is True

restore_autogen_env_var(current_env_value)
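The refactor above replaces the hand-rolled `get_current_autogen_env_var`/`restore_autogen_env_var` helpers with pytest's `monkeypatch` fixture, which records each env-var change and undoes it automatically at test teardown even if the test fails. A stdlib-only sketch of roughly what the fixture provides (the context-manager name is an assumption for illustration):

```python
import os
from contextlib import contextmanager


@contextmanager
def temp_setenv(name, value):
    """Roughly what pytest's monkeypatch.setenv plus automatic teardown does."""
    saved = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        # Restore the previous state no matter how the block exits.
        if saved is None:
            os.environ.pop(name, None)
        else:
            os.environ[name] = saved


os.environ.pop("AUTOGEN_USE_DOCKER", None)
with temp_setenv("AUTOGEN_USE_DOCKER", "False"):
    inside = os.environ["AUTOGEN_USE_DOCKER"]

print(inside)                           # False
print(os.getenv("AUTOGEN_USE_DOCKER"))  # None: the change is undone on exit
```

Unlike the removed helpers, the fixture-based approach cannot leak state between tests, which is why the explicit restore calls at the end of each test could be deleted.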
4 changes: 2 additions & 2 deletions website/README.md
@@ -23,9 +23,9 @@ yarn install

`quarto` is used to render notebooks.

Install it [here](https://quarto.org/docs/get-started/).
Install it [here](https://github.com/quarto-dev/quarto-cli/releases).

> Note: Support for Docusaurus 3.0 in Quarto is from version `1.4`. Ensure that your `quarto` version is `1.4` or higher.
> Note: Ensure that your `quarto` version is `1.5.23` or higher.
## Local Development

68 changes: 68 additions & 0 deletions website/docs/topics/retrieval_augmentation.md
@@ -0,0 +1,68 @@
# Retrieval Augmentation

Retrieval Augmented Generation (RAG) is a powerful technique that combines language models with external knowledge retrieval to improve the quality and relevance of generated responses.

One way to realize RAG in AutoGen is to construct agent chats with `RetrieveAssistantAgent` and `RetrieveUserProxyAgent` classes.

## Example Setup: RAG with Retrieval Augmented Agents
The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen:

### Step 1. Create an instance of `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`.

Here the `RetrieveUserProxyAgent` instance acts as a proxy agent that retrieves relevant information based on the user's input.
```python
import os

import chromadb

from autogen import config_list_from_json
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

# Load the LLM config list, e.g. from an OAI_CONFIG_LIST file or env variable.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

assistant = RetrieveAssistantAgent(
name="assistant",
system_message="You are a helpful assistant.",
llm_config={
"timeout": 600,
"cache_seed": 42,
"config_list": config_list,
},
)
ragproxyagent = RetrieveUserProxyAgent(
name="ragproxyagent",
human_input_mode="NEVER",
max_consecutive_auto_reply=3,
retrieve_config={
"task": "code",
"docs_path": [
"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md",
"https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Research.md",
os.path.join(os.path.abspath(""), "..", "website", "docs"),
],
"custom_text_types": ["mdx"],
"chunk_token_size": 2000,
"model": config_list[0]["model"],
"client": chromadb.PersistentClient(path="/tmp/chromadb"),
"embedding_model": "all-mpnet-base-v2",
"get_or_create": True, # set to False if you don't want to reuse an existing collection, but you'll need to remove the collection manually
},
code_execution_config=False, # set to False if you don't want to execute the code
)
```

### Step 2. Initiating Agent Chat with Retrieval Augmentation

Once the retrieval augmented agents are set up, you can initiate a chat with retrieval augmentation using the following code:

```python
code_problem = "How can I use FLAML to perform a classification task and use spark to do parallel training. Train 30 seconds and force cancel jobs if time limit is reached."
ragproxyagent.initiate_chat(
assistant, message=ragproxyagent.message_generator, problem=code_problem, search_string="spark"
) # search_string is used as an extra filter for the embeddings search, in this case, we only want to search documents that contain "spark".
```

## Online Demo
[Retrieval-Augmented Chat Demo on Hugging Face](https://huggingface.co/spaces/thinkall/autogen-demos)

## More Examples and Notebooks
For more detailed examples and notebooks showcasing the usage of retrieval augmented agents in AutoGen, refer to the following:
- Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat)
- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_qdrant_RetrieveChat.ipynb)
- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb)
- **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG)

## Roadmap

Explore our detailed roadmap [here](https://github.com/microsoft/autogen/issues/1657) for planned advancements around RAG. Your contributions, feedback, and use cases are highly appreciated! We invite you to engage with us and play a pivotal role in developing this feature.
