diff --git a/notebook/agentchat_RetrieveChat.ipynb b/notebook/agentchat_RetrieveChat.ipynb index 6e71c9f4c60e..b8cd70ec48f7 100644 --- a/notebook/agentchat_RetrieveChat.ipynb +++ b/notebook/agentchat_RetrieveChat.ipynb @@ -28,8 +28,9 @@ "- [Example 5: Solve comprehensive QA problems with RetrieveChat's unique feature `Update Context`](#example-5)\n", "- [Example 6: Solve comprehensive QA problems with customized prompt and few-shot learning](#example-6)\n", "\n", - "\\:\\:\\:info Requirements\n", "\n", + "````{=mdx}\n", + ":::info Requirements\n", "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", "```bash\n", @@ -37,8 +38,8 @@ "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", - "\n", - "\\:\\:\\:\n" + ":::\n", + "````" ] }, { @@ -78,19 +79,7 @@ "# a vector database instance\n", "from autogen.retrieve_utils import TEXT_FORMATS\n", "\n", - "config_list = autogen.config_list_from_json(\n", - " env_or_file=\"OAI_CONFIG_LIST\",\n", - " filter_dict={\n", - " \"model\": {\n", - " \"gpt-4\",\n", - " \"gpt4\",\n", - " \"gpt-4-32k\",\n", - " \"gpt-4-32k-0314\",\n", - " \"gpt-35-turbo\",\n", - " \"gpt-3.5-turbo\",\n", - " }\n", - " },\n", - ")\n", + "config_list = autogen.config_list_from_json(env_or_file=\"OAI_CONFIG_LIST\")\n", "\n", "assert len(config_list) > 0\n", "print(\"models to use: \", [config_list[i][\"model\"] for i in range(len(config_list))])" @@ -101,18 +90,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\\:\\:\\:tip\n", + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n", + ":::\n", + "````\n", "\n", - "Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).\n", - "\n", - "\\:\\:\\:" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ "## Construct agents for RetrieveChat\n", "\n", "We start by 
initializing the `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for RetrieveAssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.generate_init_prompt` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant." diff --git a/notebook/agentchat_auto_feedback_from_code_execution.ipynb b/notebook/agentchat_auto_feedback_from_code_execution.ipynb index 2d3df85c8744..a8d07ad3599c 100644 --- a/notebook/agentchat_auto_feedback_from_code_execution.ipynb +++ b/notebook/agentchat_auto_feedback_from_code_execution.ipynb @@ -21,16 +21,16 @@ "\n", "In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to write code and execute the code. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent that serves as a proxy for the human user to execute the code written by `AssistantAgent`, or automatically execute the code. Depending on the setting of `human_input_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or returns auto-feedback based on the result of code execution (success or failure and corresponding outputs) to `AssistantAgent`. `AssistantAgent` will debug the code and suggest new code if the result contains errors.
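The auto-feedback loop sketched in that paragraph can be illustrated in plain Python, with no LLM calls. Everything here is a toy stand-in (the canned assistant messages, the `run_chat` helper, the fake execution result); only the `max_consecutive_auto_reply` name mirrors the real AutoGen parameter:

```python
# A toy sketch (no LLM calls) of the auto-reply loop described above.
# The canned assistant messages and the run_chat helper are illustrative
# stand-ins; only max_consecutive_auto_reply mirrors the real AutoGen knob.

def run_chat(assistant_replies, max_consecutive_auto_reply=3):
    transcript = []
    auto_replies = 0
    for message in assistant_replies:
        transcript.append(("assistant", message))
        if "TERMINATE" in message:  # assistant signals the task is done
            break
        if auto_replies >= max_consecutive_auto_reply:
            # hand control back to the human instead of auto-replying again
            transcript.append(("user_proxy", "awaiting human input"))
            break
        # auto-feedback: "execute" the code and report the result back
        transcript.append(("user_proxy", f"exitcode: 0 (execution succeeded): ran {message!r}"))
        auto_replies += 1
    return transcript

log = run_chat(["print('hello')", "TERMINATE"])
```

The real control flow lives in AutoGen's `ConversableAgent` classes; this loop only makes the termination conditions concrete.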
The two agents keep communicating with each other until the task is done.\n", "\n", - "\\:\\:\\:info Requirements\n", - "\n", + "````{=mdx}\n", + ":::info Requirements\n", "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", - "\n", - "\\:\\:\\:" + ":::\n", + "````" ] }, { @@ -59,11 +59,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\\:\\:\\:tip\n", - "\n", - "Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).\n", - "\n", - "\\:\\:\\:" + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n", + ":::\n", + "````" ] }, { diff --git a/notebook/agentchat_function_call_async.ipynb b/notebook/agentchat_function_call_async.ipynb index ff45d215c122..962708dff118 100644 --- a/notebook/agentchat_function_call_async.ipynb +++ b/notebook/agentchat_function_call_async.ipynb @@ -24,16 +24,16 @@ "\n", "In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the function-calling feature of OpenAI models (introduced in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`.
Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n", "\n", - "\\:\\:\\:info Requirements\n", - "\n", + "````{=mdx}\n", + ":::info Requirements\n", "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", "```\n", "\n", "For more information, please refer to the [installation guide](/docs/installation/).\n", - "\n", - "\\:\\:\\:\n" + ":::\n", + "````\n" ] }, { @@ -53,25 +53,18 @@ "config_list = autogen.config_list_from_json(env_or_file=\"OAI_CONFIG_LIST\")" ] }, - { - "attachments": {}, - "cell_type": "markdown", - "id": "92fde41f", - "metadata": {}, - "source": [ - "\\:\\:\\:tip\n", - "\n", - "Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).\n", - "\n", - "\\:\\:\\:" - ] - }, { "attachments": {}, "cell_type": "markdown", "id": "2b9526e7", "metadata": {}, "source": [ + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n", + ":::\n", + "````\n", + "\n", "## Making Async and Sync Function Calls\n", "\n", "In this example, we demonstrate function call execution with `AssistantAgent` and `UserProxyAgent`. With the default system prompt of `AssistantAgent`, we allow the LLM assistant to perform tasks with code, and the `UserProxyAgent` would extract code blocks from the LLM response and execute them. With the new \"function_call\" feature, we define functions and specify the description of the function in the OpenAI config for the `AssistantAgent`. Then we register the functions in `UserProxyAgent`." 
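The mixed async/sync execution this notebook demonstrates can be sketched in pure Python. The `function_map` registry and `execute_function_call` dispatcher below are hypothetical stand-ins for what `UserProxyAgent` does internally, not autogen's actual API; the point is only that `asyncio.iscoroutinefunction` lets one dispatcher handle both kinds of tools:

```python
import asyncio

# Illustrative sketch of executing both async and sync registered functions,
# in the spirit of what UserProxyAgent does for function calls. The
# function_map registry and execute_function_call dispatcher are hypothetical
# stand-ins, not autogen's actual API.

async def get_weather(city: str) -> str:  # an async tool
    await asyncio.sleep(0)  # stands in for real I/O
    return f"sunny in {city}"

def add(a: int, b: int) -> int:  # a sync tool
    return a + b

function_map = {"get_weather": get_weather, "add": add}

async def execute_function_call(name, **kwargs):
    func = function_map[name]
    if asyncio.iscoroutinefunction(func):
        return await func(**kwargs)  # await coroutine functions
    return func(**kwargs)  # call plain functions directly

result = asyncio.run(execute_function_call("get_weather", city="Paris"))
```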
diff --git a/notebook/agentchat_groupchat.ipynb b/notebook/agentchat_groupchat.ipynb index a0c193a0332e..058b687b3dbe 100644 --- a/notebook/agentchat_groupchat.ipynb +++ b/notebook/agentchat_groupchat.ipynb @@ -5,37 +5,29 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"Open" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Auto Generated Agent Chat: Group Chat\n", + "\n", + "\n", + "# Group Chat\n", "\n", "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", "This notebook is modified based on https://github.com/microsoft/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n", "\n", - "## Requirements\n", - "\n", - "AutoGen requires `Python>=3.8`. 
To run this notebook example, please install:\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", "```bash\n", "pip install pyautogen\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": 105, - "metadata": {}, - "outputs": [], - "source": [ - "%%capture --no-stderr\n", - "# %pip install \"pyautogen>=0.2.3\"" + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -56,24 +48,12 @@ "source": [ "import autogen\n", "\n", - "config_list_gpt4 = autogen.config_list_from_json(\n", + "config_list = autogen.config_list_from_json(\n", " \"OAI_CONFIG_LIST\",\n", " filter_dict={\n", " \"model\": [\"gpt-4\", \"gpt-4-0314\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n", " },\n", - ")\n", - "# config_list_gpt35 = autogen.config_list_from_json(\n", - "# \"OAI_CONFIG_LIST\",\n", - "# filter_dict={\n", - "# \"model\": {\n", - "# \"gpt-3.5-turbo\",\n", - "# \"gpt-3.5-turbo-16k\",\n", - "# \"gpt-3.5-turbo-0301\",\n", - "# \"chatgpt-35-turbo-0301\",\n", - "# \"gpt-35-turbo-v0301\",\n", - "# },\n", - "# },\n", - "# )" + ")" ] }, { @@ -81,40 +61,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). 
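The filtering semantics just described can be shown with a few lines of plain Python. This is only a toy illustration of how `filter_dict` narrows the list (autogen ships its own filtering helper); a config survives only if, for every key in the filter, its value is among the allowed values:

```python
# A toy version of the filtering semantics: a config survives only if, for
# every key in filter_dict, its value is among the allowed values. This is
# only an illustration; autogen ships its own filtering helper.

def filter_configs(configs, filter_dict):
    return [
        cfg
        for cfg in configs
        if all(cfg.get(key) in allowed for key, allowed in filter_dict.items())
    ]

configs = [
    {"model": "gpt-4", "api_key": ""},
    {"model": "gpt-3.5-turbo", "api_key": ""},
]
kept = filter_configs(configs, {"model": ["gpt-4", "gpt-4-0314", "gpt-4-32k"]})
```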
Only the gpt-4 models are kept in the list based on the filter condition.\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " },\n", - " {\n", - " 'model': 'gpt-4',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2023-06-01-preview',\n", - " },\n", - " {\n", - " 'model': 'gpt-4-32k',\n", - " 'api_key': '',\n", - " 'base_url': '',\n", - " 'api_type': 'azure',\n", - " 'api_version': '2023-06-01-preview',\n", - " },\n", - "]\n", - "```\n", + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n", + ":::\n", + "````\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/llm_configuration.ipynb) for full code examples of the different methods." - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ "## Construct Agents" ] }, @@ -124,7 +76,7 @@ "metadata": {}, "outputs": [], "source": [ - "llm_config = {\"config_list\": config_list_gpt4, \"cache_seed\": 42}\n", + "llm_config = {\"config_list\": config_list, \"cache_seed\": 42}\n", "user_proxy = autogen.UserProxyAgent(\n", " name=\"User_proxy\",\n", " system_message=\"A human admin.\",\n", diff --git a/notebook/agentchat_groupchat_RAG.ipynb b/notebook/agentchat_groupchat_RAG.ipynb index c441cb01df3c..92d29d51cc47 100644 --- a/notebook/agentchat_groupchat_RAG.ipynb +++ b/notebook/agentchat_groupchat_RAG.ipynb @@ -5,7 +5,11 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\"Open" + "" ] }, { @@ -13,27 +17,22 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Auto Generated Agent Chat: Group Chat with Retrieval Augmented Generation\n", + "# Group Chat with Retrieval Augmented Generation\n", "\n", "AutoGen supports conversable agents powered 
by LLMs, tools, or humans, performing tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "## Requirements\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Some extra dependencies are needed for this notebook, which can be installed via pip:\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n", "```bash\n", - "pip install \"pyautogen[retrievechat]>=0.2.3\"\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "%%capture --no-stderr\n", - "# %pip install \"pyautogen[retrievechat]>=0.2.3\"" + "pip install pyautogen[retrievechat]\n", + "```\n", + "\n", + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -66,13 +65,7 @@ "from autogen import AssistantAgent\n", "from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent\n", "\n", - "config_list = autogen.config_list_from_json(\n", - " \"OAI_CONFIG_LIST\",\n", - " file_location=\".\",\n", - " filter_dict={\n", - " \"model\": [\"gpt-3.5-turbo\", \"gpt-35-turbo\", \"gpt-35-turbo-0613\", \"gpt-4\", \"gpt4\", \"gpt-4-32k\"],\n", - " },\n", - ")\n", + "config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\")\n", "\n", "print(\"LLM models: \", [config_list[i][\"model\"] for i in range(len(config_list))])" ] @@ -82,33 +75,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". 
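The lookup order described here (environment variable first, then a JSON file of the same name) can be sketched with a simplified stand-in for `config_list_from_json`; the real function also supports options such as `file_location` and `filter_dict`:

```python
import json
import os

# A simplified stand-in for config_list_from_json illustrating the lookup
# order described above: environment variable first, then a JSON file of
# the same name. The real function supports more options, e.g.
# file_location and filter_dict.

def load_config_list(name="OAI_CONFIG_LIST"):
    raw = os.environ.get(name)
    if raw is None and os.path.exists(name):
        with open(name) as f:
            raw = f.read()
    return json.loads(raw) if raw else []

os.environ["OAI_CONFIG_LIST"] = '[{"model": "gpt-4", "api_key": ""}]'
configs = load_config_list()
```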
It filters the configs by models (you can filter by other keys as well).\n", - "\n", - "The config list looks like the following:\n", - "```python\n", - "config_list = [\n", - " {\n", - " \"model\": \"gpt-4\",\n", - " \"api_key\": \"\",\n", - " }, # OpenAI API endpoint for gpt-4\n", - " {\n", - " \"model\": \"gpt-35-turbo-0631\", # 0631 or newer is needed to use functions\n", - " \"base_url\": \"\", \n", - " \"api_type\": \"azure\", \n", - " \"api_version\": \"2023-08-01-preview\", # 2023-07-01-preview or newer is needed to use functions\n", - " \"api_key\": \"\"\n", - " }\n", - "]\n", - "```\n", + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n", + ":::\n", + "````\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/llm_configuration.ipynb) for full code examples of the different methods." - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ "## Construct Agents" ] }, @@ -819,7 +791,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.13" + "version": "3.11.7" } }, "nbformat": 4, diff --git a/notebook/agentchat_society_of_mind.ipynb b/notebook/agentchat_society_of_mind.ipynb index 73dd8e4495f1..b395c1433396 100644 --- a/notebook/agentchat_society_of_mind.ipynb +++ b/notebook/agentchat_society_of_mind.ipynb @@ -5,6 +5,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "\n", "# SocietyOfMindAgent\n", "\n", "This notebook demonstrates the SocietyOfMindAgent, which runs a group chat as an internal monologue, but appears to the external world as a single agent. This confers three distinct advantages:\n", @@ -12,47 +18,17 @@ "1. It provides a clean way of producing a hierarchy of agents, hiding complexity as inner monologues.\n", "2. 
It provides a consistent way of extracting an answer from a lengthy group chat (normally, it is not clear which message is the final response, and the response itself may not always be formatted in a way that makes sense when extracted as a standalone message).\n", "3. It provides a way of recovering when agents exceed their context window constraints (the inner monologue is protected by try-catch blocks)\n", - " \n", - "\n", - "## Requirements\n", "\n", - "AutoGen requires `Python>=3.8`. To run this notebook example, please install the latest version of AutoGen:\n", - "```sh\n", + "````{=mdx}\n", + ":::info Requirements\n", + "Install `pyautogen`:\n", + "```bash\n", "pip install pyautogen\n", - "```" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "# %pip install --quiet pyautogen" - ] - }, - { - "attachments": {}, - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Set your API Endpoint\n", - "\n", - "The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n", - "\n", - "It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". 
It filters the configs by models (you can filter by other keys as well).\n", - "\n", - "Your json config should look something like the following:\n", - "```json\n", - "[\n", - " {\n", - " \"model\": \"gpt-4\",\n", - " \"api_key\": \"\"\n", - " }\n", - "]\n", "```\n", "\n", - "If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n" + "For more information, please refer to the [installation guide](/docs/installation/).\n", + ":::\n", + "````" ] }, { @@ -79,6 +55,12 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "````{=mdx}\n", + ":::tip\n", + "Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n", + ":::\n", + "````\n", + "\n", "### Example Group Chat with Two Agents\n", "\n", "In this example, we will use an AssistantAgent and a UserProxy agent (configured for code execution) to work together to solve a problem. Executing code requires *at least* two conversation turns (one to write the code, and one to execute the code). If the code fails, or needs further refinement, then additional turns may also be needed. We will then wrap these agents in a SocietyOfMindAgent, hiding the internal discussion from other agents (though it will still appear in the console), and ensuring that the response is suitable as a standalone message." diff --git a/notebook/contributing.md b/notebook/contributing.md index 53bcaf83586e..4fb78b0964b8 100644 --- a/notebook/contributing.md +++ b/notebook/contributing.md @@ -26,22 +26,40 @@ The following points are best practices for authoring notebooks to ensure consis You don't need to explain in depth how to install AutoGen.
Unless there are specific instructions for the notebook, just use the following markdown snippet: -```` -\:\:\:info Requirements - +`````` +````{=mdx} +:::info Requirements Install `pyautogen`: ```bash pip install pyautogen ``` For more information, please refer to the [installation guide](/docs/installation/). +::: +```` +`````` + +Or if extras are needed: + +`````` +````{=mdx} +:::info Requirements +Some extra dependencies are needed for this notebook, which can be installed via pip: + +```bash +pip install pyautogen[retrievechat] flaml[automl] +``` -\:\:\: +For more information, please refer to the [installation guide](/docs/installation/). +::: ```` +`````` When specifying the config list, to ensure consistency, it is best to use approximately the following code: ```python +import autogen + config_list = autogen.config_list_from_json( env_or_file="OAI_CONFIG_LIST", ) @@ -49,10 +67,10 @@ config_list = autogen.config_list_from_json( Then after the code cell where this is used, include the following markdown snippet: -``` -\:\:\:tip - -Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration). - -\:\:\: -``` +`````` +````{=mdx} +:::tip +Learn more about configuring LLMs for agents [here](/docs/llm_configuration). +::: +```` +``````
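For reference, the `OAI_CONFIG_LIST` that this snippet reads should hold a JSON array of per-model configs. A minimal sketch follows; the keys mirror the examples in the notebooks above, and every secret value is a placeholder:

```python
import json

# A minimal example of what OAI_CONFIG_LIST is expected to hold: a JSON
# array of per-model configs. Keys follow the examples in the notebooks;
# all secret values are placeholders.
oai_config_list = """
[
    {"model": "gpt-4", "api_key": ""},
    {
        "model": "gpt-4",
        "api_key": "",
        "base_url": "",
        "api_type": "azure",
        "api_version": "2023-06-01-preview"
    }
]
"""

config_list = json.loads(oai_config_list)
```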