From 1ae07eae3b3d180055154b0e3acb5091fa9dda96 Mon Sep 17 00:00:00 2001 From: Jack Gerrits Date: Thu, 10 Oct 2024 11:22:12 -0400 Subject: [PATCH] Update links on 0.2 website --- README.md | 10 +-- notebook/agentchat_MathChat.ipynb | 2 +- notebook/agentchat_RetrieveChat.ipynb | 2 +- notebook/agentchat_RetrieveChat_mongodb.ipynb | 2 +- .../agentchat_RetrieveChat_pgvector.ipynb | 2 +- notebook/agentchat_RetrieveChat_qdrant.ipynb | 2 +- notebook/agentchat_agentoptimizer.ipynb | 2 +- notebook/agentchat_cost_token_tracking.ipynb | 2 +- notebook/agentchat_custom_model.ipynb | 2 +- notebook/agentchat_function_call.ipynb | 2 +- ...at_function_call_currency_calculator.ipynb | 2 +- notebook/agentchat_human_feedback.ipynb | 2 +- notebook/agentchat_langchain.ipynb | 2 +- .../agentchat_oai_assistant_retrieval.ipynb | 2 +- ...ntchat_oai_assistant_twoagents_basic.ipynb | 2 +- notebook/agentchat_planning.ipynb | 2 +- notebook/agentchat_stream.ipynb | 2 +- notebook/agentchat_surfer.ipynb | 2 +- notebook/agentchat_teachability.ipynb | 2 +- .../agentchat_teachable_oai_assistants.ipynb | 2 +- notebook/agentchat_two_users.ipynb | 2 +- ...eo_transcript_translate_with_whisper.ipynb | 2 +- notebook/agentchat_web_info.ipynb | 2 +- notebook/agenteval_cq_math.ipynb | 2 +- website/blog/2023-06-28-MathChat/index.mdx | 2 +- .../blog/2023-10-18-RetrieveChat/index.mdx | 12 ++-- .../blog/2023-10-26-TeachableAgent/index.mdx | 8 +-- website/blog/2023-11-06-LMM-Agent/index.mdx | 6 +- .../blog/2023-11-13-OAI-assistants/index.mdx | 8 +-- website/blog/2023-11-20-AgentEval/index.mdx | 4 +- .../blog/2023-11-26-Agent-AutoBuild/index.mdx | 4 +- .../blog/2023-12-01-AutoGenStudio/index.mdx | 2 +- .../blog/2023-12-23-AgentOptimizer/index.mdx | 2 +- .../blog/2024-01-25-AutoGenBench/index.mdx | 8 +-- .../blog/2024-01-26-Custom-Models/index.mdx | 2 +- .../blog/2024-03-03-AutoGen-Update/index.mdx | 8 +-- website/docs/Examples.md | 64 +++++++++---------- website/docs/Getting-Started.mdx | 2 +- website/docs/Migration-Guide.md | 2 +- website/docs/Use-Cases/agent_chat.md | 6 +- website/docs/Use-Cases/enhanced_inference.md | 4 +- website/docs/contributor-guide/docker.md | 2 +- .../docs/contributor-guide/documentation.md | 2 +- website/docs/contributor-guide/tests.md | 10 +-- website/docs/ecosystem/llamaindex.md | 2 +- website/docs/ecosystem/microsoft-fabric.md | 2 +- website/docs/ecosystem/pgvector.md | 2 +- website/docs/installation/Docker.md | 4 +- .../installation/Optional-Dependencies.md | 16 ++--- .../non-openai-models/cloud-gemini.ipynb | 2 +- .../cloud-gemini_vertexai.ipynb | 2 +- website/docs/topics/retrieval_augmentation.md | 8 +-- website/process_notebooks.py | 2 +- website/src/components/GalleryPage.js | 2 +- 54 files changed, 128 insertions(+), 128 deletions(-) diff --git a/README.md b/README.md index e92083f0df32..0599300c5397 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ AutoGen is an open-source programming framework for building AI agents and facil > -_Maintainers (Sept 6th, 2024)_ -![AutoGen Overview](https://github.com/microsoft/autogen/blob/main/website/static/img/autogen_agentchat.png) +![AutoGen Overview](https://github.com/microsoft/autogen/blob/0.2/website/static/img/autogen_agentchat.png) - AutoGen enables building next-gen LLM applications based on [multi-agent conversations](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. 
It maximizes the performance of LLM models and overcomes their weaknesses. - It supports [diverse conversation patterns](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns) for complex workflows. With customizable and conversable agents, developers can use AutoGen to build a wide range of conversation patterns concerning conversation autonomy, @@ -171,7 +171,7 @@ Features of this use case include: - **Customization**: AutoGen agents can be customized to meet the specific needs of an application. This includes the ability to choose the LLMs to use, the types of human input to allow, and the tools to employ. - **Human participation**: AutoGen seamlessly allows human participation. This means that humans can provide input and feedback to the agents as needed. -For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py), +For [example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py), ```python from autogen import AssistantAgent, UserProxyAgent, config_list_from_json @@ -194,9 +194,9 @@ python test/twoagent.py After the repo is cloned. The figure below shows an example conversation flow with AutoGen. -![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png) +![Agent Chat Example](https://github.com/microsoft/autogen/blob/0.2/website/static/img/chat_example.png) -Alternatively, the [sample code](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style. +Alternatively, the [sample code](https://github.com/microsoft/autogen/blob/0.2/samples/simple_chat.py) here allows a user to chat with an AutoGen agent in ChatGPT style. Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples#automated-multi-agent-chat) for this feature.

@@ -240,7 +240,7 @@ You can find detailed documentation about AutoGen [here](https://microsoft.githu In addition, you can find: -- [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/main/TRANSPARENCY_FAQS.md) +- [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/microsoft/autogen/blob/0.2/TRANSPARENCY_FAQS.md) - [Contributing guide](https://microsoft.github.io/autogen/docs/Contribute) diff --git a/notebook/agentchat_MathChat.ipynb b/notebook/agentchat_MathChat.ipynb index 69c38031b2fe..bb6f713eabc7 100644 --- a/notebook/agentchat_MathChat.ipynb +++ b/notebook/agentchat_MathChat.ipynb @@ -17,7 +17,7 @@ "\n", "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "MathChat is an experimental conversational framework for math problem solving. In this notebook, we demonstrate how to use MathChat to solve math problems. MathChat uses the `AssistantAgent` and `MathUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `MathUserProxyAgent` implements a different auto reply mechanism corresponding to the MathChat prompts. You can find more details in the paper [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337) or the [blogpost](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat).\n", + "MathChat is an experimental conversational framework for math problem solving. In this notebook, we demonstrate how to use MathChat to solve math problems. MathChat uses the `AssistantAgent` and `MathUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `MathUserProxyAgent` implements a different auto reply mechanism corresponding to the MathChat prompts. You can find more details in the paper [An Empirical Study on Challenging Math Problem Solving with GPT-4](https://arxiv.org/abs/2306.01337) or the [blogpost](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat).\n", "\n", "````{=mdx}\n", ":::info Requirements\n", diff --git a/notebook/agentchat_RetrieveChat.ipynb b/notebook/agentchat_RetrieveChat.ipynb index 6ca2d1ac5123..6debb2450df3 100644 --- a/notebook/agentchat_RetrieveChat.ipynb +++ b/notebook/agentchat_RetrieveChat.ipynb @@ -10,7 +10,7 @@ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. 
This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", "\n", "## Table of Contents\n", "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", diff --git a/notebook/agentchat_RetrieveChat_mongodb.ipynb b/notebook/agentchat_RetrieveChat_mongodb.ipynb index f1f85f65a802..30508ad6c474 100644 --- a/notebook/agentchat_RetrieveChat_mongodb.ipynb +++ b/notebook/agentchat_RetrieveChat_mongodb.ipynb @@ -10,7 +10,7 @@ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. 
In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", "\n", "## Table of Contents\n", "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", diff --git a/notebook/agentchat_RetrieveChat_pgvector.ipynb b/notebook/agentchat_RetrieveChat_pgvector.ipynb index 022b1347a2d2..18f3c5199e89 100644 --- a/notebook/agentchat_RetrieveChat_pgvector.ipynb +++ b/notebook/agentchat_RetrieveChat_pgvector.ipynb @@ -10,7 +10,7 @@ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n", "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n", "\n", - "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", + "RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)). 
Essentially, `RetrieveUserProxyAgent` implement a different auto-reply mechanism corresponding to the RetrieveChat prompts.\n", "\n", "## Table of Contents\n", "We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n", diff --git a/notebook/agentchat_RetrieveChat_qdrant.ipynb b/notebook/agentchat_RetrieveChat_qdrant.ipynb index 9be4cbfe528d..e56bc00e4abb 100644 --- a/notebook/agentchat_RetrieveChat_qdrant.ipynb +++ b/notebook/agentchat_RetrieveChat_qdrant.ipynb @@ -12,7 +12,7 @@ "This notebook demonstrates the usage of Qdrant for RAG, based on [agentchat_RetrieveChat.ipynb](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb).\n", "\n", "\n", - "RetrieveChat is a conversational system for retrieve augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb)).\n", + "RetrieveChat is a conversational system for retrieve augmented code generation and question answering. In this notebook, we demonstrate how to utilize RetrieveChat to generate code and answer questions based on customized documentations that are not present in the LLM's training dataset. RetrieveChat uses the `AssistantAgent` and `RetrieveUserProxyAgent`, which is similar to the usage of `AssistantAgent` and `UserProxyAgent` in other notebooks (e.g., [Automated Task Solving with Code Generation, Execution & Debugging](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_auto_feedback_from_code_execution.ipynb)).\n", "\n", "We'll demonstrate usage of RetrieveChat with Qdrant for code generation and question answering w/ human feedback.\n", "\n", diff --git a/notebook/agentchat_agentoptimizer.ipynb b/notebook/agentchat_agentoptimizer.ipynb index 7de418b5ee70..ea3a0d88e65d 100644 --- a/notebook/agentchat_agentoptimizer.ipynb +++ b/notebook/agentchat_agentoptimizer.ipynb @@ -53,7 +53,7 @@ "source": [ "# MathUserProxy with function_call\n", "\n", - "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/math_user_proxy_agent.py).\n", + "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/math_user_proxy_agent.py).\n", "\n", "It supports using both function_call and python to solve math problems.\n" ] diff --git a/notebook/agentchat_cost_token_tracking.ipynb b/notebook/agentchat_cost_token_tracking.ipynb index 17106e7c9385..0d456daa46bd 100644 --- a/notebook/agentchat_cost_token_tracking.ipynb +++ b/notebook/agentchat_cost_token_tracking.ipynb @@ -109,7 +109,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb index 773247ee0b91..364d81517a17 100644 --- a/notebook/agentchat_custom_model.ipynb +++ b/notebook/agentchat_custom_model.ipynb @@ -238,7 +238,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_function_call.ipynb b/notebook/agentchat_function_call.ipynb index ff94c0d4fb0a..e341aecaead7 100644 --- a/notebook/agentchat_function_call.ipynb +++ b/notebook/agentchat_function_call.ipynb @@ -104,7 +104,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb index 36ef81d5edbe..d6d9e05826f2 100644 --- a/notebook/agentchat_function_call_currency_calculator.ipynb +++ b/notebook/agentchat_function_call_currency_calculator.ipynb @@ -104,7 +104,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_human_feedback.ipynb b/notebook/agentchat_human_feedback.ipynb index 3c21c7c9f4f5..77b09fe0e8d5 100644 --- a/notebook/agentchat_human_feedback.ipynb +++ b/notebook/agentchat_human_feedback.ipynb @@ -102,7 +102,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_langchain.ipynb b/notebook/agentchat_langchain.ipynb index 1d5fea19e066..bb4805cb3558 100644 --- a/notebook/agentchat_langchain.ipynb +++ b/notebook/agentchat_langchain.ipynb @@ -139,7 +139,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_oai_assistant_retrieval.ipynb b/notebook/agentchat_oai_assistant_retrieval.ipynb index ec7e41a00ba8..fec9605161a1 100644 --- a/notebook/agentchat_oai_assistant_retrieval.ipynb +++ b/notebook/agentchat_oai_assistant_retrieval.ipynb @@ -6,7 +6,7 @@ "source": [ "## RAG OpenAI Assistants in AutoGen\n", "\n", - "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", + "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", "`UserProxyAgent` in AutoGen." ] }, diff --git a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb index 94da2358c986..295784bea2cc 100644 --- a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb +++ b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb @@ -6,7 +6,7 @@ "source": [ "## OpenAI Assistants in AutoGen\n", "\n", - "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", + "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/gpt_assistant_agent.py#L16C43-L16C43), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n", "`UserProxyAgent` in AutoGen." ] }, diff --git a/notebook/agentchat_planning.ipynb b/notebook/agentchat_planning.ipynb index 9953bf3b57f0..14fed1c29e0d 100644 --- a/notebook/agentchat_planning.ipynb +++ b/notebook/agentchat_planning.ipynb @@ -105,7 +105,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n", + "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n", "\n", "## Construct Agents\n", "\n", diff --git a/notebook/agentchat_stream.ipynb b/notebook/agentchat_stream.ipynb index 5536cef96aa0..d39895827277 100644 --- a/notebook/agentchat_stream.ipynb +++ b/notebook/agentchat_stream.ipynb @@ -102,7 +102,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_surfer.ipynb b/notebook/agentchat_surfer.ipynb index 09c9cb3d4f8d..18be350a41f5 100644 --- a/notebook/agentchat_surfer.ipynb +++ b/notebook/agentchat_surfer.ipynb @@ -479,7 +479,7 @@ "#### Multi-Agent Conversation Framework[​](#multi-agent-conversation-framework \"Direct link to Multi-Agent Conversation Framework\")\n", "\n", "Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.\n", - "By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py),\n", + "By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py),\n", "\n", "The figure below shows an example conversation flow with AutoGen.\n", "\n", diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb index 4be1e135dab1..dd41727a02e5 100644 --- a/notebook/agentchat_teachability.ipynb +++ b/notebook/agentchat_teachability.ipynb @@ -13,7 +13,7 @@ "\n", "In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n", "\n", - "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n", + "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. 
To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/0.2/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n", "\n", "## Requirements\n", "\n", diff --git a/notebook/agentchat_teachable_oai_assistants.ipynb b/notebook/agentchat_teachable_oai_assistants.ipynb index 75744d9a397a..be37b0d7b5cd 100644 --- a/notebook/agentchat_teachable_oai_assistants.ipynb +++ b/notebook/agentchat_teachable_oai_assistants.ipynb @@ -860,7 +860,7 @@ "metadata": {}, "source": [ "### Create the OpenAI Assistant with retrieval as a tool\n", - "For this example, first upload the [conversable_agent.py](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/conversable_agent.py) file to your OpenAI API account. This can be done manually through the [website](https://platform.openai.com/assistants). Then find the uploaded File ID on the [Files page](https://platform.openai.com/files), and paste that ID into the `file_ids` list in the code below." + "For this example, first upload the [conversable_agent.py](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/conversable_agent.py) file to your OpenAI API account. This can be done manually through the [website](https://platform.openai.com/assistants). Then find the uploaded File ID on the [Files page](https://platform.openai.com/files), and paste that ID into the `file_ids` list in the code below." ] }, { diff --git a/notebook/agentchat_two_users.ipynb b/notebook/agentchat_two_users.ipynb index 14b95b537013..c4fc85861f52 100644 --- a/notebook/agentchat_two_users.ipynb +++ b/notebook/agentchat_two_users.ipynb @@ -82,7 +82,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb index e19e61419ad3..51ad4c6268aa 100644 --- a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb +++ b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb @@ -8,7 +8,7 @@ "# Translating Video audio using Whisper and GPT-3.5-turbo\n", "\n", "In this notebook, we demonstrate how to use whisper and GPT-3.5-turbo with `AssistantAgent` and `UserProxyAgent` to recognize and translate\n", - "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call.ipynb)\n" + "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call.ipynb)\n" ] }, { diff --git a/notebook/agentchat_web_info.ipynb b/notebook/agentchat_web_info.ipynb index e2962e9049fb..47660502fa76 100644 --- a/notebook/agentchat_web_info.ipynb +++ b/notebook/agentchat_web_info.ipynb @@ -116,7 +116,7 @@ "]\n", "```\n", "\n", - "You can set the value of config_list in any way you prefer. 
Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods." ] }, { diff --git a/notebook/agenteval_cq_math.ipynb b/notebook/agenteval_cq_math.ipynb index 199967a91080..a67b0ae992ca 100644 --- a/notebook/agenteval_cq_math.ipynb +++ b/notebook/agenteval_cq_math.ipynb @@ -65,7 +65,7 @@ "## Set your API Endpoint\n", "* The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for an environment variable with a specified name. The value of the environment variable needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n", "\n", - "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n" + "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n" ] }, { diff --git a/website/blog/2023-06-28-MathChat/index.mdx b/website/blog/2023-06-28-MathChat/index.mdx index be2423de9eed..2521065a6252 100644 --- a/website/blog/2023-06-28-MathChat/index.mdx +++ b/website/blog/2023-06-28-MathChat/index.mdx @@ -76,7 +76,7 @@ We found that compared to basic prompting, which demonstrates the innate capabil For categories like Algebra and Prealgebra, PoT and PS showed little improvement, and in some instances, even led to a decrease in accuracy. However, MathChat was able to enhance total accuracy by around 6% compared to PoT and PS, showing competitive performance across all categories. Remarkably, MathChat improved accuracy in the Algebra category by about 15% over other methods. Note that categories like Intermediate Algebra and Precalculus remained challenging for all methods, with only about 20% of problems solved accurately. The code for experiments can be found at this [repository](https://github.com/yiranwu0/FLAML/tree/gpt_math_solver/flaml/autogen/math). -We now provide an implementation of MathChat using the interactive agents in AutoGen. See this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb) for example usage. +We now provide an implementation of MathChat using the interactive agents in AutoGen. See this [notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_MathChat.ipynb) for example usage. ## Future Directions diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx index 4bad582eb5df..9a6c23787231 100644 --- a/website/blog/2023-10-18-RetrieveChat/index.mdx +++ b/website/blog/2023-10-18-RetrieveChat/index.mdx @@ -206,7 +206,7 @@ We are using chromadb as the default vector database, you can also use mongodb, by simply set `vector_db` to `mongodb`, `pgvector`, `qdrant` and `couchbase` in `retrieve_config`, respectively. 
To plugin any other dbs, you can also extend class `agentchat.contrib.vectordb.base`, -check out the code [here](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/vectordb/base.py). +check out the code [here](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/vectordb/base.py). ## Advanced Usage of RAG Agents @@ -401,8 +401,8 @@ The online app and the source code are hosted in [HuggingFace](https://huggingfa ## Read More You can check out more example notebooks for RAG use cases: -- [Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb) -- [Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb) -- [Using RetrieveChat with Qdrant for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) -- [Using RetrieveChat Powered by PGVector for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) -- [Using RetrieveChat Powered by MongoDB Atlas for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_mongodb.ipynb) +- [Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat.ipynb) +- [Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_groupchat_RAG.ipynb) +- [Using RetrieveChat with Qdrant for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) +- [Using RetrieveChat Powered by PGVector for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) +- [Using RetrieveChat Powered by MongoDB Atlas for Retrieve Augmented Code Generation and Question Answering](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb) diff --git a/website/blog/2023-10-26-TeachableAgent/index.mdx b/website/blog/2023-10-26-TeachableAgent/index.mdx index f097acc73726..9ac57fa9995f 100644 --- a/website/blog/2023-10-26-TeachableAgent/index.mdx +++ b/website/blog/2023-10-26-TeachableAgent/index.mdx @@ -24,13 +24,13 @@ In order to make effective decisions about memo storage and retrieval, the `Teac AutoGen contains four code examples that use `Teachability`. -1. Run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py) to converse with a teachable agent. +1. Run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/0.2/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py) to converse with a teachable agent. -2. Run [test_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/test_teachable_agent.py) for quick unit testing of a teachable agent. +2. 
Run [test_teachable_agent.py](https://github.com/microsoft/autogen/blob/0.2/test/agentchat/contrib/capabilities/test_teachable_agent.py) for quick unit testing of a teachable agent. -3. Use the Jupyter notebook [agentchat_teachability.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) to step through examples discussed below. +3. Use the Jupyter notebook [agentchat_teachability.ipynb](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachability.ipynb) to step through examples discussed below. -4. Use the Jupyter notebook [agentchat_teachable_oai_assistants.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachable_oai_assistants.ipynb) to make arbitrary OpenAI Assistants teachable through `GPTAssistantAgent`. +4. Use the Jupyter notebook [agentchat_teachable_oai_assistants.ipynb](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachable_oai_assistants.ipynb) to make arbitrary OpenAI Assistants teachable through `GPTAssistantAgent`. ## Basic Usage of Teachability diff --git a/website/blog/2023-11-06-LMM-Agent/index.mdx b/website/blog/2023-11-06-LMM-Agent/index.mdx index 0ab92c4dbb90..6613f794ee3d 100644 --- a/website/blog/2023-11-06-LMM-Agent/index.mdx +++ b/website/blog/2023-11-06-LMM-Agent/index.mdx @@ -9,8 +9,8 @@ tags: [LMM, multimodal] **In Brief:** * Introducing the **Multimodal Conversable Agent** and the **LLaVA Agent** to enhance LMM functionalities. * Users can input text and images simultaneously using the `` tag to specify image loading. -* Demonstrated through the [GPT-4V notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb). -* Demonstrated through the [LLaVA notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb). +* Demonstrated through the [GPT-4V notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb). +* Demonstrated through the [LLaVA notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb). ## Introduction Large multimodal models (LMMs) augment large language models (LLMs) with the ability to process multi-sensory data. @@ -62,7 +62,7 @@ The `MultimodalConversableAgent` interprets the input prompt, extracting images ## Advanced Usage Similar to other AutoGen agents, multimodal agents support multi-round dialogues with other agents, code generation, factual queries, and management via a GroupChat interface. -For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent). +For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent). The coder drafts Python code for visualizations, while the critics provide insights for enhancement. Collaboratively, these agents aim to refine visual outputs. With `human_input_mode=ALWAYS`, you can also contribute suggestions for better visualizations. 
diff --git a/website/blog/2023-11-13-OAI-assistants/index.mdx b/website/blog/2023-11-13-OAI-assistants/index.mdx index 2fc9bee359b8..ae8073285a43 100644 --- a/website/blog/2023-11-13-OAI-assistants/index.mdx +++ b/website/blog/2023-11-13-OAI-assistants/index.mdx @@ -9,12 +9,12 @@ tags: [openai-assistant] ## TL;DR -OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py). +OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/gpt_assistant_agent.py). This enables multiple OpenAI assistants, which form the backend of the now popular GPTs, to collaborate and tackle complex tasks. Checkout example notebooks for reference: -* [Basic example](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) -* [Code interpreter](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb) -* [Function calls](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb) +* [Basic example](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) +* [Code interpreter](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_code_interpreter.ipynb) +* [Function calls](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_function_call.ipynb) ## Introduction diff --git a/website/blog/2023-11-20-AgentEval/index.mdx b/website/blog/2023-11-20-AgentEval/index.mdx index 1abb9e6c9f8d..ac7fab7dc8f7 100644 --- a/website/blog/2023-11-20-AgentEval/index.mdx +++ b/website/blog/2023-11-20-AgentEval/index.mdx @@ -14,7 +14,7 @@ tags: [LLM, GPT, evaluation, task utility] **TL;DR:** * As a developer of an LLM-powered application, how can you assess the utility it brings to end users while helping them with their tasks? * To shed light on the question above, we introduce `AgentEval` — the first version of the framework to assess the utility of any LLM-powered application crafted to assist users in specific tasks. AgentEval aims to simplify the evaluation process by automatically proposing a set of criteria tailored to the unique purpose of your application. This allows for a comprehensive assessment, quantifying the utility of your application against the suggested criteria. -* We demonstrate how `AgentEval` work using [math problems dataset](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc). +* We demonstrate how `AgentEval` work using [math problems dataset](https://microsoft.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc). ## Introduction @@ -54,7 +54,7 @@ critic = autogen.AssistantAgent( ) ``` -Next, the critic is given successful and failed examples of the task execution; then, it is able to return a list of criteria (Fig. 1). 
For reference, use the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). +Next, the critic is given successful and failed examples of the task execution; then, it is able to return a list of criteria (Fig. 1). For reference, use the [following notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb). * The goal of `QuantifierAgent` is to quantify each of the suggested criteria (Fig. 1), providing us with an idea of the utility of this system for the given task. Here is an example of how it can be defined: diff --git a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx index a6d0025699d3..c7329a4b1920 100644 --- a/website/blog/2023-11-26-Agent-AutoBuild/index.mdx +++ b/website/blog/2023-11-26-Agent-AutoBuild/index.mdx @@ -14,8 +14,8 @@ user prompt required, powered by a new designed class **AgentBuilder**. AgentBui leveraging [vLLM](https://docs.vllm.ai/en/latest/index.html) and [FastChat](https://github.com/lm-sys/FastChat). Checkout example notebooks and source code for reference: -- [AutoBuild Examples](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) -- [AgentBuilder](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/contrib/agent_builder.py) +- [AutoBuild Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb) +- [AgentBuilder](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/contrib/agent_builder.py) ## Introduction In this blog, we introduce **AutoBuild**, a pipeline that can automatically build multi-agent systems for complex tasks. diff --git a/website/blog/2023-12-01-AutoGenStudio/index.mdx b/website/blog/2023-12-01-AutoGenStudio/index.mdx index a2558acb01d6..b379e4520458 100644 --- a/website/blog/2023-12-01-AutoGenStudio/index.mdx +++ b/website/blog/2023-12-01-AutoGenStudio/index.mdx @@ -139,7 +139,7 @@ This section focuses on defining the properties of agents and agent workflows. I

-**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/microsoft/autogen/blob/main/autogen/agentchat/conversable_agent.py) class). +**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/microsoft/autogen/blob/0.2/autogen/agentchat/conversable_agent.py) class). **Agent Workflows**: An agent workflow is a specification of a set of agents that can work together to accomplish a task. The simplest version of this is a setup with two agents – a user proxy agent (that represents a user i.e. it compiles code and prints result) and an assistant that can address task requests (e.g., generating plans, writing code, evaluating responses, proposing error recovery steps, etc.). A more complex flow could be a group chat where even more agents work towards a solution. diff --git a/website/blog/2023-12-23-AgentOptimizer/index.mdx b/website/blog/2023-12-23-AgentOptimizer/index.mdx index ce92952cfb6f..8c709e931f3d 100644 --- a/website/blog/2023-12-23-AgentOptimizer/index.mdx +++ b/website/blog/2023-12-23-AgentOptimizer/index.mdx @@ -17,7 +17,7 @@ More information could be found in: **Paper**: https://arxiv.org/abs/2402.11359. -**Notebook**: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb. +**Notebook**: https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentoptimizer.ipynb. ## Introduction In the traditional ML pipeline, we train a model by updating its weights according to the loss on the training set, while in the era of LLM agents, how should we train an agent? diff --git a/website/blog/2024-01-25-AutoGenBench/index.mdx b/website/blog/2024-01-25-AutoGenBench/index.mdx index 3f4b2d4f216b..cd58ecee6436 100644 --- a/website/blog/2024-01-25-AutoGenBench/index.mdx +++ b/website/blog/2024-01-25-AutoGenBench/index.mdx @@ -21,8 +21,8 @@ Today we are releasing AutoGenBench - a tool for evaluating AutoGen agents and w AutoGenBench is a standalone command line tool, installable from PyPI, which handles downloading, configuring, running, and reporting supported benchmarks. AutoGenBench works best when run alongside Docker, since it uses Docker to isolate tests from one another. -- See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/README.md) for information on installation and running benchmarks. -- See the [AutoGenBench CONTRIBUTING guide](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets. +- See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/README.md) for information on installation and running benchmarks. +- See the [AutoGenBench CONTRIBUTING guide](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets. ### Quick Start @@ -125,7 +125,7 @@ Please do not cite these values in academic work without first inspecting and ve From this output we can see the results of the three separate repetitions of each task, and final summary statistics of each run. In this case, the results were generated via GPT-4 (as defined in the OAI_CONFIG_LIST that was provided), and used the `TwoAgents` template. 
**It is important to remember that AutoGenBench evaluates _specific_ end-to-end configurations of agents (as opposed to evaluating a model or cognitive framework more generally).** -Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via: +Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via: - `autogenbench --help` - `autogenbench clone --help` @@ -145,4 +145,4 @@ For an up to date tracking of our work items on this project, please see [AutoGe ## Call for Participation -Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/microsoft/autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel! +Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/microsoft/autogen/blob/0.2/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel! diff --git a/website/blog/2024-01-26-Custom-Models/index.mdx b/website/blog/2024-01-26-Custom-Models/index.mdx index 81a9ad383d3e..e0d43c2ac0ac 100644 --- a/website/blog/2024-01-26-Custom-Models/index.mdx +++ b/website/blog/2024-01-26-Custom-Models/index.mdx @@ -13,7 +13,7 @@ AutoGen now supports custom models! This feature empowers users to define and lo ## Quickstart -An interactive and easy way to get started is by following the notebook [here](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_custom_model.ipynb) which loads a local model from HuggingFace into AutoGen and uses it for inference, and making changes to the class provided. +An interactive and easy way to get started is by following the notebook [here](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_custom_model.ipynb) which loads a local model from HuggingFace into AutoGen and uses it for inference, and making changes to the class provided. 
### Step 1: Create the custom model client class diff --git a/website/blog/2024-03-03-AutoGen-Update/index.mdx b/website/blog/2024-03-03-AutoGen-Update/index.mdx index 7458ec74e053..f1128b523680 100644 --- a/website/blog/2024-03-03-AutoGen-Update/index.mdx +++ b/website/blog/2024-03-03-AutoGen-Update/index.mdx @@ -138,7 +138,7 @@ The community has been working hard to address them in several dimensions: We are working on agent-based evaluation tools and benchmarking tools. For example: -- [AgentEval](/blog/2023/11/20/AgentEval). Our [research](https://arxiv.org/abs/2402.09015) finds that LLM agents built with AutoGen can be used to automatically identify evaluation criteria and assess the performance from task descriptions and execution logs. It is demonstrated as a [notebook example](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Feedback and help are welcome for building it into the library. +- [AgentEval](/blog/2023/11/20/AgentEval). Our [research](https://arxiv.org/abs/2402.09015) finds that LLM agents built with AutoGen can be used to automatically identify evaluation criteria and assess the performance from task descriptions and execution logs. It is demonstrated as a [notebook example](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb). Feedback and help are welcome for building it into the library. - [AutoGenBench](/blog/2024/01/25/AutoGenBench). AutoGenBench is a commandline tool for downloading, configuring, running an agentic benchmark, and reporting results. It is designed to allow repetition, isolation and instrumentation, leveraging the new [runtime logging](/docs/notebooks/agentchat_logging) feature. These tools have been used for improving the AutoGen library as well as applications. For example, the new state-of-the-art performance achieved by a multi-agent solution to the [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark has benefited from these evaluation tools. @@ -147,7 +147,7 @@ These tools have been used for improving the AutoGen library as well as applicat We are making rapid progress in further improving the interface to make it even easier to build agent applications. For example: -- [AutoBuild](/blog/2023/11/26/Agent-AutoBuild). AutoBuild is an ongoing area of research to automatically create or select a group of agents for a given task and objective. If successful, it will greatly reduce the effort from users or developers when using the multi-agent technology. It also paves the way for agentic decomposition to handle complex tasks. It is available as an experimental feature and demonstrated in two modes: free-form [creation](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) and [selection](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) from a library. +- [AutoBuild](/blog/2023/11/26/Agent-AutoBuild). AutoBuild is an ongoing area of research to automatically create or select a group of agents for a given task and objective. If successful, it will greatly reduce the effort from users or developers when using the multi-agent technology. It also paves the way for agentic decomposition to handle complex tasks. It is available as an experimental feature and demonstrated in two modes: free-form [creation](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb) and [selection](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_agent_library.ipynb) from a library. 
- [AutoGen Studio](/blog/2023/12/01/AutoGenStudio). AutoGen Studio is a no-code UI for fast experimentation with the multi-agent conversations. It lowers the barrier of entrance to the AutoGen technology. Models, agents, and workflows can all be configured without writing code. And chatting with multiple agents in a playground is immediately available after the configuration. Although only a subset of `autogen-agentchat` features are available in this sample app, it demonstrates a promising experience. It has generated tremendous excitement in the community. - Conversation Programming+. The [AutoGen paper](https://arxiv.org/abs/2308.08155) introduced a key concept of _Conversation Programming_, which can be used to program diverse conversation patterns such as 1-1 chat, group chat, hierarchical chat, nested chat etc. While we offered dynamic group chat as an example of high-level orchestration, it made other patterns relatively less discoverable. Therefore, we have added more convenient conversation programming features which enables easier definition of other types of complex workflow, such as [finite state machine based group chat](/blog/2024/02/11/FSM-GroupChat), [sequential chats](/docs/notebooks/agentchats_sequential_chats), and [nested chats](/docs/notebooks/agentchat_nestedchat). Many users have found them useful in implementing specific patterns, which have been always possible but more obvious with the added features. I will write another blog post for a deep dive. @@ -173,8 +173,8 @@ The extensible design of AutoGen makes it easy to integrate with new technologie The above only covers a subset of new features and roadmap. There are many other interesting new features, integration examples or sample apps: -- new features like stateful code execution, [tool decorators](/docs/Use-Cases/agent_chat#tool-calling), [long context handling](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_capability_long_context_handling.ipynb), [web agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_surfer.ipynb). -- integration examples like using [guidance](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb) to generate structured response. +- new features like stateful code execution, [tool decorators](/docs/Use-Cases/agent_chat#tool-calling), [long context handling](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_capability_long_context_handling.ipynb), [web agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_surfer.ipynb). +- integration examples like using [guidance](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_guidance.ipynb) to generate structured response. - sample apps like [AutoAnny](/blog/2024/02/02/AutoAnny). 
## Call for Help diff --git a/website/docs/Examples.md b/website/docs/Examples.md index 5efd71748f9d..550c2604659d 100644 --- a/website/docs/Examples.md +++ b/website/docs/Examples.md @@ -18,7 +18,7 @@ Links to notebook examples: - Automated Task Solving by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat) - Automated Data Visualization by Group Chat (with 3 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_vis) - Automated Complex Task Solving by Group Chat (with 6 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_research) -- Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_planning.ipynb) +- Automated Task Solving with Coding & Planning Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_planning.ipynb) - Automated Task Solving with transition paths specified in a graph - [View Notebook](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) - Running a group chat as an inner-monolgue via the SocietyOfMindAgent - [View Notebook](/docs/notebooks/agentchat_society_of_mind) - Running a group chat with custom speaker selection function - [View Notebook](/docs/notebooks/agentchat_groupchat_customized) @@ -38,73 +38,73 @@ Links to notebook examples: ### Applications -- Automated Continual Learning from New Data - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_stream.ipynb) +- Automated Continual Learning from New Data - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_stream.ipynb) - [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization - [AutoAnny](https://github.com/microsoft/autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen ### Tool Use -- **Web Search**: Solve Tasks Requiring Web Info - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_web_info.ipynb) +- **Web Search**: Solve Tasks Requiring Web Info - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_web_info.ipynb) - Use Provided Tools as Functions - [View Notebook](/docs/notebooks/agentchat_function_call_currency_calculator) - Use Tools via Sync and Async Function Calling - [View Notebook](/docs/notebooks/agentchat_function_call_async) -- Task Solving with Langchain Provided Tools as Functions - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_langchain.ipynb) +- Task Solving with Langchain Provided Tools as Functions - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_langchain.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG) -- Function Inception: Enable AutoGen agents to update/remove functions during conversations. - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_inception_function.ipynb) +- Function Inception: Enable AutoGen agents to update/remove functions during conversations. 
- [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_inception_function.ipynb) - Agent Chat with Whisper - [View Notebook](/docs/notebooks/agentchat_video_transcript_translate_with_whisper) -- Constrained Responses via Guidance - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb) -- Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_surfer.ipynb) -- **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_sql_spider.ipynb) +- Constrained Responses via Guidance - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_guidance.ipynb) +- Browse the Web with Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_surfer.ipynb) +- **SQL**: Natural Language Text to SQL Query using the [Spider](https://yale-lily.github.io/spider) Text-to-SQL Benchmark - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_sql_spider.ipynb) - **Web Scraping**: Web Scraping with Apify - [View Notebook](/docs/notebooks/agentchat_webscraping_with_apify) -- **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_code_writing.ipynb). +- **Write a software app, task by task, with specially designed functions.** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_function_call_code_writing.ipynb). ### Human Involvement -- Simple example in ChatGPT style [View example](https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py) -- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_human_feedback.ipynb) -- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb) -- Agent Chat with **Async Human Inputs** - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/Async_human_input.ipynb) +- Simple example in ChatGPT style [View example](https://github.com/microsoft/autogen/blob/0.2/samples/simple_chat.py) +- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_human_feedback.ipynb) +- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_two_users.ipynb) +- Agent Chat with **Async Human Inputs** - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/Async_human_input.ipynb) ### Agent Teaching and Learning - Teach Agents New Skills & Reuse via Automated Chat - [View Notebook](/docs/notebooks/agentchat_teaching) - Teach Agents New Facts, User Preferences and Skills Beyond Coding - [View Notebook](/docs/notebooks/agentchat_teachability) -- Teach OpenAI Assistants Through GPTAssistantAgent - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachable_oai_assistants.ipynb) -- Agent Optimizer: Train Agents in an Agentic Way - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb) +- Teach OpenAI Assistants Through 
GPTAssistantAgent - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachable_oai_assistants.ipynb) +- Agent Optimizer: Train Agents in an Agentic Way - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentoptimizer.ipynb) ### Multi-Agent Chat with OpenAI Assistants in the loop -- Hello-World Chat with OpenAi Assistant in AutoGen - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) -- Chat with OpenAI Assistant using Function Call - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb) -- Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb) -- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) -- OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_groupchat.ipynb) -- GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/gpt_assistant_agent_function_call.ipynb) +- Hello-World Chat with OpenAI Assistant in AutoGen - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_twoagents_basic.ipynb) +- Chat with OpenAI Assistant using Function Call - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_function_call.ipynb) +- Chat with OpenAI Assistant with Code Interpreter - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_code_interpreter.ipynb) +- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb) +- OpenAI Assistant in a Group Chat - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_groupchat.ipynb) +- GPTAssistantAgent based Multi-Agent Tool Use - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/gpt_assistant_agent_function_call.ipynb) ### Non-OpenAI Models - Conversational Chess using non-OpenAI Models - [View Notebook](/docs/notebooks/agentchat_nested_chats_chess_altmodels) ### Multimodal Agent -- Multimodal Agent Chat with DALLE and GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_dalle_and_gpt4v.ipynb) -- Multimodal Agent Chat with Llava - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) -- Multimodal Agent Chat with GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) +- Multimodal Agent Chat with DALLE and GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_dalle_and_gpt4v.ipynb) +- Multimodal Agent Chat with Llava - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb) +- Multimodal Agent Chat with GPT-4V - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_gpt-4v.ipynb) ### Long Context Handling - + - Long Context Handling as A Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages) ### Evaluation and Assessment -- AgentEval: A Multi-Agent System for
Assess Utility of LLM-powered Applications - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb) +- AgentEval: A Multi-Agent System for Assessing the Utility of LLM-powered Applications - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agenteval_cq_math.ipynb) ### Automatic Agent Building -- Automatically Build Multi-agent System with AgentBuilder - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_basic.ipynb) -- Automatically Build Multi-agent System from Agent Library - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb) +- Automatically Build Multi-agent System with AgentBuilder - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_basic.ipynb) +- Automatically Build Multi-agent System from Agent Library - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/autobuild_agent_library.ipynb) ### Observability -- Track LLM calls, tool usage, actions and errors using AgentOps - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentops.ipynb) +- Track LLM calls, tool usage, actions and errors using AgentOps - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_agentops.ipynb) ## Enhanced Inferences @@ -112,7 +112,7 @@ Links to notebook examples: - API Unification - [View Documentation with Code Example](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification) - Utility Functions to Help Managing API configurations effectively - [View Notebook](/docs/topics/llm_configuration) -- Cost Calculation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_cost_token_tracking.ipynb) +- Cost Calculation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_cost_token_tracking.ipynb) ### Inference Hyperparameters Tuning @@ -120,5 +120,5 @@ AutoGen offers a cost-effective hyperparameter optimization technique [EcoOptiGe Please find documentation about this feature [here](/docs/Use-Cases/enhanced_inference). Links to notebook examples: -* [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) -* [Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) +* [Optimize for Code Generation](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) +* [Optimize for Math](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx index 4a2bbf63fff5..761ed6f3d34d 100644 --- a/website/docs/Getting-Started.mdx +++ b/website/docs/Getting-Started.mdx @@ -118,7 +118,7 @@ Learn more about configuring LLMs for agents [here](/docs/topics/llm_configurati #### Multi-Agent Conversation Framework Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework.
It offers customizable and conversable agents which integrate LLMs, tools, and humans. -By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py), +By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py), The figure below shows an example conversation flow with AutoGen. diff --git a/website/docs/Migration-Guide.md b/website/docs/Migration-Guide.md index f996a157a299..97e16de0770b 100644 --- a/website/docs/Migration-Guide.md +++ b/website/docs/Migration-Guide.md @@ -26,7 +26,7 @@ autogen.runtime_logging.start() # Stop logging autogen.runtime_logging.stop() ``` -Checkout [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and [Logging example notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) to learn more. +Check out the [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and the [Logging example notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_logging.ipynb) to learn more. Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). - `seed` in autogen is renamed into `cache_seed` to accommodate the newly added `seed` param in openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()` for being automatically decided by `cache_seed`: int | None. The difference between autogen's `cache_seed` and openai's `seed` is that: diff --git a/website/docs/Use-Cases/agent_chat.md b/website/docs/Use-Cases/agent_chat.md index 76f9959841b8..fa85691810c9 100644 --- a/website/docs/Use-Cases/agent_chat.md +++ b/website/docs/Use-Cases/agent_chat.md @@ -81,13 +81,13 @@ AutoGen, by integrating conversation-driven control utilizing both programming a With the pluggable auto-reply function, one can choose to invoke conversations with other agents depending on the content of the current message and context. For example: - Hierarchical chat like in [OptiGuide](https://github.com/microsoft/optiguide). -- [Dynamic Group Chat](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb) which is a special form of hierarchical chat. In the system, we register a reply function in the group chat manager, which broadcasts messages and decides who the next speaker will be in a group chat setting. +- [Dynamic Group Chat](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_groupchat.ipynb) which is a special form of hierarchical chat. In the system, we register a reply function in the group chat manager, which broadcasts messages and decides who the next speaker will be in a group chat setting. - [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine) which is a special form of dynamic group chat. In this approach, a directed transition matrix is fed into group chat. Users can specify legal transitions or specify disallowed transitions.
-- Nested chat like in [conversational chess](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_nested_chats_chess.ipynb). +- Nested chat like in [conversational chess](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_nested_chats_chess.ipynb). 2. LLM-Based Function Call -Another approach involves LLM-based function calls, where LLM decides if a specific function should be invoked based on the conversation's status during each inference. This approach enables dynamic multi-agent conversations, as seen in scenarios like [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls. +Another approach involves LLM-based function calls, where the LLM decides if a specific function should be invoked based on the conversation's status during each inference. This approach enables dynamic multi-agent conversations, as seen in scenarios like the [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls. ### Diverse Applications Implemented with AutoGen diff --git a/website/docs/Use-Cases/enhanced_inference.md b/website/docs/Use-Cases/enhanced_inference.md index 14723391e8cf..825f46ff0714 100644 --- a/website/docs/Use-Cases/enhanced_inference.md +++ b/website/docs/Use-Cases/enhanced_inference.md @@ -140,7 +140,7 @@ The `OpenAIWrapper` from `autogen` tracks token counts and costs of your API cal - `mode='actual'`: only print non-cached usage. - `mode='total'`: only print all usage (including cache). -Reset your session's usage data with `clear_usage_summary()` when needed. [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_client_cost.ipynb) +Reset your session's usage data with `clear_usage_summary()` when needed. [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_client_cost.ipynb) Example usage: ```python @@ -307,7 +307,7 @@ When debugging or diagnosing an LLM-based system, it is often convenient to log ### For openai >= 1 -Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) +Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_logging.ipynb) #### Start logging: ```python diff --git a/website/docs/contributor-guide/docker.md b/website/docs/contributor-guide/docker.md index 8ea54ed7c228..b09a53aa35f2 100644 --- a/website/docs/contributor-guide/docker.md +++ b/website/docs/contributor-guide/docker.md @@ -48,4 +48,4 @@ docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/ ## Develop in Remote Container If you use vscode, you can open the autogen folder in a [Container](https://code.visualstudio.com/docs/remote/containers). -We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/main/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended. +We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/0.2/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended.
diff --git a/website/docs/contributor-guide/documentation.md b/website/docs/contributor-guide/documentation.md index 70a79b700697..3380ff529280 100644 --- a/website/docs/contributor-guide/documentation.md +++ b/website/docs/contributor-guide/documentation.md @@ -2,7 +2,7 @@ ## How to get a notebook rendered on the website -See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website. +See [here](https://github.com/microsoft/autogen/blob/0.2/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website. ## Build documentation locally diff --git a/website/docs/contributor-guide/tests.md b/website/docs/contributor-guide/tests.md index c5eabb907329..a4b03b6379d3 100644 --- a/website/docs/contributor-guide/tests.md +++ b/website/docs/contributor-guide/tests.md @@ -2,12 +2,12 @@ Tests are automatically run via GitHub actions. There are two workflows: -1. [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) -1. [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml) +1. [build.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/build.yml) +1. [openai.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/openai.yml) -The first workflow is required to pass for all PRs (and it doesn't do any OpenAI calls). The second workflow is required for changes that affect the OpenAI tests (and does actually call LLM). The second workflow requires approval to run. When writing tests that require OpenAI calls, please use [`pytest.mark.skipif`](https://github.com/microsoft/autogen/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py#L19) to make them run in only when `openai` package is installed. If additional dependency for this test is required, install the dependency in the corresponding python version in [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml). +The first workflow is required to pass for all PRs (and it doesn't do any OpenAI calls). The second workflow is required for changes that affect the OpenAI tests (and does actually call the LLM). The second workflow requires approval to run. When writing tests that require OpenAI calls, please use [`pytest.mark.skipif`](https://github.com/microsoft/autogen/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py#L19) to make them run only when the `openai` package is installed. If an additional dependency is required for this test, install the dependency for the corresponding Python version in [openai.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/openai.yml). -Make sure all tests pass, this is required for [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) checks to pass +Make sure all tests pass; this is required for the [build.yml](https://github.com/microsoft/autogen/blob/0.2/.github/workflows/build.yml) checks to pass ## Running tests locally @@ -27,7 +27,7 @@ Tests for the `autogen.agentchat.contrib` module may be skipped automatically if required dependencies are not installed. Please consult the documentation for each contrib module to see what dependencies are required.
-See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#testing) for how to run notebook tests. +See [here](https://github.com/microsoft/autogen/blob/0.2/notebook/contributing.md#testing) for how to run notebook tests. ## Skip flags for tests diff --git a/website/docs/ecosystem/llamaindex.md b/website/docs/ecosystem/llamaindex.md index 4e9cffeedc7d..6561880b4d22 100644 --- a/website/docs/ecosystem/llamaindex.md +++ b/website/docs/ecosystem/llamaindex.md @@ -4,4 +4,4 @@ [Llamaindex](https://www.llamaindex.ai/) allows the users to create Llamaindex agents and integrate them in autogen conversation patterns. -- [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb) +- [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_group_chat_with_llamaindex_agents.ipynb) diff --git a/website/docs/ecosystem/microsoft-fabric.md b/website/docs/ecosystem/microsoft-fabric.md index d0593a4a7d04..5ef625c71763 100644 --- a/website/docs/ecosystem/microsoft-fabric.md +++ b/website/docs/ecosystem/microsoft-fabric.md @@ -4,4 +4,4 @@ [Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. In this notenook, we give a simple example for using AutoGen in Microsoft Fabric. -- [Microsoft Fabric + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_microsoft_fabric.ipynb) +- [Microsoft Fabric + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_microsoft_fabric.ipynb) diff --git a/website/docs/ecosystem/pgvector.md b/website/docs/ecosystem/pgvector.md index 99afa676e7e4..637941e6135e 100644 --- a/website/docs/ecosystem/pgvector.md +++ b/website/docs/ecosystem/pgvector.md @@ -2,4 +2,4 @@ [PGVector](https://github.com/pgvector/pgvector) is an open-source vector similarity search for Postgres. -- [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) +- [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) diff --git a/website/docs/installation/Docker.md b/website/docs/installation/Docker.md index a7fa6bd829b5..a4506f9c10ae 100644 --- a/website/docs/installation/Docker.md +++ b/website/docs/installation/Docker.md @@ -41,7 +41,7 @@ Here's how you can run an application built with AutoGen, using the Docker image Here, `$(pwd)/myapp` is your local directory, and `/home/autogen/autogen/myapp` is the path in the Docker container where your code will be located. -2. **Mount your code:** Now suppose you have your application built with AutoGen in a main script named `twoagent.py` ([example](https://github.com/microsoft/autogen/blob/main/test/twoagent.py)) in a folder named `myapp`. With the command line below, you can mount your folder and run the application in Docker. +2. **Mount your code:** Now suppose you have your application built with AutoGen in a main script named `twoagent.py` ([example](https://github.com/microsoft/autogen/blob/0.2/test/twoagent.py)) in a folder named `myapp`. 
With the command line below, you can mount your folder and run the application in Docker. ```python # Mount the local folder `myapp` into docker image and run the script named "twoagent.py" in the docker. @@ -82,6 +82,6 @@ docker run -it -p {WorkstationPortNum}:{DockerPortNum} -v {WorkStation_Dir}:{Doc ## Additional Resources -- Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/blob/main/.devcontainer/README.md) README. +- Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/blob/0.2/.devcontainer/README.md) README. - For more information on Docker usage and best practices, refer to the [official Docker documentation](https://docs.docker.com). - Details on how to use the Dockerfile dev version can be found on the [Contributor Guide](/docs/contributor-guide/docker). diff --git a/website/docs/installation/Optional-Dependencies.md b/website/docs/installation/Optional-Dependencies.md index 3f8164a667e9..89013d023ea6 100644 --- a/website/docs/installation/Optional-Dependencies.md +++ b/website/docs/installation/Optional-Dependencies.md @@ -38,9 +38,9 @@ pip install "pyautogen[blendsearch]<0.2" Example notebooks: -[Optimize for Code Generation](https://github.com/microsoft/autogen/blob/main/notebook/oai_completion.ipynb) +[Optimize for Code Generation](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_completion.ipynb) -[Optimize for Math](https://github.com/microsoft/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) +[Optimize for Math](https://github.com/microsoft/autogen/blob/0.2/notebook/oai_chatgpt_gpt4.ipynb) ## retrievechat @@ -72,11 +72,11 @@ You can find a list of all supported document types by using `autogen.retrieve_u Example notebooks: -[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb) +[Automated Code Generation and Question Answering with Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat.ipynb) -[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb) +[Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_groupchat_RAG.ipynb) -[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) +[Automated Code Generation and Question Answering with Qdrant based Retrieval Augmented Agents](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) ## Teachability @@ -86,7 +86,7 @@ To use Teachability, please install AutoGen with the [teachable] option. 
pip install "autogen-agentchat[teachable]~=0.2" ``` -Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) +Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_teachability.ipynb) ## Large Multimodal Model (LMM) Agents @@ -98,7 +98,7 @@ pip install "autogen-agentchat[lmm]~=0.2" Example notebooks: -[LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) +[LLaVA Agent](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_lmm_llava.ipynb) ## mathchat @@ -110,7 +110,7 @@ pip install "pyautogen[mathchat]<0.2" Example notebooks: -[Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat.ipynb) +[Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_MathChat.ipynb) ## Graph diff --git a/website/docs/topics/non-openai-models/cloud-gemini.ipynb b/website/docs/topics/non-openai-models/cloud-gemini.ipynb index 0a36dd62cf91..673d8a80f31e 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini.ipynb @@ -495,7 +495,7 @@ "user_proxy.initiate_chat(\n", " image_agent,\n", " message=\"\"\"Describe what is in this image?\n", - ".\"\"\",\n", + ".\"\"\",\n", ")" ] }, diff --git a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb index 3456a803f488..2389092e8dc9 100644 --- a/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb +++ b/website/docs/topics/non-openai-models/cloud-gemini_vertexai.ipynb @@ -392,7 +392,7 @@ "user_proxy.initiate_chat(\n", " image_agent,\n", " message=\"\"\"Describe what is in this image?\n", - ".\"\"\",\n", + ".\"\"\",\n", ")" ] }, diff --git a/website/docs/topics/retrieval_augmentation.md b/website/docs/topics/retrieval_augmentation.md index 7fdd39f31f92..1ac32e85fef3 100644 --- a/website/docs/topics/retrieval_augmentation.md +++ b/website/docs/topics/retrieval_augmentation.md @@ -124,10 +124,10 @@ ragproxyagent.initiate_chat( ## More Examples and Notebooks For more detailed examples and notebooks showcasing the usage of retrieval augmented agents in AutoGen, refer to the following: - Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat) -- Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb) -- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_qdrant.ipynb) -- Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_mongodb.ipynb) -- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_oai_assistant_retrieval.ipynb) +- Automated Code Generation and Question Answering with [PGVector](https://github.com/pgvector/pgvector) based 
Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_pgvector.ipynb) +- Automated Code Generation and Question Answering with [Qdrant](https://qdrant.tech/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_qdrant.ipynb) +- Automated Code Generation and Question Answering with [MongoDB Atlas](https://www.mongodb.com/) based Retrieval Augmented Agents - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_RetrieveChat_mongodb.ipynb) +- Chat with OpenAI Assistant with Retrieval Augmentation - [View Notebook](https://github.com/microsoft/autogen/blob/0.2/notebook/agentchat_oai_assistant_retrieval.ipynb) - **RAG**: Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent) - [View Notebook](/docs/notebooks/agentchat_groupchat_RAG) ## Roadmap diff --git a/website/process_notebooks.py b/website/process_notebooks.py index 5e25f5e69510..a3157363ab82 100755 --- a/website/process_notebooks.py +++ b/website/process_notebooks.py @@ -348,7 +348,7 @@ def post_process_mdx(rendered_mdx: Path, source_notebooks: Path, front_matter: D if "{" in title: title = title[: title.find("{")].strip() - github_link = f"https://github.com/microsoft/autogen/blob/main/{repo_relative_notebook}" + github_link = f"https://github.com/microsoft/autogen/blob/0.2/{repo_relative_notebook}" content = ( content[:title_end] + "\n[![Open on GitHub](https://img.shields.io/badge/Open%20on%20GitHub-grey?logo=github)](" diff --git a/website/src/components/GalleryPage.js b/website/src/components/GalleryPage.js index 11f4abe2e7fe..aec7907b41ef 100644 --- a/website/src/components/GalleryPage.js +++ b/website/src/components/GalleryPage.js @@ -91,7 +91,7 @@ const GalleryPage = (props) => { return null; } const colab_href = `https://colab.research.google.com/github/microsoft/autogen/blob/main/${item.source}`; - const github_href = `https://github.com/microsoft/autogen/blob/main/${item.source}`; + const github_href = `https://github.com/microsoft/autogen/blob/0.2/${item.source}`; return ( Open In Colab Static Badge