Add notebooks section on website #1495

Merged · 17 commits · Feb 3, 2024
10 changes: 8 additions & 2 deletions .github/workflows/deploy-website.yml
@@ -37,7 +37,7 @@ jobs:
- name: pydoc-markdown install
run: |
python -m pip install --upgrade pip
pip install pydoc-markdown
pip install pydoc-markdown pyyaml colored
- name: pydoc-markdown run
run: |
pydoc-markdown
@@ -50,6 +50,9 @@ jobs:
- name: quarto run
run: |
quarto render .
- name: Process notebooks
run: |
python process_notebooks.py
- name: Test Build
run: |
if [ -e yarn.lock ]; then
@@ -80,7 +83,7 @@ jobs:
- name: pydoc-markdown install
run: |
python -m pip install --upgrade pip
pip install pydoc-markdown
pip install pydoc-markdown pyyaml colored
- name: pydoc-markdown run
run: |
pydoc-markdown
@@ -93,6 +96,9 @@ jobs:
- name: quarto run
run: |
quarto render .
- name: Process notebooks
run: |
python process_notebooks.py
- name: Build website
run: |
if [ -e yarn.lock ]; then
103 changes: 29 additions & 74 deletions notebook/agentchat_RetrieveChat.ipynb
@@ -5,16 +5,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"toc\"></a>\n",
"# Auto Generated Agent Chat: Using RetrieveChat for Retrieve Augmented Code Generation and Question Answering\n",
"<!--\n",
"tags: [\"RAG\"]\n",
"description: |\n",
" Explore the use of AutoGen's RetrieveChat for tasks like code generation from docstrings, answering complex questions with human feedback, and exploiting features like Update Context, custom prompts, and few-shot learning.\n",
"-->\n",
"\n",
"# Using RetrieveChat for Retrieve Augmented Code Generation and Question Answering\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
@@ -24,35 +21,24 @@
"## Table of Contents\n",
"We'll demonstrate six examples of using RetrieveChat for code generation and question answering:\n",
"\n",
"[Example 1: Generate code based off docstrings w/o human feedback](#example-1)\n",
"\n",
"[Example 2: Answer a question based off docstrings w/o human feedback](#example-2)\n",
"\n",
"[Example 3: Generate code based off docstrings w/ human feedback](#example-3)\n",
"\n",
"[Example 4: Answer a question based off docstrings w/ human feedback](#example-4)\n",
"- [Example 1: Generate code based off docstrings w/o human feedback](#example-1)\n",
"- [Example 2: Answer a question based off docstrings w/o human feedback](#example-2)\n",
"- [Example 3: Generate code based off docstrings w/ human feedback](#example-3)\n",
"- [Example 4: Answer a question based off docstrings w/ human feedback](#example-4)\n",
"- [Example 5: Solve comprehensive QA problems with RetrieveChat's unique feature `Update Context`](#example-5)\n",
"- [Example 6: Solve comprehensive QA problems with customized prompt and few-shot learning](#example-6)\n",
"\n",
"[Example 5: Solve comprehensive QA problems with RetrieveChat's unique feature `Update Context`](#example-5)\n",
"\n",
"[Example 6: Solve comprehensive QA problems with customized prompt and few-shot learning](#example-6)\n",
"\\:\\:\\:info Requirements\n",
"\n",
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
"\n",
"```bash\n",
"pip install pyautogen[retrievechat] flaml[automl]\n",
"```\n",
"\n",
"## Requirements\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install the [retrievechat] option.\n",
"```bash\n",
"pip install \"pyautogen[retrievechat]>=0.2.3\" \"flaml[automl]\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# %pip install \"pyautogen[retrievechat]>=0.2.3\" \"flaml[automl]\""
"\\:\\:\\:\n"
]
},
{
@@ -94,7 +80,6 @@
"\n",
"config_list = autogen.config_list_from_json(\n",
" env_or_file=\"OAI_CONFIG_LIST\",\n",
" file_location=\".\",\n",
" filter_dict={\n",
" \"model\": {\n",
" \"gpt-4\",\n",
@@ -116,35 +101,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 and gpt-3.5-turbo models are kept in the list based on the filter condition.\n",
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" },\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'base_url': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
" {\n",
" 'model': 'gpt-3.5-turbo',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'base_url': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
"]\n",
"```\n",
"\\:\\:\\:tip\n",
"\n",
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n",
"Learn more about the various ways to configure LLM endpoints [here](/docs/llm_endpoint_configuration).\n",
"\n",
"You can set the value of config_list in other ways you prefer, e.g., loading from a YAML file."
"\\:\\:\\:"
]
},
{
@@ -230,10 +191,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"example-1\"></a>\n",
"### Example 1\n",
"\n",
"[back to top](#toc)\n",
"[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to help generate sample code and automatically run the code and fix errors if there is any.\n",
"\n",
@@ -537,10 +497,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"example-2\"></a>\n",
"### Example 2\n",
"\n",
"[back to top](#toc)\n",
"[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer a question that is not related to code generation.\n",
"\n",
@@ -1092,10 +1051,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"example-3\"></a>\n",
"### Example 3\n",
"\n",
"[back to top](#toc)\n",
"[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to help generate sample code and ask for human-in-loop feedbacks.\n",
"\n",
@@ -1506,10 +1464,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"example-4\"></a>\n",
"### Example 4\n",
"\n",
"[back to top](#toc)\n",
"[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer a question and ask for human-in-loop feedbacks.\n",
"\n",
@@ -2065,10 +2022,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"example-5\"></a>\n",
"### Example 5\n",
"\n",
"[back to top](#toc)\n",
"[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer questions for [NaturalQuestion](https://ai.google.com/research/NaturalQuestions) dataset.\n",
"\n",
@@ -2665,10 +2621,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"example-6\"></a>\n",
"### Example 6\n",
"\n",
"[back to top](#toc)\n",
"[Back to top](#table-of-contents)\n",
"\n",
"Use RetrieveChat to answer multi-hop questions for [2WikiMultihopQA](https://github.com/Alab-NII/2wikimultihop) dataset with customized prompt and few-shot learning.\n",
"\n",
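The `config_list_from_json` call in this notebook keeps only configs whose `model` matches the `filter_dict`. Conceptually, that filtering works like the sketch below — a simplified re-implementation for illustration only, not AutoGen's actual code:

```python
def filter_config(config_list, filter_dict):
    """Keep only the configs whose value matches, for every key in
    filter_dict, one of the allowed values.

    A simplified illustration of filter_dict semantics, not AutoGen's
    implementation.
    """
    if not filter_dict:
        return list(config_list)
    return [
        config
        for config in config_list
        if all(config.get(key) in values for key, values in filter_dict.items())
    ]


# Hypothetical config list, mirroring the shape loaded from OAI_CONFIG_LIST.
configs = [
    {"model": "gpt-4", "api_key": "<key>"},
    {"model": "gpt-3.5-turbo", "api_key": "<key>"},
    {"model": "gpt-4-32k", "api_key": "<key>"},
]
kept = filter_config(configs, {"model": {"gpt-4", "gpt-3.5-turbo"}})
# kept now holds only the gpt-4 and gpt-3.5-turbo entries
```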
79 changes: 16 additions & 63 deletions notebook/agentchat_auto_feedback_from_code_execution.ipynb
@@ -1,61 +1,36 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_auto_feedback_from_code_execution.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"slideshow": {
"slide_type": "slide"
}
},
"source": [
"# Auto Generated Agent Chat: Task Solving with Code Generation, Execution & Debugging\n",
"<!--\n",
"tags: [\"code generation\", \"debugging\"]\n",
"description: |\n",
" Use conversable language learning model agents to solve tasks and provide automatic feedback through a comprehensive example of writing, executing, and debugging Python code to compare stock price changes.\n",
"-->\n",
"\n",
"# Task Solving with Code Generation, Execution and Debugging\n",
"\n",
"AutoGen offers conversable LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to write code and execute the code. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `AssistantAgent`, or automatically execute the code. Depending on the setting of `human_input_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or returns auto-feedback based on the result of code execution (success or failure and corresponding outputs) to `AssistantAgent`. `AssistantAgent` will debug the code and suggest new code if the result contains error. The two agents keep communicating to each other until the task is done.\n",
"\n",
"## Requirements\n",
"\\:\\:\\:info Requirements\n",
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
"Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
"```"
]
},
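The write–execute–debug loop described in the cell above can be sketched in plain Python. This is a toy illustration of the control flow only: the real `AssistantAgent` is LLM-driven, and the stub below stands in for it with a canned "buggy first draft, fixed second draft" behavior:

```python
import traceback


def assistant_reply(task, feedback):
    """Stand-in for the LLM-backed AssistantAgent: propose (possibly fixed)
    code. Faked here: the first draft has a bug, the retry is corrected."""
    if feedback is None:
        return "result = 10 / 0"  # buggy first draft
    return "result = 10 / 2"      # "debugged" after seeing the error


def execute(code):
    """Stand-in for UserProxyAgent's code execution: run the code and
    report success plus output, or failure plus the error text."""
    env = {}
    try:
        exec(code, env)
        return True, str(env.get("result"))
    except Exception:
        return False, traceback.format_exc(limit=0)


def chat(task, max_consecutive_auto_reply=3):
    """Loop until the code runs cleanly or the auto-reply budget is spent."""
    feedback = None
    for _ in range(max_consecutive_auto_reply):
        code = assistant_reply(task, feedback)
        ok, output = execute(code)
        if ok:
            return output     # task done
        feedback = output     # auto-feedback sent back to the assistant
    return None


print(chat("divide 10 by 2"))  # prints 5.0 on the second attempt
```

In the real framework, `human_input_mode` controls whether a human is asked for feedback at each turn instead of (or in addition to) this automatic loop.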
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"execution": {
"iopub.execute_input": "2023-02-13T23:40:52.317406Z",
"iopub.status.busy": "2023-02-13T23:40:52.316561Z",
"iopub.status.idle": "2023-02-13T23:40:52.321193Z",
"shell.execute_reply": "2023-02-13T23:40:52.320628Z"
}
},
"outputs": [],
"source": [
"# %pip install pyautogen>=0.2.3"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"```\n",
"\n",
"The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
"For more information, please refer to the [installation guide](/docs/installation/).\n",
"\n",
"\\:\\:\\:"
]
},
{
@@ -84,33 +59,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 models are kept in the list based on the filter condition.\n",
"\\:\\:\\:tip\n",
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your OpenAI API key here>',\n",
" },\n",
" {\n",
" 'model': 'gpt-4',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'base_url': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
" {\n",
" 'model': 'gpt-4-32k',\n",
" 'api_key': '<your Azure OpenAI API key here>',\n",
" 'base_url': '<your Azure OpenAI API base here>',\n",
" 'api_type': 'azure',\n",
" 'api_version': '2023-06-01-preview',\n",
" },\n",
"]\n",
"```\n",
"Learn more about the various ways to configure LLM endpoints [here](/docs/llm_endpoint_configuration).\n",
"\n",
"You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
"\\:\\:\\:"
]
},
{