Fix broken links #1905

Merged
1 commit merged on Mar 7, 2024
102 changes: 51 additions & 51 deletions notebook/agentchat_teachability.ipynb
@@ -21,7 +21,7 @@
"\n",
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
"\n",
"This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](../test/agentchat/contrib/chat_with_teachable_agent.py).\n",
"This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](../test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
"\n",
"## Requirements\n",
"\n",
@@ -138,8 +138,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[92m\n",
"CLEARING MEMORY\u001b[0m\n"
"\u001B[92m\n",
"CLEARING MEMORY\u001B[0m\n"
]
}
],
@@ -191,14 +191,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"What is the Vicuna model?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"The term \"Vicuna model\" does not point to a well-known concept or framework in the realms of science, technology, or social sciences as of my last knowledge update in early 2023. It's possible that the term could be a reference to a proprietary model or a concept that has emerged after my last update or it might be a misspelling or a misunderstanding.\n",
"\n",
@@ -224,14 +224,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Vicuna is a 13B-parameter language model released by Meta.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"My apologies for the confusion. As of my last update, the Vicuna model had not been part of my database. If Vicuna is indeed a 13-billion-parameter language model developed by Meta (formerly Facebook Inc.), then it would be one of the large-scale transformer-based models akin to those like GPT-3 by OpenAI.\n",
"\n",
@@ -261,14 +261,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"What is the Orca model?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"As of my last update, the Orca model appears to reference a new development that I do not have extensive information on, similar to the earlier reference to the Vicuna model.\n",
"\n",
@@ -294,14 +294,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Orca is a 13B-parameter language model developed by Microsoft. It outperforms Vicuna on most tasks.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"Thank you for providing the context about the Orca model. Based on the new information you've given, Orca is a language model with 13 billion parameters, similar in size to Meta's Vicuna model, but developed by Microsoft. If it outperforms Vicuna on most tasks, it suggests that it could have been trained on a more diverse dataset, use a more advanced architecture, have more effective training techniques, or some combination of these factors.\n",
"\n",
@@ -336,14 +336,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"How does the Vicuna model compare to the Orca model?\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"The Vicuna model and the Orca model are both large-scale language models with a significant number of parameters—13 billion, to be exact.\n",
"\n",
@@ -379,7 +379,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Please summarize this abstract.\n",
"\n",
@@ -389,9 +389,9 @@
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"AutoGen is an open-source framework designed to facilitate the creation of applications using large language models (LLMs) through the use of multiple conversational agents. These agents can be tailored to users' needs and are capable of interaction in multiple modes, including with other LLMs, human input, and additional tools. With AutoGen, developers have the flexibility to program agent interactions using both natural language and code, enabling the creation of complex patterns suitable for a wide range of applications. The framework has been proven effective across various fields, such as math, coding, question answering, and entertainment, based on empirical studies conducted to test its capabilities.\n",
"\n",
@@ -425,7 +425,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Please summarize this abstract. \n",
"When I'm summarizing an abstract, I try to make the summary contain just three short bullet points: the title, the innovation, and the key empirical results.\n",
@@ -436,9 +436,9 @@
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"- Title: AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation\n",
"- Innovation: AutoGen, an open-source framework that supports building large language model (LLM) applications by enabling conversation among multiple customizable and conversable agents.\n",
@@ -475,7 +475,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Please summarize this abstract.\n",
"\n",
@@ -484,9 +484,9 @@
"Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"- Title: Sparks of Artificial General Intelligence: Early experiments with GPT-4\n",
"\n",
@@ -526,7 +526,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Consider the identity: \n",
"9 * 4 + 6 * 6 = 72\n",
@@ -535,9 +535,9 @@
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"To solve this problem, we need to find a way to add exactly 27 (since 99 - 72 = 27) to the left hand side of the equation by modifying only one of the integers in the equation. \n",
"\n",
@@ -602,7 +602,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Consider the identity: \n",
"9 * 4 + 6 * 6 = 72\n",
@@ -623,9 +623,9 @@
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"Given the new set of instructions and the correction that according to a past memory, the solution is \"9 * 1 + 6 * 9\", let's follow the steps carefully to arrive at the correct modified equation.\n",
"\n",
@@ -707,7 +707,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Consider the identity: \n",
"9 * 4 + 6 * 6 = 72\n",
@@ -716,9 +716,9 @@
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"Let's apply the steps you've provided to solve the problem at hand:\n",
"\n",
@@ -779,7 +779,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33muser\u001b[0m (to teachable_agent):\n",
"\u001B[33muser\u001B[0m (to teachable_agent):\n",
"\n",
"Consider the identity: \n",
"8 * 3 + 7 * 9 = 87\n",
@@ -788,9 +788,9 @@
"\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[31m\n",
">>>>>>>> USING AUTO REPLY...\u001b[0m\n",
"\u001b[33mteachable_agent\u001b[0m (to user):\n",
"\u001B[31m\n",
">>>>>>>> USING AUTO REPLY...\u001B[0m\n",
"\u001B[33mteachable_agent\u001B[0m (to user):\n",
"\n",
"Let's apply the plan step-by-step to find the correct modification:\n",
"\n",
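The arithmetic puzzle threaded through the transcripts above (change exactly one integer in `9 * 4 + 6 * 6 = 72` so the result equals 99) can be checked mechanically. A small brute-force sketch, for illustration only: the notebook itself solves the puzzle through LLM reasoning and a taught step-by-step plan, and the function name here is hypothetical.

```python
# Brute-force check of the notebook's puzzle: which single-integer change
# to a*b + c*d makes the expression hit the target value?

def one_change_solutions(nums, target):
    """Return all (index, new_value) pairs that make a*b + c*d == target."""
    solutions = []
    for i in range(4):
        for v in range(100):  # search a small non-negative integer range
            trial = list(nums)
            trial[i] = v
            if v != nums[i] and trial[0] * trial[1] + trial[2] * trial[3] == target:
                solutions.append((i, v))
    return solutions

print(one_change_solutions((9, 4, 6, 6), 99))
# → [(1, 7)]  — i.e. 9 * 7 + 6 * 6 = 99
```

This confirms there is exactly one such change in that range (replace the 4 with a 7), which is why the transcript's proposed answer `9 * 1 + 6 * 9` is suspect: it alters two integers, not one.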
4 changes: 2 additions & 2 deletions website/blog/2023-10-26-TeachableAgent/index.mdx
@@ -24,9 +24,9 @@ In order to make effective decisions about memo storage and retrieval, the `Teac

AutoGen contains four code examples that use `Teachability`.

1. Run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/chat_with_teachable_agent.py) to converse with a teachable agent.
1. Run [chat_with_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py) to converse with a teachable agent.

2. Run [test_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/test_teachable_agent.py) for quick unit testing of a teachable agent.
2. Run [test_teachable_agent.py](https://github.com/microsoft/autogen/blob/main/test/agentchat/contrib/capabilities/test_teachable_agent.py) for quick unit testing of a teachable agent.

3. Use the Jupyter notebook [agentchat_teachability.ipynb](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb) to step through examples discussed below.
