
Commit 497bf02

victordibia, pcdeadeasy, and sonichi authored
Sample Web Application Built with AutoGen (microsoft#695)
* Adding research assistant code
* Adding research assistant code
* checking in RA files
* Remove used text file
* Update README.md to include Saleema's name to the Contributors list.
* remove extraneous files
* update gitignore
* improve structure on global skills
* fix linting error
* readme update
* readme update
* fix wrong function bug
* readme update
* update ui build
* cleanup, remove unused modules
* readme and docs updates
* set default user
* ui build update
* add screenshot to improve instructions
* remove logout behaviour, replace with note to developers to add their own logout logic
* Create blog and edit ARA README
* Added the stock prices example in the readme for ARA
* Include edits from review with Saleema
* fix format issues
* Cosmetic changes for betting debug messages
* edit authors
* remove references to request_timeout to support autogen v0.0.2
* update bg color for UI
* readme update
* update research assistant blog post
* omit samples folder from codecov
* ui build update + precommit refactor
* formattiing updates fromo pre-commit
* readme update
* remove compiled source files
* update gitignore
* refactor, file removals
* refactor for improved structure - datamodel, chat and db helper
* update gitignore
* refactor, file removals
* refactor for improved structure - datamodel, chat and db helper
* refactor skills view
* general refactor
* gitignore update and general refactor
* skills update
* general refactor
* ui folder structure refactor
* improve support for skills loading
* add fetch profile default skill
* refactor chat to autogenchat
* qol refactor
* improve metadata display
* early support for autogenflow in ui
* docs update general refactor
* general refactor
* readme update
* readme update
* readme and cli update
* pre-commit updates
* precommit update
* readme update
* add steup.py for older python build versions
* add manifest.in, update app icon
* in-progress changes to agent specification
* remove use_cache refs
* update datamodel, and fix for default serverurl
* request_timeout
* readme update, fix autogen values
* fix pyautogen version
* precommit formatting and other qol items
* update folder structure
* req update
* readme and docs update
* docs update
* remove duplicate in yaml file
* add support for explicit skills addition
* readme and documentation updates
* general refactor
* remove blog post, schedule for future PR
* readme update, add info on llmconfig
* make use_cache False by default unless set
* minor ui updates
* upgrade ui to use latest uatogen lib version 0.2.0b5
* Ui refactor, support for adding arbitrary model specifications
* formatting/precommit checks
* update readme, utils default skill

---------

Co-authored-by: Piali Choudhury <[email protected]>
Co-authored-by: Chi Wang <[email protected]>
1 parent c4d1018 commit 497bf02

Some content is hidden (large commits have some content hidden by default).

51 files changed: +5164 -0 lines changed

.coveragerc

+1
@@ -3,3 +3,4 @@ branch = True
 source = autogen
 omit =
     *test*
+    *samples*
+24
@@ -0,0 +1,24 @@
database.sqlite
.cache/*
autogenra/web/files/user/*
autogenra/web/files/ui/*
OAI_CONFIG_LIST
scratch/
autogenra/web/workdir/*
autogenra/web/ui/*
autogenra/web/skills/user/*
.release.sh

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
@@ -0,0 +1,5 @@
recursive-include autogenra/web/ui *
recursive-exclude notebooks *
recursive-exclude frontend *
recursive-exclude docs *
recursive-exclude tests *
+119
@@ -0,0 +1,119 @@
# AutoGen Assistant

![ARA](./docs/ara_stockprices.png)

AutoGen Assistant is an AutoGen-powered AI app (user interface) that can converse with you to help you conduct research, write and execute code, run saved skills, create new skills (explicitly and by demonstration), and adapt in response to your interactions.

### Capabilities / Roadmap

Some of the capabilities supported by the app frontend include the following:

- [x] Select from a list of agents (current support for two agent workflows - `UserProxyAgent` and `AssistantAgent`)
- [x] Modify agent configuration (e.g. temperature, model, agent system message, etc.) and chat with the updated agent configuration.
- [x] View agent messages and output files in the UI from agent runs.
- [ ] Support for more complex agent workflows (e.g. `GroupChat` workflows)
- [ ] Improved user experience (e.g., streaming intermediate model output, better summarization of agent responses, etc.)

Project Structure:

- _autogenra/_ code for the backend classes and web api (FastAPI)
- _frontend/_ code for the webui, built with Gatsby and Tailwind

## Getting Started

AutoGen requires access to an LLM. Please see the [AutoGen docs](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints) on how to configure access to your LLM provider. In this sample, we recommend setting up your `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY` environment variable and then specifying the exact model parameters to be used in the `llm_config` that is passed to each agent specification. See the `get_default_agent_config()` method in `utils.py` for an example of setting up `llm_config`. The example below shows how to configure access to an Azure OpenAI LLM.

```python
llm_config = LLMConfig(
    config_list=[{
        "model": "gpt-4",
        "api_key": "<azure_api_key>",
        "api_base": "<azure api base>",
        "api_type": "azure",
        "api_version": "2023-06-01-preview"
    }],
    temperature=0,
)
```

```bash
export OPENAI_API_KEY=<your_api_key>
```
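
If you are calling the OpenAI API directly rather than Azure, a minimal sketch of the equivalent configuration is shown below; it assumes the same `LLMConfig` shown above accepts a plain OpenAI entry, and the key value is a placeholder.

```python
# Sketch only: a non-Azure variant of the configuration above.
llm_config = LLMConfig(
    config_list=[{
        "model": "gpt-4",
        "api_key": "<openai_api_key>",  # or rely on the OPENAI_API_KEY environment variable
    }],
    temperature=0,
)
```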

### Install and Run

Install a prebuilt version of the app from PyPI. We highly recommend using a virtual environment (e.g. miniconda) and **python 3.10+** to avoid dependency conflicts.

```bash
pip install autogenra
autogenra ui --port 8081 # run the web ui on port 8081
```

### Install from Source

To install the app from source, clone the repository and install the dependencies.

```bash
pip install -e .
```

You will also need to build the app front end. Note that Gatsby requires node > 14.15.0. You may need to [upgrade your node](https://stackoverflow.com/questions/10075990/upgrading-node-js-to-latest-version) version as needed.

```bash
npm install --global yarn
cd frontend
yarn install
yarn build
```

The command above will build the frontend ui and copy the build artifacts to the `autogenra` web ui folder. Note that you may have to run `npm install --force --legacy-peer-deps` to force resolve some peer dependencies.

Run the web ui:

```bash
autogenra ui --port 8081 # run the web ui on port 8081
```

Navigate to <http://localhost:8081/> to view the web ui.

To update the web ui, navigate to the frontend directory, make changes and rebuild the ui.

## Capabilities

This demo focuses on the research assistant use case with some generalizations:

- **Skills**: The agent is provided with a list of skills that it can leverage while attempting to address a user's query. Each skill is a python function that may be in any file in a folder made available to the agents. We separate the concept of global skills, available to all agents (`backend/files/global_utlis_dir`), and user-level skills (`backend/files/user/<user_hash>/utils_dir`), which is relevant in a multi-user environment. Agents are made aware of skills because skills are appended to the system message. A list of example skills is available in the `backend/global_utlis_dir` folder. Modify the file or create a new file with a function in the same directory to create new global skills. A minimal example of a skill file is sketched after this list.

- **Conversation Persistence**: Conversation history is persisted in a sqlite database `database.sqlite`.

- **Default Agent Workflow**: The default is a sample workflow with two agents - a user proxy agent and an assistant agent.
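
As referenced in the **Skills** item above, here is a minimal sketch of what a skill file could look like; the function name and behavior are illustrative assumptions, not one of the shipped global skills.

```python
# Hypothetical skill file (sketch only): a single documented python function
# that is appended to the assistant's system message and callable from generated code.
import urllib.request


def fetch_url_text(url: str, max_chars: int = 2000) -> str:
    """Fetch a web page and return up to max_chars characters of its raw content."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="ignore")[:max_chars]
```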

## Example Usage

Let us use a simple query demonstrating the capabilities of the research assistant.

```
Plot a chart of NVDA and TESLA stock price YTD. Save the result to a file named nvda_tesla.png
```

The agents respond by _writing and executing code_ to create a python program that generates the chart with the stock prices.

> Note that there could be multiple turns between the `AssistantAgent` and the `UserProxyAgent` to produce and execute the code in order to complete the task.

![ARA](./docs/ara_stockprices.png)

> Note: You can also view the debug console, which prints useful information showing how the agents are interacting in the background.

<!-- ![ARA](./docs/ara_console.png) -->

## FAQ

- How do I add more skills to the research assistant? Add a new file with documented functions to the `autogenra/web/skills/global` directory.
- How do I specify the agent configuration (e.g. temperature, model, agent system message, etc.)? You can do this either from the UI or by modifying the default agent configuration in `utils.py` (the `get_default_agent_config()` method).
- How do I reset the conversation? You can reset the conversation by deleting the `database.sqlite` file. You can also delete user files by deleting the `autogenra/web/files/user/<user_id_md5hash>` folder.
- How do I view messages generated by agents? You can view the messages generated by the agents in the debug console. You can also view the messages in the `database.sqlite` file (a minimal inspection sketch follows this list).
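
For the last point, here is a minimal sketch of inspecting the conversation store with Python's built-in `sqlite3` module; the table names are not documented here, so the sketch lists them instead of assuming a schema.

```python
# Sketch only: inspect database.sqlite without assuming its schema.
import sqlite3

con = sqlite3.connect("database.sqlite")
tables = [row[0] for row in con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print("Tables:", tables)

# Dump a few rows from each table to see what is stored.
for table in tables:
    print(table, con.execute(f"SELECT * FROM {table} LIMIT 3").fetchall())
con.close()
```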

## Acknowledgements

Based on the [AutoGen](https://microsoft.github.io/autogen) project.
Adapted in October 2023 from a research prototype (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang)

samples/apps/autogen-assistant/autogenra/__init__.py

Whitespace-only changes.

@@ -0,0 +1,64 @@
import json
import os
import time
from typing import List

from .autogenflow import AutoGenFlow
from .datamodel import FlowConfig, Message
from .utils import extract_successful_code_blocks, get_default_agent_config, get_modified_files


class ChatManager:
    def __init__(self) -> None:
        pass

    def chat(self, message: Message, history: List, flow_config: FlowConfig = None, **kwargs) -> None:
        work_dir = kwargs.get("work_dir", None)
        scratch_dir = os.path.join(work_dir, "scratch")
        skills_suffix = kwargs.get("skills_prompt", "")

        # if no flow config is provided, use the default
        if flow_config is None:
            flow_config = get_default_agent_config(scratch_dir, skills_suffix=skills_suffix)

        # print("Flow config: ", flow_config)
        flow = AutoGenFlow(config=flow_config, history=history, work_dir=scratch_dir, asst_prompt=skills_suffix)
        message_text = message.content.strip()

        output = ""
        start_time = time.time()

        metadata = {}
        flow.run(message=f"{message_text}", clear_history=False)

        agent_chat_messages = flow.receiver.chat_messages[flow.sender][len(history) :]
        metadata["messages"] = agent_chat_messages

        successful_code_blocks = extract_successful_code_blocks(agent_chat_messages)
        successful_code_blocks = "\n\n".join(successful_code_blocks)
        output = (
            (
                flow.sender.last_message()["content"]
                + "\n The following code snippets were used: \n"
                + successful_code_blocks
            )
            if successful_code_blocks
            else flow.sender.last_message()["content"]
        )

        metadata["code"] = ""
        end_time = time.time()
        metadata["time"] = end_time - start_time
        modified_files = get_modified_files(start_time, end_time, scratch_dir, dest_dir=work_dir)
        metadata["files"] = modified_files

        print("Modified files: ", modified_files)

        output_message = Message(
            user_id=message.user_id,
            root_msg_id=message.root_msg_id,
            role="assistant",
            content=output,
            metadata=json.dumps(metadata),
        )

        return output_message
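
For context, a minimal usage sketch for this class follows; the module path, the `Message` constructor fields beyond those used above, and the `work_dir` value are assumptions, not a verbatim excerpt of the web API code.

```python
# Hypothetical usage sketch (not part of this commit).
from autogenra.chatmanager import ChatManager  # assumed module name
from autogenra.datamodel import Message

manager = ChatManager()
user_message = Message(
    user_id="default",
    root_msg_id="0",
    role="user",
    content="Plot a chart of NVDA and TESLA stock price YTD.",
)
# history=[] means no prior conversation; a scratch/ folder is created under work_dir.
response = manager.chat(message=user_message, history=[], work_dir="./work_dir")
print(response.content)
```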

@@ -0,0 +1,134 @@
from typing import List, Optional
from dataclasses import asdict
import autogen
from .datamodel import AgentFlowSpec, FlowConfig, Message


class AutoGenFlow:
    """
    AutoGenFlow class to load agents from a provided configuration and run a chat between them
    """

    def __init__(
        self, config: FlowConfig, history: Optional[List[Message]] = None, work_dir: str = None, asst_prompt: str = None
    ) -> None:
        """
        Initializes the AutoGenFlow with agents specified in the config and optional
        message history.

        Args:
            config: The configuration settings for the sender and receiver agents.
            history: An optional list of previous messages to populate the agents' history.

        """
        self.work_dir = work_dir
        self.asst_prompt = asst_prompt
        self.sender = self.load(config.sender)
        self.receiver = self.load(config.receiver)

        if history:
            self.populate_history(history)

    def _sanitize_history_message(self, message: str) -> str:
        """
        Sanitizes the message e.g. remove references to execution completed

        Args:
            message: The message to be sanitized.

        Returns:
            The sanitized message.
        """
        to_replace = ["execution succeeded", "exitcode"]
        for replace in to_replace:
            message = message.replace(replace, "")
        return message

    def populate_history(self, history: List[Message]) -> None:
        """
        Populates the agent message history from the provided list of messages.

        Args:
            history: A list of messages to populate the agents' history.
        """
        for msg in history:
            if isinstance(msg, dict):
                msg = Message(**msg)
            if msg.role == "user":
                self.sender.send(
                    msg.content,
                    self.receiver,
                    request_reply=False,
                )
            elif msg.role == "assistant":
                self.receiver.send(
                    msg.content,
                    self.sender,
                    request_reply=False,
                )

    def sanitize_agent_spec(self, agent_spec: AgentFlowSpec) -> AgentFlowSpec:
        """
        Sanitizes the agent spec by setting loading defaults

        Args:
            agent_spec: The agent specification to be sanitized.

        Returns:
            The sanitized agent specification.
        """

        agent_spec.config.is_termination_msg = agent_spec.config.is_termination_msg or (
            lambda x: "TERMINATE" in x.get("content", "").rstrip()
        )

        if agent_spec.type == "userproxy":
            code_execution_config = agent_spec.config.code_execution_config or {}
            code_execution_config["work_dir"] = self.work_dir
            agent_spec.config.code_execution_config = code_execution_config
        if agent_spec.type == "assistant":
            agent_spec.config.system_message = (
                autogen.AssistantAgent.DEFAULT_SYSTEM_MESSAGE
                + "\n\n"
                + agent_spec.config.system_message
                + "\n\n"
                + self.asst_prompt
            )

        return agent_spec

    def load(self, agent_spec: AgentFlowSpec) -> autogen.Agent:
        """
        Loads an agent based on the provided agent specification.

        Args:
            agent_spec: The specification of the agent to be loaded.

        Returns:
            An instance of the loaded agent.
        """
        agent: autogen.Agent
        agent_spec = self.sanitize_agent_spec(agent_spec)
        if agent_spec.type == "assistant":
            agent = autogen.AssistantAgent(**asdict(agent_spec.config))
        elif agent_spec.type == "userproxy":
            agent = autogen.UserProxyAgent(**asdict(agent_spec.config))
        else:
            raise ValueError(f"Unknown agent type: {agent_spec.type}")
        return agent

    def run(self, message: str, clear_history: bool = False) -> None:
        """
        Initiates a chat between the sender and receiver agents with an initial message
        and an option to clear the history.

        Args:
            message: The initial message to start the chat.
            clear_history: If set to True, clears the chat history before initiating.
        """
        self.sender.initiate_chat(
            self.receiver,
            message=message,
            clear_history=clear_history,
        )
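
A minimal usage sketch for `AutoGenFlow` follows; it assumes the `get_default_agent_config()` helper from `utils.py` (called the same way as in the chat manager above), and the task string is illustrative.

```python
# Hypothetical usage sketch (not part of this commit).
from autogenra.autogenflow import AutoGenFlow
from autogenra.utils import get_default_agent_config  # returns a FlowConfig with sender/receiver specs

flow_config = get_default_agent_config("scratch", skills_suffix="")
flow = AutoGenFlow(config=flow_config, history=None, work_dir="scratch", asst_prompt="")
flow.run(message="Plot a chart of NVDA and TESLA stock price YTD.", clear_history=True)

# The assistant's final reply is available from the sender's message log,
# mirroring how ChatManager reads flow.sender.last_message().
print(flow.sender.last_message()["content"])
```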
