
Commit 3dfa305

ekzhu and gagb authored
Update docs for new executors (#2119)
* Update docs for new executors
* Update website/docs/FAQ.mdx
* Update website/docs/FAQ.mdx
* Update website/docs/installation/Installation.mdx
* Update website/docs/installation/Installation.mdx

Co-authored-by: gagb <[email protected]>
1 parent 01afc9b · commit 3dfa305

File tree

3 files changed: +92 −71

website/docs/FAQ.mdx

+32 −17

````diff
@@ -71,31 +71,46 @@ in the system message. This line is in the default system message of the `AssistantAgent`.
 If the `# filename` doesn't appear in the suggested code still, consider adding explicit instructions such as "save the code to disk" in the initial user message in `initiate_chat`.
 The `AssistantAgent` doesn't save all the code by default, because there are cases in which one would just like to finish a task without saving the code.
 
-## Code execution
+## Legacy code executor
 
-We strongly recommend using docker to execute code. There are two ways to use docker:
+:::note
+The new code executors offer more choices of execution backends.
+Read more about [code executors](/docs/tutorial/code-executors).
+:::
 
-1. Run AutoGen in a docker container. For example, when developing in [GitHub codespace](https://codespaces.new/microsoft/autogen?quickstart=1), AutoGen runs in a docker container. If you are not developing in Github codespace, follow instructions [here](installation/Docker.md#option-1-install-and-run-autogen-in-docker) to install and run AutoGen in docker.
-2. Run AutoGen outside of a docker, while performing code execution with a docker container. For this option, make sure docker is up and running. If you want to run the code locally (not recommended) then `use_docker` can be set to `False` in `code_execution_config` for each code-execution agent, or set `AUTOGEN_USE_DOCKER` to `False` as an environment variable.
+The legacy code executor is used by specifying the `code_execution_config` in the agent's constructor.
 
-### Enable Python 3 docker image
-
-You might want to override the default docker image used for code execution. To do that set `use_docker` key of `code_execution_config` property to the name of the image. E.g.:
 ```python
-user_proxy = autogen.UserProxyAgent(
-    name="agent",
-    human_input_mode="TERMINATE",
-    max_consecutive_auto_reply=10,
+from autogen import UserProxyAgent
+
+user_proxy = UserProxyAgent(
+    name="user_proxy",
     code_execution_config={"work_dir":"_output", "use_docker":"python:3"},
-    llm_config=llm_config,
-    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
-Otherwise, reply CONTINUE, or the reason why the task is not solved yet."""
 )
 ```
 
-If you have problems with agents running `pip install` or get errors similar to `Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))`, you can choose **'python:3'** as image as shown in the code example above and that should solve the problem.
-
-### Agents keep thanking each other when using `gpt-3.5-turbo`
+In this example, the `code_execution_config` specifies that the code will be
+executed in a docker container with the image `python:3`.
+By default, the image name is `python:3-slim` if not specified.
+The `work_dir` specifies the directory where the code will be executed.
+If you have problems with agents running `pip install` or get errors similar to
+`Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))`,
+you can choose **'python:3'** as the image, as shown in the code example above,
+and that should solve the problem.
+
+By default, the code runs in a docker container. If you want to run it locally
+(not recommended), `use_docker` can be set to `False` in `code_execution_config`
+for each code-execution agent, or set `AUTOGEN_USE_DOCKER` to `False` as an
+environment variable.
+
+You can also develop your AutoGen application in a docker container.
+For example, when developing in [GitHub Codespaces](https://codespaces.new/microsoft/autogen?quickstart=1),
+AutoGen runs in a docker container.
+If you are not developing in GitHub Codespaces,
+follow instructions [here](installation/Docker.md#option-1-install-and-run-autogen-in-docker)
+to install and run AutoGen in docker.
+
+## Agents keep thanking each other when using `gpt-3.5-turbo`
 
 When using `gpt-3.5-turbo` you may often encounter agents going into a "gratitude loop", meaning when they complete a task they will begin congratulating and thanking each other in a continuous loop. This is a limitation in the performance of `gpt-3.5-turbo`, in contrast to `gpt-4` which has no problem remembering instructions. This can hinder the experimentation experience when trying to test out your own use case with cheaper models.
 
````
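The local-run escape hatch described in the new FAQ text can be written out explicitly. A minimal sketch, assuming the legacy `code_execution_config` interface documented above (the agent name and `work_dir` value are illustrative):

```python
import os

from autogen import UserProxyAgent

# Option 1: disable docker for this agent only (not recommended).
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "_output", "use_docker": False},
)

# Option 2: disable docker process-wide via the environment variable,
# set before any code-execution agent runs code.
os.environ["AUTOGEN_USE_DOCKER"] = "False"
```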
website/docs/installation/Installation.mdx

+19 −25

````diff
@@ -78,35 +78,29 @@ pip install pyautogen
 
 :::
 
+## Install Docker for Code Execution
 
-## Code execution with Docker (default)
+We recommend using Docker for code execution.
+To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/).
 
-Even if you install AutoGen locally, we highly recommend using Docker for [code execution](FAQ.mdx#code-execution).
-
-The default behaviour for code-execution agents is for code execution to be performed in a docker container.
-
-**To turn this off**: if you want to run the code locally (not recommended) then `use_docker` can be set to `False` in `code_execution_config` for each code-execution agent, or set `AUTOGEN_USE_DOCKER` to `False` as an environment variable.
-
-You might want to override the default docker image used for code execution. To do that set `use_docker` key of `code_execution_config` property to the name of the image. E.g.:
+A simple example of how to use Docker for code execution is shown below:
 
 ```python
-user_proxy = autogen.UserProxyAgent(
-    name="agent",
-    human_input_mode="TERMINATE",
-    max_consecutive_auto_reply=10,
-    code_execution_config={"work_dir":"_output", "use_docker":"python:3"},
-    llm_config=llm_config,
-    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
-Otherwise, reply CONTINUE, or the reason why the task is not solved yet."""
-)
+from pathlib import Path
+from autogen import UserProxyAgent
+from autogen.coding import DockerCommandLineCodeExecutor
+
+work_dir = Path("coding")
+work_dir.mkdir(exist_ok=True)
+
+with DockerCommandLineCodeExecutor(work_dir=work_dir) as code_executor:
+    user_proxy = UserProxyAgent(
+        name="user_proxy",
+        code_execution_config={"executor": code_executor},
+    )
 ```
 
-**Turn off code execution entirely**: if you want to turn off code execution entirely, set `code_execution_config` to `False`. E.g.:
+To learn more about code executors, see the [code executors tutorial](/docs/tutorial/code-executors).
 
-```python
-user_proxy = autogen.UserProxyAgent(
-    name="agent",
-    llm_config=llm_config,
-    code_execution_config=False,
-)
-```
+You might have seen a different way of defining the executors without creating the
+executor object; please refer to the FAQ for this [legacy code executor](/docs/FAQ#legacy-code-executor).
````
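The example above wires the executor into an agent but never runs anything through it. To check the Docker setup in isolation, the executor can also be driven directly; a minimal sketch, assuming the `CodeBlock` type and `execute_code_blocks` method of the new `autogen.coding` interface (names not shown in the diff are assumptions):

```python
from pathlib import Path

from autogen.coding import CodeBlock, DockerCommandLineCodeExecutor

work_dir = Path("coding")
work_dir.mkdir(exist_ok=True)

with DockerCommandLineCodeExecutor(work_dir=work_dir) as code_executor:
    # Run one code block inside the container and inspect the result.
    result = code_executor.execute_code_blocks(
        [CodeBlock(language="python", code="print('hello from docker')")]
    )
    print(result.exit_code, result.output)
```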

website/docs/topics/non-openai-models/cloud-togetherai.md

+41 −29

````diff
@@ -21,76 +21,88 @@ set TOGETHER_API_KEY=YourTogetherAIKeyHere
 Create your LLM configuration, with the [model you want](https://docs.together.ai/docs/inference-models).
 
 ```python
-import autogen
 import os
 
-llm_config={
-    "config_list": [
-        {
-            # Available together.ai model strings:
-            # https://docs.together.ai/docs/inference-models
-            "model": "mistralai/Mistral-7B-Instruct-v0.1",
-            "api_key": os.environ['TOGETHER_API_KEY'],
-            "base_url": "https://api.together.xyz/v1"
-        }
-    ],
-    "cache_seed": 42
-}
+config_list = [
+    {
+        # Available together.ai model strings:
+        # https://docs.together.ai/docs/inference-models
+        "model": "mistralai/Mistral-7B-Instruct-v0.1",
+        "api_key": os.environ['TOGETHER_API_KEY'],
+        "base_url": "https://api.together.xyz/v1"
+    }
+]
 ```
````
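With the `config_list` defined, the together.ai endpoint can be smoke-tested before any agents are constructed. A minimal sketch, assuming AutoGen's `OpenAIWrapper` client for OpenAI-compatible endpoints (the wrapper and its helper method are not shown in the diff and are assumptions here):

```python
from autogen import OpenAIWrapper

# Send one request through the config_list to verify the key and base_url.
client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "Say hi in one word."}])
print(client.extract_text_or_completion_object(response))
```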
````diff
 
 ## Construct Agents
 
 ```python
+from pathlib import Path
+from autogen import AssistantAgent, UserProxyAgent
+from autogen.coding import LocalCommandLineCodeExecutor
+
+work_dir = Path("groupchat")
+work_dir.mkdir(exist_ok=True)
+
+# Create local command line code executor.
+code_executor = LocalCommandLineCodeExecutor(work_dir=work_dir)
+
 # User Proxy will execute code and finish the chat upon typing 'exit'
-user_proxy = autogen.UserProxyAgent(
+user_proxy = UserProxyAgent(
     name="UserProxy",
     system_message="A human admin",
     code_execution_config={
         "last_n_messages": 2,
-        "work_dir": "groupchat",
-        "use_docker": False,
-    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
+        "executor": code_executor,
+    },
     human_input_mode="TERMINATE",
     is_termination_msg=lambda x: "TERMINATE" in x.get("content"),
 )
 
 # Python Coder agent
-coder = autogen.AssistantAgent(
+coder = AssistantAgent(
     name="softwareCoder",
     description="Software Coder, writes Python code as required and reiterates with feedback from the Code Reviewer.",
     system_message="You are a senior Python developer, a specialist in writing succinct Python functions.",
-    llm_config=llm_config,
+    llm_config={"config_list": config_list},
 )
 
 # Code Reviewer agent
-reviewer = autogen.AssistantAgent(
+reviewer = AssistantAgent(
     name="codeReviewer",
     description="Code Reviewer, reviews written code for correctness, efficiency, and security. Asks the Software Coder to address issues.",
     system_message="You are a Code Reviewer, experienced in checking code for correctness, efficiency, and security. Review and provide feedback to the Software Coder until you are satisfied, then return the word TERMINATE",
     is_termination_msg=lambda x: "TERMINATE" in x.get("content"),
-    llm_config=llm_config,
+    llm_config={"config_list": config_list},
 )
 ```
````
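The comment removed from the User Proxy ("using docker is safer than running the generated code directly") still applies under the new API: when Docker is available, the docker-backed executor from the Installation.mdx diff is a drop-in replacement for the local one. A minimal sketch, under that assumption:

```python
from autogen.coding import DockerCommandLineCodeExecutor

# Safer alternative to LocalCommandLineCodeExecutor: run generated code
# in a container, reusing the same work_dir as above.
code_executor = DockerCommandLineCodeExecutor(work_dir=work_dir)
```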
````diff
 
 ## Establish the group chat
 
 ```python
+from autogen import GroupChat, GroupChatManager
+
 # Establish the Group Chat and disallow a speaker being selected consecutively
-groupchat = autogen.GroupChat(agents=[user_proxy, coder, reviewer], messages=[], max_round=12, allow_repeat_speaker=False)
+groupchat = GroupChat(agents=[user_proxy, coder, reviewer], messages=[], max_round=12, allow_repeat_speaker=False)
 
 # Manages the group of multiple agents
-manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
+manager = GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
 ```
 
 ## Start Chat
 
 ```python
-# Start the chat with a request to write a function
-user_proxy.initiate_chat(
-    manager,
-    message="Write a Python function for the Fibonacci sequence, the function will have one parameter for the number in the sequence, which the function will return the Fibonacci number for."
-)
-# type exit to terminate the chat
+from autogen.cache import Cache
+
+# Cache LLM responses.
+with Cache.disk() as cache:
+    # Start the chat with a request to write a function
+    user_proxy.initiate_chat(
+        manager,
+        message="Write a Python function for the Fibonacci sequence, the function will have one parameter for the number in the sequence, which the function will return the Fibonacci number for.",
+        cache=cache,
+    )
+# type exit to terminate the chat
 ```
 
 Output:
````
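One behavioural change worth noting: the Start Chat hunk wraps `initiate_chat` in `with Cache.disk() as cache:`, so LLM responses are cached on disk and rerunning the same chat replays stored completions instead of calling the API again. A minimal sketch of the same pattern in isolation; the `cache_seed` and `cache_path_root` parameters are assumptions here, shown with illustrative values:

```python
from autogen.cache import Cache

# Disk-backed cache keyed by a seed; the same seed plus the same request
# replays the stored completion instead of hitting the API.
with Cache.disk(cache_seed=42, cache_path_root=".cache") as cache:
    user_proxy.initiate_chat(
        manager,
        message="Write a Python function for the Fibonacci sequence.",
        cache=cache,
    )
```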
