README.md (+4 −6)
@@ -150,9 +150,9 @@

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.
## Citation

[AutoGen](https://arxiv.org/abs/2308.08155).

```
@inproceedings{wu2023autogen,
    title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
    author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang},
@@ -173,7 +173,7 @@

}
```
[MathChat](https://arxiv.org/abs/2306.01337).

```
@inproceedings{wu2023empirical,
@@ -183,5 +183,3 @@

Yes. Please check https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.

## Handle Rate Limit Error and Timeout Error

You can set `retry_wait_time` and `max_retry_period` to handle rate limit errors, and `request_timeout` to handle timeout errors. They can all be specified in `llm_config` for an agent, which will be used in the [`create`](/docs/reference/oai/completion#create) function for LLM inference.
@@ -109,3 +113,24 @@

- `request_timeout` (int): the timeout (in seconds) sent with a single request.

Please refer to the [documentation](/docs/Use-Cases/enhanced_inference#runtime-error) for more info.
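As an illustration, here is a minimal sketch of such a configuration. The retry and timeout keys come from the description above; the model name, API key, and numeric values are placeholders:

```python
import autogen

# Placeholder credentials; replace with your own model and key.
config_list = [{"model": "gpt-4", "api_key": "<your-api-key>"}]

llm_config = {
    "config_list": config_list,
    "retry_wait_time": 10,    # seconds to wait before retrying after a rate limit error
    "max_retry_period": 120,  # stop retrying after this many seconds in total
    "request_timeout": 60,    # timeout (in seconds) sent with a single request
}

# The config is picked up whenever this agent performs LLM inference.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
```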
## How to continue a finished conversation

When you call `initiate_chat`, the conversation restarts by default. You can use `send` or `initiate_chat(clear_history=False)` to continue the conversation.
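For example, a sketch of both options (the agent setup, task messages, and config are hypothetical):

```python
import autogen

config_list = [{"model": "gpt-4", "api_key": "<your-api-key>"}]  # placeholder
assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER")

# The first conversation runs to completion.
user_proxy.initiate_chat(assistant, message="Plot NVDA's stock price change YTD.")

# Option 1: send a follow-up message within the same conversation.
user_proxy.send("Add a 30-day moving average to the plot.", assistant)

# Option 2: re-initiate the chat while keeping the history.
user_proxy.initiate_chat(assistant, message="Save the figure to a file.", clear_history=False)
```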
## How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents in the group?

Each agent can be customized. You can use an LLM, tools, or a human behind each agent. If you use an LLM for an agent, choose the one best suited for its role. There is no limit on the number of agents, but start with a small number, such as 2 or 3. The more capable the LLM and the fewer roles you need, the fewer agents you need.

The default user proxy agent doesn't use an LLM. If you'd like to use an LLM in `UserProxyAgent`, a typical use case is to simulate a user's behavior.

The default assistant agent is instructed to use both coding and language skills. It doesn't have to do coding, depending on the task, and you can customize its system message. If you want to use it for coding, use a model that's good at coding.
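As a sketch of this kind of per-agent customization (the model names, roles, and system message are hypothetical, not a recommended setup):

```python
import autogen

# Hypothetical split: a stronger model for the coder, a cheaper one for the critic.
coder = autogen.AssistantAgent(
    name="coder",
    llm_config={"config_list": [{"model": "gpt-4", "api_key": "<your-api-key>"}]},
)
critic = autogen.AssistantAgent(
    name="critic",
    system_message="Review the coder's output and point out problems.",
    llm_config={"config_list": [{"model": "gpt-3.5-turbo", "api_key": "<your-api-key>"}]},
)

# The user proxy needs no LLM; it executes code and relays results.
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER")
```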
## Why is code not saved as a file?

If you are using a custom system message for the coding agent, please include something like the following in the system message:

`If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line.`

This line is in the default system message of the `AssistantAgent`.
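For instance, a sketch of a custom system message that keeps this instruction; the surrounding prompt wording and config are hypothetical:

```python
import autogen

# Hypothetical custom system message; the key point is preserving the
# "# filename: <filename>" instruction from the default AssistantAgent prompt.
custom_system_message = (
    "You are a helpful AI assistant that solves tasks by writing Python code. "
    "If you want the user to save the code in a file before executing it, "
    "put # filename: <filename> inside the code block as the first line."
)

coder = autogen.AssistantAgent(
    name="coder",
    system_message=custom_system_message,
    llm_config={"config_list": [{"model": "gpt-4", "api_key": "<your-api-key>"}]},  # placeholder
)
```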
If `# filename` still doesn't appear in the suggested code, consider adding explicit instructions such as "save the code to disk" to the initial user message in `initiate_chat`.

The `AssistantAgent` doesn't save all code by default, because there are cases in which one would just like to finish a task without saving the code.