
Commit 49ad771

expand faq (#66)
* expand faq
* models
* fix format error
1 parent bf65b59 commit 49ad771

File tree

5 files changed: +32 −8 lines changed


README.md

+4 −6
@@ -150,9 +150,9 @@ Microsoft and any contributors reserve all other rights, whether under their res
 or trademarks, whether by implication, estoppel or otherwise.
 
 
-## Citation
-[AutoGen](https://arxiv.org/abs/2308.08155).
-```
+## Citation
+[AutoGen](https://arxiv.org/abs/2308.08155).
+```
 @inproceedings{wu2023autogen,
 title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework},
 author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Shaokun Zhang and Erkang Zhu and Beibin Li and Li Jiang and Xiaoyun Zhang and Chi Wang},
@@ -173,7 +173,7 @@ or trademarks, whether by implication, estoppel or otherwise.
 }
 ```
 
-[MathChat](https://arxiv.org/abs/2306.01337).
+[MathChat](https://arxiv.org/abs/2306.01337).
 
 ```
 @inproceedings{wu2023empirical,
@@ -183,5 +183,3 @@ or trademarks, whether by implication, estoppel or otherwise.
 booktitle={ArXiv preprint arXiv:2306.01337},
 }
 ```
-
-

website/docs/FAQ.md

+25
@@ -100,6 +100,10 @@ You can also explicitly specify that by:
 assistant = autogen.AssistantAgent(name="assistant", llm_config={"api_key": ...})
 ```
 
+### Can I use non-OpenAI models?
+
+Yes. Please check https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+
 ## Handle Rate Limit Error and Timeout Error
 
 You can set `retry_wait_time` and `max_retry_period` to handle rate limit error. And you can set `request_timeout` to handle timeout error. They can all be specified in `llm_config` for an agent, which will be used in the [`create`](/docs/reference/oai/completion#create) function for LLM inference.
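A minimal sketch of the non-OpenAI setup that the new FAQ entry above points to, assuming a locally hosted OpenAI-compatible endpoint such as the FastChat server from the linked blog post; the endpoint URL and model name below are placeholders, not values from this commit:

```python
import autogen

# Placeholder entry for a local OpenAI-compatible server; adjust the URL,
# model name, and api_type to match your own deployment.
config_list = [
    {
        "model": "chatglm2-6b",                  # placeholder local model name
        "api_base": "http://localhost:8000/v1",  # placeholder local endpoint
        "api_type": "open_ai",
        "api_key": "NULL",                       # local servers typically ignore the key
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
```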
@@ -109,3 +113,24 @@ You can set `retry_wait_time` and `max_retry_period` to handle rate limit error.
 - `request_timeout` (int): the timeout (in seconds) sent with a single request.
 
 Please refer to the [documentation](/docs/Use-Cases/enhanced_inference#runtime-error) for more info.
+
+## How to continue a finished conversation
+
+When you call `initiate_chat`, the conversation restarts by default. You can use `send` or `initiate_chat(clear_history=False)` to continue the conversation.
+
+## How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents to include in the group?
+
+Each agent can be customized. You can use LLMs, tools, or a human behind each agent. If you use an LLM for an agent, use the one best suited for its role. There is no limit on the number of agents, but start from a small number like 2 or 3. The more capable the LLM is and the fewer roles you need, the fewer agents you need.
+
+The default user proxy agent doesn't use an LLM. If you'd like to use an LLM in `UserProxyAgent`, one use case could be to simulate a user's behavior.
+
+The default assistant agent is instructed to use both coding and language skills. Depending on the task, it doesn't have to write code. And you can customize the system message, so if you want to use it for coding, use a model that's good at coding.
+
+## Why is code not saved as a file?
+
+If you are using a custom system message for the coding agent, please include something like:
+`If you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line.`
+in the system message. This line is in the default system message of the `AssistantAgent`.
+
+If `# filename` still doesn't appear in the suggested code, consider adding explicit instructions such as "save the code to disk" in the initial user message in `initiate_chat`.
+The `AssistantAgent` doesn't save all the code by default, because there are cases in which one would just like to finish a task without saving the code.
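A minimal sketch of the rate-limit and timeout settings described in the FAQ section above, passed through `llm_config`; the model name, key, and numeric values are placeholders:

```python
import autogen

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": "<your OpenAI key>"}],  # placeholder
    "retry_wait_time": 10,     # seconds to wait before retrying after a rate limit error
    "max_retry_period": 120,   # total seconds to keep retrying before giving up
    "request_timeout": 60,     # timeout (in seconds) sent with a single request
}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
```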
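A minimal sketch of resuming a chat as described in the new "How to continue a finished conversation" entry; the agents, config, and messages are placeholders:

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "<your OpenAI key>"}]}  # placeholder

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,  # keep this sketch free of code execution
)

# First conversation; by default this starts with a fresh history.
user_proxy.initiate_chat(assistant, message="Summarize the key ideas of the AutoGen paper.")

# Continue the same conversation instead of restarting it.
user_proxy.initiate_chat(assistant, message="Now shorten that summary to one sentence.", clear_history=False)

# Or send a single follow-up message without re-initiating the chat.
user_proxy.send("Thanks, that's all for now.", assistant)
```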

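A minimal sketch of a custom system message that keeps the file-saving instruction quoted in the "Why is code not saved as a file?" entry; everything else in the message is a placeholder:

```python
import autogen

# Placeholder custom system message that preserves the file-saving instruction
# from the default AssistantAgent system message.
system_message = (
    "You are a helpful AI assistant. Solve tasks by writing Python code in code blocks. "
    "If you want the user to save the code in a file before executing it, "
    "put # filename: <filename> inside the code block as the first line."
)

assistant = autogen.AssistantAgent(
    name="assistant",
    system_message=system_message,
    llm_config={"config_list": [{"model": "gpt-4", "api_key": "<your OpenAI key>"}]},  # placeholder
)
```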
website/docusaurus.config.js

+2 −2
@@ -9,15 +9,15 @@ module.exports = {
   baseUrl: '/autogen/',
   onBrokenLinks: 'throw',
   onBrokenMarkdownLinks: 'warn',
-  favicon: 'img/flaml_logo.ico',
+  favicon: 'img/ag.ico',
   organizationName: 'Microsoft', // Usually your GitHub org/user name.
   projectName: 'AutoGen', // Usually your repo name.
   themeConfig: {
     navbar: {
       title: 'AutoGen',
       logo: {
         alt: 'AutoGen',
-        src: 'img/flaml_logo_fill.svg',
+        src: 'img/ag.svg',
       },
       items: [
         {

website/static/img/ag.ico

3.05 KB
Binary file not shown.

website/static/img/ag.svg

+1
