`website/docs/FAQ.md` (+28)
The "use_docker" arg in an agent's code_execution_config will be set to the name of the image containing the change after execution, when the conversation finishes.
193
193
You can save that image name. For a new conversation, you can set "use_docker" to the saved name of the image to start execution there.
194
194
195
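
A minimal sketch of this round trip, assuming the `code_execution_config` dict you pass in is the same object the agent updates when the conversation finishes (agent names and the starting image tag here are hypothetical, for illustration only):

```python
from autogen import UserProxyAgent

# Hypothetical starting image for illustration.
code_execution_config = {"work_dir": "coding", "use_docker": "python:3"}
user_proxy = UserProxyAgent("user_proxy", code_execution_config=code_execution_config)

# ... run a conversation that installs packages inside the container ...

# Assumption: when the conversation finishes, use_docker now names the image
# containing the changes; save it to resume from the same state later.
saved_image = code_execution_config["use_docker"]

# Start a new conversation from the saved image.
user_proxy_2 = UserProxyAgent(
    "user_proxy_2",
    code_execution_config={"work_dir": "coding", "use_docker": saved_image},
)
```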

## Database locked error

When using VMs such as Azure Machine Learning compute instances, you may encounter a "database locked error". This is because the [LLM cache](./Use-Cases/agent_chat.md#cache) is trying to write to a location that the application does not have access to.

You can set `cache_path_root` to a location where the application has access. For example,

```python
from autogen import Cache

with Cache.disk(cache_path_root="/tmp/.cache") as cache:
    agent_a.initiate_chat(agent_b, ..., cache=cache)
```

You can also use a Redis cache instead of a disk cache. For example,

```python
from autogen import Cache

with Cache.redis(redis_url=...) as cache:
    agent_a.initiate_chat(agent_b, ..., cache=cache)
```

You can also disable the cache. See [here](./Use-Cases/agent_chat.md#llm-caching) for details.
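
As a minimal sketch of what disabling looks like, assuming (per the linked page) that setting `cache_seed` to `None` in an agent's `llm_config` turns caching off:

```python
import autogen

# Hypothetical config_list for illustration.
config_list = [{"model": "gpt-4", "api_key": "sk-..."}]

# Assumption: cache_seed=None disables the LLM cache, so no cache file is
# written and every request goes to the API.
assistant = autogen.AssistantAgent(
    "assistant",
    llm_config={"config_list": config_list, "cache_seed": None},
)
```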
## Agents are throwing due to docker not running, how can I resolve this?

When running AutoGen locally, code-executing agents by default try to perform code execution inside a Docker container. If Docker is not running, this will cause the agent to throw an error. To resolve this, you have the options below:

`website/docs/Use-Cases/agent_chat.md` (+26 -4)

### LLM Caching

Since version 0.2.8, a configurable context manager allows you to easily configure the LLM cache, using either DiskCache or Redis. All agents inside the context manager will use the same cache.

```python
from autogen import Cache

# Use Redis as cache. The diff is truncated here; the body below follows the
# initiate_chat pattern used in the FAQ examples above.
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    agent_a.initiate_chat(agent_b, ..., cache=cache)
```
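
DiskCache works the same way; a minimal sketch, assuming the same `initiate_chat` call pattern as the FAQ examples above:

```python
from autogen import Cache

# Use DiskCache as cache; all agents inside the context share it.
with Cache.disk() as cache:
    agent_a.initiate_chat(agent_b, ..., cache=cache)
```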

`website/docs/Use-Cases/enhanced_inference.md` (+36 -11)

## Caching

API call results are cached locally and reused when the same request is issued. This is useful when repeating or continuing experiments for reproducibility and cost saving.

Starting from version 0.2.8, a configurable context manager allows you to easily configure the cache, using either DiskCache or Redis. All `OpenAIWrapper` instances created inside the context manager can use the same cache through the constructor.

```python
from autogen import Cache

# Use Redis as cache
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    client = OpenAIWrapper(..., cache=cache)
    client.create(...)

# Use DiskCache as cache
with Cache.disk() as cache:
    client = OpenAIWrapper(..., cache=cache)
    client.create(...)
```

You can also set a cache directly in the `create()` method.

```python
client = OpenAIWrapper(...)
with Cache.disk() as cache:
    client.create(..., cache=cache)
```

You can vary the `cache_seed` parameter to get different LLM output while still using the cache.

```python
# Setting cache_seed to 1 will use a different cache from the default one,
# and you will see different output.
with Cache.disk(cache_seed=1) as cache:
    client.create(..., cache=cache)
```

By default DiskCache uses `.cache` for storage. To change the cache directory, set `cache_path_root`:

```python
with Cache.disk(cache_path_root="/tmp/autogen_cache") as cache:
    client.create(..., cache=cache)
```

### Difference between `cache_seed` and openai's `seed` parameter

openai v1.1 introduces a new param `seed`. The differences between autogen's `cache_seed` and openai's `seed` are:

- autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no openai api call will be made.
- openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to None, even for the same input, an openai api call will be made and there is no guarantee of getting exactly the same output.
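
A side-by-side sketch of the two knobs. Assumptions, for illustration only: `create()` forwards a `seed` keyword argument to the openai API, and passing `cache=None` skips the local cache.

```python
client = OpenAIWrapper(...)

# autogen's cache_seed: the second identical call is served from the local
# disk cache; no openai api call is made, and the output is byte-identical.
with Cache.disk(cache_seed=42) as cache:
    client.create(..., cache=cache)
    client.create(..., cache=cache)  # cache hit, no API call

# openai's seed (assumed forwarded to the API): with the cache skipped,
# every call reaches the API, and determinism is only best-effort.
client.create(..., seed=42, cache=None)
client.create(..., seed=42, cache=None)  # new API call; output may differ
```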