`website/docs/Use-Cases/agent_chat.md`
With the pluggable auto-reply function, one can choose to invoke conversations with other agents depending on the content of the current message and context.
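A minimal sketch of such a pluggable auto-reply, with a made-up trigger and canned response (`register_reply` is the hook for plugging it in; the function name and reply text here are hypothetical):

```python
from typing import Any, Dict, List, Optional, Tuple, Union

from autogen import Agent, ConversableAgent

def always_thank(
    recipient: ConversableAgent,
    messages: Optional[List[Dict]] = None,
    sender: Optional[Agent] = None,
    config: Optional[Any] = None,
) -> Tuple[bool, Union[str, Dict, None]]:
    # Returning (True, reply) makes this the final reply for the turn.
    return True, "Thanks, noted!"

agent = ConversableAgent("receiver", llm_config=False)
# Trigger on messages from any Agent; position=0 runs this check first.
agent.register_reply([Agent, None], always_thank, position=0)
```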
Another approach involves LLM-based function calls, where the LLM decides whether a specific function should be invoked based on the conversation's status at each inference step. This approach enables dynamic multi-agent conversations, as seen in the [multi-user math problem solving scenario](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_two_users.ipynb), where a student assistant automatically seeks expertise via function calls.
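A sketch of this pattern follows; the agent names, the `ask_expert` helper, and its canned reply are hypothetical stand-ins for the notebook's setup, while `register_function` is autogen's API for pairing the agent that proposes a call with the agent that executes it:

```python
from autogen import AssistantAgent, UserProxyAgent, register_function

# Hypothetical stand-in for the notebook's expert lookup: in the linked
# scenario, the question would be routed to a second user/expert agent.
def ask_expert(question: str) -> str:
    """Ask an expert when the assistant cannot solve a step on its own."""
    return f"Expert's advice on: {question}"

assistant = AssistantAgent(
    "student_assistant",
    llm_config={"config_list": [{"model": "gpt-4"}]},
)
user_proxy = UserProxyAgent("student_proxy", human_input_mode="NEVER")

# The LLM decides at each inference step whether to call ask_expert;
# user_proxy executes the call and returns the result to the conversation.
register_function(
    ask_expert,
    caller=assistant,
    executor=user_proxy,
    description="Ask an expert for help on hard problems.",
)
```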
### LLM Caching
Since version [`0.2.8`](https://github.com/microsoft/autogen/releases/tag/v0.2.8), a configurable context manager allows you to easily configure LLM cache, using either [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) or [`RedisCache`](/docs/reference/cache/redis_cache#rediscache). All agents inside the context manager will use the same cache.

```python
from autogen import Cache

# Use Redis as cache; `user`, `assistant`, and `coding_task` are assumed
# to have been defined earlier in the document.
with Cache.redis(redis_url="redis://localhost:6379/0") as cache:
    user.initiate_chat(assistant, message=coding_task, cache=cache)
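```

The disk-backed variant follows the same pattern; a minimal sketch, assuming the same `user` and `assistant` agents (`Cache.disk()` stores entries under `.cache/` by default):

```python
from autogen import Cache

# Use DiskCache as cache
with Cache.disk() as cache:
    user.initiate_chat(assistant, message=coding_task, cache=cache)
```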