
Commit dd61eaa

skzhang1 and AnonymousRepoSub authored
update (#2178)
Co-authored-by: AnonymousRepoSub <"[email protected]">
1 parent e6237d4 commit dd61eaa

File tree

1 file changed (+5 −3 lines)
  • website/blog/2023-12-23-AgentOptimizer


website/blog/2023-12-23-AgentOptimizer/index.mdx

@@ -12,10 +12,12 @@ tags: [LLM, research]
 **TL;DR:**
 Introducing **AgentOptimizer**, a new class for training LLM agents in the era of LLMs as a service.
 **AgentOptimizer** is able to prompt LLMs to iteratively optimize function/skills of AutoGen agents according to the historical conversation and performance.
-Checkout one implementation for **AgentOptimizer** on [MATH](https://github.com/hendrycks/math) dataset
-[here](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb).
 
-More information could be found in the [paper](https://arxiv.org/abs/2402.11359).
+More information could be found in:
+
+**Paper**: https://arxiv.org/abs/2402.11359.
+
+**Notebook**: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb.
 
 ## Introduction
 In the traditional ML pipeline, we train a model by updating its weights according to the loss on the training set, while in the era of LLM agents, how should we train an agent?
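The blog text in this diff describes the core idea: instead of updating weights against a loss, AgentOptimizer "trains" an agent by editing its set of functions/skills based on past conversation performance. A minimal toy sketch of that loop, not the AutoGen API (every name below is hypothetical; a real step would ask an LLM to propose the function edit rather than hard-code it):

```python
# Toy illustration of function-set "training": the agent's trainable
# parameters are its callable skills, and an optimizer step edits that
# set based on observed (input, expected) pairs from past conversations.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class ToyAgent:
    # Registered skills are applied in order to produce an answer.
    functions: Dict[str, Callable[[int], int]] = field(default_factory=dict)

    def solve(self, x: int) -> int:
        for fn in self.functions.values():
            x = fn(x)
        return x


def optimize_step(agent: ToyAgent,
                  history: List[Tuple[int, int]]) -> str:
    """One optimizer step: on a failure, add/revise a skill.
    Here the new skill is hard-coded; AgentOptimizer would prompt an
    LLM with the conversation history to synthesize it."""
    for x, expected in history:
        if agent.solve(x) != expected:
            agent.functions["double"] = lambda v: v * 2
            return "add_function: double"
    return "no_change"


agent = ToyAgent()
history = [(2, 4), (3, 6)]  # past task inputs and expected outputs
action = optimize_step(agent, history)
print(action)           # -> add_function: double
print(agent.solve(5))   # -> 10
```

The analogy to the blog's framing: `history` plays the role of the training set, and the edit to `agent.functions` replaces the gradient update.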
