diff --git a/website/blog/2023-12-23-AgentOptimizer/index.mdx b/website/blog/2023-12-23-AgentOptimizer/index.mdx
index 23e348fa30cb..caafda75790f 100644
--- a/website/blog/2023-12-23-AgentOptimizer/index.mdx
+++ b/website/blog/2023-12-23-AgentOptimizer/index.mdx
@@ -12,10 +12,12 @@ tags: [LLM, research]
 
 **TL;DR:** Introducing **AgentOptimizer**, a new class for training LLM agents in the era of LLMs as a service. **AgentOptimizer** is able to prompt LLMs to iteratively optimize function/skills of AutoGen agents according to the historical conversation and performance.
-Checkout one implementation for **AgentOptimizer** on [MATH](https://github.com/hendrycks/math) dataset
-[here](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb).
-More information could be found in the [paper](https://arxiv.org/abs/2402.11359).
+More information can be found in:
+
+**Paper**: https://arxiv.org/abs/2402.11359
+
+**Notebook**: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_agentoptimizer.ipynb
 
 ## Introduction
 
 In the traditional ML pipeline, we train a model by updating its weights according to the loss on the training set, while in the era of LLM agents, how should we train an agent?