From 60c6e5ca74d7e8d94b368e612f9701cde446fa6c Mon Sep 17 00:00:00 2001
From: Julia Kiseleva
Date: Sun, 30 Jun 2024 15:00:13 -0700
Subject: [PATCH] Update README.md (#3025)

adding links to blogposts to increase the clarity
---
 autogen/agentchat/contrib/agent_eval/README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/autogen/agentchat/contrib/agent_eval/README.md b/autogen/agentchat/contrib/agent_eval/README.md
index 6588a1ec6113..478f28fd74ec 100644
--- a/autogen/agentchat/contrib/agent_eval/README.md
+++ b/autogen/agentchat/contrib/agent_eval/README.md
@@ -1,7 +1,9 @@
-Agents for running the AgentEval pipeline.
+Agents for running the [AgentEval](https://microsoft.github.io/autogen/blog/2023/11/20/AgentEval/) pipeline.
 
 AgentEval is a process for evaluating a LLM-based system's performance on a given task.
 
 When given a task to evaluate and a few example runs, the critic and subcritic agents create evaluation criteria for evaluating a system's solution. Once the criteria has been created, the quantifier agent can evaluate subsequent task solutions based on the generated criteria.
 
 For more information see: [AgentEval Integration Roadmap](https://github.com/microsoft/autogen/issues/2162)
+
+See our [blog post](https://microsoft.github.io/autogen/blog/2024/06/21/AgentEval) for usage examples and general explanations.
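
For context on the README being patched: a minimal sketch of how the AgentEval flow it describes (critic/subcritic agents generating criteria, then a quantifier agent scoring solutions) might be driven from Python. The module paths, the generate_criteria/quantify_criteria helpers, and the Task fields are assumptions based on the contrib package this README documents; treat the exact signatures as illustrative and defer to the linked blog posts for authoritative usage.

# Sketch only: module paths, helper names, and signatures below are
# assumptions about the agent_eval contrib package, not part of this patch.
from autogen.agentchat.contrib.agent_eval.agent_eval import (
    generate_criteria,
    quantify_criteria,
)
from autogen.agentchat.contrib.agent_eval.task import Task

# Hypothetical LLM configuration; substitute your own config list.
llm_config = {"config_list": [{"model": "gpt-4"}]}

# A task plus one successful and one failed example run, as the README
# requires for criteria generation. The response texts are placeholders.
task = Task(
    name="grade-school-math",
    description="Solve grade-school math word problems.",
    successful_response="<transcript of a correct solution>",
    failed_response="<transcript of an incorrect solution>",
)

# Critic (and, with use_subcritic=True, subcritic) agents propose
# evaluation criteria from the task and example runs.
criteria = generate_criteria(llm_config=llm_config, task=task, use_subcritic=True)

# The quantifier agent then scores a new solution against those criteria.
result = quantify_criteria(
    llm_config=llm_config,
    criteria=criteria,
    task=task,
    test_case="<transcript of the solution to evaluate>",
    ground_truth="",
)
print(result)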