[Roadmap]: Integrating AgentEval #2162

Closed

julianakiseleva opened this issue Mar 27, 2024 · 3 comments
Labels: 0.2 (issues which are related to the pre-0.4 codebase), needs-triage, roadmap (issues related to the roadmap of AutoGen)

Comments
julianakiseleva commented Mar 27, 2024

Describe the issue

Tip

Want to get involved?

We'd love it if you did! Please get in contact with the people assigned to this issue, or leave a comment. See general contributing advice here too.

Background:
AutoGen aims to simplify the development of LLM-powered multi-agent systems for various applications, ultimately making end users' lives easier by assisting with their tasks. Naturally, we want to understand how the systems we develop perform, how useful they are for users, and, perhaps most crucially, how we can improve them. Directly evaluating multi-agent systems is challenging because current approaches rely predominantly on success metrics, i.e., whether the agent accomplishes the task. However, understanding how users interact with a system involves far more than success alone. Take math problems, for instance: it is not merely about the agent solving the problem. Equally important is its ability to convey the solution according to various criteria, including completeness, conciseness, and the clarity of the explanation. Furthermore, success is not always clearly defined for every task.
Rapid advances in LLMs and multi-agent systems have brought forth many emerging capabilities that we are keen to translate into tangible utility for end users. We introduce the first version of the AgentEval framework, a tool crafted to help developers quickly gauge the utility of LLM-powered applications designed to help end users accomplish their desired tasks.

Here is the blog post for a short description.
Here is the first paper on AgentEval for more details.

The goal of this issue is to integrate AgentEval into the AutoGen library (and, further, into AutoGen Studio).

The roadmap involves:

1. Improvements to the AgentEval schema [black parts are tested, blue parts are ongoing work]: @siqingh

[Figure: AgentEval schema with the verifier agent being added]

2. Usage of AgentEval:

Complete Offline mode: @jluey1 @SeunRomiluyi

This is how it is used now: a system designer provides the triple <task description, successful task execution, failed task execution> and gets as output a list of criteria, where a criterion looks like, e.g.:

[
    {
        "name": "Problem Interpretation",
        "description": "Ability to correctly interpret the problem.",
        "accepted_values": ["completely off", "slightly relevant", "relevant", "mostly accurate", "completely accurate"]
    }
]

Then, the system designer can ask AgentEval to quantify input data points, where the data points are logs of agent interactions (currently we mostly use AutoGen logs); a rough end-to-end sketch follows.
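
A minimal sketch of this offline flow, written against the signatures proposed under "Function Signatures" below. The import paths (the agent_eval contrib module from PR #2156), the assumption that the remaining parameters have defaults, and all example strings are assumptions rather than the final integration:

# Sketch only: import paths and example values are assumptions.
from autogen.agentchat.contrib.agent_eval.agent_eval import generate_criteria, quantify_criteria
from autogen.agentchat.contrib.agent_eval.task import Task

llm_config = {"config_list": [{"model": "gpt-4"}]}

# The <task description, successful task execution, failed task execution> triple.
task = Task(
    name="math_problem_solving",
    description="Solve the given math problem and explain the solution.",
    successful_response="...",  # chat log of a successful task execution
    failed_response="...",      # chat log of a failed task execution
)

# Step 1: derive a list of Criterion objects such as "Problem Interpretation".
criteria = generate_criteria(llm_config=llm_config, task=task)

# Step 2: quantify an input data point (a log of agent interactions)
# against the derived criteria.
assessment = quantify_criteria(
    llm_config=llm_config,
    criteria=criteria,
    task=task,
    test_case=open("agent_chat_log.json").read(),  # AutoGen log to evaluate
    ground_truth="",
)
# `assessment` maps each criterion to one of its accepted_values, e.g.
# {"Problem Interpretation": "mostly accurate", ...}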

Online mode: @lalo @chinganc
Here, we envision that AgentEval can be used as part of an Optimizer/Manager/Controller. The figure below provides an example of how AgentEval could be used at each step of pipeline execution; a purely illustrative sketch follows the figure.

[Figure: AgentEval used as part of an optimizer/controller at each step of pipeline execution]
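
To make the envisioned online usage more concrete, below is a purely hypothetical controller loop; none of these names (run_pipeline_with_agenteval, steps, the early-stop rule) come from AutoGen, they only illustrate how quantify_criteria could be invoked at each step of pipeline execution:

# Hypothetical illustration of the Optimizer/Manager/Controller pattern.
def run_pipeline_with_agenteval(steps, task, criteria, llm_config):
    transcript = []
    assessment = {}
    for step in steps:
        transcript.append(step(transcript))  # execute one pipeline step
        # Re-score the interaction log accumulated so far.
        assessment = quantify_criteria(
            llm_config=llm_config,
            criteria=criteria,
            task=task,
            test_case="\n".join(transcript),
            ground_truth="",
        )
        # A controller/optimizer could retry, branch, or stop early
        # based on the per-criterion assessment at this point.
        if assessment.get("Problem Interpretation") == "completely off":
            break
    return transcript, assessment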

Function Signatures:

def generate_criteria(
    llm_config: dict | bool,         # LLM inference configuration
    task: "Task",                    # the task to evaluate
    additional_instructions: str,    # additional instructions for the criteria agent
    max_round: int,                  # the maximum number of rounds to run the conversation
    use_subcritic: bool,             # whether to use the subcritic agent to generate subcriteria
) -> list["Criterion"]:
    ...

def quantify_criteria(
    llm_config: dict | bool,         # LLM inference configuration
    criteria: list["Criterion"],     # a list of criteria for evaluating the utility of a given task
    task: "Task",                    # the task to evaluate
    test_case: str,                  # the test case to evaluate
    ground_truth: str,               # the ground truth for the test case
) -> dict:
    # Returns a dictionary where the keys are the criteria and the values are
    # the assessed performance based on the accepted values for each criterion.
    ...

class Criterion:
    name: str                        # name of the criterion
    description: str                 # description of the criterion
    accepted_values: list[str]       # list of possible values for the criterion (could this also be a range of values?)
    sub_criteria: list["Criterion"]  # list of subcriteria

class Task:
    name: str                        # name of the task to be evaluated
    description: str                 # description of the task
    successful_response: str         # chat message example of a successful response
    failed_response: str             # chat message example of a failed response
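
For illustration only, and assuming Criterion takes a simple keyword constructor, a criterion with subcriteria (as generate_criteria may produce when use_subcritic=True) and the shape of the quantify_criteria return value could look like the following; every literal name and value here is invented:

# Invented example values, following the attribute spec above.
clarity = Criterion(
    name="Clarity",
    description="Clarity of the provided explanation.",
    accepted_values=["poor", "fair", "good", "excellent"],
    sub_criteria=[
        Criterion(
            name="Conciseness",
            description="How concise the explanation is.",
            accepted_values=["verbose", "adequate", "concise"],
            sub_criteria=[],
        ),
    ],
)

# Possible shape of the dictionary returned by quantify_criteria:
assessment = {
    "Problem Interpretation": "mostly accurate",
    "Clarity": "good",
    "Conciseness": "adequate",
}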

Update 1: PR #2156
Contributor: @jluey1

Update 2: PR #2526
Contributor: @lalo

@YichiHuang commented

What is the latest development progress of the verifier agent?

jluey1 commented Jul 30, 2024

> What is the latest development progress of the verifier agent?

Hi Yichi, right now I am finalizing the work to integrate AgentEval into AutoGen Studio. I do not have an ETA on when I will get to the VerifierAgent work.

@rysweet added the 0.2 and needs-triage labels on Oct 2, 2024
@fniedtner commented

not planned for 0.4 unless there is significant interest

@fniedtner closed this as not planned on Oct 22, 2024