
Add collate file and more tests from autogpt into testbed #915


Merged: 13 commits, merged Dec 14, 2023
17 changes: 17 additions & 0 deletions samples/tools/testbed/README.md
@@ -216,3 +216,20 @@ python ./run_scenarios.py ./scenarios/GAIA/gaia_validation_level_1__two_agents_g
# Compute Metrics
python utils/collate_gaia_csv.py ./results/gaia_validation_level_1__two_agents_gpt4 | python utils/metrics_gaia.py
```

## (Example) Running tasks from AutoGPT

The Testbed supports running tasks proposed in the [AutoGPT benchmark](https://github.com/Significant-Gravitas/AutoGPT/tree/master/benchmark/agbenchmark/challenges). In this scenario, the agents are prompted to handle a diverse range of tasks, including coding, question answering, and web scraping. As in the HumanEval scenarios, the agents can call a unit-test script to check whether the task was completed successfully.

Running this scenario type requires converting the tasks, running the Testbed, collating the results, and finally computing the metrics. The following commands will run each test instance with GPT-4:

```
# Convert tasks
python utils/prepare_autogpt.py

# Run all the scenarios with GPT-4
python run_scenarios.py scenarios/AutoGPT/autogpt_twoagent_gpt4.jsonl

# Compute metrics (the metrics script is shared with HumanEval)
python utils/collate_autogpt.py ./results/autogpt_twoagent_gpt4 | python utils/metrics_human_eval.py
```
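Each converted task becomes one line of jsonl pointing at a scenario template. The conversion script itself is not shown in this view; purely as a rough sketch of the kind of record `prepare_autogpt.py` might emit (the field names below are illustrative assumptions, not the script's actual schema):

```python
# Hypothetical sketch of a data.json -> jsonl conversion; field names are illustrative.
import glob
import json

records = []
for path in glob.glob("scenarios/AutoGPT/challenges/*/data.json"):
    with open(path) as f:
        challenge = json.load(f)
    records.append(
        {
            "id": challenge["name"],            # e.g. "PasswordGenerator"
            "template": "Templates/TwoAgents",  # scenario template to instantiate
            "substitutions": {"__TASK__": challenge["task"]},
        }
    )

with open("scenarios/AutoGPT/autogpt_twoagent_gpt4.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```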
4 changes: 4 additions & 0 deletions samples/tools/testbed/includes/requirements.txt
@@ -1 +1,5 @@
git+https://github.com/microsoft/autogen.git
pandas
beautifulsoup4
requests
pytest
2 changes: 1 addition & 1 deletion samples/tools/testbed/scenarios/AutoGPT/README.md
@@ -1,3 +1,3 @@
The AutoGPT-style tasks are contained in the `challenges` folder.

Run `python utils/prepare_data.py` to convert the tasks to a jsonl format compatible with the evaluation pipeline.
Run `python ../../utils/prepare_autogpt.py` to convert the tasks to a jsonl format compatible with the evaluation pipeline.
check.py
@@ -3,6 +3,7 @@
import os
import subprocess
import sys
import shutil


def scoring(content: str, should_contain: list, should_not_contain: list):
@@ -28,7 +29,6 @@ def scoring(content: str, should_contain: list, should_not_contain: list):


def check():
workspace = "coding"
files_contents = []
scores = []

@@ -54,9 +54,11 @@ def check():

for file_path in matching_files:
if eval_type == "python":
# copy the test file to the working directory
shutil.copy(f"../custom_python/{file_path}", "./")
result = subprocess.run(
[sys.executable, file_path],
cwd=os.path.abspath(workspace),
cwd=os.path.abspath("./"),
capture_output=True,
text=True,
)
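Net effect of the changes above: `check()` no longer assumes a fixed `coding` workspace; it copies each ground-truth test file into the current working directory and runs it there. A minimal sketch of the updated flow for python-type evals (simplified from the full script, which also handles scoring and other eval types):

```python
# Simplified sketch of the python-eval branch of check(); not the full script.
import os
import shutil
import subprocess
import sys


def run_python_evals(matching_files):
    for file_path in matching_files:
        # Copy the ground-truth unit test next to the agent's output files.
        shutil.copy(f"../custom_python/{file_path}", "./")
        result = subprocess.run(
            [sys.executable, file_path],
            cwd=os.path.abspath("./"),
            capture_output=True,
            text=True,
        )
        print(result.stdout, result.stderr)
```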
scenario.py (two-agent scenario template)
@@ -24,24 +24,24 @@
"work_dir": work_dir,
"use_docker": False,
},
max_consecutive_auto_reply=10,
max_consecutive_auto_reply=5,
# default_auto_reply="TERMINATE",
)

if target_folder:
# The task involves reading from a file and then operating on it.
message = """
Your task is to: __TASK__ The file you needed is located in this directory: '__TARGET_FOLDER__'. You should save the output files in this directory: './'
Use the following command to check if all the unit tests have passed:
Here is the task description: __TASK__ The file you need is located in this directory: '__TARGET_FOLDER__'. You should save the output files in the current directory: './'
Run the following command to check if all the unit tests have passed:
```bash
python ../check.py
```
You should refine the code and results until all the tests have passed.
"""
else:
message = """
Your task is to: __TASK__
Use the following command to check if all the unit tests have passed:
Here is the task description: __TASK__
Run the following command to check if all the unit tests have passed:
```bash
python ../check.py
```
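`__TASK__` and `__TARGET_FOLDER__` are placeholder tokens; the substitution that fills them with a challenge's task description and data folder is not part of this diff. Presumably it amounts to something like this hypothetical helper:

```python
# Hypothetical placeholder substitution; the actual mechanism is not shown in this PR.
def fill_template(message: str, task: str, target_folder: str = "") -> str:
    message = message.replace("__TASK__", task)
    if target_folder:
        message = message.replace("__TARGET_FOLDER__", target_folder)
    return message
```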
test_pwd.py
@@ -0,0 +1,25 @@
import unittest

import password_generator


class TestPasswordGenerator(unittest.TestCase):
def test_password_length(self):
for i in range(8, 17):
password = password_generator.generate_password(i)
self.assertEqual(len(password), i)

def test_value_error(self):
with self.assertRaises(ValueError):
password_generator.generate_password(7)
with self.assertRaises(ValueError):
password_generator.generate_password(17)

def test_password_content(self):
password = password_generator.generate_password()
self.assertTrue(any(c.isdigit() for c in password))
self.assertTrue(any(c in password_generator.string.punctuation for c in password))


if __name__ == "__main__":
unittest.main()
data.json (PasswordGenerator challenge)
@@ -0,0 +1,28 @@
{
"category": [
"coding"
],
"cutoff": 90,
"dependencies": [
"TestThreeSum"
],
"eval_id": "ac75c471-e0ce-400c-ba9a-fb72aaab444f",
"ground": {
"answer": "password_generator.py is created and satisfies the requirements.",
"eval": {
"type": "python"
},
"files": [
"test_pwd.py"
],
"should_contain": [],
"should_not_contain": []
},
"info": {
"description": "Tests if the agent can create a random password generator.",
"difficulty": "basic",
"side_effects": []
},
"name": "PasswordGenerator",
"task": "Create a random password generator. The password should have between 8 and 16 characters and should contain at least one letter, number and symbol. The password should be printed to the console. If no length is specified, the password should be 8 characters long. The password_generator should be imported as a module and called as password = password_generator.generate_password(length=x). Any invalid input should raise a ValueError."
}
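For reference, a minimal solution sketch that would satisfy `test_pwd.py` above; this is not part of the PR, only an illustration of what the agents are asked to produce:

```python
# password_generator.py - illustrative solution sketch, not part of this PR.
import random
import string  # test_pwd.py accesses password_generator.string.punctuation


def generate_password(length: int = 8) -> str:
    if not 8 <= length <= 16:
        raise ValueError("Password length must be between 8 and 16.")
    # Guarantee at least one letter, one digit, and one symbol.
    required = [
        random.choice(string.ascii_letters),
        random.choice(string.digits),
        random.choice(string.punctuation),
    ]
    pool = string.ascii_letters + string.digits + string.punctuation
    password = required + [random.choice(pool) for _ in range(length - 3)]
    random.shuffle(password)
    return "".join(password)


if __name__ == "__main__":
    print(generate_password())
```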
test_file_organize.py
@@ -0,0 +1,41 @@
import os
import subprocess
import tempfile
import unittest


class TestOrganizeFiles(unittest.TestCase):
def setUp(self):
# Create temporary directory
self.test_dir = tempfile.mkdtemp()

# File types and their corresponding directory
self.file_types = {
"test_image.png": "images",
"test_doc.txt": "documents",
"test_audio.mp3": "audio",
}

# Create test files
for file_name in self.file_types.keys():
open(os.path.join(self.test_dir, file_name), "a").close()

def test_organize_files(self):
# Call the organize_files.py script using subprocess
subprocess.call(["python", "organize_files.py", "--directory_path=" + self.test_dir])

# Check if the files have been moved to the correct directories
for file_name, directory in self.file_types.items():
self.assertTrue(os.path.isfile(os.path.join(self.test_dir, directory, file_name)))

def tearDown(self):
# Delete test directory and its contents
for file_name, directory in self.file_types.items():
os.remove(os.path.join(self.test_dir, directory, file_name))
for directory in set(self.file_types.values()):
os.rmdir(os.path.join(self.test_dir, directory))
os.rmdir(self.test_dir)


if __name__ == "__main__":
unittest.main()
data.json (FileOrganizer challenge)
@@ -0,0 +1,29 @@
{
"category": [
"coding",
"general"
],
"cutoff": 90,
"dependencies": [
"TestPasswordGenerator"
],
"eval_id": "029c1e6f-2b36-451e-bca6-60063b827d2e",
"ground": {
"answer": "The correct python file is written and organizes the files accordingly",
"eval": {
"type": "python"
},
"files": [
"test_file_organize.py"
],
"should_contain": [],
"should_not_contain": []
},
"info": {
"description": "Tests if the agent can create a file organizer.",
"difficulty": "basic",
"side_effects": []
},
"name": "FileOrganizer",
"task": "Create a file organizer CLI tool in Python that sorts files in a directory based on their file types (e.g., images, documents, audio) and moves them into these corresponding folders: 'images', 'documents', 'audio'. The entry point will be a python file that can be run this way: python organize_files.py --directory_path=YOUR_DIRECTORY_PATH"
}
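Again for reference, a minimal `organize_files.py` sketch that would pass `test_file_organize.py`; not part of the PR, and the extension map is an illustrative assumption:

```python
# organize_files.py - illustrative solution sketch, not part of this PR.
import argparse
import os
import shutil

# Illustrative mapping from file extensions to target folders.
EXTENSION_MAP = {
    ".png": "images", ".jpg": "images", ".gif": "images",
    ".txt": "documents", ".pdf": "documents", ".docx": "documents",
    ".mp3": "audio", ".wav": "audio",
}


def organize(directory_path: str) -> None:
    for name in os.listdir(directory_path):
        src = os.path.join(directory_path, name)
        folder = EXTENSION_MAP.get(os.path.splitext(name)[1].lower())
        if not os.path.isfile(src) or folder is None:
            continue
        dest_dir = os.path.join(directory_path, folder)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(src, os.path.join(dest_dir, name))


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--directory_path", required=True)
    organize(parser.parse_args().directory_path)
```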
test_url_shorten.py
@@ -0,0 +1,22 @@
import unittest

from url_shortener import retrieve_url, shorten_url


class TestURLShortener(unittest.TestCase):
def test_url_retrieval(self):
# Shorten the URL to get its shortened form
shortened_url = shorten_url("https://www.example.com")

# Retrieve the original URL using the shortened URL directly
retrieved_url = retrieve_url(shortened_url)

self.assertEqual(
retrieved_url,
"https://www.example.com",
"Retrieved URL does not match the original!",
)


if __name__ == "__main__":
unittest.main()
data.json (UrlShortener challenge)
@@ -0,0 +1,28 @@
{
"category": [
"coding"
],
"cutoff": 150,
"dependencies": [
"TestFileOrganizer"
],
"eval_id": "8106fd7f-83fd-496e-9513-280f4a3f012c",
"ground": {
"answer": "The correct python file for a basic url shortener CLI",
"eval": {
"type": "python"
},
"files": [
"test_url_shorten.py"
],
"should_contain": [],
"should_not_contain": []
},
"info": {
"description": "Tests if the agent can create a URL shortener.",
"difficulty": "basic",
"side_effects": []
},
"name": "UrlShortener",
"task": "Build a basic URL shortener using a python CLI. Here are the specifications.\n\nFunctionality: The program should have two primary functionalities.\n\nShorten a given URL.\nRetrieve the original URL from a shortened URL.\n\nCLI: The command-line interface should accept a URL as its first input. It should be able to determine if the url is a shortened url or not. If the url is not shortened, it will display ONLY the shortened url, otherwise, it will display ONLY the original unshortened URL. Afterwards, it should prompt the user for another URL to process.\n\nTechnical specifications:\nBuild a file called url_shortener.py. This file will be called through command lines. Do not write your own test cases or any unit test code.\n\nEdge cases:\nFor the sake of simplicity, there will be no edge cases, you can assume the input is always correct and the user immediately passes the shortened version of the url he just shortened.\n\nYou will be expected to create a python file called url_shortener.py that will function through imported as a module.\n\nThe url_shortener.py will be tested this way:\n```\nimport unittest\nfrom url_shortener import shorten_url, retrieve_url\n\nclass TestURLShortener(unittest.TestCase):\n def test_url_retrieval(self):\n # Shorten the URL to get its shortened form\n shortened_url = shorten_url('https://www.example.com')\n\n # Retrieve the original URL using the shortened URL directly\n retrieved_url = retrieve_url(shortened_url)\n\n self.assertEqual(retrieved_url, 'https://www.example.com', \"Retrieved URL does not match the original!\")\n\nif __name__ == \"__main__\":\n unittest.main()\n```"
}
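A minimal in-memory `url_shortener.py` sketch that satisfies `test_url_shorten.py`; not part of the PR (real agent solutions may persist the mapping differently):

```python
# url_shortener.py - illustrative solution sketch, not part of this PR.
_url_map = {}


def shorten_url(url: str) -> str:
    short = f"short.ly/{len(_url_map)}"
    _url_map[short] = url
    return short


def retrieve_url(shortened_url: str) -> str:
    # The spec guarantees the input is a URL that was just shortened.
    return _url_map[shortened_url]


if __name__ == "__main__":
    # CLI loop: shorten unknown URLs, expand known shortened ones.
    while True:
        url = input()
        print(retrieve_url(url) if url in _url_map else shorten_url(url))
```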
test_tictactoe.py
@@ -0,0 +1,41 @@
import subprocess

import pytest


def run_game_with_inputs(inputs):
# Start the game process
process = subprocess.Popen(
["python", "tic_tac_toe.py"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)

# Send the input moves one by one
output, errors = process.communicate("\n".join(inputs))

# Print the inputs and outputs
print("Inputs:\n", "\n".join(inputs))
print("Output:\n", output)
print("Errors:\n", errors)

return output


@pytest.mark.parametrize(
"inputs, expected_output",
[
(["0,0", "1,0", "0,1", "1,1", "0,2"], "Player 1 won!"),
(["1,0", "0,0", "1,1", "0,1", "2,0", "0,2"], "Player 2 won!"),
(["0,0", "0,1", "0,2", "1,1", "1,0", "1,2", "2,1", "2,0", "2,2"], "Draw"),
],
)
def test_game(inputs, expected_output):
output = run_game_with_inputs(inputs)
assert expected_output in output


if __name__ == "__main__":
pytest.main()
data.json (TicTacToe challenge)
@@ -0,0 +1,29 @@
{
"category": [
"coding",
"general"
],
"cutoff": 150,
"dependencies": [
"TestUrlShortener"
],
"eval_id": "504b1648-e14a-4982-8b27-074598eb4fd0",
"ground": {
"answer": "The correct python file for a TicTacToe game is written",
"eval": {
"type": "python"
},
"files": [
"test_tictactoe.py"
],
"should_contain": [],
"should_not_contain": []
},
"info": {
"description": "Tests if the agent can create Tic-Tac-Toe game",
"difficulty": "basic",
"side_effects": []
},
"name": "TicTacToe",
"task": "Build a Tic-Tac-Toe game using a python CLI. Here are the specifications.\n\nThe Grid: The game board is a 3x3 grid, consisting of 3 rows and 3 columns, creating a total of 9 squares.\n\nPlayers: There are two players. One player uses the number \"1\", and the other player uses the number \"2\".\n\nTaking Turns: Players take turns to put their respective numbers (\"1\" or \"2\") in an empty square of the grid. Once a player has placed their number in a square, it cannot be changed or removed.\n\nObjective: The goal is to get three of your numbers in a row, either horizontally, vertically, or diagonally.\n\nEnd of the Game: The game concludes in one of two ways: One player gets three of their numbers in a row (horizontally, vertically, or diagonally) and is declared the winner.\nAll squares on the grid are filled, and no player has three in a row. This situation is a \"draw\" or a \"tie\".\n\nTechnical specifications:\nBuild a file called tic_tac_toe.py. This file will be called through command lines. You will have to prompt users for their move. Player 1 will always start.\nPlayers will input their move in the following format: \"x,y\" where x and y represent the location in the grid (0,0 is top left, 2,2 is bottom right).\n\nYour primary requirement is to halt the game when appropriate and to print only one of these three exact sentences:\n\n\"Player 1 won!\"\n\"Player 2 won!\"\n\"Draw\"\n\nEdge cases: A player can send an incorrect location. Either the location is incorrect or the square is already filled. In this case, this counts as doing nothing, and the player gets prompted for new locations again.\n\n\nYou will be expected to create a python file called tic_tac_toe.py that will run through command lines by using ```python tic_tac_toe.py```.\n\nHere is an example of how your tic_tac_toe.py game will be tested.\n```\nprocess = subprocess.Popen(\n ['python', 'tic_tac_toe.py'],\n stdout=subprocess.PIPE,\n text=True\n)\n\noutput, _ = process.communicate('\\n'.join([\"0,0\", \"1,0\", \"0,1\", \"1,1\", \"0,2\"]))\n\nassert \"Player 1 won!\" in output\n```"
}
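And a compact `tic_tac_toe.py` sketch that passes the three parameterized games in `test_tictactoe.py`; not part of the PR:

```python
# tic_tac_toe.py - illustrative solution sketch, not part of this PR.
def winner(board):
    lines = [row[:] for row in board]                  # rows
    lines += [list(col) for col in zip(*board)]        # columns
    lines.append([board[i][i] for i in range(3)])      # main diagonal
    lines.append([board[i][2 - i] for i in range(3)])  # anti-diagonal
    for line in lines:
        if line[0] != 0 and line.count(line[0]) == 3:
            return line[0]
    return 0


def main():
    board = [[0] * 3 for _ in range(3)]
    player, moves = 1, 0
    while True:
        try:
            x, y = map(int, input().split(","))
        except ValueError:
            continue  # malformed input: re-prompt the same player
        except EOFError:
            return
        if not (0 <= x <= 2 and 0 <= y <= 2) or board[x][y] != 0:
            continue  # out-of-range or occupied square: re-prompt
        board[x][y] = player
        moves += 1
        if winner(board):
            print(f"Player {player} won!")
            return
        if moves == 9:
            print("Draw")
            return
        player = 3 - player


if __name__ == "__main__":
    main()
```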
file1.csv
@@ -0,0 +1,4 @@
ID,Name,Age
101,John,28
102,Alice,34
103,Bob,45
file2.csv
@@ -0,0 +1,4 @@
ID,Occupation,Salary
101,Engineer,80000
102,Doctor,120000
103,Lawyer,95000
data.json (CombineCsv challenge)
@@ -0,0 +1,32 @@
{
"category": [
"data",
"general"
],
"cutoff": 60,
"dependencies": [
"TestSortCsv"
],
"eval_id": "52467beb-b951-4356-9776-9a0ae46bb33b",
"ground": {
"answer": "The csv data is combined",
"eval": {
"type": "file"
},
"files": [
"output.csv"
],
"should_contain": [
"Age,ID,Name,Occupation,Salary\n28,101,John,Engineer,80000\n34,102,Alice,Doctor,120000\n45,103,Bob,Lawyer,95000"
]
},
"info": {
"description": "Tests if the agent can combine data from a csv",
"difficulty": "intermediate",
"side_effects": [
""
]
},
"name": "CombineCsv",
"task": "The csvs 'file1.csv' and 'file2.csv' both have a column 'ID'. Combine these 2 csvs using the 'ID' column. Then sort the rows by 'ID' in ascending order, sort the columns alphabetically. Write the output in 'output.csv'."
}
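A short pandas sketch for the CombineCsv task (pandas is added to `requirements.txt` in this PR); the solution itself is not part of the PR:

```python
# Illustrative solution sketch for CombineCsv, not part of this PR.
import pandas as pd

merged = pd.merge(pd.read_csv("file1.csv"), pd.read_csv("file2.csv"), on="ID")
merged = merged.sort_values("ID")        # rows by ID, ascending
merged = merged[sorted(merged.columns)]  # columns alphabetically
merged.to_csv("output.csv", index=False)
```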
Input csv for a subsequent challenge (file name truncated in this view):
@@ -0,0 +1,12 @@
Date,Description,Amount,Category
2023-01-01,Grocery Store,52.3,Groceries
2023-01-02,Pharmacy,12.5,Healthcare
2023-01-03,Gas Station,29.1,Transportation
2023-01-04,Water,19,Utilities
2023-01-05,Grocery Store,60.25,Groceries
2023-01-06,Coffee Shop,4.5,Dining
2023-01-07,Cinema Tickets,20,Entertainment
2023-01-08,Book Store,30.4,Shopping
2023-01-09,Restaurant Dinner,55.8,Dining
2023-01-10,Electric Bill,65.35,Utilities
2023-01-11,Grocery Store,45.1,Groceries