Revert "The unlooping and fixing of file execution. (Significant-Gravitas#3368)"

This reverts commit d8c16de.
BillSchumacher committed May 2, 2023
1 parent 4767fe6 commit c018519
Showing 102 changed files with 989 additions and 7,177 deletions.
31 changes: 22 additions & 9 deletions .devcontainer/Dockerfile
@@ -1,13 +1,26 @@
# Use an official Python base image from the Docker Hub
# [Choice] Python version (use -bullseye variants on local arm64/Apple Silicon): 3, 3.10, 3-bullseye, 3.10-bullseye, 3-buster, 3.10-buster
ARG VARIANT=3-bullseye
FROM python:3.10

# Install browsers
RUN apt-get update && apt-get install -y \
chromium-driver firefox-esr \
ca-certificates
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# Remove imagemagick due to https://security-tracker.debian.org/tracker/CVE-2019-10131
&& apt-get purge -y imagemagick imagemagick-6-common

# Install utilities
RUN apt-get install -y curl jq wget git
# Temporary: Upgrade python packages due to https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40897
# They are installed by the base image (python) which does not have the patch.
RUN python3 -m pip install --upgrade setuptools

# Declare working directory
WORKDIR /workspace/Auto-GPT
# Install Chromium for web browsing
RUN apt-get install -y chromium-driver

# [Optional] If your pip requirements rarely change, uncomment this section to add them to the image.
# COPY requirements.txt /tmp/pip-tmp/
# RUN pip3 --disable-pip-version-check --no-cache-dir install -r /tmp/pip-tmp/requirements.txt \
# && rm -rf /tmp/pip-tmp

# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>

# [Optional] Uncomment this line to install global node packages.
# RUN su vscode -c "source /usr/local/share/nvm/nvm.sh && npm install -g <your-package-here>" 2>&1
12 changes: 6 additions & 6 deletions .devcontainer/devcontainer.json
@@ -1,14 +1,14 @@
{
"dockerComposeFile": "./docker-compose.yml",
"service": "auto-gpt",
"workspaceFolder": "/workspace/Auto-GPT",
"shutdownAction": "stopCompose",
"build": {
"dockerfile": "./Dockerfile",
"context": "."
},
"features": {
"ghcr.io/devcontainers/features/common-utils:2": {
"installZsh": "true",
"username": "vscode",
"userUid": "6942",
"userGid": "6942",
"userUid": "1000",
"userGid": "1000",
"upgradePackages": "true"
},
"ghcr.io/devcontainers/features/desktop-lite:1": {},
19 changes: 0 additions & 19 deletions .devcontainer/docker-compose.yml

This file was deleted.

13 changes: 0 additions & 13 deletions .env.template
@@ -13,11 +13,6 @@
## AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)
# AI_SETTINGS_FILE=ai_settings.yaml

## AUTHORISE COMMAND KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y
## EXIT_KEY - Key to exit AUTO-GPT
# EXIT_KEY=n

################################################################################
### LLM PROVIDER
################################################################################
@@ -49,14 +44,6 @@ OPENAI_API_KEY=your-openai-api-key
# FAST_TOKEN_LIMIT=4000
# SMART_TOKEN_LIMIT=8000

### EMBEDDINGS
## EMBEDDING_MODEL - Model to use for creating embeddings
## EMBEDDING_TOKENIZER - Tokenizer to use for chunking large inputs
## EMBEDDING_TOKEN_LIMIT - Chunk size limit for large inputs
# EMBEDDING_MODEL=text-embedding-ada-002
# EMBEDDING_TOKENIZER=cl100k_base
# EMBEDDING_TOKEN_LIMIT=8191

################################################################################
### MEMORY
################################################################################
5 changes: 0 additions & 5 deletions .gitattributes

This file was deleted.

38 changes: 6 additions & 32 deletions CONTRIBUTING.md
@@ -8,7 +8,7 @@ This document provides guidelines and best practices to help you contribute effe

By participating in this project, you agree to abide by our [Code of Conduct]. Please read it to understand the expectations we have for everyone who contributes to this project.

[Code of Conduct]: https://docs.agpt.co/code-of-conduct/
[Code of Conduct]: https://significant-gravitas.github.io/Auto-GPT/code-of-conduct.md

## 📢 A Quick Word
Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT.
@@ -99,22 +99,15 @@ https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-labe
## Testing your changes

If you add or change code, make sure the updated code is covered by tests.
To increase coverage if necessary, [write tests using pytest].

For more info on running tests, please refer to ["Running tests"](https://docs.agpt.co/testing/).
To increase coverage if necessary, [write tests using `pytest`].

[write tests using pytest]: https://realpython.com/pytest-python-testing/
For more info on running tests, please refer to ["Running tests"](https://significant-gravitas.github.io/Auto-GPT/testing/).

### API-dependent tests

To run tests that involve making calls to the OpenAI API, we use VCRpy. It caches known
requests and matching responses in so-called *cassettes*, allowing us to run the tests
in CI without needing actual API access.

When changes cause a test prompt to be generated differently, it will likely miss the
cache and make a request to the API, updating the cassette with the new request+response.
*Be sure to include the updated cassette in your PR!*
[write tests using `pytest`]: https://realpython.com/pytest-python-testing/


In Pytest, we use VCRpy. It's a package that allows us to save OpenAI and other API providers' responses.
When you run Pytest locally:

- If no prompt change: you will not consume API tokens because there are no new OpenAI calls required.
@@ -127,22 +120,3 @@ When you run Pytest locally:
- Or: The test might be poorly written. In that case, you can make suggestions to change the test.

In our CI pipeline, Pytest will use the cassettes and not call paid API providers, so we need your help to record the replays that you break.
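The cassette mechanism described above can be illustrated with a small, self-contained sketch. This is not VCRpy's actual API — just a stdlib-only model of the record/replay behaviour (the `cached_call` helper is hypothetical):

```python
import json
import os
import tempfile

def cached_call(cassette_path, request, make_request):
    """Replay a recorded response if the request matches the cassette;
    otherwise call the real API and record the new request+response."""
    if os.path.exists(cassette_path):
        with open(cassette_path) as f:
            cassette = json.load(f)
        if cassette["request"] == request:
            return cassette["response"]  # cache hit: no API tokens spent
    response = make_request(request)     # cache miss: real (paid) call
    with open(cassette_path, "w") as f:
        json.dump({"request": request, "response": response}, f)
    return response

# Usage: the first call "records"; an identical second call replays.
path = os.path.join(tempfile.mkdtemp(), "cassette.json")
calls = []

def fake_api(req):
    calls.append(req)
    return {"text": "hello"}

first = cached_call(path, {"prompt": "hi"}, fake_api)
second = cached_call(path, {"prompt": "hi"}, fake_api)
assert len(calls) == 1  # the second run hit the cassette
```

A changed prompt would miss the cassette and trigger a fresh (recorded) call — which is why updated cassettes belong in your PR.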


### Community Challenges
Challenges are goals we need Auto-GPT to achieve.
To pick the challenge you like, go to the tests/integration/challenges folder and select the areas you would like to work on.
- a challenge is new if level_currently_beaten is None
- a challenge is in progress if level_currently_beaten is greater or equal to 1
- a challenge is beaten if level_currently_beaten = max_level
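The three status rules above can be sketched as a small helper (the function name is hypothetical, not part of the challenge framework):

```python
def challenge_status(level_currently_beaten, max_level):
    """Classify a challenge by the level a contributor has beaten so far."""
    if level_currently_beaten is None:
        return "new"
    if level_currently_beaten >= max_level:
        return "beaten"
    return "in progress"  # level_currently_beaten is 1 or greater

assert challenge_status(None, 3) == "new"
assert challenge_status(1, 3) == "in progress"
assert challenge_status(3, 3) == "beaten"
```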

Here is an example of how to run the memory challenge A and attempt to beat level 3.

pytest -s tests/integration/challenges/memory/test_memory_challenge_a.py --level=3

To beat a challenge, you're not allowed to change anything in the tests folder; you have to add code in the autogpt folder.

Challenges use cassettes. Cassettes allow us to replay your runs in our CI pipeline.
Don't hesitate to delete the cassettes associated with the challenge you're working on if you need to; otherwise, it will keep replaying the last run.

Once you've beaten a new level of a challenge, please create a pull request and we will analyze how you changed Auto-GPT to beat the challenge.
18 changes: 9 additions & 9 deletions README.md
@@ -1,5 +1,4 @@
# Auto-GPT: An Autonomous GPT-4 Experiment
[![Official Website](https://img.shields.io/badge/Official%20Website-agpt.co-blue?style=flat&logo=world&logoColor=white)](https://agpt.co)
[![Unit Tests](https://img.shields.io/github/actions/workflow/status/Significant-Gravitas/Auto-GPT/ci.yml?label=unit%20tests)](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml)
[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt)
[![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social)](https://github.com/Significant-Gravitas/Auto-GPT/stargazers)
@@ -100,21 +99,21 @@ Your support is greatly appreciated. Development of this free, open-source proje

Please see the [documentation][docs] for full setup instructions and configuration options.

[docs]: https://docs.agpt.co/
[docs]: https://significant-gravitas.github.io/Auto-GPT/

## 📖 Documentation
* [⚙️ Setup][docs/setup]
* [💻 Usage][docs/usage]
* [🔌 Plugins][docs/plugins]
* Configuration
* [🔍 Web Search](https://docs.agpt.co/configuration/search/)
* [🧠 Memory](https://docs.agpt.co/configuration/memory/)
* [🗣️ Voice (TTS)](https://docs.agpt.co/configuration/voice/)
* [🖼️ Image Generation](https://docs.agpt.co/configuration/imagegen/)
* [🔍 Web Search](https://significant-gravitas.github.io/Auto-GPT/configuration/search/)
* [🧠 Memory](https://significant-gravitas.github.io/Auto-GPT/configuration/memory/)
* [🗣️ Voice (TTS)](https://significant-gravitas.github.io/Auto-GPT/configuration/voice/)
* [🖼️ Image Generation](https://significant-gravitas.github.io/Auto-GPT/configuration/imagegen/)

[docs/setup]: https://docs.agpt.co/setup/
[docs/usage]: https://docs.agpt.co/usage/
[docs/plugins]: https://docs.agpt.co/plugins/
[docs/setup]: https://significant-gravitas.github.io/Auto-GPT/setup/
[docs/usage]: https://significant-gravitas.github.io/Auto-GPT/usage/
[docs/plugins]: https://significant-gravitas.github.io/Auto-GPT/plugins/

## ⚠️ Limitations

@@ -126,6 +125,7 @@ This experiment aims to showcase the potential of GPT-4 but comes with some limi

## 🛡 Disclaimer

This project, Auto-GPT, is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.

The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by Auto-GPT.
8 changes: 0 additions & 8 deletions autogpt/__init__.py
@@ -1,13 +1,5 @@
import os
import random
import sys

from dotenv import load_dotenv

if "pytest" in sys.argv or "pytest" in sys.modules or os.getenv("CI"):
print("Setting random seed to 42")
random.seed(42)

# Load the users .env file into environment variables
load_dotenv(verbose=True, override=True)

56 changes: 38 additions & 18 deletions autogpt/agent/agent.py
@@ -1,14 +1,15 @@
from colorama import Fore, Style

from autogpt.app import execute_command, get_command
from autogpt.chat import chat_with_ai, create_chat_message
from autogpt.config import Config
from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
from autogpt.json_utils.utilities import LLM_DEFAULT_RESPONSE_FORMAT, validate_json
from autogpt.llm import chat_with_ai, create_chat_completion, create_chat_message
from autogpt.json_utils.utilities import validate_json
from autogpt.llm_utils import create_chat_completion
from autogpt.logs import logger, print_assistant_thoughts
from autogpt.speech import say_text
from autogpt.spinner import Spinner
from autogpt.utils import clean_input
from autogpt.utils import clean_input, send_chat_message_to_user
from autogpt.workspace import Workspace


@@ -56,10 +57,6 @@ def __init__(
cfg = Config()
self.ai_name = ai_name
self.memory = memory
self.summary_memory = (
"I was created."  # Initial memory necessary to avoid hallucination
)
self.last_memory_index = 0
self.full_message_history = full_message_history
self.next_action_count = next_action_count
self.command_registry = command_registry
@@ -87,7 +84,11 @@ def start_interaction_loop(self):
logger.typewriter_log(
"Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
)
send_chat_message_to_user(
f"Continuous Limit Reached: \n {cfg.continuous_limit}"
)
break
send_chat_message_to_user("Thinking... \n")
# Send message to AI, get response
with Spinner("Thinking... "):
assistant_reply = chat_with_ai(
@@ -107,7 +108,7 @@

# Print Assistant thoughts
if assistant_reply_json != {}:
validate_json(assistant_reply_json, LLM_DEFAULT_RESPONSE_FORMAT)
validate_json(assistant_reply_json, "llm_response_format_1")
# Get command name and arguments
try:
print_assistant_thoughts(
@@ -117,6 +118,7 @@
if cfg.speak_mode:
say_text(f"I want to execute {command_name}")

send_chat_message_to_user("Thinking... \n")
arguments = self._resolve_pathlike_command_args(arguments)

except Exception as e:
@@ -127,26 +129,31 @@
# Get key press: Prompt the user to press enter to continue or escape
# to exit
self.user_input = ""
send_chat_message_to_user(
"NEXT ACTION: \n " + f"COMMAND = {command_name} \n "
f"ARGUMENTS = {arguments}"
)
logger.typewriter_log(
"NEXT ACTION: ",
Fore.CYAN,
f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
)

logger.info(
print(
"Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands"
"'n' to exit program, or enter feedback for "
f"{self.ai_name}..."
f"{self.ai_name}...",
flush=True,
)
while True:
console_input = ""
if cfg.chat_messages_enabled:
console_input = clean_input("Waiting for your response...")
else:
console_input = clean_input(
Fore.MAGENTA + "Input:" + Style.RESET_ALL
)
if console_input.lower().strip() == cfg.authorise_key:
if console_input.lower().strip() == "y":
user_input = "GENERATE NEXT COMMAND JSON"
break
elif console_input.lower().strip() == "s":
@@ -164,28 +171,28 @@ def start_interaction_loop(self):
Fore.YELLOW,
"",
)
if self_feedback_resp[0].lower().strip() == cfg.authorise_key:
if self_feedback_resp[0].lower().strip() == "y":
user_input = "GENERATE NEXT COMMAND JSON"
else:
user_input = self_feedback_resp
break
elif console_input.lower().strip() == "":
logger.warn("Invalid input format.")
print("Invalid input format.")
continue
elif console_input.lower().startswith(f"{cfg.authorise_key} -"):
elif console_input.lower().startswith("y -"):
try:
self.next_action_count = abs(
int(console_input.split(" ")[1])
)
user_input = "GENERATE NEXT COMMAND JSON"
except ValueError:
logger.warn(
print(
"Invalid input format. Please enter 'y -n' where n is"
" the number of continuous tasks."
)
continue
break
elif console_input.lower() == cfg.exit_key:
elif console_input.lower() == "n":
user_input = "EXIT"
break
else:
@@ -200,10 +207,16 @@
"",
)
elif user_input == "EXIT":
logger.info("Exiting...")
send_chat_message_to_user("Exiting...")
print("Exiting...", flush=True)
break
else:
# Print command
send_chat_message_to_user(
"NEXT ACTION: \n " + f"COMMAND = {command_name} \n "
f"ARGUMENTS = {arguments}"
)

logger.typewriter_log(
"NEXT ACTION: ",
Fore.CYAN,
@@ -239,6 +252,13 @@ def start_interaction_loop(self):
result = plugin.post_command(command_name, result)
if self.next_action_count > 0:
self.next_action_count -= 1
memory_to_add = (
f"Assistant Reply: {assistant_reply} "
f"\nResult: {result} "
f"\nHuman Feedback: {user_input} "
)

self.memory.add(memory_to_add)

# Check if there's a result from the command, and append it to the message
# history
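The agent.py hunks above swap the configurable console keys (`cfg.authorise_key`, `cfg.exit_key`) back to hard-coded `"y"` and `"n"`. A minimal sketch of the configurable pattern being reverted — the attribute names come from the diff, but the dispatch helper itself is hypothetical:

```python
class Config:
    """Defaults mirror the hard-coded keys; a real config would allow overrides."""
    def __init__(self, authorise_key="y", exit_key="n"):
        self.authorise_key = authorise_key
        self.exit_key = exit_key

def interpret(console_input, cfg):
    """Map raw console input to an agent action."""
    text = console_input.lower().strip()
    if text == cfg.authorise_key:
        return "GENERATE NEXT COMMAND JSON"
    if text.startswith(f"{cfg.authorise_key} -"):
        # e.g. 'y -3' authorises three continuous commands
        count = abs(int(text.split(" ")[1]))
        return ("CONTINUOUS", count)
    if text == cfg.exit_key:
        return "EXIT"
    return ("FEEDBACK", console_input)

cfg = Config()
assert interpret("y", cfg) == "GENERATE NEXT COMMAND JSON"
assert interpret("y -3", cfg) == ("CONTINUOUS", 3)
assert interpret("n", cfg) == "EXIT"
```

Making the keys attributes of `Config` lets users rebind them without touching the input loop; the revert trades that flexibility for the simpler literals.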
3 changes: 2 additions & 1 deletion autogpt/agent/agent_manager.py
@@ -4,8 +4,9 @@
from typing import List

from autogpt.config.config import Config
from autogpt.llm import Message, create_chat_completion
from autogpt.llm_utils import create_chat_completion
from autogpt.singleton import Singleton
from autogpt.types.openai import Message


class AgentManager(metaclass=Singleton):
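The implementation behind the `Singleton` import above is not shown in this diff; the following is a common sketch of the singleton-metaclass pattern it suggests, not necessarily Auto-GPT's exact code:

```python
class Singleton(type):
    """Metaclass: every instantiation of a class using it returns one shared instance."""
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            # First instantiation: build and cache the instance.
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]

class AgentManager(metaclass=Singleton):
    def __init__(self):
        self.agents = {}

# Usage: repeated construction yields the same object.
a = AgentManager()
b = AgentManager()
assert a is b
```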
