
Commit

… swyx/extendLocalCache
swyxio committed Apr 13, 2023
2 parents 66ffbc4 + 36dc481 commit 4c8aaa5
Showing 39 changed files with 576 additions and 217 deletions.
7 changes: 2 additions & 5 deletions .env.template
@@ -1,6 +1,7 @@
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENV=your-pinecone-region
OPENAI_API_KEY=your-openai-api-key
TEMPERATURE=1
ELEVENLABS_API_KEY=your-elevenlabs-api-key
ELEVENLABS_VOICE_1_ID=your-voice-id
ELEVENLABS_VOICE_2_ID=your-voice-id
@@ -9,11 +10,7 @@ FAST_LLM_MODEL=gpt-3.5-turbo
GOOGLE_API_KEY=
CUSTOM_SEARCH_ENGINE_ID=
USE_AZURE=False
OPENAI_AZURE_API_BASE=your-base-url-for-azure
OPENAI_AZURE_API_VERSION=api-version-for-azure
OPENAI_AZURE_DEPLOYMENT_ID=deployment-id-for-azure
OPENAI_AZURE_CHAT_DEPLOYMENT_ID=deployment-id-for-azure-chat
OPENAI_AZURE_EMBEDDINGS_DEPLOYMENT_ID=deployment-id-for-azure-embeddigs
EXECUTE_LOCAL_COMMANDS=False
IMAGE_PROVIDER=dalle
HUGGINGFACE_API_TOKEN=
USE_MAC_OS_TTS=False
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -26,7 +26,7 @@ By following these guidelines, your PRs are more likely to be merged quickly aft
- [ ] I have thoroughly tested my changes with multiple different prompts.
- [ ] I have considered potential risks and mitigations for my changes.
- [ ] I have documented my changes clearly and comprehensively.
- [ ] I have not snuck in any "extra" small tweaks changes <!-- Submit these as separate Pull Reqests, they are the easiest to merge! -->
- [ ] I have not snuck in any "extra" small tweaks changes <!-- Submit these as separate Pull Requests, they are the easiest to merge! -->

<!-- If you haven't added tests, please explain why. If you have, check the appropriate box. If you've ensured your PR is atomic and well-documented, check the corresponding boxes. -->

2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -32,7 +32,7 @@ jobs:
- name: Lint with flake8
continue-on-error: false
run: flake8 scripts/ tests/ --select E303,W293,W291,W292,E305
run: flake8 scripts/ tests/ --select E303,W293,W291,W292,E305,E231,E302

- name: Run unittest tests with coverage
run: |
5 changes: 5 additions & 0 deletions .gitignore
@@ -7,9 +7,11 @@ package-lock.json
auto_gpt_workspace/*
*.mpeg
.env
azure.yaml
*venv/*
outputs/*
ai_settings.yaml
last_run_ai_settings.yaml
.vscode
.idea/*
auto-gpt.json
@@ -19,3 +21,6 @@ log.txt
.coverage
coverage.xml
htmlcov/

# For Macs Dev Environs: ignoring .Desktop Services_Store
.DS_Store
49 changes: 31 additions & 18 deletions README.md
@@ -2,8 +2,8 @@

![GitHub Repo stars](https://img.shields.io/github/stars/Torantulino/auto-gpt?style=social)
![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)
[![](https://dcbadge.vercel.app/api/server/PQ7VX6TY4t?style=flat)](https://discord.gg/PQ7VX6TY4t)
[![Unit Tests](https://github.com/Torantulino/Auto-GPT/actions/workflows/unit_tests.yml/badge.svg)](https://github.com/Torantulino/Auto-GPT/actions/workflows/unit_tests.yml)
[![Discord Follow](https://dcbadge.vercel.app/api/server/PQ7VX6TY4t?style=flat)](https://discord.gg/PQ7VX6TY4t)
[![Unit Tests](https://github.com/Torantulino/Auto-GPT/actions/workflows/ci.yml/badge.svg)](https://github.com/Torantulino/Auto-GPT/actions/workflows/unit_tests.yml)

Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.

@@ -32,21 +32,28 @@ Your support is greatly appreciated

- [Auto-GPT: An Autonomous GPT-4 Experiment](#auto-gpt-an-autonomous-gpt-4-experiment)
- [Demo (30/03/2023):](#demo-30032023)
- [💖 Help Fund Auto-GPT's Development](#-help-fund-auto-gpts-development)
- [Table of Contents](#table-of-contents)
- [🚀 Features](#-features)
- [📋 Requirements](#-requirements)
- [💾 Installation](#-installation)
- [🔧 Usage](#-usage)
- [Logs](#logs)
- [🗣️ Speech Mode](#️-speech-mode)
- [🔍 Google API Keys Configuration](#-google-api-keys-configuration)
- [Setting up environment variables](#setting-up-environment-variables)
- [Redis Setup](#redis-setup)
- [🌲 Pinecone API Key Setup](#-pinecone-api-key-setup)
- [Setting up environment variables](#setting-up-environment-variables-1)
- [Setting Your Cache Type](#setting-your-cache-type)
- [View Memory Usage](#view-memory-usage)
- [💀 Continuous Mode ⚠️](#-continuous-mode-️)
- [GPT3.5 ONLY Mode](#gpt35-only-mode)
- [🖼 Image Generation](#image-generation)
- [🖼 Image Generation](#-image-generation)
- [⚠️ Limitations](#️-limitations)
- [🛡 Disclaimer](#-disclaimer)
- [🐦 Connect with Us on Twitter](#-connect-with-us-on-twitter)
- [Run tests](#run-tests)
- [Run linter](#run-linter)

## 🚀 Features

@@ -70,36 +77,41 @@ Optional:

To install Auto-GPT, follow these steps:

0. Make sure you have all the **requirements** above, if not, install/get them.
1. Make sure you have all the **requirements** above, if not, install/get them.

_The following commands should be executed in a CMD, Bash or Powershell window. To do this, go to a folder on your computer, click in the folder path at the top and type CMD, then press enter._

1. Clone the repository:
2. Clone the repository:
For this step you need Git installed, but you can just download the zip file instead by clicking the button at the top of this page ☝️

```
git clone https://github.com/Torantulino/Auto-GPT.git
```

2. Navigate to the project directory:
3. Navigate to the project directory:
_(Type this into your CMD window, you're aiming to navigate the CMD window to the repository you just downloaded)_

```
cd 'Auto-GPT'
```

3. Install the required dependencies:
4. Install the required dependencies:
_(Again, type this into your CMD window)_

```
pip install -r requirements.txt
```

4. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVEN_LABS_API_KEY` as well.

- Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and provide the `OPENAI_AZURE_API_BASE`, `OPENAI_AZURE_API_VERSION` and `OPENAI_AZURE_DEPLOYMENT_ID` values as explained here: https://pypi.org/project/openai/ in the `Microsoft Azure Endpoints` section. Additionally you need separate deployments for both embeddings and chat. Add their ID values to `OPENAI_AZURE_CHAT_DEPLOYMENT_ID` and `OPENAI_AZURE_EMBEDDINGS_DEPLOYMENT_ID` respectively
5. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVEN_LABS_API_KEY` as well.
- Obtain your OpenAI API key from: https://platform.openai.com/account/api-keys.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then:
- Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all of the deployment ids for the relevant models in the `azure_model_map` section:
- `fast_llm_model_deployment_id` - your gpt-3.5-turbo or gpt-4 deployment id
- `smart_llm_model_deployment_id` - your gpt-4 deployment id
- `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment id
- Please specify all of these values as double quoted strings
- details can be found here: https://pypi.org/project/openai/ in the `Microsoft Azure Endpoints` section and here: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line for the embedding model.

## 🔧 Usage

@@ -115,7 +127,7 @@ python scripts/main.py

### Logs

You will find activity and error logs in the folder `./logs`
You will find activity and error logs in the folder `./output/logs`

To output debug logs:

@@ -227,7 +239,7 @@ MEMORY_INDEX=whatever

Pinecone enables the storage of vast amounts of vector-based memory, allowing for only relevant memories to be loaded for the agent at any given time.

1. Go to app.pinecone.io and make an account if you don't already have one.
1. Go to [pinecone](https://app.pinecone.io/) and make an account if you don't already have one.
2. Choose the `Starter` plan to avoid being charged.
3. Find your API key and region under the default project in the left sidebar.

@@ -253,7 +265,6 @@ export PINECONE_ENV="Your pinecone region" # something like: us-east4-gcp
```


## Setting Your Cache Type

By default Auto-GPT is going to use LocalCache instead of redis or Pinecone.
@@ -357,368 +368,13 @@ coverage run -m unittest discover tests

## Run linter

This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting. To run the linter, run the following command:
This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting. We currently use the following rules: `E303,W293,W291,W292,E305,E231,E302`. See the [flake8 rules](https://www.flake8rules.com/) for more information.

To run the linter, run the following command:

```
flake8 scripts/ tests/
# Or, if you want to run flake8 with the same configuration as the CI:
flake8 scripts/ tests/ --select E303,W293,W291,W292,E305
flake8 scripts/ tests/ --select E303,W293,W291,W292,E305,E231,E302
```
7 changes: 0 additions & 7 deletions ai_settings.yaml

This file was deleted.

7 changes: 7 additions & 0 deletions azure.yaml.template
@@ -0,0 +1,7 @@
azure_api_type: azure_ad
azure_api_base: your-base-url-for-azure
azure_api_version: api-version-for-azure
azure_model_map:
fast_llm_model_deployment_id: gpt35-deployment-id-for-azure
smart_llm_model_deployment_id: gpt4-deployment-id-for-azure
embedding_model_deployment_id: embedding-deployment-id-for-azure
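The template above only lists placeholder deployment ids. As a minimal sketch (not the project's actual loader), the renamed `azure.yaml` could be parsed and validated like this; the key names follow `azure.yaml.template`, while the example values and the `load_azure_config` helper are illustrative assumptions:

```python
# Hypothetical loader for azure.yaml; key names match azure.yaml.template,
# values below are made-up examples. Requires PyYAML.
import yaml

AZURE_CONFIG = """
azure_api_type: azure_ad
azure_api_base: "https://example-resource.openai.azure.com"
azure_api_version: "2023-03-15-preview"
azure_model_map:
  fast_llm_model_deployment_id: "gpt35-deployment"
  smart_llm_model_deployment_id: "gpt4-deployment"
  embedding_model_deployment_id: "embedding-deployment"
"""

def load_azure_config(text):
    """Parse the YAML and check that every required deployment id is set."""
    config = yaml.safe_load(text)
    model_map = config.get("azure_model_map", {})
    required = (
        "fast_llm_model_deployment_id",
        "smart_llm_model_deployment_id",
        "embedding_model_deployment_id",
    )
    missing = [key for key in required if not model_map.get(key)]
    if missing:
        raise ValueError(f"azure.yaml is missing deployment ids: {missing}")
    return config

config = load_azure_config(AZURE_CONFIG)
print(config["azure_model_map"]["smart_llm_model_deployment_id"])  # gpt4-deployment
```

Validating all three ids up front fails fast with a clear message instead of a mid-run API error.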
3 changes: 2 additions & 1 deletion requirements.txt
@@ -17,4 +17,5 @@ chromadb
orjson
Pillow
coverage
flake8
flake8
numpy
1 change: 1 addition & 0 deletions scripts/agent_manager.py
@@ -6,6 +6,7 @@
# Create new GPT agent
# TODO: Centralise use of create_chat_completion() to globally enforce token limit


def create_agent(task, prompt, model):
"""Create a new agent and return its key"""
global next_key
1 change: 1 addition & 0 deletions scripts/ai_config.py
@@ -2,6 +2,7 @@
import data
import os


class AIConfig:
"""
A class object that contains the configuration information for the AI
1 change: 1 addition & 0 deletions scripts/ai_functions.py
@@ -45,6 +45,7 @@ def improve_code(suggestions: List[str], code: str) -> str:
result_string = call_ai_function(function_string, args, description_string)
return result_string


def write_tests(code: str, focus: List[str]) -> str:
"""
A function that takes in code and focus topics and returns a response from create chat completion api call.
66 changes: 34 additions & 32 deletions scripts/browse.py
@@ -6,6 +6,7 @@

cfg = Config()


# Function to check if the URL is valid
def is_valid_url(url):
try:
@@ -14,49 +15,51 @@ def is_valid_url(url):
except ValueError:
return False


# Function to sanitize the URL
def sanitize_url(url):
return urljoin(url, urlparse(url).path)

# Function to make a request with a specified timeout and handle exceptions
def make_request(url, timeout=10):
try:
response = requests.get(url, headers=cfg.user_agent_header, timeout=timeout)
response.raise_for_status()
return response
except requests.exceptions.RequestException as e:
return "Error: " + str(e)

# Define and check for local file address prefixes
def check_local_file_access(url):
local_prefixes = ['file:///', 'file://localhost', 'http://localhost', 'https://localhost']
return any(url.startswith(prefix) for prefix in local_prefixes)
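For reference, a standalone sketch of what `sanitize_url` above does: re-resolving a URL against its own path via `urljoin(url, urlparse(url).path)` drops the query string and fragment while keeping scheme, host, and path.

```python
# Standalone copy of the sanitize_url helper from scripts/browse.py,
# shown with an example input for illustration.
from urllib.parse import urljoin, urlparse

def sanitize_url(url):
    # Resolving the URL against its own path strips query and fragment.
    return urljoin(url, urlparse(url).path)

print(sanitize_url("https://example.com/a/b?q=1#frag"))  # https://example.com/a/b
```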

def scrape_text(url):
"""Scrape text from a webpage"""
# Basic check if the URL is valid
if not url.startswith('http'):
return "Error: Invalid URL"

# Restrict access to local files
if check_local_file_access(url):
return "Error: Access to local files is restricted"
def get_response(url, headers=cfg.user_agent_header, timeout=10):
try:
# Restrict access to local files
if check_local_file_access(url):
raise ValueError('Access to local files is restricted')

# Most basic check if the URL is valid:
if not url.startswith('http://') and not url.startswith('https://'):
raise ValueError('Invalid URL format')

# Validate the input URL
if not is_valid_url(url):
# Sanitize the input URL
sanitized_url = sanitize_url(url)

# Make the request with a timeout and handle exceptions
response = make_request(sanitized_url)
response = requests.get(sanitized_url, headers=headers, timeout=timeout)

if isinstance(response, str):
return response
else:
# Sanitize the input URL
sanitized_url = sanitize_url(url)
# Check if the response contains an HTTP error
if response.status_code >= 400:
return None, "Error: HTTP " + str(response.status_code) + " error"

response = requests.get(sanitized_url, headers=cfg.user_agent_header)
return response, None
except ValueError as ve:
# Handle invalid URL format
return None, "Error: " + str(ve)

except requests.exceptions.RequestException as re:
# Handle exceptions related to the HTTP request (e.g., connection errors, timeouts, etc.)
return None, "Error: " + str(re)


def scrape_text(url):
"""Scrape text from a webpage"""
response, error_message = get_response(url)
if error_message:
return error_message

soup = BeautifulSoup(response.text, "html.parser")

@@ -89,11 +92,9 @@ def format_hyperlinks(hyperlinks):

def scrape_links(url):
"""Scrape links from a webpage"""
response = requests.get(url, headers=cfg.user_agent_header)

# Check if the response contains an HTTP error
if response.status_code >= 400:
return "error"
response, error_message = get_response(url)
if error_message:
return error_message

soup = BeautifulSoup(response.text, "html.parser")

@@ -131,6 +132,7 @@ def create_message(chunk, question):
"content": f"\"\"\"{chunk}\"\"\" Using the above text, please answer the following question: \"{question}\" -- if the question cannot be answered using the text, please summarize the text."
}


def summarize_text(text, question):
"""Summarize text using the LLM model"""
if not text:
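The browse.py refactor centralizes URL checks and request error handling in `get_response`, which returns a `(response, error_message)` tuple so `scrape_text` and `scrape_links` share one code path. A network-free sketch of just the validation half — the `validate_url` helper is hypothetical, not part of the commit, but its checks mirror `get_response`:

```python
# Hypothetical validate_url helper illustrating the commit's
# (value, error_message) tuple pattern, minus the actual HTTP request.
from urllib.parse import urljoin, urlparse

LOCAL_PREFIXES = ['file:///', 'file://localhost', 'http://localhost', 'https://localhost']

def validate_url(url):
    """Return (sanitized_url, None) on success or (None, error_message) on failure."""
    try:
        # Restrict access to local files, as get_response does.
        if any(url.startswith(prefix) for prefix in LOCAL_PREFIXES):
            raise ValueError('Access to local files is restricted')
        # Most basic check that the URL is well formed.
        if not url.startswith('http://') and not url.startswith('https://'):
            raise ValueError('Invalid URL format')
        # Sanitize: drop query string and fragment.
        return urljoin(url, urlparse(url).path), None
    except ValueError as ve:
        return None, "Error: " + str(ve)

print(validate_url("file:///etc/passwd"))       # (None, 'Error: Access to local files is restricted')
print(validate_url("https://example.com/page?x=1"))  # ('https://example.com/page', None)
```

Returning errors as values rather than raising lets the callers pass a single error string straight back to the agent loop.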
2 changes: 2 additions & 0 deletions scripts/call_ai_function.py
@@ -3,6 +3,8 @@
cfg = Config()

from llm_utils import create_chat_completion


# This is a magic function that can do anything with no-code. See
# https://github.com/Torantulino/AI-Functions for more info.
def call_ai_function(function, args, description, model=None):
1 change: 1 addition & 0 deletions scripts/chat.py
@@ -9,6 +9,7 @@

cfg = Config()


def create_chat_message(role, content):
"""
Create a chat message with the given role and content.