
AIOS: LLM Agent Operating System


The goal of AIOS is to build a Large Language Model (LLM) agent operating system, which embeds large language models into the operating system as the brain of the OS. AIOS is designed to address problems (e.g., scheduling, context switching, memory management) that arise during the development and deployment of LLM-based agents, fostering a better ecosystem for agent developers and users.

🏠 Architecture of AIOS

AIOS provides the LLM kernel as an abstraction on top of the OS kernel. The kernel is intended to facilitate the installation and usage of agents. At present, AIOS is a userspace wrapper around the existing kernel; this is subject to change, as outlined in the Q4 Goals and Objectives.

πŸ“° News

  • [2024-06-20] 🔥 Function calling for open-source LLMs (native Hugging Face, vLLM, Ollama) is supported.
  • [2024-05-20] 🚀 More agents with ChatGPT-based tool calling have been added (i.e., MathAgent, RecAgent, TravelAgent, AcademicAgent, and CreationAgent); their profiles and workflows can be found in OpenAGI.
  • [2024-05-13] 🛠️ Local models (diffusion models) from Hugging Face are integrated as tools.
  • [2024-05-01] 🛠️ Agent creation in AIOS has been refactored and can be found in our OpenAGI package.
  • [2024-04-05] 🛠️ AIOS now supports external tool calls (Google Search, WolframAlpha, Rapid API, etc.).
  • [2024-04-02] 🤝 The AIOS Discord Community is up. Welcome to join the community for discussions, brainstorming, development, or just random chats! For how to contribute to AIOS, please see CONTRIBUTE.
  • [2024-03-25] ✈️ Our paper AIOS: LLM Agent Operating System is released and the AIOS repository is officially launched!
  • [2023-12-06] 📋 After several months of work, our perspective paper LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem is officially released.

✈️ Getting Started

Prerequisites

At minimum, we recommend an Nvidia GPU with 4 GB of memory or an ARM-based MacBook. AIOS should also run on weaker hardware, but task completion time will increase significantly. If you notice large delays in execution, try an API-based model such as GPT (paid) or Gemini (free).

Installation

Git clone AIOS

git clone https://github.com/agiresearch/AIOS.git

Install the required packages using pip

conda create -n AIOS python=3.11
conda activate AIOS
cd AIOS
pip install -r requirements.txt

If you don't have an Nvidia GPU, you can also use a venv:

cd AIOS
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Usage

If you use open-source models from Hugging Face, you need to set up your Hugging Face token and cache directory:

export HUGGING_FACE_HUB_TOKEN=<YOUR READ TOKEN>
export HF_HOME=<YOUR CACHE DIRECTORY>
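
To verify that the token is picked up, you can run an optional sanity check. This assumes the huggingface_hub package is available (transformers installs it as a dependency):

# Optional sanity check: verify the token exported above is valid.
from huggingface_hub import whoami

print(whoami()["name"])  # prints your Hugging Face username on success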

If you use LLM APIs, you need to set up the corresponding API key, such as an OpenAI API key or a Gemini API key:

export OPENAI_API_KEY=<YOUR OPENAI API KEY>
export GEMINI_API_KEY=<YOUR GEMINI API KEY>

If you use external API tools in your agents, please refer to How to set up external tools.

You can also create a .env file from the .env.example file, and then use dotenv to load the environment variables from .env into your application's environment at runtime:

cp .env.example .env
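
A minimal sketch of loading these variables in Python with the python-dotenv package (assuming it is installed, e.g., via pip install python-dotenv):

# Load variables from .env into the process environment at runtime.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory by default
print(os.getenv("OPENAI_API_KEY"))  # variables are now visible via os.environ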

Documentation

There is a README.md in each directory that briefly explains what the directory contains.

Demonstration Mode

In demonstration mode, we provide a toy example: three hardcoded agents whose parameters you can change, letting you see the output of each step while multiple agents run. For open-source LLMs, you need to set the name of the LLM you would like to use, the maximum GPU memory, the evaluation device, and the maximum number of new tokens to generate.

# For open-source LLMs
python main.py --llm_name <llm_name> --max_gpu_memory <max_gpu_memory> --eval_device <eval_device> --max_new_tokens <max_new_tokens>
## Use meta-llama/Meta-Llama-3-8B-Instruct for example
python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --max_gpu_memory '{"0": "48GB"}' --eval_device "cuda:0" --max_new_tokens 256
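
Note that --max_gpu_memory takes a JSON string mapping GPU index to a memory cap. As an illustrative sketch (not AIOS's actual argument-parsing code), the format can be interpreted like this:

# Illustrative only: parse a JSON-string flag into a dict of GPU index -> memory cap.
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument("--max_gpu_memory", type=json.loads, default=None)
args = parser.parse_args(["--max_gpu_memory", '{"0": "48GB"}'])
print(args.max_gpu_memory)  # {'0': '48GB'}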

For inference acceleration, you can also use vLLM as the backend.

## Use meta-llama/Meta-Llama-3-8B-Instruct for example
CUDA_VISIBLE_DEVICES=0,1 python main.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --use_backend vllm --max_gpu_memory '{"0": "24GB", "1": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256

For closed-source LLMs, you only need to set the name of the LLM.

# For closed-source LLMs
python main.py --llm_name <llm_name>
## Use gpt-4 for example
python main.py --llm_name gpt-4

You can also use a bash script to start the agent execution, like this:

bash scripts/run/gpt4.sh

You can use an open-source model on an Apple MacBook with Ollama. First, you will need to pull the model. Let's use llama3 as an example:

ollama pull llama3:8b

Then, run the Python script with the corresponding model name to start using AIOS with Llama 3 via Ollama on your MacBook:

python main.py --llm_name ollama/llama3:8b

Interactive Mode

In interactive mode, the outputs of running agents are stored in files, and you are provided with multiple commands to run agents and see their resource usage (e.g., run <xxxAgent>: <YOUR TASK>, print agent). Unlike demonstration mode, you need to set all the default loggers to file loggers.

# For open-source LLMs
python simulator.py --llm_name <llm_name> --max_gpu_memory <max_gpu_memory> --eval_device <eval_device> --max_new_tokens <max_new_tokens> --scheduler_log_mode file --agent_log_mode file --llm_kernel_log_mode file
## Use meta-llama/Meta-Llama-3-8B-Instruct for example
python simulator.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256 --scheduler_log_mode file --agent_log_mode file --llm_kernel_log_mode file
# For closed-source LLMs
python simulator.py --llm_name <llm_name> --scheduler_log_mode file --agent_log_mode file --llm_kernel_log_mode file
## Use gpt-4 for example
python simulator.py --llm_name gpt-4 --scheduler_log_mode file --agent_log_mode file --llm_kernel_log_mode file

You can also use a bash script to start the interactive simulation session, like this:

bash scripts/interactive/gpt4.sh

Example run of simulator.py:

run MathAgent: Calculate the surface area and volume of a cylinder with a radius of 5 units and height of 10 units using the formulas "2 * pi * r * h + 2 * pi * r^2" and "pi * r^2 * h".
print agent

A run command does not write to standard output; instead, it creates a log file.
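
As a reference for the MathAgent example above, the expected numbers can be checked directly in Python:

# Worked check of the cylinder task: r = 5, h = 10.
import math

r, h = 5, 10
surface_area = 2 * math.pi * r * h + 2 * math.pi * r ** 2  # 150*pi, approx. 471.24
volume = math.pi * r ** 2 * h                              # 250*pi, approx. 785.40
print(surface_area, volume)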

Evaluation Mode

In evaluation mode, prompts for each agent are drawn from agent_configs/, and you can evaluate agent performance by specifying which agents should be run.

Additionally, you can evaluate the acceleration performance with or without AIOS by comparing the waiting time and turnaround time.

python eval.py --llm_name gpt-3.5-turbo --agents MathAgent:1,TravelAgent:1,RecAgent:1,AcademicAgent:1,CreationAgent:1
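
The --agents flag takes a comma-separated list of <AgentName>:<count> pairs. As an illustrative sketch (not AIOS's actual parsing code), the format can be interpreted like this:

# Illustrative only: "MathAgent:1,TravelAgent:2" -> {"MathAgent": 1, "TravelAgent": 2}
def parse_agents(spec: str) -> dict:
    pairs = (item.split(":") for item in spec.split(","))
    return {name: int(count) for name, count in pairs}

print(parse_agents("MathAgent:1,TravelAgent:1,RecAgent:1"))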

You can use a bash script to start the agent execution, like this:

bash scripts/eval/gpt4.sh

If you want to obtain metrics for either concurrent execution (with AIOS) or sequential execution (without AIOS), you can specify the mode parameter when running eval.py.

python eval.py --llm_name gpt-4 --agents MathAgent:1,TravelAgent:1,RecAgent:1,AcademicAgent:1,CreationAgent:1 --mode concurrent-only
python eval.py --llm_name gpt-4 --agents MathAgent:1,TravelAgent:1,RecAgent:1,AcademicAgent:1,CreationAgent:1 --mode sequential-only
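
Here, waiting time and turnaround time are assumed to follow the standard OS-scheduling definitions (AIOS may log them differently): waiting time is the gap between a task's submission and the start of its execution, and turnaround time is the gap between submission and completion. A minimal sketch:

# Assumed metric definitions, following standard scheduling terminology.
def waiting_time(submit: float, start: float) -> float:
    # time a task spent queued before it began executing
    return start - submit

def turnaround_time(submit: float, finish: float) -> float:
    # total time from submission to completion
    return finish - submit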

You could also run the models locally:

python eval.py --llm_name meta-llama/Meta-Llama-3-8B-Instruct --max_gpu_memory '{"0": "24GB"}' --eval_device "cuda:0" --max_new_tokens 256 --agents MathAgent:1,TravelAgent:1 --mode concurrent-only

Supported LLM Endpoints

  • OpenAI API
  • Gemini API
  • Ollama
  • vLLM
  • Native Hugging Face models (local)

πŸ–‹οΈ References

@article{mei2024aios,
  title={AIOS: LLM Agent Operating System},
  author={Mei, Kai and Li, Zelong and Xu, Shuyuan and Ye, Ruosong and Ge, Yingqiang and Zhang, Yongfeng},
  journal={arXiv:2403.16971},
  year={2024}
}
@article{ge2023llm,
  title={LLM as OS, Agents as Apps: Envisioning AIOS, Agents and the AIOS-Agent Ecosystem},
  author={Ge, Yingqiang and Ren, Yujie and Hua, Wenyue and Xu, Shuyuan and Tan, Juntao and Zhang, Yongfeng},
  journal={arXiv:2312.03815},
  year={2023}
}

πŸš€ Contributions

AIOS is dedicated to facilitating the development and deployment of LLM agents in a systematic way. Collaborators and contributions are always welcome to foster a cohesive, effective, and efficient AIOS-Agent ecosystem!

For detailed information on how to contribute, see CONTRIBUTE. If you would like to contribute to the codebase, issues or pull requests are always welcome!

🌍 AIOS Contributors

[Image: AIOS contributors]

🀝 Discord Channel

If you would like to join the community, ask questions, chat with fellows, learn about or propose new features, and participate in future developments, join our Discord Community!

πŸ“ͺ Contact

For issues related to AIOS development, we encourage submitting issues, pull requests, or initiating discussions in the AIOS Discord Channel. For other issues please feel free to contact Kai Mei ([email protected]) and Yongfeng Zhang ([email protected]).

