Installing | Tutorials | Contributing
RLHive is a framework designed to facilitate research in reinforcement learning. It provides the components necessary to run a full RL experiment, for both single-agent and multi-agent environments. It is designed to be readable and easily extensible, allowing users to quickly run and experiment with their own ideas.
The full documentation and tutorials are available at https://rlhive.readthedocs.io/.
RLHive is available through pip! For the basic RLHive package, simply run:

```
pip install rlhive
```
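To check that the installation succeeded, you can try importing the package. The top-level module name used here, `hive`, is an assumption and may differ; consult the documentation linked above if the import fails.

```
python -c "import hive"
```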
You can also install the dependencies necessary for the environments that RLHive comes with by running `pip install rlhive[<env_names>]`, where `<env_names>` is a comma-separated list made up of the following:
- atari
- gym_minigrid
- pettingzoo
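For example, to install the dependencies for the Atari and PettingZoo environments in one command:

```
pip install "rlhive[atari,pettingzoo]"
```

Quoting the argument keeps the shell from interpreting the square brackets.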
In addition to these environments, MinAtar and Marlgrid are also supported, but they need to be installed separately.
To install MinAtar, run:

```
pip install MinAtar@git+https://github.com/kenjyoung/MinAtar.git@8b39a18a60248ede15ce70142b557f3897c4e1eb
```
To install Marlgrid, run:

```
pip install marlgrid@https://github.com/kandouss/marlgrid/archive/refs/heads/master.zip
```
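To quickly verify that both packages are importable, you can run the check below. The module names `minatar` and `marlgrid` are assumptions based on how these packages are typically imported, not something stated above:

```
python -c "import minatar, marlgrid"
```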
Tutorials are available on the following topics:
- Quickstart
- Creating new agents
- Using DQN/Rainbow Agents
- Using Environments/Creating new Environments
- Configuring your experiments through YAML files and command line (see the sketch after this list)
- Loggers and Scheduling
- Registering Custom RLHive Objects
- Using Replay Buffers
- Single/Multi-Agent Runners
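As a rough sketch of what configuring and launching an experiment looks like, the command below runs a single-agent experiment from a YAML config. The runner module path, the `-c` flag, and the config filename are illustrative assumptions rather than the documented interface; see the Quickstart and configuration tutorials for the exact invocation.

```
python -m hive.runners.single_agent_loop -c my_dqn_config.yml
```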
We'd love for you to contribute your own work to RLHive. Before doing so, please read our contributing guide.