Collection of DQN implementations for Atari games from the MinAtar library.
A framework for training and testing DQN agents on MinAtar games, also compatible with any other OpenAI Gym environment.
This repository contains a framework for the Deep Q-Network (DQN) algorithm with the following extensions:
- Double DQN
- Prioritized Experience Replay
- Dueling DQN
- Noisy DQN
- Rainbow DQN
It is also possible to train the agent on any OpenAI Gym environment.
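As an illustration of one of these extensions, here is a minimal NumPy sketch of the Double DQN target computation. This is illustrative only, not this repository's actual code; all names, shapes, and values below are assumptions.

```python
import numpy as np

def double_dqn_targets(rewards, dones, next_q_online, next_q_target, gamma=0.99):
    """Double DQN: the online network selects the next action,
    while the target network evaluates it, decoupling action
    selection from value estimation to reduce overestimation."""
    # Action chosen by the online network for each next state.
    best_actions = np.argmax(next_q_online, axis=1)
    # Value of that action according to the target network.
    next_values = next_q_target[np.arange(len(best_actions)), best_actions]
    # Standard TD target; terminal transitions do not bootstrap.
    return rewards + gamma * (1.0 - dones) * next_values

# Toy batch of 2 transitions with 3 actions (illustrative values).
rewards = np.array([1.0, 0.0])
dones = np.array([0.0, 1.0])
next_q_online = np.array([[0.1, 0.9, 0.3], [0.5, 0.2, 0.4]])
next_q_target = np.array([[0.2, 0.7, 0.6], [0.8, 0.1, 0.3]])
print(double_dqn_targets(rewards, dones, next_q_online, next_q_target))
```

Note that plain DQN would instead take `max(next_q_target, axis=1)` directly, which tends to overestimate values; the extra indirection through the online network's argmax is the whole difference.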
To install the required packages, run the following command:
pip install -r requirements.txt
To make usage more convenient, a CLI script is provided. Documentation is available by adding the --help
flag to a command, e.g.:

python scripts/cli.py --help

A help menu is also available for every subcommand. Training parameters are fully documented in the CLI client; run the following command to see all available options:

python scripts/cli.py train --help

These parameters are passed as command line arguments.
All training parameters can also be read from a config file instead of the command line. Example config files are provided in the configs
directory. They are written in YAML format, e.g.:
max_epochs: 10000
env: MinAtar/Freeway
device: cuda
rollouts_per_validation: 3
validate_every_n_epochs: 1000
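The usual pattern behind such a file is to merge its values over the CLI defaults before launching training. A minimal sketch of that pattern, using the keys from the example above; the loading code is an assumption for illustration, not this repository's implementation (with PyYAML installed, `yaml.safe_load` would produce the dict shown):

```python
from argparse import Namespace

# Defaults the CLI would otherwise use (illustrative values).
defaults = {
    "max_epochs": 1000,
    "env": "CartPole-v1",
    "device": "cpu",
    "rollouts_per_validation": 1,
    "validate_every_n_epochs": 100,
}

# With PyYAML this would be: config = yaml.safe_load(open(path))
config = {
    "max_epochs": 10000,
    "env": "MinAtar/Freeway",
    "device": "cuda",
    "rollouts_per_validation": 3,
    "validate_every_n_epochs": 1000,
}

# Config-file values override the defaults, key by key.
args = Namespace(**{**defaults, **config})
print(args.env)  # the config-file value wins over the default
```

A key listed in the file simply replaces the corresponding default, so a config file only needs to state the parameters that differ from the defaults.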
To train an agent from a config file, run the following command:
python scripts/cli.py train -f <path-to-config-file>
To run a trained agent in live mode, see the options of the live subcommand:

python scripts/cli.py live --help
or, for MinAtar environments:

python scripts/cli.py minatar-live --help

For example, to watch a trained agent play, pass a checkpoint path:
python scripts/cli.py minatar-live --checkpoint_path <path-to-checkpoint>
python scripts/cli.py minatar-live --checkpoint_path data/Breakout/mini_rainbow/03/checkpoints/epoch=9999-step=20000.ckpt
python scripts/cli.py minatar-live --checkpoint_path data/Freeway/mini_rainbow/03/checkpoints/epoch=9999-step=20000.ckpt
python scripts/cli.py minatar-live --checkpoint_path logs/MinAtar_SpaceInvaders/version_0/checkpoints/epoch=49999-step=100000.ckpt