This is a PyTorch implementation of Prioritized Level Replay.
Prioritized Level Replay is a simple method for improving the generalization and sample efficiency of deep RL agents on procedurally generated environments. It adaptively updates a sampling distribution over the training levels based on a score estimating the learning potential of replaying each level.
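Concretely, after a level is replayed it is assigned a score of its learning potential (for example, the average magnitude of the L1 value loss over the episode), and subsequent training levels are drawn from a distribution that mixes a rank-based prioritization of these scores with a staleness term favoring levels not sampled recently. The snippet below is a minimal sketch of this sampling logic, assuming the rank prioritization and staleness mixture described in the paper; the names `LevelSampler`, `beta`, and `rho` are illustrative and do not mirror this repo's actual API.

```python
import numpy as np

class LevelSampler:
    """Illustrative sketch of prioritized level sampling (not this repo's API)."""

    def __init__(self, num_levels, beta=0.1, rho=0.1):
        self.scores = np.zeros(num_levels)      # learning-potential score per level
        self.last_visit = np.zeros(num_levels)  # step count at each level's last replay
        self.step = 0
        self.beta = beta  # temperature of the rank prioritization
        self.rho = rho    # weight on the staleness distribution

    def update(self, level, value_errors):
        # Score a replayed level, e.g. by its average absolute value loss (L1).
        self.step += 1
        self.scores[level] = np.mean(np.abs(value_errors))
        self.last_visit[level] = self.step

    def sample(self):
        # Rank prioritization: weight level i by (1 / rank of score_i) ** (1 / beta).
        ranks = np.empty(len(self.scores))
        ranks[np.argsort(-self.scores)] = np.arange(1, len(self.scores) + 1)
        p_score = (1.0 / ranks) ** (1.0 / self.beta)
        p_score /= p_score.sum()
        # Staleness term: favor levels that have not been replayed recently.
        staleness = self.step - self.last_visit
        if staleness.sum() > 0:
            p_stale = staleness / staleness.sum()
            p = (1 - self.rho) * p_score + self.rho * p_stale
        else:
            p = p_score
        return int(np.random.choice(len(self.scores), p=p))
```

Rank-based (rather than proportional) prioritization makes the sampler insensitive to the absolute scale of the scores, while the staleness mixture keeps score estimates for rarely sampled levels from going permanently stale.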
```bash
conda create -n level-replay python=3.8
conda activate level-replay

git clone https://github.com/facebookresearch/level-replay.git
cd level-replay
pip install -r requirements.txt

# Clone level-replay-compatible versions of the Procgen and MiniGrid environments.
git clone https://github.com/minqi/procgen.git
cd procgen
python setup.py install
cd ..

git clone https://github.com/minqi/gym-minigrid.git
cd gym-minigrid
pip install -e .
cd ..
```
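If the installation succeeded, both environment packages should now be importable. Below is a quick, optional sanity check; the MiniGrid environment ID is a standard one chosen purely for illustration.

```python
# Optional sanity check that both environment packages installed correctly.
import gym
import gym_minigrid  # importing this registers the MiniGrid environments with gym
from procgen import ProcgenEnv  # noqa: F401

env = gym.make('MiniGrid-Empty-8x8-v0')  # any registered MiniGrid env works here
obs = env.reset()
print('MiniGrid observation keys:', list(obs.keys()))
```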
To train the model, use the provided shell script as follows:

```bash
./run_script <type_of_loss> <serial_number_of_maze_type>
```

The supported loss types are:

- l1
- uniform

The maze types, by serial number, are:

1. Multiroom-N4-Random
2. ObstructedMazeGamut-Easy
3. ObstructedMazeGamut-Medium

For example,

```bash
./run_script uniform 2
```

will train the model with the PPO algorithm and uniform sampling on the "ObstructedMazeGamut-Easy" maze.
Prioritized Level Replay also yields drastic improvements on hard-exploration environments in MiniGrid, where we directly observe that its selective sampling induces an implicit curriculum from easier to harder levels.
The PPO implementation is largely based on Ilya Kostrikov's excellent implementation (https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail) and Roberta Raileanu's specific integration with Procgen (https://github.com/rraileanu/auto-drac).
If you make use of this code in your own work, please cite our paper:
```bibtex
@misc{jiang2020prioritized,
    title={{Prioritized Level Replay}},
    author={Minqi Jiang and Edward Grefenstette and Tim Rockt\"{a}schel},
    year={2020},
    eprint={2010.03934},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```
The code in this repository is released under the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0).