
alito/deep_q_rl

 
 


Introduction

This package provides a Lasagne/Theano-based implementation of the deep Q-learning algorithm described in:

Playing Atari with Deep Reinforcement Learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller. NIPS Deep Learning Workshop, 2013.

Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.

and

Deep Reinforcement Learning with Double Q-learning. Hado van Hasselt, Arthur Guez, and David Silver.

Here is a video showing a trained network playing breakout (using an earlier version of the code):

http://youtu.be/SZ88F82KLX4

Dependencies

The script dep_script.sh can be used to install all dependencies under Ubuntu.

Running

Use the scripts run_nips.py, run_nature.py or run_double.py to start all the necessary processes:

$ ./run_nips.py --rom breakout

$ ./run_nature.py --rom breakout

or, to use the Gym environment:

$ ./run_nature.py -f gym --rom BreakoutNoFrameskip-v3

Using the NoFrameskip-v3 ROM versions makes the Gym environment match the behaviour seen with direct ALE access.

The run_nips.py script uses parameters consistent with the original NIPS workshop paper. Training with these parameters should take 2-4 days to complete. The run_nature.py script uses parameters consistent with the Nature paper; the final policies should be better, but training will take 6-10 days. The run_double.py script uses double DQN, as described in the van Hasselt et al. paper above, and should produce even better policies than the Nature version.
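For intuition, the difference between the standard DQN target and the double DQN target used by run_double.py can be sketched as follows. This is a minimal NumPy illustration, not the package's actual Theano code, and the variable names are assumptions:

import numpy as np

def dqn_target(q_target_next, rewards, terminals, gamma=0.99):
    # Standard DQN: the target network both selects and evaluates
    # the next action, i.e. max_a' Q_target(s', a').
    best_q = q_target_next.max(axis=1)
    return rewards + gamma * (1.0 - terminals) * best_q

def double_dqn_target(q_online_next, q_target_next, rewards, terminals, gamma=0.99):
    # Double DQN: the online network selects the next action and the
    # target network evaluates it, which reduces value overestimation.
    best_actions = q_online_next.argmax(axis=1)
    best_q = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - terminals) * best_q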

Each script will store output files in a folder whose name is prefixed with the name of the ROM. A pickled version of the network object is stored after every epoch. The file results.csv will contain the testing output. You can plot the progress by executing plot_results.py:

$ python plot_results.py breakout_05-28-17-09_0p00025_0p99/results.csv
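If you prefer to inspect the learning curve yourself, a minimal matplotlib sketch along these lines should work. The column names below are assumptions; check the header row of your results.csv:

import csv
import matplotlib.pyplot as plt

epochs, rewards = [], []
with open('breakout_05-28-17-09_0p00025_0p99/results.csv') as f:
    for row in csv.DictReader(f):
        epochs.append(int(row['epoch']))                  # assumed column name
        rewards.append(float(row['reward_per_episode']))  # assumed column name

plt.plot(epochs, rewards)
plt.xlabel('epoch')
plt.ylabel('average reward per test episode')
plt.show()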

After training completes, you can watch the network play using the ale_run_watch.py script:

$ python ale_run_watch.py breakout_05-28-17-09_0p00025_0p99/network_file_99.pkl
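The pickled network files can also be loaded directly for inspection. A minimal sketch, assuming Python 2 (which this Theano/Lasagne code base targets) and that the file holds the learner object written with cPickle; the attributes of the loaded object depend on the version of the code that wrote it:

import cPickle

with open('breakout_05-28-17-09_0p00025_0p99/network_file_99.pkl', 'rb') as f:
    network = cPickle.load(f)

# Inspect the trained Q-network object.
print network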

Performance Tuning

Theano Configuration

Setting allow_gc=False in THEANO_FLAGS or in the .theanorc file significantly improves performance at the expense of a slight increase in memory usage on the GPU.
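For example, the flag can be set on the command line or in ~/.theanorc. A typical GPU configuration might look like the following; adjust device and floatX to your setup:

$ THEANO_FLAGS='device=gpu,floatX=float32,allow_gc=False' ./run_nature.py --rom breakout

or, in ~/.theanorc:

[global]
device = gpu
floatX = float32
allow_gc = False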

Getting Help

The deep Q-learning web-forum can be used for discussion and advice related to deep Q-learning in general and this package in particular.
