Distributed CUDA; DQN replace; AtariPrioritizedReplay
Enable Distributed CUDA
#170
Fix the long-standing issue with PyTorch + distributed training under spawn multiprocessing, which failed because Lab classes are not pickleable. The class is now wrapped in an `mp_runner` function and passed as `mp.Process(target=mp_runner, args=args)`, so the classes are no longer cloned from the parent's memory when spawning a process; they are passed in from outside instead.
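A minimal sketch of the pattern, assuming a hypothetical `Session` Lab class and spec dict (the Lab's actual runner and class names differ): the module-level `mp_runner` is picklable, and the non-picklable object is constructed inside the child process rather than pickled into it.

```python
import torch.multiprocessing as mp


def mp_runner(session_cls, spec):
    '''Module-level runner: picklable itself, it constructs the
    non-picklable Lab class inside the child process instead of
    cloning it from the parent's memory.'''
    session = session_cls(spec)
    return session.run()


class Session:
    '''Hypothetical stand-in for a non-picklable Lab class
    (e.g. one holding CUDA tensors or open file handles).'''
    def __init__(self, spec):
        self.spec = spec

    def run(self):
        print(f'running session with spec: {self.spec}')


if __name__ == '__main__':
    mp.set_start_method('spawn')  # start method required for CUDA
    # Pass the class and its constructor args, not a live instance
    p = mp.Process(target=mp_runner, args=(Session, {'gpu': True}))
    p.start()
    p.join()
```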
DQN replace method fix
#169
The DQN target network replacement was performed in the wrong direction. This is now fixed.
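For reference, a minimal sketch of the correct direction under standard DQN semantics: the target network is periodically overwritten with the online network's weights, never the reverse. The `net`/`target_net` attribute names here are illustrative, not the Lab's actual ones.

```python
import torch.nn as nn


class DQN:
    def __init__(self):
        self.net = nn.Linear(4, 2)         # online network, trained every step
        self.target_net = nn.Linear(4, 2)  # frozen copy, used for TD targets

    def update_target(self):
        # Correct: copy online -> target. Copying in the reverse
        # direction overwrites learned weights with stale ones.
        self.target_net.load_state_dict(self.net.state_dict())
```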
AtariPrioritizedReplay
#170 #171
Add a quick `AtariPrioritizedReplay` by combining `PrioritizedReplay` and `AtariReplay` through a bit of multiple-inheritance black magic.
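A minimal sketch of the multiple-inheritance approach, with heavily simplified stand-ins for `PrioritizedReplay` and `AtariReplay`; only the class composition mirrors the change, the replay internals are placeholders.

```python
class Replay:
    def __init__(self):
        self.storage = []

    def add(self, experience):
        self.storage.append(experience)


class PrioritizedReplay(Replay):
    def __init__(self):
        super().__init__()
        self.priorities = []

    def add(self, experience, priority=1.0):
        super().add(experience)
        self.priorities.append(priority)


class AtariReplay(Replay):
    def add(self, experience):
        # placeholder for Atari-specific frame preprocessing
        super().add(experience)


class AtariPrioritizedReplay(PrioritizedReplay, AtariReplay):
    '''MRO: PrioritizedReplay -> AtariReplay -> Replay, so one
    add() call flows through both parents' behavior.'''
    pass


memory = AtariPrioritizedReplay()
memory.add('frame_stack', priority=2.0)
```

Because Python's method resolution order chains the cooperative `super().add()` calls, the combined class inherits both the priority bookkeeping and the Atari-specific handling without duplicating either.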