Distributed CUDA; DQN replace; AtariPrioritizedReplay

@kengz kengz released this 15 Sep 18:30
· 2380 commits to master since this release
bdb4942

Enable Distributed CUDA

#170
Fix the long-standing issue where PyTorch + distributed training failed under spawn multiprocessing because Lab classes are not pickleable. The class is now wrapped in an mp_runner function passed as mp.Process(target=mp_runner, args), so the class is constructed inside the child process from the arguments passed in from outside, rather than cloned from memory when spawning the process.
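A minimal sketch of the pattern (class and argument names here are hypothetical stand-ins, not SLM Lab's actual API): the process target is a module-level runner function that builds the class inside the spawned child, so only plain picklable args cross the process boundary.

```python
import multiprocessing as mp

class Session:
    """Stand-in for a Lab class whose instances are not pickleable."""
    def __init__(self, spec):
        self.spec = spec

    def run(self, queue):
        queue.put(f'ran session with spec={self.spec}')

def mp_runner(spec, queue):
    # Constructed inside the child process, so the instance itself
    # never needs to be pickled; only spec and queue are passed in.
    session = Session(spec)
    session.run(queue)

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    queue = ctx.Queue()
    # The target is a plain function, not a class instance
    p = ctx.Process(target=mp_runner, args=('dqn_spec', queue))
    p.start()
    print(queue.get())
    p.join()
```

The key design point is that `mp.Process` pickles its `target` and `args` when using the spawn start method; a top-level function plus plain-data args pickle cleanly, while a complex class instance may not.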

DQN replace method fix

#169
The DQN target network replacement copied weights in the wrong direction. This release fixes it so the online network's weights are copied into the target network.
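A simplified illustration of the corrected direction, using plain dicts in place of network state (this is a sketch of the concept, not SLM Lab's code):

```python
def replace_target(online, target):
    """Hard target-network replacement, corrected direction: target <- online.

    The bug was copying in the opposite direction, overwriting the
    learned online weights with the stale target weights.
    """
    target.clear()
    target.update(online)

online = {'w': [0.5, -0.2]}
target = {'w': [0.0, 0.0]}
replace_target(online, target)
print(target['w'])  # [0.5, -0.2]
```

In standard DQN this replacement happens every fixed number of training steps, keeping the target network a lagged copy of the online network for stable bootstrapped targets.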

AtariPrioritizedReplay

#170 #171
Add a quick AtariPrioritizedReplay via some multiple-inheritance black magic combining PrioritizedReplay and AtariReplay.
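The trick relies on Python's method resolution order: with both parents sharing a common base, cooperative `super()` calls route each method through both mixins. A toy sketch with hypothetical method names (the real memory classes in SLM Lab differ):

```python
class Replay:
    """Common base: stores preprocessed experiences."""
    def __init__(self):
        self.storage = []

    def add(self, exp):
        self.storage.append(self.preprocess(exp))

    def preprocess(self, exp):
        return exp

class PrioritizedReplay(Replay):
    """Adds a priority entry per stored experience."""
    def __init__(self):
        super().__init__()
        self.priorities = []

    def add(self, exp):
        super().add(exp)
        self.priorities.append(1.0)  # new samples get an initial priority

class AtariReplay(Replay):
    """Overrides preprocessing, standing in for Atari frame handling."""
    def preprocess(self, exp):
        return ('preprocessed', exp)

class AtariPrioritizedReplay(PrioritizedReplay, AtariReplay):
    """MRO: PrioritizedReplay -> AtariReplay -> Replay, so add() is
    prioritized while preprocess() resolves to the Atari version."""
    pass

mem = AtariPrioritizedReplay()
mem.add('frame0')
print(mem.storage)     # [('preprocessed', 'frame0')]
print(mem.priorities)  # [1.0]
```

Because `preprocess` is looked up on `self` at call time, `Replay.add` picks up `AtariReplay.preprocess` even though it was called through `PrioritizedReplay.add`, which is what lets the two behaviors compose without writing a merged class by hand.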