NVIDIA has ported the Atari emulator to CUDA (CuLE): https://github.com/NVlabs/cule
The biggest benefit is that the data stay in GPU memory, which avoids memory copies between host and device.
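To make the benefit concrete, here is a minimal sketch (the array sizes and the linear "policy" are made up for illustration, not CuLE's actual API) of acting on a batch of GPU-resident observations without copying the batch back to the host:

```julia
using CUDA

# A batch of observations that already lives in GPU memory, as CuLE would
# provide it; sizes and the linear policy are placeholders.
n_envs, obs_dim, n_actions = 256, 84 * 84, 18
states = CUDA.rand(Float32, obs_dim, n_envs)     # GPU-resident observations
W      = CUDA.rand(Float32, n_actions, obs_dim)  # policy weights, also on the GPU

q_values = W * states                              # GPU matmul; the state batch never leaves the device
actions  = map(argmax, eachcol(Array(q_values)))   # only the small Q-value matrix is copied back
```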
My ideas are:

1. Add CuLE as a third-party environment.
2. Update the algorithms in ReinforcementLearningZoo.jl to use `CuArray` for actions, states, trajectory buffers, etc.
3. Implement an environment wrapper: a parallel GPU kernel that sets up the launch configuration and executes the built-in environment (a sketch follows below).
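A rough sketch of what the wrapper could look like, assuming a hypothetical `GPUEnvPool` type, a `step!` function, and a placeholder transition (none of these names come from CuLE or ReinforcementLearningZoo.jl); each GPU thread advances one environment:

```julia
using CUDA

# Hypothetical wrapper: one column of `states` per environment.
struct GPUEnvPool
    states  :: CuMatrix{Float32}
    rewards :: CuVector{Float32}
    dones   :: CuVector{Bool}
end

# One thread advances one environment; a real wrapper would call the
# environment's own transition logic instead of this placeholder.
function step_kernel!(states, rewards, dones, actions)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(rewards)
        @inbounds begin
            states[1, i] += Float32(actions[i])      # placeholder transition
            rewards[i]    = -abs(states[1, i])       # placeholder reward
            dones[i]      = abs(states[1, i]) > 10f0
        end
    end
    return nothing
end

# The wrapper sets up the launch configuration and steps all envs in parallel.
function step!(pool::GPUEnvPool, actions::CuVector{Int32})
    n       = length(pool.rewards)
    threads = 256
    blocks  = cld(n, threads)
    @cuda threads=threads blocks=blocks step_kernel!(
        pool.states, pool.rewards, pool.dones, actions)
    return pool
end
```

With this shape, a single `step!(pool, actions)` call advances every environment in one kernel launch, and the resulting states and rewards can be written straight into `CuArray` trajectory buffers.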
Future work
Numba has many limitations in kernel code: https://numba.pydata.org/numba-doc/dev/cuda/overview.html
Implementing CUDA kernels in C++ is even harder than using Numba.
Julia, on the other hand, has great support for CUDA programming, so it is the best choice for RL here.
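As a small illustration of that support (the TD-target function below is just an example, not code from ReinforcementLearningZoo.jl), ordinary Julia functions broadcast over `CuArray`s compile into a single fused GPU kernel, with no separate kernel language required:

```julia
using CUDA

# A user-defined function written in plain Julia...
td_target(r, v′, done; γ = 0.99f0) = r + γ * v′ * (1f0 - done)

rewards = CUDA.rand(Float32, 1024)
next_v  = CUDA.rand(Float32, 1024)
dones   = Float32.(CUDA.rand(Float32, 1024) .< 0.05f0)

# ...runs as one fused GPU kernel when broadcast over CuArrays.
targets = td_target.(rewards, next_v, dones)
```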
If this experiment is successful, we can port some third-party environments into Julia code, or users can implement custom environments and train agents with this framework.