This is a clean and robust PyTorch implementation of NoisyNet DQN. A quick render can be found here.
Other RL algorithms implemented in PyTorch can be found here.
gymnasium==0.29.1
numpy==1.26.1
pytorch==2.1.0
python==3.11.5
python main.py # Train NoisyNet DQN on CartPole-v1 from scratch
If you want to train on different environments:
python main.py --EnvIdex 1
The --EnvIdex can be set to 0 or 1, where
'--EnvIdex 0' for 'CartPole-v1'
'--EnvIdex 1' for 'LunarLander-v2'
Note: if you want to train on LunarLander-v2, you need to install box2d-py first. You can install it via:
pip install gymnasium[box2d]
python main.py --EnvIdex 0 --render True --Loadmodel True --ModelIdex 100 # Play CartPole-v1 with NoisyNet DQN
python main.py --EnvIdex 1 --render True --Loadmodel True --ModelIdex 550 # Play LunarLander-v2 with NoisyNet DQN
You can use TensorBoard to record and visualize the training curves.
- Installation (please make sure PyTorch is installed already):
pip install tensorboard
pip install packaging
- Record (the training curves will be saved in the 'runs' folder):
python main.py --write True
- Visualization:
tensorboard --logdir runs
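Under the hood, recording with `--write True` most likely uses PyTorch's `SummaryWriter`, which writes event files under `runs/` that the `tensorboard --logdir runs` command then reads. A minimal sketch (the run name and metric tag below are assumptions, not the exact ones used in `main.py`):

```python
from torch.utils.tensorboard import SummaryWriter

# Hypothetical run name; main.py may use a different naming scheme
writer = SummaryWriter(log_dir="runs/NoisyDQN_CartPole-v1")

# Log a scalar (e.g. episode return) against the environment step count
for step, ep_return in [(1000, 21.0), (2000, 56.5), (3000, 130.0)]:
    writer.add_scalar("episode_return", ep_return, global_step=step)

writer.close()  # flush event files so TensorBoard can read them
```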
For more details on the hyperparameter settings, please check 'main.py'.
DQN: Mnih V, Kavukcuoglu K, Silver D, et al. Playing Atari with Deep Reinforcement Learning[J]. arXiv preprint arXiv:1312.5602, 2013.
NoisyNet DQN: Fortunato M, Azar M G, Piot B, et al. Noisy networks for exploration[J]. arXiv preprint arXiv:1706.10295, 2017.
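The core idea from the NoisyNet paper above is to replace epsilon-greedy exploration with learnable parametric noise on the network weights. A sketch of a factorised-Gaussian noisy linear layer in that spirit (class and attribute names are illustrative and may differ from those in `main.py`):

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Factorised-Gaussian noisy linear layer (after Fortunato et al., 2017)."""

    def __init__(self, in_features: int, out_features: int, sigma_init: float = 0.5):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # Learnable means and noise scales for weights and biases
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        # Noise buffers (not trained; resampled via reset_noise)
        self.register_buffer("weight_eps", torch.zeros(out_features, in_features))
        self.register_buffer("bias_eps", torch.zeros(out_features))
        # Initialisation scheme from the paper (factorised variant)
        bound = 1.0 / math.sqrt(in_features)
        self.weight_mu.data.uniform_(-bound, bound)
        self.bias_mu.data.uniform_(-bound, bound)
        self.weight_sigma.data.fill_(sigma_init / math.sqrt(in_features))
        self.bias_sigma.data.fill_(sigma_init / math.sqrt(in_features))
        self.reset_noise()

    @staticmethod
    def _f(x: torch.Tensor) -> torch.Tensor:
        # f(x) = sign(x) * sqrt(|x|), used to build factorised noise
        return x.sign() * x.abs().sqrt()

    def reset_noise(self) -> None:
        # One noise vector per input and output, combined by outer product
        eps_in = self._f(torch.randn(self.in_features))
        eps_out = self._f(torch.randn(self.out_features))
        self.weight_eps.copy_(eps_out.outer(eps_in))
        self.bias_eps.copy_(eps_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Perturb weights with learnable noise scales -> state-free exploration
            w = self.weight_mu + self.weight_sigma * self.weight_eps
            b = self.bias_mu + self.bias_sigma * self.bias_eps
        else:
            # Deterministic (mean) weights at evaluation time
            w, b = self.weight_mu, self.bias_mu
        return nn.functional.linear(x, w, b)
```

Replacing the `nn.Linear` layers of a DQN's Q-network with such layers (and calling `reset_noise()` each update) lets exploration be driven by the learned sigma parameters instead of an epsilon schedule.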