Hey Phil! Thanks for the course. I'm really enjoying it so far.
I've implemented the first real Deep Q Network, and it is not learning. When I remove the convolutional layers, use only the fully connected layers, and test on CartPole-v1, it is able to learn; however, when I test on Pong or Breakout with the convolutional layers, it does not work. I've gone through all of my code many times and can't find what I messed up. I've checked the wrapper, the network, the agent, even the main loop. Could it possibly be my imports?
I did some more experimenting and found that my observations are returning values of 0. It is not an issue for Pong, but it is for all other Atari games.
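I can't say for certain this is your bug, but a common cause of all-zero observations in Atari preprocessing is scaling the frame to [0, 1] and *then* casting to an integer dtype, which truncates almost every pixel to 0. A minimal sketch of the pitfall, using NumPy only (the `preprocess_*` function names are hypothetical, not from your code):

```python
import numpy as np

def preprocess_buggy(frame):
    # Scaling first, then casting to uint8: every value below 1.0
    # truncates to 0, so the observation looks completely black.
    return (frame / 255.0).astype(np.uint8)

def preprocess_fixed(frame):
    # Cast to float first (or just divide); the scaled values survive.
    return frame.astype(np.float32) / 255.0

# A uniform mid-gray 84x84 frame, the usual DQN input size.
frame = np.full((84, 84), 128, dtype=np.uint8)

buggy = preprocess_buggy(frame)
fixed = preprocess_fixed(frame)

print("buggy max:", buggy.max())          # 0 — all information destroyed
print("fixed mean:", float(fixed.mean())) # ~0.502 — pixels preserved
```

It might be worth printing `obs.min()`, `obs.max()`, and `obs.dtype` right after your wrapper's `step`/`reset` to see whether the zeros appear before or after your preprocessing.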
I'm not sure of the best way to upload code, so I've attached it as text files. Let me know if there is a better way.
ExperienceReplay.txt
GymWrapper.txt
TrainAgent.txt
DeepQNetwork.txt