Deep Q Networks

A generic implementation of Deep Q Networks (DQN) in Keras for solving RL control tasks with discrete action spaces.
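
A DQN approximates the action-value function with a neural network that maps an observation to one Q-value per discrete action. The sketch below shows what such a Q-network might look like in Keras; the layer sizes, activations and optimizer settings are illustrative assumptions, not the exact architecture used in this repository.

    # Minimal Q-network sketch in Keras; the layer sizes and hyperparameters
    # here are illustrative assumptions, not the repository's exact model.
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam

    def build_q_network(state_size, action_size, learning_rate=0.0001):
        # Maps an observation vector to one Q-value per discrete action.
        model = Sequential()
        model.add(Dense(64, activation='relu', input_dim=state_size))
        model.add(Dense(64, activation='relu'))
        model.add(Dense(action_size, activation='linear'))
        model.compile(loss='mse', optimizer=Adam(lr=learning_rate))
        return model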

Requirements :

numpy
matplotlib
keras
imageio
gym (OpenAI Gym)
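
These can typically be installed with pip (standalone Keras also needs a backend such as TensorFlow):

    $ pip install numpy matplotlib keras imageio gym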

Help :

$ python main_driver.py -h

Environments :

The following OpenAI Gym environments are solved by the agents in this repo:

  • CartPole-v0

    Target reward (over 100 consecutive iterations) : >= 195
    Reward achieved (over 100 consecutive iterations) : 200.0
    Episodes before solving : 265

    The following commands were used to train and test the model (including the necessary hyperparameters); a sketch of the training loop these hyperparameters control follows at the end of this section:

    • Command to train : python3 main_driver.py train CartPole-v0 --itr-count 2000 -b 64 -lr 0.0001 --gamma 0.99 --render False --avg-expected-reward -110.0

    • Command to test : python3 main_driver.py test CartPole-v0 --render True --avg-expected-reward 195 --test-total-iterations 100 --model-path "best_save/CartPole-v0_local_model_1534607382.5808585.h5"

    Additional artifacts for this environment: result log files, detailed log files, a reward plot, and an animation of the agent playing the env.
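
For reference, the sketch below outlines the kind of epsilon-greedy training loop with experience replay that hyperparameters such as --itr-count, -b, -lr and --gamma control. It reuses the build_q_network sketch above; the names and structure are assumptions for illustration (the separate target network used in full DQN is omitted for brevity) and do not mirror main_driver.py.

    # Illustrative DQN training loop for CartPole-v0 (classic gym API).
    # Names and structure are assumptions; they do not mirror main_driver.py.
    import random
    from collections import deque

    import numpy as np
    import gym

    env = gym.make('CartPole-v0')
    state_size = env.observation_space.shape[0]
    action_size = env.action_space.n

    q_net = build_q_network(state_size, action_size, learning_rate=0.0001)  # -lr
    memory = deque(maxlen=100000)                 # experience replay buffer
    gamma, batch_size, epsilon = 0.99, 64, 1.0    # --gamma, -b, initial exploration

    for episode in range(2000):                   # --itr-count
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection over the discrete action space.
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(q_net.predict(state[None, :])[0]))
            next_state, reward, done, _ = env.step(action)
            memory.append((state, action, reward, next_state, done))
            state = next_state

            # Regress Q(s, a) toward the bootstrapped one-step target.
            if len(memory) >= batch_size:
                batch = random.sample(memory, batch_size)
                states = np.array([b[0] for b in batch])
                next_states = np.array([b[3] for b in batch])
                targets = q_net.predict(states)
                next_q = q_net.predict(next_states)
                for i, (_, a, r, _, d) in enumerate(batch):
                    targets[i][a] = r if d else r + gamma * np.max(next_q[i])
                q_net.train_on_batch(states, targets)
        epsilon = max(0.01, epsilon * 0.995)      # decay exploration over episodes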

