My code is very poor at learning the 2048 game using Double DQN #56

Open
codetiger opened this issue Jul 3, 2017 · 2 comments


@codetiger

Firstly, thanks for the great collection of code and articles. The articles were very useful in understanding DQN and implementing it.

However, my agent learns very poorly, and I am not sure what is wrong with my code. I am using Double DQN (DDQN) and passing rewards based on different criteria. The state is just a normalized version of the board itself.

My code repo is here: https://github.com/codetiger/MachineLearning-2048
Let me know if you can review it and help me understand why my agent does not learn anything even after 1000 episodes.
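
For reference, here is a minimal sketch of one common way to encode a 2048 board for a DQN input (log2 scaling of tile values). This is illustrative only and not taken from the linked repo; the function name and shapes are assumptions.

```python
# Illustrative sketch only, not the repo's actual code: one common way to
# normalize a 2048 board before feeding it to a DQN. Tile values grow
# exponentially (2, 4, ..., 65536), so the raw board is hard for a network
# to use; taking log2 and rescaling keeps the input roughly in [0, 1].
import numpy as np

def encode_state(board):
    """Map a 4x4 board of tile values to a flat, log2-scaled vector."""
    board = np.asarray(board, dtype=np.float32)
    encoded = np.zeros_like(board)
    mask = board > 0
    encoded[mask] = np.log2(board[mask])
    return (encoded / 16.0).reshape(1, -1)  # 2^16 = 65536 is a practical max tile
```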

@dnddnjs
Contributor

dnddnjs commented Jul 5, 2017

Thank you for using our code in your own project. We will review your code and share our opinion!

@codetiger
Author

Hi, I added some optimization techniques to my agent and got better results.

The agent was trained for 100K episodes on a 2x2 grid and made the optimal move 100% of the time. However, I did not have enough patience to train the agent on the 4x4 grid. I have updated my repo with the new results.
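
For readers following along, this is roughly how the Double DQN target is usually computed: the online network selects the greedy next action, while the target network evaluates it. A minimal sketch assuming Keras-style models with a `predict()` method; the function and argument names are hypothetical and not taken from the repo.

```python
# Minimal Double DQN target sketch (assumes Keras-style models with .predict()).
# The online network chooses the greedy next action; the target network scores it,
# which reduces the overestimation bias of plain DQN.
import numpy as np

def ddqn_targets(model, target_model, states, actions, rewards,
                 next_states, dones, gamma=0.99):
    q_values = model.predict(states)                 # current Q estimates to correct
    next_online = model.predict(next_states)         # online net: action selection
    next_target = target_model.predict(next_states)  # target net: action evaluation
    best_actions = np.argmax(next_online, axis=1)
    for i in range(len(states)):
        if dones[i]:
            q_values[i, actions[i]] = rewards[i]
        else:
            q_values[i, actions[i]] = rewards[i] + gamma * next_target[i, best_actions[i]]
    return q_values
```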
