Hi there, thanks for providing this great RL resource!
I have a comment / suggestion for the tictactoe.py code:
The tictactoe.py code uses a compete() function to test if the AI players are sufficiently well trained.
If they play well enough, each game should end in a tie.
With the default settings in the code, all 1000 games end up in a tie.
However, this is not very informative as to whether the AI has learned to play the game well.
Why? Because epsilon is zero for both players, both follow the learned Q-table greedily and therefore make identical choices in every state where one move has a dominant Q-value. This is the case for the first six turns; the only variation comes after six turns, when three moves have equal Q-values and one of them is chosen at random.
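To make this concrete, the action selection I am describing behaves roughly like the sketch below. This is not the exact code from tictactoe.py, and `next_states` / `estimations` are placeholder names for the legal successor states and the learned value table:

```python
import numpy as np

# Minimal sketch of epsilon-greedy selection with random tie-breaking (assumed
# structure, not the repo's exact code): with epsilon = 0 the exploratory
# branch is never taken, so the chosen move only varies when several successor
# states share the same estimated value.
def select_action(next_states, estimations, epsilon=0.0):
    if np.random.rand() < epsilon:
        # exploratory move: pick any legal successor state uniformly
        return next_states[np.random.randint(len(next_states))]
    values = [estimations[s] for s in next_states]
    best = max(values)
    ties = [s for s, v in zip(next_states, values) if v == best]
    # greedy move, breaking ties among equally valued states at random
    return ties[np.random.randint(len(ties))]
```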
I think an improvement is to let one player use the Q-table greedily, and the other player select moves randomly.
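A rough sketch of what I have in mind is below, assuming some helper `play_game(player1, player2)` that runs a single game and returns 1, -1 or 0 for a win, loss or tie of the first player (the real code in tictactoe.py is organized differently, so this would need adapting):

```python
import numpy as np

class RandomPlayer:
    """Hypothetical opponent that ignores the Q-table and picks any free cell."""
    def act(self, board):
        # board: flat array where 0 marks an empty cell
        free = [i for i, v in enumerate(board) if v == 0]
        return np.random.choice(free)

def compete_vs_random(greedy_player, play_game, turns=1000):
    # play_game is a placeholder for whatever runs one game between the
    # trained greedy player and the random opponent and returns 1 / -1 / 0.
    wins = losses = ties = 0
    for _ in range(turns):
        result = play_game(greedy_player, RandomPlayer())
        if result == 1:
            wins += 1
        elif result == -1:
            losses += 1
        else:
            ties += 1
    print('%d wins, %d losses, %d ties over %d games' % (wins, losses, ties, turns))
```

Against a random opponent, a well-trained greedy player should win most games and never lose, which says much more about the quality of the Q-table than 1000 identical ties.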
Regards,
Gertjan
Hello! Well, I think your suggestion is reasonable. But personally, I think the play() function should be used to test whether the player is sufficiently well trained, so maybe there is no need to change the compete(turns) function. Do you agree with me?
Besides, I admit that it seems meaningless to repeat so many identical games as you described, and I can't figure out the purpose of the compete(turns) function either. Perhaps we can wait for an answer from the owner of the repository, @ShangtongZhang, since he/she is the author of the file.