Hi, I've created a new environment, but I'm struggling to determine whether the RL agent is learning correctly. It doesn't seem to be improving much, so I'm wondering if I've implemented the environment correctly.
Just wondering if you have any tips on how I might best check that everything is implemented correctly? For example:
- Is there anything I should be checking on TensorBoard?
- Do you have any advice on what I should be normalizing (e.g. actions, states, ...)?
- (Any other advice/tips greatly appreciated...)
Many thanks for any help, and for this amazing lib! :)
Hmm, maybe I should write up an FAQ at some point, since this is a very common question.
There's no easy answer; RL is still more of an art than a science. A few things to try:
- Check that your agent gets some reward under a random policy. If it never reaches rewarding states with random actions, it won't learn (a quick way to verify this is sketched after this list).
- Check your TensorBoard plots for signs of exploding gradients: these can show up as very high KL divergence or Adam momentum values, both of which should typically stay pretty low. Disable gradient norm clipping and see whether the gradients reach very high values (a gradient-logging sketch follows below).
- Watch what the policy is doing visually, even if it's not good yet. This often gives a good idea of what's going on.
- Normalization typically helps, especially when reward or observation scales aren't tuned correctly, but it can hurt in some cases. Try disabling all normalization (a bare-bones observation normalizer is sketched below as well).
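A minimal sketch of the random-policy check, assuming a Gymnasium-style API (the env ID `MyCustomEnv-v0` is a placeholder for your own):

```python
import gymnasium as gym
import numpy as np

# "MyCustomEnv-v0" is a placeholder -- substitute your registered env ID.
env = gym.make("MyCustomEnv-v0")

episode_returns = []
for _ in range(100):
    obs, info = env.reset()
    done, ep_return = False, 0.0
    while not done:
        # Uniform random policy: sample directly from the action space.
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        ep_return += reward
        done = terminated or truncated
    episode_returns.append(ep_return)

# If every return is identical (e.g. always 0), random exploration never
# reaches rewarding states and the agent has no signal to learn from.
print(f"random policy returns: mean={np.mean(episode_returns):.3f} "
      f"min={np.min(episode_returns):.3f} max={np.max(episode_returns):.3f}")
```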
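For the gradient check, here's a generic PyTorch sketch (not anything built into this library) that logs the global gradient norm to TensorBoard; call it between `loss.backward()` and `optimizer.step()`:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/grad_debug")

def log_grad_norm(model: torch.nn.Module, step: int) -> float:
    """Compute the global L2 norm over all parameter gradients and log it.

    With norm clipping disabled, a healthy run keeps this curve bounded,
    while an exploding run shows sudden spikes of several orders of magnitude.
    """
    total_sq = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total_sq += p.grad.detach().pow(2).sum().item()
    grad_norm = total_sq ** 0.5
    writer.add_scalar("debug/grad_norm", grad_norm, step)
    return grad_norm
```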
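And to make the normalization point concrete, a bare-bones running mean/std observation normalizer looks roughly like this; most RL libraries ship an equivalent wrapper, so prefer the built-in one and just toggle it from your config:

```python
import numpy as np

class RunningObsNorm:
    """Running mean/std observation normalizer (Welford's online algorithm)."""

    def __init__(self, shape, clip=10.0):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.m2 = np.zeros(shape, dtype=np.float64)  # sum of squared deviations
        self.count = 0
        self.clip = clip

    def __call__(self, obs):
        obs = np.asarray(obs, dtype=np.float64)
        # Online update of the running statistics.
        self.count += 1
        delta = obs - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (obs - self.mean)
        var = self.m2 / self.count if self.count > 1 else np.ones_like(self.mean)
        # Normalize and clip so outliers can't blow up the network inputs.
        return np.clip((obs - self.mean) / np.sqrt(var + 1e-8), -self.clip, self.clip)
```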
You might also just not be training long enough. What is your framerate, and how long are your training sessions? Some environments take hundreds of millions of steps before learning gets going; even at 10,000 environment steps per second, 100M steps is roughly three hours.
It's hard to say more without knowing more about your environment and setup. If you can share some details, maybe I can help; sharing your config would help too.