Training with rllib #1
Got it. The PPO algorithm solves the problem of continuous action spaces; therefore, it is not suitable to use PPO for discrete-action scenes such as basic and health gathering in ViZDoom.
More or less, yes (see the paper for results). Continuous spaces seem to be much harder to learn than discrete ones, so try to avoid them.
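For reference, the distinction being discussed maps onto Gym's space types. A minimal illustration; the action counts, bounds, and shape below are arbitrary examples, not the scenarios' actual action sets:

```python
from gym import spaces

# Discrete action space, as in ViZDoom's basic / health gathering scenarios
# (the count 3 is illustrative, e.g. move-left / move-right / attack).
discrete_actions = spaces.Discrete(3)

# Continuous (Box) action space, the kind the reply above suggests avoiding;
# the bounds and shape here are arbitrary examples.
continuous_actions = spaces.Box(low=-1.0, high=1.0, shape=(2,))
```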
@Miffyli By the way, do you know DQN training hyperparameters that work well in other ViZDoom scenes (besides basic and health gathering)? :)
Sadly no, I have mainly used A2C or PPO for ViZDoom tasks lately. I think the default parameters used for Atari games should work reasonably well out-of-the-box, though :) .
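As a hedged sketch of what "start from the Atari defaults" could look like with RLlib's DQN, not a verified configuration from this project: the env ID `VizdoomBasic-v0` assumes a ViZDoom Gym wrapper is registered, and every hyperparameter below is a common starting value rather than something tuned for ViZDoom.

```python
import ray
from ray import tune

ray.init()

# Sketch only: RLlib DQN with Atari-style starting hyperparameters.
tune.run(
    "DQN",
    config={
        "env": "VizdoomBasic-v0",  # assumed: a registered ViZDoom Gym env
        "lr": 1e-4,                # typical Atari DQN learning rate
        "gamma": 0.99,             # discount factor
        "train_batch_size": 32,
        "exploration_config": {
            "initial_epsilon": 1.0,
            "final_epsilon": 0.01,
            "epsilon_timesteps": 200000,  # linear epsilon-decay horizon
        },
    },
    stop={"timesteps_total": 1000000},
)
```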
Hi @Miffyli, I found rllib/configs/vizdoom_ppo.yaml in your repo. Is this a config you have verified for training ViZDoom with the PPO algorithm (within RLlib)? :)
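For anyone else finding this thread: RLlib experiment YAMLs like that one are typically launched through Tune. A minimal sketch, assuming rllib/configs/vizdoom_ppo.yaml follows RLlib's usual experiment-spec layout (the repo may intend the `rllib train` CLI instead):

```python
import yaml
import ray
from ray import tune

# Assumed: the YAML maps experiment names to their run/env/config specs,
# the standard layout consumed by `rllib train -f <file>`.
with open("rllib/configs/vizdoom_ppo.yaml") as f:
    experiments = yaml.safe_load(f)

ray.init()
tune.run_experiments(experiments)  # roughly what `rllib train -f` does
```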