DDPG + HER to replace TRPO #45
I'm a little unclear about the question. Are you trying one of our examples? If not, is that a simulated task? For all our real-world robot tasks, we do inherit from `gym.Env`.

As for registering the env, it's needed only when you'd like to use `gym.make`. I'm assuming that you're trying to use the baselines implementation of DDPG. Let me know if you have any other questions.
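For context, registration is what lets `gym.make("SomeId-v0")` construct an env from a string ID. Below is a minimal sketch of that mechanism using a plain-Python stand-in registry rather than Gym itself; the class and ID names (`FakeReacherEnv`, `"AuboReacher-v0"`) are made up for illustration. In real code you would call `gym.envs.registration.register(id=..., entry_point=...)`.

```python
# Simplified stand-in for Gym's registry, showing why gym.make
# needs the env registered first.

_registry = {}

def register(env_id, entry_point, **kwargs):
    """Map a string ID to a constructor plus default kwargs."""
    _registry[env_id] = (entry_point, kwargs)

def make(env_id):
    """Look up the ID and build the env; unknown IDs fail."""
    if env_id not in _registry:
        raise KeyError("env %r is not registered" % env_id)
    entry_point, kwargs = _registry[env_id]
    return entry_point(**kwargs)

class FakeReacherEnv:
    # Hypothetical placeholder for something like an Aubo reacher env.
    def __init__(self, episode_length_time=4.0):
        self.episode_length_time = episode_length_time

register("AuboReacher-v0", FakeReacherEnv, episode_length_time=4.0)
env = make("AuboReacher-v0")
```

Once the real env class is registered this way with Gym, any code that only knows the string ID (such as the baselines HER scripts) can build it.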
I have a different robot, but I modified the code so that it works. However, I want to try a different algorithm (DDPG + HER), as it should be faster than TRPO. HER uses Gym's `make` function to create the env, so I think I can follow your suggestion. Another question: my code has a problem after running for a few hours. The `_sensor_handler` and `actuator_handler` threads stop running after a while (even though they run fine for the first hour or so). What might be the possible reasons for that?
This is a typical error:

```
WARNING:root:Agent has over-run its allocated dt, it has been 0.28047633171081543 since the last observation, 0.24047633171081542 more than allowed
```

It just keeps looping over this warning. Since the commands are not sent to the robot (the `actuator_handler` thread stops), the robot does not move at all. I also checked that the `sensor_handler` thread stops running.
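For context, the warning above comes from a check of the form "time since last observation versus allocated dt". A minimal sketch of such a check (function and parameter names are illustrative, not SenseAct's actual code):

```python
import time

def observation_overrun(last_obs_time, dt, now=None):
    """Return how far the agent has over-run its allocated dt
    since the last observation (0.0 if still within budget)."""
    if now is None:
        now = time.time()
    elapsed = now - last_obs_time
    return max(0.0, elapsed - dt)

# Matching the numbers in the warning above: about 0.2805 s elapsed
# with dt = 0.04 s gives about 0.2405 s of overrun.
overrun = observation_overrun(last_obs_time=0.0, dt=0.04,
                              now=0.28047633171081543)
```

A dt of 0.04 s (25 Hz) is assumed here purely because it reproduces the difference between the two numbers in the warning.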
Is it possible for you to share some code snippets or elaborate on what you are trying to do?
Thanks! Please look at the code at https://github.com/hhn1n15/SenseAct_Aubo. Basically, right now I am trying to replicate your results (using TRPO) with a new robot (an Aubo robot). I added a new device (aubo) and created an aubo_reacher (based on ur_reacher). Most of the code stays the same.
Among many other reasons, the dt may overrun if expensive learning updates are done sequentially with the control loop. It is not a big concern if it happens, say, once every few minutes. However, if it happens more often, two options are to compute the update more efficiently on a more powerful computer, or to run the learning updates asynchronously in a separate process. Are the handlers stopping even when you are running TRPO or PPO? I suggest getting it learning first with TRPO or PPO using the example script before moving to HER. Getting effective learning with a new robot is no trivial job, and I would be glad to see this working!
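The "separate process" option can be sketched with Python's `multiprocessing`: the real-time loop keeps sending actions at its fixed dt while a worker process does the expensive update and publishes new parameters. This is a toy illustration (the "update" is a dummy sum), not SenseAct's actual architecture:

```python
import multiprocessing as mp

def learner(batch_queue, param_queue):
    """Worker process: consume experience batches, do the (slow)
    learning update, and publish new parameters without blocking
    the real-time control loop."""
    params = 0.0
    while True:
        batch = batch_queue.get()
        if batch is None:          # shutdown sentinel
            break
        params += sum(batch)       # stand-in for a gradient step
        param_queue.put(params)

def run_demo():
    batch_q, param_q = mp.Queue(), mp.Queue()
    proc = mp.Process(target=learner, args=(batch_q, param_q))
    proc.start()
    # The control loop would run here at its fixed dt, pushing
    # batches as they fill up instead of updating inline.
    for batch in ([1.0, 2.0], [3.0]):
        batch_q.put(batch)
    latest = [param_q.get() for _ in range(2)][-1]
    batch_q.put(None)
    proc.join()
    return latest
```

The key design point is that the queues decouple the two loops: the control loop's per-step cost is just a `put`, so a slow update can no longer blow the dt budget.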
I haven't tried DDPG + HER yet. The two handlers stop even with the original code using TRPO. Actually, it is the communicator that stops, which makes the two threads stop.
I want to replace TRPO with DDPG + HER and am having difficulties. The combination only works with a task that is registered with Gym. How did TRPO avoid that?
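A guess at the answer, based on the SenseAct example scripts rather than anything stated in this thread: the TRPO/PPO examples construct the environment object directly in Python, so no string-ID lookup is involved, while the baselines HER code receives only a string env ID and must resolve it via `gym.make`, which works only for registered IDs. The two call patterns, sketched with a placeholder class (the name `AuboReacherEnv` is hypothetical):

```python
class AuboReacherEnv:
    """Placeholder for a SenseAct-style env class."""
    def __init__(self, dt=0.04):
        self.dt = dt

# Pattern 1 (SenseAct TRPO/PPO examples): construct directly.
# No registration is needed because the script holds the class itself.
env_direct = AuboReacherEnv(dt=0.04)

# Pattern 2 (baselines HER): only a string ID is passed around, so
# something like gym.make must map the ID to a constructor -- and
# that mapping is exactly what registration provides.
registry = {"AuboReacher-v0": AuboReacherEnv}
env_by_id = registry["AuboReacher-v0"](dt=0.04)
```

So registering the env once at import time should be enough to make the HER entry point usable with the same environment class the TRPO script builds by hand.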