
DDPG + HER to replace TRPO #45

Open · hai-h-nguyen opened this issue Feb 21, 2019 · 7 comments

@hai-h-nguyen

I want to replace TRPO with DDPG + HER and am having difficulties. The combination only works with a task that is registered with Gym. How did TRPO avoid that requirement?

@gauthamvasan (Collaborator) commented Feb 21, 2019

I'm a little unclear about the question. Are you trying one of our examples? If not, is that a simulated task?

For all our real-world robot tasks, we do inherit gym.core.Env.
For example, with the UR5 arm,

  • ReacherEnv inherits the gym core env (link)
  • The observation and action spaces are defined as gym Box objects (link)
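
For illustration, a minimal sketch of what such an env subclass looks like under the old gym API; the class name, shapes, and bounds below are made up, not the actual ReacherEnv code:

```python
import numpy as np
import gym
from gym.spaces import Box

class MyRobotEnv(gym.core.Env):
    """Skeleton of a real-robot env; names, shapes, and bounds are illustrative."""

    def __init__(self):
        # Observations: e.g., joint angles; actions: e.g., joint velocity commands.
        self.observation_space = Box(low=-np.pi, high=np.pi, shape=(6,), dtype=np.float32)
        self.action_space = Box(low=-1.0, high=1.0, shape=(6,), dtype=np.float32)

    def reset(self):
        # Move the robot to a start pose and return the first observation.
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def step(self, action):
        # Send the action to the robot, read sensors, compute the reward.
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)
        return obs, 0.0, False, {}
```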

As for registering the env, it's needed only when you'd like to use env = gym.make("custom_env_name"). We did that with our DoubleInvertedPendulumEnv. (link)
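
For reference, registration is just one call to gym's registry before gym.make; the id, entry point, and episode limit below are hypothetical:

```python
import gym
from gym.envs.registration import register

# Hypothetical id and module path; point entry_point at your own env class.
register(
    id='MyRobot-v0',
    entry_point='my_package.my_module:MyRobotEnv',
    max_episode_steps=150,
)

env = gym.make('MyRobot-v0')
```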

I'm assuming that you're trying to use the baselines implementation of DDPG. Let me know if you have any other questions.

@hai-h-nguyen (Author)

I have a different robot, but I modified the code so that it works. However, I want to try a different algorithm (DDPG + HER), as it should be faster than TRPO. HER uses gym's env-making function (gym.make), so I think I can follow your suggestion.

Another question: my code has a problem when running for a number of hours or so. The _sensor_handler and actuator_handler threads stop running after a while (even though they ran fine for the first hour or so). What might be the possible reasons for that?

@hai-h-nguyen (Author)

This is a typical error:

WARNING:root:Agent has over-run its allocated dt, it has been 0.28047633171081543 since the last observation, 0.24047633171081542 more than allowed
Resetting
Reset done
Resetting
Reset done
Resetting
Reset done
Resetting
Reset done
Resetting

It just keeps looping between these. As commands are not sent to the robot (the actuator_handler thread stops), the robot does not move at all. I also checked that the sensor_handler stops running as well.
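
One common cause of threads stopping silently like this is an unhandled exception in the thread's target, which Python does not report by default. A generic sketch that wraps a target so the traceback gets logged before the thread dies; the handler below is a placeholder, not the actual SenseAct code:

```python
import logging
import threading
import traceback

def logged(target):
    """Wrap a thread target so an unhandled exception is logged
    instead of killing the thread silently."""
    def wrapper(*args, **kwargs):
        try:
            target(*args, **kwargs)
        except Exception:
            logging.error("Thread %s died:\n%s",
                          threading.current_thread().name,
                          traceback.format_exc())
            raise
    return wrapper

def _sensor_handler():  # placeholder standing in for the real handler
    raise RuntimeError("sensor read failed")

t = threading.Thread(target=logged(_sensor_handler), name="sensor_handler")
t.start()
t.join()  # the traceback now shows up in the log instead of vanishing
```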

@gauthamvasan (Collaborator)

Is it possible for you to share some code snippets or elaborate on what you are trying to do? I have seen such errors when Python multiprocessing code was set up incorrectly.
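
Two classic setup mistakes with multiprocessing are launching worker processes outside the if __name__ == '__main__' guard and never checking whether a child process is still alive. A generic sketch of the correct pattern; the communicator loop here is a stand-in, not SenseAct's actual communicator code:

```python
import multiprocessing as mp
import time

def communicator_loop():
    # Stand-in for a communicator process talking to the robot.
    while True:
        time.sleep(0.01)

if __name__ == '__main__':
    # On platforms using the 'spawn' start method, launching processes
    # outside this guard makes children re-import and re-launch the module.
    proc = mp.Process(target=communicator_loop, daemon=True)
    proc.start()

    # Check liveness instead of assuming the process keeps running.
    for _ in range(5):
        time.sleep(1.0)
        if not proc.is_alive():
            print("communicator died, exit code:", proc.exitcode)
            break
```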

@hai-h-nguyen (Author)

Thanks! Please look at the code at https://github.com/hhn1n15/SenseAct_Aubo. Basically, right now I am trying to replicate your results (using TRPO) with a new robot (an Aubo robot). I added a new device, aubo, and created an aubo_reacher (based on ur_reacher). Most of the code stays the same.

@armahmood (Member)

The dt may overrun if expensive learning updates are done sequentially with the acting loop, among many other reasons. It is not that bothersome if it happens, say, once every few minutes. However, if it happens more often, two options are to compute the update more efficiently on a more powerful computer, or to make the learning updates asynchronous in a different process.
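
For the asynchronous option, a minimal sketch that moves the expensive update into a separate process so the acting loop stays within its dt. The update and the transitions are stand-ins, and the 40 ms dt is the value implied by the warning log above:

```python
import multiprocessing as mp
import time
import numpy as np

def update(params, batch):
    # Stand-in for an expensive learning update (e.g., a DDPG step).
    time.sleep(0.1)
    return params + 0.0 * batch.mean()

def learner(transition_q, param_q):
    # Consume transitions and publish fresh parameters off the
    # acting loop's critical path.
    params = np.zeros(4)
    while True:
        batch = transition_q.get()
        params = update(params, batch)
        if param_q.empty():  # keep only the freshest parameters
            param_q.put(params)

if __name__ == '__main__':
    transition_q = mp.Queue()
    param_q = mp.Queue(maxsize=1)
    mp.Process(target=learner, args=(transition_q, param_q), daemon=True).start()

    params = np.zeros(4)
    for step in range(100):
        transition_q.put(np.random.randn(8))  # stand-in for a real transition
        if not param_q.empty():
            params = param_q.get()  # pick up updated parameters when ready
        time.sleep(0.04)  # keep acting at the 40 ms dt from the log above
```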

Are the handlers stopping even when you are running TRPO or PPO?

I suggest getting it to learn first with TRPO or PPO using the example script before moving to HER. Getting effective learning with a new robot is no trivial job, and I would be glad to see this working!

@hai-h-nguyen (Author)

I haven't tried DDPG + HER yet. The two handlers stop even with the original code using TRPO. Actually, it is the communicator that stops, which makes the two threads stop.
