
Thoughts on choosing experiments #40

Open
hjsuh94 opened this issue Apr 21, 2023 · 5 comments

Comments

@hjsuh94
Owner

hjsuh94 commented Apr 21, 2023

What do we want out of our experiments? In the setting of offline RL, we want our algorithm to

  1. Achieve reasonable success on the task
  2. Show that adding distribution risk improves over vanilla RL with learned dynamics.

On low-dimensional examples like cartpole and keypoints, this can be a tricky balance. If the dynamics model is trained "too well", distribution risk only makes the optimizer more conservative and we don't achieve (2). On the other hand, if the dynamics model is trained too badly, we don't achieve (1).

So in order to answer the question of "when does distribution risk help?", we will need to actively find cases where:

  1. Interpolative regime: have a "reasonably bad" dynamics model, where landing on a correct sequence of samples still lets us achieve the task.
  2. Extrapolative regime: create tension between optimality and safety by forcing the optimal trajectory out of the support of the data.
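To make the trade-off concrete, here is a minimal sketch (all names illustrative, not the repo's actual API) of a random-shooting planner whose objective is task cost plus a weighted distribution-risk penalty; `risk(x)` stands in for whatever estimate of "distance from the data" we end up using, and `beta = 0` recovers vanilla planning with the learned model:

```python
import numpy as np

def plan_with_risk(dynamics, task_cost, risk, x0, horizon, beta,
                   n_samples=256, rng=None):
    """Random-shooting planner trading off task cost against a
    distribution-risk penalty on visited states (illustrative sketch)."""
    rng = np.random.default_rng(rng)
    best_u, best_obj = None, np.inf
    for _ in range(n_samples):
        u = rng.normal(size=(horizon,))           # candidate action sequence
        x, obj = x0, 0.0
        for t in range(horizon):
            x = dynamics(x, u[t])                 # learned (possibly wrong) model
            obj += task_cost(x) + beta * risk(x)  # beta controls conservatism
        if obj < best_obj:
            best_u, best_obj = u, obj
    return best_u
```

The point of the two regimes above is then just a question of how the `risk` term interacts with the model error: in the interpolative regime the penalty should steer us between samples that happen to be good, and in the extrapolative regime it should veto trajectories that leave the data support entirely.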
@hongkai-dai
Collaborator

Thanks for the summary!

For the interpolative regime, I was thinking about the scenario where a car needs to take a turn to reach the goal in the corridor, something like this

-----------------------------------
|                      goal
|      ---------------------------
|     |
|     |
|     |
| car |

In training, we only have demonstration data in which the car doesn't hit the wall. Now at planning time, we ask the car to reach the goal as quickly as possible. Without the score risk, the car would try to cut the corner and hit the wall, because the learned model interpolates the dynamics between the starting region and the goal region. This interpolated dynamics is bad because it doesn't capture the collision dynamics. What do you think?
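A toy version of this scenario, with nearest-neighbor distance to the demonstrations as a crude stand-in for the actual score-based risk (the geometry and numbers here are made up for illustration): the demonstrations lie along the L-shaped corridor, so the corner-cutting straight line to the goal passes through a region with no data and gets a large risk value.

```python
import numpy as np

# Demonstrations only along the L-shaped corridor: up the vertical leg,
# then right along the horizontal leg to the goal. No data behind the wall.
rng = np.random.default_rng(0)
along_up = np.column_stack([np.full(200, 0.0), rng.uniform(0, 3, 200)])
along_right = np.column_stack([rng.uniform(0, 5, 200), np.full(200, 3.0)])
data = np.vstack([along_up, along_right])

def nn_risk(x):
    """Distance to the nearest demonstration point: a crude risk proxy."""
    return np.min(np.linalg.norm(data - x, axis=1))

corner_cut = np.array([2.0, 1.2])  # on the straight line to the goal, inside the wall
on_path = np.array([0.05, 1.5])    # near the demonstrated corridor

assert nn_risk(corner_cut) > nn_risk(on_path)
```

Any planner that penalizes `nn_risk` along the trajectory would then prefer following the corridor over cutting the corner, which is exactly the behavior we want the score risk to produce.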

@hjsuh94
Owner Author

hjsuh94 commented Apr 21, 2023

I think that's an excellent example! In the above classification, I think this would be an extrapolative example. (Sorry, "interpolation" is overloaded here; I meant interpolating within the data, not interpolating between the initial and goal states.)

I love these kinds of examples where we artificially remove data from some region and give that removal a physical meaning. For instance, we might consider box pushing with a circular obstacle in the middle, which has exactly the same effect as simply having no data in that region. I think the car example you showed has a similar story.

@hjsuh94
Owner Author

hjsuh94 commented Apr 21, 2023

BTW, I wrote this because after data augmentation the trained dynamics was so good that distribution risk did not help much.

@hongkai-dai
Collaborator

> Interpolative regime: have a "reasonably bad" dynamics, where if we land on a correct sequence of samples, we can still achieve the task.

What do you think of training the dynamics model on very little data with a complicated neural network? The network would overfit the training dataset but perform badly in between the data points, so we could add the risk score as a regularization term.
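This failure mode is easy to reproduce in one dimension. As a hedged sketch (a high-degree polynomial standing in for an overparameterized network, and a made-up scalar dynamics x' = sin(3x)): the fit is essentially exact at the few training points but has much larger error between them, which is exactly where a risk penalty would need to discourage the planner from trusting the model.

```python
import numpy as np

# Overfitting sketch: fit a degree-9 polynomial (a stand-in for a large
# network) to only 10 samples of a toy dynamics x' = sin(3x).
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, 10)
y_train = np.sin(3 * x_train)
coef = np.polyfit(x_train, y_train, 9)  # interpolates the 10 points

# Near-zero error at the training points, much larger error in between.
train_err = np.abs(np.polyval(coef, x_train) - y_train)
x_between = np.linspace(-3, 3, 200)
between_err = np.abs(np.polyval(coef, x_between) - np.sin(3 * x_between))
```

The balance issue remains, though: the model has to be bad enough between data points for the risk term to matter, yet good enough along some sample sequence that the task is still achievable.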

@hjsuh94
Owner Author

hjsuh94 commented Apr 22, 2023

I think that's a good idea! But it's a bit of a tough experiment to balance, since with too little data we may not be able to achieve the task at all.

In the D4RL benchmark, I saw that they had around 100,000 data points for the MuJoCo / Adroit tasks. I've been collecting 100,000 data points for planar pushing as well, but I am not able to train a good enough dynamics model for offline trajopt to succeed. Even here I've been seeing that the risk score helps! I will add some results this weekend.
