As I'm trying to reproduce Agent-Driver end-to-end, I'm running into some problems with the memory module. It seems that the embeddings of the encoded queries have already been cached in the local memory file database.pkl. I'm therefore wondering which encoding model you used to compute the embeddings of the queries and keys. Did you simply use the coordinates of the ego states and historical trajectories, and a one-hot vector to embed the mission goal? If not, I would really appreciate it if you could kindly share the encoder you used.
Thanks!
Hi, thanks for your interest in our work.
We retrieve memory scenarios by querying [ego_states, mission_goal, ego_hist_traj]. Please see here for details of the query vector. Specifically, the mission goal is embedded as a 3x1 one-hot vector indicating left, right, or straight.
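To make the description above concrete, here is a minimal sketch of how such a query vector could be assembled and matched against cached keys. The function names, field shapes, and the L2-distance retrieval are assumptions for illustration, not the repository's actual implementation:

```python
import numpy as np

def one_hot_goal(goal: str) -> np.ndarray:
    """Embed the mission goal as a 3x1 one-hot vector (left / right / straight)."""
    goals = ["left", "right", "straight"]
    vec = np.zeros(3)
    vec[goals.index(goal)] = 1.0
    return vec

def build_query(ego_states, mission_goal, ego_hist_traj) -> np.ndarray:
    """Concatenate [ego_states, mission_goal, ego_hist_traj] into one flat vector."""
    return np.concatenate([
        np.asarray(ego_states, dtype=float).ravel(),   # raw ego-state coordinates
        one_hot_goal(mission_goal),                    # 3x1 one-hot mission goal
        np.asarray(ego_hist_traj, dtype=float).ravel(),  # flattened history waypoints
    ])

def retrieve(query: np.ndarray, keys: np.ndarray) -> int:
    """Return the index of the nearest cached key by L2 distance (assumed metric)."""
    dists = np.linalg.norm(keys - query, axis=1)
    return int(np.argmin(dists))
```

Under this scheme the "encoder" is just concatenation of raw coordinates plus the one-hot goal, which would explain why no separate embedding model is needed to reproduce the cached keys in database.pkl.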
Hi, thanks again for the incredible work!