# Atlas reinforcement learning

This project aims to run reinforcement learning models on a real Atlas.

- Python version: 3.7.11

## Timeline

- Run the simulated Atlas model in PyBullet.
- Wrap it into an OpenAI Gym environment.
- Run the low-pass filter or simulation at 100 Hz and action sampling at 30 Hz.
- Retarget motion files to AtlasEnv and check how they look.
- Create a second backend for the OpenAI Gym environment that connects to the real Atlas.
- Train a very basic machine learning model with stable_baselines in the virtual Gym and run it in the real Gym. Fix most of the joints to zero, except, for example, the right arm.
- Encode rotations as two unit vectors (x and y); see the sketch after this list.
- Remove sim-time from the observation.
- Remove horizontal components from the observation.
- chosenDifference should be the difference between the actual and the reference joint angles.
- actionSpeedDiff should also be the difference between the actual and the reference joint angles.
- Sample the start position uniformly between the first and the last reference motion frame (see the sketch after this list).
- Do a sanity check on eulerDif: set the robot to a fixed angle difference and check that the resulting value is valid.
- Tune the exponents: start at 0.2 and spread up to 0.5.
- Change batch_size to 512.
- Use the hyperparameters from https://github.com/google-research/motion_imitation/blob/d0e7b963c5a301984352d25a3ee0820266fa4218/motion_imitation/run.py
- Set the friction on the ground plane to a higher value.
- Add low-pass filtering to the joint motors; filter the actual angles instead of the normalized action space (see the sketch after this list).
- See how the model behaves and decide on further steps.
- Use IK to better match the reference motion.
- Use a reward for the relative end-effector difference.
- Use motion from the real Atlas as reference.
- Debug without the ground plane and figure out why the simulations still diverge.
- Sample from a uniform instead of a normal distribution.
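
The rotation encoding mentioned above ("encode rotations as two unit vectors") is commonly implemented by taking the x and y columns of the rotation matrix. A minimal sketch, assuming quaternions in PyBullet's (x, y, z, w) order; the function names are illustrative, not the project's API:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def encode_rotation(quat_xyzw):
    """Encode a quaternion as the x and y columns of its rotation matrix (6 numbers)."""
    m = Rotation.from_quat(quat_xyzw).as_matrix()
    return np.concatenate([m[:, 0], m[:, 1]])

def decode_rotation(six_d):
    """Recover an orthonormal rotation matrix from the 6D encoding via Gram-Schmidt."""
    x = six_d[:3] / np.linalg.norm(six_d[:3])
    y = six_d[3:6] - np.dot(x, six_d[3:6]) * x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)
    return np.stack([x, y, z], axis=1)
```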
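
Sampling the start position uniformly between the first and the last reference frame could look like the following; `n_frames` and `dT` are placeholders for however the reference motion is stored in the project:

```python
import numpy as np

def sample_start_time(n_frames, dT):
    """Pick a start time uniformly over the whole reference motion."""
    duration = (n_frames - 1) * dT
    return np.random.uniform(0.0, duration)
```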
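
For the low-pass filtering of the joint motor targets, a simple first-order (exponential) filter applied to the actual joint angles is one option. This is only a sketch; the class name and `alpha` are assumptions, not the project's actual implementation:

```python
import numpy as np

class JointTargetFilter:
    """First-order low-pass filter on joint angle targets (not on the normalized actions)."""

    def __init__(self, n_joints, alpha=0.1):
        self.alpha = alpha               # smoothing factor, 0 < alpha <= 1
        self.state = np.zeros(n_joints)

    def reset(self, initial_angles):
        self.state = np.asarray(initial_angles, dtype=float)

    def __call__(self, target_angles):
        # Blend the new target into the filtered state.
        self.state = self.alpha * np.asarray(target_angles) + (1.0 - self.alpha) * self.state
        return self.state
```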

## Setup

Install Python 3.7 and create a virtual environment (or use another environment manager).

which python3.7
virtualenv -p /usr/bin/python3.7 env
source env/bin/activate
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html

For the env explorer, additionally install:

sudo apt-get install build-essential
sudo apt-get install --reinstall libxcb-xinerama0
sudo apt-get install qtcreator

## Run the code

Run the following command to play around with the Atlas Robot in PyBullet.

python3 atlasrl/simulator/standalone.py

(Screenshot of the simulated Atlas in PyBullet.)
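
If you just want to see what this does in spirit, the following is a minimal PyBullet sketch, not the project's actual standalone.py; the Atlas URDF path is a placeholder and assumes you have an Atlas model available:

```python
import time
import pybullet as p
import pybullet_data

p.connect(p.GUI)                                        # open the PyBullet GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")                                # ground plane
# Placeholder path -- point this at the Atlas URDF you use.
atlas = p.loadURDF("atlas/atlas_v4_with_multisense.urdf", basePosition=[0, 0, 1])

for _ in range(10000):
    p.stepSimulation()
    time.sleep(1.0 / 240.0)                             # PyBullet's default timestep
```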

For training the model, run

python3 -m atlasrl

For testing the remote, run

python3 -m atlasrl.robots.AtlasRemoteEnv_test

For exploring the env, run

python3 -m atlasrl.robots.AtlasEnvExplorer

## State, Actions and Reward

- The observed state is not defined yet.
- Actions are the PD-controller targets for the 30 joints of the simulated Atlas.
- Rewards depend on the specific task.
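
Since the actions are the PD-controller targets for the 30 joints, the Gym action space would typically be declared roughly as below. This is a hedged skeleton, not the project's AtlasEnv; the bounds and the observation shape are placeholders, because the observed state is still open:

```python
import gym
import numpy as np
from gym import spaces

class AtlasEnvSkeleton(gym.Env):
    """Skeleton showing how the spaces could be declared; not the project's AtlasEnv."""

    def __init__(self):
        # 30 PD-controller targets, one per simulated joint; the bounds are placeholders.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(30,), dtype=np.float32)
        # The observed state is not fixed yet, so this shape is purely illustrative.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(60,), dtype=np.float32)
```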

## Data format

- The first value is dT in seconds.
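
A minimal sketch of reading such a file, assuming a plain-text layout in which the first line is dT and each following line is one frame; everything beyond the first value is an assumption, not the documented format:

```python
import numpy as np

def load_reference_motion(path):
    """Read dT from the first line and the remaining lines as frames (assumed layout)."""
    with open(path) as f:
        dT = float(f.readline())
        frames = np.array([[float(x) for x in line.split()] for line in f if line.strip()])
    return dT, frames
```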

## Reference motion player

python -m atlasrl.simulator.ShowReferenceMotion

## Running with ISAAC

source ~/.local/share/ov/pkg/isaac_sim-2021.1.1/python_samples/setenv.sh