
Commit

first commit
ZhengyiLuo committed Mar 19, 2024
1 parent 9865865 commit 4fd77a9
Showing 198 changed files with 35,830 additions and 2 deletions.
144 changes: 144 additions & 0 deletions .gitignore
@@ -0,0 +1,144 @@

*.json

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
out/
# C extensions
*.so
*.pkl

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
wandb/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

.vscode/*
.vscode/settings.json



# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/


# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
venv/
env.bak/
venv.bak/


# Rope project settings
.ropeproject

# mkdocs documentation
/site

# Pyre type checker
.pyre/
checkpoints/
data/*
output/
log/

*.png
*.jpg
stageroom_train/ # tensorboard log
trials/
output_videos/
*.pkl
*.pt
new_trials
gh-md-toc
175 changes: 173 additions & 2 deletions README.MD
@@ -1,11 +1,182 @@
[Repo still under construction]

# Universal Humanoid Motion Representations for Physics-Based Control

Official implementation of ICLR 2024 spotlight paper: "Universal Humanoid Motion Representations for Physics-Based Control". In this work, we develop a humanoid motion latent space that is low-dimensional (32D), has high coverage (99.8% of AMASS motion), speeds up downstream tasks via hierarchical RL, and can be randomly sampled as a generative model.

Our proposed Physics-based Universal motion Latent SpacE (PULSE) is akin to a foundation model for control, where downstream tasks ranging from simple locomotion and complex terrain traversal to free-form motion tracking can all reuse this representation.

This repo has a large amount of code overlap with [PHC](https://github.com/ZhengyiLuo/PHC); PULSE is more focused on the generative tasks.

[[paper]](https://arxiv.org/abs/2310.04582) [[website]](https://www.zhengyiluo.com/PULSE/)

<div float="center">
<img src="assets/pulse.gif" />
</div>
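
To make the generative usage concrete, here is a rough sketch of what sampling motion from such a latent space looks like. All names and dimensions below are hypothetical stand-ins, not PULSE's actual API:

```
import torch
import torch.nn as nn

# Hypothetical stand-ins for the learned prior and decoder; the real
# networks live in the phc/ package and differ in architecture and inputs.
OBS_DIM, Z_DIM, ACT_DIM = 512, 32, 69  # illustrative sizes (the latent is 32-dim)

prior = nn.Linear(OBS_DIM, 2 * Z_DIM)            # proprioception -> (mu, log_sigma)
decoder = nn.Linear(OBS_DIM + Z_DIM, ACT_DIM)    # (obs, z) -> low-level action

obs = torch.zeros(1, OBS_DIM)                    # placeholder proprioception
mu, log_sigma = prior(obs).chunk(2, dim=-1)
z = mu + log_sigma.exp() * torch.randn_like(mu)  # sample the 32-dim latent
action = decoder(torch.cat([obs, z], dim=-1))    # decode to a joint-space action
```

A downstream task policy reuses the same decoder but outputs `z` itself instead of sampling it, which is what lets the latent space act as an action space for hierarchical RL.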

## News 🚩

[March 19, 2024] Training and Evaluation code released.

## TODOs

- [ ] Add support for smplx/h (fingers!!!).

- [ ] Add support for more downstream tasks.

### Dependencies

To create the environment, follow these instructions:

1. Create a new conda environment and install PyTorch:


```
conda create -n isaac python=3.8
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
pip install -r requirement.txt
```

2. Download and setup [Isaac Gym](https://developer.nvidia.com/isaac-gym).
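
A common pitfall after installing Isaac Gym: it must be imported before PyTorch in the same process, or it raises an `ImportError`. A quick sanity check:

```
# Isaac Gym must be imported before torch in the same process.
import isaacgym  # noqa: F401
import torch

print(torch.cuda.is_available())  # expect True on a CUDA machine
```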


3. [Optional if only running inference] Download the SMPL parameters from [SMPL](https://smpl.is.tue.mpg.de/) and unzip them into the `data/smpl` folder. Please download the v1.1.0 version, which contains the neutral humanoid. Rename the files `basicmodel_neutral_lbs_10_207_0_v1.1.0.pkl`, `basicmodel_m_lbs_10_207_0_v1.1.0.pkl`, and `basicmodel_f_lbs_10_207_0_v1.1.0.pkl` to `SMPL_NEUTRAL.pkl`, `SMPL_MALE.pkl`, and `SMPL_FEMALE.pkl`. The file structure should look like this:

```
|-- data
|-- smpl
|-- SMPL_FEMALE.pkl
|-- SMPL_NEUTRAL.pkl
|-- SMPL_MALE.pkl
```
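
As a quick sanity check that the models are in place, the SMPL pickles can be loaded as below. This is just a verification sketch, not part of the repo; the official v1.1.0 pickles were saved under Python 2 and reference `chumpy`, so `chumpy` must be installed and the `latin1` encoding is needed:

```
import pickle

# The py2-era pickle needs encoding="latin1"; it also unpickles chumpy arrays.
with open("data/smpl/SMPL_NEUTRAL.pkl", "rb") as f:
    smpl = pickle.load(f, encoding="latin1")

print(sorted(smpl.keys()))  # expect fields such as 'v_template' and 'shapedirs'
```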

4. Use the following script to download trained models and sample data.

```
bash download_data.sh
```

This will download `amass_isaac_standing_upright_slim.pkl`, a standing-still pose for testing.
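
If you want to peek inside the downloaded motion file, it is a pickled dictionary of motion clips. A small inspection sketch, assuming the PHC-style layout of clip name to field dictionary:

```
import joblib

motions = joblib.load("sample_data/amass_isaac_standing_upright_slim.pkl")
for name, fields in motions.items():
    # Print each field's array shape, or its type for scalars/strings.
    print(name, {k: getattr(v, "shape", type(v).__name__) for k, v in fields.items()})
```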

## Evaluation

### Viewer Shortcuts

| Keyboard | Function |
| ---- | --- |
| f | focus on humanoid |
| Right click + WASD | change viewport |
| Shift + Right click + WASD | change viewport (fast) |
| r | reset episode |
| i | start randomly sampling from PULSE |
| j | apply a large force to the humanoid |
| l | record screen; press again to stop recording |
| ; | cancel screen recording |
| m | cancel termination based on imitation |

More shortcuts can be found in `phc/env/tasks/base_task.py`.

Notes on rendering: I use `pyvirtualdisplay` to record video so that all humanoids are visible at the same time (the default capture function only records the first environment). You can disable it with the flag `no_virtual_display=True`.
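
For reference, the mechanism is roughly this (a minimal `pyvirtualdisplay` sketch, not the repo's exact code):

```
from pyvirtualdisplay import Display

# Start a headless X server; subsequent rendering goes to this virtual
# screen, whose frames can be captured and written out as a video.
virtual_display = Display(visible=False, size=(1920, 1080))
virtual_display.start()

# ... create the simulator, render, capture frames ...

virtual_display.stop()
```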


### Run PULSE

```
python phc/run_hydra.py env.task=HumanoidImDistillGetup env=env_im_vae exp_name=pulse_vae_iclr robot.real_weight_porpotion_boxes=False learning=im_z_fit env.models=['output/HumanoidIm/phc_3/Humanoid_00258000.pth','output/HumanoidIm/phc_comp_3/Humanoid_00023501.pth'] env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl test=True env.num_envs=1 headless=False epoch=-1
```

Press M (to disable termination), then press I (to start sampling) to see randomly sampled motion.


### Train Downstream Tasks

(more coming soon!)

Speed:
```
python phc/run_hydra.py env.task=HumanoidSpeedZ env=env_im_vae exp_name=pulse_vae_iclr robot.real_weight_porpotion_boxes=False learning=im_z_fit env.models=['output/HumanoidIm/phc_3/Humanoid_00258000.pth','output/HumanoidIm/phc_comp_3/Humanoid_00023501.pth'] env.motion_file=sample_data/amass_isaac_standing_upright_slim.pkl test=True env.num_envs=1 headless=False epoch=-1
```





## Training



### Data Processing: AMASS

We train on a subset of the [AMASS](https://amass.is.tue.mpg.de/) dataset.

To process AMASS, first download the dataset from [AMASS](https://amass.is.tue.mpg.de/). Then run the following script on the unzipped data:

```
python scripts/data_process/convert_amass_data.py
```
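
For orientation, each raw AMASS `.npz` stores axis-angle poses, root translations, shape coefficients, and the mocap frame rate. A simplified sketch of what such a conversion pass looks like (illustrative only, not the repo's actual implementation):

```
from pathlib import Path

import joblib
import numpy as np

amass_root, converted = Path("data/amass"), {}
for npz_path in amass_root.rglob("*.npz"):
    entry = np.load(npz_path)
    if "mocap_framerate" not in entry:  # skip shape/metadata files
        continue
    converted[str(npz_path.relative_to(amass_root))] = {
        "poses": entry["poses"],  # (T, 156) axis-angle in SMPL-H layout
        "trans": entry["trans"],  # (T, 3) root translation
        "betas": entry["betas"],  # body shape coefficients
        "fps": float(entry["mocap_framerate"]),
    }

joblib.dump(converted, "amass_converted.pkl")
```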

### Training PULSE

```
python phc/run_hydra.py env.task=HumanoidImDistillGetup env=env_im_vae exp_name=pulse_vae robot.real_weight_porpotion_boxes=False learning=im_z_fit env.models=['output/HumanoidIm/phc_3/Humanoid_00258000.pth','output/HumanoidIm/phc_comp_3/Humanoid_00023501.pth'] env.motion_file=[insert data pkl]
```
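
Here `env.models` points to the pretrained PHC imitation checkpoints that act as teachers for distillation, and `[insert data pkl]` should be the converted AMASS motion file produced by the data processing step above.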


## Citation
If you find this work useful for your research, please cite our paper:
```
@inproceedings{
luo2024universal,
title={Universal Humanoid Motion Representations for Physics-Based Control},
author={Zhengyi Luo and Jinkun Cao and Josh Merel and Alexander Winkler and Jing Huang and Kris M. Kitani and Weipeng Xu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=OrOd8PxOO2}
}
```

Also consider citing these prior works that are used in this project:

```
@inproceedings{Luo2023PerpetualHC,
author={Zhengyi Luo and Jinkun Cao and Alexander W. Winkler and Kris Kitani and Weipeng Xu},
title={Perpetual Humanoid Control for Real-time Simulated Avatars},
booktitle={International Conference on Computer Vision (ICCV)},
year={2023}
}
@inproceedings{rempeluo2023tracepace,
author={Rempe, Davis and Luo, Zhengyi and Peng, Xue Bin and Yuan, Ye and Kitani, Kris and Kreis, Karsten and Fidler, Sanja and Litany, Or},
title={Trace and Pace: Controllable Pedestrian Animation via Guided Trajectory Diffusion},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2023}
}
@inproceedings{Luo2022EmbodiedSH,
title={Embodied Scene-aware Human Pose Estimation},
author={Zhengyi Luo and Shun Iwase and Ye Yuan and Kris Kitani},
booktitle={Advances in Neural Information Processing Systems},
year={2022}
}
@inproceedings{Luo2021DynamicsRegulatedKP,
title={Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation},
author={Zhengyi Luo and Ryo Hachiuma and Ye Yuan and Kris Kitani},
booktitle={Advances in Neural Information Processing Systems},
year={2021}
}
```

## References
This repository is built on top of the following amazing repositories:
* The main code framework is from: [IsaacGymEnvs](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs)
* A large amount of code is from: [PHC](https://github.com/ZhengyiLuo/PHC)
* SMPL models and layers are from: [SMPL-X model](https://github.com/vchoutas/smplx)

Please follow the licenses of the above repositories for usage.





9 changes: 9 additions & 0 deletions download_data.sh
@@ -0,0 +1,9 @@
mkdir -p sample_data
mkdir -p output output/HumanoidIm/ output/HumanoidIm/phc_3 output/HumanoidIm/phc_comp_3 output/HumanoidIm/pulse_vae_iclr
gdown https://drive.google.com/uc?id=1bLp4SNIZROMB7Sxgt0Mh4-4BLOPGV9_U -O sample_data/ # filtered shapes from AMASS
gdown https://drive.google.com/uc?id=1arpCsue3Knqttj75Nt9Mwo32TKC4TYDx -O sample_data/ # all shapes from AMASS
gdown https://drive.google.com/uc?id=1fFauJE0W0nJfihUvjViq9OzmFfHo_rq0 -O sample_data/ # sample standing neutral data.
gdown https://drive.google.com/uc?id=1uzFkT2s_zVdnAohPWHOLFcyRDq372Fmc -O sample_data/ # amass_occlusion_v3
gdown https://drive.google.com/uc?id=1ztyljPCzeRwQEJqtlME90gZwMXLhGTOQ -O output/HumanoidIm/pulse_vae_iclr/
gdown https://drive.google.com/uc?id=1JbK9Vzo1bEY8Pig6D92yAUv8l-1rKWo3 -O output/HumanoidIm/phc_comp_3/
gdown https://drive.google.com/uc?id=1pS1bRUbKFDp6o6ZJ9XSFaBlXv6_PrhNc -O output/HumanoidIm/phc_3/
Empty file added phc/__init__.py
24 binary files not shown.
39 changes: 39 additions & 0 deletions phc/data/assets/mjcf/humanoid_template_local.xml
@@ -0,0 +1,39 @@
<mujoco model="humanoid">
<compiler coordinate="local"/>
<statistic extent="2" center="0 0 1"/>
<option timestep="0.00555"/>

<default>
<motor ctrlrange="-1 1" ctrllimited="true"/>
<geom type="capsule" condim="1" friction="1.0 0.05 0.05" solimp=".9 .99 .003" solref=".015 1"/>
<joint type="hinge" damping="0.1" stiffness="5" armature=".007" limited="true" solimplimit="0 .99 .01"/>
<site size=".04" group="3"/>
<default class="force-torque">
<site type="box" size=".01 .01 .02" rgba="1 0 0 1" />
</default>
<default class="touch">
<site type="capsule" rgba="0 0 1 .3"/>
</default>
</default>

<asset>
<texture type="skybox" builtin="gradient" rgb1=".4 .5 .6" rgb2="0 0 0" width="100" height="100"/>
<texture builtin="flat" height="1278" mark="cross" markrgb="1 1 1" name="texgeom" random="0.01" rgb1="0.8 0.6 0.4" rgb2="0.8 0.6 0.4" type="cube" width="127"/>
<texture builtin="checker" height="100" name="texplane" rgb1="0 0 0" rgb2="0.8 0.8 0.8" type="2d" width="100"/>
<material name="MatPlane" reflectance="0.5" shininess="1" specular="1" texrepeat="60 60" texture="texplane"/>
<material name="geom" texture="texgeom" texuniform="true"/>
</asset>

<worldbody>
<light cutoff="100" diffuse="1 1 1" dir="-0 0 -1.3" directional="true" exponent="1" pos="0 0 1.3" specular=".1 .1 .1"/>
<geom conaffinity="1" condim="3" name="floor" pos="0 0 0" rgba="0.8 0.9 0.8 1" size="100 100 .2" type="plane" material="MatPlane"/>

</worldbody>

<actuator/>

<contact/>

</mujoco>
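
The template above deliberately leaves `<worldbody>`, `<actuator>`, and `<contact>` nearly empty: humanoid bodies are expected to be filled in programmatically (e.g. per SMPL body shape). A minimal sketch of that kind of templating with the standard library; the element names and values are hypothetical, not the repo's actual builder:

```
import xml.etree.ElementTree as ET

tree = ET.parse("phc/data/assets/mjcf/humanoid_template_local.xml")
worldbody = tree.getroot().find("worldbody")

# Hypothetical example body: a pelvis capsule with a free root joint.
pelvis = ET.SubElement(worldbody, "body", name="Pelvis", pos="0 0 0.9")
ET.SubElement(pelvis, "freejoint", name="root")
ET.SubElement(pelvis, "geom", type="capsule", size="0.09",
              fromto="0 -0.07 0 0 0.07 0")

tree.write("humanoid_generated.xml")
```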

