We use exactly the same RLBench train/test split as PerAct, and re-render the same episodes at a higher camera resolution.
- Download the testing episodes from the PerAct repo. Extract the zip files to `./data/peract/raw/test`.
- Rearrange the episodes by their variations:

  ```
  # For each task, separate episodes in all_variations/ into variation0/ ... variationN/
  > python data_preprocessing/rearrange_rlbench_demos.py --root_dir $(pwd)/data/peract/raw/test
  ```
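The rearrangement step can be sketched as follows. This is a hedged sketch, not the actual `rearrange_rlbench_demos.py`; it assumes each episode folder stores its variation id in a `variation_number.pkl` file (the filename RLBench conventionally writes), and that episodes start out under `all_variations/episodes/`.

```python
# Sketch: move each episode under all_variations/episodes/ into the
# variation<N>/episodes/ folder indicated by its variation_number.pkl.
# The file/folder names here are assumptions based on RLBench conventions.
import pickle
import shutil
from pathlib import Path


def rearrange_by_variation(task_dir: str) -> None:
    """Move episodes from all_variations/episodes/ into variation<N>/episodes/."""
    task = Path(task_dir)
    src = task / "all_variations" / "episodes"
    for episode in sorted(src.glob("episode*")):
        # Each episode folder records which task variation it belongs to.
        with open(episode / "variation_number.pkl", "rb") as f:
            n = pickle.load(f)  # integer variation id
        dst = task / f"variation{n}" / "episodes"
        dst.mkdir(parents=True, exist_ok=True)
        shutil.move(str(episode), str(dst / episode.name))
```

The real script may also copy per-variation metadata such as `variation_descriptions.pkl`; the sketch only shows the episode move itself.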
- Download our packaged demonstrations for training from here. Extract the zip file to `./data/peract/`.
- Download the training/validation episodes from the PerAct repo. Extract the zip files to `./data/peract/raw/train` or `./data/peract/raw/val`.
- Run this bash script to re-render the episodes and package them into `.dat` files:

  ```
  # set SPLIT=train for training episodes
  # set SPLIT=val for validation episodes
  > bash scripts/rerender_highres_cameraview.sh
  ```
After these steps, the directory structure should look like this:

```
./data/peract
|------ raw/test
|       |------ close_jar/
|       |       |------ variation0/
|       |       |       |------ variation_descriptions.pkl
|       |       |       |------ episodes/
|       |       |               |------ episode0/
|       |       |               |       |------ low_dim_obs.pkl
|       |       |               |       |------ front_depth/
|       |       |               |       |------ front_rgb/
|       |       |               |       |------ wrist_depth/
|       |       |               |       |------ wrist_rgb/
|       |       |               |       |------ left_shoulder_depth/
|       |       |               |       |------ left_shoulder_rgb/
|       |       |               |       |------ right_shoulder_depth/
|       |       |               |       |------ right_shoulder_rgb/
|       |       |               |
|       |       |               |------ episode1/...
|       |       |------ variation1/...
|       |------ push_buttons/...
|
|------ Peract_packaged/
        |------ train/
        |       |------ close_jar+0/
        |       |       |------ ep0.dat
        |       |       |------ ep1.dat
        |       |
        |       |------ close_jar+1/...
        |
        |------ val/...
```
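As a quick sanity check of the packaged data, you can count the `.dat` episodes per task+variation folder. This is a small hedged helper, not part of the repo; the `task+variation` folder naming (e.g. `close_jar+0`) is taken from the tree above, and the split path is whatever you extracted to.

```python
# Sketch: count packaged ep*.dat files per task+variation folder
# (e.g. data/peract/Peract_packaged/train). Paths are assumptions.
from pathlib import Path


def count_packaged_episodes(split_dir: str) -> dict:
    """Return {task+variation folder name: number of ep*.dat files}."""
    counts = {}
    for task_var in sorted(Path(split_dir).iterdir()):
        if task_var.is_dir():
            counts[task_var.name] = len(list(task_var.glob("ep*.dat")))
    return counts
```

Usage: `count_packaged_episodes("data/peract/Peract_packaged/train")` should return a non-zero count for every task variation you downloaded.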