Zizhang Li, Xiaoyang Lyu, Yuanyuan Ding, Mengmeng Wang, Yiyi Liao, Yong Liu
We use geometry-motivated priors to regularize the unobservable regions in indoor compositional reconstruction.
- Training code
- Evaluation scripts
- Mesh extraction script
- Edited rendering script
- Dataset cleaning
Clone the repository and create an Anaconda environment called `rico` using:
git clone [email protected]:kyleleey/RICO.git
cd RICO
conda create -y -n rico python=3.8
conda activate rico
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
pip install -r requirements.txt
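As an optional sanity check that the environment is working, you can confirm that PyTorch was installed with CUDA support (this snippet is not part of the original setup):

```python
import torch

# The environment above installs PyTorch with cudatoolkit 11.3;
# training requires a CUDA-capable GPU to be visible.
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```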
We provide the processed ScanNet and synthetic scenes at this link. Please download the data and unzip it into the `data` folder; the resulting folder structure should be:
└── RICO
└── data
├── scannet
├── syn_data
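As an optional check that the data landed in the right place, you can verify the two dataset folders from the tree above:

```python
from pathlib import Path

# Both dataset folders should exist after unzipping into data/.
for name in ("scannet", "syn_data"):
    path = Path("data") / name
    print(path, "found" if path.is_dir() else "MISSING")
```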
Run the following command to train RICO (for example, on synthetic scene 1):
cd ./code
bash slurm_run.sh PARTITION CFG_PATH SCAN_ID PORT
where `PARTITION` is the name of the slurm partition you are using. Use `confs/RICO_scannet.conf` or `confs/RICO_synthetic.conf` as `CFG_PATH` to train on a ScanNet or synthetic scene, respectively. You also need to provide a specific `SCAN_ID` and `PORT`.
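For example, a run on synthetic scene 1 could look like the following, where the partition name `gpu` and the port `29500` are placeholders for your own values:

bash slurm_run.sh gpu confs/RICO_synthetic.conf 1 29500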
If you are not in a slurm environment, you can simply run:
python training/exp_runner.py --conf CFG_PATH --scan_id SCAN_ID --port PORT
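For example, for synthetic scene 1 (the port is any free port):

python training/exp_runner.py --conf confs/RICO_synthetic.conf --scan_id 1 --port 29500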
To run quantitative evaluation of object reconstruction and masked background depth on the synthetic scenes:
cd synthetic_eval
python evaluate.py
python evaluate_bgdepth.py
Evaluation results will be saved in `synthetic_eval/evaluation` as `.json` files.
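To skim the saved metrics afterwards, a minimal sketch (the exact file names and keys inside the `.json` files depend on the evaluation scripts' output and are not specified here):

```python
import json
from pathlib import Path

# Print every metrics file the evaluation scripts wrote.
for result_file in sorted(Path("synthetic_eval/evaluation").glob("*.json")):
    with open(result_file) as f:
        print(result_file.name, json.load(f))
```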
We also provide scripts for working with the experiment outputs after training.
To extract the per-object mesh and the combined scene mesh:
cd scripts
python extract_mesh_rico.py
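To take a quick look at the extracted meshes, a sketch using trimesh; the output directory and file pattern below are assumptions, so check the paths configured inside `extract_mesh_rico.py`:

```python
from pathlib import Path

import trimesh

# Placeholder directory: point this at wherever extract_mesh_rico.py
# writes the per-object and combined scene meshes.
for ply_path in sorted(Path("extracted_meshes").glob("*.ply")):
    mesh = trimesh.load(ply_path)
    print(ply_path.name, len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
```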
To render translation-edited results:
cd scripts
python edit_render.py
You can adjust the settings in these scripts to run them on different experiment results.
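For intuition, translating an object in a compositional SDF scene amounts to querying that object's SDF at shifted sample points. A minimal sketch of the idea (this is not the interface of `edit_render.py`):

```python
import torch

def translate_sdf(sdf_fn, t):
    """Return the SDF of the same object translated by t: query at x - t."""
    return lambda x: sdf_fn(x - t)

# Hypothetical example: a unit sphere at the origin, shifted along +x.
sphere_sdf = lambda x: x.norm(dim=-1) - 1.0
moved = translate_sdf(sphere_sdf, torch.tensor([0.5, 0.0, 0.0]))
print(moved(torch.zeros(1, 3)))  # tensor([-0.5]): the origin lies inside the moved sphere
```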
This project is built upon MonoSDF, ObjSDF, and the original VolSDF. To construct the synthetic scenes, we mainly use the functionality of BlenderNeRF. We thank all the authors for their great work and repositories.
If you find our code or paper useful, please cite:
@inproceedings{li2023rico,
author = {Li, Zizhang and Lyu, Xiaoyang and Ding, Yuanyuan and Wang, Mengmeng and Liao, Yiyi and Liu, Yong},
title = {RICO: Regularizing the Unobservable for Indoor Compositional Reconstruction},
booktitle = {ICCV},
year = {2023},
}