
UCSNet

Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness, CVPR 2020 (Oral Presentation).

Introduction

UCSNet is a learning-based framework for multi-view stereo (MVS). If you find this project useful for your research, please cite:

@inproceedings{cheng2020deep,
  title={Deep stereo using adaptive thin volume representation with uncertainty awareness},
  author={Cheng, Shuo and Xu, Zexiang and Zhu, Shilin and Li, Zhuwen and Li, Li Erran and Ramamoorthi, Ravi and Su, Hao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2524--2534},
  year={2020}
}

Reconstruction results on the DTU dataset:

[Figure: point cloud reconstructions on DTU]

How to Use

Environment

  • python 3.6 (Anaconda)
  • pip install -r requirements.txt

Reproducing Results

Compute Depth:

  • Download the pre-processed test sets: Tanks and Temples and DTU. Each dataset should be organized as follows:
root_directory
├── scan1 (scene_name1)
│   ├── images
│   │   ├── 00000000.jpg
│   │   ├── 00000001.jpg
│   │   └── ...
│   ├── cams
│   │   ├── 00000000_cam.txt
│   │   ├── 00000001_cam.txt
│   │   └── ...
│   └── pair.txt
├── scan2 (scene_name2)
└── ...
  • In scripts/test_on_dtu.sh or scripts/test_on_tanks.sh, set root_path to the dataset root directory and save_path to the directory where results should be saved
  • Test on a GPU by running bash scripts/test_on_dtu.sh or bash scripts/test_on_tanks.sh
  • To test your own data, organize the dataset in the same way and generate the data list for the scenes you want to test. View selection is crucial for multi-view stereo, so for each scene you also need to provide the view selection in pair.txt (a parsing sketch follows this list):
TOTAL_IMAGE_NUM
IMAGE_ID0                       # index of reference image 0 
10 ID0 SCORE0 ID1 SCORE1 ...    # 10 best source images for reference image 0 
IMAGE_ID1                       # index of reference image 1
10 ID0 SCORE0 ID1 SCORE1 ...    # 10 best source images for reference image 1 
...
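For reference, here is a minimal sketch of parsing this format into Python; the function name and the returned layout are our own illustration, not part of the released code:

def read_pair_file(path):
    """Parse a pair.txt file into a list of (ref_view, [(src_view, score), ...])."""
    pairs = []
    with open(path) as f:
        num_views = int(f.readline())
        for _ in range(num_views):
            ref_id = int(f.readline())
            tokens = f.readline().split()
            num_src = int(tokens[0])
            # The remaining tokens alternate: ID0 SCORE0 ID1 SCORE1 ...
            srcs = [(int(tokens[1 + 2 * i]), float(tokens[2 + 2 * i]))
                    for i in range(num_src)]
            pairs.append((ref_id, srcs))
    return pairs

Each reference view can then be paired with its top-scoring source views when computing depth.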

Depth Fusion:

  • Download the modified fusibile: git clone https://github.com/YoYo000/fusibile
  • Build it with cmake . followed by make
  • In scripts/fuse_dtu.sh or scripts/fuse_tanks.sh, set exe_path to the path of the fusibile executable, root_path to the directory that contains the test results, and target_path to where you want to save the point clouds
  • Fuse by running bash scripts/fuse_dtu.sh or bash scripts/fuse_tanks.sh

Note: For the DTU results, fusion was performed on an NVIDIA GTX 1080Ti; for the Tanks and Temples results, on an NVIDIA P6000. Because fusibile reads in all depth maps at once, you may need a GPU with around 20GB of memory. You can decrease the depth resolution in the preceding depth-computation step, or try our implementation for depth fusion.

DTU Evaluation:

  • Download the official evaluation tool from the DTU benchmark
  • Put the ground-truth point clouds and the predicted point clouds in the MVS Data/Points folder
  • In GetUsedSets.m, set UsedSets to [1 4 9 10 11 12 13 15 23 24 29 32 33 34 48 49 62 75 77 110 114 118], the test scans used in the literature, then compute the scores with BaseEvalMain_web.m and ComputeStat_web.m
  • The accuracy of each scan is stored in BaseStat.MeanData and the completeness in BaseStat.MeanStl; use their averages as the final accuracy and completeness (see the sketch below)
  • We also provide our pre-computed point clouds for convenience; their evaluation results are:
Accuracy (mm)   Completeness (mm)   Overall (mm)
0.3388          0.3456              0.3422
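The final numbers are plain averages of the per-scan MATLAB outputs. A small sketch of the arithmetic (the function name and input layout are our own, assuming the per-scan vectors have been exported from BaseStat):

import numpy as np

def final_dtu_scores(mean_data, mean_stl):
    """Average per-scan accuracy (BaseStat.MeanData) and completeness
    (BaseStat.MeanStl), both in mm, into the final DTU scores."""
    accuracy = float(np.mean(mean_data))
    completeness = float(np.mean(mean_stl))
    overall = 0.5 * (accuracy + completeness)  # e.g. 0.5 * (0.3388 + 0.3456) = 0.3422
    return accuracy, completeness, overall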

Training

  • Install NVIDIA apex to enable Synchronized Batch Normalization (see the sketch after this list)
  • Download the pre-processed DTU training data from MVSNet, and download our rendered full-resolution ground truth. Place the ground truth in the root directory; the training set needs to be organized as:
root_directory
├── Cameras
├── Rectified
├── Depths_4
└── Depths
  • In scripts/train.sh, set root_path to the root directory and num_gpus to the number of GPUs on the machine (we use eight 1080Ti GPUs in our experiments)
  • Training: bash scripts/train.sh
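As a rough illustration of what the apex dependency is used for, here is a minimal sketch of enabling synchronized BatchNorm in a torch.distributed training script; the stand-in model and launcher flag are placeholders for illustration, not the actual training code:

import argparse

import torch
import torch.distributed as dist
from apex.parallel import DistributedDataParallel, convert_syncbn_model

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # set by torch.distributed.launch
args = parser.parse_args()

dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(args.local_rank)

model = torch.nn.Sequential(           # stand-in for the real UCSNet model
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.BatchNorm2d(8),
)
model = convert_syncbn_model(model)    # replace BatchNorm layers with apex SyncBatchNorm
model = DistributedDataParallel(model.cuda())

With synchronized BatchNorm, batch statistics are aggregated across all GPUs, which matters when the per-GPU batch size is small.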

Acknowledgements

UCSNet builds on MVSNet as its backbone. Thanks to Yao Yao for open-sourcing his excellent work, and to Xiaoyang Guo for his PyTorch implementation, MVSNet_pytorch.
