AACVP-MVSNet

The code for the paper Attention Aware Cost Volume Pyramid Based Multi-view Stereo Network for 3D Reconstruction (AACVP-MVSNet).

The original paper can be found here (arXiv) and here (Elsevier).


This work strongly borrows insights from previous MVS approaches. More details are given in the Acknowledgement section.

0. Introduction

This project is inspired by many previous MVS works, such as MVSNet and CVP-MVSNet. A self-attention layer and group-wise correlation are introduced in our network, aiming at improving the completeness and overall accuracy of 3D reconstruction.

The network structure of AACVP-MVSNet
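
As a rough illustration of these two components, here is a minimal PyTorch sketch of group-wise correlation between reference features and homography-warped source features, plus a basic self-attention block on 2D feature maps. This is not the code used in this repository; the tensor shapes, module layout, and parameter names are assumptions for illustration only.

```python
# Minimal sketch (not this repo's implementation) of group-wise correlation
# and a simple self-attention layer; shapes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def groupwise_correlation(ref_feat, src_feat, num_groups):
    """ref_feat: reference features expanded over D depth hypotheses,
    src_feat: source features warped to each hypothesis; both [B, C, D, H, W].
    Returns a [B, G, D, H, W] group-wise similarity volume."""
    B, C, D, H, W = ref_feat.shape
    assert C % num_groups == 0
    ch_per_group = C // num_groups
    cost = (ref_feat * src_feat).view(B, num_groups, ch_per_group, D, H, W)
    return cost.mean(dim=2)  # average the per-group inner products

class SelfAttention2D(nn.Module):
    """A basic non-local style self-attention block on 2D feature maps."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        B, C, H, W = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # [B, HW, C']
        k = self.key(x).flatten(2)                      # [B, C', HW]
        attn = F.softmax(q @ k, dim=-1)                 # [B, HW, HW]
        v = self.value(x).flatten(2)                    # [B, C, HW]
        out = (v @ attn.transpose(1, 2)).view(B, C, H, W)
        return self.gamma * out + x
```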

If you find this project useful for your research, please cite:

@article{YU2021448,
title = {Attention aware cost volume pyramid based multi-view stereo network for 3D reconstruction},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
volume = {175},
pages = {448-460},
year = {2021},
issn = {0924-2716},
doi = {https://doi.org/10.1016/j.isprsjprs.2021.03.010},
url = {https://www.sciencedirect.com/science/article/pii/S0924271621000794},
author = {Anzhu Yu and Wenyue Guo and Bing Liu and Xin Chen and Xin Wang and Xuefeng Cao and Bingchuan Jiang},
}

The best results of our model are listed below, together with some previous works (Overall is the mean of Acc. and Comp.).

| Methods | Acc. (mm) | Comp. (mm) | Overall (mm) |
| --- | --- | --- | --- |
| PruMVSNet | 0.495 | 0.433 | 0.464 |
| PointMVSNet | 0.361 | 0.421 | 0.391 |
| MVSNet | 0.449 | 0.380 | 0.414 |
| CasMVSNet | 0.325 | 0.385 | 0.355 |
| CVP-MVSNet | 0.296 | 0.406 | 0.351 |
| Ours (BEST) | 0.353 | 0.299 | 0.326 |

Some results on the BlendedMVS dataset are shown below.

Result images for Scene 1, Scene 2, Scene 3 and Scene 4.


1. How to use

Our experiments use the same dataset as CVP-MVSNet, so this code is used in the same way as CVP-MVSNet.

0. Pre-requisites

  • Nvidia GPU with 11 GB or more VRAM
  • CUDA 10.1 or newer
  • Python 3.7
  • Python 2.7 (for the fusion script)

We have only tested our code under the requirements listed above.
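
As a quick sanity check of the environment (a hedged sketch; the ~11 GB figure only mirrors the requirement above), you can verify the CUDA setup and available GPU memory from Python:

```python
# Quick environment check: confirms PyTorch sees CUDA and reports GPU memory,
# so you can compare it against the ~11 GB VRAM requirement listed above.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
```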

1. Clone the source code

git clone https://github.com/ArthasMil/AACVP-MVSNet.git

2. Download testing dataset

Testing data (2 GB):

Download the pre-processed DTU testing data from here and extract it to your own path.

3. Train the model

bash train.sh

4. Generate depth map using our pre-trained model

bash eval.sh

When finished, you can find the depth maps in the outputs_pretrained folder.
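
If you want to inspect a generated depth map directly, the following sketch assumes the outputs are written as .pfm files (the convention in the CVP-MVSNet codebase this repository builds on); the file path at the bottom is purely illustrative.

```python
# Hedged helper for inspecting generated depth maps, assuming PFM output
# (as in CVP-MVSNet). The example path below is illustrative only.
import re
import numpy as np

def read_pfm(filename):
    """Read a grayscale/color PFM file into a numpy array."""
    with open(filename, "rb") as f:
        header = f.readline().decode("ascii").rstrip()
        if header not in ("PF", "Pf"):
            raise ValueError("Not a PFM file.")
        channels = 3 if header == "PF" else 1
        dims = re.match(r"^(\d+)\s+(\d+)\s*$", f.readline().decode("ascii"))
        width, height = int(dims.group(1)), int(dims.group(2))
        scale = float(f.readline().decode("ascii").rstrip())
        endian = "<" if scale < 0 else ">"  # negative scale = little-endian
        data = np.fromfile(f, endian + "f")
        shape = (height, width, channels) if channels == 3 else (height, width)
        return np.flipud(data.reshape(shape))  # PFM rows are stored bottom-to-top

if __name__ == "__main__":
    depth = read_pfm("outputs_pretrained/scan1/depth_est/00000000.pfm")  # illustrative path
    print(depth.shape, float(depth.min()), float(depth.max()))
```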

5. Generate point clouds and reproduce DTU results

Check out Yao Yao's modified version of fusibile

git clone https://github.com/YoYo000/fusibile

Install fusibile with cmake . and make, which will generate the executable at FUSIBILE_EXE_PATH.

Link the fusibile executable into the fusion folder (note: replace FUSIBILE_EXE_PATH with the path to your fusibile executable):

ln -s FUSIBILE_EXE_PATH ./fusion/fusibile

Install extra dependencies

pip2 install -r ./fusion/requirements_fusion.txt

Use the provided script to generate point clouds with fusibile.

cd ./fusion/

bash fusion.sh

Move the final 3D model files to the output folder:

python2 fusibile_to_dtu_eval.py

Evaluate the point clouds using the DTU evaluation code.

6. Pre-trained model

The pretrained model is available on BaiduYun (code: pse8). Put it in your own folder and modify the path in eval_AACVPMVSNet.sh.
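
A minimal sketch of inspecting and loading the downloaded checkpoint with PyTorch follows; the file name and the 'model' key are assumptions, so check the actual keys in the checkpoint you download.

```python
# Hedged sketch: load the pretrained checkpoint and inspect its contents.
# The path and the 'model' key are assumptions, not guaranteed by this repo.
import torch

ckpt = torch.load("./pretrained/aacvp_mvsnet.ckpt", map_location="cpu")  # illustrative path
if isinstance(ckpt, dict):
    print("Checkpoint keys:", list(ckpt.keys()))
    state_dict = ckpt.get("model", ckpt)  # many MVSNet-style repos store weights under 'model'
else:
    state_dict = ckpt
# net.load_state_dict(state_dict)  # 'net' would be the instantiated AACVP-MVSNet network
```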

Acknowledgement

This work is supported by the National Natural Science Foundation of China (No. 41801388 and No. 41801319).

This repository is MAINLY based on the CVP-MVSNet repository by Jiayu Yang. Many thanks to Jiayu Yang for the great project!

The fusion implementation for the T&T dataset is not based on the fusibile toolbox; instead, we use mixed probability masks at different levels and borrow insights from D2HC-RMVSNet and CasMVSNet_pl. Thanks to Hongwei Yi and Kwea123 for their work.
