This is an improved version of MaskNet (https://github.com/vinits5/masknet), and the code is mainly based on it.
Source Code Author: Ruqin Zhou
- pytorch==1.3.0+cu92
- transforms3d==0.3.1
- h5py==2.9.0
- ninja==1.9.0.post1
- tensorboardX==1.8
./learning3d/data_utils/download_data.sh
conda create -n masknet python=3.7
pip install -r requirements.txt
python train.py --exp_name exp_masknet --partial 1 --noise 0 --outliers 0
python train.py --eval 1 --pretrained ./pretrained/exp_masknet/best_model_0.7.t7 --partial 0 --noise 0 --outliers 1
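The corruption flags follow the same 0/1 pattern in both commands above; assuming --partial, --noise and --outliers each toggle the corresponding corruption, a model for noisy partial data could be trained with something like the following (the experiment name here is arbitrary):

```bash
# Assumed usage: treat --partial/--noise/--outliers as 0/1 switches.
python train.py --exp_name exp_masknet_noise --partial 1 --noise 1 --outliers 0
```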
CUDA_VISIBLE_DEVICES=0 python test.py --pretrained ./pretrained/exp_masknet/best_model_0.7.t7 --reg_algorithm 'pointnetlk'
MaskNet can be paired with a number of registration algorithms, as listed below (a usage example follows the list):
- PointNetLK
- Deep Closest Point (DCP)
- Iterative Closest Point (ICP)
- PRNet
- PCRNet
- RPMNet
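To pair MaskNet with one of these algorithms, pass its name to the --reg_algorithm flag of test.py, as in the command above. Only 'pointnetlk' appears in this README; the strings for the other algorithms are assumptions here (presumably their lowercase names), so check the --reg_algorithm choices in test.py if a command is rejected.

```bash
# Sketch (assumed flag value): run MaskNet followed by ICP-based registration.
# 'icp' is an assumed choice string; see test.py for the accepted values.
CUDA_VISIBLE_DEVICES=0 python test.py --pretrained ./pretrained/exp_masknet/best_model_0.7.t7 --reg_algorithm 'icp'
```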
In test.py, replace the template and source variables with your own data on lines 156 and 157. Ground-truth values for the mask and for the transformation between template and source can be provided by changing the variables on lines 158 and 159, respectively.
python test.py --user_data True --reg_algorithm 'pointnetlk'
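As a minimal sketch of what the assignments on those lines might look like (the file names, array shapes, and the gt_mask/igt names below are assumptions, not the repository's actual identifiers; use the variables found at lines 156-159 of test.py):

```python
# Hypothetical sketch: load user point clouds as [1, N, 3] float tensors.
import numpy as np
import torch

template = torch.from_numpy(np.load('my_template.npy')).float().unsqueeze(0)  # line 156 (assumed file)
source = torch.from_numpy(np.load('my_source.npy')).float().unsqueeze(0)      # line 157 (assumed file)

# Optional ground truth (assumed names): a mask over the template points and
# the rigid transformation between template and source.
gt_mask = None
igt = None
```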
cd evaluation && chmod +x evaluate.sh && ./evaluate.sh
python download_3dmatch.py
python test_3DMatch.py
python plot_figures.py
python make_video.py
This project is released under the MIT License.
We would like to thank the authors of PointNet, PRNet, RPM-Net, PointNetLK and [MaskNet](https://github.com/vinits5/masknet) for sharing their code.