Unofficial PyTorch implementation of the CREStereo model (CVPR 2022 Oral), converted from the original MegEngine implementation.
Update 2023/01/03:
- Enabled DistributedDataParallel (DDP) training; training is much faster than before.
```shell
# train with DDP
# set 'dist' to True in the /cfgs/train.yaml file
python -m torch.distributed.launch --nproc_per_node=8 train.py

# train with DP
# set 'dist' to False in the /cfgs/train.yaml file
python train.py
```
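For context, DDP wraps the model so that gradients are averaged across processes on every backward pass; `torch.distributed.launch` spawns one such process per GPU. The sketch below is a minimal, hedged illustration of that mechanism (a single-process CPU run with the `gloo` backend and a tiny stand-in network, not the repository's actual `train.py`):

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process demo: rank 0 of a world of size 1, gloo (CPU) backend.
# In real DDP training, torch.distributed.launch starts one process per GPU.
dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:29501",
    rank=0,
    world_size=1,
)

model = nn.Linear(8, 2)   # stand-in for the stereo network
ddp_model = DDP(model)    # gradients are all-reduced across ranks on backward()
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

inputs = torch.randn(4, 8)
targets = torch.randn(4, 2)
loss = nn.functional.mse_loss(ddp_model(inputs), targets)
loss.backward()           # gradient synchronization happens here
optimizer.step()

loss_value = loss.item()
dist.destroy_process_group()
```

With `--nproc_per_node=8`, eight copies of this process run in parallel, each on its own GPU and data shard, which is why DDP training is much faster than single-process DP.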
- This repository is an effort to port the CREStereo model from MegEngine to PyTorch, since the original framework is difficult to convert to other formats (megvii-research/CREStereo#3).
- I am not the author of the paper and do not fully understand every detail of the model. Therefore, there might be small differences from the original model that impact performance.
- I have not added a license, since the repository uses code from several other repositories. Check the License section below for more detail.
- Download the model from here and save it into the models folder.
- The model was converted from the original MegEngine weights using the convert_weights.py script. Place the MegEngine weights file (crestereo_eth3d.mge) into the models folder before running the conversion.
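Conceptually, such a conversion loads the MegEngine checkpoint as numpy arrays and copies each one into a PyTorch state dict. The sketch below shows only that core idea with made-up layer names; the real convert_weights.py handles the model's actual parameter names and any key remapping:

```python
import numpy as np
import torch

def convert_state_dict(mge_weights: dict) -> dict:
    """Turn a dict of numpy arrays (as loaded from a MegEngine
    checkpoint) into a PyTorch state dict of tensors."""
    state_dict = {}
    for key, value in mge_weights.items():
        # MegEngine stores parameters as numpy arrays; PyTorch wants tensors.
        state_dict[key] = torch.from_numpy(np.ascontiguousarray(value))
    return state_dict

# Fake checkpoint with hypothetical layer names, just for illustration.
fake_mge = {
    "fnet.conv1.weight": np.random.randn(64, 3, 7, 7).astype(np.float32),
    "fnet.conv1.bias": np.random.randn(64).astype(np.float32),
}
converted = convert_state_dict(fake_mge)
```

The resulting dict can then be passed to `model.load_state_dict(converted)` and saved with `torch.save`.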
- CREStereo (Apache License 2.0): https://github.com/megvii-research/CREStereo/blob/master/LICENSE
- RAFT (BSD 3-Clause): https://github.com/princeton-vl/RAFT/blob/master/LICENSE
- LoFTR (Apache License 2.0): https://github.com/zju3dv/LoFTR/blob/master/LICENSE
- CREStereo: https://github.com/megvii-research/CREStereo
- RAFT: https://github.com/princeton-vl/RAFT
- LoFTR: https://github.com/zju3dv/LoFTR
- Grid sample replacement: https://zenn.dev/pinto0309/scraps/7d4032067d0160
- torch2mge: https://github.com/MegEngine/torch2mge
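The grid-sample link above covers replacing `F.grid_sample` with plain tensor ops, which helps when exporting to formats that lack the operator. A hedged sketch of such a replacement for one common configuration (`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=True`), verified against PyTorch's built-in op:

```python
import torch
import torch.nn.functional as F

def bilinear_grid_sample(im, grid):
    """Manual stand-in for F.grid_sample(im, grid, mode='bilinear',
    padding_mode='zeros', align_corners=True), built only from
    export-friendly ops (pad, gather, arithmetic)."""
    n, c, h, w = im.shape
    gh, gw = grid.shape[1:3]
    # Un-normalize grid coords from [-1, 1] to pixel indices (align_corners=True).
    x = (grid[..., 0] + 1) / 2 * (w - 1)
    y = (grid[..., 1] + 1) / 2 * (h - 1)
    x = x.reshape(n, -1)
    y = y.reshape(n, -1)
    x0 = torch.floor(x).long()
    y0 = torch.floor(y).long()
    x1, y1 = x0 + 1, y0 + 1
    # Bilinear weights of the four neighbouring pixels.
    wa = ((x1 - x) * (y1 - y)).unsqueeze(1)
    wb = ((x1 - x) * (y - y0)).unsqueeze(1)
    wc = ((x - x0) * (y1 - y)).unsqueeze(1)
    wd = ((x - x0) * (y - y0)).unsqueeze(1)
    # Zero-pad by one pixel so out-of-range reads return 0 (padding_mode='zeros').
    im_p = F.pad(im, [1, 1, 1, 1]).view(n, c, -1)
    pw = w + 2
    x0 = (x0 + 1).clamp(0, pw - 1)
    x1 = (x1 + 1).clamp(0, pw - 1)
    y0 = (y0 + 1).clamp(0, h + 1)
    y1 = (y1 + 1).clamp(0, h + 1)

    def gather(xi, yi):
        idx = (yi * pw + xi).unsqueeze(1).expand(-1, c, -1)
        return torch.gather(im_p, 2, idx)

    out = (gather(x0, y0) * wa + gather(x0, y1) * wb +
           gather(x1, y0) * wc + gather(x1, y1) * wd)
    return out.view(n, c, gh, gw)

im = torch.randn(1, 2, 5, 5)
grid = torch.rand(1, 3, 3, 2) * 2 - 1  # random sampling coords in [-1, 1]
ref = F.grid_sample(im, grid, mode='bilinear',
                    padding_mode='zeros', align_corners=True)
assert torch.allclose(bilinear_grid_sample(im, grid), ref, atol=1e-5)
```

Other configurations (e.g. `align_corners=False` or border padding) need different un-normalization and clamping; see the linked write-up for details.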