Demo video: edm_video_demo.mp4
conda env create -f environment.yaml
conda activate edm
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y
pip install -r requirements.txt
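Optionally, as a quick sanity check that is not part of the setup scripts above, you can confirm that the pinned PyTorch build installed correctly and can see a CUDA device:

# Optional sanity check (not part of the original setup steps):
# verifies the pinned PyTorch version and CUDA availability.
import torch
print(torch.__version__)          # expect 2.0.0
print(torch.cuda.is_available())  # expect True on a CUDA 11.8-capable machine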
We provide our pretrained model and a fixed ONNX model on Google Drive and Baidu Netdisk. Please place the checkpoint in the weights/ folder and the ONNX model in the deploy/ folder.
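A minimal sketch for verifying that the downloaded checkpoint loads; the file name weights/edm.ckpt is an assumption, so substitute whatever name the release ships with:

import torch

# Load the checkpoint on CPU just to inspect it; "weights/edm.ckpt" is an assumed file name.
ckpt = torch.load("weights/edm.ckpt", map_location="cpu")
print(list(ckpt.keys()))  # a Lightning-style checkpoint typically exposes a 'state_dict' entry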
See demo_single_pair.ipynb
See the deploy/ subdirectory.
Export the ONNX model first:
cd deploy
pip install -r requirements_deploy.txt
python export_onnx.py
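After exporting, you can optionally validate the graph with the ONNX checker. The output file name edm.onnx below is an assumption; match it to whatever export_onnx.py actually writes:

import onnx

# Structural validation of the exported graph; raises an exception if the model is malformed.
model = onnx.load("edm.onnx")  # assumed output name of export_onnx.py
onnx.checker.check_model(model)
print("ONNX graph is well formed")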
Run the demo with ONNX Runtime using the TensorRT backend:
python run_onnx.py
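run_onnx.py handles backend selection for you; as a rough sketch of what choosing the TensorRT backend in ONNX Runtime looks like (the model path here is an assumption, and the input/output names depend on how export_onnx.py defined them):

import onnxruntime as ort

# Execution providers are tried in order: TensorRT first, then CUDA, then CPU fallback.
providers = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("edm.onnx", providers=providers)  # assumed model path

# Inspect the input/output names before feeding tensors to session.run().
print([inp.name for inp in session.get_inputs()])
print([out.name for out in session.get_outputs()])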
Refer to edm_onnx_cpp
Set up the test subsets of ScanNet and MegaDepth first.
The test and training data can be downloaded via the download links provided by LoFTR.
Create symlinks from the previously downloaded datasets to data/{{dataset}}/test.
# set up symlinks
ln -s /path/to/scannet-1500-testset/* data/scannet/test
ln -s /path/to/megadepth-1500-testset/* data/megadepth/test
bash scripts/reproduce_test/outdoor.sh
bash scripts/reproduce_test/indoor.sh
Prepare training data according to the settings of LoFTR.
bash scripts/reproduce_train/outdoor.sh
Part of the code is based on EfficientLoFTR and RLE. We thank the authors for their useful source code.
If you find this project useful, please cite:
@article{li2025edm,
title={EDM: Efficient Deep Feature Matching},
author={Li, Xi and Rao, Tong and Pan, Cihui},
journal={arXiv preprint arXiv:2503.05122},
year={2025}
}