Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation - NeurIPS 2024
arXiv | Project Page | Demo | Video | Poster
This is the official implementation of Depth Anywhere, which proposes cross-camera knowledge distillation: it leverages the abundance of perspective training data and the capabilities of perspective foundation depth models to improve 360 monocular depth estimation (see the sketch below).
Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation
Ning-Hsu Wang, Yu-Lun Liu¹
¹National Yang Ming Chiao Tung University
- Sep 26, 2024: Paper accepted to NeurIPS 2024
- Jun 24, 2024: Hugging Face demo released
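To give a feel for the core idea, here is a minimal, self-contained sketch of the distillation loop. It is not the official training code: tiny convolutional layers stand in for the perspective teacher (Depth Anything in the paper) and the 360 student (e.g., UniFuse), the perspective re-projection step is only indicated in comments, and `affine_invariant_loss` is an illustrative stand-in for the paper's actual objectives.

```python
import torch
import torch.nn as nn

def affine_invariant_loss(pred, target, eps=1e-6):
    # Normalize both depth maps to zero median and unit mean absolute
    # deviation per image before comparing, since a monocular teacher's
    # depth is only defined up to an unknown scale and shift.
    b = pred.shape[0]
    p, t = pred.reshape(b, -1), target.reshape(b, -1)
    p = p - p.median(dim=1, keepdim=True).values
    p = p / (p.abs().mean(dim=1, keepdim=True) + eps)
    t = t - t.median(dim=1, keepdim=True).values
    t = t / (t.abs().mean(dim=1, keepdim=True) + eps)
    return (p - t).abs().mean()

# Hypothetical stand-ins: the real teacher is a frozen perspective foundation
# model (Depth Anything) and the real student is a 360 network such as UniFuse.
teacher = nn.Conv2d(3, 1, kernel_size=3, padding=1).requires_grad_(False)
student = nn.Conv2d(3, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

equi_rgb = torch.rand(2, 3, 128, 256)  # an unlabeled equirectangular batch

with torch.no_grad():
    # In the actual pipeline the panorama is first re-projected into
    # perspective views, labeled by the teacher, and the predictions are
    # mapped back to the equirectangular domain; applying the teacher
    # directly to the panorama here is a deliberate simplification.
    pseudo_depth = teacher(equi_rgb)

pred = student(equi_rgb)  # the student predicts directly on the panorama
loss = affine_invariant_loss(pred, pseudo_depth)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key design choice is that the frozen teacher only ever labels perspective views, while the student learns on the full panorama, so no 360 ground truth is required.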
If you only need to run our code, we provide a simplified environment setup:
```bash
conda create --name depth-anywhere python=3.8
conda activate depth-anywhere
pip install -r requirements.txt
```
If you plan to train with a different teacher or student model, you can set up the environment in three steps:
- Create your conda environment
```bash
conda create --name depth-anywhere python=3.8
conda activate depth-anywhere
```
- Install the environment of your perspective foundation model (we take Depth Anything as an example)
```bash
git clone https://github.com/LiheYoung/Depth-Anything
cd Depth-Anything
pip install -r requirements.txt
```
- Install the 360 depth baseline model (we take UniFuse as an example)
```bash
cd baseline_models
git clone https://github.com/alibaba/UniFuse-Unidirectional-Fusion.git
pip install -r requirements.txt
```
You can download checkpoints from link and place them under checkpoints as mentioned in checkpoints/README.md.
For model inference, run:
```bash
python inference.py \
    --input_dir [Path to your input dir, default: data/examples/sf3d] \
    --pretrained_weight [Path to your checkpoint .pth file] \
    --output_dir [Path where you would like to store your output files, default: outputs]
```
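For example, using the provided sample images (the checkpoint filename below is illustrative; use whichever .pth file you placed under checkpoints):

```bash
python inference.py \
    --input_dir data/examples/sf3d \
    --pretrained_weight checkpoints/unifuse.pth \
    --output_dir outputs
```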
Data Preparation

To reproduce the paper's setting, download the following datasets under data. We follow the official split for each dataset.
Model Training

For model training, we follow the training settings of all baseline models in our paper for a fair comparison. Feel free to tune the hyperparameters or swap in different student/teacher models to achieve better results.
We put example config files for the four models used in our paper under configs.
To run the training script:
```bash
python train.py --config [Path to config file]
```
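For instance, with one of the example configs (the filename below is illustrative; use any config file under configs):

```bash
python train.py --config configs/unifuse.yaml
```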
Citation

```bibtex
@article{wang2024depthanywhere,
  title={Depth Anywhere: Enhancing 360 monocular depth estimation via perspective distillation and unlabeled data augmentation},
  author={Wang, Ning-Hsu and Liu, Yu-Lun},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  year={2024}
}
```
Acknowledgements

We sincerely appreciate the following research, code, and datasets that made our research possible.