# GuidedDecoding

Accompanying repository for the 2022 ICRA paper "Lightweight Monocular Depth Estimation through Guided Decoding"

## Trained weights

| Dataset | Resolution | Model version |
| --- | --- | --- |
| NYU Depth V2 | 240x320 (Half) | GuideDepth |
| NYU Depth V2 | 240x320 (Half) | GuideDepth-S |
| NYU Depth V2 | 480x640 (Full) | GuideDepth |
| NYU Depth V2 | 480x640 (Full) | GuideDepth-S |
| KITTI | 192x640 (Half) | GuideDepth |
| KITTI | 192x640 (Half) | TODO |
| KITTI | 384x1280 (Full) | GuideDepth |
| KITTI | 384x1280 (Full) | TODO |

## Evaluation procedure (on GPU)

For the evaluation, download the already prepared test sets from here:

- NYU Depth V2
- KITTI

Unpack the data for evaluation, then run:

```
python main.py --eval --dataset DATASET --resolution RESOLUTION --model MODEL_NAME --test_path PATH_TO_TEST_DATA --num_workers=NUM_WORKERS --save_results PATH_TO_RESULTS
```

You can select from the following options:

```
[RESOLUTION: full, half]
[DATASET: nyu_reduced, kitti]
```
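
For example, a half-resolution evaluation of GuideDepth on the NYU Depth V2 test set might look like the following; the paths here are placeholders, not paths the repository prescribes:

```
python main.py --eval --dataset nyu_reduced --resolution half --model GuideDepth --test_path ./data/nyu_testset --num_workers=4 --save_results ./results
```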

## Inference and deployment

We performed our evaluation on the NVIDIA Jetson Nano and the NVIDIA Xavier NX, using the following dependencies:

- Jetpack: 4.5.1
- CUDA: 10.2
- cuDNN: 8.0.0
- Python: 3.6.9
- TensorRT: 7.1.3
- PyTorch: 1.8.0
- torchvision: 0.9.1
- torch2trt: 0.2.0

To install PyTorch and torchvision, refer to this post: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-11-now-available/72048

To install torch2trt, see: https://github.com/NVIDIA-AI-IOT/torch2trt

You might need to increase the swap memory to 4 GB for the TensorRT conversion: https://github.com/JetsonHacksNano/resizeSwapMemory
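
As a rough illustration of the TensorRT conversion step, a torch2trt call typically looks like the sketch below. The `load_model` import and the weight file name are assumptions for illustration, not the repository's actual code; the input shape matches the half-resolution NYU setting:

```python
import torch
from torch2trt import torch2trt

# Hypothetical loader; the repository's actual model-loading code may differ.
from model.loader import load_model

# Load the trained PyTorch model and switch to inference mode on the GPU.
model = load_model('GuideDepth', 'weights/GuideDepth_nyu_half.pth').eval().cuda()

# Example input tensor for the half-resolution NYU setting (1 x 3 x 240 x 320).
x = torch.randn(1, 3, 240, 320).cuda()

# Convert to a TensorRT-optimized module; fp16 is common on Jetson devices.
model_trt = torch2trt(model, [x], fp16_mode=True)

# The converted module is called like a regular PyTorch module.
depth = model_trt(x)
```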

### Usage

```
python3 inference.py --eval --model MODEL_NAME --resolution RESOLUTION --dataset DATASET --weights_path PATH_TO_WEIGHTS --save_results PATH_TO_RESULTS --test_path PATH_TO_TEST_DATA
```

Select from the following options:

```
[RESOLUTION: full, half]
[DATASET: nyu_reduced, kitti]
```
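
For instance, half-resolution inference on NYU Depth V2 with downloaded weights might be invoked as follows; all paths and file names are placeholders:

```
python3 inference.py --eval --model GuideDepth --resolution half --dataset nyu_reduced --weights_path ./weights/GuideDepth_nyu_half.pth --save_results ./results --test_path ./data/nyu_testset
```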

## Training

You will need the pretrained weights for DDRNet-23 slim, which can be downloaded here or acquired from the official repository.

### Preparing NYU Depth V2

We use a subset of NYU Depth V2 designed and prepared by Alhashim et al. (https://github.com/ialhashim/DenseDepth).

To train, download the dataset linked in their repository. There is no need to unpack it; the dataloader reads the compressed data directly.
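
For intuition, reading samples straight out of such a zip archive can be done along the lines of the sketch below. The archive and member file names are assumptions for illustration; the repository's dataloader is the authoritative version:

```python
import io
import zipfile

import numpy as np
from PIL import Image

# Open the compressed dataset once; members are decompressed on demand.
archive = zipfile.ZipFile('nyu_data.zip')  # hypothetical archive name

def load_pair(rgb_name, depth_name):
    """Decode one RGB/depth image pair directly from the archive."""
    rgb = np.asarray(Image.open(io.BytesIO(archive.read(rgb_name))))
    depth = np.asarray(Image.open(io.BytesIO(archive.read(depth_name))))
    return rgb, depth
```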

### Preparing KITTI

Coming soon!

### Training procedure

Run:

```
python main.py --train --dataset DATASET --resolution RESOLUTION --model MODEL_NAME --data_path PATH_TO_TRAINING_DATA --num_workers=NUM_WORKERS --save_checkpoint PATH_TO_CHECKPOINTS
```
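
A concrete half-resolution training run on NYU Depth V2 could look like this; the paths are placeholders:

```
python main.py --train --dataset nyu_reduced --resolution half --model GuideDepth --data_path ./data/nyu_data.zip --num_workers=4 --save_checkpoint ./checkpoints
```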
