I'm a little confused about the inconsistency in the number of datasets #66
Follow `python utils/download_from_gdrive.py 1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP sync.zip`:
I am not sure whether the amount of training data in the paper is the same as that in nyu_train.txt.
It's interesting that everyone's papers say 50K training pairs. Maybe everyone uses sync.zip.
Thanks for finding the typo in our paper. It is true that everyone uses sync.zip. :D
From "From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation":
About NYUv2:
Paper: "We train our network on a 50K RGB-Depth pairs subset following previous works."
dataset_prepare.md: "Following previous work, I utilize about 50K image-depth pairs as our training set and standard 652 images as the validation set."
nyu_train.txt: only 24,231 pairs of data.
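For context, `utils/download_from_gdrive.py` fetches `sync.zip` by its Google Drive file ID. A minimal sketch of what such a helper typically does (the direct-URL approach and function names are assumptions for illustration, not the repository's actual script):

```python
import urllib.request


def gdrive_url(file_id: str) -> str:
    """Build the direct-download URL for a public Google Drive file."""
    return f"https://drive.google.com/uc?export=download&id={file_id}"


def download(file_id: str, dest: str) -> None:
    # Small public files download directly; large files (like sync.zip)
    # additionally require a confirmation-token round trip, omitted here.
    urllib.request.urlretrieve(gdrive_url(file_id), dest)


if __name__ == "__main__":
    # File ID quoted in the thread, destination name from the command above.
    print(gdrive_url("1AysroWpfISmm-yRFGBgFTrLy6FjQwvwP"))
```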