
Clarification on Varying Evaluation Metrics Across Datasets in 3D Reconstruction #20

Open
wheltz opened this issue Aug 16, 2024 · 0 comments

wheltz commented Aug 16, 2024

Hello,
I'm new to the field of 3D reconstruction and have a question about why PSNR, SSIM, and LPIPS are calculated differently across datasets. I understand that this variation often stems from baseline methods choosing different evaluation protocols. However, when assessing the actual performance of some methods, I find many of the reported numbers in this field hard to trust: when others reproduce a method on a new dataset, it is often unclear which specific evaluation implementation was used. This inconsistency has been confusing for me. Is this variation in evaluation metrics a common practice in the field?
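
For concreteness, here is a small sketch of what I mean: the same image pair can yield noticeably different scores depending on implementation choices. This uses scikit-image and the `lpips` package; the specific settings below (data range, SSIM windowing, LPIPS backbone) are just illustrative assumptions on my part, not what any particular paper does.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Two toy images in [0, 1], standing in for a ground-truth view and a rendering.
rng = np.random.default_rng(0)
gt = rng.random((256, 256, 3)).astype(np.float32)
pred = np.clip(gt + rng.normal(0, 0.05, gt.shape).astype(np.float32), 0, 1)

# PSNR: the value depends on the assumed data range (float [0,1] vs uint8 [0,255]),
# and quantizing to uint8 before measuring also shifts the result slightly.
psnr_float = peak_signal_noise_ratio(gt, pred, data_range=1.0)
psnr_uint8 = peak_signal_noise_ratio(
    (gt * 255).astype(np.uint8), (pred * 255).astype(np.uint8), data_range=255
)
print(f"PSNR float={psnr_float:.2f} dB, uint8={psnr_uint8:.2f} dB")

# SSIM: windowing choices matter. scikit-image defaults to a uniform 7x7 window;
# the original SSIM paper uses an 11x11 Gaussian window (sigma=1.5).
ssim_default = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
ssim_gaussian = structural_similarity(
    gt, pred, channel_axis=-1, data_range=1.0,
    gaussian_weights=True, sigma=1.5, use_sample_covariance=False,
)
print(f"SSIM default={ssim_default:.4f}, gaussian={ssim_gaussian:.4f}")

# LPIPS: the backbone network (AlexNet vs VGG) gives different distances,
# so papers reporting "LPIPS" with different backbones are not comparable.
to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None] * 2 - 1  # [-1, 1]
for net in ("alex", "vgg"):
    model = lpips.LPIPS(net=net)
    with torch.no_grad():
        d = model(to_tensor(gt), to_tensor(pred)).item()
    print(f"LPIPS ({net}): {d:.4f}")
```

Unless a paper pins down these choices (or releases its evaluation script), it seems hard to compare numbers across tables, which is the core of my confusion.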
