Depth map inpainting #1
Thank you very much for your interest in our work. Traditional monocular depth estimators cannot complete partial depth information, so their pipeline is monocular depth estimation followed by a scale-based alignment to the known depth. In the visualization results, we re-attached the known areas to better demonstrate the discontinuities: even after alignment, the predictions remain discontinuous with the true depth, which is one of the motivations of our work.
Thank you for your reply. Could you please provide the corresponding inference code? I would like to test the results, and it would help my future work.
The inference code in Marigold can be used directly. They provide a complete least-squares alignment implementation.
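For reference, here is a minimal sketch of the kind of least-squares scale-and-shift alignment being discussed, written in plain NumPy. It is not Marigold's actual code; the function and variable names are illustrative. It also shows the re-attachment of known regions mentioned above, which makes the remaining seams visible.

```python
import numpy as np

def align_scale_shift(pred, known_depth, known_mask):
    """Least-squares fit of scale s and shift t so that s * pred + t ~= known_depth
    over the known (valid) pixels. All inputs are HxW arrays; known_mask is boolean."""
    x = pred[known_mask]          # predicted relative depth at known pixels
    y = known_depth[known_mask]   # measured / ground-truth depth at known pixels
    A = np.stack([x, np.ones_like(x)], axis=1)      # design matrix [x, 1]
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)  # solve for scale and shift
    aligned = s * pred + t
    # Re-attach the known regions for visualization: the seams at the mask
    # boundary are the discontinuities that survive a global affine alignment.
    composite = np.where(known_mask, known_depth, aligned)
    return aligned, composite
```

A single global scale and shift cannot correct spatially varying errors, which is why the composited result still shows discontinuities at the boundary of the known regions.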
OK. If I attach a learnable offset after Depth Anything V2, and then a learnable scale and shift on top of that, would this give a better result, partially alleviating the discontinuity problem and thus producing the complete depth information?
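To make the question concrete, below is a hypothetical PyTorch sketch of such a refinement head: a frozen relative-depth backbone followed by a learnable pixel-wise offset and a learnable global scale and shift. The backbone interface (taking an image and returning a (B, 1, H, W) depth map) is an assumption, and whether this actually reduces the discontinuities is exactly the open question being asked here.

```python
import torch
import torch.nn as nn

class DepthRefiner(nn.Module):
    """Hypothetical refinement head: frozen relative-depth backbone (e.g. a
    Depth Anything V2 model, interface assumed) + learnable offset, scale, shift."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)          # keep the backbone frozen
        self.offset_head = nn.Sequential(    # learnable per-pixel offset on the raw prediction
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.scale = nn.Parameter(torch.ones(1))
        self.shift = nn.Parameter(torch.zeros(1))

    def forward(self, image):
        with torch.no_grad():
            rel = self.backbone(image)       # (B, 1, H, W) relative depth, assumed output shape
        return self.scale * (rel + self.offset_head(rel)) + self.shift
```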
Thanks for your work. I was reading the "DepthLab" paper and was curious about the depth results in Fig. 3, e.g., how is Depth Anything V2 used to implement the depth completion task? Is there any comparison against predicting depth directly from the RGB image, which would usually have smoother boundaries?