
Depth map inpainting #1

Open
mldemox opened this issue Dec 26, 2024 · 4 comments

Comments

@mldemox

mldemox commented Dec 26, 2024

Thanks for your work. I was reading the "DepthLab" paper and was curious about the Fig. 3 depth results, e.g. how is Depth Anything V2 used to implement the depth completion task? Was any comparison made against predicting depth directly from the RGB image, which will usually have smoother boundaries?

@Johanan528
Collaborator

Thank you very much for your interest in our work. In fact, traditional monocular depth estimators cannot complete partial depth maps, so their pipeline is monocular depth estimation followed by a scale-based alignment. In the visualization results, we pasted the known regions back in to better demonstrate the discontinuities. That is to say, even after alignment, there are still discontinuities relative to the true depth, which is one of the contributions of our work.

@mldemox
Author

mldemox commented Dec 31, 2024

Thank you for your reply. Could you please provide the corresponding inference code? I'd like to test the results, and it would help my future work.

@Johanan528
Collaborator

The inference code in Marigold can be used directly; it provides a complete least-squares alignment algorithm.
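For reference, the scale-and-shift alignment step mentioned above can be sketched in a few lines of numpy (the function name and toy data here are illustrative, not Marigold's actual API): fit a global scale and shift by least squares so the relative prediction matches the ground-truth depth in the known region, then apply them everywhere.

```python
import numpy as np

def align_depth_least_squares(pred, target, mask):
    """Fit scale s and shift t minimizing ||s*pred + t - target||^2 over mask."""
    p = pred[mask].ravel()
    g = target[mask].ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)  # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t

# toy usage: ground truth is exactly 2*pred + 1, but only the top half is "known"
pred = np.random.rand(8, 8)
gt = 2.0 * pred + 1.0
mask = np.zeros((8, 8), dtype=bool)
mask[:4] = True
aligned = align_depth_least_squares(pred, gt, mask)
```

Because the affine relation here is global, fitting on the known half recovers it exactly; on real data the residual after this fit is precisely the boundary discontinuity discussed above.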

@booker-max

> Thank you very much for your interest in our work. In fact, traditional monocular depth estimators do not possess the ability to complete depth information. Therefore, their pipeline involves monocular depth estimation followed by alignment based on scale. In the visualization results, we have reattached the known areas to better demonstrate their discontinuities. That is to say, even after alignment, there are still discontinuities with the true depth, which is one of the significances of our work.

OK. If I attach a learnable offset after Depth Anything V2, and then a learnable scale and shift on top of that, would this give a better result, partially alleviating the discontinuity problem and thereby producing the complete depth information?
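To make that idea concrete, here is a minimal numpy sketch (hypothetical, not DepthLab's code) of jointly fitting a global scale and shift plus a per-pixel offset by gradient descent on the known region. In practice the offset would be predicted by a learnable network conditioned on image features, but even this toy version shows the offset absorbing a residual that a pure scale/shift cannot fix.

```python
import numpy as np

def refine_scale_shift_offset(pred, target, mask, lr=0.05, steps=500):
    """Gradient descent on mean-squared error over the known region,
    jointly fitting a global scale s, shift t, and a per-pixel offset."""
    s, t = 1.0, 0.0
    offset = np.zeros_like(pred)
    n = mask.sum()
    for _ in range(steps):
        err = np.where(mask, s * pred + t + offset - target, 0.0)
        s -= lr * 2.0 * (err * pred).sum() / n  # dL/ds
        t -= lr * 2.0 * err.sum() / n           # dL/dt
        offset -= lr * 2.0 * err / n            # dL/d(offset), nonzero only where known
    return s * pred + t + offset

# toy usage: target differs from pred by a scale, a shift, and a local "bump"
rng = np.random.default_rng(0)
pred = rng.random((8, 8))
target = 1.5 * pred + 0.2
target[2:4, 2:4] += 0.5  # discontinuity a pure scale/shift cannot absorb
mask = np.ones((8, 8), dtype=bool)
before = np.mean((pred - target) ** 2)
refined = refine_scale_shift_offset(pred, target, mask)
after = np.mean((refined - target) ** 2)
```

The caveat, as the maintainers note, is that the offset is only supervised where depth is known; making it behave sensibly in the unknown region is exactly the completion problem DepthLab addresses.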
