The results are of low quality in Sparse Depth Completion for LiDAR or SFM. #12
The most important thing, as mentioned in the paper, is that resizing the mask can lead to a loss of precision (the area specified by the mask is no longer accurate). Therefore, for sparse-to-dense tasks, do not resize. Set the process_res parameter to the longest side length of the input.
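The advice above can be sketched as a small helper that picks process_res from the input itself, so the pipeline never resizes. This is a minimal sketch: the pipeline call is assumed and not shown; only the process_res choice is illustrated.

```python
import numpy as np

# Hypothetical helper: set process_res to the longest side of the input
# so the sparse depth and mask are processed at native resolution
# (i.e., no resizing ever happens).
def choose_process_res(sparse_depth: np.ndarray) -> int:
    h, w = sparse_depth.shape[:2]
    return max(h, w)

sparse_depth = np.zeros((480, 640), dtype=np.float32)
process_res = choose_process_res(sparse_depth)
print(process_res)  # 640
```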
Hello, which part of the paper mentions this: “resizing the mask can lead to a loss of precision (the area specified by the mask is no longer accurate)”?
This is mentioned at the end of Section 4.3 and in the future-work section. The explanation in the paper may not be very clear. Specifically, since the input itself is sparse information, the alignment between the mask and the known information is particularly important. If the mask and the known depth are resized separately, it often leads to a mismatch between the resized_mask and the resized_known_depth. I apologize for any confusion caused. We will provide a detailed explanation in the future, including usage methods and precautions for each task. :-)
Hello, I have a question. The output of DepthAnything is relative depth, correct? The method you're using outputs distance, right? If so, how are these two compared?
Hello. :-) First, in traditional monocular depth estimation, the process for evaluating metrics is as follows: initially, relative depth is estimated, and then a least-squares transformation is applied to convert it to absolute depth based on the ground-truth (gt) values provided in the benchmark. This is because relative depth alone cannot be directly used for evaluation. For more details, you can refer to this link: https://github.com/prs-eth/Marigold/blob/main/eval.py. It is worth noting that our method does not require the least-squares transformation. Instead, we only need to remember the normalization parameters used during input, such as the maximum and minimum values, to easily convert the depth back to absolute depth.
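The two evaluation paths described above can be sketched side by side. This is an illustrative sketch (function names are mine, not from the repo): path (1) fits a scale and shift to ground truth by least squares, as in the Marigold eval script linked above; path (2) simply inverts the min-max normalization applied to the input.

```python
import numpy as np

# (1) Traditional relative-depth evaluation: least-squares scale/shift fit.
def align_lstsq(pred_rel: np.ndarray, gt: np.ndarray) -> np.ndarray:
    A = np.stack([pred_rel, np.ones_like(pred_rel)], axis=1)
    scale, shift = np.linalg.lstsq(A, gt, rcond=None)[0]
    return scale * pred_rel + shift

# (2) No fit needed: remember the input normalization (d_min, d_max)
#     and invert it to recover absolute depth.
def denormalize(pred_norm: np.ndarray, d_min: float, d_max: float) -> np.ndarray:
    return pred_norm * (d_max - d_min) + d_min

gt = np.array([2.0, 4.0, 6.0])
pred_rel = np.array([0.0, 0.5, 1.0])        # relative / normalized depth
aligned = align_lstsq(pred_rel, gt)          # [2., 4., 6.]
recovered = denormalize(pred_rel, 2.0, 6.0)  # [2., 4., 6.]
```

Path (2) only works when the model's output stays in the normalized space of its sparse input, which is the situation the comment describes.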
The results are of low quality in Sparse Depth Completion for LiDAR or SfM. I use point cloud data as supervision and masks to generate depth information, but the performance seems to be inferior to Depth Anything. It might be an issue with my settings, and I look forward to any guidance or solutions.
The mask and LiDAR input look like this: