Any hint code on calculating SDF, do we need to rotate it first? #11
Comments
Can you please clarify your question further? Do you want to compute the SDF for a new 3D scene scan, or use the existing ones?
I want to calculate the SDF for a new 3D scene. However, I cannot get the max & min values in vicon.npy or vicon_normal.npy. The main problem is how to compute the SDF for a new 3D scene scan (or obj file). Thanks!
For getting the SDF of a new scene:

After this, as you say, you have things in two different systems: what your camera observes is in the camera coordinate system, while the scene mesh and its SDF are in the scene/world coordinate system. This is like defining a virtual camera that "sees" the scene mesh in the same way the real camera "sees" the real scene. For this you need to find a rigid transformation (rotation and translation) from one coordinate system to the other, i.e. perform extrinsic calibration, which requires a minimum of three correspondences; please see Section 3.2 in the PROX paper.

For this calibration, we capture one RGB/depth image with the camera, without any human in it. A human annotator clicks on 3 salient 2D points in the RGB/depth image (Kinect captures aligned RGB and depth images) and on the corresponding 3 3D points on the scene mesh (be aware of holes that the mesh might have). These 3 correspondences are the constraints you need to compute the transformation. Keep in mind that there may be some noise in the annotation, so this transformation will be a bit rough, but it gets you into the ballpark of aligning the captured depth image (which "sees" a part of the scene, i.e. the scene mesh) to the 3D scene mesh. We use this to initialize an ICP stage that refines the alignment with iterative correspondence estimation and optimization. For the steps in this paragraph we used the Open3D library, which was very handy, so we highly recommend it.

Unfortunately I am not fully sure I understand the problem you describe, but I hope the above clarifies things :)
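In case it helps, here is a minimal sketch of that calibration step with Open3D. This is not the PROX code: the file names, the three clicked points, and the ICP correspondence distance are placeholder assumptions, and it targets Open3D versions where the registration API lives under `o3d.pipelines.registration` (older releases use `o3d.registration` instead).

```python
import numpy as np
import open3d as o3d

# Point cloud back-projected from the human-free depth frame (camera coords).
cam_pcd = o3d.io.read_point_cloud("camera_frame.ply")      # placeholder file
# Pre-scanned scene mesh (scene/world coords), sampled to a point cloud for ICP.
scene_mesh = o3d.io.read_triangle_mesh("scene.obj")        # placeholder file
scene_pcd = scene_mesh.sample_points_uniformly(number_of_points=200000)

# Three annotated correspondences: 3D points from the back-projected depth
# image (camera coords) and the matching 3D points clicked on the scene mesh.
cam_pts = np.array([[0.10, 0.25, 1.50],
                    [-0.30, 0.10, 2.00],
                    [0.40, -0.20, 1.80]])                   # placeholder values
scene_pts = np.array([[1.20, 0.50, 0.30],
                      [0.80, 0.40, 0.90],
                      [1.60, 0.20, 0.70]])                  # placeholder values

src = o3d.geometry.PointCloud()
src.points = o3d.utility.Vector3dVector(cam_pts)
dst = o3d.geometry.PointCloud()
dst.points = o3d.utility.Vector3dVector(scene_pts)
corres = o3d.utility.Vector2iVector(np.array([[0, 0], [1, 1], [2, 2]]))

# Rough camera-to-world transform from the 3 clicked correspondences.
est = o3d.pipelines.registration.TransformationEstimationPointToPoint()
T_init = est.compute_transformation(src, dst, corres)

# Refine the rough alignment with ICP on the full point clouds.
result = o3d.pipelines.registration.registration_icp(
    cam_pcd, scene_pcd,
    max_correspondence_distance=0.05,                       # meters, tune this
    init=T_init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("camera-to-world transform:\n", result.transformation)
```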
Hi, can we directly use the preprocessing code in DeepSDF https://github.com/facebookresearch/DeepSDF/blob/master/preprocess_data.py to get the SDF from the given 3D mesh? Thank you!
Hi! I didn't look at the DeepSDF code, but this library might be useful: https://github.com/marian42/mesh_to_sdf. Make sure to use the normal-based sign computation (sign_method='normal'); the other method, which computes the sign from depth maps, will NOT work for PROX scenes. The only issue is that it is a bit slow because it doesn't use any acceleration structure. Another option would be to look at PyTorch3D. They have a function for computing the KNN on point clouds https://pytorch3d.readthedocs.io/en/latest/modules/ops.html#pytorch3d.ops.knn_points. I don't see a function to compute point-to-surface distance, but take a look; maybe I missed it.
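In case a concrete starting point helps, below is a rough sketch of computing a grid SDF with mesh_to_sdf. It is not the pipeline used to produce the released PROX SDFs; the file names, grid resolution, bounding-box padding, and the JSON layout for the min/max extents are all assumptions.

```python
import json
import numpy as np
import trimesh
from mesh_to_sdf import mesh_to_sdf

scene = trimesh.load("scene.obj", force="mesh")   # placeholder path
res = 128                                         # grid resolution; start small, this is slow

# Slightly padded axis-aligned bounding box of the scene: these are the
# min/max extents you need to keep alongside the raw SDF values.
grid_min = scene.bounds[0] - 0.2
grid_max = scene.bounds[1] + 0.2

# Regular 3D grid of query points spanning [grid_min, grid_max].
axes = [np.linspace(grid_min[i], grid_max[i], res) for i in range(3)]
xx, yy, zz = np.meshgrid(*axes, indexing="ij")
query = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3).astype(np.float32)

# Normal-based sign computation; as noted above, the depth-based sign method
# does not work for PROX scenes.
sdf = mesh_to_sdf(scene, query, sign_method="normal")
sdf_grid = sdf.reshape(res, res, res)

np.save("scene_sdf.npy", sdf_grid)                # placeholder output names
with open("scene_sdf.json", "w") as f:
    json.dump({"min": grid_min.tolist(), "max": grid_max.tolist(), "dim": res}, f)
```

At fitting time you would then look up this grid (e.g. with trilinear interpolation) at query points expressed in the world/scene coordinate system, after mapping them into grid coordinates via the stored min/max extents.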
Thank you for your answer!
Hello, where is the "models" directory under the dataset root? Could you help me?
Hello, I would like to ask whether you have managed to calculate the SDF for the 3D mesh successfully.
Any hints or code on calculating the SDF? Do we need to translate it to the camera coordinate system before computing,
or can we use the world-coordinate one for fitting?