
[BUG] {Black borders of rectified left/right images causing bad disparity} #853

Open
matlabbe opened this issue Nov 22, 2022 · 3 comments
Labels
bug Something isn't working

Comments


matlabbe commented Nov 22, 2022

Hello,

we are encountering a problem with the resulting depth image created from Left-Right IR cameras on OAK-D. Here is the issue (showing right image with corresponding depth overlay):

[Screenshot from 2022-11-21 17-26-22: right image with depth overlay]

We calibrated the camera with:

./calibrate.py   -brd bw1098obc -db -s 2.5

The output of stereoRectify is:

self.R1, self.R2, self.P1, self.P2, self.Q, validPixROI1, validPixROI2 = cv2.stereoRectify(

R1 (left)
[[ 0.999973 -0.007341 -0.000398]
 [ 0.007341  0.999973 -0.000019]
 [ 0.000398  0.000016  1.      ]]

R2 (right)
[[ 0.999939  0.002479 -0.010763]
 [-0.002478  0.999997  0.000031]
 [ 0.010763 -0.000004  0.999942]]

M1 (left)
[[802.297     0.      623.15045]
 [  0.      802.1731  402.0222 ]
 [  0.        0.        1.     ]]

M2 (right)
[[803.97705   0.      632.70996]
 [  0.      804.194   403.02563]
 [  0.        0.        1.     ]]

P1 (left)
[[803.18353    0.       653.974678   0.      ]
 [  0.       803.18353  404.848763   0.      ]
 [  0.         0.         1.         0.      ]]

P2 (right)
[[  803.18353      0.         653.974678 -6034.308056]
 [    0.         803.18353    404.848763     0.      ]
 [    0.           0.           1.           0.      ]]

When launching with ROS, the camera_info looks like this:

roslaunch depthai_examples stereo_inertial_node.launch depth_aligned:=false monoResolution:=800p  rectify:=true

Left:
K: [802.2969970703125, 0.0, 623.1504516601562, 0.0, 802.173095703125, 402.0221862792969, 0.0, 0.0, 1.0]
R: [0.9999729990959167, -0.007341025862842798, -0.00039802250103093684, 0.007341018877923489, 0.9999730587005615, -1.9204771888325922e-05, 0.0003981527697760612, 1.628236350370571e-05, 0.9999999403953552]
P: [803.97705078125, 0.0, 632.7099609375, 60.401063102285846, 0.0, 804.1939697265625, 403.025634765625, 0.0, 0.0, 0.0, 1.0, 0.0]

Right:
K: [803.97705078125, 0.0, 632.7099609375, 0.0, 804.1939697265625, 403.025634765625, 0.0, 0.0, 1.0]
R: [0.9999390244483948, 0.0024786319117993116, -0.010762757621705532, -0.0024784409906715155, 0.9999969005584717, 3.108179589617066e-05, 0.010762801393866539, -4.4050393626093864e-06, 0.9999420642852783]
P: [803.97705078125, 0.0, 632.7099609375, 0.0, 0.0, 804.1939697265625, 403.025634765625, 0.0, 0.0, 0.0, 1.0, 0.0]

Not sure why P is not the same as P1 and P2!? It looks like M2 was used to regenerate P1 and P2 (which seems wrong). The baseline Tx (or P(0,3)) of the left camera_info is also wrong: modulo the cm scale, it should be 60.34308056, not 60.401063102285846.
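The discrepancy is easy to check numerically: in a rectified projection matrix, P(0,3) = fx * Tx, so dividing by the published fx recovers the baseline. A minimal sketch using the values above, which also shows that the published 60.401 is what you get when the baseline is re-multiplied by M2's fx instead of the rectified fx:

```python
# Recover the stereo baseline from a rectified projection matrix:
# P[0][3] = fx * Tx, so Tx = P[0][3] / fx (values taken from above).
fx_rect = 803.18353            # fx of P1/P2 from cv2.stereoRectify
p2_03 = -6034.308056           # P2(0,3) from cv2.stereoRectify (cm scale)
baseline_cm = -p2_03 / fx_rect
print(baseline_cm)             # ~7.513 cm, the expected OAK-D baseline

# The ROS left camera_info instead carries 60.401063102285846, which is
# roughly the baseline re-multiplied by M2's fx rather than the rectified
# fx -- consistent with M2 being used to regenerate P:
fx_m2 = 803.97705
print(baseline_cm * fx_m2 / 100)  # ~60.40, close to the published value
```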

To try to remove the black border, we wanted to use the alpha parameter of stereoRectify() like this:

self.R1, self.R2, self.P1, self.P2, self.Q, validPixROI1, validPixROI2 = cv2.stereoRectify(
                                                                            self.M1,
                                                                            self.d1,
                                                                            self.M2,
                                                                            self.d2,
                                                                            self.img_shape, self.R, self.T, alpha=0)

While M1, M2, R1 and R2 stay the same, P1 and P2 are now scaled:

P1 (left)
[[843.329274   0.       653.974678   0.      ]
 [  0.       843.329274 404.848763   0.      ]
 [  0.         0.         1.         0.      ]]

P2 (right)
[[  843.329274     0.         653.974678 -6335.922542]
 [    0.         843.329274   404.848763     0.      ]
 [    0.           0.           1.           0.      ]]
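Note that the alpha=0 scaling is internally consistent: fx and P2(0,3) are scaled by the same zoom factor (about 1.05), so the baseline encoded in P2 is unchanged. A quick check with the values above:

```python
# With alpha=0, both fx and Tx*fx in P2 scale by the same zoom factor,
# so the baseline encoded in P2 is unchanged (values from above).
fx_default, tx_default = 803.18353, -6034.308056   # alpha not set
fx_alpha0, tx_alpha0 = 843.329274, -6335.922542    # alpha=0
print(fx_alpha0 / fx_default)       # zoom factor, ~1.05
print(-tx_default / fx_default)     # baseline, ~7.513 (cm scale)
print(-tx_alpha0 / fx_alpha0)       # same baseline, ~7.513
```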

This works during calibration: we don't see any black borders in the preview. The function used to show the preview is:

mapx_l, mapy_l = cv2.initUndistortRectifyMap(
    self.M1, self.d1, self.R1, self.P1, self.img_shape, cv2.CV_32FC1)
mapx_r, mapy_r = cv2.initUndistortRectifyMap(
    self.M2, self.d2, self.R2, self.P2, self.img_shape, cv2.CV_32FC1)
print("Printing p1 and p2")
print(self.P1)
print(self.P2)

This function requires P1 and P2, which are different from M1 and M2. Because P1 and P2 do not seem to be saved in EEPROM, the resulting rectification with the depthai_examples ROS package is exactly the same as before, with the black border artifacts (causing wrong disparity) and the same camera_info.

Questions:

  • Shouldn't P1 and P2 also be exported to the camera? What rectification approach is used on the camera that doesn't require P1 and P2? I guess P1 and P2 could be regenerated on the camera with stereoRectify() before being passed to initUndistortRectifyMap() to create the rectification mesh. However, if that is the case, we cannot modify the alpha parameter on the camera to avoid the black borders (or if we can, how?).
  • Second option: should we use the mesh calibration approach instead, so that P1 and P2 are correctly used for rectification on the camera? For this approach, why aren't P1 and P2 used here like in the preview above?
    map_x_l, map_y_l = cv2.initUndistortRectifyMap(self.M1, self.d1, self.R1, self.M2, self.img_shape, cv2.CV_32FC1)
    map_x_r, map_y_r = cv2.initUndistortRectifyMap(self.M2, self.d2, self.R2, self.M2, self.img_shape, cv2.CV_32FC1)

As a final note, we tried publishing the raw left and right images from depthai_examples, then used http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration to recalibrate the camera ourselves. With stereo_image_proc and the new camera_info messages, we get correctly rectified images without black borders, fixing the disparity issue. However, we would rather compute the disparity on the camera than on the host computer. Hopefully we can find a solution to get good rectification on the camera.

@matlabbe matlabbe added the bug Something isn't working label Nov 22, 2022
Member

Erol444 commented Nov 22, 2022

Hi @matlabbe ,
Thank you for the detailed report on this issue. We have a few approaches that we will be testing in the next few weeks to mitigate it. What we suggest, and what has worked for a few companies, is to crop the right-most band of the depth image, which produces random/noisy depth pixels. You can crop on-device as well using the ImageManip node. Thoughts?
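On the host side, cropping the noisy right-most band is a one-liner. A minimal NumPy sketch, where the band width (30 px here) is a made-up value you would tune per setup:

```python
import numpy as np

def crop_right_band(depth: np.ndarray, n_px: int) -> np.ndarray:
    """Drop the right-most n_px columns of a depth frame, where the
    black rectification border produces random/noisy disparity."""
    return depth[:, :depth.shape[1] - n_px]

# Example: an 800x1280 depth frame cropped by a hypothetical 30-pixel band.
depth = np.zeros((800, 1280), dtype=np.uint16)
cropped = crop_right_band(depth, 30)
print(cropped.shape)  # (800, 1250)
```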
Thanks, Erik

Author

matlabbe commented Nov 22, 2022

I tried the mesh approach with these changes (explained in my previous post):

diff --git a/depthai_helpers/calibration_utils.py b/depthai_helpers/calibration_utils.py
index ff9a776..b0f66c1 100644
--- a/depthai_helpers/calibration_utils.py
+++ b/depthai_helpers/calibration_utils.py
@@ -394,7 +394,7 @@ class StereoCalibration(object):
                                                                                         self.d1,
                                                                                         self.M2,
                                                                                         self.d2,
-                                                                                        self.img_shape, self.R, self.T)
+                                                                                        self.img_shape, self.R, self.T, alpha=0)
         else:
             self.R1, self.R2, self.P1, self.P2, self.Q = cv2.fisheye.stereoRectify(self.M1,
                 self.d1,
@@ -959,8 +959,8 @@ class StereoCalibration(object):
         print("Mesh path")
         print(curr_path)
 
-        map_x_l, map_y_l = cv2.initUndistortRectifyMap(self.M1, self.d1, self.R1, self.M2, self.img_shape, cv2.CV_32FC1)
-        map_x_r, map_y_r = cv2.initUndistortRectifyMap(self.M2, self.d2, self.R2, self.M2, self.img_shape, cv2.CV_32FC1)
+        map_x_l, map_y_l = cv2.initUndistortRectifyMap(self.M1, self.d1, self.R1, self.P1, self.img_shape, cv2.CV_32FC1)
+        map_x_r, map_y_r = cv2.initUndistortRectifyMap(self.M2, self.d2, self.R2, self.P2, self.img_shape, cv2.CV_32FC1)
 
         map_x_l_fp32 = map_x_l.astype(np.float32)
         map_y_l_fp32 = map_y_l.astype(np.float32)

and it does fix the black border problem.

Rectified images with original version:
[Images: org_calib_left_rectified, org_calib_right_rectified]

Rectified images with modified version:
[Images: new_calib_left_rectified, new_calib_right_rectified]

However, the camera_info published for the left and right images in ROS is wrong: the rectification approach on the camera will still use a copy of M2 in P1 and P2. With alpha=-1 (the default), M2 is pretty close to P2, but with alpha=0 (to remove the black borders), P2 is very different from M2 because the left and right images are "zoomed" to remove the black borders. Here is the wrong camera_info:

#Left:
D: [-6.0713887214660645, 54.4517707824707, -0.0003773995558731258, -0.00036596463178284466, 43.84017562866211, -6.166638374328613, 54.441646575927734, 43.68896484375]
K: [805.8760986328125, 0.0, 627.5831298828125, 0.0, 805.9297485351562, 411.1523742675781, 0.0, 0.0, 1.0]
R: [0.9999866485595703, 0.003882801625877619, -0.0034086634404957294, -0.0038728320505470037, 0.9999881982803345, 0.0029264981858432293, 0.0034199864603579044, -0.0029132580384612083, 0.999989926815033]
P: [802.166015625, 0.0, 648.2816772460938, 60.32984576865099, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 0.0, 1.0, 0.0]

#Right:
D: [6.126948833465576, -53.37107849121094, 0.0006922174943611026, -0.0006033015670254827, 565.4175415039062, 5.966142654418945, -53.328697204589844, 564.033203125]
K: [802.166015625, 0.0, 648.2816772460938, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 1.0]
R: [0.9999366998672485, -0.004832806997001171, -0.010160881094634533, 0.004803117364645004, 0.9999841451644897, -0.002944336738437414, 0.010174949653446674, 0.002895346377044916, 0.9999440312385559]
P: [802.166015625, 0.0, 648.2816772460938, 0.0, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 0.0, 1.0, 0.0]

Here is what the camera_info should look like to match the rectified images without black borders (note that P1 and P2 are not equal to M1 and M2 (K1 and K2)):

#Left:
D: [-6.0713887214660645, 54.4517707824707, -0.0003773995558731258, -0.00036596463178284466, 43.84017562866211, -6.166638374328613, 54.441646575927734, 43.68896484375]
K: [805.8760986328125, 0.0, 627.5831298828125, 0.0, 805.9297485351562, 411.1523742675781, 0.0, 0.0, 1.0]
R: [0.9999866485595703, 0.003882801625877619, -0.0034086634404957294, -0.0038728320505470037, 0.9999881982803345, 0.0029264981858432293, 0.0034199864603579044, -0.0029132580384612083, 0.999989926815033]
P: [827.490288, 0.0, 646.020584, 62.23527591, 0.0, 827.490288, 410.763664, 0.0, 0.0, 0.0, 1.0, 0.0]

#Right:
D: [6.126948833465576, -53.37107849121094, 0.0006922174943611026, -0.0006033015670254827, 565.4175415039062, 5.966142654418945, -53.328697204589844, 564.033203125]
K: [802.166015625, 0.0, 648.2816772460938, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 1.0]
R: [0.9999366998672485, -0.004832806997001171, -0.010160881094634533, 0.004803117364645004, 0.9999841451644897, -0.002944336738437414, 0.010174949653446674, 0.002895346377044916, 0.9999440312385559]
P: [827.490288, 0.0, 646.020584, 0.0, 0.0, 827.490288, 410.763664, 0.0, 0.0, 0.0, 1.0, 0.0]

As a suggestion, without explicitly saving P1 and P2 on the camera, you could regenerate them by calling cv2.stereoRectify on the camera (I think you are exporting enough info in the JSON for that), while adding a new function to the StereoDepth API, e.g. "setRectificationAlpha()", that would set the alpha value passed to stereoRectify (e.g. alpha=0 to remove the black borders). The mesh would then be created on board with the new P1 and P2. For ROS, send back the newly computed P1 and P2 in camera_info so that nodes receiving the rectified images (e.g. VSLAM, visual stereo odometry) have the right focal length for the zoomed stereo images without black borders.

EDIT
For us, the main problem was the wrong disparity on the sides, which was causing some obstacles to disappear (ray tracing with very far depth). For now, we will just ignore X pixels on each side of the depth image. We still look forward to your refactoring of the calibration tool.

Author

matlabbe commented Feb 8, 2023

New update here that could fix this issue: luxonis/depthai-core#718

EDIT: Note that when setting alpha to 0 or 1, the camera_info would have to be updated (the focal length is modified) on the ROS side; we would have to verify whether that is the case.
