
questions about lane annotation #13

Closed
anm-Yi opened this issue May 12, 2022 · 5 comments

anm-Yi commented May 12, 2022

Thanks for your great work. I have some questions about the lane annotation, as follows:
(1) What does "attribute: 0" mean? I found that most of the time it equals 0.
(2) How do I project 3D points from the xyz camera coordinates to uv image coordinates? I tried [x, y, 1]^T = K [X, Y, Z, 1]^T, where XYZ comes from the 3D points ["xyz"] in the json and K comes from ["intrinsic"], but the generated points seem wrong.

import numpy as np

k1 = np.c_[np.array(json["intrinsic"]), np.zeros(3)]  # pad K to a 3x4 projection matrix
t = np.array(XYZ.reshape(-1, 1))                       # homogeneous point [X, Y, Z, 1]^T
xyz = np.matmul(k1, t)                                  # attempted projection

@anm-Yi anm-Yi changed the title questions about label annotation questions about lane annotation May 12, 2022
@dyfcalid

Thank you for your issue.

  1. We only annotate the four most important lanes near the ego vehicle with the numbers 1-4 (the left-left lane is 1, the left lane is 2, the right lane is 3, and the right-right lane is 4); all other lanes and road edges are annotated as 0.
  2. The 3D lane points are in a coordinate system with the x-axis pointing forward, the y-axis left, and the z-axis up. So before multiplying by the camera intrinsic matrix, they need to be converted to the standard camera coordinate system (x-axis right, y-axis down, z-axis forward).
    Here is a simple example:
import numpy as np

# 3D lane points in the dataset frame (x forward, y left, z up), shape (3, N)
lane = np.array(lane_line['xyz'])
# Convert to homogeneous coordinates, shape (4, N)
lane = np.vstack((lane, np.ones((1, lane.shape[1]))))
# Rotation from (x forward, y left, z up) to the standard camera frame
# (x right, y down, z forward)
cam_representation = np.array([[0, -1,  0, 0],
                               [0,  0, -1, 0],
                               [1,  0,  0, 0],
                               [0,  0,  0, 1]], dtype=float)
lane = np.matmul(cam_representation, lane)
lane = lane[0:3, :]
# Project with the camera intrinsic matrix and normalize by depth
lane = np.matmul(np.array(json["intrinsic"]), lane)
x_2d = lane[0, :] / lane[2, :]
y_2d = lane[1, :] / lane[2, :]
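
For completeness, here is a rough end-to-end sketch (not code from the dataset toolkit) that ties both points together: it keeps only the four ego-adjacent lanes by their attribute value (1-4) and draws the projected points onto the image with OpenCV. The file names and the exact json layout (a "lane_lines" list whose entries carry "attribute" and "xyz", plus a top-level "intrinsic") are assumptions based on this thread, so adjust them to your local annotation files.

import json
import cv2
import numpy as np

ATTRIBUTE_NAMES = {1: "left-left", 2: "left", 3: "right", 4: "right-right"}

with open("annotation.json") as f:       # hypothetical annotation file
    anno = json.load(f)
img = cv2.imread("image.jpg")            # hypothetical matching image
K = np.array(anno["intrinsic"])

# Rotation from (x forward, y left, z up) to (x right, y down, z forward)
cam_representation = np.array([[0, -1,  0, 0],
                               [0,  0, -1, 0],
                               [1,  0,  0, 0],
                               [0,  0,  0, 1]], dtype=float)

for lane_line in anno["lane_lines"]:
    if lane_line["attribute"] not in ATTRIBUTE_NAMES:   # skip other lanes / road edges (0)
        continue
    lane = np.array(lane_line["xyz"])                   # shape (3, N)
    lane = np.vstack((lane, np.ones((1, lane.shape[1]))))
    lane = np.matmul(cam_representation, lane)[0:3, :]
    uv = np.matmul(K, lane)
    valid = uv[2, :] > 0                                 # keep points in front of the camera
    u = uv[0, valid] / uv[2, valid]
    v = uv[1, valid] / uv[2, valid]
    for x, y in zip(u, v):
        cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)

cv2.imwrite("lanes_projected.jpg", img)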


anm-Yi commented May 16, 2022

Thanks for your reply!

  1. I understand the meaning of the numbers 1-4 now, but some labels still confuse me, such as the labels for the folder "segment-9907794657177651763_1126_570_1146_570_with_camera_labels", all of which are 0.
  2. Nice code! I had missed the conversion, and now your code works.

So for 1., I want to confirm that I haven't missed other information, which would mean the annotation has some mistakes; then I will close the issue.

@dyfcalid

I understand the meaning of the numbers 1-4 now, but some labels still confuse me, such as the labels for the folder "segment-9907794657177651763_1126_570_1146_570_with_camera_labels", all of which are 0.

We have checked the annotations and there is indeed such a problem, and we will fix it later. Thanks.


anm-Yi commented May 17, 2022

Please mention it here when you have done this work. Thanks!

@anm-Yi anm-Yi closed this as completed May 17, 2022
@hli2020 hli2020 pinned this issue May 17, 2022
@nikhil-nakhate

Hi @dyfcalid, thanks for the explanation of the transformation between the camera frames. I had a related question: why are the extrinsics re-computed, and what are the original extrinsics with respect to? The following is the piece of code that I am referring to:

cam_extrinsics = np.array(info_dict['extrinsic'])
# Re-calculate the extrinsic matrix based on the ground coordinate
R_vg = np.array([[ 0, 1, 0],
                 [-1, 0, 0],
                 [ 0, 0, 1]], dtype=float)
R_gc = np.array([[1,  0, 0],
                 [0,  0, 1],
                 [0, -1, 0]], dtype=float)
# Conjugate the rotation part of the extrinsic by R_vg, then apply R_gc
cam_extrinsics[:3, :3] = np.matmul(np.matmul(
                             np.matmul(np.linalg.inv(R_vg), cam_extrinsics[:3, :3]),
                             R_vg), R_gc)
# Zero the x/y translation, keeping only the z (height) component
cam_extrinsics[0:2, 3] = 0.0
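
For context on what such a matrix is used for, a minimal illustrative sketch (not code from the repository) of applying a 4x4 extrinsic as a homogeneous rigid transform on 3D lane points; the variable names and the direction of the mapping (camera frame to the re-defined ground frame) are assumptions for illustration only.

import numpy as np

points_cam = np.array(lane_line['xyz'])                  # shape (3, N), camera frame
points_cam = np.vstack((points_cam, np.ones((1, points_cam.shape[1]))))
# Apply the 4x4 extrinsic as a rigid transform (assumed camera -> ground here)
points_ground = np.matmul(cam_extrinsics, points_cam)[0:3, :]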

@ChonghaoSima ChonghaoSima added the documentation Improvements or additions to documentation label Sep 13, 2022