Commit 155b545

[Add] update pre-release code
1 parent e0713e9 commit 155b545

304 files changed, +7757 −2 lines changed

CMakeLists.txt

+49
@@ -0,0 +1,49 @@
cmake_minimum_required(VERSION 2.8.3)
project(mlcc)

set(CMAKE_BUILD_TYPE "Release")
set(CMAKE_CXX_FLAGS "-std=c++14")
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -Wall -g")

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
)

find_package(PCL REQUIRED)
find_package(OpenCV REQUIRED)
find_package(Eigen3 REQUIRED)
find_package(Ceres REQUIRED)

catkin_package(
  CATKIN_DEPENDS roscpp rospy std_msgs message_runtime
)

include_directories(
  include
  ${catkin_INCLUDE_DIRS}
  ${PCL_INCLUDE_DIRS}
  ${CERES_INCLUDE_DIRS}
)

catkin_package(
  CATKIN_DEPENDS geometry_msgs nav_msgs roscpp rospy std_msgs
  DEPENDS EIGEN3 PCL OpenCV
  INCLUDE_DIRS
)

add_executable(pose_refine source/pose_refine.cpp)
target_link_libraries(pose_refine ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS})

add_executable(extrinsic_refine source/extrinsic_refine.cpp)
target_link_libraries(extrinsic_refine ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS})

add_executable(global source/global.cpp)
target_link_libraries(global ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${OpenCV_LIBS})

add_executable(calib_camera source/camera_calib.cpp)
target_link_libraries(calib_camera ${catkin_LIBRARIES} ${PCL_LIBRARIES} ${CERES_LIBRARIES} ${OpenCV_LIBS})

add_executable(tobinary source/tobinary.cpp)
target_link_libraries(tobinary ${catkin_LIBRARIES} ${PCL_LIBRARIES})

README.md

+62 −2
@@ -1,8 +1,8 @@
# Fast and Accurate Extrinsic Calibration for Multiple LiDARs and Cameras

-The pre-print version of our paper is available [here](paper/paper.pdf). The code will be released at the end of this month (around October). Our experiment video is availalbe on [YouTube](https://youtu.be/PaiYgAXl9iY) and [Bilibili](https://www.bilibili.com/video/BV1p64y1h7ae?spm_id_from=333.999.0.0).
+The pre-print version of our paper is available [here](https://arxiv.org/abs/2109.06550). The pre-release code has been uploaded. Our experiment video is available on [YouTube](https://youtu.be/PaiYgAXl9iY) and [Bilibili](https://www.bilibili.com/video/BV1p64y1h7ae?spm_id_from=333.999.0.0).

-![](figure/cover.png)
+<!-- ![](figure/cover.png) -->

## Introduction
In this paper, we propose a fast, accurate, and targetless extrinsic calibration method for multiple LiDARs and cameras based on adaptive voxelization. On the theory level, we incorporate the LiDAR extrinsic calibration with the bundle adjustment method. We derive the second-order derivatives of the cost function w.r.t. the extrinsic parameter to accelerate the optimization. On the implementation level, we apply adaptive voxelization to dynamically segment the LiDAR point cloud into voxels with non-identical sizes, which reduces the computation time of feature correspondence matching.
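To give a concrete picture of the adaptive voxelization step, below is a minimal, self-contained sketch under simplified assumptions (it is not the implementation in this repository): a root voxel is recursively cut into octants until the points inside a voxel are well explained by a single plane, judged by the eigenvalues of their covariance. The thresholds, sizes, and data structures here are illustrative only.
```
#include <Eigen/Dense>
#include <vector>

// A cube-shaped voxel holding the points that fall inside it.
struct Voxel {
  Eigen::Vector3d center = Eigen::Vector3d::Zero();
  double half_size = 0.0;
  std::vector<Eigen::Vector3d> points;
};

// Planarity test: the smallest eigenvalue of the covariance must be much
// smaller than the middle one (the ratio 0.04 is an illustrative choice).
bool isPlanar(const std::vector<Eigen::Vector3d> &pts, double eigen_ratio = 0.04) {
  if (pts.size() < 10) return false;
  Eigen::Vector3d mean = Eigen::Vector3d::Zero();
  for (const auto &p : pts) mean += p;
  mean /= static_cast<double>(pts.size());
  Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
  for (const auto &p : pts) cov += (p - mean) * (p - mean).transpose();
  cov /= static_cast<double>(pts.size());
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(cov);  // eigenvalues in ascending order
  return es.eigenvalues()(0) < eigen_ratio * es.eigenvalues()(1);
}

// Recursively cut a voxel into 8 children until each one is planar (or too
// small); planar voxels are kept as plane features of non-identical sizes.
void adaptiveVoxelize(const Voxel &v, double min_half_size, std::vector<Voxel> &planes) {
  if (isPlanar(v.points)) { planes.push_back(v); return; }
  if (v.half_size * 0.5 < min_half_size || v.points.size() < 10) return;
  std::vector<Voxel> children(8);
  for (int i = 0; i < 8; ++i) {
    children[i].half_size = v.half_size * 0.5;
    children[i].center = v.center + v.half_size * 0.5 *
        Eigen::Vector3d((i & 1) ? 1 : -1, (i & 2) ? 1 : -1, (i & 4) ? 1 : -1);
  }
  for (const auto &p : v.points) {
    int idx = (p.x() > v.center.x() ? 1 : 0) | (p.y() > v.center.y() ? 2 : 0) |
              (p.z() > v.center.z() ? 4 : 0);
    children[idx].points.push_back(p);
  }
  for (const auto &c : children) adaptiveVoxelize(c, min_half_size, planes);
}

int main() {
  Voxel root;
  root.half_size = 2.0;  // a 4 m root voxel, illustrative size
  for (int i = 0; i < 40; ++i)
    for (int j = 0; j < 40; ++j)
      root.points.emplace_back(-1.9 + 0.1 * i, -1.9 + 0.1 * j, 0.5);  // a flat patch
  std::vector<Voxel> planes;
  adaptiveVoxelize(root, 0.25, planes);  // here the root is already planar
  return 0;
}
```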
@@ -18,3 +18,63 @@ Fig. 2 Adaptive voxelization in LiDAR-LiDAR extrinsic calibration.

![](figure/camera_voxel.png)
Fig. 3 Adaptive voxelization in LiDAR-camera extrinsic calibration. A) real-world image. B) raw point cloud of this scene. C) voxelization of [previous work](https://ieeexplore.ieee.org/document/9495137?source=authoralert), where the yellow circles indicate false edge estimations. D) edges extracted with our proposed method.

## 1. Prerequisites
### 1.1 ROS and Ubuntu
Our code has been tested on `Ubuntu 16.04` with `ROS Kinetic` and `Ubuntu 18.04` with `ROS Melodic`.
### 1.2 Ceres Solver
Our code has been tested with [Ceres Solver 1.14.x](https://github.com/ceres-solver/ceres-solver).
### 1.3 OpenCV
Our code has been tested with [OpenCV 3.4.14](https://github.com/opencv/opencv).
### 1.4 Eigen
Our code has been tested with [Eigen 3.3.7](https://gitlab.com/libeigen/eigen).
### 1.5 PCL
Our code has been tested with [PCL 1.8](https://github.com/PointCloudLibrary/pcl).

## 2. Build and Run
Clone the repository and catkin_make:

```
cd ~/catkin_ws/src
git clone git@github.com:hku-mars/mlcc.git
cd .. && catkin_make
source ~/catkin_ws/devel/setup.bash
```

## 3. Run Our Example
The parameters, such as the base LiDAR (`AVIA` or `MID`), the test scene (`scene-1` or `scene-2`), and `adaptive_voxel_size`, can be modified in the corresponding launch file.
### 3.1 LiDAR-LiDAR Extrinsic Calibration
<!-- ![](figure/workflow.jpg) -->
Step 1: base LiDAR pose optimization (the initial pose is stored in `scene-x/original_pose`)
```
roslaunch mlcc pose_refine.launch
```

Step 2: LiDAR extrinsic optimization (the initial extrinsic is stored in `config/init_extrinsic`)
```
roslaunch mlcc extrinsic_refine.launch
```

Step 3: pose and extrinsic joint optimization
```
roslaunch mlcc global_refine.launch
```
### 3.2 LiDAR-Camera Extrinsic Calibration
```
roslaunch mlcc camera_calib.launch
```

## 4. Run Your Own Data
To test on your own data, you need to convert the point clouds into the `.dat` format (see `source/tobinary.cpp` for details). For optimal precision, collect the point clouds and images only while the LiDAR (sensor platform) is stationary (or segment such stationary intervals from a complete rosbag). The base LiDAR poses and initial extrinsic values must also be provided; they can be obtained with general SLAM and hand-eye calibration algorithms.
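As a starting point, here is a hypothetical conversion sketch using PCL. It assumes the `.dat` file is a plain binary dump of per-point `x, y, z, intensity` floats, which may not match the exact layout this package expects, so please treat `source/tobinary.cpp` as the authoritative reference; the file paths below are placeholders.
```
// Hypothetical PCD-to-binary conversion; the actual .dat layout used by this
// package is defined in source/tobinary.cpp and may differ from this guess.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <fstream>

int main() {
  pcl::PointCloud<pcl::PointXYZI> cloud;
  if (pcl::io::loadPCDFile<pcl::PointXYZI>("scan.pcd", cloud) < 0)  // placeholder path
    return -1;

  std::ofstream out("scan.dat", std::ios::binary);  // placeholder path
  for (const auto &p : cloud.points) {
    // Assumed layout: x, y, z, intensity stored as consecutive 32-bit floats.
    out.write(reinterpret_cast<const char *>(&p.x), sizeof(float));
    out.write(reinterpret_cast<const char *>(&p.y), sizeof(float));
    out.write(reinterpret_cast<const char *>(&p.z), sizeof(float));
    out.write(reinterpret_cast<const char *>(&p.intensity), sizeof(float));
  }
  return 0;
}
```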

For LiDAR-LiDAR extrinsic calibration, you may need to tune the parameters `adaptive_voxel_size`, `feat_eigen_limit`, and `downsampling_size` to trade off precision against speed. You also need to change the corresponding paths and topic names in the YAML files in the `config` folder.

## 5. Known Issues
Currently, we separate the LiDAR extrinsic calibration process into three steps for debugging purposes. In a future release, we plan to combine them to make the tool more convenient to use.

## 6. License
The source code is released under the [GPLv2](http://www.gnu.org/licenses/) license.

We are still working on improving the performance and reliability of our code. For any technical issues, please contact us via email at <[email protected]> and <[email protected]>.

For commercial use, please contact Dr. Fu Zhang <[email protected]>.

config/avia_stereo.yaml

+38
@@ -0,0 +1,38 @@
%YAML:1.0

BaseLiDARNumber: 3    # AVIA
ExtLiDARNumber1: 0    # left LiDAR of MID-100
ExtLiDARNumber2: 1    # middle LiDAR of MID-100
ExtLiDARNumber3: 2    # right LiDAR of MID-100
LiDARFilesPath: "/home/sam/catkin_ws/src/mlcc/scene2/"
ExtrinsicNumber: 3    # total number of extrinsics to be calibrated

# Initial extrinsic (usually provided by hand measurement or CAD design)
ExtrinsicMat: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  data: [0.0, -1.0, 0.0, 0.0,
         0.0, 0.0, -1.0, 0.0,
         1.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 1.0]

# Params for Canny Edge Extraction
Canny.gray_threshold: 10
Canny.len_threshold: 200

# Params for Voxel Cutting & Plane Fitting & Edge Extraction
Voxel.size: 1
Voxel.down_sample_size: 0.02
Plane.min_points_size: 60
Plane.normal_theta_min: 60
Plane.normal_theta_max: 120
Plane.max_size: 4
Ransac.dis_threshold: 0.03
Ransac.iter_num: 200
Edge.min_dis_threshold: 0.05
Edge.max_dis_threshold: 0.08

# Params for colored point clouds
Color.dense: 10
Color.intensity_threshold: 10
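For reference, a minimal Eigen sketch of how a 4×4 homogeneous `ExtrinsicMat` such as the one above can be applied to a point. It assumes the matrix maps points from an auxiliary LiDAR frame into the base LiDAR frame (the exact convention is defined by the calibration code), and the sample point is arbitrary.
```
#include <Eigen/Dense>
#include <iostream>

int main() {
  // The initial extrinsic from the YAML above, assumed here to map points
  // from an auxiliary LiDAR frame into the base LiDAR frame.
  Eigen::Matrix4d T;
  T << 0.0, -1.0,  0.0, 0.0,
       0.0,  0.0, -1.0, 0.0,
       1.0,  0.0,  0.0, 0.0,
       0.0,  0.0,  0.0, 1.0;

  // An arbitrary point in the auxiliary LiDAR frame (homogeneous coordinates).
  Eigen::Vector4d p_ext(1.0, 2.0, 3.0, 1.0);

  // The same point expressed in the base LiDAR frame: (-2, -3, 1).
  Eigen::Vector4d p_base = T * p_ext;
  std::cout << p_base.head<3>().transpose() << std::endl;
  return 0;
}
```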

config/init_extrinsic

+10
@@ -0,0 +1,10 @@
for MID-100 self calibration, i.e., L1 is the base LiDAR

L0: 0.2 0.2 0.2 0.999858 0.000676286 0.0157461 -0.00599366
L2: 0.2 0.2 0.2 0.999964 -0.000508365 0.0043916 -0.00727742

for AVIA calibrating MID-100, i.e., AVIA is the base LiDAR

L0: 0 0 0 -0.0332571 -0.0141886 -0.00218784 0.999344
L1: 0 0 0 -0.0190995 -0.00383766 0.00756648 -0.999782
L2: 0 0 0 0.0135145 -0.00174505 -0.0090076 0.999867

config/left.yaml

+38
@@ -0,0 +1,38 @@
%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
image_topic: "/left_camera/image"

CameraMat: !!opencv-matrix
  rows: 3
  cols: 3
  dt: d
  data: [863.590518437255, 0.116814496662654, 621.666074631063,
         0, 863.100180533059, 533.971978652819,
         0, 0, 1]

DistCoeffs: !!opencv-matrix
  rows: 5
  cols: 1
  dt: d
  data: [-0.0944205499243979, 0.0946727677776504, -0.00807970960613932, 8.07461209775283e-05, 6.52716646875879e-05]

Camera.width: 1280
Camera.height: 1024

ExtrinsicMat: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  # if AVIA is the base LiDAR
  data: [0, -1, 0, 0,
         0, 0, -1, 0,
         1, 0, 0, 0,
         0, 0, 0, 1]
  # if MID is the base LiDAR
  # data: [0, 1, 0, 0,
  #        0, 0, -1, 0,
  #        -1, 0, 0, 0,
  #        0, 0, 0, 1]
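For reference, a minimal OpenCV sketch of how a camera configuration like the one above can be loaded and used to project a LiDAR point into the image. It assumes `ExtrinsicMat` maps points from the base LiDAR frame into the camera frame (the exact convention is defined by the calibration code); the file path and the sample point are placeholders.
```
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main() {
  // Read intrinsics, distortion, and the LiDAR-camera extrinsic from the
  // OpenCV-style YAML file (placeholder path).
  cv::FileStorage fs("config/left.yaml", cv::FileStorage::READ);
  cv::Mat K, D, T;
  fs["CameraMat"] >> K;
  fs["DistCoeffs"] >> D;
  fs["ExtrinsicMat"] >> T;

  // Split the 4x4 extrinsic into rotation (as a Rodrigues vector) and translation.
  cv::Mat R = T(cv::Rect(0, 0, 3, 3)).clone();
  cv::Mat t = T(cv::Rect(3, 0, 1, 3)).clone();
  cv::Mat rvec;
  cv::Rodrigues(R, rvec);

  // Project one point, assumed to be expressed in the base LiDAR frame.
  std::vector<cv::Point3d> lidar_pts{{5.0, 0.5, -0.2}};
  std::vector<cv::Point2d> img_pts;
  cv::projectPoints(lidar_pts, rvec, t, K, D, img_pts);
  std::cout << "pixel: " << img_pts[0] << std::endl;
  return 0;
}
```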

config/mid100_stereo.yaml

+37
@@ -0,0 +1,37 @@
%YAML:1.0

BaseLiDARNumber: 1    # middle LiDAR of MID-100
ExtLiDARNumber1: 0    # left LiDAR of MID-100
ExtLiDARNumber2: 2    # right LiDAR of MID-100
LiDARFilesPath: "/home/sam/catkin_ws/src/mlcc/scene2/"
ExtrinsicNumber: 2    # total number of extrinsics to be calibrated

# Initial extrinsic (usually provided by hand measurement or CAD design)
ExtrinsicMat: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  data: [0.0, -1.0, 0.0, 0.0,
         0.0, 0.0, -1.0, 0.0,
         1.0, 0.0, 0.0, 0.0,
         0.0, 0.0, 0.0, 1.0]

# Params for Canny Edge Extraction
Canny.gray_threshold: 15
Canny.len_threshold: 200

# Params for Voxel Cutting & Plane Fitting & Edge Extraction
Voxel.size: 1.0
Voxel.down_sample_size: 0.02
Plane.min_points_size: 60
Plane.normal_theta_min: 20
Plane.normal_theta_max: 160
Plane.max_size: 5
Ransac.dis_threshold: 0.02
Ransac.iter_num: 200
Edge.min_dis_threshold: 0.03
Edge.max_dis_threshold: 0.06

# Params for colored point clouds
Color.dense: 1
Color.intensity_threshold: 10

config/right.yaml

+38
@@ -0,0 +1,38 @@
%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
image_topic: "/right_camera/image"

CameraMat: !!opencv-matrix
  rows: 3
  cols: 3
  dt: d
  data: [863.081687640302, 0.176140650303666, 628.941349825503,
         0, 862.563371991533, 533.002909535090,
         0, 0, 1]

DistCoeffs: !!opencv-matrix
  rows: 5
  cols: 1
  dt: d
  data: [-0.0943795554942897, 0.0982998241524913, -0.0125418048527694, 0.000456235380677041, -8.73113795357082e-05]

Camera.width: 1280
Camera.height: 1024

ExtrinsicMat: !!opencv-matrix
  rows: 4
  cols: 4
  dt: d
  # if AVIA is the base LiDAR
  data: [0, -1, 0, 0,
         0, 0, -1, 0,
         1, 0, 0, 0,
         0, 0, 0, 1]
  # if MID is the base LiDAR
  # data: [0, 1, 0, 0,
  #        0, 0, -1, 0,
  #        -1, 0, 0, 0,
  #        0, 0, 0, 1]

figure/workflow.jpg

328 KB
