This repository implements the LiDAR-visual Dynamic Obstacle Detection and Tracking (LV-DOT) algorithm, which detects and tracks dynamic obstacles for robots with extremely constrained computational resources.
For additional details, please refer to the related paper available here:
Zhefan Xu*, Haoyu Shen*, Xinming Han, Hanyu Jin, Kanlong Ye, and Kenji Shimada, "LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation", arXiv, 2025. [preprint] [YouTube] [BiliBili]
*The authors contributed equally.
- 2025-02-28: The GitHub code, video demos, and relevant papers for our LV-DOT framework are released. The authors will actively maintain and update this repo!
The system requirements for this repository are as follows. Please ensure your system meets these requirements:
- Ubuntu 18.04/20.04 LTS
- ROS Melodic/Noetic
This package has been tested on the following onboard computer:
Please follow the instructions below to install this package.
```bash
# This package needs the ROS vision_msgs package
sudo apt install ros-noetic-vision-msgs

# Install the package required by YOLOv11
pip install ultralytics
```
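Optionally, you can verify the ultralytics installation before building the workspace. The snippet below is only a minimal sanity check; the `yolo11n.pt` weight is just an example and is downloaded automatically by ultralytics on first use.

```python
# Optional sanity check for the ultralytics install (not part of LV-DOT itself).
# "yolo11n.pt" is only an example weight; ultralytics downloads it automatically.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # loads (and, if needed, downloads) a YOLO11 model
model.info()                # prints a short model summary if everything works
```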
```bash
cd ~/catkin_ws/src
git clone https://github.com/Zhefan-Xu/LV-DOT.git
cd ..
catkin_make
```
Please download the rosbag file from this link:
Then play the bag and launch the detector:

```bash
rosbag play -l corridor_demo.bag
roslaunch onboard_detector run_detector.launch
```
The perception results can be visualized in Rviz, as shown in the corridor demo video (corridor_demo.mp4).
Please adjust the configuration file under cfg/detector_param.yaml to match the topics and parameters of your LiDAR and camera device. Also, change the color image topic name in scripts/yolo_detector/yolov11_detector.py accordingly.
From the parameter file, you can find that the algorithm expects the following data from the robot:
- LiDAR point cloud: `/pointcloud`
- Depth image: `/camera/depth/image_rect_raw`
- Color image: `/camera/color/image_rect_raw`
- Robot pose: `/mavros/local_position/pose`
- Robot odometry (alternative to robot pose): `/mavros/local_position/odom`
Additionally, update the camera intrinsic parameters and the camera-LiDAR extrinsic parameters in the config file.
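As a rough illustration of what these settings cover, a config sketch might look like the following. The key names below are hypothetical and do not necessarily match the actual parameter names in cfg/detector_param.yaml; use the names defined in that file.

```yaml
# Hypothetical sketch only -- use the actual parameter names from
# cfg/detector_param.yaml; topic names must match your sensor drivers.
lidar_pointcloud_topic: /pointcloud
depth_image_topic: /camera/depth/image_rect_raw
color_image_topic: /camera/color/image_rect_raw
pose_topic: /mavros/local_position/pose       # or provide the odometry topic instead
odom_topic: /mavros/local_position/odom

# Camera intrinsics (replace with your calibration values)
fx: 386.0
fy: 386.0
cx: 320.0
cy: 240.0

# Camera-LiDAR extrinsics: 4x4 homogeneous transform (row-major, identity shown)
camera_to_lidar: [1.0, 0.0, 0.0, 0.0,
                  0.0, 1.0, 0.0, 0.0,
                  0.0, 0.0, 1.0, 0.0,
                  0.0, 0.0, 0.0, 1.0]
```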
Run the following command to launch dynamic obstacle detection and tracking.
```bash
# Launch your sensor device first. Make sure it publishes the above data.
roslaunch onboard_detector run_detector.launch
```
The LV-DOT can be directly utilized to assist mobile robot navigation and collision avoidance in dynamic environments, as demonstrated below:
The LV-DOT framework is shown below. Using onboard LiDAR, camera, and odometry inputs, the LiDAR and depth detection modules detect 3D obstacles, while the color detection module identifies 2D dynamic obstacles. The LiDAR-visual fusion module refines these detections, and the tracking module classifies obstacles as static or dynamic.
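As a minimal sketch of this data flow, one perception cycle can be thought of as below. All names are illustrative placeholders; this is not the repository's actual (C++) implementation, and the detection and fusion internals are stubbed out.

```python
# Illustrative sketch of the LV-DOT data flow described above. All names are
# hypothetical; this is NOT the repository's actual (C++) implementation.
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Box3D:
    center: Tuple[float, float, float]                        # position in the odom frame
    size: Tuple[float, float, float]                           # (length, width, height)
    velocity: Tuple[float, float, float] = (0.0, 0.0, 0.0)     # estimated by the tracker
    is_dynamic: bool = False


def detect_lidar(pointcloud) -> List[Box3D]:
    """LiDAR detection module: cluster the point cloud into 3D boxes."""
    return []  # placeholder


def detect_depth(depth_image, pose) -> List[Box3D]:
    """Depth detection module: extract 3D boxes from the depth image."""
    return []  # placeholder


def detect_color_2d(color_image) -> list:
    """Color detection module: 2D dynamic-obstacle detections (e.g., YOLO)."""
    return []  # placeholder


def fuse(lidar_boxes, depth_boxes, boxes_2d) -> List[Box3D]:
    """LiDAR-visual fusion module: merge and refine overlapping detections."""
    return lidar_boxes + depth_boxes  # placeholder for the actual fusion logic


def track(detections: List[Box3D], previous: List[Box3D], dt: float,
          speed_threshold: float = 0.3) -> List[Box3D]:
    """Tracking module sketch: nearest-neighbor association across frames,
    finite-difference velocity, and a speed threshold (value is illustrative)
    to label each obstacle as static or dynamic."""
    for det in detections:
        match: Optional[Box3D] = None
        best = float("inf")
        for prev in previous:
            d = math.dist(det.center, prev.center)
            if d < best:
                best, match = d, prev
        if match is not None and dt > 0.0:
            vel = tuple((c - p) / dt for c, p in zip(det.center, match.center))
            det.velocity = vel
            det.is_dynamic = math.hypot(*vel) > speed_threshold
    return detections


def perception_step(pointcloud, depth_image, color_image, pose,
                    previous: List[Box3D], dt: float) -> List[Box3D]:
    """One LV-DOT cycle: per-modality detection, LiDAR-visual fusion, tracking."""
    lidar_boxes = detect_lidar(pointcloud)
    depth_boxes = detect_depth(depth_image, pose)
    boxes_2d = detect_color_2d(color_image)
    fused = fuse(lidar_boxes, depth_boxes, boxes_2d)
    return track(fused, previous, dt)
```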
Example qualitative perception results in various testing environments are shown below:
If our work is useful to your research, please consider citing our paper.
```
@article{LV-DOT,
  title={LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation},
  author={Xu, Zhefan and Shen, Haoyu and Han, Xinming and Jin, Hanyu and Ye, Kanlong and Shimada, Kenji},
  journal={arXiv preprint arXiv:2502.20607},
  year={2025}
}
```