LV-DOT: LiDAR-Visual Dynamic Obstacle Detection and Tracking for Autonomous Robots

This repository implements the LiDAR-Visual Dynamic Obstacle Detection and Tracking (LV-DOT) algorithm, which detects and tracks dynamic obstacles for robots with extremely constrained onboard computational resources.

[Demo GIFs: corridor, intersection, office, and workspace environments]

For additional details, please refer to the related paper available here:

Zhefan Xu*, Haoyu Shen*, Xinming Han, Hanyu Jin, Kanlong Ye, and Kenji Shimada, "LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation", arXiv, 2025. [preprint] [YouTube] [BiliBili]

*These authors contributed equally.

News

  • 2025-02-28: The GitHub code, video demos, and relevant papers for our LV-DOT framework are released. The authors will actively maintain and update this repo!


Installation Guide

Please ensure your system meets the following requirements:

  • Ubuntu 18.04/20.04 LTS
  • ROS Melodic/Noetic

This package has been tested on the following onboard computer:

Please follow the instructions below to install this package.

# This package needs the ROS vision_msgs package
sudo apt install ros-noetic-vision-msgs

# Install the Python package required for YOLOv11
pip install ultralytics

cd ~/catkin_ws/src
git clone https://github.com/Zhefan-Xu/LV-DOT.git
cd ..
catkin_make
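
After the build finishes, source the workspace so ROS can find the package (this assumes the standard catkin workspace path used above):

source ~/catkin_ws/devel/setup.bash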

Run Demo

a. Run on dataset

Please download the rosbag file from this link:

# In one terminal: play the demo rosbag on loop
rosbag play -l corridor_demo.bag
# In a second terminal: launch the detector
roslaunch onboard_detector run_detector.launch
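
You can also inspect the downloaded bag to confirm which sensor and pose topics it provides:

rosbag info corridor_demo.bag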

The perception results can be visualized in Rviz as follows:

[Demo video: corridor_demo.mp4]

b. Run on your device

Please adjust the configuration file under cfg/detector_param.yaml to match your LiDAR and camera devices. Also, change the color image topic name in scripts/yolo_detector/yolov11_detector.py.
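
As a convenience (not part of the documented setup), you can locate the topic string in the script before editing it; this assumes the script currently subscribes to the default color topic listed below:

grep -n "image_rect_raw" scripts/yolo_detector/yolov11_detector.py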

As shown in the parameter file, the algorithm expects the following inputs from the robot:

  • LiDAR Point Cloud: /pointcloud

  • Depth image: /camera/depth/image_rect_raw

  • Color image: /camera/color/image_rect_raw

  • Robot pose: /mavros/local_position/pose

  • Robot odometry (alternative to robot pose): /mavros/local_position/odom

Additionally, update the camera intrinsic parameters and the camera-LiDAR extrinsic parameters in the config file.
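
If your drivers publish under different topic names and you prefer not to edit the config, one possible workaround is to relay your topics onto the expected names with the standard ROS topic_tools package (the source topic names below are placeholders for your own drivers; run each relay in its own terminal):

# Relay your sensor topics onto the names expected by the detector
rosrun topic_tools relay /your_lidar/points /pointcloud
rosrun topic_tools relay /your_camera/depth/image /camera/depth/image_rect_raw
rosrun topic_tools relay /your_camera/color/image /camera/color/image_rect_raw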

Run the following command to launch dynamic obstacle detection and tracking.

# Launch your sensor device first. Make sure it has the above data.
roslaunch onboard_detector run_detector.launch
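
Once the detector is running, a quick sanity check is to confirm that the expected input topics are actually publishing:

rostopic hz /pointcloud
rostopic hz /camera/color/image_rect_raw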

LV-DOT can be directly used to assist mobile robot navigation and collision avoidance in dynamic environments, as demonstrated below:

[Demo GIFs: block and approach scenarios]

LV-DOT Framework and Results

The LV-DOT framework is shown below. Using onboard LiDAR, camera, and odometry inputs, the LiDAR and depth detection modules detect 3D obstacles, while the color detection module identifies 2D dynamic obstacles. The LiDAR-visual fusion module refines these detections, and the tracking module classifies obstacles as static or dynamic.

Example qualitative perception results in various testing environments are shown below:

Citation and Reference

If our work is useful to your research, please consider citing our paper.

@article{LV-DOT,
  title={LV-DOT: LiDAR-visual dynamic obstacle detection and tracking for autonomous robot navigation},
  author={Xu, Zhefan and Shen, Haoyu and Han, Xinming and Jin, Hanyu and Ye, Kanlong and Shimada, Kenji},
  journal={arXiv preprint arXiv:2502.20607},
  year={2025}
}
