3DBN

This repository contains the Python implementation of the 3D Backbone Network for 3D Object Detection.

NOTE-2020-5-3

The new version of this code can be found in Det3D. The new version of 3DBN uses spconv, which makes detection more efficient and makes it feasible to build deeper network models for higher accuracy.

Pipeline

[Pipeline overview figure]

Install

Implemented and tested on Ubuntu 16.04 with Python 3.6 and Pytorch 1.0.

  1. Clone the repo:
git clone https://github.com/Benzlxs/tDBN.git
  2. Install the Python dependencies (the miniconda3 package manager is recommended):
cd ./tDBN
pip3 install -r requirements.txt
  3. Install PyTorch. Visit the official PyTorch webpage and install PyTorch 1.0 according to your hardware configuration, e.g.:
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
  4. Install SparseConvNet according to its README file.

  5. Compile the protos:

cd ./tDBN
bash protos/run_protoc.sh
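Before moving on to the dataset, it can help to confirm the dependencies are importable. This is a minimal sketch, not part of the repo; the package list is an assumption based on the install steps above (check requirements.txt in your checkout for the authoritative list):

```python
import importlib.util

# Assumed dependency names; adjust to match requirements.txt in your checkout.
REQUIRED = ["torch", "torchvision", "sparseconvnet"]

def missing_packages(names):
    """Return the subset of names that cannot be located in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All required packages found.")
```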

Dataset

  1. Download the KITTI dataset and arrange the files as follows:
kitti_dataset
        training
                image_2
                label_2
                calib
                velodyne
                velodyne_reduced
        testing
                image_2
                calib
                velodyne
                velodyne_reduced
  2. Split the dataset and put the split files under ./tDBN/kitti/data_split. Two data splits are provided, 50/50 and 75/25, or you can customize your own split ratio.

  3. Create the KITTI dataset infos:

cd ./tDBN
python ./scripts/create_data.py create_kitti_info_file --data_path=kitti_dataset
  4. Create the reduced point clouds:
cd ./tDBN
python ./scripts/create_data.py create_reduced_point_cloud --data_path=kitti_dataset
  5. Create the groundtruth database:
cd ./tDBN
python ./scripts/create_data.py create_groundtruth_database --data_path=kitti_dataset
  6. Modify the directories in the config file. Go to the config folder and configure database_info_path, kitti_info_path and kitti_root_path to your own paths.
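A custom split ratio (step 2) can be generated with a short script like the following. This is a sketch, not code from the repo; the one-id-per-line file format is an assumption:

```python
import random

def split_ids(sample_ids, train_ratio=0.5, seed=0):
    """Shuffle sample ids and split them into (train, val) lists.

    train_ratio=0.5 reproduces the 50/50 split, 0.75 the 75/25 split.
    A fixed seed keeps the split reproducible across runs.
    """
    ids = sorted(sample_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    return sorted(ids[:n_train]), sorted(ids[n_train:])

# KITTI's training set contains frames 000000..007480.
all_ids = [f"{i:06d}" for i in range(7481)]
train, val = split_ids(all_ids, train_ratio=0.5)
# Write the lists under ./tDBN/kitti/data_split, one id per line (assumed format):
# Path("train.txt").write_text("\n".join(train))
```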

Training

  1. Select your config file and output directory in train.sh, e.g. config=./configs/car_tDBN_bv_2.config
  2. Start to train:
cd ./tDBN
bash train.sh
  3. Training results are saved in the output directory; check log.txt and eval_log.txt for detailed performance.
  4. Some training results are as follows:
Eval_at_125571
Car AP@0.70, 0.70, 0.70:
3D  AP 87.98, 77.89, 76.35
Eval_at_160940
Car AP@0.70, 0.70, 0.70:
3D  AP 88.20, 77.59, 75.58
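The AP numbers can be pulled out of eval_log.txt programmatically. This sketch assumes the log lines look like the sample output above:

```python
import re

# Matches lines such as "3D  AP 87.98, 77.89, 76.35" (format assumed from the sample above).
AP_LINE = re.compile(r"3D\s+AP\s+([\d.]+),\s*([\d.]+),\s*([\d.]+)")

def parse_3d_ap(log_text):
    """Return a list of (easy, moderate, hard) 3D AP triples found in the log."""
    return [tuple(float(v) for v in m) for m in AP_LINE.findall(log_text)]

sample = """Eval_at_125571
Car AP@0.70, 0.70, 0.70:
3D  AP 87.98, 77.89, 76.35"""
print(parse_3d_ap(sample))  # [(87.98, 77.89, 76.35)]
```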

Evaluate and inference

  1. Select your config file and output directory in evaluator.sh, e.g. config=./configs/car_tDBN_bv_2.config
  2. Set ckpt_path to the model checkpoint that you want to evaluate.
  3. To evaluate, set test=False; to generate testing results, set test=True and kitti_info_path=your_kitti_dataset_root/kitti_infos_test.pkl
  4. Start to evaluate or inference:
cd ./tDBN
bash evaluator.sh
  5. Testing results on the KITTI benchmark:
Benchmark               Easy     Moderate  Hard
Car (Detection)         90.30 %  88.62 %   80.08 %
Car (Orientation)       89.93 %  87.95 %   79.32 %
Car (3D Detection)      83.56 %  74.64 %   66.76 %
Car (Bird's Eye View)   88.13 %  79.40 %   77.97 %

Acknowledgements

Thanks to the team of Yan Yan; we have benefited a lot from their previous work SECOND.
