3D Point Cloud SOTA Methods, Code, Papers, and Datasets (Point Cloud Object Detection & Segmentation)
- For questions, contact me on WeChat: shuangyu_ai
- For more autonomous-driving discussion groups, scan the QR code to join: Autonomous Driving Perception (PCL/ROS+DL): summary of technical discussion groups (new version)
At readers' request, I created a Knowledge Planet community, 【自动驾驶感知(PCL/ROS+DL)】, focused on autonomous driving perception. It covers traditional methods (the PCL point cloud library, ROS) and deep learning (object detection + semantic segmentation), as well as Apollo, Autoware (ROS 2 based), BEV perception, 3D reconstruction, SLAM (visual + LiDAR), model compression (distillation, pruning, quantization, etc.), autonomous driving simulation, dataset annotation & data closed loop, and other full-stack autonomous driving topics. Scan the QR code to join and let's reach the summit of autonomous driving together!
Point cloud processing methods fall into two main categories:
- Deep learning methods [`python`]: object detection & semantic segmentation & multi-object tracking (MOT)
  - [202212 done] Latest object detection papers, continuously updated
  - [202304 done] Latest semantic segmentation papers, continuously updated
- Traditional rule-based methods [`c++`]
@双愚: please credit the source if you fork or star this repo.
- [202212 done] Latest object detection papers, continuously updated
- [202304 done] Latest semantic segmentation papers, continuously updated
- [202209 done] Write-up on object detection frameworks (pcdet + mmdetection3d + det3d + paddle3d)
- [202208 done] Dataset survey and summary
- [202406 done] Overview of 3D semantic segmentation frameworks (mmdetection3d | OpenPCSeg | Pointcept)
- Detailed dataset analysis: kitti & waymo & nuScenes
- Apollo study notes: https://github.com/HuangCongQing/apollo_note
Code annotation notes:
- pcdet:https://github.com/HuangCongQing/pcdet-note
- mmdetection3d:https://github.com/HuangCongQing/mmdetection3d-note
- det3d: TODO
- paddle3d: TODO
- paperswithcode: https://paperswithcode.com/
Basic dataset processing: handling dataset annotation files (a minimal KITTI label-parsing sketch follows below)
Download scripts for some datasets: https://github.com/HuangCongQing/download_3D_dataset
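A minimal sketch of annotation-file handling, assuming the standard KITTI `label_2` text format (per line: type, truncated, occluded, alpha, 2D bbox, 3D dimensions h/w/l, location x/y/z, rotation_y); the file path is hypothetical:

```python
# Minimal KITTI label_2 parser (sketch): each line describes one object.
# Field order follows the KITTI object devkit: type, truncated, occluded, alpha,
# bbox(left, top, right, bottom), dimensions(h, w, l), location(x, y, z), rotation_y.
def parse_kitti_label_line(line: str) -> dict:
    f = line.strip().split(" ")
    return {
        "type": f[0],                                   # e.g. 'Car', 'Pedestrian', 'DontCare'
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),                           # observation angle
        "bbox_2d": [float(v) for v in f[4:8]],          # image-plane box
        "dimensions_hwl": [float(v) for v in f[8:11]],  # object size (h, w, l)
        "location_xyz": [float(v) for v in f[11:14]],   # in camera coordinates
        "rotation_y": float(f[14]),
    }

if __name__ == "__main__":
    # Hypothetical path; adjust to your local KITTI layout.
    with open("training/label_2/000000.txt") as fp:
        objects = [parse_kitti_label_line(l) for l in fp if l.strip()]
    print(len(objects), "objects")
```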
Point cloud visualization notes and code: https://github.com/HuangCongQing/Point-Clouds-Visualization
There are many libraries for 3D point cloud visualization; your options include (a minimal Open3D example follows this list):
- PCL point cloud visualization [`c++`]
- ROS topic visualization [`c++`] [`python`]
- Open3D [`python`]
- Mayavi [`python`]
- Matplotlib [`python`]
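As a quick example, a minimal Open3D sketch (assuming `open3d` and `numpy` are installed; the point array here is random placeholder data):

```python
import numpy as np
import open3d as o3d  # pip install open3d

# Placeholder point cloud: replace with points loaded from your own data.
points = np.random.uniform(-10, 10, size=(2048, 3)).astype(np.float64)

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)  # wrap the Nx3 array
o3d.visualization.draw_geometries([pcd])         # opens an interactive viewer window
```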
Summary of data annotation tools: https://github.com/HuangCongQing/data-labeling-tools
One-stage methods:
- Voxel-Net
- SECOND
- PointPillars
- HVNet
- DOPS
- Point-GNN
- SA-SSD
- 3D-VID
- 3DSSD
Two-stage methods:
- F-PointNet
- F-ConvNet
- Point-RCNN
- Part-A^2
- PV-RCNN
- Fast Point RCNN
- TANet
PointNet learns per-point features with shared MLPs and extracts a global feature with a symmetric pooling function (max pooling). Building on PointNet, a series of point-based networks has been proposed.
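A minimal PyTorch sketch of that idea, a shared per-point MLP followed by symmetric max pooling; layer sizes are illustrative and not the original PointNet configuration:

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Shared per-point MLP + symmetric max pooling, the core PointNet idea."""
    def __init__(self, in_dim: int = 3, feat_dim: int = 128):
        super().__init__()
        # 1x1 convolutions act as an MLP shared across all points.
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 1), nn.ReLU(),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) -> (B, 3, N) for Conv1d
        per_point = self.mlp(xyz.transpose(1, 2))   # (B, feat_dim, N)
        global_feat = per_point.max(dim=2).values   # symmetric pooling -> (B, feat_dim)
        return global_feat

if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)        # batch of 2 clouds, 1024 points each
    print(TinyPointNet()(pts).shape)     # torch.Size([2, 128])
```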
Point-based methods can be roughly divided into pointwise MLP methods, point convolution methods, RNN-based methods, and graph-based methods:
Pointwise MLP methods:
- PointNet++
- PointSIFT
- PointWeb
- ShellNet
- RandLA-Net
Point convolution methods:
- PointCNN
- PCCN
- A-CNN
- ConvPoint
- pointconv
- KPConv
- DPC
- InterpCNN
RNN-based methods:
- G+RCU
- RSNet
- 3P-RNN
- DAR-Net
Graph-based methods (a kNN edge-feature sketch follows this list):
- DGCNN
- SPG
- SSP+SPG
- PyramNet
- GACNet
- PAG
- HDGCN
- HPEIN
- SPH3D-GCN
- DPAM
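As a rough illustration of the graph-based idea, the sketch below builds a kNN graph and forms DGCNN-style edge features [x_i, x_j - x_i]; it is a simplified sketch, not the official DGCNN implementation:

```python
import torch

def knn_edge_features(x: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Build a kNN graph over points and form edge features [x_i, x_j - x_i]
    for each of the k neighbours j of point i.

    x: (B, N, C) point features -> returns (B, N, k, 2C).
    """
    dist = torch.cdist(x, x)                                  # (B, N, N) pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[..., 1:]    # drop self-match -> (B, N, k)
    B, N, C = x.shape
    batch = torch.arange(B).view(B, 1, 1)
    neighbours = x[batch, idx]                                # gather neighbours: (B, N, k, C)
    center = x.unsqueeze(2).expand(-1, -1, k, -1)             # repeat center point: (B, N, k, C)
    return torch.cat([center, neighbours - center], dim=-1)   # edge features: (B, N, k, 2C)

if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)
    print(knn_edge_features(pts, k=16).shape)   # torch.Size([2, 1024, 16, 6])
```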
Graviti hosts more than 400 high-quality CV datasets covering autonomous driving, smart retail, robotics, and other AI application areas. For two examples, see this article: https://bbs.cvmart.net/topics/3346
- Google Dataset Search: https://toolbox.google.com/datasetsearch
- DataHub, a platform for sharing high-quality datasets: https://datahub.io/
- A machine-learning dataset repository for uploading and finding datasets: https://www.webdoctx.com/www.mldata.org
- DataFountain dataset collection: https://www.datafountain.cn/dataSets
- TinyMind dataset collection: https://www.tinymind.cn/sites#group_22 ; see also an article introducing many datasets: "The world's most valuable resource is not oil but data" (with download links)
- https://www.graviti.cn/open-datasets
https://github.com/Yochengliu/awesome-point-cloud-analysis#---datasets
- [KITTI] The KITTI Vision Benchmark Suite. [`det.`] **commonly used**
- [ModelNet] The Princeton ModelNet. [`cls.`]
- [ShapeNet] A collaborative dataset between researchers at Princeton, Stanford and TTIC. [`seg.`]
- [PartNet] The PartNet dataset provides fine-grained part annotation of objects in ShapeNetCore. [`seg.`]
- [PartNet] PartNet benchmark from Nanjing University and National University of Defense Technology. [`seg.`]
- [S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset. [`seg.`] **commonly used**
- [ScanNet] Richly-annotated 3D Reconstructions of Indoor Scenes. [`cls.` `seg.`]
- [Stanford 3D] The Stanford 3D Scanning Repository. [`reg.`]
- [UWA Dataset]. [`cls.` `seg.` `reg.`]
- [Princeton Shape Benchmark] The Princeton Shape Benchmark.
- [SYDNEY URBAN OBJECTS DATASET] This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees. [`cls.` `match.`]
- [ASL Datasets Repository (ETH)] This site is dedicated to providing datasets for the robotics community with the aim of facilitating result evaluation and comparison. [`cls.` `match.` `reg.` `det.`]
- [Large-Scale Point Cloud Classification Benchmark (ETH)] This benchmark closes the gap and provides a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total. [`cls.`]
- [Robotic 3D Scan Repository] The Canadian Planetary Emulation Terrain 3D Mapping Dataset is a collection of three-dimensional laser scans gathered at two unique planetary analogue rover test facilities in Canada.
- [Radish] The Robotics Data Set Repository (Radish for short) provides a collection of standard robotics data sets.
- [IQmulus & TerraMobilita Contest] The database contains 3D MLS data from a dense urban environment in Paris (France), composed of 300 million points. The acquisition was made in January 2013. [`cls.` `seg.` `det.`]
- [Oakland 3-D Point Cloud Dataset] This repository contains labeled 3-D point cloud laser data collected from a moving platform in an urban environment.
- [Robotic 3D Scan Repository] This repository provides 3D point clouds from robotic experiments, log files of robot runs and standard 3D data sets for the robotics community.
- [Ford Campus Vision and Lidar Data Set] The dataset is collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck.
- [The Stanford Track Collection] This dataset contains about 14,000 labeled tracks of objects as observed in natural street scenes by a Velodyne HDL-64E S2 LIDAR.
- [PASCAL3D+] Beyond PASCAL: A Benchmark for 3D Object Detection in the Wild. [`pos.` `det.`]
- [3D MNIST] The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition. [`cls.`]
- [WAD] [ApolloScape] The datasets are provided by Baidu Inc. [`tra.` `seg.` `det.`]
- [nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset. [have used]
- [PreSIL] Depth information, semantic segmentation (images), point-wise segmentation (point clouds), ground point labels (point clouds), and detailed annotations for all vehicles and people. [paper] [`det.` `aut.`]
- [3D Match] Keypoint Matching Benchmark, Geometric Registration Benchmark, RGB-D Reconstruction Datasets. [`reg.` `rec.` `oth.`]
- [BLVD] (a) 3D detection, (b) 4D tracking, (c) 5D interactive event recognition and (d) 5D intention prediction. [ICRA 2019 paper] [`det.` `tra.` `aut.` `oth.`]
- [PedX] 3D pose estimation of pedestrians: more than 5,000 pairs of high-resolution (12MP) stereo images and LiDAR data, with 2D and 3D labels of pedestrians. [ICRA 2019 paper] [`pos.` `aut.`]
- [H3D] Full-surround 3D multi-object detection and tracking dataset. [ICRA 2019 paper] [`det.` `tra.` `aut.`]
- [Argoverse BY ARGO AI] Two public datasets (3D Tracking and Motion Forecasting) supported by highly detailed maps to test, experiment, and teach self-driving vehicles how to understand the world around them. [CVPR 2019 paper] [`tra.` `aut.`]
- [Matterport3D] RGB-D: 10,800 panoramic views from 194,400 RGB-D images. Annotations: surface reconstructions, camera poses, and 2D and 3D semantic segmentations. Keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and scene classification. [3DV 2017 paper] [code] [blog]
- [SynthCity] SynthCity is a 367.9M-point synthetic, full-colour Mobile Laser Scanning point cloud. Nine categories. [`seg.` `aut.`]
- [Lyft Level 5] Includes high-quality, human-labelled 3D bounding boxes of traffic agents and an underlying HD spatial semantic map. [`det.` `seg.` `aut.`]
- [SemanticKITTI] Sequential semantic segmentation, 28 classes, for autonomous driving. All sequences of KITTI odometry labeled. [ICCV 2019 paper] [`seg.` `oth.` `aut.`] **commonly used**
- [NPM3D] The Paris-Lille-3D has been produced by a Mobile Laser System (MLS) in two different cities in France (Paris and Lille). [`seg.`]
- [The Waymo Open Dataset] The Waymo Open Dataset is comprised of high-resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. [`det.`]
- [A*3D] A*3D: An Autonomous Driving Dataset in Challenging Environments. [`det.`]
- [PointDA-10 Dataset] Domain adaptation for point clouds.
- [Oxford Robotcar] The dataset captures many different combinations of weather, traffic and pedestrians. [`cls.` `det.` `rec.`]
- [S3DIS] The Stanford Large-Scale 3D Indoor Spaces Dataset. [`seg.`] [commonly used]
- [SemanticKITTI] Sequential semantic segmentation, 28 classes, for autonomous driving. All sequences of KITTI odometry labeled. [ICCV 2019 paper] [`seg.` `oth.` `aut.`] [commonly used]
- Semantic3D (todo)
- [KITTI] The KITTI Vision Benchmark Suite. [`det.`] [commonly used] (a minimal velodyne loading sketch follows this list)
- [nuScenes] The nuScenes dataset is a large-scale autonomous driving dataset. [have used]
- [The Waymo Open Dataset] The Waymo Open Dataset is comprised of high-resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. [`det.`]
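For reference, KITTI velodyne scans are stored as flat float32 binaries with four values per point (x, y, z, reflectance); a minimal loading sketch, with a hypothetical file path:

```python
import numpy as np

def load_kitti_velodyne(bin_path: str) -> np.ndarray:
    """Read a KITTI velodyne scan: float32 binary, 4 values per point (x, y, z, reflectance)."""
    return np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)

if __name__ == "__main__":
    # Hypothetical path; adjust to your local KITTI layout.
    points = load_kitti_velodyne("training/velodyne/000000.bin")
    print(points.shape)   # (N, 4)
```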
- https://github.com/timzhang642/3D-Machine-Learning
- https://github.com/victorphd/autonomous-vahicles-learning-resource
- https://github.com/Yochengliu/awesome-point-cloud-analysis
- https://github.com/NUAAXQ/awesome-point-cloud-analysis-2021
- https://github.com/QingyongHu/SoTA-Point-Cloud
- https://arxiv.org/abs/1912.12033 : Deep Learning for 3D Point Clouds: A Survey
- https://github.com/zhulf0804/3D-PointCloud
Copyright (c) 双愚. All rights reserved.
Licensed under the MIT License.
WeChat official account: 【双愚】 (huang_chongqing), covering research and technology as well as thoughts on life. Feel free to follow.