- We use distributed training.
- For fair comparison with other codebases, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` over all 8 GPUs. Note that this value is usually less than what `nvidia-smi` shows; see the memory sketch after this list.
- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script `benchmark.py`, which computes the average time on 2000 images; see the timing sketch after this list.
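
The snippet below is a minimal sketch of how the reported memory number can be reproduced in a distributed run. The helper name `report_max_memory` is hypothetical, not part of the codebase; it assumes `torch.distributed` has already been initialized, as in a normal 8-GPU training job.

```python
import torch
import torch.distributed as dist


def report_max_memory(device):
    """Return the peak allocated GPU memory (in MB) across all ranks.

    Hypothetical helper: reads the per-process peak from
    torch.cuda.max_memory_allocated() and takes the maximum over
    all GPUs with an all-reduce.
    """
    # Peak bytes ever held by this process's caching allocator.
    mem_bytes = torch.cuda.max_memory_allocated(device)
    mem_mb = torch.tensor([mem_bytes // (1024 * 1024)],
                          dtype=torch.int64, device=device)
    if dist.is_initialized():
        # Maximum over all 8 training GPUs.
        dist.all_reduce(mem_mb, op=dist.ReduceOp.MAX)
    return mem_mb.item()
```

This peak excludes the CUDA context and memory the allocator has cached but not handed out, which is why it is usually smaller than the figure `nvidia-smi` reports.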
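The following is a rough timing sketch in the spirit of `benchmark.py`, not the actual script, whose details may differ. The model call convention (`return_loss=False, **data`) follows the older mmdetection-style API and is an assumption; note that data loading happens outside the timed region, so only forwarding plus post-processing is measured.

```python
import time

import torch


def benchmark(model, data_loader, num_images=2000, warmup=50):
    """Average per-image inference time (seconds), excluding data loading."""
    model.eval()
    num_timed, total = 0.0, 0.0
    with torch.no_grad():
        for i, data in enumerate(data_loader):
            torch.cuda.synchronize()
            start = time.perf_counter()
            # Forward pass and post-processing; call signature is assumed.
            model(return_loss=False, **data)
            torch.cuda.synchronize()
            if i >= warmup:  # skip warm-up iterations
                total += time.perf_counter() - start
                num_timed += 1
            if num_timed >= num_images:
                break
    return total / num_timed
```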
- Please refer to SECOND for details.
- Please refer to PointPillars for details.
- Please refer to Part-A2 for details.
- Please refer to VoteNet for details.
- Please refer to Dynamic Voxelization for details.
- Please refer to MVXNet for details.
- Please refer to RegNet for details.
- We also support baseline models on the nuImages dataset. Please refer to nuImages for details.