This is the official code of LiDAR R-CNN: An Efficient and Universal 3D Object Detector. In this work, we present LiDAR R-CNN, a second-stage detector that can generally improve any existing 3D detector. We identify a common problem in point-based R-CNN approaches: the learned features ignore the size of the proposals. We propose several methods to remedy this. Evaluated on the Waymo Open Dataset (WOD) benchmarks, our method significantly outperforms the previous state-of-the-art.
Chinese introduction: https://zhuanlan.zhihu.com/p/359800738
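For intuition, the core issue is that plain per-point coordinates look the same for a small and a large proposal, so the network cannot tell the box size. Below is a minimal sketch of one remedy discussed in the paper, appending each point's offsets to the proposal's boundary; the function name and frame convention here are illustrative, not the repo's API:

```python
import numpy as np

def boundary_offset_features(points, proposal_size):
    """Augment canonical-frame points with offsets to the proposal's six faces.

    points:        (N, 3) xyz in the proposal's canonical (axis-aligned) frame
    proposal_size: (l, w, h) extent of the proposal box

    The distances to the box faces make the proposal size visible to a
    point-based network that is otherwise invariant to it.
    """
    half = np.asarray(proposal_size) / 2.0
    dist_to_max = half - points   # offsets to the +x, +y, +z faces
    dist_to_min = points + half   # offsets to the -x, -y, -z faces
    return np.concatenate([points, dist_to_max, dist_to_min], axis=1)  # (N, 9)
```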
- We provide training code for the multi-frame setting and report 3-frame results based on PointPillars proposals.
All the code has been tested in the following environment (a quick sanity check is sketched after the list):
- Linux (tested on Ubuntu 16.04)
- Python 3.6+
- PyTorch 1.5 or higher (tested on PyTorch 1.5, 1.6, and 1.7)
- CUDA 10.1
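Before building anything, it may help to confirm the PyTorch/CUDA setup with standard PyTorch calls:

```python
import torch

print(torch.__version__)          # expect 1.5, 1.6, or 1.7
print(torch.version.cuda)         # expect 10.1
print(torch.cuda.is_available())  # must be True to build and run the CUDA ops
```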
To install pybind11:
```bash
git clone git@github.com:pybind/pybind11.git
cd pybind11
mkdir build && cd build
cmake .. && make -j
sudo make install
```
To install requirements:
```bash
pip install -r requirements.txt
apt-get install ninja-build libeigen3-dev
```
Install the LiDAR_RCNN library:

```bash
python setup.py develop --user
```
Build the CUDA extensions:

```bash
# Rotated IoU
cd src/LiDAR_RCNN/ops/iou3d/
python setup.py build_ext --inplace
```
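For reference, this extension computes rotated-box IoU on the GPU. Here is a pure-Python sketch of the BEV (bird's-eye-view) overlap it accelerates, using shapely for the polygon intersection; this is conceptual only, not the extension's API, and the full 3D IoU additionally intersects the height ranges:

```python
import numpy as np
from shapely.geometry import Polygon

def bev_corners(cx, cy, l, w, yaw):
    # Four corners of a rectangle centered at (cx, cy), rotated by yaw.
    dx, dy = l / 2.0, w / 2.0
    corners = np.array([[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]])
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return corners @ rot.T + np.array([cx, cy])

def rotated_iou_bev(box_a, box_b):
    # box: (cx, cy, l, w, yaw)
    pa = Polygon(bev_corners(*box_a))
    pb = Polygon(bev_corners(*box_b))
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter + 1e-8)
```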
Please refer to the data processor to generate the proposal data.
After preparing the WOD data, you can train the vehicle-only model from the paper by running:

```bash
python -m torch.distributed.launch --nproc_per_node=4 tools/train.py --cfg config/lidar_rcnn.yaml --name lidar_rcnn
```
For all 3 classes in WOD:

```bash
python -m torch.distributed.launch --nproc_per_node=8 tools/train.py --cfg config/lidar_rcnn_all_cls.yaml --name lidar_rcnn_all
```
The models and logs will be saved to `work_dirs/outputs`.
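For context, `torch.distributed.launch` starts one process per GPU and passes each process a `--local_rank` argument. A minimal sketch of the typical setup inside an entry point such as `tools/train.py` (the repo's actual code may differ):

```python
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
args, _ = parser.parse_known_args()

# Bind this process to its GPU, then join the NCCL process group.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")
```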
NOTE: for multi-frame training, please set `MODEL.Frame = n` in the config.
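For example, a hypothetical way to switch a config to 3-frame input programmatically, assuming the config is plain YAML with a `MODEL.Frame` key as the note above suggests (editing the file by hand works just as well):

```python
import yaml

with open("config/lidar_rcnn.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["MODEL"]["Frame"] = 3  # use 3 input frames

# Note: safe_dump drops comments and key ordering from the original file.
with open("config/lidar_rcnn.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```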
To evaluate, run distributed testing with 4 GPUs:

```bash
python -m torch.distributed.launch --nproc_per_node=4 tools/test.py --cfg config/lidar_rcnn.yaml --checkpoint outputs/lidar_rcnn/checkpoint_lidar_rcnn_59.pth.tar
python tools/create_results.py --cfg config/lidar_rcnn.yaml
```
Note that you should keep `nGPUS` in the config equal to `nproc_per_node`. This will generate a `val.bin` file in `work_dir/results`. You can then create a submission to the Waymo server using the waymo-open-dataset code by following the instructions here.
Our model achieves the following performance on the Waymo Open Dataset Challenges (3D Detection):
| Proposals from | Class | Frames / Channels | 3D AP L1 Vehicle | 3D AP L1 Pedestrian | 3D AP L1 Cyclist |
|---|---|---|---|---|---|
| PointPillars | Vehicle | 1 / 1x | 75.6 | - | - |
| PointPillars | Vehicle | 1 / 2x | 75.6 | - | - |
| PointPillars | Vehicle | 3 / 2x | 77.8 | - | - |
| SST | Vehicle | 3 / 2x | 78.6 | - | - |
| PointPillars | 3 Class | 1 / 1x | 73.4 | 70.7 | 67.4 |
| PointPillars | 3 Class | 1 / 2x | 73.8 | 71.9 | 69.4 |
| Proposals from | Class | Frames / Channels | 3D AP L2 Vehicle | 3D AP L2 Pedestrian | 3D AP L2 Cyclist |
|---|---|---|---|---|---|
| PointPillars | Vehicle | 1 / 1x | 66.8 | - | - |
| PointPillars | Vehicle | 1 / 2x | 67.9 | - | - |
| PointPillars | Vehicle | 3 / 2x | 69.1 | - | - |
| SST | Vehicle | 3 / 2x | 69.9 | - | - |
| PointPillars | 3 Class | 1 / 1x | 64.8 | 62.4 | 64.8 |
| PointPillars | 3 Class | 1 / 2x | 65.1 | 63.5 | 66.8 |
Note: the proposals provided by PointPillars are detected on single-frame point clouds.
If you find our paper or repository useful, please consider citing:

```
@inproceedings{li2021lidar,
  title={LiDAR R-CNN: An Efficient and Universal 3D Object Detector},
  author={Li, Zhichao and Wang, Feng and Wang, Naiyan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
}
```
This project draws on the following codebases.