# PoseCNN-PyTorch: A PyTorch Implementation of the PoseCNN Framework for 6D Object Pose Estimation

[modified from the original repo]
### Introduction

This is the PyTorch-based implementation of PoseCNN, initially supplied by NVlabs.
PoseCNN is an end-to-end Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. arXiv, Project
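Concretely, the translation scheme above amounts to a pinhole back-projection: once the network localizes the object center (cx, cy) in the image and predicts its distance Tz, the remaining two components follow from the camera intrinsics. A minimal sketch (the intrinsics here are illustrative values, not those of any particular dataset):

```python
import numpy as np

def center_to_translation(cx, cy, Tz, fx, fy, px, py):
    """Back-project a predicted object center (cx, cy) and distance Tz
    into a 3D translation T = (Tx, Ty, Tz) via the pinhole camera model."""
    Tx = (cx - px) * Tz / fx
    Ty = (cy - py) * Tz / fy
    return np.array([Tx, Ty, Tz])

# Illustrative intrinsics: focal lengths (fx, fy) and principal point (px, py)
T = center_to_translation(cx=320.0, cy=240.0, Tz=0.8,
                          fx=600.0, fy=600.0, px=312.0, py=236.0)
print(T)  # object slightly right of and below the principal point
```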
Rotation regression in PoseCNN does not handle symmetric objects well; check PoseRBPF for a better solution for symmetric objects.
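The reason plain quaternion regression struggles with symmetry is that a symmetric object has several rotations that produce the same appearance, and an L2 loss on quaternions penalizes all but one of them. The PoseCNN paper's ShapeMatch-Loss sidesteps this by comparing transformed model points instead of quaternions; a simplified NumPy sketch of the idea (not the actual training loss implementation):

```python
import numpy as np

def quat_to_mat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def shapematch_loss(q_est, q_gt, points):
    """For each ground-truth-rotated model point, take the distance to the
    nearest estimated-rotated point. Two rotations related by an object
    symmetry then score identically (simplified ShapeMatch-style loss)."""
    P_est = points @ quat_to_mat(q_est).T
    P_gt = points @ quat_to_mat(q_gt).T
    d = np.linalg.norm(P_gt[:, None, :] - P_est[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# A square in the xy-plane is symmetric under a 90-degree z-rotation,
# so the loss is (near) zero even though the quaternions differ a lot.
square = np.array([[1, 1, 0], [1, -1, 0], [-1, -1, 0], [-1, 1, 0]], float)
identity = np.array([1.0, 0.0, 0.0, 0.0])
rot90_z = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
print(shapematch_loss(rot90_z, identity, square))  # ~0 for the symmetric square
```

A plain L2 loss between `rot90_z` and `identity` would be large even though both poses are visually indistinguishable for this shape.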
The code also supports pose refinement by matching the segmented 3D point cloud of an object to its SDF (signed distance function).
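The SDF refinement idea can be sketched in a few lines: transform the segmented points into the model frame and descend on their squared signed-distance values. A toy example with an analytic sphere SDF and translation-only refinement (the real code interpolates a precomputed SDF grid of the object mesh and optimizes the full pose):

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    """Analytic signed distance to a sphere centered at the origin."""
    return np.linalg.norm(p, axis=-1) - radius

def refine_translation(points, t0, lr=0.5, steps=100):
    """Gradient descent on a translation t so that the shifted points lie
    on the SDF zero level set (toy version of matching a segmented point
    cloud to an object SDF; rotation is kept fixed for brevity)."""
    t = t0.astype(float).copy()
    for _ in range(steps):
        q = points - t                     # points in the model frame
        d = sphere_sdf(q)                  # signed distance per point
        n = q / np.linalg.norm(q, axis=-1, keepdims=True)  # SDF gradients
        # gradient of 0.5 * mean(d^2) w.r.t. t is -mean(d * n)
        grad = -(d[:, None] * n).sum(axis=0) / len(points)
        t -= lr * grad
    return t

# Noiseless points sampled on a unit sphere shifted by the true translation.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
t_true = np.array([0.3, -0.2, 0.1])
cloud = dirs + t_true
t_est = refine_translation(cloud, t0=np.zeros(3))
print(t_est)  # converges toward t_true
```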
### License

PoseCNN-PyTorch is released under the NVIDIA Source Code License (refer to the LICENSE file for details).
### Citation

If you find the package useful in your research, please consider citing:

```
@inproceedings{xiang2018posecnn,
  author    = {Yu Xiang and Tanner Schmidt and Venkatraman Narayanan and Dieter Fox},
  title     = {{PoseCNN}: A Convolutional Neural Network for {6D} Object Pose Estimation in Cluttered Scenes},
  booktitle = {Robotics: Science and Systems (RSS)},
  year      = {2018}
}
```
### Requirements

- Ubuntu 20.04 or above
- PyTorch 1.11 or above
- CUDA 11.3 or above

Other requirements:

- fmt version 8.1.1
- assimp 4.1.0 (the default on Ubuntu 20.04 is 5.x); if assimp is not found, either create a symlink to it or add it to venv/site-packages/pyassimp
### Installation

- Install PyTorch.
- Install Eigen:
  ```Shell
  sudo apt install libeigen3-dev
  ```
- Install fmt (use release 8.1.1), which is needed to compile Sophus.
- Install Sophus from strasdat instead of the yuxng fork; the GitHub source code is here.
- Install the Python packages:
  ```Shell
  pip install -r requirement.txt
  ```
- Initialize the submodules in ycb_render:
  ```Shell
  git submodule update --init --recursive
  ```
- Compile the new layers introduced in PoseCNN under $ROOT/lib/layers:
  ```Shell
  cd $ROOT/lib/layers
  sudo python setup.py install
  ```
- Compile the Cython components:
  ```Shell
  cd $ROOT/lib/utils
  python setup.py build_ext --inplace
  ```
- Compile ycb_render in $ROOT/ycb_render:
  ```Shell
  cd $ROOT/ycb_render
  sudo python setup.py develop
  ```
### Download

- 3D models of the YCB objects used here (3G). Save under $ROOT/data, or use a symbolic link.
- Our pre-trained checkpoints here (4G). Save under $ROOT/data, or use a symbolic link.
- Our real-world images with pose annotations for 20 YCB objects, collected via robot interaction, here (53G). Check our ICRA 2020 paper for details.
### Running the demo

- Download the 3D models and our pre-trained checkpoints first.
- Run the following script:
  ```Shell
  ./experiments/scripts/demo.sh
  ```
### Training your own models with synthetic data

- Download background images and save them to $ROOT/data, or use symbolic links.
- Download the pretrained VGG16 weights here (528M) and put the weight file in $ROOT/data/checkpoints. If our pre-trained models have already been downloaded, the VGG16 checkpoint should already be in $ROOT/data/checkpoints.
- Training and testing for 20 YCB objects with synthetic data. Modify the configuration file to train on a subset of these objects.
  ```Shell
  cd $ROOT

  # multi-gpu training: use 1 GPU or 2 GPUs, since the batch size is set to 2
  ./experiments/scripts/ycb_object_train.sh

  # testing on synthetic data, $GPU_ID can be 0, 1, etc.
  ./experiments/scripts/ycb_object_test.sh $GPU_ID
  ```
### Training and testing on the YCB-Video dataset

- Download the YCB-Video dataset from here.
- Create a symlink for the YCB-Video dataset:
  ```Shell
  cd $ROOT/data/YCB_Video
  ln -s $ycb_data data
  ```
- Training and testing on the YCB-Video dataset:
  ```Shell
  cd $ROOT

  # multi-gpu training: use 1 GPU or 2 GPUs, since the batch size is set to 2
  ./experiments/scripts/ycb_video_train.sh

  # testing, $GPU_ID can be 0, 1, etc.
  ./experiments/scripts/ycb_video_test.sh $GPU_ID
  ```
### Training and testing on the DexYCB dataset

- Download the DexYCB dataset from here.
- Create a symlink for the DexYCB dataset:
  ```Shell
  cd $ROOT/data/DEX_YCB
  ln -s $dex_ycb_data data
  ```
- Training and testing on the DexYCB dataset:
  ```Shell
  cd $ROOT

  # multi-gpu training for the different splits: use 1 GPU or 2 GPUs,
  # since the batch size is set to 2
  ./experiments/scripts/dex_ycb_train_s0.sh
  ./experiments/scripts/dex_ycb_train_s1.sh
  ./experiments/scripts/dex_ycb_train_s2.sh
  ./experiments/scripts/dex_ycb_train_s3.sh

  # testing, $GPU_ID can be 0, 1, etc.
  # our trained models are in checkpoints.zip
  ./experiments/scripts/dex_ycb_test_s0.sh $GPU_ID $EPOCH
  ./experiments/scripts/dex_ycb_test_s1.sh $GPU_ID $EPOCH
  ./experiments/scripts/dex_ycb_test_s2.sh $GPU_ID $EPOCH
  ./experiments/scripts/dex_ycb_test_s3.sh $GPU_ID $EPOCH
  ```
### Running with ROS

- Python2 is needed for ROS.
- Make sure our pretrained checkpoints are downloaded.

```Shell
# start realsense
roslaunch realsense2_camera rs_aligned_depth.launch tf_prefix:=measured/camera

# start rviz
rosrun rviz rviz -d ./ros/posecnn.rviz

# run posecnn for detection only (20 objects), $GPU_ID can be 0, 1, etc.
./experiments/scripts/ros_ycb_object_test_detection.sh $GPU_ID

# run full posecnn (20 objects), $GPU_ID can be 0, 1, etc.
./experiments/scripts/ros_ycb_object_test.sh $GPU_ID
```
Our example: