Challenge of Automatic Plug-in Charging (APC) & Automatic Charging and Plug-in (ACP)
This repository introduces the data prerequisites used in our project, which focuses on 3D detection of the charging station and the socket/plug and is mainly based on PV-RCNN.
In this project, all point clouds were acquired with a PMD camera and its development kit.
There are many tools (online or offline) for labeling point clouds, such as basicfinder, supervise, and 3D BAT. We use the online tool supervise to label 3D point clouds, as shown below.
Inspired by KITTI, datasets for training and evaluation need to be established for detection of the charging station and the socket/plug, respectively. To keep the coordinate system consistent with KITTI, and to meet the other requirements that ensure our acquired point clouds can be fed into the target deep network, a set of tools was developed.
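As an illustration of that conversion step, here is a minimal sketch (not the exact tool used in this project) that maps points from a typical depth-camera frame (x right, y down, z forward) to the KITTI LiDAR convention (x forward, y left, z up) and writes them as a KITTI-style velodyne `.bin` file (float32 x, y, z, intensity); the file names and axis conventions are assumptions for illustration.

```python
import numpy as np

def camera_to_kitti_lidar(points_cam):
    """Map Nx3 points from a camera frame (x right, y down, z forward)
    to the KITTI LiDAR frame (x forward, y left, z up)."""
    x_cam, y_cam, z_cam = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([z_cam, -x_cam, -y_cam], axis=1)

def save_kitti_bin(points_xyz, out_path, intensity=None):
    """Write points in the KITTI velodyne layout: float32 [x, y, z, intensity]."""
    if intensity is None:
        intensity = np.zeros(points_xyz.shape[0], dtype=np.float32)
    cloud = np.hstack([points_xyz.astype(np.float32),
                       intensity.astype(np.float32).reshape(-1, 1)])
    cloud.tofile(out_path)

if __name__ == "__main__":
    # Hypothetical input: an Nx3 point cloud exported by the acquisition tool.
    pts_cam = np.load("frame_000001_camera.npy")   # assumed file name
    pts_lidar = camera_to_kitti_lidar(pts_cam)
    save_kitti_bin(pts_lidar, "000001.bin")        # KITTI-style frame name
```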
PV-RCNN is a state-of-the-art deep network that achieves high performance on many autonomous driving benchmarks, such as KITTI, so we employ this learning-based technique to tackle the challenge of Automatic Charging and Plug-in (ACP). Point clouds, the input data structure in our project, are the fundamental data source for 3D detection in PV-RCNN. PV-RCNN is implemented in OpenPCDet, which we modified for our purposes. We hope the challenge of ACP can benefit from learning-based methods.
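For reference, below is a condensed sketch of how a trained PV-RCNN checkpoint could be run on the converted `.bin` frames, modeled on OpenPCDet's demo script; the config path, data directory, and checkpoint name are placeholders, and the small dataset class is our own assumption rather than part of this repository.

```python
import glob
from pathlib import Path

import numpy as np
import torch

from pcdet.config import cfg, cfg_from_yaml_file
from pcdet.datasets import DatasetTemplate
from pcdet.models import build_network, load_data_to_gpu
from pcdet.utils import common_utils


class SingleFrameDataset(DatasetTemplate):
    """Feeds stand-alone KITTI-style .bin frames to the network."""

    def __init__(self, dataset_cfg, class_names, root_path, logger):
        super().__init__(dataset_cfg=dataset_cfg, class_names=class_names,
                         training=False, root_path=Path(root_path), logger=logger)
        self.sample_file_list = sorted(glob.glob(str(Path(root_path) / "*.bin")))

    def __len__(self):
        return len(self.sample_file_list)

    def __getitem__(self, index):
        # KITTI velodyne layout: float32 [x, y, z, intensity].
        points = np.fromfile(self.sample_file_list[index], dtype=np.float32).reshape(-1, 4)
        return self.prepare_data(data_dict={"points": points, "frame_id": index})


if __name__ == "__main__":
    logger = common_utils.create_logger()
    # Placeholder paths: point them at the modified config, the converted frames,
    # and the downloaded checkpoint.
    cfg_from_yaml_file("cfgs/kitti_models/pv_rcnn.yaml", cfg)
    dataset = SingleFrameDataset(cfg.DATA_CONFIG, cfg.CLASS_NAMES,
                                 root_path="data/charging_station", logger=logger)

    model = build_network(model_cfg=cfg.MODEL, num_class=len(cfg.CLASS_NAMES), dataset=dataset)
    model.load_params_from_file(filename="pv_rcnn_charging_station.pth",
                                logger=logger, to_cpu=True)
    model.cuda()
    model.eval()

    with torch.no_grad():
        for data_dict in dataset:
            batch = dataset.collate_batch([data_dict])
            load_data_to_gpu(batch)
            pred_dicts, _ = model(batch)
            # Each prediction dict holds 'pred_boxes', 'pred_scores', 'pred_labels'.
            print(pred_dicts[0]["pred_boxes"].shape, pred_dicts[0]["pred_scores"])
```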
For the charging station:
A dataset of the charging station, consisting of training data (~1000 samples, 480 MB) and evaluation data (~100 samples, 53 MB).
Download model (150 MB), trained on ~1000 samples for 250 epochs.
Detection Result:
For the socket/plug:
A dataset of the socket/plug, consisting of training data (~1000 samples, 254 MB) and evaluation data (~100 samples, 48 MB).
Download model (150 MB), trained on ~1000 samples for 250 epochs.
Detection Result:
Thanks to the UR robot, scans from multiple acquisition poses can be obtained and integrated to reconstruct a complete 3D environment, followed by feature-based strategies to identify the position and orientation of the pin. For more details, please refer to the papers linked below.
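As a rough sketch of the multi-pose integration, the following merges several scans into the robot base frame, assuming each capture comes with a 4x4 camera-to-base transform (e.g. from the UR kinematics combined with a hand-eye calibration); Open3D, the file names, and the transform source are illustrative choices, not necessarily the project's actual pipeline.

```python
import numpy as np
import open3d as o3d

def merge_scans(ply_paths, cam_to_base_transforms, voxel_size=0.003):
    """Transform each scan into the robot base frame and merge them."""
    merged = o3d.geometry.PointCloud()
    for path, T in zip(ply_paths, cam_to_base_transforms):
        scan = o3d.io.read_point_cloud(path)
        scan.transform(T)        # 4x4 homogeneous camera-to-base transform
        merged += scan
    # Voxel downsampling thins out duplicated points in overlapping regions.
    return merged.voxel_down_sample(voxel_size)

if __name__ == "__main__":
    # Hypothetical inputs: three scans and the corresponding acquisition poses.
    paths = [f"scan_{i}.ply" for i in range(3)]
    poses = [np.load(f"pose_{i}.npy") for i in range(3)]   # each a 4x4 matrix
    environment = merge_scans(paths, poses)
    o3d.io.write_point_cloud("environment.ply", environment)
```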
If you find this useful, please consider citing us:
@article{zhou2022learning,
title={Learning-based object detection and localization for a mobile robot manipulator in SME production},
author={Zhou, Zhengxue and Li, Leihui and F{\"u}rsterling, Alexander and Durocher, Hjalte Joshua and Mouridsen, Jesper and Zhang, Xuping},
journal={Robotics and Computer-Integrated Manufacturing},
volume={73},
pages={102229},
year={2022},
publisher={Elsevier}
}
@inproceedings{zhou2021deep,
title={Deep Learning on 3D Object Detection for Automatic Plug-in Charging Using a Mobile Manipulator},
author={Zhou, Zhengxue and Li, Leihui and Wang, Riwei and Zhang, Xuping},
booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
pages={4148--4154},
year={2021},
organization={IEEE}
}
This project is maintained by @Leihui Li and @Zhengxue Zhou. Please feel free to contact us if you have any problems.