# CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding (CVPR'22)
If you find our work, this repository, or the pretrained models useful, please consider giving a star ⭐ and citing our paper:
```bibtex
@InProceedings{Afham_2022_CVPR,
    author    = {Afham, Mohamed and Dissanayake, Isuru and Dissanayake, Dinithi and Dharmasiri, Amaya and Thilakarathna, Kanchana and Rodrigo, Ranga},
    title     = {CrossPoint: Self-Supervised Cross-Modal Contrastive Learning for 3D Point Cloud Understanding},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {9902--9912}
}
```
- (Mar 25, 2023)
- (Mar 2, 2022) Paper accepted at CVPR 2022 🎉
- (Mar 2, 2022) Training and evaluation code for CrossPoint, along with pretrained models, is released.
Refer to `requirements.txt` for the required packages.
CrossPoint pretrained models with the DGCNN feature extractor are available here.
The datasets are available here. Run the commands below to download all the datasets (ShapeNetRender, ModelNet40, ScanObjectNN, ShapeNetPart) needed to reproduce the results.

```shell
cd data
source download_data.sh
```
Refer to `scripts/script.sh` for the commands to train CrossPoint.
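CrossPoint is trained with a cross-modal contrastive (NT-Xent-style) objective that pulls each point cloud's embedding toward the embedding of its rendered image while pushing it away from the other samples in the batch. A dependency-free sketch of that loss, assuming the two embedding batches are already computed and row-aligned (the function name, temperature value, and batch layout are illustrative, not the repository's API):

```python
import numpy as np

def cross_modal_nt_xent(z_pts, z_img, temperature=0.1):
    """NT-Xent-style loss between a batch of point-cloud embeddings and the
    row-aligned batch of image embeddings. Row i of each matrix is a
    positive pair; all other rows serve as negatives. Illustrative sketch."""
    # L2-normalise so the dot product is cosine similarity.
    z_pts = z_pts / np.linalg.norm(z_pts, axis=1, keepdims=True)
    z_img = z_img / np.linalg.norm(z_img, axis=1, keepdims=True)
    logits = z_pts @ z_img.T / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal: point cloud i matches image i.
    return -np.mean(np.diag(log_prob))
```

When the two modalities agree (matched rows are similar), the loss is small; shuffled or unrelated pairings drive it up, which is what the gradient exploits during pretraining.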
Run the `eval_ssl.ipynb` notebook to perform linear SVM object classification on both the ModelNet40 and ScanObjectNN datasets.
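The linear-SVM protocol freezes the pretrained encoder, extracts a feature vector per shape, and fits a linear classifier on top. As a rough sketch of that step, here is a dependency-free one-vs-rest linear SVM trained by sub-gradient descent on the hinge loss (in practice one would use scikit-learn's `LinearSVC` on the extracted features; the function names and hyperparameters below are illustrative, not the notebook's code):

```python
import numpy as np

def train_linear_svm(feats, labels, n_classes, epochs=200, lr=0.1, C=1.0):
    """One-vs-rest linear SVM: minimise 0.5*||w||^2 + C/n * sum(hinge)
    per class by sub-gradient descent. `feats` are frozen encoder features."""
    n, d = feats.shape
    W = np.zeros((n_classes, d))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        for c in range(n_classes):
            y = np.where(labels == c, 1.0, -1.0)       # one-vs-rest targets
            margin = y * (feats @ W[c] + b[c])
            mask = margin < 1                          # margin-violating samples
            grad_w = W[c] - C * (y[mask, None] * feats[mask]).sum(0) / n
            grad_b = -C * y[mask].sum() / n
            W[c] -= lr * grad_w
            b[c] -= lr * grad_b
    return W, b

def predict(feats, W, b):
    # Assign each sample to the class with the largest decision value.
    return np.argmax(feats @ W.T + b, axis=1)
```

Because the encoder stays frozen, classification accuracy under this protocol directly measures the quality of the self-supervised representation.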
Refer to `scripts/fsl_script.sh` for the commands to perform few-shot object classification.
Refer to `scripts/script.sh` for the fine-tuning experiment for part segmentation on the ShapeNetPart dataset.
Our code borrows heavily from the DGCNN repository. We thank the authors of DGCNN for releasing their code. If you use our model, please consider citing them as well.