Intermediary-guided Bidirectional Spatial-Temporal Aggregation Network for Video-based Visible-Infrared Person Re-Identification
We use PyTorch >= 1.8 and a 24GB RTX 3090 GPU for training and evaluation.
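Before training, you can optionally confirm the environment with a short PyTorch check. This snippet is only an illustration and is not part of the repository:

```python
# Optional sanity check for the assumed setup (PyTorch >= 1.8, a 24GB GPU).
import torch

print("torch:", torch.__version__)                  # expect >= 1.8
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    mem_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0 memory: {mem_gb:.1f} GB")         # roughly 24 GB on an RTX 3090
```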
mkdir data_original
mkdir data_anaglyph
There are many ways to generate anaglyph images; you can also use the code we provide (main_VCM.py).
Note that the directory organization, file names, and storage format of the anaglyph data should match those of the original data.
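If you prefer to generate the anaglyph data yourself, the following is a minimal illustrative sketch that mirrors data_original into data_anaglyph with the same relative paths and file names. The anaglyph recipe shown here (red channel from the original frame, green/blue channels from a horizontally shifted copy) is only one common approach and may differ from the one implemented in main_VCM.py:

```python
# Hypothetical sketch: mirror data_original into data_anaglyph.
# The anaglyph recipe (keep R from the original, take G/B from a shifted copy)
# is an assumption for illustration only; main_VCM.py may use a different one.
import os
import numpy as np
from PIL import Image

SRC, DST, SHIFT = "data_original", "data_anaglyph", 8  # SHIFT in pixels (assumed)

for root, _, files in os.walk(SRC):
    for name in files:
        if not name.lower().endswith((".jpg", ".png", ".bmp")):
            continue
        src_path = os.path.join(root, name)
        dst_path = os.path.join(DST, os.path.relpath(src_path, SRC))
        os.makedirs(os.path.dirname(dst_path), exist_ok=True)

        rgb = np.asarray(Image.open(src_path).convert("RGB"))
        shifted = np.roll(rgb, SHIFT, axis=1)       # fake "second view" of the frame
        anaglyph = rgb.copy()
        anaglyph[..., 1:] = shifted[..., 1:]        # keep R, take G/B from shifted copy
        Image.fromarray(anaglyph).save(dst_path)    # same relative path and file name
```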
data
├── data_original
│   ├──
│   ├──
│   ├──
│   └── ...
└── data_anaglyph
    ├──
    ├──
    ├──
    └── ...
python train.py
Later, we will upload our trained model (download), so you can load it directly without training.
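Once the checkpoint is released, loading it for evaluation might look like the sketch below. The file name "iban_vcm.pth" and the build_model constructor are placeholders for the names actually used in this repository:

```python
import torch

# "iban_vcm.pth" is a placeholder file name; use the checkpoint released with this repo.
checkpoint = torch.load("iban_vcm.pth", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)  # unwrap if saved inside a dict

# Build the network exactly as train.py does, then restore the weights.
# (build_model is a stand-in for the repository's own model constructor.)
model = build_model()
model.load_state_dict(state_dict)
model.eval()  # switch to inference mode for evaluation
```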
If you have any questions, please feel free to contact me at [email protected].
@ARTICLE{10047982,
author={Li, Huafeng and Liu, Minghui and Hu, Zhanxuan and Nie, Feiping and Yu, Zhengtao},
journal={IEEE Transactions on Circuits and Systems for Video Technology},
title={Intermediary-guided Bidirectional Spatial-Temporal Aggregation Network for Video-based Visible-Infrared Person Re-Identification},
year={2023},
volume={},
number={},
pages={1-1},
doi={10.1109/TCSVT.2023.3246091}}