Code (PyTorch) for 'Unsupervised Domain Adaptation without Source Data by Casting a BAIT' on VisDA. For Office-Home and Office-31, use a learning rate 10 times larger.
TL;DR: We extend MCD (Maximum Classifier Discrepancy) to source-free domain adaptation.
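For intuition, below is a minimal sketch of the MCD-style bi-classifier discrepancy idea this builds on: a second classifier head is pushed to disagree with the source-trained head on target samples, and the feature extractor is then updated to pull the two predictions back together, without any source data. Function names, the L1 discrepancy, and the update order here are illustrative assumptions, not the exact BAIT objective from the paper.

```python
import torch
import torch.nn.functional as F

def l1_discrepancy(p1, p2):
    """Mean absolute difference between two softmax outputs."""
    return (p1 - p2).abs().mean()

def adapt_batch(feat_net, clf_a, clf_b, x_t, opt_clf, opt_feat):
    """One MCD-style adaptation step on an unlabeled target batch x_t.

    Step A trains the second head clf_b to disagree with the anchor head
    clf_a; step B updates the feature extractor to reduce that disagreement.
    Assumes clf_a's parameters are frozen (requires_grad=False).
    """
    # Step A: enlarge the disagreement, updating only clf_b.
    with torch.no_grad():
        f = feat_net(x_t)                   # features are fixed in this step
        p_a = F.softmax(clf_a(f), dim=1)    # anchor predictions, treated as constants
    p_b = F.softmax(clf_b(f), dim=1)
    loss_clf = -l1_discrepancy(p_a, p_b)    # negative sign: ascend on the discrepancy
    opt_clf.zero_grad()
    loss_clf.backward()
    opt_clf.step()

    # Step B: shrink the disagreement, updating only the feature extractor.
    f = feat_net(x_t)
    p_a = F.softmax(clf_a(f), dim=1)
    p_b = F.softmax(clf_b(f), dim=1)
    loss_feat = l1_discrepancy(p_a, p_b)
    opt_feat.zero_grad()
    loss_feat.backward()
    opt_feat.step()
```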
You need to download the VisDA dataset.
The code uses PyTorch 1.3.1 and torchvision 0.4.2 (Python 3.7.6). Experiments were run on a single GPU (RTX 6000).
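A matching environment can be set up, for example, with pip (adjust for your CUDA setup):

pip install torch==1.3.1 torchvision==0.4.2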
- First, train the model on the source data:
python train_source.py
- Then, adapt the source model to the target domain using only the unlabeled target data (a rough sketch of this stage follows after the commands):
python train_target.py
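To make the two commands concrete, here is a rough sketch of how the second stage could be wired around the `adapt_batch` step sketched above. The checkpoint name, key layout, data path, backbone choice, and hyperparameters are placeholders and not this repository's actual interface.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 12  # VisDA-C has 12 classes

# Backbone and anchor head (illustrative; the real definitions live in this repo's code).
backbone = models.resnet101(pretrained=False)
feat_dim = backbone.fc.in_features
backbone.fc = nn.Identity()                  # turn the ResNet into a feature extractor
feat_net = backbone
clf_a = nn.Linear(feat_dim, NUM_CLASSES)     # anchor head trained by train_source.py

# Load the stage-1 checkpoint; the file name and key layout are placeholders.
state = torch.load("source_model.pth", map_location="cpu")
feat_net.load_state_dict(state["feat"])
clf_a.load_state_dict(state["clf"])

# Second ("bait") head; starting it as a copy of the anchor head is one simple choice.
clf_b = copy.deepcopy(clf_a)

# Keep the anchor head frozen; gradients still flow through it to the features.
for p in clf_a.parameters():
    p.requires_grad_(False)

opt_clf = torch.optim.SGD(clf_b.parameters(), lr=1e-3, momentum=0.9)   # placeholder lr
opt_feat = torch.optim.SGD(feat_net.parameters(), lr=1e-3, momentum=0.9)

# Unlabeled target data; path and transforms are placeholders.
target_dataset = datasets.ImageFolder(
    "data/visda/validation",
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
target_loader = DataLoader(target_dataset, batch_size=64, shuffle=True)

for epoch in range(15):                       # placeholder schedule
    for x_t, _ in target_loader:              # folder labels are never used for adaptation
        adapt_batch(feat_net, clf_a, clf_b, x_t, opt_clf, opt_feat)
```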
The SHOT results are taken from the ICML camera-ready version.
The code is based on SHOT (ICML 2020), which is also a source-free method.