# LCCS

This repository contains code demonstrating the method in our IJCAI 2022 paper *Few-Shot Adaptation of Pre-Trained Networks for Domain Shift*. The arXiv version contains both the main manuscript and the appendix.

## Setting up

### Prerequisites

This code makes use of Dassl.pytorch. Please follow the instructions at https://github.com/KaiyangZhou/Dassl.pytorch#installation to install `dassl`.
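For convenience, the commands below sketch a typical installation following those instructions; refer to the Dassl.pytorch README for the authoritative, up-to-date steps.

```bash
# Sketch of a typical Dassl.pytorch installation; see the linked
# instructions for the authoritative procedure.
git clone https://github.com/KaiyangZhou/Dassl.pytorch.git
cd Dassl.pytorch

# Install the dependencies listed by the Dassl repository
pip install -r requirements.txt

# Install dassl in development mode so it is importable
python setup.py develop
```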

We used the NVIDIA container image for PyTorch, release 20.12, to run our experiments.
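To reproduce our environment, the container can be pulled from the NGC registry. The command below is a minimal sketch: the `nvcr.io/nvidia/pytorch:20.12-py3` tag corresponds to release 20.12, and the mount path is a placeholder to adapt to your setup.

```bash
# Minimal sketch: launch the PyTorch NGC container, release 20.12.
# Adjust the mounted path (and GPU flags) to your environment.
docker run --gpus all -it --rm \
    -v "$(pwd)":/workspace \
    nvcr.io/nvidia/pytorch:20.12-py3
```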

### Dataset

This demonstration runs on PACS. Please download the dataset and save it in `lccs/imcls/data/`.
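For example, assuming PACS has been downloaded and extracted to a placeholder location `/path/to/pacs`, placing it could look like this (the subfolder layout should follow what the Dassl PACS loader expects):

```bash
# Illustrative only: /path/to/pacs is a placeholder for wherever
# you extracted the dataset.
mkdir -p lccs/imcls/data
mv /path/to/pacs lccs/imcls/data/
```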

## Training and Evaluation

### 1. Train source models

From `lccs/imcls/scripts/`, run `./run_source.sh`. The source models will be saved in `lccs/imcls/output_source_models/`.

### 2. Adapt source models with LCCS

From `lccs/imcls/scripts/`, run `./run_lccs.sh`. The outputs after adaptation will be saved in `lccs/imcls/output_results/`.

### 3. Summarize model performance

From `lccs/imcls/results_scripts/`, first run `./collect_results.sh` and then `./consolidate_results.sh`.
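Putting the three steps together, a full run from the repository root looks like the following (the scripts and output directories are exactly those described above):

```bash
# Step 1: train source models
cd lccs/imcls/scripts
./run_source.sh            # saved to lccs/imcls/output_source_models/

# Step 2: adapt the source models with LCCS
./run_lccs.sh              # saved to lccs/imcls/output_results/

# Step 3: summarize model performance
cd ../results_scripts
./collect_results.sh
./consolidate_results.sh
```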

## Citation

```bibtex
@inproceedings{zhang2022lccs,
  title     = {Few-Shot Adaptation of Pre-Trained Networks for Domain Shift},
  author    = {Zhang, Wenyu and Shen, Li and Zhang, Wanyue and Foo, Chuan-Sheng},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on
               Artificial Intelligence, {IJCAI-22}},
  publisher = {International Joint Conferences on Artificial Intelligence Organization},
  editor    = {Luc De Raedt},
  pages     = {1665--1671},
  year      = {2022},
  month     = {7},
  note      = {Main Track},
  doi       = {10.24963/ijcai.2022/232},
  url       = {https://doi.org/10.24963/ijcai.2022/232},
}
```

## Acknowledgements

Our implementation is based on the MixStyle repository. We thank the MixStyle authors for their implementation.