
# Attentive Temporal Consistent Network (ATCoN)

This repository contains the code for the ECCV 2022 paper "Source-free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition". It builds on the MFNet repository, and we thank the authors of MFNet for their excellent work.


## Prerequisites

This repository is built with PyTorch. The following packages are required for training and testing:

- PyTorch (1.8.0 recommended)
- opencv-python (installable via pip)

Note that compatibility with newer PyTorch versions is NOT guaranteed.
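For reference, a minimal environment setup might look like the following sketch (the exact PyTorch build, e.g. the CUDA variant, depends on your system):

```bash
# Install the recommended PyTorch version and the OpenCV Python bindings
pip install torch==1.8.0 opencv-python
```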

## Project Details and Dataset Download

Please visit our project page for details of this work and to download the dataset.

## Training and Testing

Training involves two steps: (a) training the source model, and (b) training the target model. Note that the source model must be saved properly before target training begins.

To train on the source dataset, simply run:

```bash
python train_source.py
```

To train on the target dataset, simply run:

```bash
python train_target.py
```

Alternatively, you may train the target model with SHOT (Source HypOthesis Transfer) by running:

```bash
python train_target.py --method SHOT
```

To test the trained model on the target domain, change to the `test/` folder and run:

```bash
python evaluate_target.py
```

You may additionally test the source model on the source domain (to get a glimpse of how the source model performs) by running:

```bash
python evaluate_source.py
```
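Putting the steps together, an end-to-end run might look like the following sketch (any script arguments beyond `--method` are omitted here; check each script for its actual options):

```bash
# Step 1: train the source model (it must be saved before target training)
python train_source.py

# Step 2: adapt the model to the target domain (optionally: --method SHOT)
python train_target.py

# Step 3: evaluate the adapted model from the test/ folder
cd test
python evaluate_target.py
```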

### Notes on training and testing

- The pretrained model from which we start training is now uploaded to Google Drive.
- Notes on the `exps/` folder can be found in the README file in that folder.
- We provide a demo weight here; place it in the `exps/` folder (see the sketch after this list).
- We also provide the training and testing log files (in the `exps/` and `test/` folders, respectively) so that you can check your own training process. Do note that the file names may differ slightly.
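For example, a downloaded demo weight could be placed as follows (the checkpoint filename below is a hypothetical placeholder; use the actual name of the downloaded file):

```bash
# atcon_demo.pth is a placeholder name for the downloaded demo weight
mkdir -p exps
mv atcon_demo.pth exps/
```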

If you find this work interesting and useful, please cite our paper:

```bibtex
@inproceedings{xu2022sourcefree,
  title={Source-Free Video Domain Adaptation by Learning Temporal Consistency for Action Recognition},
  author={Xu, Yuecong and Yang, Jianfei and Cao, Haozhi and Wu, Keyu and Wu, Min and Chen, Zhenghua},
  booktitle={Computer Vision -- ECCV 2022},
  year={2022},
  publisher={Springer Nature Switzerland},
  address={Cham},
  pages={147--164},
}
```