This repository is the official implementation of *Whitening Consistently Improves Self-Supervised Learning*.
[arXiv](https://arxiv.org/abs/2408.07519)
If you use our code or results, please cite our paper and consider giving this repo a ⭐:
```bibtex
@misc{kalapos2024whiteningconsistentlyimproves,
      title={Whitening Consistently Improves Self-Supervised Learning},
      author={András Kalapos and Bálint Gyires-Tóth},
      year={2024},
      eprint={2408.07519},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.07519},
}
```
For each SSL method, we provide a training script in the `pretrain` folder. For example, to run BYOL pretraining:

```bash
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=. python pretrain/train_byol.py
```
We recommend using the provided Docker container to run the code.
- Create a keypair, copy the public key to the root of this repo, and name it `cm-docker.pub` (see the example below).
- Run `make ssh`.
- Connect via SSH on port 2222: `ssh root@<hostname> -i <private_key_path> -p 2222`
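If you don't have a keypair yet, one can be generated with `ssh-keygen`. A minimal example follows; the `ed25519` key type and the `cm-docker` filename are our choices here, only the `cm-docker.pub` name in the repo root is required by the steps above:

```bash
# Generate a passphrase-less keypair: cm-docker (private) and cm-docker.pub (public)
ssh-keygen -t ed25519 -f cm-docker -N ""
# Copy the public key to the root of this repository
cp cm-docker.pub /path/to/this/repo/
```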
To run the container without starting an SSH server, run `make run`.
To customize the Docker build and run, edit the `Makefile` or the `Dockerfile`.
> **Warning**
> `make ssh` and `make run` start the container with the `--rm` flag! Only the contents of `/workspace` persist if the container is stopped (via a simple volume mount)!
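For reference, a rough sketch of what such a container start amounts to; the actual command is defined in the Makefile, and the image name below is a placeholder:

```bash
# Hypothetical equivalent of `make run`; check the Makefile for the real command.
# --rm deletes the container on exit, so only the /workspace volume mount survives.
docker run --rm -it -v "$(pwd)":/workspace <image-name> bash
```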
Install the requirements with `pip install -r requirements.txt`.
To set the path for the datasets, edit the `data_path=...` line in the Makefile.
CIFAR-10 and STL-10 are downloaded automatically; to set up TinyImageNet, we provide a script: `utils/tiny_imagenet_setup.py`.
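For example, assuming the script is run from the repo root and handles the download and folder restructuring itself (check the script for its exact arguments):

```bash
PYTHONPATH=. python utils/tiny_imagenet_setup.py
```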
Whitening is implemented based on [huangleiBuaa/IterNorm](https://github.com/huangleiBuaa/IterNorm).
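For intuition, here is a minimal sketch of IterNorm-style whitening via Newton-Schulz iterations. This is our own illustrative code (the name `iterative_whitening` and the defaults `num_iters=5`, `eps=1e-5` are assumptions), not the repository's implementation:

```python
import torch

def iterative_whitening(x: torch.Tensor, num_iters: int = 5, eps: float = 1e-5) -> torch.Tensor:
    """Whiten an (N, D) batch of features: decorrelated dimensions, unit variance."""
    n, d = x.shape
    x = x - x.mean(dim=0, keepdim=True)                             # center the features
    cov = (x.T @ x) / (n - 1)                                       # (D, D) covariance matrix
    cov = cov + eps * torch.eye(d, device=x.device, dtype=x.dtype)  # numerical stability
    tr = cov.diagonal().sum()
    sigma_n = cov / tr                                # trace-normalize so the iteration converges
    p = torch.eye(d, device=x.device, dtype=x.dtype)
    for _ in range(num_iters):
        p = 0.5 * (3.0 * p - p @ p @ p @ sigma_n)     # Newton-Schulz step: p -> sigma_n^(-1/2)
    return x @ (p / tr.sqrt())                        # p / sqrt(tr) equals cov^(-1/2)
```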
Our implementation is based on the [Lightly](https://github.com/lightly-ai/lightly) library.