This repo reproduces SCARF (Self-Supervised Contrastive Learning Using Random Feature Corruption), a framework for self-supervised representation learning on tabular data.
Authors: Dara Bahri, Heinrich Jiang, Yi Tay, Donald Metzler
Reference: Bahri, Dara, et al. "Scarf: Self-supervised contrastive learning using random feature corruption." arXiv preprint arXiv:2106.15147 (2021).
Original paper: https://research.google/pubs/scarf-self-supervised-contrastive-learning-using-random-feature-corruption/
Original repo: --
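SCARF builds two views of each example: the original feature vector and a corrupted copy in which a random subset of features is replaced with values drawn from their empirical marginal distributions; an encoder is then trained with a contrastive (InfoNCE) loss so the two views embed close together. The NumPy sketch below illustrates only the corruption step. The function name, default corruption rate, and the within-batch resampling used to approximate the marginals are illustrative assumptions, not this repo's implementation.

```python
import numpy as np


def corrupt_features(x_batch, corruption_rate=0.6, rng=None):
    """Return a SCARF-style corrupted view of `x_batch` (shape: [batch, n_features]).

    Each entry is replaced, independently and with probability `corruption_rate`,
    by a value drawn from that feature's empirical marginal distribution,
    approximated here by resampling the feature from another row of the batch.
    """
    rng = rng or np.random.default_rng()
    batch_size, n_features = x_batch.shape
    # Entries selected for corruption.
    mask = rng.random((batch_size, n_features)) < corruption_rate
    # For every entry, pick a random donor row to borrow the feature value from.
    donor_rows = rng.integers(0, batch_size, size=(batch_size, n_features))
    donor_values = x_batch[donor_rows, np.arange(n_features)]
    return np.where(mask, donor_values, x_batch)
```

The corrupted view returned by this function would be paired with the original batch and fed through the encoder for the contrastive objective.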
Clone this repository, create a new Conda environment, and install the package:

```bash
git clone https://github.com/chris-santiago/scarf.git
cd scarf
conda env create -f environment.yml
pip install -e .
```
This project uses Hydra to manage configuration and CLI arguments. See `scarf/conf` for full configuration details.
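For orientation, a Hydra entrypoint composes the YAML files under the config directory and accepts `key=value` overrides on the command line, which is what the `task train -- experiment=income` example further below relies on. The following is a minimal sketch of that pattern, assuming a root config named `config.yaml` under `conf`; it is not this repo's actual training script.

```python
import hydra
from omegaconf import DictConfig, OmegaConf


# Minimal Hydra entrypoint sketch: composes conf/config.yaml and applies any
# command-line overrides (e.g. `experiment=income`) before training starts.
@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))  # inspect the fully composed configuration


if __name__ == "__main__":
    main()
```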
This project uses Task as a task runner. Though the underlying Python commands can be executed without it, we recommend installing Task for ease of use. Details are located in `Taskfile.yml`.
```bash
> task -l
task: Available tasks for this project:
* check-config:    Check Hydra configuration
* compare:         Compare using linear baselines
* train:           Train a model
* wandb:           Login to Weights & Biases
```
Example: train a model for the adult-income dataset experiment. The `--` forwards CLI arguments to Hydra:

```bash
task train -- experiment=income
```
This project was built using this cookiecutter and is set up to use PDM for dependency management, though PDM is not required for package installation.
This project is set up to log experiment results with Weights & Biases. It expects an API key within a `.env` file in the root directory:

```bash
WANDB_KEY=<my-super-secret-key>
```
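As an illustration of how that key might be consumed at runtime, the sketch below loads `WANDB_KEY` from the `.env` file with `python-dotenv` and passes it to `wandb.login`. This is an assumed pattern, not necessarily how this repo authenticates.

```python
import os

from dotenv import load_dotenv  # python-dotenv
import wandb

# Read WANDB_KEY from the .env file in the project root and authenticate.
load_dotenv()
wandb.login(key=os.environ["WANDB_KEY"])
```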
Users can configure different logger(s) within the `conf/trainer/default.yaml` file.
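As a rough illustration, a logger entry in a Hydra config is typically a `_target_` block that gets built with `hydra.utils.instantiate`; the class and fields below (a PyTorch Lightning `WandbLogger` with a hypothetical `project` name) are assumptions for the sketch, not the actual contents of `conf/trainer/default.yaml`.

```python
from hydra.utils import instantiate
from omegaconf import OmegaConf

# Hypothetical excerpt mirroring a logger entry in a trainer config.
logger_cfg = OmegaConf.create(
    {"_target_": "pytorch_lightning.loggers.WandbLogger", "project": "scarf"}
)
logger = instantiate(logger_cfg)  # builds the logger class named in `_target_`
```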