PyTorch implementation of the Sharpness-Aware Self-Distillation attack (SASD).
.
├── imagenet
├── NIPS17
├── results
├── src
│ ├── eval_ensemble.py
│ ├── eval_single.py
│ ├── finetune.py
│ ├── GN
│ ├── rap_baseline.py
│ ├── rap_ensemble_baseline.py
│ ├── SASD.py
│ ├── torch_nets
│ └── utils
│ ├── args.py
│ ├── attack_methods.py
│ ├── image_loader.py
│ ├── image_process.py
│ ├── model_loader.py
│ ├── SAM.py
│ ├── scale_weight.py
│ └── utils.py
└── tf2torch_models
Datasets and model checkpoints are available in NIPS17, ILSVRC2012, and tf_to_pytorch_model.
Checkpoints for the SASD models can be downloaded here: link.
pip install -r requirements.txt
Please add the project root directory to PYTHONPATH before running SASD:
cd SASD
export PYTHONPATH="$PYTHONPATH:$PWD"
Once the ILSVRC2012 and NIPS17 datasets are available, you can run SASD and check its performance by following the argparse options in src/utils/args.py.
Here is a sample command for running Sharpness-Aware Self-Distillation (SASD):
python src/SASD.py \
-m resnet50 \
-b 10 \
-e 2 \
-t 1 \
-w 20 \
--sharpness_aware \
-o 1 \
-l 0.05
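The --sharpness_aware flag enables the SAM-style two-step update that gives the method its name. The snippet below is a minimal sketch of that ascent-then-descent optimizer step (following the settings referenced from the SAM repository), not the code in src/utils/SAM.py; the class layout, the rho default, and the toy quadratic loss in the usage example are assumptions for illustration.

```python
import torch

class SAM(torch.optim.Optimizer):
    """Minimal SAM sketch: step to the local worst case within an
    L2 ball of radius rho, then update from the gradients taken there."""

    def __init__(self, params, base_optimizer_cls, rho=0.05, **kwargs):
        defaults = dict(rho=rho, **kwargs)
        super().__init__(params, defaults)
        # Base optimizer shares the same parameter groups.
        self.base_optimizer = base_optimizer_cls(self.param_groups, **kwargs)

    @torch.no_grad()
    def first_step(self):
        # Global gradient norm across all parameter groups.
        grad_norm = torch.norm(torch.stack([
            p.grad.norm(p=2)
            for group in self.param_groups
            for p in group["params"] if p.grad is not None]), p=2)
        for group in self.param_groups:
            scale = group["rho"] / (grad_norm + 1e-12)
            for p in group["params"]:
                if p.grad is None:
                    continue
                e_w = p.grad * scale
                p.add_(e_w)                    # ascend to the worst case
                self.state[p]["e_w"] = e_w

    @torch.no_grad()
    def second_step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                p.sub_(self.state[p]["e_w"])   # restore original weights
        self.base_optimizer.step()             # descend with worst-case grads

# Usage on a toy quadratic loss f(w) = w^2:
w = torch.tensor([2.0], requires_grad=True)
opt = SAM([w], torch.optim.SGD, rho=0.05, lr=0.1)
for _ in range(50):
    (w ** 2).sum().backward()
    opt.first_step()
    opt.zero_grad()
    (w ** 2).sum().backward()  # second forward/backward at the perturbed point
    opt.second_step()
    opt.zero_grad()
```

Note the two forward/backward passes per iteration; this is inherent to SAM and roughly doubles the cost of each fine-tuning step.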
This project makes use of the following third-party projects:
- SAM: We referenced the SAM optimizer settings from this code repository.
- Targeted Transfer: We referenced the implementation of generating targeted adversarial perturbations from this code repository.
We used the methods from the following repository when comparing against the baselines.
We'd like to express our gratitude to the authors of these projects for their work, which greatly facilitated the development of this project.