This is the official repo for ["Dropout Attacks"](https://www.computer.org/csdl/proceedings-article/sp/2024/313000a026/1RjEa2qP0fm) by Andrew Yuan, Alina Oprea, and Cheng Tan.

## Table of Contents
- [Installing](#installing)
- [Replication](#replication)
- [Citation](#citation)

## Installing
```
git clone git@github.com:awyuan/dropout-attacks.git
cd dropout-attacks
pip install -r requirements.txt
```

## Replication

To replicate the results reported in the paper, run `replication.sh` from the dropout-attacks folder. If you only run parts of the script, make sure to run those commands from within the evaluation folder. To generate the figure PDFs found in `./paper/figures_charts/`, run `create_figures.sh` from the dropout-attacks folder.
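For example, an end-to-end run could look like the following sketch (it assumes the scripts are invoked with `bash` from the repository root; use `./replication.sh` and `./create_figures.sh` instead if they are marked executable):

```
cd dropout-attacks           # repository root
bash replication.sh          # reproduce the results reported in the paper
bash create_figures.sh       # regenerate the figure PDFs in ./paper/figures_charts/

# When running only parts of replication.sh by hand, issue those commands
# from the evaluation folder instead:
cd evaluation
```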
## Citation

If you find this project useful, please consider citing:
```
@INPROCEEDINGS{yuan2024dropout,
author = {A. Yuan and A. Oprea and C. Tan},
booktitle = {2024 IEEE Symposium on Security and Privacy (SP)},
title = {Dropout Attacks},
year = {2024},
volume = {},
issn = {2375-1207},
pages = {26-26},
abstract = {Dropout is a common operator in deep learning, aiming to prevent overfitting by randomly dropping neurons during training. This paper introduces a new family of poisoning attacks against neural networks named DROPOUTATTACK. DROPOUTATTACK attacks the dropout operator by manipulating the selection of neurons to drop instead of selecting them uniformly at random. We design, implement, and evaluate four DROPOUTATTACK variants that cover a broad range of scenarios. These attacks can slow or stop training, destroy prediction accuracy of target classes, and sabotage either precision or recall of a target class. In our experiments of training a VGG-16 model on CIFAR-100, our attack can reduce the precision of the victim class by 34.6% (81.7% → 47.1%) without incurring any degradation in model accuracy},
}
```