You can run the code on Google Colab from this link: https://github.com/ZahraHeydari95/BFA/blob/paper_branch/Bit_Flip_based_Adversarial_Weight_Attack.ipynb
This repository includes a Bit-Flip Attack (BFA) algorithm which searches for and identifies the vulnerable bits within a quantized deep neural network. It also contains two methods to defend against the bit-flip attack: 1) binarization-aware training, and 2) piecewise clustering as a relaxation of binarization (PC).
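To make the notion of a "vulnerable bit" concrete, the minimal sketch below (an illustration only, not the repository's attack code) shows how flipping a single bit of an 8-bit two's-complement weight changes its value; flipping the most significant bit perturbs the weight far more than flipping the least significant one.

```python
def flip_bit(weight_int8, bit_position):
    """Flip one bit of a signed 8-bit (two's-complement) weight. Illustration only."""
    pattern = weight_int8 & 0xFF          # unsigned 8-bit pattern
    pattern ^= (1 << bit_position)        # flip the chosen bit
    return pattern - 256 if pattern >= 128 else pattern

w = 93                                    # quantized weight 0b01011101
print(flip_bit(w, 0))                     # LSB flip -> 92  (small change)
print(flip_bit(w, 7))                     # MSB flip -> -35 (large change)
```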
- Python
- PyTorch >= 1.01
- TensorboardX
- conda
- Install the packages listed in Requirement.txt
Please modify `TENSORBOARD=` and `data_path=` in the example bash scripts before running the code.
```bash
HOST=$(hostname)
echo "Current host is: $HOST"

# Automatically detect the host and set the configuration
case $HOST in
"alpha") # alpha is the hostname
    TENSORBOARD='-----/-----/----' # tensorboard environment path
    data_path='----/----/----' # cifar10 dataset path
    ;;
esac
```
Note: Keep the bit-width of weight quantization as 8-bit.
$ bash BFA_imagenet.sh
The random attack is performed on all possible weight bits (regardless of MSB or LSB). You just need to add another line in `BFA_imagenet.sh` to enable the random bit flip, `--random_bfa`:
```bash
...
--attack_sample_size ${attack_sample_size} \
--random_bfa
...
```
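For intuition, the sketch below (an illustration under the assumption that weights are stored as signed 8-bit integers, not the script's actual implementation) picks one weight and one bit position uniformly at random, with no preference between MSB and LSB, and flips it:

```python
import random

def random_bit_flip(weights_int8, n_bits=8):
    """Flip one uniformly chosen bit of one uniformly chosen weight (illustration only)."""
    idx = random.randrange(len(weights_int8))
    bit = random.randrange(n_bits)               # any position, MSB or LSB alike
    pattern = (weights_int8[idx] & 0xFF) ^ (1 << bit)
    weights_int8[idx] = pattern - 256 if pattern >= 128 else pattern
    return idx, bit

weights = [93, -12, 47, 0, -128]
print(random_bit_flip(weights), weights)
```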
Taking ResNet-18 on CIFAR-10 as an example:

- Define a binarized ResNet18 in `models/quan_resnet_cifar.py`.
- To use the weight binarization function, comment out the multi-bit quantization (i.e., copy the file `quantization-binariztaion.py` from the `models` folder into `quantization.py`); a minimal sketch of such a binarization function is given after this list.
- Perform the model training, where the binarized model is initialized in `models/__init__.py` as `resnet18_quan`. Then run `bash train_CIFAR.sh` in the terminal (don't forget the path configuration!).
- With the binarized model trained and stored at `<path-to-model>/model_best.pth.tar`, make sure the following change is made in `BFA_CIFAR.sh`: `pretrained_model='<path-to-model>/model_best.pth.tar'`
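As mentioned in the list above, the binarization function lives in `models/quantization.py`. Below is a minimal PyTorch sketch of what a binarization-aware weight function typically looks like (a sign function with a straight-through estimator); the class name and the mean-absolute-value scaling are illustrative assumptions, not the repository's exact implementation.

```python
import torch

class BinarizeWeight(torch.autograd.Function):
    """Binarize weights to {-alpha, +alpha} in the forward pass and pass the
    gradient straight through in the backward pass (straight-through estimator)."""

    @staticmethod
    def forward(ctx, weight):
        alpha = weight.abs().mean()       # per-tensor scaling factor (assumption)
        return alpha * torch.sign(weight)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                # straight-through estimator

w = torch.randn(4, 3, requires_grad=True)
w_bin = BinarizeWeight.apply(w)           # binarized weights used in the forward pass
w_bin.sum().backward()                    # gradients still reach the real-valued weights
```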
Note: the piecewise weight clustering should not be applied to the binarized NN.
- Make sure `models/quantization.py` uses the multi-bit quantization, in contrast to the binarized counterpart. To change the bit-width, access the code in `models/quantization.py`: under the definitions of `quan_Conv2d` and `quan_Linear`, change the argument `self.N_bits = 8` if you want 8-bit quantization.
- In `train_CIFAR.sh`, enable (i.e., uncomment) the following command: `--clustering --lambda_coeff 1e-3`. Then train the model with `bash train_CIFAR.sh`. A rough sketch of the clustering penalty is given after this list.
- For the BFA evaluation, please refer to the binarization case.
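For reference, here is a rough sketch of a piecewise clustering penalty, under the assumption that it pulls the positive weights toward their mean and the negative weights toward theirs (a relaxation of binarization). The exact formulation used by this repository is defined in its training code, so the function below is illustrative only.

```python
import torch

def piecewise_clustering_penalty(weight, lambda_coeff=1e-3):
    """Illustrative penalty: cluster positive and negative weights around their
    respective means, so the weight distribution approaches two values."""
    w = weight.flatten()
    pos, neg = w[w > 0], w[w < 0]
    penalty = w.new_zeros(())
    if pos.numel() > 0:
        penalty = penalty + (pos - pos.mean()).pow(2).sum()
    if neg.numel() > 0:
        penalty = penalty + (neg - neg.mean()).pow(2).sum()
    return lambda_coeff * penalty

# During training, such a term would be added to the task loss for every
# quantized Conv/Linear layer's weight tensor.
print(piecewise_clustering_penalty(torch.randn(8, 8)))
```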