
Commit

pdf generator is fixed
fazelehh committed Jan 16, 2025
1 parent b443719 commit d845713
Showing 5 changed files with 18 additions and 262 deletions.
2 changes: 1 addition & 1 deletion examples/mia/celebA_HQ/.gitignore
@@ -1,5 +1,5 @@
data/*
!data/.gitkeep

-leakprp_output/*
+leakpro_output/*
target/*
28 changes: 0 additions & 28 deletions examples/mia/celebA_HQ/audit.yaml
@@ -11,34 +11,6 @@ audit: # Configurations for auditing
gamma: 2.0
offline_a: 0.33 # parameter from which we compute p(x) from p_OUT(x) such that p_IN(x) = a p_OUT(x) + b.
offline_b: 0.66
-# qmia:
-# training_data_fraction: 1.0 # Fraction of the auxilary dataset (data without train and test indices) to use for training the quantile regressor
-# epochs: 5 # Number of training epochs for quantile regression
-# population:
-# attack_data_fraction: 1.0 # Fraction of the auxilary dataset to use for this attack
-# lira:
-# training_data_fraction: 0.3 # Fraction of the auxilary dataset to use for this attack (in each shadow model training)
-# num_shadow_models: 2 # Number of shadow models to train
-# online: False # perform online or offline attack
-# fixed_variance: True # Use a fixed variance for the whole audit
-# boosting: True
-# loss_traj:
-# training_distill_data_fraction : 0.7 # Fraction of the auxilary dataset to use for training the distillation models D_s = (1-D_KD)/2
-# number_of_traj: 10 # Number of epochs (number of points in the loss trajectory)
-# label_only: False # True or False
-# mia_classifier_epochs: 100
-# HSJ:
-# attack_data_fraction: 0.01 # Fraction of the auxilary dataset to use for this attack
-# target_metadata_path: "./target/model_metadata.pkl"
-# num_iterations: 2 # Number of iterations for the optimization
-# initial_num_evals: 100 # Number of evaluations for number of random vecotr to estimate the gradient
-# max_num_evals: 10000 # Maximum number of evaluations
-# stepsize_search: "geometric_progression" # Step size search method
-# gamma: 1.0 # Gamma for the optimization
-# constraint: 2
-# batch_size: 50
-# verbose: True
-# epsilon_threshold: 1e-6

output_dir: "./leakpro_output"
attack_type: "mia" #mia, gia
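The RMIA offline settings left in place (gamma, offline_a, offline_b) encode the relation noted in the config comment, p_IN(x) = a * p_OUT(x) + b. As a minimal illustration only, not LeakPro's implementation, the values could be combined as below; the function name is hypothetical, and the final averaging step is an assumption following the RMIA paper's convention p(x) = (p_IN(x) + p_OUT(x)) / 2.

# Illustration of the offline_a / offline_b comment above; not LeakPro's code.
def offline_p_x(p_out: float, offline_a: float = 0.33, offline_b: float = 0.66) -> float:
    """Approximate p(x) from p_OUT(x) only, assuming p_IN(x) = a * p_OUT(x) + b
    and (following the RMIA paper) p(x) = (p_IN(x) + p_OUT(x)) / 2."""
    p_in = offline_a * p_out + offline_b
    return 0.5 * (p_in + p_out)

print(offline_p_x(0.2))  # 0.5 * ((0.33 * 0.2 + 0.66) + 0.2) = 0.463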
5 changes: 3 additions & 2 deletions examples/mia/celebA_HQ/celebA_HQ_handler.py
@@ -19,9 +19,10 @@ def get_criterion(self) -> torch.nn.Module:

def get_optimizer(self, model: torch.nn.Module) -> optim.Optimizer:
"""Set the optimizer for the model."""
-learning_rate =0.1 # Default learning rate
+learning_rate =0.01 # Default learning rate
momentum = 0.9 # Default momentum
-return optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
+weight_decay = 0.0001 # Default weight decay
+return optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum, weight_decay=weight_decay)

def train(
self,
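For reference, a standalone sketch of the SGD optimizer the updated get_optimizer now builds with its new defaults (lr 0.01, momentum 0.9, weight decay 1e-4); the Linear model is a placeholder, not the CelebA-HQ target model.

# Sketch of the optimizer configured by the updated handler; placeholder model.
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)  # stand-in for the CelebA-HQ target model
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0001)

group = optimizer.param_groups[0]
print(group["lr"], group["momentum"], group["weight_decay"])  # 0.01 0.9 0.0001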
243 changes: 13 additions & 230 deletions examples/mia/celebA_HQ/main_celebA_hq.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion examples/mia/celebA_HQ/train_config.yaml
@@ -8,7 +8,7 @@ train: # Configuration for training
optimizer: SGD # String which indicates the optimizer. We support Adam and SGD. For speedyresnet, it uses its own optimizer.
learning_rate: 0.001 # Float number for indicating learning rate for training the target model. For speedyresnet, it uses its own learning_rate.
momentum: 0.9
-weight_decay: 0.1 # Float number for indicating weight decay for training the target model. For speedyresnet, it uses its own weight_decay.
+weight_decay: 0.001 # Float number for indicating weight decay for training the target model. For speedyresnet, it uses its own weight_decay.

data: # Configuration for data
dataset: celebA_HQ # String indicates the name of the dataset
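A hedged sketch of how the train: block above could be mapped onto the optimizer, assuming the shown keys sit directly under train: and that PyYAML and PyTorch are used for loading; LeakPro's actual config handling may differ.

# Hypothetical loader for the train: block above; not LeakPro's own code path.
import yaml
import torch.nn as nn
import torch.optim as optim

with open("examples/mia/celebA_HQ/train_config.yaml") as f:
    cfg = yaml.safe_load(f)["train"]

model = nn.Linear(10, 2)  # stand-in for the CelebA-HQ target model
assert cfg["optimizer"] == "SGD"  # the config above selects SGD
optimizer = optim.SGD(
    model.parameters(),
    lr=cfg["learning_rate"],           # 0.001 per the config above
    momentum=cfg["momentum"],          # 0.9
    weight_decay=cfg["weight_decay"],  # 0.001 after this commit
)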
