
Memory issue for attack using many shadow models or large datasets #166

Open
rickardbrannvall opened this issue Oct 23, 2024 · 0 comments
@rickardbrannvall
Collaborator

Issue

Problem Description

The LiRA attack runs out of memory when many shadow models are used:

  1. Shadow models are kept in memory in a large list.
  2. The evaluated scaled logits for all shadow models are kept in memory.
  3. There may also be issues with GPU memory clean-up.

Issue (2) also materializes for large datasets (even with few shadow models).

This limits the usability of LeakPro to small toy problems.
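For a rough sense of scale (illustrative numbers only, not measurements from LeakPro), holding a few hundred shadow models of a ResNet-18-sized network in memory at once already costs on the order of 10 GB, before counting logit arrays or GPU allocations:

```python
# Back-of-envelope estimate with assumed, illustrative numbers:
# 256 shadow models, each with ~11.7M float32 parameters (ResNet-18-sized).
n_models = 256
n_params = 11_700_000
bytes_per_param = 4  # float32
total_gb = n_models * n_params * bytes_per_param / 1024**3
print(f"~{total_gb:.1f} GB just to hold the shadow models in memory")  # ~11.2 GB
```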

Expected Behavior

LeakPro should also work for attacks

  • on complex models with many parameters
  • for large datasets
  • using many shadow models.

What Needs to be Done

Consider splitting the evaluation of the attack into smaller batches (a sketch follows the list below):

  • read shadow models from disk one at a time
  • read data batches from disk
  • write results to disk
  • clean up intermediate results and used models
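A minimal sketch of what batched evaluation could look like. This is not the current LeakPro API: the file layout, the assumption that full model objects were saved with `torch.save`, and the scaled-logit line are all illustrative placeholders for the attack's actual implementation.

```python
import gc
import numpy as np
import torch

def evaluate_shadow_models(model_paths, dataloader, out_path, n_samples, device="cuda"):
    """Evaluate shadow models one at a time and stream scaled logits to disk.

    Illustrative sketch: `model_paths` are checkpoints on disk, `dataloader`
    yields (inputs, labels) batches, and results go to a memory-mapped array
    instead of being accumulated in RAM.
    """
    n_models = len(model_paths)
    # Memory-mapped output of shape (n_models, n_samples); never fully resident in RAM.
    logits_out = np.lib.format.open_memmap(
        out_path, mode="w+", dtype=np.float32, shape=(n_models, n_samples)
    )

    for m_idx, path in enumerate(model_paths):
        model = torch.load(path, map_location=device)  # one shadow model in memory at a time
        model.eval()

        offset = 0
        with torch.no_grad():
            for inputs, labels in dataloader:  # data read in batches, not all at once
                inputs, labels = inputs.to(device), labels.to(device)
                logits = model(inputs)
                # Illustrative LiRA-style rescaling of the true-class confidence;
                # the actual transform lives in the attack implementation.
                p = torch.softmax(logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
                scaled = torch.logit(p, eps=1e-7)
                batch = scaled.float().cpu().numpy()
                logits_out[m_idx, offset:offset + len(batch)] = batch
                offset += len(batch)

        logits_out.flush()            # persist intermediate results to disk
        del model                     # drop the model before loading the next one
        gc.collect()
        if device == "cuda":
            torch.cuda.empty_cache()  # release cached GPU memory

    return out_path
```

With this layout, peak memory is dominated by a single shadow model plus one data batch, independent of the number of shadow models or the dataset size.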

How Can It Be Tested or Reproduced

Run attacks with an increasing number of shadow models and with large datasets, and verify that memory usage stays bounded.
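One way to check this is to record memory usage around an attack run. A rough sketch, assuming a hypothetical attack callable `run_attack_fn` that is not part of the current API:

```python
import psutil
import torch

def memory_report(run_attack_fn):
    """Run an attack callable and report host RSS and peak GPU memory (illustrative)."""
    process = psutil.Process()
    if torch.cuda.is_available():
        torch.cuda.reset_peak_memory_stats()

    run_attack_fn()  # e.g. a LiRA run with N shadow models

    rss_gb = process.memory_info().rss / 1024**3
    print(f"Host RSS after attack: {rss_gb:.2f} GB")
    if torch.cuda.is_available():
        gpu_gb = torch.cuda.max_memory_allocated() / 1024**3
        print(f"Peak GPU memory allocated: {gpu_gb:.2f} GB")
```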
