Update adversary attack generation example (#12918)
* Fix adversary example generation

* Update README.md

* Fix test_utils.list_gpus()

* Fix unused variable
ThomasDelteil authored and sandeep-krishnamurthy committed Nov 6, 2018
1 parent 8f6efe3 commit 722ad7a
Showing 3 changed files with 185 additions and 164 deletions.
4 changes: 2 additions & 2 deletions example/adversary/README.md
@@ -1,7 +1,7 @@
 # Adversarial examples
 
-This demonstrates the concept of "adversarial examples" from [1] showing how to fool a well-trained CNN.
-The surprising idea is that one can easily generate examples which the CNN will consistently make the wrong prediction for that a human can easily tell are correct.
+Adversarial examples are samples where the input has been manipulated to confuse a model (i.e. confident in an incorrect prediction) but where the correct answer still appears obvious to a human.
+This method for generating adversarial examples uses the gradient of the loss with respect to the input to craft the adversarial examples.
 
 [1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." [arXiv preprint arXiv:1412.6572 (2014)](https://arxiv.org/abs/1412.6572)
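
The new README text describes the attack only at a high level: perturb the input along the gradient of the loss with respect to that input. As a rough illustration, here is a minimal sketch of the fast gradient sign method from [1] written for MXNet Gluon; the function name `fgsm_perturb`, the default `epsilon`, and the assumption that inputs are scaled to [0, 1] are illustrative, not taken from the commit's actual example code.

```python
# Minimal FGSM sketch for MXNet Gluon. This illustrates the technique the
# README describes; it is not the exact code changed in this commit.
# Assumptions: `net` is a trained gluon Block and inputs lie in [0, 1].
import mxnet as mx
from mxnet import nd, autograd, gluon

def fgsm_perturb(net, data, label, epsilon=0.1):
    """Return an adversarial copy of `data` via the fast gradient sign method."""
    data = data.copy()
    data.attach_grad()  # ask autograd to track gradients w.r.t. the input
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    with autograd.record():
        loss = loss_fn(net(data), label)
    loss.backward()  # fills data.grad with dLoss/dInput
    # Step in the direction that increases the loss, then clip to a valid range.
    adversarial = data + epsilon * nd.sign(data.grad)
    return nd.clip(adversarial, 0, 1)
```

Running `net` again on the array returned by `fgsm_perturb(net, x, y)` shows predictions flipping while the inputs remain visually unchanged, which is exactly the effect the README describes.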
