
The attack is performed on GT label and not on the label predicted by the model on the clean image #2

Open
cimice15 opened this issue Aug 22, 2023 · 2 comments


@cimice15

I really appreciate your nice work! But I think there is an error: the attack is performed on the GT (ground-truth) label, not on the label predicted by the model on the clean image.

Hope to get your answer, thank you!

@ylhz
Member

ylhz commented Aug 27, 2023

We appreciate your interest in our work!

This is not an error; there are two possible settings. The comparison is fair as long as all methods use the same setting. The method in this paper and all comparison methods use the GT label both to generate adversarial examples and to evaluate performance.
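To illustrate the two settings, here is a minimal FGSM-style sketch on a toy linear softmax classifier (a hypothetical stand-in for the paper's deep networks, which this code does not reproduce). The only difference between the settings is which label is fed into the loss whose gradient perturbs the input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear softmax classifier: 3 classes, 4 input features.
# (Stand-in for the real model; the paper uses deep networks.)
W = rng.normal(size=(3, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, label, eps=0.1):
    """One untargeted FGSM step: x + eps * sign(d CE(f(x), label) / dx).

    For a linear softmax model, the cross-entropy gradient w.r.t. x is
    W^T (softmax(Wx) - onehot(label)).
    """
    p = softmax(W @ x)
    onehot = np.eye(W.shape[0])[label]
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

x = rng.normal(size=4)

# Setting 1 (as in the paper): attack the ground-truth label.
gt_label = 1
x_adv_gt = fgsm(x, gt_label)

# Setting 2: attack the label the model predicts on the clean input.
pred_label = int(np.argmax(W @ x))
x_adv_pred = fgsm(x, pred_label)
```

The two variants coincide whenever the clean prediction is already correct; they differ only on clean images the model misclassifies, which is why fixing one setting consistently across all compared methods keeps the evaluation fair.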

@cimice15
Author

cimice15 commented Sep 8, 2023

Ok, thank you. I have another question: how do you calculate the success rates for the clean and adversarial columns of Table 2 in the paper?
In the clean configuration, is the success rate the agreement between the classifier's prediction on the clean image and the defense algorithm's prediction with the clean image as input?
In the adversarial configuration, is it the agreement between the classifier's prediction on the adversarial image and the defense algorithm's prediction with the adversarial image as input?

Thank you!
