I really appreciate your nice work! But I think there is an error: the attack is performed using the ground-truth (GT) label rather than the label the model predicts on the clean image.
Hope to get your answer, thank you!
This is not an error; there are two possible settings. The comparison is fair as long as all methods use the same setting. The methods in this paper and all comparison methods use the GT label both to generate adversarial examples and to evaluate performance.
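To make the distinction concrete, here is a minimal sketch of a one-step gradient attack (FGSM-style) on a toy logistic-regression model, computing the loss gradient with respect to the ground-truth label `y`, as in the setting described above. The model, weights, and input values are hypothetical, purely for illustration; the paper's actual attack and architecture may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_gt(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model,
    using the ground-truth label y (not the model's own prediction)."""
    p = sigmoid(w @ x + b)        # model confidence for class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx for GT label y
    return x + eps * np.sign(grad_x)

# toy model and input (hypothetical values, for illustration only)
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])
y = 1.0                           # ground-truth label, per the GT setting
x_adv = fgsm_gt(x, y, w, b, eps=0.1)
```

In the alternative setting, `y` would instead be the model's own prediction on the clean input (`float(sigmoid(w @ x + b) > 0.5)`); the two settings differ only when the model misclassifies the clean image.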
OK, thank you. I have another question: how do you calculate the success-rate results for clean and adversarial inputs in Table 2 of the paper?
In the clean configuration, is the success rate the agreement between the classifier's prediction on the clean image and the defense algorithm's prediction with the clean image as input?
In the adversarial configuration, is the success rate the agreement between the classifier's prediction on the adversarial image and the defense algorithm's prediction with the adversarial image as input?
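A sketch of the metric the question above describes: the agreement rate between two sets of predictions, computed once with clean inputs and once with adversarial inputs. All prediction values here are made-up placeholders; this only illustrates the candidate definition being asked about, not the paper's confirmed evaluation code.

```python
import numpy as np

def agreement_rate(pred_a, pred_b):
    """Fraction of samples on which two prediction lists agree."""
    return float(np.mean(np.asarray(pred_a) == np.asarray(pred_b)))

# hypothetical predicted class labels for 5 samples (illustration only)
clf_clean = [0, 1, 2, 1, 0]   # classifier on clean images
def_clean = [0, 1, 2, 0, 0]   # defense on clean images
clf_adv   = [2, 1, 0, 1, 2]   # classifier on adversarial images
def_adv   = [0, 1, 2, 1, 0]   # defense on adversarial images

clean_rate = agreement_rate(clf_clean, def_clean)  # "clean" column
adv_rate   = agreement_rate(clf_adv, def_adv)      # "adversarial" column
```

Note that under the paper's GT setting, the reference predictions could alternatively be the GT labels themselves rather than the classifier's outputs; which variant Table 2 reports is exactly what this question asks the authors to clarify.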