Hi,
This is nice work. Have you tried the method on larger images? I ran it on 224×224 images and found that the energy gap between clean and adversarial images is hard to enlarge. For example, I set adv_lamda to 0.6 for the first layer and clean_lamda to 0.1, but after training the adv_energy only reaches about 0.23 while the clean_energy is about 0.11. Building on that, I then trained the second and third layers with adv_lamda set to 1.0 and 2.0 respectively; the final adv_energy is about 0.7 and the clean_energy about 0.3.
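For reference, here is roughly how I set things up (a minimal sketch; the hinge-style loss with the lamdas as energy margins is my own reading of the objective, and `detector_loss` is a placeholder name, so please correct me if the actual formulation differs):

```python
import torch.nn.functional as F

# Per-layer settings I used on 224x224 images; the adv_lamda values
# match what I quoted above (clean_lamda for layers 2-3 is assumed
# unchanged from layer 1).
LAYER_CONFIGS = [
    {"adv_lamda": 0.6, "clean_lamda": 0.1},  # layer 1
    {"adv_lamda": 1.0, "clean_lamda": 0.1},  # layer 2
    {"adv_lamda": 2.0, "clean_lamda": 0.1},  # layer 3
]

def detector_loss(clean_energy, adv_energy, adv_lamda, clean_lamda):
    """My understanding of the objective: push adversarial energy
    above the adv_lamda margin and clean energy below clean_lamda."""
    loss_adv = F.relu(adv_lamda - adv_energy).mean()
    loss_clean = F.relu(clean_energy - clean_lamda).mean()
    return loss_adv + loss_clean
```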
I wonder whether a different configuration is needed for larger images.
In addition, I'm curious about detection transferability across attacks, e.g., whether a detector trained on PGD can also detect the Square attack. This is mentioned in your paper, but there seem to be no further experiments on it?
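Concretely, the experiment I have in mind is something like the following (a sketch using torchattacks for the Square attack; `detector_energy` and its scoring interface are placeholders, since I don't know how your detector exposes per-input energies):

```python
import torch
import torchattacks

def cross_attack_auroc(classifier, detector_energy, loader, device="cuda"):
    """Train-on-PGD / test-on-Square check: craft Square-attack examples
    and see whether the PGD-trained detector's energy still separates
    them from clean inputs."""
    attack = torchattacks.Square(classifier, norm="Linf", eps=8 / 255,
                                 n_queries=5000)
    clean_scores, adv_scores = [], []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(x, y)
        clean_scores.append(detector_energy(x).detach().cpu())
        adv_scores.append(detector_energy(x_adv).detach().cpu())
    clean_scores = torch.cat(clean_scores)
    adv_scores = torch.cat(adv_scores)
    # AUROC via pairwise comparison (fine for modest test-set sizes).
    auroc = (adv_scores[:, None] > clean_scores[None, :]).float().mean()
    return auroc.item()
```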
Looking forward to your reply, thanks.