On the Adversarial Robustness of Vision Transformers
Rulin Shao, Zhouxing Shi, et al.
NeurIPS 2022
Adversarial training has become one of the most effective methods for improving the robustness of neural networks. However, it often suffers from poor generalization on both clean and perturbed data. Current robust training methods always use a uniform perturbation strength for every sample when generating adversarial examples during training. However, we show that this leads to worse training and generalization error, and forces predictions to match one-hot labels. In this paper, we therefore propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training. We first show theoretically that the CAT scheme improves generalization. Then, through extensive experiments, we show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods. The full version of this paper is available at https://arxiv.org/abs/2002.06789.
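The two ingredients the abstract names, a per-sample perturbation level and a correspondingly softened label, can be sketched as follows. This is a minimal NumPy illustration only: the function names, the linear label-smoothing rule, and the grow-epsilon-while-still-correct schedule are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def customize_labels(y_onehot, eps, eps_max, c=0.5):
    # Soften the one-hot target in proportion to each sample's current
    # perturbation strength: larger eps -> softer label, so the model is
    # not forced to match a hard one-hot label on heavily perturbed inputs.
    # (Linear interpolation toward the uniform distribution is an
    # illustrative choice, not necessarily the paper's rule.)
    n_classes = y_onehot.shape[-1]
    alpha = c * eps / eps_max          # per-sample smoothing amount in [0, c]
    return (1.0 - alpha[:, None]) * y_onehot + alpha[:, None] / n_classes

def update_epsilons(eps, correct_under_attack, eps_max, step=0.01):
    # Per-sample perturbation schedule (assumed): grow eps for samples the
    # model still classifies correctly under attack, capped at eps_max;
    # leave eps unchanged for samples already misclassified.
    return np.where(correct_under_attack,
                    np.minimum(eps + step, eps_max),
                    eps)

# Example: two samples, three classes.
y = np.eye(3)[[0, 1]]                  # one-hot labels
eps = np.array([0.0, 0.03])            # per-sample perturbation levels
labels = customize_labels(y, eps, eps_max=0.03)
# Sample 0 (eps = 0) keeps its hard label; sample 1 (eps = eps_max)
# is smoothed halfway toward uniform.
eps = update_epsilons(eps, np.array([True, True]), eps_max=0.03)
```

Each adversarial example would then be generated with its own `eps[i]` (e.g. as the PGD radius) and trained against its softened label, rather than a single global radius and hard one-hot targets.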