Most previous studies on visual adversarial attacks focus mainly on attack performance; few consider the appearance of the examples after the generated adversarial perturbation is applied. Leaving the adversarial perturbation unrestricted often produces conspicuous, attention-grabbing patterns in the generated adversarial examples that humans can easily identify. To address this issue, we propose a method that shapes the perturbation generated by visual adversarial attacks for object recognition by leveraging post-hoc visual explanation methods for DNNs to generate saliency maps, which indicate the regions of the input image that contribute most to the model's prediction. By pointing out the region where the adversarial attack should focus to maximize its impact, and by confining the scope of the perturbation to that region, our method generates natural-looking adversarial examples while maintaining high attack performance. In extensive experiments comparing the proposed method against current state-of-the-art adversarial attack techniques, all applied to widely used deep neural networks on standard datasets, our method produces significantly more realistic and natural-looking adversarial examples than several state-of-the-art baselines while achieving competitive attack performance.
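The core idea, confining an adversarial perturbation to the saliency-indicated region, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it stands in a linear scorer for a DNN (so the input gradient doubles as the saliency map), applies an FGSM-style step, and masks it to the top fraction of most salient pixels. The function name `saliency_masked_fgsm` and the parameter `keep_frac` are hypothetical.

```python
import numpy as np

def saliency_masked_fgsm(x, w, y, eps=0.1, keep_frac=0.2):
    """Sketch of saliency-guided perturbation (hypothetical simplification).

    x: flattened input image in [0, 1]; w: weights of a linear scorer
    (a stand-in for a DNN, since for the score w.x the input gradient
    is w itself); y: +1/-1 direction of the attack step.
    The absolute gradient serves as a saliency map, and the perturbation
    is confined to the `keep_frac` most salient pixels, leaving the rest
    of the image untouched.
    """
    grad = w                       # input gradient of the linear score
    saliency = np.abs(grad)        # saliency map from the explanation step
    k = max(1, int(keep_frac * x.size))
    thresh = np.partition(saliency, -k)[-k]
    mask = (saliency >= thresh).astype(x.dtype)   # 1 inside salient region
    perturb = eps * np.sign(grad) * mask          # FGSM step, masked
    return np.clip(x + y * perturb, 0.0, 1.0)    # keep valid pixel range
```

In a real attack the saliency map would come from a post-hoc explanation method such as Grad-CAM applied to the target network, and the masked step would be iterated (PGD-style) rather than taken once; the masking logic, however, stays the same.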
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Yizhou He, Jing Fu, Kang Wang, and Bo Lei
"Guidance to effective and natural adversarial attack via visual explanation for DNNs", Proc. SPIE 13104, Advanced Fiber Laser Conference (AFL2023), 131043C (18 March 2024); https://doi.org/10.1117/12.3023442