Presentation + Paper
12 April 2021
Defending against sparse adversarial attacks using impulsive noise reduction filters
Krystian Radlak, Michal Szczepankiewicz, and Bogdan Smolka
Abstract
Deep Neural Networks (DNNs) have been deployed in many real-world applications across industrial and academic domains and have proven to deliver outstanding performance. However, DNNs are vulnerable to adversarial attacks, i.e., small perturbations embedded in an image that cause the network to misclassify it. Consequently, introducing DNNs into safety-critical systems such as autonomous vehicles, unmanned aerial vehicles, or healthcare devices carries a high risk that their ability to recognize and interpret the environment will be compromised, potentially with devastating consequences. Enhancing the robustness of DNNs through dedicated defense mechanisms is therefore of the utmost importance. In this paper, we evaluate a set of state-of-the-art denoising filters designed for impulsive noise removal as defensive solutions. The filters are applied as a pre-processing step that removes adversarial patterns from the source image before the classification task is performed. As a result, the pre-processing defense block can be easily integrated with any type of classifier, without any knowledge of its training procedure or internal architecture. Moreover, the evaluated filtering methods can be regarded as universal defensive techniques, since they are completely independent of the internals of a particular attack and can be applied against any type of adversarial threat. Experimental results obtained on the German Traffic Sign Recognition Benchmark (GTSRB) show that the denoising filters provide high robustness against sparse adversarial attacks without significantly decreasing classification performance on non-altered data.
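The pre-processing pipeline described in the abstract can be illustrated with a minimal sketch, which is not taken from the paper: a plain median filter stands in here for the state-of-the-art impulsive-noise denoisers the authors evaluate, and `classifier` is a hypothetical placeholder for any pre-trained model that maps an image to class scores.

```python
# Minimal sketch (assumption, not the paper's implementation): an impulsive-noise
# filter used as a model-agnostic pre-processing defence. A simple median filter
# stands in for the evaluated denoisers; `classifier` is a hypothetical callable.
import numpy as np
from scipy.ndimage import median_filter


def denoise_then_classify(image: np.ndarray, classifier, kernel_size: int = 3) -> int:
    """Remove sparse (impulsive) perturbations, then run the unmodified classifier."""
    # Filter each colour channel independently (image assumed to be H x W x C).
    denoised = np.stack(
        [median_filter(image[..., c], size=kernel_size) for c in range(image.shape[-1])],
        axis=-1,
    )
    # The classifier itself is left untouched, so the defence is architecture-agnostic.
    scores = classifier(denoised)
    return int(np.argmax(scores))
```

Because the filtering step only touches the input image, the same wrapper can be placed in front of any classifier and reused against any attack that injects sparse perturbations.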
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Krystian Radlak, Michal Szczepankiewicz, and Bogdan Smolka "Defending against sparse adversarial attacks using impulsive noise reduction filters", Proc. SPIE 11736, Real-Time Image Processing and Deep Learning 2021, 117360O (12 April 2021); https://doi.org/10.1117/12.2587999
KEYWORDS
Detection and tracking algorithms, Denoising, Defense and security, Image classification, Image filtering, Network security, Machine learning