
Integrating single-shot fast gradient sign method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers

Hassan, Muhammad; Younis, Shahzad; Rasheed, Ahmed; Bilal, Muhammad

Authors

Muhammad Hassan

Shahzad Younis

Ahmed Rasheed

Muhammad Bilal (Muhammad.Bilal@uwe.ac.uk)
Associate Professor - Big Data Application



Abstract

Deep learning architectures have emerged as powerful function approximators in a broad spectrum of complex representation learning tasks, such as computer vision, natural language processing, and collaborative filtering. These architectures have a high potential to learn the intrinsic structure of data and extract valuable insights. Despite the surge in the development of state-of-the-art intelligent systems using deep neural networks (DNNs), these systems have been found to be vulnerable to adversarial examples produced by adding small-magnitude perturbations. Such adversarial examples are adept at misleading DNN classifiers. In the past, different attack strategies have been proposed to produce adversarial examples in the digital, physical, and transform domains, but generating perceptually realistic adversarial examples still requires further research effort. In this paper, we present a novel approach to producing adversarial examples that combines the single-shot fast gradient sign method (FGSM) with spatial- as well as transform-domain image processing techniques. The resulting perturbations neutralize the impact of low-intensity regions, instilling noise only in the selective high-intensity regions of the input image. When the customized perturbation is combined with the one-step FGSM perturbation in an untargeted black-box attack scenario, the proposed approach successfully fools state-of-the-art DNN classifiers, with 99% of adversarial examples being misclassified on the ImageNet validation dataset.
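
To make the attack recipe concrete, below is a minimal PyTorch sketch of a single-shot FGSM step gated by a per-pixel intensity mask. This is an illustrative reading of the abstract only, not the authors' exact pipeline: the `masked_fgsm` function, the luminance-based mask, and the 0.5 threshold are assumptions standing in for the paper's more elaborate spatial- and transform-domain filtering.

```python
# Illustrative sketch (assumed, not the authors' code): one-step FGSM whose
# perturbation is confined to high-intensity pixels. Assumes a PyTorch
# classifier `model` and an input batch `x` in [0, 1] with labels `y`.
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, y, eps=8 / 255, intensity_thresh=0.5):
    """Single-shot FGSM restricted to high-intensity image regions.

    The binary mask zeroes the perturbation in low-intensity regions,
    mirroring the paper's idea of instilling noise only in selective
    high-intensity regions. The threshold value is an assumption.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Per-pixel luminance (mean over channels) decides where noise is allowed.
    intensity = x.detach().mean(dim=1, keepdim=True)   # shape (N, 1, H, W)
    mask = (intensity > intensity_thresh).float()      # 1 = high-intensity

    # Standard one-step FGSM update, gated by the intensity mask.
    x_adv = x.detach() + eps * x.grad.sign() * mask
    return x_adv.clamp(0.0, 1.0)
```

In an untargeted black-box setting as described in the abstract, such a perturbation would be crafted on a surrogate model and transferred to the target classifier; that transfer step is omitted here for brevity.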

Citation

Hassan, M., Younis, S., Rasheed, A., & Bilal, M. (2022). Integrating single-shot fast gradient sign method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers. In Proceedings Volume 12084, Fourteenth International Conference on Machine Vision (ICMV 2021). https://doi.org/10.1117/12.2623585

Conference Name The 14th International Conference on Machine Vision (ICMV 2021)
Conference Location Rome, Italy
Start Date Dec 1, 2021
Acceptance Date Oct 18, 2021
Online Publication Date Mar 4, 2022
Publication Date Mar 4, 2022
Deposit Date Feb 28, 2022
Publicly Available Date Apr 5, 2022
Publisher Society of Photo-Optical Instrumentation Engineers
Volume 12084
Book Title Proceedings Volume 12084, Fourteenth International Conference on Machine Vision (ICMV 2021)
ISBN 9781510650442
DOI https://doi.org/10.1117/12.2623585
Keywords FGSM; Image Processing; Steganography; Perturbations; Adversarial Examples; Black-Box Attacks
Public URL https://uwe-repository.worktribe.com/output/8278871
Publisher URL http://icmv.org/

Files

Integrating single-shot fast gradient sign method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers (1.7 MB)
PDF

Licence
http://www.rioxx.net/licenses/all-rights-reserved

Publisher Licence URL
http://www.rioxx.net/licenses/all-rights-reserved

Copyright Statement
This is the author's accepted manuscript of the following article: Hassan, M., Younis, S., Rasheed, A., & Bilal, M. (2022). Integrating single-shot fast gradient sign method (FGSM) with classical image processing techniques for generating adversarial attacks on deep learning classifiers. In Proceedings Volume 12084, Fourteenth International Conference on Machine Vision (ICMV 2021). The final published version is available here: https://doi.org/10.1117/12.2623585.



