AI under attack: Metric-driven analysis of cybersecurity threats in deep learning models for healthcare applications

Authors: Sarfraz Brohi; Qurat-ul-ain Mastoi
Abstract
Incorporating Artificial Intelligence (AI) into healthcare has transformed disease diagnosis and treatment by offering unprecedented benefits. However, it has also revealed critical cybersecurity vulnerabilities in Deep Learning (DL) models, which pose significant risks to patient safety and to patient trust in AI-driven applications. Existing studies primarily focus on theoretical vulnerabilities or specific attack types, leaving a gap in understanding the practical implications of multiple attack scenarios on healthcare AI. In this paper, we provide a comprehensive analysis of key attack vectors that threaten the reliability of DL models, including adversarial attacks such as the gradient-based Fast Gradient Sign Method (FGSM), perturbation-based evasion attacks, and data poisoning, with a specific focus on breast cancer detection. We propose the Healthcare AI Vulnerability Assessment Algorithm (HAVA), which systematically simulates these attacks, calculates the Post-Attack Vulnerability Index (PAVI), and quantitatively evaluates their impacts. Our findings reveal that the adversarial FGSM and evasion attacks significantly reduced model accuracy from 97.36% to 61.40% (PAVI: 0.385965) and 62.28% (PAVI: 0.377193), respectively, demonstrating their severe impact on performance, whereas data poisoning had a milder effect, retaining 89.47% accuracy (PAVI: 0.105263). The confusion matrices also revealed a higher rate of false positives under the adversarial FGSM and evasion attacks, in contrast to the more balanced misclassification pattern observed under data poisoning. By proposing a unified framework for quantifying and analyzing these post-attack vulnerabilities, this research contributes to formulating resilient AI models for critical domains where accuracy and reliability are essential.
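The reported figures are consistent with PAVI being the post-attack error rate, i.e. 1 − post-attack accuracy (e.g. 1 − 0.6140 = 0.3860, 1 − 0.6228 = 0.3772, 1 − 0.8947 = 0.1053). The sketch below illustrates that interpretation on a toy binary classifier: an FGSM-style perturbation (step each input by epsilon in the sign of the loss gradient) followed by a PAVI computation. The logistic-regression model, synthetic data, and epsilon value are all assumptions for illustration; the paper's HAVA algorithm and DL model are not reproduced here.

```python
# Hedged sketch of an FGSM attack and a PAVI computed as 1 - post-attack
# accuracy. Everything here (model, data, epsilon) is illustrative, not
# the paper's actual HAVA pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "diagnosis" data: two Gaussian clusters in 5 dimensions.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 5)),
               rng.normal(1.0, 1.0, (100, 5))])
y = np.hstack([np.zeros(100), np.ones(100)])

# Logistic-regression stand-in for the DL classifier, trained by
# plain gradient descent on the log loss.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def accuracy(X_in):
    return np.mean(((X_in @ w + b) > 0) == y)

# FGSM: perturb each input by epsilon in the sign of the input-gradient
# of the loss. For logistic loss, d(loss)/dx = (p - y) * w.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]
X_adv = X + 0.5 * np.sign(grad_x)          # epsilon = 0.5 (assumed)

clean_acc = accuracy(X)
adv_acc = accuracy(X_adv)
pavi = 1.0 - adv_acc                        # PAVI as post-attack error rate
```

The attack needs no knowledge beyond the model's gradient, which is why gradient-based FGSM is a standard first benchmark for adversarial robustness; the accuracy drop from `clean_acc` to `adv_acc`, and hence the PAVI, grows with epsilon.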
| Field | Value |
|---|---|
| Journal Article Type | Article |
| Acceptance Date | Mar 3, 2025 |
| Online Publication Date | Mar 10, 2025 |
| Publication Date | Mar 10, 2025 |
| Deposit Date | Mar 26, 2025 |
| Publicly Available Date | Mar 26, 2025 |
| Journal | Algorithms |
| Electronic ISSN | 1999-4893 |
| Publisher | MDPI |
| Peer Reviewed | Peer Reviewed |
| Volume | 18 |
| Issue | 3 |
| Article Number | 157 |
| DOI | https://doi.org/10.3390/a18030157 |
| Public URL | https://uwe-repository.worktribe.com/output/13970176 |
Files

AI under attack: Metric-driven analysis of cybersecurity threats in deep learning models for healthcare applications (PDF, 3.7 MB)

Licence: http://creativecommons.org/licenses/by/4.0/
Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/