
AI under attack: Metric-driven analysis of cybersecurity threats in deep learning models for healthcare applications

Brohi, Sarfraz; Mastoi, Qurat-ul-ain

Authors

Sarfraz Brohi

Qurat-ul-ain Mastoi



Abstract

Incorporating Artificial Intelligence (AI) into healthcare has transformed disease diagnosis and treatment by offering unprecedented benefits. However, it has also revealed critical cybersecurity vulnerabilities in Deep Learning (DL) models, which pose significant risks to patient safety and to patients' trust in AI-driven applications. Existing studies focus primarily on theoretical vulnerabilities or specific attack types, leaving a gap in understanding the practical implications of multiple attack scenarios for healthcare AI. In this paper, we provide a comprehensive analysis of key attack vectors that threaten the reliability of DL models, including adversarial attacks such as the gradient-based Fast Gradient Sign Method (FGSM), perturbation-based evasion attacks, and data poisoning, with a specific focus on breast cancer detection. We propose the Healthcare AI Vulnerability Assessment Algorithm (HAVA), which systematically simulates these attacks, calculates the Post-Attack Vulnerability Index (PAVI), and quantitatively evaluates their impacts. Our findings reveal that the adversarial FGSM and evasion attacks significantly reduced model accuracy from 97.36% to 61.40% (PAVI: 0.385965) and 62.28% (PAVI: 0.377193), respectively, demonstrating their severe impact on performance, whereas data poisoning had a milder effect, retaining 89.47% accuracy (PAVI: 0.105263). The confusion matrices also revealed a higher rate of false positives under the adversarial FGSM and evasion attacks than the more balanced misclassification patterns observed under data poisoning. By proposing a unified framework for quantifying and analyzing these post-attack vulnerabilities, this research contributes to building resilient AI models for critical domains where accuracy and reliability are essential.
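
The abstract does not reproduce the HAVA pseudocode, but each reported PAVI value equals one minus the corresponding post-attack accuracy (e.g., 1 − 0.6140 ≈ 0.385965), suggesting PAVI is the post-attack misclassification rate on the test set. The sketch below illustrates that reading with an FGSM-style perturbation on the Wisconsin breast cancer dataset; the logistic-regression surrogate model, the epsilon value, and the PAVI formula are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of an FGSM-style attack and a PAVI-style metric on the
# Wisconsin breast cancer dataset. The surrogate model, epsilon, and the PAVI
# definition (post-attack error rate) are assumptions inferred from the
# reported figures, not the paper's HAVA implementation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load and standardise the data; hold out a test set to attack.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
clean_acc = model.score(X_test, y_test)

# FGSM: perturb each input in the direction of the sign of the loss gradient.
# For logistic loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w.
w, b = model.coef_.ravel(), model.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))   # predicted P(y = 1)
grad = (p - y_test)[:, None] * w[None, :]     # per-sample input gradient
epsilon = 0.5                                 # attack strength (assumed)
X_adv = X_test + epsilon * np.sign(grad)

attacked_acc = model.score(X_adv, y_test)
pavi = 1.0 - attacked_acc   # PAVI as post-attack error rate (assumption)
print(f"clean accuracy:    {clean_acc:.4f}")
print(f"attacked accuracy: {attacked_acc:.4f}")
print(f"PAVI:              {pavi:.6f}")
```

With the 80/20 split used here, the test set holds 114 samples, which matches the granularity of the reported values (e.g., 44/114 ≈ 0.385965 and 111/114 ≈ 97.36%), lending some support to this reading of PAVI.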

Journal Article Type Article
Acceptance Date Mar 3, 2025
Online Publication Date Mar 10, 2025
Publication Date Mar 10, 2025
Deposit Date Mar 26, 2025
Publicly Available Date Mar 26, 2025
Journal Algorithms
Electronic ISSN 1999-4893
Publisher MDPI
Peer Reviewed Peer Reviewed
Volume 18
Issue 3
Article Number 157
DOI https://doi.org/10.3390/a18030157
Public URL https://uwe-repository.worktribe.com/output/13970176
