Explainable AI in medical imaging: An interpretable and collaborative federated learning model for brain tumor classification
Mastoi, Qurat-ul-ain; Latif, Shahid; Brohi, Sarfraz; Ahmad, Jawad; Alqhatani, Abdulmajeed; Alshehri, Mohammed S.; Al Mazroa, Alanoud; Ullah, Rahmat
Authors
Dr Shahid Latif Shahid.Latif@uwe.ac.uk
Research Fellow, Reminder Project
Sarfraz Brohi
Jawad Ahmad
Abdulmajeed Alqhatani
Mohammed S. Alshehri
Alanoud Al Mazroa
Rahmat Ullah
Abstract
Introduction: A brain tumor is a collection of abnormal cells in the brain that can become life-threatening due to its ability to spread. Prompt and accurate classification of brain tumors is therefore an essential element of healthcare. Magnetic Resonance Imaging (MRI) produces high-quality images of soft tissue and is considered the principal technology for diagnosing brain tumors. Recently, computer vision techniques such as deep learning (DL) have played an important role in brain tumor classification. Most of these approaches rely on traditional centralized models, which face significant challenges from the scarcity of diverse and representative datasets and which make transparent models harder to obtain. This study proposes a collaborative federated learning model (CFLM) with explainable artificial intelligence (XAI) to mitigate these problems using state-of-the-art methods.

Methods: The proposed method addresses a four-class classification problem: identifying glioma, meningioma, no tumor, and pituitary tumors. We integrated GoogLeNet with a federated learning (FL) framework to enable collaborative learning across multiple devices while keeping sensitive information private on each device. The study also focuses on interpretability, making the model transparent using Gradient-weighted Class Activation Mapping (Grad-CAM) and saliency map visualizations.

Results: In total, 10 clients were selected for the proposed model, each training on a decentralized local dataset over 50 communication rounds. The proposed approach achieves 94% classification accuracy. We incorporate Grad-CAM heat maps and saliency maps to offer interpretable, meaningful graphical explanations for healthcare specialists.

Conclusion: This study outlines an efficient and interpretable model for brain tumor classification that integrates FL with the GoogLeNet architecture. The proposed framework has great potential to make brain tumor classification more reliable and transparent for clinical use.
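As a rough illustration of the training setup the abstract describes (GoogLeNet trained collaboratively across 10 clients over 50 communication rounds), the sketch below assumes a FedAvg-style weight average, since the abstract does not name the aggregation algorithm. The random stand-in data, optimizer, learning rate, and local epoch count are illustrative assumptions, not details from the paper; only the client count, round count, backbone, and four-class head follow the record.

```python
# Minimal FedAvg-style sketch of the collaborative FL setup (assumptions noted above).
import copy
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

NUM_CLIENTS, NUM_ROUNDS, NUM_CLASSES = 10, 50, 4  # glioma, meningioma, no tumor, pituitary

def make_model():
    model = torchvision.models.googlenet(weights=None, aux_logits=False, init_weights=True)
    model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)  # four-class head
    return model

def local_update(model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one client's private data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def fed_avg(states):
    """Average client weights; raw MRI scans never leave the clients."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

# Stand-in random tensors so the sketch runs end to end; in practice each
# client holds its own locally stored MRI dataset.
client_loaders = [
    DataLoader(TensorDataset(torch.randn(8, 3, 224, 224),
                             torch.randint(0, NUM_CLASSES, (8,))), batch_size=4)
    for _ in range(NUM_CLIENTS)
]

global_model = make_model()
for rnd in range(NUM_ROUNDS):
    client_states = []
    for loader in client_loaders:
        local = make_model()
        local.load_state_dict(global_model.state_dict())  # broadcast global weights
        client_states.append(local_update(local, loader))
    global_model.load_state_dict(fed_avg(client_states))  # aggregate on the server
```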
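For the interpretability side, the following is a minimal Grad-CAM sketch over the trained global model. The choice of `inception5b` as the target layer, the hook-based implementation, and the preprocessing shape are assumptions for illustration; the paper may compute its heat maps differently.

```python
# Minimal Grad-CAM sketch (target layer and hooks are illustrative assumptions).
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, layer):
    feats, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    model.eval()
    score = model(x)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)       # global-average-pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1))           # weighted feature-map combination
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8) # normalize for heat-map overlay

# Usage for one preprocessed MRI slice `img` of shape (1, 3, 224, 224):
#   cam = grad_cam(global_model, img, target_class=0, layer=global_model.inception5b)
# A saliency map is simpler still: set img.requires_grad_(True), backpropagate the
# class score, and visualize img.grad.
```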
| Journal Article Type | Article |
|---|---|
| Acceptance Date | Jan 23, 2025 |
| Online Publication Date | Feb 27, 2025 |
| Publication Date | Feb 27, 2025 |
| Deposit Date | Mar 20, 2025 |
| Publicly Available Date | Mar 20, 2025 |
| Journal | Frontiers in Oncology |
| Electronic ISSN | 2234-943X |
| Publisher | Frontiers Media |
| Peer Reviewed | Peer Reviewed |
| Volume | 15 |
| Article Number | 1535478 |
| DOI | https://doi.org/10.3389/fonc.2025.1535478 |
| Keywords | federated learning, brain tumors, medical diagnosis, explainable AI, GoogLeNet |
| Public URL | https://uwe-repository.worktribe.com/output/13946262 |
Files
Explainable AI in medical imaging: An interpretable and collaborative federated learning model for brain tumor classification
(18.9 MB)
PDF
Licence
http://creativecommons.org/licenses/by/4.0/
Publisher Licence URL
http://creativecommons.org/licenses/by/4.0/