Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Bridging the Gap: Exploring Interpretability in Deep Learning Models for Brain Tumor Classification from MRI Images

Version 1: Received: 17 February 2024 / Approved: 18 February 2024 / Online: 19 February 2024 (10:21:08 CET)

A peer-reviewed article of this Preprint also exists.

Nhlapho, W.; Atemkeng, M.; Brima, Y.; Ndogmo, J.-C. Bridging the Gap: Exploring Interpretability in Deep Learning Models for Brain Tumor Detection and Diagnosis from MRI Images. Information 2024, 15, 182.

Abstract

The advent of deep learning (DL) has revolutionized medical imaging, offering unprecedented avenues for accurate disease classification and diagnosis. DL models have shown remarkable promise in classifying brain tumors from Magnetic Resonance Imaging (MRI) scans. However, despite their impressive performance, the opaque nature of DL models poses challenges to understanding their decision-making processes, which is particularly crucial in medical contexts where interpretability is essential. This paper explores the intersection of medical image analysis and DL interpretability, aiming to elucidate the decision-making rationale of DL models in brain tumor classification. Leveraging state-of-the-art DL frameworks with transfer learning, we conduct a comprehensive evaluation encompassing both classification accuracy and interpretability. We employ adaptive path-based techniques to understand the underlying decision-making mechanisms of these models, and Grad-CAM and Grad-CAM++ to highlight the critical image regions where tumors are located.
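To make the Grad-CAM step concrete, the sketch below computes a class-activation heatmap for a single MRI slice. This is a minimal sketch, not the authors' implementation: the ResNet-50 backbone, the four-class head, and the `layer4` hook target are assumptions made for illustration; the paper's abstract does not specify these details. Grad-CAM++ differs only in how the channel weights are derived (using higher-order gradient terms), so the same hook machinery applies.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed setup (not specified by the paper): a ResNet-50 fine-tuned
# via transfer learning for a 4-class brain tumor task.
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 4)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional stage (layer4 in torchvision ResNets).
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

def grad_cam(image, class_idx=None):
    """Return a Grad-CAM heatmap in [0, 1] for `image` of shape (1, 3, H, W)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"]    # (1, C, h, w) feature maps
    grads = gradients["value"]     # (1, C, h, w) gradients of the class score
    # Grad-CAM channel weights: global-average-pooled gradients.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().cpu()

# Usage: heatmap = grad_cam(mri_tensor); overlay the heatmap on the MRI
# slice to inspect which regions drove the predicted tumor class.
```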

Keywords

transfer learning; deep learning; brain tumor classification; explainability and interpretability; Grad-CAM++; integrated gradients

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
