Submitted: 27 May 2025
Posted: 28 May 2025
Chapter 1: Introduction
1.1. Background on AI in Healthcare
1.2. Importance of Explainability in Clinical Decision Support Systems
1.2.1. Trust and Adoption
1.2.2. Ethical Considerations
1.3. Objectives of the Chapter
- Define explainable AI and its relevance to healthcare.
- Discuss the various techniques employed to achieve explainability in AI models used in CDSS.
- Identify the challenges associated with implementing XAI in high-stakes medical environments.
- Outline evaluation frameworks for assessing the effectiveness of XAI in supporting clinical decision-making.
1.4. Overview of Key Topics
- Chapter 2 explores the role of explainable AI in healthcare, detailing its necessity for fostering trust and ethical compliance in clinical environments.
- Chapter 3 provides an in-depth examination of various techniques for achieving explainability, including model-agnostic approaches, model-specific methods, and visualization techniques.
- Chapter 4 addresses the challenges faced in implementing XAI, including technical barriers, resistance from healthcare professionals, and ethical considerations.
- Chapter 5 presents evaluation frameworks designed to assess the clarity, consistency, and usability of explanations provided by XAI systems.
- Chapter 6 highlights case studies that demonstrate successful applications of XAI in CDSS, showcasing lessons learned and best practices.
- Chapter 7 discusses future directions for research and development in XAI, emphasizing the integration of emerging technologies and collaborative efforts among stakeholders.
1.5. Conclusion
Chapter 2: The Role of Explainable AI in Healthcare
2.1. Introduction
2.2. Definition of Explainable AI
2.2.1. What is Explainable AI?
2.2.2. Importance of Explainability
- Transparency: Understanding the rationale behind AI decisions fosters transparency and accountability.
- Trust: Clinicians are more likely to adopt AI tools if they can comprehend and trust the underlying processes.
- Improved Decision-Making: Clear explanations can enhance clinical decision-making by providing context and support for AI-generated recommendations.
2.3. Importance of XAI in Clinical Settings
2.3.1. Fostering Trust and Adoption
- Providing Justifications: When AI systems explain their reasoning, healthcare providers can better evaluate the validity of the recommendations.
- Facilitating Human-AI Collaboration: Explainability fosters a collaborative environment where clinicians can work alongside AI tools, integrating them into their decision-making processes.
2.3.2. Ethical Considerations
- Patient Safety: Ensuring that AI recommendations are understandable reduces the risk of misinterpretation and potential harm to patients.
- Accountability: XAI helps clarify who is responsible for decisions made with the assistance of AI, addressing concerns about liability in case of adverse outcomes.
- Bias Mitigation: Understanding how AI models make decisions can help identify and mitigate biases that may adversely affect certain patient groups.
2.4. Regulatory and Compliance Considerations
2.4.1. Regulatory Frameworks
- Health Insurance Portability and Accountability Act (HIPAA): Establishes standards for protecting patient health information in the U.S.
- General Data Protection Regulation (GDPR): Governs data protection and privacy in the European Union, emphasizing the need for transparency and explainability in AI systems.
2.4.2. How XAI Supports Compliance
- Enhancing Transparency: By providing clear explanations of AI processes, organizations can demonstrate compliance with regulatory requirements for transparency.
- Supporting Risk Management: Understanding the decision-making processes of AI systems can help institutions manage risks associated with AI deployment.
2.5. The Impact of XAI on Patient Outcomes
2.5.1. Improved Clinical Decision-Making
- Enhanced Diagnostic Accuracy: By providing explanations for AI-generated diagnoses, clinicians can make more informed decisions, reducing the likelihood of misdiagnosis.
- Personalized Treatment Plans: Explainable AI can help tailor treatment recommendations to individual patient profiles, improving overall patient care.
2.5.2. Patient Engagement
- Informed Consent: Patients who understand the rationale behind AI-driven recommendations can give more meaningful informed consent and are more likely to engage in their treatment plans.
- Health Literacy: Providing explanations in understandable terms can enhance patients' health literacy, empowering them to make informed choices about their care.
2.6. Conclusion
Chapter 3: Techniques for Explainable Artificial Intelligence in Clinical Decision Support
3.1. Introduction
3.2. Model-Agnostic Approaches
3.2.1. LIME (Local Interpretable Model-agnostic Explanations)
3.2.1.1. Overview
3.2.1.2. Process
- Perturbation: LIME generates perturbations of the input data by altering features slightly.
- Model Fitting: It trains a simpler, interpretable model (e.g., linear regression) using the perturbed data and the corresponding predictions from the complex model.
- Explanation Generation: The coefficients of the simpler model provide insights into which features were most influential in the prediction.
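To make the workflow above concrete, the following minimal sketch applies the open-source lime package to a synthetic tabular classifier. The feature names, data, and model are illustrative assumptions rather than material from this chapter.

```python
# Illustrative sketch only: assumes the `lime` and scikit-learn packages and
# synthetic data standing in for patient records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["age", "heart_rate", "wbc_count", "temperature"]  # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 1] + X[:, 2] > 0).astype(int)  # synthetic "disease" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs the instance, queries the black-box model, and fits a local
# linear surrogate whose weights serve as the explanation.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["negative", "positive"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, local weight) pairs
```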
3.2.1.3. Applications in Healthcare
- Disease Diagnosis: LIME can help clinicians understand why a model predicts a certain disease for a patient based on their symptoms and medical history.
- Risk Assessment: By elucidating the factors contributing to risk scores, LIME assists healthcare professionals in making informed decisions.
3.2.1.4. Limitations
- Locality Dependence: LIME provides explanations that are only valid in the local vicinity of the instance, potentially leading to misleading interpretations if the global model behavior is not considered.
- Computationally Intensive: Generating perturbations and fitting simpler models can be computationally expensive, especially with large datasets.
3.2.2. SHAP (Shapley Additive Explanations)
3.2.2.1. Overview
3.2.2.2. Process
- Feature Contribution Calculation: SHAP computes each feature's contribution by evaluating the change in the expected model output when that feature is included versus excluded, averaged over all possible feature coalitions.
- Additive Feature Attribution: The contributions are additive, meaning the model output can be expressed as the sum of contributions from individual features.
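The additive property lends itself to a direct check. The sketch below is an assumed example using the shap package's TreeExplainer on a synthetic gradient-boosting model; it is not a prescribed clinical pipeline.

```python
# Illustrative sketch: assumes the `shap` package and a tree-based model
# trained on synthetic data standing in for patient features.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 3] > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]      # per-feature contributions
baseline = np.ravel(explainer.expected_value)[0]     # model's average output

# Additivity: baseline + sum of contributions recovers the model output for
# this instance (in the model's margin/log-odds space for this explainer).
print(contributions, baseline + contributions.sum())
```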
3.2.2.3. Applications in Healthcare
- Treatment Recommendations: SHAP can explain why a certain treatment is recommended based on a patient’s unique characteristics, allowing for personalized care.
- Outcome Predictions: It allows clinicians to understand the factors influencing predictions of patient outcomes, facilitating discussions with patients about their care.
3.2.2.4. Limitations
- Complexity: SHAP can be complex to implement and may require significant computational resources, particularly with large datasets or complex models.
- Interpretation: While SHAP values are theoretically sound, clinicians may find them challenging to interpret without proper training.
3.3. Model-Specific Techniques
3.3.1. Decision Trees and Rule-Based Models
3.3.1.1. Overview
3.3.1.2. Explanation Mechanism
- Decision Paths: Each prediction is based on a series of decisions made at nodes in the tree, allowing for clear tracing of how a conclusion was reached.
- Rule Extraction: Rule-based models can generate explicit rules that describe the conditions leading to specific outcomes, making them transparent and easy to understand.
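A minimal sketch of decision-path and rule extraction with scikit-learn follows; the features, labels, and tree depth are synthetic placeholders chosen for illustration.

```python
# Illustrative sketch: rule extraction from a shallow decision tree with
# scikit-learn; the feature names are hypothetical stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["systolic_bp", "cholesterol", "age"]  # hypothetical
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Each root-to-leaf path is an explicit, human-readable decision rule.
print(export_text(tree, feature_names=feature_names))
```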
3.3.1.3. Applications in Healthcare
- Diagnostic Support: Clinicians can follow the decision path to understand how a diagnosis was reached, enabling better trust in the model's recommendations.
- Treatment Guidelines: Rule-based models can provide explicit treatment guidelines based on patient characteristics, improving adherence to best practices.
3.3.1.4. Limitations
- Overfitting: Decision trees can easily overfit to the training data, leading to models that are not generalizable.
- Limited Complexity: More complex relationships in data may not be captured effectively by simple decision trees or rules.
3.3.2. Interpretable Neural Networks
3.3.2.1. Overview
3.3.2.2. Techniques
- Attention Mechanisms: These mechanisms allow the model to focus on specific parts of the input data, providing insights into which features are most important for predictions.
- Layer-wise Relevance Propagation (LRP): LRP attributes the prediction of a neural network back to the input features, helping to identify which features contributed most to a particular decision.
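The sketch below illustrates the core idea behind attention, scaled dot-product attention weights over a handful of hypothetical clinical inputs, in plain NumPy; it is a didactic toy rather than a clinical model, and the tokens and dimensions are assumptions.

```python
# Didactic sketch of scaled dot-product attention weights; inputs are assumed.
import numpy as np

def attention_weights(query, keys):
    """Softmax-normalised relevance of each input element to the query."""
    scores = keys @ query / np.sqrt(query.shape[0])
    scores -= scores.max()                       # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights

rng = np.random.default_rng(0)
tokens = ["fever", "cough", "chest pain", "normal ECG"]  # hypothetical inputs
keys = rng.normal(size=(4, 8))     # one key vector per input element
query = rng.normal(size=8)         # e.g. a diagnosis-specific query vector

for token, w in zip(tokens, attention_weights(query, keys)):
    print(f"{token:>12s}: {w:.2f}")  # higher weight = more model focus
```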
3.3.2.3. Applications in Healthcare
- Medical Imaging: In applications like radiology, attention mechanisms can highlight areas of interest in medical images, aiding radiologists in their assessments.
- Clinical Text Analysis: Interpretable neural networks can analyze clinical notes, identifying key terms or phrases that influence predictions.
3.3.2.4. Limitations
- Complexity: Despite attempts to enhance interpretability, neural networks can still be viewed as "black boxes," making it challenging to fully understand their decision-making processes.
- Resource Intensive: Training and interpreting complex neural networks require significant computational resources and expertise.
3.4. Visualization Techniques
3.4.1. Feature Importance Visualization
3.4.1.1. Overview
3.4.1.2. Implementation
- Bar Charts: Simple bar charts can depict the importance of each feature, allowing clinicians to quickly grasp which factors are most influential.
- Heatmaps: Heatmaps can visualize feature interactions and their contributions to predictions, providing deeper insights into data relationships.
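As an assumed example, the following sketch renders a feature-importance bar chart with matplotlib from a synthetic random-forest model; the feature names are illustrative stand-ins for clinical variables.

```python
# Illustrative sketch: global feature-importance bar chart with matplotlib;
# the model, data, and feature names are synthetic assumptions.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "hba1c", "smoking", "systolic_bp"]
X = rng.normal(size=(400, 5))
y = (X[:, 2] + 0.5 * X[:, 4] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(model.feature_importances_)

plt.barh(np.array(feature_names)[order], model.feature_importances_[order])
plt.xlabel("Impurity-based importance")
plt.title("Which factors drive the model's risk predictions?")
plt.tight_layout()
plt.show()
```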
3.4.1.3. Applications in Healthcare
- Patient Risk Assessment: Visualizing feature importance helps identify critical risk factors for patients, facilitating targeted interventions.
- Treatment Decision Support: Clinicians can understand the rationale behind treatment recommendations by viewing the importance of various patient attributes.
3.4.1.4. Limitations
- Over-Simplification: Simplified visualizations may overlook complex interactions between features, leading to an incomplete understanding.
- Potential Misinterpretation: Clinicians may misinterpret feature importance if not adequately trained in data visualization principles.
3.4.2. Saliency Maps in Medical Imaging
3.4.2.1. Overview
3.4.2.2. Implementation
- Gradient-Based Methods: These methods calculate gradients of the prediction score with respect to input pixels, indicating which areas of the image contribute most to the decision.
- Class Activation Mapping (CAM): CAM generates heatmaps that highlight the areas in the image that are most important for making a specific prediction.
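A minimal sketch of a vanilla gradient saliency map in PyTorch is shown below; the toy CNN and random input stand in for a real imaging model and are assumptions made purely for illustration.

```python
# Illustrative sketch of a vanilla gradient saliency map in PyTorch; the CNN
# and input tensor are toy placeholders, not a validated clinical model.
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for an imaging CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.randn(1, 1, 64, 64, requires_grad=True)  # e.g. an X-ray patch
score = model(image)[0, 1]                  # logit of the "abnormal" class
score.backward()

# Pixel-wise gradient magnitude: how sensitive the class score is to each
# pixel. Bright regions indicate the most influential image areas.
saliency = image.grad.abs().squeeze()       # shape (64, 64)
print(saliency.shape, float(saliency.max()))
```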
3.4.2.3. Applications in Healthcare
- Radiology: Saliency maps can help radiologists focus on specific areas of an image, enhancing diagnostic accuracy.
- Pathology: In histopathological images, saliency maps can indicate regions of interest, aiding pathologists in identifying disease markers.
3.4.2.4. Limitations
- Sensitivity to Noise: Saliency maps can be sensitive to noise in the images, potentially leading to misleading visualizations.
- Interpretation Challenges: Clinicians may need additional training to accurately interpret saliency maps and understand their implications for diagnosis.
3.5. Case-Based Reasoning
3.5.1. Overview
3.5.2. Process
- Case Retrieval: Relevant cases similar to the current patient are retrieved from a database.
- Comparison and Adaptation: The retrieved cases are compared to the current case, allowing clinicians to adapt the previous solutions to fit the new context.
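The retrieval step can be sketched as a k-nearest-neighbour search over a case base, as below; the feature vectors, outcomes, and distance metric are assumptions made for the example.

```python
# Illustrative sketch of case retrieval via k-nearest neighbours over a
# synthetic case base; all data here are placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
case_base = rng.normal(size=(1000, 6))    # historical patient feature vectors
outcomes = rng.integers(0, 2, size=1000)  # recorded outcomes for those cases

scaler = StandardScaler().fit(case_base)
index = NearestNeighbors(n_neighbors=5).fit(scaler.transform(case_base))

new_patient = rng.normal(size=(1, 6))
distances, neighbour_ids = index.kneighbors(scaler.transform(new_patient))

# The retrieved cases and their outcomes are the raw material that the
# comparison-and-adaptation step (and the clinician) would work from.
print(neighbour_ids[0], outcomes[neighbour_ids[0]], distances[0])
```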
3.5.3. Applications in Healthcare
- Diagnostic Support: Clinicians can reference similar past cases to guide their diagnoses and treatment decisions, enhancing the rationale behind their choices.
- Patient Management: Case-based reasoning can inform long-term management strategies by drawing on successful outcomes from similar patients.
3.5.4. Limitations
- Data Availability: The effectiveness of case-based reasoning depends on the availability and quality of historical case data.
- Potential Bias: Reliance on past cases may introduce bias if historical data does not represent the current patient population accurately.
3.6. Conclusion
Chapter 4: Techniques for Explainable Artificial Intelligence in Clinical Decision Support
4.1. Introduction
4.2. Model-Agnostic Approaches
4.2.1. LIME (Local Interpretable Model-agnostic Explanations)
4.2.1.1. Mechanism
4.2.1.2. Applications in Healthcare
- Predictive Models: LIME can be used to explain predictions made by models assessing patient risks, such as predicting the probability of hospital readmission.
- Diagnostic Support: It helps clinicians understand the basis for AI-driven diagnostic suggestions by highlighting the most influential features.
4.2.1.3. Benefits and Limitations
- Benefits: Provides clear, instance-specific explanations; applicable to various model types.
- Limitations: The quality of explanations depends on the choice of perturbations and may not generalize well beyond local instances.
4.2.2. SHAP (Shapley Additive Explanations)
4.2.2.1. Mechanism
4.2.2.2. Applications in Healthcare
- Feature Importance: SHAP can elucidate which factors most significantly impact patient outcomes, aiding in risk stratification.
- Treatment Decisions: It assists clinicians in understanding the rationale behind treatment recommendations based on multiple patient attributes.
4.2.2.3. Benefits and Limitations
- Benefits: Provides consistent and theoretically grounded explanations; applicable to any model.
- Limitations: Computationally intensive, particularly for large datasets and complex models, which may hinder real-time applications.
4.3. Model-Specific Techniques
4.3.1. Decision Trees and Rule-Based Models
4.3.1.1. Mechanism
4.3.1.2. Applications in Healthcare
- Clinical Pathways: Decision trees can guide clinicians through treatment paths based on patient characteristics and clinical guidelines.
- Risk Assessment: Simple rule-based models can stratify patient risk based on easily understood criteria.
4.3.1.3. Benefits and Limitations
- Benefits: Highly interpretable; easy for clinicians to understand and communicate to patients.
- Limitations: Prone to overfitting; may not capture complex relationships as effectively as ensemble methods.
4.3.2. Interpretable Neural Networks
4.3.2.1. Mechanism
4.3.2.2. Applications in Healthcare
- Medical Imaging: Attention mechanisms in convolutional neural networks (CNNs) can identify which parts of an image contributed most to the model's diagnosis.
- Natural Language Processing: In text-based applications, interpretable architectures can clarify which portions of clinical notes influenced predictions.
4.3.2.3. Benefits and Limitations
- Benefits: Can model complex relationships while providing interpretable outputs; useful in deep learning contexts.
- Limitations: Still less interpretable than simpler models; the complexity of neural networks can obscure understanding.
4.4. Visualization Techniques
4.4.1. Feature Importance Visualization
4.4.1.1. Mechanism
4.4.1.2. Applications in Healthcare
- Patient Risk Scores: Visualizations can highlight the most impactful factors in predicting patient outcomes, aiding in risk assessment discussions.
- Diagnostic Tools: They can enhance the interpretability of AI-driven diagnostic support by visually representing significant symptoms or test results.
4.4.1.3. Benefits and Limitations
- Benefits: Intuitive visual representations enhance understanding; facilitate discussions with healthcare teams.
- Limitations: May oversimplify complex interactions; does not provide detailed insights into individual predictions.
4.4.2. Saliency Maps in Medical Imaging
4.4.2.1. Mechanism
4.4.2.2. Applications in Healthcare
- Radiology: Saliency maps can guide radiologists in interpreting AI-assisted diagnoses by indicating areas of concern in imaging studies.
- Pathology: In histopathology images, saliency maps can demonstrate which cellular features drove the model's classification.
4.4.2.3. Benefits and Limitations
- Benefits: Visualizes model reasoning directly on images; enhances trust in AI-assisted diagnostics.
- Limitations: May not always align with clinical reasoning; potential for misinterpretation if not properly contextualized.
4.5. Case-Based Reasoning
4.5.1. Mechanism
4.5.2. Applications in Healthcare
- Personalized Treatment: Case-based reasoning can support personalized treatment recommendations by referencing similar patient histories.
- Diagnosis Support: It helps clinicians understand the rationale behind AI-driven diagnoses by relating them to past cases.
4.5.3. Benefits and Limitations
- Benefits: Provides contextually rich explanations; leverages real-world clinical experience.
- Limitations: Requires a substantial database of historical cases; may not always find a relevant case for unique situations.
4.6. Conclusion
Chapter 5: Evaluation Frameworks for Explainable AI in Healthcare
5.1. Introduction
5.2. Criteria for Evaluating Explainability
5.2.1. Clarity and Comprehensibility
- Simplicity: Explanations should avoid jargon and technical language that may confuse clinicians.
- Relevance: Explanations must focus on the most pertinent features influencing the AI's decision, allowing users to grasp the rationale behind recommendations quickly.
- Visual Aids: The use of visual representations can enhance clarity, such as graphs, charts, or diagrams that illustrate decision-making processes.
5.2.2. Consistency and Reliability
- Stability: Explanations should remain stable when similar inputs are processed, ensuring that minor variations in data do not lead to drastically different outputs.
- Dependability: Regular evaluations should be conducted to verify that the XAI system consistently produces reliable explanations over time.
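One way to operationalise stability is to compare attributions for an instance and a slightly perturbed copy of it. The sketch below uses a toy attribution function as a placeholder for whatever explainer (SHAP, LIME, etc.) a given system employs; the data and perturbation scale are assumptions.

```python
# Illustrative sketch of an explanation-stability check; the attribution
# function is a placeholder for a real explainer such as SHAP or LIME.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

def attributions(x):
    """Placeholder attribution; swap in SHAP/LIME values in a real system."""
    return model.feature_importances_ * x

x = X[0]
x_perturbed = x + rng.normal(scale=0.01, size=x.shape)
rho, _ = spearmanr(attributions(x), attributions(x_perturbed))
print(f"Rank agreement between explanations: {rho:.3f}")  # near 1.0 = stable
```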
5.2.3. Actionability of Explanations
- Practical Recommendations: Explanations must not only describe the reasoning behind a decision but also guide clinicians on actionable next steps.
- Integration with Clinical Workflow: Explanations should seamlessly fit into existing clinical workflows, enhancing rather than disrupting the decision-making process.
5.3. User-Centered Evaluation Methods
5.3.1. Surveys and Interviews
- Structured Surveys: Surveys can be designed to gauge user satisfaction with the clarity, relevance, and usability of explanations. Questions may focus on perceived usefulness, trust in the AI system, and overall experience.
- In-depth Interviews: Conducting interviews allows for deeper insights into user experiences, highlighting specific challenges and suggestions for improvement.
5.3.2. Usability Testing in Clinical Environments
- Scenario-Based Testing: Participants can be asked to engage with the XAI system using realistic clinical scenarios, allowing evaluators to observe interactions and gather qualitative data.
- Task Completion Rates: Measuring how effectively healthcare professionals can complete tasks using the XAI system can help identify usability issues.
5.3.3. Focus Groups
- Diverse Perspectives: Engaging a range of clinicians from different specialties can provide a comprehensive understanding of the challenges and benefits associated with XAI.
- Collaborative Feedback: Discussions can generate new ideas for improving explainability and integrating XAI into clinical practice.
5.4. Performance Metrics for XAI Systems
5.4.1. Assessment of Clinical Outcomes
- Diagnostic Accuracy: Measuring changes in diagnostic accuracy after implementing XAI can provide insights into its influence on clinical decision-making.
- Patient Outcomes: Tracking patient outcomes, such as treatment success rates or recovery times, can help assess the real-world impact of AI-driven recommendations.
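A minimal sketch of such a comparison is given below, computing accuracy for clinician decisions made with and without XAI support; the decision arrays are simulated placeholders, and a real evaluation would use study data and appropriate statistical tests (e.g., a paired test on matched cases).

```python
# Illustrative sketch: comparing diagnostic accuracy with and without XAI
# support; the decisions are simulated placeholders, not study data.
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 2, size=200)  # confirmed diagnoses

# Simulate clinicians who are correct 80% of the time unaided, 88% with XAI.
unaided = np.where(rng.random(200) < 0.80, ground_truth, 1 - ground_truth)
with_xai = np.where(rng.random(200) < 0.88, ground_truth, 1 - ground_truth)

print("Accuracy without XAI:", accuracy_score(ground_truth, unaided))
print("Accuracy with XAI:   ", accuracy_score(ground_truth, with_xai))
```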
5.4.2. Impact on Decision-Making Processes
- Decision Confidence: Surveys can assess whether XAI improves clinicians’ confidence in their decisions.
- Time to Decision: Measuring how much time clinicians spend making decisions with and without XAI can help evaluate its efficiency.
5.4.3. User Adoption Rates
- Frequency of Use: Tracking how often clinicians engage with the XAI system can provide insights into its perceived value.
- Training and Support Needs: Evaluating the level of training and support required for effective use can help identify areas for improvement.
5.5. Case Study Applications of Evaluation Frameworks
5.5.1. Successful Implementations of XAI
- Predictive Analytics in Oncology: A study evaluating a predictive analytics tool for cancer treatment that utilized user-centered evaluation methods to refine explanations.
- Cardiovascular Risk Assessment: A case where performance metrics were used to measure the impact of an XAI system on clinician decision-making and patient outcomes.
5.5.2. Lessons Learned from Real-World Applications
- Best Practices for Evaluation: Identifying effective strategies for gathering user feedback and assessing clinical impact.
- Challenges Encountered: Documenting challenges faced during implementation and evaluation can guide future efforts in XAI development.
5.6. Conclusion
Chapter 6: Case Studies and Applications of Explainable AI in Clinical Decision Support
6.1. Introduction
6.2. Case Study 1: Predictive Analytics for Disease Diagnosis
6.2.1. Background
6.2.2. Implementation of XAI
- Data Sources: Electronic health records (EHRs) were used to train the model, incorporating clinical notes, lab results, and vital signs.
- Explainability Techniques: SHAP values were calculated to determine the contribution of each feature to the model's predictions, allowing clinicians to understand which factors increased a patient's risk of sepsis.
6.2.3. Results
6.2.4. Implications
6.3. Case Study 2: Treatment Recommendation Systems
6.3.1. Background
6.3.2. Implementation of XAI
- Data Sources: Patient treatment records, genomic data, and clinical outcomes were integrated to train the model.
- Explainability Techniques: LIME was used to generate explanations for individual treatment recommendations, allowing oncologists to see which features influenced the model's suggestions for specific patients.
6.3.3. Results
6.3.4. Implications
6.4. Case Study 3: Diagnostic Imaging with Explainable AI
6.4.1. Background
6.4.2. Implementation of XAI
- Data Sources: A large dataset of labeled chest X-rays was used to train the model, focusing on both normal and abnormal findings.
- Explainability Techniques: Saliency maps highlighted regions of interest in the X-rays, allowing radiologists to see the model's focus areas when making predictions.
6.4.3. Results
6.4.4. Implications
6.5. Case Study 4: Risk Assessment in Cardiology
6.5.1. Background
6.5.2. Implementation of XAI
- Data Sources: Patient data included EHRs, lifestyle questionnaires, and lab results, providing a comprehensive view of individual risk profiles.
- Explainability Techniques: SHAP values allowed clinicians to see the contributions of various factors to each patient’s risk score, enhancing understanding and facilitating discussions with patients.
6.5.3. Results
6.5.4. Implications
6.6. Lessons Learned from Case Studies
6.6.1. Importance of User Feedback
6.6.2. Balancing Complexity and Interpretability
6.6.3. Training and Education
6.7. Conclusion
Chapter 7: Future Directions for Explainable Artificial Intelligence in Clinical Decision Support
7.1. Introduction
7.2. Advancements in XAI Techniques
7.2.1. Enhanced Model Interpretability
- Research Focus: Exploring architectures that balance performance with interpretability, such as attention mechanisms in neural networks, can yield models that are easier to explain.
- Example: Developing transparent neural networks that allow clinicians to see which features influence predictions, thus enhancing trust and understanding.
7.2.2. Integration of Multimodal Data
- Approach: Future systems should leverage multimodal data to provide comprehensive explanations that consider various factors influencing patient outcomes.
- Impact: By synthesizing information from diverse sources, XAI can present more holistic insights, aiding clinical decision-making.
7.2.3. Real-Time Explainability
- Strategy: Developing XAI frameworks that provide on-the-fly explanations for AI-driven recommendations can enhance clinician engagement and trust.
- Example: Implementing systems that generate explanations alongside predictions as a patient’s data is processed can help clinicians understand the rationale behind recommendations promptly.
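One possible shape for this pattern is a single service call that returns the prediction and its attribution together, as sketched below; the model, feature names, and use of a shap-based explainer are assumptions rather than a reference implementation.

```python
# Illustrative sketch of the "explanation alongside prediction" pattern;
# model, features, and explainer choice are assumptions for the example.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["age", "creatinine", "lactate", "heart_rate"]  # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 2] + X[:, 3] > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)   # built once, reused for each request

def predict_with_explanation(record: np.ndarray) -> dict:
    """Return the risk estimate and per-feature contributions together."""
    risk = float(model.predict_proba(record.reshape(1, -1))[0, 1])
    contributions = explainer.shap_values(record.reshape(1, -1))[0]
    return {
        "risk": risk,
        "explanation": dict(zip(feature_names, np.round(contributions, 3))),
    }

print(predict_with_explanation(X[0]))
```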
7.3. Integration with Emerging Technologies
7.3.1. Blockchain for Data Integrity and Trust
- Potential Applications: Leveraging blockchain can ensure data provenance, allowing clinicians to trace how patient data was used in AI models, thus reinforcing accountability.
- Future Research: Examining how blockchain can be integrated with XAI to provide transparent audit trails of AI decision-making processes.
7.3.2. Internet of Things (IoT) and Remote Monitoring
- Opportunity: Integrating data from wearables and remote monitoring devices can enhance the contextual understanding in AI models, leading to more nuanced explanations.
- Future Directions: Research should explore how XAI can interpret and explain insights derived from IoT data to inform clinical decisions, particularly in chronic disease management.
7.3.3. Natural Language Processing (NLP) for Enhanced Communication
- Approach: Developing NLP-driven interfaces that translate complex model outputs into understandable language can improve clinician engagement with AI recommendations.
- Example: Implementing chatbots that provide explanations of AI decisions in layman's terms, enhancing patient understanding and involvement in their care.
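As a deliberately simple illustration, the sketch below turns numeric feature attributions into a one-sentence plain-language summary using templates; the wording, feature names, and values are assumptions, and any text generated for patients or clinicians would require careful clinical validation.

```python
# Illustrative sketch: template-based translation of attributions into plain
# language; values and wording are assumed for the example.
def explain_in_plain_language(attributions: dict, top_k: int = 2) -> str:
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    phrases = [
        f"{name.replace('_', ' ')} {'increased' if value > 0 else 'decreased'} the estimated risk"
        for name, value in ranked[:top_k]
    ]
    return "The main factors were that " + " and ".join(phrases) + "."

attributions = {"lactate": 0.21, "heart_rate": 0.09, "age": -0.04}  # hypothetical
print(explain_in_plain_language(attributions))
```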
7.4. Collaborative Efforts Among Stakeholders
7.4.1. Multidisciplinary Research Collaborations
- Strategy: Establishing multidisciplinary teams can foster innovative approaches to XAI, addressing both technical and ethical challenges comprehensively.
- Impact: Such collaborations can ensure that AI systems are designed with a holistic understanding of their implications in healthcare settings.
7.4.2. Engaging Healthcare Professionals
- Methodology: Conducting participatory design workshops can help gather insights from clinicians about their needs and preferences regarding explanations.
- Outcome: Engaging professionals in the development process can lead to more relevant and practical XAI solutions that align with clinical workflows.
7.4.3. Patient Involvement and Education
- Approach: Developing educational programs that explain how AI works and the importance of explainability can empower patients to engage in their healthcare actively.
- Future Directions: Research should focus on creating accessible resources that demystify AI for patients, enhancing their understanding and participation in the decision-making process.
7.5. Regulatory and Ethical Considerations
7.5.1. Establishing Guidelines for Explainability
- Focus Areas: Guidelines should address criteria for explainability, accountability, and transparency, ensuring that AI systems meet ethical standards.
- Collaboration with Policymakers: Engaging with policymakers to develop regulations that support the safe and ethical deployment of XAI in clinical settings is crucial.
7.5.2. Addressing Bias and Fairness
- Strategies: Implementing robust auditing frameworks to evaluate AI systems for fairness and bias can enhance the trustworthiness of XAI.
- Outcome: By addressing these issues proactively, stakeholders can help ensure that XAI systems do not perpetuate existing disparities in healthcare.
7.6. Conclusion
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).