Preprint
Review

This version is not peer-reviewed.

Overcoming Barriers to Artificial Intelligence Adoption in Healthcare: A Review of Applications, Challenges and Emerging Solutions

Submitted: 03 March 2026
Posted: 04 March 2026


Abstract
Artificial Intelligence (AI) refers to systems designed to mimic human intelligence, enabling machines to perform tasks that typically require reasoning, learning, and decision-making. Today, AI is integrated into everyday life through technologies such as virtual assistants (e.g., Siri, Alexa, and Google Assistant), autonomous transportation systems, aviation technologies, gaming, and digital platforms. While AI has transformed multiple industries, healthcare has emerged as one of its most impactful domains, significantly enhancing medical imaging, disease diagnosis, treatment planning, and patient management. However, the clinical adoption of AI has been constrained by several persistent barriers, including limited computational resources, scarcity of high-quality annotated datasets, lack of interpretability, privacy concerns, regulatory ambiguity, and integration challenges within existing healthcare infrastructures. The emergence and rapid advancement of modern Deep Learning (DL) techniques helped address many of these challenges by enabling AI systems to analyze complex and high-dimensional healthcare data more effectively. Consequently, AI is increasingly leveraged to overcome traditional healthcare system constraints, improving diagnostic precision, workflow efficiency, and patient outcomes. Despite significant progress, unresolved technical, ethical, regulatory, and organizational challenges necessitate a comprehensive evaluation of AI’s role in healthcare. This review discusses the evolution and applications of AI in healthcare, examines the limitations of traditional healthcare systems, explores how AI addresses these challenges, identifies current limitations of AI-based approaches, and presents potential solutions to guide future advancements in AI applications within healthcare systems.

1. Introduction

Artificial Intelligence (AI) has its roots in the early work of Alan Turing in 1950 [1,2,3]. Since then, AI has evolved alongside advances in computing power, enabling machines to learn from data and perform complex tasks. In the context of healthcare, AI offers a transformative extension of two fundamental principles in clinical practice: knowledge and experience. Traditionally, physicians acquire expertise gradually through years of patient care, education, and exposure to a broad spectrum of clinical scenarios. AI systems can accelerate this process by analyzing large volumes of medical data, such as clinical notes, imaging studies, lab reports, and evidence-based literature, offering a scalable approach to accumulating and applying clinical knowledge and experience [4,5,6].
Healthcare systems worldwide are confronting significant and persistent challenges. These include limited access to quality medical care, escalating treatment costs, inefficient use of resources, and an expanding elderly population that requires continuous support. Events like the COVID-19 pandemic [7,8] have made these issues worse. During such times, many hospitals struggled with shortages of protective equipment, limited or inaccurate testing, overworked healthcare professionals, and poor communication between healthcare centers, which delayed proper care and response. In addition to these issues, healthcare systems face numerous other challenges that strain both resources and personnel. The growing volume of medical data, fragmented health records, and limited interoperability between healthcare systems hinder efficient decision-making. Many regions continue to experience shortages of skilled professionals and unequal access to quality care, particularly in rural and low-resource settings. Diagnostic inaccuracies, administrative inefficiencies, and concerns over data privacy further complicate service delivery. Together, these factors highlight the urgent need for innovative solutions that can enhance efficiency, accuracy, and accessibility within the healthcare ecosystem.
To address these growing challenges, AI is being increasingly adopted across the healthcare sector for its potential to improve efficiency, accuracy, and decision-making. AI systems analyze vast and diverse medical datasets, including clinical histories, diagnostic reports, imaging modalities, and evidence-based studies, to extract meaningful insights. By identifying complex patterns within this data, AI supports healthcare professionals in making timely and informed decisions. While healthcare professionals accumulate expertise over years of practice, AI can learn from millions of cases in a fraction of that time, enabling faster knowledge translation and improved patient outcomes. For example, a radiologist might review thousands of scans over a career, whereas AI can review millions in a short time and keep improving by learning from patterns and results. AI encompasses various domains such as Machine Learning (ML), Artificial Neural Networks (ANNs), Deep Learning (DL), and Convolutional Neural Networks (CNNs), each contributing uniquely to medical advancements. With the help of ML and DL, AI can detect critical features in medical images and adapt to new information, enabling faster and more accurate diagnoses. Technologies like image processing, Computer Vision (CV), and ANNs play a vital role in this progress. Table 1 summarizes the different AI techniques applied for medical diagnosis. These tools enable machines to interpret medical data, learn from experience, and produce reliable diagnostic results; DL models such as CNNs are particularly effective at analyzing intricate image features for disease detection.
Despite these promising developments, the widespread adoption of AI in healthcare still faces numerous barriers. Key challenges include data privacy and security concerns, lack of interoperability, model and data bias, limited availability of high-quality and annotated datasets, ethical and legal issues, and resistance to adopting new technologies in clinical practice. To overcome these obstacles, researchers and policymakers are exploring several solutions, such as establishing standardized data-sharing frameworks using Federated Learning (FL), ensuring algorithmic transparency, employing blockchain for data security, developing explainable AI models, and implementing robust governance policies for data protection. Furthermore, collaborative efforts among healthcare institutions, technology developers, and regulatory bodies are essential to build trust, enhance system integration, and promote the responsible use of AI in healthcare. AI serves as a powerful complement to clinical expertise, not a replacement. It enhances decision-making by providing speed, consistency, and insights derived from large-scale data. As healthcare becomes increasingly digital, AI is expected to play a vital role in diagnosis, prognosis, treatment planning, and personalized medicine, moving the field closer to data-driven, high-quality healthcare.
Therefore, this review aims to provide a comprehensive overview of how AI is transforming healthcare, the key challenges hindering its adoption, and the emerging strategies designed to address these barriers. By highlighting both limitations and potential solutions, the review seeks to present a balanced perspective on achieving effective, ethical, and sustainable AI-driven healthcare systems.

2. Research Methodology

To conduct this review, a systematic approach is adopted to identify, select, and analyze relevant literature from multiple scholarly databases. The initial phase involves defining the scope of the review, which encompasses several key aspects, including the application of AI in healthcare, medical imaging and diagnostics, associated challenges, proposed solutions, emerging strategies, and future implications. Subsequently, a structured methodology is followed to screen, evaluate, and select the most relevant studies based on predefined selection criteria.

2.1. Scope of the Review

This review aims to provide a comprehensive understanding of the integration of AI into healthcare systems by addressing several key research questions. The central questions explored in this review are as follows:
Q1. What is the current role and significance of AI in modern healthcare systems?
Q2. How are different AI techniques applied across healthcare applications?
Q3. What are the major challenges and limitations of AI-based healthcare systems?
Q4. What strategies and solutions are emerging to overcome these challenges?
Q5. What are the currently operational AI-based products and their applications in healthcare?
Q6. How does this review differ from existing literature?
Q7. What future directions can accelerate AI-driven healthcare innovations?

2.2. Article Selection Criteria

In this comprehensive review, the aim is to identify, evaluate, and analyze various studies focused on the integration of AI in healthcare, including its applications, techniques, challenges, solutions, and emerging trends. Figure 1 illustrates the keywords and search strategies used to retrieve the papers included in this review.
The search began with core terms such as “AI in healthcare,” “AI-based diagnostics,” and “AI-driven medical imaging.” These were then combined with additional domain-specific keywords to create multiple search strings, as shown below:
  • “AI” + “medical imaging” + “diagnostics”
  • “AI” + “disease prediction” + “deep learning”
  • “AI” + “healthcare applications” + “clinical decision support”
  • “AI” + “challenges” + “solutions”
  • “AI” + “electronic health records” + “NLP”
  • “XAI” + “blockchain” + “secure healthcare systems”
  • “XAI” + “federated learning” + “privacy-preserving healthcare”
  • “AI” + “emerging trends” + “future directions”
The literature was collected from credible scholarly databases, including Springer, Elsevier, Wiley, IEEE Xplore, ScienceDirect, PubMed, and ACM Digital Library.
The inclusion criteria focused on selecting studies published between 2016 and 2026 that specifically address the use of AI techniques in healthcare applications. The final collection consists of journal papers, conference proceedings, review articles, and book chapters directly relevant to the topic.
Several articles were excluded for the following reasons:
  • Duplicates found across multiple databases
  • Studies unrelated to healthcare or focusing purely on medical science without AI integration
  • Non-English publications
  • Papers with only abstracts available
  • Articles discussing technologies outside the scope of AI-based healthcare systems
Additionally, the year-wise distribution of the selected research papers on AI in healthcare has been presented through a graphical representation in Figure 2.

3. History of AI in Healthcare

The integration of AI into healthcare began in the 1970s [9], with the introduction of MYCIN [10], one of the earliest expert systems. MYCIN utilized a rule-based knowledge base to assist in identifying bacteria responsible for infectious diseases. However, such systems were limited by their dependence on manually crafted rules. In the 1980s, ML [11] approaches were introduced to address this limitation. Unlike rule-based systems, ML algorithms learn patterns and relationships directly from historical data, allowing for more adaptive and data-driven clinical decision support. While traditional ML techniques were effective in learning patterns from structured data, they had several limitations. ML models typically relied on handcrafted features, performed poorly when dealing with unstructured and high-dimensional data such as medical images, and often failed to generalize well across different datasets. To address these challenges, neural networks [12] gained prominence around 1995, offering a layered architecture capable of learning complex, non-linear patterns directly from raw data and reducing the need for manual feature engineering. Neural networks laid the groundwork for more powerful, data-driven approaches in healthcare. In the early 2000s [13], the field advanced with the emergence of CNNs, a specialized class of deep learning models designed for image analysis. CNNs demonstrated remarkable success in medical imaging tasks by automatically extracting spatial features and learning hierarchical representations. However, training CNNs from scratch required large annotated datasets and high computational cost, and involved challenges such as unstable performance due to randomly initialized weights. To mitigate these issues, the period between 2010 and 2016 witnessed the rise of transfer learning [14] in healthcare AI. This technique involves reusing pretrained models, typically trained on large-scale datasets like ImageNet, and fine-tuning them on medical datasets.
Transfer learning significantly improved performance, especially in scenarios with limited data, while reducing training time and computational burden. Despite their success, DL models are often criticized for being “black boxes”, meaning their decision-making processes are not easily interpretable. This lack of transparency creates challenges in critical domains like healthcare, where understanding the rationale behind a model’s prediction is essential for clinical trust, validation, and regulatory approval. As a result, relying solely on performance is insufficient; explainability became a key concern. To address these concerns, the development of XAI [15] gained significant attention. Although the idea of explainability had been discussed earlier, specific efforts to interpret CNNs began to take shape in late 2016, especially in the context of medical imaging. These approaches aimed to make deep models more transparent, interpretable, and acceptable in clinical practice. While XAI improved trust and interpretability [16], the need to maintain data privacy in sensitive domains like healthcare gave rise to FL [17] in 2018, enabling collaborative model training without centralizing patient data [18,19]. However, FL presented its own challenges, such as data heterogeneity and lack of trust among institutions [20]. To address these issues, Blockchain-assisted FL [21] was introduced in 2021, offering a secure, decentralized framework with enhanced transparency, traceability, and tamper-resistance, marking the next step in trustworthy AI for healthcare [22,23,24]. A historical overview of AI technologies transforming healthcare over time is illustrated in Figure 3.

4. Role of AI in Modern Healthcare

In order to comprehensively understand the impact and limitations of AI in this field, the applications have been categorized into key functional areas based on their roles in patient care, system optimization, and research development. These include medical imaging and diagnostics, predictive analytics, drug discovery, virtual health assistants, personalized treatment planning, remote monitoring, hospital workflow optimization, and public health surveillance. Each of these domains reflects how AI is uniquely applied to solve specific challenges. The following subsections explore these categories in detail. Figure 4 presents a visual representation of AI applications in the healthcare domain.

4.1. Medical Imaging and Diagnostics

AI, particularly DL techniques such as CNNs, has significantly transformed medical imaging by enhancing the detection, segmentation, and classification of diseases across various imaging modalities, including X-rays, MRI, CT, and ultrasound [25,26]. The models assist radiologists in identifying abnormalities with greater speed, accuracy, and consistency, often detecting subtle patterns that may be missed by the human eye. Notable applications include the detection of rectal cancer using high-resolution MRI with Faster R-CNN [27], lung nodules in chest CT scans [28], and small liver cancers in multimodal ultrasound images [29]. Tools like EyeArt have accurately identified vision-threatening diabetic retinopathy without dilation [30], while the GRAIDS system has shown expert-level performance in diagnosing gastrointestinal cancers from endoscopy images [31]. AI has also been used to evaluate breast lesions on dynamic contrast-enhanced MRI and cervical spine radiographs for cervical spondylotic myelopathy [32]. Beyond improving diagnostic accuracy, AI reduces radiologist workload by automating repetitive tasks, helps minimize human error, and enables standardized image interpretation, reducing inter-observer variability and supporting more reliable clinical decision-making.
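The core operation behind the CNNs discussed above is the learned convolutional filter, which responds to local spatial patterns such as tissue boundaries. The following minimal sketch (not a clinical tool) shows a single hand-crafted edge-detecting filter applied to a toy image; the image values and kernel are illustrative assumptions, whereas a real CNN would learn many such kernels from annotated scans.

```python
# Minimal sketch: how one convolutional filter extracts spatial features,
# the basic operation CNNs stack and learn for tasks like nodule detection.
# The toy "scan" and the kernel values are illustrative assumptions.

def convolve2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector applied to a toy 4x4 image with a bright region.
scan = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
edge_kernel = [[-1, 1],
               [-1, 1]]
feature_map = convolve2d(scan, edge_kernel)
# High responses in feature_map mark the dark-to-bright tissue boundary.
```

A trained CNN applies thousands of such filters hierarchically, which is what lets it pick up the subtle image features described above.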

4.2. Predictive Analytics and Risk Stratification

Predictive analytics and risk stratification have become critical AI-driven applications in modern healthcare for identifying high-risk patients and enabling early intervention. By leveraging historical clinical data, EHRs, imaging, and genomics, ML models can forecast disease progression, hospital readmission, adverse outcomes, or treatment response. For instance, [33] developed an XGBoost-based model to predict in-hospital mortality among ICU heart failure patients using the MIMIC-III and eICU datasets. Similarly, [34] demonstrated DL models that outperformed conventional scoring systems in predicting inpatient mortality, unplanned readmissions, and prolonged length of stay using large-scale EHRs. In oncology, ML models have enabled patient stratification based on genomic profiles and tumor heterogeneity, thereby facilitating precision treatment strategies [35]. Moreover, risk stratification tools powered by AI assist clinicians in resource prioritization, particularly in critical care settings, as shown by predictive tools developed for sepsis onset [36] and acute kidney injury [37]. Such tools enable proactive intervention, improving patient outcomes and reducing hospitalizations; they support population health management and personalized preventive care, and can integrate heterogeneous data types for comprehensive risk modeling.
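To make the stratification workflow concrete, the sketch below scores a patient record against a set of risk factors and maps the score to a prioritization tier. This is an illustrative toy only: the factor names, weights, and thresholds are invented, whereas the systems cited above learn such weights from large EHR datasets (e.g., with gradient-boosted trees or deep networks).

```python
# Illustrative sketch only: additive risk scoring and tiering.
# Feature names, weights, and tier thresholds are invented assumptions.

RISK_WEIGHTS = {  # hypothetical per-factor contributions
    "age_over_75": 2.0,
    "prior_readmission": 1.5,
    "elevated_creatinine": 1.0,
    "low_ejection_fraction": 2.5,
}

def risk_score(patient):
    """Sum the weights of each risk factor present in the record."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if patient.get(factor))

def risk_tier(score, low=2.0, high=4.0):
    """Map a raw score to a tier used for resource prioritization."""
    if score >= high:
        return "high"
    if score >= low:
        return "medium"
    return "low"

patient = {"age_over_75": True, "low_ejection_fraction": True}
tier = risk_tier(risk_score(patient))  # score 4.5 falls in the "high" tier
```

In practice the score would be a calibrated model probability rather than a hand-set sum, but the tiering step, which turns a continuous risk estimate into an actionable clinical priority, is the same.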

4.3. Drug Discovery and Development

AI is revolutionizing drug discovery and development by significantly accelerating and optimizing various stages of the pipeline, including target identification, lead compound generation, toxicity prediction, and clinical trial design. Traditional drug development is often costly and time-consuming, with a high failure rate in later stages. AI models have shown the potential to reduce these challenges by learning complex patterns from large biological and chemical datasets. [38] demonstrated the effectiveness of graph-based ML techniques in molecular property prediction, virtual screening, and de novo drug generation. [39] conducted a comprehensive survey on generative AI models such as GANs and VAEs, which are increasingly used for designing novel molecules with desired pharmacological properties. Additionally, explainable AI frameworks are emerging to ensure transparency in AI-driven drug discovery. [40] emphasized the importance of model interpretability for regulatory compliance and clinical trust, particularly in high-stakes domains like pharmacokinetics and toxicity assessment. Furthermore, recent studies have highlighted AI’s contribution to streamlining early-stage development and preclinical validation. For instance, a review published in Discover Pharmaceutical Sciences [41] discussed how AI is transforming traditional workflows by integrating heterogeneous biomedical data to predict target-drug interactions and optimize clinical trial outcomes. Likewise, a study from Biomarker Research [42] explored AI applications in patient stratification and biomarker-driven drug design, underscoring the role of AI in personalized medicine.

4.4. Virtual Health Assistants and Chatbots

AI-powered Virtual Health Assistants (VHAs) and chatbots are increasingly deployed in healthcare settings to improve accessibility, patient engagement, and operational efficiency. These systems leverage Natural Language Processing (NLP) and ML to simulate human-like conversations and assist with symptom assessment, medication reminders, and mental health support. Notably, tools like Babylon Health [43] and Ada Health use AI-driven chat interfaces to provide preliminary diagnostic support and guide patients toward appropriate care. These applications are particularly beneficial in under-resourced or remote areas with limited access to healthcare professionals, offering 24/7 assistance and reducing the burden on healthcare systems. Chatbots have also been successfully deployed for risk assessment in conditions such as cancer, depression, and chronic diseases [44]. The integration of large language models (LLMs) such as GPT-4 and Med-PaLM 2 has further enhanced the quality, safety, and clinical relevance of these conversational agents [45,46]. Beyond diagnostics, VHAs are being increasingly used for delivering personalized health education and promoting behavioral change. Applications include providing post-operative care instructions, chronic disease coaching, and real-time mental health support. Systematic reviews have shown that such interventions contribute to better patient knowledge retention, improved medication adherence, and increased follow-up compliance [47].

4.5. Remote Monitoring and Wearable Devices

AI plays a crucial role in enhancing remote patient monitoring through the integration of wearable devices. By analyzing continuous data from wearable sensors, AI algorithms can monitor vital signs, detect physiological anomalies, and assist in the remote management of chronic diseases such as diabetes, hypertension, and heart conditions [48,49]. These capabilities enable continuous, real-time health monitoring, which facilitates early detection of health deterioration and timely medical intervention [50,51]. AI-powered wearable systems also empower individuals to take an active role in managing their health by providing personalized feedback on activity levels, sleep quality, and other lifestyle parameters. This promotes self-management, encourages healthy behavior changes, and supports long-term disease prevention and control [52]. Moreover, remote monitoring solutions help reduce hospital readmissions, lower healthcare costs, and ease the burden on healthcare systems by minimizing unnecessary in-person visits.
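The anomaly-detection step on a wearable stream can be sketched very simply: compare each new reading against a rolling baseline and flag large deviations. The window size, threshold, and heart-rate values below are illustrative assumptions, not clinical parameters; deployed systems use far more sophisticated, personalized models.

```python
# Minimal sketch of wearable-stream anomaly flagging: compare each new
# heart-rate reading to a rolling-mean baseline and flag large deviations.
# Window size and threshold are illustrative, not clinically validated.

from collections import deque

def flag_anomalies(readings, window=5, threshold=25.0):
    """Return indices of readings deviating from the rolling mean."""
    recent = deque(maxlen=window)
    flagged = []
    for i, hr in enumerate(readings):
        if len(recent) == window:
            baseline = sum(recent) / window
            if abs(hr - baseline) > threshold:
                flagged.append(i)   # candidate alert for clinical review
        recent.append(hr)
    return flagged

stream = [72, 75, 71, 74, 73, 70, 118, 72, 74]  # one tachycardic spike
alerts = flag_anomalies(stream)                  # flags the spike at index 6
```

The same compare-against-baseline pattern underlies more advanced approaches, which replace the rolling mean with a learned model of the individual patient's normal physiology.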

4.6. AI in Hospital Operations and Workflow Optimization

AI technologies have emerged as powerful tools for optimizing hospital operations and clinical workflows. These systems address inefficiencies in scheduling, resource allocation, patient flow, and documentation, thereby enhancing care delivery and overall operational performance. One of the core applications is AI-driven patient flow optimization [53], where predictive models forecast admission rates, discharge timings, and bed occupancy trends, enabling real-time resource management and reducing emergency department crowding [54,55]. Operational scheduling has also been enhanced through AI algorithms that automate and optimize surgical case scheduling, staff rosters, and diagnostic appointments. Reinforcement learning and constraint-based optimization approaches have shown significant success in minimizing wait times and maximizing utilization of operating theaters and imaging equipment [56,57]. AI further supports hospital supply chain and inventory management by predicting usage patterns and optimizing procurement cycles [58]. Techniques such as time-series forecasting and clustering are employed to ensure the availability of essential medical supplies while minimizing waste and storage costs [59]. These applications not only improve operational efficiency but also contribute to cost reduction, staff satisfaction, and enhanced patient experience, all key indicators of a high-performing healthcare system.
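The time-series forecasting idea behind supply planning can be sketched with a simple moving average: forecast next period's usage from recent periods, then derive a reorder quantity with a safety margin. The usage figures, window size, and safety factor below are invented for illustration; production systems use richer models (seasonality, lead times, demand spikes).

```python
# Sketch of supply forecasting: moving-average demand forecast plus a
# safety-margin reorder rule. All figures are illustrative assumptions.

def moving_average_forecast(usage, window=4):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = usage[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(usage, on_hand, safety_factor=1.2, window=4):
    """Order enough stock to cover the forecast plus a safety margin."""
    forecast = moving_average_forecast(usage, window)
    target = forecast * safety_factor
    return max(0.0, target - on_hand)

weekly_gloves_used = [480, 510, 495, 515, 500, 490]  # boxes per week
order = reorder_quantity(weekly_gloves_used, on_hand=300)
# Forecast 500 boxes, target 600 with safety margin, so order 300 more.
```

Even this crude rule captures the goal stated above: keep essential supplies available while avoiding over-ordering and storage waste.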

5. Key Challenges

Despite the remarkable progress and potential of AI in healthcare, its widespread adoption still faces several critical obstacles. These challenges span across data management, ethical and legal frameworks, technical robustness, operational integration, and the trustworthiness of AI systems. Each of these challenges is discussed in detail below.

5.1. Data Related Challenges

Healthcare data is highly sensitive because it includes some of the most intimate information conceivable about an individual. Unauthorized access or misuse of this information [60,61] can lead to identity theft, insurance fraud, and reputational damage for healthcare institutions. Healthcare providers have ethical and legal obligations [62] to preserve the privacy and confidentiality of patient information. AI models require access to detailed patient records to detect complex patterns, but regulations like the Health Insurance Portability and Accountability Act (HIPAA) [63] in the U.S. and the National Digital Health Authority (NDHA) impose strict rules on how patient data can be collected, stored, processed, and shared. This limits the ability of researchers and developers to freely access and share real-world data.
Additionally, data quality [64] directly determines the accuracy, consistency, and timeliness of what AI systems can learn. Data is often collected from a wide variety of sources, such as electronic records, imaging systems, lab reports, wearable sensors, and handwritten notes, each with its own format and reliability level. AI models are highly sensitive to poor-quality [65] or unstructured data. Issues like missing values, inconsistent coding standards, duplicate entries, or outdated records can significantly degrade model performance, and semantic differences across institutions make it difficult to aggregate and harmonize data. Moreover, AI research requires large volumes of accurately labeled data. The labels are often clinical annotations made by domain experts (e.g., radiologists or pathologists), which are time-consuming and expensive to produce. Lack of labeled data [66,67] slows down model training, testing, and validation. For rare diseases, even gathering sufficient samples is difficult, and labeling inconsistencies or inter-observer variability can introduce noise into datasets.
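Two of the data-quality problems named above, duplicate entries and missing values, have standard remediation steps that can be sketched briefly. The record layout and field names below are invented for illustration; real pipelines must also handle coding standards, unit mismatches, and cross-institution semantics.

```python
# Illustrative preprocessing sketch: deduplicate patient records and
# impute missing values with the observed mean. Record layout and field
# names are invented assumptions, not a real EHR schema.

def deduplicate(records, key="patient_id"):
    """Keep the first record seen for each patient identifier."""
    seen, unique = set(), []
    for rec in records:
        if rec[key] not in seen:
            seen.add(rec[key])
            unique.append(rec)
    return unique

def impute_mean(records, field):
    """Replace missing (None) values of `field` with the observed mean."""
    observed = [r[field] for r in records if r[field] is not None]
    mean = sum(observed) / len(observed)
    return [dict(r, **{field: r[field] if r[field] is not None else mean})
            for r in records]

raw = [
    {"patient_id": 1, "systolic_bp": 120},
    {"patient_id": 1, "systolic_bp": 120},   # duplicate entry
    {"patient_id": 2, "systolic_bp": None},  # missing value
    {"patient_id": 3, "systolic_bp": 140},
]
clean = impute_mean(deduplicate(raw), "systolic_bp")
```

Mean imputation is the simplest option and can itself bias a model; clinically deployed pipelines typically prefer model-based or domain-informed imputation, which is exactly why data quality remains a challenge rather than a solved step.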

5.2. Ethical and Legal Challenges

AI in healthcare introduces numerous ethical and legal concerns due to its potential impact on life-or-death decisions. The key challenges include bias [68], accountability, and the transparency of AI models. A biased model indicates that its predictions disproportionately favor certain groups or classes, often as a result of imbalanced training data. Since these models are trained on historical data, any existing biases or unfair patterns in the data may be reproduced or even amplified during decision making. In the healthcare domain, such bias [69] can lead to misclassifications in diagnosis and treatment, potentially exacerbating health disparities and undermining trust in AI systems. Moreover, most DL models, particularly those using neural networks, are black boxes [70] and difficult to interpret. This lack of explainability hinders trust [71,72], limits clinical adoption, and complicates legal liability when AI-based decisions are challenged. Typically, highly accurate models like DL are often difficult to interpret, whereas models that are easier to understand, such as decision trees, usually offer lower accuracy, as shown in Figure 5. However, medical experts need clear justifications for decisions, especially in high-stakes environments like diagnosis or treatment planning. Additionally, regulatory bodies require traceable decision-making processes to ensure safety and ethical responsibility. A major concern arose in 2017 [73] when the Royal Free NHS Trust shared 1.6 million patient records with Google DeepMind without informing the patients. This incident raised serious concerns about transparency and trust. Patients must be informed about how their data is used, or they may resist AI adoption. Ensuring privacy and building trust through ethical data handling remains a critical challenge.

5.3. Technical Challenges

AI in healthcare has offered strong capabilities in various fields including disease prediction, drug discovery, and personalized medicine. However, developing AI-based healthcare systems faces several critical challenges, including model interpretability, integration with existing healthcare systems, algorithmic bias, optimization, real-time processing, and scalability. These challenges can adversely affect system performance and lead to inaccurate results.

5.3.1. Model Interpretability

A major barrier to AI adoption in healthcare is the lack of interpretability of many high-performing models, particularly deep learning architectures such as Convolutional Neural Networks (CNNs) and Transformer-based models. While these models achieve exceptional accuracy in tasks like classification or clinical note summarization, they often perform the task without offering any insight into how decisions are made. This black-box nature undermines clinical trust and complicates legal and ethical accountability [74,75]. Medical experts require explanations for AI-generated decisions, especially in cases of severe diseases like cancer, in critical environments like ICU care, or in surgical planning. Current solutions, such as Local Interpretable Model-Agnostic Explanations (LIME), Shapley Additive Explanations (SHAP), and counterfactual explanations, attempt to provide human-understandable outputs. However, these are often post-hoc, not completely reliable, and may not reflect the internal logic of the model [76]. They can be misleading, especially when applied to biased models. Furthermore, the need for explainability varies across different medical specializations. For example, a radiologist might accept feature heatmaps for image diagnosis, while a general practitioner may require clear verbal reasoning tied to patient history. Without consistent and standardized interpretability protocols, AI may remain confined to research settings rather than seeing proper implementation in real-world clinical setups.
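The idea shared by post-hoc explainers such as LIME and SHAP can be sketched in its simplest form: perturb the input and observe how the black box's output shifts. Below, the "model" is a toy linear scorer with invented weights standing in for any opaque clinical model; real LIME/SHAP use far more careful sampling and attribution schemes, so this is only a conceptual sketch.

```python
# Minimal sketch of perturbation-based explanation, the idea behind
# post-hoc tools like LIME/SHAP: ablate each feature and measure how much
# the black box's output changes. Model weights are invented assumptions.

def black_box_model(features):
    """Stand-in for an opaque clinical model (illustrative weights)."""
    weights = [0.8, 0.1, 0.5]   # e.g., tumor size, patient age, marker level
    return sum(w * x for w, x in zip(weights, features))

def perturbation_importance(model, features):
    """Absolute output change when each feature is set to zero."""
    baseline = model(features)
    importances = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0
        importances.append(abs(baseline - model(perturbed)))
    return importances

scores = perturbation_importance(black_box_model, [2.0, 4.0, 1.0])
# The largest score marks the feature the prediction depends on most.
```

The critique in the text applies directly here: the explanation is computed after the fact, from input-output behavior alone, and says nothing about the model's internal reasoning, which is why post-hoc explanations can mislead when the underlying model is biased.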

5.3.2. Model Bias

The problem of fairness in AI is closely linked to how well these systems work for everyone. ML models can sometimes reflect unfair patterns from society, leading to poor results for certain minority groups, especially if the training data is biased [77]. Research shows that AI systems can often cause more harm to people who are already disadvantaged because of their race, gender, or economic status [78]. In healthcare, for example, some hospital mortality prediction tools work better for certain ethnic groups than others [79]. There are also skin cancer detection systems that perform as well as expert dermatologists [80,81], but they don’t work as well on darker skin because most of the training images are of lighter-skinned people. This is a serious concern, as people with darker skin often have more advanced skin conditions when diagnosed and lower survival rates [82]. Algorithmic unfairness can be understood in three main parts: model bias, where the model mainly works well for the majority group but not for those underrepresented; model variance, which happens when there isn’t enough data from minority groups; and outcome noise, which refers to unknown factors that can affect predictions but could be reduced by identifying specific subgroups and collecting more detailed data [79]. To reduce these problems, it’s important to raise awareness and involve healthcare professionals in the design and development of AI systems. Their input can help researchers take the right steps to measure and address bias before using these models in real settings. AI tools should be built with global use in mind and tested on diverse populations that reflect the people they are meant to help. It’s important to carefully check how an AI system performs across different groups, such as by age, ethnicity, gender, social background, and location. This helps ensure it works fairly for everyone.
It’s also crucial to study how the algorithm might change what kinds of diseases are detected. If the AI system finds different types or stages of disease compared to what doctors usually catch, then both the possible benefits and risks of this shift must be examined. For example, in mammography, AI might detect milder forms of ductal carcinoma in situ, which could lead to more treatments without much improvement in health outcomes. To spot any issues before full-scale use, small trial runs of the AI system should be done within healthcare settings to better understand how it works in practice and to identify any potential problems.
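A pilot-style subgroup audit of the kind recommended above can be prototyped with a few lines of bookkeeping. The labels, predictions, and group assignments below are hypothetical illustrations, not real clinical data.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Per-subgroup accuracy, to surface performance gaps before deployment."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical pilot data with a self-reported skin-tone attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["light", "light", "light", "dark", "light", "dark", "dark", "dark"]
report = subgroup_accuracy(y_true, y_pred, groups)
gap = max(report.values()) - min(report.values())   # large gap -> investigate
```

A large accuracy gap between subgroups, as in this contrived example, is exactly the signal a trial run should surface before full-scale use.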

5.3.3. Security Challenges of AI models

AI algorithms have demonstrated vulnerability to adversarial attacks, where deliberately modified inputs can mislead models that would otherwise perform accurately [83,84]. Although such scenarios remain largely theoretical in clinical practice, they highlight important risks. For instance, research has shown that subtle changes, such as adding noise or rotating an image, can cause a model to incorrectly classify benign skin lesions as malignant [85]. AI systems in healthcare encounter numerous security threats at various stages, including data acquisition, preprocessing, model training, and inference, as illustrated in Figure 6. During the data collection phase, devices such as sensors can be exposed to spoofing attacks [86,87]. The preprocessing stage may also face risks such as scaling attacks. Given the highly sensitive nature of medical information, it is a frequent target for cyber threats. Moreover, AI models can be exploited through adversarial attacks, where slight, often imperceptible, alterations to input data can mislead the model into making incorrect predictions or unintentionally revealing private patient data [88]. These vulnerabilities underscore the importance of implementing stringent security practices across all stages. Securing data collection mechanisms against manipulation and building models resilient to adversarial inputs during training and deployment are essential steps toward safeguarding healthcare AI applications. Such practices are critical for maintaining the trustworthiness and reliability of AI in medical environments.
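The noise-based perturbations described above can be made concrete with the Fast Gradient Sign Method (FGSM) applied to a toy logistic "lesion classifier". The weight vector and input are hypothetical, and this is a didactic sketch, not an attack on any deployed system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM for a logistic classifier p = sigmoid(w.x + b).

    Shifts x by eps along the sign of the loss gradient w.r.t. the input,
    the cheapest way to push a correct prediction across the boundary.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy "benign (0) vs malignant (1)" classifier with fixed weights.
w, b = np.array([1.0, -2.0]), 0.0
x = np.array([0.4, 0.3])            # score w.x + b = -0.2 -> benign
x_adv = fgsm_perturb(x, w, b, y=0.0, eps=0.2)   # small shift flips the label
```

A perturbation of only 0.2 per feature is enough to move this input across the decision boundary, mirroring the imperceptible-change risk described above.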

5.3.4. Efficient and Effective AI Systems

One of the most pressing technical challenges in integrating AI into healthcare systems is ensuring the efficiency of AI models and infrastructure across various clinical settings. Unlike general-purpose AI applications that often operate in cloud-rich environments, healthcare imposes unique constraints that demand AI systems be computationally efficient, energy conscious, and capable of delivering real-time, low-latency performance without compromising diagnostic accuracy. Deep learning models such as convolutional neural networks (CNNs) and transformers, while highly effective in tasks like medical image classification, disease prediction, and natural language processing of clinical notes, are resource-hungry and often infeasible to deploy in low-resource or time-sensitive settings without significant optimization. Developing optimized AI models for healthcare involves more than achieving high accuracy; it demands solutions that are computationally efficient, scalable, robust, and clinically deployable. Despite significant advances, various challenges hinder the optimization of AI systems for real-world healthcare applications. A fundamental challenge lies in balancing model complexity with computational efficiency. State-of-the-art deep learning architectures are often computationally intensive and memory-hungry [89]. Training and deploying these models in real-time clinical environments, especially where hardware resources are constrained (e.g., in rural or developing regions), becomes impractical. Optimizing for lightweight inference while retaining diagnostic performance is an ongoing area of research [90]. Another significant obstacle is optimizing for generalizability and robustness.
Many AI models that perform well on training datasets fail to maintain accuracy across different clinical environments due to covariate shifts, differences in imaging devices, or population diversity [91]. Ensuring model robustness across hospitals, regions, and demographic groups requires advanced techniques such as domain adaptation and FL [92]. However, these methods are themselves computationally demanding and difficult to implement at scale. Explainability is equally essential for clinical integration: optimized models must not only perform well but also offer interpretable insights to earn clinicians' trust. However, optimizing for explainability often comes at the cost of predictive power [93]. Methods like SHAP and LIME offer local interpretations, but they add computational overhead and may not scale efficiently for real-time decision-making. Furthermore, optimizing AI for clinical use necessitates rigorous validation, regulatory approval, and seamless integration into existing healthcare workflows. Most models are optimized under idealized conditions that do not account for real-world clinical noise, workflow interruptions, or human-in-the-loop constraints [94]. Clinical optimization thus involves an additional layer of systems engineering that goes beyond algorithmic performance. Real-time AI deployment also poses unique challenges in latency, throughput, and fault tolerance. For time-sensitive applications such as emergency diagnosis or ICU monitoring, models must deliver results within milliseconds while ensuring consistent reliability [95]. Traditional cloud-based solutions may introduce unacceptable delays; edge AI approaches can mitigate latency but require further model compression and optimization, often degrading accuracy [96]. Finally, optimization must consider energy efficiency and environmental impact. As healthcare systems scale their AI infrastructure, the energy demands of training and running large models become significant.
Green AI initiatives advocate for resource-aware model development, which is still in its early stages in the healthcare domain [97]. In summary, optimizing AI models for healthcare involves a complex trade-off among accuracy, efficiency, interpretability, scalability, and trustworthiness. Addressing these challenges requires interdisciplinary collaboration and innovative methods in data engineering, model design, system integration, and clinical validation to ensure AI solutions are not only powerful but also practical and sustainable.
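One of the lightweight-inference techniques referenced above, one-shot magnitude pruning, can be sketched in a few lines: the smallest-magnitude weights are zeroed so that sparse storage and sparse kernels can exploit the gaps. The weight matrix below is hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """One-shot pruning: zero the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Hypothetical layer weights; prune half of them.
W = np.array([[0.01, -0.8, 0.05],
              [1.2, -0.02, 0.4]])
W_sparse = magnitude_prune(W, sparsity=0.5)
```

In practice, pruning is usually followed by fine-tuning to recover the accuracy lost when small but non-zero weights are removed, which is part of the trade-off discussed above.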

5.3.5. Real-Time Processing and Scalability

Real-time processing and scalability are critical requirements for deploying AI systems in healthcare, particularly in applications such as emergency care, continuous monitoring, medical imaging analysis, and robotic surgery. These use cases demand immediate or near-instantaneous processing of complex data streams while maintaining accuracy, safety, and reliability. However, achieving both low-latency performance and scalability in dynamic clinical environments poses several formidable challenges. One of the primary challenges in real-time healthcare AI is managing the computational burden associated with complex models. Many AI systems, especially deep neural networks used for imaging and sensor data analysis, require significant processing power [89]. When these models are deployed in real-time scenarios, such as detecting cardiac arrhythmias from ECG signals or identifying anomalies in radiological scans, the latency introduced by heavy computations can delay decision-making, risking patient safety [98]. Reducing inference time while maintaining diagnostic performance is non-trivial and often requires model compression techniques such as quantization, pruning, and knowledge distillation [90], each of which introduces trade-offs in accuracy or interpretability. Scalability issues further complicate real-time performance. As healthcare systems increasingly integrate digital diagnostics and wearable devices, the volume and velocity of incoming data continue to rise. Traditional on-premise infrastructures often lack the capacity to handle such data at scale, particularly when multiple concurrent AI inferences are required across different departments or facilities [99]. Cloud-based solutions offer elasticity, but they introduce latency due to network communication and may not be viable for time-sensitive or privacy-critical applications [96]. Edge computing is gaining popularity as a way to meet real-time demands by processing data closer to its source.
However, deploying AI models on edge devices introduces new challenges in terms of memory limitations, energy constraints, and maintenance complexity [100]. In healthcare, where devices may need to run continuously in resource-limited environments (e.g., ambulances, rural clinics, or wearable health monitors), optimizing AI models for edge compatibility is essential but difficult. Such models must be robust, lightweight, and updateable without human intervention, which demands advances in both hardware and software engineering. Another pressing issue is ensuring real-time performance across diverse patient populations and clinical settings. Models optimized for one hospital or dataset often degrade in performance when applied elsewhere due to data distribution shifts [91]. Achieving real-time robustness under variable clinical contexts requires scalable training pipelines, continual learning mechanisms, and adaptive deployment strategies—all of which are still in experimental stages [101]. Systems integration is a non-trivial component of real-time scalability. AI systems must interface seamlessly with EHRs, hospital information systems, and clinical decision support systems (CDSS). The latency and compatibility issues introduced during integration can negate the gains made in AI model optimization [94]. Moreover, regulatory requirements such as HIPAA and GDPR restrict how data is processed and stored, complicating scalable deployments across institutions and regions [93]. Finally, real-time scalability must include mechanisms for fault tolerance, failover, and safety assurance. Healthcare environments cannot afford system crashes or incorrect outputs during critical operations. Scalable AI architectures must include redundancy, performance monitoring, and alert systems, which require sophisticated engineering beyond traditional AI research [95].
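Among the compression techniques mentioned above (quantization, pruning, distillation), symmetric int8 post-training quantization is the simplest to sketch: weights are mapped to 8-bit integers plus one scale factor, shrinking storage by roughly 4x relative to float32. The weight values below are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~= scale * q, q in [-127, 127]."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for comparison or fallback."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
max_err = float(np.max(np.abs(w - dequantize(q, scale))))
```

The reconstruction error is bounded by half the quantization step, which is the accuracy cost traded against the memory and latency savings discussed above.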

6. Solutions and Emerging Strategies

Building on the challenges outlined in the preceding section, this section discusses feasible solutions to key issues associated with the implementation of AI in healthcare, based on recent studies that examine various strategies and methodologies to address these concerns. Table 2 provides a concise overview of several key challenges, including data privacy, security, ethical and legal issues, and technical limitations, along with the approaches proposed in the literature to address them.

6.1. Solutions for Data-Related Challenges

Data-related challenges primarily revolve around issues of data privacy and security, which are critical due to the sensitive nature of patient information. To address these concerns, researchers have proposed advanced techniques such as FL and blockchain. The following sections discuss these approaches in detail.

6.1.1. FL-Based Solutions for Data Privacy

The authors of [102] address the early detection of Alzheimer's disease using voice data collected from smart speakers (e.g., Alexa or Google Home), eliminating the need for costly medical equipment while preserving users' private health information. They use FL, in which the model is trained locally on users' devices with their voice data, ensuring that raw data never leave the device. To further enhance privacy, differential privacy is incorporated by adding random noise to the extracted features before transmission, thereby reducing the risk of tracing the data back to individual users. Additionally, a cryptographic aggregation technique based on secret sharing securely combines model updates from multiple clients, ensuring that user data remain protected and unreadable during training. However, a slight reduction in prediction accuracy is observed as a trade-off for enhanced privacy protection.
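The general recipe of local training plus clipped, noise-perturbed updates can be sketched as a FedAvg round with Gaussian noise. This is a schematic reconstruction on synthetic least-squares data, not the implementation from [102]; the `local_update` routine, the clipping norm, and the noise scale are all assumptions for illustration.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, steps=20):
    """One client's local gradient descent on a least-squares objective."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def fedavg_with_dp(w_global, client_data, clip=1.0, noise_std=0.05, seed=0):
    """One FedAvg round: clip each client's update, add Gaussian noise, average."""
    rng = np.random.default_rng(seed)
    deltas = []
    for X, y in client_data:
        delta = local_update(w_global, X, y) - w_global
        norm = np.linalg.norm(delta)
        if norm > clip:                 # bound each client's sensitivity
            delta = delta * (clip / norm)
        delta = delta + rng.normal(scale=noise_std, size=delta.shape)  # DP noise
        deltas.append(delta)
    return w_global + np.mean(deltas, axis=0)

# Two synthetic clients whose labels follow the same linear rule.
rng = np.random.default_rng(1)
true_w = np.array([1.0, 2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))
w = np.zeros(2)
for rnd in range(30):
    w = fedavg_with_dp(w, clients, noise_std=0.01, seed=rnd)
```

The injected noise keeps individual updates from being reconstructed exactly, at the cost of the small accuracy loss reported in the study.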
In [103] the authors have addressed the issue of low-quality data contributions (e.g., noisy or inaccurate data). Instead of discarding such data, the proposed framework ensures robust model performance while preserving data privacy and system security. The authors have employed FL, where model training is performed locally on users’ devices, ensuring that raw data are not shared. To enhance security during training, Distributed Paillier Homomorphic Encryption is applied, allowing computations to be performed on encrypted data without exposing the underlying information. Furthermore, a Data Composite Evaluation Method (DCEM) is introduced to assess the quality of each participant’s data and assign lower weights to unreliable contributions rather than excluding them entirely. In addition, a special protocol termed SAP (Secure and Accelerate Partnership) is designed to securely aggregate encrypted updates without information leakage. This framework enables inclusive participation, preserves privacy, and mitigates the negative impact of low-quality data. However, the effectiveness of the system depends on the assumption that the majority of participants are trustworthy.
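The secure-aggregation idea used in works like those above can be illustrated with plain additive secret sharing over a finite field: the aggregator reconstructs only the sum of the clients' integer-scaled updates, never an individual value. This is a didactic sketch of additive sharing, deliberately simpler than the SAP protocol or Paillier homomorphic encryption described above.

```python
import random

PRIME = 2**31 - 1   # field modulus for the additive shares

def make_shares(value, n, rng):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(client_values, rng):
    """Aggregate client values so that no party ever sees an individual update."""
    n = len(client_values)
    all_shares = [make_shares(v, n, rng) for v in client_values]
    # Aggregator j receives share j from every client and reveals only its sum.
    partial = [sum(s[j] for s in all_shares) % PRIME for j in range(n)]
    return sum(partial) % PRIME

rng = random.Random(42)
updates = [120, 335, 98]    # hypothetical integer-scaled updates from three clients
total = secure_sum(updates, rng)
```

Each individual share is uniformly random, so only the final sum carries information, which is the property secure aggregation protocols formalize.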
In both [104] and [105], COVID-19 detection from chest X-ray (CXR) images is investigated. The first study addresses the cross-client variation problem using Federated Averaging; however, it lacks personalized FL mechanisms and assumes uniform data distribution across clients, which is unrealistic in medical practice. The second study effectively handles Non-Independent and Identically Distributed (non-IID) and unbalanced data distributions across clients but does not incorporate model personalization strategies, such as client-specific fine-tuning or adaptive updates.
The authors [106] address the problem of cross-client variation in decentralized medical image data. To mitigate this issue, a framework termed Variation-Aware FL is proposed, wherein each client maps its medical images into a shared image space using a modified CycleGAN. By aligning data distributions across clients, the framework aims to reduce inter-client variability during federated training. However, the image transformation process may introduce distortions in critical regions, such as cancerous areas, which could potentially compromise diagnostic accuracy.
The paper [107] also targets inter-client variation in medical images. The authors propose a novel framework termed Customized FL (CusFL), wherein each client trains a personalized model to improve local performance. Although CusFL improves client-specific accuracy, it exhibits inferior performance in global evaluation compared to approaches such as MOON.
The paper [108] addresses the challenge of performance degradation caused by non-IID medical imaging data. To tackle this issue, the authors propose SplitAVG, a framework that partitions the neural network into client-side and server-side sub-networks, thereby enabling learning from diverse data distributions while preserving privacy. However, the study primarily focuses on statistical heterogeneity and does not consider other practical sources of variability, such as device or behavioral heterogeneity.
Overall, FL provides an effective and privacy-preserving framework for collaborative healthcare data analysis by enabling decentralized training without sharing sensitive patient data. The general workflow and structural categorization of FL approaches are depicted in Figure 7 and Figure 8, respectively.

6.1.2. Standardization Methods for Data Quality Improvement

Standardization of healthcare data is a critical prerequisite for the development and deployment of AI models in clinical environments, especially due to the heterogeneity of data derived from electronic health records (EHRs), medical imaging, and biosensor streams. Statistical standardization methods such as z-score normalization, min–max scaling, and interquartile range-based robust scaling are routinely used to normalize structured medical data, reduce distributional discrepancies, and facilitate stable model training and convergence. By aligning feature scales and mitigating variability across datasets, these techniques improve model generalization and reproducibility. For instance, normalization approaches have been reported to enhance predictive performance in ICU mortality prediction models and brain tumor MRI segmentation tasks by harmonizing feature distributions across datasets [109,110].
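The statistical scalers named above can be sketched directly; the heart-rate column is a hypothetical example, and in practice the fitted statistics would come from the training split or a reference site.

```python
import numpy as np

def z_score(x, mean=None, std=None):
    """Standardize to zero mean / unit variance (stats may come from a reference set)."""
    mean = x.mean() if mean is None else mean
    std = x.std() if std is None else std
    return (x - mean) / std

def min_max(x, lo=None, hi=None):
    """Rescale to the [0, 1] interval."""
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    return (x - lo) / (hi - lo)

heart_rate = np.array([60.0, 75.0, 90.0, 120.0])   # hypothetical vitals column
z = z_score(heart_rate)
m = min_max(heart_rate)
```

Passing the same fitted `mean`/`std` (or `lo`/`hi`) to data from another hospital is what makes the harmonization across datasets described above possible.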
In medical imaging, intensity normalization strategies such as histogram equalization and z-score mapping help reduce inter-institutional variability and have shown measurable improvements in radiomic feature extraction across multicenter CT lung datasets [111]. On the semantic side, coding systems such as SNOMED CT and ICD-10 are indispensable for integrating EHR data across hospitals, while data models like OMOP and FHIR support scalable, interoperable AI solutions [112]. Clinical NLP benefits from lexical and syntactic normalization techniques, including tokenization and UMLS mapping, which enhance named entity recognition and downstream classification performance [113]. In FL settings where data remain distributed, feature distribution alignment techniques such as batch normalization and adaptive normalization are essential for ensuring statistical consistency across training sites without data sharing [114].
Despite significant progress, over-standardization may obscure biologically relevant signals, highlighting the need for balance between harmonization and clinical interpretability.

6.1.3. Blockchain-Based Solution for Data Security

To address data security challenges, blockchain technology has emerged as a robust approach due to its decentralized, tamper-proof, and transparent nature. The solutions can be broadly divided into two categories: pure blockchain-based solutions and FL-assisted blockchain solutions. Below is a brief overview of key works in both categories, highlighting their approaches and contributions.
  • Blockchain-Based Solutions
    The authors of [115] addressed security risks associated with centralized storage and unauthorized access to sensitive health data by proposing a decentralized blockchain-based authentication framework for patient verification across interconnected hospital networks. The system enhances identity integrity and reduces reliance on centralized authorities. However, the proposed approach lacks inclusiveness, potentially excluding individuals without adequate digital access or technical literacy.
    The authors of [116] created a system that gives patients full control over their medical records, which are often scattered across different hospitals and hard to access or share. The framework uses blockchain-based storage to ensure the integrity, traceability, and secure access of medical records while enabling controlled data sharing across institutions. However, reliance on off-chain cloud storage may expose the system to external risks and limit end-to-end data security.
    The authors of [117] designed a decentralized medical data sharing framework to address unauthorized exposure and inefficient synchronization of medical records. They proposed a technique that partitions a full medical record into fine-grained data views, each shared selectively with different stakeholders such as patients, doctors, and researchers. Blockchain-based smart contracts enforce attribute-level access control, ensuring that only authorized users can update or access specific fields within the shared data. However, the system does not effectively handle concurrent updates across overlapping data views, relying instead on basic serialization mechanisms.
    The authors of [118] developed a permissioned blockchain platform to ensure secure, consistent, and patient-controlled management of Electronic Medical Records (EMRs) within hospitals. They addressed issues such as data fragmentation, lack of transparency, and weak access control by integrating smart contracts for role-based permissions and using immutable transaction logs. However, there is no discussion of interoperability with existing hospital EMR systems, limiting real-world integration feasibility.
    The authors of [119] developed a blockchain-based system to securely manage and verify COVID-19 digital medical passports and immunity certificates. The framework addresses challenges related to delayed, inaccurate, and unreliable health reporting. However, the system relies heavily on user-controlled private keys and advanced digital infrastructure, which may limit accessibility and introduce risks associated with key loss, particularly in resource-constrained settings.
    The authors of [120] developed an Ethereum-based blockchain solution for the resale, leasing, and auctioning of pre-owned medical equipment. The system leverages smart contracts to automate equipment registration, ownership transfer, certification validation, and stakeholder reputation tracking, ensuring transparency and traceability. However, the solution requires universal blockchain adoption among participants, which may limit scalability and practical deployment in resource-constrained healthcare environments.
    The authors of [121] employed Ethereum smart contracts, timer oracles, and IPFS to manage the preventive maintenance of diagnostic medical imaging equipment, including MRI and CT machines. However, the framework must comply with stringent healthcare regulations governing data privacy and security.
  • Blockchain Assisted FL Solutions
    Blockchain-assisted FL solutions integrate blockchain with FL to enable secure, verifiable, and decentralized coordination of distributed model training across multiple healthcare institutions. This integration enhances privacy, data integrity, and trust during collaborative learning. However, such frameworks often depend on user-controlled private keys and advanced digital infrastructure, which may limit accessibility and scalability in resource-constrained environments.
    The authors of [122] demonstrated a blockchain-enabled FL framework to securely train AI models for diagnosing 15 lung diseases from chest X-ray images without sharing patient data. Although the framework avoids storing raw medical data on-chain, its reliance on a permissionless and transparent blockchain architecture introduces potential privacy exposure risks.
    The authors of [123] proposed FDBC-SKS, a blockchain-enabled federated learning framework that integrates knowledge distillation to facilitate secure and efficient knowledge sharing among medical institutions. The framework reduces communication overhead and enhances fairness in model update aggregation and verification. However, the knowledge distillation process may introduce privacy leakage risks, particularly if shared logits are not adequately protected.
    The authors of [124] developed a privacy-preserving framework that integrates FL with a permissioned blockchain to enable collaborative brain tumor detection from MRI images across multiple hospitals. The proposed system addresses data-sharing restrictions and security vulnerabilities associated with centralized learning architectures. However, the system requires substantial computational resources and longer training times.
    Blockchain-driven approaches have demonstrated potential in enhancing data security, transparency, and controlled access within healthcare ecosystems. When integrated with FL, blockchain facilitates trusted and decentralized coordination of collaborative model training across medical institutions without exposing sensitive patient data. The blockchain-assisted FL workflow is illustrated in Figure 9.
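The tamper-evidence property underlying all of the blockchain systems surveyed above reduces to hash chaining: each block commits to the hash of its predecessor, so editing any record invalidates every later link. Below is a minimal, non-distributed sketch (no consensus, no smart contracts); the record contents are hypothetical.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class RecordLedger:
    """Minimal hash-chained log illustrating blockchain-style tamper evidence."""
    def __init__(self):
        self.chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

    def append(self, data):
        self.chain.append({"index": len(self.chain),
                           "data": data,
                           "prev": block_hash(self.chain[-1])})

    def verify(self):
        """Recompute every link; editing any block breaks all downstream hashes."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = RecordLedger()
ledger.append({"patient": "anon-17", "event": "MRI uploaded"})
ledger.append({"patient": "anon-17", "event": "access granted to oncology"})
intact_before = ledger.verify()
ledger.chain[1]["data"]["event"] = "record deleted"   # simulated tampering
intact_after = ledger.verify()
```

Real deployments add distributed consensus and access control on top of this primitive, but the audit-trail guarantee itself comes from the hash chain.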

6.2. Solutions for Ethical and Legal Challenges

The authors of [125] examined non-technical challenges in AI adoption, including gaps in legal frameworks for AI decision-making and ethical concerns related to consent, fairness, bias, and workforce impact. The authors of [126] proposed solutions emphasizing robust regulatory policies, algorithmic transparency, and effective data governance; however, their work lacks a concrete implementation framework to operationalize these principles in real-world healthcare settings.
The authors [127] proposed a three-stage framework for ethically validating ML-based healthcare applications, addressing issues like data bias, algorithmic opacity, and lack of transparency. The framework highlights AI applications in EHR analysis and drug discovery, emphasizes explainability and auditing, and argues that clinicians should retain decision-making responsibility. A key drawback highlighted by the authors is the black-box nature of AI systems, which limits interpretability and makes it difficult for clinicians to understand or justify AI driven decisions, potentially compromising accountability and patient safety.
The authors [128] critically examined the ethical and legal challenges associated with the integration of AI in healthcare. They have addressed these issues by proposing flexible legal frameworks, enhancing data privacy protections, and introducing ethical design principles to ensure transparency, fairness, and human oversight. A key drawback they have highlighted is the lack of clear accountability when AI systems fail.
The authors of [129] conducted a focus group discussion with experts from legal, medical, and technical backgrounds to explore real-world challenges. They identified major issues such as unclear legal frameworks, data privacy concerns, and lack of ethics training. To resolve these, the authors emphasized the need for strong AI-specific laws, mandatory ethics training, and transparent, accountable AI systems. However, the study lacks concrete policy models, and implementation strategies specific to the Jordanian legal system are not addressed.
The authors of [130] took a global perspective to examine how the ethical and legal implications of AI in healthcare vary across countries and cultures. By analyzing cases from the U.S., EU, and China, they highlight inconsistencies in regulations related to data privacy, algorithmic bias, and liability. The paper identifies the fragmented nature of global AI governance as a core challenge and recommends international policy harmonization, transparent AI design, and cross-border regulatory collaboration. However, the study provides broad discussions of ethical and legal concerns without presenting concrete solutions or practical implementation pathways.

6.3. Solutions for Technical Challenges

Medical imaging data exhibit significant variability due to different devices, protocols, and patient populations, impacting model robustness. Consequently, models trained on specific datasets often fail to perform well across institutions. In addition, the black-box nature of many AI models limits clinical trust, as radiologists require transparent explanations to understand and validate AI-driven decisions. Furthermore, regulatory approval processes for AI-based diagnostic systems remain complex and continuously evolving. Ethical concerns also persist, particularly regarding misdiagnosis risks and unresolved liability issues.

6.3.1. Model Interpretability

To address the interpretability limitations of AI models in healthcare, one of the most actively pursued solutions is the advancement of Explainable Artificial Intelligence (XAI) [131,132]. XAI refers to a collection of methods and frameworks aimed at making AI model decisions transparent and understandable to human users. Early foundational methods laid the groundwork for discussions of explainability in AI. Among the widely adopted model-agnostic approaches are Local Interpretable Model-Agnostic Explanations (LIME) [133] and SHapley Additive exPlanations (SHAP) [134], which approximate complex models locally to explain individual predictions. These methods have been successfully applied to various clinical tasks, including sepsis prediction, breast cancer detection, and hospital readmission forecasting. Recent efforts have further extended these tools to better suit the healthcare domain; for example, [135] adapted SHAP for explaining DL-based cancer detection, while [136] applied Grad-CAM to interpret model decisions in COVID-19 diagnosis. In imaging, attention-based XAI has gained traction for localizing pathology-relevant features, especially in histopathology [137] and chest X-rays [138]. In addition to these techniques, Concept Bottleneck Models (CBMs) [139] have been introduced, in which models are trained to predict human-understandable intermediate concepts (e.g., tumor size, lesion type) before making final predictions. This structured approach offers stepwise reasoning that clinicians can verify. Furthermore, uncertainty-aware models are being proposed that accompany each prediction with a confidence score, enabling clinicians to assess the reliability of AI outputs under ambiguous conditions [140]. These advancements reflect a transition from generic post-hoc explanations to interpretability approaches aligned with clinical contexts, improving reliability and user relevance for integration into the healthcare domain.
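The uncertainty-aware models mentioned above typically derive a confidence score from disagreement among ensemble members or stochastic forward passes. Below is a minimal ensemble sketch; the three classifier "heads" are hypothetical stand-ins that return fixed probabilities for a borderline case.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average member probabilities; report their spread as an uncertainty score."""
    probs = np.array([m(x) for m in models])   # each member returns P(disease)
    return probs.mean(), probs.std()

# Three hypothetical classifier heads disagreeing on a borderline scan.
models = [lambda x: 0.92, lambda x: 0.30, lambda x: 0.55]
p, u = ensemble_predict(models, x=None)
flag_for_review = u > 0.2    # route high-disagreement cases to a clinician
```

Routing high-disagreement cases to a human reviewer is exactly the clinician-in-the-loop behavior that uncertainty-aware deployment aims for.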

6.3.2. Model Bias

To address model bias in healthcare AI, a key step is the development of diverse and representative datasets. Large-scale, multi-institutional collaborations are essential to ensure that training data include sufficient samples across ethnicity, gender, age, and socioeconomic groups, thereby minimizing underrepresentation [141,142]. At the algorithmic level, fairness-aware learning strategies have been introduced to reduce systematic disparities. Methods such as adversarial debiasing, re-weighting of underrepresented samples, and constraint-based optimization explicitly enforce fairness during training [143]. FL also offers a promising direction by enabling models to learn from globally distributed and demographically diverse data sources without compromising patient privacy [144,145]. Equally important are bias evaluation and transparent reporting. Beyond overall accuracy, models should be assessed using fairness metrics such as subgroup sensitivity, equalized odds, or calibration error across demographic cohorts [146]. Standardized reporting frameworks, such as Model Cards for healthcare AI, have been proposed to disclose performance across diverse subgroups, improving accountability and regulatory compliance [147,148]. Overall, AI in healthcare should work alongside clinical experts, not replace them. Decision support tools should display warnings about possible bias so that experts can exercise caution, especially for underrepresented populations [149]. Additionally, small-scale pilot deployments within diverse healthcare settings can expose hidden biases before large-scale clinical adoption. Aligning technical advancements with ethical responsibility is essential for mitigating model bias, thereby paving the way for the safe, equitable, and effective adoption of AI in clinical practice.
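Fairness metrics such as equalized odds reduce to simple counting. The sketch below reports the largest true-positive-rate and false-positive-rate differences between subgroups; the audit data for groups "A" and "B" are hypothetical.

```python
def tpr_fpr(y_true, y_pred):
    """True-positive and false-positive rates from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), fp / (fp + tn)

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest between-group differences in TPR and FPR (0 = equalized odds)."""
    rates = {}
    for g in set(groups):
        yt = [t for t, gg in zip(y_true, groups) if gg == g]
        yp = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = tpr_fpr(yt, yp)
    tprs, fprs = zip(*rates.values())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Hypothetical audit data for two subgroups A and B.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
groups = ["A"] * 4 + ["B"] * 4
tpr_gap, fpr_gap = equalized_odds_gap(y_true, y_pred, groups)
```

Nonzero gaps, as in this contrived example, are the kind of disparity that standardized reporting frameworks would require a model card to disclose.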

6.3.3. Security Challenges

Addressing the security vulnerabilities of AI in healthcare requires a comprehensive, multi-layered defense strategy that safeguards all stages of the AI pipeline. At the data acquisition level, secure communication protocols, device authentication mechanisms, and encryption techniques should be employed to protect sensitive medical information from spoofing and tampering. At the model level, numerous defenses against adversarial attacks have been proposed, including integrated adversarial training (AT) techniques, explainability-driven image defense methods, dynamic FL protections, privacy-preserving FL frameworks, and semantic-aware adversarial training [150].
To illustrate these approaches, Table 3 summarizes some of the key defense methods used in medical imaging, the types of adversarial attacks they target, the datasets and model architectures applied, and their specific purposes. This table demonstrates how different strategies are implemented to enhance the robustness of AI systems against adversarial threats.
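As a simplified illustration of adversarial training, the sketch below trains a toy logistic model on FGSM-style perturbed inputs. The model, data, and hyperparameters are purely illustrative and far simpler than the medical-imaging settings surveyed here:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM-style perturbation: shift each feature by eps along the sign of
    the loss gradient with respect to the input."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For logistic loss, dL/dx_i = (p - y) * w_i
    return [xi + eps * math.copysign(1.0, (p - y) * wi) for xi, wi in zip(x, w)]

def adversarial_train(data, dim, eps=0.1, lr=0.1, epochs=200):
    """Basic AT loop: fit the model on adversarially perturbed copies of each example."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm_perturb(x, w, b, y, eps)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
            g = p - y  # gradient of logistic loss with respect to the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x_adv)]
            b -= lr * g
    return w, b
```

Training on worst-case perturbed inputs rather than clean ones is what gives adversarially trained models their robustness margin, at some cost in clean accuracy and compute.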

6.3.4. Scalability

The increasing adoption of AI in healthcare has introduced new challenges related to scalability, particularly in environments where Internet of Things (IoT) devices, edge nodes, and cloud infrastructures must collectively handle vast amounts of real-time clinical data. Several recent studies have presented architectural and algorithmic solutions to these challenges, each emphasizing a different dimension of scalability, including computational efficiency, trust management, interoperability, and latency optimization. For instance, the work in [157] presents a comprehensive framework for optimizing IoT-based healthcare systems through cloud-backed platforms, showing that cloud elasticity and distributed storage can mitigate the bottlenecks inherent in on-premise infrastructures. The study provides elastic scalability for healthcare applications by handling the dynamic workload generated by medical IoT (MIoT) devices, allowing analytics pipelines to scale seamlessly without performance degradation. By leveraging cloud-native services, the proposed system demonstrated improved throughput and reliability in processing heterogeneous medical data streams, showing that cloud-backed systems can not only reduce latency but also optimize training and inference workloads for diverse AI models. Similarly, the study in [158] reinforces this perspective by experimentally evaluating a cloud–IoT healthcare integration: the prototype maintained response times of 1.5–3 seconds for up to 5,000 devices and approximately 12 seconds even under loads of 20,000 devices, illustrating both the practical headroom and the limitations of current cloud-centric scalability strategies.
Importantly, this work compared the performance of CNNs and Long Short-Term Memory (LSTM) models in clinical prediction tasks, reporting accuracies of about 96% and 93% respectively, thereby demonstrating not only the feasibility of scalable architectures but also the capability of such systems to maintain clinical-grade accuracy under high data volume and velocity. Beyond computational throughput, scalability also encompasses security and interoperability. By employing permissioned blockchain frameworks with smart contracts and off-chain storage, healthcare organizations can achieve tamper-evident audit trails, decentralized consent management, and controlled data access, ensuring scalability not only in terms of computational performance but also in terms of governance and compliance across multiple stakeholders [159]. Moreover, blockchain’s inherent throughput limitations can be mitigated through lightweight consensus mechanisms and integration with FL, which highlights that scalability in healthcare is as much about the secure coordination of institutions as it is about the technical scaling of compute resources. The study in [160] presents a blockchain-powered healthcare framework that integrates hybrid deep learning, permissioned blockchain, and edge computing to enhance security and scalability, while federated learning enables collaborative model training without sharing sensitive data. Furthermore, [161] highlights scalability as a central challenge in blockchain-enabled healthcare systems, where large volumes of medical data strain traditional infrastructures. To address this, the authors propose a hybrid deep learning framework integrated with a permissioned blockchain that decentralizes storage and access control. This design improves scalability by distributing workloads across nodes, reducing data retrieval latency, and supporting high transaction throughput.
Furthermore, the integration of FL and homomorphic encryption enables privacy-preserving computation while maintaining efficiency, ensuring scalability in real-world multi-institution healthcare environments.
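The FL coordination pattern underlying these systems can be sketched in a few lines. The example below simulates federated averaging (FedAvg) over a toy one-parameter linear model; the client data, function names, and hyperparameters are illustrative only:

```python
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local SGD pass on a toy linear model y ≈ w * x (illustrative)."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # squared-error gradient
            w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=10):
    """FedAvg: each client trains locally on private data; the server averages
    the returned models, weighted by client sample counts."""
    for _ in range(rounds):
        updates = [local_update(global_w, data) for data in client_datasets]
        sizes = [len(data) for data in client_datasets]
        global_w = sum(w * n for w, n in zip(updates, sizes)) / sum(sizes)
    return global_w
```

Weighting the average by sample count lets heterogeneous institutions contribute proportionally, while raw patient data never leaves a site; only model parameters are exchanged.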

6.4. Solutions for Efficient and Effective AI Systems

Optimizing AI for healthcare requires balancing accuracy, efficiency, robustness, explainability, and sustainability across diverse clinical settings. Recent surveys highlight lightweight architectures, pruning, quantization, and knowledge distillation as critical for deploying CNNs and transformers under real-world compute and latency constraints [162]. To address data scarcity and class imbalance, generative augmentation coupled with optimized detectors (e.g., DCGAN with EfficientDet) enables sensitive yet resource-aware tumor detection suitable for IoT and edge deployments in Industry 5.0 healthcare [163]. Hybrid cloud–edge frameworks further reduce latency by allocating lightweight triage tasks to edge nodes while offloading heavy analytics to elastic cloud resources [164]. Explainability remains central for trust, with optimized pipelines integrating Grad-CAM-style visualizations while balancing computational overhead [165]. At the systems level, real-time rehabilitation devices exemplify safe AI integration by combining adaptive control, intention recognition, and fault-tolerant design, validating deployment in patient-facing scenarios [166]. In addition, standardized evaluation protocols, clinical workflow redesign, governance frameworks, and clinician training accelerate deployment, ensure trust, and reduce production delays [167]. Finally, energy efficiency and Green AI principles are emerging as essential considerations, advocating low-precision training, carbon-aware scheduling, and lightweight inference to mitigate the environmental cost of scaling healthcare AI [168]. Collectively, these solutions emphasize that practical deployment demands interdisciplinary strategies spanning efficient model design, robust training, interpretable outputs, hybrid infrastructures, validated clinical systems, and sustainable operations.
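As a concrete example of one compression technique mentioned above, the following is a minimal sketch of symmetric post-training int8 quantization (illustrative only; real deployments typically use per-channel scales and calibration data):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: q = round(w / scale),
    with scale chosen so the largest |w| maps to 127."""
    m = max(abs(w) for w in weights)
    scale = m / 127.0 if m > 0 else 1.0
    return [int(round(w / scale)) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [qi * scale for qi in q]
```

Storing int8 codes instead of float32 weights shrinks the model roughly fourfold, at the cost of a bounded rounding error of at most scale/2 per weight, which is what makes such models viable on edge and IoT hardware.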

7. AI-Based Existing Technologies and Their Impact on Public Health

7.1. Disease Surveillance and Outbreak Prediction

BlueDot: BlueDot is a Canadian health surveillance company that uses AI, NLP, and big data analytics to detect, track, and predict the spread of infectious diseases globally. Founded in 2013, BlueDot gained prominence for its early detection of the COVID-19 outbreak in Wuhan, China, before it was officially reported by the World Health Organization [169]. The platform works by aggregating and analyzing vast datasets including global airline ticketing data, official public health reports, digital news sources in over 65 languages, and data from animal disease outbreaks. Using ML algorithms, BlueDot processes this data to identify unusual patterns indicative of potential outbreaks and issues early warnings to public health officials and organizations. The system’s ability to integrate disparate data sources and deliver real time insights demonstrates the potential of AI and big data in enhancing global epidemic intelligence [170]. BlueDot’s approach represents a significant advancement in digital epidemiology, aligning with the increasing need for proactive, data driven public health surveillance systems in a globally connected world.
Impact: BlueDot’s early identification of the COVID-19 outbreak highlights the potential of advanced AI-driven tools in global health surveillance. By providing early alerts, the platform enables public health authorities to initiate timely containment strategies based on reported symptoms and other data inputs. A key advantage of BlueDot lies in its real-time disease monitoring and forecasting capabilities, which can significantly enhance societal readiness for future epidemics and health crises [171,172].
HealthMap: HealthMap is an advanced digital surveillance platform that uses artificial intelligence, natural language processing, and web-based data mining to monitor and visualize global disease outbreaks in near real-time. Developed by researchers at Boston Children’s Hospital, HealthMap gathers data from diverse online sources such as news media, official public health reports, and social media to provide timely insights into emerging public health threats. The system automatically processes this information to detect unusual patterns and generate geographic maps and alerts, helping public health officials respond more rapidly to potential epidemics. By integrating informal and formal data sources, HealthMap has demonstrated the effectiveness of AI-supported surveillance tools in enhancing situational awareness and response strategies for infectious disease outbreaks [173,174].
Impact: HealthMap played a crucial role in monitoring the 2014 Ebola outbreak by aggregating data from various sources and offering real-time visualization of disease patterns. This support enabled public health agencies and humanitarian groups to access up-to-date information, enhancing their ability to track the spread of the virus and make informed decisions during the crisis [175,176].
Nextstrain: Nextstrain is an open-source platform designed to track the real-time evolution of pathogens by integrating genomic sequencing data with advanced phylogenetic analysis and interactive visualizations. Developed to support global infectious disease surveillance, it continuously collects and analyzes viral genome data from public repositories such as GISAID. By applying tools such as maximum-likelihood phylogenetics and temporal reconstruction, Nextstrain enables users to monitor viral mutations, geographic spread, and transmission dynamics. This real-time genomic tracking has proven particularly valuable in pandemics like COVID-19, where understanding viral evolution and lineage spread was crucial for guiding public health responses and vaccine strategies [177,178].
Impact: Nextstrain played a pivotal role in tracking the genetic evolution of SARS-CoV-2, the virus responsible for COVID-19. By enabling real-time monitoring of viral mutations, it provided critical insights into how the virus was adapting over time, thereby informing vaccine development and supporting efforts to respond to emerging variants effectively [179].

7.2. Predictive Analytics

Artificial intelligence plays a vital role in public health, particularly through predictive analytics, where intelligent systems analyze extensive datasets to forecast trends, guide resource allocation, and support more effective decision making, ultimately contributing to improved healthcare outcomes. Various real-world examples show how healthcare organizations have adopted AI-driven predictive tools to enhance their responses to public health challenges. The following case studies illustrate how these technologies are being applied to address diverse issues within the healthcare domain.
HCCI:The Health Care Cost Institute (HCCI) leverages advanced analytics and large-scale health data to deliver insights into healthcare utilization, spending, and outcomes across the United States. By applying data science techniques to vast insurance claims datasets, HCCI supports efforts to evaluate healthcare cost drivers, identify disparities, and inform public health policies. Its analytical tools help policymakers, researchers, and providers understand patterns in healthcare access and expenditures, facilitating data-driven decisions aimed at improving efficiency and equity in healthcare delivery [180,181].
Health Catalyst: Health Catalyst is a prominent player in the field of predictive analytics, utilizing integrated data to enhance healthcare delivery and system performance. By combining multiple data sources, such as electronic health records, insurance claims, and clinical registries, the platform creates a comprehensive view of individual patient health. One of its practical applications includes leveraging AI algorithms to identify patients at high risk of hospital readmission. These insights inform personalized follow-up care and medication adjustments, which have contributed to reduced readmission rates, improved patient outcomes, and lower overall healthcare costs. Health Catalyst demonstrates how preventive, data-driven strategies can lead to more efficient and fundamentally optimized healthcare systems [182,183].

7.3. Telemedicine and Virtual Health Assistants

Woebot Health: Woebot Health is an innovative digital mental health platform that employs conversational AI to deliver evidence-based therapeutic support. Functioning as a chatbot, Woebot engages users in real-time conversations grounded in principles from cognitive behavioral therapy (CBT), offering emotional support and psychological interventions through natural language processing. It is designed to provide accessible, scalable mental health care, particularly useful in situations where traditional services are limited or unavailable. Studies have shown that Woebot can effectively reduce symptoms of depression and anxiety by facilitating self-awareness and promoting coping strategies through consistent, structured dialogue [184].
Impact: Numerous AI-powered chatbots and telehealth applications are now being used to support the monitoring and management of chronic conditions such as diabetes, hypertension, mental health disorders, chronic pain, and various forms of cancer. Platforms like Ginger offer text-based mental health coaching, while Ada Health leverages ML to analyze user-reported symptoms and suggest lifestyle adjustments [185]. These tools enable continuous remote monitoring, allowing patients to receive timely alerts and guidance without the need for frequent hospital visits. This capability proved especially valuable during the COVID-19 pandemic, when restrictions on movement made traditional in-person consultations challenging, highlighting the potential of AI to facilitate ongoing care and personalized recommendations for individuals with long-term health conditions.

7.4. Diagnostic Support

Two notable companies leveraging artificial intelligence to enhance cancer diagnostics are IBM Watson Health and PathAI, both contributing significantly to early disease detection. IBM Watson Health applies ML to analyze a combination of clinical trial results, medical literature, and patient records, thereby supporting healthcare professionals in diagnosing rare cancers and managing chronic conditions. In contrast, PathAI focuses on interpreting pathology images, assisting pathologists in refining diagnostic accuracy and guiding treatment strategies. These tools have proven particularly effective in identifying uncommon cancer types that may be overlooked by conventional methods, demonstrating AI’s potential to improve diagnostic precision and accelerate therapeutic interventions. Similarly, Aidoc offers AI-driven radiology support by analyzing medical imaging in real time to detect urgent and potentially life-threatening conditions such as strokes, pulmonary embolisms, and intracranial hemorrhages. By promptly highlighting critical findings, Aidoc enhances diagnostic speed and accuracy, ultimately improving patient outcomes and reducing the risk of complications.
Impact: Platforms like IBM Watson Health and PathAI enhance diagnostic accuracy, especially for complex or rare diseases, while tools such as Aidoc assist radiologists by rapidly identifying critical conditions in medical imaging. AI also powers chatbots and telehealth solutions, offering continuous support for chronic disease management and mental health care. These technologies improve health outcomes by enabling timely interventions and reducing healthcare system burdens. As demonstrated in recent studies, AI’s integration into public health has become essential for predictive analytics and personalized healthcare delivery [186].

8. Comparison with existing works

Several previously published studies have discussed AI in healthcare, but most remain narrow in focus and do not provide a unified, comprehensive perspective. Rong et al. [187] reviewed AI applications only in biomedicine, mainly focusing on diagnostics, data processing, and two case studies, but did not cover history, challenges, solutions, ethical issues, governance, or AI products. Shaheen [188] mainly focused on explaining the basic applications of AI in healthcare, including AI-supported drug discovery, AI-assisted clinical trials, and patient care systems. Aung et al. [189] reviewed the current applications of AI in healthcare, highlighting its benefits in supporting physician workflow, reducing administrative burden, and enhancing clinical knowledge; the article also discusses the challenges of training ML systems, concerns around accountability, and the need for proper governance and physician understanding. Sadeghi et al. [190] focused exclusively on XAI in healthcare; their review highlights the importance of explainability for trustworthy, transparent AI deployment in safety-critical medical settings and discusses challenges in implementing XAI in real clinical environments. Wubineh et al. [191] primarily emphasize the challenges and opportunities of implementing artificial intelligence in healthcare. Aminizadeh et al. [192] systematically reviewed how ML, DL, and distributed systems improve healthcare QoS by analyzing their applications, platforms, and algorithmic trends. Kasula [193] consolidates existing studies and highlights the need for collaboration among healthcare and technical communities. In contrast to the above works, the proposed survey offers a broader and more integrated overview of AI in healthcare.
It fills the gaps left by previous reviews by covering the historical evolution of AI, core applications, underlying techniques, key challenges, practical solutions, ethical and governance aspects, existing AI healthcare products, and future implications. By bringing all these dimensions together, the proposed work delivers a more comprehensive and cohesive perspective than existing literature.
Table 4. Comparison of proposed survey with existing surveys.
Author Prospects
1 2 3 4 5 6 7 8 9
Rong et al. [187]
Shaheen[188]
Aung et al. [189]
Sadeghi et al.[190]
Wubineh et al.[191]
Aminizadeh et al.[192]
Kasula[193]
Proposed survey
Note: 1-AI significance, 2- History, 3- Applications, 4-Techniques, 5-Challenges, 6- Solutions, 7-Ethical & Governance Aspects, 8-AI Products, 9- Future Implications.

9. Future Implications

With the continuous advancement of AI in healthcare, the coming years might witness significant advancements in diagnostic systems, clinical decision support, and patient management. Although current solutions address many technical, ethical, and privacy challenges in healthcare AI, several open questions and limitations remain. Understanding these gaps is crucial for guiding future research and enabling safe, effective, and equitable AI adoption across diverse clinical settings. Based on the reviewed studies from 2016 to 2026, several possible research directions and implications can be outlined:
  • Cloud and Edge Based AI Deployment: Future healthcare systems are anticipated to adopt integrated cloud edge computing frameworks to support real-time diagnostics, low-latency data processing, and secure decision-making. Such architectures will optimize computational efficiency while maintaining data privacy at distributed clinical nodes.
  • Expansion of AI-Enabled Wearable Technologies: The proliferation of AI-integrated wearable devices will enable continuous physiological monitoring, early anomaly detection, and proactive healthcare interventions. These systems are expected to play a central role in preventive medicine and personalized health analytics.
  • AI-Driven Remote Patient Monitoring (RPM): Remote monitoring platforms powered by AI will become fundamental to telemedicine ecosystems. Future frameworks are expected to integrate wearable sensors, fog/cloud computing infrastructures, and blockchain-enabled audit mechanisms to ensure secure, scalable, and reliable healthcare delivery.
  • Personalized FL for Non-IID Medical Data: A significant research direction involves the development of personalized FL models capable of handling non-IID medical datasets. Client-specific fine-tuning and adaptive aggregation strategies will improve generalization across heterogeneous hospitals, imaging modalities, and demographic populations while preserving patient privacy.
  • Secure Gradient Transmission Mechanisms: Although federated learning eliminates raw data sharing, exchanged gradients may still expose sensitive patient information through inference attacks. Future research should therefore prioritize robust and computationally efficient gradient protection mechanisms. Promising directions include differential privacy techniques tailored for medical imaging, gradient pruning and sparsification to minimize leakage risk, homomorphic encryption optimized for low-latency clinical environments, and secure multi-party computation with reduced overhead. Strengthening gradient security will be essential for ensuring trustworthy and regulation-compliant deployment of federated healthcare systems.
  • Lightweight Blockchain Architectures for Secure FL Coordination: Another promising avenue lies in designing lightweight permissioned blockchain architectures for FL that reduce computational overhead and communication latency while ensuring secure and verifiable model update exchange among multiple healthcare institutions.
  • Robustness Against Adversarial and Poisoning Attacks: Robustness against adversarial and poisoning attacks remains underexplored in federated healthcare systems. Future research should simulate realistic attack scenarios, develop anomaly detection for malicious client updates, and implement trust-aware and Byzantine-resilient aggregation mechanisms. Security validation must become a standard requirement to ensure safe and reliable deployment of federated AI in clinical environments.
  • Digital Twins and Immersive Healthcare Technologies: Emerging innovations, including digital twin technologies, augmented reality (AR), and virtual reality (VR), will enable highly personalized patient care, advanced medical simulations, and immersive diagnostic experiences.
  • AI Scribes and Clinical Automation: To reduce the administrative burden on clinicians, AI-powered medical scribes are anticipated to become an integral part of healthcare systems, allowing practitioners to focus more on patient-centric care and reducing burnout.
  • Personalized and Preventive Medicine: The future of healthcare is shifting toward precision medicine, where AI-based models will analyze multimodal data—including medical imaging, genomics, electronic health records, and sensor data to enable patient-specific diagnosis, risk prediction, and treatment planning.
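The gradient-protection direction listed above can be illustrated with a minimal DP-SGD-style routine: clip each gradient to a norm bound, then add Gaussian noise before it leaves the client. All parameter values here are illustrative; calibrating the noise to a formal privacy budget is the hard part in practice:

```python
import math
import random

def clip_and_noise(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a gradient vector to at most clip_norm in L2 norm, then add
    Gaussian noise to each coordinate (DP-SGD-style protection)."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, noise_std) for g in clipped]
```

With noise_std set to zero this reduces to plain clipping; in a federated setting, it is this noisy, clipped gradient, not the raw one, that would be transmitted from a hospital to the aggregation server.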

10. Conclusion

AI has emerged as a transformative force in modern healthcare, offering unprecedented opportunities to enhance diagnostic accuracy, operational efficiency, and patient-centered care. This review has examined the evolution of AI in healthcare, key application domains, and the major challenges that continue to restrict its broader clinical adoption. While AI-based techniques have demonstrated promising results in areas such as medical imaging, predictive analytics, remote monitoring, and healthcare operations, several limitations related to data privacy, data quality, interpretability, bias, security, and scalability remain significant. The findings of this review indicate that emerging strategies, including FL, blockchain-based frameworks, explainable AI, and data standardization approaches, provide viable directions for addressing these challenges. However, their effectiveness depends on careful system design, regulatory compliance, and alignment with clinical workflows. High model performance alone is insufficient for real-world deployment without transparency, robustness, and ethical accountability. Moreover, this study critically evaluates and compares existing methodologies, highlighting their strengths, limitations, and research gaps across diverse healthcare applications. Beyond summarizing current progress, this review systematically identifies critical future research directions that can shape the next phase of AI development in healthcare.
Overall, this review highlights that AI should be viewed as a complementary tool to clinical expertise rather than a replacement. Sustainable progress will require interdisciplinary collaboration among clinicians, researchers, engineers, and policymakers to ensure safe, equitable, and clinically validated integration of AI technologies across healthcare systems.

Data Availability Statement

No datasets were generated or analyzed during the current study; therefore, data sharing is not applicable to this article.

Conflicts of Interest

The authors have no relevant financial or non-financial interests to disclose.

References

  1. Malik, P.; Pathania, M.; Rathaur, V.K.; et al. Overview of artificial intelligence in medicine. Journal of family medicine and primary care 2019, 8, 2328–2331. [Google Scholar]
  2. Mintz, Y.; Brodie, R. Introduction to artificial intelligence in medicine. Minimally Invasive Therapy & Allied Technologies 2019, 28, 73–81. [Google Scholar] [CrossRef] [PubMed]
  3. Kaul, V.; Enslin, S.; Gross, S.A. History of artificial intelligence in medicine. Gastrointestinal endoscopy 2020, 92, 807–812. [Google Scholar] [CrossRef] [PubMed]
  4. Abdelwanis, M.; Simsekler, M.C.E.; Gabor, A.F.; Sleptchenko, A.; Omar, M. Artificial intelligence adoption challenges from healthcare providers’ perspectives: A comprehensive review and future directions. Safety Science 2026, 193, 107028. [Google Scholar] [CrossRef]
  5. Sarun, H.; Rotana, S.; Chhunla, C. The Role and Significance of Artificial Intelligence in Transforming Modern Society: Opportunities, Challenges, and Future Directions. Journal of Agriculture and Environment 2026, 3, 133–141. [Google Scholar]
  6. Çapuk, H.; Yiğit, M.F.; Uçar, M. Exploring the relationship between health professionals’ artificial intelligence literacy and their attitudes toward artificial intelligence. Informatics for Health and Social Care 2026, 1–12. [Google Scholar] [CrossRef]
  7. Haileamlak, A. The impact of COVID-19 on health and health systems. Ethiopian journal of health sciences 2021, 31, 1073. [Google Scholar]
  8. Khetrapal, S.; Bhatia, R. Impact of COVID-19 pandemic on health system & Sustainable Development Goal 3. Indian Journal of Medical Research 2020, 151, 395–399. [Google Scholar] [CrossRef]
  9. Xsolis. The Evolution of AI in Healthcare, 2023. Accessed: 12 June 2025.
  10. Buchanan, B.G. Research on expert systems. Technical report. 1981. [Google Scholar]
  11. Jones, L.; Golan, D.; Hanna, S.; Ramachandran, M. Artificial intelligence, machine learning and the evolution of healthcare: A bright future or cause for concern? Bone & joint research 2018, 7, 223–225. [Google Scholar]
  12. Hirani, R.; Noruzi, K.; Khuram, H.; Hussaini, A.S.; Aifuwa, E.I.; Ely, K.E.; Lewis, J.M.; Gabr, A.E.; Smiley, A.; Tiwari, R.K.; et al. Artificial intelligence and healthcare: a journey through history, present innovations, and future possibilities. Life 2024, 14, 557. [Google Scholar] [CrossRef]
  13. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: review, opportunities and challenges. Briefings in bioinformatics 2018, 19, 1236–1246. [Google Scholar] [CrossRef] [PubMed]
  14. Kim, H.E.; Cosa-Linan, A.; Santhanam, N.; Jannesari, M.; Maros, M.E.; Ganslandt, T. Transfer learning for medical image classification: a literature review. BMC medical imaging 2022, 22, 69. [Google Scholar] [CrossRef] [PubMed]
  15. Tjoa, E.; Guan, C. A survey on explainable artificial intelligence (xai): Toward medical xai. IEEE transactions on neural networks and learning systems 2020, 32, 4793–4813. [Google Scholar] [CrossRef] [PubMed]
  16. Repetto, S.; Maljkovic, I.; Lotto, M.; Cinà, A.E.; Vascon, S.; Roli, F. Evaluating the robustness of explainable AI in medical image recognition under natural and adversarial data corruption. Machine Learning 2026, 115, 4. [Google Scholar] [CrossRef]
  17. Sandhu, S.S.; Gorji, H.T.; Tavakolian, P.; Tavakolian, K.; Akhbardeh, A. Medical imaging applications of federated learning. Diagnostics 2023, 13, 3140. [Google Scholar] [CrossRef]
  18. Mir, B.A.; Abbas, S.R.; Lee, S.W. Federated Learning in Healthcare Ethics: A Systematic Review of Privacy-Preserving and Equitable Medical AI. Healthcare 2026, 14, 306. [Google Scholar] [CrossRef]
  19. Nasar, M.; Kausar, M.A.; Al Musalhi, N. Collaborative health intelligence: Federated learning as the foundation of future medical care. In Applied AI and Computational Intelligence in Diagnostics and Decision-Making; IGI Global Scientific Publishing, 2026; pp. 163–190. [Google Scholar]
  20. Ghosh, D.; Mehjabin, M.; Rayed, M.E.; Mridha, M.; Kabir, M.M. Advancements and challenges of federated learning in medical imaging: a systematic literature review. Artificial Intelligence Review 2026. [Google Scholar] [CrossRef]
  21. Shen, R.; Zhang, H.; Chai, B.; Wang, W.; Wang, G.; Yan, B.; Yu, J. BAFL-SVM: A blockchain-assisted federated learning-driven SVM framework for smart agriculture. High-Confidence Computing 2025, 5, 100243. [Google Scholar] [CrossRef]
  22. Reddy, G.R.; Kollu, V.N.; Elshafie, H.; Qamar, S.; et al. A novel blockchain-federated learning framework with quantum neural networks and wavelet transforms for secure IoT healthcare monitoring. Biomedical Signal Processing and Control 2026, 113, 108759. [Google Scholar] [CrossRef]
  23. Vhatkar, K.N.; Sontakke, P.V.; Pawar, A.S.; Shirode, U.R.; Salunke, D. Federated Deep Learning Model for Secured Data Transmission in the Healthcare Sector Using Adaptive and Attention-Based Residual Capsnet With Blockchain. Computational Intelligence 2026, 42, e70196. [Google Scholar] [CrossRef]
  24. Sukanya, M.; Jude Nirmal, V.; Richard Roy, G. FLPPBC: A Federated Learning-Driven Privacy-Preserving Model Using Private Permissioned Blockchain for Healthcare Systems. Indian Journal of Science and Technology 2026, 19, 59–70. [Google Scholar] [CrossRef]
  25. Abhisheka, B.; Biswas, S.K.; Purkayastha, B. Infusing Weighted Average Ensemble Diversity for Advanced Breast Cancer Detection. International Journal of Imaging Systems and Technology 2024, 34, e23146. [Google Scholar] [CrossRef]
  26. Abhisheka, B.; Biswas, S.K.; Purkayastha, B. HBNet: An integrated approach for resolving class imbalance and global local feature fusion for accurate breast cancer classification. Neural Computing and Applications 2024, 36, 8455–8472. [Google Scholar] [CrossRef]
  27. Wang, D.; Xu, J.; Zhang, Z.; Li, S.; Zhang, X.; Zhou, Y.; Zhang, X.; Lu, Y. Evaluation of rectal cancer circumferential resection margin using faster region-based convolutional neural network in high-resolution magnetic resonance images. Diseases of the Colon & Rectum 2020, 63, 143–151. [Google Scholar]
  28. Hendrix, W.; Hendrix, N.; Scholten, E.T.; van Ginneken, B.; Prokop, M.; Rutten, M.; Jacobs, C. Artificial intelligence for the detection of airway nodules in chest CT scans. European Radiology 2025, 1–11. [Google Scholar] [CrossRef]
  29. Zhang, P.; Gao, C.; Huang, Y.; Chen, X.; Pan, Z.; Wang, L.; Dong, D.; Li, S.; Qi, X. Artificial intelligence in liver imaging: methods and applications. Hepatology International 2024, 18, 422–434. [Google Scholar] [CrossRef]
  30. Vought, R.; Vought, V.; Shah, M.; Szirth, B.; Bhagat, N. EyeArt artificial intelligence analysis of diabetic retinopathy in retinal screening events. International Ophthalmology 2023, 43, 4851–4859. [Google Scholar] [CrossRef]
  31. Chen, J.; Wang, G.; Xia, K.; Wang, Z.; Liu, L.; Xu, X. Constructing an artificial intelligence-assisted system for the assessment of gastroesophageal valve function based on the hill classification (with video). BMC Medical Informatics and Decision Making 2025, 25, 144. [Google Scholar] [CrossRef]
  32. Liang, X.; Wang, X.; Chen, Y.; He, D.; Li, L.; Chen, G.; Li, J.; Li, J.; Liu, S.; Xu, Z. Predictive value of intraoperative contrast-enhanced ultrasound in functional recovery of non-traumatic cervical spinal cord injury. European Radiology 2024, 34, 2297–2309. [Google Scholar] [CrossRef]
  33. Xie, P.; Li, Y.; Deng, B.; Du, C.; Rui, S.; Deng, W.; Wang, M.; Boey, J.; Armstrong, D.G.; Ma, Y.; et al. An explainable machine learning model for predicting in-hospital amputation rate of patients with diabetic foot ulcer. International wound journal 2022, 19, 910–918. [Google Scholar] [CrossRef]
  34. Rajkomar, A.; Oren, E.; Chen, K.; Dai, A.M.; Hajaj, N.; Hardt, M.; Liu, P.J.; Liu, X.; Marcus, J.; Sun, M.; et al. Scalable and accurate deep learning with electronic health records. NPJ digital medicine 2018, 1, 18. [Google Scholar] [CrossRef]
  35. Cruz, J.A.; Wishart, D.S. Applications of machine learning in cancer prediction and prognosis. Cancer informatics 2006, 2, 117693510600200030. [Google Scholar] [CrossRef]
  36. Moor, M.; Rieck, B.; Horn, M.; Jutzeler, C.R.; Borgwardt, K. Early prediction of sepsis in the ICU using machine learning: a systematic review. Frontiers in medicine 2021, 8, 607952. [Google Scholar] [CrossRef] [PubMed]
  37. Tomašev, N.; Glorot, X.; Rae, J.W.; Zielinski, M.; Askham, H.; Saraiva, A.; Mottram, A.; Meyer, C.; Ravuri, S.; Protsyuk, I.; et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature 2019, 572, 116–119. [Google Scholar] [CrossRef] [PubMed]
  38. Gaudelet, T.; Day, B.; Jamasb, A.R.; Soman, J.; Regep, C.; Liu, G.; Hayter, J.B.; Vickers, R.; Roberts, C.; Tang, J.; et al. Utilizing graph machine learning within drug discovery and development. Briefings in bioinformatics 2021, 22. [Google Scholar] [CrossRef] [PubMed]
  39. Tang, X.; Dai, H.; Knight, E.; Wu, F.; Li, Y.; Li, T.; Gerstein, M. A survey of generative AI for de novo drug design: new frontiers in molecule and protein generation. Briefings in Bioinformatics 2024, 25. [Google Scholar] [CrossRef]
  40. Alizadehsani, R.; Oyelere, S.S.; Hussain, S.; Jagatheesaperumal, S.K.; Calixto, R.R.; Rahouti, M.; Roshanzamir, M.; De Albuquerque, V.H.C. Explainable artificial intelligence for drug discovery and development: a comprehensive survey. IEEE Access 2024, 12, 35796–35812. [Google Scholar] [CrossRef]
  41. Mak, K.K.; Wong, Y.H.; Pichika, M.R. Artificial intelligence in drug discovery and development. Drug discovery and evaluation: safety and pharmacokinetic assays 2024, 1461–1498. [Google Scholar]
  42. Ocana, A.; Pandiella, A.; Privat, C.; Bravo, I.; Luengo-Oroz, M.; Amir, E.; Gyorffy, B. Integrating artificial intelligence in drug discovery and early drug development: a transformative approach. Biomarker Research 2025, 13, 45. [Google Scholar] [CrossRef]
  43. Kavitha, M.; Roobini, S.; Prasanth, A.; Sujaritha, M. Systematic view and impact of artificial intelligence in smart healthcare systems, principles, challenges and applications. Machine learning and artificial intelligence in healthcare systems 2023, 25–56. [Google Scholar]
  44. Kim, H.K. The effects of artificial intelligence chatbots on women’s health: A systematic review and meta-analysis. Healthcare 2024, 12, 534. [Google Scholar] [CrossRef]
  45. Singhal, K.; Tu, T.; Gottweis, J.; Sayres, R.; Wulczyn, E.; Amin, M.; Hou, L.; Clark, K.; Pfohl, S.R.; Cole-Lewis, H.; et al. Toward expert-level medical question answering with large language models. Nature Medicine 2025, 31, 943–950. [Google Scholar] [CrossRef]
  46. García-Méndez, S.; de Arriba-Pérez, F. Detecting and Explaining Postpartum Depression in Real-Time with Generative Artificial Intelligence. Applied Artificial Intelligence 2025, 39, 2515063. [Google Scholar] [CrossRef]
  47. Vahdati, M.; Laamarti, F.; El Saddik, A. Meta-review of wearable devices for healthcare in the Metaverse. ACM Transactions on Multimedia Computing, Communications and Applications 2025, 21, 1–36. [Google Scholar] [CrossRef]
  48. Ghaffour, J.e.; Ahajjam, S.; Taqafi, I.; Ezzati, A. Artificial Intelligence in Remote Healthcare. In Proceedings of the International Conference on Intelligent Systems and Digital Applications, 2025; Springer; pp. 230–240. [Google Scholar]
  49. Nigar, N. AI in remote patient monitoring. In Transformation in health care: Game-changers in digitalization, technology, AI and longevity; Springer, 2025; pp. 245–259. [Google Scholar]
  50. Nurmi, J.; Lohan, E.S. Systematic review on machine-learning algorithms used in wearable-based eHealth data analysis. IEEE Access 2021, 9, 112221–112235. [Google Scholar]
  51. Gautam, N.; Ghanta, S.N.; Mueller, J.; Mansour, M.; Chen, Z.; Puente, C.; Ha, Y.M.; Tarun, T.; Dhar, G.; Sivakumar, K.; et al. Artificial intelligence, wearables and remote monitoring for heart failure: current and future applications. Diagnostics 2022, 12, 2964. [Google Scholar] [CrossRef] [PubMed]
  52. Gagnon, M.P.; Ouellet, S.; Attisso, E.; Supper, W.; Amil, S.; Rhéaume, C.; Paquette, J.S.; Chabot, C.; Laferrière, M.C.; Sasseville, M. Wearable devices for supporting chronic disease self-management: scoping review. Interactive journal of medical research 2024, 13, e55925. [Google Scholar] [CrossRef]
  53. Suryawanshi, V.; Bhoyar, V.; et al. The role of AI in enhancing hospital operational efficiency and patient care. Multidisciplinary Reviews 2025, 8, 2025153–2025153. [Google Scholar] [CrossRef]
  54. Hong, W.S.; Haimovich, A.D.; Taylor, R.A. Predicting hospital admission at emergency department triage using machine learning. PloS one 2018, 13, e0201016. [Google Scholar] [CrossRef]
  55. Wu, Q.; Han, J.; Yan, Y.; Kuo, Y.H.; Shen, Z.J.M. Reinforcement learning for healthcare operations management: methodological framework, recent developments, and future research directions. Health Care Management Science 2025, 1–36. [Google Scholar] [CrossRef]
  56. Saria, S.; Butte, A.; Sheikh, A. Better medicine through machine learning: what’s real, and what’s artificial? PLoS Medicine 2018, 15, e1002721. [Google Scholar]
  57. Letourneau-Guillon, L.; Camirand, D.; Guilbert, F.; Forghani, R. Artificial intelligence applications for workflow, process optimization and predictive analytics. Neuroimaging Clinics of North America 2020, 30, e1–e15. [Google Scholar] [CrossRef]
  58. Suresh, N.; Selvakumar, A.; Sridhar, G.; et al. Operational efficiency and cost reduction: the role of AI in healthcare administration. In Revolutionizing the Healthcare Sector with AI; IGI Global, 2024; pp. 262–272. [Google Scholar]
  59. Virmani, N.; Singh, R.K.; Agarwal, V.; Aktas, E. Artificial intelligence applications for responsive healthcare supply chains: A decision-making framework. IEEE Transactions on Engineering Management 2024, 71, 8591–8605. [Google Scholar] [CrossRef]
  60. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE symposium on security and privacy (SP), 2017; IEEE; pp. 3–18. [Google Scholar]
  61. Kaushik, A.; Barcellona, C.; Mandyam, N.K.; Tan, S.Y.; Tromp, J. Challenges and opportunities for data sharing related to artificial intelligence tools in health Care in low-and Middle-Income Countries: systematic review and case study from Thailand. Journal of Medical Internet Research 2025, 27, e58338. [Google Scholar] [CrossRef]
  62. Kaissis, G.A.; Makowski, M.R.; Rückert, D.; Braren, R.F. Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence 2020, 2, 305–311. [Google Scholar] [CrossRef]
  63. O’herrin, J.K.; Fost, N.; Kudsk, K.A. Health Insurance Portability Accountability Act (HIPAA) regulations: effect on medical record research. Annals of surgery 2004, 239, 772–778. [Google Scholar] [CrossRef] [PubMed]
  64. Botsis, T.; Hartvigsen, G.; Chen, F.; Weng, C. Secondary use of EHR: data quality issues and informatics opportunities. Summit on Translational Bioinformatics 2010, 2010, 1. [Google Scholar]
  65. Zhu, C.; Xin, J.; Trinh, T.K. Data quality challenges and governance frameworks for AI implementation in supply chain management. Pinnacle Academic Press Proceedings Series 2025, 2, 28–43. [Google Scholar]
  66. Myakala, P.K. Scaling AI with Limited Labeled Data: A Self-Supervised Learning Approach. ICCK Transactions on Emerging Topics in Artificial Intelligence 2025, 2, 26–35. [Google Scholar] [CrossRef]
  67. Jeon, Y.; Hwang, C.; Chen, X. Empowering Medical Data Labeling for Non-Experts with DANNY: Enhancing Accuracy and Mitigating Over-Reliance on AI. In Proceedings of the 30th International Conference on Intelligent User Interfaces, 2025; pp. 624–640. [Google Scholar]
  68. Chinta, S.V.; Wang, Z.; Palikhe, A.; Zhang, X.; Kashif, A.; Smith, M.A.; Liu, J.; Zhang, W. AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias. arXiv 2024, arXiv:2407.19655. [CrossRef]
  69. Seyyed-Kalantari, L.; Zhang, H.; McDermott, M.B.; Chen, I.Y.; Ghassemi, M. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature medicine 2021, 27, 2176–2182. [Google Scholar] [CrossRef]
  70. Van der Velden, B.H.; Kuijf, H.J.; Gilhuijs, K.G.; Viergever, M.A. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Medical Image Analysis 2022, 79, 102470. [Google Scholar] [CrossRef]
  71. Habiba, U.e.; Habib, M.K.; Bogner, J.; Fritzsch, J.; Wagner, S. How do ML practitioners perceive explainability? an interview study of practices and challenges. Empirical Software Engineering 2025, 30, 18. [Google Scholar] [CrossRef]
  72. Ahangar, M.N.; Farhat, Z.; Sivanathan, A. AI trustworthiness in manufacturing: challenges, toolkits, and the path to industry 5.0. Sensors 2025, 25, 4357. [Google Scholar] [CrossRef]
  73. Gerke, S.; Minssen, T.; Cohen, G. Ethical and legal challenges of artificial intelligence-driven healthcare. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, 2020; pp. 295–336. [Google Scholar]
  74. Holzinger, A.; Langs, G.; Denk, H.; Zatloukal, K.; Müller, H. Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2019, 9, e1312. [Google Scholar] [CrossRef] [PubMed]
  75. Samek, W.; Wiegand, T.; Müller, K.R. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. arXiv 2017, arXiv:1708.08296. [CrossRef]
  76. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys (CSUR) 2018, 51, 1–42. [Google Scholar] [CrossRef]
  77. Crawford, K.; Calo, R. There is a blind spot in AI research. Nature 2016, 538, 311–313. [Google Scholar] [CrossRef]
  78. Barocas, S.; Selbst, A.D. Big Data’s Disparate Impact. California Law Review 2016, 104, 671–732. [Google Scholar] [CrossRef]
  79. Chen, I.Y.; Johansson, F.D.; Sontag, D. Why is my classifier discriminatory? In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), 2018; pp. 3539–3550. [Google Scholar]
  80. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef]
  81. Haenssle, H.A.; et al. Reply to ’Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists’. Annals of Oncology 2019. [Google Scholar] [CrossRef]
  82. Ward-Peterson, M.; Hill, S.; Hurley, D.; Harlan, S. Association between race/ethnicity and survival of melanoma patients in the United States over 3 decades. Medicine 2016, 95, e3315. [Google Scholar] [CrossRef]
  83. Al-Otaibi, S.; Ayouni, S.; Sarwar, N.; Irshad, A.; Ullah, F. AI-driven security framework for medical sensor networks: enhancing privacy and trust in smart healthcare systems. Cluster Computing 2025, 28, 408. [Google Scholar] [CrossRef]
  84. Gupta, S.; Kapoor, M.; Debnath, S.K. Challenges and Risks of AI-Enabled Healthcare Security. In Artificial Intelligence-Enabled Security for Healthcare Systems: Safeguarding Patient Data and Improving Services; Springer, 2025; pp. 101–112. [Google Scholar]
  85. Tschandl, P.; Ružička, E.; Akay, B.N.; Argenziano, G.; Babilon, S.; Braun, R.P.; Cabo, H.; Cannizzaro, G.; Carrera, C.; Dalle, S.; et al. Clinically relevant vulnerabilities of deep learning systems for skin cancer diagnosis. Nature Medicine 2021, 27, 848–854. [Google Scholar] [CrossRef]
  86. El-Saleh, A.A.; Sheikh, A.M.; Albreem, M.A.; Honnurvali, M.S. The internet of medical things (IoMT): opportunities and challenges. Wireless networks 2025, 31, 327–344. [Google Scholar] [CrossRef]
  87. Kioskli, K.; Grigoriou, E.; Islam, S.; Yiorkas, A.M.; Christofi, L.; Mouratidis, H. A risk and conformity assessment framework to ensure security and resilience of healthcare systems and medical supply chain. International Journal of Information Security 2025, 24, 1–28. [Google Scholar] [CrossRef]
  88. Hu, Y.; Kuang, W.; Qin, Z.; Li, K.; Zhang, J.; Gao, Y.; Li, W.; Li, K. Artificial intelligence security: Threats and countermeasures. ACM Computing Surveys (CSUR) 2021, 55, 1–36. [Google Scholar] [CrossRef]
  89. Rajpurkar, P.; Irvin, J.; Zhu, K. AI in healthcare: The challenges of developing models that work. The Lancet Digital Health 2022, 4, e425–e435. [Google Scholar]
  90. Han, S.; Mao, H.; Dally, W.J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv 2015, arXiv:1510.00149. [CrossRef]
  91. Zech, J.R.; Badgeley, M.A.; Liu, M.; Costa, A.B.; Titano, J.J.; Oermann, E.K. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs. PLOS Medicine 2018, 15, e1002683. [Google Scholar] [CrossRef]
  92. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine 2020, 37, 50–60. [Google Scholar] [CrossRef]
  93. Holzinger, A.; Biemann, C.; Pattichis, C.S.; Kell, D.B. What do we need to build explainable AI systems for the medical domain? arXiv 2017, arXiv:1712.09923. [CrossRef]
  94. He, J.; Baxter, S.L.; Xu, J.; Xu, J.; Zhou, X.; Zhang, K. The practical implementation of artificial intelligence technologies in medicine. Nature Medicine 2019, 25, 30–36. [Google Scholar] [CrossRef]
  95. Esteva, A.; Robicquet, A.; Ramsundar, B. A guide to deep learning in healthcare. Nature Medicine 2019, 25, 24–29. [Google Scholar] [CrossRef] [PubMed]
  96. Xu, Y.; Liu, Q.; Zhang, Y. Edge computing for deep learning in real-time health monitoring. IEEE Network 2021, 35, 156–162. [Google Scholar]
  97. Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Communications of the ACM 2020, 63, 54–63. [Google Scholar] [CrossRef]
  98. Choi, E.; Bahadori, M.; Schuetz, A.; Stewart, W.; Sun, J. RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism. Advances in Neural Information Processing Systems (NeurIPS) 2016, 29. [Google Scholar]
  99. Chen, I.Y.; Szolovits, P.; Ghassemi, M. Scalable and accurate deep learning with electronic health records. npj Digital Medicine 2021, 4, 1–10. [Google Scholar]
  100. Li, Y.; Deng, C.; Hu, C. Edge AI: On-demand deep learning model co-inference with device-edge synergy. In Proceedings of the ACM/IEEE Symposium on Edge Computing, 2019; pp. 31–44. [Google Scholar]
  101. Wang, Z.; Ye, M.; Kumar, A. Reliable real-time AI for healthcare monitoring and prediction. IEEE Transactions on Network and Service Management 2020, 17, 1456–1469. [Google Scholar]
  102. Li, J.; Meng, Y.; Ma, L.; Du, S.; Zhu, H.; Pei, Q.; Shen, X. A federated learning based privacy-preserving smart healthcare system. IEEE Transactions on Industrial Informatics 2021, 18. [Google Scholar] [CrossRef]
  103. Wang, H.; Wang, Q.; Ding, Y.; Tang, S.; Wang, Y. Privacy-preserving federated learning based on partial low-quality data. Journal of Cloud Computing 2024, 13, 62. [Google Scholar] [CrossRef]
  104. Liu, B.; Yan, B.; Zhou, Y.; Yang, Y.; Zhang, Y. Experiments of federated learning for COVID-19 chest X-ray images. arXiv 2020, arXiv:2007.05592. [CrossRef]
  105. Feki, I.; Ammar, S.; Kessentini, Y.; Muhammad, K. Federated learning for COVID-19 screening from Chest X-ray images. Applied Soft Computing 2021, 106, 107330. [Google Scholar] [CrossRef] [PubMed]
  106. Yan, Z.; Wicaksana, J.; Wang, Z.; Yang, X.; Cheng, K.T. Variation-aware federated learning with multi-source decentralized medical image data. IEEE Journal of Biomedical and Health Informatics 2020, 25, 2615–2628. [Google Scholar] [CrossRef] [PubMed]
  107. Wicaksana, J.; Yan, Z.; Yang, X.; Liu, Y.; Fan, L.; Cheng, K.T. Customized federated learning for multi-source decentralized medical image classification. IEEE Journal of Biomedical and Health Informatics 2022, 26, 5596–5607. [Google Scholar] [CrossRef] [PubMed]
  108. Zhang, M.; Qu, L.; Singh, P.; Kalpathy-Cramer, J.; Rubin, D.L. Splitavg: A heterogeneity-aware federated deep learning method for medical imaging. IEEE Journal of Biomedical and Health Informatics 2022, 26, 4635–4644. [Google Scholar] [CrossRef]
  109. Johnson, A.E.W.; Pollard, T.J.; Shen, L. MIMIC-III, a freely accessible critical care database. Scientific Data 2016, 3, 160035. [Google Scholar] [CrossRef]
  110. Kondrateva, E.; Druzhinina, P.; Dalechina, A. Negligible effect of brain MRI data preprocessing for tumor segmentation. arXiv 2022, arXiv:2204.07954. [Google Scholar] [CrossRef]
  111. Reinhold, J.C.; Dewey, B.E.; Carass, A.; Prince, J.L. Multi-site harmonization for lung CT radiomics. Radiology: Artificial Intelligence 2021, 3, e210070. [Google Scholar]
  112. Hripcsak, G.; Duke, J.D.; Shah, N.H.; Reich, C.G.; Huser, V.; Schuemie, M.J.; Suchard, M.A.; Park, R.W.; Wong, I.C.K.; Rijnbeek, P.R.; et al. Observational Health Data Sciences and Informatics (OHDSI): opportunities for observational researchers. In MEDINFO 2015: eHealth-enabled Health; IOS Press, 2015; pp. 574–578. [Google Scholar]
  113. Banerjee, I.; Chen, M.C.; Lungren, M.P. Comparison of NLP Preprocessing Approaches for Clinical Radiology Reports. Journal of Biomedical Informatics 2019, 100, 103327. [Google Scholar] [CrossRef]
  114. Sheller, M.J.; Edwards, B.; Reina, G.A. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports 2020, 10, 12598. [Google Scholar] [CrossRef]
  115. Yazdinejad, A.; Srivastava, G.; Parizi, R.M.; Dehghantanha, A.; Choo, K.K.R.; Aledhari, M. Decentralized authentication of distributed patients in hospital networks using blockchain. IEEE journal of biomedical and health informatics 2020, 24, 2146–2156. [Google Scholar] [CrossRef]
  116. Chen, Y.; Ding, S.; Xu, Z.; Zheng, H.; Yang, S. Blockchain-based medical records secure storage and medical service framework. Journal of medical systems 2019, 43, 1–9. [Google Scholar] [CrossRef]
  117. Li, C.; Cao, Y.; Hu, Z.; Yoshikawa, M. Blockchain-based bidirectional updates on fine-grained medical data. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering Workshops (ICDEW), 2019; IEEE; pp. 22–27. [Google Scholar]
  118. Hang, L.; Choi, E.; Kim, D.H. A novel EMR integrity management based on a medical blockchain platform in hospital. Electronics 2019, 8, 467. [Google Scholar] [CrossRef]
  119. Hasan, H.R.; Salah, K.; Jayaraman, R.; Arshad, J.; Yaqoob, I.; Omar, M.; Ellahham, S. Blockchain-based solution for COVID-19 digital medical passports and immunity certificates. IEEE Access 2020, 8, 222093–222108. [Google Scholar] [CrossRef] [PubMed]
  120. Alshamsi, H.; Alteneiji, S.; Madine, M.; Musamih, A.; Nemer, M.; Salah, K.; Jayaraman, R.; Antony, J.; Omar, M. Blockchain-based resale and leasing of pre-owned medical equipment. Technology in Society 2024, 77, 102549. [Google Scholar] [CrossRef]
  121. Omar, I.A.; Hasan, H.R.; AlKhader, W.; Jayaraman, R.; Salah, K.; Omar, M. Blockchain-based trusted accountability in the maintenance of medical imaging equipment. Expert Systems with Applications 2024, 241, 122718. [Google Scholar] [CrossRef]
  122. Myrzashova, R.; Alsamhi, S.H.; Hawbani, A.; Curry, E.; Guizani, M.; Wei, X. Safeguarding patient data-sharing: Blockchain-enabled federated learning in medical diagnostics. IEEE Transactions on Sustainable Computing, 2024. [Google Scholar]
  123. Zhou, X.; Huang, W.; Liang, W.; Yan, Z.; Ma, J.; Pan, Y.; Wang, K.I.K. Federated distillation and blockchain empowered secure knowledge sharing for internet of medical things. Information Sciences 2024, 662, 120217. [Google Scholar] [CrossRef]
  124. Kumar, R.; Bernard, C.M.; Ullah, A.; Khan, R.U.; Kumar, J.; Kulevome, D.K.; Yunbo, R.; Zeng, S. Privacy-preserving blockchain-based federated learning for brain tumor segmentation. Computers in Biology and Medicine 2024, 177, 108646. [Google Scholar] [CrossRef]
  125. Chikhaoui, E.; Alajmi, A.; Larabi-Marie-sainte, S. Artificial intelligence applications in healthcare sector: Ethical and legal challenges. Emerging Science Journal 2022, 6, 717–738. [Google Scholar] [CrossRef]
  126. Stogiannos, N.; Georgiadou, E.; Rarri, N.; Malamateniou, C. Ethical AI: a qualitative study exploring ethical challenges and solutions on the use of AI in medical imaging. European Journal of Radiology Artificial Intelligence 2025, 1, 100006. [Google Scholar] [CrossRef]
  127. Naik, N.; Hameed, B.; Shetty, D.K.; Swain, D.; Shah, M.; Paul, R.; Aggarwal, K.; Ibrahim, S.; Patil, V.; Smriti, K.; et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Frontiers in surgery 2022, 9, 862322. [Google Scholar] [CrossRef]
  128. Pham, T. Ethical and legal considerations in healthcare AI: innovation and policy for safe and fair use. Royal Society Open Science 2025, 12, 241873. [Google Scholar] [CrossRef] [PubMed]
  129. Bataineh, A.Q.; Mushtaha, A.S.; Abu-AlSondos, I.A.; Aldulaimi, S.H.; Abdeldayem, M. Ethical & legal concerns of artificial intelligence in the healthcare sector. In Proceedings of the 2024 ASU International Conference in Emerging Technologies for Sustainability and Intelligent Systems (ICETSIS), 2024; IEEE; pp. 491–495. [Google Scholar]
  130. Tamhane, P. Ethical and Legal Implications of AI in Healthcare: A Global Perspective. Journal Publication of International Research for Engineering and Management (JOIREM) 2025, 5. [Google Scholar]
  131. Hossain, M.I.; Zamzmi, G.; Mouton, P.R.; Salekin, M.S.; Sun, Y.; Goldgof, D. Explainable AI for medical data: Current methods, limitations, and future directions. ACM Computing Surveys 2025, 57, 1–46. [Google Scholar] [CrossRef]
  132. Sun, Q.; Akman, A.; Schuller, B.W. Explainable artificial intelligence for medical applications: A review. ACM Transactions on Computing for Healthcare 2025, 6, 1–31. [Google Scholar] [CrossRef]
  133. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016; pp. 1135–1144. [Google Scholar]
  134. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Advances in neural information processing systems 2017, 30. [Google Scholar]
  135. Nahiduzzaman, M.; Abdulrazak, L.F.; Ayari, M.A.; Khandakar, A.; Islam, S.R. A novel framework for lung cancer classification using lightweight convolutional neural networks and ridge extreme learning machine model with SHapley Additive exPlanations (SHAP). Expert Systems with Applications 2024, 248, 123392. [Google Scholar] [CrossRef]
  136. Moujahid, H.; Cherradi, B.; Al-Sarem, M.; Bahatti, L.; Eljialy, A.B.A.M.Y.; Alsaeedi, A.; Saeed, F. Combining CNN and Grad-Cam for COVID-19 Disease Prediction and Visual Explanation. Intelligent Automation & Soft Computing 2022, 32. [Google Scholar]
  137. Mir, A.N.; Rizvi, D.R.; Ahmad, M.R. Enhancing histopathological image analysis: An explainable vision transformer approach with comprehensive interpretation methods and evaluation of explanation quality. Engineering Applications of Artificial Intelligence 2025, 149, 110519. [Google Scholar] [CrossRef]
  138. Agnihotri, A.; Kohli, N. (XAI-AGUWEM) Explainable Artificial Intelligence-based Attention Guided Uncertainty Weighting Ensemble Model for the Classification of COVID-19 and Pneumonia in X-ray Medical Images. Recent Patents on Electrical Engineering, 2025. [Google Scholar]
  139. Chauhan, K.; Tiwari, R.; Freyberg, J.; Shenoy, P.; Dvijotham, K. Interactive concept bottleneck models. In Proceedings of the AAAI Conference on Artificial Intelligence, 2023; Vol. 37, pp. 5948–5955. [Google Scholar] [CrossRef]
  140. Lou, X.; Yan, D.; Shen, W.; Yan, Y.; Xie, J.; Zhang, J. Uncertainty-aware reward model: Teaching reward models to know what is unknown. arXiv 2024, arXiv:2410.00847. [CrossRef]
  141. Feng, Q.; Du, M.; Zou, N.; Hu, X. Fair machine learning in healthcare: A survey. IEEE Transactions on Artificial Intelligence, 2024. [Google Scholar]
  142. Yang, Y.; Lin, M.; Zhao, H.; Peng, Y.; Huang, F.; Lu, Z. A survey of recent methods for addressing AI fairness and bias in biomedicine. Journal of Biomedical Informatics 2024, 154, 104646. [Google Scholar] [CrossRef] [PubMed]
  143. Chinta, S.V.; Wang, Z.; Palikhe, A.; Zhang, X.; Kashif, A.; Smith, M.A.; Liu, J.; Zhang, W. AI-Driven Healthcare: A Review on Ensuring Fairness and Mitigating Bias. arXiv 2024, arXiv:2407.19655. [CrossRef] [PubMed]
  144. Chen, R.J.; Chen, T.Y.; Lipkova, J.; Wang, J.J.; Williamson, D.F.; Lu, M.Y.; Sahai, S.; Mahmood, F. Algorithm fairness in AI for medicine and healthcare. arXiv 2021, arXiv:2110.00603. [CrossRef]
  145. Rauniyar, A.; Hagos, D.H.; Jha, D.; Håkegård, J.E.; Bagci, U.; Rawat, D.B.; Vlassov, V. Federated learning for medical applications: A taxonomy, current trends, challenges, and future research directions. IEEE Internet of Things Journal 2023, 11, 7374–7398. [Google Scholar] [CrossRef]
  146. Xu, J.; Xiao, Y.; Wang, W.H.; Ning, Y.; Shenkman, E.A.; Bian, J.; Wang, F. Algorithmic fairness in computational medicine. EBioMedicine 2022, 84. [Google Scholar] [CrossRef]
  147. Lund, B.; Orhan, Z.; Mannuru, N.R.; Bevara, R.V.K.; Porter, B.; Vinaih, M.K.; Bhaskara, P. Standards, frameworks, and legislation for artificial intelligence (AI) transparency. AI and Ethics 2025, 1–17. [Google Scholar] [CrossRef]
  148. Kennedy-Mayo, D.; Gord, J. “Model Cards for Model Reporting” in 2024: Reclassifying Category of Ethical Considerations in Terms of Trustworthiness and Risk Management. In Proceedings of the Future of Information and Communication Conference, 2025; Springer; pp. 179–196. [Google Scholar]
  149. Jain, A.; Brooks, J.R.; Alford, C.C.; Chang, C.S.; Mueller, N.M.; Umscheid, C.A.; Bierman, A.S. Awareness of racial and ethnic bias and potential solutions to address bias with use of health care algorithms. JAMA Health Forum 2023, 4, e231197. [Google Scholar] [CrossRef]
  150. Shayea, G.G.; Zabil, M.H.M.; Habeeb, M.A.; Khaleel, Y.L.; Albahri, A. Strategies for protection against adversarial attacks in AI models: An in-depth review. Journal of Intelligent Systems 2025, 34, 20240277. [Google Scholar] [CrossRef]
  151. Vatian, A.; Gusarova, N.; Dobrenko, N.; Dudorov, S.; Nigmatullin, N.; Shalyto, A.; Lobantsev, A. Impact of adversarial examples on the efficiency of interpretation and use of information from high-tech medical images. In Proceedings of the 2019 24th Conference of Open Innovations Association (FRUCT), 2019; IEEE; pp. 472–478. [Google Scholar]
  152. Rao, C.; Cao, J.; Zeng, R.; Chen, Q.; Fu, H.; Xu, Y.; Tan, M. A thorough comparison study on adversarial attacks and defenses for common thorax disease classification in chest X-rays. arXiv 2020, arXiv:2003.13969. [CrossRef]
  153. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the Proceedings of the IEEE international conference on computer vision, 2015; pp. 1026–1034. [Google Scholar]
  154. Ma, X.; Niu, Y.; Gu, L.; Wang, Y.; Zhao, Y.; Bailey, J.; Lu, F. Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognition 2021, 110, 107332. [Google Scholar] [CrossRef]
  155. Li, X.; Pan, D.; Zhu, D. Defending against adversarial attacks on medical imaging AI system, classification or detection? In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), 2021; IEEE; pp. 1677–1681. [Google Scholar]
  156. Li, X.; Zhu, D. Robust detection of adversarial attacks on medical images. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), 2020; IEEE; pp. 1154–1158. [Google Scholar]
  157. Gowda, D.; Chaithra, S.; Gujar, S.S.; Shaikh, S.F.; Ingole, B.S.; Reddy, N.S. Scalable ai solutions for iot-based healthcare systems using cloud platforms. In Proceedings of the 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud)(I-SMAC), 2024; IEEE; pp. 156–162. [Google Scholar]
  158. Gowda, D.; Chaithra, S.; Gujar, S.S.; Shaikh, S.F.; Ingole, B.S.; Reddy, N.S. Scalable AI Solutions for IoT-based Healthcare Systems using Cloud Platforms. In Proceedings of the 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2024; pp. 156–162. [Google Scholar] [CrossRef]
  159. Sai, S.; Chamola, V.; Choo, K.K.R.; Sikdar, B.; Rodrigues, J.J.P.C. Confluence of Blockchain and Artificial Intelligence Technologies for Secure and Scalable Healthcare Solutions: A Review. IEEE Internet of Things Journal 2023, 10, 5873–5896. [Google Scholar] [CrossRef]
  160. Ali, A.; Ali, H.; Saeed, A.; Khan, A.A.; Tin, T.T.; Assam, M.; Ghadi, Y.Y.; Mohamed, H.G. Blockchain-Powered Healthcare Systems: Enhancing Scalability and Security with Hybrid Deep Learning. Sensors 2023, 23, 7740. [Google Scholar] [CrossRef] [PubMed]
  161. Ali, A.; Ali, H.; Saeed, A.; Ahmed Khan, A.; Tin, T.T.; Assam, M.; Ghadi, Y.Y.; Mohamed, H.G. Blockchain-powered healthcare systems: enhancing scalability and security with hybrid deep learning. Sensors 2023, 23, 7740. [Google Scholar] [CrossRef] [PubMed]
  162. Dhanka, S.; Sharma, A.; Kumar, A.; Maini, S.; Vundavilli, H. Advancements in Hybrid Machine Learning Models for Biomedical Disease Classification Using Integration of Hyperparameter-Tuning and Feature Selection Methodologies: A Comprehensive Review. Archives of Computational Methods in Engineering 2025. [Google Scholar] [CrossRef]
  163. Yang, S.; Li, Y.; Nie, J.; Ercisli, S.; Gadekallu, T.R. Enhanced Brain Tumor Detection Using DCGAN Augmentation and Optimized EfficientDet in IoT-Based Healthcare Industry 5.0. IEEE Internet of Things Journal 2025. [Google Scholar] [CrossRef]
  164. Cloud Computing Framework for Healthcare: Architecture, Applications, and Challenges. Healthcare 2023, 11, 4944. [Google Scholar]
  165. Waqar, M.; et al. Optimized Explainable ConvNeXt Framework for Monkeypox Diagnosis: Balancing Accuracy and Interpretability. SLAS Technology 2025, 33, 100336. [Google Scholar] [CrossRef]
  166. Wu, X.; Fu, Y.; Bian, L.; Feng, M.; Lu, X.; He, D.; Han, Y.; Dong, S. Single Drive Multi-Training Mode Adaptive Wrist Rehabilitation Device. In Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA), 2025; IEEE. [Google Scholar] [CrossRef]
  167. Esmaeilzadeh, P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artificial Intelligence in Medicine 2024, 151, 102861. [Google Scholar] [CrossRef]
  168. Khan, S.; Naz, N.S.; Mazhar, T.; Tariq, M.U.; Shahzad, T.; Guizani, S.; Hamam, H. Green AI techniques for reducing energy consumption in AI systems. Array 2025, 100652. [Google Scholar] [CrossRef]
  169. Bogoch, I.I.; Watts, A.; Thomas-Bachli, A.; Huber, C.; Kraemer, M.U.; Khan, K. Pneumonia of unknown etiology in Wuhan, China: Potential for international spread via commercial air travel. Journal of Travel Medicine 2020, 27, taaa008. [Google Scholar] [CrossRef]
  170. Khan, K.; McNabb, S.J.; Memish, Z.A.; Eckhardt, R.; Hu, W.; Kossowsky, D.; et al. Infectious disease surveillance and modeling across geographic frontiers and scientific disciplines. Journal of Travel Medicine 2020, 27, taz020. [Google Scholar] [CrossRef]
  171. MacIntyre, C.R.; Chen, X.; Kunasekaran, M.; Quigley, A.; Lim, S.; Stone, H.; et al. Artificial intelligence in public health: The potential of epidemic early warning systems. Journal of International Medical Research 2023, 51, 3000605231159335. [Google Scholar] [CrossRef] [PubMed]
  172. Qin, X.; Jiang, X.; Wang, J.; Zhao, Y. Exploring big data in the early detection of infectious diseases. Frontiers in Public Health 2024, 12, 1371852. [Google Scholar] [CrossRef]
  173. Freifeld, C.C.; Mandl, K.; Reis, B.Y.; Brownstein, J.S. HealthMap: Global infectious disease monitoring through automated classification and visualization of internet media reports. Journal of the American Medical Informatics Association 2008, 15, 150–157. [Google Scholar] [CrossRef] [PubMed]
  174. Brownstein, J.S.; Freifeld, C.C.; Madoff, L.C. Digital disease detection — harnessing the web for public health surveillance. New England Journal of Medicine 2009, 360, 2153–2157. [Google Scholar] [CrossRef]
  175. Milinovich, G.; Williams, S.; Clements, A.; Hu, W. Role of big data in the early detection of Ebola and other emerging infectious diseases. The Lancet Global Health 2015, 3, e20–e21. [Google Scholar] [CrossRef]
  176. Bhatia, S.; Lassmann, B.; Cohn, E.; Desai, A.N.; Carrion, M.; Kraemer, M.U.; et al. Using digital surveillance tools for near real-time mapping of the risk of infectious disease spread. NPJ Digital Medicine 2021, 4, 73. [Google Scholar] [CrossRef]
  177. Hadfield, J.; Megill, C.; Bell, S.M.; Huddleston, J.; Potter, B.; Callender, C.; Sagulenko, P.; Bedford, T.; Neher, R.A. Nextstrain: real-time tracking of pathogen evolution. Bioinformatics 2018, 34, 4121–4123. [Google Scholar] [CrossRef]
  178. Sagulenko, P.; Puller, V.; Neher, R.A. TreeTime: Maximum-likelihood phylodynamic analysis. Virus Evolution 2018, 4, vex042. [Google Scholar] [CrossRef]
  179. Schmidt, M.; Arshad, M.; Bernhart, S.H.; Hakobyan, S.; Arakelyan, A.; Löffler-Wirth, H.; et al. The evolving faces of the SARS-CoV-2 genome. Viruses 2021, 13, 1764. [Google Scholar] [CrossRef]
  180. Whaley, C.M.; Taylor, E.A.; Koenig, L.; White, C. Health Care Cost and Utilization Report: 2013. Health Care Cost Institute, 2014.
  181. Health Care Cost Institute. 2020 Health Care Cost and Utilization Report. 2021.
  182. Farid, F.; Bello, A.; Ahamed, F.; Hossain, F. The Roles of AI Technologies in Reducing Hospital Readmission for Chronic Diseases: A Comprehensive Analysis. Preprints 2023. [CrossRef]
  183. Kumar, Y.; Koul, A.; Singla, R.; Ijaz, M.F. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework, and future research agenda. Journal of Ambient Intelligence and Humanized Computing 2023, 14, 8459–8486. [Google Scholar] [CrossRef]
  184. Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health 2017, 4, e19. [Google Scholar] [CrossRef]
  185. Angelis, L.D.; Baglivo, F.; Arzilli, G.; Privitera, G.P.; Ferragina, P.; Tozzi, A.E.; et al. ChatGPT and the rise of large language models: The new AI-driven infodemic threat in public health. Frontiers in Public Health 2023, 11, 1166120. [Google Scholar] [CrossRef]
  186. Topol, E.J. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine 2019, 25, 44–56. [Google Scholar] [CrossRef]
  187. Rong, G.; Mendez, A.; Assi, E.B.; Zhao, B.; Sawan, M. Artificial intelligence in healthcare: review and prediction case studies. Engineering 2020, 6, 291–301. [Google Scholar] [CrossRef]
  188. Shaheen, M.Y. Applications of Artificial Intelligence (AI) in healthcare: A review. ScienceOpen Preprints 2021. [Google Scholar]
  189. Aung, Y.Y.; Wong, D.C.; Ting, D.S. The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. British Medical Bulletin 2021, 139, 4–15. [Google Scholar] [CrossRef] [PubMed]
  190. Sadeghi, Z.; Alizadehsani, R.; Cifci, M.A.; Kausar, S.; Rehman, R.; Mahanta, P.; Bora, P.K.; Almasri, A.; Alkhawaldeh, R.S.; Hussain, S.; et al. A review of Explainable Artificial Intelligence in healthcare. Computers and Electrical Engineering 2024, 118, 109370. [Google Scholar] [CrossRef]
  191. Wubineh, B.Z.; Deriba, F.G.; Woldeyohannis, M.M. Exploring the opportunities and challenges of implementing artificial intelligence in healthcare: A systematic literature review. Urologic Oncology: Seminars and Original Investigations 2024, 42, 48–56. [Google Scholar]
  192. Aminizadeh, S.; Heidari, A.; Dehghan, M.; Toumaj, S.; Rezaei, M.; Navimipour, N.J.; Stroppa, F.; Unal, M. Opportunities and challenges of artificial intelligence and distributed systems to improve the quality of healthcare service. Artificial Intelligence in Medicine 2024, 149, 102779. [Google Scholar] [CrossRef] [PubMed]
  193. Kasula, B.Y. AI applications in healthcare: a comprehensive review of advancements and challenges. International Journal of Management Education for Sustainable Development 2023, 6. [Google Scholar]
Figure 1. Description of the search terms used for the inclusion and exclusion of research papers considered in this review.
Figure 2. Year-wise distribution of selected research papers.
Figure 3. Timeline illustrating the evolution of AI techniques in healthcare applications.
Figure 4. Application of AI in healthcare.
Figure 5. Illustration of model accuracy and interpretability across common AI techniques.
Figure 6. Illustration of possible attacks at different stages.
Figure 7. Workflow of Federated Learning.
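Figure 7 depicts the federated learning workflow: each client trains on its private data locally, and a central server aggregates the resulting parameters into a global model. As a minimal sketch of the server-side aggregation step, assuming the common FedAvg scheme (all variable names here are illustrative, not drawn from the cited works):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg).

    Each client's contribution is proportional to its local
    dataset size, so larger hospitals pull the global model
    more strongly than smaller ones.
    """
    total = sum(client_sizes)
    stacked = np.stack(client_weights)               # shape: (clients, params)
    coeffs = np.array(client_sizes, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)   # global parameter vector

# Three simulated clients with different data volumes
w_clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n_clients = [10, 30, 60]
w_global = fedavg(w_clients, n_clients)
# → weighted toward the larger clients: [4.0, 5.0]
```

Weighting by local dataset size is what distinguishes FedAvg from a plain mean, and is one reason unbalanced client data distributions (a recurring issue in Table 3) can still yield a usable global model.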
Figure 8. Categorization of Federated Learning.
Figure 9. Blockchain assisted Federated Learning workflow.
Table 1. Subfields of Artificial Intelligence.
Major Domains within AI | Description
ML (Machine Learning) | Learns from data to improve decisions over time.
DL (Deep Learning) | Multi-layered neural networks that automatically learn complex data features.
CNN (Convolutional Neural Network) | Deep learning model for extracting spatial features from image data.
CV (Computer Vision) | Techniques to interpret and analyze visual information from images or videos.
Table 2. Defense methods against adversarial attacks in medical imaging.
Ref | Method | Attack | Dataset | Architecture | Task
[151] | Adversarial training | FGSM, JSMA | CT, MRI | UNet + RPN | Classification
[152] | PDT + adv_train | FGSM, PGD, MIFGSM, DAA | X-ray | DenseNet, ResNet, VGG, Inception V3 | Classification
[153] | NLCE | FGSM | Lung, Skin Lesion | SLSDeep, WNCN, UNet, InvertNet | Segmentation
[154] | KD & LID | FGSM, PGD, BIM | DR, X-ray | ResNet | Classification
[155] | Unsupervised anomaly detection | FGSM, BIM, MIM, PGD | X-ray | DenseNet, ResNet | Classification
[156] | SSAT & UAD | FGSM, PGD, C&W | DR (OCT) | ResNet | Classification
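Several of the defenses in Table 2 build on adversarial training, which augments the training set with FGSM-perturbed inputs. As a minimal sketch of the FGSM perturbation itself, using a plain logistic model in NumPy (the model and all names are simplified stand-ins, not the architectures cited above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: take a step of size eps in the direction of the sign of
    the input gradient of the binary cross-entropy loss, producing a
    worst-case perturbation inside an L-infinity ball of radius eps."""
    p = sigmoid(x @ w + b)    # predicted probability
    grad_x = (p - y) * w      # dL/dx for logistic loss with linear logit
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)        # a "clean" input
y = 1.0                       # its true label
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
# The adversarial copy stays within the eps-ball around x,
# yet its loss is at least as high as the clean input's.
```

Adversarial training would simply mix such `x_adv` samples back into each training batch; the table's entries differ mainly in which attacks they train against and which architectures they harden.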
Table 3. Summary of key studies highlighting AI challenges and solutions in healthcare.
Challenge | Study | Core Issue | Solution | Drawback

Data Privacy
[102] | Secure data sharing | Federated learning | Synchronization
[103] | Low data quality | Privacy-preserving FL | Cryptographic complexity
[104] | Non-IID data distribution | Collaborative model training | Reduced speed
[105] | Unbalanced data distribution | Federated learning | IID assumption
[106] | Data heterogeneity | VAFL framework | Transformation complexity
[107] | Client heterogeneity | CusFL | Training complexity
[108] | Data heterogeneity | SplitAVG | Architectural complexity

Data Security
[115] | Re-authentication | Decentralized authentication | Inaccessibility
[116] | Data silos | Blockchain-enabled sharing | Off-chain vulnerability
[117] | Synchronization | Fine-grained data sharing | Lack of concurrency support
[118] | EMR integrity | Permissioned blockchain | Lack of interoperability
[119] | Trusted verification | Secure health passports | Key management risk
[120] | Improper disposal of functional devices | Decentralized trading | Adoption barrier
[121] | Risk of equipment failure | Blockchain maintenance | Regulatory challenge
[122] | Trust and privacy | Blockchain FL | Metadata exposure
[123] | Consensus inefficiency | Blockchain distillation | Privacy leakage risk
[124] | Trust and privacy | Blockchain aggregation | High overhead

Ethical & Legal
[125] | Ethical oversight | Ethical governance | Implementation gap
[127] | Accountability | Ethical validation framework | Algorithmic opacity
[128] | Legal uncertainty | Policy reform | Blurred responsibility
[129] | Regulatory gaps | Governance reform | Overgeneralized
[130] | Fragmentation | Harmonization | Theoretical

Technical
[133] | Interpretability | LIME | Local approximation
[134] | Explainability | SHAP | Computationally expensive
[135] | Multiclass efficiency | LPDCNN, Ridge-ELM | Limited generalization
[144] | Dataset bias | Federated learning | Heterogeneity
[146] | Algorithmic bias | Fairness-aware learning | Tradeoffs
[157] | Latency, scalability | CNN–LSTM | Network dependency

Deployment
[161] | Decentralization | Blockchain + IoT | Interoperability
[162] | High cost | Pruning, quantization | Accuracy drop risk
[164] | Latency | Cloud–edge allocation | Network limits
[165] | Clinical trust | Grad-CAM | Computation cost
[166] | Safe deployment | Rehabilitation AI | Hardware cost
[168] | Hardware cost | Green AI | Precision loss
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.