1. Introduction
Generative Artificial Intelligence (GenAI) has rapidly emerged as a transformative tool in healthcare practice, offering potential advancements in diagnostic accuracy, treatment personalization, operational efficiency, and patient outcomes. GenAI applications now span various areas, including radiology, oncology, cardiology, pathology, and general practice, each utilizing unique AI-driven approaches to improve patient care [1,2,3,4,5,6].
In radiology, AI-powered tools have shown significant utility in image analysis, enabling faster and more accurate diagnoses [1]. A study by McKinney et al. (2020) demonstrated that an AI system outperformed radiologists in detecting breast cancer from mammograms, showcasing the potential of AI to enhance diagnostic precision and reduce human error in high-stakes fields [7]. In oncology, AI algorithms have been developed to predict patient response to specific therapies, optimizing treatment plans [2]. Esteva et al. (2017) found that AI could detect skin cancer with an accuracy comparable to that of dermatologists, underscoring the ability of AI to aid in early cancer detection and potentially improve survival rates [8].
GenAI tools, applications, and resources are becoming increasingly valuable in cardiology, where machine-learning algorithms analyze electrocardiograms (ECGs) and imaging data to detect heart disease at earlier stages [3]. Research by Attia et al. (2019) revealed that AI could accurately identify left ventricular dysfunction through ECG data, which is often challenging for human interpretation, thus aiding in early intervention and management of cardiac patients [9].
Similarly, in pathology, AI applications are used to assess pathology slides for diseases like prostate and cervical cancer. Studies have shown that AI-driven pathology tools can achieve accuracy levels that match or exceed those of human pathologists, allowing for quicker diagnoses and enabling pathologists to focus on complex cases [6]. Moreover, AI is increasingly adopted in general practice for tasks such as clinical decision support, patient reports, predictive analytics, and patient monitoring. By analyzing vast datasets from electronic health records (EHRs), GenAI aids clinicians in identifying high-risk patients and guiding treatment decisions, which has become especially relevant in managing chronic diseases like diabetes and hypertension [10].
Despite these advancements, physicians and other healthcare providers often express mixed sentiments about GenAI adoption, citing concerns over reliability, data privacy, and the potential for diminished patient-physician relationships [11]. The 2024 American Medical Association (AMA) study highlighted that, while 36% of physicians felt more excited than concerned about AI (up from 30% in 2023), there remains a critical need to build trust in AI applications: physicians emphasized feedback loops, data privacy assurances, seamless workflow integration, and adequate training as essential factors for AI adoption [11]. Although a growing majority of physicians recognize the benefits of AI, with 68% in 2024 reporting at least some advantage in patient care (up from 63% in 2023), our recent global study found AI adoption in healthcare to be lower in the United States than in Europe [11,12].
Understanding the root causes of lower GenAI acceptance and adoption rates is crucial, as these are integral to AI’s integration into healthcare and clinical practice. Addressing US healthcare providers’ viewpoints would ensure AI tools are designed to support, rather than supplant, the role of providers, thus fostering a collaborative and patient-centered approach to healthcare. As GenAI technology becomes increasingly embedded in healthcare settings, understanding its acceptance, perceived benefits, and concerns among healthcare providers is essential to facilitate effective integration and promote patient-centered care.
This study aimed to assess U.S. healthcare providers’ perceptions of AI adoption, including its perceived usefulness, training needs, barriers, and strategies for safe integration. Specifically, the study quantified current levels of AI adoption among healthcare professionals, explored perceived benefits, barriers, and ethical concerns, and identified key factors influencing providers’ willingness to support broader AI implementation in clinical practice.
2. Materials and Methods
We conducted a nationwide cross-sectional survey in line with the STROBE checklist for cross-sectional studies (Table S1) using a self-administered questionnaire developed with the Qualtrics electronic data collection tool (https://www.qualtrics.com/), targeting healthcare providers for their perspectives, beliefs, and opinions on the use of AI in the healthcare sector.
Study Population: The study focused on US healthcare providers (physicians, nurses, pharmacists, laboratory scientists, etc.). Only healthcare providers currently in practice and working in the United States were eligible and included in the study.
Data Collection Tool: A pretested self-administered questionnaire was used. The questionnaire included sections on AI adoption, deployment, use, benefits, and barriers to AI adoption, as well as basic, anonymized demographic information of the participants. We piloted and reviewed the questionnaire to ensure completeness, accuracy, acceptability, cultural sensitivity, and relevance. Individuals who participated in the pilot were not included in the final study population. The questionnaire and subsequent data analysis complied with relevant protocols and checklists [13].
Sample Size: Sample size estimation followed Cochran’s formula for proportions. Because no robust prevalence figure for GenAI adoption among U.S. clinicians existed when the study was designed, we applied the conservative assumption of P = 0.50, a 95% confidence level (Z = 1.96), and a ±5% margin of error, yielding n = 385. Allowing 4% for incomplete surveys produced a target of 402 respondents [14]. A supplementary calculation using the 23% adoption rate [15] indicated a minimum of 273 participants, confirming that our conservative target of 402 remained adequate.
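For transparency, this calculation can be reproduced as follows (a minimal sketch in Python; the inputs are the figures stated above, and the function name is ours for illustration):

```python
import math

def cochran_n(p: float, margin: float, z: float = 1.96) -> int:
    """Cochran's sample size for estimating a proportion p to within +/- margin."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

# Conservative assumption: P = 0.50, 95% confidence, +/-5% margin of error
n_core = cochran_n(0.50, 0.05)              # 385
n_target = math.ceil(n_core / (1 - 0.04))   # inflate for 4% incomplete surveys -> 402
# Supplementary check using the previously reported 23% adoption rate
n_supplementary = cochran_n(0.23, 0.05)     # 273

print(n_core, n_target, n_supplementary)
```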
Sampling and Data Collection Technique: Healthcare providers (e.g., physicians, nurses) were identified through a convenience sampling technique drawing on US-based professional associations/organizations, social media platforms, and US member professional networks. The closed, non-randomized survey was sent to over 300 healthcare professionals using personalized emails. Participants were informed in the invitation email and through an information sheet of the nature of the study, the length of time needed to complete the questionnaire (less than 10 minutes), and who the principal investigators were. They were also advised of the investigators’ professional affiliations, that fully anonymized information would be collected and stored for 3 years behind an institution-acquired and managed firewall, and that no personally traceable or identifiable data would be collected. A link to the questionnaire was provided in the email, which required “one-time only” access to prevent multiple completions of the questionnaire by individual participants.
Data collection occurred over 12 weeks, from December 1, 2024 to February 28, 2025. Reminder emails were sent out monthly to prospective participants. The questionnaire was formatted over 20 pages with one to two questions per page and hosted on the Qualtrics website for the duration of the study (https://csudh.qualtrics.com/jfe/form/SV_6gtCXoB8JcqEDA2). Participants were able to check for completeness and could review their answers using a “back” button. If participants were unsure or unwilling to disclose their responses, options including “not sure”, “not applicable”, or “prefer not to say” were available.
Data Analysis: We analyzed data from submitted questionnaires using IBM SPSS version 27 and Microsoft Excel. Item-level non-response ranged from 0% to 18%; analyses therefore use the available-case denominator for each question. Data were uploaded automatically by Qualtrics for analysis. We performed univariate and bivariate analyses. Frequencies, percentages, chi-square (χ²) statistics, P-values, and degrees of freedom were documented. Comparative analyses were performed according to professional role, gender, age, and qualification of the participants. A P-value of < 0.05 was deemed statistically significant. To maintain participants’ anonymity, data were aggregated before analysis. Results are presented in tables, charts, and narrative for clarity and comprehension.
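As a minimal sketch of the bivariate analyses described above, an equivalent chi-square test can be run in Python on aggregated counts; the counts below are illustrative placeholders, not the study data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: gender (female, male); columns: perceived usefulness of AI
# (not useful, moderately useful, very/extremely useful).
# Hypothetical counts for illustration only.
observed = np.array([
    [3, 18, 18],
    [1,  9, 22],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p_value:.3f}")
# A P-value below 0.05 would be reported as statistically significant.
```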
Ethical Issues: Ethical approval was received from the California State University, Dominguez Hills (CSUDH) Institutional Review Board (IRB #: CSUDH IRB-FY2025-98, approved 26 November 2024). Participation was voluntary, and no incentives were offered. Written informed consent was obtained from all subjects prior to study initiation.
3. Results
A total of 130 individuals accessed the survey, with 109 completing the core items (completion rate of 83.8%). Of these 109, 38.5% (42/109) were physicians and 16.5% (18/109) were nurses (item-level n for demographic variables ranged from 71 to 109 because of optional questions). Of those who responded to the respective questions, 54.2% (39/72) identified as female at birth, 64.8% (46/71) were aged 50 years and older, 78.8% (56/71) had graduate diplomas, and 63.4% (45/71) had worked in the health industry for 20 or more years (see Table 1). Participants were largely Black/African American (38.0%, 27/71) and Caucasian (33.8%, 24/71). The largest proportions were drawn from California (16.7%, 12/72) and New York (15.3%, 11/72); 23.6% (17/72) were public health specialists, and 23.9% (17/71) worked in the private sector.
3.1. Attitude to AI Usefulness in Patient Care and Management
Over 86% (93/107) of the respondents believed that AI is useful in patient care and management. Although similar percentages of participants believed that AI is useful in patient care and management now and in the future (98.9% [92/93] vs. 98.0% [97/99]), more participants believed that AI will be very to extremely useful in the future than in the present (70.7% [70/99] vs. 55.9% [52/93]) (Figure 1). While more participants (33.3%, 31/93) believed that AI was moderately useful at present compared with in the future (17.2%, 17/99), a small minority of participants believed that AI is not useful in the present (1.1%, 1/93) or in the future (2.0%, 2/99). Males were statistically more likely to believe that AI has a role or usefulness in patient care and management (χ² = 29.2; P < 0.001). However, there were no statistically significant differences by professional role, age, or qualification of participants.
3.2. Training and Adoption of AI
Less than half of the respondents had had formal exposure to or training on AI (42.4%, 42/99), and this comprised a basic orientation to AI (83.3%, 35/42), AI use in patient care (31.0%, 13/42), or technical aspects of AI (33.3%, 14/42). While 23.2% (23/99) of respondents’ organizations had officially adopted AI, 38.4% (38/99) had trained staff on AI. Most adoption processes were led by top-level/executive leadership (32.8%, 20/61), although 24.6% (15/61) of participants were unaware of who was leading the adoption process (Table 2). Organizations of participants holding doctorate degrees were statistically more likely to adopt AI when compared with those of participants with other qualifications (χ² = 31.6; P = 0.047). However, there were no statistically significant differences by professional role, gender, or age of participants.
3.3. AI Use in Patient Care and Management
AI was mostly used in report writing (43.1%, 28/65), research (27.7%, 18/65), patient care (26.2%, 17/65), and diagnosis (24.6%, 16/65). AI was also used in leadership and management (21.5%, 14/65). Within patient care, AI was most useful in time management and documentation activities (34.2%, 25/73) and in improving the patient registration process and research (20.5%, 15/73) (Table 3). Participants holding a doctorate degree were statistically more likely to identify patient diagnosis and report writing as areas where AI was most useful when compared with participants with other qualifications (χ² = 101.1; P < 0.001). However, there were no statistically significant differences by professional role, gender, or age of participants.
3.4. Challenges and Barriers to AI Adoption and Use
While poor knowledge of AI (24.7%, 19/77) and fear of job loss (16.9%, 13/77) were the leading barriers to AI adoption and use, 72.2% (57/79) of providers were willing to support AI adoption in clinical care. Other identified barriers included the cost of acquiring AI, staff skills and capacities, staff resistance to change, leadership and management issues, inadequate technology and equipment, and limited interest and/or negative attitudes of staff. Key patient care practice challenges included lack of human oversight (58.2%, 46/79) and bias in AI algorithms and overdependence on AI (54.4%, 43/79). Others were unintended consequences and ethical/legal challenges (48.1% [38/79] and 41.8% [33/79], respectively) (Table 4). Participants who were less than 50 years of age were statistically more likely to support AI adoption and embedding in organizations (χ² = 18.0; P = 0.02). However, there were no statistically significant differences by professional role, gender, or qualification of participants.
Strategies identified by participants to mitigate the challenges of AI in healthcare addressed (but were not limited to) the absence of human oversight (67.1%, 53/79), poor staff training (60.8%, 48/79), and the lack of provider involvement in the design and development of AI tools and resources (57.0%, 45/79) (Table 4).
3.5. Core Benefits of and Ethical Issues Related to AI in Healthcare
In practice, participants believed that the most important benefits of AI were in patient documentation and clerking (57.0%, 45/79) and in minimizing errors and mistakes (49.4%, 39/79). Privacy and surveillance issues (63.3%, 50/79) and security risks (55.7%, 44/79) were identified as the most important ethical issues associated with the use of AI in healthcare (Table 5).
3.6. Impact of AI Adoption and Integration on Clinicians’ Workload
While 17.1% (13/76) of respondents were willing to support an increase in providers’ patient load due to AI efficiency gains, the rest were either against it (38.2%, 29/76) or undecided (“maybe”; 44.7%, 34/76).
Table 6. Chi-square (χ²) analysis of participants’ responses to AI use, adoption, and staff training.

| Description | Professional Role: χ², P (df) | Gender: χ², P (df) | Age: χ², P (df) | Qualification: χ², P (df) |
|---|---|---|---|---|
| Artificial Intelligence (AI) has a role or usefulness in patient care and management | 9.3, 0.32 (8) | 29.2, < 0.001 (4) | 7.7, 0.46 (8) | 3.5, 0.97 (10) |
| AI usefulness in the future in patient care and practice | 14.1, 0.59 (16) | 7.1, 0.31 (6) | 12.7, 0.39 (12) | 11.6, 0.71 (15) |
| Have had formal exposure or training in AI | 1.2, 0.99 (8) | 0.3, 0.99 (4) | 7.8, 0.45 (8) | 8.7, 0.57 (10) |
| The organization has adopted/begun the process of adopting AI | 18.8, 0.28 (16) | 6.0, 0.64 (8) | 23.3, 0.06 (16) | 31.6, 0.047 (20) |
| The organization has trained someone in AI use | 7.2, 0.51 (8) | 0.9, 0.92 (4) | 14.8, 0.06 (8) | 6.3, 0.79 (10) |
| Where AI is most useful in the healthcare industry | 32.5, 0.90 (44) | 22.7, 0.42 (22) | 28.5, 0.97 (44) | 101.1, < 0.001 (55) |
| Where AI is least useful in the healthcare industry | 42.5, 0.54 (44) | 12.9, 0.94 (22) | 41.4, 0.58 (44) | 56.3, 0.43 (55) |
| The most important barrier to AI adoption and implementation in patient care | 36.4, 0.45 (36) | 8.6, 0.48 (9) | 37.2, 0.41 (36) | 29.6, 0.96 (45) |
| Will support AI adoption and embedding in the organization | 8.2, 0.42 (8) | 6.2, 0.18 (4) | 18.0, 0.02 (8) | 11.4, 0.33 (10) |
4. Discussion
Over 86% (93/107) of the respondents believed that AI is useful in patient care and management, although less than half of the respondents had had formal exposure to or training on AI (42.4%, 42/99). Similarly, while 23.2% (23/99) of respondents’ organizations had officially adopted AI and 38.4% (38/99) had trained staff on AI, most adoption processes were led by top-level/executive leadership (32.8%, 20/61). According to the participants, AI is mostly used for report writing (43.1%, 28/65), research (27.7%, 18/65), patient care (26.2%, 17/65), and diagnosis (24.6%, 16/65). While poor knowledge of AI (24.7%, 19/77) and fear of job loss (16.9%, 13/77) were the leading barriers to AI adoption and use, 72.2% (57/79) of providers were willing to support AI adoption in clinical care. Core strategies identified by participants to mitigate the challenges of AI in healthcare addressed the lack of human oversight (67.1%, 53/79), poor staff training (60.8%, 48/79), and the absence of provider involvement in the design and development of AI tools and resources (57.0%, 45/79). Privacy and surveillance issues (63.3%, 50/79) and security risks (55.7%, 44/79) were identified as the most important ethical issues associated with the use of AI in healthcare, and only 17.1% (13/76) of respondents were willing to support an increase in providers’ patient load due to AI efficiency gains. There were statistically significant findings in perceived AI usefulness by gender (χ² = 29.2; P < 0.001); in organizational adoption of AI (χ² = 31.6; P = 0.047) and where AI is most useful (χ² = 101.1; P < 0.001) by qualification; and in support for AI adoption by age (χ² = 18.0; P = 0.02).
Recent studies have shown a significant uptake in the use of GenAI in clinical practice among physicians and other providers. The American Medical Association (AMA) Augmented Intelligence research (2025), involving 1,183 physicians, revealed that a growing majority of physicians are beginning to recognize the benefits of AI, with 68% in 2024 reporting at least some advantage in patient care (up from 63% in 2023) and 36% reporting feeling more excited than concerned about AI (up from 30% in 2023) [11]. On the heels of this finding, our study reveals that between 87% (current) and 93% (future) of healthcare providers who participated in the study believe that GenAI is at least moderately useful in patient care in the present and in the future. This high acceptance rate suggests that AI may be here to stay in healthcare and patient management. The substantial growth in physician use of AI in practice, with usage nearly doubling from 2023 to 2024 and a dramatic drop in non-users in just one year in the AMA study, supports this assertion. Furthermore, the very high rate of AI acceptance in our study, conducted less than a year after the AMA study, points to continued improvement in providers’ acceptance and use of AI in clinical care. However, providers still have doubts, issues, and fears regarding AI that could be minimized by proper training, formal exposure, and continued top-management support and use of AI. These fears are similar to recent findings in another global study that included US providers [12].
Providers have significant faith in GenAI and understand where AI is most useful, as well as some of the current barriers and concerns. However, less than half have been formally trained in or exposed to AI, and less than a quarter of participants’ organizations have adopted AI. To advance GenAI in healthcare, training of providers is imperative. Formal training of healthcare leaders will also help them become advocates of AI adoption and embedding in healthcare systems, as this will expose them to the benefits of AI in healthcare systems. With proper guidance and a better understanding of human-in-the-loop AI development and deployment strategies, providers will be more likely to accept human-supervised GenAI as safe, reproducible, reliable, and accurate, and to recognize that AI is not positioned to take over their jobs. This will motivate more providers to venture into GenAI-supported patient care and health management.
Like the AMA study findings, where 68% of physicians believed that AI has some or definite advantages in patient care, our study revealed that GenAI has significant advantages in patient care, especially in patient documentation and report writing. Our findings also revealed that significantly more providers are currently using AI for the patient documentation process and report writing, discharge summaries and care plans, and medical research and standard-of-care summaries, similar to the AMA findings [11]. The current increase in clinicians’ use may be attributed to accelerated AI adoption in healthcare during the COVID-19 pandemic, which may have influenced clinicians’ perceptions by highlighting AI’s practical utility in crisis response [16,17,18]. In addition, AI’s perceived usefulness and clinical value backed by evidence of performance, improved transparency and explainability, perceived ease of use, regulation and governance systems, improved data quality and security, organizational and social influence, and specialty and task fit may all have contributed, directly or otherwise, to improved adoption and use by clinicians [19,20,21,22,23,24,25,26,27]. The current sociocultural and economic contexts, including large-scale investments in AI technology and increasing public awareness of AI, could also partly account for the external factors shaping healthcare workers’ attitudes to AI in health [28,29,30,31,32].
However, the integration of GenAI in healthcare comes with both opportunities and challenges. AI adoption in healthcare has significantly improved diagnostic accuracy, streamlined workflow processes, and enhanced patient care, including personalized treatment [33]. Yet, despite major advances in AI research for healthcare, the deployment and adoption of AI technologies remain limited in clinical practice, hence the need for more formal training, on-the-job mentoring, and clarity on their use [34,35]. The increased use of AI in health and the expanding science around AI in clinical medicine have not eliminated people’s concerns, requiring that developers and users prioritize patient-relevant outcomes to fully understand AI’s true effects and limitations in healthcare [36]. Providers should also play a significant role in AI design, development, and deployment through in-and-out innovation approaches [12,37,38].
Our study outcomes support previously identified findings that challenges such as ethical and legal concerns, patient privacy, and data security remain prominent obstacles hindering providers’ adoption of AI [12,37,38,39], as the continued use of GenAI tools and resources increases the risk of unauthorized data breaches. Beyond data-breach risks, 63% of clinicians expressed unease about GenAI-enabled surveillance and the continuous algorithmic monitoring of both patients and providers, which they felt could erode autonomy and trust unless strict transparency, opt-in consent, and usage-boundary safeguards are put in place. This calls for better data security and improved use of firewalls and relevant tools to protect against patient data breaches. To minimize these concerns, providers and healthcare organizations must promote a culture of transparency, accountability, and openness regarding the capacity, potential, and use of GenAI, and ensure vigilance over patient data security. While waiting for relevant policies and guidelines, there is an urgent need for developers and users alike to address the ethical, technical, and security challenges that GenAI brings.
Therefore, to navigate the path forward effectively and realize the potential of GenAI in healthcare and health, there is an urgent need to ensure appropriate skill generation; model testing, implementation, and monitoring; resources and infrastructure; and standardized oversight and guidelines [39]. Additional large-scale, multi-site studies that explore the findings of this study in greater depth are needed, using a deliberate, proactive strategy [40].
5. Limitations
We could not achieve our calculated sample size because providers were unable to allocate the time needed to complete the questionnaires. With 130 responses, the survey achieved a 95% confidence interval of ±8.6% around a 50% proportion, wider than the ±5% originally planned. The lower response rate limited our analysis mainly to frequencies and percentages; nevertheless, we consider the insights shared worth reporting to inform the field. However, the generalizability of our findings is not guaranteed. We also do not have complete data on some participants who started the process but did not fully complete the questionnaire; their experiences and views may differ from those of participants who completed it. The study is also subject to the challenges of online surveys, including selection bias, as only providers within the authors’ personal and professional networks were invited to participate. Because recruitment relied on the authors’ networks and listservs, the sample is not probabilistic and over-represents providers from California and New York, as well as Black/African Americans. Findings should therefore be interpreted as hypothesis-generating rather than nationally representative.
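The quoted precision can be verified with a short calculation (a minimal sketch in Python, using the figures reported above):

```python
import math

# Half-width of a 95% confidence interval around a 50% proportion with 130 respondents.
n, p, z = 130, 0.50, 1.96
margin = z * math.sqrt(p * (1 - p) / n)
print(f"+/-{margin:.1%}")  # approximately +/-8.6%, wider than the +/-5% planned
```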
6. Conclusions
Although the sample size is small, the views expressed are still informative and can offer an indication of the views of providers in the United States. With a high GenAI acceptance rate and poor formal training of providers and other users, there is an urgent need to develop an AI in-service training curriculum and expose US providers formally to GenAI. This shortfall in formal instruction is reflected in our data, as ‘limited knowledge of AI’ was the single most cited barrier (24.7%), indicating that structured, hands-on training could directly mitigate clinicians’ knowledge gaps and increase adoption readiness. Healthcare managers should also be more transparent and communicative about their GenAI adoption initiatives and share the process, experiences, challenges, and successes with their teams.
On-the-job training and hands-on mentoring on AI use will help clinicians and other healthcare providers understand that GenAI is not a threat to their jobs. However, individuals with appropriate GenAI skills and capacities will, in the coming years, have advantages over those without the requisite skills. Thus, healthcare professionals should be trained in GenAI to mitigate the AI digital divide, which has already begun and is rapidly widening.
Finally, administrators should not increase providers’ workloads because of efficiency gains from AI, but should instead allow providers to reinvest this time in better patient care, proper documentation, and self-care, with the aim of reducing burnout and improving work-life balance.
Supplementary Materials
The following supporting information can be downloaded at the website of this paper posted on Preprints.org. Table S1: STROBE checklist.
Author Contributions
Conceptualization, OO and MB; methodology, OO; software, OO; validation, RI and SDTR; formal analysis, OO; investigation, OO and MB; resources, OO and RI; data curation, OO and MB; writing—original draft preparation, OO; writing—review and editing, all authors; visualization, OO; supervision, RI and SDTR; project administration, OO. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the California State University, Dominguez Hills (CSUDH) Institutional Review Board (IRB #: CSUDH IRB-FY2025-98 on 26 November 2024).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
Data supporting reported results can be obtained upon request from the first author.
Acknowledgments
OOO and RI are grateful to Anupuma Joshi, Judy Aguirre, and Mi-Sook Kim for their guidance, and to the College of Health, Human Services, and Nursing, California State University, Dominguez Hills, Carson, California for institutional support. We thank Marsha Morgan from University College London for helpful comments. SDT-R was supported by the Wellcome Trust Institutional Strategic Support Fund at Imperial College London, London, United Kingdom. All authors acknowledge the United Kingdom National Institute for Health and Care Research Biomedical Facility at Imperial College London for infrastructural support. The views and opinions of authors expressed herein do not necessarily state or reflect those of CSUDH and other supporting institutions.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial Intelligence |
| AMA | American Medical Association |
| EHR | Electronic Health Records |
| GenAI | Generative Artificial Intelligence |
| SPSS | Statistical Package for Social Sciences |
References
- Najjar R. Redefining radiology: a review of artificial intelligence integration in medical imaging. Diagnostics. 2023 Aug 25;13(17):2760.
- Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A. Revolutionizing radiation therapy: the role of AI in clinical practice. Journal of radiation research. 2024 Jan;65(1):1-9.
- Miranda I, Luz JM, Pereira AR, Augusto JB. Artificial intelligence in cardiovascular imaging algorithms – what is used in clinical routine? [Internet]. European Society of Cardiology; 2024 Apr 02 [cited 2025-02-25]. Available from: https://www.escardio.org/Councils/Council-for-Cardiology-Practice-(CCP)/Cardiopractice/artificial-intelligence-in-cardiovascular-imaging-algorithms-what-is-used-in-c.
- Goldenberg SL, Nir G, Salcudean SE. A new era: artificial intelligence and machine learning in prostate cancer. Nature Reviews Urology. 2019 Jul;16(7):391-403.
- Riaz IB, Harmon S, Chen Z, Naqvi SA, Cheng L. Applications of artificial intelligence in prostate cancer care: a path to enhanced efficiency and outcomes. American Society of Clinical Oncology Educational Book. 2024 Jun;44(3):e438516.
- Tătaru OS, Vartolomei MD, Rassweiler JJ, Virgil O, Lucarelli G, Porpiglia F, Amparore D, Manfredi M, Carrieri G, Falagario U, Terracciano D. Artificial intelligence and machine learning in prostate cancer patient management—current trends and future perspectives. Diagnostics. 2021 Feb 20;11(2):354.
- McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, Back T, Chesus M, Corrado GS, Darzi A, Etemadi M. International evaluation of an AI system for breast cancer screening. Nature. 2020 Jan 2;577(7788):89-94.
- Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb 2;542(7639):115-8.
- Attia ZI, Noseworthy PA, Lopez-Jimenez F, Asirvatham SJ, Deshmukh AJ, Gersh BJ, Carter RE, Yao X, Rabinstein AA, Erickson BJ, Kapa S. An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. The Lancet. 2019 Sep 7;394(10201):861-7.
- Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, Duan T, Ding D, Bagul A, Langlotz CP, Patel BN. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS medicine. 2018 Nov 20;15(11):e1002686.
- American Medical Association. AMA Augmented Intelligence Research. Physician sentiments around the use of AI in heath care: motivations, opportunities, risks, and use cases. Shifts from 2023 to 2024. Published February 2025.
- Oleribe OO, Taylor-Robinson AW, Agala VR, Sobande OO, Izurieta R, Taylor-Robinson SD. Global Adoption, Promotion, Impact, and Deployment of AI in Patient Care, Health Care Delivery, Management, and Health Care Systems Leadership: Cross-Sectional Survey. J Med Internet Res 2025;27:e70805. https://doi.org/10.2196/7080. https://www.jmir.org/2025/1/e70805b.
- Eysenbach, G. (2004) Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res, 6(3) p. e34.
- Dave, D. The Statistical Landscape of AI Adoption in Healthcare. Radix: Software Development Updated Aug 1, 2024. https://radixweb.com/blog/ai-in-healthcare-statistics Last accessed on 11/10/2024.
- Sample Size Calculators for designing clinical research. https://sample-size.net/sample-size-conf-interval-proportion/ Last accessed 11/10/2024.
- OECD Publishing. Artificial Intelligence and the Health Workforce: Perspectives From Medical Associations on AI in Health. OECD Artificial Intelligence Papers November 2024 No. 28. Retrieved from https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/11/artificial-intelligence-and-the-health-workforce_c8e4433d/9a31d8af-en.pdf on October 27, 2025.
- Waheed MA, Liu L. Perceptions of family physicians about applying AI in primary health care: case study from a premier health care organization. JMIR AI. 2024 Apr 17;3:e40781.
- Dongre AS, More SD, Wilson V, Singh RJ. Medical doctor’s perception of artificial intelligence during the COVID-19 era: a mixed methods study. Journal of Family Medicine and Primary Care. 2024 May 1;13(5):1931-6.
- AlQudah AA, Al-Emran M, Shaalan K. Technology acceptance in healthcare: a systematic review. Applied Sciences. 2021 Nov 9;11(22):10537.
- Hassan M, Kushniruk A, Borycki E. Barriers to and facilitators of artificial intelligence adoption in health care: scoping review. JMIR Human Factors. 2024 Aug 29;11:e48633.
- Rosenbacke R, Melhus Å, McKee M, Stuckler D. How explainable artificial intelligence can increase or decrease clinicians’ trust in AI applications in health care: systematic review. JMIR AI. 2024 Oct 30;3:e53207.
- Scipion CE, Manchester MA, Federman A, Wang Y, Arias JJ. Barriers to and facilitators of clinician acceptance and use of artificial intelligence in healthcare settings: a scoping review. BMJ Open. 2025 Apr 1;15(4):e092624.
- Finkelstein J, Gabriel A, Schmer S, Truong TT, Dunn A. Identifying facilitators and barriers to implementation of AI-assisted clinical decision support in an electronic health record system. Journal of Medical Systems. 2024 Sep 18;48(1):89.
- Tucci V, Saary J, Doyle TE. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. Journal of Medical Artificial Intelligence. 2022 Mar 30;5.4.
- Roppelt JS, Kanbach DK, Kraus S. Artificial intelligence in healthcare institutions: A systematic literature review on influencing factors. Technology in Society. 2024 Mar 1;76:102443.
- Hou T, Li M, Tan Y, Zhao H. Physician adoption of AI assistant. Manufacturing & Service Operations Management. 2024 Sep;26(5):1639-55.
- Bettelheim A. Majority of doctors worry about AI driving clinical decisions, survey shows. Axios. Published Oct 31, 2023. Accessed October 27, 2025.
- Sahni N, Stein G, McKinsey O, Zemmel R, Cutler DM. The potential impact of artificial intelligence on healthcare spending. Cambridge, MA, USA: National Bureau of Economic Research; 2023 Jan 23.
- Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello CP, Stephan A. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digital Medicine. 2023 Jun 10;6(1):111.
- Witkowski K, Dougherty RB, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Medical Ethics. 2024 Jun 22;25(1):74.
- Dankwa-Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Preventing Chronic Disease. 2024 Aug 22;21:E64.
- Sagona M, Dai T, Macis M, Darden M. Trust in AI-assisted health systems and AI’s trust in humans. NPJ Health Systems. 2025 Mar 28;2(1):10.
- Lainjo B. Integrating artificial intelligence into healthcare systems: opportunities and challenges. Academia Medicine. 2024 Oct 30;(1).1-13 https://doi.org/10.20935/AcadMed7382.
- Lekadir K, Frangi AF, Porras AR, Glocker B, Cintas C, Langlotz CP, Weicken E, Asselbergs FW, Prior F, Collins GS, Kaissis G. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. 2025 Feb 5;388:1-22.
- Ng JY, Maduranayagam SG, Suthakar N, Li A, Lokker C, Iorio A, Haynes RB, Moher D. Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey. The Lancet Digital Health. 2025 Jan 1;7(1):e94-102.
- Han R, Acosta JN, Shakeri Z, Ioannidis JP, Topol EJ, Rajpurkar P. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. The Lancet Digital Health. 2024 May 1;6(5):e367-73.
- Oleribe, O. O, Taylor-Robinson SD. Leveraging Artificial Intelligence Tools and Resources in Leadership Decisions. American Journal of Health Care Strategy Vol 1, Issue 3, Aug 21, 2025:107-123. https://doi.org/10.61449/ajhcs.2025.16. https://ajhcs.org/article/leveraging-artificial-intelligence-in-leadership.
- Oleribe O. O. Leveraging and Harnessing Generative Artificial Intelligence to Mitigate the Burden of Neurodevelopmental Disorders (NDDs) in Children. Healthcare, 13(15), 1898(1-13); https://doi.org/10.3390/healthcare13151898.
- National Academy of Medicine. 2025. Generative Artificial Intelligence in Health and Medicine: Opportunities and Responsibilities for Transformative Innovation. Washington, DC: The National Academies Press. https://doi.org/10.17226/28907.
- Oleribe, O. O. Leading the next pandemics. Public Health in Practice, 2025 June. 9, 100605. https://doi.org/10.1016/j.puhip.2025.100605.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).