Preprint
Article

This version is not peer-reviewed.

Healthcare Providers' Perspectives on Generative Artificial Intelligence (GenAI) Adoption, Adaptation, Assimilation, and Use in the United States

Submitted:

17 February 2026

Posted:

18 February 2026


Abstract
Background: Generative artificial intelligence (GenAI) is rapidly permeating healthcare, yet U.S. clinicians still report mixed feelings about its reliability, impact on workflow, and ethical implications. Current data on provider sentiment are needed to guide safe, patient-centered AI implementation in healthcare. Objective: To assess U.S. healthcare providers’ perceptions of GenAI adoption, perceived usefulness, training needs, barriers, and strategies for safe integration. Methods: A nationwide, IRB-approved, cross-sectional survey was administered to healthcare professionals using Qualtrics. A convenience sample of clinicians was recruited via professional listservs and e-mail invitations. The 20-page questionnaire captured demographics, GenAI exposure, organizational adoption status, perceived usefulness (5-point scale), barriers, and mitigation strategies. SPSS v27 and Microsoft Excel were used for statistical analysis. Results: Of 130 respondents, 109 completed the core survey (completion rate 83.8%). Participants were 38.5% physicians, 16.5% nurses, 12.8% allied professionals, and 32.2% other providers; 54.2% were women, and 64.8% were ≥50 years. Overall, 86.9% agreed that GenAI is useful in current patient care, rising to 92.9% when asked about future usefulness. Only 42.4% had received formal GenAI training, and just 23.2% reported that their organization had begun adopting AI. The top perceived benefits were improved documentation/clerking (57.0%) and error reduction (49.4%). Dominant barriers included limited AI knowledge (24.7%) and fear of job loss (16.9%). Despite concerns, 72% expressed willingness to support broader GenAI adoption, favoring human oversight (67.1%) and staff training (60.8%) as key safeguards.
There were statistically significant findings in perceived AI usefulness by gender (χ² = 29.2; P < .001); in organizational adoption of AI (χ² = 31.6; P = .047) and where AI is most useful (χ² = 101.1; P < .001) by qualification; and in support for AI adoption by age (χ² = 18.0; P = .02). Conclusions: U.S. clinicians in our survey viewed GenAI positively but lacked the training and organizational infrastructure needed for confident use. Structured education programs and transparent, provider-led implementation strategies may accelerate responsible GenAI assimilation while addressing ethical and workforce concerns. Also, health administrators should use the efficiency gains to improve provider-patient relationships and clinicians' work-life balance while reducing clinician burnout rates.

1. Introduction

Generative Artificial Intelligence (GenAI) has rapidly emerged as a transformative tool in healthcare practice, offering potential advancements in diagnostic accuracy, treatment personalization, operational efficiency, and patient outcomes. GenAI applications now span various areas, including radiology, oncology, cardiology, pathology, and general practice, each utilizing unique AI-driven approaches to improve patient care [1,2,3,4,5,6].
In radiology, AI-powered tools have shown significant utility in image analysis, enabling faster and more accurate diagnoses [1]. A study by McKinney et al. published in 2020 demonstrated that an AI system outperformed radiologists in detecting breast cancer from mammograms, showcasing the potential of AI to enhance diagnostic precision and reduce human error in high-stakes fields [7]. In oncology, AI algorithms have been developed to predict patient response to specific therapies, optimizing treatment plans [2]. Esteva et al. (2017) found that AI could detect skin cancer with an accuracy comparable to that of dermatologists, underscoring the ability of AI to aid in early cancer detection and potentially improve survival rates [8].
GenAI tools, applications, and resources are becoming very valuable in cardiology, where machine-learning algorithms analyze electrocardiograms (ECGs) and imaging data to detect heart disease at earlier stages [3]. Research by Attia et al. (2019) revealed that AI could accurately identify left ventricular dysfunction through ECG data, which is often challenging for human interpretation, thus aiding in early intervention and management of cardiac patients [9].
Similarly, in pathology, AI applications are used to assess pathology slides for diseases like prostate and cervical cancer. Studies have shown that AI-driven pathology tools can achieve accuracy levels that match or exceed those of human pathologists, allowing for quicker diagnoses and enabling pathologists to focus on complex cases [6]. Moreover, AI is increasingly adopted in general practice for tasks such as clinical decision support, patient reports, predictive analytics, and patient monitoring. By analyzing vast datasets from electronic health records (EHRs), GenAI aids clinicians in identifying high-risk patients and guiding treatment decisions, which has become especially relevant in managing chronic diseases like diabetes and hypertension [10].
Despite these advancements, physicians and other healthcare providers often express mixed sentiments about GenAI adoption, citing concerns over reliability, data privacy, and the potential for diminished patient-physician relationships [11]. The 2024 American Medical Association (AMA) study highlighted that while 36% of physicians felt more excited than concerned about AI (up from 30% in 2023), there remains a critical need to build trust in AI applications as physicians emphasized the necessity for feedback loops, data privacy assurances, seamless workflow integration, and adequate training as essential factors for AI adoption [11].
Although a growing majority of physicians recognize the benefits of AI, with 68% in 2024 reporting at least some advantage in patient care (up from 63% in 2023), our recent global study found AI adoption in healthcare in the United States to be lower than in Europe [11,12].
Understanding the root causes of lower GenAI acceptance and adoption rates is crucial, as this is integral to AI’s integration into healthcare and clinical practice. Addressing U.S. healthcare providers’ viewpoints would help ensure AI tools are designed to support, rather than supplant, the role of providers, thus fostering a collaborative and patient-centered approach to healthcare. As GenAI technology becomes increasingly embedded in healthcare settings, gauging its acceptance, perceived benefits, and concerns among healthcare providers is essential to facilitate effective integration and promote patient-centered care.
This study aimed to assess U.S. healthcare providers’ perceptions of AI adoption, including its perceived usefulness, training needs, barriers, and strategies for safe integration. Specifically, the study quantified current levels of AI adoption among healthcare professionals, explored perceived benefits, barriers, and ethical concerns, and identified key factors influencing providers’ willingness to support broader AI implementation in clinical practice.

2. Materials and Methods

We conducted a nationwide cross-sectional survey in line with the STROBE checklist for cross-sectional studies (Table S1) using a self-administered questionnaire developed with the Qualtrics electronic data collection (https://www.qualtrics.com/) tool, targeting healthcare providers for their perspectives, beliefs, and opinions on the use of AI in the healthcare sector.
Study Population: The study focused on US healthcare providers (physicians, nurses, pharmacists, laboratory scientists, etc.). Only healthcare providers currently in practice and working in the United States were eligible and included in the study.
Data Collection Tool: A pretested self-administered questionnaire was used. The questionnaire included sections on AI adoption, deployment, use, benefits, and barriers to AI adoption, as well as basic, anonymized demographic information of the participants. We piloted and reviewed the questionnaire to ensure completeness, accuracy, acceptability, cultural sensitivity, and relevance. Individuals who participated in the pilot were not included in the final study population. The questionnaire and subsequent data analysis complied with relevant protocols and checklists [13].
Sample Size: Sample size estimation followed Cochran’s formula for proportions. Because no robust prevalence figure for GenAI adoption among U.S. clinicians existed when the study was designed, we applied the conservative assumption of P = 0.50, a 95% confidence level (Z = 1.96), and a ±5% margin of error, yielding n = 385. Allowing 4% for incomplete surveys produced a target of 402 respondents [14]. A supplementary calculation using the 23% adoption rate [15] indicated a minimum number of 273 participants, confirming that our conservative target of 402 remained adequate.
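As a transparency check, the sample-size arithmetic described above can be reproduced in a few lines. This is a minimal sketch; the 4% allowance for incomplete surveys is assumed to inflate the base sample by dividing by 0.96, which reproduces the stated target of 402.

```python
import math

# Cochran's formula for a proportion: n = Z^2 * p * (1 - p) / e^2,
# with p = 0.50 (conservative), Z = 1.96 (95% confidence), e = 0.05.
Z, p, e = 1.96, 0.50, 0.05
n_base = (Z**2 * p * (1 - p)) / e**2        # 384.16
n_rounded = math.ceil(n_base)               # 385

# Inflate for an anticipated 4% incomplete-response rate
# (assumed here as n / (1 - 0.04)), matching the stated target.
n_target = math.ceil(n_rounded / (1 - 0.04))

print(n_rounded, n_target)                  # 385 402
```

The same formula with an assumed 23% adoption prevalence gives Z²(0.23)(0.77)/0.05² ≈ 272.2, consistent with the supplementary minimum of 273 participants mentioned in the text.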
Sampling and Data Collection Technique: Healthcare providers (e.g., physicians, nurses, etc.) were identified through a convenience sampling technique drawing on US-based professional associations/organizations, social media platforms, and the authors’ US professional networks. The closed, non-randomized survey was sent to over 300 healthcare professionals using personalized emails. Participants were informed in the invitation email and through an information sheet of the nature of the study, the length of time necessary to complete the questionnaire (less than 10 minutes), and who the principal investigators were. They were also advised of the investigators’ professional affiliations, that fully anonymized information would be collected and stored for 3 years behind an institution-acquired and managed firewall, and that no personally traceable or identifiable data would be collected. A link to the questionnaire was provided in the email, which required “one-time only” access to prevent multiple completions of the questionnaire by individual participants.
Data collection occurred over 12 weeks from December 1, 2024 to February 28, 2025. Reminder emails were sent out monthly to prospective participants. The questionnaire was formatted over 20 pages with one to two questions per page and hosted on the Qualtrics website for the duration of the study (https://csudh.qualtrics.com/jfe/form/SV_6gtCXoB8JcqEDA2). Participants were able to check for completeness and could review their answers using a “back” button. If participants were unsure or unwilling to disclose their responses, options including “not sure”, “not applicable”, or “prefer not to say” were available.
Data Analysis: We analyzed data from submitted questionnaires using IBM SPSS version 27 and Microsoft Excel. Item-level non-response ranged from 0% to 18%; analyses used the available-case denominator for each question. Data were uploaded automatically by Qualtrics for analysis. We performed univariate and bivariate analyses. Frequencies, percentages, chi-square (χ²) statistics, P-values, and degrees of freedom were documented. Comparative analyses were conducted by professional role, gender, age, and qualification of the participants. A P-value < 0.05 was deemed statistically significant. To maintain participants’ anonymity, data were aggregated before analysis. Results are presented in tables, charts, and narrative for clarity and comprehension.
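For illustration, the bivariate chi-square test of independence used here can be sketched in a few lines. The 2×2 contingency table below uses made-up counts, not study data, and the closed-form p-value applies only to the 1-degree-of-freedom case; the study's own tables have larger degrees of freedom and were computed in SPSS.

```python
import math

# Hypothetical 2x2 table (illustrative counts only, not study data):
# rows = two gender groups, columns = agrees / disagrees that AI is useful.
observed = [[40, 10],
            [25, 25]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand = sum(row_totals)

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# where E = row total * column total / grand total.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

# For df = 1, the upper-tail probability of the chi-square
# distribution is erfc(sqrt(x / 2)).
p_value = math.erfc(math.sqrt(chi2 / 2))

print(round(chi2, 2), p_value < 0.05)   # 9.89 True
```

In practice a library routine such as `scipy.stats.chi2_contingency` would be used for arbitrary table sizes; the stdlib-only version above just makes the computation explicit.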
Ethical Issues: Ethical approval was received from the California State University, Dominguez Hills (CSUDH) Institutional Review Board (IRB #: CSUDH IRB-FY2025-98 on 26 November 2024). Participation was voluntary and no incentives were offered. Written, informed consent was obtained from all the subjects prior to study initiation.

3. Results

A total of 130 individuals accessed the survey, with 109 completing the core items (completion rate of 83.8%). Of these 109, 38.5% (42/109) were physicians and 16.5% (18/109) were nurses (item-level n for demographic variables ranged from 71 to 109 due to optional questions). Of those who responded to the question, 54.2% (39/72) identified as female at birth, 64.8% (46/71) were aged 50 years and older, 78.8% (56/71) held graduate diplomas, and 63.4% (45/71) had worked in the health industry for 20 or more years (see Table 1).
Participants were largely Black/African American (38%, 27/71) and Caucasian (33.8%, 24/71). The largest proportions were drawn from California (16.7%, 12/72) and New York (15.3%, 11/72); 23.6% (17/72) were public health specialists, and 23.9% (17/71) worked in the private sector.

3.1. Attitudes Toward AI Usefulness in Patient Care and Management

Over 86% (93/107) of the respondents believed that AI is useful in patient care and management. Although similar proportions of participants believed that AI is useful in patient care and management now and in the future (98.9% [92/93] vs. 98.0% [97/99]), more participants believed that AI will be very to extremely useful in the future than at present (70.7% [70/99] vs. 55.9% [52/93]) (Figure 1). While more participants considered AI moderately useful at present (33.3%, 31/93) than in the future (17.2%, 17/99), a small minority believed that AI is useful neither at present (1.1%, 1/93) nor in the future (2.0%, 2/99). Male participants were statistically more likely to believe that AI has a role or usefulness in patient care and management (χ² = 29.2; P < 0.001). However, there were no statistically significant differences by professional role, age, or qualification of participants.

3.2. Training and Adoption of AI

Less than half of respondents had received formal exposure or training in AI (42.4%, 42/99); this consisted of a basic orientation to AI (83.3%, 35/42), AI use in patient care (31%, 13/42), or technical aspects of AI (33.3%, 14/42). While 23.2% (23/99) of respondents’ organizations had officially adopted AI, 38.4% (38/99) had trained staff on AI. Most adoption processes were led by top-level/executive leadership (32.8%, 20/61), although 24.6% (15/61) of participants were unaware of who was leading the adoption process (Table 2). Organizations of participants holding doctorate degrees were statistically more likely to have adopted AI compared with those of participants with other qualifications (χ² = 31.6; P = 0.047). However, there were no statistically significant differences by professional role, gender, or age of participants.

3.3. AI Use in Patient Care and Management

AI was mostly used in report writing (43.1%, 28/65), research (27.7%, 18/65), patient care (26.2%, 17/65), and diagnosis (24.6%, 16/65). AI was also used in leadership and management (21.5%, 14/65). Within patient care, AI was most useful in time management and documentation activities (34.2%, 25/73) and in improving the patient registration process and research (20.5%, 15/73) (Table 3). Participants holding a doctorate degree were statistically more likely to identify patient diagnosis and report writing as areas where AI was most useful compared to participants with other qualifications (χ² = 101.1; P < 0.001). However, there were no statistically significant differences by professional role, gender, or age of participants.

3.4. Challenges and Barriers to AI Adoption and Use

While poor knowledge of AI (24.7%, 19/77) and fear of job loss (16.9%, 13/77) were the leading barriers to AI adoption and use, 72.2% (57/79) of providers were willing to support AI adoption in clinical care. Other identified barriers included the cost of acquiring AI, staff skills and capacities, staff resistance to change, leadership and management issues, inadequate technology and equipment, and limited interest in or negative attitudes toward AI among staff. Key patient care practice challenges included lack of human oversight (58.2%, 46/79), bias in AI algorithms, and overdependence on AI (54.4%, 43/79). Others were unintended consequences (48.1%, 38/79) and ethical/legal challenges (41.8%, 33/79) (Table 4). Participants who were less than 50 years of age were statistically more likely to support AI adoption and embedding in organizations (χ² = 18.0; P = 0.02). However, there were no statistically significant differences by professional role, gender, or qualification of participants.
Strategies identified by participants to mitigate the challenges of AI in healthcare included (but were not limited to) ensuring human oversight (67.1%, 53/79), staff training (60.8%, 48/79), and provider involvement in the design and development of AI tools and resources (57.0%, 45/79) (Table 4).

3.5. Core Benefits of and Ethical Issues Related to AI in Healthcare

In practice, participants believed that the most important benefits of AI were in patient documentation and clerking (57.0%, 45/79) and in minimizing errors and mistakes (49.4%, 39/79). Privacy and surveillance issues (63.3%, 50/79) and security risks (55.7%, 44/79) were identified as the most important ethical issues associated with the use of AI in healthcare (Table 5).

3.6. Impact of AI Adoption and Integration on Clinicians’ Workload

While 17.1% (13/76) of respondents were willing to support an increase in providers’ patient load due to AI efficiency gains, the rest were either against it (38.2%, 29/76) or undecided (maybe, 44.7%, 34/76).
Table 6. Chi-Square (χ2) analysis of participants’ responses to AI use, adoption, and staff training.
| Description | Professional Role: χ², P (df) | Gender: χ², P (df) | Age: χ², P (df) | Qualification: χ², P (df) |
| --- | --- | --- | --- | --- |
| Artificial Intelligence (AI) has a role or usefulness in patient care and management | 9.3, 0.32 (8) | 29.2, <0.001 (4) | 7.7, 0.46 (8) | 3.5, 0.97 (10) |
| AI usefulness in the future in patient care and practice | 14.1, 0.59 (16) | 7.1, 0.31 (6) | 12.7, 0.39 (12) | 11.6, 0.71 (15) |
| Have had formal exposure or training in AI | 1.2, 0.99 (8) | 0.3, 0.99 (4) | 7.8, 0.45 (8) | 8.7, 0.57 (10) |
| The organization has adopted/begun the process of adopting AI | 18.8, 0.28 (16) | 6.0, 0.64 (8) | 23.3, 0.06 (16) | 31.6, 0.047 (20) |
| The organization has trained someone in AI use | 7.2, 0.51 (8) | 0.9, 0.92 (4) | 14.8, 0.06 (8) | 6.3, 0.79 (10) |
| Where AI is most useful in the healthcare industry | 32.5, 0.90 (44) | 22.7, 0.42 (22) | 28.5, 0.97 (44) | 101.1, <0.001 (55) |
| Where AI is least useful in the healthcare industry | 42.5, 0.54 (44) | 12.9, 0.94 (22) | 41.4, 0.58 (44) | 56.3, 0.43 (55) |
| The most important barrier to AI adoption and implementation in patient care | 36.4, 0.45 (36) | 8.6, 0.48 (9) | 37.2, 0.41 (36) | 29.6, 0.96 (45) |
| Will support AI adoption and embedding in the organization | 8.2, 0.42 (8) | 6.2, 0.18 (4) | 18.0, 0.02 (8) | 11.4, 0.33 (10) |

4. Discussion

Over 86% (93/107) of respondents believed that AI is useful in patient care and management, yet less than half had received formal exposure or training in AI (42.4%, 42/99). Similarly, while 23.2% (23/99) of respondents’ organizations had officially adopted AI and 38.4% (38/99) had trained staff on AI, most adoption processes were led by top-level/executive leadership (32.8%, 20/61). According to the participants, AI is mostly used for report writing (43.1%, 28/65), research (27.7%, 18/65), patient care (26.2%, 17/65), and diagnosis (24.6%, 16/65). While poor knowledge of AI (24.7%, 19/77) and fear of job loss (16.9%, 13/77) were the leading barriers to AI adoption and use, 72.2% (57/79) of providers were willing to support AI adoption in clinical care. Core strategies identified by participants to mitigate the challenges of AI in healthcare included ensuring human oversight (67.1%, 53/79), staff training (60.8%, 48/79), and provider involvement in the design and development of AI tools and resources (57.0%, 45/79). Privacy and surveillance issues (63.3%, 50/79) and security risks (55.7%, 44/79) were identified as the most important ethical issues associated with the use of AI in healthcare, and only 17.1% (13/76) of respondents would support an increase in providers’ patient loads due to AI efficiency gains. There were statistically significant findings in perceived AI usefulness by gender (χ² = 29.2; P < 0.001); in organizational adoption of AI (χ² = 31.6; P = 0.047) and where AI is most useful (χ² = 101.1; P < 0.001) by qualification; and in support for AI adoption by age (χ² = 18.0; P = 0.02).
Recent studies have shown a significant uptake in the use of GenAI in clinical practice among physicians and other providers. The American Medical Association (AMA) Augmented Intelligence Research study (2025), involving 1,183 physicians, revealed that a growing majority of physicians are beginning to recognize the benefits of AI, with 68% in 2024 reporting at least some advantage in patient care (up from 63% in 2023) and 36% reporting feeling more excited than concerned about AI (up from 30% in 2023) [11]. On the heels of this finding, our study reveals that between 87% (current) and 93% (future) of healthcare providers who participated in the study believe that GenAI is at least moderately useful in patient care in the present and in the future. This is a substantial acceptance rate, suggesting that AI may be here to stay in healthcare and patient management. The substantial growth in physician use of AI in practice, with usage nearly doubling from 2023 to 2024 and a dramatic drop in non-users in just one year in the AMA study, supports this assertion. Furthermore, the very high rate of AI acceptance in our study, less than a year after the AMA study, shows a continued improvement in providers’ acceptance and use of AI in clinical care. However, providers still have doubts, issues, and fears regarding AI that could be minimized by proper training, formal exposure, and continued top management support and use of AI. These fears are similar to recent findings in another global study including US providers [12].
Providers have significant faith in GenAI and understand where AI is most useful, as well as some current barriers and concerns. However, less than half have been formally trained or exposed to AI, and less than a quarter of participants’ organizations have adopted AI. To advance GenAI in healthcare, training of providers is imperative. Formal training of healthcare leaders will also help them become advocates for AI adoption and embedding in healthcare systems, as it will expose them to the benefits of AI. With proper guidance and a better understanding of human-in-the-loop AI development and deployment strategies, providers will be more likely to accept human-supervised GenAI as safe, reproducible, reliable, and accurate, and to recognize that AI is not positioned to take over their jobs. This will motivate more providers to venture into GenAI-supported patient care and health management.
Like the AMA study findings where 68% of physicians believed that AI has some, or definite advantages in patient care, our study revealed that GenAI has significant advantages in patient care, especially in patient documentation and report writing. Also, our findings revealed that significantly more providers are currently using AI for patient documentation process and report writing, discharge summaries and care plans, and medical research and standard of care summaries, similar to AMA findings [11]. The current increase in clinicians’ use may be attributed to accelerated AI adoption in healthcare during the COVID-19 pandemic which may have influenced clinicians’ perception by highlighting AI’s practical utility in crisis response [16,17,18]. Also, AI’s perceived usefulness and clinical value with evidence of performance, improved transparency and explainability, perceived ease of use, regulation and governance systems, improved data quality and security, organizational and social influence and specialty and task fits may all have contributed directly or otherwise to improved adoption and use by clinicians [19,20,21,22,23,24,25,26,27]. Also, the current sociocultural and economic contexts, including large-scale investments in AI technology and increasing public awareness of AI, could partly account for the external factors shaping healthcare workers’ attitudes to AI in health [28,29,30,31,32].
However, the integration of GenAI in healthcare comes with both opportunities and challenges, as AI adoption in healthcare has significantly improved diagnostic accuracy, streamlined workflow processes, and enhanced patient care, including personalized treatment [33]. Despite major advances in AI research for healthcare, the deployment and adoption of AI technologies remain limited in clinical practice, underscoring the need for more formal training, on-the-job mentoring, and clarity on their use [34,35]. The increased use of AI in health and the expanding science around AI in clinical medicine have not eliminated the concerns people have, requiring that developers and users prioritize patient-relevant outcomes to fully understand AI’s true effects and limitations in health care [36]. Also, providers should play a significant role in AI design, development, and deployment through inside-out and outside-in innovation approaches [12,37,38].
Our study outcomes support previously identified findings that challenges such as ethical and legal concerns, patient privacy, and data security remain prominent obstacles hindering providers’ adoption of AI [12,37,38,39] as the continued use of GenAI tools and resources increases the risk of unauthorized data breaches. Beyond data-breach risks, 63% of clinicians expressed unease about GenAI-enabled surveillance and the continuous algorithmic monitoring of both patients and providers, which they felt could erode autonomy and trust, unless strict transparency, opt-in consent, and usage boundary safeguards are put into place. This calls for better data security and improved use of firewalls and relevant tools to protect against patient data breaches. To minimize these concerns, providers and healthcare organizations must promote a culture of transparency, accountability, and openness as to the capacity, potential and use of GenAI, and ensure vigilance over patient data security. While waiting for relevant policies and guidelines, there is an urgent need for developers and users alike to address these ethical, technical, and security challenges that GenAI brings.
Therefore, to effectively navigate the path forward to realize the potential of GenAI in health care and health, there is an urgent need to ensure appropriate skill generation; model testing, implementation, and monitoring; resources and infrastructure; and standardized oversight and guidelines [39]. Additional large-scale multiple-site studies that will explore in depth the findings of this study are needed using a deliberate proactive strategy [40].

5. Limitations

We could not achieve our calculated sample size due to providers’ inability to allocate the time needed to complete the questionnaires. With 130 responses, the survey achieved a 95% confidence interval of ±8.6% around a 50% proportion, wider than the ±5% originally planned. The lower response rate limited our analysis mainly to frequencies and percentages; nevertheless, we believe the insights shared remain informative. However, the generalizability of our findings is not guaranteed. We also lack complete data on some participants who started but did not fully complete the questionnaire; their experiences and views may differ from those of participants who completed it. The study is also subject to the challenges of online surveys, including selection bias, as only providers within the authors’ personal and professional networks were invited to participate. Because recruitment relied on the authors’ networks and listservs, the sample is not probabilistic and over-represents providers from California and New York, as well as Black/African Americans. Findings should therefore be interpreted as hypothesis-generating rather than nationally representative.
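The quoted ±8.6% half-width can be verified with the standard normal approximation for a proportion, using the stated n = 130 and p = 0.50 (a minimal check, not a re-analysis of the study data):

```python
import math

# Half-width of a 95% CI for a proportion of 0.50 at n = 130
# (normal approximation): Z * sqrt(p * (1 - p) / n).
n, p, Z = 130, 0.50, 1.96
half_width = Z * math.sqrt(p * (1 - p) / n)

print(round(half_width, 3))   # 0.086, i.e., about ±8.6 percentage points
```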

6. Conclusions

Although the sample size is small, the views expressed are still informative and, to a large extent, can be used to gauge the views of providers in the United States. With a high GenAI acceptance rate and poor formal training of providers and other users, there is an urgent need to develop an AI in-service training curriculum and formally expose US providers to GenAI. This shortfall in formal instruction is reflected in our data, as ‘limited knowledge of AI’ was the single most cited barrier (24.7%), indicating that structured, hands-on training could directly mitigate clinicians’ knowledge gaps and increase adoption readiness. Also, healthcare managers should be more transparent and communicative about their GenAI adoption initiatives and share the process, experiences, challenges, and successes with their teams.
On-the-job training and hands-on mentoring in AI use will help clinicians and other healthcare providers understand that GenAI is not a threat to their jobs. However, individuals with appropriate GenAI skills and capacities will, in the coming years, have advantages over those without the requisite skills. Thus, healthcare professionals should be trained in GenAI to mitigate the AI digital divide, which has already begun and is rapidly widening.
Finally, administrators should not increase the work packages of providers due to efficiency gains from AI but rather allow providers to reinvest this time into better patient care, proper documentation, and self-care towards reduced burnout and better work-life balance.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org. Table S1: Strobe Checklist.

Author Contributions

Conceptualization, OO and MB; methodology, OO; software, OO; validation, RI and SDTR; formal analysis, OO; investigation, OO and MB; resources, OO and RI; data curation, OO and MB; writing—original draft preparation, OO; writing—review and editing, all authors; visualization, OO; supervision, RI and SDTR; project administration, OO. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the California State University, Dominguez Hills (CSUDH) Institutional Review Board (IRB #: CSUDH IRB-FY2025-98 on 26 November 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data supporting reported results can be obtained upon request from the first author.

Acknowledgments

OOO and RI are grateful to Anupuma Joshi, Judy Aguirre, and Mi-Sook Kim for their guidance, and to the College of Health, Human Services, and Nursing, California State University, Dominguez Hills, Carson, California for institutional support. We thank Marsha Morgan from University College London for helpful comments. SDT-R was supported by the Wellcome Trust Institutional Strategic Support Fund at Imperial College London, London, United Kingdom. All authors acknowledge the United Kingdom National Institute for Healthcare Research Biomedical Facility at Imperial College London for infrastructural support. The views and opinions of authors expressed herein do not necessarily state or reflect those of CSUDH and other supporting institutions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
AMA American Medical Association
EHR Electronic Health Records
GenAI Generative Artificial Intelligence
SPSS Statistical Package for Social Sciences

References

  1. Najjar R. Redefining radiology: a review of artificial intelligence integration in medical imaging. Diagnostics. 2023 Aug 25;13(17):2760.
  2. Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A. Revolutionizing radiation therapy: the role of AI in clinical practice. Journal of radiation research. 2024 Jan;65(1):1-9.
  3. Miranda I, Luz JM, Pereira AR, Augusto JB. Artificial intelligence in cardiovascular imaging algorithms – what is used in clinical routine? [Internet]. European Society of Cardiology; 2024 Apr 02 [cited 2025-02-25]. Available from: https://www.escardio.org/Councils/Council-for-Cardiology-Practice-(CCP)/Cardiopractice/artificial-intelligence-in-cardiovascular-imaging-algorithms-what-is-used-in-c.
  4. Goldenberg SL, Nir G, Salcudean SE. A new era: artificial intelligence and machine learning in prostate cancer. Nature Reviews Urology. 2019 Jul;16(7):391-403.
  5. Riaz IB, Harmon S, Chen Z, Naqvi SA, Cheng L. Applications of artificial intelligence in prostate cancer care: a path to enhanced efficiency and outcomes. American Society of Clinical Oncology Educational Book. 2024 Jun;44(3):e438516.
  6. Tătaru OS, Vartolomei MD, Rassweiler JJ, Virgil O, Lucarelli G, Porpiglia F, Amparore D, Manfredi M, Carrieri G, Falagario U, Terracciano D. Artificial intelligence and machine learning in prostate cancer patient management—current trends and future perspectives. Diagnostics. 2021 Feb 20;11(2):354.
  7. McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, Back T, Chesus M, Corrado GS, Darzi A, Etemadi M. International evaluation of an AI system for breast cancer screening. Nature. 2020 Jan 2;577(7788):89-94.
  9. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb 2;542(7639):115-8.
  9. Attia ZI, Noseworthy PA, Lopez-Jimenez F, Asirvatham SJ, Deshmukh AJ, Gersh BJ, Carter RE, Yao X, Rabinstein AA, Erickson BJ, Kapa S. An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction. The Lancet. 2019 Sep 7;394(10201):861-7.
  10. Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, Duan T, Ding D, Bagul A, Langlotz CP, Patel BN. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS medicine. 2018 Nov 20;15(11):e1002686.
  11. American Medical Association. AMA Augmented Intelligence Research. Physician sentiments around the use of AI in health care: motivations, opportunities, risks, and use cases. Shifts from 2023 to 2024. Published February 2025.
  12. Oleribe OO, Taylor-Robinson AW, Agala VR, Sobande OO, Izurieta R, Taylor-Robinson SD. Global adoption, promotion, impact, and deployment of AI in patient care, health care delivery, management, and health care systems leadership: cross-sectional survey. J Med Internet Res. 2025;27:e70805. https://doi.org/10.2196/70805. https://www.jmir.org/2025/1/e70805b.
  13. Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. 2004;6(3):e34.
  14. Dave, D. The Statistical Landscape of AI Adoption in Healthcare. Radix: Software Development Updated Aug 1, 2024. https://radixweb.com/blog/ai-in-healthcare-statistics Last accessed on 11/10/2024.
  15. Sample Size Calculators for designing clinical research. https://sample-size.net/sample-size-conf-interval-proportion/ Last accessed 11/10/2024.
  16. OECD Publishing. Artificial Intelligence and the Health Workforce: Perspectives from Medical Associations on AI in Health. OECD Artificial Intelligence Papers, November 2024, No. 28. Retrieved from https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/11/artificial-intelligence-and-the-health-workforce_c8e4433d/9a31d8af-en.pdf on October 27, 2025.
  17. Waheed MA, Liu L. Perceptions of family physicians about applying AI in primary health care: case study from a premier health care organization. JMIR AI. 2024 Apr 17;3:e40781.
  18. Dongre AS, More SD, Wilson V, Singh RJ. Medical doctor’s perception of artificial intelligence during the COVID-19 era: a mixed methods study. Journal of Family Medicine and Primary Care. 2024 May 1;13(5):1931-6.
  19. AlQudah AA, Al-Emran M, Shaalan K. Technology acceptance in healthcare: a systematic review. Applied Sciences. 2021 Nov 9;11(22):10537.
  20. Hassan M, Kushniruk A, Borycki E. Barriers to and facilitators of artificial intelligence adoption in health care: scoping review. JMIR Human Factors. 2024 Aug 29;11:e48633.
  21. Rosenbacke R, Melhus Å, McKee M, Stuckler D. How explainable artificial intelligence can increase or decrease clinicians’ trust in AI applications in health care: systematic review. JMIR AI. 2024 Oct 30;3:e53207.
  22. Scipion CE, Manchester MA, Federman A, Wang Y, Arias JJ. Barriers to and facilitators of clinician acceptance and use of artificial intelligence in healthcare settings: a scoping review. BMJ Open. 2025 Apr 1;15(4):e092624.
  23. Finkelstein J, Gabriel A, Schmer S, Truong TT, Dunn A. Identifying facilitators and barriers to implementation of AI-assisted clinical decision support in an electronic health record system. Journal of Medical Systems. 2024 Sep 18;48(1):89.
  24. Tucci V, Saary J, Doyle TE. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. Journal of Medical Artificial Intelligence. 2022 Mar 30;5.4.
  25. Roppelt JS, Kanbach DK, Kraus S. Artificial intelligence in healthcare institutions: A systematic literature review on influencing factors. Technology in Society. 2024 Mar 1;76:102443.
  26. Hou T, Li M, Tan Y, Zhao H. Physician adoption of AI assistant. Manufacturing & Service Operations Management. 2024 Sep;26(5):1639-55.
  27. Bettelheim A. Majority of doctors worry about AI driving clinical decisions, survey shows. Axios. Published Oct 31, 2023. Retrieved on October 27, 2025.
  28. Sahni N, Stein G, Zemmel R, Cutler DM. The potential impact of artificial intelligence on healthcare spending. Cambridge, MA, USA: National Bureau of Economic Research; 2023 Jan 23.
  29. Lambert SI, Madi M, Sopka S, Lenes A, Stange H, Buszello CP, Stephan A. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digital Medicine. 2023 Jun 10;6(1):111.
  30. Witkowski K, Dougherty RB, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Medical Ethics. 2024 Jun 22;25(1):74.
  31. Dankwa-Mullan I. Health equity and ethical considerations in using artificial intelligence in public health and medicine. Preventing Chronic Disease. 2024 Aug 22;21:E64.
  32. Sagona M, Dai T, Macis M, Darden M. Trust in AI-assisted health systems and AI’s trust in humans. NPJ Health Systems. 2025 Mar 28;2(1):10.
  33. Lainjo B. Integrating artificial intelligence into healthcare systems: opportunities and challenges. Academia Medicine. 2024 Oct 30;1:1-13. https://doi.org/10.20935/AcadMed7382.
  34. Lekadir K, Frangi AF, Porras AR, Glocker B, Cintas C, Langlotz CP, Weicken E, Asselbergs FW, Prior F, Collins GS, Kaissis G. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. 2025 Feb 5;388:1-22.
  35. Ng JY, Maduranayagam SG, Suthakar N, Li A, Lokker C, Iorio A, Haynes RB, Moher D. Attitudes and perceptions of medical researchers towards the use of artificial intelligence chatbots in the scientific process: an international cross-sectional survey. The Lancet Digital Health. 2025 Jan 1;7(1):e94-102.
  36. Han R, Acosta JN, Shakeri Z, Ioannidis JP, Topol EJ, Rajpurkar P. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. The Lancet Digital Health. 2024 May 1;6(5):e367-73.
  37. Oleribe, O. O, Taylor-Robinson SD. Leveraging Artificial Intelligence Tools and Resources in Leadership Decisions. American Journal of Health Care Strategy Vol 1, Issue 3, Aug 21, 2025:107-123. https://doi.org/10.61449/ajhcs.2025.16. https://ajhcs.org/article/leveraging-artificial-intelligence-in-leadership.
  38. Oleribe O. O. Leveraging and Harnessing Generative Artificial Intelligence to Mitigate the Burden of Neurodevelopmental Disorders (NDDs) in Children. Healthcare, 13(15), 1898(1-13); https://doi.org/10.3390/healthcare13151898.
  39. National Academy of Medicine. 2025. Generative Artificial Intelligence in Health and Medicine: Opportunities and Responsibilities for Transformative Innovation. Washington, DC: The National Academies Press. https://doi.org/10.17226/28907.
  40. Oleribe, O. O. Leading the next pandemics. Public Health in Practice, 2025 June. 9, 100605. https://doi.org/10.1016/j.puhip.2025.100605.
Figure 1. Perceived Current and Future Usefulness of GenAI in Patient Care Among U.S. Clinicians.
Table 1. Demographic Information of Respondents.
Description Frequency Percentage
Role in Healthcare (n = 109)
Physician 42 38.50%
Nurse 18 16.50%
Allied healthcare professional 14 12.80%
Hospital administrator 3 2.80%
Other providers 32 29.40%
Gender at Birth (n = 72)
Prefer not to say 2 2.80%
Female 39 54.20%
Male 31 43.10%
Age of Respondents (n = 71)
60 years and above 19 26.80%
50 - 59 years 27 38.00%
40 - 49 years 9 12.70%
30 - 39 years 14 19.70%
20 - 29 years 2 2.80%
Highest Education (n = 71)
Doctorate 39 54.90%
Masters 17 23.90%
Bachelors 11 15.50%
High School Diploma/GED 2 2.80%
Others 2 2.80%
Length in Health Industry (n = 71)
25 or more years 33 46.50%
20 - 24 years 12 16.90%
15 - 19 years 4 5.60%
10 - 14 years 9 12.70%
5 - 9 years 10 14.10%
Less than 5 years 3 4.20%
Current work location (n = 71)
Private 17 23.90%
Nonprofit/Public Charity 12 16.90%
College/University 14 19.70%
County/Local 7 9.90%
Federal or State 10 14.10%
Others 11 15.50%
Current work section (n = 72)
Public Health and Preventive Medicine 17 23.60%
Internal Medicine 10 13.90%
Family Medicine 8 11.10%
Pediatrics 7 9.70%
Pathology 3 4.20%
Psychiatry 3 4.20%
Geriatrics 2 2.80%
Ophthalmology 2 2.80%
Obstetrics and Gynecology 2 2.80%
Surgery 1 1.40%
Race or Ethnicity (n = 71)
Black/African American 27 38.00%
White/Caucasian 24 33.80%
Hispanic/Latino/Latinx 7 9.90%
Native American/Alaska Native 1 1.40%
Pacific Island/Hawaii 0 0.00%
East Asian 5 7.00%
South Asian 4 5.60%
Arab/Middle Eastern 2 2.80%
Mixed 0 0.00%
I prefer not to say 7 9.90%
Table 2. AI adoption and embedding in the organization.
Description Freq Percentage
AI is useful in patient care and management (n = 107)
Yes 93 86.90%
Neither true nor false 9 8.40%
No 5 4.70%
How useful AI is in patient care and practice (n = 93)
Extremely useful 20 21.50%
Very useful 32 34.40%
Moderately useful 31 33.30%
Slightly useful 9 9.70%
Not at all useful 1 1.10%
AI usefulness in patient care and practice in the future (n = 99)
Extremely useful 32 32.30%
Very useful 38 38.40%
Moderately useful 17 17.20%
Slightly useful 10 10.10%
Not at all useful 2 2.00%
Formal exposure or training in AI (n = 99)
Not Sure 5 5.10%
No 52 52.50%
Yes 42 42.40%
Training individuals were exposed to (n = 42)
Basic Orientation to AI 35 83.30%
Training on AI use in patient care (diagnosis, treatment, etc.) 13 31.00%
Training in AI use in management and leadership 11 26.20%
Training in technical aspects of AI 14 33.30%
Other forms of AI training 11 26.20%
Organization has adopted/begun the process of AI adoption (n = 99)
I do not know 11 11.10%
No, we have not started adopting AI 37 37.40%
Yes, we are beginning to think about adopting AI 24 24.20%
Yes, we will adopt AI 4 4.00%
Yes, we have adopted AI 23 23.20%
Leaders of AI adoption in organizations (n = 61)
Others (Please specify) 4 6.60%
I do not know 15 24.60%
Outsourced 5 8.20%
IT Staff 8 13.10%
Administration Staff 5 8.20%
Middle Level/Management Staff 4 6.60%
Top-level/Executive Leadership 20 32.80%
Organizational training on AI use (n = 99)
I do not know/I am not sure 38 38.40%
No 36 36.40%
Yes 25 25.30%
Table 3. Acceptance and use of AI by healthcare providers.
Description Freq Percentage
Where AI is commonly used (n = 65)
Report Writing 28 43.10%
Research 18 27.70%
Patient care (e.g., treatment, continuity of care, referral, etc.) 17 26.20%
Diagnosis (e.g., radiology, pathology, endoscopy, etc.) 16 24.60%
Leadership and management 14 21.50%
Strategic management 12 18.50%
Staff and personnel management 9 13.80%
Resource management 8 12.30%
Precision Medicine (e.g., gene therapy, cancer management, etc.) 6 9.20%
I do not want to specify 6 9.20%
Others 14 21.50%
Aspects of patient care where AI is most useful (n = 73)
Time management 25 34.20%
Documentation activities 25 34.20%
Improved patient registration processes 15 20.50%
Research 15 20.50%
Diagnosis 13 17.80%
Team management 11 15.10%
Patient management and care 10 13.70%
Errors and mistakes 10 13.70%
Continuity of care and follow up processes 10 13.70%
Patient clerking and history taking 9 12.30%
Provider’s personal job satisfaction 9 12.30%
Laboratory processes 8 11.00%
Prescription practices 7 9.60%
Provider burnout 5 6.80%
Work Life Balance 5 6.80%
Provider health and wellbeing 4 5.50%
Patient satisfaction 3 4.10%
Others 19 26.00%
Healthcare activity where AI is very useful (n = 77)
Report writing 19 24.70%
Diagnosis (e.g., Radiology, Pathology, Endoscopy, etc.) 18 23.40%
Strategy development 7 9.10%
Patient care (e.g., Treatment, Continuity of Care, Referral, etc.) 7 9.10%
None of the above 6 7.80%
Leadership and management 5 6.50%
Precision medicine (e.g., cancer management) 4 5.20%
Resource management 3 3.90%
I do not want to specify 2 2.60%
Financial management 2 2.60%
Staff management 1 1.30%
Others 3 3.90%
Table 4. Barriers to AI Use and Mitigation Strategies.
Descriptions Freq Percentage
Most important barrier to AI adoption and implementation in patient care (n = 77)
Knowledge of AI 19 24.70%
Fear of job loss 13 16.90%
Cost of acquisition 8 10.40%
Staff skills and capacities 7 9.10%
Organization-wide adoption of AI 6 7.80%
Staff resistance to change 5 6.50%
Leadership and management 4 5.20%
Technology and equipment 4 5.20%
Interest and attitude of staff 1 1.30%
Others 8 10.40%
Willingness to support AI adoption and embedding (N = 79)
Not sure 17 22.50%
No 5 6.30%
Yes 57 72.20%
Patient care practice challenges (n = 79)
Lack of Human Oversight 46 58.20%
Bias in AI Algorithms 43 54.40%
Overdependence on AI 43 54.40%
Unintended Consequences 38 48.10%
Ethical and Legal Challenges 37 46.80%
Data Privacy and Security Concerns 33 41.80%
Algorithmic Opacity (Black Box problems) 31 39.20%
Job Displacement 26 32.90%
Reduced Patient-Provider Interaction 25 31.60%
More workload 19 24.10%
High Cost and Accessibility Issues 17 21.50%
Strategies to mitigate the challenges of AI in healthcare (N = 79)
Human Oversight 53 67.10%
Staff Training 48 60.80%
Provider involvement in design and development 45 57.00%
Data protection 39 49.40%
Enhanced Transparency 34 43.00%
Improved accessibility 25 31.60%
Early adoption and integration 23 29.10%
Table 5. Core benefits of and ethical issues related to AI use in healthcare.
Description Freq Percentage
Core benefits of AI in clinical practice (n = 79)
Facilitates patients’ documentation and clerking 45 57.00%
Minimizes errors and mistakes 39 49.40%
Opens up time for better provider-patient communication 34 43.00%
Shortens turnaround time for requests 34 43.00%
Improves provider-patient relationship 13 16.50%
Encourages provider-patient relationship 12 15.20%
Others 9 11.40%
Ethical issues associated with AI use in healthcare (n = 79)
Privacy and Surveillance 50 63.30%
Security Risks 44 55.70%
Misinformation and Deepfakes 40 50.60%
Lack of regulations and policies 40 50.60%
Bias and Fairness 37 46.80%
Autonomy and Decision Making 32 40.50%
Ethical use in Education and Patient care 30 38.00%
Ownership and Intellectual Property 30 38.00%
Job Displacement and Economic Impact 27 34.20%
Transparency and Accountability 22 27.80%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.