Preprint
Article

This version is not peer-reviewed.

Ethical and Practical Considerations of Physicians and Nurses on Integrating Artificial Intelligence in Clinical Practices in Saudi Arabia: A Cross-Sectional Study

A peer-reviewed article of this preprint also exists.

Submitted: 19 June 2025

Posted: 20 June 2025


Abstract
Background/Objectives: The emergence of artificial intelligence (AI) has revolutionized the healthcare industry. However, its integration into clinical practices raises ethical and practical concerns. This study aims to explore ethical and practical considerations perceived by physicians and nurses in Saudi Arabia. Methods: It employed a cross-sectional design with 400 physicians and nurses using a pre-established online questionnaire. Descriptive data were analyzed through means and standard deviations, while inferential statistics were done using the independent samples t-test. Results: The majority of participants were male (57%) and physicians (73.8%), with most employed in governmental organizations (87%). Key findings revealed significant concerns: participants perceived a lack of skills to effectively utilize AI in clinical practice (mean = 4.04) and security risks such as AI manipulation or hacking (mean = 3.83). The most pressing ethical challenges included AI’s potential incompatibility with all populations and cultural norms (mean = 3.90) and uncertainty regarding responsibility for AI-related errors (mean = 3.84). Conclusion: These findings highlight substantial barriers that hinder the effective integration of AI in clinical practices in Saudi Arabia. Addressing these challenges requires leadership support, specific training initiatives, and developing practical strategies tailored to the local context. Future research should include other healthcare professionals and qualitatively explore further underlying factors influencing AI adoption.

1. Introduction

Artificial Intelligence (AI) has recently been at the forefront of global technological innovation, making headlines all over the world [1]. It evolved from the growing use of automation and technology to simulate human abilities in critical thinking and decision-making [2]. The concept dates back to 1950, when Alan Turing posed his widely discussed question, “Can machines think?” [3].
In its earliest form, AI consisted of simple “if-then” rules; over time, it advanced to more sophisticated algorithms that imitate the human brain’s abilities in performing tasks [4]. Practically, AI encompasses diverse techniques, including machine learning (ML), deep learning (DL), natural language processing (NLP), and computer vision (CV) [4,5]. The capabilities offered by these applications include, but are not limited to, visual perception, speech recognition, decision-making, and translation between languages [6].
Since its emergence, AI has been adopted across many sectors, including manufacturing, e-commerce, banking, education, and healthcare [6]. In healthcare, for example, AI adoption has been credited with improving patient outcomes through the diagnosis of medical conditions, the development of customized treatment plans, the provision of preventive interventions, and drug discovery [7]. As a result, a notable transformation has occurred in medical services such as medicine, radiology, pathology, and dermatology [8]. AI has also gained a crucial role in supporting medical practice in the prognosis of conditions and the development of novel therapies.
AI applications are increasingly being used to improve clinical and administrative performance, which in turn improves overall patient outcomes. NLP, for example, is used to process free-text data entered in medical records [9]. Additionally, CV technology has improved the reading of radiological images by supporting their interpretation and analysis, while ML has assisted with data analysis and provided health professionals with insights [10].
Although the adoption of AI has transformed the healthcare industry and produced exceptional performance in the services provided, a review of the literature reveals several challenges that hinder its full integration and application in clinical practice. Moreover, there is a dearth of solid, realistic research that fully addresses the ethical and practical issues from the perspectives of healthcare professionals. Promoting the appropriate application of AI in healthcare therefore requires ensuring that ethical considerations are addressed when integrating AI into clinical practices.
Additionally, proactive assessment of the ethical concerns related to AI integration will help close the current knowledge gap and guarantee the provision of safe, reliable, and high-quality patient care [11,12]. To address this matter, this study aimed to identify the ethical and practical considerations from physicians’ and nurses’ perspectives regarding AI integration in clinical practices in Saudi Arabia.

2. Materials and Methods

Study Design

A cross-sectional study was carried out targeting practicing physicians and nurses in Saudi Arabia, excluding dentists and those assigned administrative responsibilities. The sample size was determined using the formula n = Z²P(1 − P)/d², where Z = 1.96 for a 95% confidence level, P = 0.5 for the assumed proportion, and d = 0.05 for the margin of error. Accordingly, the estimated sample size was 385 respondents.
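The sample-size calculation can be reproduced in a few lines. The sketch below (Python; the helper name `cochran_sample_size` is chosen for illustration) evaluates Z²P(1 − P)/d² = 384.16 and rounds up to the next whole respondent:

```python
import math

def cochran_sample_size(z: float = 1.96, p: float = 0.5, d: float = 0.05) -> int:
    """Cochran's formula n = Z^2 * P * (1 - P) / d^2, rounded up to a whole respondent."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(cochran_sample_size())  # 385
```

Note that P = 0.5 maximizes P(1 − P), giving the most conservative (largest) sample size when the true proportion is unknown.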
A closed-ended online questionnaire was developed by the research team and validated by three independent subject-matter experts to ensure its content validity. It included 26 questions divided into four sections: demographics (4 items), practitioners’ experience with AI (5 items), practitioners’ concerns about the integration of AI in clinical practice (10 items), and ethical challenges of integrating AI in clinical practice (7 items) (Appendix A).
After IRB approval was obtained, the questionnaire was piloted with 30 randomly selected practitioners from different healthcare institutions who met the inclusion criteria. This step aimed to ensure the face validity of the developed questionnaire and to measure its internal consistency (reliability) using Cronbach’s alpha. The internal consistency was found to be good, with an overall Cronbach’s alpha of 0.866.
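Cronbach’s alpha is straightforward to compute outside SPSS as well. The following sketch is illustrative only, using made-up Likert responses rather than the pilot data, and shows the standard item-variance formulation of alpha:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, columns = items),
# not the pilot data from the study.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # 0.909
```

Values above roughly 0.8, such as the study’s 0.866, are conventionally interpreted as good internal consistency.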

Data Collection

Following IRB approval, data collection commenced. Questionnaires were distributed online via Google Forms on social media platforms, including X (formerly Twitter), WhatsApp, Telegram, and LinkedIn. All collected responses were de-identified, stored securely, and kept confidential. Participation was voluntary; respondents had to provide consent before answering any questions. Participants also had the right to withdraw at any point, in which case their responses would be excluded immediately.

Statistical Analysis

Questionnaires were analyzed using the Statistical Package for the Social Sciences (SPSS) version 29. Descriptive data were summarized using frequencies, means, and standard deviations (SD), while inferential analyses used the independent samples t-test.

3. Results

3.1. Descriptive Statistics

A total of 400 participants responded, representing diverse healthcare institutions across Saudi Arabia. A response rate could not be calculated because there was no defined sampling frame. Of the respondents, n=228 (57.0%) were male and n=172 (43.0%) were female. By position, n=295 (73.8%) were physicians, while nurses represented a lower percentage, with n=105 (26.3%). Most respondents worked in governmental hospitals, n=348 (87.0%), while the remainder worked at private and military hospitals, n=30 (7.5%) and n=22 (5.5%), respectively. Finally, the majority of participants were from urban areas, n=352 (88.0%), with the rest from rural areas, n=48 (12.0%) (Table 1).
Regarding participants’ use and awareness of AI, the overall frequency of use was low: never n=136 (34.0%), occasionally n=128 (32.0%), daily n=86 (21.5%), and weekly n=50 (12.5%). The majority of respondents were willing to use AI in clinical practice, n=298 (74.5%). In terms of potential benefits and concerns, n=322 (80.5%) of participants were aware of AI’s benefits, while n=284 (71.0%) were aware of the ethical concerns related to its implementation in clinical practice. The majority of respondents recognized the applicability of AI in their specialty, n=248 (62.0%), while the rest did not, n=152 (38.0%).
Regarding concerns related to AI integration in clinical practice, the highest mean scores were associated with a lack of adequate skills for effective utilization of AI and with manipulation of AI-based systems by a third party, with mean scores of 4.04 and 3.83, respectively. The lowest mean scores related to diminished roles of physicians and nurses and to increased waste from overutilization of healthcare services, 3.24 and 3.22, respectively (Table 2).
Among the ethical challenges related to AI integration in clinical practice, the highest mean scores were associated with the inapplicability of specific AI algorithms to diverse populations and cultural norms, followed by accountability and responsibility issues related to potential decision errors resulting from the use of AI, 3.90 and 3.84, respectively. The lowest mean scores were linked to the acquisition of informed consent from patients regarding AI functionalities and to the potential delivery of inequitable patient care as a result of AI biases, 3.55 and 3.28, respectively (Table 3).

3.2. Independent t-Test: Difference in the Mean Score of AI Concerns by Position

As reflected in Table 4 and Table 5, an independent samples t-test was conducted to measure the difference between physicians and nurses regarding the integration of AI in clinical practices. The results showed that there was no significant difference in the mean scores between physicians (M = 3.52, SD = 0.57) and nurses (M = 3.56, SD = 0.57); t (398) = -0.64, p = .522, 95% CI [-0.17, 0.09]. Levene’s test indicated that the assumption of equal variances was met, F = 0.17, p = .683.
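The reported comparison can be illustrated with SciPy. The data below are simulated using the reported group means, SDs, and sample sizes; they are not the actual study responses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated concern scores using the reported group means, SDs, and sizes;
# these are NOT the actual study responses.
physicians = rng.normal(loc=3.52, scale=0.57, size=295)
nurses = rng.normal(loc=3.56, scale=0.57, size=105)

# Levene's test checks the equal-variance assumption first.
levene_stat, levene_p = stats.levene(physicians, nurses)

# Independent samples t-test; pool variances only if Levene's test is non-significant.
t_stat, p_value = stats.ttest_ind(physicians, nurses, equal_var=levene_p > 0.05)
print(f"Levene p = {levene_p:.3f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Checking Levene’s test before choosing between the pooled-variance and Welch forms of the t-test mirrors the procedure reported above.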

3.3. Independent t-Test: Difference in the Mean Score of AI Ethical Challenges by Position

The independent samples t-test showed that there was no significant difference in the mean scores for physicians (M = 3.66, SD = 0.65, n = 295) and nurses (M = 3.69, SD = 0.65, n = 105); t (398) = -0.39, p = .697, 95% CI [-0.18, 0.12]. Levene’s test for equality of variances was not significant, F = 0.18, p = .668, indicating that the assumption of equal variances was met. (Table 6 and Table 7).

4. Discussion

AI has a promising impact on the provision of healthcare services. However, to ensure its effective and reliable operability in healthcare, AI must be comprehensively assessed and evaluated. This cross-sectional study has addressed physicians’ and nurses’ perspectives regarding the ethical and practical considerations of AI integration in clinical practices. Overall, the study revealed that both physicians and nurses lack the skills to use AI effectively, and it emphasized the fact that AI algorithms developed for specific populations might not be suitable for another.
When it comes to AI-related concerns from the perspective of healthcare professionals, our results revealed that physicians and nurses believe a lack of essential skills contributes to the ineffective adoption of AI in clinical fields. This implies a demand for fostering a culture of AI in healthcare, achievable by equipping practitioners with the required knowledge and training to leverage AI. This may be because AI is still a relatively new technology in the medical field and is not yet used extensively in hospitals. The result aligns with the study of Naik et al. [13], which emphasized the importance of clinicians’ competencies and proficiency in augmenting the benefits gained from AI. Thus, for successful adoption of AI-supported clinical care, healthcare leadership should take a multidimensional approach in which AI governance is ensured through cultivating a culture of AI integration, focusing on strategic planning, providing mentoring and resources, and investing in training and development. In addition, to meet the potential demands of the healthcare sector, leaders in healthcare education should proactively equip the future healthcare workforce with the essential skills to fully harness AI’s potential. This is consistent with the study of Hamd et al. [14], which recommends teaching medical students about the use of AI as part of the medical school curriculum.
Another key finding of the study is the potential manipulation of AI databases by a third party. The majority of participants expressed worries about the quality of data produced by AI-based systems, which could lead to poor clinical outcomes. From a technical standpoint, AI algorithms are trained on massive databases [15]. Consequently, inadequate data may be incorporated and affect the provision of healthcare services, resulting in substandard clinical outcomes. According to Goktas and Grzybowski [16], the data themselves can negatively influence clinical recommendations, leading to suboptimal diagnosis of medical conditions, therapeutic plans, and overall patient health outcomes. Likewise, biased or manipulated data were the most common concerns perceived by healthcare professionals [15]. Moreover, the study of Obermeyer et al. [17] confirms our findings, reporting that AI algorithms showed racial bias affecting the prediction of health status among White and Black patients. Inadvertently, AI algorithms may carry biases that lead to inequitable diagnosis or treatment [18]. In dermatology, for example, AI may exhibit racial bias when identifying skin disorders in people with darker skin colors compared with lighter skin types [19].
Additionally, the study showed that physicians and nurses do not believe AI can understand patients’ medical conditions as accurately as clinicians. Accordingly, they consider the management and diagnosis of medical conditions by physicians more reliable than such technology. In contrast, the study of Elendu et al. [18] highlighted that the introduction of AI in healthcare has refined practitioners’ roles in one way or another. Moreover, AI may facilitate and streamline the analysis of huge volumes of data and the identification of patterns that challenge physicians and other healthcare providers [20]. Failure to appropriately understand patients’ medical status, however, might expose them to diagnostic errors and potential harm [21]. In addition, the study of Gundlack et al. [22] reported that AI cannot simulate essential human characteristics, such as patient-provider rapport in general and empathy and emotional intelligence in particular, which can dramatically impact health outcomes. Despite the rapidly increasing introduction of AI in clinical practice, the study of Pressman et al. [23] demonstrated the inability of AI to substitute for physicians’ knowledge and judgment. This supports the position that AI-based tools and systems should be leveraged to assist and improve the provision of healthcare services, not to replace clinicians [24].
Regarding the challenges of AI, the current study identified three major issues as perceived by participants. It showed that algorithms programmed in one culture may not be appropriate for another. This is similar to the findings of Tilala et al. [20], which concluded that an AI customized for one group might not be suitable for another, particularly if the two populations have different cultures and norms. Due to such differences, AI might produce inaccurate outcomes. For example, AI algorithms widely disseminated and utilized in the US to determine required clinical care give different information for African American and Caucasian patients, even when they have the same score [25]. This is supported by the findings of Monteith et al. [26], who reported that AI models do not perform well when deployed in settings whose population characteristics differ from those used for training.
Another key finding concerns protecting patient privacy and ensuring data security in healthcare. According to the participants in this study, the privacy and security issues associated with the integration of AI-based systems are still not adequately addressed. This matter has been raised by many studies, such as Weiner et al. [27], He et al. [28], and Currie and Hawk [29]. The availability of patient data is crucial for training and testing AI models; the lack of such data limits training and ultimately hinders the potential benefits of AI tools [6]. Their tendency to gather and analyze enormous volumes of patient data makes AI systems a desirable target for hackers, with possible consequences including identity theft, monetary loss, reputational harm, and loss of trust [8]. This emphasizes the need for strong data protection protocols.
According to participants’ perspectives, the study identified cross-border issues as one of the main challenges. Information exchange and international collaboration might necessitate the adoption of different regulations and standards of care. The study of Lewin et al. [30] raised concerns about sharing patient information, arguing that this cutting-edge technology should not affect or breach individuals’ privacy, while previous studies emphasized that sharing patient information is fundamental to feeding AI models with real data. These models have the potential to assist healthcare providers in advancing precision medicine and optimizing care plans [5]. Despite the urgency of regulatory rules to govern the disclosure of patient data at national and international levels, the systematic review by Karimian et al. [31] reported the lack of a comprehensive ethical framework for AI in healthcare. Nevertheless, sharing patient medical information mitigates potential biases and ensures open access to these data by developers, enhancing the testing and training of AI models [32].
Our study had several strengths, including a sample drawn from different healthcare settings and locations in Saudi Arabia, which increases the generalizability of the findings. Another strength was the inferential analysis performed: independent samples t-tests were conducted to examine differences between physicians’ and nurses’ perspectives regarding AI concerns and the ethical challenges of its integration in clinical practices.
The study has three limitations. First, it targeted only physicians and nurses; involving dentists and allied health professionals might have enriched the findings with additional insights. Second, the number of responses received is somewhat small compared to the number of physicians and nurses in the Kingdom of Saudi Arabia. Third, the study relied on quantitative data only, using predetermined concerns and challenges; individual semi-structured interviews might have uncovered more sensitive factors related to the effective integration of AI in clinical practices.

5. Conclusions

The study identified physicians’ and nurses’ concerns about the integration of AI in clinical practices in Saudi Arabia and the associated ethical considerations. The findings indicate that participants raised concerns that hinder the effective adoption of AI, particularly in the clinical field. These results contribute to a growing body of evidence supporting governments and healthcare decision-makers in integrating AI into healthcare. Although the study was limited to physicians and nurses, its insights suggest that similar studies would benefit from including dentists and other allied health specialists. Leadership support is a cornerstone for building a culture of AI integration in which AI is incorporated into workflows and daily roles. Furthermore, qualitative research should be conducted to explore other challenges and practical recommendations. Ultimately, mitigating these challenges is essential to maximize the benefits of AI and enhance both patient safety and patient experience.

Guidelines and Standards Statement

This manuscript was drafted in accordance with the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines for cross-sectional research. A complete list of reporting guidelines can be accessed via the EQUATOR Network: https://www.equator-network.org/.

Author Contributions

Conceptualization, A.A. and M.H.; methodology, A.A. and R.K.; formal analysis, A.A.; investigation, S.S., A.E., and A.G.; writing—original draft preparation, A.A.; writing—review and editing, M.H., R.K., and S.S.; supervision, A.A. and M.H.; project administration, R.K., S.S., A.E., and A.G. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the General Directorate of Health Affairs in Madinah (IRB log No: 09-24, 04 October 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available upon request from the corresponding author due to the privacy of participants.

Acknowledgments

We gratefully acknowledge the Saudi Commission for Health Specialists (SCFHS), Dr. Doaa’ Al-Idrisi, and Hattan Alsubhi for their valuable support and cooperation in the data collection phase. Also, we thank Prof. Rana Abu Farha for validating and finalizing the research tool used in this study.

Conflicts of Interest

The authors declare no conflicts of interest.


Use of Artificial Intelligence

AI or AI-assisted tools were not used in drafting any aspect of this manuscript.

Appendix A

Ethical and Practical Perspectives of Physicians and Nurses on Integrating Artificial Intelligence in Clinical Practice
Informed Consent Form
Dear participant,
Researchers from different hospitals and universities are carrying out a research project with the purpose of assessing physicians’ and nurses’ ethical and practical perspectives on integrating artificial intelligence in clinical practice.
Your participation in this research study is voluntary. You may choose not to participate. If you decide to participate in this research survey, you may withdraw at any time.
We would like to confirm that all information provided here will be kept confidential. All data will be stored in a password-protected electronic format. To help protect your confidentiality, the surveys will not contain information that will personally identify you, and it will be used only for research purposes.
The procedure involves filling out an online survey that will take approximately 10 minutes. Your participation in completing this survey is highly appreciated.
ELECTRONIC CONSENT: Please select your choice below.
Clicking on the “agree” button below indicates that:
  • You have read the above information, AND
  • You voluntarily agree to participate
If you do not wish to participate in the research study, please decline participation by clicking on the “disagree” button.
Agree
Disagree
Part 1. Demographic information
Gender:
Female
Male
Position:
Physician
Nurse
Type of institution:
Governmental hospitals
Military clinic
Private practice
Location:
Urban area
Rural area
Part 2. Physicians’ experience with artificial intelligence
Frequency of artificial intelligence use in clinical practice:
Daily
Weekly
Occasionally
Never
Are you willing to use “artificial intelligence” tools in your clinical practice?
Yes
No
I don’t know
Not able to
Are you aware of the potential benefits of using artificial intelligence?
Yes
No
Are you aware of the potential concerns of using artificial intelligence?
Yes
No
Do you know of an area for using AI in your specialty?
Yes
No
I don’t know
Part 3. Physicians’ / nurses’ concerns about the integration of artificial intelligence in clinical practice
For the following statements, please select your level of agreement on a scale from 1 to 5, where 1 = strongly disagree and 5 = strongly agree.
Statements (10 items) | Strongly agree | Agree | Neutral | Disagree | Strongly disagree
Artificial intelligence might not understand complex medical conditions as accurately as physicians and nurses do.
Artificial intelligence could reduce the roles that physicians / nurses traditionally play.
Physicians / Nurses might feel more stressed because of the additional demands of using technology/ AI.
Artificial intelligence could potentially weaken the relationship between patients and their treating team.
Not all physicians / nurses have adequate skills to use artificial intelligence effectively.
There is a concern that artificial intelligence-based systems could be manipulated from outside (third party, hackers, etc.).
Artificial intelligence will worsen problems in healthcare such as overutilization of laboratory testing, overdiagnosis, and overtreatment.
The use of artificial intelligence may negatively impact physicians’ / nurses’ analytical thinking, critical thinking, and decision-making skills.
Artificial intelligence lacks contextual knowledge and the ability to read social cues.
Physicians / Nurses lack the time to learn how to use complex artificial intelligence-based medical devices.
Part 4: Ethical challenges of integrating artificial intelligence in clinical practice
For the following statements, please select your level of agreement on a scale from 1 to 5, where 1 = strongly disagree and 5 = strongly agree.
Statements (7 items) | Strongly agree | Agree | Neutral | Disagree | Strongly disagree
Security and Safety: Patient privacy and data security may be inadequately addressed in the integration of artificial intelligence systems in hospital practices.
Patients Equity: Bias in artificial intelligence tools may result in unfair healthcare delivery.
Informed Consent: Ensuring appropriate informed consent becomes challenging when medical professionals are unable to effectively explain the functioning of artificial intelligence medical devices to patients.
Accountability and Responsibility: There is a concern about who is responsible if artificial intelligence makes medical errors without healthcare professionals’ input.
Data Ownership and Control: Determining who owns the medical data used to train artificial intelligence systems and how it can be ethically and legally shared or sold.
Cross-border Issues: Artificial intelligence in healthcare often involves international collaborations and data sharing, which raise ethical challenges concerning regulatory differences and the standards used for patient care.
Cultural Sensitivity: Artificial intelligence algorithms developed in one cultural context may not be appropriate or effective when applied to diverse populations with different cultural norms.

References

  1. A. Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. O’Reilly Media, 2019.
  2. M. Cascella, M. C. Tracey, E. Petrucci, and E. G. Bignami, “Exploring Artificial Intelligence in Anesthesia: A Primer on Ethics, and Clinical Applications,” Surgeries (Switzerland), vol. 4, no. 2, pp. 264–274, 2023. [CrossRef]
  3. S. Castagno and M. Khalifa, “Perceptions of Artificial Intelligence Among Healthcare Staff: A Qualitative Survey Study,” Front Artif Intell, vol. 3, no. October, pp. 1–7, 2020. [CrossRef]
  4. V. Kaul, S. Enslin, and S. A. Gross, “History of artificial intelligence in medicine,” Gastrointest Endosc, vol. 92, no. 4, pp. 807–812, 2020. [CrossRef]
  5. A. Salam and N. Abhinesh, “Revolutionizing healthcare: the role of artificial intelligence in clinical practice,” IP Indian Journal of Clinical and Experimental Dermatology, vol. 10, no. 2, pp. 107–112, 2024. [CrossRef]
  6. K. Basu, R. Sinha, A. Ong, and T. Basu, “Artificial Intelligence: How is it changing medical sciences and its future?,” Indian J Dermatol, vol. 65, no. 5, pp. 365–370, 2020. [CrossRef]
  7. S. Prakash, J. N. Balaji, A. Joshi, and K. M. Surapaneni, “Ethical Conundrums in the Application of Artificial Intelligence (AI) in Healthcare—A Scoping Review of Reviews,” J Pers Med, vol. 12, no. 11, 2022. [CrossRef]
  8. M. Li, X. M. Xiong, B. Xu, and C. Dickson, “Chinese Oncologists’ Perspectives on Integrating AI into Clinical Practice: Cross-Sectional Survey Study,” JMIR Form Res, vol. 8, pp. 1–14, 2024. [CrossRef]
  9. M. Terra, M. Baklola, S. Ali, and K. El-Bastawisy, “Opportunities, applications, challenges and ethical implications of artificial intelligence in psychiatry: a narrative review,” Egyptian Journal of Neurology, Psychiatry and Neurosurgery, vol. 59, no. 1, 2023. [CrossRef]
  10. T. H. Koo, A. D. Zakaria, J. K. Ng, and X. Bin Leong, “Systematic Review of the Application of Artificial Intelligence in Healthcare and Nursing Care,” Malaysian Journal of Medical Sciences, vol. 31, no. 5, pp. 135–142, 2024. [CrossRef]
  11. F. De Micco, S. Grassi, L. Tomassini, G. Di Palma, G. Ricchezze, and R. Scendoni, “Robotics and AI into healthcare from the perspective of European regulation: who is responsible for medical malpractice?,” Front Med (Lausanne), vol. 11, no. September, pp. 1–10, 2024. [CrossRef]
  12. A. Y. Z. Tung and L. W. Dong, “Malaysian Medical Students’ Attitudes and Readiness Toward AI (Artificial Intelligence): A Cross-Sectional Study,” J Med Educ Curric Dev, vol. 10, 2023. [CrossRef]
  13. N. Naik et al., “Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility?,” Front Surg, vol. 9, no. March, pp. 1–6, 2022. [CrossRef]
  14. Z. Y. Hamd, W. Elshami, S. Al, H. Aljuaid, and M. M. Abuzaid, A closer look at the current knowledge and prospects of artificial intelligence integration in dentistry practice: A cross-sectional study, (2023). [CrossRef]
  15. M. A. Siddiqui, “A Comprehensive Review of AI in Agriculture,” vol. 14, no. 1, 2024. [CrossRef]
  16. P. Goktas and A. Grzybowski, “Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI,” J Clin Med, vol. 14, no. 5, pp. 1–28, 2025. [CrossRef]
  17. Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, “Dissecting racial bias in an algorithm used to manage the health of populations,” Science, vol. 366, no. 6464, pp. 447–453, 2019. [CrossRef]
  18. C. Elendu et al., “Ethical implications of AI and robotics in healthcare: A review,” Medicine (United States), vol. 102, no. 50, p. E36671, 2023. [CrossRef]
  19. K. Koshechkin and A. Khokholov, “Ethical issues in implementing artificial intelligence in healthcare,” Медицинская Этика, no. 2024(1), pp. 11–17, 2024. [CrossRef]
  20. M. H. Tilala et al., “Ethical Considerations in the Use of Artificial Intelligence and Machine Learning in Health Care: A Comprehensive Review,” Cureus, vol. 16, no. 6, 2024. [CrossRef]
  21. M. Beil, I. Proft, D. van Heerden, S. Sviri, and P. V. van Heerden, “Ethical considerations about artificial intelligence for prognostication in intensive care,” Intensive Care Medicine Experimental, vol. 7, no. 1, 2019. [CrossRef]
  22. J. Gundlack et al., “Artificial Intelligence in Medical Care - Patients’ Perceptions on Caregiving Relationships and Ethics: A Qualitative Study,” Health Expect, vol. 28, no. 2, p. e70216, 2025. [CrossRef]
  23. S. M. Pressman, S. Borna, C. A. Gomez-Cabello, S. A. Haider, C. Haider, and A. J. Forte, “AI and Ethics: A Systematic Review of the Ethical Considerations of Large Language Model Use in Surgery Research,” Healthcare (Switzerland), vol. 12, no. 8, 2024. [CrossRef]
  24. H. Intissar and R. Sara, “Artificial intelligence in healthcare: a focus on the best practices,” vol. 02010, pp. 1–10, 2024. [CrossRef]
  25. M. E. Fenech and O. Buston, “AI in Cardiac Imaging: A UK-Based Perspective on Addressing the Ethical, Social, and Political Challenges,” Front Cardiovasc Med, vol. 7, pp. 1–8, 2020. [CrossRef]
  26. S. Monteith, T. Glenn, J. R. Geddes, E. D. Achtyes, P. C. Whybrow, and M. Bauer, “Challenges and Ethical Considerations to Successfully Implement Artificial Intelligence in Clinical Medicine and Neuroscience: A Narrative Review,” Pharmacopsychiatry, vol. 56, no. 6, pp. 209–213, 2023. [CrossRef]
  27. E. B. Weiner, I. Dankwa-Mullan, W. A. Nelson, and S. Hassanpour, “Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice,” PLOS Digital Health, vol. 4, no. 4, pp. 1–12, 2025. [CrossRef]
  28. C. He et al., “Efficiency, accuracy, and health professional’s perspectives regarding artificial intelligence in radiology practice: A scoping review,” iRADIOLOGY, vol. 2, no. 2, pp. 156–172, 2024. [CrossRef]
  29. G. Currie and E. K. Hawk, “Ethical and Legal Challenges of Artificial Intelligence in Nuclear Medicine,” Semin Nucl Med, vol. 51, no. 2, pp. 120–125, 2021. [CrossRef]
  30. S. Lewin, R. Chetty, A. R. Ihdayhid, and G. Dwivedi, “Ethical Challenges and Opportunities in Applying Artificial Intelligence to Cardiovascular Medicine,” Canadian Journal of Cardiology, vol. 40, no. 10, pp. 1897–1906, 2024. [CrossRef]
  31. G. Karimian, E. Petelos, and S. M. A. A. Evers, “The ethical issues of the application of artificial intelligence in healthcare: a systematic scoping review,” AI and Ethics, vol. 2, no. 4, pp. 539–551, 2022. [CrossRef]
  32. U.S. Government Accountability Office, “Artificial Intelligence in Health Care: Benefits and Challenges of Technologies to Augment Patient Care,” Science, Technology Assessment, and Analytics Report to Congressional Requesters, with content from the National Academy of Medicine, 2020.
Table 1. Demographic statistics of study participants.
N (%)
Gender
Male 228 (57.0%)
Female 172 (43.0%)
Position
Physician 295 (73.8%)
Nurse 105 (26.3%)
Type of institution
Governmental hospitals 348 (87.0%)
Military clinic 22 (5.5%)
Private practice 30 (7.5%)
Location
Urban area 352 (88.0%)
Rural area 48 (12.0%)
Frequency of artificial intelligence use in clinical practice
Daily 86 (21.5%)
Weekly 50 (12.5%)
Occasionally 128 (32.0%)
Never 136 (34.0%)
Are you willing to use “artificial intelligence” tools in your clinical practice?
Yes 298 (74.5%)
No 102 (25.5%)
Are you aware of the potential benefits of using artificial intelligence?
Yes 322 (80.5%)
No 78 (19.5%)
Are you aware of the potential concerns of using artificial intelligence?
Yes 284 (71.0%)
No 116 (29.0%)
Do you know there is an area for use of AI in your specialty?
Yes 248 (62.0%)
No 152 (38.0%)
Table 2. Physicians’ / Nurses’ concerns about the integration of artificial intelligence in clinical practice.
Response scale: Strongly disagree / Disagree / Neutral / Agree / Strongly agree
1. Artificial intelligence might not understand complex medical conditions as accurately as physicians and nurses do.
   1.5% / 10.5% / 22.5% / 42.5% / 23.0%; Mean = 3.75; Rank 3
2. Artificial intelligence could reduce the roles that physicians/nurses traditionally play.
   6.5% / 21.5% / 22.5% / 40.5% / 9.0%; Mean = 3.24; Rank 9
3. Physicians/nurses might feel more stressed because of the additional demands of using technology/AI.
   4.0% / 25.0% / 24.5% / 34.0% / 12.5%; Mean = 3.26; Rank 8
4. Artificial intelligence could potentially weaken the relationship between patients and their treating team.
   5.0% / 23.0% / 17.0% / 38.5% / 16.5%; Mean = 3.39; Rank 7
5. Not all physicians/nurses have adequate skills to use artificial intelligence effectively.
   0.5% / 3.5% / 15.5% / 53.5% / 27.5%; Mean = 4.04; Rank 1
6. There is a concern that artificial intelligence-based systems could be manipulated from outside (third parties, hackers, etc.).
   1.5% / 5.5% / 26.5% / 41.5% / 25.0%; Mean = 3.83; Rank 2
7. Artificial intelligence will worsen problems in healthcare such as overutilization of laboratory testing, overdiagnosis, and overtreatment.
   4.0% / 24.5% / 29.5% / 30.0% / 12.0%; Mean = 3.22; Rank 10
8. The use of artificial intelligence may negatively impact physicians’ analytical thinking, critical thinking, and decision-making skills.
   3.0% / 20.0% / 23.5% / 32.0% / 21.5%; Mean = 3.49; Rank 5
9. Artificial intelligence lacks contextual knowledge and the ability to read social cues.
   1.5% / 11.5% / 27.5% / 39.0% / 20.5%; Mean = 3.66; Rank 4
10. Physicians/nurses lack the time to learn how to use complex artificial intelligence-based medical devices.
   2.5% / 17.0% / 28.5% / 41.5% / 10.5%; Mean = 3.41; Rank 6
Table 3. Ethical challenges of integrating artificial intelligence in clinical practice.
Response scale: Strongly disagree / Disagree / Neutral / Agree / Strongly agree
1. Security and Safety: Patient privacy and data security may be inadequately addressed in the integration of artificial intelligence systems in hospital practices.
   1.0% / 14.5% / 24.0% / 44.0% / 16.5%; Mean = 3.61; Rank 5
2. Health Equity: Bias in artificial intelligence tools may result in unfair healthcare delivery.
   4.0% / 17.5% / 36.5% / 31.0% / 11.0%; Mean = 3.28; Rank 7
3. Informed Consent: Ensuring appropriate informed consent becomes challenging when medical professionals are unable to effectively explain the functioning of artificial intelligence medical devices to patients.
   1.5% / 10.0% / 30.0% / 49.0% / 9.5%; Mean = 3.55; Rank 6
4. Accountability and Responsibility: There is concern about who is responsible if artificial intelligence makes medical errors without healthcare professionals’ input.
   0.5% / 6.5% / 29.5% / 35.5% / 28.0%; Mean = 3.84; Rank 2
5. Data Ownership and Control: It is unclear who owns the medical data used to train artificial intelligence systems and how those data can be ethically and legally shared or sold.
   1.0% / 6.5% / 34.5% / 35.0% / 23.0%; Mean = 3.73; Rank 4
6. Cross-border Issues: Artificial intelligence in healthcare often involves international collaborations and data sharing, which raise ethical challenges concerning regulatory differences and standards used for patient care.
   1.0% / 5.0% / 32.5% / 39.5% / 22.0%; Mean = 3.77; Rank 3
7. Cultural Sensitivity: Artificial intelligence algorithms developed in one culture may not be appropriate or effective when applied to diverse populations with different cultural norms.
   0.5% / 4.0% / 26.0% / 44.0% / 25.5%; Mean = 3.90; Rank 1
Table 4. Group Statistics for the difference by position.
Position    N    Mean    Std. Deviation    Std. Error Mean
Physicians’/nurses’ concerns on AI
  Physician    295    3.5156    .56990    .03318
  Nurse        105    3.5571    .57208    .05583
Table 5. Independent Samples t-Test for the difference by position.
Physicians’/nurses’ concerns on AI
  Levene’s test for equality of variances: F = .167, Sig. = .683
  Equal variances assumed: t = -.641, df = 398, one-sided p = .261, two-sided p = .522, mean difference = -.04155, std. error difference = .06483, 95% CI [-.16900, .08590]
  Equal variances not assumed: t = -.640, df = 182.396, one-sided p = .262, two-sided p = .523, mean difference = -.04155, std. error difference = .06495, 95% CI [-.16969, .08659]
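As a consistency check, the pooled-variance t statistic reported in Table 5 can be reproduced from the group summary statistics in Table 4. The short Python sketch below is our own illustration (the function name pooled_t is hypothetical, not from the study’s analysis code); small differences from the published t = -.641 come from rounding of the reported group means.

```python
import math

def pooled_t(n1, m1, s1, n2, m2, s2):
    """Independent-samples t statistic (equal variances assumed) from summary stats."""
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    # Standard error of the difference in means
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    t = (m1 - m2) / se
    return t, se, n1 + n2 - 2

# Group statistics from Table 4 (concerns on AI, by position)
t, se, df = pooled_t(295, 3.5156, 0.56990, 105, 3.5571, 0.57208)
print(f"t = {t:.3f}, SE = {se:.5f}, df = {df}")  # close to Table 5: t = -.641, SE = .06483, df = 398
```

The same calculation applied to the Table 6 statistics reproduces the Table 7 result for the ethical-challenges scale.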
Table 6. Group Statistics for the difference by position.
Position    N    Mean    Std. Deviation    Std. Error Mean
Ethical challenges of AI
  Physician    295    3.6581    .65499    .03814
  Nurse        105    3.6871    .65198    .06363
Table 7. Independent Samples t-Test for the difference by position.
Ethical challenges of AI
  Levene’s test for equality of variances: F = .184, Sig. = .668
  Equal variances assumed: t = -.390, df = 398, one-sided p = .349, two-sided p = .697, mean difference = -.02896, std. error difference = .07434, 95% CI [-.17512, .11719]
  Equal variances not assumed: t = -.390, df = 183.752, one-sided p = .348, two-sided p = .697, mean difference = -.02896, std. error difference = .07418, 95% CI [-.17532, .11739]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
