Discussion
AI holds considerable promise for the provision of healthcare services. However, to operate effectively and reliably in healthcare, AI must be comprehensively assessed and evaluated. This cross-sectional study addressed physicians’ and nurses’ perspectives on the ethical and practical considerations of integrating AI into clinical practice. Overall, the study revealed that both physicians and nurses lack the skills to use AI effectively, and it highlighted that AI algorithms developed for one population might not be suitable for another.
Regarding AI-related concerns from the perspective of healthcare professionals, our results revealed that physicians and nurses attributed the ineffective adoption of AI in clinical fields to a lack of essential skills. This implies a need to foster a culture of AI in healthcare by equipping practitioners with the knowledge and training required to leverage AI. The skills gap may exist because AI is still a relatively new technology in the medical field and is not yet used extensively in hospitals. This result aligns with the study of Naik et al. [13], which emphasized the importance of clinicians’ competence and proficiency in augmenting the benefits gained from AI. Thus, for the successful adoption of AI-supported clinical care, healthcare leadership should take a multidimensional approach that ensures AI governance by cultivating a culture of AI integration, focusing on strategic planning, providing mentoring and resources, and investing in training and development. In addition, to meet the potential demands of the healthcare sector, leaders in healthcare education should proactively equip the future healthcare workforce with the skills needed to fully harness AI’s potential. This is consistent with the study of Hamd et al. [
14], which recommends teaching medical students about the use of AI as part of the medical school curriculum.
Another key finding of the study is concern about potential manipulation of AI databases by third parties. The majority of participants expressed worry about the quality of outputs from AI-based systems, which could, in turn, lead to poor clinical outcomes. From a technical point of view, AI algorithms are trained on massive databases [15]; consequently, inadequate or corrupted data may be incorporated and affect the provision of healthcare services, resulting in substandard clinical outcomes. According to the study of Goktas and Grzybowski [16], flawed data can negatively influence clinical recommendations, leading to suboptimal diagnoses, therapeutic plans, and overall patient health outcomes. Likewise, biased or manipulated data were the concerns most commonly perceived by healthcare professionals [
15]. Moreover, the study of Obermeyer et al. [
17] supports our findings: it reported that AI algorithms exhibited racial bias when predicting health status among White and Black patients. AI algorithms may inadvertently encode biases that lead to inequitable diagnosis or treatment [18]. In dermatology, for example, AI may perform worse at identifying skin disorders in people with darker skin tones than in those with lighter skin tones [19].
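To make this concrete, the kind of subgroup disparity described above can be surfaced by comparing a model’s error rates across demographic groups. The following is a minimal illustrative sketch in Python; the data, the group labels, and the subgroup_false_negative_rates helper are hypothetical and are not drawn from the cited studies:

```python
import numpy as np

def subgroup_false_negative_rates(y_true, y_pred, groups):
    """Fraction of true positives the model misses, computed per demographic group."""
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)  # actual positives in group g
        if positives.sum() > 0:
            rates[str(g)] = float((y_pred[positives] == 0).mean())
    return rates

# Hypothetical data: a simulated model that misses positives more often in group B.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
groups = rng.choice(["A", "B"], size=1000)
y_pred = y_true.copy()
missed = (groups == "B") & (y_true == 1) & (rng.random(1000) < 0.3)
y_pred[missed] = 0

print(subgroup_false_negative_rates(y_true, y_pred, groups))
# A large gap between groups, e.g. {'A': 0.0, 'B': 0.3}, signals inequitable performance.
```

Audits of this kind are only a first step: they presuppose reliable subgroup labels and outcome data, which in practice must be balanced against the privacy concerns discussed below.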
Additionally, the study showed that physicians and nurses do not believe AI can accurately understand patients’ medical conditions; accordingly, they consider the diagnosis and management of medical conditions by physicians more reliable than such technology. In contrast, the study of Elendu et al. [18] highlighted that the introduction of AI in healthcare has reshaped practitioners’ roles. Beyond this, AI may facilitate and streamline the analysis of huge volumes of data, identifying patterns that would otherwise challenge physicians and other healthcare providers [20]. At the same time, an AI system that fails to appropriately understand a patient’s medical status might expose that patient to diagnostic errors and potential harm [
21]. In addition, the study of Gundlack et al. [22] reported that AI cannot simulate essential human characteristics, such as patient-provider rapport in general and empathy and emotional intelligence in particular, which can dramatically affect health outcomes. Despite the rapidly increasing introduction of AI into clinical practice, the study of Pressman et al. [23] demonstrated that AI cannot substitute for physicians’ knowledge and judgment. This supports the view that AI-based tools and systems should be leveraged to assist with and improve the provision of healthcare services, not to replace clinicians [
24].
Regarding the challenges of AI, the current study reported three major issues as perceived by participants. First, it showed that algorithms developed within one culture may not be appropriate for another. This is similar to the findings of Tilala et al. [20], who concluded that an AI customized for one group might not be suitable for another, particularly if the two populations have different cultures and norms. Because of such differences, AI might produce inaccurate outcomes. For example, AI algorithms widely disseminated and used in the US to determine the clinical care required by African American patients give different information for Caucasian patients, even when they have the same score [
25]. This is supported by the findings of a study by Monteith et al. [
26], which reported that AI models do not perform well when deployed in settings where the population’s characteristics differ from those of the population used for training.
The second key challenge is protecting patient privacy and ensuring data security in healthcare. According to the participants in this study, the privacy and security issues associated with the integration of AI-based systems remain inadequately addressed. This matter has been raised by many studies, such as Weiner et al. [
27], He et al. [
28], and Currie and Hawk [
29]. The availability of patient data is clearly crucial for training and testing AI models; a lack of such data limits training and ultimately hinders the potential benefits of AI tools [6]. Because they tend to gather and analyze enormous volumes of patient data, AI systems are desirable targets for hackers, with possible consequences including identity theft, monetary loss, reputational harm, and loss of trust [
8]. This emphasizes the need for strong data protection protocols.
The third challenge reported by participants is the cross-border issue: information exchange and international collaboration may necessitate reconciling different regulations and standards of care. The study of Lewin et al. [30] raised concerns about sharing patient information, stressing that this cutting-edge technology should not compromise or breach individuals’ privacy. Previous studies, however, have emphasized that sharing patient information is fundamental to feeding AI models with real data, and that such models have the potential to assist healthcare providers in advancing precision medicine and optimizing care plans [5]. Despite the urgent need for regulatory rules governing the disclosure of patient data at the national and international levels, the systematic review conducted by Karimian et al. [31] reported the lack of a comprehensive ethical framework for AI in healthcare. Nevertheless, sharing patient medical information mitigates potential biases and gives developers open access to these data, enhancing the testing and training of AI models [
32].
Our study had several strengths, including a sample drawn from different healthcare settings and locations in Saudi Arabia, which increases the generalizability of the findings. Another strength was the inferential statistical analysis applied: the study used independent samples t-tests to examine differences between physicians’ and nurses’ perspectives on AI concerns and the ethical challenges of its integration into clinical practice.
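For illustration, an independent samples t-test of this kind can be run in a few lines. The sketch below uses simulated Likert-style scores; the sample sizes, score distributions, and use of the Welch (unequal-variance) variant are assumptions for demonstration, not the study’s actual data or analysis:

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 Likert scores for one AI-concern item, for illustration only.
rng = np.random.default_rng(42)
physicians = rng.normal(loc=3.8, scale=0.9, size=120).clip(1, 5)
nurses = rng.normal(loc=4.1, scale=0.8, size=150).clip(1, 5)

# Independent samples t-test comparing the two professions' mean scores.
t_stat, p_value = stats.ttest_ind(physicians, nurses, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```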
The study has three limitations. First, it targeted only physicians and nurses; involving dentists and allied health professionals might have enriched the findings with further insights. Second, the number of responses received was relatively small compared to the number of physicians and nurses in the Kingdom of Saudi Arabia. Third, the study relied on quantitative data only, using predetermined concerns and challenges; individual semi-structured interviews might have uncovered more nuanced factors related to the effective integration of AI in clinical practice.