Introduction
Digital health transformation is central to Oman’s Vision 2040. AI offers opportunities across clinical and administrative domains, including precision diagnostics, triage and risk stratification, workflow automation, and resource optimization. Successful adoption, however, depends on end-user acceptance, workforce capabilities, supportive infrastructure, and robust governance. Evidence specific to Oman’s public sector, particularly the Al Buraimi Governorate, remains sparse (Al-Busaidy & Weerakkody, 2009). Artificial intelligence (AI) is increasingly embedded across healthcare, augmenting both bedside and boardroom decisions. Recent syntheses emphasise how machine learning and natural language processing extract actionable signals from complex, high-volume data to support risk stratification, early detection, and pathway design. Islam et al. (2025) highlight AI-enabled risk assessment in reproductive and mental health, reporting earlier identification and more precise patient segmentation. Extending into service operations, Auf et al. (2025) show how real-time analytics can forecast demand, triage cases, and prioritise interventions, underscoring AI’s dual value proposition: improving patient-level choices while strengthening system-level management.
Effectiveness, however, rests on socio-technical fit as much as on model accuracy. Abdullh et al. (2022) describe gains in diagnostic precision, error reduction, and administrative throughput, yet such benefits materialise only when tools align with clinician workflows and information systems. Henzler et al. (2025) identify facilitators (training, organisational readiness, trustworthy design) and barriers (weak data infrastructure, poor integration, clinician skepticism) that determine whether algorithms reduce friction or inadvertently add it. Practitioners' views remain critical, especially in developing contexts. Williams and Yampolskiy (2021) find that limited understanding of AI's principles and boundaries contributes to concerns about safety, trust, and professional redundancy. Sriharan et al. (2025) similarly report clear role-expectation gaps arising from underspecified expectations and insufficient, ambiguous training offerings. Training that builds confidence for safe practice and frames AI as supporting rather than replacing professionals, without unsettling professional boundaries, is therefore paramount (Al Kuwaiti et al., 2023).
On the other hand, Kooli and Al Muftah (2022) focus on ethical governance (privacy, fairness, explainability, and accountability) as significant reasons why implementation, especially in healthcare, is slow. There are also cultural concerns, such as the possibility that automation would undermine therapeutic relationships. Kumar et al. (2023) observe that in developing contexts a lack of trust, complex regulation, and organisational inertia often weigh more heavily than weak legacy informatics, and that robust governance of algorithms requires strong data protection together with disciplined training, monitoring, validation, and deployment of systems. Recent regulation accordingly emphasises that AI should be both actionable and reliable. Olawade et al. (2023) extend this to public health strategy, arguing that systems should be shaped around public purposes and aligned with the public good through a thoughtful, consolidated interdisciplinary framing.
They emphasise transparency, stakeholder participation, and iterative review. Sáez et al. (2024) complement this with resilience-oriented practices: ethics-by-design, local validation, human-in-the-loop oversight, and operational readiness for model drift. Practically, this means staged pathways: co-design and sandboxing, pilots in real-world settings, and scaled rollout with continuous surveillance and evaluation that extends beyond AUC to capture clinical outcomes, workflow impact, cost-effectiveness, and equity. Perceived benefits help shape acceptance (Al Salmi et al., 2024). Elhaddad and Hamam (2024) document improvements through predictive diagnostics, automated triage, and data-driven recommendations, particularly where rapid decisions are vital. Administrative domains see returns in capacity planning, bed management, and inventory optimisation, which can smooth access and improve patient experience. Yet reservations persist. Fawzi (2023) notes divergent views: many clinicians welcome relief from cognitive load and standardisation of best practice, while others worry about opacity, over-reliance, or poor fit when models are trained elsewhere. Acceptance strengthens when systems are locally calibrated, accountability is explicit, and AI is framed as augmenting clinical expertise. Sustainable integration ultimately hinges on workforce capability. Panteli et al. (2025) observe that many clinicians lack the knowledge and confidence to interpret outputs, recognise bias, and integrate recommendations under time pressure. Gendron (2022) reports parallel needs among administrators responsible for procurement, budgeting, and oversight. Effective capacity-building is tiered and role-specific (basic literacy for all staff, applied competencies for clinicians, and strategic literacy for managers) and includes post-deployment vigilance: performance monitoring, incident reporting, and escalation protocols.
Two theoretical lenses help organise these imperatives. The Technology Acceptance Model (TAM) posits that Perceived Usefulness and Perceived Ease of Use shape intention and uptake (Davis, 1989; Musa et al., 2024). Design choices that surface uncertainty, provide fit-for-purpose explanations, and minimise cognitive load can raise both constructs. Diffusion of Innovations (DOI) theory explains spread via relative advantage, compatibility, complexity, trialability, and observability (Rogers, 1962; de Zayas & Matusitz, 2021). For AI, demonstrable local gains, alignment with clinical norms and infrastructures, user-centred interfaces, sandboxing, and transparent reporting together foster adoption. This study addresses that gap by examining (i) perceived benefits of AI in clinical decision-making and service delivery; (ii) preparedness and training; (iii) perceived barriers; and (iv) recommendations for effective integration in government healthcare institutions.
Materials and methods
Study design and setting. We conducted a quantitative, cross-sectional survey using a deductive approach aligned with the Technology Acceptance Model (TAM) and Diffusion of Innovations (DOI) theory. The study took place in government healthcare institutions in Al Buraimi Governorate, Oman.
Participants and sampling. Eligible participants were clinicians (physicians, nurses, and allied health professionals) and administrative staff involved in, or affected by, clinical or operational decision-making. Recruitment occurred via official email from July through the first week of September. Stratified random sampling ensured proportional representation by role; we calculated a target sample of 300 from a sampling frame of 1,284 eligible personnel.
Instrument. Data were collected using a structured, closed-ended questionnaire comprising four domains: (1) Perceptions of AI Benefits, (2) Preparedness and Training, (3) Challenges and Barriers, and (4) Recommendations. Items were rated on 5-point Likert scales, adapted from recent AI-in-healthcare literature and contextualised to the Omani public sector.
Data collection and management. Responses were captured via Google Forms. No identifiable personal health information was collected. Data were stored securely and analysed in aggregate.
Statistical analysis. Internal consistency was assessed using Cronbach's alpha. Descriptive statistics summarised demographics and domain scores. Pearson's correlation measured associations between preparedness/training and willingness to apply AI. Chi-square tests examined relationships between demographic variables and domain levels. Multiple linear regression modelled the effects of perceptions, preparedness, and challenges on AI implementation success.
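The proportional stratified allocation described above can be sketched as follows. Only the frame total of 1,284 and the target of 300 come from the study; the per-role frame counts below are hypothetical illustrations:

```python
from math import floor

def proportional_allocation(strata: dict, n: int) -> dict:
    """Largest-remainder proportional allocation of n sample units across strata."""
    total = sum(strata.values())
    quotas = {k: n * v / total for k, v in strata.items()}
    alloc = {k: floor(q) for k, q in quotas.items()}
    # hand out the remaining units to the strata with the largest fractional remainders
    leftover = n - sum(alloc.values())
    for k in sorted(quotas, key=lambda k: quotas[k] - alloc[k], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

# hypothetical role counts summing to the reported frame of 1,284 eligible personnel
frame = {"nursing/midwifery": 510, "doctor": 330, "allied health": 308, "admin": 136}
sample = proportional_allocation(frame, 300)
```

The largest-remainder step guarantees that the stratum sizes sum exactly to the target even after rounding, which plain per-stratum rounding does not.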
Ethics and permissions. Institutional permissions were obtained from the Directorate General of Health Services (DGHS) in Al Buraimi Governorate and its Studies and Research Centre. Participation was anonymous and voluntary, and the study complied with local regulations for minimal-risk research.
Results
Reliability
Internal consistency was high for all domains, yielding an overall Cronbach’s alpha of 0.887.
Table 1.
Reliability test.
| Variable | Number of Items | Cronbach's Alpha |
| --- | --- | --- |
| Perceptions of AI Benefits in Healthcare | 7 | 0.921 |
| Preparedness and Training of Healthcare Professionals | 7 | 0.892 |
| Challenges and Barriers to AI Adoption | 7 | 0.889 |
| Recommendations for AI Integration | 4 | 0.926 |
| All Questions | 25 | 0.887 |
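Alpha values like those in Table 1 follow the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch on a toy response matrix (illustrative data, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_variances / total_score_variance)

# toy 4-respondent, 3-item example (not study data)
toy = np.array([[4, 5, 4],
                [3, 4, 3],
                [5, 5, 5],
                [2, 3, 2]])
alpha = cronbach_alpha(toy)
```

Perfectly parallel items yield α = 1; values above roughly 0.7 are conventionally treated as acceptable internal consistency.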
Participant Characteristics
Of 300 respondents, most were aged 26–45 years (76%) and predominantly female (61%). Nursing/midwifery constituted the largest group (39.7%), followed by physicians (25.7%) and allied health professionals (24%), with administrative staff at 10.7%. Experience was concentrated in 6–20 years (72.7%).
Table 2.
Demographic information.
| Characteristics | Categories | Frequency | Percentage (%) |
| --- | --- | --- | --- |
| Age | 18–25 years | 17 | 5.7 |
| | 26–35 years | 120 | 40.0 |
| | 36–45 years | 108 | 36.0 |
| | 46–55 years | 45 | 15.0 |
| | 56 years and above | 10 | 3.3 |
| Gender | Male | 117 | 39.0 |
| | Female | 183 | 61.0 |
| Current role | Administrative staff | 32 | 10.7 |
| | Nursing/midwifery | 119 | 39.7 |
| | Doctor | 77 | 25.7 |
| | Allied Health Professional | 72 | 24.0 |
| Years of experience | 1–5 years | 33 | 11.0 |
| | 6–10 years | 98 | 32.7 |
| | 11–20 years | 120 | 40.0 |
| | More than 20 years | 49 | 16.3 |
Perceptions of AI Benefits
Perceptions were strongly positive (overall mean 3.95 ± 0.89). The majority rated AI’s potential as high for improving patient care, enhancing decision-making accuracy, enabling early detection, reducing errors, improving population-level outcomes, and reducing workload. Personalized treatment garnered more ambivalence.
Table 3.
Perceptions of AI Benefits in Healthcare.
| Item | Low (1–2) n (%) | Moderate (3) n (%) | High (4–5) n (%) | Mean ± SD |
| --- | --- | --- | --- | --- |
| 1. AI has the potential to significantly improve the quality of patient care. | 11 (3.7%) | 31 (10.3%) | 258 (86.0%) | 4.10 ± 0.762 |
| 2. AI technologies can enhance the accuracy of clinical decision-making. | 15 (5.0%) | 38 (12.7%) | 247 (82.3%) | 4.02 ± 0.824 |
| 3. The use of AI can contribute to early disease detection and diagnosis. | 19 (6.3%) | 34 (11.3%) | 247 (82.3%) | 4.02 ± 0.852 |
| 4. AI applications can reduce medical errors in healthcare settings. | 17 (5.7%) | 38 (12.7%) | 245 (81.7%) | 4.03 ± 0.853 |
| 5. AI can improve healthcare outcomes at the population level. | 14 (4.7%) | 34 (11.3%) | 252 (84.0%) | 4.06 ± 0.838 |
| 6. AI can help reduce the workload of healthcare professionals. | 15 (5.0%) | 33 (11.0%) | 252 (84.0%) | 4.06 ± 0.826 |
| 7. AI can provide more personalized treatment recommendations for patients. | 86 (28.7%) | 40 (13.3%) | 174 (58.0%) | 3.38 ± 1.29 |
| Overall Mean ± SD | – | – | – | 3.95 ± 0.89 |
Preparedness and Training
Preparedness was modest (overall mean 2.94 ± 1.01). A total of 77.3% reported inadequate training, 75.7% cited limited institutional training opportunities, and only 38% reported high confidence using AI tools; 16.3% felt prepared to integrate AI routinely. Interest in future workshops was high (87.3%).
Table 4.
Preparedness and Training of Healthcare Professionals.
| Item | Low (1–2) n (%) | Moderate (3) n (%) | High (4–5) n (%) | Mean ± SD |
| --- | --- | --- | --- | --- |
| 1. I am confident in my ability to use AI tools in clinical practice. | 103 (34.3%) | 83 (27.7%) | 114 (38.0%) | 3.06 ± 1.07 |
| 2. I have received adequate training on how to use AI in my healthcare role. | 232 (77.3%) | 31 (10.3%) | 37 (12.3%) | 2.31 ± 0.91 |
| 3. My institution provides opportunities for training on AI-related technologies. | 227 (75.7%) | 34 (11.3%) | 39 (13.0%) | 2.30 ± 0.94 |
| 4. I am aware of how AI systems are applied in my area of practice. | 123 (41.0%) | 51 (17.0%) | 126 (42.0%) | 3.00 ± 1.10 |
| 5. I feel prepared to integrate AI into my routine clinical decisions. | 170 (56.7%) | 81 (27.0%) | 49 (16.3%) | 2.58 ± 0.94 |
| 6. Continuing professional development on AI is essential for healthcare workers. | 119 (39.7%) | 45 (15.0%) | 136 (45.3%) | 3.10 ± 1.19 |
| 7. I would be interested in attending future AI training workshops. | 26 (8.7%) | 12 (4.0%) | 262 (87.3%) | 4.20 ± 0.93 |
| Overall Mean ± SD | – | – | – | 2.94 ± 1.01 |
Challenges and Barriers
Prominent barriers included infrastructure limitations, high costs, lack of technical support, integration challenges, and data privacy/security concerns. Cultural factors such as staff resistance and job displacement concerns were also salient.
Table 5.
Challenges and Barriers to AI Adoption.
| Item | Low (1–2) n (%) | Moderate (3) n (%) | High (4–5) n (%) | Mean ± SD |
| --- | --- | --- | --- | --- |
| 1. High costs of AI implementation are a barrier to its use in my institution. | 6 (2.0%) | 40 (13.3%) | 254 (84.7%) | 4.29 ± 0.77 |
| 2. Lack of infrastructure limits the adoption of AI in my healthcare setting. | 6 (2.0%) | 36 (12.0%) | 258 (86.0%) | 4.27 ± 0.76 |
| 3. Resistance from healthcare staff hinders the adoption of AI technologies. | 32 (10.7%) | 68 (22.7%) | 200 (66.7%) | 3.76 ± 0.92 |
| 4. Concerns about job displacement affect acceptance of AI in healthcare. | 24 (8.0%) | 57 (19.0%) | 219 (73.0%) | 3.90 ± 0.87 |
| 5. Integration of AI with current hospital systems is difficult. | 17 (5.7%) | 45 (15.0%) | 238 (79.3%) | 4.08 ± 0.88 |
| 6. There is a lack of technical support for AI tools in my workplace. | 9 (3.0%) | 34 (11.3%) | 257 (85.7%) | 4.20 ± 0.78 |
| 7. Data privacy and security concerns limit AI use in clinical settings. | 12 (4.0%) | 55 (18.3%) | 233 (77.7%) | 3.96 ± 0.76 |
| Overall Mean ± SD | – | – | – | 4.07 ± 0.82 |
Correlations and Multivariable Modeling
Preparedness/training was significantly correlated with willingness to apply AI (r = 0.294, p = 0.001). In multiple linear regression, all predictors were significant: Perceptions (β = 0.173, p = 0.002), Preparedness (β = 0.145, p = 0.007), and Challenges (β = 0.322, p < 0.001). The model explained 21.8% of variance in implementation success (R = 0.467, R²=0.218, adjusted R²=0.210; F = 27.540, p < 0.001).
Table 6.
Pearson correlation between preparedness/training and willingness to apply AI.
| Variables | Healthcare Professional Preparation and Training |
| --- | --- |
| Willingness to apply AI technologies: Pearson Correlation | 0.294 |
| p-value | 0.001 |
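The multiple linear regression reported above can be sketched with ordinary least squares. The data below are synthetic, generated only to show the model form; the coefficients here will not reproduce the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300  # same size as the study sample

# synthetic 1-5 domain scores standing in for the survey data
perceptions = rng.uniform(1, 5, n)
preparedness = rng.uniform(1, 5, n)
challenges = rng.uniform(1, 5, n)
# outcome built with arbitrary illustrative weights plus noise
outcome = (0.5 + 0.17 * perceptions + 0.15 * preparedness
           + 0.32 * challenges + rng.normal(0, 0.8, n))

# OLS fit: implementation success ~ perceptions + preparedness + challenges
X = np.column_stack([np.ones(n), perceptions, preparedness, challenges])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residuals = outcome - X @ beta
r_squared = 1 - residuals.var() / outcome.var()
```

`beta` holds the intercept followed by the three slope estimates; `r_squared` corresponds to the R² the paper reports as the share of variance explained.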
Demographic Patterns
Younger professionals (18–35 years) tended to report higher perceptions and preparedness; senior groups (≥56 years) indicated more reservations. Nurses/midwives reported the highest perceived benefits, while physicians reported comparatively lower preparedness. Females showed slightly higher preparedness than males. Tailored strategies are warranted across age groups and roles.
Table 8.
Domain levels by demographic characteristics (column percentages within each category; p-values from chi-square tests).
| Characteristics | Categories | Perceptions (Low / Moderate / High) | Preparedness (Low / Moderate / High) | Challenges & Barriers (Low / Moderate / High) | Recommendations (Low / Moderate / High) |
| --- | --- | --- | --- | --- | --- |
| Age | 18–25 years | 0.0% / 17.6% / 82.4% | 11.8% / 35.3% / 52.9% | 0.0% / 0.0% / 100.0% | 0.0% / 0.0% / 100.0% |
| | 26–35 years | 3.3% / 5.0% / 91.7% | 31.7% / 45.0% / 23.3% | 3.3% / 5.8% / 90.8% | 0.0% / 5.0% / 95.0% |
| | 36–45 years | 2.8% / 8.3% / 88.9% | 36.1% / 34.3% / 29.6% | 3.7% / 20.4% / 75.9% | 0.0% / 4.6% / 95.4% |
| | 46–55 years | 11.1% / 22.2% / 66.7% | 48.9% / 31.1% / 20.0% | 0.0% / 20.0% / 80.0% | 0.0% / 17.8% / 82.2% |
| | 56+ years | 20.0% / 50.0% / 30.0% | 50.0% / 20.0% / 30.0% | 0.0% / 20.0% / 80.0% | 0.0% / 10.0% / 90.0% |
| | p-value | 0.001 | 0.055 | 0.004 | 0.037 |
| Gender | Male | 6.8% / 14.5% / 78.6% | 47.9% / 31.6% / 20.5% | 2.6% / 13.7% / 83.8% | 0.0% / 9.4% / 90.6% |
| | Female | 3.3% / 8.7% / 88.0% | 27.3% / 41.5% / 31.1% | 2.7% / 13.1% / 84.2% | 0.0% / 4.9% / 95.1% |
| | p-value | 0.087 | 0.001 | 0.98 | 0.156 |
| Current Role | Admin staff | 3.1% / 25.0% / 71.9% | 25.0% / 31.3% / 43.8% | 3.1% / 15.6% / 81.3% | 0.0% / 6.3% / 93.8% |
| | Nursing/Midwifery | 0.0% / 6.7% / 93.3% | 24.4% / 42.9% / 32.8% | 1.7% / 10.9% / 87.4% | 0.0% / 2.5% / 97.5% |
| | Doctor | 7.8% / 11.7% / 80.5% | 45.5% / 27.3% / 27.3% | 6.5% / 23.4% / 70.1% | 0.0% / 15.6% / 84.4% |
| | Allied Health | 9.7% / 11.1% / 79.2% | 47.2% / 43.1% / 9.7% | 0.0% / 5.6% / 94.4% | 0.0% / 4.2% / 95.8% |
| | p-value | 0.001 | 0.001 | 0.003 | 0.005 |
| Years of Exp. | 1–5 years | 0.0% / 15.2% / 84.8% | 9.1% / 39.4% / 51.5% | 3.0% / 3.0% / 93.9% | 0.0% / 0.0% / 100.0% |
| | 6–10 years | 3.1% / 5.1% / 91.8% | 32.7% / 50.0% / 17.3% | 2.0% / 6.1% / 91.8% | 0.0% / 5.1% / 94.9% |
| | 11–20 years | 2.5% / 11.7% / 85.8% | 40.8% / 34.2% / 25.0% | 4.2% / 17.5% / 78.3% | 0.0% / 8.3% / 91.7% |
| | >20 years | 16.3% / 18.4% / 65.3% | 44.9% / 20.4% / 34.7% | 0.0% / 24.5% / 75.5% | 0.0% / 10.2% / 89.8% |
| | p-value | 0.001 | 0.001 | 0.005 | 0.221 |
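The p-values in the table above come from chi-square tests of independence between each demographic variable and domain level. A minimal sketch of the test statistic; the contingency table below is illustrative, not the study's counts:

```python
import numpy as np

def chi_square_statistic(observed):
    """Chi-square statistic and degrees of freedom for a contingency table."""
    observed = np.asarray(observed, dtype=float)
    row_totals = observed.sum(axis=1, keepdims=True)
    col_totals = observed.sum(axis=0, keepdims=True)
    # expected counts under independence of rows and columns
    expected = row_totals @ col_totals / observed.sum()
    stat = ((observed - expected) ** 2 / expected).sum()
    dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
    return stat, dof

# illustrative 2x3 table: two roles (rows) by perception level Low/Moderate/High (columns)
stat, dof = chi_square_statistic([[5, 20, 75], [15, 25, 60]])
```

The statistic is then compared against the chi-square distribution with `dof` degrees of freedom to obtain the p-value.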
Discussion
The first research question was to explore healthcare professionals’ perceptions of the benefits of artificial intelligence (AI) in enhancing clinical decision-making and healthcare service delivery. The findings strongly supported this hypothesis.
Table 3 showed an overall mean score of 3.95 (SD = 0.89) for perceptions of AI benefits, with 86% of respondents rating AI’s potential to improve patient care as high (M = 4.10 ± 0.762). High agreement was also observed for AI’s role in enhancing clinical decision-making accuracy (82.3%, M = 4.02 ± 0.824), early disease detection (82.3%, M = 4.02 ± 0.852), reducing medical errors (81.7%, M = 4.03 ± 0.853), improving population-level outcomes (84%, M = 4.06 ± 0.838), and reducing workloads (84%, M = 4.06 ± 0.826). However, perceptions were more moderate for personalized treatment recommendations (58% high, M = 3.38 ± 1.29), indicating some skepticism about AI’s ability to handle individualized care. These results aligned with global literature emphasizing AI’s transformative potential in healthcare. For instance, Islam et al. (2025) highlighted AI’s efficacy in risk assessment for reproductive and mental health, improving decision accuracy and patient-centred care, which mirrored the high perceptions of early detection and error reduction reported here. Similarly, Abdullh et al. (2022) discussed AI’s role in enhancing diagnostic accuracy and operational efficiency, supporting respondents’ views on workload reduction and outcome improvement. In the Omani context, Elhaddad and Hamam (2024) noted AI’s benefits in predictive diagnostics and automated triage but also cautioned about variability in perceptions of personalisation, consistent with the lower scores in this area. Demographic analysis (Table 8) further showed that younger professionals (26–35 years, 91.7% high perceptions) and nurses/midwives (93.3% high) were more optimistic, possibly due to generational familiarity with technology, as echoed by Miller (2022), who reported higher AI acceptance among digitally native healthcare workers in developing nations.
The second research question was to assess the preparedness and training levels of healthcare professionals and administrative staff for the integration of AI technologies in government healthcare institutions. The data supported this hypothesis, with an overall mean preparedness score of 2.94 (SD = 1.01), indicating low to moderate readiness (Table 4). A striking 77.3% reported inadequate training (M = 2.31 ± 0.91), and 75.7% noted a lack of institutional training opportunities (M = 2.30 ± 0.94). Confidence in using AI tools was split (38% high, M = 3.06 ± 1.07), and preparedness for routine integration was low (16.3% high, M = 2.58 ± 0.94). However, 87.3% expressed high interest in future workshops (M = 4.20 ± 0.93), and 45.3% agreed that ongoing AI development was essential (M = 3.10 ± 1.19). Pearson correlation (Table 6) confirmed a significant positive association between preparedness/training and willingness to adopt AI (r = 0.294, p = 0.001).
This underscored a critical gap in workforce readiness, supported by literature from developing contexts. Sriharan et al. (2025) argued that mismatched training and unclear expectations hindered AI adoption, aligning with the low training scores reported here. Panteli et al. (2025) emphasised the need for comprehensive capacity-building in public health, noting that limited knowledge led to underconfidence, as seen in the 56.7% who felt unprepared for integration. Demographic variations (Table 8) revealed that females (31.1% high preparedness) and less experienced staff (1–5 years, 51.5% high) were more prepared, possibly reflecting gender differences in adaptability and openness among newer professionals (Gendron, 2022). These findings highlighted that while motivation existed, institutional support was lacking, risking suboptimal AI implementation.
The third objective was to identify key challenges and barriers, such as infrastructure limitations, cost, resistance to change, and system integration, that might have affected AI adoption. The hypothesis was confirmed by the high overall mean of 4.07 (SD = 0.82) for challenges (Table 5). Dominant barriers included infrastructure limitations (86% high, M = 4.27 ± 0.76), high costs (84.7% high, M = 4.29 ± 0.77), lack of technical support (85.7% high, M = 4.20 ± 0.78), system integration difficulties (79.3% high, M = 4.08 ± 0.88), and data privacy concerns (77.7% high, M = 3.96 ± 0.76). Cultural barriers such as staff resistance (66.7% high, M = 3.76 ± 0.92) and job-displacement fears (73% high, M = 3.90 ± 0.87) were also prominent. The literature corroborated these systemic and human-centred barriers. Henzler et al. (2025) identified infrastructure, training, and trust as key impediments in their systematic review, paralleling the high scores for costs and support reported here. In ethical terms, Kooli and Al Muftah (2022) discussed privacy and accountability issues in government healthcare, aligning with the privacy concerns noted. Kumar et al. (2023) highlighted regulatory and trust gaps in developing countries, similar to the resistance observed. Regionally, Al Salmi et al. (2024) pointed to Oman’s challenges with digital readiness and cultural reluctance, explaining the demographic patterns (Table 8) in which older staff (56+ years, 80% high challenges) and doctors (70.1% high) perceived more barriers, possibly due to entrenched practices (Fawzi, 2023). These findings indicated that, without addressing these multifaceted barriers, AI adoption would likely have remained stalled.
The fourth objective was to develop practical recommendations for integrating AI into healthcare decision-making frameworks within Al Buraimi Governorate to support institutional performance improvement. The data supported this hypothesis, with an overall mean of 4.27 (SD = 0.61) for recommendations (Table 6). Respondents strongly endorsed involving professionals in AI development (92% high, M = 4.27 ± 0.61), policymaker–hospital collaboration (89% high, M = 4.21 ± 0.64), clear guidelines (93.7% high, M = 4.31 ± 0.58), and AI’s potential to enhance performance (91.7% high, M = 4.30 ± 0.61). The correlation between preparedness and willingness to adopt (r = 0.294, p = 0.001) further implied that targeted integration could improve outcomes. These recommendations were grounded in strategic literature. Olawade et al. (2023) advocated context-specific AI strategies with stakeholder involvement and transparency, matching the call for professional engagement. Sáez et al. (2024) emphasised resilient AI frameworks with ethical oversight and real-world testing, supporting the need for clear guidelines. In hospital management, Alves et al. (2024) and Kamel Rahimi et al. (2024) stressed collaboration and policy alignment for learning health systems, aligning with the high perceptions of performance improvement. Demographic insights (Table 8) showed near-universal agreement on recommendations across groups (e.g., 95% high among females), suggesting broad buy-in if implemented inclusively. Overall, these findings proposed a roadmap for AI integration that could bridge the gaps identified in earlier objectives and foster institutional advancement in line with Oman’s Vision 2040.
Conclusions
This study found that the planned integration of artificial intelligence (AI) into the government healthcare system of Al Buraimi Governorate was characterized by skepticism, lack of preparedness, and systemic barriers. Notably, the findings suggest that practitioners perceive AI as underutilized, with limited understanding of its advantages and inadequate organizational readiness and training policies. Nonetheless, the study highlighted clear opportunities for improvement: skepticism can be transformed into acceptance through awareness-building education, targeted training that strengthens professional confidence, and alignment of AI with institutional goals, while resistance can be reduced through organizational investment in infrastructure, executive sponsorship, and intentional change management. In closing, although gaps and institutional constraints are apparent, the findings suggest that the healthcare sector could strategically employ AI as one of its most impactful and transformative tools to improve service delivery and enhance efficiency.
Funding
This research received no external funding.
Informed Consent Statement
Respondents were informed that participation in the questionnaire was voluntary and that they had the right to decline. No names or other information revealing participants' personal details were collected.
Data Availability Statement
The data that support the findings of this study are available from the author upon reasonable request.
Acknowledgments
We thank the Directorate General of Health Services in Al Buraimi Governorate for approving the conduct of the survey; the Studies and Research Centre for their valued support and feedback; our colleagues who supported the distribution of the survey; and all the participants for taking the time to contribute their opinions.
Conflicts of Interest
The authors declare no competing interests.
References
- Abdullh, A. A. O., Yehia, A., Abdullah, A., Aldawish, S. N., Al_Beshri, S. O., Abdullah, S. A., Alsahli, A. M., Taher, M., Almajhad, M. A., Alhamad, A. H., & Ahmed. (2022). The use of artificial intelligence in healthcare decision-making. Journal of Pharmaceutical Technology and Clinical Practice, 29(4), Article 5367. [CrossRef]
- Al-Busaidy, M., & Weerakkody, V. (2009). E-government diffusion in Oman: a public sector employees' perspective. Transforming Government: People, Process and Policy, 3(4), 375-393. [CrossRef]
- Al Kuwaiti, A., Nazer, K., Al-Reedy, A., Al-Shehri, S., Al-Muhanna, A., Subbarayalu, A. V., Al Muhanna, D., & Al-Muhanna, F. A. (2023). A review of the role of artificial intelligence in healthcare. Journal of Personalized Medicine, 13(6), 951. [CrossRef]
- Al Salmi, H. M., Divyajyothi, M. G., Jopate, R., Al Ghafri, R. M., & Al Abri, M. S. (2024). Exploring the integration of artificial intelligence in education and smart transport technology in Oman; perceptions, challenges and ethical considerations. In 2024 2nd International Conference on Computing and Data Analytics (ICCDA) (pp. 299–303). IEEE. [CrossRef]
- Alves, M., Seringa, J., Silvestre, T., & Magalhães, T. (2024). Use of artificial intelligence tools in supporting decision-making in hospital management. BMC Health Services Research, 24(1), 1160. [CrossRef]
- Auf, H., Svedberg, P., Nygren, J., Nair, M., & Lundgren, L. E. (2025). The use of AI in mental health services to support decision-making: Scoping review. Journal of Medical Internet Research, 27, e63548. [CrossRef]
- Elhaddad, M., & Hamam, S. (2024). AI-driven clinical decision support systems: An ongoing pursuit of potential. Cureus, 16(4), e57728. [CrossRef]
- Fawzi, F. (2023). What is AI optimism—and why is it important? Forbes. https://www.forbes.com/councils/forbestechcouncil/2023/07/05/what-is-ai-optimism-and-why-is-it-important/.
- Gendron, B. (2022). How the lack of training affects your organisation. The Training Associates. https://thetrainingassociates.com/lack-training-affects-organisation.
- Henzler, D., Schmidt, S., Koçar, A., Herdegen, S., Lindinger, G. L., Maris, M. T., Bak, M. A. R., Willems, D. L., Tan, H. L., Lauerer, M., Nagel, E., Hindricks, G., Dagres, N., & Konopka, M. J. (2025). Healthcare professionals’ perspectives on artificial intelligence in patient care: A systematic review. BMC Health Services Research, 25(1), 1094. [CrossRef]
- Islam, S., Shahriyar, R., Agarwala, A., Zaman, M., Ahamed, S., Rahman, R., Chowdhury, M. H., Sarker, F., & Mamun, K. (2025). AI-based risk assessment tools for sexual, reproductive and mental health: A systematic review. BMC Medical Informatics and Decision Making, 25(1), 97. [CrossRef]
- Kamel Rahimi, A., Pienaar, O., Ghadimi, M., Canfell, O. J., Pole, J. D., Shrapnel, S., van der Vegt, A. H., & Sullivan, C. (2024). Implementing AI in hospitals to achieve a learning health system: Systematic review. Journal of Medical Internet Research, 26, e49655. [CrossRef]
- Kooli, C., & Al Muftah, H. (2022). Artificial intelligence in healthcare: Ethical concerns. Technological Sustainability, 1(2), 121–136. [CrossRef]
- Kumar, P., Chauhan, S., & Awasthi, L. K. (2023). Artificial intelligence in healthcare: Review, ethics, trust challenges & future directions. Engineering Applications of Artificial Intelligence, 120, 105894. [CrossRef]
- Miller, G. J. (2022). Stakeholder roles in artificial intelligence projects. Project Leadership and Society, 3, 100068. [CrossRef]
- Musa, H. G., Fatmawati, I., Nuryakin, N., & Suyanto, M. (2024). Marketing research trends using technology acceptance model (TAM): A comprehensive review of researches (2002–2022). Cogent Business & Management, 11(1), Article 2329375. [CrossRef]
- Olawade, D. B., Wada, O. J., David-Olawade, A. C., Kunonga, E., Abaire, O. J., & Ling, J. (2023). Using artificial intelligence to improve public health: A narrative review. Frontiers in Public Health, 11, 1196397. [CrossRef]
- Panteli, D., Adib, K., Buttigieg, S., Goiana-da-Silva, F., Ladewig, K., Azzopardi-Muscat, N., Figueras, J., Novillo-Ortiz, D., & McKee, M. (2025). Artificial intelligence in public health: Promises, challenges, and an agenda for policy makers. The Lancet Public Health, 10(5), e346–e355. [CrossRef]
- Sáez, C., Ferri, P., & García-Gómez, J. M. (2024). Resilient AI in health: Research agenda for trustworthy clinical decision support. Journal of Medical Internet Research, 26, e50295. [CrossRef]
- Sriharan, A., Kuhlmann, E., Correia, T., Tahzib, F., Czabanowska, K., Ungureanu, M., & Kumar, B. N. (2025). Artificial intelligence in healthcare: Balancing innovation with workforce priorities. The International Journal of Health Planning and Management. [CrossRef]
- Williams, R., & Yampolskiy, R. (2021). Understanding and avoiding AI failures: A practical guide. Philosophies, 6(3), 53. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).