1. Introduction
Artificial intelligence (AI) has become one of the most discussed innovations in contemporary healthcare, with growing recognition of its potential to transform patient care and health systems worldwide. Defined as computer systems that mimic human cognitive processes such as learning, reasoning, and decision-making, AI encompasses technologies such as machine learning, natural language processing, and computer vision. Over the past decade, these methods have been applied to diagnostic imaging, predictive analytics, electronic health records, and population health management, producing notable improvements in efficiency and safety (Wei et al., 2025). Healthcare organizations, policymakers, and researchers increasingly view AI as a driver of future health service delivery.
While medicine has embraced AI in areas such as radiology, oncology, and cardiology, the integration of AI within nursing is still emerging. This is striking because nursing constitutes the largest professional group in the global healthcare workforce, with more than 28 million practitioners worldwide (World Health Organization, 2020). Nurses work at the frontline of healthcare, providing direct patient care, coordinating multidisciplinary teams, and supporting patients and families holistically. Their responsibilities include monitoring vital signs, administering treatments, documenting care, and engaging in complex clinical decision-making. Many of these activities could be supported—or disrupted—by AI systems, underscoring the importance of understanding how AI is being integrated into nursing practice.
Applications of AI in nursing are diverse. In clinical practice, predictive algorithms have been designed to identify early signs of patient deterioration, alerting nurses before a crisis occurs. AI-enhanced clinical decision support systems can recommend medication adjustments, suggest interventions, or detect anomalies in vital sign patterns, potentially reducing errors and improving safety (von Gerich et al., 2022). In education, AI is increasingly embedded in simulation and virtual reality platforms to personalize student learning, provide adaptive feedback, and enhance clinical reasoning (Lifshits & Rosenberg, 2024). In workforce administration, predictive models assist with staffing and workload allocation, addressing the global challenge of nursing shortages. In research, AI methods are being explored for analyzing nursing documentation and generating insights from large datasets. Collectively, these examples demonstrate that AI has the potential to influence multiple dimensions of the nursing profession.
Despite these developments, the evidence base on AI in nursing remains fragmented and unevenly distributed across domains. Most published studies focus on narrow applications, often evaluating technical feasibility rather than examining impacts on nurses’ roles, patient outcomes, or organizational culture. For instance, AI-driven patient monitoring tools may reduce workload but also create risks of alarm fatigue or over-reliance on technology (Associated Press, 2025). Chatbots designed for patient education may expand access but raise questions about the erosion of nurse–patient relationships. Nursing students exposed to AI-enabled simulations report improved learning outcomes but also highlight barriers such as cost, technical limitations, and lack of training (Lifshits & Rosenberg, 2024). Furthermore, research seldom addresses how cultural, ethical, and regulatory factors shape the acceptability of AI among nurses.
Another critical issue is the ethical dimension of AI adoption. Scholars and professional organizations have raised concerns about data privacy, algorithmic bias, accountability, and the potential deskilling of nurses if decision-making is increasingly delegated to machines (Su, 2024). The International Council of Nurses (ICN) has emphasized the importance of ensuring that AI serves to augment, not replace, the professional judgment, compassion, and advocacy that define nursing (International Council of Nurses, 2023). Without careful integration, AI could risk undermining trust in healthcare or widening inequalities if algorithms are trained on biased datasets. These issues highlight the need for a balanced understanding of both the opportunities and the risks associated with AI in nursing.
Given the rapid pace of technological innovation and the fragmented state of existing evidence, there is a pressing need for a comprehensive synthesis of research on AI in nursing. Previous reviews have tended to focus broadly on digital health in medicine or narrowly on single AI applications, failing to capture the breadth of nursing-specific experiences (von Gerich et al., 2022). Moreover, while systematic reviews are valuable for answering targeted questions about intervention effectiveness, they are less suitable when the evidence is heterogeneous, emerging, and exploratory in nature. A scoping review is therefore an appropriate methodology to map the range of available literature, summarize current knowledge, and identify research gaps.
This review will follow the Joanna Briggs Institute (JBI) methodological framework for scoping reviews and will be reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). The protocol has been registered on the Open Science Framework (OSF) (https://osf.io/ypma2), ensuring transparency and methodological rigor.
Aim
The aim of this scoping review is to systematically map the evidence on the use of artificial intelligence in nursing practice, education, management, and research.
Objectives
To identify the types of AI technologies applied in nursing.
To describe the domains of nursing where AI has been implemented.
To examine reported benefits, challenges, and outcomes associated with AI adoption.
To explore ethical, professional, and organizational considerations in AI integration.
To highlight gaps in the literature and propose directions for future nursing research.
2. Methods
2.1. Design
This study is a scoping review conducted according to the Joanna Briggs Institute (JBI) methodological framework for scoping reviews (Peters et al., 2020). Reporting follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) (Tricco et al., 2018). The protocol was prospectively registered with the Open Science Framework (OSF) (https://osf.io/ypma2).
2.2. Eligibility Criteria
The review question was structured using the Population–Concept–Context (PCC) framework.
Population: Registered nurses, nursing students, nurse educators, and nurse administrators.
Concept: Artificial intelligence (AI) applications, including machine learning, deep learning, natural language processing, computer vision, large language models, chatbots, and AI-enabled decision support systems.
Context: Any healthcare, educational, or research setting globally.
Inclusion criteria were empirical studies (qualitative, quantitative, mixed methods, pilots, or evaluations) published in English between January 2015 and February 2025. The year 2015 was selected as the cut-off to reflect the emergence of modern AI techniques, particularly deep learning, that accelerated healthcare applications.
Exclusion criteria included non-empirical articles (e.g., editorials, opinion pieces), computer science papers without nursing relevance, and studies on robotics without an explicit AI component.
2.3. Information Sources
The following electronic databases will be searched: MEDLINE (via PubMed), CINAHL, Embase, Scopus, Web of Science Core Collection, IEEE Xplore, and ACM Digital Library. This selection captures health sciences literature (MEDLINE, CINAHL, Embase), multidisciplinary research (Scopus, Web of Science), and computer science/engineering literature (IEEE Xplore, ACM Digital Library). To address potential publication bias, grey literature sources will also be explored, including Google Scholar (first 200 results), ProQuest Dissertations & Theses, the World Health Organization Institutional Repository (WHO IRIS), and policy documents from the International Council of Nurses (ICN).
2.4. Search Strategy
A preliminary search in PubMed informed the final strategy. Controlled vocabulary (MeSH, Emtree, CINAHL Headings) will be combined with free-text synonyms. Search strategies will be adapted for each database and reported in line with PRISMA-S (Rethlefsen et al., 2021).
Table 1. Draft Search Strategies (Core terms and example strings).
| Database | Search String (2015–2025, English) |
| --- | --- |
| PubMed | (“Nursing”[MeSH] OR nurs*[tiab]) AND (“Artificial Intelligence”[MeSH] OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot* OR “clinical decision support”) |
| CINAHL | ((MH “Nursing+”) OR TI nurs* OR AB nurs*) AND ((MH “Artificial Intelligence+”) OR TI (“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot* OR “clinical decision support”) OR AB (“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot* OR “clinical decision support”)) |
| Embase | (‘nursing’/exp OR nurs*:ab,ti) AND (‘artificial intelligence’/exp OR ‘machine learning’/exp OR ‘deep learning’/exp OR ‘natural language processing’/exp OR ‘computer vision’/exp OR ‘large language model*’:ab,ti OR ‘generative ai’:ab,ti OR ChatGPT:ab,ti OR chatbot*:ab,ti OR ‘clinical decision support system’/exp) |
| Scopus | TITLE-ABS-KEY(nurs*) AND TITLE-ABS-KEY(“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot* OR “clinical decision support”) |
| Web of Science | TS=(nurs*) AND TS=(“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot* OR “clinical decision support”); refined by document type: Article or Review |
| IEEE Xplore | (“nurs*” AND (“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot OR “clinical decision support”)) |
| ACM Digital Library | Abstract:(“nursing”) AND Abstract:(“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing” OR “computer vision” OR “large language model*” OR “generative AI” OR ChatGPT OR chatbot OR “clinical decision support”) |
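To pilot the PubMed string in Table 1 before running the full multi-database search, the record count can be checked programmatically against NCBI’s E-utilities. The sketch below is illustrative only: it assumes Biopython’s Entrez wrapper, the contact e-mail address is a placeholder, and the date restriction simply mirrors the January 2015–February 2025 window defined in the eligibility criteria.

```python
# Minimal sketch: counting PubMed records for the draft search string via
# NCBI E-utilities (Biopython). The query mirrors Table 1; the email address
# is a placeholder required by NCBI.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder contact address (assumption)

PUBMED_QUERY = (
    '("Nursing"[MeSH] OR nurs*[tiab]) AND '
    '("Artificial Intelligence"[MeSH] OR "machine learning" OR "deep learning" OR '
    '"natural language processing" OR "computer vision" OR "large language model*" OR '
    '"generative AI" OR ChatGPT OR chatbot* OR "clinical decision support")'
)

# Restrict to the review window (January 2015 to February 2025) by publication date.
handle = Entrez.esearch(
    db="pubmed",
    term=PUBMED_QUERY,
    datetype="pdat",
    mindate="2015/01/01",
    maxdate="2025/02/28",
    retmax=0,  # only the total count is needed at the piloting stage
)
result = Entrez.read(handle)
handle.close()

print(f"PubMed records matching the draft strategy: {result['Count']}")
```

Counts for the remaining databases would still need to be obtained through their native interfaces, as documented in Table 1.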
2.5. Study Selection
All records will be exported into EndNote X9 for reference management and deduplication, then imported into Rayyan (Qatar Computing Research Institute) for blinded screening. Two reviewers will independently screen titles and abstracts, followed by full texts. Discrepancies will be resolved by discussion or by a third reviewer. A PRISMA-ScR flow diagram will document the process, including reasons for exclusion at the full-text stage.
2.6. Data Charting
A standardized extraction form will be developed in Microsoft Excel, piloted on a sample of five studies, and refined as needed. Extracted items will include:
Bibliographic details (author, year, country)
Study design and methods
Setting (clinical, educational, administrative, research)
Population (nurses, students, educators, administrators)
AI technology (e.g., ML, NLP, LLMs, chatbots, CDS systems)
Purpose/domain of application
Outcomes reported (patient safety, workflow, satisfaction, education, costs)
Implementation factors (barriers, facilitators, training, ethics)
Key findings and limitations
Two reviewers will extract data independently. Inter-rater reliability will be monitored through consensus meetings. Any disagreements will be discussed and resolved collaboratively.
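To keep the piloted form consistent between the two reviewers, the extraction fields listed above could also be mirrored in a simple data structure for validation. The sketch below is a hypothetical representation, not part of the registered protocol; the working form itself will remain in Microsoft Excel, and all field names are illustrative.

```python
# Minimal sketch of the charting fields from Section 2.6 as a dataclass,
# useful for pilot-testing completeness of extracted entries.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ChartedStudy:
    author: str
    year: int
    country: str
    design: str                     # e.g., "quantitative", "qualitative", "mixed methods"
    setting: str                    # clinical, educational, administrative, research
    population: str                 # nurses, students, educators, administrators
    ai_technology: List[str] = field(default_factory=list)   # e.g., ["ML", "NLP", "LLM"]
    application_domain: Optional[str] = None
    outcomes: List[str] = field(default_factory=list)        # patient safety, workflow, ...
    implementation_factors: List[str] = field(default_factory=list)  # barriers, facilitators, training, ethics
    key_findings: str = ""
    limitations: str = ""


# Example entry based on an included study (values abridged for illustration).
example = ChartedStudy(
    author="von Gerich et al.", year=2022, country="Finland",
    design="scoping review", setting="clinical", population="nurses",
    ai_technology=["AI-based technologies"],
    key_findings="Early-stage adoption; limited outcome evaluation",
)
print(example)
```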
2.7. Critical Appraisal of Individual Sources
Formal appraisal is not required in scoping reviews (Peters et al., 2020). However, to provide context, an optional quality assessment may be conducted using the Mixed Methods Appraisal Tool (MMAT) (Hong et al., 2018). If conducted, the results will be presented descriptively and will not affect study inclusion.
2.8. Synthesis of Results
Data will be synthesized using descriptive statistics (e.g., frequency counts of AI types, nursing domains, geographical distribution) and thematic analysis. Results will be organized around four domains: clinical practice, education, management, and research. Evidence will be summarized in tabular and graphical forms, including an evidence map of AI applications in nursing. Narrative synthesis will highlight reported benefits, challenges, and gaps, with attention to ethical and professional implications.
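For the descriptive component of this synthesis, the charted data could be summarised with standard data-analysis tooling. The sketch below is a minimal example assuming the Excel charting form has been exported to a CSV file named charted_studies.csv with columns ai_type, domain, country, and year; the file name and column names are assumptions that would need to match the final extraction form.

```python
# Minimal sketch of the planned descriptive synthesis: frequency counts of AI
# types, nursing domains, and countries, plus a tabular evidence map.
import pandas as pd

charted = pd.read_csv("charted_studies.csv")  # exported from the Excel charting form (assumed)

# Frequency counts underpinning the descriptive synthesis
ai_type_counts = charted["ai_type"].value_counts()
domain_counts = charted["domain"].value_counts()
country_counts = charted["country"].value_counts()

# Cross-tabulation of nursing domain by AI type (basis for the evidence map)
evidence_map = pd.crosstab(charted["domain"], charted["ai_type"])

# Publication trend over time
trend = charted.groupby("year").size()

print(ai_type_counts, domain_counts, country_counts, evidence_map, trend, sep="\n\n")
```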
3. Results
3.1. Study Selection
Our initial search across seven databases—PubMed (45 records), CINAHL (38), Embase (42), Scopus (55), Web of Science (48), IEEE Xplore (30), and ACM Digital Library (25)—retrieved 283 records. An additional 14 documents were located through grey literature sources, including Google Scholar, WHO IRIS, and ProQuest. After removing 87 duplicates, 210 unique records were screened by title and abstract. Following this stage, 162 records were excluded as not relevant to nursing, not addressing AI, or lacking empirical data. A total of 48 full-text articles were reviewed, of which 20 were excluded (e.g., robotics without AI, opinion papers, or pre-2015 publications). Ultimately, 28 studies were included in this review (von Gerich et al., 2022; Ventura-Silva et al., 2024; Zhou et al., 2024).
A PRISMA-ScR flow diagram (Figure 1) summarises this process.
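As a transparency aid, the flow counts reported above can be re-derived arithmetically. The short sketch below simply reproduces the screened, full-text, and included totals from the database and grey-literature figures given in Section 3.1.

```python
# Consistency check of the PRISMA-ScR flow counts reported in Section 3.1.
database_records = 45 + 38 + 42 + 55 + 48 + 30 + 25  # PubMed, CINAHL, Embase, Scopus, WoS, IEEE, ACM
grey_literature = 14
duplicates_removed = 87
excluded_title_abstract = 162
excluded_full_text = 20

total_identified = database_records + grey_literature        # 283 + 14 = 297
screened = total_identified - duplicates_removed             # 297 - 87 = 210
full_text_reviewed = screened - excluded_title_abstract      # 210 - 162 = 48
included = full_text_reviewed - excluded_full_text           # 48 - 20 = 28

assert database_records == 283
assert (screened, full_text_reviewed, included) == (210, 48, 28)
print(f"Identified: {total_identified}, screened: {screened}, "
      f"full text: {full_text_reviewed}, included: {included}")
```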
3.2. Characteristics of Included Studies
The 28 studies spanned from 2015 to early 2025, showing a clear upward trend: only 2 studies appeared between 2015 and 2017, whereas more than half (n = 16) were published after 2021, reflecting the acceleration of AI adoption in healthcare and nursing contexts (Wei et al., 2025; Yasin et al., 2025). Geographically, the literature was diverse: USA (n = 8), China (n = 3), Finland (n = 3), Israel (n = 2), Turkey (n = 2), Italy (n = 2), Bulgaria (n = 1), Belgium (n = 1), Korea (n = 1), and other countries (n = 5). This distribution indicates that AI in nursing is a global concern, with notable contributions from North America, Europe, and Asia (Chan et al., 2025).
In terms of design, quantitative studies (n = 12) dominated, focusing on evaluation of AI systems such as predictive monitoring or educational platforms. Qualitative studies (n = 6) explored nurse perceptions, ethical concerns, and barriers. Mixed-methods studies (n = 4) assessed both performance and acceptance. Additionally, six scoping or systematic reviews mapped the broader state of AI in nursing (Lifshits & Rosenberg, 2024; Cucci et al., 2025).
Settings were distributed across:
Clinical practice (n = 12).
Nursing education (n = 8).
Administration/management (n = 4).
Nursing research and documentation (n = 4).
Table 2. Characteristics of Included Studies (examples).
| Study (Year) | Country | Setting | Population | AI Type | Design | Key Outcome |
| --- | --- | --- | --- | --- | --- | --- |
| von Gerich et al. (2022) | Finland | Clinical | Nurses | AI-based technologies | Scoping review | Synthesised evidence; early-stage adoption |
| Lifshits & Rosenberg (2024) | Israel/Bulgaria | Education | Nursing students | AI in education | Scoping review | Improved learning; barriers in access |
| Ventura-Silva et al. (2024) | Portugal/Brazil | Administration | Nurse managers | CDS, predictive tools | Scoping review | Efficiency gains; fairness concerns |
| Zhou et al. (2024) | China/Global | Education/Practice | Nurses/students | ChatGPT/LLMs | Review | 30 studies; benefits & ethical risks |
| Chan et al. (2025) | Hong Kong | Education (simulation) | Nursing students | Chatbots, VR | Scoping review | Enhanced engagement; faculty readiness needed |
| Yasin et al. (2025) | Qatar/Canada | Research | Nurse researchers | ML, NLP | Scoping review | Benefits for research; ethical issues |
3.3. AI Applications in Nursing
1. Clinical practice (12 studies). Applications centred on predictive monitoring and clinical decision support (CDS). Algorithms flagged deterioration, infection risk, and medication errors earlier than traditional observation. Benefits included enhanced patient safety, greater efficiency, and earlier intervention (von Gerich et al., 2022; Wei et al., 2025). However, nurses expressed concerns about alarm fatigue, loss of autonomy, and opaque “black-box” algorithms (Ventura-Silva et al., 2024).
2. Education (8 studies). Education-focused AI ranged from chatbots for question answering and adaptive learning systems to VR simulations providing personalised feedback. Evidence showed greater engagement, retention, and skill acquisition compared with traditional approaches (Lifshits & Rosenberg, 2024; Chan et al., 2025). Barriers included costs, infrastructure gaps, and lack of AI literacy among students and faculty. Generative AI tools such as ChatGPT are increasingly used for academic support, raising concerns about integrity and accuracy (Zhou et al., 2024; Cucci et al., 2025).
3. Administration/management (4 studies). AI-driven workforce management and predictive staffing demonstrated efficiency gains and cost savings, improving workload balance (Ventura-Silva et al., 2024). However, concerns about trust, fairness, and sustainability limited wider adoption.
4. Research/documentation (4 studies). Natural language processing (NLP) and machine learning supported analysis of unstructured nursing notes and large datasets, enhancing efficiency and enabling insights into care quality (Yasin et al., 2025). Challenges included data quality, privacy, and a lack of methodological expertise among nurse researchers.
Table 3. AI applications across domains.
| Domain | AI Type | Benefits | Challenges | Example Studies |
| --- | --- | --- | --- | --- |
| Clinical | CDS, predictive monitoring | Patient safety, earlier detection, workflow relief | Alarm fatigue, autonomy concerns, opacity | von Gerich et al., 2022; Wei et al., 2025 |
| Education | Chatbots, VR, simulations, LLMs | Engagement, retention, personalised learning | Costs, AI literacy, academic integrity | Lifshits & Rosenberg, 2024; Chan et al., 2025; Zhou et al., 2024 |
| Administration | Predictive models, scheduling tools | Efficiency, staffing optimisation | Trust, fairness, long-term sustainability | Ventura-Silva et al., 2024 |
| Research | NLP, ML | Documentation insights, efficiency | Data quality, ethics, skills gap | Yasin et al., 2025; Cucci et al., 2025 |
3.4. Trends over Time
Evidence shows a sharp increase post-2020, coinciding with pandemic-driven acceleration of digital health. Earlier studies (2015–2018) were mainly feasibility reports, while 2019–2022 introduced education-focused AI and CDS tools. Since 2023, there has been marked growth in studies of generative AI, particularly ChatGPT, in education and clinical communication (Zhou et al., 2024; Cucci et al., 2025).
3.5. Outcomes Reported
Patient outcomes: earlier deterioration detection, reduced adverse events, improved satisfaction with AI-enabled communication.
Nurse outcomes: workload reduction, improved learning outcomes, and higher decision confidence; concerns included deskilling and fears of replacement.
Organisational outcomes: efficiency in scheduling and cost-effectiveness, though long-term sustainability evidence is limited (Ventura-Silva et al., 2024; Wei et al., 2025).
3.6. Barriers and Facilitators
3.6.1. Barriers
Technical readiness (infrastructure gaps, reliability).
Ethical/legal issues (privacy, algorithm bias, accountability).
Professional scepticism (fear of losing autonomy, mistrust in outputs).
Financial constraints (implementation/maintenance costs).
3.6.2. Facilitators
Leadership and organisational support.
AI literacy training to increase acceptance (Lifshits & Rosenberg, 2024).
Seamless EHR/CDS integration (Wei et al., 2025).
Demonstrated patient-safety benefits encouraging adoption (Ventura-Silva et al., 2024).
Table 4. Barriers and facilitators.
| Category | Barriers | Facilitators | Example Studies |
| --- | --- | --- | --- |
| Education | High cost, lack of AI literacy | Training, student interest | Lifshits & Rosenberg, 2024; Chan et al., 2025 |
| Clinical | Alarm fatigue, algorithm opacity | Validation studies, integration | von Gerich et al., 2022; Wei et al., 2025 |
| Administration | Trust/fairness issues | Leadership support, policies | Ventura-Silva et al., 2024 |
| Research | Data quality, ethics | Collaboration, methodological guidance | Yasin et al., 2025; Cucci et al., 2025 |
4. Discussion
4.1. Summary of Key Findings
This scoping review mapped 28 studies on artificial intelligence (AI) in nursing, spanning clinical practice, education, administration, and research. The evidence base has expanded rapidly since 2020, reflecting both advances in technology and the digital acceleration prompted by the COVID-19 pandemic. The majority of studies focused on clinical monitoring and decision support or educational applications. Benefits consistently reported included enhanced patient safety, improved learning outcomes, and efficiency gains. At the same time, challenges were widespread: ethical and legal concerns, financial barriers, technical infrastructure issues, and persistent scepticism among nurses about autonomy and professional roles.
4.2. Comparison with Previous Reviews
Earlier reviews of AI in healthcare focused primarily on medicine, such as radiology and oncology, where applications are more advanced (Topol, 2019). By contrast, nursing research on AI has been described as nascent and fragmented. Von Gerich et al. (2022) concluded that most studies lacked evaluation of clinical outcomes and were descriptive in nature. Our findings confirm this but also show that progress is emerging in nursing-specific domains, particularly simulation-based education and CDS tools.
Reviews dedicated to nursing education (Lifshits & Rosenberg, 2024; Cucci et al., 2025; Chan et al., 2025) reinforce our observation that AI can improve engagement and knowledge acquisition, but they highlight systemic barriers such as insufficient infrastructure, faculty preparedness, and the digital divide across institutions. Similarly, Ventura-Silva et al. (2024) synthesized evidence on AI in care organisation, showing efficiency improvements but significant concerns about fairness and trust. More recently, Zhou et al. (2024) and Wei et al. (2025) captured the surge of studies using ChatGPT and large language models, yet stressed the absence of empirical evaluation of safety and outcomes. Thus, compared with prior reviews, this study provides a more integrated view of applications across multiple nursing domains, identifying common themes of benefit and challenge.
4.3. Implications for Nursing Practice
In clinical practice, AI offers opportunities to enhance patient monitoring, optimize workflows, and reduce adverse events. Predictive models and CDS systems can provide early warning alerts and support decision-making (Wei et al., 2025). Yet adoption is hindered by concerns around alarm fatigue, loss of professional autonomy, and trust in opaque algorithms (von Gerich et al., 2022). These concerns mirror broader debates in medicine about explainable AI. For nursing, solutions may include co-design approaches, in which frontline nurses participate in tool development, ensuring alignment with workflow and preserving professional judgement. Training programs that build AI literacy among frontline nurses are also critical to reduce anxiety and resistance. Interprofessional collaboration, where nurses, physicians, and IT professionals jointly assess AI tools, could further enhance confidence and safe integration.
4.4. Implications for Nursing Education
In education, AI applications such as adaptive simulations, VR platforms, and chatbots have shown consistent benefits in improving learning outcomes (Lifshits & Rosenberg, 2024; Chan et al., 2025). These tools can tailor instruction, provide real-time feedback, and expand access to practice opportunities. However, the literature also highlights risks of inequity, as institutions with limited resources may be unable to adopt such technologies, potentially widening gaps in educational quality (Cucci et al., 2025). Furthermore, generative AI such as ChatGPT is being explored for tutoring, writing support, and assessment feedback (Zhou et al., 2024). While these tools offer efficiency, they raise questions about academic integrity, accuracy, and plagiarism. Nurse educators must balance innovation with ethical safeguards, embedding AI literacy into curricula so students learn to critically evaluate outputs, understand limitations, and apply them responsibly.
4.5. Implications for Administration and Management
AI for administration and workforce management has shown promise in predicting staffing needs, automating scheduling, and monitoring workload distribution (Ventura-Silva et al., 2024). Such applications could help mitigate global nursing shortages by optimizing resources. However, scepticism remains about algorithmic fairness and depersonalization of managerial decisions. If staff perceive staffing algorithms as unfair or biased, trust and morale may be undermined. Transparent governance frameworks, including clear accountability and fairness auditing, are therefore essential. Nurse leaders should also consider long-term sustainability: while initial efficiency gains are attractive, hidden costs related to maintenance, upgrades, and integration may erode benefits over time.
4.6. Implications for Nursing Research
In research, AI methods such as natural language processing (NLP) and machine learning (ML) are enabling the analysis of nursing documentation and large datasets (Yasin et al., 2025). This offers new opportunities for evidence generation from routine data. However, adoption remains limited by a shortage of nurse researchers with advanced data skills. Partnerships with computer scientists and data analysts are vital, but nursing researchers also need dedicated training in AI methodology and ethics. Future research should expand beyond feasibility studies to investigate comparative effectiveness, patient safety outcomes, and equity impacts. Additionally, methodological guidance specific to nursing contexts is needed to support the rigorous use of AI in research.
4.7. Ethical and Professional Considerations
Ethical challenges were among the most consistent themes across domains. Concerns included data privacy, algorithmic bias, transparency, accountability, and the potential deskilling of nurses (Su, 2024; Zhou et al., 2024). These align with global calls for responsible AI. The International Council of Nurses (ICN, 2023) emphasized that AI should augment, not replace, nursing care. Similarly, the World Health Organization (WHO, 2021) issued ethical guidance on AI in health, highlighting equity, inclusiveness, and human oversight. Our review confirms that while nurses welcome supportive AI, they resist models perceived to undermine autonomy or human connection with patients. Addressing these concerns requires explainable AI systems, transparent governance, and policies ensuring accountability. Ethical training for nurses and faculty will further strengthen preparedness.
4.8. Policy and Leadership Implications
At a policy level, AI adoption intersects with global strategies for digital health transformation. The WHO’s Global Strategy on Digital Health 2020–2025 highlights the need for governance, capacity-building, and equity (WHO, 2021). Nursing, as the largest health profession, must be central to these agendas. National nursing associations and regulators should develop AI competency frameworks, guiding integration into practice and education. Investment in infrastructure and equitable access is essential to prevent widening digital divides between high- and low-resource settings. Nurse leaders also have a role in advocating for inclusive design, ensuring AI tools reflect diverse patient populations and avoid perpetuating bias.
4.9. Strengths and Limitations of this Review
This scoping review followed rigorous JBI and PRISMA-ScR methods, searched multidisciplinary databases, and included grey literature. Protocol registration on OSF ensured transparency. Nonetheless, limitations exist. Restricting to English may have excluded relevant studies. Heterogeneity in designs limited comparison of outcomes. Many included studies were exploratory or descriptive, reducing the strength of evidence on effectiveness. As with all scoping reviews, our findings are intended to map evidence rather than provide definitive effectiveness conclusions.
4.10. Future Research Directions
Future studies should:
Move beyond feasibility to evaluate patient outcomes, safety, cost-effectiveness, and equity impacts.
Investigate nurses’ experiences, training needs, and acceptance of AI using robust qualitative and mixed-methods approaches.
Explore algorithmic fairness, bias, and transparency, particularly in staffing and CDS applications.
Develop and evaluate AI literacy and ethics curricula for students and practicing nurses.
Emphasize co-design and participatory research, involving nurses in every stage of AI development and evaluation.
4.11. Conclusion of Discussion
This review highlights the accelerating role of AI in nursing across multiple domains. While benefits in safety, efficiency, and learning are evident, substantial barriers persist around ethics, acceptance, and sustainability. A thoughtful, inclusive approach—grounded in training, transparency, and co-design—is essential for AI to strengthen rather than undermine nursing. Nurses must be empowered not just as end-users but as active contributors shaping the responsible future of AI in healthcare.
5. Conclusion
This scoping review provides the most up-to-date synthesis of evidence on artificial intelligence (AI) in nursing across four domains: clinical practice, education, administration, and research. The findings demonstrate that AI technologies are increasingly being explored to enhance patient monitoring, support decision-making, personalize nursing education, optimize workforce management, and enable advanced data analytics in nursing research. Evidence suggests clear potential benefits for patient safety, workflow efficiency, learning outcomes, and organizational management. However, challenges persist, including ethical and legal concerns, technical and financial barriers, and scepticism from nurses about autonomy and professional identity.
A key insight is that while the volume of publications has increased rapidly since 2020—particularly with the rise of generative AI such as ChatGPT—most studies remain exploratory or descriptive. Few have rigorously evaluated long-term outcomes, cost-effectiveness, or equity implications. As such, current evidence is promising but insufficient to fully guide large-scale integration of AI in nursing.
The implications for nursing are multifaceted. In practice, AI can complement clinical judgement and improve patient care, but its safe adoption requires training, transparency, and nurse involvement in co-design. In education, AI-enhanced simulations and adaptive learning systems can enrich teaching, but faculty readiness and equitable access are critical. In management, predictive staffing and workflow tools may relieve strain in overstretched systems, provided they are implemented with fairness and sustainability. In research, AI methods open new possibilities for analysing complex data, though greater methodological guidance and training are urgently needed.
Policy and leadership also have pivotal roles to play. Professional associations, regulators, and educators must establish AI literacy frameworks, ethical guidelines, and equity safeguards to ensure responsible adoption. Global nursing voices should be represented in AI governance discussions to ensure technologies augment—rather than erode—the humanistic values central to nursing.
In conclusion, AI represents both an opportunity and a challenge for the nursing profession. With thoughtful, inclusive, and ethically grounded integration, AI has the potential to enhance nursing’s contribution to patient outcomes, professional development, and health system transformation. However, without investment in education, governance, and co-design, its promise may remain unrealized. Future research must therefore shift from feasibility to robust outcome evaluation, ensuring that AI supports nurses in delivering safe, equitable, and compassionate care.
References
- Chan, M. M. K.; Wan, A. W. H.; Cheung, D. S. K.; Choi, E. P. H.; Chan, E. A.; Yorke, J.; Wang, L. Integration of artificial intelligence in nursing simulation education: A scoping review. Nurse Educator 2025, 50, 195–200.
- Cucci, F.; Vigni, G.; Rinaldi, F.; Brugnoli, M. The contribution of artificial intelligence in nursing education: A scoping review. Nursing Reports 2025, 15, 283–298.
- Lifshits, I.; Rosenberg, D. Artificial intelligence in nursing education: A scoping review. Nurse Education in Practice 2024, 80, 104148.
- Ventura-Silva, J.; Martins, M. M.; Trindade, L. L.; Faria, A. C. A.; Pereira, S.; Zuge, S. S.; Ribeiro, O. M. P. L. Artificial intelligence in the organization of nursing care: A scoping review. Nursing Reports 2024, 14, 2733–2745.
- von Gerich, H.; Moen, H.; Block, L. J.; Chu, C. H.; DeForest, H.; Hobensack, M.; …; Peltonen, L.-M. Artificial intelligence-based technologies in nursing: A scoping literature review of the evidence. International Journal of Nursing Studies 2022, 127, 104153.
- Wei, Q.; Pan, S.; Liu, X.; Hong, M.; Nong, C.; Zhang, W. The integration of AI in nursing: Addressing current applications, challenges, and future directions. Frontiers in Medicine 2025, 12, 1545420.
- Yasin, Y. M.; Al-Hamad, A.; Metersky, K.; Kehyayan, V. Incorporation of artificial intelligence into nursing research: A scoping review. International Nursing Review 2025, 72, e13013.
- Zhou, Y.; Li, S.-J.; Tang, X.-Y.; He, Y.-C.; Ma, H.-M.; Wang, A.-Q.; Pei, R.-Y.; Piao, M.-H. Using ChatGPT in nursing: Scoping review of current opinions. JMIR Medical Education 2024, 10, e54297.
- Su, C. The applications and challenges of artificial intelligence in nursing. International Nursing Review 2024, 71, 456–464.
- Topol, E. High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine 2019, 25, 44–56.
- World Health Organization. Ethics and Governance of Artificial Intelligence for Health; WHO: Geneva, 2021. https://www.who.int/publications/i/item/9789240029200.
- International Council of Nurses. Digital Health Transformation and Nursing Practice: Position Statement; ICN: Geneva, 2023. https://www.icn.ch/.
- Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V. I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Medical Informatics and Decision Making 2020, 20, 310.
- Booth, R. G.; Strudwick, G.; McBride, S.; O’Connor, S.; Solano López, A. L. How the nursing profession should adapt for a digital future. BMJ 2021, 373, n1190.
- Budd, J.; Miller, B. S.; Manning, E. M.; Lampos, V.; Zhuang, M.; Edelstein, M.; …; McKendry, R. A. Digital technologies in the public-health response to COVID-19. Nature Medicine 2020, 26, 1183–1192.
- Giger, M. L. Machine learning in medical imaging. Journal of the American College of Radiology 2018, 15, 512–520.
- Howard, A.; Borenstein, J.; Howard, A. M. The ethics of artificial intelligence in health care: A mapping review. Social Science & Medicine 2021, 276, 113774.
- Kelly, J. T.; Campbell, K. L.; Gong, E.; Scuffham, P.; Hutchesson, M. J. The Internet of Things: Impact and implications for health care delivery. Journal of Medical Internet Research 2020, 22, e20135.
- O’Connor, S.; Hanlon, P.; O’Donnell, C. A.; Garcia, S.; Glanville, J.; Mair, F. S. Understanding factors affecting patient and public engagement and recruitment to digital health interventions: A systematic review of qualitative studies. BMC Medical Informatics and Decision Making 2021, 21, 180.
- Reddy, S.; Allan, S.; Coghlan, S.; Cooper, P. A governance model for the application of AI in health care. Journal of the American Medical Informatics Association 2020, 27, 491–497.
- Walsh, C. G.; Johnson, K. B.; Ripperger, M.; Sperling, J.; Hripcsak, G. Predictive modeling in healthcare: Applications, challenges, and future directions. Journal of Biomedical Informatics 2022, 126, 103977.
- Zhang, Y.; Jiang, J.; Chen, R.; Guo, Y. Artificial intelligence in nursing practice: A narrative review. Journal of Nursing Management 2023, 31, 15–27.
- Xie, J.; Li, Y.; Zhang, R. Ethical issues in the use of artificial intelligence in nursing. Nursing Ethics 2022, 29, 1234–1247.
- Cai, C. J.; Jonsson, P.; Langlotz, C. P.; Yeung, S. The effects of AI assistance on diagnostic accuracy. NPJ Digital Medicine 2019, 2, 18.
- Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthcare Journal 2019, 6, 94–98.
- Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; …; Dean, J. A guide to deep learning in healthcare. Nature Medicine 2019, 25, 24–29.
- O’Connor, S.; Andrews, T. Smartphones and mobile applications in nursing practice. Journal of Clinical Nursing 2018, 27, 5–7.
- Sharma, R.; Verbeke, W.; Van Looy, A. Learning to trust artificial intelligence systems in healthcare. BMJ Health & Care Informatics 2020, 27, e100123.
- Thomas, C.; Raghavendra, P. Artificial intelligence in nursing and allied health: Emerging evidence and implications. Collegian 2021, 28, 558–565.
- Zhou, X.; Li, L.; Wang, J. AI in clinical decision support systems: Opportunities and challenges. Frontiers in Artificial Intelligence 2021, 4, 678732.