1. Introduction
According to the WHO, in 2019 approximately 970 million people worldwide were living with a mental disorder, anxiety and depression being the most common [1]. Mental disorders affect many aspects of life, including relationships with family, friends and the community, and can lead to problems at school and work. People with a mental disorder are at high risk of suicide and lose productivity, and as a consequence the economic burden on health care is substantial. An estimated 70% of people with mental illness do not receive formal treatment [2], while others do not seek treatment because of low perceived need, attitudinal barriers such as stigma, or low income [3]. Stigma associated with mental illness is a significant barrier to help-seeking, fostering negative attitudes and intentions toward mental health services [3,4].
On the other hand, there are critical concerns about the global shortage of mental health professionals [5]. The uneven distribution of behavioral health professionals across health care settings has been reported worldwide, raising several issues about the delivery of mental health services to individuals. Findings also indicate a shortage of mental health nurses [6,7]. Nursing workforce shortages have long been discussed, with worsening attrition and projected workforce shortfalls [8]. Because of the relational nature of interpersonal work and the many stressors of the everyday workplace, nurses require strong cognitive, emotional and relational skills and the ability to self-regulate [9]. They are often exposed to challenges related to caring for people in mental and emotional distress, including self-harm and suicidal behaviors, and must also manage clinical aggression and interpersonal conflicts [10]. These demands can affect nurses’ own mental health, which has been reported to be lower than population norms [10].
During the COVID-19 pandemic, concerns about accessibility to mental health care were raised, as access to care became more difficult. Immediate interventions were needed to overcome these barriers and to engage adults and young people in psychosocial mental health treatment [11]. In this direction, the rapid expansion of the data ecosystem has led to the development of artificial intelligence innovations and machine learning models. Among these, the integration of large language models (LLMs) into health care has attracted attention because of their potential to enhance diagnostic accuracy, streamline clinical workflows, and address health care disparities [12]. LLMs typically refer to pretrained language models (PLMs) with a large number of parameters that are trained on massive amounts of data. The innovation that has made LLMs the most prominent area of research in artificial intelligence is their ability to solve complex tasks. LLMs are the core element of generative AI applications, which can understand natural language and generate corresponding text, images, and videos. This human-machine interaction has great potential in medical diagnostics and medical decision-making [12]. The field of LLMs is rapidly expanding into disease classification, medical question answering and diagnostic content generation. Previous studies have demonstrated high accuracy of LLMs in radiology, psychiatry and neurology [13,14].
In the era of LLMs, conversational AI driven by these models is rapidly expanding. AI-driven conversational technologies have the potential to provide instant, interactive and context-aware access to clinical recommendations [15]. AI-powered chatbots, also referred to as conversational or relational agents, can hold a conversation with a human user, enhancing health care professionals’ engagement with clinical guidance [16,17]. The digital mental health care industry bridges the gap between mental health services and individuals by incorporating artificial intelligence (AI) into AI-guided products. AI technologies that perform human-like physical and cognitive tasks enable automation, engagement and decision support, with AI-empowered decisions commonly reviewed by domain experts before they are implemented in a patient’s treatment plan [18]. Moreover, AI systems can support both clinicians and patients and improve population health by transforming mental health care: enhancing patient retention, improving the work life of health care professionals and reducing costs [19]. LLM-based chatbots or conversational agents are digital assistants, implemented in hardware or software, that use machine learning and artificial intelligence methods to mimic human behavior and sustain dialogue, enabling them to participate in conversation [17].
Previous studies have explored the potential of AI-powered chatbots in mental health [11,16,20,21]. For nurses, intelligent conversational agents offer opportunities to augment nursing practice and care planning, support patient education, and empower individuals to manage their health [22,23]. While AI chatbots show promise in diagnostic accuracy and clinical decision-making, human clinicians currently retain the advantage in holistic clinical abilities, a skill requiring experience, contextual knowledge and the ability to give concise responses [22]. Prior research indicates that conversational agent-based interventions are feasible, acceptable and have positive effects on physical function, healthy lifestyle, mental health and psychosocial outcomes [24]. Others suggest adopting Retrieval Augmented Generation (RAG)-enabled conversational AI systems to mitigate misleading information arising from biases, outdated training data or misinformation embedded in public sources [15]. Ongoing research is needed to examine whether conversational AI technology can adequately substitute nursing expertise while prioritizing patient safety, privacy and equitable access to care.
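The RAG approach mentioned above can be sketched schematically. The example below is a minimal, hypothetical illustration: the toy guideline corpus, the bag-of-words `retrieve` function and the prompt template are our own assumptions, not components of any system from the cited studies. The idea is that snippets from a vetted source are retrieved by similarity and prepended to the user's question, so the language model answers from curated material rather than from its training data alone.

```python
import math
import re
from collections import Counter

# Toy corpus of vetted guideline snippets (illustrative only, not real guidance).
GUIDELINES = [
    "Screen patients for depression using the PHQ-9 questionnaire.",
    "Refer patients expressing suicidal ideation for urgent specialist review.",
    "Encourage sleep hygiene and regular exercise for mild anxiety.",
]

def _bow(text):
    """Lowercased bag-of-words vector for a piece of text."""
    return Counter(re.findall(r"[a-z0-9-]+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k guideline snippets most similar to the query."""
    q = _bow(query)
    return sorted(GUIDELINES, key=lambda d: _cosine(q, _bow(d)), reverse=True)[:k]

def build_prompt(query):
    """Prepend retrieved snippets so the model answers from vetted sources."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

print(build_prompt("Which questionnaire should I use to screen for depression?"))
```

A production RAG system would replace the bag-of-words retriever with dense embeddings and send the assembled prompt to an LLM, but the grounding mechanism is the same: the model is constrained to the retrieved context.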
As LLMs and AI-driven chatbots are increasingly explored as problem-solving aids across health care domains, access to medical information is becoming faster. The scientific community continues to expand the capabilities of these models, and as a consequence many ethical concerns have been raised, including privacy risks, model hallucination and regulatory fragmentation. As the technology evolves, the confidence of health care professionals and patients in medical technology depends on ensuring that AI is evidence-based, free from bias and promotes equity in care [19]. All of the above indicate that the adoption of AI chatbots in clinical settings remains uncertain [19,22]. Therefore, given the growing presence of artificial intelligence in health care, it is essential to assess whether, and under what conditions and circumstances, nurses trust and adopt AI-powered conversational technologies in nursing decision-making, or reject and dispute them.
Thus, this study explores nurses’ perceptions and concerns regarding the use of artificial intelligence (AI) chatbots as a tool for mental health support. Additionally, we evaluate levels of acceptance of and fear toward AI, and examine the influence of demographic variables on these attitudes.
4. Discussion
This study aimed to identify nurses’ perceptions regarding the use of artificial intelligence-driven mental health chatbots. The findings also highlight the levels of acceptance of, and fear toward, AI integration in clinical practice, and offer several insights into the relationships among nurses’ familiarity with AI, their beliefs, their affective attitudes and their behavioral intention regarding mental health support.
A key finding of this research is that perceptions of AI-driven mental health chatbots varied widely across participants, but overall nurses held positive attitudes toward the use of AI-driven chatbots. In general, nurses did not recognize strong technical advantages of AI technology, but they reported greater personal benefits from the use of AI mental health chatbots, although they also expressed considerable fear. As the results illustrate, nurses who perceive more technical advantages of artificial intelligence feel more comfortable with, and less fearful of, AI applications. Nurses who see personal benefits from the use of AI chatbots tend to experience higher fear; that is, although they recognize personal gains, they still feel anxious about AI. These findings align with previous research suggesting that experience with technology influences the adoption and acceptance of AI in health care [32,33]. Although nurses perceive many advantages from the use of artificial intelligence, they remain wary of its shortcomings [34]. AI conversational agents have the potential to become a useful tool in nursing tasks, but many practical issues, such as lack of knowledge, explainability, and ethical concerns related to the nursing profession, remain unresolved. A previous inquiry examined the ability of large language models to assess the prognosis of schizophrenia and found that they aligned closely with the predictions of mental health professionals [35]. Overall, nurses need an attitude of openness toward learning new technologies and integrating an appropriate culture of artificial intelligence use.
Another important result of our survey was that participants who perceive more technical advantages of AI tend to have higher acceptance of AI; in other words, as belief in technical benefits increases, acceptance increases. Nurses who acknowledged more technical advantages in AI-driven mental health technology also tended to experience slightly less fear of AI. That is, nurses who recognized the meaningful technical contribution of AI to the nursing profession were more inclined to trust AI tools. These results are consistent with prior research emphasizing the role of beliefs in behavioral intention [36,37,38]. The existing literature underscores AI’s expanding role in nursing and the need to ensure that intelligent nursing evolves in alignment with clinical routines [39,40]. AI has strong prospects as a nursing support tool, assisting decision-making, writing nursing documents, supporting nursing operations involving exposure risks and physical exertion, and carrying out many nursing activities [39]. Indeed, while health care professionals are optimistic about AI’s potential to improve decision-making safety and quality, the literature emphasizes that the human touch remains essential for patients with complex needs [41].
Results from the linear regression analysis support the notion that older participants, males and those with more daily engagement with social media or websites tend to have more positive attitudes toward AI mental health chatbots. Furthermore, higher levels of digital technology competence were significantly associated with greater acceptance of artificial intelligence. Additionally, male nurses reported significantly higher acceptance of AI than their female counterparts, while they also reported significantly greater levels of fear. This is consistent with previous literature [36], in which female nurses reported lower levels of familiarity with AI, stronger beliefs about AI’s role in their job and higher levels of anxiety toward AI compared with their male counterparts. In addition, prior literacy was significantly associated with acceptance of AI in this study [32,45]. As the literature indicates, neonatal nurses can utilize generative artificial intelligence in clinical practice, but its effectiveness depends on structured training, reliable infrastructure and culturally sensitive implementation [45]. Notably, nurses’ perceptions of AI’s relevance to their role and their trust in AI-driven technologies were significant predictors of their intention to integrate AI into clinical practice. Nurses who recognized the meaningful contribution of AI to the nursing profession trusted its outputs and were more inclined to adopt AI tools [37]. This is consistent with prior research highlighting the role of beliefs as cognitive antecedents of behavioral intention [42,43,44]. According to Schaivo et al. [46], AI literacy fosters a positive attitude toward acceptance, whereas anxiety has a significant, direct negative effect. Moreover, they found that the learning and sociotechnical dimensions of AI anxiety serve as a complementary partial mediator between AI literacy and acceptance. A previous study also illustrates the emergence of AI technologies in mental health care, although mental health professionals have generally been slower to adopt AI [32].
AI-driven conversational technology can provide clinicians with advanced tools for understanding and addressing complex behavioral health patterns. Moreover, through realistic simulations and adaptive technologies, it can facilitate data-driven monitoring of patient progress, collaborative treatment planning and timely adjustment of interventions [48].
Our study underscores the perceptions and readiness of health care professionals regarding the transformative role of AI-driven conversational technology in health care, particularly in mental health care. Previous literature emphasizes the potential of mental health chatbots to improve patient care accessibility and patient management [49]. Moreover, the administrative role of AI chatbots aligns with the growing need for patient engagement and resource optimization in health care settings. AI chatbots have the capacity to revolutionize mental health care through enhanced accessibility and personalized interventions [50]. This approach allows the individualization of care, enhances cultural sensitivity and optimizes outcomes, contributing to more responsive and effective mental health care [48]. However, careful consideration of the ethical implications and of methodological design under cultural adaptations is essential to ensure the responsible deployment of AI-driven technology [51,52]. A conscientious and ethical integration of generative AI techniques is obligatory, ensuring a balanced approach that maximizes the benefits for mental health practice [47,53].
To the best of our knowledge, this is the first study to examine nurses’ perceptions regarding the use of AI chatbots as a tool for mental health support. Nonetheless, several limitations should be acknowledged. First, the cross-sectional design of our study limits the ability to draw causal inferences; thus, we cannot determine the temporal sequence of the observed relationships. Future longitudinal research designs should be adopted to track changes in nurses’ beliefs, attitudes and behavioral intentions. Second, this study was conducted in one country with a hybrid health care system (both public and private), so the generalizability of the findings is limited. Future studies could replicate this research in different countries and health care systems. Third, further research is needed to explore other potential factors influencing AI adoption at the organizational and environmental level, in order to identify the core barriers to AI integration into nursing practice.
Author Contributions
Conceptualization, A.K. and P.G. (Petros Galanis); methodology, A.K., O.K., I.M. and P.G. (Petros Galanis); software, P.G. (Parisis Gallos); validation, O.G., P.L. and M.T.; formal analysis, O.K., P.G. (Parisis Gallos), O.G., P.L. and P.G. (Petros Galanis); investigation, O.G., P.L. and M.T.; resources, P.G., P.L. and M.T.; data curation, O.K., P.G. (Parisis Gallos), O.G. and P.G. (Petros Galanis); writing—original draft preparation, P.L., A.K., I.M., P.G. (Parisis Gallos), O.G., M.T. and P.G. (Petros Galanis); writing—review and editing, A.K., I.M., O.K., P.G. (Parisis Gallos), O.G., P.L., M.T. and P.G. (Petros Galanis); visualization, A.K. and P.G. (Petros Galanis); supervision, P.G. (Petros Galanis); project administration, A.K. and P.G. (Petros Galanis); funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.
Table 1. Demographic characteristics of nurses (n=276).

| Characteristics | N | % |
|---|---|---|
| Gender | | |
| Females | 225 | 81.5 |
| Males | 51 | 18.5 |
| Age (years) ᵃ | 42.94 | 9.61 |
| Financial status ᵃ | 5.97 | 1.56 |
| Daily use of social media/websites (hours) ᵃ | 3.07 | 2.40 |
| Competence in digital technologies ᵃ | 7.63 | 1.66 |

ᵃ Mean and standard deviation are presented.
Table 2. Descriptive statistics for the study scales (n=276).

| Scale | Mean | Standard deviation | Median | Interquartile range |
|---|---|---|---|---|
| Artificial Intelligence in Mental Health Scale | | | | |
| Technical advantages | 1.76 | 0.69 | 2.00 | 0.69 |
| Personal advantages | 3.26 | 0.88 | 3.00 | 1.00 |
| Attitudes Towards Artificial Intelligence Scale | | | | |
| Acceptance | 5.15 | 1.99 | 5.00 | 2.50 |
| Fear | 5.72 | 1.80 | 6.00 | 2.00 |
Table 3. Correlation analysis between the study scales (n=276). Values are Pearson’s correlation coefficients.

| Scale | 2 | 3 | 4 |
|---|---|---|---|
| 1. AIMHS (technical advantages) | 0.522** | 0.533** | -0.148* |
| 2. AIMHS (personal advantages) | | 0.537** | 0.001 |
| 3. ATAI (acceptance) | | | -0.221** |
| 4. ATAI (fear) | | | |
Table 4. Linear regression models with the score on the technical advantages factor of the Artificial Intelligence in Mental Health Scale as the dependent variable (n=276). Unadjusted coefficients are from univariate models; adjusted coefficients and VIF are from the multivariable model.

| Independent variables | Unadjusted beta | 95% CI | P-value | Adjusted beta | 95% CI | P-value | VIF |
|---|---|---|---|---|---|---|---|
| Males vs. females | 0.096 | -0.115 to 0.308 | 0.370 | 0.065 | -0.149 to 0.279 | 0.551 | 1.068 |
| Age | 0.009 | 0.001 to 0.018 | 0.033 | 0.011 | 0.002 to 0.020 | 0.018 | 1.185 |
| Financial status | 0.031 | -0.022 to 0.083 | 0.255 | 0.047 | -0.008 to 0.102 | 0.090 | 1.126 |
| Daily use of social media/websites (hours) | 0.032 | -0.002 to 0.066 | 0.069 | 0.058 | 0.021 to 0.095 | 0.002 | 1.231 |
| Competence in digital technologies | -0.017 | -0.066 to 0.033 | 0.504 | -0.030 | -0.084 to 0.024 | 0.268 | 1.235 |
Table 5. Linear regression models with the score on the personal advantages factor of the Artificial Intelligence in Mental Health Scale as the dependent variable (n=276). Unadjusted coefficients are from univariate models; adjusted coefficients and VIF are from the multivariable model.

| Independent variables | Unadjusted beta | 95% CI | P-value | Adjusted beta | 95% CI | P-value | VIF |
|---|---|---|---|---|---|---|---|
| Males vs. females | 0.478 | 0.215 to 0.741 | <0.001 | 0.548 | 0.277 to 0.820 | <0.001 | 1.068 |
| Age | -0.004 | -0.015 to 0.007 | 0.506 | -0.007 | -0.019 to 0.005 | 0.238 | 1.185 |
| Financial status | -0.023 | -0.090 to 0.044 | 0.500 | -0.033 | -0.103 to 0.037 | 0.354 | 1.126 |
| Daily use of social media/websites (hours) | 0.017 | -0.027 to 0.060 | 0.452 | 0.018 | -0.030 to 0.065 | 0.462 | 1.231 |
| Competence in digital technologies | -0.006 | -0.069 to 0.057 | 0.853 | -0.033 | -0.101 to 0.036 | 0.347 | 1.235 |
Table 6. Linear regression models with the score on the acceptance factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable (n=276). Unadjusted coefficients are from univariate models; adjusted coefficients and VIF are from the multivariable model.

| Independent variables | Unadjusted beta | 95% CI | P-value | Adjusted beta | 95% CI | P-value | VIF |
|---|---|---|---|---|---|---|---|
| Males vs. females | 1.607 | 1.031 to 2.184 | <0.001 | 1.587 | 0.993 to 2.181 | <0.001 | 1.068 |
| Age | 0.003 | -0.022 to 0.027 | 0.818 | -0.001 | -0.026 to 0.025 | 0.959 | 1.185 |
| Financial status | 0.012 | -0.140 to 0.163 | 0.879 | -0.098 | -0.250 to 0.055 | 0.209 | 1.126 |
| Daily use of social media/websites (hours) | -0.020 | -0.118 to 0.078 | 0.688 | -0.046 | -0.149 to 0.057 | 0.380 | 1.231 |
| Competence in digital technologies | 0.167 | 0.027 to 0.308 | 0.020 | 0.164 | 0.014 to 0.313 | 0.032 | 1.235 |
Table 7. Linear regression models with the score on the fear factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable (n=276). Unadjusted coefficients are from univariate models; adjusted coefficients and VIF are from the multivariable model.

| Independent variables | Unadjusted beta | 95% CI | P-value | Adjusted beta | 95% CI | P-value | VIF |
|---|---|---|---|---|---|---|---|
| Males vs. females | 0.532 | -0.016 to 1.079 | 0.057 | 0.772 | 0.231 to 1.312 | 0.005 | 1.068 |
| Age | 0.009 | -0.014 to 0.031 | 0.445 | 0.011 | -0.012 to 0.034 | 0.367 | 1.185 |
| Financial status | -0.321 | -0.453 to -0.189 | <0.001 | -0.329 | -0.467 to -0.190 | <0.001 | 1.126 |
| Daily use of social media/websites (hours) | 0.086 | -0.003 to 0.175 | 0.057 | 0.071 | -0.023 to 0.165 | 0.138 | 1.231 |
| Competence in digital technologies | -0.057 | -0.186 to 0.072 | 0.386 | -0.040 | -0.176 to 0.096 | 0.566 | 1.235 |