Preprint Article
This version is not peer-reviewed.

Exploring Nurses’ Perspectives on the Use of Artificial Intelligence Chatbots for Mental Health Support: A Cross-Sectional Study in Greece

Submitted: 09 February 2026; posted: 10 February 2026.


Abstract
Background/Objectives: Artificial intelligence (AI) has transformed healthcare delivery, offering opportunities in prognosis, diagnosis, and personalized treatment and improving patient outcomes. However, little is known about nurses’ perceptions of and attitudes toward the integration of AI-driven conversational technology (AI chatbots) into clinical practice. The aim of our study was to investigate nurses’ perceptions regarding the use of AI chatbots as a tool for mental health support. Additionally, the study aimed to evaluate their levels of acceptance of and fear toward AI, and to examine the influence of demographic variables on these attitudes. Methods: A cross-sectional study was conducted. Attitudes toward the use of AI-powered chatbots for mental health support were measured using the Artificial Intelligence in Mental Health Scale (AIMHS). Additionally, the Attitudes Towards Artificial Intelligence Scale (ATAI) was employed to assess nurses’ levels of acceptance and fear regarding artificial intelligence. Results: AIMHS scores reflected moderately positive attitudes toward AI chatbots for mental health support, while ATAI scores indicated moderate levels of both acceptance of and fear toward AI. Multivariable analysis showed that older age and greater daily engagement with social media and websites were significantly associated with more favorable perceptions of the technical advantages of AI-based mental health chatbots. Male nurses also exhibited significantly more favorable attitudes toward AI-based mental health chatbots in terms of perceived personal benefits. Higher levels of digital technology competence were significantly associated with greater acceptance of artificial intelligence, and male nurses reported significantly higher acceptance of AI than their female counterparts. Finally, lower financial status was significantly associated with heightened fear of AI. Conclusions: Nurses generally held moderately positive attitudes toward both AI-based mental health chatbots and AI more broadly. Several demographic factors significantly influenced these attitudes.

1. Introduction

According to the WHO, in 2019, 970 million people globally were living with a mental disorder, with anxiety and depression the most common [1]. Given their complex nature, mental disorders affect many aspects of life, including relationships with family, friends, and community, and can lead to problems at school and work. People who suffer from a mental disorder are at high risk of suicide and lose productivity, and the resulting economic burden on health care is consequently tremendous. It is estimated that approximately 70% of people with mental illness do not receive formal treatment [2], while others do not seek treatment because of low perceived need, attitudinal barriers such as stigma, or low income [3]. The stigma associated with mental illness is a significant barrier to help-seeking, fostering negative attitudes and intentions toward mental health services among individuals [3,4].
At the same time, there are critical concerns about the global shortage of mental health professionals [5]. The distribution of behavioral health professionals across health care settings is reported to be poor worldwide, raising several issues about the delivery of mental health services to individuals. Findings indicate that there is also a shortage of mental health nurses [6,7], and nursing workforce shortages have long been documented alongside worsening attrition and projected workforce shortfalls [8]. Because of the relational nature of interpersonal work and the many stressors of the everyday workplace, nurses require strong cognitive, emotional, and relational skills and the ability to self-regulate [9]. They are often exposed to the challenges of providing interpersonal care to people in mental and emotional distress or with self-harm or suicidal behaviors, and they must also manage clinical aggression and interpersonal conflicts [10]. These demands can affect nurses’ own mental health, resulting in mental health poorer than population norms [10].
During the COVID-19 pandemic, concerns about the accessibility of mental health care were raised, as access to care became more difficult. Immediate interventions targeting mental health care were needed to overcome these barriers and to engage adults and young people in psychosocial mental health treatment [11]. In this direction, the rapid expansion of the data ecosystem has driven the development of artificial intelligence innovations and machine learning models. Among these, the integration of large language models (LLMs) into health care has attracted attention for its potential to enhance diagnostic accuracy, streamline clinical workflows, and address health care disparities [12]. LLMs typically refer to pretrained models (PLMs) that have a large number of parameters and are trained on massive amounts of data. The innovation that has made LLMs the most prominent area of research in artificial intelligence is their ability to solve complex tasks. LLMs are the core element of generative AI applications, which can understand natural language and also generate corresponding text, images, and videos. This form of human-machine interaction has great potential in medical diagnostics and medical decision-making [12]. The field of LLMs is rapidly expanding to disease classification, medical question answering, and diagnostic content generation, and previous studies have demonstrated high accuracy of LLMs in radiology, psychiatry, and neurology [13,14].
In the era of LLMs, conversational AI driven by these models is expanding rapidly. AI-driven conversational technologies have the potential to provide instant, interactive, and context-aware access to clinical recommendations [15]. AI-powered chatbots, also referred to as conversational agents or relational agents, can hold a conversation with a human user, enhancing healthcare professionals’ engagement with clinical guidance [16,17]. The digital mental health care industry bridges the gap between mental health services and individuals by incorporating artificial intelligence (AI) into AI-guided products. AI technologies that perform human-like physical and cognitive tasks enable automation, engagement, and decision support, with AI-generated recommendations commonly reviewed by domain experts before they are implemented in a patient’s treatment plan [18]. Moreover, AI systems can support both clinicians and patients and influence the quality of population health, transforming mental healthcare by enhancing patient retention, improving the work life of healthcare professionals, and reducing costs [19]. LLM-based chatbots, or conversational agents, are digital assistants, implemented in either hardware or software, that use machine learning and artificial intelligence methods to mimic human behaviors and to sustain an evolving dialogue in conversation [17].
Previous studies have explored the potential of AI-powered chatbots in mental health [11,16,20,21]. For nurses, intelligent conversational agents offer opportunities to augment nursing practice and care planning, support patient education, and empower individuals to manage their own health [22,23]. While AI chatbots show promise in diagnostic accuracy and clinical decision-making, human clinicians currently retain an advantage in holistic clinical abilities, a skill set requiring experience, contextual knowledge, and the ability to give concise responses [22]. Prior research indicates that conversational agent-based interventions are feasible and acceptable and have positive effects on physical function, healthy lifestyles, mental health, and psychosocial outcomes [24]. Other work suggests adopting Retrieval Augmented Generation (RAG)-enabled AI conversational systems to mitigate misleading information arising from biases, outdated training data, or misinformation embedded in public sources [15]. Ongoing research is needed to examine whether AI conversational technology can adequately substitute for nursing expertise while prioritizing patient safety, privacy, and equitable access to care.
As LLMs and AI-driven chatbots are increasingly explored as problem-solving aids across healthcare domains, access to medical information is becoming faster. The scientific community continues to expand the capabilities of these models, and as a consequence many ethical concerns have been raised, including privacy risks, model hallucination, and regulatory fragmentation. As the technology evolves, the confidence of healthcare professionals and patients in medical technology depends on ensuring that AI is evidence-based, free from bias, and promotes equity in care [19]. Taken together, these considerations indicate that the adoption of AI chatbots in clinical settings remains uncertain [19,22]. Therefore, given the growing presence of artificial intelligence in healthcare, it is essential to assess whether, and under what conditions and circumstances, nurses trust and adopt AI-powered conversational technologies in nursing decision-making, or instead reject and dispute them.
Thus, this study explores nurses’ perceptions of, and concerns about, the use of artificial intelligence (AI) chatbots as a tool for mental health support. Additionally, we evaluate levels of acceptance of and fear toward AI, and we examine the influence of demographic variables on these attitudes.

2. Materials and Methods

2.1. Study Design

A cross-sectional study was conducted in Greece, with data collection taking place in October 2025 via an online survey. The questionnaire was digitized using Google Forms and disseminated through nursing-focused groups on social media platforms such as Facebook and LinkedIn, resulting in a convenience sample. Eligibility criteria for participation included: (a) being a registered nurse, (b) having an active account on at least one social media platform, (c) engaging with social media and/or the internet for a minimum of 30 minutes daily to ensure sufficient exposure to digital technologies, and (d) providing informed consent prior to completing the survey. The study adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [26].
The sample size was calculated using G*Power software (version 3.1.9.2). The following parameters were applied in the computation: (a) a Type I error probability of 5%, (b) statistical power of 95%, (c) five independent variables, and (d) a small effect size (f² = 0.05) for the association between each independent and dependent variable. Based on these criteria, the required sample size was determined to be 262 nurses.
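For transparency, the required sample size can be reproduced outside G*Power. The short Python sketch below is an illustrative reconstruction, not the authors’ original procedure: it iterates the total sample size N until a test of a single regression coefficient reaches the target power, using the noncentral F distribution with numerator df = 1 and noncentrality parameter λ = f²·N, the convention G*Power applies to linear multiple regression.

    from scipy import stats

    f2, alpha, target_power, n_predictors = 0.05, 0.05, 0.95, 5

    n = n_predictors + 2
    while True:
        df2 = n - n_predictors - 1                # error degrees of freedom
        ncp = f2 * n                              # noncentrality parameter (lambda)
        f_crit = stats.f.ppf(1 - alpha, 1, df2)   # critical F under the null
        power = 1 - stats.ncf.cdf(f_crit, 1, df2, ncp)
        if power >= target_power:
            break
        n += 1

    print(n)  # approximately 262, matching the reported calculation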

2.2. Measurements

The demographic variables examined were: (a) gender (male or female), (b) age, treated as a continuous variable, (c) self-assessed financial status on a scale from 0 (very poor) to 10 (very good), (d) daily time spent on social media and/or websites (continuous variable), and (e) self-reported proficiency in digital technologies - including smartphones, personal computers, AI applications, and tablets - measured on a scale from 0 (very poor) to 10 (very good).
Nurses’ attitudes toward mental health chatbots powered by artificial intelligence were evaluated using the Artificial Intelligence in Mental Health Scale (AIMHS) [27]. This instrument comprises five items distributed across two dimensions: technical advantages (2 items) and personal advantages (3 items). Participants responded using a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Factor scores were computed by averaging the item responses within each dimension, yielding scores between 1 and 5, with higher values reflecting more favorable attitudes toward AI-based mental health chatbots. Two negatively worded items were reverse-coded prior to analysis. The validated Greek version of the AIMHS [27] was employed in this study. Internal consistency was deemed very good, with Cronbach’s alpha coefficients of 0.788 for the technical advantages subscale and 0.896 for the personal advantages subscale.
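As an illustration of this scoring rule, the Python sketch below reverse-codes two items and averages items within each factor. The column names (q1 to q5) and the placement of the two reverse-coded items are assumptions made for the example; the published scale [27] defines the actual item assignment.

    import pandas as pd

    df = pd.DataFrame({  # toy responses on the 1-5 Likert scale
        "q1": [4, 2], "q2": [5, 3], "q3": [2, 4], "q4": [1, 5], "q5": [3, 3],
    })

    REVERSED = ["q1", "q2"]          # assumed to be the two negatively worded items
    df[REVERSED] = 6 - df[REVERSED]  # reverse-code on a 5-point scale: 1<->5, 2<->4

    df["technical_advantages"] = df[["q1", "q2"]].mean(axis=1)       # 2-item factor
    df["personal_advantages"] = df[["q3", "q4", "q5"]].mean(axis=1)  # 3-item factor
    print(df[["technical_advantages", "personal_advantages"]])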
To assess nurses’ acceptance and fear of artificial intelligence, we employed the Attitudes Towards Artificial Intelligence Scale (ATAI) [28]. This instrument comprises five items: two items evaluate acceptance of AI, while three items assess fear related to AI. Responses are recorded on an 11-point Likert scale ranging from 0 (strongly disagree) to 10 (strongly agree). The overall score is calculated as the mean of all item responses, yielding a range from 0 to 10. Higher scores on the acceptance subscale reflect greater acceptance and more favorable attitudes toward AI, whereas higher scores on the fear subscale indicate increased fear and more negative attitudes. For this study, we utilized the validated Greek version of the ATAI [29]. Internal consistency was very good, with Cronbach’s alpha coefficients of 0.857 for the acceptance subscale and 0.715 for the fear subscale.
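Both reliability figures reported above are Cronbach’s alpha, which can be computed directly from a respondents-by-items matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal, library-free sketch with toy data (not the study’s responses):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of numeric responses."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    responses = np.array([[7, 8, 6], [3, 2, 4], [5, 5, 5], [9, 10, 8]])  # toy data
    print(round(cronbach_alpha(responses), 3))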

2.3. Ethical Issues

Participation was entirely voluntary and conducted anonymously. Prior to enrollment, all participating nurses received comprehensive information regarding the study’s aims and procedures and provided written informed consent. The research was carried out in full compliance with the ethical principles outlined in the Declaration of Helsinki [30]. Ethical approval for the study was granted by the Ethics Committee of the Faculty of Nursing at the National and Kapodistrian University of Athens (Protocol No. 01, dated September 14, 2025).

2.4. Statistical Analysis

Categorical variables were described using frequencies and percentages, whereas continuous variables were summarized through means, standard deviations (SD), medians, and interquartile ranges. The normality of continuous variable distributions was evaluated using the Kolmogorov–Smirnov test and Q–Q plots, both of which supported the assumption of normality. Demographic characteristics were treated as independent variables, while attitudes toward AI-based mental health chatbots, AI acceptance, and AI-related fear were considered dependent variables. Given the continuous and normally distributed nature of the dependent variables, both simple and multivariable linear regression analyses were conducted: bivariate linear regressions were performed first, followed by the development of final multivariable models. Multicollinearity was assessed using variance inflation factors (VIFs), with values exceeding 5 indicating potential multicollinearity concerns. Model assumptions were verified by inspecting residual histograms for normality, and residual-versus-predicted scatterplots for homoscedasticity and linearity [31]. The findings are presented as unadjusted and adjusted beta coefficients, along with 95% confidence intervals (CIs) and p-values. Furthermore, Pearson’s correlation coefficient was employed to examine the association between AIMHS and ATAI scores. P-values less than 0.05 were considered statistically significant. Analyses were performed with IBM SPSS Statistics for Windows, version 28.0 (IBM Corp., Armonk, NY, USA; released 2021).
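To make this workflow concrete, the Python sketch below fits one multivariable model and computes VIFs with statsmodels. The data are synthetic stand-ins and the variable names are assumptions for illustration only; the study’s analysis was performed in SPSS.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(0)
    n = 276
    df = pd.DataFrame({                   # synthetic stand-in for the survey data
        "male": rng.integers(0, 2, n),
        "age": rng.normal(43, 10, n),
        "financial_status": rng.integers(0, 11, n),
        "daily_use_hours": rng.exponential(3, n),
        "digital_competence": rng.integers(0, 11, n),
    })
    df["aimhs_technical"] = 1.2 + 0.01 * df["age"] + rng.normal(0, 0.7, n)

    predictors = ["male", "age", "financial_status",
                  "daily_use_hours", "digital_competence"]
    X = sm.add_constant(df[predictors].astype(float))
    model = sm.OLS(df["aimhs_technical"], X).fit()
    print(model.summary())                # adjusted betas, 95% CIs, p-values

    for i, name in enumerate(predictors, start=1):  # VIFs (constant excluded)
        print(name, round(variance_inflation_factor(X.values, i), 2))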

3. Results

3.1. Demographic Characteristics

The final sample comprised 276 nurses, with the majority being female (81.5%). The mean age of participants was 42.94 years (SD = 9.61), with a median age of 45 years and an interquartile range of 13. The average self-perceived financial status was 5.97 (SD = 1.56), with a median score of 6 and an interquartile range of 2. Participants reported a mean daily duration of social media and/or website use of 3.07 hours (SD = 2.40), with a median of 2 hours and an interquartile range of 3.5. The mean score for self-assessed digital competence—including the use of smartphones, computers, AI applications, and tablets—was 7.63 (SD = 1.67), with a median of 8 and an interquartile range of 2. A detailed overview of the demographic characteristics is provided in Table 1.

3.2. Study Scales

The mean score on the technical advantages factor was 1.76 (SD = 0.69), while that on the personal advantages factor was 3.26 (SD = 0.88). The mean acceptance score was 5.15 (SD = 1.99), and the mean fear score was 5.72 (SD = 1.80). Detailed descriptive statistics for the study scales are presented in Table 2.
We found a statistically significant positive correlation between the technical advantages factor and acceptance of AI (r = 0.533, p-value < 0.001), and a statistically significant negative correlation between the technical advantages factor and fear of AI (r = -0.148, p-value = 0.014). A positive correlation between the personal advantages factor and acceptance of AI was also identified (r = 0.537, p-value < 0.001). Detailed correlation results are presented in Table 3.
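The coefficients in Table 3 are ordinary Pearson correlations; for illustration, the equivalent computation in Python on toy vectors (standing in for the technical advantages and acceptance scores) would be:

    import numpy as np
    from scipy.stats import pearsonr

    technical = np.array([1.5, 2.0, 1.0, 2.5, 3.0, 1.5])
    acceptance = np.array([4.0, 6.5, 3.0, 7.0, 8.5, 5.0])
    r, p = pearsonr(technical, acceptance)
    print(f"r = {r:.3f}, p-value = {p:.4f}")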

3.3. Dependent Variable: Artificial Intelligence in Mental Health Scale

Table 4 displays the findings from the linear regression analysis, wherein the "technical advantages" dimension of the AIMHS served as the dependent variable. In the final multivariable model, both older age (b = 0.011, 95% CI: 0.002–0.020, p-value = 0.018) and increased daily engagement with social media or websites (b = 0.058, 95% CI: 0.021–0.095, p-value = 0.002) were significantly associated with more favorable perceptions of AI-based mental health chatbots. Diagnostic checks confirmed that the model met the necessary assumptions: residuals were normally distributed (see Supplementary Figure 1), homoscedasticity and linearity were evident (see Supplementary Figure 2), and variance inflation factors (VIFs) revealed no multicollinearity among the independent variables.
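To translate these estimates onto the instrument’s metric: the adjusted coefficients imply that a ten-year age difference corresponds to roughly 0.11 points, and two additional daily hours online to roughly 0.12 points, on the five-point technical advantages factor.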
Table 5 presents the results of the linear regression analysis, with the personal advantages dimension of the AIMHS serving as the dependent variable. In the final multivariable model, male nurses exhibited significantly more favorable attitudes toward AI-based mental health chatbots in terms of perceived personal benefits (b = 0.548, 95% CI: 0.277–0.820, p-value < 0.001). Diagnostic assessments confirmed that the assumptions underlying linear regression were satisfied: residuals were normally distributed (see Supplementary Figure 3), homoscedasticity and linearity were observed (see Supplementary Figure 4), and VIFs suggested the absence of multicollinearity among the independent variables.

3.4. Dependent Variable: Attitudes Toward Artificial Intelligence Scale

Table 6 displays the results of the linear regression analysis, with the acceptance dimension of the ATAI serving as the dependent variable. In the final multivariable model, higher levels of digital technology competence were significantly associated with greater acceptance of artificial intelligence (b = 0.164, 95% CI: 0.014–0.313, p = 0.032). Additionally, male nurses reported significantly higher acceptance of AI compared to their female counterparts (b = 1.587, 95% CI: 0.993–2.181, p < 0.001). Model diagnostics indicated that the assumptions of linear regression were adequately met: residuals were normally distributed (see Supplementary Figure 5), homoscedasticity and linearity were observed (see Supplementary Figure 6), and VIFs confirmed the absence of multicollinearity among the independent variables.
Table 7 outlines the findings of the linear regression analysis, with the fear dimension of the ATAI utilized as the dependent variable. In the final multivariable model, lower financial status was significantly associated with heightened fear of artificial intelligence (b = -0.329, 95% CI: -0.467 to -0.190, p < 0.001). Furthermore, male nurses reported significantly greater levels of fear toward AI compared to their female counterparts (b = 0.772, 95% CI: 0.231–1.312, p = 0.005). Diagnostic evaluations confirmed that the assumptions of linear regression were satisfactorily met: residuals exhibited a normal distribution (see Supplementary Figure 7), homoscedasticity and linearity were observed (see Supplementary Figure 8), and VIFs demonstrated no evidence of multicollinearity among the independent variables.

4. Discussion

This study aimed to identify nurses’ perceptions regarding the use of artificial intelligence-driven mental health chatbots. The findings also highlight nurses’ levels of acceptance of, and fear toward, AI integration in clinical practice, and they offer several insights into the relationships among nurses’ familiarity with AI, their beliefs, their affective attitudes, and their behavioral intentions regarding mental health support.
A key implication of this research is that perceptions of AI-driven mental health chatbots varied widely across participants, but overall nurses presented positive attitudes toward the utilization of AI-driven chatbots. In general, nurses did not recognize strong technical advantages of AI technology but reported more personal benefits from the use of AI mental health chatbots, while also expressing a considerable degree of fear. As the results illustrate, nurses who perceive more technical advantages of artificial intelligence feel more comfortable with, and less fearful of, accepting AI applications. At the same time, nurses who perceived personal benefits from AI chatbots nevertheless reported considerable fear, suggesting that recognizing personal gains does not dispel anxiety about AI. These findings align with previous research suggesting that experience with technology influences the adoption and acceptance of AI in healthcare [32,33]. Although nurses perceive many advantages in the use of artificial intelligence, they remain wary of its shortcomings [34]. AI conversational agents have the potential to become a useful tool for nursing tasks, but many practical issues, such as lack of knowledge, explainability, and ethical questions related to the nursing profession, remain unresolved. A previous inquiry examined the ability of large language models to assess the prognosis of schizophrenia and found that their assessments aligned closely with the predictions of mental health professionals [35]. Overall, nurses need an attitude of openness toward learning new technologies and an appropriate culture of artificial intelligence use.
Another important result of our survey was that participants who perceived more technical advantages of AI tended to have higher acceptance of AI: as belief in technical benefits increases, acceptance increases. Nurses who acknowledged more technical advantages in AI-driven mental health technology also tended to experience slightly less fear of AI. In other words, nurses who recognized the meaningful technical contribution of AI to the nursing profession were more inclined to trust AI tools. These results are consistent with prior research emphasizing the role of beliefs in shaping behavioral intention [36,37,38]. The existing literature underscores AI’s expanding role in nursing and the need to ensure that intelligent nursing evolves in alignment with clinical routines [39,40]. AI shows strong prospects as a nursing support tool in assisting decision-making, writing nursing documents, performing nursing operations that involve exposure risks and physical exertion, and carrying out many other nursing activities [39]. Indeed, while health care professionals are optimistic about AI’s potential to improve the safety and quality of decision-making, the literature emphasizes that the human touch remains essential for patients with complex needs [41].
Results from the linear regression analysis support the notion that older participants, male nurses, and those with more daily engagement on social media or websites tend to have more positive attitudes toward AI mental health chatbots. Furthermore, higher levels of digital technology competence were significantly associated with greater acceptance of artificial intelligence. Additionally, male nurses reported significantly higher acceptance of AI than their female counterparts, while also reporting significantly greater levels of fear. This is consistent with previous literature [36], in which female nurses reported lower levels of familiarity with AI, stronger beliefs about AI’s role in their job, and higher levels of anxiety toward AI compared with their male counterparts. Digital literacy has likewise been associated with the acceptance and use of AI in prior studies [32,45]. As indicated in the literature, neonatal nurses can utilize generative artificial intelligence in clinical practice, but its effectiveness depends on structured training, reliable infrastructure, and culturally sensitive implementation [45]. Notably, nurses’ perceptions of AI’s relevance to their role and their trust in AI-driven technologies have been significant predictors of their intention to integrate AI into clinical practice: nurses who recognize the meaningful contribution of AI to the nursing profession trust its outputs and are more inclined to adopt AI tools [37]. These observations are consistent with prior research highlighting the role of beliefs as cognitive antecedents of behavioral intention [42,43,44]. According to Schiavo et al. [46], literacy fosters a positive attitude toward acceptance, whereas anxiety has a significant direct negative effect; they also found that the learning and sociotechnical dimensions of AI anxiety serve as complementary partial mediators between AI literacy and acceptance. Finally, previous work illustrates the emergence of AI technologies in mental health care, where mental health professionals have generally been slower to adopt AI [32].
AI-driven conversational technology can provide clinicians with advanced tools for understanding and addressing complex behavioral health patterns. Moreover, through realistic simulations and adaptive technologies, it can facilitate data-driven monitoring of patient progress, collaborative treatment planning, and timely adjustment of interventions [48].
Our study underscores the perceptions and readiness of health care professionals regarding the transformative role of AI-driven conversational technology in health care, particularly in mental health care. Previous literature emphasizes the potential of mental health chatbots to improve the accessibility of patient care and patient management [49]. Moreover, the administrative role of AI chatbots aligns with the growing need for patient engagement and resource optimization in health care settings. AI chatbots have the capacity to transform mental health care through enhanced accessibility and personalized interventions [50]. This approach allows the individualization of care, enhances cultural sensitivity, and optimizes outcomes, contributing to more responsive and effective mental health care [48]. However, careful consideration of the ethical implications and of methodological design under cultural adaptation is essential to ensure the responsible deployment of AI-driven technology [51,52]. A conscientious and ethical integration of generative AI techniques is obligatory, ensuring a balanced approach that maximizes the benefits for mental health practices [47,53].
To the best of our knowledge, this is the first study to examine nurses’ perceptions regarding the use of AI chatbots as a tool for mental health support. Nonetheless, several limitations should be acknowledged. First, the cross-sectional design limits the ability to draw causal inferences; we cannot determine the temporal sequence of the observed relationships. Future longitudinal designs should be adopted to track changes in nurses’ beliefs, attitudes, and behavioral intentions. Second, this study was conducted in a single country with a hybrid health care system (both public and private), which limits the generalizability of the findings; future studies could replicate this research in different countries and health care systems. Third, further research is needed to explore other potential factors influencing AI adoption at the organizational and environmental levels, in order to probe the core barriers to AI integration into nursing practice.

5. Conclusions

This study offers insights for policymakers and healthcare leaders seeking to integrate artificial intelligence into nursing practice while maintaining patient safety and patient-centered care. There is an indisputable need to strengthen policy, education, and resource allocation to support the sustainable integration of AI into clinical care. Notably, strict requirements must be considered for AI adoption, including patients’ perceptions, time efficiency, and the preservation of clinician and patient autonomy. Many studies promote the design of relevant and sustainable education programs to support the adoption of AI within the mental health framework [48,54,55]. However, their effectiveness depends on several key issues, including ethical AI practices, the security of patient data, and smooth integration into existing health care infrastructures [49]. Future initiatives should prioritize the responsible use of AI and the development of inclusive and accessible mental health chatbots that support fair access to health care for all diverse populations.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org, Figure S1: Histogram of the residuals with the score on the “technical advantages” factor of the Artificial Intelligence in Mental Health Scale as the dependent variable; Figure S2: Scatterplot of residuals versus predicted values with the score on the “technical advantages” factor of the Artificial Intelligence in Mental Health Scale as the dependent variable; Figure S3: Histogram of the residuals with the score on the “personal advantages” factor of the Artificial Intelligence in Mental Health Scale as the dependent variable; Figure S4: Scatterplot of residuals versus predicted values with the score on the “personal advantages” factor of the Artificial Intelligence in Mental Health Scale as the dependent variable; Figure S5: Histogram of the residuals with the score on the acceptance factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable; Figure S6: Scatterplot of residuals versus predicted values with the score on the acceptance factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable; Figure S7: Histogram of the residuals with the score on the fear factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable; Figure S8: Scatterplot of residuals versus predicted values with the score on the fear factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable.

Author Contributions

Conceptualization, A.K. and P.G. (Petros Galanis); methodology, A.K., O.K., I.M. and P.G. (Petros Galanis); software, P.G. (Parisis Gallos); validation, O.G., P.L. and M.T.; formal analysis, O.K., P.G. (Parisis Gallos), O.G., P.L. and P.G. (Petros Galanis); investigation, O.G., P.L. and M.T.; resources, P.G., P.L. and M.T.; data curation, O.K., P.G. (Parisis Gallos), O.G. and P.G. (Petros Galanis); writing—original draft preparation, P.L., A.K., I.M., P.G. (Parisis Gallos), O.G., M.T. and P.G. (Petros Galanis); writing—review and editing, A.K., I.M., O.K., P.G. (Parisis Gallos), O.G., P.L., M.T. and P.G. (Petros Galanis); visualization, A.K. and P.G. (Petros Galanis); supervision, P.G. (Petros Galanis); project administration, A.K. and P.G. (Petros Galanis). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the Faculty of Nursing, National and Kapodistrian University of Athens (Protocol Approval #01, 14 September 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available at Figshare at https://doi.org/10.6084/m9.figshare.30600857.

Acknowledgments

None.

Conflicts of Interest

The authors declare no conflicts of interest.

Use of Artificial Intelligence

AI or AI-assisted tools were not used in drafting any aspect of this manuscript.

Abbreviations

The following abbreviations are used in this manuscript:
AI Artificial Intelligence
AIMHS Artificial Intelligence in Mental Health Scale
ATAI Attitudes Towards Artificial Intelligence Scale
LLMs Large Language Models
PLMs Pretrained Models
RAG Retrieval Augmented Generation
SD Standard Deviation
VIFs Variance Inflation Factors
CIs Confidence Intervals

References

  1. Mental Health. Available online: https://www.who.int/health-topics/mental-health (accessed on 1 December 2025).
  2. Wang, P.S.; Aguilar-Gaxiola, S.; Alonso, J.; Angermeyer, M.C.; Borges, G.; Bromet, E.J.; Bruffaerts, R.; de Girolamo, G.; de Graaf, R.; Gureje, O.; et al. Use of Mental Health Services for Anxiety, Mood, and Substance Disorders in 17 Countries in the WHO World Mental Health Surveys. The Lancet 2007, 370, 841–850. [Google Scholar] [CrossRef] [PubMed]
  3. Conner, K.O.; Copeland, V.C.; Grote, N.K.; Koeske, G.; Rosen, D.; Reynolds, C.F.; Brown, C. Mental Health Treatment Seeking Among Older Adults With Depression: The Impact of Stigma and Race. Am. J. Geriatr. Psychiatry 2010, 18, 531–543. [Google Scholar] [CrossRef] [PubMed]
  4. Mojtabai, R.; Olfson, M.; Sampson, N.A.; Jin, R.; Druss, B.; Wang, P.S.; Wells, K.B.; Pincus, H.A.; Kessler, R.C. Barriers to Mental Health Treatment: Results from the National Comorbidity Survey Replication. Psychol. Med. 2011, 41, 1751–1761. [Google Scholar] [CrossRef]
  5. Thomas, K.C.; Ellis, A.R.; Konrad, T.R.; Holzer, C.E.; Morrissey, J.P. County-Level Estimates of Mental Health Professional Shortage in the United States. Psychiatr. Serv. 2009. [Google Scholar] [CrossRef]
  6. Frawley, T.; Culhane, A. Solving the Shortage of Psychiatric - Mental Health Nurses in Acute Inpatient Care Settings. J. Psychiatr. Ment. Health Nurs. 2024, 31, 119–124. [Google Scholar] [CrossRef]
  7. Wynn, S.D. Addressing the Nursing Workforce Shortage: Veterans as Mental Health Nurses. J. Psychosoc. Nurs. Ment. Health Serv. 2013, 51, 3–4. [Google Scholar] [CrossRef]
  8. Peters, M. Time to Solve Persistent, Pernicious and Widespread Nursing Workforce Shortages. Int. Nurs. Rev. 2023, 70, 247–253. [Google Scholar] [CrossRef]
  9. Foster, K.; Shochet, I.; Shakespeare-Finch, J.; Maybery, D.; Bui, M.V.; Gordon, I.; Bagot, K.L.; Roche, M. Promoting Resilience in Mental Health Nurses: A Partially Clustered Randomised Controlled Trial. Int. J. Nurs. Stud. 2024, 159, 104865. [Google Scholar] [CrossRef]
  10. Cranage, K.; Foster, K. Mental Health Nurses’ Experience of Challenging Workplace Situations: A Qualitative Descriptive Study. Int. J. Ment. Health Nurs. 2022, 31, 665–676. [Google Scholar] [CrossRef] [PubMed]
  11. Boucher, E.M.; Harake, N.R.; Ward, H.E.; Stoeckl, S.E.; Vargas, J.; Minkel, J.; Parks, A.C.; Zilca, R. Artificially Intelligent Chatbots in Digital Mental Health Interventions: A Review. Expert Rev. Med. Devices 2021, 18, 37–49. [Google Scholar] [CrossRef]
  12. Su, H.; Sun, Y.; Li, R.; Zhang, A.; Yang, Y.; Xiao, F.; Duan, Z.; Chen, J.; Hu, Q.; Yang, T.; et al. Large Language Models in Medical Diagnostics: Scoping Review With Bibliometric Analysis. J. Med. Internet Res. 2025, 27, e72062. [Google Scholar] [CrossRef]
  13. Su, H.; Sun, Y.; Li, R.; Zhang, A.; Yang, Y.; Xiao, F.; Duan, Z.; Chen, J.; Hu, Q.; Yang, T.; et al. Large Language Models in Medical Diagnostics: Scoping Review With Bibliometric Analysis. J. Med. Internet Res. 2025, 27, e72062. [Google Scholar] [CrossRef]
  14. Zeng, J.; Zou, X.; Li, S.; Tang, Y.; Teng, S.; Li, H.; Wang, C.; Wu, Y.; Zhang, L.; Zhong, Y.; et al. Assessing the Role of the Generative Pretrained Transformer (GPT) in Alzheimer’s Disease Management: Comparative Study of Neurologist- and Artificial Intelligence-Generated Responses. J. Med. Internet Res. 2024, 26, e51095. [Google Scholar] [CrossRef]
  15. Macia, G.; Liddell, A.; Doyle, V. Conversational AI with Large Language Models to Increase the Uptake of Clinical Guidance. Clin. EHealth 2024, 7, 147–152. [Google Scholar] [CrossRef]
  16. Bendig, E.; Erb, B.; Schulze-Thuesing, L.; Baumeister, H. The Next Generation: Chatbots in Clinical Psychology and Psychotherapy to Foster Mental Health – A Scoping Review. Verhaltenstherapie 2019, 32, 64–76. [Google Scholar] [CrossRef]
  17. Vaidyam, A.N.; Wisniewski, H.; Halamka, J.D.; Kashavan, M.S.; Torous, J.B. Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. Can. J. Psychiatry Rev. Can. Psychiatr. 2019, 64, 456–464. [Google Scholar] [CrossRef]
  18. Kellogg, K.C.; Sadeh-Sharvit, S. Pragmatic AI-Augmentation in Mental Healthcare: Key Technologies, Potential Benefits, and Real-World Challenges and Solutions for Frontline Clinicians. Front. Psychiatry 2022, 13, 990370. [Google Scholar] [CrossRef]
  19. Crigger, E.; Reinbold, K.; Hanson, C.; Kao, A.; Blake, K.; Irons, M. Trustworthy Augmented Intelligence in Health Care. J. Med. Syst. 2022, 46, 12. [Google Scholar] [CrossRef]
  20. Wang, L.; Bhanushali, T.; Huang, Z.; Yang, J.; Badami, S.; Hightow-Weidman, L. Evaluating Generative AI in Mental Health: Systematic Review of Capabilities and Limitations. JMIR Ment. Health 2025, 12, e70014–e70014. [Google Scholar] [CrossRef]
  21. Andrikyan, W.; Sametinger, S.M.; Kosfeld, F.; Jung-Poppe, L.; Fromm, M.F.; Maas, R.; Nicolaus, H.F. Artificial Intelligence-Powered Chatbots in Search Engines: A Cross-Sectional Study on the Quality and Risks of Drug Information for Patients. BMJ Qual. Saf. 2025, 34, 100–109. [Google Scholar] [CrossRef]
  22. Augmenting Community Nursing Practice With Generative AI: A Formative Study of Diagnostic Synergies Using Simulation-Based Clinical Cases - PMC. Available online: https://pmc.ncbi.nlm.nih.gov/articles/PMC11938853/ (accessed on 27 November 2025).
  23. Gunawan, J. Exploring the Future of Nursing: Insights from the ChatGPT Model. Belitung Nurs. J. 2023, 9, 1–5. [Google Scholar] [CrossRef]
  24. Li, Y.; Liang, S.; Zhu, B.; Liu, X.; Li, J.; Chen, D.; Qin, J.; Bressington, D. Feasibility and Effectiveness of Artificial Intelligence-Driven Conversational Agents in Healthcare Interventions: A Systematic Review of Randomized Controlled Trials. Int. J. Nurs. Stud. 2023, 143, 104494. [Google Scholar] [CrossRef]
  25. MacNeill, A.L.; MacNeill, L.; Luke, A.; Doucet, S. Health Professionals’ Views on the Use of Conversational Agents for Health Care: Qualitative Descriptive Study. J. Med. Internet Res. 2024, 26, e49387. [Google Scholar] [CrossRef]
  26. von Elm, E.; Altman, D.G.; Egger, M.; Pocock, S.J.; Gøtzsche, P.C.; Vandenbroucke, J.P. STROBE Initiative The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for Reporting Observational Studies. J. Clin. Epidemiol. 2008, 61, 344–349. [Google Scholar] [CrossRef]
  27. Katsiroumpa, A.; Konstantakopoulou, O.; Moisoglou, I.; Gallos, P.; Galani, O.; Lialiou, P.; Tsiachri, M.; Galanis, P. Development and Validation of the Artificial Intelligence in Mental Health Scale: Application for AI Mental Health Chatbots. Healthcare 2025, 13, 3269. [Google Scholar] [CrossRef]
  28. Sindermann, C.; Sha, P.; Zhou, M.; Wernicke, J.; Schmitt, H.S.; Li, M.; Sariyska, R.; Stavrou, M.; Becker, B.; Montag, C. Assessing the Attitude Towards Artificial Intelligence: Introduction of a Short Measure in German, Chinese, and English Language. KI - Künstl. Intell. 2021, 35, 109–118. [Google Scholar] [CrossRef]
  29. Konstantakopoulou, O.; Katsiroumpa, A.; Moisoglou, I.; Gallos, P.; Galani, O.; Tsiachri, M.; Lialiou, P.; Lamprakopoulou, K.; Galanis, P. Attitudes Towards Artificial Intelligence Scale: Translation and Validation in Greek. Arch. Hell. Med. 2026; in press. [Google Scholar]
  30. World Medical Association World Medical Association. Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. JAMA 2013, 310, 2191–2194. [Google Scholar] [CrossRef]
  31. Kim, J.H. Multicollinearity and Misleading Statistical Results. Korean J. Anesthesiol. 2019, 72, 558–569. [Google Scholar] [CrossRef] [PubMed]
  32. Choe, J.; Woo, K. Factors Associated with Intention to Use Generative Artificial Intelligence in Nursing Practice: A Cross-Sectional Study. BMC Nurs. 2025, 24, 1327. [Google Scholar] [CrossRef]
  33. The Adoption of AI in Mental Health Care–Perspectives From Mental Health Professionals: Qualitative Descriptive Study. JMIR Form. Res. 2023, 7. [CrossRef]
  34. Liang, H.-F.; Wu, K.-M.; Weng, C.-H.; Hsieh, H.-W. Nurses’ Views on the Potential Use of Robots in the Pediatric Unit. J. Pediatr. Nurs. 2019, 47, e58–e64. [Google Scholar] [CrossRef]
  35. Elyoseph, Z.; Levkovich, I. Comparing the Perspectives of Generative AI, Mental Health Experts, and the General Public on Schizophrenia Recovery: Case Vignette Study. JMIR Ment. Health 2024, 11, e53043. [Google Scholar] [CrossRef]
  36. Pare, G.; Raymond, L.; Etindele Sosso, F.A. Nurses’ Intention to Integrate AI Into Their Practice: Survey Study in Canada. JMIR Nurs. 2025, 8, e76795–e76795. [Google Scholar] [CrossRef] [PubMed]
  37. Wang, L.; Wan, Z.; Ni, C.; Song, Q.; Li, Y.; Clayton, E.; Malin, B.; Yin, Z. Applications and Concerns of ChatGPT and Other Conversational Large Language Models in Health Care: Systematic Review. J. Med. Internet Res. 2024, 26, e22769. [Google Scholar] [CrossRef] [PubMed]
  38. Levin, C.; Zaboli, A.; Turcato, G.; Saban, M. Nursing Judgment in the Age of Generative Artificial Intelligence: A Cross-National Study on Clinical Decision-Making Performance among Emergency Nurses. Int. J. Nurs. Stud. 2025, 172, 105216. [Google Scholar] [CrossRef]
  39. Chen, Y.; Wu, F.; Zhang, W.; Xing, W.; Zhu, Z.; Huang, Q.; Yuan, C.; Chen, Y.; Wu, F.; Zhang, W.; et al. Perspectives on AI-Driven Nursing Science Among Nursing Professionals from China: A Qualitative Study. Nurs. Rep. 2025, 15. [Google Scholar] [CrossRef] [PubMed]
  40. Chen, C.; Lam, K.T.; Yip, K.M.; So, H.K.; Lum, T.Y.S.; Wong, I.C.K.; Yam, J.C.; Chui, C.S.L.; Ip, P. Comparison of an AI Chatbot With a Nurse Hotline in Reducing Anxiety and Depression Levels in the General Population: Pilot Randomized Controlled Trial. JMIR Hum. Factors 2025, 12, e65785–e65785. [Google Scholar] [CrossRef]
  41. Cooper, J.; Haroon, S.; Crowe, F.; Nirantharakumar, K.; Jackson, T.; Fitzsimmons, L.; Hathaway, E.; Flanagan, S.; Jackson, L.J.; Gunathilaka, N.; et al. Perspectives of Health Care Professionals on the Use of AI to Support Clinical Decision-Making in the Management of Multiple Long-Term Conditions: Interview Study. [CrossRef]
  42. Henrique, B.M.; Santos, E. Trust in Artificial Intelligence: Literature Review and Main Path Analysis. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100043. [Google Scholar] [CrossRef]
  43. Wagner, G.; Raymond, L.; Paré, G. Understanding Prospective Physicians’ Intention to Use Artificial Intelligence in Their Future Medical Practice: Configurational Analysis. JMIR Med. Educ. 2023, 9, e45631. [Google Scholar] [CrossRef]
  44. Akter, S.; Ray, P.; D’Ambra, J. Continuance of mHealth Services at the Bottom of the Pyramid: The Roles of Service Quality and Trust. Electron. Mark. 2013, 23, 29–47. [Google Scholar] [CrossRef]
  45. Alruwaili, A.N.; Alshammari, A.M.; Alhaiti, A.; Elsharkawy, N.B.; Ali, S.I.; Elsayed Ramadan, O.M. Neonatal Nurses’ Experiences with Generative AI in Clinical Decision-Making: A Qualitative Exploration in High-Risk Nicus. BMC Nurs. 2025, 24, 386. [Google Scholar] [CrossRef]
  46. Schiavo, G.; Businaro, S.; Zancanaro, M. Comprehension, Apprehension, and Acceptance: Understanding the Influence of Literacy and Anxiety on Acceptance of Artificial Intelligence. Technol. Soc. 2024, 77, 102537. [Google Scholar] [CrossRef]
  47. Ni, Y.; Jia, F. A Scoping Review of AI-Driven Digital Interventions in Mental Health Care: Mapping Applications Across Screening, Support, Monitoring, Prevention, and Clinical Education. Healthc. Basel Switz. 2025, 13, 1205. [Google Scholar] [CrossRef]
  48. VanHook, C.; Abusuampeh, D.; Pollard, J. Leveraging Generative AI to Simulate Mental Healthcare Access and Utilization. Front. Health Serv. 2025, 5, 1654106. [Google Scholar] [CrossRef]
  49. Laymouna, M.; Ma, Y.; Lessard, D.; Schuster, T.; Engler, K.; Lebouché, B. Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review. J. Med. Internet Res. 2024, 26, e56930. [Google Scholar] [CrossRef] [PubMed]
  50. Dehbozorgi, R.; Zangeneh, S.; Khooshab, E.; Nia, D.H.; Hanif, H.R.; Samian, P.; Yousefi, M.; Hashemi, F.H.; Vakili, M.; Jamalimoghadam, N.; et al. The Application of Artificial Intelligence in the Field of Mental Health: A Systematic Review. BMC Psychiatry 2025, 25, 132. [Google Scholar] [CrossRef] [PubMed]
  51. Moylan, K.; Doherty, K. Expert and Interdisciplinary Analysis of AI-Driven Chatbots for Mental Health Support: Mixed Methods Study. J. Med. Internet Res. 2025, 27, e67114. [Google Scholar] [CrossRef] [PubMed]
  52. Zhang, Q.; Zhang, R.; Xiong, Y.; Sui, Y.; Tong, C.; Lin, F.-H. Generative AI Mental Health Chatbots as Therapeutic Tools: Systematic Review and Meta-Analysis of Their Role in Reducing Mental Health Issues. J. Med. Internet Res. 2025, 27, e78238. [Google Scholar] [CrossRef]
  53. Xian, X.; Chang, A.; Xiang, Y.-T.; Liu, M.T. Debate and Dilemmas Regarding Generative AI in Mental Health Care: Scoping Review. Interact. J. Med. Res. 2024, 13, e53672. [Google Scholar] [CrossRef]
  54. Zhang, G.; Xu, Z.; Jin, Q.; Chen, F.; Fang, Y.; Liu, Y.; Rousseau, J.F.; Xu, Z.; Lu, Z.; Weng, C.; et al. Leveraging Long Context in Retrieval Augmented Language Models for Medical Question Answering. Npj Digit. Med. 2025, 8, 239. [Google Scholar] [CrossRef] [PubMed]
  55. Milasan, L.H.; Scott-Purdy, D. The Future of Artificial Intelligence in Mental Health Nursing Practice: An Integrative Review. Int.J. Ment. Health Nurs. 2025, 34, e70003. [Google Scholar] [CrossRef] [PubMed]
Table 1. Demographic characteristics of nurses (n=276).
Characteristics N %
Gender
Females 225 81.5
Males 51 18.5
Age (years) a 42.94 9.61
Financial status a 5.97 1.56
Daily use of social media/websites (hours) a 3.07 2.40
Competence in digital technologies a 7.63 1.66
a Mean, standard deviation.
Table 2. Descriptive statistics for the study scales (n=276).
Scale Mean Standard deviation Median Interquartile range
Artificial Intelligence in Mental Health Scale
Technical advantages 1.76 0.69 2.00 0.69
Personal advantages 3.26 0.88 3.00 1.00
Attitudes Towards Artificial Intelligence Scale
Acceptance 5.15 1.99 5.00 2.50
Fear 5.72 1.80 6.00 2.00
Table 3. Correlation analysis between the study scales (n=276).
Pearson’s correlation coefficient
Scale 2 3 4
1. AIMHS (technical advantages) 0.522** 0.533** -0.148*
2. AIMHS (personal advantages)  0.537** 0.001
3. ATAI (acceptance)   -0.221**
4. ATAI (fear)
* p-value < 0.05; ** p-value < 0.01; AIMHS: Artificial Intelligence in Mental Health Scale; ATAI: Attitudes Towards Artificial Intelligence Scale.
Table 4. Linear regression models with the score on the technical advantages factor of the Artificial Intelligence in Mental Health Scale as the dependent variable (n=276).
Independent variables Univariate models Multivariable model a VIF
Unadjusted coefficient beta 95% CI for beta P-value Adjusted coefficient beta 95% CI for beta P-value
Males vs. females 0.096 -0.115 to 0.308 0.370 0.065 -0.149 to 0.279 0.551 1.068
Age 0.009 0.001 to 0.018 0.033 0.011 0.002 to 0.020 0.018 1.185
Financial status 0.031 -0.022 to 0.083 0.255 0.047 -0.008 to 0.102 0.090 1.126
Daily use of social media/websites (hours) 0.032 -0.002 to 0.066 0.069 0.058 0.021 to 0.095 0.002 1.231
Competence in digital technologies -0.017 -0.066 to 0.033 0.504 -0.030 -0.084 to 0.024 0.268 1.235
a R2 for the multivariable model = 3.7%; p-value for ANOVA = 0.010. CI: confidence interval; VIF: variance inflation factor
Table 5. Linear regression models with the score on the personal advantages factor of the Artificial Intelligence in Mental Health Scale as the dependent variable (n=276).
Independent variables Univariate models Multivariable model a VIF
Unadjusted coefficient beta 95% CI for beta P-value Adjusted coefficient beta 95% CI for beta P-value
Males vs. females 0.478 0.215 to 0.741 <0.001 0.548 0.277 to 0.820 <0.001 1.068
Age -0.004 -0.015 to 0.007 0.506 -0.007 -0.019 to 0.005 0.238 1.185
Financial status -0.023 -0.090 to 0.044 0.500 -0.033 -0.103 to 0.037 0.354 1.126
Daily use of social media/websites (hours) 0.017 -0.027 to 0.060 0.452 0.018 -0.030 to 0.065 0.462 1.231
Competence in digital technologies -0.006 -0.069 to 0.057 0.853 -0.033 -0.101 to 0.036 0.347 1.235
a R2 for the multivariable model = 4.2%; p-value for ANOVA = 0.005; CI: confidence interval; VIF: variance inflation factor.
Table 6. Linear regression models with the score on the acceptance factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable (n=276).
Independent variables Univariate models Multivariable model a VIF
Unadjusted coefficient beta 95% CI for beta P-value Adjusted coefficient beta 95% CI for beta P-value
Males vs. females 1.607 1.031 to 2.184 <0.001 1.587 0.993 to 2.181 <0.001 1.068
Age 0.003 -0.022 to 0.027 0.818 -0.001 -0.026 to 0.025 0.959 1.185
Financial status 0.012 -0.140 to 0.163 0.879 -0.098 -0.250 to 0.055 0.209 1.126
Daily use of social media/websites (hours) -0.020 -0.118 to 0.078 0.688 -0.046 -0.149 to 0.057 0.380 1.231
Competence in digital technologies 0.167 0.027 to 0.308 0.020 0.164 0.014 to 0.313 0.032 1.235
a R2 for the multivariable model = 10.1%; p-value for ANOVA < 0.001; CI: confidence interval; VIF: variance inflation factor.
Table 7. Linear regression models with the score on the fear factor of the Attitudes Towards Artificial Intelligence Scale as the dependent variable (n=276).
Independent variables Univariate models Multivariable model a VIF
Unadjusted coefficient beta 95% CI for beta P-value Adjusted coefficient beta 95% CI for beta P-value
Males vs. females 0.532 -0.016 to 1.079 0.057 0.772 0.231 to 1.312 0.005 1.068
Age 0.009 -0.014 to 0.031 0.445 0.011 -0.012 to 0.034 0.367 1.185
Financial status -0.321 -0.453 to -0.189 <0.001 -0.329 -0.467 to -0.190 <0.001 1.126
Daily use of social media/websites (hours) 0.086 -0.003 to 0.175 0.057 0.071 -0.023 to 0.165 0.138 1.231
Competence in digital technologies -0.057 -0.186 to 0.072 0.386 -0.040 -0.176 to 0.096 0.566 1.235
a R2 for the multivariable model = 9.7%; p-value for ANOVA < 0.001. CI: confidence interval; VIF: variance inflation factor.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.