Submitted: 19 March 2026
Posted: 20 March 2026
Abstract
Keywords:
1. Introduction
2. The Rapid Evolution of AI and Its Uses
| | Psychosis, mania or isolated delusions | Suicide and self-harm | Emotional reliance |
|---|---|---|---|
| Expert evaluation: fewer undesirable responses | -39% | -52% | -42% |
| Non-policy-compliant responses | -65% | -65% | -80% |
3. Attachment Formation and Emotional Dependency
4. Trust in AI
5. Artificial Intimacy
6. Delusion Reinforcement and AI Psychosis
7. Designing Therapeutic LLMs
- Problem definition and task identification: The first step is to define the problem precisely by articulating the nature of the input data, the desired output, and the objectives of the task. A carefully framed problem statement and clear task requirements allow researchers to apply LLMs efficiently to a wide range of challenges.
- Data acquisition and preprocessing: The next step is to acquire high-quality data and prepare it for model training. The datasets must correspond to the defined task objectives while being diverse and representative, with potential biases accounted for. Careful selection and preprocessing of data help the resulting LLM remain precise and trustworthy while capturing the nuances of natural language.
- Model selection and fine-tuning: The objective of this stage is to select an adequate pre-trained LLM architecture and adapt it to the specific task at hand. Factors influencing this selection include model size, available computational resources, and task requirements.
- Model evaluation: The trained model is assessed for performance and efficacy through rigorous testing, which highlights its strengths and weaknesses and guides further fine-tuning.
- Deployment and monitoring: After release, the model's performance is continuously monitored to detect errors and drift, and user feedback is gathered to maintain a high level of reliability and effectiveness.
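The five stages above can be sketched as a minimal Python skeleton. All names here (`TaskSpec`, `preprocess`, `evaluate`, `drift_detected`), the toy exact-match metric, and the drift tolerance are illustrative assumptions for exposition, not part of any system cited in this paper; a real therapeutic LLM pipeline would use a training framework and clinically validated evaluation criteria.

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    """Stage 1: precise problem definition - input data, desired output, objective."""
    input_kind: str
    output_kind: str
    objective: str

def preprocess(raw: list[str]) -> list[str]:
    """Stage 2: clean the corpus (here: trim whitespace, drop empty records)."""
    return [text.strip() for text in raw if text.strip()]

def evaluate(predictions: list[str], references: list[str]) -> float:
    """Stage 4: exact-match accuracy as a toy evaluation metric."""
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

def drift_detected(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Stage 5: flag the deployed model when accuracy drops past a tolerance."""
    return (baseline - current) > tolerance

# Stage 3 (model selection and fine-tuning) is represented only by this stub,
# since it depends entirely on the chosen architecture and framework.
task = TaskSpec(input_kind="patient utterance",
                output_kind="supportive response",
                objective="safe, policy-compliant dialogue")
```

The value of even this toy structure is that the evaluation and monitoring stages are explicit, separately testable functions rather than informal manual checks, which matches the paper's emphasis on continuous post-deployment monitoring.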
8. Conclusions
Abbreviations
| LLM | Large Language Model |
| AI | Artificial Intelligence |
| AIP | Artificial Intelligence Psychosis |
| HCP | Healthcare Provider |
References
- (Al-Amin et al., 2024) Al-Amin, M., Ali, M. S., Salam, A., Khan, A., Ali, A., Ullah, A., ... & Chowdhury, S. K. (2024). History of generative Artificial Intelligence (AI) chatbots: past, present, and future development. arXiv preprint arXiv:2402.05122.
- (Ali et al., 2024) Ali, K., Garcia, A., & Vadsariya, A. (2024). Impact of the AI dependency revolution on both physical and mental health. Journal of Strategic Innovation and Sustainability, 19(2). [CrossRef]
- (American Psychological Association, 2026) American Psychological Association. (2026). Monitor on psychology. https://www.apa.org/monitor/2026/01-02/trends-digital-ai-relationships-emotional-connection.
- (Bisconti et al., 2025) Bisconti et al. (2025). Adversarial poetry as a universal single-turn jailbreak mechanism in large language models. arXiv preprint arXiv:2511.15304.
- (Bowlby, 1969) Bowlby, J. (1969). Attachment and Loss: Attachment; John Bowlby. Basic books.
- (Bunim, 2024) Bunim, E. M. M. A. (2024). Parasocial Dependency Associated with Artificial Intelligence Chatbots. PDF, California State University, Fullerton, 2024.
- (Crowder et al., 2019) Crowder, J. A., Carbone, J., & Friess, S. (2019). Implicit learning in artificial intelligence. Artificial Psychology, 139–147. [CrossRef]
- (Elvery, 2022) Elvery, G. (2022). Undertale’s loveable monsters: Investigating parasocial relationships with non-player characters. Games and Culture, 18(4), 475–497. [CrossRef]
- (Eurostat, 2025) Digital economy and society statistics - households and individuals - Statistics Explained - Eurostat. (2025). https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Digital_economy_and_society_statistics_-_households_and_individuals.
- (Fiske et al., 2019) Fiske, A., Henningsen, P., & Buyx, A. (2019). Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research, 21(5). [CrossRef]
- (He et al., 2025) He, K., Mao, R., Lin, Q., Ruan, Y., Lan, X., Feng, M., & Cambria, E. (2025). A survey of large language models for healthcare: From Data, technology, and applications to accountability and Ethics. Information Fusion, 118, 102963. [CrossRef]
- (Horton & Wohl, 1956) Horton, D., & Wohl, R. (1956). Mass communication and para-social interaction. Psychiatry, 19(3), 215–229. [CrossRef]
- (Hudon & Stip, 2025) Hudon, A., & Stip, E. (2025). Delusional experiences emerging from AI chatbot interactions or "AI psychosis." JMIR Mental Health, 12. [CrossRef]
- (Jarzyna, 2020) Jarzyna, C. (2020). Parasocial interaction, the COVID-19 quarantine, and Digital Media. SSRN Electronic Journal. [CrossRef]
- (Jean, 2020) Jean, A. (2020). Une brève introduction à L’intelligence artificielle. Médecine/Sciences, 36(11), 1059–1067. [CrossRef]
- (Kalmykov, 2025) Kalmykov, V. L. (2025). Towards eXplicitly eXplainable Artificial Intelligence. Information Fusion, 123, 103352.
- (Kasneci et al., 2023) Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and individual differences, 103, 10227.
- (Kasturiratna & Hartanto, 2025) Kasturiratna, K. T., & Hartanto, A. (2025). Attachment to Artificial Intelligence: Development of the AI Attachment Scale, Construct Validation, and Psychological Correlates. [CrossRef]
- (Kooli et al., 2025) Kooli, C., Kooli, Y., & Kooli, E. (2025). Generative Artificial Intelligence Addiction Syndrome: A new behavioral disorder? Asian Journal of Psychiatry, 107, 104476. [CrossRef]
- (Laufer, 2025) Laufer, D. (2025). AI love you. Gender and intimacy in user content regarding AI chatbot characters from Character. ai.
- (Lee, 2020) Lee, R. S. (2020). Intelligent agents and software robots. In Artificial Intelligence in Daily Life (pp. 245-264). Singapore: Springer Singapore.
- (Li et al., 2023) Li, H., Zhang, R., Lee, Y.-C., Kraut, R., & Mohr, D. C. (2023). Systematic Review and Meta-Analysis of AI-Based Conversational Agents for Promoting Mental Health and Well-Being. [CrossRef]
- (Morrin et al., 2025) Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., Bhattacharyya, S., MacCabe, J., Tognin, S., Twumasi, R., Alderson-Day, B., & Pollak, T. (2025). Delusions by Design? How Everyday AIs Might Be Fuelling Psychosis (and What Can Be Done about It). [CrossRef]
- (Opel & Breakspear, 2026) Opel, N., & Breakspear, M. (2026). Transforming Mental Health Research and care through Artificial Intelligence. Science, 391(6782), 249–258. [CrossRef]
- (OpenAI, 2025) OpenAI. (2025). Strengthening ChatGPT responses in sensitive conversations. Available online: https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/.
- (Preda, 2025) Preda, A. (2025). Special report: Ai-induced psychosis: A new frontier in Mental Health. Psychiatric News, 60(10). [CrossRef]
- (Raees et al., 2024) Raees, M., Meijerink, I., Lykourentzou, I., Khan, V. J., & Papangelis, K. (2024). From explainable to interactive AI: A literature review on current trends in human-AI interaction. International Journal of Human-Computer Studies, 189, 103301.
- (Sarker, 2024) Sarker, I. H. (2024). LLM Potentiality and Awareness: A Position Paper from the Perspective of Trustworthy and Responsible AI Modeling. [CrossRef]
- Schema. 28 December 2024. Available online: https://schemasim.com/.
- (Schoene et al., 2025) Schoene, A. M., & Canca, C. (2025). ‘for argument’s sake, Show me how to harm myself!’: Jailbreaking llms in suicide and self-harm contexts. 2025 IEEE International Symposium on Technology and Society (ISTAS), 1–7. [CrossRef]
- (Schuetzler et al., 2020) Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of management information systems, 37(3), 875-900.
- (Spytska, 2025) Spytska, L. (2025). The use of Artificial Intelligence in psychotherapy: Development of intelligent therapeutic systems. BMC Psychology, 13(1). [CrossRef]
- (Stade et al., 2023) Stade, E., Stirman, S. W., Ungar, L. H., Yaden, D. B., Schwartz, H. A., Sedoc, J., ... & Eichstaedt, J. C. (2023). Artificial intelligence will change the future of psychotherapy: A proposal for responsible, psychologist-led development. PsyArXiv, 1-29.
- (Staines et al., 2022) Staines, L., Healy, C., Coughlan, H., Clarke, M., Kelleher, I., Cotter, D., & Cannon, M. (2022). Psychotic experiences in the general population, a review; definition, risk factors, outcomes and interventions. Psychological Medicine, 52(15), 3297–3308. [CrossRef]
- (Torres et al., 2023) Gonzalez Torres, A. P., Kajava, K., & Sawhney, N. (2023). Emerging AI discourses and policies in the EU: Implications for evolving AI governance. Communications in Computer and Information Science, 3–17. [CrossRef]
- (Trocin et al., 2023) Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2023). Responsible AI for digital health: a synthesis and a research agenda. Information Systems Frontiers, 25(6), 2139-2157.
- (Xu & Shuttleworth, 2024) Xu, H., & Shuttleworth, K. M. (2024). Medical Artificial Intelligence and the Black Box Problem: A view based on the ethical principle of “Do no harm.” Intelligent Medicine, 4(1), 52–57. [CrossRef]
- (Zhang et al., 2025) Zhang, D., Wijaya, T. T., Wang, Y., Su, M., Li, X., & Damayanti, N. W. (2025). Exploring the relationship between AI literacy, AI trust, AI dependency, and 21st century skills in preservice mathematics teachers. Scientific Reports, 15(1), 14281.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.