Preprint
Article

This version is not peer-reviewed.

Companions Made of Code: Why Emotional AI Must Not Be Introduced into Mental Healthcare Without Regulation

Submitted: 28 December 2025

Posted: 29 December 2025


Abstract
Artificial-intelligence systems that offer emotional companionship have rapidly moved from the margins of digital health to a presence embedded in ordinary life. Marketed as “friends”, “partners” and “listeners”, these chatbots now meet users in moments of loneliness, stress and despair, often at times when no human support is available. Their expansion raises a central question: what happens when emotional suffering is directed toward an artefact incapable of responsibility, action or moral accountability? Historically, cries for help summoned human presence. In digital contexts, however, disclosure is often absorbed by systems that respond with sentences but cannot intervene, protect or share burden. This article argues that emotional-support artificial intelligence must not be introduced into mental-health contexts without enforceable safeguards, regulatory classification and clinical oversight. It combines theoretical analysis with an exploratory examination of eight widely available chatbots, demonstrating that current systems frequently simulate empathy while failing to recognise suicide-risk cues or guide users toward human help. These findings gain further weight when considered alongside documented real-world cases in which chatbot interactions preceded self-harm or suicide. Although emotional AI may one day offer supplementary value within supervised care, its present deployment risks normalising substitution where human care is structurally absent. Ethical legitimacy requires that societies first guarantee equitable access to mental-health services, establish accountability for digital systems and ensure that artificial companions remain optional rather than inevitable. Only after these foundational duties are fulfilled can the question of emotional AI in mental-health care be meaningfully asked.
Keywords: 
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.