Artificial-intelligence systems that offer emotional companionship have rapidly moved from the margins of digital health to an embedded presence in ordinary life. Marketed as “friends”, “partners” and “listeners”, these chatbots now meet users in moments of loneliness, stress and despair, often at times when no human support is available. Their expansion raises a central question: what happens when emotional suffering is directed toward an artefact incapable of responsibility, action or moral accountability? Historically, cries for help summoned human presence. In digital contexts, however, disclosure is often absorbed by systems that respond with sentences but cannot intervene, protect or share the burden. This article argues that emotional-support artificial intelligence must not be introduced into mental-health contexts without enforceable safeguards, regulatory classification and clinical oversight. It combines theoretical analysis with an exploratory examination of eight widely available chatbots, demonstrating that current systems frequently simulate empathy while failing to recognise suicide-risk cues or to guide users toward human help. These findings gain further weight when considered alongside documented real-world cases in which chatbot interactions preceded self-harm or suicide. Although emotional AI may one day offer supplementary value within supervised care, its present deployment risks normalising the substitution of machines for human care in settings where such care is structurally absent. Ethical legitimacy requires that societies first guarantee equitable access to mental-health services, establish accountability for digital systems and ensure that artificial companions remain optional rather than inevitable. Only after these foundational duties are fulfilled can the question of emotional AI in mental-health care be meaningfully asked.