Preprint (Review). This version is not peer-reviewed.

Artificial Intelligence in Psychotherapy: Opportunities, Risks, and Implications for Mental Health Care — A Literature Review

Submitted: 16 April 2026. Posted: 17 April 2026.
Abstract
This literature review examines the growing integration of artificial intelligence (AI) into psychotherapeutic contexts, with particular attention to its implications for the mental health of adults and young adults. Against the backdrop of a global mental health crisis characterized by insufficient therapeutic resources, rising demand, and persistent stigma, AI-assisted interventions — including chatbot-based tools, machine learning-supported diagnostics, and algorithmically personalized treatment pathways — have emerged as a promising yet contested field. The paper introduces key concepts such as AI-assisted psychotherapy, digital mental health interventions (DMHIs), and conversational agents, and situates them within current psychological and clinical frameworks. Drawing on recent empirical studies, theoretical analyses, and ethical debates, it investigates the potential of AI to democratize access to mental health support while critically addressing concerns around therapeutic alliance, algorithmic bias, data privacy, and the irreducible dimensions of human connection in clinical care. The paper further examines the intersection of AI and established psychotherapeutic modalities, including Cognitive Behavioral Therapy (CBT), Acceptance and Commitment Therapy (ACT), and supportive counseling. By identifying research gaps and unresolved tensions, this review advocates for an evidence-based, ethically grounded, and human-centered approach to AI integration in mental health — one that positions AI as a supplement to, rather than a substitute for, professional therapeutic relationships.

1. Introduction and Background

Mental health represents one of the most pressing public health challenges of the early twenty-first century. The World Health Organization (WHO, 2022) estimates that nearly one billion people worldwide live with a diagnosable mental health condition, yet fewer than half receive adequate treatment. The treatment gap is not merely a question of resource scarcity; it reflects structural barriers including geographic inaccessibility, economic inequality, cultural stigma, and a chronic shortage of trained mental health professionals (Patel et al., 2018). In high-income countries, waiting times for publicly funded psychotherapy frequently extend to months or years, while low- and middle-income countries often lack the infrastructure to sustain even basic psychiatric services (Saxena et al., 2007).
Against this backdrop, the emergence of AI-assisted psychotherapy has attracted substantial scientific and clinical interest. The rapid proliferation of digital mental health interventions (DMHIs) — ranging from structured mobile applications and chatbot-based programs to machine learning-driven clinical decision support systems — has generated both hope and controversy. Proponents argue that AI can function as a scalable, cost-effective, and stigma-reducing pathway to mental health support, capable of delivering evidence-based interventions to populations who would otherwise remain untreated (Luxton, 2020; Torous et al., 2021). Critics, however, raise foundational objections: psychotherapy is not merely a behavioral algorithm but a fundamentally relational practice, and the substitution of human judgment and empathy with computational pattern recognition carries risks that extend beyond technical failure into the domain of clinical ethics (Stoll et al., 2020).
This paper does not position itself in either camp uncritically. Rather, it aims to provide a comprehensive, analytically grounded overview of the current state of research on AI in psychotherapy. It examines the empirical evidence for effectiveness, the theoretical frameworks within which AI tools operate, the structural and ethical tensions the field must navigate, and the conditions under which AI integration can be considered clinically responsible. The intended contribution is a critical synthesis that moves beyond both techno-optimism and reflexive skepticism, toward a nuanced understanding of what AI can and cannot offer within the complex ecology of mental health care.
The paper is structured as follows. Section 2 reviews the emerging landscape of AI applications in mental health, including current tools and technological frameworks, and examines the mental health implications of these tools, both potential benefits and documented risks. Section 3 addresses the therapeutic alliance and its compatibility with AI-mediated interventions. Section 4 discusses ethical, regulatory, and equity considerations. Section 5 situates AI within established therapeutic modalities. Section 6 closes with conclusions and directions for future research.

2. Literature Review

2.1. Emerging Landscape of AI Applications in Mental Health

The field of AI-assisted mental health care is expanding rapidly and encompasses a heterogeneous range of tools and approaches. At the most accessible end of the spectrum are conversational AI agents — chatbots and virtual companions designed to simulate therapeutic dialogue, provide psychoeducation, guide evidence-based exercises, or offer emotional support. Systems such as Woebot, Wysa, and Replika have attracted millions of users globally and have been the subject of an increasing body of empirical research (Fitzpatrick et al., 2017; Inkster et al., 2018). These tools typically draw on manualized therapeutic protocols, particularly CBT-derived techniques, and deliver them through a text-based conversational interface accessible via smartphone.
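To make the architecture of such rule-based agents concrete, the following Python sketch shows how a manualized CBT exercise — here a radically simplified thought record — can be delivered as a scripted, step-by-step conversational flow. All prompts, state names, and function names are invented for illustration; deployed systems such as those cited above add natural language understanding, branching logic, and safety screening far beyond this skeleton.

```python
# Minimal illustration of a scripted CBT exercise (a simplified thought
# record) delivered as a fixed conversational sequence. All prompts and
# step names are invented for illustration only.

THOUGHT_RECORD_STEPS = [
    ("situation", "What situation triggered the difficult feeling?"),
    ("emotion", "What emotion did you notice, and how intense was it (0-100)?"),
    ("automatic_thought", "What thought went through your mind?"),
    ("evidence_for", "What evidence supports that thought?"),
    ("evidence_against", "What evidence does not fit that thought?"),
    ("balanced_thought", "How might you restate the thought in a more balanced way?"),
]

def run_thought_record(get_user_input=input, send=print):
    """Walk the user through each step in order and collect the answers."""
    record = {}
    send("Let's work through a thought record together.")
    for key, prompt in THOUGHT_RECORD_STEPS:
        send(prompt)
        record[key] = get_user_input("> ")
    send("Well done. Notice whether the thought feels any different now.")
    return record

if __name__ == "__main__":
    completed = run_thought_record()
    print(completed)
```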
Beyond conversational agents, AI is being applied to clinical diagnostic processes and risk stratification. Machine learning algorithms trained on multimodal datasets — including speech patterns, facial expressions, physiological indicators, and language use — are being developed to support the detection of depression, anxiety disorders, PTSD, and suicidality (Cummins et al., 2015; Coppersmith et al., 2014). These systems aim to assist clinicians in identifying at-risk individuals more efficiently and accurately than traditional self-report instruments, though their clinical deployment raises significant questions about interpretability, validity, and informed consent (Bzdok & Meyer-Lindenberg, 2018).
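The basic shape of such a risk-stratification system can be sketched schematically. The example below, assuming scikit-learn and entirely synthetic data, trains a classifier on stand-in features of the kind described above (speech prosody, language-use statistics); real systems require validated features, clinically adjudicated labels, and external validation before any deployment.

```python
# Schematic risk-stratification pipeline on synthetic data. Feature
# columns stand in for e.g. speech pause rate, pitch variability,
# first-person pronoun rate, and negative-emotion word rate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))  # synthetic multimodal features
# Synthetic labels loosely correlated with the features.
y = (X @ np.array([0.8, -0.5, 0.6, 0.9]) + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
print(f"AUC on held-out data: {roc_auc_score(y_te, probs):.2f}")
```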
A third application domain involves AI-supported personalization of treatment pathways. Rather than applying standardized protocols uniformly, adaptive algorithms can modulate the content, pacing, and focus of an intervention based on real-time user data and predicted response profiles (Insel, 2017). This approach — sometimes framed within the precision psychiatry paradigm — represents a significant departure from the population-level logic of randomized controlled trials and invites new methodological and epistemological questions about the evidence base for individualized AI-driven care.
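One minimal form of such adaptive logic is a bandit-style selector that allocates intervention modules according to observed response. The sketch below implements epsilon-greedy selection; the module names and the scalar "reward" signal are invented for illustration, whereas real adaptive systems use richer state representations and clinically validated outcome measures.

```python
# Toy epsilon-greedy selection among intervention modules: mostly pick
# the module with the best running mean outcome, occasionally explore.
import random

MODULES = ["behavioral_activation", "cognitive_restructuring", "mindfulness"]

class EpsilonGreedySelector:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {m: 0 for m in MODULES}
        self.mean_reward = {m: 0.0 for m in MODULES}

    def choose(self):
        if random.random() < self.epsilon:  # explore
            return random.choice(MODULES)
        return max(MODULES, key=lambda m: self.mean_reward[m])  # exploit

    def update(self, module, reward):
        """Incrementally update the running mean reward for a module."""
        self.counts[module] += 1
        n = self.counts[module]
        self.mean_reward[module] += (reward - self.mean_reward[module]) / n

selector = EpsilonGreedySelector()
for _ in range(100):
    m = selector.choose()
    reward = random.random()  # placeholder for an engagement/outcome signal
    selector.update(m, reward)
print(selector.mean_reward)
```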
Recent years have also seen the integration of large language models (LLMs) such as GPT-4 and similar architectures into mental health applications. Unlike earlier rule-based chatbots, LLMs generate contextually responsive, linguistically flexible outputs that can approximate the surface features of empathic dialogue with notable sophistication (Ayers et al., 2023). This development has intensified both enthusiasm and concern: the plausibility of AI-generated empathy may increase user engagement and improve therapeutic outcomes, but it also raises deeper questions about authenticity, boundaries, and the clinical implications of users forming attachment-like relationships with non-human agents (Luxton, 2020).

2.2. Mental Health Implications: Potential Benefits

The case for AI-assisted psychotherapy rests on several well-documented dimensions of the mental health treatment gap. First, scalability: AI tools can be deployed at population scale without the constraints that limit human therapist capacity. In settings where the ratio of mental health professionals to the general population is critically low, AI-assisted self-help applications may constitute a meaningful intervention tier — not a replacement for professional care, but a first step that reduces the burden of untreated distress and builds the skills necessary for productive engagement with human providers (Kazdin & Blase, 2011; Naslund et al., 2017).
Second, accessibility: AI tools are available around the clock, require no appointment, and can be accessed in the privacy of one’s own home. For individuals who experience shame or fear in relation to formal help-seeking — a demographically significant group, particularly among young men and in cultures with pronounced mental health stigma — this anonymity may lower the threshold for initial engagement (Gulliver et al., 2010). Several studies have indicated that users disclose more sensitive information to AI systems than to human interviewers, a phenomenon attributed to reduced fear of negative evaluation (Lucas et al., 2014).
Third, and directly relevant to the global epidemiological picture, AI tools can operate without geographic or temporal constraints. Rural and remote populations, shift workers, caregivers, and others for whom conventional therapy is structurally inaccessible may find in AI applications a usable and effective resource. Fitzpatrick et al. (2017) conducted a randomized controlled trial demonstrating that young adults using Woebot showed significant reductions in depressive symptoms over two weeks relative to an information-only control — a finding consistent with subsequent evaluations in other populations (Hoermann et al., 2017; Prochaska et al., 2021).

2.3. Mental Health Implications: Documented Risks and Limitations

The empirical promise of AI-assisted psychotherapy must be read alongside an equally substantial body of critical findings. The most fundamental concern pertains to effectiveness: while short-term symptom reduction has been documented in several trials, evidence for sustained effects, comparability with face-to-face therapy, and effectiveness in severe or complex clinical presentations remains limited (Linardon et al., 2019). Current AI tools have been predominantly evaluated in samples presenting with mild to moderate depression and anxiety; their applicability to personality disorders, psychosis, trauma-related conditions, or acute suicidality has not been established and raises serious clinical risk questions.
A second cluster of concerns involves algorithmic bias and validity. Machine learning models for mental health are trained on datasets that frequently overrepresent specific demographic groups — typically young, educated, Western, digitally engaged individuals — and underrepresent older adults, ethnic minorities, and those with limited digital literacy (Obermeyer & Emanuel, 2016; Bzdok & Meyer-Lindenberg, 2018). Systems trained on such data may perform poorly or harmfully for underrepresented groups, paradoxically widening the very equity gaps they are intended to address. The language models underlying conversational agents are particularly susceptible to cultural and linguistic bias, which may distort both assessment outputs and therapeutic interactions in ways that are difficult to detect without systematic audit.
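The kind of systematic audit the preceding paragraph calls for can be illustrated with a minimal subgroup performance check: compare a model's error rates across demographic groups rather than reporting a single aggregate metric. The data and group labels below are synthetic, constructed so that the simulated model detects cases less reliably in the underrepresented group; real audits use held-out clinical data and multiple fairness metrics.

```python
# Minimal subgroup audit: sensitivity (recall) computed per group on
# synthetic data in which the model is deliberately less reliable for
# the underrepresented group.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["majority", "minority"], size=n, p=[0.85, 0.15])
y_true = rng.integers(0, 2, size=n)
# Flip predictions more often for the minority group to simulate bias.
flip = np.where(group == "minority", 0.35, 0.10)
y_pred = np.where(rng.random(n) < flip, 1 - y_true, y_true)

for g in ("majority", "minority"):
    mask = group == g
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"{g:>8}: sensitivity = {sens:.2f} (n = {mask.sum()})")
```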
Third, there is the question of harm. AI tools that simulate therapeutic dialogue without the clinical judgment, crisis detection capability, or safeguarding infrastructure of a trained professional can fail dangerously in high-risk situations. There are documented cases in which chatbot platforms provided inappropriate or harmful responses to users in crisis (Miner et al., 2016; Torous & Roberts, 2017). The widely reported 2023 case involving the Chai AI chatbot, in which a vulnerable user reportedly died by suicide following prolonged conversations with the system, illustrates the severity of what can occur when AI systems designed for engagement rather than clinical safety encounter individuals with acute psychiatric need.
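A basic safeguarding pattern, absent from the failing systems described above, is to screen user input for crisis indicators and escalate rather than generate a reply. The sketch below illustrates the pattern only: `generate_reply` is a hypothetical placeholder for any LLM or rule-based response engine, and the keyword list is a deliberately crude stand-in for validated, clinically supervised risk-detection models.

```python
# Illustrative crisis-screening wrapper around a response engine.
# CRISIS_MARKERS and generate_reply are illustrative placeholders; a
# real system needs validated risk detection and clinical escalation paths.
CRISIS_MARKERS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_RESPONSE = (
    "It sounds like you may be in serious distress. I am an automated tool "
    "and cannot help in a crisis. Please contact a local emergency number "
    "or a crisis line right away."
)

def generate_reply(message: str) -> str:
    """Placeholder for a real LLM or rule-based response engine."""
    return f"(model response to: {message!r})"

def safe_reply(message: str) -> str:
    if any(marker in message.lower() for marker in CRISIS_MARKERS):
        return CRISIS_RESPONSE  # escalate instead of generating a reply
    return generate_reply(message)

print(safe_reply("I had a stressful day at work."))
print(safe_reply("I want to end my life."))
```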
Finally, the substitution risk. Despite consistent framing of AI as a supplement to human care, commercial pressures and system-level cost imperatives create structural incentives to deploy AI as a replacement for human clinicians. This substitution logic — already evident in some insurance and public health system contexts — threatens to erode professional standards of care under the guise of innovation. The risk is not primarily technological but political and economic: a well-functioning AI tool deployed in a healthcare system that uses it to justify reduced investment in human mental health services may produce net harm at the population level.

3. Therapeutic Alliance and the Limits of AI-Mediated Interaction

The therapeutic alliance — broadly defined as the collaborative bond between therapist and client, including agreement on goals and tasks and the quality of the relational connection — is one of the most robustly supported predictors of psychotherapy outcome, accounting for a meaningful proportion of variance in clinical improvement across modalities, populations, and presenting problems (Horvath et al., 2011; Wampold & Imel, 2015). Any assessment of AI’s role in psychotherapy must therefore engage seriously with the question of whether, and in what form, therapeutic alliance can exist between a human being and a computational system.
Research in this area is nascent but rapidly accumulating. Several studies have found that users of AI-based therapeutic tools report working alliance scores on standardized instruments (such as the Working Alliance Inventory) comparable to those reported by clients in human therapist dyads, at least in brief intervention contexts (Bickmore et al., 2005; Woebot Health, 2021). These findings have been interpreted by AI proponents as evidence that the relational dimensions of therapy are, to a meaningful degree, transferable to well-designed digital agents. A more cautious interpretation, however, notes that working alliance ratings in AI contexts may reflect user satisfaction with interface usability or perceived tool helpfulness rather than the relational construct the instrument was designed to measure.
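For readers unfamiliar with how such alliance measures are operationalized in app-based studies, the sketch below scores a short Likert-type questionnaire by subscale mean. The item-to-subscale mapping and item count are invented for illustration; they do not reproduce the actual Working Alliance Inventory, which is a validated, copyrighted instrument.

```python
# Illustrative scoring of a short alliance questionnaire. The mapping of
# item numbers to the three classic alliance dimensions (goal, task,
# bond) is hypothetical and does not reproduce the real WAI.
from statistics import mean

SUBSCALES = {
    "goal": [1, 4, 7],
    "task": [2, 5, 8],
    "bond": [3, 6, 9],
}

def score_alliance(responses):
    """Return the mean response (1-5 Likert scale) for each subscale."""
    return {
        name: mean(responses[i] for i in items)
        for name, items in SUBSCALES.items()
    }

example = {1: 4, 2: 3, 3: 5, 4: 4, 5: 2, 6: 5, 7: 3, 8: 3, 9: 4}
print(score_alliance(example))
```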
A theoretically more grounded perspective draws on attachment theory, intersubjectivity, and the phenomenology of the therapeutic encounter to argue that the healing dimensions of psychotherapy are not reducible to technique delivery but are constituted in the relational matrix itself — in the experience of being truly seen, held, and responded to by another conscious subject (Stern, 2004; Buber, 1958). From this standpoint, an AI system that produces linguistically sophisticated empathic responses without genuine understanding, embodied co-presence, or moral responsibility cannot replicate what is therapeutically essential, regardless of its surface plausibility.
This does not, however, render AI therapeutically irrelevant. There is a meaningful distinction between the healing that occurs in depth psychotherapy and the relief that may be obtained from structured psychoeducation, guided skills practice, or accessible emotional support. AI tools may be well-suited to the latter functions, particularly when they are transparently framed as skills-based tools rather than relational therapies. The key clinical and ethical imperative is accuracy of framing: users must understand what they are engaging with and what it cannot offer, and AI systems must not be designed to elicit or exploit parasocial attachment in order to maximize engagement metrics.

4. Ethical, Regulatory, and Equity Considerations

The integration of AI into mental health care raises a complex set of ethical questions that exceed the scope of clinical efficacy research. At the level of individual rights, informed consent is a foundational concern. Users of AI mental health tools must be clearly informed that they are interacting with an automated system, not a human clinician, and must understand the implications of this distinction for the nature of support available, the safeguards in place, and the handling of their data (Vayena et al., 2018). Deceptive design that conceals the AI nature of an agent — even in service of therapeutic engagement — constitutes a violation of user autonomy that is ethically impermissible regardless of outcome.
Data privacy is a structurally distinct but related concern. Mental health data is among the most sensitive categories of personal information, and the collection, storage, and use of such data by commercial AI platforms creates risks of breach, misuse, and discriminatory profiling that carry consequences far beyond those associated with general consumer data. Regulatory frameworks such as the European General Data Protection Regulation (GDPR) impose significant constraints on the processing of health-related personal data, but the application of these frameworks to AI-generated inferences about mental health status — as distinct from formally diagnosed conditions — remains legally ambiguous and insufficiently addressed in current jurisprudence (Mittelstadt et al., 2016).
At the structural level, equity concerns are multidimensional. While AI tools are often positioned as democratizing forces that extend mental health access to underserved populations, the pattern of adoption in practice frequently reproduces existing inequalities. Digital mental health tools disproportionately reach those who are already connected, digitally literate, and sufficiently resourced to own and maintain a smartphone — populations that are, on average, better positioned to access conventional care as well (Hollis et al., 2015). The individuals most severely affected by the mental health treatment gap — those in poverty, with low educational attainment, living in rural or conflict-affected settings, or belonging to marginalized communities — are often the least likely to benefit from AI-based interventions as currently designed and deployed.
Regulatory frameworks for AI mental health tools remain inconsistent and, in many jurisdictions, inadequate. The U.S. Food and Drug Administration (FDA) has developed a framework for software as a medical device (SaMD), but the classification of AI mental health applications as medical devices — which would trigger rigorous pre-market evaluation requirements — is applied inconsistently and is often avoided by commercial developers through careful framing of their products as wellness rather than clinical tools (Torous & Roberts, 2017). The EU AI Act (2024), which categorizes certain AI applications in health and social care as high-risk and subjects them to enhanced oversight requirements, represents a more systematic regulatory approach, though its implementation and enforcement remain in early stages.

5. AI and Established Psychotherapeutic Modalities

A significant portion of existing AI mental health interventions is grounded in, or explicitly derived from, Cognitive Behavioral Therapy (CBT). This is not incidental: CBT is characterized by structured, manualized protocols with clearly defined techniques — thought records, behavioral activation, exposure hierarchies, cognitive restructuring — that are, in principle, amenable to algorithmic delivery (Marks et al., 2007). Several studies have demonstrated the effectiveness of computerized CBT (cCBT) for mild to moderate depression and anxiety, and the extension of these programs to AI-powered conversational interfaces has been a natural development (Andersson et al., 2019). Woebot, the most studied chatbot in this field, is explicitly modeled on CBT and delivers a curriculum of psychoeducation and skills practice within a conversational frame.
Acceptance and Commitment Therapy (ACT) has also been implemented in AI-assisted formats, with some evidence of efficacy for stress reduction and psychological flexibility outcomes (Mak et al., 2018). ACT’s emphasis on values clarification, mindfulness, and cognitive defusion offers a potentially richer fit for AI delivery than more technique-heavy CBT protocols, as its exercises can be readily adapted to interactive text-based formats. However, the experiential and relational dimensions of ACT — particularly the use of therapeutic metaphors and the therapist’s own presence as a modeling agent — are not readily translatable to AI contexts.
Psychodynamic and humanistic modalities present a more fundamental challenge to AI integration. Approaches grounded in the analysis of unconscious dynamics, the therapeutic relationship as the primary vehicle of change, or the unconditional positive regard of a genuinely responsive human presence cannot be operationalized as algorithmic sequences without fundamentally altering the nature of the intervention. This does not exclude AI from a supporting role in these contexts — for instance, as a between-session support tool or a psychoeducational resource — but it does argue against framing such tools as delivering these therapeutic modalities in any meaningful clinical sense.
An emerging research area concerns the integration of AI with human therapists as a hybrid model: AI tools that extend the reach of therapy between sessions, support therapist case conceptualization and monitoring, or provide structured skills practice that frees session time for the relational and exploratory dimensions of treatment (Stoll et al., 2020; Gaffney et al., 2019). This model — sometimes termed blended care — appears to offer a clinically coherent resolution to the opposition between AI capability and therapeutic necessity, and several trials are underway to evaluate its effectiveness and implementation requirements.

6. Conclusion

This literature review has examined the current state of AI integration in psychotherapy across multiple dimensions: the landscape of available technologies, evidence for effectiveness and risk, the question of therapeutic alliance, ethical and regulatory concerns, and the compatibility of AI with established therapeutic modalities. The overall picture is one of genuine promise constrained by significant and unresolved limitations.
The case for AI-assisted mental health interventions is strongest when the following conditions are met: the target population is experiencing mild to moderate symptoms that do not require intensive clinical management; access to human therapists is genuinely constrained by structural rather than preferential factors; the intervention is transparently framed as a digital skills tool rather than a therapeutic relationship; users are capable of informed consent and able to recognize when their needs exceed what the tool can provide; and adequate safety protocols are in place to identify and respond to acute risk.
Conversely, the case against unrestricted deployment of AI in mental health is strongest in contexts involving severe or complex presentations, vulnerable populations with limited capacity for self-direction, commercial platforms without clinical oversight, or healthcare systems in which AI is used to justify reductions in human staffing. The ethical imperative is not to resist AI, but to ensure that its integration is governed by the same principles that govern any clinical intervention: evidence of efficacy, minimization of harm, respect for autonomy, and accountability to the patient.
Several directions for future research emerge from this review. First, longitudinal studies are needed to assess the sustained effects of AI-assisted interventions beyond the short-term symptom reduction demonstrated in existing trials. Second, rigorous evaluation of AI tools in diverse populations — including older adults, ethnic minorities, and individuals with severe mental illness — is essential to close the equity gaps that current research leaves open. Third, the development and evaluation of blended care models that integrate AI with human clinical oversight represents a clinically promising and ethically responsible research direction. Fourth, theoretical work is needed to clarify the boundaries of the therapeutic alliance concept when applied to human-AI interaction, distinguishing genuine relational phenomena from interface engagement effects. Fifth, regulatory science must keep pace with technological development to ensure that AI mental health products are subject to rigorous pre-market evaluation and post-market surveillance comparable to other medical interventions.
The integration of AI into psychotherapy is not a future prospect but a present reality, and its trajectory will be shaped by the quality of the scientific, ethical, and regulatory frameworks brought to bear upon it. This paper has argued that such frameworks must be grounded in clinical realism — neither dismissing AI’s genuine potential to extend the reach of mental health support nor accepting the substitution of technological capability for therapeutic care. The measure of success is not the sophistication of the algorithm but the wellbeing of the person.

References

1. Andersson, G.; Titov, N.; Dear, B.F.; Rozental, A.; Carlbring, P. Internet-delivered psychological treatments: From innovation to implementation. World Psychiatry 2019, 18(1), 20–28.
2. Ayers, J.W.; Zampieri, A.; Poliak, A.; Dredze, M.; Webber-Lind, B.; Ouillon, J. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine 2023, 183(6), 589–596.
3. Bickmore, T.W.; Gruber, A.; Picard, R. Establishing the computer-patient working alliance in automated health behavior change interventions. Patient Education and Counseling 2005, 59(1), 21–30.
4. Buber, M. I and Thou; Scribner: New York, 1958.
5. Bzdok, D.; Meyer-Lindenberg, A. Machine learning for precision psychiatry: Opportunities and challenges. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 2018, 3(3), 223–230.
6. Coppersmith, G.; Harman, C.; Dredze, M. Measuring post traumatic stress disorder in Twitter. Proceedings of the International AAAI Conference on Web and Social Media 2014, 8(1), 579–582.
7. Cummins, N.; Scherer, S.; Krajewski, J.; Schnieder, S.; Epps, J.; Quatieri, T.F. A review of depression and suicide risk assessment using speech analysis. Speech Communication 2015, 71, 10–49.
8. Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health 2017, 4(2), e19.
9. Gaffney, H.; Mansell, W.; Tai, S. Conversational agents in the treatment of mental health problems: Mixed-method systematic review. JMIR Mental Health 2019, 6(10), e14166.
10. Gulliver, A.; Griffiths, K.M.; Christensen, H. Perceived barriers and facilitators to mental health help-seeking in young people: A systematic review. BMC Psychiatry 2010, 10(1), 113.
11. Hoermann, S.; McCabe, K.L.; Milne, D.N.; Calvo, R.A. Application of synchronous text-based dialogue systems in mental health interventions: Systematic review. Journal of Medical Internet Research 2017, 19(8), e267.
12. Hollis, C.; Morriss, R.; Martin, J.; Amani, S.; Cotton, R.; Denis, M.; Lewis, S. Technological innovations in mental healthcare: Harnessing the digital revolution. British Journal of Psychiatry 2015, 206(4), 263–265.
13. Horvath, A.O.; Del Re, A.C.; Flückiger, C.; Symonds, D. Alliance in individual psychotherapy. Psychotherapy 2011, 48(1), 9–16.
14. Inkster, B.; Sarda, S.; Subramanian, V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: Real-world data evaluation mixed-methods study. JMIR mHealth and uHealth 2018, 6(11), e12106.
15. Insel, T.R. Digital phenotyping: Technology for a new science of behavior. JAMA 2017, 318(13), 1215–1216.
16. Kazdin, A.E.; Blase, S.L. Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspectives on Psychological Science 2011, 6(1), 21–37.
17. Linardon, J.; Cuijpers, P.; Carlbring, P.; Messer, M.; Fuller-Tyszkiewicz, M. The efficacy of app-supported smartphone interventions for mental health problems: A meta-analysis of randomized controlled trials. World Psychiatry 2019, 18(3), 325–336.
18. Lucas, G.M.; Gratch, J.; King, A.; Morency, L.P. It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior 2014, 37, 94–100.
19. Luxton, D.D. Artificial intelligence in behavioral and mental health care; Academic Press, 2020.
20. Mak, W.W.S.; Tong, A.C.Y.; Ip, B.Y.T.; Yiu, M.M.W.; Lam, M.Y.Y.; Chan, A.T.Y.; Lo, H.H.M. Efficacy and moderation of mobile app-based programs for mindfulness-based training, self-compassion training, and cognitive behavioral psychoeducation on mental health: Randomized controlled noninferiority trial. JMIR Mental Health 2018, 5(4), e60.
21. Marks, I.M.; Cavanagh, K.; Gega, L. Hands-on help: Computer-aided psychotherapy; Psychology Press, 2007.
22. Miner, A.S.; Milstein, A.; Schueller, S.; Hegde, R.; Mangurian, C.; Linos, E. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Internal Medicine 2016, 176(5), 619–625.
23. Mittelstadt, B.D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The ethics of algorithms: Mapping the debate. Big Data & Society 2016, 3(2), 1–21.
24. Naslund, J.A.; Aschbrenner, K.A.; Araya, R.; Marsch, L.A.; Unützer, J.; Patel, V.; Bartels, S.J. Digital technology for treating and preventing mental disorders in low-income and middle-income countries: A narrative review of the literature. Lancet Psychiatry 2017, 4(6), 486–500.
25. Obermeyer, Z.; Emanuel, E.J. Predicting the future — Big data, machine learning, and clinical medicine. New England Journal of Medicine 2016, 375(13), 1216–1219.
26. Patel, V.; Saxena, S.; Lund, C.; Thornicroft, G.; Baingana, F.; Bolton, P.; Unützer, J. The Lancet Commission on global mental health and sustainable development. The Lancet 2018, 392(10157), 1553–1598.
27. Prochaska, J.J.; Vogel, E.A.; Chieng, A.; Kendra, M.; Baiocchi, M.; Pajarito, S.; Robinson, A. A therapeutic relational agent for reducing problematic substance use (Woebot): Development and usability study. Journal of Medical Internet Research 2021, 23(3), e24850.
28. Saxena, S.; Thornicroft, G.; Knapp, M.; Whiteford, H. Resources for mental health: Scarcity, inequity, and inefficiency. The Lancet 2007, 370(9590), 878–889.
29. Stern, D.N. The present moment in psychotherapy and everyday life; Norton, 2004.
30. Stoll, J.; Müller, J.A.; Trachsel, M. Ethical issues in online psychotherapy: A narrative review. Frontiers in Psychiatry 2020, 10, 993.
31. Torous, J.; Roberts, L.W. Needed innovation in digital health and smartphone applications for mental health: Transparency and trust. JAMA Psychiatry 2017, 74(5), 437–438.
32. Torous, J.; Bucci, S.; Bell, I.H.; Kessing, L.V.; Faurholt-Jepsen, M.; Whelan, P.; Firth, J. The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry 2021, 20(3), 318–335.
33. Vayena, E.; Haeusermann, T.; Adjekum, A.; Blasimme, A. Digital health: Meeting the ethical and policy challenges. Swiss Medical Weekly 2018, 148, w14571.
34. Wampold, B.E.; Imel, Z.E. The great psychotherapy debate: The evidence for what makes psychotherapy work, 2nd ed.; Routledge, 2015.
35. World Health Organization. World mental health report: Transforming mental health for all; WHO Press, 2022.