Preprint (Article). This version is not peer-reviewed.

Artificial Intelligence and the Reconfiguration of Emotional Well-Being (2020–2025)

Submitted: 22 October 2025
Posted: 23 October 2025


Abstract

Between 2020 and 2025, the rapid expansion of artificial intelligence (AI) in work, education, and healthcare profoundly transformed the ways in which individuals interact with one another and manage their psychological well-being. This critical reflection article examines how AI applications—such as chatbots, conversational assistants, and algorithmic platforms—have influenced emotional expression, psychological support, and the construction of trust. The methodology is based on a critical review of 40 articles published between 2020 and 2025 in indexed journals, selected for their theoretical and empirical relevance. Criteria of theoretical validity, argumentative consistency, and bias control were applied through conceptual triangulation and cross-source review. The study includes research conducted during the pandemic as well as research addressing the transition toward broader digital environments. The findings reveal a central paradox: while AI expands access to emotional support and promotes self-regulation, it also introduces risks of dehumanization and technological dependence. The study underscores the need for a human-centered approach grounded in empathy, transparency, and ethical oversight. Theoretically, this article redefines the role of artificial intelligence within organizational psychology and mental health. Practically, it offers insights for designing technologies that enhance well-being, trust, and personal autonomy in increasingly complex digital environments.


1. Introduction

Between 2020 and 2025, the accelerated expansion of artificial intelligence (AI) profoundly reshaped the ways people work, learn, communicate, and manage their emotional well-being. Advances in machine learning, natural language processing, and generative systems have made AI an integral part of everyday life (Lin & Chen, 2024; Ozmen Garibay et al., 2023). This integration has redefined human interaction patterns and the very notion of psychological balance within a society increasingly mediated by algorithms (Pataranutaporn et al., 2021; Tuomi, 2022). However, alongside promises of efficiency, personalization, and emotional assistance, new ethical and psychosocial questions have emerged regarding AI’s effects on identity, autonomy, and the quality of affective bonds (Sedlakova & Trachsel, 2023; Uysal et al., 2022; Velastegui-Hernández et al., 2023).
From a psychosocial perspective, the advent of AI has altered the dynamics of human relationships, shaping both perceptions of social support and experiences of loneliness. So-called “emotional hyperconnectivity” has partly replaced face-to-face encounters with digital interactions, in which emotions are interpreted, classified, or even generated by automated systems (Sood & Gupta, 2025; Wang & Uysal, 2024). While this transformation has expanded access to support resources and reduced geographical and economic barriers (Lee & Yoon, 2021; Moghayedi et al., 2024; Nashwan et al., 2023), it has also raised concerns about depersonalization and the erosion of emotional authenticity (Alhuwaydi, 2024; Gual-Montolio et al., 2022; Sedlakova & Trachsel, 2023).
The emergence of generative AI tools has intensified the tension between autonomy and technological dependence. Interactive AI systems and self-care applications offer immediacy, privacy, and assistance—qualities particularly appealing to those seeking emotional support (Pataranutaporn et al., 2021; Lin & Chen, 2024). Yet these same features can foster excessive reliance on technology for emotional decision-making, encouraging dependency on devices and gradually replacing human contact with digital bonds (Shahzad et al., 2024; Uysal et al., 2022). This phenomenon illustrates what has been termed the “digital affective paradox”: as technological connectivity increases, people do not necessarily feel more connected. In fact, the more they are accompanied by digital systems, the harder it becomes to sustain genuine relationships and a stable sense of belonging (Velastegui-Hernández et al., 2023; Wang & Uysal, 2024).
Although recent literature on AI and well-being has grown considerably, most studies focus on clinical or functional outcomes—such as stress reduction, learning enhancement, or work efficiency (Makridis & Mishra, 2022; Shaikh et al., 2023; Xu et al., 2023). Few, however, examine the relational, identity-related, and ethical implications of coexisting with intelligent systems (Lin & Chen, 2024; Ozmen Garibay et al., 2023). This trend reveals a knowledge gap: the absence of integrative conceptual models explaining how interaction with AI shapes emotional well-being from a psychosocial perspective that encompasses cognitive, affective, and social dimensions (Nashwan et al., 2023; Tornero-Costa et al., 2023).
Moreover, existing studies tend to approach the relationship between AI and well-being from isolated disciplinary domains—clinical, educational, or organizational—without exploring their intersections (Pataranutaporn et al., 2021; Tuomi, 2022). While mental health research highlights AI’s therapeutic potential and its ability to broaden access to care (Nashwan et al., 2023; Sedlakova & Trachsel, 2023), organizational studies warn of digital stress and loss of meaning at work (Cramarenco et al., 2023; Tang et al., 2023). This lack of integration constrains a comprehensive understanding of AI’s impact on human well-being and hinders the development of ethical policies that balance innovation with humanization (Ozmen Garibay et al., 2023; Thakkar et al., 2024).
It is therefore essential to promote a critical reflection that transcends technological enthusiasm and focuses on the relational nature of emotional well-being (Dhimolea et al., 2022; Tuomi, 2022). Understanding AI’s impact from this lens entails recognizing that well-being depends not merely on access to digital tools but on the quality of human experiences those tools facilitate or transform (Wang & Uysal, 2024). In this sense, AI should be understood not as a functional mechanism but as a relational actor actively shaping emotions, values, and contemporary social interactions (Sedlakova & Trachsel, 2023; Uysal et al., 2022).
The pandemic and post-pandemic contexts further accelerated this emotional-technological integration (Makridis & Mishra, 2022). During the COVID-19 pandemic, the massive use of digital platforms and virtual psychological support assistants demonstrated both the therapeutic potential of technology and its ethical limitations (Nashwan et al., 2023; Tornero-Costa et al., 2023). The experiences accumulated between 2020 and 2025 reveal a significant evolution in how individuals experience intimacy, vulnerability, and emotional companionship in AI-mediated environments (Velastegui-Hernández et al., 2023; Wang & Uysal, 2024). Yet, an integrative understanding capable of articulating these insights from an interdisciplinary and ethical standpoint remains scarce (Ozmen Garibay et al., 2023; Thakkar et al., 2024).
Against this backdrop, the present article offers a critical reflection on the psychosocial impacts of artificial intelligence on emotional well-being, drawing upon studies published between 2020 and 2025 in journals ranked in Q1–Q3 of JCR and SJR. Its aim is to analyze the tensions, paradoxes, and ethical dilemmas arising from the intensive use of intelligent technologies, acknowledging both their benefits and the risks they pose to collective psychological health (Dhimolea et al., 2022; Sedlakova & Trachsel, 2023).
Furthermore, this work seeks to bridge the gap between clinical and technological approaches through a relational and contextual perspective that situates emotional well-being at the core of the debate on AI (Ozmen Garibay et al., 2023; Tuomi, 2022). It assumes that well-being is a hybrid construct shaped by the continuous interaction between human and artificial intelligence, in which technology can either mitigate or intensify distress (Thakkar et al., 2024; Wang & Uysal, 2024). Finally, the study invites an interdisciplinary dialogue aimed at fostering a more conscious, ethical, and emotionally sustainable relationship with AI, recognizing that human well-being largely depends on the quality of both human and digital relationships shaping contemporary emotional life (Lin & Chen, 2024; Uysal et al., 2022).

2. Methodology

This study adopts a critical reflection approach, grounded in an analytical review of scientific literature published between 2020 and 2025 on the psychosocial impact of artificial intelligence (AI) on emotional well-being. This approach is particularly suitable when the purpose is not to measure or empirically verify a phenomenon but to interpret, contextualize, and reconstruct it theoretically. As Grant and Booth (2009) and Torraco (2016) argue, critical reviews aim to integrate findings from diverse disciplines and perspectives to achieve a deeper understanding of complex and emerging phenomena.
The choice of this design responds to the dynamic nature of AI, whose social and psychological effects evolve faster than empirical validation can capture. Consequently, this approach moves beyond the mere description of results to focus on meaning-making processes and conceptual tensions arising from the relationship between technology and emotional well-being. According to Snyder (2019), critical reviews contribute to identifying knowledge gaps, examining implicit assumptions, and proposing new interpretive frameworks for future research.
Unlike systematic review models or protocols such as PRISMA—which seek comprehensiveness and replicability—the critical reflection approach prioritizes conceptual depth, argumentative coherence, and the capacity for synthesis as indicators of academic rigor. Rather than generating closed conclusions, this type of review opens questions and expands interpretive frameworks from an interdisciplinary standpoint.
The corpus analyzed includes 40 scientific publications selected for their theoretical relevance, quality, and currency. Sources were extracted from Scopus, Web of Science, and PsycINFO, restricting the search to journals indexed in Q1 to Q3 of the SJR or JCR systems. Inclusion criteria considered: (a) thematic relevance—i.e., the relationship between AI and emotional well-being or mental health; (b) temporal relevance (2020–2025); and (c) scientific solidity—ensured through peer review and publication in recognized academic outlets.
The review process unfolded in three phases: an exploratory reading to identify preliminary categories, a thematic clustering of findings around positive impacts, ethical dilemmas, and emerging challenges, and an interpretive synthesis that connected empirical evidence with conceptual frameworks. This procedure allowed the identification of underlying tensions—such as the contradiction between greater technological accessibility and diminished emotional authenticity—frequently highlighted in recent literature.
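Purely as an illustration of the screening logic, the decision rule implied by criteria (a), (b), and (c) can be sketched in a few lines of code. The following Python fragment is hypothetical: the record fields, topic labels, and example entries are invented for demonstration and do not reproduce the manual, interpretive screening actually carried out.

    from dataclasses import dataclass

    @dataclass
    class Record:
        # Hypothetical bibliographic record retrieved from Scopus, Web of Science, or PsycINFO
        title: str
        year: int
        quartile: str          # SJR/JCR quartile, e.g. "Q1", "Q2", "Q3", "Q4"
        peer_reviewed: bool
        topics: set

    RELEVANT_TOPICS = {"artificial intelligence", "emotional well-being", "mental health"}

    def meets_inclusion_criteria(record: Record) -> bool:
        thematic = bool(record.topics & RELEVANT_TOPICS)                           # criterion (a): thematic relevance
        temporal = 2020 <= record.year <= 2025                                     # criterion (b): temporal relevance
        solidity = record.peer_reviewed and record.quartile in {"Q1", "Q2", "Q3"}  # criterion (c): scientific solidity
        return thematic and temporal and solidity

    # Example: screening a small retrieved corpus down to the analytical sample
    corpus = [
        Record("AI chatbots and loneliness", 2023, "Q2", True,
               {"artificial intelligence", "mental health"}),
        Record("Pre-pandemic telehealth review", 2018, "Q1", True,
               {"mental health"}),
    ]
    included = [r for r in corpus if meets_inclusion_criteria(r)]
    print(f"{len(included)} of {len(corpus)} records meet criteria (a)-(c)")

In practice, this filtering was performed manually through close reading of each source; the sketch merely makes the inclusion rule explicit and auditable.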
Although it does not conform to formal protocols such as PRISMA, the study maintained standards of rigor, transparency, and internal coherence to ensure scientific validity. Consistency between objectives, theoretical framework, and conclusions was guaranteed through conceptual triangulation, that is, the systematic comparison of arguments and evidence from different authors to identify convergences, contradictions, and theoretical gaps (Booth et al., 2021).
A reflexive and self-critical stance was also adopted regarding the researcher’s role as a mediator of knowledge. Recognizing the limits of interpretation—particularly in a rapidly evolving field—is essential to avoid confirmation bias or technocentric perspectives. Consequently, this critical review does not aim to replace systematic evidence but to complement it, offering a more integrative and human perspective on the psychological and social changes linked to technological expansion.
In summary, the methodology ensures a balance between academic rigor and interpretive flexibility, situating the debate on AI and emotional well-being within an interdisciplinary framework of reflection. Its value lies in generating renewed readings of the phenomenon, identifying conceptual gaps, and contributing to a more ethical, critical, and emotionally sustainable understanding of the relationship between artificial intelligence and human experience.
3. Psychosocial Impact of Artificial Intelligence on Emotional Well-Being: Review and Critical Analysis

3.1. Transformations in Emotional Experience and Human Relationships

Artificial intelligence (AI) has become a key mediator of contemporary emotional life. Its presence in workplaces, educational settings, and domestic environments has reconfigured how people express, interpret, and share emotions. More than a technical tool, AI functions as a relational environment that shapes processes of empathy, communication, and mutual recognition (Lin & Chen, 2024). This transformation affects not only digital interactions but also how individuals perceive their inner world and their relationships with others.
One of the most visible phenomena is the increasing delegation of emotional tasks to automated systems. Applications that provide psychological companionship, virtual assistants that appear to listen empathetically, and algorithms capable of identifying moods through voice or facial expressions delineate a new frontier in the human–technology relationship (Moghayedi et al., 2024). These tools promise constant support and unlimited availability—qualities that can be comforting amid loneliness or stress. Yet, they also introduce emotional mediation that redefines human expectations of empathy and care. Although algorithmic responses simulate understanding, they lack intentionality and genuine reciprocity (Sood & Gupta, 2025).
Emotional hyperconnectivity amplifies this dynamic. Increasingly, individuals turn to digital platforms to express emotions or seek comfort, displacing face-to-face encounters (Velastegui-Hernández et al., 2023). While this digital transition may foster an immediate sense of community, it simultaneously erodes the depth of bonds and the authenticity of communication. As Alhuwaydi (2024) and Gual-Montolio et al. (2022) warn, when connection is measured by immediacy and responsiveness, silence, pauses, and physical presence—essential components of emotional support—lose value.
From a psychosocial perspective, AI not only transforms the means of affective expression but also the meanings of emotions themselves. In many digital contexts, algorithms prioritize content that provokes reaction, shaping an affective culture based on immediacy and polarization. This climate of constant gratification promotes emotional exposure and external validation, factors that directly affect psychological stability and emotional self-perception (Lin & Chen, 2024).
The growing sophistication of generative systems has also introduced new forms of mediated intimacy. Some users develop bonds of trust with virtual assistants that provide a sense of understanding without judgment. Although these experiences may offer temporary relief, they also generate artificial emotional relationships that transform how individuals form and sustain real human connections (Shahzad et al., 2024). This dynamic aligns with what Velastegui-Hernández et al. (2023) term simulated empathy—an emotional response perceived as human but lacking existential authenticity.
Overall, these transformations reveal that AI not only expands the possibilities for interaction but also redefines what it means to be accompanied, heard, or understood. The boundary between the human and the technological becomes blurred, raising new questions about empathy, intimacy, and emotional well-being. Understanding these dynamics is crucial to assess how technology is shaping not only behavior but also human sensitivity and the capacity to form genuine connections in an increasingly digitalized world.

3.2. Psychological Well-Being and the Paradoxes of Technological Companionship

The link between AI and psychological well-being has become one of the most relevant debates of the past decade. The expansion of intelligent systems in clinical, educational, and work contexts has transformed how individuals manage their emotions and seek support. However, this process is ambivalent: alongside therapeutic opportunities arise ethical and relational dilemmas that invite critical reflection on the role of technology in contemporary mental health (Bankins et al., 2024; Beg et al., 2025; Cabrera et al., 2023).

3.2.1. The Therapeutic Dimension of Digital Companionship

Multiple studies highlight the potential of AI to broaden access to psychological interventions and to enhance their effectiveness. Virtual assistants, mobile applications, and machine learning algorithms enable continuous emotional support, early detection of symptoms, and personalized coping strategies (Gual-Montolio et al., 2022; Li et al., 2023). During the COVID-19 pandemic, the use of mHealth tools expanded globally, providing emotional companionship and large-scale monitoring of psychological states (Alam et al., 2021). In this context, AI became a bridge between clinical knowledge and everyday experience, giving rise to new forms of support characterized by immediacy, adaptability, and personalization.
Emotional support algorithms have proven effective for patient follow-up and real-time mood regulation (Olawade et al., 2024; Thakkar et al., 2024). These systems enhance emotional self-efficacy and self-understanding by translating behavioral or physiological data into useful feedback (Dhimolea et al., 2022). The incorporation of ethical standards such as IEEE 7010 strengthened the evaluation of AI’s impact on well-being, emphasizing its potential contribution to collective mental health (Schiff et al., 2020). In the educational domain, its use has fostered creativity, motivation, and positive emotions when guided toward formative purposes (Dai et al., 2020; Lin & Chen, 2024).
These advances do not imply substituting the human bond but rather expanding the capacities for care. Authors such as Alhuwaydi (2024) and Beg et al. (2025) argue that AI can act as an “emotional amplifier” that complements human intervention through constant availability and simulated empathic communication. Psychological accompaniment is thus redefined within hybrid environments where human and algorithmic agents coexist.

3.2.2. The Paradoxes of Technological Support

Despite its potential, the digital mediation of well-being generates contradictions. One of the most evident is the “paradox of artificial empathy”: users experience understanding and support even though they know that the interaction lacks genuine intentionality (Sood & Gupta, 2025). This phenomenon combines emotional closeness with ontological distance, raising questions about the authenticity of digital care and the internalization of algorithmically mediated emotions. The growing participation of AI in psychological support entails the risk of a “soft dehumanization,” in which well-being is valued in terms of efficiency and automated response (Jeyaraman et al., 2023).
Another significant tension is algorithmic fatigue, defined as the cognitive and emotional overload produced by continuous interaction with intelligent systems (Cramarenco et al., 2023; Tang et al., 2023). In work settings, this phenomenon can reduce well-being and autonomy (Shaikh et al., 2023; Xu et al., 2023); in education, it can lead to anxiety or affective dependence on feedback systems (Velastegui-Hernández et al., 2023; Vistorte et al., 2024).
Relational well-being, understood as the ability to maintain meaningful connections, is also reconfigured by technological mediation. Digital companionship promotes connection but can also foster a form of “connected loneliness,” in which contact is superficial or guided by algorithms (Moghayedi et al., 2024). This “absent presence” reflects how constant availability does not guarantee intimacy or authentic empathy. Consequently, AI can either strengthen or weaken the relational dimension of well-being depending on its design, use, and cultural context (Makridis & Mishra, 2022).

3.2.3. Toward an Integrative Understanding of AI-Mediated Well-Being

The impact of AI on psychological well-being cannot be understood linearly. Rather than opposing benefits and risks, it is more accurate to conceive of a continuum of mediated emotional experiences in which well-being is negotiated between technological autonomy and dependence. This integrative perspective aligns with Bankins et al. (2024), who conceptualize AI as a relational actor within social systems, capable of shaping emotions, decisions, and behaviors (Murugesan et al., 2023).
From this standpoint, technological companionship constitutes both a cultural and psychological phenomenon that redefines notions of care, attention, and presence. Well-being in the algorithmic era requires learning to coexist with systems that simulate empathy while preserving the centrality of human connection and emotional self-regulation (Sood & Gupta, 2025). The key lies in an ethical and reflective integration that enhances human development without replacing the moral and relational dimension that sustains psychological health.
In sum, AI opens an ambivalent horizon: it broadens access to psychological assistance yet poses challenges of authenticity and dependence. Understanding this tension involves recognizing that AI-mediated well-being is a dynamic and contextual phenomenon requiring ethical, educational, and organizational frameworks capable of balancing technological efficiency and humaneness.

3.3. Psychosocial and Ethical Tensions of Emotional Well-Being in the Age of Artificial Intelligence

Between 2020 and 2025, the expansion of AI profoundly transformed the way individuals understand, manage, and experience their emotional well-being. This technological acceleration, driven by the COVID-19 pandemic, revealed both the therapeutic potential of AI and its most salient ethical dilemmas. Emotional well-being can be understood as the balance between what the environment demands and the internal resources of each individual. Today, that balance is challenged by new tensions: between autonomy and algorithmic control, personalization and digital uniformity, and virtual connection and emotional loneliness.
During the pandemic, mHealth applications provided early evidence of the potential of technology to sustain psychological stability in crisis contexts. These tools, built on machine learning algorithms, enabled the monitoring of mood states, offered guidance, and reduced feelings of isolation through empathic digital interactions (Alam et al., 2021). However, the same interface that provided emotional companionship partially replaced the human bond, redefining psychological support in digital terms. Studies by Beg et al. (2025) and Olawade et al. (2024) describe the emergence of hybrid models in which chatbots and cognitive assistants act as initial interlocutors—enhancing accessibility but also raising concerns about the authenticity of empathy.
The emotional impact of AI transcends the clinical sphere and extends to work and educational contexts. In organizations, the use of intelligent systems for performance management has shown ambivalent effects: improved efficiency and reduced cognitive overload (Shaikh et al., 2023; Xu et al., 2023), but also loss of purpose and balance when interaction with AI becomes impersonal (Cramarenco et al., 2023; Tang et al., 2023). During the 2020–2025 period, a dissonance emerged between the promise of AI as a facilitator of well-being and the erosion of emotional autonomy.
In the educational domain, AI-based applications directly influence emotional and motivational spheres. Research by Dai et al. (2020) and Lin and Chen (2024) reports positive effects on creativity and emotional self-efficacy, provided that algorithms remain transparent and foster authentic interactions. However, other studies warn that excessive dependence may diminish emotional self-regulation and promote a superficial relationship with learning (Velastegui-Hernández et al., 2023; Vistorte et al., 2024).
At a structural level, AI acts as an invisible actor shaping social well-being. Makridis and Mishra (2022) argue that the rise of “artificial intelligence as a service” drives economic growth but not necessarily subjective well-being, as it reproduces digital inequalities. Moghayedi et al. (2024) show that in sectors such as facilities management in South Africa, AI adoption generates both inclusion and labor precarity. These findings indicate that emotional well-being is a socially mediated phenomenon shaped by technological power and digital divides (Mendy et al., 2025).
The ethical dimension constitutes another central tension. Algorithmic assessment of well-being raises concerns about emotional privacy, transparency, and the right to affective disconnection. The IEEE 7010 standard (Schiff et al., 2020) represented a regulatory milestone by incorporating criteria of fairness and autonomy, yet technological velocity has outpaced regulatory capacity, producing what Jeyaraman et al. (2023) call an “ethical enigma”: the mismatch between technical progress and moral adaptation. This gap entails risks such as emotional manipulation, misuse of personal information, and confusion between automated care and human attention.
Overall, the literature shows that emotional well-being in the age of AI is relationally complex. Intelligent systems enhance human capacity to recognize and regulate emotions (Dhimolea et al., 2022; Thakkar et al., 2024), yet they also foster affective dependencies that externalize the search for emotional validation. Well-being thus becomes a co-created product between human and machine, with still-uncertain implications for authenticity and agency.
In summary, between 2020 and 2025, AI has operated simultaneously as a catalyst and disruptor of global emotional well-being. Advances in digital mental health, emotional education, and workplace well-being have demonstrated its potential to expand access to psychological support, but they also reveal growing risks: erosion of empathy, external control of emotions, technological dependence, and inequality in access to digital well-being. The challenge does not lie in perfecting algorithms, but in redefining the place of affectivity within a society mediated by intelligent machines. The well-being of the future will depend less on the technical sophistication of AI and more on the collective capacity to integrate it within an ethical and humanistic framework that preserves the authenticity of emotional experience.

3.4. Conceptual Model of Artificial Intelligence (AI)-Mediated Emotional Well-Being

The proposed conceptual model synthesizes the main relationships identified in the reviewed literature between artificial intelligence (AI) and emotional well-being during the 2020–2025 period. It posits that AI operates simultaneously as a technological mediator, relational actor, and emotional modulator, configuring a dynamic structure of three interdependent levels:
First, the technological–structural level encompasses algorithmic systems, chatbots, mHealth platforms, and intelligent digital environments that mediate emotional experience. These devices function as “affective infrastructures” that expand access to psychological support and emotional self-regulation (Alam et al., 2021; Dhimolea et al., 2022; Lin & Chen, 2024).
Second, the psychosocial–relational level describes the interaction processes between users and AI systems, where the main paradoxes of digital well-being are configured: simulated empathy, connected loneliness, and absent presence (Sood & Gupta, 2025; Velastegui-Hernández et al., 2023). At this level, technology redefines the meanings of intimacy, trust, and emotional authenticity.
Third, the ethical–existential level integrates the dilemmas that emerge from the algorithmic mediation of well-being, related to autonomy, emotional privacy, and the sense of humanity. Here lies the tension between the ethical design of systems (Schiff et al., 2020) and the trend toward the dehumanization of care (Cramarenco et al., 2023; Jeyaraman et al., 2023).
This conceptual model represents AI-mediated emotional well-being as a circular and bidirectional process in which human experiences feed algorithms, and these, in turn, shape emotions, perceptions, and social relationships. From this interaction emerge both adaptive benefits (access, personalization, constant support) and latent risks (affective dependence, algorithmic fatigue, erosion of empathy).
The model proposes that sustainable digital well-being can only be achieved when three principles are balanced. First, technological humanization, which situates empathy and ethical intentionality at the core of algorithmic design. Second, emotional autonomy, which promotes self-regulation in the face of technological dependence. Third, transparency and fairness, which ensure responsible use of affective data and the preservation of emotional privacy.
Taken together, the conceptual model invites us to understand AI not as a threat or a definitive solution but as a hybrid emotional ecosystem in which the human and the algorithmic evolve in parallel. This theoretical framework provides a foundation for future research and policies aimed at harmonizing technological innovation and psychological well-being.
Based on the described conceptual model, it becomes evident that the relationship between artificial intelligence (AI) and emotional well-being cannot be understood linearly or causally, but rather as a relational network in which technological mediations, psychosocial processes, and ethical dilemmas converge. This framework allows recent literature to be interpreted not as a collection of isolated findings but as an interdisciplinary dialogue that reveals the constitutive tensions of contemporary digital well-being.
In particular, the model shows that AI acts simultaneously as technical infrastructure, relational agent, and emotional modulator. This triple function explains why the effects of AI on well-being are so diverse and often contradictory: while it expands access to psychological support and emotional self-regulation, it also introduces new forms of affective dependence and depersonalization.
Accordingly, the following discussion critically examines the main findings identified in the literature between 2020 and 2025, articulating the benefits and risks of AI across three complementary domains—health, work, and education—and exploring how these contexts express the central paradox of digital well-being: the unstable balance between technological connection and emotional disconnection.

4. Discussion of Results

The analysis of literature published between 2020 and 2025 reveals a profound transformation in the relationship between artificial intelligence (AI) and human emotional well-being. While the reviewed studies confirm that AI has expanded the resources available to promote mental health and emotional regulation, they also expose psychological, social, and ethical tensions that redefine the very meaning of well-being. This discussion seeks to articulate both dimensions—opportunity and vulnerability—in order to address the identified theoretical gap: the absence of a holistic understanding of how AI simultaneously acts as an agent of support and emotional disruption in contemporary society.
One of the most consistent findings is that AI has improved accessibility and personalization in psychological support, particularly during the COVID-19 pandemic. The mHealth tools analyzed by Alam et al. (2021) demonstrated that digital mediation sustained emotional stability in isolation contexts, transforming the relationship between individuals and technology into an experience of self-care and self-observation (Alhwaiti, 2023). In a complementary vein, Nashwan et al. (2023) showed how mental health professionals—particularly psychiatric nurses—began integrating AI systems to optimize monitoring and follow-up, generating a hybrid model of care that expands, but does not replace, human sensitivity. However, as Sedlakova and Trachsel (2023) warn, the incorporation of chatbots and therapeutic assistants introduces unprecedented ontological dilemmas: are they merely tools or new agents with moral capacity? This tension between functional efficacy and emotional authenticity, already discussed by Beg et al. (2025) under the concept of “simulated empathy,” constitutes one of the main challenges of the algorithmic era.
In organizational settings, the evidence presents an ambivalent picture. On the one hand, AI has enabled more efficient work environments, reducing cognitive load and strengthening perceptions of self-efficacy and satisfaction (Shaikh et al., 2023; Xu et al., 2023). On the other hand, studies such as those by Cramarenco et al. (2023) and Tang et al. (2023) indicate that excessive automation may weaken professional identity and increase depersonalization. Uysal et al. (2022) explain that AI assistants with humanized traits can foster relationships of trust or emotional dependency, acting as an “affective Trojan horse” that infiltrates work and personal life with a false sense of reciprocity. Consequently, emotional well-being depends not only on the functional performance of AI but also on the subjective interpretation individuals make of their relationship with technology. The boundary between collaboration and emotional subordination becomes blurred, especially when organizations prioritize productivity over the quality of human relationships (Malik et al., 2022).
In the educational field, AI has shown remarkable potential to stimulate creativity, emotional self-regulation, and intrinsic motivation (Dai et al., 2020; Lin & Chen, 2024). Pataranutaporn et al. (2021) demonstrated that AI-generated characters can provide personalized emotional support and foster adaptive learning. Nevertheless, Tuomi (2022) warns that this mediation may shift emotional management from the student to algorithmic systems, creating a model of “assisted emotional learning” that reduces affective spontaneity. In the same line, Velastegui-Hernández et al. (2023) and Vistorte et al. (2024) caution about the possible emergence of a superficial or dependent form of well-being, in which performance metrics replace genuine introspective processes. At this point, the discussion acquires an epistemological dimension: if AI redefines the criteria of emotional authenticity, how can we ensure that digital well-being preserves its human essence? The answer seems to lie in technology serving emotional development rather than replacing it, which requires integrating ethical, affective, and relational principles into the design of intelligent systems.
The impact of AI on emotional well-being is also conditioned by structural and contextual factors. Makridis and Mishra (2022) point out that artificial intelligence as a service has stimulated economic growth but not necessarily emotional equity, concentrating benefits among those with greater technological literacy. Moghayedi et al. (2024) reveal that, in the workplaces of the Global South, AI adoption generates both inclusion opportunities and new forms of subjective precarity. In the same vein, Tornero-Costa et al. (2023) note that much research on AI and mental health presents methodological limitations, such as lack of sample diversity or excessive reliance on quantitative metrics, which prevents a full understanding of the cultural complexity of digital well-being. These findings reinforce the need for plural and ethically sensitive approaches that acknowledge technological and affective inequalities across contexts.
From an ethical and regulatory perspective, the 2020–2025 period was marked by a mismatch between the speed of technological advancement and the regulatory capacity to ensure responsible use. The IEEE 7010 standard (Schiff et al., 2020) represented a pioneering effort by including dimensions such as fairness, autonomy, and subjective satisfaction. However, recent studies (Jeyaraman et al., 2023; Sood & Gupta, 2025) show that problems of emotional privacy, algorithmic manipulation, and affective dependence persist. In sectors like hospitality, Wang and Uysal (2024) introduced the notion of “AI-assisted mindfulness,” a practice that promotes emotional self-regulation but can also replace human introspection with automated scripts. The exploitation of affective data and the opacity of predictive models have given rise to a new form of psychological vulnerability: involuntary emotional exposure. In this regard, it becomes urgent to design systems that preserve emotional dignity and the right to affective silence.
Despite these risks, the reviewed studies agree that AI can be a valuable resource for promoting collective well-being, provided it is embedded within ethical and humanistic frameworks. Dhimolea et al. (2022) and Thakkar et al. (2024) emphasize that AI can enhance emotional intelligence and resilience when interactions with users are authentic and transparent (Prentice et al., 2020). Similarly, Ozmen Garibay et al. (2023) identify six core challenges for human-centered AI, highlighting the development of emotionally sensitive, ethically responsible, and culturally inclusive systems. These principles delineate the horizon toward which public policy and applied research should be directed: an artificial intelligence that amplifies, rather than replaces, human empathetic and relational capacities.
In summary, this critical reflection offers a balanced reading of emotional well-being in the age of artificial intelligence, avoiding both idealization and skepticism. The theoretical gap addressed—the fragmentation of studies on AI and well-being across clinical, organizational, and educational perspectives—is bridged through a synthesis that articulates these dimensions within a common psychosocial and ethical axis. Emotional well-being thus emerges as a hybrid construct shaped by the continuous interaction between human and artificial intelligence, where technology can both alleviate and intensify distress.
Between 2020 and 2025, AI has consolidated itself as an “emergent emotional agent”: a system that does not feel but provokes emotions, reorganizes relationships, and redefines how people seek support, recognition, and meaning. This phenomenon opens unprecedented opportunities, such as broader access to mental health and the expansion of digital emotional competencies, but also raises urgent challenges: ensuring emotional equity, preserving affective authenticity, and developing regulatory frameworks that protect psychological autonomy in an increasingly automated environment.
Consistent with the proposed conceptual model, the findings discussed here confirm that AI-mediated emotional well-being constitutes an interdependent system operating on three levels: technological, psychosocial, and ethical. Empirical evidence supports this structure, showing that the most significant transformations arise not only from technical innovation but also from the forms of interaction and moral dilemmas that emerge from it (Park et al., 2023; Thakkar et al., 2024). AI therefore acts as both a relational mediator and emotional modulator, simultaneously generating opportunities for psychological support and risks of depersonalization (Dhimolea et al., 2022; Shahzad et al., 2024). This framework allows for understanding the results not as isolated effects but as manifestations of a hybrid ecosystem in which technology, subjectivity, and ethics intertwine. Thus, the conceptual model serves as an explanatory foundation integrating the different domains of analysis—health, work, and education—and outlines a future agenda oriented toward constructing a more human, conscious, and sustainable form of digital well-being.

4.1. Theoretical Implications

The analysis of the psychosocial impact of artificial intelligence (AI) between 2020 and 2025 allows for rethinking the relationship between technology, emotions, and human well-being from an interdisciplinary perspective. The findings suggest that approaches focused exclusively on technological efficiency or instrumental benefits are insufficient to understand the phenomenon. It is necessary to advance toward models that integrate the emotional, relational, and ethical dimensions of human–machine interaction. Within this framework, AI should be understood not merely as a functional tool but as a relational agent that actively participates in shaping bonds, perceptions of social support, and the construction of the digital self (Lin & Chen, 2024; Velastegui-Hernández et al., 2023).
One of the main theoretical contributions of this study lies in proposing an integrative perspective on AI-mediated emotional well-being, articulating three dimensions: psychological, social, and technological. The first refers to internal processes of self-regulation and emotional self-perception modified by constant exposure to automated systems; the second concerns the reconfiguration of interpersonal bonds and the emergence of new forms of digital community; and the third encompasses the algorithmic mechanisms that interpret, recommend, and model emotional responses. This vision recognizes that contemporary affective experience is not produced in isolation but in hybrid environments where human emotionality is partially coded and reinterpreted by AI (Shahzad et al., 2024; Sood & Gupta, 2025). In this way, the proposal contributes to establishing a theoretical foundation for future comparative research on the impact of artificial intelligence on emotional health and the relational sustainability of digital environments.
The study also contributes to the debate on the “digital affective paradox,” understood as the tension between emotional connection and disconnection in hyperconnected societies. AI amplifies this paradox: it provides companionship and support but can also generate emotional dependence on non-human systems. This dynamic challenges classical notions of autonomy, emotional presence, and relational authenticity, demanding the revision of concepts such as digital empathy—understood as the capacity of interfaces to simulate emotional understanding—and authentic emotional well-being, referring to the congruence between human feeling and technological mediation (Gual-Montolio et al., 2022; Moghayedi et al., 2024).
Furthermore, the study proposes broadening the conceptual framework of well-being psychology through the notion of algorithmic emotional well-being, understood as the way algorithms filter, quantify, and modulate human emotions. This concept does not imply a pessimistic vision but rather invites reflection on how emotional personalization processes—present in self-care applications or mHealth platforms—are redefining emotional balance management. During the 2020–2025 period, the proliferation of such tools demonstrated that technological mediation can both enhance self-care capacities and reduce emotional autonomy, depending on users’ levels of digital literacy and critical awareness (Alam et al., 2021).
From a sociotechnical perspective, this work invites consideration of AI as a new social actor that helps determine collective well-being. Artificial intelligence not only reflects human emotions but also shapes them through its capacity to anticipate affective states, recommend behaviors, and offer simulated empathetic responses. In this sense, understanding AI entails recognizing its agency within a shared emotional ecology between humans and machines.
Finally, this theoretical reflection reinforces the need for an interdisciplinary framework linking social psychology, technological ethics, and digital communication studies. Such integration would make it possible to explain emerging phenomena such as emotional depersonalization, affective hyperconnectivity, or the delegation of emotional decisions to automated systems (Santiago-Torner et al., 2025b). Altogether, these contributions help fill the knowledge gap identified in this article by offering a solid conceptual foundation for future research aimed at understanding how AI redefines emotional experience and well-being dynamics in the digital era.

4.2. Practical Implications

The practical implications derived from this critical reflection invite a rethinking of how individuals, organizations, and institutions integrate artificial intelligence (AI) into the promotion of psychological well-being. Beyond technical or ethical debates, the analysis reveals that decisions regarding the design, implementation, and use of AI have direct consequences for collective emotional health and the quality of human relationships.
At the individual level, the findings highlight the need to strengthen emotional and critical digital education, enabling people to discern when interaction with intelligent systems enhances their well-being or, conversely, generates dependency or loss of autonomy (Santiago-Torner, 2024). It is essential to learn how to establish healthy boundaries with technologies that offer constant emotional companionship. Self-care applications and therapeutic chatbots developed between 2020 and 2025 demonstrated their value in contexts of stress or loneliness but also revealed the risk of replacing human interaction and eroding emotional self-regulation (Alhuwaydi, 2024; Chin et al., 2023; Denecke et al., 2021; Moghayedi et al., 2024). Therefore, well-being programs should include strategies to promote self-awareness, conscious management of digital time, and voluntary disconnection.
At the organizational level, companies incorporating AI into their workflows must understand that employee well-being depends not only on technological efficiency but also on the quality of relationships that technology facilitates. Automation can enhance productivity but may also generate digital fatigue, feelings of surveillance, or loss of community (Velastegui-Hernández et al., 2023). Leaders must develop emotional and ethical competencies that allow them to integrate technology through a humanizing approach. Training in conscious digital leadership emerges as a key axis for promoting hybrid environments where AI complements rather than replaces empathy and authentic interaction (Santiago-Torner, 2023).
At the institutional and public policy levels, the results underscore the urgency of establishing regulatory frameworks that govern the design and use of AI systems according to criteria of emotional well-being, equity, and social justice. This entails ensuring algorithmic transparency, affective data protection, and the ethical accountability of developers. Moreover, inclusive digital policies are needed to reduce technological and cultural divides. AI can democratize access to mental health care, provided that its development is guided by values of respect, care, and equity (Lin & Chen, 2024).
In health and education systems, it is pertinent to incorporate emotional assessment protocols for technological impact, especially in contexts where AI replaces or mediates human interaction. Such protocols would allow early identification of risks of depersonalization, digital anxiety, or loss of belonging. In the educational field, AI-based adaptive learning platforms could integrate emotional support modules that not only personalize content but also nurture students’ subjective experiences.
Finally, at the community and social levels, the implications point toward building an emotionally sustainable technological culture. This involves promoting practices that value empathy, cooperation, and ethical reflection on everyday uses of AI. Digital communities can become spaces for support and learning but also for misinformation and affective isolation. Therefore, digital education must be complemented by collective emotional literacy aimed at recovering the human meaning of technology and fostering conscious digital coexistence.
Taken together, these practical implications reinforce the notion that emotional well-being in the age of artificial intelligence depends not only on technical progress but also on the ethical and relational maturity of the societies that adopt it (Santiago-Torner et al., 2025a). Promoting a balanced relationship with AI requires inclusive policies, empathic leadership, and an emotionally competent citizenry capable of inhabiting the digital environment without losing their humanity.

4.3. Limitations and Future Research Lines

This study is situated within a historical period, 2020 to 2025, marked by unprecedented acceleration in the development and adoption of artificial intelligence (AI) technologies. Although this time frame is suitable for capturing recent transformations in the relationship between technology and emotional well-being, it also entails a limitation: the pace of change surpasses the academic capacity to construct stable theoretical frameworks. Consequently, the findings should be interpreted as an analytical snapshot of a phenomenon in full expansion rather than as a definitive synthesis.
Another relevant limitation derives from the theoretical and reflective nature of the adopted approach. The article does not present direct empirical evidence but rather a critical review supported by recent scientific literature. While this approach makes it possible to identify trends, ethical dilemmas, and conceptual gaps, it restricts the capacity to quantitatively assess the effects of AI on specific psychological variables such as anxiety, empathy, or subjective well-being. Therefore, future research should integrate mixed methodologies—quantitative, qualitative, and experimental—that triangulate empirical and conceptual evidence to expand the understanding of the mechanisms through which algorithmic mediation influences emotional health.
It is also necessary to consider the heterogeneity of the cultural, technological, and socioeconomic contexts analyzed. The adoption and acceptance of AI tools, such as virtual assistants, mHealth systems, or therapeutic chatbots, vary significantly across regions and social groups. These differences shape the experience of digital well-being and limit the generalizability of results. Consequently, it is essential to advance toward comparative and cross-cultural studies that explore how social, educational, and economic factors modulate the relationship between AI use and emotional well-being. In particular, perspectives from the Global South should be incorporated, where access, infrastructure, and ethical regulation gaps create emotionally vulnerable scenarios that remain underexplored (Santiago-Torner et al., 2024).
Added to this contextual diversity is the scarcity of longitudinal studies examining the prolonged effects of emotional interaction with intelligent systems. Most of the reviewed works are based on short-term observations or experimental contexts, failing to capture how sustained use of these technologies modifies attachment patterns, self-regulation, or perceptions of social support. Exploring the temporal evolution of these processes would allow distinguishing between healthy emotional adaptation and affective dependency on algorithmic systems.
Regarding future lines of research, the analysis suggests the need to deepen the study of algorithmic affectivity, understood as the capacity of intelligent systems to detect, classify, and modulate emotions. Examining its psychological and ethical implications would help develop more transparent regulatory frameworks and prevent forms of automated emotional manipulation or bias. Similarly, advancing research on digital emotional identity—that is, how individuals construct and express emotions in algorithmically mediated environments—is crucial. The increasing personalization of digital interactions is changing how people feel, connect, and understand their emotions, prompting reflection on how these transformations affect our ways of being and relating.
Another area of interest lies in AI-assisted emotional regulation, an emerging field that raises questions about whether these technologies enhance autonomy or generate new psychological dependencies. Analyzing this balance would allow the design of more humanized support tools capable of fostering self-regulation without replacing introspective processes. Likewise, the ethics of algorithmic well-being stands as a central dimension of future debate: understanding how automated decisions affect equitable access to psychological well-being requires public policies ensuring transparency, accountability, and users’ emotional protection.
Finally, it is crucial to investigate hybrid environments of emotional interaction, where human and digital bonds coexist and intertwine. Analyzing how empathy, trust, and support are distributed in these contexts will help move beyond traditional dichotomies between the human and the artificial and advance toward a relational, situated, and culturally sensitive understanding of emotions in the digital era.
Altogether, these research lines open pathways toward a more critical, ethical, and contextual comprehension of emotional well-being in times of artificial intelligence. The main challenge will be to construct integrative theoretical models that not only describe technological effects but also incorporate the cultural, moral, and affective dimensions shaping human experience in the algorithmic age.

5. Conclusions

The analysis conducted between 2020 and 2025 reveals that artificial intelligence (AI) has not only transformed work, education, and healthcare environments but has also reshaped the ways people feel, relate, and understand emotional well-being. More than a technical innovation, AI has become a relational agent capable of influencing affective life and the ways individuals construct meaning, support, and belonging in a digitalized world.
The main contribution of this reflection lies in situating emotional well-being as a relational and socially mediated phenomenon that cannot be reduced to clinical indicators or promises of technological efficiency. In an era marked by emotional automation, this article proposes rethinking well-being through the interaction between humans and intelligent systems, recognizing autonomy, authenticity, and empathy as essential dimensions of collective psychological health.
The reviewed evidence demonstrates that AI acts as an ambivalent catalyst: it broadens opportunities for access to psychological assistance, creativity, and emotional learning but also introduces new vulnerabilities related to affective dependency, depersonalization, and loss of authenticity. This duality underscores that digital well-being depends not only on the technical sophistication of systems but also on the ethical and human quality of their design and application.
Likewise, the developed reflection highlights the need for interdisciplinary dialogue among psychology, ethics, engineering, and the social sciences. Understanding AI’s emotional role requires moving beyond fragmented perspectives that regard it solely as a functional tool or a dehumanizing threat. Instead, it should be understood as an emerging actor within a hybrid emotional ecology, where the boundaries between human and artificial become porous and where technology can serve as both a means of emotional emancipation and symbolic control.
The distinctive value of this work lies in offering a balanced, critical, and contextual reading of the link between AI and well-being. By integrating clinical, organizational, and educational perspectives under a unified psychosocial and ethical framework, the study demonstrates that technology does not determine human well-being but co-constructs it alongside cultural, relational, and moral factors. In this sense, emotional well-being is conceived as a hybrid and dynamic experience shaped by the continuous interaction between human and artificial intelligence.
In summary, this article calls for reimagining the future of well-being through an ethical and humanistic lens. Rather than asking whether AI improves or deteriorates mental health, the challenge lies in defining the conditions under which technology can be responsibly integrated into people’s emotional lives. Sustainable digital well-being requires empathy, transparency, and a vision of technological development centered on the human being. Ultimately, preserving humanity in the algorithmic age does not mean rejecting AI but learning to coexist with it without renouncing what defines us: the capacity to feel, understand, and care for others.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee of the University of Vic—Central University of Catalonia (protocol code 170/2021; date of approval: 20 July 2021).

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: artificial intelligence

References

  1. Alam, M. M. D., Alam, M. Z., Rahman, S. A., & Taghizadeh, S. K. (2021). Factors influencing mHealth adoption and its impact on mental well-being during COVID-19 pandemic: A SEM-ANN approach. Journal of Biomedical Informatics, 116(1), 103722. [CrossRef]
  2. Alhuwaydi, A. M. (2024). Exploring the role of artificial intelligence in mental healthcare: Current trends and future directions – a narrative review for a comprehensive insight. Risk Management and Healthcare Policy, 1339-1348. [CrossRef]
  3. Alhwaiti, M. (2023). Acceptance of artificial intelligence application in the post-COVID era and its impact on faculty members’ occupational well-being and teaching self-efficacy: A path analysis using the UTAUT 2 model. Applied Artificial Intelligence, 37(1), 2175110. [CrossRef]
  4. Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159-182. [CrossRef]
  5. Beg, M. J., Verma, M., M, V. C. K., & Verma, M. K. (2025). Artificial intelligence for psychotherapy: a review of the current state and future directions. Indian Journal of Psychological Medicine, 47(4), 314-325. [CrossRef]
  6. Booth, A., Martyn-St James, M., Clowes, M., & Sutton, A. (2021). Systematic approaches to a successful literature review (2nd ed.). Sage Publications.
  7. Cabrera, J., Loyola, M. S., Magaña, I., & Rojas, R. (2023). Ethical dilemmas, mental health, artificial intelligence, and llm-based chatbots. In International Work-Conference on Bioinformatics and Biomedical Engineering (pp. 313-326). Cham: Springer Nature Switzerland. [CrossRef]
  8. Chin, H., Song, H., Baek, G., Shin, M., Jung, C., Cha, M.,... & Cha, C. (2023). The potential of chatbots for emotional support and promoting mental well-being in different cultures: mixed methods study. Journal of Medical Internet Research, 25(1), e51712. [CrossRef]
  9. Cramarenco, R. E., Burcă-Voicu, M. I., & Dabija, D. C. (2023). The impact of artificial intelligence (AI) on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernicana, 14(3), 731-767. [CrossRef]
  10. Dai, Y., Chai, C. S., Lin, P. Y., Jong, M. S. Y., Guo, Y., & Qin, J. (2020). Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability, 12(16), 6597. [CrossRef]
  11. Denecke, K., Abd-Alrazaq, A., Househ, M. (2021). Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges. In: Househ, M., Borycki, E., Kushniruk, A. (eds) Multiple Perspectives on Artificial Intelligence in Healthcare. Lecture Notes in Bioengineering. Springer, Cham. [CrossRef]
  12. Dhimolea, T. K., Kaplan-Rakowski, R., & Lin, L. (2022). Supporting social and emotional well-being with artificial intelligence. In Bridging human intelligence and artificial intelligence (pp. 125-138). Cham: Springer International Publishing. [CrossRef]
  13. Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. [CrossRef]
  14. Gual-Montolio, P., Jaén, I., Martínez-Borba, V., Castilla, D., & Suso-Ribera, C. (2022). Using artificial intelligence to enhance ongoing psychological interventions for emotional problems in real- or close to real-time: A systematic review. International Journal of Environmental Research and Public Health, 19(13), 7737. [CrossRef]
  15. Jeyaraman, M., Balaji, S., Jeyaraman, N., & Yadav, S. (2023). Unraveling the ethical enigma: artificial intelligence in healthcare. Cureus, 15(8), e43262. [CrossRef]
  16. Lee, D., & Yoon, S. N. (2021). Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. International Journal of Environmental Research and Public Health, 18(1), 271. [CrossRef]
  17. Li, H., Zhang, R., Lee, Y. C., Kraut, R. E., & Mohr, D. C. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digital Medicine, 6(1), 236. [CrossRef]
  18. Lin, H., & Chen, Q. (2024). Artificial intelligence (AI)-integrated educational applications and college students’ creativity and academic emotions: Students and teachers’ perceptions and attitudes. BMC Psychology, 12(1), 487. [CrossRef]
  19. Makridis, C. A., & Mishra, S. (2022). Artificial intelligence as a service, economic growth, and well-being. Journal of Service Research, 25(4), 505-520. [CrossRef]
  20. Malik, N., Tripathi, S. N., Kar, A. K., & Gupta, S. (2022). Impact of artificial intelligence on employees working in industry 4.0 led organizations. International Journal of Manpower, 43(2), 334-354. [CrossRef]
  21. Mendy, J., Jain, A., & Thomas, A. (2025). Artificial intelligence in the workplace–challenges, opportunities and HRM framework: a critical review and research agenda for change. Journal of Managerial Psychology, 40(5), 517-538. [CrossRef]
  22. Moghayedi, A., Michell, K., Awuzie, B., & Adama, U. J. (2024). A comprehensive analysis of the implications of artificial intelligence adoption on employee social well-being in South African facility management organizations. Journal of Corporate Real Estate, 26(3), 237-261. [CrossRef]
  23. Murugesan, U., Subramanian, P., Srivastava, S., & Dwivedi, A. (2023). A study of artificial intelligence impacts on human resource digitalization in industry 4.0. Decision Analytics Journal, 7(1), 100249. [CrossRef]
  24. Nashwan, A. J., Gharib, S., Alhadidi, M., El-Ashry, A. M., Alamgir, A., Al-Hassan, M.,... & Abufarsakh, B. (2023). Harnessing artificial intelligence: strategies for mental health nurses in optimizing psychiatric patient care. Issues in Mental Health Nursing, 44(10), 1020-1034. [CrossRef]
  25. Olawade, D. B., Wada, O. Z., Odetayo, A., David-Olawade, A. C., Asaolu, F., & Eberhardt, J. (2024). Enhancing mental health with artificial intelligence: Current trends and future prospects. Journal of Medicine, Surgery, and Public Health, 3(1), 100099. [CrossRef]
  26. Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C.,... & Xu, W. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, 39(3), 391-437. [CrossRef]
  27. Pataranutaporn, P., Danry, V., Leong, J., Punpongsanon, P., Novy, D., Maes, P., & Sra, M. (2021). AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12), 1013-1022. [CrossRef]
  28. Prentice, C., Dominique Lopes, S., & Wang, X. (2020). Emotional intelligence or artificial intelligence–an employee perspective. Journal of Hospitality Marketing & Management, 29(4), 377-403. [CrossRef]
  29. Santiago-Torner, C. (2023). Relación entre liderazgo ético y motivación intrínseca: El rol mediador de la creatividad y el múltiple efecto moderador del compromiso de continuidad [Relationship between ethical leadership and intrinsic motivation: The mediating role of creativity and the multiple moderating effect of continuance commitment]. Revista de Métodos Cuantitativos para la Economía y la Empresa, 36(1), 1-27. [CrossRef]
  30. Santiago-Torner, C. (2024). Creativity and emotional exhaustion in virtual work environments: The ambiguous role of work autonomy. European Journal of Investigation in Health, Psychology and Education, 14(7), 2087-2100. [CrossRef]
  31. Santiago-Torner, C., Corral-Marfil, J. A., & Tarrats-Pons, E. (2024). The relationship between ethical leadership and emotional exhaustion in a virtual work environment: A moderated mediation model. Systems, 12(11), 454. [CrossRef]
  32. Santiago-Torner, C., Corral-Marfil, J. A., Jiménez-Pérez, Y., & Tarrats-Pons, E. (2025a). Impact of ethical leadership on autonomy and self-efficacy in virtual work environments: The disintegrating effect of an egoistic climate. Behavioral Sciences, 15(1), 95. [CrossRef]
  33. Santiago-Torner, C., Jiménez-Pérez, Y., & Tarrats-Pons, E. (2025b). Ethical climate, intrinsic motivation, and affective commitment: The impact of depersonalization. European Journal of Investigation in Health, Psychology and Education, 15(4), 55. [CrossRef]
  34. Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020). IEEE 7010: A new standard for assessing the well-being implications of artificial intelligence. In 2020 IEEE international conference on systems, man, and cybernetics (SMC) (pp. 2746-2753). IEEE. [CrossRef]
  35. Sedlakova, J., & Trachsel, M. (2023). Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent?. The American Journal of Bioethics, 23(5), 4-13. [CrossRef]
  36. Shahzad, M. F., Xu, S., Lim, W. M., Yang, X., & Khan, Q. R. (2024). Artificial intelligence and social media on academic performance and mental well-being: Student perceptions of positive impact in the age of smart learning. Heliyon, 10(8), e2952. [CrossRef]
  37. Shaikh, F., Afshan, G., Anwar, R. S., Abbas, Z., & Chana, K. A. (2023). Analyzing the impact of artificial intelligence on employee productivity: the mediating effect of knowledge sharing and well-being. Asia Pacific Journal of Human Resources, 61(4), 794-820. [CrossRef]
  38. Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. [CrossRef]
  39. Sood, M. S., & Gupta, A. (2025). The impact of artificial intelligence on emotional, spiritual and mental wellbeing: Enhancing or diminishing quality of life. American Journal of Psychiatric Rehabilitation, 28(1), 298-312. [CrossRef]
  40. Tang, P. M., Koopman, J., Mai, K. M., De Cremer, D., Zhang, J. H., Reynders, P.,... & Chen, I. (2023). No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. Journal of Applied Psychology, 108(11), 1766. [CrossRef]
  41. Thakkar, A., Gupta, A., & De Sousa, A. (2024). Artificial intelligence in positive mental health: A narrative review. Frontiers in Digital Health, 6(1), 1280235. [CrossRef]
  42. Tornero-Costa, R., Martinez-Millana, A., Azzopardi-Muscat, N., Lazeri, L., Traver, V., & Novillo-Ortiz, D. (2023). Methodological and quality flaws in the use of artificial intelligence in mental health research: systematic review. JMIR Mental Health, 10(1), e42045. [CrossRef]
  43. Torraco, R. J. (2016). Writing integrative literature reviews: Using the past and present to explore the future. Human Resource Development Review, 15(4), 404–428. [CrossRef]
  44. Tuomi, I. (2022). Artificial intelligence, 21st century competences, and socio-emotional learning in education: More than high-risk?. European Journal of Education, 57(4), 601-619. [CrossRef]
  45. Uysal, E., Alavi, S., & Bezençon, V. (2022). Trojan horse or useful helper? A relationship perspective on artificial intelligence assistants with humanlike features. Journal of the Academy of Marketing Science, 50(6), 1153-1175. [CrossRef]
  46. Velastegui-Hernández, D. C., Rodríguez-Pérez, M. L., & Salazar-Garcés, L. F. (2023). Impact of artificial intelligence on learning behaviors and psychological well-being of college students. Salud, Ciencia y Tecnología - Serie de Conferencias, 2(1), 582. [CrossRef]
  47. Vistorte, A. O. R., Deroncele-Acosta, A., Ayala, J. L. M., Barrasa, A., López-Granero, C., & Martí-González, M. (2024). Integrating artificial intelligence to assess emotions in learning environments: A systematic literature review. Frontiers in Psychology, 15(1), 1387089. [CrossRef]
  48. Wang, Y. C., & Uysal, M. (2024). Artificial intelligence-assisted mindfulness in tourism, hospitality, and events. International Journal of Contemporary Hospitality Management, 36(4), 1262-1278. [CrossRef]
  49. Xu, G., Xue, M., & Zhao, J. (2023). The relationship of artificial intelligence opportunity perception and employee workplace well-being: A moderated mediation model. International Journal of Environmental Research and Public Health, 20(3), 1974. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.