Preprint
Article

This version is not peer-reviewed.

Multi-Domain Self-Esteem Profiles Associated with Adolescent Engagement with Conversational AI: A Latent Profile Analysis

Submitted: 19 March 2026
Posted: 20 March 2026


Abstract
This study examines whether multi-domain self-esteem profiles differentiate adolescents’ interactions with conversational AI. Data come from a national school-based survey of Czech adolescents aged 13–17 (N = 42,772). Latent profile analysis based on Home, School, and Peer self-esteem identified three groups: high, moderate, and low self-esteem. These profiles were compared across AI-related domains, including social substitution, emotional regulation and self-disclosure, intimacy-related use, and perceived ease and safety of AI communication. Adolescents with low self-esteem reported the highest levels of friend-like interaction with AI, stronger beliefs that AI can replace human relationships, more frequent use for emotional support and disclosure, greater intimacy-related engagement, and stronger perceptions of AI as easier and safer than human communication. They also reported the highest psychological distress.

1. Introduction

The domain of adolescent social development is currently undergoing a significant transformation, the scope of which can be compared to the emergence of social networking sites at the beginning of the 21st century. Whereas previous decades were dominated by computer-mediated communication (CMC), in which digital technologies primarily functioned as channels between human actors, recent developments increasingly point toward interactions between humans and artificial intelligence (Gambo & Özad, 2020; Jiang et al., 2024).
This shift is particularly evident in the widespread adoption of conversational AI chatbots capable of simulating human-like dialogue, producing emotionally congruent responses, and sustaining long-term interaction patterns with users (Almoqbel, 2024). During the 2020s, the phenomenon of so-called synthetic relationships—previously confined largely to science fiction literature and film—has thus become part of the everyday experience of millions of adolescents worldwide (Faverio & Sidoti, 2025).
Unlike social networking platforms, where interactions primarily occur between human users, AI chatbots represent a qualitatively distinct form of social experience in which the interaction partner is non-human yet socially responsive. This development raises fundamental questions regarding adolescent social and emotional development, particularly in relation to the formation of relational expectations, identity construction, and emotion regulation. Despite their potential developmental significance, these processes remain insufficiently examined in the context of adolescents’ interactions with conversational AI systems, largely due to the relative novelty and rapid recent adoption of these technologies (Goldman & Poulin-Dubois, 2024; Loos & Ivan, 2025; Piombo et al., 2025). In particular, there is a lack of research examining which adolescents are more likely to engage in conversational interactions with AI chatbots and how individual psychosocial profiles shape such engagement (Herbener & Damholdt, 2025; Vanhoffelen et al., 2025).
Beyond empirical studies, several widely discussed case reports and media investigations have raised concerns about potential harms associated with prolonged or emotionally intensive interactions with conversational AI systems, including reports involving self-harm and suicide (Malfacini, 2025; Moore et al., 2025). While these accounts cannot establish causality, they point to real-world scenarios that demand careful empirical scrutiny, especially in adolescents (Montgomery, 2024; Ortutay, 2025; Yousif, 2025). From this perspective, research that identifies which adolescents are more likely to engage in emotionally intensive interactions with AI chatbots may also inform early risk detection and prevention efforts.
The present study aims to examine adolescents’ interactions with conversational AI chatbots from a person-centered perspective. Using latent profile analysis (LPA), we identify distinct psychological profiles of adolescents based on self-reported psychosocial characteristics derived from Hare’s scales, capturing dimensions of social functioning, family relationships, and school experiences.
Building on these profiles, the study investigates whether adolescents differ in their use of conversational AI chatbots, their emotional experiences during human–AI interaction, and their attitudes toward AI as a potential social or relational partner. Specifically, we examine profile-level differences in the frequency and purposes of chatbot use, perceived emotional support, self-disclosure, and beliefs about AI’s capacity to substitute or complement human relationships, drawing on data from the AI-focused section of a large-scale national survey.
By integrating psychological profiling with adolescents’ reported experiences of conversational AI, this study contributes to the emerging literature on human–AI interaction by highlighting how individual differences shape engagement with socially responsive AI systems during a critical developmental period.

1.1. Literature review

1.1.1. Adoption and Use of Conversational AI Among Adolescents

Evidence suggests that regular engagement with conversational AI among young people is already increasing and may continue to rise as these technologies become further integrated into digital platforms and youth culture (Faverio & Sidoti, 2025). Age-related differences in AI adoption have been reported, although findings remain inconsistent. Klarin et al. (2024) observed substantially higher rates of generative AI use among older adolescents (M = 17 years; 52.6%) compared to younger adolescents (M = 14 years; 14.8%). In contrast, Vanhoffelen et al. (2025) found that younger adolescents were more likely to engage with a conversational AI chatbot (My AI) and reported more positive emotional experiences during these interactions. Notably, this study did not identify differences based on gender or socioeconomic status. Extending beyond adolescence, Willoughby et al. (2025) showed that young adults aged 18–29 exhibited approximately twice the level of engagement with AI technologies designed for romantic or sexual interaction compared to adults aged 30 and older, indicating that age gradients in AI engagement may vary across functional domains of AI use.
Cross-national findings further underscore the variability of AI adoption. In a Norwegian adolescent sample (M = 17 years), nearly half of respondents reported not using social AI regularly (Brandtzaeg et al., 2025). As in other studies (Herbener & Damholdt, 2025; Klarin et al., 2024; Malfacini, 2025; Moore et al., 2025), the authors emphasized that data collection coincided with an early phase of widespread AI adoption, cautioning against interpreting these figures as stable prevalence estimates. Taken together, these findings highlight the importance of contextualizing reported usage rates within specific temporal, cultural, and technological conditions. Given the rapid evolution of generative AI, patterns of adolescent use are likely to continue changing at a pace that challenges traditional cross-sectional interpretations.

1.1.2. Conversational AI as Emotional Support and Coping Mechanism

A growing body of research indicates that a subset of adolescents engages with conversational AI for emotionally supportive purposes. Population-based evidence suggests that approximately 13–15% of adolescents and young people report using generative AI to seek advice or support when experiencing negative emotional states such as sadness, anxiety, or distress (McBain et al., 2025). Complementing these prevalence estimates, qualitative and typological analyses indicate that a smaller subgroup of users engages with chatbots in interactional patterns that resemble socially supportive or companion-like conversations rather than purely instrumental use (Herbener & Damholdt, 2025). A similar distinction is suggested by Brandtzaeg et al. (2025), who identified different types of use within the category of Support and Companionship, with informational and instrumental support being more common than emotional and appraisal support.
Herbener and Damholdt (2025) further distinguished two dominant modes of adolescent chatbot engagement—utilitarian use and socially supportive use—and showed that adolescents who primarily engaged in socially supportive interactions reported significantly higher levels of loneliness and lower perceived social support compared with both non-users and predominantly utilitarian users. These findings suggest that emotionally oriented engagement with conversational AI is more common among adolescents with pre-existing social vulnerabilities, rather than reflecting a uniformly adaptive form of digital social interaction.
These socially supportive interactions were most frequently initiated during periods of negative mood, self-disclosure needs, and feelings of isolation. Importantly, participants rarely described chatbots as friends, instead framing them as tools for emotion regulation. These findings suggest that conversational AI may function as a compensatory mechanism for adolescents experiencing social disconnection.
Comparable patterns have been observed among adults, with individuals experiencing distress or lacking human companionship seeking emotional support from social chatbots (Xie & Pentina, 2022). Theoretical perspectives offer competing explanations for these dynamics. The displacement hypothesis posits that AI-mediated interactions may substitute for human relationships and exacerbate loneliness, whereas the stimulation hypothesis suggests that such technologies may create new opportunities for social engagement and, in some cases, strengthen offline relationships (Maples et al., 2023). Empirical evidence remains mixed. While Herbener and Damholdt’s (2025) findings point toward compensatory patterns more consistent with displacement, Maples et al. (2023) suggest that conversational AI can function as an important source of emotional support for socially or structurally underserved individuals, particularly when access to traditional support services is limited.

1.1.3. AI and Adolescent Mental Health

Although AI offers advantages such as continuous availability and reduced barriers to access (Choudhary & Sybol, 2025; Kostenius et al., 2024; Maples et al., 2023; Zhang et al., 2024), it cannot replace therapeutic relationships. Large language models have been shown to produce inappropriate or stigmatizing responses in mental health contexts (Moore et al., 2025) and appear more reliable in less complex or more structured conditions (Levkovich, 2025). Social chatbots thus represent a double-edged phenomenon, offering short-term benefits while posing potential risks for vulnerable users (Brandtzaeg et al., 2025; Laestadius et al., 2024).
Experimental evidence suggests that emotional disclosure to AI can reduce moderate to severe negative affect in the short term (Hu et al., 2025). Nevertheless, large-scale survey research raises concerns that higher overall engagement with AI, including conversational use, may be weakly associated with increased depressive symptoms and reduced life satisfaction (Willoughby et al., 2025).

1.1.4. Problematic and Compensatory Use of AI

Emerging research has described patterns consistent with problematic AI chatbot use (PACU), characterized by reduced self-reliance and increased withdrawal from real-world social environments (Xie & Pentina, 2022; Yao et al., 2025). Strong emotional attachment to chatbots may foster compulsive use and psychological dependence (Pentina et al., 2023), mirroring patterns observed in other forms of digital addiction (Sun & Zhang, 2021).
The compensatory internet use theory provides a useful framework for understanding these dynamics, conceptualizing excessive AI use as a response to unmet psychosocial needs (Gori et al., 2023; Kardefelt-Winther, 2014). Individuals with low self-esteem appear particularly vulnerable, as they may seek emotional validation and escape through AI-mediated interactions (Ahmed et al., 2021). Conversely, higher self-esteem and better self-regulation serve as protective factors (Kim & Koh, 2018).
Family context also plays a critical role. Piombo et al. (2025) identified a subgroup of adolescents with low perceived family support who engaged more intensively with AI, frequently shared personal data, sought behavioral advice, and expressed greater trust in AI than in parents or peers. Such findings underscore the importance of examining psychosocial profiles when investigating adolescent AI use.

1.2. Current study

Building on the reviewed literature, existing research on adolescents’ interactions with conversational AI has predominantly relied on variable-centered approaches, focusing on average associations between AI use and isolated psychosocial indicators such as loneliness, anxiety, or perceived social support. While informative, such approaches may obscure meaningful heterogeneity in adolescents’ experiences, particularly given evidence that emotionally supportive and potentially risky forms of AI engagement are concentrated among specific subgroups rather than distributed uniformly across the population.
Adolescence is characterized by pronounced variability in social functioning, family relationships, and school adjustment, all of which may shape how socially responsive technologies are perceived, used, and emotionally integrated into everyday life. From this perspective, person-centered methods are particularly well suited to capturing distinct configurations of psychosocial characteristics and examining how these configurations relate to different patterns of human–AI interaction.
Accordingly, the present study adopts a latent profile analysis (LPA) to identify distinct psychosocial profiles of adolescents based on self-reported indicators of social functioning, family support, and school-related experiences derived from Hare’s scales. LPA allows for the identification of qualitatively different subgroups within the population, providing a nuanced understanding of which adolescents are more likely to engage with conversational AI and how such engagement varies across profiles.
The overarching aim of the present study is to explore heterogeneity in adolescents’ interactions with conversational AI chatbots from a person-centered perspective. Specifically, the study addresses the following research questions:
RQ1: What distinct latent psychosocial profiles of adolescents can be identified based on indicators of social functioning, family support, and school experiences?
RQ2: How do these psychosocial profiles differ in adolescents’ endorsement of conversational AI chatbot use and substitution-oriented beliefs, including friend-like interactions with AI?
RQ3: How do psychosocial profiles differ in adolescents’ self-reported motives and experiences of human–AI interaction, including emotion-regulation–related use (e.g., support when lonely/sad), perceived psychological safety and ease of self-disclosure, and intimacy-related uses (e.g., questions about relationships/sexuality, practicing communication)?
To address these research questions, latent profile analysis will be conducted to identify psychosocial subgroups of adolescents. Profile membership will then be used to examine between-profile differences in conversational AI use patterns, emotional engagement, and attitudinal measures drawn from the AI-focused section of a large-scale national survey. Given the exploratory nature of the study, analyses are intended to generate empirically grounded insights and hypotheses for future research rather than to test confirmatory causal models. All outcomes reflect adolescents’ self-reported experiences and interpretations of conversational AI rather than objective properties of AI systems or verified interaction outcomes.

2. Methods

2.1. Participants and procedures

Data were collected as part of a large-scale national survey, [blinded for review], conducted by [blinded for review]. The initial sample consisted of 53,119 respondents. A rigorous data cleaning process was implemented to ensure validity. Respondents who provided irrelevant strings in numerical fields (e.g., “red wine”) or reported implausible ages (e.g., 1 or 80 years) were excluded, resulting in 50,151 valid cases. For the purposes of this study, the sample was further narrowed to the adolescent population aged 13 to 17 years, yielding a final dataset of N = 42,772 participants.
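The exclusion rules described above can be sketched in pandas; the column name `age` and the single-pass filtering logic are illustrative assumptions for this sketch, not the authors’ actual pipeline.

```python
import pandas as pd

def clean_survey(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning step: drop non-numeric and implausible ages.

    The column name `age` and the 13-17 window are assumptions for this
    sketch; the original cleaning procedure is not published in detail.
    """
    out = df.copy()
    # Free-text answers in the numeric field (e.g., "red wine") become NaN.
    out["age"] = pd.to_numeric(out["age"], errors="coerce")
    out = out.dropna(subset=["age"])
    # Restrict to the adolescent analytic sample (13-17 years).
    return out[out["age"].between(13, 17)]
```

Applied to the raw responses, this combines both reported exclusion criteria (invalid strings and implausible ages) with the age restriction in a single pass.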
The demographic composition of the final sample included 52.18% girls and 47.82% boys. The mean age was M = 14.81 years with a standard deviation of SD = 1.27. Data collection was conducted via Google Forms. The survey was distributed through teachers at primary and secondary schools across all regions of the Czech Republic. Informed consent was managed through a two-step process: (1) general parental consent for research activities and GDPR compliance signed at the beginning of the school year, and (2) participant assent at the start of the questionnaire. The administration took place during school hours under teacher supervision to ensure a controlled environment and high response rates. All data were anonymized in accordance with ethical standards for scientific research. The research design was reviewed and approved by the Ethics Commission for Science and Research of the [blinded for review] (approved on June 6, 2025).

2.2. Measures

Adolescent self-esteem was assessed using the shortened 18-item Hare Self-Esteem Scale (HARE). This multidimensional instrument evaluates perceived self-worth across three primary social contexts: Home, School, and Peers. Each subscale consists of six items rated on a 4-point Likert scale. To ensure data quality, reverse-scored items were included in accordance with the original methodology to prevent response bias.
  • Home subscale: Assesses the child’s perceived value and support within the family environment. In the current study, this subscale demonstrated high reliability, α = 0.85.
  • School subscale: Measures academic self-competence and perceived worth within the educational context. The reliability in the current sample was α = 0.71.
  • Peer subscale: Evaluates social acceptance and value among age-mates. The reliability reached α = 0.63.
Consistent with the work of Shoemaker (1980), these subscales were treated as relatively autonomous constructs. The total score was also calculated to provide an indicator of global self-esteem.
To validate the identified self-esteem profiles, psychological distress was assessed using the Patient Health Questionnaire-4 (PHQ-4). This brief four-item screening instrument measures the frequency of anxiety and depressive symptoms over the past two weeks. Items were rated on a 4-point scale ranging from 0 (not at all) to 3 (nearly every day).
For the purposes of the present study, responses were averaged across items to create a mean symptom severity score, with higher values indicating greater psychological distress.
Interaction with Artificial Intelligence (AI) was assessed using a set of 14 exploratory items developed specifically for the [blinded for review] survey to capture emerging forms of human–AI interaction that are not yet covered by standardized measurement instruments. To minimize order effects and reduce common method bias, the items were administered within a single section of the questionnaire. Based on their functional role in adolescents’ everyday lives, the items were conceptually organized into four thematic domains reflecting distinct modes of engagement with conversational AI.
The first domain, Social Substitution, assessed the extent to which adolescents perceived conversational AI as a potential replacement for human social relationships. Items in this domain captured both behavioral engagement and attitudinal endorsement, including experiences of talking to AI in a friend-like manner and beliefs about AI’s capacity to substitute for friendships or romantic relationships.
The second domain, Emotional Regulation and Self-Disclosure, focused on adolescents’ use of AI as a resource for managing negative emotions and sharing sensitive personal information. Items addressed perceived emotional support from AI, ease of confiding in AI compared to humans, and experiences of disclosing information associated with shame, loneliness, or emotional vulnerability.
The third domain, Practicing and Learning About Intimacy, examined the use of AI for exploring romantic, relational, and sexual topics. These items captured both informational and experiential uses of conversational AI, including discussions about sexuality, emotions, and love, as well as practicing communication skills relevant to real-life romantic interactions.
The fourth domain, Ease of Communication and Accessibility, assessed adolescents’ perceptions of AI as an accessible, non-judgmental, and comfortable interaction partner. Items reflected experiences of communication ease, emotional safety, enjoyment of interaction, and the perception that AI is consistently available and non-critical.
Given the exploratory nature of the AI-related items and the heterogeneous response formats employed (Likert-type and binary), all items were dichotomized prior to analysis. Likert-type items were recoded into binary indicators reflecting endorsement (agreement or reported engagement) versus non-endorsement, while binary items retained their original coding. This analytical decision was guided by the study’s focus on mapping the presence or absence of specific behavioral and experiential interaction patterns across identified psychosocial profiles rather than estimating scale-based latent constructs or continuous effects. All AI-related outcomes thus reflect adolescents’ self-reported experiences and interpretations of conversational AI rather than objective properties of AI systems or interaction quality. The full wording of all AI-related items is provided in the Supplementary Materials.
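As a concrete illustration of the recoding described above, the sketch below dichotomizes a 4-point Likert item into endorsement versus non-endorsement; the cutoff of 3 is a hypothetical example, since the exact thresholds appear only in the Supplementary Materials.

```python
import pandas as pd

def dichotomize(item: pd.Series, cutoff: int = 3) -> pd.Series:
    """Recode a Likert-type item to 1 (endorsed) / 0 (not endorsed).

    The cutoff value is an assumption for this sketch; binary items
    would bypass this step and retain their original 0/1 coding.
    """
    return (item >= cutoff).astype(int)
```

On a 1-4 scale, a cutoff of 3 treats agreement or reported engagement as endorsement and the two lower response options as non-endorsement.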

2.3. Statistical Analysis

To identify qualitatively distinct subgroups of adolescents who share similar observable characteristics (Weller et al., 2020) based on their self-esteem, latent profile analysis (LPA; Muthén, 2001) was performed in Python. The three subscales of the HARE (Home, School, Peer) served as continuous indicators. This approach enables researchers to move beyond the limitations of a single global score: two students may exhibit identical overall levels of self-evaluation while possessing markedly different underlying profile configurations.
A critical step in applying LPA is determining the optimal number of latent profiles. Model selection was guided by statistical fit indices (AIC, BIC, and entropy) together with theoretical interpretability. The methodological literature (Wang et al., 2011; Masyn, 2013; Spurk et al., 2020) emphasizes that model selection should not rely exclusively on statistical fit indices: a solution with favorable statistical indicators but lacking theoretical coherence is of limited practical value. Theoretical interpretability therefore constituted one of the primary criteria in selecting the final model (Weller et al., 2020).
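A minimal sketch of this model-selection loop, assuming scikit-learn’s `GaussianMixture` with diagonal covariances as the LPA implementation (the paper states only that Python was used, so the library choice and the normalized-entropy formula are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def profile_fit_indices(X, k_max=6, seed=0):
    """Fit 1..k_max profile solutions and collect AIC, BIC, and entropy.

    Assumes LPA is approximated by a Gaussian mixture with diagonal
    covariances; this is a sketch, not the authors' actual code.
    """
    rows = []
    for k in range(1, k_max + 1):
        gm = GaussianMixture(n_components=k, covariance_type="diag",
                             n_init=5, random_state=seed).fit(X)
        post = gm.predict_proba(X)
        # Normalized entropy: 1 = perfectly separated classes,
        # 0 = assignment no better than chance (defined for k >= 2).
        if k == 1:
            entropy = 1.0
        else:
            e = -np.sum(post * np.log(np.clip(post, 1e-12, None)))
            entropy = 1.0 - e / (X.shape[0] * np.log(k))
        rows.append({"k": k, "aic": gm.aic(X), "bic": gm.bic(X),
                     "entropy": entropy})
    return rows
```

Candidate solutions would then be screened for diminishing AIC/BIC improvements and adequate entropy, with the final choice resting on theoretical interpretability as described above.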
Following the classification, the identified profiles were compared regarding their AI interaction patterns. Furthermore, the PHQ-4 scores were used as external validation criteria to test whether the profiles differed significantly in their levels of anxiety and depression.

3. Results

3.1. Sample

The final analytical sample consisted of 42,772 adolescents. The gender distribution was relatively balanced, with a slight predominance of girls (52.18%) compared to boys (47.82%). The age composition corresponded to the adolescent developmental period and was distributed across the following age categories: 13 years (17.65%), 14 years (26.38%), 15 years (26.30%), 16 years (16.90%), and 17 years (12.77%), with the largest proportions represented by the 14-year-old and 15-year-old cohorts.
Descriptive statistics for the Hare Self-Esteem Scale (HARE) indicated moderate levels of perceived self-worth across relational domains, with relatively limited dispersion within the possible range of the 4-point response scale. The global HARE score yielded a mean of 2.79 (SD = 0.38) and a median of 2.83 (IQR = 2.56–3.06). Among the domain-specific subscales, the Home dimension demonstrated the highest central tendency (M = 3.14, SD = 0.62, Mdn = 3.17), indicating that adolescents evaluated their family-related self-worth more positively on average than their peer or school-related self-esteem (both Mdn = 2.67).
Psychological distress was assessed using the Patient Health Questionnaire-4 (PHQ-4). The total PHQ-4 score had a mean of 4.51 (SD = 3.41), a median of 4 (IQR = 2–7), and ranged from 0 to the maximum possible value of 12. These descriptive statistics indicate substantial variability in anxiety and depressive symptom severity across respondents.

3.2. Model fit and profile selection

Model fit indices for one- to six-profile solutions are presented in Table 1. As expected in large samples, both AIC and BIC decreased monotonically with increasing numbers of profiles, indicating improved statistical fit. However, the rate of improvement diminished after the three-profile solution, and subsequent decreases were relatively modest.
The three-profile model yielded the highest entropy (0.514), indicating the clearest class separation among the tested solutions. In contrast, solutions with four or more profiles showed lower entropy, suggesting reduced classification precision despite slightly improved information criteria.
Inspection of the profile structures further indicated that additional profiles primarily reflected quantitative subdivisions of intermediate groups rather than qualitatively distinct configurations. In other words, solutions with more than three profiles did not introduce substantively new patterns but instead split existing profiles into smaller segments without clear theoretical differentiation.
Given the large sample size, reliance solely on information criteria may lead to overextraction of profiles. Therefore, model selection was based on a combination of statistical indicators, classification quality, parsimony, and theoretical interpretability. The three-profile solution was retained as the most appropriate representation of the data.
Importantly, the identified profiles corresponded to a theoretically meaningful gradient of self-esteem (high, moderate, low), which is consistent with prior person-centered research showing that psychosocial constructs in adolescence often differentiate primarily by overall level rather than by distinct structural configurations. The profiles also demonstrated external validity, as they differed systematically in psychological distress (PHQ-4), further supporting their substantive relevance.

3.3. Profile characteristics

The latent profile analysis identified three distinct self-esteem profiles based on adolescents’ responses across the three HARE domains (Home, School, and Peer). Profiles differed primarily in overall level rather than domain-specific configuration. As illustrated in Figure 1, the profiles exhibited a parallel structure across domains, indicating a level effect. Across all subscales, profiles followed a consistent descending order (Profile 0 > Profile 1 > Profile 2), suggesting that differentiation was driven mainly by global self-esteem magnitude.
Profile 0 (High self-esteem; 31.32%, n = 13,396) demonstrated the highest median self-esteem scores across all domains (Peer = 2.83, Home = 3.83, School = 3.00). Adolescents in this profile also reported the lowest levels of psychological distress, with a median PHQ-4 total score of 3.
Profile 1 (Moderate self-esteem; 39.00%, n = 16,679), representing the largest group, displayed intermediate self-esteem levels across domains (Peer = 2.50, Home = 3.17, School = 2.67). Psychological distress was moderately elevated in this profile (PHQ-4 median = 4).
Profile 2 (Low self-esteem; 29.69%, n = 12,697) showed the lowest overall self-esteem across all domains (Peer = 2.33, Home = 2.50, School = 2.17). This group reported the highest distress levels, with a median PHQ-4 total score of 6.
Table 2. Median HARE domain scores and PHQ-4 psychological distress across Latent self-esteem profiles.
Profile     Peer   Home   School   PHQ-4 Total
Profile 0   2.83   3.83   3.00     3.0
Profile 1   2.50   3.17   2.67     4.0
Profile 2   2.33   2.50   2.17     6.0
Note. Values represent medians (Mdn). HARE subscale scores (Peer, Home, School) are based on a 4-point Likert scale, with higher values indicating greater self-esteem. PHQ-4 Total scores range from 0 to 12, with higher values indicating greater anxiety and depressive symptom severity.

3.4. Social substitution

Items in this domain assessed the extent to which adolescents engaged with AI chatbots as social substitutes and perceived them as potential replacements for human relationships. As shown in Table 3, endorsement rates increased systematically across self-esteem profiles, with the highest levels consistently observed in Profile 2.
The proportion of adolescents reporting that they had talked to an AI chatbot as a friend (regularly or sometimes) rose from 25% in Profile 0 to 43% in Profile 2. Similarly, agreement that AI could replace friends increased from 11% in Profile 0 to 23% in Profile 2. Endorsement of AI as a potential substitute for a romantic partner was less frequent overall but still showed an increasing trend across profiles (6% in Profile 0 vs. 12% in Profile 2). A comparable pattern emerged for the item indicating that AI understands the respondent better than other people, with endorsement rising from 8% in Profile 0 to 21% in Profile 2.
Odds ratios further confirmed these differences. Compared to adolescents in Profile 0, those in Profile 2 were substantially more likely to report social substitution tendencies, including talking to AI as a friend (OR = 2.24), perceiving AI as a replacement for friends (OR = 2.44), and feeling better understood by AI than by people (OR = 3.20). Profile 1 showed intermediate increases in likelihood (OR = 1.26–1.53), indicating a monotonic association between lower self-esteem and greater endorsement of social substitution items.
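The reported odds ratios can be approximated directly from the endorsement proportions above; the sketch below shows the computation (small discrepancies from the published values are expected because the paper’s ORs are presumably computed from unrounded counts):

```python
def odds_ratio(p_group: float, p_ref: float) -> float:
    """Odds ratio of endorsement in a profile relative to the reference
    profile (Profile 0), computed from endorsement proportions.

    Illustrative only: the paper's ORs are based on raw counts, so
    values derived from rounded percentages will differ slightly.
    """
    return (p_group / (1 - p_group)) / (p_ref / (1 - p_ref))
```

For example, the friend-like interaction item (43% in Profile 2 vs. 25% in Profile 0) yields an OR of about 2.26, close to the reported 2.24.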

3.5. Emotional Regulation and Self-Disclosure

Items in this domain assessed the degree to which adolescents used AI chatbots as a confidant and source of emotional support. Endorsement rates increased systematically across self-esteem profiles, with the highest levels observed in Profile 2. Detailed endorsement rates across profiles are presented in Table 4.
The proportion of adolescents reporting that they had told AI something they would have been ashamed to tell others rose from 8% in Profile 0 to 20% in Profile 2. A similar pattern was observed for finding it easier to confide in AI than in a real person (7% vs. 20%) and for AI helping when feeling lonely or sad (6% vs. 17%). The item reflecting a sense of being better understood by AI than by people showed the steepest gradient, increasing from 5% in Profile 0 to 16% in Profile 2.
Odds ratios confirmed these differences. Compared to Profile 0, adolescents in Profile 2 were notably more likely to disclose sensitive content to AI (OR = 2.96), to find AI easier to confide in (OR = 3.31), to turn to AI for emotional support (OR = 3.08), and to feel better understood by AI than by people (OR = 3.78). Profile 1 showed intermediate elevations (OR = 1.47–1.64), consistent with a monotonic association between lower self-esteem and greater reliance on AI for emotional disclosure.

3.6. Practising and Learning in Intimacy

Items in this domain assessed whether adolescents used AI chatbots to explore or navigate romantic and sexual topics. Overall endorsement rates were lower than in the other domains, yet a consistent increasing trend across profiles was observed.
The proportion reporting conversations with AI about sexuality rose from 5% in Profile 0 to 11% in Profile 2, while discussions about relationships and feelings increased from 11% to 22%. Using AI to practice talking to a romantic partner was reported by 16% of Profile 0 adolescents and 25% of those in Profile 2. The most frequently endorsed item in this domain — preferring to ask AI rather than a person about relationship or sexual questions — was reported by 27% of Profile 0 and 39% of Profile 2. Full endorsement rates for all items in this domain are reported in Table 5.
Odds ratios indicated that adolescents in Profile 2 were more likely than those in Profile 0 to engage with AI around relational and sexual topics, including discussing sexuality (OR = 2.30), talking about relationships and feelings (OR = 2.29), practicing romantic conversations (OR = 1.73), and preferring AI for sensitive questions (OR = 1.73). Profile 1 showed more modest elevations (OR = 1.20–1.33), suggesting a gradual increase in AI-mediated relational exploration with declining self-esteem.

3.7. Ease of Communication and Accessibility

Items in this domain assessed general preferences for AI-mediated communication, including perceived ease, safety, and openness. Endorsement rates showed a consistent upward trend from Profile 0 to Profile 2.
The proportion of adolescents agreeing that writing with AI is easier than with people rose from 19% in Profile 0 to 32% in Profile 2. Feeling relaxed and safe with AI increased from 13% to 24%, and the sense that one can tell AI anything rose from 16% to 27%. The item reflecting enjoyment of AI responses showed the smallest overall endorsement but still increased from 10% in Profile 0 to 15% in Profile 2.
These patterns, summarized in Table 6, were further confirmed by odds ratio analyses. Odds ratios indicated that adolescents in Profile 2 were more likely than those in Profile 0 to prefer AI communication, including finding it easier to write to AI (OR = 1.95), feeling they can tell AI anything (OR = 1.94), and feeling relaxed and safe with AI (OR = 2.17). The item reflecting enjoyment of AI responses showed the smallest difference (OR = 1.73). Profile 1 showed modest but consistent elevations (OR = 1.09–1.35), supporting a gradual association between lower self-esteem and stronger preference for AI-mediated communication.

4. Discussion

Adolescents with lower self-esteem consistently reported greater engagement with conversational AI across multiple domains, including social substitution, emotional disclosure, and intimacy-related interaction, indicating a generalized shift toward AI-mediated socio-emotional engagement. Taken together, the findings suggest that conversational AI may function—at least for a subgroup of adolescents—not primarily as an informational tool but as a socio-emotional interaction partner that provides accessible support and low-threshold communication. Importantly, the observed patterns do not imply that AI use is inherently problematic. Rather, they indicate that adolescents experiencing lower self-esteem and higher distress may be more likely to rely on AI-mediated interactions as a coping resource. From this perspective, the phenomenon should be understood less as evidence of widespread technological harm and more as an emerging form of digitally mediated coping that deserves careful empirical attention.
A parsimonious interpretation of the observed gradient is provided by compensatory internet use theory, which conceptualizes intensive engagement with digital technologies as a response to psychosocial strain rather than as a manifestation of a single addictive tendency (Kardefelt-Winther, 2014). Within this framework, lower self-esteem may reflect broader vulnerability in adolescents’ relational environments, increasing motivations such as reassurance seeking, escapism, or social compensation. Conversational AI systems may be particularly attractive in this context because they offer immediate responsiveness, perceived non-judgmental interaction, and full control over the timing and content of communication. Such characteristics reduce interpersonal costs and may lower the threshold for emotional disclosure, especially for adolescents who experience anxiety or insecurity in face-to-face interactions.
Our findings are also consistent with emerging research on problematic or emotionally intensive engagement with AI chatbots. For example, Yao et al. (2025) extend compensatory internet use theory to AI interactions and demonstrate that associations between self-esteem and problematic chatbot use can be mediated by factors such as social anxiety, escapism, and immersive flow-like experiences. The graded increases observed in our study—particularly in social substitution beliefs and emotional disclosure to AI—are compatible with a motivational pathway in which conversational AI becomes appealing to adolescents who seek emotional reassurance or reduced evaluation pressure.
The present results can also be interpreted through the long-standing displacement versus stimulation debate in digital media research. The displacement hypothesis suggests that technology-mediated interaction may substitute for human relationships and potentially contribute to social withdrawal, whereas the stimulation hypothesis proposes that digital communication may support social skill development or facilitate offline connections (Maples et al., 2023). The concentration of substitution-oriented beliefs—such as perceiving AI as understanding the user better than people or as a potential replacement for friendships—within the low self-esteem profile may reflect a displacement-oriented mechanism for a subgroup of adolescents. At the same time, other findings, particularly those related to practicing romantic communication or discussing relationship topics with AI, may reflect stimulation-compatible uses in which AI functions as a rehearsal space that reduces anxiety about interpersonal interaction. Distinguishing between these trajectories will require longitudinal and qualitative research capable of capturing how AI-mediated interaction influences adolescents’ offline relationships over time.
Developmental perspectives further highlight why adolescents may be particularly responsive to socially interactive technologies. The APA Health Advisory on social media and youth emphasizes that adolescence is characterized by ongoing maturation of executive control, heightened sensitivity to social evaluation, and variability in self-regulatory capacity (APA, 2025). These developmental features may interact with technologies that provide socially responsive feedback and perceived emotional support. In this context, conversational AI offers a communication environment that minimizes social risk while maintaining elements of relational engagement. For adolescents experiencing insecurity or distress, the perceived safety and controllability of AI interaction may make it especially appealing compared with human communication.
The present findings are broadly consistent with research demonstrating the protective role of supportive social environments in adolescent wellbeing. Butler et al. (2022) show that family support, positive relationships with adults at school, and peer connections contribute cumulatively to mental wellbeing. Interpreted in light of our results, adolescents who experience stronger relational support across these contexts may have less need to seek compensatory emotional interaction through AI technologies. Conversely, adolescents who perceive deficits in family, peer, or school-related support may be more likely to turn toward AI as a supplementary interaction partner.
Evidence from research on human–chatbot relationships further supports this interpretation. Qualitative work by Xie and Pentina (2022) shows that individuals experiencing loneliness or emotional distress may develop attachment-like connections with social chatbots when the interaction is perceived as supportive and empathetic. Similarly, Maples et al. (2024) report that users of the AI companion application Replika often experience both elevated loneliness and high perceived social support from the chatbot, suggesting that such technologies may serve multiple roles simultaneously, including emotional companion, reflective dialogue partner, or informational resource. Differences between those findings and the present population-based survey likely reflect differences in sampling frames, as app-user samples consist of individuals already motivated to seek AI companionship, whereas school-based surveys capture a broader spectrum of engagement.
Our findings related to intimacy and relationship-related uses of AI are also consistent with emerging evidence that conversational AI is becoming integrated into modern romantic and sexual exploration. Willoughby et al. (2025) report substantial engagement with romantic and sexual AI technologies among young adults, while Merrill et al. (2022) demonstrate experimentally that perceived social presence and warmth increase the perceived usefulness of AI companions and willingness to recommend them to lonely individuals. These relational affordances may help explain why adolescents in the low self-esteem profile were more likely to report using AI to discuss relationship topics or practice communication with potential partners.
Importantly, the present study does not allow causal conclusions regarding the relationship between psychosocial vulnerability and AI engagement. Cross-sectional associations may reflect multiple underlying dynamics, including the possibility that adolescents experiencing distress are more likely to seek support through AI, but also that intensive AI engagement could shape emotional experiences over time. Longitudinal evidence illustrates the importance of disentangling these pathways. For example, Huang et al. (2024) use cross-lagged panel modeling to examine associations between mental health problems and AI dependence, demonstrating that motivations for AI use play a crucial mediating role.

5. Limitations

Several methodological choices shape interpretation. First, the design is cross-sectional and self-reported, so the results cannot establish whether low self-esteem leads to greater AI reliance, whether AI engagement affects self-esteem and distress, or whether shared third variables (e.g., social anxiety, family stress) drive both. The most defensible claim is one of robust co-occurrence and profile-based differentiation, not causality (APA, 2025; Huang et al., 2024).
Second, outcomes were based on dichotomized AI items, which improves interpretability (endorsed vs. not endorsed) but reduces information about intensity, frequency, and nuance. Threshold selection can change prevalence and effect sizes, and dichotomization can attenuate or distort associations when underlying distributions are skewed. Future analyses should test sensitivity to alternative cut points and consider modeling items in their original ordinal form or using item-response approaches where feasible.
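The sensitivity concern can be made concrete with a toy check: moving the cut point that defines "endorsement" on an ordinal item changes prevalence substantially, which in turn shifts any odds ratio computed from it. The data below are simulated purely for illustration and do not represent the study's items.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 5-point ordinal responses (1 = "never" ... 5 = "regularly")
item = rng.integers(1, 6, size=1000)

# Prevalence of "endorsement" under three alternative cut points
for cut in (3, 4, 5):
    print(f"endorsed = response >= {cut}: prevalence = {np.mean(item >= cut):.2f}")
```

Stricter cut points mechanically lower prevalence, so any sensitivity analysis should report results under at least two plausible thresholds.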
Third, measurement quality varied across instruments. The HARE subscales showed acceptable, though not uniformly high, internal consistency (α ≈ .63–.85), supporting their use as LPA indicators while leaving room for attenuation due to measurement error.
Fourth, model selection in latent profile analysis requires careful consideration, particularly in large samples where information criteria (AIC/BIC) tend to favor increasingly complex solutions. In such contexts, reliance on statistical fit indices alone may lead to overextraction of profiles. Accordingly, the present study prioritized parsimony, classification quality, and theoretical interpretability when selecting the three-profile solution.
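The selection procedure described above (comparing information criteria and relative entropy across candidate solutions) can be sketched with scikit-learn's GaussianMixture as a generic stand-in for dedicated LPA software; the three indicator columns below are simulated, not the study data, and real LPA specifications often constrain covariances differently.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated stand-in for the Home/School/Peer self-esteem indicators:
# three latent groups differing in overall level
X = np.vstack([rng.normal(loc, 1.0, size=(500, 3)) for loc in (-2.5, 0.0, 2.5)])

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    post = gm.predict_proba(X)
    if k > 1:
        # Relative entropy: 1 = perfectly certain classification, 0 = chance
        ent = 1 - (-np.sum(post * np.log(post + 1e-12)) / (len(X) * np.log(k)))
    else:
        ent = float("nan")
    print(f"{k} profiles  BIC={gm.bic(X):.1f}  AIC={gm.aic(X):.1f}  entropy={ent:.3f}")
```

In large samples, BIC typically keeps falling as components are added, which is exactly why classification quality and interpretability must supplement the information criteria.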
At the same time, the identified profiles primarily reflected differences in overall self-esteem level rather than distinct domain-specific configurations. This pattern suggests that global psychosocial vulnerability, rather than isolated deficits in specific relational contexts (e.g., home or school), may be the key factor differentiating adolescents’ engagement with conversational AI.
However, this interpretation should be considered in light of measurement constraints. The use of relatively brief domain indicators may have limited the detection of more nuanced configural profiles. Future research should examine whether richer or more fine-grained measures of self-esteem yield more differentiated profile structures and whether such configurations provide additional explanatory value.
Finally, potential biases include school-based sampling (excluding adolescents absent from school), administration context (teacher presence may influence disclosure), and the online survey modality (Google Forms may affect perceived anonymity). The two-step consent procedure (school-year parental consent plus participant assent) strengthens ethics but may introduce selection effects by under-representing the most disengaged or high-risk families.
Acknowledgement: This article has been produced with the support of the project ReDiKid: Resilient Child in the Digital World, reg. no. CZ.02.01.01/00/23_025/0008686 co-funded by the EU. This research was reviewed and approved by the Ethics Commission for Science and Research of the Faculty of Education, Palacký University Olomouc (Approval No. 2025019).

References

  1. Ahmed, A.; Ali, N.; Aziz, S.; Abd-alrazaq, A. A.; Hassan, A.; Khalifa, M.; Elhusein, B.; Ahmed, M.; Ahmed, M. A. S.; Househ, M. A review of mobile chatbot apps for anxiety and depression and their self-care features. Computer Methods and Programs in Biomedicine Update 2021, 1, 100012. [Google Scholar] [CrossRef]
  2. American Psychological Association. Health advisory: Artificial intelligence and adolescent well-being. 2025. Available online: https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-ai-adolescent-well-being.
  3. Almoqbel, M. Y. Talking to machines: Personas and behavioral patterns in Gen AI interactions. Journal of Posthumanism 2024, 4(3). [Google Scholar] [CrossRef]
  4. Brandtzaeg, P. B.; Følstad, A.; Skjuve, M. Emerging AI individualism: how young people integrate social AI into everyday life. Communication and Change 2025, 1(1), 11. [Google Scholar] [CrossRef]
  5. Butler, N.; Quigg, Z.; Bates, R.; Jones, L.; Ashworth, E.; Gowland, S.; Jones, M. The Contributing Role of Family, School, and Peer Supportive Relationships in Protecting the Mental Wellbeing of Children and Adolescents. School mental health 2022, 14(3), 776–788. [Google Scholar] [CrossRef]
  6. Chen, J.; Yuan, D.; Dong, R.; Cai, J.; Ai, Z.; Zhou, S. Artificial intelligence significantly facilitates development in the mental health of college students: a bibliometric analysis. Frontiers in Psychology 2024, 15, 1375294. [Google Scholar] [CrossRef]
  7. Choudhary, L.; Sybol, S. S. Impact of AI on Students Mental Health and Well-Being; 2025; pp. 205–234. [Google Scholar] [CrossRef]
  8. Faverio, M.; Sidoti, O. Teens, Social Media and AI Chatbots. Pew Research Center, 2025. Available online: www.pewresearch.org.
  9. Gambo, S.; Özad, B. O. The demographics of computer-mediated communication: A review of social media demographic trends among social networking site giants. Computers in Human Behavior Reports 2020, 2, 100016. [Google Scholar] [CrossRef]
  10. Gao, Y. The impact and application of artificial intelligence technology on mental health counseling services for college students. Journal of Computational Methods in Sciences and Engineering 2025, 25(2), 1686–1701. [Google Scholar] [CrossRef]
  11. Gesselman, A. N.; Kaufman, E. M.; Marcotte, A. S.; Reynolds, T. A.; Garcia, J. R. Engagement with Emerging Forms of Sextech: Demographic Correlates from a National Sample of Adults in the United States. Journal of Sex Research 2023, 60(2), 177–189. [Google Scholar] [CrossRef] [PubMed]
  12. Goldman, E. J.; Poulin-Dubois, D. Children’s anthropomorphism of inanimate agents. Wiley Interdisciplinary Reviews: Cognitive Science 2024, 15(4), e1676. [Google Scholar] [CrossRef] [PubMed]
  13. Gori, A.; Topino, E.; Griffiths, M. D. The associations between attachment, self-esteem, fear of missing out, daily time expenditure, and problematic social media use: A path analysis model. Addictive Behaviors 2023, 141, 107633. [Google Scholar] [CrossRef]
  14. Herbener, A. B.; Damholdt, M. F. Are lonely youngsters turning to chatbots for companionship? The relationship between chatbot usage and social connectedness in Danish high-school students. International Journal of Human-Computer Studies 2025, 196, 103409. [Google Scholar] [CrossRef]
  15. Hu, M.; Chua, X. C. W.; Diong, S. F.; Kasturiratna, K. T. A. S.; Majeed, N. M.; Hartanto, A. AI as your ally: The effects of AI-assisted venting on negative affect and perceived social support. Applied Psychology: Health and Well-Being 2025, 17(1), e12621. [Google Scholar] [CrossRef]
  16. Jiang, T.; Sun, Z.; Fu, S.; Lv, Y. Human-AI interaction research agenda: A user-centered perspective. Data and Information Management 2024, 8(4), 100078. [Google Scholar] [CrossRef]
  17. Kardefelt-Winther, D. A conceptual and methodological critique of internet addiction research: Towards a model of compensatory internet use. Computers in Human Behavior 2014, 31(1), 351–354. [Google Scholar] [CrossRef]
  18. Kim, E.; Koh, E. Avoidant attachment and smartphone addiction in college students: The mediating effects of anxiety and self-esteem. Computers in Human Behavior 2018, 84, 264–271. [Google Scholar] [CrossRef]
  19. Klarin, J.; Hoff, E.; Larsson, A.; Daukantaitė, D. Adolescents’ use and perceived usefulness of generative AI for schoolwork: exploring their relationships with executive functioning and academic achievement. Frontiers in Artificial Intelligence 2024, 7, 1415782. [Google Scholar] [CrossRef] [PubMed]
  20. Klingbeil, A.; Grützner, C.; Schreck, P. Trust and reliance on AI — An experimental study on the extent and costs of overreliance on AI. Computers in Human Behavior 2024, 160, 108352. [Google Scholar] [CrossRef]
  21. Kostenius, C.; Lindstrom, F.; Potts, C.; Pekkari, N. Young peoples’ reflections about using a chatbot to promote their mental wellbeing in northern periphery areas - a qualitative study. International Journal of Circumpolar Health 2024, 83(1). [Google Scholar] [CrossRef] [PubMed]
  22. Laestadius, L.; Bishop, A.; Gonzalez, M.; Illenčík, D.; Campos-Castillo, C. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media and Society 2024, 26(10), 5923–5941. [Google Scholar] [CrossRef]
  23. Lai, T.; Xie, C.; Ruan, M.; Wang, Z.; Lu, H.; Fu, S. Influence of artificial intelligence in education on adolescents’ social adaptability: The mediatory role of social support. PLOS ONE 2023, 18(3), e0283170. [Google Scholar] [CrossRef]
  24. Levkovich, I. Evaluating Diagnostic Accuracy and Treatment Efficacy in Mental Health: A Comparative Analysis of Large Language Model Tools and Mental Health Professionals. European Journal of Investigation in Health, Psychology and Education 2025, 15(1), 9. [Google Scholar] [CrossRef]
  25. Loos, E.; Ivan, L. Editorial: Chatbots as humanlike text generators: friend or foe? Frontiers in Human Dynamics 2025, 7, 1706740. [Google Scholar] [CrossRef]
  26. Ma, G.; Tian, S.; Song, Y.; Chen, Y.; Shi, H.; Li, J. When Technology Meets Anxiety: The Moderating Role of AI Usage in the Relationship Between Social Anxiety, Learning Adaptability, and Behavioral Problems Among Chinese Primary School Students. Psychology Research and Behavior Management 2025, 18, 151–167. [Google Scholar] [CrossRef]
  27. Malfacini, K. The impacts of companion AI on human relationships: risks, benefits, and design considerations. AI & Soc 2025, 40, 5527–5540. [Google Scholar] [CrossRef]
  28. Maples, B.; Cerit, M.; Vishwanath, A.; Pea, R. Loneliness and suicide mitigation for students using GPT-3-enabled chatbots. npj Mental Health Research 2024, 3, 4. [Google Scholar] [CrossRef]
  29. Maples, B.; Pea, R. D.; Markowitz, D. Learning from Intelligent Social Agents as Social and Intellectual Mirrors. AI in Learning: Designing the Future 2023, 73–89. [Google Scholar] [CrossRef]
  30. Masyn, K. E. Latent class analysis and finite mixture modeling. In The Oxford handbook of quantitative methods in psychology; Little, T. D., Ed.; Oxford University Press, 2013; Vol. 2, pp. 551–611. [Google Scholar] [CrossRef]
  31. McBain, R. K.; Bozick, R.; Diliberti, M.; Zhang, L. A.; Zhang, F.; Burnett, A.; Kofner, A.; Rader, B.; Breslau, J.; Stein, B. D.; Mehrotra, A.; Pines, L. U.; Cantor, J.; Yu, H. Use of Generative AI for Mental Health Advice Among US Adolescents and Young Adults. JAMA Network Open 2025, 8(11), e2542281–e2542281. [Google Scholar] [CrossRef] [PubMed]
  32. Merrill, K.; Kim, J.; Collins, C. AI companions for lonely individuals and the role of social presence. Communication Research Reports 2022, 39(2), 93–103. [Google Scholar] [CrossRef]
  33. Montgomery, B. Mother says AI chatbot led her son to kill himself in lawsuit against its maker. The Guardian, 2024. Available online: https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death.
  34. Moore, J.; Grabb, D.; Agnew, W.; Klyman, K.; Chancellor, S.; Ong, D. C.; Haber, N. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. ACM FAccT 2025: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency 2025, 1, 599–627. [Google Scholar] [CrossRef]
  35. Muthén, B. O. Latent variable mixture modeling. In New developments and techniques in structural equation modeling; Marcoulides, G. A., Schumacker, R. E., Eds.; Lawrence Erlbaum Associates, 2001; pp. 1–33. [Google Scholar]
  36. Ortutay, B. Lawsuits accuse OpenAI of driving people to suicide and delusions. AP NEWS. 2025. Available online: https://apnews.com/article/openai-chatgpt-lawsuit-suicide-56e63e5538602ea39116f1904bf7cdc3.
  37. Pentina, I.; Hancock, T.; Xie, T. Exploring relationship development with social chatbots: A mixed-method study of replika. Computers in Human Behavior 2023, 140, 107600. [Google Scholar] [CrossRef]
  38. Piombo, M. A.; La Grutta, S.; Epifanio, M. S.; Di Napoli, G.; Novara, C. Emotional Intelligence and Adolescents’ Use of Artificial Intelligence: A Parent–Adolescent Study. Behavioral Sciences 2025, 15(8), 1142. [Google Scholar] [CrossRef] [PubMed]
  39. Shoemaker, A. L. Construct validity of area specific self-esteem: The Hare Self-Esteem Scale. Educational and Psychological Measurement 1980, 40(2), 495–501. [Google Scholar] [CrossRef]
  40. Spurk, D.; Hirschi, A.; Wang, M.; Valero, D.; Kauffeld, S. Latent profile analysis: A review and “how to” guide of its application within vocational behavior research. Journal of Vocational Behavior 2020, 120, 103445. [Google Scholar] [CrossRef]
  41. Sun, Y.; Zhang, Y. A review of theories and models applied in studies of social media addiction and implications for future research. Addictive Behaviors 2021, 114, 106699. [Google Scholar] [CrossRef]
  42. Vanhoffelen, G.; Vandenbosch, L.; Schreurs, L. Teens, Tech, and Talk: Adolescents’ Use of and Emotional Reactions to Snapchat’s My AI Chatbot. Behavioral Sciences 2025, 15(8), 1037. [Google Scholar] [CrossRef]
  43. Wang, M.; Hanges, P. J. Latent class procedures: Applications to organizational research. Organizational Research Methods 2011, 14(1), 24–31. [Google Scholar] [CrossRef]
  44. Weller, B. E.; Bowen, N. K.; Faubert, S. J. Latent Class Analysis: A Guide to Best Practice. Journal of Black Psychology 2020, 46(4), 287–311. [Google Scholar] [CrossRef]
  45. Willoughby, B. J.; Dover, C. R.; Hakala, R. M.; Carroll, J. S. Artificial connections: Romantic relationship engagement with artificial intelligence in the United States. Journal of Social and Personal Relationships 2025, 42(12), 3363–3387. [Google Scholar] [CrossRef]
  46. Xie, T.; Pentina, I. Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika. In Proceedings of the 55th Hawaii International Conference on System Sciences, 2022. Available online: https://hdl.handle.net/10125/79590.
  47. Yao, R.; Qi, G.; Sheng, D.; Sun, H.; Zhang, J. Connecting self-esteem to problematic AI chatbot use: the multiple mediating roles of positive and negative psychological states. Frontiers in Psychology 2025, 16, 1453072. [Google Scholar] [CrossRef] [PubMed]
  48. Yılmaz, Ö.; Yılmaz, Ö. Personalised learning and artificial intelligence in science education: current state and future perspectives. Educational Technology Quarterly 2024, 2024(3), 255–274. [Google Scholar] [CrossRef]
  49. Yousif, N. Parents of teenager who took his own life sue OpenAI. BBC. 2025. Available online: https://www.bbc.com/news/articles/cgerwp7rdlvo.
  50. Zhang, W.; Luo, J.; Zhang, H. The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis. Journal of Affective Disorders 2024, 356, 459–469. [Google Scholar] [CrossRef]
Figure 1. Summary of distribution across all domains and profiles.
Table 1. Fit statistics for Latent Self-Esteem Profile Models.
Model BIC AIC Entropy
1 profile 364209.015 364157.034 ---
2 profiles 351975.306 351862.679 0.509
3 profiles 347664.464 347491.191 0.514
4 profiles 347456.787 347222.869 0.491
5 profiles 346933.671 346639.107 0.475
6 profiles 345326.025 344970.816 0.474
Note. Lower AIC and BIC indicate better model fit. Entropy reflects classification accuracy.
Table 3. Endorsement of social substitution items across latent self-esteem profiles.
Item Profile 0 Profile 1 Profile 2
Talked to AI as a friend 25.1% 29.7% 42.8%
AI can replace friends 10.7% 13.9% 22.6%
AI can replace partner 6.4% 8.0% 12.3%
AI understands me better 7.6% 11.2% 20.9%
Note. Values represent the percentage of adolescents within each profile who endorsed the respective statement.
Table 4. Endorsement of emotional regulation items across latent self-esteem profiles.
Item Profile 0 Profile 1 Profile 2
Told AI something I was ashamed to tell others 8% 11% 20%
Easier to confide in AI than a real person 7% 11% 20%
AI helps when I feel lonely or sad 6% 9% 17%
AI understands me better than people 5% 8% 16%
Note. Values represent the percentage of adolescents within each profile who endorsed the respective statement.
Table 5. Endorsement of intimacy-related learning items across latent self-esteem profiles.
Item Profile 0 Profile 1 Profile 2
Talked about sexuality 5% 7% 11%
Talked about relationships and feelings 11% 14% 22%
Practiced talking to a partner 16% 19% 25%
Prefer asking AI about relationships/sex 27% 31% 39%
Note. Values represent the percentage of adolescents within each profile who endorsed the respective statement.
Table 6. Endorsement of ease of communication and accessibility items across latent self-esteem profiles.
Item Profile 0 Profile 1 Profile 2
Writing with AI is easier than with people 19% 23% 32%
I like talking to AI; it responds the way I prefer 10% 10% 15%
I can tell AI anything 16% 21% 27%
I feel relaxed and safe with AI 13% 15% 24%
Note. Values represent the percentage of adolescents within each profile who endorsed the respective statement.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.