1. Introduction
About 60 years after the development of the first text-based conversational agent, ELIZA, chatbots have become widely integrated into various domains of everyday life, including education, healthcare, and entertainment (Meshram et al., 2021; Shevlin, 2024). Due to technological advancements such as generative artificial intelligence (AI) and large language models (LLMs), chatbots enable increasingly human-like, real-time conversations through text (e.g., OpenAI’s ChatGPT) and voice (e.g., Amazon’s Alexa; Schöbel et al., 2023). The resulting anthropomorphism of AI chatbots (i.e., attributing human characteristics to technology; Shevlin, 2024) makes it increasingly difficult for humans to distinguish chatting with humans from chatting with chatbots. This was exemplified by ‘Human or Not?’, a recently developed game inspired by the Turing Test, in which 40% of the 1.5 million participants thought they were chatting with a human while they were actually chatting with an AI chatbot (Jannai et al., 2023). Consequently, the more human-like features are integrated into chatbots (e.g., expressing and understanding emotions), the more these chatbots are used for social purposes such as romance or companionship (Shevlin, 2024). When a chatbot or conversational AI system is primarily designed to fulfill social needs and facilitate human-AI companionship, it is referred to as social AI (Brandtzaeg et al., 2024; Shevlin, 2024).
One particular group targeted by social AI designers is youth (Brandtzaeg et al., 2024). According to recent reports, four in five (79%) UK online adolescents (aged 13-17; Ofcom, 2023), 66% of Belgian adolescents (aged 10-18; Apenstaartjaren, 2024), and 51% of US-based youth (aged 14-22; Hopelab, 2024) had used a generative AI tool (e.g., ChatGPT, Snapchat’s My AI, Midjourney, DALL-E) at least once. Next to rather utilitarian motivations for using these AI chatbots (e.g., getting information, brainstorming ideas, help with schoolwork, entertainment), adolescents use them to initiate social-supportive interactions (e.g., to discuss social and personal problems, ask for advice, or have casual conversations; Apenstaartjaren, 2024; Herbener & Damholdt, 2024; Hopelab, 2024). One AI chatbot that is specifically designed to meet the social-supportive needs of youth is Snapchat’s My AI.
Given the rapid integration and growing popularity of social AI chatbots in adolescents’ daily lives, there is a notable lack of research exploring which adolescents are more likely to engage with these technologies. Furthermore, it remains unclear which adolescent users are more prone to experiencing positive or negative emotions from their interactions with AI chatbots. To address these research gaps, the current study aims to examine individual differences among adolescents in their use of social AI chatbots, as well as the emotional outcomes of these interactions. Specifically, the study will focus on the role of gender, age, and socioeconomic status (SES) in shaping adolescents' engagement with the widely used My AI chatbot on Snapchat.
1.1. Adolescents and Snapchat
The My AI chatbot, powered by OpenAI’s advanced LLMs such as GPT-3.5 and GPT-4, is integrated into the Snapchat social media platform (Snapchat Support, 2024). Being used by 60% of U.S. and 94% of Belgian adolescents (Apenstaartjaren, 2024; Pew Research Center, 2023), Snapchat is one of the most popular content-sharing platforms among youth. With the aim of mimicking real-life interactions, Snapchat is characterized by affordances such as low visibility (i.e., mainly focusing on small and well-known audiences) and low persistence or high ephemerality (i.e., sent pictures or videos are by default only visible for a limited amount of time, ranging from a few seconds to 24h in a Story format; Utz et al., 2015). Bayer et al. (2016) demonstrated this low visibility in their research, finding that the platform is mainly used for interactions with close ties. Interestingly, these interactions mostly resulted in a more positive mood. Within the context of these intimate social bubbles on Snapchat, the chatbot My AI was added to the platform in February 2023. My AI takes the form of a customizable avatar, allowing users to personalize its appearance extensively according to their preferences (Shevlin, 2024). Although this built-in chatbot only became freely accessible to all Snapchat users in April 2023, 72% of UK online adolescents had already used it by June 2023 (Ofcom, 2023). Since My AI is currently the most popular social AI tool used by adolescents and one of the first chatbots implemented in a social media platform highly popular among adolescents, the current study will focus on this specific AI chatbot.
1.2. User Characteristics & Perceived Outcomes
Despite the rising popularity of social AI among youth, not all adolescents’ experiences with chatbots seem to result in equally beneficial outcomes. For instance, a recent cross-sectional study among Danish high-school students showed that adolescents who engage in social-supportive conversations with chatbots such as My AI are lonelier and experience less perceived social support than non-users of chatbots (Herbener & Damholdt, 2024). Adolescents’ perceived outcomes were also studied in the Hopelab report (2024), showing that 41% of participating youth believed generative AI will have “both a positive and negative” effect on their lives in the next decade, while 19% and 16%, respectively, thought this impact would be mostly negative or mostly positive. Hence, individual differences seem to emerge between adolescents who do and do not use social AI, as well as between adolescents who experience positive or negative emotions when chatting with these bots.
According to the circular Media Practice Model, adolescents do not use media in a vacuum: their media choices and interactions are influenced by their developing identities and sociodemographic characteristics (e.g., gender, age, SES; Steele & Brown, 1995). In other words, adolescents’ sense of who they are or who they want to become influences their media selection as well as their media interactions. Based upon previous research, adolescents’ gender, age, and SES might be particularly relevant factors in explaining their tendency to use social AI such as My AI and their likelihood of benefiting from these interactions.
1.2.1. Gender Differences
As for gender, adolescent boys and girls might not be equally prone to use My AI and flourish from these interactions. Several studies have observed a gender bias in how generative AI chatbots are designed, pointing at a disproportionate number of AI chatbots with female characteristics such as female names or female-looking avatars (e.g., Feine et al., 2020; Unesco, 2019). One reason put forward is that female chatbots are perceived as more human-like and warmer than their male counterparts and are therefore more readily anthropomorphized and accepted as successful chatbots (Borau et al., 2021). Although girls might identify more with chatbots than boys due to these gender similarities, evidence on gender differences in individuals’ use of AI chatbots remains mixed. According to one report among US youth, boys are more likely than girls to have ever used an AI chatbot (53% vs. 48%; Hopelab, 2024), while another report among UK adolescents (Ofcom, 2023) found no gender differences in AI chatbot use in general. Interestingly, the latter report also included My AI, whose gender presentation is less stereotypical and is customizable. These features might explain why girls were slightly more likely than boys to have used this AI chatbot (54% vs. 48%).
Paradoxically, despite AI chatbots’ overall female characteristics, they are mainly designed by men and have been found to reproduce traditional gender stereotypes in their outputs (e.g., reflecting sexism and misogyny) due to biases within their LLMs (Unesco, 2024). Since these biases seem to perpetuate particularly unfavorable stereotypes about women (e.g., allocating traditional career roles to women and objectifying them), girls might resonate less with the output of AI chatbots than boys, resulting in less agreeable chat interactions for them. Accordingly, girls could experience less positive interactions with My AI on Snapchat compared to boys. However, this gender difference may be specific to certain types of chatbots and might not occur in the same way with My AI. Prior research identifying these gender biases (Unesco, 2024) focused only on open LLMs such as GPT-2, which differ from the more advanced – and potentially less biased – LLMs on which My AI is based (i.e., GPT-3.5 and GPT-4; Snapchat Support, 2024). It therefore remains unclear whether gender differences influence the use of, or perceived quality of interactions with, Snapchat’s My AI chatbot.
1.2.2. Age Differences
Although Snapchat’s minimum age is set at 13 years old, many children younger than 13 already have an account on this platform. For instance, 37% of Belgian 6- to 11-year-olds are sporadically active on Snapchat (Apenstaartjaren, 2024). By age 18, this proportion increases to 94%. Adolescents of all ages thus use Snapchat and have access to the My AI tool. However, it is unclear whether all adolescents actually use My AI and whether their likelihood of using it varies by age. Moreover, not all adolescents are equally digitally literate (with older adolescents being more digitally literate; Jin et al., 2020) and they might therefore differ in how they use and trust the app. High digital literacy likely relates to a more advanced understanding of digital media’s most beneficial uses and of the tasks for which they are more or less trustworthy (Vissenberg et al., 2022). Moreover, younger and older adolescents might regulate the feelings elicited by the chatbot differently. Although individuals’ general negative affect tends to increase and their positive affect to decrease during adolescence (Casas & González-Carrasco, 2019), older adolescents may experience fewer negative emotions, such as anxiety, after interacting with My AI, as they are more likely to understand that it is not a real human. However, empirical evidence regarding age differences in adolescents' use of AI chatbots remains scarce.
1.2.3. SES Differences
Since the integration of the internet into daily life, societies have been split by digital divides. Three levels of digital divides can be discerned, regarding individuals’ access to (first level), skills for (second level), and outcomes of using (third level) digital communication (Rosič et al., 2024). Individuals’ socioeconomic status (SES) is one of the most well-established factors predicting inequalities in adolescents’ access to digital communication (i.e., the first level; Helsper, 2020). In the case of My AI, this means that adolescents with higher SES levels would be more likely to find their way to this AI chatbot than adolescents with lower SES levels. However, more recent research suggests that the role of SES in this first-level digital divide has diminished (George et al., 2020). It is, consequently, unclear whether a similar dynamic will take place regarding adolescents’ SES level and their use of My AI.
Regarding the second level, differences in SES seem to impact users’ level of AI self-efficacy, which refers to the self-perceived belief of being capable of using AI technology (Hong, 2022). Individuals with higher SES levels seem to have more AI self-efficacy due to social and cognitive reasons (e.g., a social network recommending AI products, the belief of having enough experience to use these tools; Hong, 2022). Considering the third level of digital divides, SES has been found to be a less consistent predictor. While some studies identify SES as an important predictor of achieving positive but not negative outcomes after engaging with digital media (George et al., 2020; Helsper, 2020; Livingstone et al., 2021), no research has studied the role of socio-economic inequalities in emotional outcomes after chatting with AI chatbots such as My AI.
1.3. The Current Study
Altogether, previous literature suggests that gender, age, and SES may play a role in the use and perceived emotional outcomes of chatting with social AI chatbots. The current study will therefore investigate whether these socio-demographic factors also play a role in adolescents’ adoption and experience of chatting with more applied AI chatbots, such as Snapchat’s My AI. More precisely, a closer look will be given to adolescents’ positive (e.g., inspiration, happiness) and negative (e.g., distress, anxiety) emotions after using this AI chatbot. While controlling for general time spent on social media, the following research questions will be addressed:
RQ1: Are socio-demographic characteristics (i.e., gender, age, and SES level) related to whether or not adolescents have already used My AI?
RQ2.a: Are socio-demographic characteristics (i.e., gender, age, and SES level) related to adolescents’ perceived positive emotions after using My AI?
RQ2.b: Are socio-demographic characteristics (i.e., gender, age, and SES level) related to adolescents’ perceived negative emotions after using My AI?
2. Materials and Methods
2.1. Data Collection and Sample
The current study draws on the survey data from a larger project¹, which took place between August and September 2023, involving [nationality blinded] adolescents recruited from five secondary schools in [region blinded]. The study received ethical approval from the Ethics Committee of the University [blinded] (approval number: [blinded]). Passive parental consent and active informed assent were obtained and the confidentiality and anonymity of all responses were ensured. Each participant received a €5 gift voucher as a token of appreciation. The hypotheses, tested model, and analytical strategy were pre-registered, and data, syntaxes, and supplementary material can be found on the Open Science Framework (OSF).
Of the 330 participating adolescents, 27 were excluded from the analyses due to failing or missing the attention check (n = 25) or reporting dubious answers (e.g., being 22 years old but in the 3rd class of secondary school). This resulted in an analytical sample of 303 participants (63.0% girls, 35.3% boys, 1.0% other, 0.7% did not report their gender; Mage = 15.89, SDage = 1.69). The largest group (28%) was in their last year of secondary school, and a majority (69.7%) followed a general educational track. 94.7% were born in [blinded] and 87.1% reported having a West-European background. Most parents/guardians had a college or university degree (62.1% of the fathers/male guardians and 72.6% of the mothers/female guardians) and lived together (66.0%). Regarding SES, participants estimated how well off their family is in relation to others, which was overall rather high (MSES = 7.35; SDSES = 1.31; min: 1, max: 10).
2.2. Measures
2.2.1. Socio-Demographic Variables
Participants reported their gender (1 = girl, 2 = boy, 3 = other, 99 = prefer not to say) and age (computed as 2023 minus birth year). The first item of the MacArthur Scale of Subjective Social Status (youth version; Goodman et al., 2001) was used to assess adolescents’ SES. Participants situated their family on a ladder with ten rungs, with the first rung (= score of 1) representing the families in society who are the worst off and the tenth rung (= score of 10) representing the families who are the best off. Higher scores indicated a higher (self-perceived) SES.
2.2.2. Use of Snapchat’s My AI
To assess whether or not participants had already used Snapchat’s My AI chatbot, the following self-developed question was asked: “Have you already sent something to My AI on Snapchat?” with response categories “yes” (= 1) and “no” (= 2). In total, 69.8% of the participants reported having already used Snapchat’s My AI.
2.2.3. Emotional Reactions to Snapchat’s My AI
Participants’ emotional reactions to chatting with My AI on Snapchat were measured using a shortened and adapted version of the PANAS scale (Watson et al., 1988). More precisely, all participants who responded “yes” to the previous question (i.e., use of Snapchat’s My AI) received the question “How do you feel after chatting with My AI on Snapchat?”, followed by nine emotional reactions (positive affect: enthusiastic, interested, inspired, happy; negative affect: upset, distressed, irritable, nervous, afraid). These emotions were selected out of the 20 original items based upon their applicability to an online chatbot context. Answer options for each emotion ranged from never (= 1) to very often (= 5). Principal Axis Factoring (PAF) indicated a two-factor solution, with one factor comprising the positive emotional reactions (initial eigenvalue = 3.17, explained variance = 35.27%, ω = .84, 4 items, M = 2.04, SD = .90) and one factor comprising the negative emotional reactions (initial eigenvalue = 2.09, explained variance = 23.27%, ω = .84, 4 items, M = 1.58, SD = .63).
2.2.4. General Time Spent on Social Media
Following Frison & Eggermont’s (2016) approach, general time spent on social media was measured by distinguishing between a typical school day and a typical weekend day. Participants indicated how much time they had spent in the last week on social media (e.g., Instagram, TikTok, Snapchat, BeReal, Facebook, YouTube), excluding instant messaging services (e.g., WhatsApp). Answer options covered every half hour, ranging from 1 (= 0h, not using social media) to 49 (= 24h). Next, participants’ average daily time spent on social media was calculated using the following formula: [(time on a school day × 5) + (time on a weekend day × 2)] / 7. On average, participants spent more than 4 hours a day on social media (M = 8.40, SD = 5.06).
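The weighting in this formula can be sketched in a few lines of code (an illustrative sketch only; the function name and example values are ours, not part of the study materials). Note that the whole weighted sum is divided by 7, not just the weekend term:

```python
def avg_daily_social_media(school_day: float, weekend_day: float) -> float:
    """Weighted weekly average: 5 school days and 2 weekend days."""
    return (school_day * 5 + weekend_day * 2) / 7

# Hypothetical example: 3 hours on a typical school day, 5 on a weekend day.
print(round(avg_daily_social_media(3.0, 5.0), 2))  # 3.57
```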
2.3. Analytical Strategy
First, descriptive, exploratory factor, and reliability analyses were conducted. Afterwards, gender was dichotomized, since the ‘other’ category (n = 3) and the group of participants preferring not to mention their gender (n = 2) were too small to allow the valid group comparisons needed for the main analyses (64.1% girls, 35.9% boys). Next, zero-order correlations were calculated.
For RQ1, a binary logistic regression was conducted, with gender (girls as reference category), age, and SES as predictor variables, participants’ My AI use as the dependent variable, and general time spent on social media as a control variable. For RQ2.a and RQ2.b, two separate hierarchical linear regressions were run, with gender, age, and SES as predictors, and positive and negative emotional reactions, respectively, as continuous outcomes. General time spent on social media was again added as a control variable. All preregistered procedures were followed.
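As an illustration of what the RQ1 model estimates, a minimal binary logistic regression with a single age predictor can be sketched in plain Python (this is not the authors’ analysis code or software, and the data below are synthetic; a real analysis would use a statistical package and include all predictors and the control variable):

```python
import math
import random

def logistic(z: float) -> float:
    """Inverse-logit link function."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit binary logistic regression by batch gradient ascent on the
    log-likelihood. X: list of feature lists (no intercept column);
    y: list of 0/1 outcomes. Returns [intercept, coef_1, ...]."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            p = logistic(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * gj / n for wj, gj in zip(w, grad)]
    return w

# Synthetic data in which older age lowers the probability of use.
random.seed(42)
ages = [random.uniform(12, 18) for _ in range(400)]
use = [1 if random.random() < logistic(10 - 0.7 * a) else 0 for a in ages]
w = fit_logistic([[a - 15.0] for a in ages], use)  # center age for stability
# The age coefficient w[1] should come out negative, i.e., an odds
# ratio exp(w[1]) below 1: odds of use decrease with age.
print(w[1], math.exp(w[1]))
```

The odds ratio reported in the Results (an OR above 1 for the “not used” coding) corresponds to the same direction of effect under the reversed outcome coding.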
3. Results
3.1. Descriptives
Descriptive information as well as zero-order correlations between the main variables are displayed in Table 1.
3.2. Hypotheses
First, the logistic regression analysis testing whether gender, age, and SES are related to whether or not a participant had already used My AI (RQ1) showed that the model fit was good (Hosmer & Lemeshow test, χ²(8) = 6.79, p = .56) and significantly better than the null model (χ²(1) = 11.98, p < .001, Cox and Snell R² = .04, Nagelkerke R² = .06). Age was significantly and positively associated with the outcome (coded 0 = yes, 1 = no), meaning that the odds of having used My AI decreased for older adolescents (OR = 1.39, p < .001; see Table 2). Gender and SES did not significantly predict My AI use.
Next, the associations of gender, age, and SES with adolescents’ perceived positive (RQ2.a) and negative emotional reactions (RQ2.b) after using My AI were tested by conducting two hierarchical linear regressions. Model fit statistics can be found in Table 3 and Table 4. Regarding RQ2.a, only age was significantly and negatively associated with experiencing positive emotions after using My AI (β = -.13, p = .001). Older adolescents thus experienced fewer positive emotions from using My AI than younger adolescents. Regarding RQ2.b, no socio-demographic variables were related to adolescents’ perceived negative emotions from using My AI.
4. Discussion
AI tools, such as chatbots, have become pervasive across various domains of life, including social interactions (Følstad et al., 2020). Within the intimate context of close ties on Snapchat, adolescents can now engage with the platform’s avatar-like chatbot, known as My AI. Despite the rapid evolution and widespread implementation of social AI tools such as My AI, little is known about which adolescents predominantly use them and what emotions these human-to-AI interactions evoke. Moreover, considering the novelty of social AI chatbots in adolescents’ daily lives and current theoretical reflections on the role of novelty in shaping media experiences and effects (Baumgartner, 2023), it is crucial to gain insights into the interactions of early adopters with popular social AI chatbots, such as My AI. Accordingly, the current study investigated among 303 adolescents which socio-demographic characteristics (i.e., gender, age, and SES) are related to adolescents’ use of My AI and to their experience of positive and negative emotions after chatting with this social chatbot.
4.1. Main Findings and Implications
Among the three socio-demographic factors examined, age was the only one for which differences emerged: younger adolescents were more likely to chat with My AI and reported more positive emotions after doing so compared to older adolescents. Although no significant links were found between age and adolescents’ negative emotional experiences with My AI in this study, a potential lack of sufficient AI literacy among younger adolescents might explain the observed significant relationships. As young adolescence is often characterized by great emotional instability (Larson et al., 2002) and still-developing critical thinking capacities (Dumontheil, 2014), younger adolescents may be more likely to trust social chatbots as human-like conversational partners. Consequently, this may lead to biases in assessing AI-related risks, such as overestimating the positivity of their experienced emotions. For instance, when they talk about a mental health struggle (e.g., low body satisfaction), the chatbot might seem helpful (e.g., by suggesting a change in eating pattern) but have negative consequences for the user’s health (e.g., eating problems). Since these chatbots can still generate unreliable answers, perpetuate stereotypical biases (e.g., related to gender), and misuse sensitive self-disclosed information, adolescents’ trust in social AI chatbots should be critically interpreted (Maeda et al., 2024). Enhancing AI literacy among younger adolescents through interventions could therefore help them to better understand, manage, and critically evaluate their experiences with social AI chatbots (Celik, 2023; Ng et al., 2021), potentially mitigating negative, or decreased positive, emotional impacts in the future.
Next, no significant associations were found for gender or SES. The absence of significant differences between adolescents with varying SES levels might be explained by the generally high SES levels of the current sample. Moreover, the little variation in this variable might hint at a ceiling effect, which hampers the detection of significant relationships. Future research should target larger samples with more diverse SES levels. The null findings regarding gender suggest that My AI’s avatar and its LLM may be more gender-neutral and less likely to perpetuate gender stereotypes than the chatbots and LLMs examined in previous research (e.g., Feine et al., 2020; Unesco, 2024). However, future research is needed to substantiate this reasoning and to explore whether any gender biases are present in My AI’s technology and whether these can be directly linked to gender differences among its users.
Overall, these results have implications for both practice and theory. First, the observed relationships between younger adolescents’ use of social AI and their experienced positive emotions may have important implications for the development of a new form of parasocial relationship. Parasocial relationships are traditionally defined as asymmetrical relationships between a media user and a media personality (e.g., influencers) that evoke a socioemotional connection (e.g., Horton & Wohl, 1956). These media personalities can refer to real humans (e.g., models, celebrities) or to representations of humans (e.g., animations, holograms; Maeda et al., 2024). While AI chatbots fall into the latter category, Maeda et al. (2024) suggest that the growing anthropomorphism of conversational agents, coupled with advancements in their LLMs, enables AI chatbots to foster trust and emotional connectedness, resembling human interactions. By demonstrating that especially young adolescents experience positive emotions from using My AI, our study is among the first to empirically support the possibility of establishing a parasocial relationship with AI. Nevertheless, further (longitudinal) research is necessary to unravel how such parasocial relationships evolve over time. Moreover, since this study only looked at adolescents’ subjective emotional experiences, future research should go beyond adolescents’ own estimations and investigate whether these interactions with social AI chatbots are beneficial for adolescents’ social and mental health (e.g., identity formation, peer relationships, self-esteem) in the long term.
A second implication of our findings relates to the ongoing debates surrounding the minimum age requirement for using social media platforms like Snapchat. Although many users are younger than the official age limit (Apenstaartjaren, 2024), the platform’s terms of use require users to be at least 13 years old to create an account and gain access to features like the My AI chatbot. Recently, this minimum age for social media platforms has become a contested topic in both public and policy discussions, with several governments proposing an increase in this age limit (e.g., the social media ban for Australian adolescents under 16; Taylor, 2024) and calling for more precise age verification systems (e.g., the Digital Services Act). Although our study specifically examines adolescents’ interactions with only one feature of Snapchat—the My AI chatbot—our results indicate that younger adolescents may sometimes benefit from the platform even more than older users. Therefore, stakeholders should consider these potential positive outcomes when shaping technology and policy, ensuring that these beneficial effects for younger adolescents are preserved in future social media designs and regulations.
4.2. Limitations
Despite being the first to offer empirical insights into adolescents’ socio-demographic differences in their use of and emotional responses to chatting with a social AI chatbot, this study has some limitations that should be addressed. First, since the study draws on cross-sectional data, no causal claims can be made. Although the emotional reactions variable specifically asked participants how they felt after chatting with My AI, younger adolescents are generally known to experience more positive emotions than older adolescents (Casas & González-Carrasco, 2019). This could potentially influence the emotional outcomes observed in the study.
Next, given the exclusive focus of the current study on socio-demographic factors, adolescents’ motivations for using social AI chatbots were not considered, though they may play a crucial role in the observed relationships. Drawing on uses and gratifications theory (U&G), individuals’ media use and experiences are driven by their needs and by whether or not these needs are satisfied (i.e., gratifications; Katz et al., 1974). Previous research applying U&G to AI-powered chatbots identified four categories of gratifications: utilitarian (e.g., information seeking), technology-based (e.g., medium appeal), hedonic (e.g., entertainment), and social (e.g., establishing and maintaining social connections; Xie et al., 2024). While each of these gratifications has been linked to user satisfaction, these relationships have primarily been studied among adult populations across a diverse range of AI chatbot contexts (e.g., mental health services, financial advising; Xie et al., 2024). Since adolescents may share similar motivations for using social AI chatbots, future research should account for these factors when examining adolescents’ use of and emotional experiences with such technologies. Moreover, adolescents of different ages may have varying dominant motivations for using social AI chatbots, potentially explaining the age-related differences observed in their use and emotional outcomes after interacting with My AI.
Third, the current study focused exclusively on adolescents’ interactions with the My AI chatbot on Snapchat. Given its presence among users’ close ties on Snapchat, its specific LLM, and its specific affordances (allowing users, for instance, to customize the chatbot’s avatar), it is likely that adolescents’ interactions with other social AI chatbots on different platforms – each with distinct affordances and LLMs – differ significantly. Since adolescents’ socio-demographic characteristics might relate differently to their use of and experiences with other chatbots, caution is needed when generalizing the findings of the current study to other social AI chatbots. Further research is therefore needed to explore how these interactions vary across different platforms.
5. Conclusions
In sum, the current study is the first to investigate adolescents’ use of and perceived emotional experiences with Snapchat’s My AI chatbot, highlighting the critical role of age in shaping their adoption and emotional responses to social AI chatbots. These findings not only contribute to our theoretical understanding of user-AI interactions among adolescents but also emphasize the need to consider their developmental context when designing and regulating AI chatbots. Future research is advised to also study adolescents’ interactions with other social AI chatbots and delve further into their sociopsychological outcomes from these interactions.
Author Contributions
G.V.: Conceptualization, Methodology, Formal Analysis, Investigation, Writing – Original Draft. L.V.: Conceptualization, Methodology, Writing – Review & Editing, Supervision. L.S.: Conceptualization, Methodology, Writing – Review & Editing, Supervision.
Funding
The data collection was supported by the European Research Council (ERC) under grant agreement number 852317. The authors were supported by the Research Foundation Flanders (FWO-Vlaanderen) under grant agreement number 11G2723N.
Institutional Review Board Statement
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the University of Leuven, SMEC (approval number: G-2023-6631; 11 May 2023).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
Conflicts of Interest
The authors declare that they have no conflict of interest to report.
Abbreviations
The following abbreviations are used in this manuscript:
| AI | Artificial Intelligence |
| LLM | Large Language Model |
| SES | Socioeconomic Status |
| OSF | Open Science Framework |

¹ Only one other study draws upon the same sample, yet it has completely different research objectives (see its preregistration on OSF [blinded]) and no overlapping main variables.
References
- Apenstaartjaren. De digitale leefwereld van jongeren en kinderen 2024. 2024. Available online: https://www.apenstaartjaren.be (accessed on 31 July 2024).
- Baumgartner, S. Why we see media effects but do not find them: Media effects stabilize after repeated media exposure. Presented at the Etmaal van de Communicatiewetenschap, Rotterdam, 8–9 February 2024.
- Bayer, J. B.; Ellison, N. B.; Schoenebeck, S. Y.; Falk, E. B. Sharing the small moments: Ephemeral social interaction on Snapchat. Information, Communication & Society 2015, 19(7), 956–977.
- Borau, S.; Otterbring, T.; Laporte, S.; Wamba, S. F. The most human bot: Female gendering increases humanness perceptions of bots and acceptance of AI. Psychology & Marketing 2021, 38(7), 1052–1068.
- Brandtzaeg, P. B.; Skjuve, M.; Følstad, A. Emerging AI-individualism: How young people integrate social AI into their lives. SSRN Electronic Journal 2024.
- Casas, F.; González-Carrasco, M. The evolution of positive and negative affect in a longitudinal sample of children and adolescents. Child Indicators Research 2019, 13(5), 1503–1521.
- Celik, I. Exploring the determinants of artificial intelligence (AI) literacy: Digital divide, computational thinking, cognitive absorption. Telematics and Informatics 2023, 83, 102026.
- Dumontheil, I. Development of abstract thinking during childhood and adolescence: The role of rostrolateral prefrontal cortex. Developmental Cognitive Neuroscience 2014, 10, 57–76.
- Feine, J.; Gnewuch, U.; Morana, S.; Maedche, A. Gender bias in chatbot design. In Lecture Notes in Computer Science; 2020; pp. 79–93.
- Følstad, A.; Araujo, T.; Papadopoulos, S.; Law, E. L.; Granmo, O.; Luger, E.; Brandtzaeg, P. B. Chatbot research and design. Lecture Notes in Computer Science 2020.
- Frison, E.; Eggermont, S. Exploring the relationships between different types of Facebook use, perceived online social support, and adolescents’ depressed mood. Social Science Computer Review 2016, 34(2), 153–171.
- George, M. J.; Jensen, M. R.; Russell, M. A.; Gassman-Pines, A.; Copeland, W. E.; Hoyle, R. H.; Odgers, C. L. Young adolescents’ digital technology use, perceived impairments, and well-being in a representative sample. The Journal of Pediatrics 2020, 219, 180–187.
- Goodman, E.; Adler, N. E.; Kawachi, I.; Frazier, A. L.; Huang, B.; Colditz, G. A. Adolescents’ perceptions of social status: Development and evaluation of a new indicator. Pediatrics 2001, 108(2), e31.
- Helsper, E. J. Digital inequalities amongst digital natives. In Routledge eBooks; 2020; pp. 435–448.
- Herbener, A. B.; Damholdt, M. F. Are lonely youngsters turning to chatbots for companionship? The relationship between chatbot usage and social connectedness in Danish high-school students. 2024.
- Hong, J. I was born to love AI: The influence of social status on AI self-efficacy and intentions to use AI. International Journal of Communication 2022, 16, 20. Available online: https://ijoc.org/index.php/ijoc/article/view/17728/3632.
- Hopelab. Teen and young adult perspectives on Generative AI: Patterns of use, excitements, and concerns. 2024. Available online: https://hopelab.org/teen-young-adult-perspectives-generative-ai/ (accessed on 13 August 2024).
- Horton, D.; Wohl, R. R. Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry 1956, 19(3), 215–229.
- Jannai, D.; Meron, A.; Lenz, B.; Levine, Y.; Shoham, Y. Human or Not? A gamified approach to the Turing Test. arXiv 2023. Available online: https://arxiv.org/pdf/2305.20010.
- Jin, K.; Reichert, F.; Cagasan, L. P.; De La Torre, J.; Law, N. Measuring digital literacy across three age cohorts: Exploring test dimensionality and performance differences. Computers & Education 2020, 157, 103968.
- Katz, E.; Blumler, J. G.; Gurevitch, M. Utilization of mass communication by the individual. In The Uses of Mass Communications: Current Perspectives on Gratifications Research; Blumler, J. G., Katz, E., Eds.; Sage: Beverly Hills, 1974; pp. 19–31.
- Larson, R. W.; Moneta, G.; Richards, M. H.; Wilson, S. Continuity, stability, and change in daily emotional experience across adolescence. Child Development 2002, 73(4), 1151–1165.
- Livingstone, S.; Mascheroni, G.; Stoilova, M. The outcomes of gaining digital skills for young people’s lives and wellbeing: A systematic evidence review. New Media & Society 2021, 25(5), 1176–1202.
- Maeda, T.; Quan-Haase, A. When human-AI interactions become parasocial: Agency and anthropomorphism in affective design. In FAccT ’24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency; 2024; pp. 1068–1077.
- Meshram, S.; Naik, N.; Vr, M.; More, T.; Kharche, S. Conversational AI: Chatbots. In 2021 International Conference on Intelligent Technologies (CONIT); 2021.
- Ng, D. T. K.; Leung, J. K. L.; Chu, K. W. S.; Qiao, M. S. AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology 2021, 58(1), 504–509.
- Ofcom. Online nation, 2023 report. 2023. Available online: https://www.ofcom.org.uk/media-use-and-attitudes/online-habits/online-nation/ (accessed on 13 August 2024).
- Pew Research Center. Teens, social media and technology 2023. 2023. Available online: https://www.pewresearch.org/internet/2023/12/11/teens-social-media-and-technology-2023/ (accessed on 31 July 2024).
- Rosič, J.; Schreurs, L.; Janicke-Bowles, S. H.; Vandenbosch, L. Trajectories of digital flourishing in adolescence: The predictive roles of developmental changes and digital divide factors. Child Development 2024, 95(5), 1586–1602.
- Schöbel, S.; Schmitt, A.; Benner, D.; Saqr, M.; Janson, A.; Leimeister, J. M. Charting the evolution and future of conversational agents: A research agenda along five waves and new frontiers. Information Systems Frontiers 2023.
- Shevlin, H. All too human? Identifying and mitigating ethical risks of social AI. Law, Ethics & Technology 2024.
- Snapchat Support. What is My AI on Snapchat and how do I use it? n.d. Available online: https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it.
- Steele, J. R.; Brown, J. D. Adolescent room culture: Studying media in the context of everyday life. Journal of Youth and Adolescence 1995, 24(5), 551–576.
- Taylor, J. Australia plans to ban children from social media. Is checking and enforcing an age block possible? The Guardian, 10 September 2024. Available online: https://www.theguardian.com/australia-news/article/2024/sep/10/australia-children-social-media-ban-age-limit-under-16-details.
- UNESCO. I’d blush if I could: Closing gender divides in digital skills through education. 2019. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000367416 (accessed on 16 August 2024).
- UNESCO. Systematic prejudices: An investigation into bias against women and girls in large language models. 2024. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000388971 (accessed on 16 August 2024).
- Utz, S.; Muscanell, N.; Khalid, C. Snapchat elicits more jealousy than Facebook: A comparison of Snapchat and Facebook use. Cyberpsychology, Behavior, and Social Networking 2015, 18(3), 141–146.
- Vissenberg, J.; d’Haenens, L.; Livingstone, S. Digital literacy and online resilience as facilitators of young people’s well-being? A systematic review. European Psychologist 2022, 27(2), 76–85.
- Watson, D.; Clark, L. A.; Tellegen, A. Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology 1988, 54(6), 1063.
- Xie, C.; Wang, Y.; Cheng, Y. Does artificial intelligence satisfy you? A meta-analysis of user gratification and user satisfaction with AI-powered chatbots. International Journal of Human–Computer Interaction 2022, 40(3), 613–623.
Table 1.
Descriptive statistics and zero-order correlations on variables of interest.

| | Min. | Max. | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Gender | - | - | - | - | 1 | | | | | | |
| 2. SES | 1 | 10 | 7.35 | 1.31 | .03 | 1 | | | | | |
| 3. Age | 13 | 20 | 15.89 | 1.69 | -.24*** | -.27*** | 1 | | | | |
| 4. Positive emotional reactions | 1 | 5 | 2.04 | 0.80 | -.04 | .06 | -.22** | 1 | | | |
| 5. Negative emotional reactions | 1 | 5 | 1.58 | 0.63 | -.09 | .05 | -.01 | .15* | 1 | | |
| 6. My AI use | - | - | - | - | -.02 | .03 | .16** | - | - | 1 | |
| 7. General time spent on social media | 1 | 49 | 8.40 | 5.06 | -.07 | -.12* | .11 | .04 | .07 | -.18** | 1 |

Note. *p < .05, **p < .01, ***p < .001. Gender: 1 = girl, 2 = boy. My AI use: 1 = yes, 2 = no. Because positive and negative emotional reactions were only assessed among participants who scored “1” (yes) on My AI use, correlations between these variables and My AI use could not be computed.
Table 2.
Logistic regression analysis with My AI use as dependent variable and socio-demographic factors as independent variables.

| Predictor | b | SE | Wald χ² | Exp(b) | χ² | Cox & Snell R² | Nagelkerke’s R² |
|---|---|---|---|---|---|---|---|
| Model 1 | | | | | 6.79 | .04 | .06 |
| General time spent on social media | -.12 | .04 | 9.81 | .89 | | | |
| Model 2 | | | | | 9.50 | .09 | .13 |
| Gender | -.08 | .30 | .07 | .92 | | | |
| SES | .06 | .11 | .36 | 1.06 | | | |
| Age | .33*** | .09 | 13.40 | 1.39 | | | |
| Constant | -5.30 | 1.79 | 8.75 | .005 | | | |

Note: Gender (1 = girl, 2 = boy); My AI use (0 = yes, 1 = no); *p < .05, **p < .01, ***p < .001.
Table 3.
Linear regression analysis with positive emotions after My AI use as dependent variable and socio-demographic factors as independent variables.

| | b | SE | β | t | Adj. R² | ΔR² | F | F Change |
|---|---|---|---|---|---|---|---|---|
| Model 1 | | | | | -.00 | .00 | .27 | .27 |
| General time spent on social media | .01 | .01 | .04 | .52 | | | | |
| Model 2 | | | | | .04 | .06 | 3.02* | 3.93 |
| General time spent on social media | .01 | .01 | .06 | .08 | | | | |
| Gender | -.17 | .13 | -.09 | -1.24 | | | | |
| SES | .00 | .05 | .01 | .08 | | | | |
| Age | -.13 | .04 | -.24** | -3.26 | | | | |

Note: Gender (1 = girl, 2 = boy); *p < .05, **p < .01, ***p < .001.
Table 4.
Linear regression analysis with negative emotions after My AI use as dependent variable and socio-demographic factors as independent variables.

| | b | SE | β | t | Adj. R² | ΔR² | F | F Change |
|---|---|---|---|---|---|---|---|---|
| Model 1 | | | | | .00 | .01 | 1.03 | 1.03 |
| General time spent on social media | .01 | .09 | .07 | 1.01 | | | | |
| Model 2 | | | | | -.01 | .01 | .75 | .66 |
| General time spent on social media | .01 | .01 | .07 | 1.03 | | | | |
| Gender | -.11 | .10 | -.09 | -1.17 | | | | |
| SES | .03 | .04 | .05 | .70 | | | | |
| Age | -.01 | .03 | -.02 | -.28 | | | | |

Note: Gender (1 = girl, 2 = boy); *p < .05, **p < .01, ***p < .001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).