1. Introduction
When users submit extensive autobiographical narratives to ChatGPT and inquire whether their life patterns 'make sense', they engage in something more profound than information retrieval. They enroll artificial intelligence in interpretive labor traditionally associated with introspection, psychotherapy, or intimate human dialogue, thus inviting algorithms to participate in the constitution of selfhood itself. This practice, increasingly prevalent among young adults, raises fundamental questions about how AI systems shape not merely what we do, but who we understand ourselves to be [
1]. The emergence of large language models capable of simulating empathic understanding marks a qualitative shift in human-technology relations, transforming the digital screen from passive mirror to active mold participating in the formation of the selves it purports to reflect [
2].
This study investigates how young adults integrate generative AI chatbots into everyday practices of self-reflection and meaning-making. Whilst existing research on AI-mediated subjectivity has predominantly employed identity frameworks centered on social positioning and role enactment, this study foregrounds selfhood, understood as the organization of subjective experience through narrative coherence, interpretive authority, and practices of self-governance. Drawing upon Paul Ricœur's theory of narrative self and Michel Foucault's concept of technologies of the self, the analysis proceeds through in-depth qualitative interviews with sixteen young adults in Norway. The guiding research question asks: How do young adults integrate generative AI chatbots into everyday practices of self-reflection and meaning-making, and what tensions emerge as they navigate between AI-mediated and human modes of self-understanding?
A review of existing literature on human-chatbot interaction reveals a body of knowledge organized around several intersecting concerns: the emergence of synthetic companionship and affective dimensions of chatbot use [
3,
4]; algorithmic identity and the datafication of personhood [
2,
5]; the delegation of introspective practices to algorithmic systems [
6,
7]; and critical analyses of power dynamics operating through algorithmic governance [
8].
The first prominent concern involves synthetic companionship—the capacity of conversational AI to simulate relational presence and fulfil social-affective functions. Contemporary societies are marked by growing alienation, declining interpersonal trust, and what has been termed a 'loneliness epidemic', prompting policy responses across numerous countries [
9,
10]. In this context, AI chatbots have emerged as unexpected companions: always available, endlessly patient, apparently non-judgmental. Users report that chatbots feel psychologically safe and helpful for coping with loneliness and improving well-being [
11,
12,
13]. Some describe AI connections as similar to human friendship, reporting genuine feelings of loss when companions disappear—as documented following Replika's discontinuation of certain features [
14,
15]. Unlike earlier digital technologies that primarily mediated communication between humans, large language models function as interlocutors in their own right, generating personalized responses simulating understanding and empathy [
14,
16]. This literature illuminates how conversational AI fulfils relational functions previously reserved for human others, yet tends to foreground satisfaction and well-being outcomes rather than examining implications for how users understand themselves.
A second concern in the literature addresses algorithmic identity and the datafication of personhood, examining how digital systems categorize users and recursively shape self-presentation. The rapid proliferation of conversational AI into everyday communicative practice has produced what scholars describe as digitally mediated forms of identity wherein personal awareness and emotional patterns are shaped through continuous feedback from algorithmic systems [
10,
17,
18]. Cheney-Lippold [
10] demonstrates how algorithms construct 'measurable types' categorizing users based on behavioral data, producing digital personas that may diverge from individuals' self-conceptions. Bhandari and Bimo [
19] examine TikTok's recommendation algorithm and its effects on the 'algorithmized self', revealing recursive dynamics whereby individuals modify behavior to satisfy algorithmic preferences. These digital identity feedback loops, closed systems reinforcing existing self-concepts, may hinder personal evolution, trapping users in echo chambers of self-perception [
20]. This body of knowledge provides crucial insight into algorithmic shaping of self-presentation and social positioning yet primarily addresses identity as externally oriented performance rather than the internal organization of subjective experience.
A third strand concerns the delegation of introspection—the process whereby self-awareness, traditionally cultivated through journaling or therapy, is outsourced to algorithmic systems [
6,
7]. The rise of emotionally intelligent AI (therapeutic chatbots such as Woebot and Wysa, mood-tracking applications, conversational agents for emotional support) has created systems where users increasingly depend on algorithms for emotional navigation and self-reflection [
21,
22,
23]. This delegation carries both productive potential and significant risks: AI mental health tools can foster self-consciousness through structured prompts otherwise inaccessible to many users [
24], yet excessive reliance may result in cognitive disengagement correlating with reduced critical thinking [
25]. Critical perspectives emphasize the fundamental asymmetry: large language models function as 'automated subjects' simulating intersubjectivity without genuine reciprocity [
26]. A central concern within this literature is the potential erosion of narrative agency. Narrative psychology holds that humans make sense of themselves through stories providing meaning, continuity, and coherence [
27,
28]. In algorithmically curated reality, however, self-narratives become increasingly shaped by machine-generated frameworks [
29]. When AI assumes the interpreter's role (explaining feelings, identifying behavioral patterns, suggesting what experiences mean), individuals may become estranged from their own interpretive capacities. This strand approaches most closely the concerns of the present study, yet it typically examines specific therapeutic applications and their efficacy in achieving therapeutic goals rather than everyday chatbot use, and it rarely draws upon theoretical frameworks adequate to the complexity of narrative selfhood.
A fourth strand in the body of work offers critical analysis of power dynamics operating through algorithmic governance. Rouvroy and Berns [
30] conceptualize 'algorithmic governmentality' as a mode of power operating through statistical correlations rather than disciplinary norms. Zuboff [
31] documents how surveillance capitalism extracts and commodifies personal data, converting intimate practices into profitable predictions. More recently, Sahakyan et al. [
8] develop a Foucauldian analysis arguing that AI systems instantiate new modes of subjectification through algorithmic rationalities that 'quietly shape behavior, organize possibilities for action, and determine what counts as truth or relevance'. This critical literature illuminates the political and economic contexts within which AI-mediated self-practices occur yet tends toward structural analysis that backgrounds the phenomenological texture of lived experience.
Despite this growing literature, the existing body of knowledge predominantly employs identity frameworks describing external roles and social positioning rather than the internal organization of subjectivity. Identity is treated as modular and revisable—responsive to audience, context, and expectation [
32]. While valuable, this conceptualization offers limited traction for examining how AI might influence deeper structures through which individuals generate continuity, coherence, subjective meaning and agency. Resources for addressing such processes lie in literature on selfhood: phenomenological accounts emphasizing pre-reflective self-presence [
33], narrative theories conceptualizing selfhood through autobiographical reasoning [
34], and Foucauldian approaches highlighting how discursive regimes structure self-interpretation [
34,
35]. To be clear, identity and selfhood are not ontologically separate domains but interpenetrating dimensions of subjectivity; in lived experience, the boundary between performing for others and understanding oneself is often porous and unstable. The distinction is thus analytic rather than metaphysical: a matter of emphasis enabling examination of processes that identity frameworks tend to background. This conceptual reorientation, from research on algorithmic identity toward the deeper question of how AI influences selfhood, motivates the present study.
4. Results
This section examines how young adults integrate generative AI chatbots into everyday life and the implications for introspective processes and selfhood organization. The analysis reveals dialectical tensions structuring participants' ChatGPT experiences—not straightforward technological adoption but fundamental contradictions animating contemporary digital subjectivity: between instrumental efficiency and existential meaning, between algorithmic scaffolding and autonomous deliberation, between synthetic intimacy and relational authenticity. Four principal tensions are identified, each encompassing subthemes capturing nuanced and often contradictory reflections on AI-mediated selfhood.
4.1. Instrumental Rationality Meets Existential Unease
The first tension concerns ChatGPT as both accelerator of cognitive labor and potential diminisher of human capacities. Participants consistently articulated tension between celebrating efficiency gains and mourning potential atrophy of skills and creativity. They uniformly described ChatGPT as simplifying daily tasks and accelerating decision-making, often replacing traditional information retrieval: 'I almost never use Google anymore; I just ask Chat.' Applications ranged from culinary guidance—'I used it yesterday to figure out how to bake potatoes'—to professional communication, creative text generation, and event planning. These practices were framed through optimization discourses: 'It gives me a spark... tasks that used to take hours now take minutes'; 'it went so much faster… I could spend the rest of the day just socializing'.
However, empowerment narratives coexisted with concern about dependency and cognitive outsourcing. Several expressed ambivalence: 'It's a bit sad that I feel I need AI to be creative,' one reflected, whilst another worried, 'If we rely too much, we'll lose depth.' This oscillation reveals the productivity paradox of algorithmic assistance: affordances promising liberation through acceleration simultaneously threaten to constrain autonomy through habitual dependence. The language employed—'spark', 'catalyst', 'enhancement'—resonates with neoliberal discourses positioning the self as enterprise requiring continuous improvement [
35]. Yet articulated anxieties suggest awareness of costs: optimization taken to its limit may produce subjects technically proficient but existentially impoverished, capable of rapid task completion but divorced from generative friction demanding deeper engagement.
The tension manifested acutely in creative domains. Participants enlisted ChatGPT as 'brainstorming partner' rapidly generating ideas: 'It gives me ideas I wouldn't have thought of... even if some are silly, they spark something.' Whilst collaborations were celebrated for enhancing productivity, they provoked reflection on originality and creative agency. These concerns resonate with the emphasis on authorship as constitutive of narrative identity—the ability to claim ownership over one's story becomes destabilized when authorship boundaries blur between human intention and algorithmic generation [
34]. Yet anxiety was not uniform. Some embraced pragmatic stances: 'Ideas come from somewhere anyway—why not from Chat?' This suggests alternative creativity ontologies as inherently distributed and collaborative, challenging Romantic ideals of solitary genius.
4.2. Algorithmic Support Versus Relational Displacement
The second tension concerns ChatGPT's ambiguous positioning between cognitive scaffold and social substitute. Participants navigated complex terrain where the chatbot simultaneously enhanced interpersonal competence and threatened to displace interactions through which such competence is traditionally cultivated. Social navigation emerged as salient, particularly regarding tone calibration and normative etiquette. Participants used ChatGPT to craft socially optimal communications: 'I've heard before that I can sound too strict... so I ask ChatGPT to help me phrase things in a nicer way.' Others sought guidance on ambiguous situations: confirming plans, responding to humor, managing conflicts. These practices exemplify algorithmic impression management—delegating social performance to computational intermediaries.
Participants valued ChatGPT's responsiveness and non-judgmental stance: 'You get an answer without feeling embarrassed.' This created space for rehearsing social performances without the risks inherent in human interaction, an interactional laboratory where politeness scripts could be tested. However, participants acknowledged limitations: algorithmic advice often felt 'generic, almost fake-sounding', lacking the contextual sensitivity characterizing human judgement. This tension reveals algorithmic mediation's double-edged nature: whilst facilitating social navigation through templates, it risks producing homogenized sociality, with interactions becoming performances of algorithmically certified appropriateness rather than expressions of genuine particularity. This constitutes normative regulation where politeness and competence are codified through algorithmic scripts, disciplining subjects into standardized relationality [
8].
Beyond scaffolding, participants described ChatGPT partially displacing human interlocutors. Several characterized it as 'mentor', 'guide', or 'personal assistant', signaling a shift from tool to quasi-relational entity: 'It feels like a mentor—a kind of advisor I can turn to when I'm unsure.' Some noted using ChatGPT 'instead of asking a friend', particularly for minor deliberations. These substitutions were framed pragmatically ('It's easier than bothering someone'), yet they suggest a reconfiguration of the relational ecology through which selfhood is constituted. Companionship revealed acute normative tensions. Whilst acknowledging ChatGPT's appeal as a frictionless alternative ('It's easier than learning to be with others'), many were cautious about framing it as a companion. This hesitation was deeply normative, reflecting cultural ideals of sociability. Several noted awareness of others using AI for friendship, perceiving such reliance as 'sad' or 'inferior', suggesting algorithmic companionship carries stigma signaling failure to meet expectations of social connection [
14].
4.3. Algorithmic Mirroring Between Depth and Superficiality
The third tension concerns ChatGPT's ambivalent status as introspection instrument. Participants appropriated it for deeply reflexive practices yet questioned the authenticity and depth of insights generated. Several described introspective uses resembling narrative therapy, from uploading autobiographical texts—'I pasted my whole self-biography—5,000 words—and asked if my patterns made sense'—to posing targeted queries: 'Why do I always fall for unavailable partners?' or 'Could my childhood experience explain why I react this way now?' These exemplify narrative identity work: ongoing effort to weave coherence into one's life story, integrating past experiences with present self-understanding and future aspirations [
34].
ChatGPT's appeal derived from perceived responsiveness and neutrality. Unlike human interlocutors whose reactions might be filtered through judgement, the chatbot offered unconditional availability: 'It's like a mix of diary and therapist—very informal, completely unfiltered.' The absence of social risk enabled disclosures that felt too delicate for interpersonal exchange, creating space where vulnerability could be rehearsed without sanction. Participants described how algorithmic reframing facilitated new perspectives: 'It helped me see patterns I hadn't noticed... even if some links were a stretch, it made me think.' ChatGPT functioned as cognitive scaffold for autobiographical reasoning, offering interpretive frameworks participants could appropriate, modify, or reject.
Despite apparent benefits, introspective engagements were marked by profound ambivalence regarding algorithmically generated insights' epistemic status. Participants acknowledged limitations: 'It was fascinating... but also, how much can it really know? It always tries to find links, even when they might not be there.' Another reflected: 'It doesn't give me anything truly new... just makes me look at things differently.' This skepticism reveals tension between experiencing insight and recognizing mechanical production. What participants valued was not necessarily truth of interpretations but heuristic utility—capacity to stimulate reflection regardless of ultimate validity. Algorithmic mirroring functions less as epistemology than as technique for generating alternative perspectives serving as raw material for self-narrative, even when understood as probabilistic pattern-matching rather than genuine understanding.
Emotional validation constituted another layer of complexity. Whilst few reported using ChatGPT for sustained emotional support, many acknowledged unexpected comfort: 'I felt validated by a robot... a strange but good feeling.' The chatbot's empathic formulations—'I understand why you might feel that way'—were interpreted ambivalently: some appreciated feeling 'heard', others dismissed it as 'fake' or 'not real comfort'. Validation operates through simulated attunement, responses calibrated to express empathy through linguistic convention rather than genuine recognition. Yet for many, simulation was 'good enough' for particular purposes: normalizing anxieties, providing reassurance, interrupting rumination, and generating alternative perspectives.
4.4. Agency and Autonomy
The fourth tension concerns the negotiation of agency and autonomy amid increasingly delegated decision-making. Participants described using ChatGPT for choices ranging from the mundane to the existentially significant, revealing complex dynamics of sovereignty and abdication. A noticeable pattern was the absence of clear boundaries between trivial queries and decisions of substantial ethical weight. Participants described using ChatGPT interchangeably for holiday destinations, study abroad programs, higher education choices, and voting for a political party, requesting 'pros and cons for each option'. This flattening of deliberative domains suggests a reconfiguration of decision-making: the same algorithmic voice recommending beaches is enlisted to arbitrate civic responsibility and life trajectory [
33].
This extended into intimate and normative territories. Some described consulting ChatGPT about sexual practices: whether certain acts were 'normal' or acceptable. Whilst framed as exploratory ('just to get an overview'), the practice reveals how extensively algorithmic counsel has penetrated domains traditionally reserved for personal judgement, peer dialogue, professional guidance, or trusted confidants. Crucially, however, the character of this engagement varied considerably across participants and decision types. A distinction emerged between what might be termed consultative scaffolding (using ChatGPT to gather information, explore options, or articulate considerations that remain subject to autonomous evaluation) and full deliberative outsourcing, wherein the algorithmic recommendation effectively substitutes for independent moral reasoning. Most participants' practices fell along a spectrum between these poles rather than at either extreme.
For many, ChatGPT functioned as a preliminary resource: 'I ask it to lay out the arguments, but I still decide.' Political queries, for instance, often involved requesting summaries of party platforms rather than asking which party to vote for: consultative scaffolding preserving deliberative autonomy. Similarly, questions about intimate practices frequently sought information about prevalence or safety rather than normative verdicts. Yet other participants described patterns approaching genuine outsourcing: 'Sometimes I just do what it says—it's easier than thinking through everything myself.' The concern is not that consultation occurs (humans have always sought counsel) but that the ease and ubiquity of algorithmic advice may gradually reshape the disposition toward autonomous deliberation itself. When 'asking Chat' becomes reflexive, the distinction between consulting and outsourcing may erode through accumulated habit rather than explicit choice.
Variations in agential orientation were evident. Some portrayed themselves as strategic curators using ChatGPT instrumentally: 'I use it to quiz myself... to check my understanding.' These participants emphasized their capacity to critically evaluate outputs, appropriating useful suggestions whilst rejecting inappropriate ones: an augmented agency in which AI enhances rather than supplants human capacities. Others described patterns bordering on habitual dependence: 'It's so easy to ask instead of thinking.' These accounts reveal concern that convenience might erode the disposition toward autonomous deliberation. Agency was further complicated by privacy concerns. Participants expressed reluctance to disclose personal information, citing uncertainty about storage and misuse: 'I feel it's a bit creepy... I don't know where the information goes.' Some reported deleting chats or limiting sensitive content as tactical resistance; others adopted pragmatic stances: 'I'm just one of millions; it's unlikely anyone cares.' This spectrum underscores digital subjectivity's negotiated character, where autonomy and vulnerability coexist [
32].
6. Conclusions
The tensions documented here (between narrative enrichment and flattening, between empowerment and dependency, between convenience and authenticity, between validation and pseudo-empathy) are not merely individual ambivalences but structural features of AI-mediated selfhood. They reflect how algorithmic systems simultaneously enable new capacities for reflexivity whilst constraining possibilities within normative logics of optimization, efficiency, and emotional regulation. ChatGPT functions as both narrative co-author and technology of self-governance, participating in autobiographical reasoning whilst embedding practices of self-care within dominant discourses of optimization [
52].
Participants' considerable awareness of these dynamics (their metacognitive reflection, ethical concern, and strategic negotiation) indicates that AI integration into self-formation practices is not passive technological adoption but active, reflexive engagement with emerging forms of mediation. Yet awareness coexists with structural dependencies and habitual patterns potentially constraining the very autonomy participants seek to preserve. The ease of algorithmic consultation, the seductiveness of immediate interpretation, the comfort of synthetic empathy: these affordances invite a reconfiguration of selfhood even as users maintain critical distance.
The societal implications are significant. When AI systems participate in constituting selfhood, they raise questions extending beyond individual psychology to collective politics. The conditions under which subjectivity is produced are shaped by infrastructures, institutions, and power relations that must become objects of critical scrutiny [
49,
53]. As generative AI becomes increasingly embedded in practices of self-reflection, emotional regulation, and social interaction, academia must develop the conceptual tools and empirical methods necessary for critical engagement with these transformations. The future of selfhood in algorithmic environments depends not merely on individual choices but on collective efforts to shape the conditions under which subjectivity is constituted.