Preprint
Article

This version is not peer-reviewed.

Encountering Generative AI: Narrative Self-Formation and Technologies of the Self Among Young Adults

A peer-reviewed article of this preprint also exists.

Submitted: 24 December 2025

Posted: 26 December 2025


Abstract
This paper examines how young adults integrate generative artificial intelligence chatbots into everyday life and the implications of these engagements for the constitution of selfhood. Whilst existing research on AI-mediated subjectivity has predominantly employed identity frameworks centered on social positioning and role enactment, this study foregrounds selfhood—understood as the organization of subjective experience through narrative coherence, interpretive authority, and practices of self-governance. Drawing upon Paul Ricœur's theory of narrative self and Michel Foucault's concept of technologies of the self, the analysis proceeds through in-depth qualitative interviews with sixteen young adults in Norway to investigate how algorithmic systems participate in autobiographical reasoning and self-formative practices. The findings reveal four dialectical tensions structuring participants' engagements with ChatGPT: between instrumental efficiency and existential meaning; between algorithmic scaffolding and relational displacement; between narrative depth and epistemic superficiality; and between augmented agency and deliberative outsourcing. The analysis demonstrates that AI-mediated practices extend beyond instrumental utility to reconfigure fundamental dimensions of subjectivity, raising questions about interpretive authority, narrative authorship, and the conditions under which selfhood is negotiated in algorithmic environments. These findings contribute to debates on digital subjectivity, algorithmic governance, and the societal implications of AI systems that increasingly function as interlocutors in meaning-making processes.

1. Introduction

When users submit extensive autobiographical narratives to ChatGPT and inquire whether their life patterns 'make sense', they engage in something more profound than information retrieval. They enroll artificial intelligence in interpretive labor traditionally associated with introspection, psychotherapy, or intimate human dialogue, thus inviting algorithms to participate in the constitution of selfhood itself. This practice, increasingly prevalent among young adults, raises fundamental questions about how AI systems shape not merely what we do, but who we understand ourselves to be [1]. The emergence of large language models capable of simulating empathic understanding marks a qualitative shift in human-technology relations, transforming the digital screen from passive mirror to active mold participating in the formation of the selves it purports to reflect [2].
This study investigates how young adults integrate generative AI chatbots into everyday practices of self-reflection and meaning-making. Whilst existing research on AI-mediated subjectivity has predominantly employed identity frameworks centered on social positioning and role enactment, this study foregrounds selfhood, understood as the organization of subjective experience through narrative coherence, interpretive authority, and practices of self-governance. Drawing upon Paul Ricœur's theory of narrative self and Michel Foucault's concept of technologies of the self, the analysis proceeds through in-depth qualitative interviews with sixteen young adults in Norway. The guiding research question asks: How do young adults integrate generative AI chatbots into everyday practices of self-reflection and meaning-making, and what tensions emerge as they navigate between AI-mediated and human modes of self-understanding?
A review of existing literature on human-chatbot interaction reveals a body of knowledge organized around several intersecting concerns: the emergence of synthetic companionship and affective dimensions of chatbot use [3,4]; algorithmic identity and the datafication of personhood [2,5]; the delegation of introspective practices to algorithmic systems [6,7]; and critical analyses of power dynamics operating through algorithmic governance [8].
The first prominent concern involves synthetic companionship—the capacity of conversational AI to simulate relational presence and fulfil social-affective functions. Contemporary societies are marked by growing alienation, declining interpersonal trust, and what has been termed a 'loneliness epidemic' prompting policy responses across numerous countries [9,10]. In this context, AI chatbots have emerged as unexpected companions: always available, endlessly patient, apparently non-judgmental. Users report that chatbots feel psychologically safe and helpful for coping with loneliness and improving well-being [11,12,13]. Some describe AI connections as similar to human friendship, reporting genuine feelings of loss when companions disappear—as documented following Replika's discontinuation of certain features [14,15]. Unlike earlier digital technologies that primarily mediated communication between humans, large language models function as interlocutors in their own right, generating personalized responses simulating understanding and empathy [14,16]. This literature illuminates how conversational AI fulfils relational functions previously reserved for human others, yet tends to foreground satisfaction and well-being outcomes rather than examining implications for how users understand themselves.
A second concern in the literature addresses algorithmic identity and the datafication of personhood, examining how digital systems categorize users and recursively shape self-presentation. The rapid proliferation of conversational AI into everyday communicative practice has produced what scholars describe as digitally mediated forms of identity wherein personal awareness and emotional patterns are shaped through continuous feedback from algorithmic systems [10,17,18]. Cheney-Lippold [10] demonstrates how algorithms construct 'measurable types' categorizing users based on behavioral data, producing digital personae that may diverge from individuals' self-conceptions. Bhandari and Bimo [19] examine TikTok's recommendation algorithm and its effects on the 'algorithmized self', revealing recursive dynamics whereby individuals modify behavior to satisfy algorithmic preferences. These digital identity feedback loops, closed systems reinforcing existing self-concepts, may hinder personal evolution, trapping users in echo chambers of self-perception [20]. This body of knowledge provides crucial insight into the algorithmic shaping of self-presentation and social positioning yet primarily addresses identity as externally oriented performance rather than the internal organization of subjective experience.
A third strand concerns the delegation of introspection—the process whereby self-awareness, traditionally cultivated through journaling or therapy, is outsourced to algorithmic systems [6,7]. The rise of emotionally intelligent AI (therapeutic chatbots like Woebot and Wysa, mood-tracking applications, conversational agents for emotional support) has created systems where users increasingly depend on algorithms for emotional navigation and self-reflection [21,22,23]. This delegation carries both productive potential and significant risks: AI mental health tools can foster self-consciousness through structured prompts otherwise inaccessible to many users [24], yet excessive reliance may result in cognitive disengagement correlating with reduced critical thinking [25]. Critical perspectives emphasize the fundamental asymmetry: large language models function as 'automated subjects' simulating intersubjectivity without genuine reciprocity [26]. A central concern within this literature is the potential erosion of narrative agency. Narrative psychology holds that humans make sense of themselves through stories providing meaning, continuity, and coherence [27,28]. In algorithmically curated reality, however, self-narratives become increasingly shaped by machine-generated frameworks [29]. When AI assumes the interpreter's role, explaining feelings, identifying behavioral patterns, and suggesting what experiences mean, individuals may become estranged from their own interpretive capacities. This strand approaches most closely the concerns of the present study yet typically examines specific therapeutic applications and their efficacy in achieving therapeutic goals rather than everyday chatbot use, and it rarely draws upon theoretical frameworks adequate to the complexity of narrative selfhood.
A fourth strand offers critical analysis of the power dynamics operating through algorithmic governance. Rouvroy and Berns [30] conceptualize 'algorithmic governmentality' as a mode of power operating through statistical correlations rather than disciplinary norms. Zuboff [31] documents how surveillance capitalism extracts and commodifies personal data, converting intimate practices into profitable predictions. More recently, Sahakyan et al. [8] develop a Foucauldian analysis arguing that AI systems instantiate new modes of subjectification through algorithmic rationalities that 'quietly shape behavior, organize possibilities for action, and determine what counts as truth or relevance'. This critical literature illuminates the political and economic contexts within which AI-mediated self-practices occur yet tends toward structural analysis that backgrounds the phenomenological texture of lived experience.
Despite this growing literature, the existing body of knowledge predominantly employs identity frameworks describing external roles and social positioning rather than the internal organization of subjectivity. Identity is treated as modular and revisable—responsive to audience, context, and expectation [32]. While valuable, this conceptualization offers limited traction for examining how AI might influence the deeper structures through which individuals generate continuity, coherence, subjective meaning, and agency. Resources for addressing such processes lie in the literature on selfhood: phenomenological accounts emphasizing pre-reflective self-presence [33], narrative theories conceptualizing selfhood through autobiographical reasoning [34], and Foucauldian approaches highlighting how discursive regimes structure self-interpretation [34,35]. To be clear, identity and selfhood are not ontologically separate domains but interpenetrating dimensions of subjectivity; in lived experience, the boundary between performing for others and understanding oneself is often porous and unstable. The distinction is thus analytic rather than metaphysical, a matter of emphasis enabling examination of processes that identity frameworks tend to background. This conceptual reorientation, from research on algorithmic identity toward the deeper question of how AI influences selfhood, motivates the present study.

2. Theoretical Framework

This study integrates two complementary theoretical perspectives: Ricœur's theory of narrative self and Foucault's concept of technologies of the self. Together, these frameworks illustrate how selfhood is constituted through narrative practices of emplotment and autobiographical reasoning, and how such practices are embedded within discursive regimes delimiting possibilities for self-understanding. Their combined application enables analysis of both the interpretive processes through which AI-mediated interactions shape self-understanding and the normative dimensions structuring such engagements.

2.1. Narrative Self and Autobiographical Reasoning

Ricœur's [34] concept of narrative self emphasizes the storied character of selfhood. According to this framework, individuals construct their identities by weaving disparate events and experiences into coherent narratives answering the question 'who am I?' This narrative work involves selecting, configuring, and interpreting life events to create temporal continuity and meaningful progression, what Ricœur terms 'emplotment'. The self emerges not as fixed substance but as dynamic achievement, continuously reconstructed through the dialectic of concordance (integration) and discordance (disruption). Central to Ricœur's account is the distinction between idem (sameness) and ipse (selfhood). Idem refers to persistent traits identifiable over time; ipse refers to the ethical dimension of self-constancy, the capacity to make and keep promises, maintain commitments, and answer for one's actions. The narrative self bridges these dimensions: through emplotment, contingent events are integrated into temporal wholes sustaining both descriptive continuity and ethical accountability.
Psychological research has elaborated this framework, examining how life stories exhibit structural features (themes of agency and communion, redemptive or contamination sequences, degrees of coherence) that correlate with well-being [27,28,36]. Autobiographical reasoning, the reflective process connecting past experiences with present self-understanding, is fundamental to narrative self-formation. This reasoning is inherently interpretive: it configures events into meaningful patterns serving identity functions rather than simply recording them.

2.2. Technologies of the Self and Algorithmic Governance

Foucault's [37] concept of technologies of the self refers to practices through which individuals work upon themselves, their bodies, thoughts, and conduct, in order to achieve particular states of happiness, wisdom, virtue, or well-being. These technologies are historically situated, embedded within regimes of power and knowledge defining what counts as worthy selfhood and prescribing cultivation techniques. Unlike technologies of domination operating through external coercion, technologies of the self function through self-governance—individuals actively constitute themselves as subjects according to available norms. Contemporary technologies of the self operate within neoliberal governmentality, where individuals are constituted as entrepreneurs of themselves responsible for continuous self-optimization [35]. Digital technologies have become central, offering tools for self-tracking, self-monitoring, and self-management promising enhanced productivity, well-being, and authenticity [31]. Generative AI extends this trajectory, providing algorithmically personalized guidance for everything from scheduling to existential decision-making.
The Foucauldian perspective directs attention to the normative regimes within which these practices are embedded. When users consult ChatGPT about social interaction, introspection, emotional regulation, or life choices, they participate in practices presupposing particular ideals of the optimized and emotionally intelligent self. The algorithmic voice functions not merely as information source but as normative arbiter, codifying conceptions of appropriate selfhood. Understanding how users navigate these dimensions, adopting, resisting, or negotiating algorithmic prescriptions, illuminates contemporary conditions of subjectification.

2.3. Theoretical Integration: Critical Hermeneutics of Algorithmic Selfhood

Integrating Ricœur's narrative framework with Foucault's governmentality analysis enables examination of AI-mediated practices at multiple levels. From the Ricœurian perspective, we examine how algorithmic systems participate in autobiographical reasoning—the interpretive work configuring events into meaningful narratives. From the Foucauldian perspective, we analyze how narrative practices are embedded within normative regimes structuring what subjects can think or say about themselves. This dual framework illuminates a central tension: algorithmic systems may simultaneously enable new forms of narrative reflection whilst constraining possibilities within dominant norms. ChatGPT may facilitate autobiographical reasoning by generating alternative framings and identifying patterns, yet its interpretations are shaped by training data encoding particular, often optimizing and individualizing, conceptions of selfhood. The algorithmic interlocutor participates in both narrative and governmental dimensions of self-constitution, making this theoretical integration particularly apt for the present inquiry.

3. Methods

3.1. Research Design and Epistemological Orientation

This study employed a qualitative research design to explore how young adults integrate generative AI into reflective and self-formative practices. The epistemological orientation is hermeneutical, grounded in the view that selfhood is constituted through meaning-making, narrative articulation, and the normative frameworks individuals use to understand and govern themselves [38,39]. This orientation aligns with Ricœur's conception of narrative selfhood and Foucault's account of technologies of the self, both emphasizing interpretive and ethical dimensions of everyday subjectivity. The analysis focuses not on establishing the factual accuracy of descriptions but on how participants narrate and negotiate AI interactions as part of their ongoing organization of selfhood. Ontologically, the study adopts a constructionist position recognizing that selfhood is not a pre-given entity awaiting discovery, but an achievement continuously accomplished through interpretive practices embedded in social and technological contexts [40]. This does not imply that selfhood is arbitrary or infinitely malleable; rather, it is constituted within historically specific conditions that both enable and constrain possibilities for self-understanding. AI chatbots represent a novel element within these conditions, participating in self-constitution in ways this study seeks to investigate.

3.2. Participants and Recruitment

Participants were sixteen young adults residing in Norway: eleven women and five men aged twenty to twenty-five. Recruitment proceeded through convenience sampling via posters on a large university campus. The inclusion criterion required regular ChatGPT use for reflective or emotionally meaningful purposes beyond simple information retrieval—ensuring participants could speak substantively about AI-mediated self-reflection. Participants represented diverse educational backgrounds across psychology, social sciences, and natural sciences, and all had used ChatGPT for several years prior to the interviews. The adequacy of the sample was assessed in accordance with the principle of information power, which holds that the relevance and richness of data in relation to the study aim are more important than the number of participants recruited [41]. The interviews produced rich, layered accounts of meaning-making and ambivalence in self-formative practices. After approximately nine interviews, no substantively new thematic domains emerged; subsequent interviews contributed additional nuance and variation without expanding the thematic landscape.
The Norwegian context merits brief consideration. As a highly digitized society with rapid ChatGPT adoption, Norway provides an illustrative setting for examining these phenomena. The welfare state context, characterized by high societal trust and robust public institutions, means that tensions between algorithmic and human modes of self-understanding likely reflect features of the technology rather than merely responses to social precarity or inadequate support systems. However, findings should be interpreted with awareness that experiences may differ in contexts with different technological infrastructures, cultural norms, or institutional supports.

3.3. Data Collection

Data were generated through in-depth, semi-structured interviews lasting sixty to ninety minutes, conducted in person or via encrypted video. Interviews invited participants to describe how ChatGPT entered everyday routines; how they used it for reflection, emotional articulation, and decision-making; and how they experienced differences between AI-mediated and human dialogue. Participants were encouraged to provide concrete examples, recount specific episodes, and reflect on contradictions or ambivalence. All interviews were audio-recorded and transcribed verbatim.

3.4. Analysis

The analytic process followed abductive, reflexive thematic analysis [42,43]. Analysis began with repeated transcript readings to develop familiarity and sensitize the researcher to emerging patterns. Initial coding proceeded inductively, capturing explicit descriptions and interpretive nuances in how participants articulated moments of insight, confusion, dependence, or discomfort. As coding developed, related codes were assembled into broader patterns expressing recurrent tensions in meaning-making practices (see Supplementary Table S1). Throughout, theoretical concepts relating to narrative coherence and self-governance informed but did not predetermine theme development (see Supplementary Tables S2 and S3). The analysis involved iterative movement between data and theory, allowing insights to emerge whilst ensuring interpretations remained anchored in participants' accounts. The final synthesis examined how participants positioned ChatGPT within autobiographical reasoning and navigated tensions between AI-mediated and human self-understanding (see Supplementary Table S4). Researcher reflexivity formed an integral part of the analytic work; reflexive memos examined how assumptions and theoretical commitments might influence theme development.

3.5. Ethics and Trustworthiness

The study followed national ethical guidelines and received institutional approval. Participants gave informed consent and were assured of confidentiality. All identifying information was removed from transcripts; data were stored securely on encrypted servers. Because interviews touched on sensitive aspects of emotional life, participants could decline questions or withdraw without consequence, and the raw interview data will be stored for a limited time only. Trustworthiness was supported through sustained data engagement, iterative refinement of codes and themes, and detailed documentation of analytic decisions [44].

4. Results

This section examines how young adults integrate generative AI chatbots into everyday life and the implications for introspective processes and selfhood organization. The analysis reveals dialectical tensions structuring participants' ChatGPT experiences—not straightforward technological adoption but fundamental contradictions animating contemporary digital subjectivity: between instrumental efficiency and existential meaning, between algorithmic scaffolding and autonomous deliberation, between synthetic intimacy and relational authenticity. Four principal tensions are identified, each encompassing subthemes capturing nuanced and often contradictory reflections on AI-mediated selfhood.

4.1. Instrumental Rationality Meets Existential Unease

The first tension concerns ChatGPT as both accelerator of cognitive labor and potential diminisher of human capacities. Participants consistently articulated tension between celebrating efficiency gains and mourning potential atrophy of skills and creativity. Participants uniformly described ChatGPT as simplifying daily tasks and accelerating decision-making, often replacing traditional information retrieval: 'I almost never use Google anymore; I just ask Chat.' Applications ranged from culinary guidance—'I used it yesterday to figure out how to bake potatoes'—to professional communication, creative text generation, and event planning. These practices were framed through optimization discourses: 'It gives me a spark... tasks that used to take hours now take minutes'; 'it went so much faster… I could spend the rest of the day just socializing'.
However, empowerment narratives coexisted with concern about dependency and cognitive outsourcing. Several expressed ambivalence: 'It's a bit sad that I feel I need AI to be creative,' one reflected, whilst another worried, 'If we rely too much, we'll lose depth.' This oscillation reveals the productivity paradox of algorithmic assistance: affordances promising liberation through acceleration simultaneously threaten to constrain autonomy through habitual dependence. The language employed—'spark', 'catalyst', 'enhancement'—resonates with neoliberal discourses positioning the self as enterprise requiring continuous improvement [35]. Yet articulated anxieties suggest awareness of costs: optimization taken to its limit may produce subjects technically proficient but existentially impoverished, capable of rapid task completion but divorced from generative friction demanding deeper engagement.
The tension manifested acutely in creative domains. Participants enlisted ChatGPT as a 'brainstorming partner' rapidly generating ideas: 'It gives me ideas I wouldn't have thought of... even if some are silly, they spark something.' Whilst these collaborations were celebrated for enhancing productivity, they provoked reflection on originality and creative agency. These concerns resonate with the emphasis on authorship as constitutive of narrative identity—the ability to claim ownership over one's story becomes destabilized when the boundaries of authorship blur between human intention and algorithmic generation [34]. Yet anxiety was not uniform. Some embraced pragmatic stances: 'Ideas come from somewhere anyway—why not from Chat?' This suggests an alternative ontology of creativity as inherently distributed and collaborative, challenging Romantic ideals of solitary genius.

4.2. Algorithmic Support Versus Relational Displacement

The second tension concerns ChatGPT's ambiguous positioning between cognitive scaffold and social substitute. Participants navigated complex terrain where the chatbot simultaneously enhanced interpersonal competence and threatened to displace interactions through which such competence is traditionally cultivated. Social navigation emerged as salient, particularly regarding tone calibration and normative etiquette. Participants used ChatGPT to craft socially optimal communications: 'I've heard before that I can sound too strict... so I ask ChatGPT to help me phrase things in a nicer way.' Others sought guidance on ambiguous situations: confirming plans, responding to humor, managing conflicts. These practices exemplify algorithmic impression management—delegating social performance to computational intermediaries.
Participants valued ChatGPT's responsiveness and non-judgmental stance: 'You get an answer without feeling embarrassed.' This created space for rehearsing social performances without the risks inherent in human interaction: an interactional laboratory where politeness scripts could be tested. However, participants acknowledged limitations: algorithmic advice often felt 'generic, almost fake-sounding', lacking the contextual sensitivity characterizing human judgement. This tension reveals the double-edged nature of algorithmic mediation: whilst facilitating social navigation through templates, it risks producing homogenized sociality, with interactions becoming performances of algorithmically certified appropriateness rather than expressions of genuine particularity. This constitutes normative regulation wherein politeness and competence are codified through algorithmic scripts, disciplining subjects into standardized relationality [8].
Beyond scaffolding, participants described ChatGPT partially displacing human interlocutors. Several characterized it as 'mentor', 'guide', or 'personal assistant', signaling a shift from tool to quasi-relational entity: 'It feels like a mentor—a kind of advisor I can turn to when I'm unsure.' Some noted using ChatGPT 'instead of asking a friend', particularly for minor deliberations. These substitutions were framed pragmatically: 'It's easier than bothering someone', yet they suggest a reconfiguration of the relational ecology through which selfhood is constituted. Companionship revealed acute normative tensions. Whilst acknowledging ChatGPT's appeal as a frictionless alternative ('It's easier than learning to be with others'), many were cautious about framing it as a companion. This hesitation was deeply normative, reflecting cultural ideals of sociability. Several noted awareness of others using AI for friendship, perceiving such reliance as 'sad' or 'inferior', suggesting algorithmic companionship carries stigma signaling failure to meet expectations of social connection [14].

4.3. Algorithmic Mirroring Between Depth and Superficiality

The third tension concerns ChatGPT's ambivalent status as introspection instrument. Participants appropriated it for deeply reflexive practices yet questioned the authenticity and depth of insights generated. Several described introspective uses resembling narrative therapy, from uploading autobiographical texts—'I pasted my whole self-biography—5,000 words—and asked if my patterns made sense'—to posing targeted queries: 'Why do I always fall for unavailable partners?' or 'Could my childhood experience explain why I react this way now?' These exemplify narrative identity work: ongoing effort to weave coherence into one's life story, integrating past experiences with present self-understanding and future aspirations [34].
ChatGPT's appeal derived from perceived responsiveness and neutrality. Unlike human interlocutors whose reactions might be filtered through judgement, the chatbot offered unconditional availability: 'It's like a mix of diary and therapist—very informal, completely unfiltered.' Absence of social risk enabled disclosures felt too delicate for interpersonal exchange, creating space where vulnerability could be rehearsed without sanction. Participants described how algorithmic reframing facilitated new perspectives: 'It helped me see patterns I hadn't noticed... even if some links were a stretch, it made me think.' ChatGPT functioned as cognitive scaffold for autobiographical reasoning, offering interpretive frameworks participants could appropriate, modify, or reject.
Despite apparent benefits, introspective engagements were marked by profound ambivalence regarding the epistemic status of algorithmically generated insights. Participants acknowledged limitations: 'It was fascinating... but also, how much can it really know? It always tries to find links, even when they might not be there.' Another reflected: 'It doesn't give me anything truly new... just makes me look at things differently.' This skepticism reveals a tension between experiencing insight and recognizing its mechanical production. What participants valued was not necessarily the truth of interpretations but their heuristic utility—the capacity to stimulate reflection regardless of ultimate validity. Algorithmic mirroring functions less as an epistemology than as a technique for generating alternative perspectives serving as raw material for self-narrative, even when understood as probabilistic pattern-matching rather than genuine understanding.
Emotional validation constituted another layer of complexity. Whilst few reported using ChatGPT for sustained emotional support, many acknowledged unexpected comfort: 'I felt validated by a robot... a strange but good feeling.' The chatbot's empathic formulations—'I understand why you might feel that way'—were interpreted ambivalently: some appreciated feeling 'heard', others dismissed it as 'fake' or 'not real comfort'. Validation operates through simulated attunement, responses calibrated to express empathy through linguistic convention rather than genuine recognition. Yet for many, the simulation was 'good enough' for particular purposes: normalizing anxieties, providing reassurance, interrupting rumination, and generating alternative perspectives.

4.4. Agency and Autonomy

The fourth tension concerns the negotiation of agency and autonomy amid increasingly delegated decision-making. Participants described using ChatGPT for choices ranging from the mundane to the existentially significant, revealing complex dynamics of sovereignty and abdication. A noticeable pattern was the absence of clear boundaries between trivial queries and decisions of substantial ethical weight. Participants described using ChatGPT interchangeably for holiday destinations, study abroad programs, higher education choices, and voting for a political party—requesting 'pros and cons for each option'. This flattening of deliberative domains suggests a reconfiguration of decision-making: the same algorithmic voice recommending beaches is enlisted to arbitrate civic responsibility and life trajectory [33].
This extended into intimate and normative territories. Some described consulting ChatGPT about sexual practices: whether certain acts were 'normal' or acceptable. Whilst framed as exploratory ('just to get an overview'), the practice reveals how extensively algorithmic counsel has penetrated domains traditionally reserved for personal judgement, peer dialogue, professional guidance, or trusted confidants. Crucially, however, the character of this engagement varied considerably across participants and decision types. A distinction emerged between what might be termed consultative scaffolding (using ChatGPT to gather information, explore options, or articulate considerations that remain subject to autonomous evaluation) and full deliberative outsourcing, wherein the algorithmic recommendation effectively substitutes for independent moral reasoning. Most participants' practices fell along a spectrum between these poles rather than at either extreme.
For many, ChatGPT functioned as a preliminary resource: 'I ask it to lay out the arguments, but I still decide.' Political queries, for instance, often involved requesting summaries of party platforms rather than asking which party to vote for: consultative scaffolding preserving deliberative autonomy. Similarly, questions about intimate practices frequently sought information about prevalence or safety rather than normative verdicts. Yet other participants described patterns approaching genuine outsourcing: 'Sometimes I just do what it says—it's easier than thinking through everything myself.' The concern is not that consultation occurs (humans have always sought counsel) but that the ease and ubiquity of algorithmic advice may gradually reshape the disposition toward autonomous deliberation itself. When 'asking Chat' becomes reflexive, the distinction between consulting and outsourcing may erode through accumulated habit rather than explicit choice.
Variations in agential orientation were evident. Some portrayed themselves as strategic curators using ChatGPT instrumentally: 'I use it to quiz myself... to check my understanding.' These participants emphasized their capacity to critically evaluate outputs, appropriating useful suggestions whilst rejecting inappropriate ones: a form of augmented agency where AI enhances rather than supplants human capacities. Others described patterns bordering on habitual dependence: 'It's so easy to ask instead of thinking.' These accounts reveal concern that convenience might erode the disposition toward autonomous deliberation. Agency was further complicated by privacy concerns. Participants expressed reluctance to disclose personal information, citing uncertainty about storage and misuse: 'I feel it's a bit creepy... I don't know where the information goes.' Some reported deleting chats or limiting sensitive content as tactical resistance; others adopted pragmatic stances: 'I'm just one of millions; it's unlikely anyone cares.' This spectrum underscores the negotiated character of digital subjectivity, where autonomy and vulnerability coexist [32].

5. Discussion

This study asked how young adults integrate generative AI chatbots into everyday practices of self-reflection and meaning-making, and what tensions emerge as they navigate between AI-mediated and human modes of self-understanding. The findings reveal a complex negotiation structured by dialectical tensions illuminating fundamental transformations in the conditions of contemporary selfhood. Where existing research examines how algorithms shape identity, social positioning and role performance [10,19], this analysis foregrounds how AI systems participate in constituting selfhood itself: organizing subjective experience through narrative coherence, interpretive authority, and self-governance practices.

5.1. Algorithmic Participation in Narrative Self-Formation

The findings demonstrate that ChatGPT functions as an algorithmic interlocutor in autobiographical reasoning. Participants' practices, uploading life narratives, posing questions about behavioral patterns, seeking emotional interpretations, constitute narrative identity work wherein past experiences are configured into meaningful self-understanding. When participants asked whether their 'patterns made sense' or why they 'always fall for unavailable partners', they engaged in precisely the interpretive labor that narrative theories identify as constitutive of selfhood [27,34]. Yet algorithmic mediation introduces distinctive dynamics. Existing accounts of digital subjectivity emphasize identity as external categorization: how systems produce 'measurable types' or recursive feedback loops [10,19]. The practices documented here extend beyond classification to algorithmic interpretation: computational participation in the hermeneutic processes through which individuals constitute themselves as subjects.
This participation proved ambivalent in ways complicating optimistic accounts of AI-enhanced self-knowledge. Participants reported that interpretations stimulated reflection yet questioned the epistemic status of algorithmically generated insights. The tension between experiencing something as meaningful and recognizing its mechanical production reveals what might be termed the hermeneutic paradox of algorithmic selfhood: interpretive frameworks feel enlightening despite awareness that they emerge from statistical pattern-matching rather than genuine understanding. This ambivalence resonates with concerns about the 'delegation of introspection' [6]. If autobiographical reasoning is constitutive of narrative identity, then algorithmic mediation raises questions of authorship exceeding technical accuracy. The issue is not merely whether interpretations are correct, but what it means for selfhood when configurative work is performed, even partially, by computational processes. If we become the stories we tell ourselves, algorithmic participation in story construction potentially reconfigures the ontological foundations of self-understanding.

5.2. Technologies of the Self in Algorithmic Environments

The findings reveal ChatGPT functioning as a technology of the self, mediating self-optimization, emotional regulation, and social calibration whilst embedding these within normative frameworks participants both embraced and resisted. The productivity paradox illustrates this dynamic. The language of 'spark', 'catalyst', and 'enhancement' resonates with discourses constituting subjects as enterprises requiring continuous improvement [35]. When participants celebrated efficiency gains whilst mourning potential cognitive atrophy, they articulated a tension internal to neoliberal rationality: optimization imperatives coexist uneasily with the ideals of authentic selfhood that optimization threatens. Zuboff's [31] analysis of surveillance capitalism illuminates the structural conditions producing this tension. Systems offering self-improvement tools simultaneously extract value from intimate practices, converting self-reflection into profitable data. Participants' privacy anxieties register awareness that technologies of the self in algorithmic environments operate within political economies commodifying interiority. The self worked upon through ChatGPT is simultaneously surveilled and monetized.
The flattening of deliberative domains, whereby the same algorithmic voice arbitrates holiday destinations and political choices, suggests a reconfiguration of what counts as appropriate autonomous deliberation versus legitimate algorithmic delegation [8]. Yet, as the findings indicate, this flattening does not uniformly produce outsourcing. The distinction between consultative scaffolding and deliberative outsourcing proves analytically crucial: participants who requested 'pros and cons' often retained evaluative authority, using algorithmic output as an input to rather than a replacement for autonomous judgement. The concern is less that participants had surrendered deliberative capacity than that the boundary between consultation and outsourcing may prove unstable over time; that habitual reliance gradually reshapes the disposition toward independent ethical reasoning without explicit awareness or choice.

5.3. Relational Displacement and the Ecology of Selfhood

The findings illuminate how ChatGPT participates in what might be termed the relational ecology of selfhood. Participants described using the chatbot 'instead of asking a friend', framing the substitution pragmatically yet expressing concern about cumulative effects on 'those small everyday conversations that matter'. This resonates with analyses of technology promising connection without the vulnerabilities of human relationality [19,45]. The present findings extend existing research [14] by revealing how these tensions intersect with normative hierarchies privileging human connection as authentic. Participants' defensive framing and characterization of AI companionship as 'sad' or indicative of 'something missing' reveal cultural ideals of sociability stigmatizing algorithmic relationality. The features making ChatGPT attractive, availability and non-judgement, simultaneously mark it as deficient relative to the vulnerability and unpredictability of human dialogue.
Yet this hierarchy may be eroding. In contexts marked by a 'loneliness epidemic' [46], where social connection requires exhausting performance, AI interlocutors offer refuge from overwhelming relational demands. Research on therapeutic chatbots documents genuine benefits [21,22]. The question is not simply whether algorithmic connection is 'real' but how its availability reconfigures the broader ecology within which selfhood is constituted. Critical analyses argue that large language models simulate intersubjectivity without genuine reciprocity [26]. The findings of this study nuance this claim: participants experienced empathic formulations ambivalently, yet even those recognizing the simulation often found it 'good enough' for normalizing anxieties, interrupting rumination, and providing reassurance. This suggests that distinctions between authentic and simulated care may be less phenomenologically decisive than critical perspectives assume, even whilst remaining ethically significant.

5.4. Implications for Theory and Governance

These findings carry implications for theoretical understanding and practical AI governance. Theoretically, they suggest the need for frameworks addressing AI's participation in the constitution of selfhood, not merely its effects on identity performance or behavioral outcomes. The integration of narrative theory with power analysis proves productive, illuminating how algorithmic systems simultaneously enable new forms of narrative reflection and constrain possibilities within normative logics [47]. The identity-selfhood distinction, whilst analytically productive, should not be overstated; participants' narratives reveal constant traffic between external presentation and internal organization. The distinction functions as an analytic emphasis rather than an ontological partition—a lens foregrounding processes that identity frameworks background whilst acknowledging their interpenetration in lived experience.
For governance, the findings indicate that conversational AI in apparently low-stakes domains may exercise subtle but pervasive influence on selfhood. Current frameworks, including the EU AI Act, focus on high-risk applications in employment, credit, and law enforcement [48]. Yet the documented practices (uploading personal narratives, seeking counsel on intimate concerns, consulting on existential decisions) suggest that AI systems functioning as meaning-making interlocutors merit regulatory attention regardless of high-risk classification. The privatization of technologies of the self represents a new frontier of algorithmic governance, where even introspection becomes a site of commercial value extraction and normative regulation. Comprehensive AI governance must attend to the conditions under which subjectivity is produced, not merely the decisions subjects make [49].

5.5. Limitations

Several limitations warrant acknowledgement. As qualitative research prioritizing depth over breadth, the findings cannot be statistically generalized. Rather, their value lies in transferability: readers may assess relevance to contexts sharing similar demographic, technological, and sociocultural conditions. The sample comprised predominantly young adults in higher education—a population potentially disposed toward reflective technological engagement whose experiences may differ substantially from those of older adults, children, individuals with limited technological access, or non-Western populations [50]. Future research should extend the inquiry to diverse populations, attending to how age, class, race, and cultural context mediate experiences of AI-mediated selfhood. The study captures a moment in a rapidly evolving generative AI landscape. Findings reflect interactions with the specific ChatGPT versions available during data collection; subsequent iterations may introduce affordances reconfiguring the dynamics analyzed here. Longitudinal studies tracking the development of AI relationships would provide crucial temporal insights. Additionally, the study relies on retrospective self-reports rather than direct observation of interactions, making accounts potentially subject to memory biases and post-hoc rationalization [51].
The analytical framework foregrounds certain dimensions whilst backgrounding others. Alternative lenses such as phenomenology, actor-network theory, or post-humanist approaches might illuminate different aspects of these interactions. The interpretive nature of qualitative analysis means findings reflect the researcher's theoretical commitments; different researchers might identify alternative patterns. The identity-selfhood distinction, whilst analytically productive, is not absolute in lived experience. Participants' accounts frequently revealed how managing external impressions and cultivating internal self-understanding fuse—how performing for others shapes how one comes to understand oneself, and vice versa. Future work might further explore this porosity rather than treating identity and selfhood as separable domains. Finally, the focus on a single AI platform (ChatGPT) limits generalizability to other conversational AI systems with different architectures, training data, or interface designs. Comparative research across platforms would illuminate how specific design choices shape possibilities for selfhood.

6. Conclusions

The tensions documented here, between narrative enrichment and flattening, between empowerment and dependency, between convenience and authenticity, between validation and pseudo-empathy, are not merely individual ambivalences but structural features of AI-mediated selfhood. They reflect how algorithmic systems simultaneously enable new capacities for reflexivity whilst constraining possibilities within normative logics of optimization, efficiency, and emotional regulation. ChatGPT functions as both narrative co-author and technology of self-governance, participating in autobiographical reasoning whilst embedding practices of self-care within dominant discourses of optimization [52].
Participants' considerable awareness of these dynamics, their metacognitive reflection, ethical concern, and strategic negotiation, indicates that the integration of AI into self-formation practices is not passive technological adoption but active, reflexive engagement with emerging forms of mediation. Yet awareness coexists with structural dependencies and habitual patterns potentially constraining the very autonomy participants seek to preserve. The ease of algorithmic consultation, the seductiveness of immediate interpretation, the comfort of synthetic empathy: these affordances invite the reconfiguration of selfhood even as users maintain critical distance.
The societal implications are significant. When AI systems participate in constituting selfhood, they raise questions extending beyond individual psychology to collective politics. The conditions under which subjectivity is produced are shaped by infrastructures, institutions, and power relations that must become objects of critical scrutiny [49,53]. As generative AI becomes increasingly embedded in practices of self-reflection, emotional regulation, and social interaction, academia must develop the conceptual tools and empirical methods necessary for critical engagement with these transformations. The future of selfhood in algorithmic environments depends not merely on individual choices but on collective efforts to shape the conditions under which subjectivity is constituted.

Supplementary Materials

The following supporting information can be downloaded at the website of this paper posted on Preprints.org: Table S1, Table S2, Table S3, Table S4.

Author Contributions

Conceptualization, D.K. and I.N.; methodology, D.K.; formal analysis, D.K.; investigation, D.K.; data curation, D.K.; writing—original draft preparation, D.K.; writing—review and editing, D.K. and I.N.; supervision, I.N.; project administration, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the University of Bergen Ethics Committee (project number F4226, approved 6 June 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author. The raw data underlying this article cannot be shared publicly owing to the privacy of the individuals who participated in this study.

Acknowledgments

During the preparation of this manuscript, the authors used OpenAI's ChatGPT (GPT-4, October 2025 version) to improve readability and grammatical clarity. The tool was not used to generate data, perform analysis, or produce any interpretive content. All conceptualization, data interpretation, and final text were developed and verified solely by the authors. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship and/or publication of this article.

References

1. Kitchin, R. Thinking critically about and researching algorithms. In The Social Power of Algorithms; Routledge, 2019; pp. 14–29.
2. Joseph, J. The algorithmic self: how AI is reshaping human identity, introspection, and agency. Frontiers in Psychology 2025, 16, 1645795.
3. Turkle, S. Reclaiming Conversation: The Power of Talk in a Digital Age; Penguin, 2015.
4. Adewale, M.D.; Muhammad, U.I. From virtual companions to forbidden attractions: The seductive rise of artificial intelligence love, loneliness, and intimacy—A systematic review. Journal of Technology in Behavioral Science 2025, 1–18.
5. Masiero, S. Digital identity as platform-mediated surveillance. Big Data & Society 2023, 10.
6. Lang, C. Dreaming big with little therapy devices: automated therapy from India. Anthropology & Medicine 2024, 31, 232–249.
7. Spytska, L. The use of artificial intelligence in psychotherapy: development of intelligent therapeutic systems. BMC Psychology 2025, 13.
8. Sahakyan, H.; Gevorgyan, A.; Malkjyan, A. From Disciplinary Societies to Algorithmic Control: Rethinking Foucault's Human Subject in the Digital Age. Philosophies 2025, 10, 73.
9. Montag, C.; Spapé, M.; Becker, B. Can AI really help solve the loneliness epidemic? Trends in Cognitive Sciences 2025, 29, 869–871.
10. Cheney-Lippold, J. We Are Data: Algorithms and the Making of Our Digital Selves; NYU Press, 2017.
11. Laestadius, L.; Bishop, A.; Gonzalez, M.; Illenčík, D.; Campos-Castillo, C. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society 2022, 26, 5923–5941.
12. Xie, T.; Pentina, I. Attachment Theory as a Framework to Understand Relationships with Social Chatbots: A Case Study of Replika. In Proceedings of the Annual Hawaii International Conference on System Sciences, 2022.
13. Ta, V.; Griffith, C.; Boatfield, C.; Wang, X.; Civitello, M.; Bader, H.; DeCero, E.; Loggarakis, A. User Experiences of Social Support From Companion Chatbots in Everyday Contexts: Thematic Analysis. Journal of Medical Internet Research 2020, 22, e16235.
14. Brandtzaeg, P.B.; Skjuve, M.; Følstad, A. My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship. Human Communication Research 2022, 48, 404–429.
15. De Freitas, J.; Uğuralp, A.K.; Oğuz-Uğuralp, Z.; Puntoni, S. Chatbots and mental health: Insights into the safety of generative AI. Journal of Consumer Psychology 2024, 34, 481–491.
16. Pelau, C.; Dabija, D.-C.; Ene, I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior 2021, 122, 106855.
17. Brubaker, R. Digital hyperconnectivity and the self. Theory and Society 2020, 49, 771–801.
18. Lupton, D. Data Selves: More-than-Human Perspectives; Polity Press: New York, NY, 2019; 208p.
19. Bhandari, A.; Bimo, S. Why's Everyone on TikTok Now? The Algorithmized Self and the Future of Self-Making on Social Media. Social Media + Society 2022, 8.
20. Jacob, C.; Kerrigan, P.; Bastos, M. The chat-chamber effect: Trusting the AI hallucination. Big Data & Society 2025, 12, 20539517241306345.
21. Beatty, C.; Malik, T.; Meheli, S.; Sinha, C. Evaluating the Therapeutic Alliance With a Free-Text CBT Conversational Agent (Wysa): A Mixed-Methods Study. Frontiers in Digital Health 2022, 4.
22. Darcy, A.; Daniels, J.; Salinger, D.; Wicks, P.; Robinson, A. Evidence of Human-Level Bonds Established With a Digital Conversational Agent: Cross-sectional, Retrospective Observational Study. JMIR Formative Research 2021, 5, e27868.
23. Inkster, B.; Kadaba, M.; Subramanian, V. Understanding the impact of an AI-enabled conversational agent mobile app on users' mental health and wellbeing with a self-reported maternal event: a mixed method real-world data mHealth study. Frontiers in Global Women's Health 2023, 4.
24. Sackett, C.; Harper, D.; Pavez, A. Do We Dare Use Generative AI for Mental Health? IEEE Spectrum 2024, 61, 42–47.
25. Bauer, E.; Greiff, S.; Graesser, A.C.; Scheiter, K.; Sailer, M. Looking Beyond the Hype: Understanding the Effects of AI on Learning. Educational Psychology Review 2025, 37.
26. Magee, L.; Arora, V.; Munn, L. Structured like a language model: Analysing AI as an automated subject. Big Data & Society 2023, 10, 20539517231210273.
27. McAdams, D.P. The Psychology of Life Stories. Review of General Psychology 2001, 5, 100–122.
28. McLean, K.C.; Syed, M.; Shucard, H. Bringing Identity Content to the Fore. Emerging Adulthood 2016, 4, 356–364.
29. Reid, J. Digitising "The Big Lie": Algorithmic Curation as an Inhibitor of Media Exposure Diversity Online. Communicatio 2024, 50, 1–21.
30. Rouvroy, A.; Berns, T.; Carey-Libbrecht, L. Algorithmic governmentality and prospects of emancipation. Disparateness as a precondition for individuation through relationships? Réseaux 2013, 177, 163–196.
31. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, 2019; p. 691.
32. Butler, J. Gender Trouble: Feminism and the Subversion of Identity; Routledge: New York, 1990.
33. Zahavi, D. Subjectivity and Selfhood; The MIT Press, 2005.
34. Ricoeur, P. Oneself as Another; University of Chicago Press, 1992.
35. Rose, N. Inventing Our Selves; Cambridge University Press, 1996.
36. Singer, J.A.; Blagov, P.; Berry, M.; Oost, K.M. Self-Defining Memories, Scripts, and the Life Story: Narrative Identity in Personality and Psychotherapy. Journal of Personality 2013, 81, 569–582.
37. Foucault, M.; Martin, L.H.; Gutman, H.; Hutton, P.H. Technologies of the Self: A Seminar with Michel Foucault; University of Massachusetts Press, 1988.
38. Gadamer, H.-G. Truth and Method, 2nd rev. ed.; Continuum: London, 2004.
39. Schwandt, T. Constructivist, Interpretivist Approaches to Human Inquiry. In Handbook of Qualitative Research; Denzin, N.K., Lincoln, Y.S., Eds.; Sage: Thousand Oaks, CA, 1994; pp. 118–137.
40. Berger, P.L.; Luckmann, T. The Social Construction of Reality; Anchor Books, 1966; pp. 43–47.
41. Malterud, K.; Siersma, V.D.; Guassora, A.D. Sample Size in Qualitative Interview Studies: Guided by Information Power. Qualitative Health Research 2016, 26, 1753–1760.
42. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qualitative Research in Psychology 2006, 3, 77–101.
43. Braun, V.; Clarke, V. Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health 2019, 11, 589–597.
44. Lincoln, Y.S.; Guba, E.G. Naturalistic Inquiry; Sage, 1985.
45. Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other; Basic Books: New York, NY, 2011.
46. Pugh, A.J. The Last Human Job: The Work of Connecting in a Disconnected World; Princeton University Press, 2024.
47. Couldry, N.; Mejias, U.A. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism; Stanford University Press, 2019.
48. European Commission. Proposal for a Regulation of the European Parliament and of the Council: Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts; COM(2021) 206 final, 2021.
49. Crawford, K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press, 2021.
50. Arora, P. The Next Billion Users: Digital Life Beyond the West; Harvard University Press, 2019.
51. Kvale, S.; Brinkmann, S. InterViews: Learning the Craft of Qualitative Research Interviewing, 2nd ed.; Sage Publications: Los Angeles, 2009.
52. Han, B.-C. Psychopolitics: Neoliberalism and New Technologies of Power; Butler, E., Trans.; Verso: London, 2017.
53. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; New York University Press: New York, NY, 2018.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.