1. Introduction
Since the emergence of generative AI, the integration of artificial intelligence (AI) in workplaces has increased significantly, with more employees engaging in AI-supported tasks (Wu et al., 2024b; Duan et al., 2024). Human–AI collaboration describes the contact and cooperation between employees and AI systems to complete job tasks more effectively (Kong et al., 2023). This interaction has been found to shape employees’ behaviors and performance in notable ways (Ma et al., 2024; Budhwar et al., 2022). On the positive side, AI adoption encourages innovative behavior and improves service quality (Yan & Teng, 2025; Duan et al., 2024). Yet it can also lead to negative outcomes such as disengagement from work and counterproductive work behaviors (Meng et al., 2025; Bai & Zhang, 2025). Some scholars caution that employees may exploit the ease of AI to complete tasks superficially, thereby decreasing their effort and increasing laziness during working hours (Saluja et al., 2024). One manifestation of this is cyberloafing, a form of non-work-related online behavior that employees engage in during office hours (Gupta et al., 2025). Cyberloafing includes activities such as browsing e-commerce platforms or sending personal messages during work time (Hessari et al., 2025; Askew et al., 2014). Past research has shown that cyberloafing weakens employee engagement and diminishes productivity (Tsai, 2023; Hessari et al., 2025).
It is vital to recognize that human–AI collaboration leverages the complementary strengths of both humans and AI systems (Jia et al., 2024; Kong et al., 2023). Unlike traditional workers, employees collaborating with AI can achieve higher-quality outcomes and greater efficiency, resulting in performance that surpasses that of conventional work arrangements (Seeber et al., 2020). In turn, they develop an enhanced sense of identity as members of an AI-collaborative group. Grounded in social identity theory (SIT), people’s categorization of their group identity shapes their subsequent behaviors (Rubin et al., 2023; Guo et al., 2021). Within a professional context, this identity is constructed largely upon key dimensions such as autonomy and competence. Consequently, in human–AI collaboration work arrangements, threats to these specific dimensions (autonomy and skill) represent a direct challenge to an individual’s professional identity. The need for autonomy and competence is essential for psychological well-being and motivation; when AI assumes decision-making authority or handles skill-intensive tasks, these core needs may be undermined (Deci and Ryan, 2000; Alfrey et al., 2023).
Empirically, AI-driven deskilling (loss of skill) decreases employees’ sense of efficacy, while algorithmic task allocation can diminish workplace autonomy (Liu and Li, 2025). These experiences can be framed as identity threat appraisals, where employees perceive AI collaboration not only as a productivity-enhancing tool but also as a team member with the potential to threaten their professional identity. Professional identity depicts the self-concept individuals derive from their work roles, expertise, and professional values (Kreiner et al., 2006; Ibarra, 1999). When an AI teammate encroaches upon areas traditionally central to employees’ expertise or decision-making authority, employees may perceive that their professional distinctiveness, continuity, and value are under threat (Hewlin et al., 2020; Petriglieri, 2011). Such threats are not just technical disruptions but psychosocial stressors that challenge employees’ sense of being recognized as competent professionals. In this sense, identity threat appraisals in the form of loss of skill and loss of autonomy operate as triggers, while professional identity threat (PIT) emerges as the more enduring cognitive and emotional state that shapes behavioral responses. It is this sustained state of PIT that often pushes employees toward counterproductive coping mechanisms to preserve self-esteem, such as cyberloafing (Askew et al., 2014). When employees experience PIT, they may feel disconnected from their roles and disengage from tasks that no longer affirm their competence or value. Cyberloafing, in this context, can function as an avoidance strategy that allows individuals to temporarily escape identity-threatening tasks and restore a sense of autonomy and control over their workday (Meng et al., 2025; Ugrin and Pearson, 2008).
Prior studies suggest that workplace stressors, identity violations, and perceived injustice can significantly increase employees’ tendency to cyberloaf as a form of psychological withdrawal (Koay et al., 2022; Lim, 2002). Thus, PIT serves as a critical psychological mechanism linking identity threat appraisals (loss of skill and autonomy) to behavioral disengagement in the form of cyberloafing.
Within AI collaboration, employees are expected to incorporate AI into their professional self-concept, viewing it as a collaborative partner rather than merely a tool or a threat (Bai and Zhang, 2025; Dhillon et al., 2024; Spring et al., 2022). However, an employee’s level of AI-inclusive identity fundamentally shapes whether these technological changes are appraised as an opportunity for growth or as an identity threat that triggers defensive coping mechanisms (Petriglieri, 2011). AI-inclusive identity is defined as the extent to which individuals incorporate AI into their self-definition, seeing it as an essential component of their professional role and capabilities (Dhillon et al., 2024; Schepman and Rodway, 2020). Employees with a strong AI-inclusive identity are more inclined to perceive AI-driven changes as opportunities for growth and synergy (Tarafdar et al., 2007). They proactively leverage the capabilities of AI to augment their decision-making and refine their expertise, thereby reinforcing their sense of purpose and value within the AI-assisted workflow (Parker and Grote, 2022; Zhang et al., 2025). This deep integration may help mitigate identity threat that would otherwise arise from collaboration, but it also establishes an implicit psychological contract wherein the employee expects the AI to augment their capabilities and uphold their professional status. When the same AI system then restricts their autonomy through algorithmic override, constrained decision-making, or monitoring, it constitutes a profound violation of the self-concept and a breach of this expected partnership (Hewlin et al., 2020), thereby encouraging coping behaviors such as cyberloafing. AI-inclusive identity is therefore particularly important in this process, as it fundamentally shapes how employees cognitively and emotionally appraise the experience of working with AI. Accordingly, this study seeks to answer the following question:
RQ: How do human–AI collaboration-based identity threat appraisals, in the form of perceived loss of skill and autonomy, contribute to professional identity threat and subsequently influence employees’ engagement in cyberloafing behaviors?
To answer this question, drawing on social identity theory (SIT), this study investigates the influence of AI-driven loss of autonomy and loss of skill on cyberloafing through the lens of professional identity threat. A three-wave survey study (N = 507) revealed that loss of autonomy and loss of skill significantly increase cyberloafing by strengthening employees’ professional identity threat. Furthermore, AI-inclusive identity moderated this relationship by amplifying the effect of loss of autonomy on professional identity threat. The theoretical contributions of this work are threefold. First, it extends SIT by applying it to the novel context of human–AI teams and identifying professional identity threat as a key mechanism. Second, it clarifies the conceptualization of professional identity threat as a mediator in understanding counterproductive work behaviors. Third, it underscores that integrating AI into identities is consequential, as over-identification with AI may also encourage negative behavior. From a practical standpoint, these findings suggest that organizations can mitigate non-work-related internet use by proactively managing professional identity threat through reskilling, designing emotionally resonant AI interaction patterns that meet psychological needs, and refining interfaces to improve relatedness.
2. Theoretical Foundation and Hypotheses
Artificial intelligence (AI) has transitioned from a standalone tool to an indispensable collaborative partner in modern workplaces, reshaping how tasks are performed across industries. Instead of replacing humans, AI now works alongside them, augmenting capabilities. In healthcare, for example, AI systems like IBM Watson analyze patient data and recommend treatment options, but doctors remain essential for interpreting these recommendations within the context of a patient’s unique conditions and making the final decision (Jiang et al., 2017). Likewise, in finance, JPMorgan Chase’s COiN platform evaluates complex legal documents in seconds, yet human expertise is required to assess nuances and negotiate terms (Davenport et al., 2018). This dynamic demonstrates a broader trend: AI handles speed, scale, and data-driven responsibilities, while humans contribute creativity, judgment, and ethical oversight. The same holds in creative industries, where AI tools like ChatGPT and Adobe Firefly assist rather than replace human ingenuity. In customer service, AI-powered chatbots from Zendesk and Intercom handle routine inquiries, freeing human agents to deal with more complex or sensitive issues that require empathy and problem-solving skills (Echegu, 2024). Even in engineering, where automation has long been dominant, collaborative robots like Tesla’s Optimus work alongside human employees, handling repetitive assembly tasks while humans focus on quality control and process improvements. Amazon’s AI-driven warehouses further demonstrate this interaction, with robots managing inventory logistics while humans oversee operations and handle exceptions. Perhaps the most instructive examples come from fields where AI and humans co-create, such as software development.
GitHub Copilot, an AI coding assistant, suggests lines of code in real time, significantly speeding up the programming process, but developers must still review, test, and adapt these suggestions to fit broader project requirements. Similarly, in radiology, AI tools like Aidoc highlight potential anomalies in medical scans, but radiologists provide the critical final diagnosis, weighing clinical history and patient-specific factors (Topol, 2019). These examples underscore a fundamental shift: AI is no longer just a tool but a team member, one that excels in precision and efficiency but relies on human collaboration for context, ethics, and innovation. As this collaboration deepens, organizations that effectively integrate AI while leveraging human strengths will lead their industries, proving that the future of work is not about humans versus machines but about how they can achieve more together.
2.1. Social Identity Theory
Social identity theory (SIT), proposed by Tajfel and Turner (1979), suggests that individuals derive their self-concept from their membership in social groups, which shapes their attitudes and behaviors within those groups (Guo et al., 2021; Hornsey, 2008; Ashforth and Mael, 1989). Central to SIT is the idea that individuals often classify themselves and others into social categories, such as ingroups and outgroups, based on shared traits (Stets and Burke, 2000; Brown, 2000). In an organizational context, SIT explains how employees’ identification with teams, professions, or the organization as a whole shapes their attitudes, behaviors, and workplace outcomes (Scheifele et al., 2021; Rubin et al., 2023). The three central processes of SIT are 1) social categorization, 2) social identity, and 3) social comparison. Social categorization refers to the way individuals sort themselves and others into separate groups based on common characteristics, enabling them to interpret their social environment and form a clearer sense of self (Guo et al., 2021; Adam et al., 2021; Scheifele et al., 2021). Social identity includes the internalization of group attributes, values, and behaviors once individuals identify with a group (Shao et al., 2023; Rubin et al., 2023). Social comparison is the process by which individuals assess their group in relation to others, contributing to self-esteem and reinforcing their group membership (Guo et al., 2021; Adam et al., 2021). Social comparison is particularly relevant in examining how individuals respond to emerging technologies (Shao et al., 2023; Guo et al., 2021). In this context, SIT offers a lens to understand how employees reconcile personal and professional identities with evolving organizational dynamics, including collaboration with AI systems.
2.2. Identity Threat Theory
Building on SIT, identity threat theory holds that individuals derive a significant part of their self-concept from the roles they play in society or in organizations (Tajfel and Turner, 1979). When individuals risk confirming negative stereotypes through external factors, they may experience cognitive and emotional distress, termed stereotype threat (Steele, 1988). Bringing this into the organizational context, Petriglieri (2011) provided a comprehensive framework to explain how employees experience threats to their professional identities, particularly in times of change or instability. According to identity threat theory, such threats arise when individuals perceive that their professional role, status, or expertise is being undermined or devalued (Ashforth and Mael, 1989; Jussupow et al., 2022; Petriglieri, 2011). In a workplace where technology is a team member, employees may fear that their unique human contributions, such as decision-making, creativity, or problem-solving, are being replaced or diminished by algorithms. This creates professional identity threat, which may manifest in three reactions: 1) identity protection, 2) identity restructuring, and 3) engagement in identity work. Identity protection involves psychologically distancing oneself from the organization or task, e.g., using the internet for non-work-related activities during work hours (Koay et al., 2022; Carter and Grover, 2015). Identity restructuring involves redefining oneself or the group to which one belongs, e.g., considering AI an integral part of one’s own self (Carter and Grover, 2015). Engagement in identity work involves trying to align one’s internal identity with external expectations (Petriglieri, 2011). Building on this framework, this study investigates cyberloafing as a manifestation of identity protection and AI identity as an expression of identity restructuring in response to AI-induced professional identity threat.
2.3. Hypotheses Development
2.3.1. AI-Driven Identity Threat Appraisals and Cyberloafing
Professional identity is formed through the continuous interaction between individuals’ internalized self-concept and the external validation they receive from their work roles, expertise, and autonomy (Ashforth et al., 2008; Petriglieri, 2011). When technological or organizational changes disrupt this alignment, individuals engage in cognitive evaluations to assess whether such changes pose a risk to their professional self-concept (Breakwell, 1986; Petriglieri, 2011). These evaluations, referred to as identity threat appraisals, are activated when individuals perceive that the continuity, distinctiveness, or value of their role-based identity is being undermined (Lazarus and Folkman, 1984; Swann et al., 2010). In the context of AI-enabled work environments, two conditions are particularly salient in eliciting such appraisals: perceived loss of skill and loss of autonomy. First, when AI systems assume tasks that were previously central to an employee’s role, they create a fundamental challenge to the employee’s professional competence, a core component of occupational identity (Ibarra and Barbulescu, 2010). This effect is particularly damaging because skills traditionally serve as primary identity markers in professional contexts (Ashforth et al., 2008), and their devaluation directly threatens an individual’s workplace self-concept (Petriglieri, 2011; Shao et al., 2023). Unlike conventional information technology, AI systems often completely reconfigure skill requirements (Brynjolfsson and McAfee, 2014), creating sudden and dramatic identity discontinuities (Pratt et al., 2006). When AI renders hard-earned competencies obsolete, it does not merely change work processes; it also invalidates years of professional development and the identity narratives built around them (Brown, 2000; Cao and Song, 2024; Dunleavy and Margetts, 2025).
Second, AI systems increasingly dictate employees’ work processes, decision-making authority, and daily routines, creating a sense of diminished control over their professional domain (Deci and Ryan, 2000; Möhlmann et al., 2021; Mazmanian et al., 2013; Mirbabaie et al., 2022). This erosion of autonomy unfolds through a three-stage psychological process: first, workers experience algorithm-driven standardization that replaces their discretionary judgment, e.g., AI-generated task prioritization or automated performance monitoring (Kellogg, 2019); second, they interpret these constraints as threats to their professional identity and competence, particularly when systems override their expertise without transparent rationale (Rai et al., 2019); and third, they develop reactance, a motivational state aimed at reclaiming lost freedom, which manifests in negative behaviors such as workarounds, non-compliance, or rejection of the technology (Nach and Lejeune, 2010). When individuals perceive that AI systems are devaluing or undermining their professional role, they experience psychological strain and a disruption in self-continuity (Petriglieri, 2011). According to identity theory, such disruptions can provoke identity protection behaviors, including task disengagement or passive resistance, aimed at restoring emotional balance and a sense of control (Breakwell, 1986; Swann et al., 2010). One such behavior is cyberloafing, a growing workplace challenge that involves using the internet for non-work activities like social media or shopping during work hours (Askew et al., 2014).
It drains time and energy from core tasks, reducing productivity (Koay et al., 2022). Organizations actively try to control it, as even minor distractions can cause significant efficiency losses (Wagner et al., 2012; Tandon et al., 2022). Though cyberloafing is often framed as deviant behavior, scholars increasingly recognize it as a coping mechanism in response to workplace stressors or psychological contract breaches (Koay et al., 2022; Weng et al., 2010). Given that identity threats weaken the employee’s sense of value and alignment with the organization, individuals may cyberloaf to escape, emotionally disengage, or reclaim autonomy (Liu and Geertshuis, 2019). This conceptual linkage is consistent with research showing that identity-threatening environments contribute to withdrawal behaviors, of which cyberloafing is an increasingly prevalent form in digital workplaces. Therefore, this study proposes the following hypotheses:
H1a. AI-driven loss of skill increases cyberloafing in AI-integrated workplaces.
H1b. AI-driven loss of autonomy increases cyberloafing in AI-integrated workplaces.
2.3.2. The Mediating Role of Professional Identity Threat
Professional identity threat (PIT) has been increasingly recognized as a central explanatory mechanism linking workplace disruptions to behavioral responses, particularly in environments characterized by technological transformation. Theoretical models of identity stress posit that when core aspects of an individual’s role are challenged or undermined, employees experience a cognitive-affective disruption in their self-concept, triggering identity threat appraisals (Breakwell, 1986; Petriglieri, 2011). Rather than directly resulting in withdrawal behaviors, these disruptions must first be interpreted as threats to identity, which then activate coping responses aimed at re-establishing a sense of psychological consistency and self-integrity (Lazarus and Folkman, 1984; Swann et al., 2010). In this context, PIT functions as a psychological bridge that translates identity-relevant disruptions, such as perceived loss of skill and loss of autonomy, into behaviors like cyberloafing. That is, employees may not cyberloaf simply because they lose control or feel deskilled, but because these experiences erode their sense of professional identity, leading to emotional disengagement (Conroy et al., 2017; Koay et al., 2022). Thus, PIT captures the subjective interpretation of environmental change, making it a theoretically grounded mediator that helps explain how structural changes lead to behavioral withdrawal. This is consistent with the broader identity literature, which emphasizes that identity threat is not merely a reaction to external events but a lens through which individuals assign meaning to those events and decide how to respond (Ashforth and Schinoff, 2016; Petriglieri, 2011). Accordingly, PIT is proposed to mediate the relationship between AI-induced threat appraisals (loss of skill and loss of autonomy) and cyberloafing in digitally transforming organizations. The following hypotheses are therefore proposed:
H2a. Professional identity threat mediates the relationship between loss of skill and cyberloafing in AI-integrated workplaces.
H2b. Professional identity threat mediates the relationship between loss of autonomy and cyberloafing in AI-integrated workplaces.
2.3.3. The Moderating Role of AI-Inclusive Identity
AI-inclusive identity (AI identity) plays an essential role in examining the interplay between AI-driven identity threat appraisals, professional identity threat, and cyberloafing. The underlying construct of IT identity, introduced by Carter and Grover (2015), refers to the belief that using IT is part of an individual’s self-concept; a strong IT identity helps support a positive self-view. However, in today’s digital era, especially in workplaces where AI is used as a team member, focusing only on IT identity is no longer sufficient. With rapid technological advancements, traditional IT is increasingly being replaced by more advanced and powerful AI systems (Dunleavy and Margetts, 2025; Spring et al., 2022; Mirbabaie et al., 2022). These systems not only improve efficiency but also help employees realize their value by combining human strengths with AI capabilities. As a result, in human–AI collaboration environments, employees may form a new kind of identity known as AI identity (termed in this study AI-inclusive identity), in which working with AI becomes a key part of how they see themselves (Mirbabaie et al., 2022; Shao et al., 2023).
The concept of AI identity is commonly described through three key dimensions (Carter et al., 2020; Mirbabaie et al., 2022). First, dependence on AI reflects the extent to which individuals rely on AI technologies in their professional activities (Cao et al., 2023; Reychav et al., 2019). This reliance is seen in areas such as operations and data processing. Since human–AI collaboration improves work efficiency, employees may increasingly depend on AI to meet their performance goals. Second, emotional energy refers to the positive emotions individuals experience when working with AI (Huang, 2019; Reychav et al., 2019). Such emotions include satisfaction and a more enjoyable work experience, owing to the perceived competence and support provided by AI tools (Duan et al., 2024). Third, relatedness refers to the degree to which individuals feel connected to AI systems (Reychav et al., 2019; Carter et al., 2020). As collaboration deepens, employees may perceive fewer boundaries between themselves and AI, coming to regard AI as an essential and integrated part of their daily work. Such employees are more likely to view AI as a necessary partner in achieving work tasks (Reychav et al., 2019). However, over-dependence on AI may also cause negative outcomes such as laziness (Zhang et al., 2024). Accordingly, this study proposes the following hypotheses:
H3a. AI-inclusive identity moderates the relationship between loss of autonomy and professional identity threat.
H3b. AI-inclusive identity moderates the relationship between loss of skill and professional identity threat.
From the prior discussion, it is hypothesized that the interaction between loss of autonomy and AI-inclusive identity, and the interaction between loss of skill and AI-inclusive identity, will affect professional identity threat (Hypotheses 3a and 3b). Professional identity threat plays a mediating role between loss of skill and cyberloafing (H2a) and between loss of autonomy and cyberloafing (H2b). It is therefore reasonable to predict that the indirect effects of loss of autonomy and loss of skill on cyberloafing will be stronger when employees have a higher level of AI-inclusive identity. So, we hypothesize the following:
H4a. AI-inclusive identity moderates the indirect effect of loss of autonomy on cyberloafing.
H4b. AI-inclusive identity moderates the indirect effect of loss of skill on cyberloafing.
Figure 1.
Conceptual Model (authors’ own creation).
3. Materials and Methods
3.1. Participants and Design
Unlike cross-sectional designs, multi-wave data collection is effective in mitigating common method bias (Podsakoff et al., 2003; Gupta et al., 2025). Accordingly, this study employed a three-wave design to enhance the validity of the hypothesized relationships. Respondents were recruited through Credamo, an online data collection platform widely used for academic research in organizational and psychological studies (Wu et al., 2024b). No restrictions were imposed regarding participants’ geographical location or organizational affiliation. The only eligibility criterion was active engagement in AI collaboration. To ensure the sample accurately reflected this requirement, the survey introduction included the following statement:
“The objective of this research is to understand the cognitive and behavioral responses of employees working in collaboration with AI. Individuals not currently collaborating with AI are kindly requested not to proceed with the questionnaire.”
Participants were also assured of the anonymity of their responses and informed that the data would be used solely for academic research purposes, thereby reducing potential concerns related to privacy or data disclosure.
At Time 1 (March 6, 2025), 600 questionnaires were distributed. Respondents provided demographic information and responded to items measuring AI-driven loss of skill and loss of autonomy. To encourage participation, a small monetary reward (RMB 1) was offered upon completion. After a three-week interval, on March 27, 2025 (Time 2), a follow-up questionnaire was administered to the original participants. This wave assessed participants’ AI identity, a multidimensional construct comprising dependence, emotional energy, and relatedness, and professional identity threat, likewise a multidimensional construct comprising threat to professional recognition and threat to professional capability. As with the first wave, participants received a small incentive of RMB 1 upon completion. A total of 538 valid responses were collected, resulting in an effective response rate of 89.67%.
Subsequently, on April 14, 2025 (Time 3), the third and final survey was distributed. This questionnaire measured participants’ cyberloafing behaviors following their engagement with AI in the workplace. To acknowledge their continued participation, respondents were compensated with RMB 2.
To ensure data quality, responses were screened for completion time, and cases with implausibly short (<60 seconds) or excessively long (>300 seconds) durations were excluded. After this quality check, the final sample comprised 507 valid responses, yielding an overall effective response rate of 84.5%. Descriptive statistics are reported in
Table 1.
3.2. Measurement
Cyberloafing was assessed using the scale developed by Lim and Teo (2005), which employs a 5-point Likert format ranging from 1 (never) to 5 (very frequently). This scale has consistently demonstrated strong internal reliability across studies (Andel et al., 2019). Participants were asked to indicate the frequency with which they engaged in various non-work-related internet activities during working hours. A sample item is: “At work, I send non-work-related e-mail.” In the current study, the scale showed excellent internal consistency, with a Cronbach’s alpha of 0.909.
Loss of skill was assessed using the scale developed by Jussupow et al. (2018), whose usability was also confirmed by Mirbabaie et al. (2022). Participants rated their perception of losing expert status and competence at work from 1 = strongly disagree to 5 = strongly agree. A sample item is: “due to AI teammate, my specialized work-related skills are not needed anymore.”
Loss of autonomy, which measures employees’ sense of having less discretion over how, when, or why they perform their work, was assessed using the scale developed by Breaugh (1985). Participants rated their perception from 1 = strongly disagree to 5 = strongly agree. A sample item is: “I cannot choose the way to go about my job, my AI teammate decides that.”
Threat to professional identity was assessed by adapting the scale of Craig et al. (2019), employing a five-point Likert format from 1 = strongly disagree to 5 = strongly agree. It comprises two dimensions: threat to professional recognition and threat to professional capability. A sample item is: “Using AI makes me feel discouraged with who I am.”
AI identity was measured using a modified version of the IT identity scale originally developed by Carter and Grover (2015). The applicability of this scale in the context of artificial intelligence was supported by Mirbabaie et al. (2022), who validated its relevance for measuring AI identity in workplace settings. This instrument captures three distinct dimensions: dependence (e.g., “Thinking about myself in relation to the AI, I feel needing it”), emotional energy (e.g., “Thinking about myself in relation to the AI, I feel energized”), and relatedness (e.g., “Thinking about myself in relation to the AI, I feel close to it”). Responses were collected on a 5-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree. The scale demonstrated strong internal consistency, with a Cronbach’s alpha of 0.918.
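As a point of reference for the reliability coefficients reported above (α = 0.909 and α = 0.918), Cronbach’s alpha can be computed directly from item-level responses. The sketch below is purely illustrative; the Likert responses shown are hypothetical, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 3 items)
scores = np.array([[4, 4, 5],
                   [2, 3, 2],
                   [5, 5, 4],
                   [1, 2, 1],
                   [3, 3, 3],
                   [4, 5, 5]])
alpha = cronbach_alpha(scores)  # values of 0.70 or above indicate acceptable reliability
```

Because alpha rises with inter-item covariance, highly consistent items such as these yield a value well above the 0.70 benchmark.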
To rule out alternative explanations and account for possible confounding effects, several control variables were included in the analysis. Specifically, age, gender, and organizational tenure (in years) were controlled, as prior research has identified these demographic factors as variables that could influence the dependent variables under investigation (Liang et al., 2024).
4. Results
This research used SPSS 26.0, AMOS 24.0, and SmartPLS 4 to analyze the data. Given the large number of measurement items associated with the study constructs, the multidimensional variables were modeled through internal indicator parceling to enhance indicator quality and improve overall model fit (Little et al., 2002).
4.1. Common Method Bias
Studies involving self-reported data often encounter the critical issue of common method bias (CMB) (Schwarz et al., 2017). Podsakoff et al. (2003) note that this issue arises when data are gathered from a single source, with one respondent providing answers for both dependent and independent variables. To address CMB, this study conducted a full collinearity assessment test in SmartPLS, following recommendations from several social science researchers (Susilowati and Barinta, 2024; Mirbabaie et al., 2022; Latif et al., 2024). The variance inflation factor (VIF) values in this study were all under the threshold of 3.3, confirming that CMB was not a significant concern (Kock, 2015).
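The full collinearity test can also be reproduced outside SmartPLS: each construct score is regressed on all remaining constructs, and VIF = 1 / (1 − R²) is compared against Kock’s (2015) 3.3 threshold. A minimal sketch, using simulated (hypothetical) construct scores rather than the study’s data:

```python
import numpy as np

def full_collinearity_vifs(X: np.ndarray) -> list:
    """VIF for each column of X, obtained by regressing it on the remaining columns."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])      # design matrix with intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares fit
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 5))   # hypothetical scores for five constructs
vifs = full_collinearity_vifs(scores)
# Kock (2015): values below 3.3 suggest no pathological collinearity / CMB
```

With independent simulated constructs, all VIFs land close to 1.0; correlated constructs would push them upward.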
4.2. Measurement Validation
SmartPLS (Ringle et al., 2020) was used to evaluate the measurement model and test the hypotheses, given the complexity of the research model combined with a small sample size (Hair et al., 2016). Reliability was assessed using composite reliability (CR) and Cronbach’s alpha (α); all constructs showed CR and alpha values exceeding the 0.70 threshold (Hair et al., 2016). To assess convergent validity, the study also examined the average variance extracted (AVE) and factor loadings for all constructs. All AVE values exceeded the 0.50 threshold (Sarstedt et al., 2017), and factor loadings ranged from 0.70 to 0.93, in line with recommended values (Cohen, 2013). These findings confirmed strong reliability and convergent validity, as detailed in Table 2.
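The reliability and convergent-validity statistics follow standard formulas; a minimal sketch with hypothetical standardized loadings (chosen within the 0.70–0.93 range reported above) is:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

# Hypothetical loadings for a single four-item construct
loadings = [0.72, 0.81, 0.88, 0.93]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(round(cr, 3), round(ave, 3))  # both above the 0.70 / 0.50 thresholds
```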
4.3. Discriminant Validity (DV)
This study employed two primary methods to examine the discriminant validity (DV) of the constructs. First, the Fornell–Larcker criterion, shown in Table 3, requires that the square root of the AVE for each construct exceed its correlations with all other constructs; this condition was met for every construct, confirming DV (Fornell and Larcker, 1981). Second, the heterotrait–monotrait (HTMT) ratios presented in Table 4 were all below 0.85, indicating no discriminant validity issues (Henseler et al., 2015).
Table 3. Discriminant validity: Fornell–Larcker criterion.
| | AI identity | Loss of autonomy | Professional identity threat | Cyberloafing | Loss of skill |
|---|---|---|---|---|---|
| AI identity | 0.823 | | | | |
| Loss of autonomy | -0.417 | 0.866 | | | |
| Professional identity threat | -0.458 | 0.508 | 0.848 | | |
| Cyberloafing | -0.317 | 0.461 | 0.486 | 0.845 | |
| Loss of skill | -0.351 | 0.432 | 0.437 | 0.404 | 0.876 |
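The Fornell–Larcker check can be reproduced directly from the values in Table 3 by comparing each diagonal sqrt(AVE) against the absolute correlations in its row and column:

```python
import numpy as np

# Lower triangle from Table 3; diagonal entries are sqrt(AVE). Row order:
# AI identity, loss of autonomy, professional identity threat,
# cyberloafing, loss of skill
M = np.array([
    [ 0.823, 0.000, 0.000, 0.000, 0.000],
    [-0.417, 0.866, 0.000, 0.000, 0.000],
    [-0.458, 0.508, 0.848, 0.000, 0.000],
    [-0.317, 0.461, 0.486, 0.845, 0.000],
    [-0.351, 0.432, 0.437, 0.404, 0.876],
])
corr = M + np.tril(M, -1).T  # mirror the lower triangle to a full matrix

# Criterion: each sqrt(AVE) exceeds every |correlation| in its row/column
ok = all(
    M[i, i] > max(abs(corr[i, j]) for j in range(5) if j != i)
    for i in range(5)
)
print("Fornell-Larcker criterion satisfied:", ok)
```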
Table 4. Discriminant validity: heterotrait–monotrait (HTMT) ratio.

| | AI identity | Loss of autonomy | Professional identity threat | Cyberloafing | Loss of skill |
|---|---|---|---|---|---|
| AI identity | | | | | |
| Loss of autonomy | 0.486 | | | | |
| Professional identity threat | 0.623 | 0.711 | | | |
| Cyberloafing | 0.364 | 0.540 | 0.667 | | |
| Loss of skill | 0.406 | 0.513 | 0.607 | 0.470 | |
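For reference, the HTMT ratio for a pair of constructs is the mean heterotrait item correlation divided by the geometric mean of the two mean monotrait correlations; a sketch with a hypothetical item-level correlation matrix (not the study’s data):

```python
import numpy as np

def htmt(R, idx_i, idx_j):
    """Heterotrait-monotrait ratio for two constructs, given an item-level
    correlation matrix R and item index lists idx_i, idx_j."""
    R = np.asarray(R)
    hetero = R[np.ix_(idx_i, idx_j)].mean()  # between-construct correlations
    def mono(idx):
        sub = R[np.ix_(idx, idx)]
        mask = ~np.eye(len(idx), dtype=bool)
        return sub[mask].mean()              # within-construct correlations
    return hetero / np.sqrt(mono(idx_i) * mono(idx_j))

# Hypothetical correlations: items 0-2 load on construct A, items 3-5 on B
R = np.array([
    [1.00, 0.70, 0.68, 0.30, 0.28, 0.31],
    [0.70, 1.00, 0.72, 0.29, 0.27, 0.30],
    [0.68, 0.72, 1.00, 0.31, 0.30, 0.29],
    [0.30, 0.29, 0.31, 1.00, 0.66, 0.69],
    [0.28, 0.27, 0.30, 0.66, 1.00, 0.71],
    [0.31, 0.30, 0.29, 0.69, 0.71, 1.00],
])
value = htmt(R, [0, 1, 2], [3, 4, 5])
print(round(value, 3))  # well below the 0.85 threshold
```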
4.4. Hypotheses Testing
To unpack the relationships among the variables, the main, mediating, and moderating effects were examined using SmartPLS 4. For the main-effect analysis, the control variables (gender, age, and tenure in years), the independent variables (loss of skill and loss of autonomy), and the dependent variable (cyberloafing) were included in the model. For the mediation analysis, professional identity threat served as the mediating variable, framing two path models: (1) loss of skill → professional identity threat → cyberloafing, and (2) loss of autonomy → professional identity threat → cyberloafing. For the moderation analysis, AI identity (the moderating variable) was incorporated into the model between the independent variables and professional identity threat, with the interaction terms (loss of skill × AI identity) and (loss of autonomy × AI identity). In these analyses, SmartPLS 4 estimated confidence intervals through bootstrapping. The hypotheses were validated through this procedure, and the detailed test results are explained in the following sections.
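The bootstrap estimation of indirect-effect confidence intervals can be sketched in simplified form (a percentile bootstrap on an OLS-based a×b product, not the SmartPLS implementation; the data and coefficients below are hypothetical):

```python
import numpy as np

def bootstrap_indirect_ci(x, m, y, n_boot=5000, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b in a simple
    mediation model x -> m -> y, using OLS slopes."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]              # path a: x -> m
        b = np.linalg.lstsq(np.column_stack([np.ones(n), xs, ms]),
                            ys, rcond=None)[0][2]  # path b: m -> y, controlling x
        est.append(a * b)
    return np.percentile(est, [2.5, 97.5])

# Hypothetical data containing a genuine indirect path
rng = np.random.default_rng(0)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(size=300)
y = 0.4 * m + 0.2 * x + rng.normal(size=300)
lo, hi = bootstrap_indirect_ci(x, m, y, n_boot=1000)
print(round(lo, 3), round(hi, 3))  # interval excluding zero -> significant mediation
```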
4.4.1. Main Effects
Main effect: As shown in Table 5, significant positive relationships were observed between the identity threat appraisals (loss of skill, β = 0.246, p < 0.001; loss of autonomy, β = 0.340, p < 0.001) and cyberloafing, confirming Hypotheses H1a and H1b. This indicates that employees withdraw from AI-integrated tasks to symbolically reaffirm their human distinctiveness and control within the workplace. Reaping the benefits of human–AI collaboration therefore requires employee empowerment and reskilling, helping employees concentrate on core responsibilities instead of engaging in cyberloafing (Cao et al., 2023; Duan et al., 2024; Seeber et al., 2020).
4.4.2. Mediation Effects
Table 5 shows that loss of skill (β = 0.273, p < 0.001) and loss of autonomy (β = 0.387, p < 0.001) were positively related to professional identity threat. This means that employees fear losing their professional identity because AI teammates erode their sense of competence and control at work, thereby fostering disidentification with AI. Hypotheses H2a and H2b were supported. Furthermore, professional identity threat significantly increased cyberloafing (β = 0.289, p < 0.001), suggesting that employees may engage in cyberloafing as a coping mechanism to protect their threatened professional identity (Carter and Grover, 2015; Hornsey, 2008; Koay et al., 2022).
When professional identity threat (PIT) was introduced as a mediator, the explained variance for cyberloafing increased to 32.1% (R² = 0.321), and PIT itself was predicted by loss of skill and loss of autonomy with an explained variance of 31.5% (R² = 0.315). In this mediated model, the effects of loss of skill (β = 0.179, p < 0.001) and loss of autonomy (β = 0.245, p < 0.001) on cyberloafing were reduced, while PIT showed a significant positive effect on cyberloafing (β = 0.289, p < 0.001). The reduction in the direct path coefficients, coupled with the significant indirect effect through PIT, supports partial mediation. These results suggest that the impact of loss of skill and loss of autonomy on cyberloafing is partly explained by the extent to which such appraisals provoke a professional identity threat, which in turn fosters disengagement in the form of cyberloafing.
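Using the reported path coefficients, the indirect effects behind this partial-mediation pattern can be reproduced arithmetically (a back-of-envelope check; the bootstrapped intervals remain the authoritative test):

```python
# Path coefficients reported in the mediated model
a_skill, a_autonomy = 0.273, 0.387   # appraisals -> professional identity threat
b = 0.289                            # professional identity threat -> cyberloafing
direct_skill, direct_autonomy = 0.179, 0.245

# Indirect effect of each appraisal = a * b
indirect_skill = a_skill * b
indirect_autonomy = a_autonomy * b
print(round(indirect_skill, 3), round(indirect_autonomy, 3))

# Significant direct paths alongside nonzero indirect paths
# is the pattern of partial mediation
total_skill = direct_skill + indirect_skill
total_autonomy = direct_autonomy + indirect_autonomy
print(round(total_skill, 3), round(total_autonomy, 3))
```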
4.4.3. Moderation Effects
As illustrated in Table 6, the data confirmed a considerable moderating effect of AI identity. The interaction term between loss of autonomy and AI identity had a significant effect on professional identity threat (β = -0.227, p < 0.001), whereas the interaction term between loss of skill and AI identity did not (β = -0.053, n.s.). Thus, Hypothesis H3a is supported and H3b is not. AI identity moderated the relationship between loss of autonomy and professional identity threat by amplifying employees’ inclusion of AI in their professional self (Tang et al., 2022). Simple slopes were plotted for the interaction effect at low (-1 SD) and high (+1 SD) levels of AI identity. Compared with employees showing low AI identity, those with high AI identity experienced a greater degree of professional identity threat from AI-driven autonomy loss (see Figure 2).
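Mechanically, a simple slope at ±1 SD of a standardized moderator is the main-effect coefficient plus the interaction coefficient times the moderator value. The coefficients below are hypothetical (not taken from Table 6) and chosen to show an amplification pattern:

```python
def simple_slope(b_x, b_xw, w):
    """Slope of X on Y at moderator value w, for a model of the form
    Y = b_x*X + b_w*W + b_xw*(X*W) + controls."""
    return b_x + b_xw * w

# Hypothetical coefficients: a positive interaction term steepens the
# slope at higher moderator levels
b_x, b_xw = 0.30, 0.20
low = simple_slope(b_x, b_xw, -1.0)   # -1 SD of the standardized moderator
high = simple_slope(b_x, b_xw, +1.0)  # +1 SD
print(round(low, 2), round(high, 2))
```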
To further explore how AI identity affects cyberloafing by influencing professional identity threat, a conditional indirect-effect analysis was conducted, as shown in Table 7. At a low level of AI identity (M − 1 SD), the indirect effects of loss of autonomy (95% CI [−0.0278, 0.0382], including 0) and loss of skill (95% CI [−0.0125, 0.0645], including 0) on cyberloafing through professional identity threat were not significant. At a high level of AI identity (M + 1 SD), the indirect effects of loss of autonomy (95% CI [0.1457, 0.2650], excluding 0) and loss of skill (95% CI [0.1335, 0.2435], excluding 0) became significant. Therefore, the level of AI identity conditions the mediating effect. Hypotheses H4a and H4b were confirmed.
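Conditional indirect effects of this kind follow the first-stage moderated-mediation formula (a1 + a3·w)·b. The first-stage coefficients below are hypothetical; b is the reported PIT → cyberloafing path:

```python
def conditional_indirect(a1, a3, b, w):
    """First-stage moderated mediation: the indirect effect at moderator
    value w equals (a1 + a3*w) * b."""
    return (a1 + a3 * w) * b

# Hypothetical first-stage coefficients (a1: appraisal -> PIT,
# a3: appraisal x moderator interaction); b from the reported
# PIT -> cyberloafing path
a1, a3, b = 0.30, 0.25, 0.289
low = conditional_indirect(a1, a3, b, -1.0)   # M - 1 SD
high = conditional_indirect(a1, a3, b, +1.0)  # M + 1 SD
print(round(low, 3), round(high, 3))  # near zero at low, sizable at high
```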
5. Discussion
5.1. Theoretical Contributions
Previous research has scarcely explored cyberloafing in AI-collaborative work environments, leaving open the question of whether employees collaborating with AI engage in more cyberloafing. Digital transformation has increased employees’ internet access, creating the potential for cyberloafing (Lai et al., 2025; Tandon et al., 2022). Collaboration with AI has made cyberloafing a focal concern for organizations (Zhang et al., 2025). Drawing on Social Identity Theory and Identity Threat Theory, this study examined professional identity threat and its mediating role between AI-driven threat appraisals (loss of autonomy and loss of skill) and cyberloafing. The proposed hypotheses were validated through a three-wave time-lagged study of Chinese professionals continuously interacting with AI at work.
The findings demonstrated that human–AI collaboration, when perceived as a source of threat, significantly increased cyberloafing during working hours, particularly through damaged professional identity. Specifically, collaboration at work has shifted from traditional human-to-human dynamics toward human–AI teamwork (Seeber et al., 2020; Chowdhury et al., 2022). Under SIT, this shift can create a distinct and threatening social categorization in which employees feel that their human expertise is devalued relative to AI efficiency. Rather than fostering a positive group identity, AI collaboration can trigger appraisals of lost autonomy and diminished skills, which directly threaten an employee’s professional self-concept. This threat, in turn, leads employees to engage in cyberloafing as a coping mechanism, using non-work-related online activities to regain a sense of control and autonomy that is lacking in their primary work tasks.
Furthermore, we examined the moderating role of AI-inclusive identity, which, contrary to previous findings, amplified the negative effect of these AI-driven threats. Employees with a strong AI-inclusive identity have deeply integrated AI into their self-concept. Consequently, when AI restricts their autonomy, this is perceived not merely as a workflow issue but as a profound betrayal or violation of the self, significantly intensifying the professional identity threat and resulting in even greater cyberloafing (Burgoon, 2015).
5.2. Practical Contributions
First, organizations should recognize that human–AI collaboration can unintentionally erode employees’ sense of autonomy and skills, which triggers cyberloafing as a coping response. Organizations need to design AI-assisted workflows that preserve skill utilization and decision-making authority emphasizing augmentation rather than replacement (Langer et al., 2022).
Second, because professional identity threat mediates this relationship, organizations need to proactively protect employees’ professional self-concept. This can be attained by framing AI as a complementary partner that supports, rather than undermines, employees’ expertise. For example, task distribution can be arranged in a way that employees remain responsible for judgment-intensive or creative aspects, reinforcing their sense of value (Petriglieri, 2011).
Third, the result that AI-inclusive identity intensifies the effect of professional identity threat on cyberloafing suggests a paradox: while encouraging employees to embrace AI is beneficial, over-identification may make them more vulnerable to identity threat when challenges arise. Thus, organizations should promote balanced identity integration by providing training that emphasizes both technological fluency and professional distinctiveness (van Knippenberg, 2000).
Fourth, targeted interventions like reskilling and upskilling programs can mitigate the perceived loss of skills and autonomy. These initiatives help employees retain professional self-assurance and reduce maladaptive responses like cyberloafing (Venkatesh et al., 2016).
Finally, leadership and HR policies should adopt identity-sensitive AI governance. By communicating clearly about the role of AI, involving employees in implementation decisions, and confirming that human judgment remains central, organizations can reduce identity threats and build healthier forms of human–AI collaboration (Kellogg, Valentine, & Christin, 2020).
5.3. Limitations and Future Research Agenda
First, this study relied on self-reported survey data collected from employees, which raises concerns about common method bias and social desirability effects. Future studies could employ multi-source data (e.g., supervisor evaluations, behavioral log data on cyberloafing) or experimental designs to strengthen causal inference.
Second, the data were collected only from employees in China, which limits the generalizability of the findings to a single cultural context. As identity processes are socially grounded (Tajfel & Turner, 1979), cross-cultural comparative studies can explore whether cultural orientations toward autonomy, competence, or technology adoption influence the relationships observed here.
Third, this study focused on a specific set of human–AI collaboration-related identity threat appraisals (loss of autonomy and loss of skill). Other relevant stressors, such as technology dependence, work intensification, or surveillance pressures, may also shape professional identity threat and warrant further investigation.
Fourth, the moderating role of AI-inclusive identity revealed a contradictory amplifying effect, but the underlying mechanisms remain unclear. Future studies could examine boundary conditions (e.g., task type, organizational support, or employee adaptability) and explore whether identity-reframing policies can mitigate this vulnerability.
Finally, this study considered relationships within a short-term, survey-based design. Future panel or longitudinal studies can provide richer insights into the temporal dynamics of professional identity threat, showing whether employees adapt to AI systems over time or whether continued exposure intensifies cyberloafing.
Author Contributions
Alqa Ashraf: Visualization, Methodology, Writing – original draft, Investigation, Conceptualization. Qingfei Min: Conceptualization, Supervision. Aleena Ashraf: Resources, Investigation.
Funding
No funding was received for this research.
Institutional Review Board Statement
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Dalian University of Technology.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study prior to the survey.
Informed Consent Form
You are invited to participate in a research study conducted for academic purposes. The purpose of this study is to understand how individuals engage in cyberloafing in response to identity threat appraisals arising from AI integration. If you agree to participate, you will be asked to complete a questionnaire, which will take approximately 10 minutes. Your participation in this study is completely voluntary; you may decline to participate or withdraw from the study at any time without any negative consequences. All information collected in this study will be kept strictly confidential. No personally identifiable information will be collected, and all responses will be analyzed in aggregate form for research purposes only.
Consent Statement
By agreeing below, you confirm that: - You have read and understood the information provided above. - You are at least 18 years old (or have guardian consent, where applicable). - You voluntarily agree to participate in this study.
Data Availability Statement
The datasets used in this research are available upon request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.
Acknowledgments
The authors particularly appreciate all the survey participants. We also express our gratitude to the editor and anonymous reviewers of this paper for their excellent work and contributions to the refinements and improvements of the article.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Adam, I., Agyeiwaah, E., Dayour, F., 2021. Understanding the social identity, motivations, and sustainable behaviour among backpackers: a clustering approach. Journal of Travel & Tourism Marketing 38, 139–154. [CrossRef]
- Alfrey, K.-L., Waters, K.M., Condie, M., Rebar, A.L., 2023. The Role of Identity in Human Behavior Research: A Systematic Scoping Review. Identity 23, 208–223. [CrossRef]
- Andel, S.A., Kessler, S.R., Pindek, S., Kleinman, G., Spector, P.E., 2019. Is cyberloafing more complex than we originally thought? Cyberloafing as a coping response to workplace aggression exposure. Comput Human Behav 101, 124–130. [CrossRef]
- Ashforth, B.E., Harrison, S.H., Corley, K.G., 2008. Identification in Organizations: An Examination of Four Fundamental Questions. J Manage 34, 325–374. [CrossRef]
- Ashforth, B.E., Mael, F., 1989. Social Identity Theory and the Organization. The Academy of Management Review 14, 20. [CrossRef]
- Ashforth, B.E., Schinoff, B.S., 2016. Identity Under Construction: How Individuals Come to Define Themselves in Organizations. Annual Review of Organizational Psychology and Organizational Behavior 3, 111–137. [CrossRef]
- Askew, K., Buckner, J.E., Taing, M.U., Ilie, A., Bauer, J.A., Coovert, M.D., 2014. Explaining cyberloafing: The role of the theory of planned behavior. Comput Human Behav 36, 510–519. [CrossRef]
- Bai, S., Zhang, X., 2025. My coworker is a robot: The impact of collaboration with AI on employees’ impression management concerns and organizational citizenship behavior. Int J Hosp Manag 128, 104179. [CrossRef]
- Breakwell, G.M., 1986. Coping with threatened identities. London.
- Breaugh, J.A., 1985. The Measurement of Work Autonomy. Human Relations 38, 551–570. [CrossRef]
- Brown, R., 2000. Social identity theory: past achievements, current problems and future challenges. Eur J Soc Psychol 30, 745–778.
- Brynjolfsson, E., McAfee, A., 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, 1st ed. W. W. Norton & Company.
- Budhwar, P., Malik, A., De Silva, M.T.T., Thevisuthan, P., 2022. Artificial intelligence – challenges and opportunities for international HRM: a review and research agenda. The International Journal of Human Resource Management 33, 1065–1097. [CrossRef]
- Burgoon, J.K., 2015. Expectancy Violations Theory, in: The International Encyclopedia of Interpersonal Communication. Wiley, pp. 1–9. [CrossRef]
- Cao, J., Song, Z., 2024. An incoming threat: the influence of automation potential on job insecurity. Asia-Pacific Journal of Business Administration. [CrossRef]
- Cao, L., Chen, C., Dong, X., Wang, M., Qin, X., 2023. The dark side of AI identity: Investigating when and why AI identity entitles unethical behavior. Comput Human Behav 143, 107669. [CrossRef]
- Carter, M., Grover, V., 2015. Me, My Self, and I(T): Conceptualizing Information Technology Identity and its Implications. MIS Quarterly 39, 931–957. [CrossRef]
- Carter, M., Petter, S., Grover, V., Thatcher, J.B., 2020. IT identity: A measure and empirical investigation of its utility to IS research. J Assoc Inf Syst 21, 1313–1342. [CrossRef]
- Chen, Q., Gong, Y., Lu, Y., Chau, P.Y.K., 2023. How mindfulness decreases cyberloafing at work: a dual-system theory perspective. European Journal of Information Systems 32, 841–857. [CrossRef]
- Chowdhury, S., Budhwar, P., Dey, P.K., Joel-Edgar, S., Abadie, A., 2022. AI-employee collaboration and business performance: Integrating knowledge-based view, socio-technical systems and organisational socialisation framework. J Bus Res 144, 31–49. [CrossRef]
- Cohen, J., 2013. Statistical Power Analysis for the Behavioral Sciences. Routledge. [CrossRef]
- Conroy, S., Henle, C.A., Shore, L., Stelman, S., 2017. Where there is light, there is dark: A review of the detrimental outcomes of high organizational identification. J Organ Behav 38, 184–203. [CrossRef]
- Craig, K., Thatcher, J.B., Grover, V., 2019. The IT Identity Threat: A Conceptual Definition and Operational Measure. Journal of Management Information Systems 36, 259–288. [CrossRef]
- Davenport, T.H., Ronanki, R., Wheaton, J., Nguyen, A., 2018. Artificial intelligence for the real world. Harvard Business Review, 108.
- Deci, E.L., Ryan, R.M., 2000. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol Inq 11, 227–268. [CrossRef]
- Dhillon, P.S., Molaei, S., Li, J., Golub, M., Zheng, S., Robert, L.P., 2024. Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models, in: Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, pp. 1–18. [CrossRef]
- Duan, W.-Y., Wu, T.-J., Liang, Y., 2025. Are resources always beneficial? The three-way interaction between slashies’ role stress, self-efficacy and job autonomy. Asia Pacific Journal of Management. [CrossRef]
- Duan, W.-Y., Wu, T.-J., Wei, A.-P., Huang, Y.-T., 2024. Reducing the adverse effects of compulsory citizenship behaviour on employee innovative behaviour via AI usage in China. Asia Pacific Business Review 1–21. [CrossRef]
- Dunleavy, P., Margetts, H., 2025. Data science, artificial intelligence and the third wave of digital era governance. Public Policy Adm 40, 185–214. [CrossRef]
- Echegu, D.A., 2024. Artificial Intelligence (AI) in Customer Service: Revolutionising Support and Engagement. IAA Journal of Scientific Research 11, 33–39. [CrossRef]
- Fornell, C., Larcker, D.F., 1981. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research 18, 39–50. [CrossRef]
- Glassman, J., Prosch, M., Shao, B.B.M., 2015. To monitor or not to monitor: Effectiveness of a cyberloafing countermeasure. Information & Management 52, 170–182. [CrossRef]
- Guo, Y., Rammal, H.G., Pereira, V., 2021. Am I ‘In or Out’? A social identity approach to studying expatriates’ social networks and adjustment in a host country context. J Bus Res 136, 558–566. [CrossRef]
- Gupta, M., Mehta, N.K.K., Agarwal, U.A., Jawahar, I.M., 2025. The mediating role of psychological capital in the relationship between LMX and cyberloafing. Leadership & Organization Development Journal 46, 85–101. [CrossRef]
- Hair, J., Hult, G.T.M., Ringle, C., Sarstedt, M., 2016. A Primer on Partial Least Squares Structural Equation Modeling.
- Henseler, J., Ringle, C.M., Sarstedt, M., 2015. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci 43, 115–135. [CrossRef]
- Hessari, H., Daneshmandi, F., Busch, P., Smith, S., 2025. Mitigating cyberloafing through employee adaptability: the roles of temporal leadership, teamwork attitudes and competitive work environment. Asia-Pacific Journal of Business Administration 17, 303–336. [CrossRef]
- Hewlin, P.F., Karelaia, N., Kouchaki, M., Sedikides, C., 2020. Authenticity at work: Its shapes, triggers, and consequences. Organ Behav Hum Decis Process 158, 80–82. [CrossRef]
- Hornsey, M.J., 2008. Social Identity Theory and Self-categorization Theory: A Historical Review. Soc Personal Psychol Compass 2, 204–222. [CrossRef]
- Huang, T.-L., 2019. Psychological mechanisms of brand love and information technology identity in virtual retail environments. Journal of Retailing and Consumer Services 47, 251–264. [CrossRef]
- Ibarra, H., 1999. Provisional Selves: Experimenting with Image and Identity in Professional Adaptation. Adm Sci Q 44, 764–791. [CrossRef]
- Ibarra, H., Barbulescu, R., 2010. Identity as narrative: Prevalence, effectiveness, and consequences of narrative identity work in macro work role transitions. Academy of Management Review 35, 135–154. [CrossRef]
- Jarrahi, M.H., 2018. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Bus Horiz 61, 577–586. [CrossRef]
- Jia, N., Luo, X., Fang, Z., Liao, C., 2024. When and How Artificial Intelligence Augments Employee Creativity. Academy of Management Journal 67, 5–32. [CrossRef]
- Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Yilong, Dong, Q., Shen, H., Wang, Yongjun, 2017. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc Neurol. [CrossRef]
- Jussupow, E., Spohrer, K., Dibbern, J., Heinzl, A., 2018. AI changes who we are – doesn’t IT? Intelligent decision support and physicians’ professional identity, in: European Conference on Information Systems.
- Jussupow, E., Spohrer, K., Heinzl, A., 2022. Identity Threats as a Reason for Resistance to Artificial Intelligence: Survey Study With Medical Students and Professionals. JMIR Form Res 6, e28750. [CrossRef]
- Kellogg, K.C., 2019. Subordinate Activation Tactics: Semi-professionals and Micro-level Institutional Change in Professional Organizations. Adm Sci Q 64, 928–975. [CrossRef]
- Koay, K.Y., Lim, V.K.G., Soh, P.C.-H., Ong, D.L.T., Ho, J.S.Y., Lim, P.K., 2022. Abusive supervision and cyberloafing: A moderated moderation model of moral disengagement and negative reciprocity beliefs. Information & Management 59, 103600. [CrossRef]
- Kock, N., 2015. Common Method Bias in PLS-SEM. International Journal of e-Collaboration 11, 1–10. [CrossRef]
- Kong, H., Yin, Z., Baruch, Y., Yuan, Y., 2023. The impact of trust in AI on career sustainability: The role of employee–AI collaboration and protean career orientation. J Vocat Behav 146, 103928. [CrossRef]
- Kreiner, G.E., Hollensbe, E.C., Sheep, M.L., 2006. Where is the “Me” Among the “We”? Identity Work and the Search for Optimal Balance. Academy of Management Journal 49, 1031–1057. [CrossRef]
- Lai, C.H.Y., Koay, K.Y., Fujimoto, Y., Lim, V.K.G., Ong, D., 2025. Understanding the effects of socially responsible human resource management on cyberloafing: a moderation and mediation model. Management Decision. [CrossRef]
- Latif, M.S., Wang, J.J., Shahzad, M., 2024. Do ethics drive value co-creation behavior in online health communities? Information Technology and People 37, 1–28. [CrossRef]
- Lazarus, R., Folkman, S., 1984. Stress, Appraisal, and Coping. Springer, New York.
- Liang, Y., Wu, T.-J., Lin, W., 2024. Exploring the impact of forced teleworking on counterproductive work behavior: the role of event strength and work-family conflict. Internet Research. [CrossRef]
- Lim, V.K.G., 2002. The IT way of loafing on the job: cyberloafing, neutralizing and organizational justice. J Organ Behav 23, 675–694. [CrossRef]
- Lim, V.K.G., Teo, T.S.H., 2005. Prevalence, perceived seriousness, justification and regulation of cyberloafing in Singapore. Information & Management 42, 1081–1093. [CrossRef]
- Liu, Q., Geertshuis, S., 2019. Professional identity and the adoption of learning management systems. Studies in Higher Education. [CrossRef]
- Liu, X., Li, Y., 2025. Examining the Double-Edged Sword Effect of AI Usage on Work Engagement: The Moderating Role of Core Task Characteristics Substitution. Behavioral sciences (Basel, Switzerland) 15. [CrossRef]
- Ma, K., Zhang, Y., Hui, B., 2024. How Does AI Affect College? The Impact of AI Usage in College Teaching on Students’ Innovative Behavior and Well-Being. Behavioral Sciences 14, 1223. [CrossRef]
- Man Tang, P., Koopman, J., McClean, S.T., Zhang, J.H., Li, C.H., De Cremer, D., Lu, Y., Ng, C.T.S., 2022. When Conscientious Employees Meet Intelligent Machines: An Integrative Approach Inspired by Complementarity Theory and Role Theory. Academy of Management Journal 65, 1019–1054. [CrossRef]
- Meng, Q., Wu, T.-J., Duan, W., Li, S., 2025. Effects of Employee–Artificial Intelligence (AI) Collaboration on Counterproductive Work Behaviors (CWBs): Leader Emotional Support as a Moderator. Behavioral Sciences 15, 696. [CrossRef]
- Mirbabaie, M., Brünker, F., Möllmann Frick, N.R.J., Stieglitz, S., 2022. The rise of artificial intelligence – understanding the AI identity threat at the workplace. Electronic Markets 32, 73–99. [CrossRef]
- Möhlmann, M., Zalmanson, L., Henfridsson, O., Gregory, R.W., 2021. Algorithmic Management of Work on Online Labor Platforms: When Matching Meets Control. MIS Quarterly 45, 1999–2022. [CrossRef]
- Nach, H., Lejeune, A., 2010. Coping with information technology challenges to identity: A theoretical framework. Comput Human Behav 26, 618–629. [CrossRef]
- Parker, S.K., Grote, G., 2022. Automation, Algorithms, and Beyond: Why Work Design Matters More Than Ever in a Digital World. Applied Psychology 71, 1171–1204. [CrossRef]
- Petriglieri, J.L., 2011. Under Threat: Responses to and the Consequences of Threats to Individuals’ Identities. Academy of Management Review 36, 641–662. [CrossRef]
- Podsakoff, P.M., MacKenzie, S.B., Lee, J.-Y., Podsakoff, N.P., 2003. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology 88, 879–903. [CrossRef]
- Pratt, M.G., Rockmann, K.W., Kaufmann, J.B., 2006. Constructing Professional Identity: The Role of Work and Identity Learning Cycles in the Customization of Identity Among Medical Residents. Academy of Management Journal 49, 235–262. [CrossRef]
- Rai, A., Constantinides, P., Sarker, S., 2019. Next-generation digital platforms: Toward human-AI hybrids. MIS Quarterly.
- Reychav, I., Beeri, R., Balapour, A., Raban, D.R., Sabherwal, R., Azuri, J., 2019. How reliable are self-assessments using mobile technology in healthcare? The effects of technology identity and self-efficacy. Comput Human Behav 91, 52–61. [CrossRef]
- Ringle, C.M., Wende, S., Becker, J.M., 2020. SmartPLS 3.
- Rubin, M., Kevin Owuamalam, C., Spears, R., Caricati, L., 2023. A social identity model of system attitudes (SIMSA): Multiple explanations of system justification by the disadvantaged that do not depend on a separate system justification motive. Eur Rev Soc Psychol 34, 203–243. [CrossRef]
- Saluja, S., Sinha, S., Goel, S., 2024. Loafing in the era of generative AI. Organ Dyn 101101. [CrossRef]
- Scheifele, C., Ehrke, F., Viladot, M.A., Van Laar, C., Steffens, M.C., 2021. Testing the basic socio-structural assumptions of social identity theory in the gender context: Evidence from correlational studies on women’s leadership. Eur J Soc Psychol 51, 1038–1060. [CrossRef]
- Schepman, A., Rodway, P., 2020. Initial validation of the general attitudes towards Artificial Intelligence Scale. Computers in Human Behavior Reports 1, 100014. [CrossRef]
- Schwarz, A., Rizzuto, T., Carraher-Wolverton, C., Roldán, J.L., Barrera-Barrera, R., 2017. Examining the Impact and Detection of the “Urban Legend” of Common Method Bias. ACM SIGMIS Database: the DATABASE for Advances in Information Systems 48, 93–119. [CrossRef]
- Seeber, I., Bittner, E., Briggs, R.O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A.B., Oeste-Reiß, S., Randrup, N., Schwabe, G., Söllner, M., 2020. Machines as teammates: A research agenda on AI in team collaboration. Information & Management 57, 103174. [CrossRef]
- Shao, W., Zhang, Y., Cheng, A., Quach, S., Thaichon, P., 2023. Ethnicity in advertising and millennials: the role of social identity and social distinctiveness. Int J Advert 42, 1377–1418. [CrossRef]
- Sluss, D.M., Ashforth, B.E., 2007. Relational Identity and Identification: Defining Ourselves Through Work Relationships. Academy of Management Review 32, 9–32. [CrossRef]
- Sowa, K., Przegalinska, A., Ciechanowski, L., 2021. Cobots in knowledge work. J Bus Res 125, 135–142. [CrossRef]
- Spring, M., Faulconbridge, J., Sarwar, A., 2022. How information technology automates and augments processes: Insights from Artificial-Intelligence-based systems in professional service operations. Journal of Operations Management 68, 592–618. [CrossRef]
- Stets, J.E., Burke, P.J., 2000. Identity Theory and Social Identity Theory. Soc Psychol Q 63, 224–237. [CrossRef]
- Susilowati, C., Barinta, D.D., 2024. The Influence of Knowledge Management and Green Innovation on the Environmental Performance of MSMEs in Malang City: A Study of the Laundry Sector. Jurnal Manajemen Bisnis 11, 79–93. [CrossRef]
- Swann, W.B., Gómez, Á., Dovidio, J.F., Hart, S., Jetten, J., 2010. Dying and Killing for One’s Group. Psychol Sci 21, 1176–1183. [CrossRef]
- Tajfel, H., Turner, J.C., 1979. An integrative theory of intergroup conflict, in: Austin, W.G., Worchel, S. (Eds.), The Social Psychology of Intergroup Relations. Brooks/Cole, Monterey, CA, pp. 33–47.
- Tandon, A., Kaur, P., Ruparel, N., Islam, J.U., Dhir, A., 2022. Cyberloafing and cyberslacking in the workplace: systematic literature review of past achievements and future promises. Internet Research 32, 55–89. [CrossRef]
- Tarafdar, M., Tu, Q., Ragu-Nathan, B.S., Ragu-Nathan, T.S., 2007. The Impact of Technostress on Role Stress and Productivity. Journal of Management Information Systems 24, 301–328. [CrossRef]
- Topol, E., 2019. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, 1st ed. Basic Books, Inc., USA.
- Tsai, H.-Y., 2023. Do you feel like being proactive today? How daily cyberloafing influences creativity and proactive behavior: The moderating roles of work environment. Comput Human Behav 138, 107470. [CrossRef]
- Ugrin, J., Pearson, J., 2008. Exploring Internet abuse in the workplace: How can we maximize deterrence efforts? Review of Business 28, 29–40.
- Wagner, D.T., Barnes, C.M., Lim, V.K.G., Ferris, D.L., 2012. Lost sleep and cyberloafing: Evidence from the laboratory and a daylight saving time quasi-experiment. Journal of Applied Psychology 97, 1068–1076. [CrossRef]
- Weng, Q., McElroy, J.C., Morrow, P.C., Liu, R., 2010. The relationship between career growth and organizational commitment. J Vocat Behav 77, 391–400. [CrossRef]
- Wu, T.-J., Li, J.-M., Wu, Y.J., 2022. Employees’ job insecurity perception and unsafe behaviours in human–machine collaboration. Management Decision 60, 2409–2432. [CrossRef]
- Wu, T.-J., Liang, Y., Duan, W.-Y., Zhang, S.-D., 2024a. Forced shift to teleworking: how after-hours ICTs implicate COVID-19 perceptions when employees experience abusive supervision. Current Psychology 43, 22686–22700. [CrossRef]
- Wu, T.-J., Liang, Y., Wang, Y., 2024b. The Buffering Role of Workplace Mindfulness: How Job Insecurity of Human-Artificial Intelligence Collaboration Impacts Employees’ Work–Life-Related Outcomes. J Bus Psychol 39, 1395–1411. [CrossRef]
- Yan, B., Teng, Y., 2025. The double-edged sword effect of artificial intelligence awareness on organisational citizenship behaviour: a study based on knowledge workers. Behaviour & Information Technology 1–17. [CrossRef]
- Zhang, Q., Liao, G., Ran, X., Wang, F., 2025. The Impact of AI Usage on Innovation Behavior at Work: The Moderating Role of Openness and Job Complexity. Behavioral Sciences 15, 491. [CrossRef]
- Zhang, S., Zhao, X., Zhou, T., Kim, J.H., 2024. Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. International Journal of Educational Technology in Higher Education 21, 34. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).