From Cognitive Balance to Perceptual Integrity: A Proposed Theory of Conscious Leadership in Human–AI Interaction

Preprint (not peer-reviewed)

Submitted: 04 April 2026
Posted: 06 April 2026


Abstract
Recent advances in artificial intelligence have significantly enhanced decision efficiency, yet they have also introduced a less examined challenge: the transformation of human cognition within system-driven environments. While prior research has primarily focused on trust, fairness, and transparency, limited attention has been given to the cognitive structure underlying decision coherence in human–AI interaction. This paper introduces a proposed theory of conscious leadership that conceptualizes cognition as an emergent interaction among environment, memory, systems, and the human agent. Within this framework, cognitive balance is defined as the equilibrium among these forces, and perceptual integrity is positioned as its measurable manifestation, reflecting the extent to which individuals maintain coherence and authorship over their decisions when interacting with intelligent systems. We hypothesize that cognitive balance positively predicts perceptual integrity, which in turn influences trust in AI-assisted decisions, and that awareness, defined as the individual's conscious recognition of system influence, moderates this relationship. An experimental study (N = 602 recruited; 575 analyzed) was conducted to test these propositions. Results indicate that perceptual integrity significantly predicts trust (β = 0.41, p < 0.001) and partially mediates the relationship between decision mode and trust (indirect effect = 0.19, 95% CI [0.11, 0.29]). Furthermore, awareness moderates the effect of system-driven imposition on perceptual integrity (β = 0.17, p < 0.01), such that higher awareness reduces the negative impact of algorithmic enforcement. These findings extend leadership theory by shifting the focus from behavioral control to the management of cognitive balance, and they contribute to human–AI research by introducing perceptual integrity and awareness as foundational constructs for preserving human coherence in increasingly automated environments.

1. Introduction

Recent advances in artificial intelligence (AI) have significantly enhanced decision efficiency across domains ranging from healthcare to organizational management. Systems are now capable of processing vast amounts of data, identifying patterns, and generating recommendations with unprecedented speed and accuracy. However, this progress has introduced a less examined challenge: the transformation of human cognition within system-driven environments. While existing research has primarily focused on trust, fairness, and transparency in human–AI interaction, limited attention has been given to the cognitive structure that underlies decision coherence itself [4,5,6].
A growing body of literature suggests that human interaction with intelligent systems is neither neutral nor stable. Individuals may exhibit algorithm aversion after observing system errors, even when algorithms outperform human judgment [2,9], or, conversely, display algorithm appreciation when systems are perceived as objective and consistent [3]. More recent work has further demonstrated the risk of overreliance on AI, where individuals defer to system outputs at the expense of independent reasoning [7,8]. These findings highlight a critical paradox: as AI systems become more capable, the human role in decision-making may become increasingly passive or structurally constrained.
Despite these advances, current research remains largely outcome-oriented. Constructs such as trust, perceived fairness, and transparency explain whether individuals accept or reject AI-assisted decisions, but they do not fully explain how individuals maintain coherence in their own decision processes. In other words, existing frameworks address decision acceptance, but not decision authorship. This gap becomes particularly salient in environments where decisions are partially or fully shaped by algorithmic systems, raising the question of whether individuals remain cognitively engaged agents or become functionally embedded components within system logic [5,10].
To address this limitation, this paper introduces a proposed theory of conscious leadership that shifts the analytical focus from outcomes to cognitive structure. The theory conceptualizes cognition as an emergent interaction among four core elements: environment, memory, systems, and the human agent. Within this framework, decision quality is not determined solely by accuracy or efficiency, but by the degree of balance among these interacting forces. When this balance is maintained, individuals are able to engage with systems without losing interpretive depth; when it is disrupted, decision processes may appear efficient while lacking coherence or meaning.
Building on this perspective, we introduce the construct of perceptual integrity as the measurable manifestation of cognitive balance. Perceptual integrity refers to the extent to which individuals retain a sense of coherence, interpretability, and authorship over their decisions when interacting with intelligent systems. Unlike trust, which reflects an evaluation of the system, perceptual integrity captures a condition of the human cognitive process itself. As such, it represents a more fundamental layer of analysis that precedes and shapes trust, rather than merely correlating with it.
In addition, this study incorporates awareness as a critical moderating factor. Awareness is defined as the individual’s conscious recognition of system influence on their decision-making process. Prior research suggests that reflective engagement can mitigate overreliance on automated systems [7], yet its role in preserving cognitive coherence has not been systematically examined. We argue that awareness functions as a protective mechanism, enabling individuals to maintain perceptual integrity even in highly structured, system-driven environments.
Empirically, this study builds on experimental data (N = 602) to examine the relationships among cognitive balance, perceptual integrity, trust, and awareness in AI-assisted decision contexts. Specifically, we test whether perceptual integrity mediates the relationship between system influence and trust, and whether awareness moderates the impact of system-driven decision structures on perceptual integrity. By integrating theoretical development with empirical validation, this paper aims to move beyond descriptive accounts of human–AI interaction toward a more foundational understanding of how cognition itself is shaped in the presence of intelligent systems.
This work makes three primary contributions. First, it advances the literature on human–AI interaction by introducing perceptual integrity as a novel construct that captures decision coherence rather than decision acceptance. Second, it extends leadership theory by proposing a shift from behavioral control to the management of cognitive balance, positioning leadership as the design of perceptual conditions rather than the direction of action. Third, it provides an integrated framework that connects cognitive structure, system interaction, and decision outcomes, offering a new lens through which to examine the human role in increasingly automated environments.
Taken together, these contributions lay the foundation for what we term Rohaimi’s Theory of Conscious Leadership, an emerging framework that redefines leadership as the capacity to maintain cognitive balance in contexts shaped by intelligent systems. In doing so, the paper addresses a central question in the age of AI: not only whether humans trust intelligent systems, but whether they remain cognitively present within the decisions those systems help produce.

2. Materials and Methods

Study Design

This study employed a randomized controlled experimental design to examine the effects of system-driven decision structures on perceptual integrity and trust in AI-assisted contexts. Participants were randomly assigned to one of two conditions: (1) an AI-assisted recommendation condition, in which participants received algorithmic suggestions prior to making a decision, and (2) a self-directed decision condition, in which participants made decisions without system input. The design allowed for causal inference regarding the impact of algorithmic influence on cognitive and behavioral outcomes.

Participants

A total of 602 participants were recruited through an online research platform to ensure demographic diversity across age, gender, and professional background. Participants were required to be at least 18 years old and proficient in English. All participants provided informed consent prior to participation.
The study protocol adhered to standard ethical guidelines for human-subject research. Participation was voluntary, and responses were anonymized to ensure confidentiality.

Procedure

Participants were presented with a series of decision-making scenarios designed to simulate real-world contexts involving AI-supported judgment (e.g., hiring decisions, resource allocation, or risk evaluation).
In the AI-assisted condition, participants received a system-generated recommendation accompanied by a confidence score. In the self-directed condition, no recommendation was provided. Participants were asked to make a final decision and subsequently evaluate their decision-making experience.
To ensure internal validity, a manipulation check was included to confirm that participants in the AI-assisted condition perceived the presence of algorithmic influence. Participants who failed the manipulation check were excluded from the final analysis.

Measures

Perceptual Integrity

Perceptual integrity was operationalized as a multi-item construct capturing the extent to which participants perceived their decisions as coherent, interpretable, and self-authored. Items were measured using a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree). Sample items included: “I fully understood how this decision was made” and “This decision reflects my own judgment.” Internal consistency was high (Cronbach’s α > 0.85).
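For concreteness, the reliability figure above can be computed directly from item-level responses. The following Python sketch implements the standard Cronbach's alpha formula; the file name and item columns (pi_1 through pi_4) are hypothetical placeholders, since the paper does not publish its full instrument or data.

```python
# Minimal sketch: Cronbach's alpha for a multi-item Likert scale.
# File name and column names are hypothetical placeholders.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha; rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("responses.csv")                # hypothetical data file
alpha = cronbach_alpha(df[["pi_1", "pi_2", "pi_3", "pi_4"]])
print(f"Cronbach's alpha = {alpha:.2f}")         # paper reports alpha > 0.85
```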

Trust in AI

Trust was measured using established scales adapted from prior literature on human–AI interaction [4]. Participants rated their confidence in the reliability and appropriateness of the decision outcome. Items were measured on a 7-point Likert scale, with higher scores indicating greater trust.

Awareness

Awareness was defined as the individual’s conscious recognition of system influence on their decision-making process. It was measured using a set of self-report items assessing the extent to which participants reflected on and recognized the role of the algorithm in shaping their decision. Items were rated on a 7-point Likert scale.

Control Variables

Demographic variables (age, gender, and education level) were included as controls. In addition, prior familiarity with AI systems was measured to account for potential bias in responses.

Statistical Analysis

Data were analyzed using a combination of independent samples t-tests, multiple regression analysis, and mediation/moderation models.
First, t-tests were conducted to assess differences between experimental conditions. Second, regression analyses were performed to examine the relationship between perceptual integrity and trust. Mediation analysis was conducted using a bootstrapping approach (5000 resamples) to test whether perceptual integrity mediated the relationship between decision condition and trust.
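The following Python sketch illustrates this pipeline under stated assumptions: the data file and variable names (condition coded 1 = AI-assisted and 0 = self-directed, pi for perceptual integrity, trust) are hypothetical stand-ins, and the authors' exact model specifications may differ.

```python
# Minimal sketch of the analysis pipeline: t-test, regression, and
# percentile-bootstrap mediation. All names are hypothetical stand-ins.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")   # hypothetical file

# Step 1: condition difference in perceptual integrity
ai = df.loc[df["condition"] == 1, "pi"]
self_dir = df.loc[df["condition"] == 0, "pi"]
t, p = stats.ttest_ind(ai, self_dir)
print(f"t = {t:.2f}, p = {p:.4f}")

# Step 2: perceptual integrity as a predictor of trust
print(smf.ols("trust ~ pi", data=df).fit().summary())

# Step 3: indirect effect a*b with a 5,000-resample percentile bootstrap
a_hat = smf.ols("pi ~ condition", data=df).fit().params["condition"]
b_hat = smf.ols("trust ~ pi + condition", data=df).fit().params["pi"]
rng = np.random.default_rng(2026)
n, n_boot = len(df), 5000
indirect = np.empty(n_boot)
for i in range(n_boot):
    boot = df.iloc[rng.integers(0, n, size=n)]   # resample rows with replacement
    a = smf.ols("pi ~ condition", data=boot).fit().params["condition"]
    b = smf.ols("trust ~ pi + condition", data=boot).fit().params["pi"]
    indirect[i] = a * b
lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect = {a_hat * b_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```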
To examine moderation effects, interaction terms were included in regression models to test whether awareness moderated the relationship between system influence and perceptual integrity. All analyses were conducted at a significance level of p < 0.05.
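A moderation model of this form can be sketched as follows; the covariates and variable names are again illustrative assumptions rather than the authors' exact specification.

```python
# Minimal sketch of the moderation model: does awareness buffer the
# effect of condition on perceptual integrity? Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")                 # hypothetical file
df["aware_c"] = df["aware"] - df["aware"].mean()   # mean-center the moderator

# 'condition * aware_c' expands to both main effects plus the interaction;
# the condition:aware_c coefficient is the moderation test
model = smf.ols("pi ~ condition * aware_c + age + C(gender) + C(education)",
                data=df).fit()
print(model.summary())
```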

Data Transparency

All data supporting the findings of this study are available from the corresponding author upon reasonable request. The study was not pre-registered, but all analyses were conducted in accordance with standard statistical practices.

3. Results

Descriptive Statistics and Manipulation Check

Of the 602 participants recruited, 27 were excluded for failing the manipulation check, leaving a final analytic sample of 575. Participants in the AI-assisted condition reported significantly higher perceived system influence (M = 5.82, SD = 0.91) than those in the self-directed condition (M = 3.14, SD = 1.07), confirming the effectiveness of the experimental manipulation (t(573) = 9.36, p < 0.001).
Descriptive statistics indicated moderate-to-high average levels of perceptual integrity (M = 4.76, SD = 1.12) and trust (M = 4.91, SD = 1.08); the standard deviations indicate meaningful variability in how participants experienced AI-assisted decision-making.

Effect of Decision Condition on Perceptual Integrity

An independent-samples t-test revealed that participants in the AI-assisted condition reported lower perceptual integrity (M = 4.52, SD = 1.15) than those in the self-directed condition (M = 5.01, SD = 1.05). This difference was statistically significant (t(573) = −3.84, p < 0.001), with a small-to-moderate effect size (Cohen's d = 0.38), indicating that algorithmic input may reduce perceived coherence and authorship of decisions.
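For reference, the reported effect size corresponds to the standard pooled-variance definition of Cohen's d, shown here as a definition only; the authors may have computed a variant, and the exact value depends on the group sizes:

$$
d = \frac{M_{\text{self}} - M_{\text{AI}}}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$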

Perceptual Integrity as a Predictor of Trust

Regression analysis showed that perceptual integrity was a significant predictor of trust in decision outcomes (β = 0.41, SE = 0.05, p < 0.001), explaining approximately 17% of the variance in trust (R2 = 0.17). This suggests that individuals who experienced higher coherence and interpretability in their decisions were more likely to trust the outcomes, regardless of decision condition.
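As an internal consistency check, in a single-predictor regression with standardized variables the explained variance equals the squared standardized slope, which matches the reported values:

$$
R^2 = \beta^2 = 0.41^2 \approx 0.17
$$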

Mediation Analysis

A mediation analysis using bootstrapping (5,000 resamples) indicated that perceptual integrity partially mediated the relationship between decision condition and trust. The indirect effect was significant (indirect effect = 0.19, 95% CI [0.11, 0.29]), while the direct effect of decision condition on trust remained significant but reduced (β = 0.21, p < 0.01), suggesting partial rather than full mediation.
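In path-analytic terms, this partial mediation reflects the standard decomposition of the total effect of decision condition on trust, which holds exactly for linear models with consistently scaled variables:

$$
c = c' + ab
$$

where a is the path from decision condition to perceptual integrity, b the path from perceptual integrity to trust controlling for condition, c' the direct effect (0.21 here), and ab the bootstrapped indirect effect (0.19).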

Moderating Role of Awareness

To examine the moderating role of awareness, an interaction term between decision condition and awareness was included in the regression model predicting perceptual integrity. The interaction effect was significant (β = 0.17, SE = 0.06, p = 0.004), indicating that awareness moderated the relationship between system influence and perceptual integrity.
Simple slope analysis revealed that for participants with higher levels of awareness (+1 SD), the negative effect of AI-assisted decision-making on perceptual integrity was attenuated and no longer statistically significant (p = 0.08). In contrast, for participants with lower awareness (−1 SD), the negative effect remained significant (p < 0.001). This suggests that awareness may function as a buffering mechanism, preserving cognitive coherence under system influence.
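The simple-slope probe can be implemented by re-centering the moderator at plus or minus one standard deviation, as in the following sketch; variable names are hypothetical stand-ins, as before.

```python
# Minimal sketch of a simple-slopes probe: re-center the moderator so the
# 'condition' coefficient becomes the simple slope at that awareness level.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")   # hypothetical file
sd = df["aware"].std(ddof=1)

for label, shift in [("low awareness (-1 SD)", -sd),
                     ("high awareness (+1 SD)", +sd)]:
    # aware_c = 0 now corresponds to aware = mean + shift
    df["aware_c"] = df["aware"] - (df["aware"].mean() + shift)
    fit = smf.ols("pi ~ condition * aware_c", data=df).fit()
    print(label, f"b = {fit.params['condition']:.2f},",
          f"p = {fit.pvalues['condition']:.4f}")
```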

Robustness Checks

Including demographic controls (age, gender, education) and prior familiarity with AI did not materially alter the results. The overall pattern of findings remained stable, indicating that the observed effects were not driven by demographic variation.

Summary of Findings

Overall, the results support the proposed framework by demonstrating that (1) system-driven decision environments can reduce perceptual integrity, (2) perceptual integrity is positively associated with trust, (3) perceptual integrity partially mediates the relationship between decision condition and trust, and (4) awareness moderates the impact of system influence on perceptual integrity. These findings provide empirical support for the role of cognitive balance and its manifestation in perceptual integrity within human–AI interaction contexts.

4. Discussion

The present study set out to move beyond outcome-based accounts of human–AI interaction by examining the cognitive conditions under which individuals maintain coherence in system-assisted decision-making. While prior research has primarily focused on trust, fairness, and transparency, the findings of this study suggest that a more fundamental layer underlies these constructs—namely, the integrity of the perceptual process through which decisions are formed.
The results provide empirical support for the proposed framework by demonstrating that perceptual integrity is both sensitive to system influence and predictive of trust. Participants exposed to AI-assisted recommendations reported lower levels of perceptual integrity, indicating that algorithmic input can subtly disrupt the sense of authorship and interpretability in decision-making. At the same time, perceptual integrity emerged as a significant predictor of trust, suggesting that individuals do not simply evaluate systems based on performance, but also based on whether their own cognitive process remains coherent within the interaction.
Importantly, the mediation analysis indicates that perceptual integrity partially explains how system-driven decision structures influence trust. This finding extends existing literature by showing that trust is not only a function of system attributes, but also of the cognitive experience of the decision-maker. In other words, individuals may trust a system not because it is objectively superior, but because it allows them to remain cognitively engaged and aligned with the decision outcome. This distinction helps explain inconsistencies observed in prior research on algorithm aversion and appreciation [2,3,9], where acceptance of AI does not always correspond to its objective performance.
A central contribution of this study lies in the introduction of awareness as a moderating factor. The results indicate that awareness attenuates the negative effect of system influence on perceptual integrity, suggesting that individuals who consciously recognize algorithmic involvement are better able to preserve cognitive coherence. This finding aligns with prior work showing that reflective engagement can reduce overreliance on automated systems [7], but extends it by demonstrating that awareness operates not only at the level of behavior, but at the level of cognitive structure. Rather than simply resisting or accepting AI, individuals with higher awareness appear to recalibrate their engagement with the system in a way that maintains interpretive continuity.
Taken together, these findings support the underlying premise that decision quality cannot be fully understood through outcomes alone. Instead, it depends on the balance between multiple interacting forces that shape cognition, including environmental uncertainty, memory-based interpretation, and system-driven structuring. Within this perspective, perceptual integrity can be understood as the observable manifestation of this balance, capturing whether individuals remain active participants in their own decision processes or become passively aligned with system outputs.
These insights extend leadership theory by shifting the focus from behavioral control to the management of cognitive conditions. Traditional leadership models emphasize influence, direction, and performance outcomes, whereas the present framework suggests that leadership in system-driven environments involves maintaining the conditions under which individuals can think, interpret, and decide coherently. In this sense, leadership is less about guiding actions and more about preserving the cognitive space within which meaningful action can occur.
The findings of this study contribute to what we term Rohaimi’s Theory of Conscious Leadership, an emerging framework that conceptualizes leadership as the management of cognitive balance in environments shaped by intelligent systems. This perspective reframes leadership as a function of perceptual design, where the goal is not merely to optimize outcomes, but to sustain the integrity of human cognition within increasingly automated contexts. By introducing perceptual integrity and awareness as central constructs, the theory provides a foundation for understanding how human agency can be preserved without rejecting the benefits of technological advancement.

Limitations and Future Research

Several limitations should be considered. First, the study employed a controlled experimental design, which, while allowing for causal inference, may not fully capture the complexity of real-world decision environments. Future research could extend this work by examining perceptual integrity in longitudinal or field settings.
Second, awareness was measured using self-report instruments, which may be subject to bias. Further studies could incorporate behavioral or physiological measures to better capture reflective engagement with AI systems.
Third, while the present study focused on general decision-making scenarios, different domains (e.g., healthcare, finance, governance) may exhibit varying dynamics in the relationship between system influence and cognitive coherence. Exploring these contextual differences represents an important avenue for future research.
Finally, the proposed theoretical framework remains in an early stage of development. Additional empirical studies are needed to validate and refine the relationships among cognitive balance, perceptual integrity, and awareness across diverse populations and technological contexts.

5. Conclusions

This study advances the understanding of human–AI interaction by shifting the analytical focus from decision outcomes to the cognitive structure that underlies them. While prior research has emphasized trust, fairness, and system performance, the present findings demonstrate that these outcomes are contingent upon a more fundamental condition—whether individuals remain cognitively coherent within system-assisted decision processes.
By introducing perceptual integrity as the measurable manifestation of cognitive balance, this work provides a new conceptual lens for examining decision-making in increasingly automated environments. The results show that system-driven decision structures can reduce perceptual integrity, that perceptual integrity plays a significant role in shaping trust, and that awareness can mitigate the negative effects of algorithmic influence. Together, these findings suggest that the central challenge in the age of artificial intelligence is not only to optimize decisions, but to preserve the conditions under which those decisions remain meaningfully human.
Beyond its empirical contributions, this study offers a broader theoretical implication by proposing a shift in how leadership is understood in system-driven contexts. Rather than focusing on behavioral control or outcome optimization, leadership can be reconceptualized as the management of cognitive balance—ensuring that individuals retain the capacity to interpret, engage with, and take ownership of their decisions. This perspective lays the groundwork for what may evolve into a more comprehensive framework of conscious leadership in technologically mediated environments.
Looking forward, several directions for future research emerge. First, longitudinal studies are needed to examine how perceptual integrity evolves over time with repeated exposure to AI systems. Second, future work should explore domain-specific applications, particularly in high-stakes contexts such as healthcare, finance, and public policy, where the balance between efficiency and human judgment is critical. Third, alternative measures of awareness—beyond self-report—should be developed to capture its behavioral and cognitive dimensions more precisely. Finally, cross-cultural investigations may provide insight into how different social and institutional contexts influence the relationship between system design and cognitive coherence.
The practical implications of this work are equally significant. For organizations, the findings suggest that the design of AI systems should not be evaluated solely based on accuracy or efficiency, but also on their impact on users’ cognitive experience. Systems that support interpretability, allow for reflective engagement, and preserve user agency may be more sustainable in the long term. For policymakers, the results highlight the importance of developing guidelines that ensure human-centered AI deployment, where individuals remain active participants rather than passive recipients of system outputs. For leaders and decision-makers, the study underscores the need to create environments that balance technological capability with cognitive autonomy, enabling individuals to benefit from AI without losing their role in the decision process.
In conclusion, as artificial intelligence continues to reshape decision environments, the question is no longer only whether systems can make better decisions, but whether humans remain present within those decisions. Addressing this question requires moving beyond optimization toward a deeper understanding of cognition, balance, and awareness—an agenda that this study begins to outline.

Author Contributions

Conceptualization, A.H.A.; methodology, A.H.A.; formal analysis, A.H.A.; investigation, A.H.A.; resources, A.H.A.; data curation, A.H.A.; writing—original draft preparation, A.H.A.; writing—review and editing, A.H.A.; visualization, A.H.A.; supervision, A.H.A.; project administration, A.H.A. The author has read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by the author.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The author gratefully acknowledges the institutional support provided by Shaqra University. During the preparation of this manuscript, the author used ChatGPT (OpenAI) to assist with language editing and improving clarity of expression. The author reviewed and edited all AI-assisted outputs and assumes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manage. Rev. 1995, 20, 709–734.
  2. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126.
  3. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
  4. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manage. Ann. 2020, 14, 627–660.
  5. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Acad. Manage. Rev. 2021, 46, 192–210.
  6. Jussupow, E.; Spohrer, K.; Heinzl, A.; Barrot, C. Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Inf. Syst. Res. 2021, 32, 713–735.
  7. Buçinca, Z.; Malaya, M.B.; Gajos, K.Z. To trust or to think: Cognitive forcing functions can reduce overreliance on AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’21), Yokohama, Japan, 8–13 May 2021.
  8. Bansal, G.; Wu, T.; Zhou, D.; Fok, R. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’21), Yokohama, Japan, 8–13 May 2021.
  9. Burton, J.W.; Stein, M.-K.; Jensen, T.B. A systematic review of algorithm aversion. J. Behav. Decis. Mak. 2020, 33, 220–239.
  10. Köbis, N.; Bonnefon, J.-F.; Rahwan, I. Bad machines corrupt good morals. Nat. Hum. Behav. 2021, 5, 679–685.
  11. Siau, K.; Wang, W. Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. J. Database Manag. 2020, 31, 74–87.
  12. Dellermann, D.; Ebel, P.; Söllner, M.; Leimeister, J.M. Hybrid intelligence. Bus. Inf. Syst. Eng. 2019, 61, 637–643.
  13. Faraj, S.; Pachidi, S.; Sayegh, K. Working and organizing in the age of artificial intelligence. Inf. Organ. 2018, 28, 62–70.
  14. Schmutz, J.B.; Outland, N.; Kerstan, S.; Georganta, E.; Ulfert, A.-S. AI-teaming: Redefining collaboration in the digital era. Curr. Opin. Psychol. 2024, 58, 101837.
  15. Woolley, A.W.; Gupta, P. Understanding collective intelligence: Investigating the role of collective memory, attention, and reasoning processes. Perspect. Psychol. Sci. 2024, 19, 344–354.
  16. van Knippenberg, D.; Pearce, C.L.; van Ginkel, W.P. Shared leadership–vertical leadership dynamics in teams. Group Organ. Manag. 2025, 50, 44–67.
  17. De Vincenzo, F.; Curșeu, P.L.; Chirilă, M. Collective forms of leadership and team cognition in work teams: A systematic and critical review. Acta Psychol. 2025, 259, 105403.
  18. Abson, E.; Schofield, P.; Kennell, J. Making shared leadership work: The importance of trust in project-based organisations. Int. J. Proj. Manag. 2024, 42, 102567.
  19. Ling, T.C.; Choong, Y.O.; Ng, L.P.; Lau, T.C. Beyond fairness: Exploring organizational citizenship behavior through the lens of self-efficacy and trust in principals. Humanit. Soc. Sci. Commun. 2025, 12, 288.
  20. El-Ashry, A.M.; Abdo, B.M.E.; Khedr, M.A.; El-Sayed, M.M.; Abdelhay, I.S.; Zeid, M.-A.G.A. Mediating effect of psychological safety on the relationship between inclusive leadership and nurses’ absenteeism. BMC Nurs. 2025, 24, 826.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.