Preprint
Concept Paper

This version is not peer-reviewed.

A Conceptual Lexicon of Conscious Leadership: A Cognitive Architecture for Meaning in Human–AI Systems

Submitted:

08 April 2026

Posted:

10 April 2026

Abstract
A fundamental limitation in human–AI systems lies not only in how decisions are produced, but in how they are cognitively understood. While existing research has advanced models of trust, performance, and human–AI interaction, it provides limited conceptual tools for explaining how individuals construct meaning within system-mediated environments. This gap suggests that the challenge of human–AI integration is not only computational, but fundamentally conceptual.
This paper develops a structured conceptual framework of conscious leadership to organize the cognitive processes through which individuals interpret, engage with, and act within AI-supported systems. Rather than introducing isolated definitions, the framework is articulated as an interconnected system of constructs that collectively shape perception, interpretation, and decision coherence.
Building on prior work on perceptual integrity as a condition of cognitive coherence, the study identifies and integrates a set of foundational constructs, including cognitive balance, meaning gap, leadership latency, and cognitive governance. These constructs are positioned within a unified cognitive architecture that explains how meaning is formed, disrupted, and restored in human–AI interaction.
The paper makes three contributions. First, it reframes leadership as a cognitive–interpretive system rather than a purely behavioral or relational construct. Second, it introduces a structured framework as a methodological tool for analyzing and designing human–AI systems. Third, it provides a foundation for future empirical research by defining constructs that can be operationalized and tested across contexts.
As intelligent systems increasingly shape decision environments, structuring how meaning is constructed becomes as critical as optimizing decisions. A decision may be technically correct yet cognitively unintegrated. This study positions conceptual structure not as a descriptive layer, but as an active mechanism shaping cognition, leadership, and human–AI coherence.
Subject: Social Sciences – Sociology

1. Introduction

The increasing integration of artificial intelligence (AI) into decision-making processes has transformed not only how decisions are generated, but also how they are interpreted and understood. Contemporary research has made substantial progress in examining trust, performance, and human–AI collaboration, highlighting both the potential and the risks of algorithmic systems in shaping organizational and societal outcomes [1,4,5,6,11]. Yet, a critical dimension remains underdeveloped: the conceptual structures through which individuals interpret these systems and construct meaning around their role within them.
Existing frameworks largely focus on observable outcomes—whether individuals trust AI, rely on it, or resist it. While these perspectives provide valuable insights into behavioral responses, they offer limited explanatory power regarding how meaning is constructed during the decision process itself. In environments where algorithmic systems increasingly shape interpretation, evaluation, and choice, the central question is no longer only whether decisions are accurate, but whether they are cognitively interpretable. Decisions may be accepted without being understood.
The importance of meaning construction has long been recognized in organizational and cognitive research [13,21]. Sensemaking theory emphasizes that individuals actively interpret ambiguous environments by constructing coherent narratives that guide action, while research on collective intelligence highlights the role of shared mental models in enabling coordinated and adaptive behavior. However, the application of these perspectives to AI-mediated environments remains fragmented. Existing concepts are often treated in isolation, without a unifying framework that explains how they interact within a broader cognitive system.
This fragmentation becomes particularly consequential in contexts where human cognition interacts with computational systems operating according to fundamentally different logics. AI systems optimize decisions based on statistical patterns and predefined objectives, whereas human cognition relies on interpretation, contextualization, and meaning construction. The interaction between these modes of processing creates a structural gap: individuals may engage with system outputs without fully integrating them into their cognitive frameworks. As a result, decision-making may become operationally efficient yet conceptually misaligned [2,3,10].
To address this limitation, this paper builds on and integrates existing research in sensemaking, human–AI interaction, and cognitive systems to introduce a structured conceptual lexicon of conscious leadership. Rather than proposing a single construct, the lexicon defines a system of interrelated concepts that together form a coherent architecture for understanding how meaning is constructed, disrupted, and restored in system-mediated environments. In this sense, the lexicon functions not as a glossary of terms, but as a cognitive architecture for analyzing interpretation and decision coherence.
While prior research has explored constructs such as trust, autonomy, and reliance, it has not systematically formalized the conceptual structures through which meaning is maintained or disrupted in AI-mediated decision environments [4,5,6,12]. This work addresses that gap by organizing key constructs into a unified framework that explains how cognitive alignment is achieved and how it breaks down.
By structuring these constructs into an integrated system, the paper addresses a central challenge in human–AI interaction: the absence of a shared conceptual language for interpreting complex interactions between human cognition and intelligent systems. Without such a language, individuals may struggle to articulate, evaluate, or adjust their engagement with AI, limiting both theoretical development and practical application.
The paper makes three primary contributions. First, it reframes leadership as a cognitive–interpretive system rather than a purely behavioral or relational construct. Second, it introduces a structured conceptual lexicon as a methodological tool for analyzing and designing human–AI interaction. Third, it provides a foundation for future empirical research by defining constructs that can be operationalized and tested across contexts.
As AI systems continue to expand their role in shaping decisions, the challenge extends beyond improving system performance to structuring how meaning is formed within these environments. A system can optimize decisions while eroding understanding. Addressing this challenge requires not only technological advancement, but conceptual clarity. This paper takes a step in that direction by positioning conceptual language as a central mechanism in the design and understanding of human–AI coherence.

2. Conceptual Development

2.1. Research Design

This study adopts a conceptual research design aimed at developing a structured lexicon for understanding cognition and meaning construction in human–AI interaction. Rather than testing predefined hypotheses, the study focuses on theory building through the systematic identification, evaluation, and integration of core constructs relevant to conscious leadership in system-mediated environments.
The research follows an iterative conceptual development approach, drawing on and integrating insights from cognitive psychology, organizational theory, and human–AI interaction literature to construct a coherent and internally consistent framework.

2.2. Conceptual Derivation Process

The development of the lexicon proceeded through three stages.
First, a targeted literature review was conducted to identify constructs related to cognition, decision-making, sensemaking, and leadership in AI-supported contexts. The review focused on peer-reviewed studies published between 2020 and 2025, supplemented by foundational works in sensemaking and organizational cognition. Sources were identified using databases such as Scopus and Web of Science, with keywords including human–AI interaction, sensemaking, algorithmic decision-making, and cognitive processes.
Second, identified constructs were analytically compared and grouped based on their functional roles within cognitive processes. This involved distinguishing between constructs that represent conditions (e.g., cognitive balance), states (e.g., perceptual integrity), and processes (e.g., meaning construction), enabling a structured differentiation of conceptual roles.
Third, a synthesis phase was conducted in which selected constructs were refined, redefined where necessary, and organized into an integrated conceptual system. During this phase, overlapping or redundant constructs were eliminated to ensure conceptual clarity, parsimony, and internal coherence.

2.3. Construct Selection Criteria

To ensure theoretical rigor and relevance, constructs included in the lexicon were selected based on three criteria:
  • Theoretical grounding: Each construct is supported by existing literature or extends established theoretical foundations in cognition, leadership, or human–AI interaction.
  • Conceptual distinctiveness: Constructs represent unique dimensions of cognition or leadership and are not reducible to existing variables such as trust, reliance, or autonomy.
  • System relevance: Constructs contribute directly to explaining how meaning is formed, disrupted, or restored within human–AI interaction.
Based on these criteria, a subset of core constructs was identified, including cognitive balance, perceptual integrity, meaning gap, leadership latency, and cognitive governance.

2.4. Conceptual Structuring

Following selection, constructs were organized into a structured framework based on their functional roles within a broader cognitive system. Specifically, constructs were categorized into four interrelated layers:
  • Foundational conditions (e.g., cognitive balance)
  • Core cognitive states (e.g., perceptual integrity)
  • Dynamic processes (e.g., meaning construction and meaning gap)
  • System-level mechanisms (e.g., leadership latency and cognitive governance)
This layered structuring enables the lexicon to function as an integrated cognitive architecture rather than a collection of independent definitions.
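The layered structuring described above can be made concrete as a small data model. The sketch below is purely illustrative and is not part of the paper's methodology: the class names, definitions, and helper function are the present edit's assumptions, chosen only to show how the four layers and the constructs placed within them (Sections 2.3–2.4) relate as an architecture rather than a flat glossary.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Layer(Enum):
    """The four functional layers of the lexicon (Section 2.4)."""
    FOUNDATIONAL_CONDITION = auto()
    CORE_COGNITIVE_STATE = auto()
    DYNAMIC_PROCESS = auto()
    SYSTEM_LEVEL_MECHANISM = auto()

@dataclass(frozen=True)
class Construct:
    """A single lexicon entry: a named construct assigned to one layer."""
    name: str
    layer: Layer
    gloss: str  # abbreviated paraphrase of the paper's definition

# The core constructs named in the paper, placed in their layers.
LEXICON = {
    c.name: c
    for c in [
        Construct("cognitive balance", Layer.FOUNDATIONAL_CONDITION,
                  "equilibrium among inputs, memory, system influence, and judgment"),
        Construct("perceptual integrity", Layer.CORE_COGNITIVE_STATE,
                  "alignment between interpretation, decision process, and outcome"),
        Construct("meaning construction", Layer.DYNAMIC_PROCESS,
                  "ongoing interpretation generating coherent narratives"),
        Construct("meaning gap", Layer.DYNAMIC_PROCESS,
                  "discrepancy between system outputs and interpretive framework"),
        Construct("leadership latency", Layer.SYSTEM_LEVEL_MECHANISM,
                  "delay or displacement of human interpretive influence"),
        Construct("cognitive governance", Layer.SYSTEM_LEVEL_MECHANISM,
                  "deliberate structuring of conditions enabling coherent cognition"),
    ]
}

def constructs_in(layer: Layer) -> list[str]:
    """List the construct names assigned to a given layer."""
    return [c.name for c in LEXICON.values() if c.layer is layer]
```

Encoding the lexicon this way makes the structural claim checkable: each construct occupies exactly one layer, and layer membership (not an alphabetical listing) is what organizes the system.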

2.5. Validation Approach

Given the conceptual nature of the study, validation was conducted through theoretical coherence, internal consistency, and alignment with established literature. Each construct was evaluated in relation to prior frameworks in sensemaking, collective cognition, and human–AI interaction to ensure conceptual compatibility and relevance.
In addition, internal validation was performed by examining the logical relationships among constructs, ensuring that the framework forms a coherent explanatory system. Particular attention was given to whether the proposed constructs collectively explain how meaning is constructed, disrupted, and restored within system-mediated environments.

2.6. Scope and Limitations of the Approach

The present methodology prioritizes conceptual clarity and theoretical integration over empirical testing. While this approach enables the development of a structured and coherent framework, it does not provide direct empirical validation of the proposed constructs.
Future research is required to operationalize the lexicon and empirically examine the relationships among constructs across different contexts, levels of system autonomy, and decision environments. The framework should therefore be understood as a foundational architecture intended to guide subsequent empirical investigation.

2.7. Data Availability

No primary data were collected for this study. All conceptual development is based on existing literature and theoretical synthesis.

3. Conceptual Framework

3.1. Overview of the Conceptual Lexicon

The conceptual development process resulted in a structured lexicon comprising a set of interrelated constructs that collectively define the cognitive architecture of conscious leadership in human–AI interaction. Rather than functioning as isolated definitions, these constructs form an integrated system in which each element contributes to the formation, disruption, and restoration of meaning within decision processes.
The lexicon is organized into four functional layers: (1) foundational conditions, (2) core cognitive states, (3) dynamic processes, and (4) system-level mechanisms. This layered structure enables a systematic understanding of how cognition operates in environments shaped by intelligent systems and provides a basis for analyzing how cognitive alignment is maintained or lost.

3.2. Foundational Conditions: Cognitive Balance

At the foundation of the framework lies cognitive balance, defined as the dynamic equilibrium among environmental inputs, memory-based interpretation, system influence, and human judgment. Cognitive balance functions as a prerequisite for coherent cognition, determining whether individuals are able to integrate multiple sources of information into a unified interpretive frame.
Disruptions in cognitive balance are associated with fragmentation of interpretation and increased reliance on system outputs without corresponding cognitive integration. In this sense, cognitive balance establishes the conditions under which higher-order cognitive states can emerge.

3.3. Core Cognitive State: Perceptual Integrity

Perceptual integrity represents the central cognitive state within the lexicon, capturing the degree to which individuals maintain alignment between interpretation, decision process, and outcome. Unlike constructs such as trust or autonomy, which reflect evaluations of external systems, perceptual integrity reflects the internal coherence of cognition.
Within the framework, perceptual integrity functions as a key indicator of cognitive alignment. It mediates the relationship between underlying conditions (e.g., cognitive balance) and experiential outcomes, determining whether decisions are not only executed but meaningfully integrated. A decision may be followed without achieving perceptual integrity.

3.4. Dynamic Processes: Meaning Construction and Meaning Gap

The lexicon identifies meaning construction as an ongoing cognitive process through which individuals interpret information and generate coherent narratives that guide action. This process is continuously shaped by the interaction between human cognition and system-generated outputs.
A complementary construct, the meaning gap, captures the discrepancy between system-generated outputs and the individual’s interpretive framework. The presence of a meaning gap indicates a breakdown in cognitive alignment, where decisions may be operationally accepted but not conceptually integrated.
Together, meaning construction and meaning gap define the dynamic processes through which cognitive coherence is maintained or disrupted over time.

3.5. System-Level Mechanisms: Leadership Latency and Cognitive Governance

At the system level, two constructs function as higher-order mechanisms regulating cognitive processes.
Leadership latency refers to the temporal or structural delay in human interpretive influence within system-driven environments. It captures situations in which leadership is not absent, but displaced—either occurring after decisions are shaped by systems or embedded indirectly within system design. This displacement alters the timing and location of interpretive authority.
Cognitive governance, in contrast, refers to the deliberate structuring of conditions that enable coherent cognition in system-mediated contexts. It encompasses the design of environments, interfaces, and decision processes that preserve interpretability, engagement, and cognitive balance.
Together, these mechanisms determine whether cognitive processes remain aligned or become progressively detached from human interpretation.

3.6. Integration of the Lexicon

The constructs form a coherent and interdependent system. Foundational conditions shape core cognitive states, which are enacted through dynamic processes and regulated by system-level mechanisms.
Specifically, cognitive balance enables perceptual integrity, while meaning construction sustains this integrity over time. The emergence of a meaning gap signals a disruption in this process, which may be amplified under conditions of leadership latency. Cognitive governance functions as a corrective mechanism, restructuring conditions to restore alignment.
This configuration positions the lexicon as a dynamic system rather than a static taxonomy, enabling analysis of how cognition evolves within human–AI interaction.
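The dynamic reading of the lexicon sketched in Section 3.6 can be illustrated with a deliberately simple toy model. Nothing below is derived from the paper: the update rule, parameter names, and numeric values are hypothetical assumptions introduced solely to show the qualitative claim that an unchecked meaning gap erodes alignment, and that cognitive governance restores it less effectively the later it intervenes (leadership latency).

```python
def simulate_alignment(steps: int, gap_rate: float,
                       latency: int, governance_gain: float) -> list[float]:
    """Toy dynamics of cognitive alignment in [0, 1].

    Each step, the meaning gap erodes alignment proportionally
    (gap_rate). Cognitive governance applies a corrective pull back
    toward full alignment (governance_gain), but only after `latency`
    steps have passed, modeling delayed interpretive influence.
    All parameters are illustrative, not empirical.
    """
    alignment = 1.0
    history = []
    for t in range(steps):
        alignment -= gap_rate * alignment              # meaning gap erodes coherence
        if t >= latency:                               # governance arrives after the delay
            alignment += governance_gain * (1.0 - alignment)
        alignment = min(max(alignment, 0.0), 1.0)      # keep within [0, 1]
        history.append(alignment)
    return history

# Two hypothetical scenarios: immediate vs. delayed governance.
early = simulate_alignment(steps=50, gap_rate=0.05, latency=0, governance_gain=0.08)
late = simulate_alignment(steps=50, gap_rate=0.05, latency=25, governance_gain=0.08)
```

Under these assumed parameters, the delayed-governance trajectory ends below the immediate-governance one, matching the framework's claim that latency amplifies disruption even when the corrective mechanism is identical.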

3.7. Summary of Conceptual Findings

Overall, the findings indicate that cognition in human–AI environments can be understood as a structured and dynamic system of interdependent constructs. The proposed lexicon provides a coherent architecture for examining how meaning is constructed, how it breaks down, and how it can be restored through intentional design and leadership.
This perspective shifts analysis from isolated variables to systemic cognitive alignment, offering a foundation for both theoretical development and empirical investigation.

4. Discussion

The purpose of this study is to address a fundamental limitation in current human–AI research: the absence of a structured conceptual language capable of explaining how individuals construct meaning within system-mediated environments. While prior work has generated important insights into trust, performance, and interaction dynamics, these constructs remain insufficient for explaining how cognition operates under conditions shaped by intelligent systems [4,5,6,12]. What remains underdeveloped is not an additional behavioral variable, but a framework capable of capturing how meaning itself is formed.
The conceptual lexicon developed in this study provides an alternative perspective by shifting the analytical focus from outcomes to interpretation. Rather than examining whether individuals trust or rely on AI, the framework focuses on how individuals interpret, integrate, and cognitively align with system-generated outputs. This shift is critical, as decision acceptance does not imply meaningful understanding—a limitation consistently reflected in prior research on algorithm aversion and appreciation [2,3,10].
By organizing constructs such as cognitive balance, perceptual integrity, and meaning construction into a unified system, the lexicon extends existing theories of sensemaking and collective cognition [13,21]. While prior research emphasizes interpretation in uncertain environments, it does not fully account for computational systems that actively structure information and decision processes. The present framework addresses this gap by explicitly incorporating system influence as a core component of cognitive architecture.
A key contribution of this work lies in distinguishing perceptual integrity from related constructs such as trust and autonomy. Trust reflects an evaluation of system reliability [1,4], while autonomy reflects perceived control; perceptual integrity, in contrast, captures alignment between interpretation, decision process, and outcome. This distinction helps explain why individuals may rely on AI systems without fully integrating their outputs into their cognitive frameworks, contributing to inconsistencies identified in prior studies [2,3,10].
The framework also introduces leadership latency as a novel construct describing the temporal and structural displacement of human interpretive influence in system-driven environments. This extends existing leadership research by highlighting that leadership is not only a matter of presence or authority, but also of timing within decision processes [14,15].
Similarly, cognitive governance reframes leadership as the structuring of conditions that enable coherent cognition. This perspective aligns with emerging views of sociotechnical system design and AI-enabled collaboration [5,12], while extending them by specifying the cognitive mechanisms through which such design shapes interpretation and decision coherence.
Taken together, these insights support the development of a broader theoretical perspective that may be understood as a theory of conscious leadership. Within this perspective, leadership is defined not as the control of action, but as the capacity to sustain cognitive coherence in environments shaped by intelligent systems.
Implications for Theory
The findings suggest that research on human–AI interaction should move beyond isolated constructs toward integrated frameworks that capture the interdependence of cognitive processes [13,21]. The conceptual lexicon contributes to this shift by providing a structured architecture for examining how meaning is constructed and sustained.
In addition, the distinction between decision outcomes and cognitive alignment calls for a reconsideration of how constructs such as trust are conceptualized and measured [1,4]. Incorporating perceptual integrity into existing models may provide a more comprehensive understanding of human responses to AI.
Implications for Practice
From a practical perspective, the framework has direct implications for the design of AI systems and organizational processes. Systems that prioritize efficiency without supporting interpretability may increase reliance while simultaneously undermining cognitive alignment [7,8].
Incorporating mechanisms that support explanation, reflection, and interpretive engagement may reduce the meaning gap and enhance decision sustainability. For leaders, the challenge extends beyond implementation to the deliberate design of environments that preserve cognitive coherence in human–AI interaction.
Limitations and Future Research
This study is conceptual in nature and does not provide empirical validation of the proposed constructs. Future research is required to operationalize and empirically test the lexicon across different contexts, particularly under varying levels of system autonomy and task complexity [6,12].

5. Conclusions

This paper addresses a fundamental gap in human–AI research by shifting the analytical focus from how decisions are made to how they are conceptually understood. While existing literature has advanced important insights into trust, performance, and interaction dynamics, it provides limited tools for explaining how individuals construct meaning within system-mediated environments [4,5,6,12]. In response, this study introduces a structured conceptual lexicon that organizes the cognitive language of conscious leadership.
By integrating constructs such as cognitive balance, perceptual integrity, meaning gap, leadership latency, and cognitive governance, the paper develops a coherent cognitive architecture for understanding how meaning is formed, disrupted, and restored in human–AI interaction. This contribution both complements and extends existing research on human–AI collaboration and decision-making [12,13].
Conceptually, the study reframes leadership as a cognitive–interpretive system rather than a purely behavioral construct. Methodologically, it introduces a structured lexicon that enables systematic analysis of cognitive alignment and establishes a foundation for future empirical research.
As artificial intelligence continues to shape decision environments, maintaining coherence between human interpretation and system output becomes increasingly critical. Prior research has emphasized performance and trust, yet the preservation of meaning remains comparatively underexplored [4,10]. This imbalance highlights a growing risk: systems may optimize decisions while simultaneously undermining their cognitive integration.
Future research should focus on operationalizing the proposed constructs and examining their relationships across diverse contexts, particularly under varying levels of system autonomy and complexity. Such efforts are essential for advancing a more comprehensive understanding of cognition in human–AI systems.
Ultimately, the challenge of artificial intelligence is not only to improve decisions, but to preserve meaning. Without conceptual alignment, even highly accurate systems may produce outcomes that are accepted in practice yet remain cognitively unintegrated [2,3].

Author Contributions

The author confirms sole responsibility for all aspects of the study, including conceptualization, methodology, analysis, writing (original draft preparation), and writing (review and editing).

Funding

This research received no external funding. The APC was funded by the author.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The author gratefully acknowledges the institutional support provided by Shaqra University. During the preparation of this manuscript, the author used ChatGPT (OpenAI) to assist with language editing and improving clarity of expression. The author reviewed and edited all AI-assisted outputs and assumes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Academy of Management Review 1995, 20, 709–734.
  2. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 2015, 144, 114–126.
  3. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes 2019, 151, 90–103.
  4. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals 2020, 14, 627–660.
  5. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review 2021, 46, 192–210.
  6. Jussupow, E.; Spohrer, K.; Heinzl, A.; Barrot, C. Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Information Systems Research 2021, 32, 713–735.
  7. Buçinca, Z.; Malaya, M.B.; Gajos, K.Z. To trust or to think: Cognitive forcing functions can reduce overreliance on AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 2021; 139.
  8. Bansal, G.; Wu, T.; Zhou, J.; Fok, R.; Nushi, B.; Kamar, E.; Weld, D.S.; Horvitz, E. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 2021; 81.
  9. Köbis, N.; Bonnefon, J.-F.; Rahwan, I. Bad machines corrupt good morals. Nature Human Behaviour 2021, 5, 679–685.
  10. Burton, J.W.; Stein, M.-K.; Jensen, T.B. A systematic review of algorithm aversion. Journal of Behavioral Decision Making 2020, 33, 220–239.
  11. Siau, K.; Wang, W. Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. Journal of Database Management 2020, 31, 74–87.
  12. Schmutz, J.B.; Outland, N.; Kerstan, S.; Georganta, E.; Ulfert, A.-S. AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology 2024, 58, 101837.
  13. Woolley, A.W.; Gupta, P. Understanding collective intelligence: Investigating the role of collective memory, attention, and reasoning processes. Perspectives on Psychological Science 2024, 19, 344–354.
  14. van Knippenberg, D.; Pearce, C.L.; van Ginkel, W.P. Shared leadership–vertical leadership dynamics in teams. Group & Organization Management 2025, 50, 44–67.
  15. De Vincenzo, F.; Curșeu, P.L.; Chirilă, M. Collective forms of leadership and team cognition in work teams: A systematic and critical review. Acta Psychologica 2025, 259, 105403.
  16. Abson, E.; Schofield, P.; Kennell, J. Making shared leadership work: The importance of trust in project-based organisations. International Journal of Project Management 2024, 42, 102567.
  17. Ling, T.C.; Choong, Y.O.; Ng, L.P.; Lau, T.C. Beyond fairness: Exploring organizational citizenship behavior through the lens of self-efficacy and trust in principals. Humanities and Social Sciences Communications 2025, 12, 288.
  18. El-Ashry, A.M.; Abdo, B.M.E.; Khedr, M.A.; El-Sayed, M.M.; Abdelhay, I.S.; Zeid, M.G.A. Mediating effect of psychological safety on the relationship between inclusive leadership and nurses' absenteeism. BMC Nursing 2025, 24, 826.
  19. Mohase, K.; Donald, F.; Israel, N. Inclusive leadership, psychological safety, and employee voice in remote and hybrid work employees. South African Journal of Psychology 2025, 55, 1–14.
  20. Wang, L.; Duan, X.; Wang, S.; Zhang, W. Generational diversity and team innovation: The roles of conflict and shared leadership. Frontiers in Psychology 2024, 15, 1501633.
  21. Maitlis, S.; Christianson, M. Sensemaking in organizations: Taking stock and moving forward. Academy of Management Annals 2014, 8, 57–125.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.