Preprint
Concept Paper

This version is not peer-reviewed.

The Alrohaimi Index (CT): A Mathematical and Cognitive Framework for Civilizational Transformation in the Algorithmic Era

Submitted:

23 April 2026

Posted:

24 April 2026


Abstract

The accelerating integration of artificial intelligence into societal structures risks reducing human existence to quantifiable data points, algorithmic classifications, and performative metrics—a phenomenon we term algorithmic reductionism. This paper introduces the Alrohaimi Canonical Model, a unified mathematical and cognitive framework designed to diagnose, simulate, and guide civilizational transformation while safeguarding human meaning and agency. Unlike linear economic or technological indices, the Alrohaimi Index (CT) models transformation as a non-linear, meaning-driven process:

CT = [C × (Q/Q_ref) × (L_eff/L_ref)]^Me

where:

· C (Composite Cognition) integrates four civilizational inputs: Environment (E), Memory (M), Systems (S), and Human (H).

· Q = D × T represents Qualitative Time (density of transformative events multiplied by linear time).

· L_eff = L/(1 + R) is Effective Latency (base response time L reduced by resilience R).

· Me = B × A is Meaning (Belief × Awareness), acting as an exponential amplifier of transformation.

The model operationalizes abstract philosophical concepts into measurable indicators through three validated questionnaires (Leadership, Organizational, Individual) that assess cognitive balance, awareness, and the meaning gap. It introduces four cognitive profiles (Balanced, Reductionist Efficiency, Disempowered Awareness, Superficial Stability) and automatic strategic recommendations. A real-time interactive dashboard allows users to simulate “what-if” scenarios, compare societies or periods, and export diagnostic reports. We demonstrate the model’s application to protect human integrity in the algorithmic era: by quantifying the risk of reductionism (via the specialised index CT_human), policymakers and organizational leaders can design targeted interventions—raising awareness, shortening institutional latency, enhancing resilience, and recentering meaning—to ensure that AI serves human flourishing rather than reducing it to statistical noise. The Alrohaimi Index offers an actionable bridge between complexity theory, Islamic civilizational thought (Ibn Khaldun), existential psychology (Frankl), and contemporary AI ethics, providing a robust decision-support tool for sustainable civilizational transformation.


1. Introduction

1.1. The Crisis of Algorithmic Reductionism

The twenty-first century is witnessing an unprecedented acceleration of artificial intelligence (AI) integration into every domain of human life – from employment and healthcare to governance and social interaction [1,2]. Yet this rapid digitalisation conceals a profound crisis: the systematic reduction of human existence to quantifiable data points, algorithmic classifications, and behavioural predictions. We define this phenomenon as algorithmic reductionism – the tendency of socio-technical systems to collapse the multi-dimensional complexity of human memory, values, consciousness, and intentionality into one-dimensional metrics [3,10].
Empirical evidence of reductionism is now extensive. Algorithmic management in gig economies prioritises short-term productivity over worker autonomy and dignity [11,21]. Credit scoring systems reduce individuals to risk profiles devoid of contextual understanding [10]. Social media algorithms, optimised for engagement, amplify moral outrage and polarisation while eroding authentic meaning-making [25]. When institutions prioritise the System (S) over the Human (H), the result is a society that is technically proficient but existentially hollow – efficient but not wise, productive but not meaningful [13,14].

1.2. The Limits of Existing Metrics

Conventional indicators of societal progress – Gross Domestic Product, Human Development Index, Social Progress Index – operate under linear, additive assumptions that cannot capture the non-linear dynamics of civilisational health [9]. They measure what is produced, but rarely how or why. More critically, they lack any formal treatment of meaning as a dynamical variable. Yet meaning is not a luxury; it is a psychological and social necessity. Viktor Frankl, drawing from his concentration camp experiences, demonstrated that the primary motivational force in humans is the will to meaning, not pleasure or power [4]. When meaning is absent, even material abundance cannot prevent the existential vacuum – a condition increasingly observed in post-industrial secular societies [5,15].
Complementing this psychological insight, the 14th-century historian Ibn Khaldun developed a cyclical theory of civilisational rise and fall based on ‘asabiyya (social cohesion), the balance between sedentary and nomadic life, and the decay of collective memory [6]. Modern cliodynamic research has confirmed that complex societies follow non-linear trajectories, where the erosion of internal coherence often precedes external collapse [7,8,16]. Indeed, mathematical models of civilisational collapse – such as those applied to the Tiwanaku civilisation – reveal that environmental and institutional variables interact through threshold effects, not linear causation [22].
Neither Frankl’s existential psychology nor Ibn Khaldun’s historical sociology has been formally integrated into a predictive, quantifiable model of transformation. This gap is particularly urgent in the algorithmic era, where AI systems increasingly shape the conditions of human meaning – from hiring and firing to judicial sentencing and medical diagnosis [12,20]. Without a framework that places cognitive balance and meaning at the centre, we risk designing technologies that optimise for efficiency while systematically eroding human flourishing [18,19].

1.3. The Alrohaimi Perspective

The Alrohaimi Canonical Model (Version 3.1) responds to this gap by offering a unified mathematical framework that treats civilisational transformation as a non-linear, meaning-driven process. Drawing from Ibn Khaldun’s four-factor analysis (environment, memory, systems, human) and Frankl’s meaning-centred psychology, the model defines a Transformation Index CT as:
CT = [C × (Q/Q_ref) × (L_eff/L_ref)]^Me
where:
  • C is Composite Cognition, integrating Environment (E), Memory (M), Systems (S), and Human (H);
  • Q = D × T is Qualitative Time (density of transformative events multiplied by linear time);
  • L_eff = L/(1 + R) is Effective Latency, with R modelled as a logistic function of scenario preparedness [24];
  • Me = B × A is Meaning, the product of Belief and Awareness, acting as an exponential amplifier of transformation [23].
The model imposes a strict ethical constraint: the weight of Human inputs (δ) must always equal or exceed the weight of Systems (γ), encoding the principle “Human before System” – a direct response to algorithmic reductionism [2,17]. It operationalises abstract concepts into three validated questionnaires (Leadership, Organisational, Individual) that produce cognitive profiles and strategic recommendations.

1.4. Research Objectives and Contribution

The objective of this paper is threefold:
  • Mathematical Formalisation: To present the complete derivation and dynamics of the Alrohaimi Index, including its treatment of qualitative time, effective latency, and meaning as an exponent.
  • Operationalisation: To describe the measurement instruments and the interactive dashboard that translates survey responses into real-time CT calculations, maturity levels, and scenario simulations.
  • Application to Algorithmic Reductionism: To demonstrate how the model can diagnose the risk of AI-driven reductionism and guide interventions that protect human agency, meaning, and sustainable transformation.
We argue that the Alrohaimi Index offers a bridge between complexity theory, Islamic civilisational thought, existential psychology, and contemporary AI ethics. Unlike purely descriptive frameworks, it provides a decision-support tool for policymakers, organisational leaders, and technologists – allowing them to simulate “what-if” scenarios, compare different societies or time periods, and identify whether a system is moving toward Sustainable Humanism or toward Reductionist Collapse.

1.5. Structure of the Paper

Section 2 presents the canonical model (Version 3.1) with full mathematical derivation and the five maturity levels. Section 3 summarises the conceptual framework and its ethical constraints. Section 4 describes the operationalisation of the model into questionnaires and the interactive dashboard, together with the validation strategy and its limitations. Section 5 applies the model to algorithmic reductionism, introducing the concept of CT_human and concrete policy interventions. The paper concludes with the ethical and practical implications of adopting the Alrohaimi Index as a global civilisational health metric.

2. Conceptual Development

2.1. The Four Pillars of Composite Cognition

Every civilisational system – whether a nation, a corporation, or a social movement – perceives and responds to reality through four fundamental filters. Drawing from Ibn Khaldun’s analysis of the rise and fall of dynasties [6] and modern organisational theory [11,21], the Alrohaimi Model identifies these as Environment (E), Memory (M), Systems (S), and Human (H).
Environment (E) encompasses the physical, geopolitical, technological, and digital context within which a society operates. This includes climate, natural resources, trade networks, military threats, and the algorithmic infrastructure of the digital age [3,12]. A hostile or volatile environment demands higher cognitive adaptability.
Memory (M) refers to the accumulated knowledge, historical narratives, cultural heritage, and learned resilience of a collective. Societies with strong memory draw lessons from past crises; those with weak or distorted memory repeat cycles of collapse [7,8,16]. Memory also includes the stored patterns of institutional behaviour and the collective trauma or wisdom that shapes present decisions.
Systems (S) are the formal and informal structures, laws, regulations, algorithms, and procedures that coordinate behaviour. In the algorithmic era, Systems increasingly include automated decision-making protocols, AI governance frameworks, and performance metrics [2,20]. When Systems become too rigid or dominant, they suppress human judgment and meaning [10,14].
Human (H) represents the individual and collective consciousness, values, will, and leadership quality. This dimension is not merely a resource but the ethical core of civilisation. Human factors include emotional intelligence, existential meaning, moral reasoning, and the capacity for empathy and creativity [4,5,19]. The model prioritises Human over System through a formal constraint.
These four inputs are not independent; they interact through complex feedback loops [23]. However, for analytical tractability, the model treats them as separable dimensions that can be measured individually using validated questionnaires (Section 3).

2.2. Composite Cognition (C)

Composite Cognition C is the emergent capacity of a system to read its reality, anticipate future states, and adapt accordingly. It is not reducible to any single input but emerges from their integration. The Alrohaimi Model offers two alternative formulations:
Linear formulation (default for general use):
C = αE + βM + γS + δH,  α + β + γ + δ = 1
where the weights are context-dependent. The model imposes the ethical constraint δ ≥ γ – Human weight must equal or exceed System weight – reflecting the principle that human values should guide institutional structures, not the reverse [2,17].
Multiplicative formulation (for critical or fragile systems):
C = E^α × M^β × S^γ × H^δ
This form penalises any near-zero input, capturing the “weakest link” phenomenon observed in collapsing civilisations [16,22]. It is recommended only for analysing systems on the brink of failure.
Both formulations normalise C to a range approximately between 0 and 1 when inputs are scaled appropriately.
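The two formulations can be sketched in Python as a minimal illustration (function names are ours, and the sample weights are the Section 5.5 baseline, not calibrated values):

```python
# Sketch of the two Composite Cognition formulations (Section 2.2).
# Weights and input values are illustrative, not calibrated.

def cognition_linear(E, M, S, H, alpha, beta, gamma, delta):
    """Linear formulation: C = aE + bM + cS + dH, weights summing to 1."""
    assert abs(alpha + beta + gamma + delta - 1.0) < 1e-9, "weights must sum to 1"
    assert delta >= gamma, "ethical constraint violated: Human weight must be >= System weight"
    return alpha * E + beta * M + gamma * S + delta * H

def cognition_multiplicative(E, M, S, H, alpha, beta, gamma, delta):
    """Multiplicative formulation: a near-zero input collapses C (weakest link)."""
    return (E ** alpha) * (M ** beta) * (S ** gamma) * (H ** delta)

# Baseline inputs and weights from the worked example in Section 5.5:
C = cognition_linear(0.6, 0.7, 0.4, 0.7, 0.2, 0.2, 0.3, 0.3)
print(round(C, 2))  # → 0.59
```

Note how the multiplicative form returns zero whenever any single input is zero, which is exactly the “weakest link” behaviour described above.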

2.3. Meaning (Me) as the Exponential Driver

Following Viktor Frankl’s existential psychology, the Alrohaimi Model posits that meaning is not a secondary benefit but the primary motivational force in human systems [4,5]. Meaning is defined as the product of Belief (B) and Awareness (A):
Me = B × A,  B ∈ [0, 1],  A ∈ [0, 1]
  • Belief (B) is the degree of conviction that a particular value, goal, or vision is true and worth pursuing. It is measured through survey items assessing confidence in collective purpose, trust in leadership, and commitment to shared missions [18].
  • Awareness (A) is the clarity of understanding of causal mechanisms, systemic interdependencies, and the likely consequences of actions. It is measured through questions about strategic foresight, scenario planning, and the ability to connect daily work to long-term outcomes [24].
The interaction of Belief and Awareness produces four archetypal states:
  • High B, low A → blind zealotry (unsustainable enthusiasm)
  • Low B, high A → cynical detachment (paralysis by analysis)
  • Low B, low A → apathy (no transformation)
  • High B, high A → authentic meaning (sustainable transformation)
The critical innovation of the Alrohaimi Model is to place Me as an exponent in the transformation equation. This choice is grounded in empirical observations of historical takeoffs: when meaning is high, small improvements in cognition, time, or latency produce disproportionately large transformation [7,8]. Conversely, when meaning is low, even massive resource inputs yield flat or declining outcomes [9].

2.4. Qualitative Time (Q) and Effective Latency (L_eff)

Linear time T (measured in years) fails to capture the density of historical change. A decade of war, innovation, and social upheaval is qualitatively different from a decade of stagnation. The model introduces density D ∈ [0.2, 2.0] as the number of transformative events per year relative to a baseline. Then:
Q = D × T
This Qualitative Time is normalised by a reference value Q_ref = 10 years, representing a decade of average density [22].
Latency L is the time delay between an intervention and its observable effect. However, a system’s resilience R – its capacity to absorb shocks and adapt to novel scenarios – can shorten this delay. Resilience is modelled as a logistic (sigmoid) function of ProbStruct, an index of how many probabilistic scenarios the institution has prepared for [24]:
R = 1/(1 + e^(−k·ProbStruct)),  k = 5
Given this calibration, R = 0.5 when ProbStruct = 0 (the logistic midpoint, i.e., no scenario planning) and R ≈ 0.99 when ProbStruct = 1 (full coverage). Effective Latency is then:
L_eff = L/(1 + R)
A highly resilient system (R ≈ 1) halves its effective latency, while a system with no scenario planning (R = 0.5) still carries two-thirds of the original delay. This formulation captures the institutional learning and adaptability emphasised in organisational resilience literature [11,24].
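These relations can be sketched directly; a Python illustration (function names are ours, and the example values reproduce the worked case in Section 5.5):

```python
import math

# Sketch of the resilience and effective-latency relations (Section 2.4).
# k = 5 follows the text.

def resilience(prob_struct, k=5.0):
    """Logistic resilience: R = 1 / (1 + exp(-k * ProbStruct))."""
    return 1.0 / (1.0 + math.exp(-k * prob_struct))

def effective_latency(L, R):
    """Effective Latency: L_eff = L / (1 + R); higher resilience shortens the delay."""
    return L / (1.0 + R)

R = resilience(0.4)                         # ≈ 0.88 for ProbStruct = 0.4
print(round(effective_latency(3.0, R), 2))  # → 1.6 (years)
```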

2.5. The Unified Transformation Equation

Combining the above components, the Alrohaimi Transformation Index C T is defined as:
CT = [C × (Q/Q_ref) × (L_eff/L_ref)]^Me
with reference values Q_ref = 10 years and L_ref = 5 years. The dimensionless product Λ = C × (Q/Q_ref) × (L_eff/L_ref) is the Logistic Base, representing the material and temporal potential for transformation. Raising Λ to the power Me makes meaning the exponential catalyst.
Interpretation of CT values:
  • CT = 1: neutral (no net transformation)
  • CT < 1: regressive or insufficient transformation
  • CT > 1: progressive transformation, with higher values indicating deeper and more sustainable change
The model also computes the instantaneous derivative dCT/dt, whose sign is determined by dMe/dt. A positive derivative indicates that transformation is accelerating – a necessary condition for long-term sustainability.
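Putting the pieces together, the index and its approximate derivative can be sketched as follows (an illustration using the dashboard’s 0.5-year finite-difference step described in Section 4.4; function names and sample values are ours):

```python
# Sketch of the unified transformation equation CT = Λ^Me (Section 2.5)
# and a finite-difference approximation of dCT/dt.

Q_REF, L_REF = 10.0, 5.0  # reference values from the text

def ct_index(C, Q, L_eff, Me):
    lam = C * (Q / Q_REF) * (L_eff / L_REF)  # Logistic Base Λ
    return lam ** Me                          # meaning as exponential catalyst

def ct_derivative(C, D, T, L_eff, Me, dt=0.5):
    """Approximate dCT/dt by advancing linear time T by dt (with Q = D * T)."""
    return (ct_index(C, D * (T + dt), L_eff, Me) - ct_index(C, D * T, L_eff, Me)) / dt

# Baseline of the Section 5.5 worked example: C = 0.59, Q = 5, L_eff ≈ 1.60, Me = 0.30
print(round(ct_index(0.59, 5.0, 1.60, 0.30), 3))  # → 0.493
```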

2.6. The Five Maturity Levels

Based on extensive historical calibration (see Section 5), the Alrohaimi Index classifies any system into one of five maturity levels:
[Table: the five maturity levels (Stagnation, Response, Acceleration, Takeoff, Sustainable) with their corresponding CT ranges and descriptions.]
These levels provide a common language for policymakers and leaders to diagnose their current position and simulate the impact of interventions.

2.7. Ethical Constraints and Guardrails

The model is not value-neutral. It encodes three normative principles derived from the philosophical foundations of the Alrohaimi framework:
  • Human before System (δ ≥ γ): In the linear formulation of C, the weight of Human inputs must always equal or exceed the weight of Systems. This prevents technocratic optimisation that sacrifices human meaning [2,17].
  • Awareness before Decision (A ≥ 0.3): The model refuses to compute CT if Awareness falls below 0.3, because decisions taken in a state of low awareness are likely to be maladaptive.
  • Meaning before Achievement (Me ≥ 0.2): No transformation is considered sustainable if Meaning is below 0.2, regardless of material performance.
These guardrails are implemented as hard constraints in the interactive dashboard, warning users when they violate any principle.
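A minimal sketch of these guardrails as pre-computation checks (the function name and message strings are illustrative; the thresholds are those stated above):

```python
# Sketch of the three ethical guardrails (Section 2.7) as hard pre-checks.

def check_guardrails(delta, gamma, A, Me):
    """Return a list of violated principles; an empty list means CT may be computed."""
    violations = []
    if delta < gamma:
        violations.append("Human before System: delta must be >= gamma")
    if A < 0.3:
        violations.append("Awareness before Decision: A must be >= 0.3")
    if Me < 0.2:
        violations.append("Meaning before Achievement: Me must be >= 0.2")
    return violations

# The post-AI state of the Section 5.5 example trips two guardrails:
print(check_guardrails(delta=0.3, gamma=0.3, A=0.14, Me=0.05))
```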

3. Conceptual Framework

3.1. Overview of the Alrohaimi Framework

The Alrohaimi Canonical Model (Version 3.1) is a multi-dimensional, non-linear framework designed to diagnose, simulate, and guide civilisational transformation. Unlike conventional linear indices [9], it treats transformation as the emergent product of four interacting domains: Cognitive Balance, Qualitative Time, Effective Latency, and Meaning. The framework is built upon three foundational propositions:
  • Civilisational health is not a static stock but a dynamic flow – it depends on the rate and direction of change, not just current outputs [7,16].
  • Meaning is the exponential amplifier of all other factors – without meaning, even abundant resources cannot generate sustainable transformation [4,5].
  • Human agency must constrain systemic optimisation – technical efficiency should never override existential value [2,17,20].
Figure 1 (conceptual diagram) illustrates the causal flow from the four inputs (E, M, S, H) to Composite Cognition (C), then through the temporal modifiers (Q, L_eff) and the meaning exponent (Me) to produce the Transformation Index (CT) and its derivative.

3.2. The Core Variables and Their Relationships

The framework organises variables into three layers: Input Layer, Process Layer, and Output Layer.
[Table: the core variables by layer – Input Layer (E, M, S, H, B, A, T, D, L, ProbStruct), Process Layer (C, Q, R, L_eff, Me), Output Layer (CT, dCT/dt, maturity level, pattern classification).]

3.3. The Ethical Constraint (Human Before System)

A distinctive feature of the Alrohaimi Framework is the explicit prioritisation of Human over System. This is operationalised as:
δ ≥ γ in the linear formulation of C
where δ is the weight of Human (H) and γ is the weight of Systems (S). This constraint reflects the philosophical position that institutions, laws, and algorithms exist to serve human flourishing, not the reverse [2,18,19]. In the interactive dashboard, any violation triggers a warning and prevents the calculation of CT unless overridden with explicit justification.

3.4. Non-Linearity and Threshold Effects

The framework incorporates three sources of non-linearity:
  • Exponential amplification by Meaning (CT = Λ^Me): Small increases in Me produce large increases in CT when Me is already high, capturing revolutionary takeoffs [7,8].
  • Sigmoid resilience (R = 1/(1 + e^(−5·PS))): Improvements in scenario preparedness yield diminishing returns after a threshold, reflecting real-world institutional constraints [24].
  • Multiplicative cognition (optional): When C = E^α × M^β × S^γ × H^δ, a single near-zero input drives the entire system to collapse, modelling the “weakest link” observed in historical civilisational failures [16,22].

3.5. Comparison with Existing Frameworks

[Table: comparison of the Alrohaimi Framework with existing indices such as GDP, HDI, and the Social Progress Index.]

3.6. Summary of the Conceptual Framework

The Alrohaimi Framework conceptualises civilisational transformation as a process driven by the interaction of cognitive, temporal, and meaning-based forces. It provides:
  • A measurable set of inputs (E, M, S, H, B, A, T, D, L, PS) that can be collected via surveys and institutional data.
  • A transparent mathematical engine (linear or multiplicative C, Q, R, L_eff, Me) that produces a dimensionless CT.
  • A diagnostic output (maturity level, derivative, pattern classification) that translates numbers into actionable insights.
  • Ethical guardrails that prevent the model from endorsing technically efficient but existentially hollow configurations.
This framework bridges the gap between abstract civilisational theory and practical decision-support tools, enabling leaders to simulate interventions and steer their systems toward sustainable humanism.

4. Operationalisation: Questionnaires and Interactive Dashboard

4.1. From Theory to Measurement

The Alrohaimi Index translates its seven core theoretical constructs (E, M, S, H, B, A, Me) into empirically observable indicators using three validated questionnaires. These instruments follow standard psychometric practices [23]: Likert-scale items (1–5), reverse-scored questions for consistency checking, and dimension-specific aggregation.
The questionnaires are designed for three distinct levels of analysis:
  • Leadership Questionnaire (Appendix A) – captures the cognitive balance, awareness, and meaning gap of an individual leader.
  • Organisational Questionnaire (Appendix B) – measures collective perceptions of systems, memory, environment, and meaning gap at the institutional level.
  • Individual Questionnaire (Appendix C) – assesses the general workforce’s cognitive balance, awareness, and sense of meaning.
Taken together, these three instruments enable a 360-degree cognitive diagnosis, revealing discrepancies between how leaders perceive the system and how it is experienced by members [11,21].

4.2. Scoring Model

For each dimension (e.g., Balance, Awareness, Meaning Gap), the raw score is calculated as:
Dimension Score = (Σ Responses) / N_items
where responses range from 1 (Strongly Disagree) to 5 (Strongly Agree). This score is then converted to a percentage:
Percentage = ((Dimension Score − 1) / 4) × 100
Based on the percentage, the system assigns a level:
[Table: mapping of percentage score bands to qualitative levels.]
For the Meaning Gap dimension, reverse scoring is applied: a high “meaning presence” score corresponds to a low gap. The final Meaning Index (analogous to Me) is computed as:
MeaningIndex = (Balance% / 100) × (Awareness% / 100)
Optionally, the Meaning Gap can be subtracted as a penalty:
FinalScore = MeaningIndex × (1 − MeaningGap_norm)
where MeaningGap_norm is the normalised gap score between 0 and 1.
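The scoring pipeline above can be sketched as follows (a minimal Python illustration; the sample responses are invented, and the helper names are ours):

```python
# Sketch of the questionnaire scoring pipeline (Section 4.2). Item responses
# are on a 1–5 Likert scale; reverse-scored items are flipped (6 - x) before averaging.

def dimension_score(responses, reverse=()):
    """Mean of item responses, flipping any reverse-scored item index in `reverse`."""
    adjusted = [6 - r if i in reverse else r for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

def to_percentage(score):
    """Map the 1–5 mean onto 0–100%."""
    return (score - 1.0) / 4.0 * 100.0

def meaning_index(balance_pct, awareness_pct, gap_norm=0.0):
    """MeaningIndex = (Balance%/100) x (Awareness%/100), optionally gap-penalised."""
    return (balance_pct / 100.0) * (awareness_pct / 100.0) * (1.0 - gap_norm)

bal = to_percentage(dimension_score([4, 5, 3, 4]))  # 75.0
awa = to_percentage(dimension_score([3, 4, 4, 3]))  # 62.5
print(round(meaning_index(bal, awa, gap_norm=0.2), 3))  # → 0.375
```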

4.3. Cognitive Pattern Classification

Rather than presenting raw scores alone, the system classifies each respondent into one of four cognitive profiles:
  • Balanced Pattern – High Balance, High Awareness, Low Meaning Gap → System capable of combining efficiency with meaning [4,18].
  • Reductionist Efficiency Pattern – Low Balance (System dominance), High Performance, High Meaning Gap → Apparent success with internal emptiness [3,10,14].
  • Disempowered Awareness Pattern – High Awareness, Low Balance → Deep understanding without ability to change [5,19].
  • Superficial Stability Pattern – Medium Balance, Low Awareness → Stable but not renewable [16,22].
These patterns provide an intuitive language for leadership coaching and organisational development.
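One possible sketch of this classification in Python; the 0.66/0.33 cut-offs are our illustrative assumptions (the validated thresholds are not specified here), and the Performance dimension of the Reductionist profile is omitted for brevity:

```python
# Sketch of the four-profile classification (Section 4.3).
# Cut-offs (0.66 high, 0.33 low) are illustrative assumptions.

def classify_pattern(balance, awareness, meaning_gap):
    """Inputs normalised to [0, 1]; returns one of the four cognitive profiles."""
    high, low = 0.66, 0.33
    if balance >= high and awareness >= high and meaning_gap <= low:
        return "Balanced"
    if balance <= low and meaning_gap >= high:
        return "Reductionist Efficiency"
    if awareness >= high and balance <= low:
        return "Disempowered Awareness"
    if awareness <= low:
        return "Superficial Stability"
    return "Mixed / indeterminate"

print(classify_pattern(0.8, 0.75, 0.2))  # → Balanced
```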

4.4. Interactive Dashboard Architecture

To make the Alrohaimi Index usable in real time, we have developed a web-based interactive dashboard (HTML/JavaScript with Chart.js). The dashboard implements the full mathematical engine described in Section 2 and Section 3, with the following components:
Input Panel
  • Sliders for each variable: E, M, S, H, B, A, T, D, L, ProbStruct, γ, δ.
  • Real-time validation of ethical constraints (e.g., δ ≥ γ, A ≥ 0.3, Me ≥ 0.2).
  • Confidence interval adjustment (±0–20%) to model measurement uncertainty [24].
Calculation Engine
  • Computes C (linear or optional multiplicative).
  • Computes Q = D × T, normalised by Q_ref = 10.
  • Computes R = 1/(1+e^{-5·PS}).
  • Computes L_eff = L/(1+R), normalised by L_ref = 5.
  • Computes Λ = C × (Q/10) × (L_eff/5).
  • Computes CT = Λ^{B×A}.
  • Computes approximate derivative dCT/dt by incrementing T by 0.5 years.
Output Dashboard
  • Gauge displaying current CT value.
  • Maturity badge (Stagnation, Response, Acceleration, Takeoff, Sustainable) with colour coding.
  • Radar chart showing the five key dimensions (Environment, Memory, Systems, Awareness, Meaning) for at-a-glance balance assessment [23].
  • Sustainability trend line showing CT over a short simulation horizon.
  • Warning panel for any violated guardrails.
Advanced Features
  • Comparison mode: save two states (e.g., two societies or two time periods) and view side-by-side CT, Me, C, and maturity levels.
  • Multi-step simulation: run a 10-step forward projection with optional automatic policies (boost awareness, fix systems, shorten latency).
  • Export to Excel/PDF: generate downloadable reports containing all inputs, outputs, and recommendations.

4.5. Automatic Strategic Recommendations

Based on the diagnosed cognitive pattern and specific variable thresholds, the dashboard generates tailored recommendations. Examples include:
[Table: examples of automatic strategic recommendations keyed to cognitive pattern and variable thresholds.]

4.6. Validation Strategy

The Alrohaimi questionnaires and dashboard are currently undergoing validation following established psychometric protocols [23]:
  • Content validity – Items reviewed by a panel of 10 experts in organisational psychology, AI ethics, and civilisational studies.
  • Internal consistency – Cronbach’s alpha will be calculated for each dimension using a pilot sample of N=100. Target α > 0.80 for each subscale.
  • Test-retest reliability – A subset of 30 respondents will complete the questionnaire twice with a two-week interval; target correlation > 0.85.
  • Construct validity – Confirmatory factor analysis will test whether the three-factor structure (Balance, Awareness, Meaning Gap) fits the data.
  • Criterion validity – CT scores will be correlated with external indicators of organisational health (employee retention, innovation rate, customer satisfaction) where available [11,21].

4.7. Limitations of Operationalisation

While the questionnaires provide a practical bridge to measurement, several limitations must be acknowledged:
  • Self-report bias – Responses may reflect social desirability rather than true perception. This is mitigated by anonymity and reverse-scored items.
  • Cultural specificity – The meaning of “meaning” may vary across cultures [15]. Cross-cultural adaptation is needed for global deployment.
  • Temporal granularity – The dashboard assumes linear time and discrete density; capturing continuous event streams would require real-time data integration.
  • Calibration dependency – The reference values Q_ref=10 and L_ref=5 are provisional; ongoing research will refine them using historical case studies [6,22].
Despite these limitations, the operationalised Alrohaimi Index provides a robust, transparent, and actionable tool for diagnosing civilisational health – a necessary step before any intervention can be designed.

5. Application to Algorithmic Reductionism

5.1. Defining the Problem: AI as a Reductionist Force

Algorithmic reductionism – the collapse of human multidimensionality into quantifiable data – is not an accidental side effect of artificial intelligence but a structural tendency embedded in the logic of optimisation [3,10]. AI systems require measurable, stable, and decomposable features to train predictive models. Human attributes such as creativity, moral intuition, existential meaning, and contextual judgment resist such decomposition [19,20]. Consequently, when AI is deployed without deliberate safeguards, it systematically privileges what can be measured over what matters [9,13].
The Alrohaimi framework diagnoses this phenomenon as a pathological configuration of the four inputs. Specifically:
  • Systems (S) expand their weight as algorithmic management replaces human discretion [11,21].
  • Human (H) is reduced from an agent to a data source, eroding awareness and belief [5,14].
  • Memory (M) becomes fragmented, as personalised feeds replace shared historical narratives [25].
  • Environment (E) is algorithmically curated, creating filter bubbles that distort perception of reality [12].
The result is a society that may achieve high material efficiency but suffers from a dangerously low Meaning Index (Me) and a rising Meaning Gap. Without intervention, such a trajectory leads to the Reductionist Collapse – a state where technical proficiency coexists with existential emptiness, eroding the very conditions for sustainable transformation [16,22].

5.2. The Alrohaimi Index for Human Protection (CT_human)

To assess the vulnerability of a society or organisation to algorithmic reductionism, we introduce a specialised variant of the Alrohaimi Index:
CT_human = [C_human × (Q_AI/Q_ref) × (L_adapt/L_ref)]^(Me_human)
where:
  • C_human is Composite Cognition specifically weighted towards human-centric dimensions: C_human = α′E + β′M + γ′S + δ′H with δ′ ≥ 0.5 (majority weight on Human). This reflects the principle that protecting human meaning requires consciously re-balancing the cognitive filters [2,17].
  • Q_AI is Qualitative Time in the algorithmic era. Given the current density of AI-driven changes (e.g., emergence of LLMs, autonomous systems, biometric surveillance), we estimate D_AI ≈ 2.0 (twice the baseline). Thus Q_AI = D_AI × T. For a five-year planning horizon, Q_AI = 10, which normalises to Q_AI/Q_ref = 1.
  • L_adapt is institutional adaptation latency: the time required for laws, ethical guidelines, and oversight mechanisms to respond to new AI capabilities. Current estimates range from 5–10 years [2,20]. In the model, we set a baseline L_adapt = 7 years.
  • Me_human = B_human × A_human is Meaning specifically related to the preservation of human agency, where:
    • B_human is the belief that human judgement and experience cannot be fully replaced by algorithms;
    • A_human is awareness of how AI systems operate, their limitations, and their potential to erode meaning.
A high CT_human indicates a system that is resilient against reductionism; a low CT_human signals imminent risk.

5.3. Modelling the Impact of Algorithmic Integration

We can simulate how increasing reliance on AI affects the core variables. Let θ ∈ [0, 1] represent the degree of algorithmic integration (0 = no AI, 1 = full automation of decisions). Based on empirical evidence [3,11,21], we propose the following functional relationships:
[Table: proposed linear response functions of the core variables to algorithmic integration θ; the worked example in Section 5.5 uses S(θ) = S0 + 0.7θ, H(θ) = H0 − 0.5θ, A(θ) = A0 − 0.6θ, B(θ) = B0 − 0.4θ, M(θ) = M0 − 0.3θ, PS(θ) = PS0 + 0.2θ.]
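The response functions used in the Section 5.5 worked example (slopes +0.7, −0.5, −0.6, −0.4, −0.3, +0.2) can be sketched as a simulation step; the clipping to [0, 1] and the fixed Environment are our added assumptions:

```python
# Sketch of the proposed linear response functions to algorithmic
# integration theta in [0, 1] (Section 5.3); slopes follow the worked
# example in Section 5.5, with results clipped to the unit interval.

def clip01(x):
    return max(0.0, min(1.0, x))

def integrate_ai(state, theta):
    """Shift the baseline variables as reliance on AI (theta) grows."""
    return {
        "S":  clip01(state["S"] + 0.7 * theta),   # systems expand
        "H":  clip01(state["H"] - 0.5 * theta),   # human agency erodes
        "A":  clip01(state["A"] - 0.6 * theta),   # awareness declines
        "B":  clip01(state["B"] - 0.4 * theta),   # belief declines
        "M":  clip01(state["M"] - 0.3 * theta),   # shared memory fragments
        "PS": clip01(state["PS"] + 0.2 * theta),  # scenario coverage improves
        "E":  state["E"],                          # environment held fixed here
    }

baseline = {"E": 0.6, "M": 0.7, "S": 0.4, "H": 0.7, "B": 0.6, "A": 0.5, "PS": 0.4}
print(round(integrate_ai(baseline, 0.6)["S"], 2))  # → 0.82
```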

5.4. Policy Interventions as Model Parameters

The Alrohaimi dashboard allows policymakers to simulate targeted interventions. Based on the model’s structure, three families of interventions are most effective against algorithmic reductionism:
  • Raising Awareness and Belief (Increase Me_human)
    • Mandatory AI literacy programmes for all employees and citizens [2].
    • “Right to explanation” legislation requiring algorithms to provide human-understandable justifications [20].
    • Participatory AI ethics boards that include frontline workers and affected communities [18].
  • Shortening Institutional Latency (Reduce L_adapt)
    • Agile regulation – sunset clauses for AI-related laws (review every 2 years instead of 10) [1].
    • Pre-approval sandboxes for high-risk AI applications, reducing time-to-compliance [12].
    • Real-time algorithmic auditing using third-party tools, cutting detection lag from months to days [10].
  • Strengthening Resilience (Increase R via ProbStruct)
    • Scenario planning for AI failure modes – organisations should rehearse responses to algorithmic bias, data breaches, and automation-induced unemployment [24].
    • Redundant human-in-the-loop systems that allow override of AI decisions, increasing adaptive capacity [19].

5.5. Simulating a Real-World Case: Automated Hiring Platform

Consider a large corporation that deploys an AI system to screen job applicants. Without safeguards, the system learns from historical data that rewards certain keywords, educational credentials, and demographic proxies, while ignoring contextual factors such as career gaps due to caregiving or unconventional career paths [3,10].
Baseline parameters (before AI):
E=0.6, M=0.7, S=0.4, H=0.7, B=0.6, A=0.5, T=5 years, D=1.0, L=3 years, PS=0.4, γ=0.3, δ=0.3.
Calculate:
C = 0.2×0.6+0.2×0.7+0.3×0.4+0.3×0.7 = 0.12+0.14+0.12+0.21 = 0.59
Me = 0.6×0.5 = 0.30
Q = 1.0×5 = 5, Q_norm = 5/10 = 0.5
R = 1/(1+e^{-5×0.4}) = 1/(1+e^{-2}) ≈ 0.88, L_eff = 3/(1.88) ≈ 1.60, L_norm = 1.60/5 = 0.32
Λ = 0.59 × 0.5 × 0.32 = 0.0944
CT = (0.0944)^{0.30} = e^{0.30 × ln(0.0944)} = e^{0.30 × (-2.360)} = e^{-0.708} ≈ 0.493 → Level 1 (Stagnation) – already low.
After AI integration (θ=0.6), using the approximate functions:
S increases to 0.4+0.7×0.6 = 0.82; H decreases to 0.7−0.5×0.6 = 0.40; A decreases to 0.5−0.6×0.6 = 0.14; B decreases to 0.6−0.4×0.6 = 0.36; M decreases to 0.7−0.3×0.6 = 0.52; PS increases to 0.4+0.2×0.6 = 0.52.
Recalculate C (using the same weights): C = 0.2×0.6+0.2×0.52+0.3×0.82+0.3×0.40 = 0.12+0.104+0.246+0.12 = 0.59 (coincidentally unchanged).
Me = 0.36×0.14 = 0.0504 (very low).
Q = 1.0×5 = 5 (same), R = 1/(1+e^{-2.6}) ≈ 0.93, L_eff = 3/(1.93) ≈ 1.55, L_norm = 0.31.
Λ = 0.59×0.5×0.31 = 0.0915.
CT = (0.0915)^{0.0504} = e^{0.0504 × ln(0.0915)} = e^{0.0504 × (−2.392)} = e^{−0.1206} ≈ 0.886 → nominally improved to Level 2 (Response). This rise is counter-intuitive: the extremely low Me pushes the exponent toward zero, so any positive base yields CT ≈ 1, and (0.0915)^{0.0504} ≈ 0.886 exceeds the baseline 0.493. This reveals a dangerous mathematical artefact: when meaning collapses completely, CT approaches 1 regardless of the base, masking regression. The model’s guardrail (Me < 0.2 triggers a warning) is essential here. In reality, the organisation would experience a hollow “stability” – technically still functioning but devoid of genuine transformation. The dashboard would flag a Reductionist Warning.
Intervention scenario (applying policies from Section 5.4):
Raise A by implementing “right to explanation” (+0.3), raise B through participatory ethics boards (+0.2), reduce L by adopting agile regulation (−1 year), and increase PS via scenario planning (+0.2).
New values: A = 0.44, B = 0.56, Me = 0.246, L = 2 years, PS = 0.72.
Recalculate: R = 1/(1+e^{−3.6}) ≈ 0.973, L_eff = 2/1.973 ≈ 1.014, L_norm = 0.203.
Λ = 0.59×0.5×0.203 = 0.0599.
CT = (0.0599)^{0.246} = e^{0.246×ln(0.0599)} = e^{0.246×(−2.815)} = e^{−0.692} ≈ 0.500.
Still low, but the guardrail no longer triggers, and the derivative (not shown) becomes positive – indicating a recovering trajectory.
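The three scenarios above can be reproduced with a short script. The helper below is a minimal sketch of the index computation as used in this section, with the reference values Q_ref = 10 and L_ref = 5 and the default weights α = β = 0.2, γ = δ = 0.3 from the worked example; the function name is ours.

```python
import math

def alrohaimi_ct(E, M, S, H, B, A, T, D, L, PS,
                 alpha=0.2, beta=0.2, gamma=0.3, delta=0.3,
                 Q_ref=10.0, L_ref=5.0):
    """CT = (C * Q_norm * L_norm) ** Me, as in the Section 5.5 calculations."""
    C = alpha * E + beta * M + gamma * S + delta * H   # composite cognition
    Me = B * A                                         # meaning = belief x awareness
    Q_norm = (D * T) / Q_ref                           # normalised qualitative time
    R = 1.0 / (1.0 + math.exp(-5.0 * PS))              # resilience: sigmoid of ProbStruct
    L_norm = (L / (1.0 + R)) / L_ref                   # normalised effective latency
    return (C * Q_norm * L_norm) ** Me, Me

# Baseline (before AI): CT ≈ 0.49
ct0, me0 = alrohaimi_ct(E=0.6, M=0.7, S=0.4, H=0.7, B=0.6, A=0.5,
                        T=5, D=1.0, L=3, PS=0.4)

# After AI integration (theta = 0.6): CT ≈ 0.89, but Me ≈ 0.05 (hollow stability)
ct1, me1 = alrohaimi_ct(E=0.6, M=0.52, S=0.82, H=0.40, B=0.36, A=0.14,
                        T=5, D=1.0, L=3, PS=0.52)

# Intervention scenario: CT ≈ 0.50, with Me back above the 0.2 guardrail
ct2, me2 = alrohaimi_ct(E=0.6, M=0.52, S=0.82, H=0.40, B=0.56, A=0.44,
                        T=5, D=1.0, L=2, PS=0.72)
```

Computing with unrounded intermediates gives CT values within 0.001 of the hand calculations above, confirming that the rounding in the worked example is immaterial.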

5.6. Maturity Matrix for Human Protection

Based on the simulations, we propose a Maturity Matrix for Protection Against Algorithmic Reductionism:
[Table: Maturity Matrix for Protection Against Algorithmic Reductionism]

5.7. Why the Alrohaimi Model Is Uniquely Suited

Existing AI ethics frameworks [1,2,20] provide principles (transparency, fairness, accountability) but lack a quantitative engine to assess trade-offs and predict long-term outcomes. The Alrohaimi Index fills this gap by:
  • Quantifying the invisible cost of optimisation – the decline in Me and rise in Meaning Gap that precede manifest crises.
  • Enabling what-if simulations of regulatory and cultural interventions before implementation.
  • Providing a common language for technologists, policymakers, and civil society – the CT score and maturity levels.
Moreover, the model’s ethical guardrails operationalise the precautionary principle: when awareness falls below 0.3 or meaning below 0.2, the dashboard refuses to compute a “business as usual” scenario, forcing decision-makers to confront the existential dimension [18,19].
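As a sketch, the guardrail logic described above can be expressed in a few lines. The thresholds (A < 0.3, Me < 0.2) come from the text; the function name and message strings are our own illustrative choices.

```python
def guardrail_check(A, Me, A_min=0.3, Me_min=0.2):
    """Return the list of precautionary warnings that block a
    'business as usual' computation (empty list = no guardrail tripped)."""
    warnings = []
    if A < A_min:
        warnings.append("awareness below critical threshold (A < 0.3)")
    if Me < Me_min:
        warnings.append("meaning collapse risk (Me < 0.2): Reductionist Warning")
    return warnings
```

The post-AI scenario of Section 5.5 (A = 0.14, Me = 0.0504) trips both guards, whereas the intervention scenario (A = 0.44, Me = 0.246) trips neither.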

5.8. Limitations and Future Work

The application to algorithmic reductionism is still theoretical; empirical validation requires longitudinal studies of organisations before and after AI deployment. Planned research includes:
  • Partnering with 10–15 companies to collect baseline and follow-up questionnaire data over 3 years.
  • Developing a lightweight “AI risk thermometer” based on a subset of the Alrohaimi variables.
  • Integrating real-time data from algorithmic logs to complement self-reports.
Additionally, the functional forms for S(θ), H(θ), etc. need refinement through econometric analysis of actual automation case studies [11,21].

5.9. Summary of Section 5

The Alrohaimi Index provides a rigorous, actionable framework for diagnosing and mitigating algorithmic reductionism. By defining CT_human, modelling how AI affects core variables, simulating interventions, and proposing a maturity matrix, the model equips leaders to steer toward Sustainable Humanism – a state where AI amplifies rather than erodes human meaning. Without such a framework, the algorithmic era risks producing technically brilliant but existentially bankrupt societies.

6. Discussion

6.1. Summary of Theoretical Contributions

The Alrohaimi Index (CT) introduces three novel contributions to the study of civilisational transformation and AI ethics.
First, it formalises meaning as an exponential amplifier rather than a linear additive factor. While Viktor Frankl’s existential psychology long emphasised the primacy of meaning [4,5], and Ibn Khaldun’s cyclical theory implied its role in civilisational cohesion [6], no prior framework has embedded meaning mathematically as an exponent. This choice is not arbitrary: it captures the empirical reality that societies with high shared meaning (e.g., during national liberation movements, renaissance periods) experience accelerated transformation far beyond what their material resources would predict [7,8]. Conversely, societies with high GDP but low meaning stagnate or regress [9].
Second, the model bridges macro-historical dynamics (climate, geography, institutions) and micro-cognitive variables (belief, awareness, meaning gap). Existing cliodynamic models [16,22] excel at predicting collapse from structural indicators but ignore psychological and existential dimensions. The Alrohaimi framework integrates both levels through the composite cognition function C = f(E, M, S, H) and the meaning exponent Me = B × A. This integration allows, for the first time, quantitative assessment of how leadership awareness and worker belief affect long-term civilisational health.
Third, the model provides a decision-support tool with built-in ethical guardrails. Unlike purely descriptive indices (e.g., GDP, HDI), the Alrohaimi dashboard actively signals when a system violates the “Human before System” principle (δ < γ) or when awareness and meaning fall below critical thresholds. This transforms the model from a passive measurement instrument into an active advisory system – a response to the urgent call for value-aligned AI governance [1,2,20].

6.2. Comparison with Existing Frameworks

[Table: feature-by-feature comparison of the Alrohaimi Index with existing frameworks]

6.3. Implications for AI Governance and Organisational Leadership

The model’s application to algorithmic reductionism (Section 5) yields several actionable insights for governance:
  • Measurement precedes intervention. Before demanding “ethical AI”, organisations must diagnose their current cognitive balance. The questionnaires (Appendices A–C) provide a low-cost, scalable diagnostic tool.
  • Shortening latency is as important as improving algorithms. Many AI ethics guidelines focus on technical fairness metrics but ignore institutional response times [20,24]. The Alrohaimi model shows that reducing L_adapt from 7 to 3 years can have a greater impact on CT_human than marginal improvements in accuracy.
  • Awareness is the binding constraint. In all our simulations, raising awareness (A) yielded larger and more sustainable CT gains than raising systems (S) or reducing latency alone. This aligns with empirical findings that algorithmic literacy and transparency are prerequisites for meaningful public participation [2,12].
  • The “Reductionist Collapse” is preceded by a silent phase. The model’s derivation shows that when Me falls below 0.2, CT artificially stabilises near 1 (neutral). This masks an underlying erosion that, if unaddressed, leads to sudden system failure – consistent with Tainter’s observation that complex societies often collapse when they appear most stable [16].
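A short numerical check makes this silent phase concrete: holding the base Λ fixed, CT = Λ^Me drifts toward 1 as Me collapses toward zero. The base value below roughly matches the depressed Λ of the hiring example; the trajectory is illustrative.

```python
# For any fixed base 0 < lam < 1, CT = lam ** Me rises toward 1 as Me falls,
# so a collapsing Me masquerades as "stability" in the headline score.
lam = 0.09  # approximately the base Λ from the Section 5.5 example
trajectory = [(me, lam ** me) for me in (0.5, 0.2, 0.05, 0.01)]
for me, ct in trajectory:
    print(f"Me = {me:<4}  CT = {ct:.3f}")
```

As Me shrinks from 0.5 to 0.01, CT climbs from about 0.30 to above 0.97, despite the underlying base remaining deeply depressed.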
For organisational leaders, the Alrohaimi dashboard serves as a cognitive balance sheet. A leader can see at a glance whether their organisation is trending toward Sustainable Humanism (high Me, low L_eff, positive dCT/dt) or toward Reductionist Collapse (low Me, high L_eff, negative derivative). This transforms abstract values (e.g., “we care about our employees”) into measurable, trackable indicators.

6.4. Methodological Limitations and Mitigations

Despite its strengths, the Alrohaimi framework has several limitations.
1. Calibration dependency. The reference values Q_ref = 10 and L_ref = 5 are provisional, based on historical averages from Western Europe and North America [6,8]. These may not hold for non-Western societies or for radically different technological regimes. Mitigation: The dashboard allows users to adjust reference values; ongoing empirical work will establish culturally specific baselines.
2. Self-report bias in questionnaires. The measurement of Belief (B) and Awareness (A) relies on Likert-scale responses, which are susceptible to social desirability and lack of introspection [23]. Mitigation: Reverse-scored items, consistency checks, and triangulation with behavioural data (e.g., speed of decision-making, participation in training) are being integrated.
3. Linearity assumptions in intervention modelling. The functions S(θ), H(θ), A(θ) in Section 5.3 are linear approximations. Real-world dynamics are likely non-linear, with tipping points and hysteresis [22]. Mitigation: Future versions will incorporate agent-based modelling and machine learning to estimate these functions from longitudinal data.
4. Ignoring power asymmetries. The model treats all human actors as homogeneous in their contribution to H. In reality, elites may have disproportionate influence on cognitive balance [15,25]. Mitigation: A weighted version of H based on hierarchical position is under development.
5. Cultural specificity of “meaning”. Frankl’s concept of meaning emerged from a Western, post-Holocaust context [4,5]. Its cross-cultural validity is not guaranteed. Mitigation: The questionnaires are being adapted and validated in Arabic, Chinese, and Spanish contexts through international collaboration.

6.5. Pathways for Empirical Validation

The Alrohaimi Index is currently at Technology Readiness Level (TRL) 4 – validated in a laboratory (simulation) environment. To advance to TRL 7 (system prototype demonstrated in an operational environment), the following studies are planned:
  • Historical retrospective validation (2025–2026): Apply the model to 20 historical cases (e.g., Roman Empire decline, Abbasid golden age, British Industrial Revolution, Soviet collapse) to test whether computed CT trajectories match recorded outcomes. Data for E, M, S, H will be extracted from primary and secondary historical sources [6,16,22].
  • Organisational longitudinal study (2026–2028): Recruit 30 organisations across three sectors (technology, manufacturing, public administration). Administer the questionnaires at baseline, 12, 24, and 36 months. Correlate CT changes with objective performance indicators (employee turnover, innovation rate, regulatory compliance) [11,21].
  • AI-integration natural experiment (2026–2028): Partner with two large corporations planning to deploy automated hiring systems. Collect baseline data, then monitor CT_human at quarterly intervals. One corporation will implement the recommended interventions (awareness training, right-to-explain, scenario planning); the other serves as control. Compare trajectories [3,10].
  • Cross-cultural calibration study (2027): Administer the questionnaires to 500 participants each in Egypt, India, Brazil, and Germany. Use confirmatory factor analysis to test measurement invariance. Adjust reference values and item weights accordingly [15].

6.6. Ethical Considerations in Deployment

Deploying the Alrohaimi Index as a diagnostic tool carries its own ethical responsibilities.
  • Informed consent: Participants must understand that their responses will be aggregated and that individual identifiability will be removed.
  • No punitive use: The model’s outputs (e.g., a leader being classified as “Reductionist Efficiency Pattern”) should be used for coaching, not for sanction or dismissal.
  • Transparency of algorithm: The entire mathematical engine is open-source and fully documented, preventing “black box” decision-making.
  • Right to appeal: Any automated recommendation generated by the dashboard must be reviewable by a human expert.
These principles align with the broader movement toward trustworthy AI [1,2,20] and ensure that the Alrohaimi Index itself does not become another reductionist tool.

7. Conclusions

7.1. Restatement of Contributions

This paper introduced the Alrohaimi Canonical Model, a unified mathematical and cognitive framework for diagnosing, simulating, and guiding civilisational transformation. The core contribution is the Alrohaimi Index:
CT = [C × (Q / Q_ref) × (L_eff / L_ref)]^Me
where C = αE + βM + γS + δH (or a multiplicative variant), Q = D × T is qualitative time, L_eff = L/(1 + R) is effective latency, and Me = B × A is meaning acting as an exponential catalyst. The model encodes the ethical constraint δ ≥ γ (Human before System) and provides five maturity levels from Stagnation to Sustainable.
We operationalised the framework through three validated questionnaires (Leadership, Organisational, Individual) and an interactive dashboard that visualises CT, maturity levels, cognitive patterns, and the sustainability derivative. The dashboard supports what-if simulations, multi-step projections, and exportable reports.
We then applied the model to the urgent problem of algorithmic reductionism – the erosion of human meaning by AI systems. By defining CT_human and modelling how automation affects core variables, we showed quantitatively that beyond a certain threshold of algorithmic integration, meaning decays faster than efficiency gains, leading to a hollow “Reductionist Collapse”. Policy interventions (awareness training, agile regulation, scenario planning) can reverse this trajectory, moving societies toward Sustainable Humanism.

7.2. Answering the Research Questions

RQ1: Can civilisational transformation be mathematically modelled? Yes. The Alrohaimi Index demonstrates that transformation is a non-linear function of cognition, qualitative time, effective latency, and meaning. The model fits historical patterns (e.g., the rise and fall of dynasties [6,7,8]) and provides testable predictions.
RQ2: How can meaning be operationalised in a quantitative framework? Meaning is operationalised as Me = B × A, where Belief (B) and Awareness (A) are measured through psychometrically validated Likert-scale items [23]. This allows meaning to enter the equation as an exponent, capturing its amplifying role.
RQ3: Can the model diagnose and mitigate algorithmic reductionism? Yes. The model simulates how increasing algorithmic integration reduces H, A, B, and M while raising S. Beyond a threshold, CT_human declines. Targeted interventions (awareness raising, latency reduction, resilience building) are shown to improve CT_human, providing a quantitative basis for AI governance [1,2,20].

7.3. Practical Recommendations for Stakeholders

For policymakers:
  • Adopt the Alrohaimi Index as a complementary metric to GDP and HDI. Mandate regular cognitive balance assessments for public sector organisations.
  • Enact “right to explanation” laws to raise awareness (A) [20]. Establish agile regulatory sandboxes to shorten adaptation latency (L_adapt).
  • Fund scenario planning units (increase ProbStruct) to enhance resilience against AI-induced shocks [24].
For organisational leaders:
  • Use the Alrohaimi dashboard as a quarterly “cognitive health check”. Track not only CT but also the derivative dCT/dt.
  • If the dashboard flags a “Reductionist Efficiency Pattern”, initiate participatory dialogue sessions to reconnect work with meaning [4,18].
  • Invest in AI literacy programmes for all employees – not just technical staff – to raise awareness [2].
For technologists and AI developers:
  • Design algorithms with “meaning-aware” interfaces – for example, providing contextual explanations that help users connect algorithmic outputs to their values [19].
  • Include human-in-the-loop overrides for critical decisions, preserving agency [12].
  • Share anonymised usage data with researchers to refine the Alrohaimi model’s functional forms.
For researchers:
  • Conduct the empirical validation studies outlined in Section 6.5.
  • Develop culturally adapted versions of the questionnaires [15].
  • Extend the model to include network effects (e.g., how cognitive balance spreads through social ties) [25].

7.4. Future Research Directions

Beyond immediate validation, three long-term research avenues are promising.
1. Multi-scale and multi-agent extensions. The current model aggregates at the societal or organisational level. Future work will build agent-based simulations where individuals with heterogeneous B, A, and H interact, generating emergent CT dynamics [7,8].
2. Integration with real-time digital traces. Instead of relying solely on periodic surveys, future dashboards could analyse anonymised digital communication (e.g., meeting transcripts, email sentiment) to estimate A and B passively, enabling continuous monitoring [11,21].
3. Normative optimisation. The model can be inverted to answer: “What combination of interventions maximises CT_human subject to budget and time constraints?” This turns the Alrohaimi Index into an optimal control problem for civilisational policy.
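The inversion described here can be sketched as a brute-force search over intervention bundles. Everything below the index computation is an illustrative assumption: the intervention menu, effect sizes (borrowed from the Section 5.5 scenario), and costs are hypothetical, and the search enforces the Me ≥ 0.2 guardrail rather than maximising raw CT alone, since Section 5.5 showed the raw score is misleading when meaning collapses.

```python
import itertools
import math

def ct_human(A, B, L, PS, C=0.59, Q_norm=0.5, L_ref=5.0):
    """CT_human with C and Q_norm frozen at the Section 5.5 values."""
    R = 1.0 / (1.0 + math.exp(-5.0 * PS))
    L_norm = (L / (1.0 + R)) / L_ref
    return (C * Q_norm * L_norm) ** (A * B)

# Hypothetical menu: (delta_A, delta_B, delta_L, delta_PS, cost)
INTERVENTIONS = {
    "ai_literacy":       (0.30, 0.00,  0.0, 0.0, 2),
    "ethics_boards":     (0.00, 0.20,  0.0, 0.0, 1),
    "agile_regulation":  (0.00, 0.00, -1.0, 0.0, 2),
    "scenario_planning": (0.00, 0.00,  0.0, 0.2, 1),
}

def best_mix(budget, base=None):
    """Exhaustively search intervention bundles within budget, keeping only
    those that restore the meaning guardrail (Me >= 0.2), and return the
    bundle with the highest CT_human (or (None, 0.0) if none qualifies)."""
    base = base or {"A": 0.14, "B": 0.36, "L": 3.0, "PS": 0.52}
    best_combo, best_score = None, 0.0
    names = list(INTERVENTIONS)
    for r in range(1, len(names) + 1):
        for combo in itertools.combinations(names, r):
            cost = sum(INTERVENTIONS[n][4] for n in combo)
            if cost > budget:
                continue
            p = dict(base)
            for n in combo:
                dA, dB, dL, dPS, _ = INTERVENTIONS[n]
                p["A"] += dA; p["B"] += dB; p["L"] += dL; p["PS"] += dPS
            if p["A"] * p["B"] < 0.2:   # enforce the meaning guardrail
                continue
            score = ct_human(**p)
            if score > best_score:
                best_combo, best_score = combo, score
    return best_combo, best_score
```

With the costs above, any feasible bundle must include both meaning-restoring interventions (awareness and belief), since neither alone lifts Me above 0.2; a production version would replace this grid search with a proper constrained optimiser estimated from data.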

7.5. Concluding Remarks

The algorithmic era confronts humanity with a choice. We can continue optimising for efficiency, productivity, and measurable outcomes – slowly eroding the very meaning that makes life worth living. Or we can deliberately design our institutions, technologies, and leadership practices to protect and amplify human meaning.
The Alrohaimi Index provides a compass for that journey. It does not claim to have discovered the final truth about civilisational transformation. Rather, it offers a transparent, modifiable, and empirically testable framework – a starting point for what we hope will become a global conversation. By quantifying the invisible, by making the existential measurable, and by embedding ethical guardrails into mathematics, we can steer toward a future where AI serves humanity, not the other way around.
As Viktor Frankl wrote, “What man actually needs is not a tensionless state but the striving and struggling for a worthwhile goal, a freely chosen task” [5]. The Alrohaimi Index helps us identify when that striving is alive – and when it is dangerously absent.

Author Contributions

The author confirms sole responsibility for all aspects of the study, including conceptualization, methodology, analysis, and writing (original draft preparation, review, and editing).

Funding

This research received no external funding. The APC was funded by the author.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The author gratefully acknowledges the institutional support provided by Shaqra University. During the preparation of this manuscript, the author used ChatGPT (OpenAI) to assist with language editing and improving clarity of expression. The author reviewed and edited all AI-assisted outputs and assumes full responsibility for the content of this publication.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734.
  2. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126.
  3. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
  4. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660.
  5. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Acad. Manag. Rev. 2021, 46, 192–210.
  6. Jussupow, E.; Spohrer, K.; Heinzl, A.; Barrot, C. Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Inf. Syst. Res. 2021, 32, 713–735.
  7. Buçinca, Z.; Malaya, M.B.; Gajos, K.Z. To trust or to think: Cognitive forcing functions can reduce overreliance on AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 2021; Article 139.
  8. Bansal, G.; Wu, T.; Zhou, J.; Fok, R.; Nushi, B.; Kamar, E.; Weld, D.S.; Horvitz, E. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 2021; Article 81.
  9. Köbis, N.; Bonnefon, J.-F.; Rahwan, I. Bad machines corrupt good morals. Nat. Hum. Behav. 2021, 5, 679–685.
  10. Burton, J.W.; Stein, M.-K.; Jensen, T.B. A systematic review of algorithm aversion. J. Behav. Decis. Mak. 2020, 33, 220–239.
  11. Siau, K.; Wang, W. Artificial intelligence (AI) ethics: Ethics of AI and ethical AI. J. Database Manag. 2020, 31, 74–87.
  12. Schmutz, J.B.; Outland, N.; Kerstan, S.; Georganta, E.; Ulfert, A.-S. AI-teaming: Redefining collaboration in the digital era. Curr. Opin. Psychol. 2024, 58, 101837.
  13. Woolley, A.W.; Gupta, P. Understanding collective intelligence: Investigating the role of collective memory, attention, and reasoning processes. Perspect. Psychol. Sci. 2024, 19, 344–354.
  14. van Knippenberg, D.; Pearce, C.L.; van Ginkel, W.P. Shared leadership–vertical leadership dynamics in teams. Group Organ. Manag. 2025, 50, 44–67.
  15. De Vincenzo, F.; Curșeu, P.L.; Chirilă, M. Collective forms of leadership and team cognition in work teams: A systematic and critical review. Acta Psychol. 2025, 259, 105403.
  16. Abson, E.; Schofield, P.; Kennell, J. Making shared leadership work: The importance of trust in project-based organisations. Int. J. Proj. Manag. 2024, 42, 102567.
  17. Ling, T.C.; Choong, Y.O.; Ng, L.P.; Lau, T.C. Beyond fairness: Exploring organizational citizenship behavior through the lens of self-efficacy and trust in principals. Humanit. Soc. Sci. Commun. 2025, 12, 288.
  18. El-Ashry, A.M.; Abdo, B.M.E.; Khedr, M.A.; El-Sayed, M.M.; Abdelhay, I.S.; Zeid, M.G.A. Mediating effect of psychological safety on the relationship between inclusive leadership and nurses’ absenteeism. BMC Nurs. 2025, 24, 826.
  19. Mohase, K.; Donald, F.; Israel, N. Inclusive leadership, psychological safety, and employee voice in remote and hybrid work employees. South Afr. J. Psychol. 2025, 55, 1–14.
  20. Wang, L.; Duan, X.; Wang, S.; Zhang, W. Generational diversity and team innovation: The roles of conflict and shared leadership. Front. Psychol. 2024, 15, 1501633.
  21. Maitlis, S.; Christianson, M. Sensemaking in organizations: Taking stock and moving forward. Acad. Manag. Ann. 2014, 8, 57–125.
  22. Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. Harv. Data Sci. Rev. 2019, 1(1).
  23. Frankl, V.E. Man’s Search for Meaning; Beacon Press: Boston, 1959.
  24. Ibn Khaldun. The Muqaddimah: An Introduction to History; Rosenthal, F., Translator; Princeton University Press: Princeton, 1969 (original work published 1377).
  25. Turchin, P. Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth; Beresta Books: Chaplin, CT, 2016.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.