Preprint · Review · This version is not peer-reviewed.

Is Strategic Decision-Making Still Human? A Systems Review of Agency, Algorithms, and Enterprise Strategy

Submitted: 22 February 2026 · Posted: 23 February 2026
Abstract
Strategic decision-making (SDM) has traditionally been viewed as a human activity based on judgment, experience, and negotiation among senior managers. These decisions are limited by attention constraints, incomplete information, and bounded rationality. Today, many firms embed artificial intelligence (AI) and algorithmic decision-making systems into strategic processes. In some cases, algorithms do more than support managers. They filter options, rank priorities, and strongly shape final decisions. This article asks when SDM remains meaningfully human and when it becomes effectively algorithmic in algorithmically mediated enterprises. The study uses a theory-building integrative review of 62 contributions from strategy, information systems, behavioural research, and governance. It compares human and algorithmic decision-making across five dimensions: interpretive authority, search structure, time orientation, accountability, and scalability. Based on this analysis, it develops a framework of human–AI decision structures. The framework identifies three main forms: human-dominant, sequential hybrid (AI-to-human or human-to-AI), and aggregated human–AI governance structures. Each form affects not only decision accuracy but also power, learning, agency, and accountability. The key challenge is not to defend purely human strategy. It is to design governance systems where decision rights, oversight, and contestability remain strong when algorithms act as active decision participants.

1. Introduction

Strategic decision-making (SDM) is usually treated as the responsibility of senior leaders, enacted through human judgement and deliberation at the top of the organisation. Classic work already argues, however, that SDM is constrained by limited attention, imperfect information, and bounded rationality (BR) (Simon, 1947). Building on this, the behavioural theory of the firm (BTF) asserts that organisations rely on routines, decision rules, and problemistic search to make choices under uncertainty, so that SDM is shaped as much by organisational structures and procedures as by individual cognition (Cyert & March, 1963; Cyert et al., 2007). In this sense, SDM has long been an organisational design problem: firms can decide who holds decision rights, when they decide, what information is available, and how disagreements are resolved (Greve & Zhang, 2022).
Although SDM has historically been human-centred, many organisations now embed artificial intelligence (AI) outputs directly into strategic processes. In practice, algorithmic systems do not simply “support” decisions; they filter option sets, prioritise targets, recommend actions, and shape what is considered credible evidence. Algorithms may thus act as de facto decision participants rather than neutral tools. In this context, the issue is not merely whether AI improves predictive accuracy, but how strategic authority, responsibility, and accountability should be organised when AI has real influence over strategic choices (Trunk et al., 2020; Shrestha et al., 2019).

1.1. From Human Judgement to Decision Architecture

A useful starting point is to treat SDM as decision architecture rather than as a personal trait. Behavioural and organisational traditions argue that decision quality depends on how choices are structured, not only on who is in the room (Simon, 1947; Cyert & March, 1963). In a similar vein, behavioural strategy (BS) shows that even experienced executives remain vulnerable to systematic biases, motivated reasoning, and affective influences, which can distort strategic outcomes despite expertise (Powell et al., 2011; Bromiley & Rau, 2017). Once this behavioural evidence is considered, the often-implied claim that “keeping SDM human” is automatically safer or normatively superior becomes difficult to sustain.
At the same time, BS exhibits an important limitation for the present problem. Much of this work focuses on the cognition of top managers and boards, with relatively less attention to hybrid systems in which algorithms structure attention, pre-filter alternatives, and frame recommendations before human actors deliberate (Cristofaro et al., 2023). In other words, BS explains how managers decide, but often under-specifies how human and algorithmic inputs should be structured together. If SDM is already an organisational design problem, then the pressing question becomes: what exactly changes when elements of the decision process are delegated to AI, and how should these delegations be configured?

1.2. AI Enters SDM as a New Decision Actor, Not Only a Capability

Research on AI and strategy typically treats AI as a capability or resource that enhances analytical power, speed, and decision efficiency. Empirical and conceptual studies report that AI can improve decision quality and organisational performance, thereby reinforcing a “tool to capability to performance” narrative (Alasmri & Basahel, 2022; Charitha & Hemaraju, 2023). Reviews of AI in strategic contexts similarly argue that AI alters task division and roles between humans and machines, but they often stop short of specifying distinct structural designs for strategic decision authority (Trunk et al., 2020).
This perspective is analytically useful, yet it obscures a harder claim. When AI filters the option set, ranks alternatives, or generates “optimal” recommendations at speed, it does not merely strengthen human cognition; it reshapes the decision space itself. Shrestha et al. (2019) compare human and AI decision-making along several contingencies, such as search-space specificity, interpretability, alternative set size, speed, and replicability, and argue that these differences imply different optimal combinations of human and machine decision-making. Their implication is structural rather than purely technical: under different decision conditions, organisations should adopt different ways of combining human and AI decision roles. Thus, the practical challenge becomes architectural rather than merely technological: under what decision conditions should organisations delegate, sequence, or aggregate human and AI inputs in SDM?

1.3. Governance Is Necessary but Does Not Fully Address Structure

Once AI participates in SDM, governance concerns become unavoidable. Scholars argue that algorithmic systems can embed bias, reduce transparency, and complicate accountability, especially when models are opaque or difficult to contest (Coglianese & Lehr, 2019; Katzenbach & Ulbricht, 2019). Moreover, fairness is not a single, neutral property; it depends on specific metrics and design choices, so “being fair” cannot be bolted on ex post (Pessach & Shmueli, 2020). Accordingly, research in algorithmic governance rightly insists on transparency, accountability, and auditability as central design priorities.
However, governance alone does not specify decision structures. Much of this literature explains what should be protected (e.g., fairness, non-discrimination, due process), yet it less frequently theorises how SDM roles and decision rights should be configured inside firms once AI becomes a decision participant. Put simply, governance defines what managers must answer for; it does not automatically tell them how to organise strategic authority. The unresolved question is how accountability can be maintained when decision influence is partly algorithmic, but legal and social responsibility remains managerial (Coglianese & Lehr, 2019).

1.4. The Design Problem and Review Gap

When behavioural strategy, AI-and-strategy, and governance literatures are read together, a distinct gap becomes visible. First, BS largely explains SDM through the limits of human cognition and the influence of bias, thereby keeping the main unit of analysis centred on managers and top teams (Bromiley & Rau, 2017; Cristofaro et al., 2023). Second, AI-and-strategy work tends to frame AI as a capability that can raise decision quality and organisational performance, which implicitly treats algorithms as tools that support managers rather than as actors that reshape strategic authority (Trunk et al., 2020; Alasmri & Basahel, 2022). Third, governance research rightly foregrounds fairness, transparency, and accountability, yet often does not specify how strategic decision rights should be allocated once algorithmic outputs become influential (Coglianese & Lehr, 2019; Pessach & Shmueli, 2020). Taken together, these omissions leave under-theorised the structural integration of human and algorithmic agency at the strategic level.
In this respect, the most practical formulation of the organisational design problem is straightforward: how should SDM be structured when algorithms become decision participants rather than tools? This is the question that many firms now face in domains such as credit allocation, portfolio strategy, resource prioritisation, and platform governance, where AI systems shape high-stakes outcomes.

1.5. Research Question

In response, this article addresses a practical and theoretical question: what SDM structures are most appropriate when strategic decisions involve both human and AI-based decision makers, and how do these structures vary with decision conditions and governance demands?

1.6. Contribution and Approach

To address this question, the article develops a configurational framework of SDM structures in algorithmically mediated enterprises (AMEs). In particular, it theorises alternative configurations by specifying how decision rights are allocated and how the option set is formed, since power resides both in final choice and in earlier filtering of alternatives (Shrestha et al., 2019). It clarifies how human and AI inputs are coordinated through delegation, sequencing, or aggregation, because each coordination logic implies different bottlenecks, biases, and failure modes.
Moreover, the article integrates interpretability and contestability by considering when explanations are needed, to whom they are owed, and how algorithmic outputs can be challenged, which is central to responsible strategic authority (Coglianese & Lehr, 2019). It also addresses reliability and replication, since organisations may value consistency while still needing to surface error rather than propagate it at scale. Finally, it links configuration choices to governance and accountability, by outlining oversight structures that preserve managerial responsibility even when algorithms shape the decision process.
In doing so, the article shifts the debate from the narrow issue of whether AI improves SDM to the broader organisational question of how SDM should be architected when AI participates in strategic choice (Simon, 1947; Cyert & March, 1963; Shrestha et al., 2019).

2. Materials and Methods

2.1. Review Approach

This study employs a theory-building integrative review methodology (Torraco, 2005; Snyder, 2019). Rather than aggregating effect sizes or ranking interventions, the integrative review clarifies constructs, surfaces contingencies, and develops a framework that can later be tested empirically. This orientation is appropriate because hybrid human–AI strategic decision-making (SDM) is currently fragmented across several literatures that use different methods and vocabularies.
Hence, the review deliberately reads these strands together. It combines conceptual arguments, empirical case studies, experiments, and governance analyses into a configurational account of how human and algorithmic agency are structured at the strategic level (Cronin & George, 2020). The ambition is not completeness or quantitative generalisability, but a theoretically coherent and empirically informed set of configurations that subsequent work can refine or contest.
In line with this aim, the corpus spans strategic management, information systems, behavioural research, and algorithmic governance. This cross-disciplinary scope reflects the fact that SDM in algorithmically mediated enterprises (AMEs) is a socio-technical problem that links models, organisational structures, and institutional constraints.

2.2. Analytical Strategy

The analytical strategy follows the dimension-to-configuration logic proposed by Shrestha et al. (2019). First, the review identifies five dimensions along which human and algorithmic SDM differ: interpretive authority, decision search-space structure, temporal orientation, accountability and traceability, and replicability and scalability (Green & Chen, 2019; Marabelli et al., 2021). Second, the analysis compares human and AI decision-making, not to declare a “better” decision maker, but to expose complementarities and concerns. Human actors work with ambiguity and informal knowledge yet remain bias-prone, whereas AI systems apply consistent rules at scale but require formal objectives and may reproduce structural harms (Gigerenzer et al., 2022; De-Arteaga et al., 2020). Third, the dimensions underpin a parsimonious typology of three structural configurations: human-dominant, sequential hybrid, and aggregated human–AI governance (Punzi et al., 2024). Finally, each configuration is evaluated for decision quality, accountability, learning, and governance under specified boundary conditions (Lambrecht & Tucker, 2019; Saxena et al., 2021).

2.3. Search Strategy, Screening and Data Sources

To capture both strategic and technical work on AI, the review used structured searches in Scopus, Web of Science Core Collection, ABI/INFORM (ProQuest), and IEEE Xplore. Scopus and Web of Science were used to locate strategy, organisation and governance scholarship, while ABI/INFORM and IEEE Xplore captured information systems, HCI and socio-technical studies on algorithmic decision-making systems (ADMS) (Falagas et al., 2008; Gusenbauer & Haddaway, 2020). The search window was 2018–2025 to reflect the period in which ADMS moved from speculation to documented organisational deployment (Trunk et al., 2020; Rahman et al., 2025). Seminal SDM and behavioural theory texts were added through citation tracing rather than direct querying, to avoid diluting the AI focus (Simon, 1947; Cyert & March, 1963).
Studies were eligible if they: (1) examined SDM or organisation-level decision processes, especially at senior, board or enterprise level; (2) analysed AI or ADMS as decision support, automation, delegation or joint human–AI decision-making; or (3) addressed transparency, accountability, explainability or fairness in strategic or high-stakes decisions. Studies were excluded where AI appeared only as a technical performance issue, or where analysis was limited to low-level algorithmic management without a strategic lens (Ogunleye & Kalema, 2020; Sienkiewicz, 2021).
Search strings were organised around five keyword families (Table 1), covering SDM, organisational decision structure, AI/ADMS, human–AI collaboration, and governance. This introduced a trade-off: broad terms such as “AI” and “decision support” risked many technically focused but strategically thin papers, whereas narrower labels would miss work on “algorithmic management” or “hybrid decision systems” (Jarrahi et al., 2021; Benlian et al., 2022). The approach was therefore iterative, with broad initial searches followed by refinement and manual screening.
Across the four databases, 1,626 records were retrieved; 732 duplicates were removed, leaving 894 records for title and abstract screening. Of these, 764 were excluded for lacking a clear link to SDM, decision structures or AI/ADMS at a strategic level. Full texts were obtained for 130 articles; 78 were then excluded because they treated AI as a generic “technology trend” or focused solely on model metrics. Backward and forward citation tracing around conceptually central pieces on decision structures, algorithm-in-the-loop and outsider oversight added 10 further studies (Shrestha et al., 2019; Green & Chen, 2019; Raji et al., 2022; Hadley et al., 2024; Punzi et al., 2024; Moreira et al., 2025). The final corpus comprised 62 publications, of which 35 were conceptual or review papers and 27 empirical studies. This diversity strengthens the analysis by combining theorisation with concrete evidence, although it rules out formal effect synthesis (Cronin & George, 2020). Table 2 summarises the included studies and their relevance to the framework.
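As a sanity check, the screening flow reported above can be reproduced arithmetically. All figures below are taken directly from the text; the snippet is purely illustrative and not part of the review protocol:

```python
# Screening-flow arithmetic, using the counts reported in the text.
retrieved = 1626                       # records across the four databases
after_dedup = retrieved - 732          # 732 duplicates removed
excluded_screening = 764               # no clear link to SDM/ADMS at strategic level
full_text = after_dedup - excluded_screening
excluded_full_text = 78                # AI treated as a generic "technology trend"
citation_tracing = 10                  # backward/forward citation-tracing additions
final_corpus = full_text - excluded_full_text + citation_tracing

print(after_dedup, full_text, final_corpus)  # 894 130 62
```

The final figure also matches the reported composition of the corpus (35 conceptual or review papers plus 27 empirical studies).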

3. Results

3.1. Overview of Included Studies

The 62 included studies combine laboratory experiments, fieldwork, ethnography, conceptual frameworks, and governance analyses. Taken together, they support a shift from viewing AI as a static “tool” towards understanding AI as part of a socio-technical decision system that reallocates agency, discretion, and accountability.

3.2. Comparing Human and Algorithmic Strategic Decision-Making

To understand how hybrid structures work, it is necessary first to examine how human and algorithmic SDM differ when considered separately. Five dimensions are central here: interpretive authority, decision search-space structure, temporal orientation, accountability and traceability, and replicability and scalability (see Table 3). These dimensions do not exhaust the differences, but they capture the main trade-offs that recur across the literature (Green & Chen, 2019; Shrestha et al., 2019; Marabelli et al., 2021).

Interpretive Authority

In human SDM, interpretive authority is grounded in contextual sense-making, narrative construction, and political negotiation. Executives weave weak signals, informal knowledge, and stakeholder pressures into stories that make particular courses of action appear reasonable or necessary (Langley et al., 1995; Cristofaro et al., 2023). This narrative richness can reveal opportunities that are not visible in formal data, but it also opens the door to motivated reasoning and self-serving framings of risk (Powell et al., 2011; Bromiley & Rau, 2017).
AI and ADMS, by contrast, derive authority from models, data patterns, and optimisation objectives. Their “interpretation” is encoded in features, labels, and loss functions rather than in explicit narratives (Green & Chen, 2019; Veale et al., 2018). This shift often improves statistical consistency but makes it harder for human actors to see how particular recommendations arise. Zerilli et al. (2022) note that transparency may modulate trust, yet they also caution that simplistic disclosure can mislead users about the true limits of model understanding.
The concern here is not merely epistemic but political. When executives adopt model outputs as authoritative, they sometimes present them as neutral facts rather than as products of contested modelling choices (Koulu, 2020; Vaassen, 2022). In doing so, they may shift debate from “what should we value?” to “what does the model say?”, thereby narrowing the space for dissent.

Decision Search-Space Structure

Human strategists are comfortable operating in ill-structured spaces with shifting goals and ambiguous constraints. BTF and process studies suggest that strategic problems often begin as fuzzy issues that are gradually defined through interaction, learning, and conflict (Cyert & March, 1963; Langley et al., 1995). This flexibility allows adaptation but also makes search path-dependent and vulnerable to framing effects, as decision-makers settle on locally salient options rather than exploring the full space (De Dreu et al., 2008; Kahneman, 2011).
AI systems, in contrast, require explicit objectives, features, and constraints to define their search space. Once these elements are specified, models can evaluate large numbers of alternatives and identify patterns that humans might miss. However, as Marabelli et al. (2021) argue, the choice of targets and metrics is itself a strategic act. Decisions about which variables to optimise and which trade-offs to encode effectively shape the “hidden” strategic agenda of the system (Funda, 2025; Rahman et al., 2025).
From a critical perspective, this means that the neutrality often attributed to algorithmic search is misleading. The search space is never simply given; it is designed, negotiated, and sometimes imposed. When those design choices are opaque to boards or stakeholders, strategic power can shift towards technical teams or vendors without clear accountability.

Temporal Orientation

Human SDM is often framed as long-term and imaginative, especially in corporate strategy discourse. Nevertheless, behavioural work shows that executives are structurally pulled towards short-term performance by incentive schemes, market pressures, and career concerns (Gigerenzer et al., 2022; Greve & Zhang, 2022). As a result, stated long-term orientations may be fragile and contingent.
AI systems usually extrapolate from historical data under pre-defined time horizons. They are strong in short- to medium-term prediction for phenomena with stable patterns, such as customer churn or credit risk, but they are vulnerable to regime shifts and structural breaks (Noti et al., 2025; Lahoti et al., 2025). Elish (2018) shows, in a clinical setting, how practitioners and managers re-embed model outputs within institutional time frames such as budget cycles and regulatory reviews. In that sense, temporal orientation emerges from the interaction between technical design and organisational context rather than from either alone (Kawakami et al., 2024; Masrani et al., 2025).
The implication is that neither human nor algorithmic actors naturally “own” the long term. Long-term orientation must be designed and governed, for example through scenario planning, stress testing, and explicit consideration of structural uncertainty, rather than assumed as a property of one actor type.

Accountability and Traceability

Human SDM is narratively explainable. Executives can describe reasons, trade-offs, and constraints, even if such explanations are partial or ex post rationalisations (Feldman & March, 1981; Brunsson, 1989). This narrative capacity supports established accountability mechanisms, since regulators, courts, or shareholders can ask decision-makers to justify what they did and why. At the same time, narrative flexibility creates scope for blame shifting and what Brunsson (1989) calls “organised hypocrisy”, where talk, decisions, and actions diverge.
ADMS promise detailed logs, code histories, and audit trails. In principle, this technical traceability should strengthen accountability. In practice, however, few actors understand the entire pipeline from data collection to model deployment, and responsibility can become diffused across multiple teams and suppliers (Burrell, 2016; Nwachukwu et al., 2025). Hadley et al. (2025) show, in their study of ARBs, that record-keeping can support oversight but can also become a ritual that substitutes for substantive challenge, a risk Chappidi et al. (2025) describe as “accountability capture”.
Thus, the move from narrative to technical traceability does not automatically improve accountability. Instead, it shifts the skills and institutional arrangements required to hold decision-makers to account.

Replicability and Scalability

Human decision-making is difficult to replicate. Outcomes depend on experience, emotion, group dynamics, and local context, so the same team may respond differently to similar issues at different times (De Dreu et al., 2008; Lerner et al., 2015). While this variability can allow learning and adaptation, it also introduces noise. Green and Chen (2019) show that, even with algorithmic advice, users often respond inconsistently across cases.
AI systems, conversely, apply the same rules to the same inputs and can do so at scale. This stability is attractive for high-volume decisions, such as lending or portfolio rebalancing, where consistency is valued (De-Arteaga et al., 2020; Kovari, 2024). Yet replicability is conditional on context. If the model is mis-specified or trained on biased data, it may consistently propagate errors or harms, thereby amplifying injustice rather than reducing it (Saxena et al., 2021; Masrani et al., 2025).
Here, the trade-off is clear. Human noise can be costly, but so can systematic error. The choice is not between “stable good decisions” and “messy human ones” but between different patterns of error and different forms of control over them.
These contrasts underpin the configurational analysis that follows. The next section examines how organisations combine human and algorithmic SDM through three broad structural forms: human-dominant, sequential hybrids, and aggregated human–AI governance.

4. Strategic Decision-Making Structures in the Age of AI

Having outlined the core differences between human and algorithmic SDM, the analysis now looks at the structures through which organisations combine them. The article follows Shrestha et al. (2019) in treating decision structures as designable objects, but it extends their typology by introducing a more explicit focus on agency, power, and accountability. Three broad configurations are considered:
  • Human-dominant structures, where AI plays an advisory role.
  • Sequential hybrid structures, where AI and humans act in ordered stages.
  • Aggregated human–AI governance, where human and algorithmic inputs are combined through explicit aggregation rules.
Each configuration appears in practice in different sectors, often in hybrid forms. The typology is therefore heuristic rather than exhaustive. Its value lies in making structural choices visible and open to critique.

4.1. Human-Dominant Strategic Structures: AI as Advisory Input

In human-dominant structures, final authority formally rests with human actors—typically boards, top management teams (TMTs), or senior committees—while AI provides analytical input. Examples include scenario simulations for strategy offsites, risk models for investment committees, or decision dashboards that feed into board deliberations (CFA Institute, 2021; Büber & Seven, 2025; Ramu & Bansal, 2025).
Case material on “T-shaped teams” in financial services illustrates how this works. Here, data scientists and modellers sit alongside portfolio managers, offering analysis but not holding explicit decision rights (Cao, 2021; CFA Institute, 2021). In this arrangement, AI extends the scope and speed of analysis, yet the narrative framing, interpretation, and formal voting remain with human decision-makers.
At first sight, this configuration appears normatively attractive. It seems to preserve human responsibility while benefiting from AI-driven insight. However, several studies suggest that this picture may be too reassuring. Green and Chen (2019) show that even when humans retain final authority, they often struggle to calibrate how much weight to give algorithmic advice. Alon-Barkat and Busuioc (2021, 2023) find selective adherence and automation bias, with decision-makers tending to accept AI recommendations that align with prior beliefs or appear numerically precise.
Ethnographic work similarly suggests that “human-dominant” narratives can mask de facto dependence on AI. Elish (2018) documents how clinical staff and managers reframe machine learning predictions as neutral evidence, even when they rely heavily on model outputs in practice. Saxena et al. (2021) find that high-stakes public sector ADMS, such as child-welfare tools, can quietly shape decisions while officials publicly emphasise their own discretion.
In human-dominant structures, then, a central risk is automation complacency. Executives may over-rely on AI-generated numbers while maintaining a rhetorical emphasis on human control, thereby weakening scrutiny of model assumptions. This risk is heightened when oversight forums, such as ARBs, are weakly integrated into strategic deliberation and function more as procedural checks than as substantive challengers (Hadley et al., 2024; Chappidi et al., 2025).
There are, nonetheless, positive outcomes. Some boards use AI-supported simulations explicitly as “boundary objects” for debate rather than as decision rules. In such cases, directors interrogate model assumptions, explore scenarios that challenge prevailing strategies, and insist on explicit justification when recommendations are followed (Ganesh et al., 2025; Lahoti et al., 2025). AI strengthens human agency by making trade-offs and uncertainties more visible rather than by dictating a single “answer”. The contrast between these cases and more complacent practices underlines that human-dominant structures are not inherently safe; they must be actively governed to avoid slipping into unacknowledged automation.

4.2. Sequential Hybrid Strategic Structures

Sequential hybrid structures introduce an ordered relationship between AI and human decision-makers. In these configurations, either AI acts first and humans follow (AI-to-human), or humans act first and AI follows (human-to-AI). The sequence matters, because it determines who shapes the option set and who performs the final evaluation (Shrestha et al., 2019; Punzi et al., 2024).

AI-to-Human Sequences

In AI-to-human structures, algorithms carry out an initial screening or ranking of a large alternative set, and humans then review a narrowed subset. Examples include credit scoring systems that pre-filter loan applicants, risk models that triage welfare cases, or AI tools that shortlist candidates for innovation contests (De-Arteaga et al., 2020; Saxena et al., 2021; CFA Institute, 2021).
This design exploits AI’s strength in scalability and pattern recognition while reserving final judgment for human actors. In principle, it can reduce cognitive overload and allow experts to focus on borderline or complex cases. However, several authors argue that the structural risks are substantial. Lambrecht and Tucker (2019) show how biased ad-targeting can systematically exclude certain groups from opportunities. Studies of biased recruitment and criminal-justice tools provide similar evidence that early algorithmic filtering can encode historical discrimination into the candidate pool (Dastin, 2018; Angwin et al., 2016; Noble, 2018).
Once discarded, these cases rarely reach human review. Even conscientious managers cannot scrutinise options that the model never presents. From a structural perspective, AI-to-human sequences therefore shift strategic power upstream, into feature selection, labelling, and threshold design (Marabelli et al., 2021; Funda, 2025). Unless these design decisions are visible and contestable, formal human oversight may function largely as an illusion.
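The upstream-power point can be made concrete with a small simulation. The sketch below is purely illustrative — the group labels, penalty size, and shortlist cut-off are assumptions, not findings from the cited studies — but it shows how a screening score that systematically understates one group leaves that group underrepresented in the shortlist human reviewers ever see, even when underlying quality is identical across groups:

```python
import numpy as np

# Hypothetical applicant pool: two groups with identical true quality.
rng = np.random.default_rng(7)
n = 10_000
group = rng.choice(["A", "B"], size=n)
quality = rng.normal(0.0, 1.0, size=n)

# Assumed biased screen: the score understates group B by a fixed
# penalty (e.g. inherited from historically skewed training labels).
penalty = np.where(group == "B", 0.5, 0.0)
score = quality - penalty + rng.normal(0.0, 0.3, size=n)

# AI-to-human sequence: only the top 10% by score reach human review.
shortlist = score >= np.quantile(score, 0.90)

share_pool = float(np.mean(group == "B"))
share_shortlist = float(np.mean(group[shortlist] == "B"))
print(f"group B share: pool {share_pool:.2f}, shortlist {share_shortlist:.2f}")
```

However conscientious the human reviewers are, the cases excluded at the threshold never enter their deliberation, which is why the feature, label, and threshold choices upstream carry strategic weight.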

Human-to-AI Sequences

In human-to-AI structures, human actors define or narrow the option set first, and AI performs intensive analysis or optimisation within that constrained space. Familiar examples include “Moneyball”-style sports analytics, where managers identify target players and models refine choices, or clinical monitoring systems, where clinicians select risk cohorts and AI tracks patterns over time (Millington & Millington, 2015; Rajkomar et al., 2018; Choi et al., 2023).
Here, humans retain control over problem framing, while AI delivers depth and consistency within that frame. At first, this seems to preserve human agency. Yet this configuration creates different risks. Natali et al. (2025) argue that over time AI can induce deskilling, as professionals come to rely on model outputs instead of maintaining their own analytical capabilities. Ibrahim et al. (2025) similarly warn that, without explicit monitoring, users may develop over-reliance patterns that are hard to detect.
In strategic settings, industry accounts and governance analyses point to a similar pattern. Chawande (2025) and Nwachukwu et al. (2025) describe risk committees in finance that increasingly treat model outputs as definitive until crises expose underlying fragilities. Raji et al. (2022) show how, in such contexts, external auditors struggle to attribute responsibility when both human committees and model pipelines have been involved.
Human-to-AI sequences therefore risk hollowing out the analytical core of SDM. Executives and boards may retain formal authority but lose the capacity to interrogate complex models, particularly when time pressures and organisational routines encourage acceptance of “standard” outputs.

4.3. Aggregated Human–AI Strategic Governance

Aggregated structures treat AI as a parallel decision participant whose outputs are combined with human judgments through explicit aggregation rules, such as voting weights, veto powers, or ensemble schemes (Shrestha et al., 2019; De-Arteaga et al., 2025). Human and algorithmic decisions are produced separately and then integrated, rather than strictly sequenced. Table 4 maps the dimensions to strategic decision-making structures.
One well-known example is Deep Knowledge Ventures’ appointment of an AI system (VITAL) to its investment committee, with a formal “vote” in funding decisions (Burridge, 2017; CFA Institute, 2021). Although this case is often cited more as a signalling device than as a robust governance innovation, it illustrates the idea that algorithms can hold explicit seats in decision forums. Similar designs can be found in committees that combine model-based risk scores with expert ratings, or in algorithm review boards (ARBs) that include both technical and non-technical members (Hadley et al., 2024; Ganesh et al., 2025).
Aggregated configurations have an intuitive appeal. If human and algorithmic errors are not perfectly correlated, combining their judgements may reduce overall error, in line with ensemble logic in machine learning and “wisdom of crowds” arguments in behavioural research (Gigerenzer et al., 2022; Smith et al., 2025; Lahoti et al., 2025). However, the literature suggests that this promise is conditional on how aggregation is designed and governed.
First, the assignment of weights is itself a political act. If AI outputs are given high formal weight, in practice they may dominate deliberation even when human votes exist on paper. Conversely, if human elites can easily override models, the AI component may become symbolic (Jarrahi et al., 2021; Benlian et al., 2022). Neither outcome necessarily improves decision quality; both can obscure where real authority lies.
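The sensitivity of aggregation to weight assignment can be made concrete with a minimal sketch. The names (`Vote`, `aggregate`) and the specific weights here are hypothetical, chosen only to show how the same set of judgements yields opposite outcomes depending on the model's formal weight and the presence of a human veto:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    source: str      # e.g. "committee_member" or "model"
    approve: bool    # the actor's binary judgement
    weight: float    # formal voting weight set by governance rules

def aggregate(votes, human_veto=True, threshold=0.5):
    """Combine human and model votes into one decision.

    With a high model weight, the model can dominate even when
    several humans disagree; with human_veto=True, any human
    rejection blocks approval regardless of the weights.
    """
    if human_veto and any(v.source != "model" and not v.approve
                          for v in votes):
        return False
    total = sum(v.weight for v in votes)
    support = sum(v.weight for v in votes if v.approve)
    return support / total >= threshold

# Two humans reject, but the model's weight carries the decision
# unless a veto rule applies.
votes = [Vote("committee_member", False, 0.2),
         Vote("committee_member", False, 0.2),
         Vote("model", True, 0.6)]
aggregate(votes, human_veto=False)  # → True: model weight dominates
aggregate(votes, human_veto=True)   # → False: human veto blocks it
```

In this toy case the votes never change; only the weight and veto rules do, which is the sense in which aggregation design, rather than the judgements themselves, determines where authority actually lies.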
Second, aggregated structures may diffuse responsibility. When a poor outcome arises, decision-makers can point to the “balanced” process (“we used both expert judgement and AI”) without specifying which part of the system contributed most to the failure (Enarsson et al., 2022; Loi & Spielkamp, 2021). Empirical work on audits and ARBs shows that, if aggregation rules and deliberations are not transparent, external observers may find it difficult to hold any actor to account (Raji et al., 2022; Terzis et al., 2024; Chappidi et al., 2025).
Third, aggregated structures are demanding in terms of organisational capability. They require decision-makers who can understand both human and algorithmic rationales and who are willing to surface disagreement rather than hide behind “combined scores”. Without such capability, aggregation risks becoming a technical exercise carried out by specialists, with the committee simply endorsing a pre-packaged result.
These configurations show that the choice of structure is not a purely technical design decision. It shapes who exercises strategic agency, who can contest decisions, and how failures are interpreted. The next part of the article therefore turns explicitly to the reconfiguration of strategic agency in AMEs and the broader implications for theory, design, and governance.

5. The Reconfiguration of Strategic Agency

This section examines strategic agency. The central question is who performs strategy when AI systems participate in strategic decision-making in algorithmically mediated enterprises (Rahman et al., 2025; Jarrahi et al., 2021). Evidence from algorithmic management, public-sector AI, and board governance shows that AI neither simply augments nor replaces human strategists. Instead, it redistributes agency across socio-technical systems that include models, data infrastructures, oversight bodies, and external auditors (Saxena et al., 2021; Kawakami et al., 2024; Raji et al., 2022).
Raisch and Krakowski (2021) argue that this redistribution is often disguised as “augmentation”. They show that projects framed as human–AI collaboration tend to slide towards de facto automation as high-volume or politically sensitive decisions are shifted into systems marketed as neutral and objective (Raisch & Krakowski, 2021; Singh & Gupta, 2025). Studies of ride-hailing, logistics, and platform work show that algorithmic management centralizes control in software while preserving a formal appearance of autonomy (Benlian et al., 2022; Ogunleye & Kalema, 2020; Isbah, 2022). Discretion moves from line managers to algorithm designers and owners. In strategic contexts such as growth targets or risk calibration, this dynamic embeds key choices in model configuration rather than in explicit board deliberation. Strategic agency thus shifts upstream into technical design (Marabelli et al., 2021; Nwachukwu et al., 2025; Chawande, 2025).
Findings on augmentation present a more collaborative view. When tasks are structured to reflect complementary strengths, AI can perform large-scale pattern recognition while humans provide contextual judgment and ethical evaluation (Jarrahi et al., 2021; De-Arteaga et al., 2021; Smith et al., 2025). Yet the allocation of objectives remains decisive. Designers and data scientists select optimization targets and encode trade-offs. Executives may therefore operate within boundaries set by earlier modelling decisions (Marabelli et al., 2021; Funda, 2025; Hadley et al., 2024). Agency shifts from visible deliberation in top-management teams to less visible interactions among technical staff, compliance actors, and auditors. This relocation raises concerns about legitimacy and stakeholder voice (Raji et al., 2022; Terzis et al., 2024; Cohen & Suzor, 2024).
Governance scholarship highlights a related control problem. Humans remain formally accountable but often over-rely on algorithmic outputs or disregard them when inconvenient (Shin, 2020; Zerilli et al., 2019; Veale et al., 2018). Effective oversight requires both ex ante design scrutiny and ex post review of decisions. Oversight must assume fallibility in both systems and human supervisors, including boards (Laux, 2023; Loi & Spielkamp, 2021; Ganesh et al., 2025). Studies of algorithm review boards and audits show that weakly empowered oversight bodies can legitimize existing hierarchies rather than redistribute authority (Hadley et al., 2024; Raji et al., 2022; Chappidi et al., 2025).
Strategic management research often treats AI as a resource within the resource-based view. It emphasizes performance gains from data and analytics but pays less attention to the redistribution of decision rights (Büber & Seven, 2025; Ramu & Bansal, 2025; Lahoti et al., 2025). This focus risks neglecting how AI shapes internal goal setting, board contestation, and stakeholder participation (Rahman et al., 2025; Ganesh et al., 2025; Cohen & Suzor, 2024). Behavioral strategy documents cognitive limits in human actors but rarely examines how algorithmic systems structure which information and alternatives reach executives (Cristofaro et al., 2023; Gigerenzer et al., 2022).
Legal and governance literatures emphasize fairness, transparency, and bias. They often address operational harms rather than strategic direction setting (Koulu, 2020; Enarsson et al., 2022; Terzis et al., 2024; Raji et al., 2022; Moreira et al., 2025; Mushkani, 2025). An integrated account of strategic agency in algorithmically mediated enterprises must therefore connect resource-based, behavioral, governance, and socio-technical perspectives. It must explain not only performance outcomes but also power, responsibility, and voice (Saxena et al., 2021; Jarrahi et al., 2021; Rahman et al., 2025).
The next section derives implications for strategic management theory, organizational design, and governance. It treats decision structure and distributed agency as central analytical concerns rather than technical background conditions.

6. Implications

6.1. Implications for Strategic Management Theory

Strategic management theory must revise how it understands agency. Agency cannot be located only in top managers or human coalitions. It is distributed across people, algorithms, and institutions (von Krogh, 2018; Jarrahi et al., 2021; Rahman et al., 2025). Models shape which opportunities are visible. Audits and algorithm review boards influence which actions are acceptable. Regulators and civil society actors intervene through governance mandates (Marabelli et al., 2021; Terzis et al., 2024; Ganesh et al., 2025). The firm should therefore be understood as a socio-technical actor whose agency emerges from the interaction of human and technical elements, not as a simple owner of AI assets.
Core constructs require adjustment. Executive discretion is constrained by model outputs and parameter settings. Dynamic capabilities depend on how data systems and oversight processes are configured. Strategic fit reflects alignment not only with markets but also with algorithmic architectures and regulatory expectations. Treating AI as a neutral resource overlooks these structural effects.
Decision structure also becomes central. Human-dominant, sequential, and aggregated configurations allocate authority differently. These choices influence performance, resilience, and exposure to bias (Shrestha et al., 2019; Green & Chen, 2019; Romeo & Conti, 2026). Some structures may respond better to uncertainty or regulatory scrutiny than others. Comparative research across sectors could clarify these relationships (CFA Institute, 2021; Nwachukwu et al., 2025; Chawande, 2025). This approach moves beyond general claims about “AI in strategy” and toward analysis of specific governance arrangements.
There is also a need to integrate behavioural strategy with algorithmic governance. Behavioural research documents human bias. Governance research documents bias and opacity in algorithmic systems. Few studies examine how these biases interact within shared decision structures (Romeo & Conti, 2025; Zerilli et al., 2022; Ibrahim et al., 2025). Automation bias, for example, may vary across configurations. Bias should be analysed as a property of the joint human–AI system rather than of either element alone.

6.2. Implications for Governance and Policy

Organisations must clarify interpretive authority. They should identify who sets objectives, who selects features, who tunes thresholds, and who defines legitimate outcomes (Loi & Spielkamp, 2021; Ganesh et al., 2025; Chappidi et al., 2025). Without such clarity, power may shift to technical teams or vendors while formal structures suggest shared governance.
Override and abstention rules are equally important. Firms must specify when humans may reject AI recommendations and when systems must escalate cases to human review. Research on abstaining classifiers and override mechanisms shows that formal fail-safe paths are feasible and necessary in high-stakes contexts (Madras et al., 2018; Lenders et al., 2025; Mushkani, 2025). Examples include risk systems that flag uncertain cases for mandatory review and public-control systems that revert to conservative defaults under uncertainty (Nimmy et al., 2025; Moreira et al., 2025). Clear rules reduce ambiguity when errors occur.
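Such an escalation rule can be sketched in a few lines. The function name, score scale, and confidence floor below are illustrative assumptions, not a reproduction of any system cited above; the point is only that an abstention path can be an explicit, inspectable rule rather than an informal habit:

```python
def route_decision(model_score, confidence, confidence_floor=0.8):
    """Route a model recommendation under a simple abstention rule.

    Below the confidence floor, the system abstains and escalates
    the case to mandatory human review; otherwise the recommendation
    proceeds with the model's label attached.
    """
    if confidence < confidence_floor:
        return ("escalate_to_human_review", None)
    label = "approve" if model_score >= 0.5 else "reject"
    return ("auto_recommendation", label)

route_decision(0.9, 0.95)  # → ("auto_recommendation", "approve")
route_decision(0.9, 0.60)  # → ("escalate_to_human_review", None)
```

Making the floor an explicit parameter also makes it auditable: reviewers can ask who set it, on what evidence, and when it was last revisited.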
Boards and executives also require algorithmic literacy. Directors do not need advanced technical skills, but they must understand model logic, limits, and trade-offs (Ganesh et al., 2025; Hadley et al., 2024; Kostick-Quenet & Gerke, 2022). Effective practice includes structured engagement between directors, data scientists, and ethics specialists to review model portfolios and stress-test assumptions (Torre et al., 2019; Raji et al., 2022; Terzis et al., 2024). Weak practice treats AI as a narrow IT issue and relies on summary dashboards without deeper scrutiny.
Learning processes must also adapt. In human-only settings, organisations rely on narrative reviews of failures. Hybrid systems require access to model logs, parameter histories, and data lineage (Hadley et al., 2024; Nwachukwu et al., 2025; Chappidi et al., 2025). Firms should treat governance configurations as provisional and subject to revision. Design choices should be tested, evaluated, and adjusted over time.
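A minimal sketch of the kind of record such learning processes depend on might look as follows. The field names form an illustrative assumption rather than a standard schema; the point is that each hybrid decision links a model version and data-lineage pointer to the human action taken and its rationale:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Minimal audit record for one hybrid human-AI decision."""
    decision_id: str
    model_version: str   # which model/parameter set produced the output
    input_data_ref: str  # pointer into data lineage (e.g. a dataset hash)
    model_output: str
    human_action: str    # "accepted", "overridden", or "escalated"
    rationale: str       # free-text justification for the human action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list[DecisionRecord] = []
log.append(DecisionRecord(
    decision_id="D-0001",
    model_version="risk-model-v2.3",
    input_data_ref="sha256:placeholder",
    model_output="reject",
    human_action="overridden",
    rationale="Model lacked context on a pending regulatory change."))
```

A log of this shape lets a later review reconstruct not just what was decided, but which configuration produced the recommendation and why a human departed from it.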

7. Future Research

Future research should develop metrics suited to hybrid systems. Evaluation should include resilience, fairness, and organisational learning, not only accuracy or profit. Longitudinal studies across sectors could compare governance configurations under similar shocks (Shrestha et al., 2019; Lahoti et al., 2025; Nwachukwu et al., 2025; Funda, 2025). Research should also examine executive discretion and identity under algorithmic mediation. Qualitative studies with board members and senior executives could explore how they interpret redistributed agency and technical dependence (Kawakami et al., 2024; Masrani et al., 2025; Benlian et al., 2022). Ethnographic work within top-management teams could trace how models shape agenda setting and coalition formation (Langley et al., 1995; Jarrahi et al., 2021; Robinson et al., 2025). Different configurations may influence whose voice carries weight, including data officers and risk managers.
Comparative regulatory research could analyse how sectoral rules shape configuration choices and outcomes (Terzis et al., 2024; Aloisi, 2024; Cohen & Suzor, 2024). Such work would connect organisational design with institutional context.

8. Conclusions

The central issue is not whether strategic decision-making remains human. The issue is how to structure decision systems when algorithms participate as decision actors (Shrestha et al., 2019; Green & Chen, 2019; Rahman et al., 2025). Human and algorithmic decision processes differ across interpretation, accountability, and scalability. These differences appear in human-dominant, sequential, and aggregated configurations that redistribute agency in distinct ways (Marabelli et al., 2021; Jarrahi et al., 2021; Raji et al., 2022). Well-designed configurations demonstrate that deliberate governance, clear override rules, and empowered oversight can support responsible strategy (CFA Institute, 2021; Hadley et al., 2024; Ganesh et al., 2025). Poorly designed systems obscure responsibility and reinforce inequality (De-Arteaga et al., 2020; Angwin et al., 2016; Raji et al., 2022).
Strategic management research should therefore treat decision structure and distributed agency as core constructs. Practice should specify authority, override rules, and stakeholder contestation. Policy should focus on governance architecture as well as model metrics (Veale et al., 2018; Mushkani, 2025; Cohen & Suzor, 2024; Terzis et al., 2024; Funda, 2025).
Careful design of hybrid systems is necessary if AI is to support, rather than weaken, responsible strategic judgement in an algorithmically mediated economy (Saxena et al., 2021; Kawakami et al., 2024).

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analysed in this study. Data sharing is therefore not applicable to this article.

Acknowledgments

The authors thank colleagues and reviewers for their insightful comments on earlier drafts of this work. Any remaining errors are the authors’ responsibility.

Conflicts of Interest

The authors declare no conflicts of interest. The funders, if any, had no role in the design of the study; in the collection, analyses, or interpretation of the literature; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
  • ADMS – Algorithmic decision-making systems
  • AI – Artificial intelligence
  • AME – Algorithmically mediated enterprise
  • ARB – Algorithm review board
  • BTF – Behavioural theory of the firm
  • BR – Bounded rationality
  • BS – Behavioural strategy
  • ICU – Intensive care unit
  • M&A – Mergers and acquisitions
  • RBV – Resource-based view
  • SDM – Strategic decision-making
  • TMT – Top-management team
  • WoS – Web of Science Core Collection
  • XAI – Explainable artificial intelligence

References

  1. Agard, G.; Roman, C.; Guervilly, C.; Ouladsine, M.; Boyer, L.; Hraiech, S. Improving sepsis prediction in the ICU with explainable artificial intelligence: The promise of Bayesian networks. Journal of Clinical Medicine 2025, 14(18), 6463. [Google Scholar] [CrossRef]
  2. Alasmri, N.; Basahel, S. B. Linking artificial intelligence use to improved decision-making, individual and organizational outcomes. International Business Research 2022, 15(10), 1–13. [Google Scholar] [CrossRef]
  3. Aloisi, A. Regulating algorithmic management at work in the European Union: Data protection, non-discrimination and collective rights. International Journal of Comparative Labour Law and Industrial Relations 2024, 40(1), 37–70. [Google Scholar] [CrossRef]
  4. Alon-Barkat, S.; Busuioc, M. Decision-makers’ processing of AI algorithmic advice: Automation bias versus selective adherence. arXiv 2021, arXiv:2103.02381. [Google Scholar] [CrossRef]
  5. Alon-Barkat, S.; Busuioc, M. Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory 2023, 33(1), 153–169. [Google Scholar] [CrossRef]
  6. Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine bias. ProPublica. May 2016. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  7. Baumeister, R. F. The psychology of irrationality: Why people make foolish, self-defeating choices. In The psychology of economic decisions; Brocas, I., Carrillo, J. D., Eds.; Oxford University Press, 2003; Vol. 1, pp. 3–16. Available online: https://www.econbiz.de/Record/the-psychology-of-irrationality-why-people-make-foolish-self-defeating-choices-baumeister-roy/10001887051.
  8. Benlian, A.; Wiener, M.; Cram, W. A.; Krasnova, H.; Maedche, A.; Möhlmann, M.; Recker, J.; Remus, U. Algorithmic management. Business & Information Systems Engineering 2022, 64(6), 825–839. [Google Scholar] [CrossRef]
  9. Boscoe, B. Creating transparency in algorithmic processes. Delphi: Interdisciplinary Review of Emerging Technologies 2019, 2(1), 12–22. [Google Scholar] [CrossRef]
  10. Bromiley, P.; Rau, D. Behavioral strategic management; Routledge, 2017. [Google Scholar] [CrossRef]
  11. Brown, J. S.; Collins, A.; Duguid, P. Situated cognition and the culture of learning. Educational Researcher 1989, 18(1), 32–42. [Google Scholar] [CrossRef]
  12. Brunsson, N. The organization of hypocrisy: Talk, decisions and actions in organizations; John Wiley & Sons: Chichester, UK, 1989. [Google Scholar]
  13. Büber, H.; Seven, E. Strategic Decision-Making in the AI Era: An Integrated Approach Classical, Adaptive, Resource-Based, and Processual Views. International Journal of Management and Administration 2025, 9(17), 67–97. Available online: https://dergipark.org.tr/en/pub/ijma/issue/90536/1637935. [CrossRef]
  14. Burrell, J. How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society 2016, 3(1), 1–12. [Google Scholar] [CrossRef]
  15. Burridge, N. Artificial intelligence gets a seat in the boardroom: Hong Kong venture capitalist sees AI running Asian companies within 5 years. Nikkei Asian Review, 10 May 2017. [Google Scholar]
  16. Cao, L. T-shaped teams: Organizing to adopt AI and big data at investment firms. CFA Institute Research Foundation, 2021. [Google Scholar] [CrossRef]
  17. Chappidi, S.; Cobbe, J.; Norval, C.; Mazumder, A.; Singh, J. Accountability capture: How record-keeping to support AI transparency and accountability (re)shapes algorithmic oversight. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 2025, 8(1), 554–566. [Google Scholar] [CrossRef]
  18. Charitha, C.; Hemaraju, B. Impact of artificial intelligence on decision-making in organisations. International Journal For Multidisciplinary Research 2023, 5(4), 5172. [Google Scholar] [CrossRef]
  19. Chawande, P. Model risk governance for AI-based compliance systems in investment banking. International Journal of Multidisciplinary Research and Growth Evaluation 2025, 6(3), 2027–2035. [Google Scholar] [CrossRef]
  20. Choi, D.; Lim, M. H.; Kim, K. H.; Shin, S.; Hong, K.; Kim, S. Development of an artificial intelligence bacteremia prediction model and evaluation of its impact on physician predictions focusing on uncertainty. Scientific Reports 2023, 13, 12866. [Google Scholar] [CrossRef]
  21. Coglianese, C.; Lehr, D. Transparency and algorithmic governance. Administrative Law Review 2019, 71(1), 1–56. Available online: https://scholarship.law.upenn.edu/faculty_scholarship/2123/.
  22. Cohen, T.; Suzor, N. P. Contesting the public interest in AI governance. Internet Policy Review 2024, 13(1). [Google Scholar] [CrossRef]
  23. Cristofaro, M.; Bao, Y. J.; Chiu, S.; Hernández-Lara, A. B.; Pérez-Calero, L. Editorial: Affect and cognition in upper echelons’ strategic decision making: Empirical and theoretical studies for advancing corporate governance. Frontiers in Psychology 2023, 13, 1081095. [Google Scholar] [CrossRef]
  24. Cronin, M. A.; George, E. The why and how of the integrative review. Organizational Research Methods 2020, 26(1), 168–192. [Google Scholar] [CrossRef]
  25. Cyert, R. M.; Feigenbaum, E. A.; March, J. G. Models in a behavioral theory of the firm. Behavioral Science 1959, 4(2), 81–95. [Google Scholar] [CrossRef]
  26. Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. October 2018. Available online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
  27. De Dreu, C. K. W.; Nijstad, B. A.; van Knippenberg, D. Motivated information processing in group judgment and decision making. Personality and Social Psychology Review 2008, 12(1), 22–49. [Google Scholar] [CrossRef]
  28. De-Arteaga, M.; Dubrawski, A.; Jeanselme, V.; Chouldechova, A. Leveraging expert consistency to improve algorithmic decision support. Management Science 2025, 71(12), 10465–10485. [Google Scholar] [CrossRef]
  29. De-Arteaga, M.; Fogliato, R.; Chouldechova, A. A case for humans-in-the-loop: Decisions in the presence of erroneous algorithmic scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery, 2020; pp. 1–12. [Google Scholar] [CrossRef]
  30. Dican, L. Human-AI collaboration and its impact on decision-making. International Journal of Multidisciplinary Research and Growth Evaluation 2025, 6(2), 919–923. [Google Scholar] [CrossRef]
  31. Elish, M. C. The stakes of uncertainty: Developing and integrating machine learning in clinical care. Ethnographic Praxis in Industry Conference Proceedings 2018, 2018(1), 364–380. [Google Scholar] [CrossRef]
  32. Enarsson, P.; Klamberg, M.; Nyman-Metcalf, K. Algorithmic accountability and the rule of law. Journal of Cyber Policy 2021, 6(2), 270–288. [Google Scholar]
  33. Enarsson, T.; Enqvist, L.; Naarttijärvi, M. Approaching the human in the loop–legal perspectives on hybrid human/algorithmic decision-making in three contexts. Information & Communications Technology Law 2022, 31(1), 123–153. Available online: https://www.tandfonline.com/doi/abs/10.1080/13600834.2021.1958860.
  34. Falagas, M. E.; Pitsouni, E. I.; Malietzis, G. A.; Pappas, G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: Strengths and weaknesses. International Journal of Medical Informatics 2008, 79(9), 769–776. [Google Scholar] [CrossRef]
  35. Feldman, M. S.; March, J. G. Information in organizations as signal and symbol. Administrative Science Quarterly 1981, 26(2), 171–186. [Google Scholar] [CrossRef]
  36. Funda, V. A systematic review of algorithm auditing processes to assess bias and risks in AI systems. Journal of Infrastructure, Policy and Development 2025, 9(2), 11489. [Google Scholar] [CrossRef]
  37. Ganesh, N. B.; Siddineni, D.; Reddy, V. V.; Lateef, K.; Sharma, R. Corporate governance in the age of AI: Ethical oversight and accountability frameworks. Journal of Information Systems Engineering and Management 2025, 10(1), 959. [Google Scholar] [CrossRef]
  38. Gigerenzer, G.; Reb, J.; Luan, S. Smart heuristics for individuals, teams, and organizations. Annual Review of Organizational Psychology and Organizational Behavior 2022, 9, 171–198. [Google Scholar] [CrossRef]
  39. Gomez, C.; Cho, S. M.; Ke, S.; Huang, C.-M.; Unberath, M. Human–AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. arXiv. 2023. Available online: https://arxiv.org/abs/2310.19778.
  40. Gomez, C.; Cho, S. M.; Ke, S.; Huang, C.-M.; Unberath, M. Human–AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science 2025, 6, 1521066. [Google Scholar] [CrossRef]
  41. Green, B.; Chen, Y. The principles and limits of algorithm-in-the-loop decision making. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW) 2019, 50, 1–24. [Google Scholar] [CrossRef]
  42. Greve, H. R.; Zhang, C. M. Is there a strategic organization in the behavioral theory of the firm? Looking back and looking forward. Strategic Organization 2022, 20(4), 698–708. [Google Scholar] [CrossRef]
  43. Gusenbauer, M.; Haddaway, N. R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods 2020, 11(2), 181–217. [Google Scholar] [CrossRef]
  44. Hadley, E.; Blatecky, A. R.; Comfort, M. L. Investigating algorithm review boards for organizational responsible artificial intelligence governance. AI and Ethics 2024, 5, 2485–2495. [Google Scholar] [CrossRef]
  45. Ibrahim, L.; Collins, K. M.; Kim, S. S. Y.; Reuel, A.; Lamparth, M.; Feng, K.; Ahmad, L.; Soni, P.; El Kattan, A.; Stein, M.; Swaroop, S.; Sucholutsky, I.; Strait, A.; Liao, Q. V.; Bhatt, U. Measuring and mitigating overreliance is necessary for building human-compatible AI. arXiv 2025, arXiv:2509.08010. [Google Scholar] [CrossRef]
  46. Isbah, M. F. Algorithmic exploitation: Understanding labor process and control among ride-hailing platform workers. Sosio e-Kons 2022, 21(2). [Google Scholar] [CrossRef]
  47. Jarrahi, M. H.; Newlands, G.; Lee, M. K.; Wolf, C. T.; Kinder, E.; Sutherland, W. Algorithmic management in a work context. Big Data & Society 2021, 8(2), 1–14. [Google Scholar] [CrossRef]
  48. Jeppesen, L. B.; Lakhani, K. R. Marginality and problem-solving effectiveness in broadcast search. Organization Science 2010, 21(5), 1016–1033. [Google Scholar] [CrossRef]
  49. Kahneman, D. Thinking, fast and slow; Farrar, Straus and Giroux: New York, NY, 2011. [Google Scholar]
  50. Katzenbach, C.; Ulbricht, L. Algorithmic governance. Internet Policy Review 2019, 8(4), 1–18. [Google Scholar] [CrossRef]
  51. Kawakami, A.; Coston, A.; Heidari, H.; Holstein, K.; Zhu, H. Studying up public sector AI: How networks of power relations shape agency decisions around AI design and use. Proceedings of the ACM on Human-Computer Interaction 2024, 8(CSCW2)(Article 450), 1–37. [Google Scholar] [CrossRef]
  52. Kolbjørnsrud, V. Designing the intelligent organization: Six principles for human–AI collaboration. California Management Review 2024, 66(2), 44–64. [Google Scholar] [CrossRef]
  53. Kostick-Quenet, K. M.; Gerke, S. AI in the hands of imperfect users. npj Digital Medicine 2022, 5, 197. [Google Scholar] [CrossRef]
  54. Koulu, R. Human oversight and symbolic control in algorithmic governance. In Life and the law in the era of data-driven agency; Hildebrandt, M., O’Hara, K., Eds.; Edward Elgar: Cheltenham, UK, 2020; pp. 209–231. [Google Scholar]
  55. Koulu, R. Proceduralizing control and discretion: Human oversight in artificial intelligence policy. Maastricht Journal of European and Comparative Law 2020, 27(6), 720–735. [Google Scholar] [CrossRef]
  56. Kovari, A. AI for decision support: Balancing accuracy, transparency, and trust across sectors. Information 2024, 15(11), 725. [Google Scholar] [CrossRef]
  57. Lahoti, Y.; Kalshetti, P.; Anute, N.; Limbore, N. V. AI-enhanced business simulation models for strategic decision-making in uncertain environments. 2025 International Conference on Innovations in Intelligent Systems: Advancements in Computing, Communication, and Cybersecurity (ISAC3); IEEE, 2025; pp. 1–6. [Google Scholar] [CrossRef]
  58. Lambrecht, A.; Tucker, C. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science 2019, 65(7), 2966–2981. [Google Scholar] [CrossRef]
  59. Langley, A.; Mintzberg, H.; Pitcher, P.; Posada, E.; Saint-Macary, J. Opening up decision making: The view from the black box. Organization Science 1995, 6(3), 260–279. [Google Scholar] [CrossRef]
  60. Laux, J. Institutionalised distrust and human oversight of artificial intelligence: Towards a democratic design of AI governance under the European Union AI Act. AI & Society 2024, 39, 2853–2866. [Google Scholar]
  61. Lenders, D.; Pugnana, A.; Pellungrini, R.; Calders, T.; Pedreschi, D.; Giannotti, F. Interpretable and fair mechanisms for abstaining classifiers. arXiv 2025, arXiv:2503.18826. [Google Scholar] [CrossRef]
  62. Lerner, J. S.; Li, Y.; Valdesolo, P.; Kassam, K. S. Emotion and decision making. Annual Review of Psychology 2015, 66, 799–823. [Google Scholar] [CrossRef] [PubMed]
  63. Loi, M.; Spielkamp, M. Towards accountability in the use of artificial intelligence for public administrations. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society; ACM, 2021; pp. 757–766. [Google Scholar] [CrossRef]
  64. Madras, D.; Pitassi, T.; Zemel, R. Predict responsibly: Improving fairness and accuracy by learning to defer. Advances in Neural Information Processing Systems 31 (NeurIPS 2018), 2018. Available online: https://arxiv.org/abs/1711.06664.
  65. Marabelli, M.; Newell, S.; Handunge, V. The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges. Journal of Strategic Information Systems 2021, 30(3), 101683. [Google Scholar] [CrossRef]
  66. Masrani, T. W.; Messier, G.; Voida, A.; Dimitropoulos, G.; He, H. A. Understanding data usage when making high-stakes frontline decisions in homelessness services. Proceedings of the ACM on Human-Computer Interaction 2025, 9(7)(Article CSCW506), 1–32. [Google Scholar] [CrossRef]
  67. Millington, B.; Millington, R. “The datafication of everything”: Toward a sociology of sport and big data. Sociology of Sport Journal 2015, 32(2), 140–160. [Google Scholar] [CrossRef]
  68. Moreira, C.; Palatkina, A.; Braca, D.; Walsh, D. M.; Leihn, P. J.; Chen, F.; Hubig, N. C. Explainable AI systems must be contestable: Here’s how to make it happen. arXiv 2025, arXiv:2506.01662. [Google Scholar] [CrossRef]
  69. Mushkani, R. Right-to-override for critical urban control systems: A deliberative audit method for buildings, power, and transport. arXiv 2025, arXiv:2509.13369. [Google Scholar] [CrossRef]
  70. Natali, C.; Marconi, L.; Dias Duran, L. D.; Cabitza, F. AI-induced deskilling in medicine: A mixed-method review and research agenda for healthcare and beyond. Artificial Intelligence Review 2025, 58, 356. [Google Scholar] [CrossRef]
  71. Nimmy, S. F.; Hussain, O. K.; Chakrabortty, R. K.; Leshob, A. Quantifying the trustworthiness of explainable artificial intelligence outputs in uncertain decision-making scenarios. Engineering Applications of Artificial Intelligence 2025, 141, 109678. [Google Scholar] [CrossRef]
  72. Noble, S. U. Algorithms of oppression: How search engines reinforce racism; New York University Press: New York, NY, 2018. [Google Scholar]
  73. Noorani, S.; Kiyani, S.; Pappas, G. J.; Hassani, H. Human–AI collaborative uncertainty quantification. arXiv. 2025. Available online: https://arxiv.org/abs/2510.23476.
  74. Noti, G.; Donahue, K.; Kleinberg, J.; Oren, S. Ai-assisted decision making with human learning. arXiv. 2025. Available online: https://arxiv.org/abs/2502.13062.
  75. Nwachukwu, P. S.; Chima, O. K.; Okolo, C. H. The artificial intelligence governance framework for finance: A control-by-design approach to algorithmic decision-making in accounting. Finance & Accounting Research Journal 2025, 7(8). [Google Scholar] [CrossRef]
  76. Ogunleye, O. S.; Kalema, B. M. Evaluation of algorithmic management of digital work platforms in developing countries. In Automation and Control; IntechOpen, 2020. [Google Scholar] [CrossRef]
  77. Okonji, P. S.; Fajimolu, O. C.; Onyemaobi, C. A. The role of organizational creativity between artificial intelligence capability and organizational performance. Business and Entrepreneurial Review 2023, 23(1), 157–174. [Google Scholar] [CrossRef]
  78. Olhede, S. C.; Rodrigues, R. Fairness and transparency in the age of the algorithm. Significance 2017, 14(2), 8–9. [Google Scholar] [CrossRef]
  79. Park, S.; Ryoo, S. How does algorithm control affect platform workers’ responses? Algorithm as digital Taylorism. Journal of Theoretical and Applied Electronic Commerce Research 2023, 18(1), 273–288. [Google Scholar] [CrossRef]
  80. Pessach, D.; Shmueli, E. Algorithmic fairness. In arXiv.; 2020. [Google Scholar] [CrossRef]
  81. Powell, T. C.; Lovallo, D.; Fox, C. R. Behavioral strategy. Strategic Management Journal 2011, 32(13), 1369–1386. [Google Scholar] [CrossRef]
  82. Punzi, C.; Pellungrini, R.; Setzu, M.; Giannotti, F.; Pedreschi, D. AI, meet human: Learning paradigms for hybrid decision making systems . arXiv 2024, arXiv:2402.06287. [Google Scholar] [CrossRef]
  83. Rahman, M. A.; Hossain, M. S.; Mintoo, A. A.; Islam, S. A systematic review of intelligent support systems for strategic decision-making using human-AI interaction in enterprise platforms. American Journal of Advanced Technology and Engineering Solutions 2025, 1(1), 506–543. [Google Scholar] [CrossRef]
  84. Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review 2021, 46(1), 192–210. [Google Scholar] [CrossRef]
  85. Raji, I. D.; Xu, P.; Honigsberg, C.; Ho, D. E. Outsider oversight: Designing a third party audit ecosystem for AI governance. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society; ACM, 2022; pp. 557–571. [Google Scholar] [CrossRef]
  86. Rajkomar, A.; Oren, E.; Chen, K.; Dai, A. M.; Hajaj, N.; Hardt, M.; Dean, J. Scalable and accurate deep learning with electronic health records. NPJ Digital Medicine 2018, 1(1), 18. [Google Scholar] [CrossRef] [PubMed]
  87. Ramu, S.; Bansal, P. A study on AI’s Transformative Impact on Strategic Decision-Making. IJSAT-International Journal on Science and Technology 2025, 16(2). Available online: https://www.researchgate.net/profile/Prerana-Bansal/publication/393975203_A_study_on_AI%27s_Transformative_Impact_on_Strategic_Decision-Making/links/688220aaf8031739e60869aa/A-study-on-AIs-Transformative-Impact-on-Strategic-Decision-Making.pdf.
  88. Cyertand James G., Richard M. March, A Behavioral Theory of the Firm Herbert A. Simon, Administrative Behavior; Prentice-Hall: Englewood Cliffs, NJ; Macmillan: London, UK, 1963; pp. 169–187. [Google Scholar]
  89. Robinson, A. P.; Jarrahi, M. H.; Keegan, A.; Meijerink, J. Algorithmic management in limbo: Task-driven interweaving of hierarchy and market management. Human Resource Management 2026, 65(1), 117–131. [Google Scholar] [CrossRef]
  90. Romeo, G.; Conti, D. Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI & Society 41 2025, 259–278. [Google Scholar] [CrossRef]
  91. Sargeant, H.; Jorgensen, M.; Shah, A.; Weller, A.; Bhatt, U. Unequal uncertainty: Rethinking algorithmic interventions for mitigating discrimination from AI . arXiv 2025, arXiv:2508.07872. [Google Scholar] [CrossRef]
  92. Saxena, D.; Badillo-Urquiola, K. A.; Wisniewski, P.; Guha, S. A framework of high-stakes algorithmic decision-making for the public sector developed through a case study of child-welfare. Proceedings of the ACM on Human-Computer Interaction 2021, 5(CSCW2)(Article 287), 1–41. [Google Scholar] [CrossRef]
  93. Shin, D. The effects of explainability and causability on perception, trust, and acceptance of algorithmic decisions. Journal of Behavioral and Experimental Finance 28 2020, 100454. [Google Scholar] [CrossRef]
  94. Shrestha, Y. R.; Ben-Menahem, S. M.; von Krogh, G. Organizational decision-making structures in the age of artificial intelligence. California Management Review 2019, 61(4), 66–83. [Google Scholar] [CrossRef]
  95. Sienkiewicz, Ł. Algorithmic human resources management–perspectives and challenges. Annales Universitatis Mariae Curie-Skłodowska, Sectio H Oeconomia 2021, 55(2), 95–105. Available online: https://www.ceeol.com/search/article-detail?id=997390.
  96. Simon, H. A. Administrative behavior: A study of decision-making processes in administrative organization; Macmillan: New York, NY, 1947. [Google Scholar]
  97. Singh, A.; Gupta, R. From Taylorism to algorithmic management: How digital systems reshape control. International Journal of Latest Technology in Engineering, Management & Applied Science 2025, 14(8), 1688–1695. [Google Scholar] [CrossRef]
  98. Smith, A.; van Wagoner, H. P.; Keplinger, K.; Celebi, C. Navigating AI convergence in human-artificial intelligence teams: A signaling theory approach. In Journal of Organizational Behavior; Advance online publication, 2025. [Google Scholar] [CrossRef]
  99. Snyder, H. Literature review as a research methodology: An overview and guidelines. Journal of Business Research 104 2019, 333–339. [Google Scholar] [CrossRef]
  100. Spera, C.; Agrawal, G. Reversing the Paradigm: Building AI-First Systems with Human Guidance. arXiv. 2025. Available online: https://arxiv.org/abs/2506.12245.
  101. Takayanagi, R.; Takahashi, K.; Sogabe, T. AI-assisted decision-making and risk evaluation in uncertain environment using stochastic inverse reinforcement learning: American football as a case study. Mathematical Problems in Engineering 2022 2022, 4451427. [Google Scholar] [CrossRef]
  102. Terzis, P.; Veale, M.; Gaumann, N. Law and the emerging political economy of algorithmic audits. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency; ACM, 2024; pp. 1255–1267. [Google Scholar] [CrossRef]
  103. Torraco, R. J. Writing integrative literature reviews: Guidelines and examples. Human Resource Development Review 2005, 4(3), 356–367. [Google Scholar] [CrossRef]
  104. Torre, F.; Teigland, R.; Engstam, L. AI leadership and the future of corporate governance: Changing demands for board competence. In The digital transformation of labor: Automation, the gig economy and welfare; Larsson, A., Teigland, R., Eds.; Routledge, 2019; pp. 116–146. [Google Scholar] [CrossRef]
  105. Trunk, A. D.; Birkel, H.; Hartmann, E. On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research 2020, 13(3), 875–919. [Google Scholar] [CrossRef]
  106. Veale, M.; Van Kleek, M.; Binns, R. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18) (Paper; Association for Computing Machinery, 2018; Volume 440, pp. 1–14. [Google Scholar] [CrossRef]
  107. von Krogh, G. Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries 2018, 4(4), 404–409. [Google Scholar] [CrossRef]
  108. Yeung, S.; Rinaldo, F.; Jopling, J.; Liu, B.; Mehra, R.; Downing, N. L.; Guo, M.; Bianconi, G. M.; Alahi, A.; Lee, J.; Campbell, B.; Deru, K.; Beninati, W.; Fei-Fei, L.; Milstein, A. A computer vision system for deep learning-based detection of patient mobilisation activities in the ICU. npj Digital Medicine 2 2019, 11. [Google Scholar] [CrossRef]
  109. Zárate-Torres, R.; Rey-Sarmiento, C. F.; Acosta-Prado, J. C.; Gómez-Cruz, N. A.; Rodríguez Castro, D. Y.; Camargo, J. Influence of Leadership on Human–Artificial Intelligence Collaboration. Behavioral Sciences 2025, 15(7), 873. Available online: https://www.mdpi.com/2076-328X/15/7/873. [CrossRef] [PubMed]
  110. Zerilli, J.; Bhatt, U.; Weller, A. How transparency modulates trust in artificial intelligence. Patterns 2022, 3(4), 100455. [Google Scholar] [CrossRef] [PubMed]
  111. Zerilli, J.; Knott, A.; MacLaurin, J.; Gavaghan, C. Algorithmic decision-making and the control problem. Minds and Machines 2019, 29(4), 555–578. [Google Scholar] [CrossRef]
Table 1. Keyword families used for literature identification.
Keyword family | Example search terms (combined with AND/OR)
Strategic decision-making (SDM) | “strategic decision-making” OR “strategic choice” OR “top management team” OR “board decision*” OR “executive decision*”
Organisational decision structure | “decision structure” OR “decision architecture” OR “delegation” OR “organisational design” OR “coordination”
AI and algorithmic decision systems (ADMS) | “artificial intelligence” OR “machine learning” OR “algorithmic decision*” OR “ADM” OR “decision automation” OR “decision support system*”
Human–AI collaboration | “human-AI” OR “human-in-the-loop” OR “hybrid decision*” OR “augmented decision*” OR “AI delegation”
Governance and accountability | “algorithmic governance” OR “accountability” OR “transparency” OR “explainab*” OR “fairness” OR “audit*” OR “trust”
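For replication, the keyword families in Table 1 combine mechanically into a single database query: terms are OR-joined within each family, and the five family blocks are AND-joined. The sketch below (Python, illustrative only; the term lists are abbreviated, and the authoritative search strings are those in the table) shows this construction:

```python
# Illustrative sketch of the Table 1 search protocol.
# Family term lists are abbreviated; see Table 1 for the full sets.
FAMILIES = {
    "SDM": ['"strategic decision-making"', '"strategic choice"', '"top management team"'],
    "structure": ['"decision structure"', '"decision architecture"', '"delegation"'],
    "ADMS": ['"artificial intelligence"', '"machine learning"', '"algorithmic decision*"'],
    "human_ai": ['"human-AI"', '"human-in-the-loop"', '"hybrid decision*"'],
    "governance": ['"algorithmic governance"', '"accountability"', '"explainab*"'],
}

def build_query(families: dict[str, list[str]]) -> str:
    """OR-join the terms within each family, then AND-join the family blocks."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in families.values()]
    return " AND ".join(blocks)

print(build_query(FAMILIES))
```

The resulting string can be pasted into the advanced-search field of most bibliographic databases; wildcard (`*`) behaviour varies by database and should be checked against each provider's syntax.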
Table 2. Included studies in the theory-building integrative review.
ID | Author(s), Year | Study type / method | Core focus and relevance to the review
S1 | Green & Chen (2019) | Controlled experiments (algorithm-in-the-loop) | Shows that decision-makers mis-calibrate reliance on algorithms; foundational for understanding automation bias and oversight limits in hybrid SDM.
S2 | Koulu (2020) | Conceptual socio-legal analysis | Argues that “human oversight” in algorithmic systems often becomes symbolic, highlighting risks of empty managerial control in hybrid decision structures.
S3 | Enarsson et al. (2022) | Comparative legal / institutional analysis | Shows that hybrid human–algorithm systems blur accountability; links SDM structures to legitimacy and responsibility.
S4 | Noti et al. (2025) | Large-scale behavioural experiments | Demonstrates that timing and framing of AI advice affect performance; underpins the temporal orientation dimension.
S5 | Rahman et al. (2025) | Systematic literature review | Synthesises work on strategic AI systems and positions managers as validators and governors; central for framing strategic agency and governance roles.
S6 | Büber & Seven (2025) | Conceptual integration (strategy theories) | Argues that AI enhances analytical capability but does not remove the need for human contextual judgement; informs complementarity logics.
S7 | Zárate-Torres et al. (2025) | Qualitative literature review | Emphasises leadership as mediator of human–AI interaction; relevant for configuration-level interpretation of agency and control.
S8 | Ramu & Bansal (2025) | Conceptual / industry analysis | Describes shift from intuition-led to data-driven strategy, with humans retaining ethical control; supports human-dominant configuration discussion.
S9 | Thareja (2025) | Mixed-methods dissertation | Practitioner-oriented evidence that AI accelerates SDM but requires oversight for fairness and context sensitivity; illustrates hybrid strategic practice.
S10 | Kovari (2024) | Conceptual and cross-sector analysis of AI-based decision support systems (DSS), drawing on examples across industries | Examines how AI-driven DSS can enhance decision accuracy while maintaining transparency, explainability, and user trust. Highlights design principles for trustworthy, effective human–AI decision support and discusses how ethical standards and visibility requirements shape user acceptance. Relevant as evidence that AI can augment strategic and operational decision-making, provided that system transparency and explainability are explicitly engineered into decision architectures.
S11 | Sienkiewicz (2021) | Critical literature review of theoretical and empirical work on algorithmic human resources management (AHRM) | Reviews how algorithmic management technologies (AI, ML, big data, HR analytics) are being applied in HRM functions such as recruitment, performance management, remuneration and employment relations.
S12 | Vaassen (2022) | Conceptual analysis | Argues that AI opacity undermines personal autonomy and responsible decision-making, relevant for strategic AI governance.
S13 | Spera & Agrawal (2025) | Conceptual organisational analysis | Warns that AI-first organisations may become over-dependent on automation, weakening human skills and oversight capacity.
S14 | Saxena et al. (2021) | Ethnographic case study (child welfare) | Shows how high-stakes ADMS in child welfare interact with discretion and bureaucracy; anchors “augmentation vs automation” at the strategic apex.
S15 | Kawakami et al. (2024) | Elite interviews (public sector AI) | Demonstrates that AI adoption decisions are shaped by power relations and legal pressures, not just technical performance; key for agency redistribution.
S16 | Elish (2018) | Ethnography (clinical ML deployment) | Analyses how clinical and executive actors reinterpret ML outputs through authority structures; core for temporal and accountability dimensions.
S17 | Masrani et al. (2025) | Qualitative fieldwork (homelessness services) | Shows “data-outsourcing continuums” in frontline decisions; illustrates partial delegation and resistance to full automation.
S18 | Marabelli et al. (2021) | Conceptual / lifecycle framework (ADMS) | Develops a lifecycle model of organisational choices around ADMS; central for treating decision structures as designable configurations.
S19 | CFA Institute (2021) | Practice report / case-based analysis | Examines “T-shaped” teams and investment committees using AI; provides concrete examples of human-dominant and aggregated configurations.
S20 | Jarrahi et al. (2021) | Conceptual synthesis (algorithmic management) | Shows how AI reshapes coordination, authority, and information flows; informs redistribution of decision rights and power.
S21 | Smith et al. (2025) | Behavioural experiments (human–AI teams) | Analyses “AI convergence” and voluntary AI advice use; supports arguments about when and how humans defer to algorithmic recommendations.
S22 | Veale et al. (2018) | Empirical / design-needs study | Identifies fairness and accountability design needs in high-stakes public-sector ADMS; important for governance dimensions.
S23 | Gigerenzer et al. (2022) | Conceptual (heuristic decision-making) | Articulates “smart heuristics” for individuals, teams, and organisations; underpins human SDM side of bounded rationality vs analytics.
S24 | Hadley et al. (2024) | Empirical study of ARBs | Investigates algorithm review boards in finance and health; demonstrates conditions under which internal AI governance is substantive vs symbolic.
S25 | Torre et al. (2019) | Conceptual corporate-governance analysis | Argues that boards must develop AI operational and governance capabilities; anchors board-level oversight discussion.
S26 | Funda (2025) | Systematic review of algorithm audits | Reviews algorithm auditing approaches; highlights technical focus and need for organisational and participatory governance.
S27 | De-Arteaga et al. (2020) | Field study (child-maltreatment screening) | Shows that experts sometimes override erroneous algorithmic scores; complicates simple narratives of automation bias.
S28 | Alon-Barkat & Busuioc (2021) | Large-scale experiments | Finds selective adherence rather than blind automation bias; shows importance of stereotypes and framing in AI reliance.
S29 | Alon-Barkat & Busuioc (2020) | Pre-registered experiments | Demonstrates that people selectively adopt AI advice aligned with prior beliefs; deepens behavioural accounts of algorithmic influence.
S30 | De-Arteaga et al. (2021) | Hybrid decision framework / modelling | Uses expert consistency to improve algorithmic decision support; informs design of human-in-the-loop systems.
S31 | Punzi et al. (2024) | Taxonomy / conceptual framework | Proposes learning paradigms for hybrid decision systems (human-in/on/out-of-the-loop); directly supports configuration typology.
S32 | Romeo & Conti (2025) | Systematic review (automation bias) | Synthesises 35 studies on automation bias; clarifies conditions under which over-reliance emerges.
S33 | Zerilli et al. (2022) | Review (trust and transparency in AI) | Argues that both over-reliance and aversion are risks; motivates need for “algorithmic vigilance” in strategic settings.
S34 | Ibrahim et al. (2025) | Conceptual + measurement framework | Proposes metrics for over-reliance and human-compatible AI; supports governance recommendations on monitoring reliance.
S35 | Natali et al. (2025) | Mixed-method review (medicine) | Shows AI-induced deskilling in clinical decision-making; generalises to concerns about strategic deskilling.
S36 | Kostick-Quenet & Gerke (2022) | Behavioural-economics perspective | Discusses how user biases shape AI reliance and how interfaces can nudge critical engagement.
S37 | Zerilli et al. (2019) | Conceptual “control problem” analysis | Frames algorithmic decision-making as a human–machine control loop; emphasises complacency and diffidence risks.
S38 | Sargeant et al. (2025) | Experimental + legal analysis | Shows how selective abstention and friction reshape reliance and discrimination risk; informs override/abstention design.
S39 | Raji et al. (2022) | Institutional design / conceptual + cases | Proposes “outsider oversight” and third-party audit ecosystems; central for accountability beyond the firm.
S40 | Terzis et al. (2024) | Legal / political-economy analysis | Examines regulatory audit mandates and risks of audit capture; links firm-level structures to regulation.
S41 | Nwachukwu et al. (2025) | Conceptual AI-governance framework (finance) | Advocates “control-by-design” with embedded explainability and audit trails; informs replicability and traceability dimensions.
S42 | Chawande (2025) | Model-risk governance case (investment banking) | Details lifecycle governance for AI compliance systems; illustrates structural integration of validation and oversight.
S43 | Mushkani (2025) | Design / policy analysis (urban control systems) | Designs right-to-override and safe fallback states; provides concrete override architecture for high-stakes systems.
S44 | Moreira et al. (2025) | Conceptual framework (contestability) | Defines contestability for XAI systems and proposes criteria for operationalising it.
S45 | Cohen & Suzor (2024) | Legal / institutional analysis | Argues that public interest in AI requires contestation channels, separation of powers, and independent information access.
S46 | Loi & Spielkamp (2021) | Governance / delegation analysis | Discusses accountability in public-sector AI and imperfect delegation; underpins need for clear accountability chains.
S47 | Laux (2023) | Oversight theory (institutionalised distrust) | Distinguishes constitutive vs corrective oversight; emphasises overseer fallibility in human oversight of AI.
S48 | Chappidi et al. (2025) | Empirical study (record-keeping & oversight) | Shows how transparency and record-keeping reshape oversight practices and can generate accountability capture.
S49 | Ganesh et al. (2025) | Corporate-governance framework | Proposes integrated board-level AI governance with ethics committees and risk-management linkages.
S50 | Benlian et al. (2022) | Conceptual analysis (algorithmic management) | Shows how algorithms automate coordination and control in platform work, recentring authority in system design.
S51 | Robinson et al. (2025) | Empirical platform-work study | Analyses “algorithmic management in limbo”, showing dynamic calibration of hierarchy vs autonomy; relevant for power in hybrid systems.
S52 | Park & Ryoo (2023) | Empirical study (food-delivery platforms) | Describes algorithmic control as “digital Taylorism”, standardising discretion and intensifying monitoring.
S53 | Ogunleye & Kalema (2020) | Empirical evaluation (ride-hailing) | Documents “algorithmic despotism” on platforms in developing countries; illustrates centralisation of power in algorithms.
S54 | Isbah (2022) | Empirical case studies (ride-hailing) | Explores algorithmic exploitation and asymmetric power; informs arguments about hidden centralisation of strategic control.
S55 | Aloisi (2024) | Legal / labour-regulation analysis (EU) | Examines how algorithmic management intensifies employer discretion and how EU law seeks to rebalance power.
S56 | Singh & Gupta (2025) | Theoretical synthesis | Traces continuity from Taylorism to algorithmic management, showing how efficiency logics migrate into software.
S57 | Choi et al. (2023) | Clinical study (AI bacteremia prediction) | Shows AI improves decisions when model uncertainty is low and physician uncertainty high; illustrates task boundary conditions.
S58 | Takayanagi et al. (2022) | Experimental / inverse reinforcement learning framework | Demonstrates conditions under which AI outperforms experts in stochastic environments with well-defined rewards.
S59 | Nimmy et al. (2025) | Risk-management study | Proposes methods to quantify trustworthiness of XAI outputs; relevant for uncertainty and explainability alignment.
S60 | Lenders et al. (2025) | Algorithm design (abstaining classifier) | Develops interpretable and fair abstaining classifiers; concrete mechanism for safe abstention in high-stakes decisions.
S61 | Agard et al. (2025) | Clinical Bayesian-network study (sepsis) | Shows probabilistic, transparent models can improve decisions in ambiguous intensive-care environments.
S62 | Lahoti et al. (2025) | AI-enhanced business simulation | Demonstrates AI-supported scenario modelling for strategic decisions under uncertainty; links simulation to strategic SDM.
Table 3. Comparative dimensions of human and algorithmic strategic decision-making (SDM).
Dimension | Human SDM (top-management teams, boards) | Algorithmic SDM (AI / ADMS)
Interpretive authority | Contextual, narrative, politically negotiated; rich but biased | Model-based, data-bound, probabilistic; consistent but often opaque
Decision search-space structure | Tolerates ambiguity, conflicting goals, evolving frames | Requires explicit objectives, labels, constraints; hides design choices in metrics
Temporal orientation | Claims long-term, imaginative framing but pulled by short-term incentives | Extrapolative within chosen horizon; sensitive to structural breaks and data regimes
Accountability and traceability | Narratively explainable but vulnerable to blame shifting and organised hypocrisy | Log- and code-based traceability but complex accountability chains and potential audit capture
Replicability and scalability | Low replicability, bounded capacity, context sensitivity | High replicability and scale under stable conditions, but risk of large-scale consistent error
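The accountability row of Table 3 contrasts narrative explanation with log- and code-based traceability. A minimal sketch of what such a decision-log entry might record is shown below; the field names are our illustration of the general idea, not a standard drawn from the reviewed studies:

```python
# Illustrative audit-log entry for an algorithmically mediated decision.
# Field names are hypothetical; real schemas are organisation-specific.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    model_version: str   # which model/code version produced the recommendation
    inputs_hash: str     # fingerprint of the input data the model saw
    recommendation: str  # what the algorithm proposed
    human_action: str    # "accepted", "overridden", or "abstained"
    rationale: str       # free-text justification, required when overriding
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    decision_id="D-001",
    model_version="risk-model-2.3",
    inputs_hash="sha256:...",
    recommendation="decline",
    human_action="overridden",
    rationale="Client context not reflected in training data.",
)
print(asdict(record)["human_action"])  # prints: overridden
```

The point of such a record is the pairing of machine traceability (model version, input fingerprint) with human accountability (action and rationale), which is precisely where Table 3 locates the contrast between the two decision modes.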
Table 4. Mapping dimensions to strategic decision-making structures.
Structure | Dimension profile (high/low) | Typical applications | Salient risks
Human-dominant (AI advisory) | High human interpretive authority; loose search; mixed temporal orientation; narrative accountability; low scale | Boards, M&A, strategy offsites | Automation complacency; rhetorical AI; weak challenge
Sequential AI-to-human | Tight search; high scale; fast initial screening; human narrative accountability | Loan origination, triage, innovation contests | Omission errors; hidden bias; illusory oversight
Sequential human-to-AI | Human framing; small alternative set; intensive algorithmic optimisation | Sports analytics, ICU monitoring, risk optimisation | Deskilling; over-reliance; weak organisational learning
Aggregated human–AI governance | Parallel decisions; mixed interpretive authority; explicit aggregation; partial replicability | Investment committees, ARBs, strategic risk boards | Responsibility diffusion; weighting politics; governance opacity
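The aggregated human–AI configuration in Table 4 presupposes an explicit aggregation rule. One minimal sketch, assuming committee members and a model each produce scores in [0, 1] that are combined by a fixed weight (the weight and threshold below are illustrative parameters, not values prescribed by the reviewed studies):

```python
def aggregate_votes(human_scores: list[float], ai_score: float,
                    ai_weight: float = 0.3, threshold: float = 0.5) -> bool:
    """Combine committee members' scores with one AI score by a fixed weight.

    Making ai_weight an explicit parameter is the point of the aggregated
    structure: the algorithm's influence becomes a visible, contestable
    design choice rather than an implicit default.
    """
    human_avg = sum(human_scores) / len(human_scores)
    combined = (1 - ai_weight) * human_avg + ai_weight * ai_score
    return combined >= threshold

# Three members lean towards approval (avg 0.7); the model is sceptical (0.2).
# Combined: 0.7 * 0.7 + 0.3 * 0.2 = 0.55, so the proposal still passes.
print(aggregate_votes([0.7, 0.6, 0.8], ai_score=0.2))  # prints: True
```

Even this toy rule surfaces the "weighting politics" risk listed in Table 4: whoever sets ai_weight and threshold effectively holds a share of the decision rights, which is why the framework treats aggregation parameters as governance objects in their own right.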
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.