Submitted: 22 February 2026
Posted: 23 February 2026
Abstract
Keywords:
1. Introduction
1.1. From Human Judgement to Decision Architecture
1.2. AI Enters SDM as a New Decision Actor, Not Only a Capability
1.3. Governance Is Necessary but Does Not Fully Address Structure
1.4. The Design Problem and Review Gap
1.5. Research Question
1.6. Contribution and Approach
2. Materials and Methods
2.1. Review Approach
2.2. Analytical Strategy
2.3. Search Strategy, Screening and Data Sources
3. Results
3.1. Overview of Included Studies
3.2. Comparing Human and Algorithmic Strategic Decision-Making
Interpretive Authority
Decision Search-Space Structure
Temporal Orientation
Accountability and Traceability
Replicability and Scalability
4. Strategic Decision-Making Structures in the Age of AI
- Human-dominant structures, where AI plays an advisory role.
- Sequential hybrid structures, where AI and humans act in ordered stages.
- Aggregated human–AI governance, where human and algorithmic inputs are combined through explicit aggregation rules.
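For the third configuration, what an "explicit aggregation rule" might look like can be sketched in a few lines. This is a purely illustrative toy, not a rule drawn from the reviewed studies: it assumes human and algorithmic assessments expressed as scores in [0, 1], a fixed weighting, and a divergence threshold beyond which human judgement prevails (all parameter names and values hypothetical).

```python
# Illustrative aggregation rule combining a human and an algorithmic score.
# The weight and override threshold are hypothetical, chosen for illustration.

def aggregate(human_score: float, ai_score: float,
              human_weight: float = 0.6,
              override_gap: float = 0.5) -> float:
    """Weighted average of the two scores, except that when the two
    assessments diverge by more than `override_gap`, the human judgement
    prevails outright (a simple human-primacy rule)."""
    if abs(human_score - ai_score) > override_gap:
        return human_score
    return human_weight * human_score + (1 - human_weight) * ai_score
```

A rule of this kind makes the locus of authority inspectable: the weight encodes the default balance between actors, while the override clause encodes when the configuration reverts to a human-dominant structure.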
4.1. Human-Dominant Strategic Structures: AI as Advisory Input
4.2. Sequential Hybrid Strategic Structures
AI-to-Human Sequences
Human-to-AI Sequences
4.3. Aggregated Human–AI Strategic Governance
5. The Reconfiguration of Strategic Agency
6. Implications
6.1. Implications for Strategic Management Theory
6.3. Implications for Governance and Policy
7. Future Research
8. Conclusions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
The following abbreviations are used in this manuscript:
References
- Agard, G.; Roman, C.; Guervilly, C.; Ouladsine, M.; Boyer, L.; Hraiech, S. Improving sepsis prediction in the ICU with explainable artificial intelligence: The promise of Bayesian networks. Journal of Clinical Medicine 2025, 14(18), 6463. [Google Scholar] [CrossRef]
- Alasmri, N.; Basahel, S. B. Linking artificial intelligence use to improved decision-making, individual and organizational outcomes. International Business Research 2022, 15(10), 1–13. [Google Scholar] [CrossRef]
- Aloisi, A. Regulating algorithmic management at work in the European Union: Data protection, non-discrimination and collective rights. International Journal of Comparative Labour Law and Industrial Relations 2024, 40(1), 37–70. [Google Scholar] [CrossRef]
- Alon-Barkat, S.; Busuioc, M. Decision-makers’ processing of AI algorithmic advice: Automation bias versus selective adherence. arXiv 2021, arXiv:2103.02381. [Google Scholar] [CrossRef]
- Alon-Barkat, S.; Busuioc, M. Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory 2023, 33(1), 153–169. [Google Scholar] [CrossRef]
- Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine bias. ProPublica. May 2016. Available online: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
- Baumeister, R. F. The psychology of irrationality: Why people make foolish, self-defeating choices. In The psychology of economic decisions; Brocas, I., Carrillo, J. D., Eds.; Oxford University Press, 2003; Vol. 1, pp. 3–16. Available online: https://www.econbiz.de/Record/the-psychology-of-irrationality-why-people-make-foolish-self-defeating-choices-baumeister-roy/10001887051.
- Benlian, A.; Wiener, M.; Cram, W. A.; Krasnova, H.; Maedche, A.; Möhlmann, M.; Recker, J.; Remus, U. Algorithmic management. Business & Information Systems Engineering 2022, 64(6), 825–839. [Google Scholar] [CrossRef]
- Boscoe, B. Creating transparency in algorithmic processes. Delphi: Interdisciplinary Review of Emerging Technologies 2019, 2(1), 12–22. [Google Scholar] [CrossRef]
- Bromiley, P.; Rau, D. Behavioral strategic management; Routledge, 2017. [Google Scholar] [CrossRef]
- Brown, J. S.; Collins, A.; Duguid, P. Situated cognition and the culture of learning. Educational Researcher 1989, 18(1), 32–42. [Google Scholar] [CrossRef]
- Brunsson, N. The organization of hypocrisy: Talk, decisions and actions in organizations; John Wiley & Sons: Chichester, UK, 1989. [Google Scholar]
- Büber, H.; Seven, E. Strategic Decision-Making in the AI Era: An Integrated Approach to Classical, Adaptive, Resource-Based, and Processual Views. International Journal of Management and Administration 2025, 9(17), 67–97. Available online: https://dergipark.org.tr/en/pub/ijma/issue/90536/1637935. [CrossRef]
- Burrell, J. How the machine “thinks”: Understanding opacity in machine learning algorithms. Big Data & Society 2016, 3(1), 1–12. [Google Scholar] [CrossRef]
- Burridge, N. Artificial intelligence gets a seat in the boardroom: Hong Kong venture capitalist sees AI running Asian companies within 5 years. Nikkei Asian Review, 10 May 2017. [Google Scholar]
- Cao, L. T-shaped teams: Organizing to adopt AI and big data at investment firms; CFA Institute Research Foundation, 2021. [Google Scholar] [CrossRef]
- Chappidi, S.; Cobbe, J.; Norval, C.; Mazumder, A.; Singh, J. Accountability capture: How record-keeping to support AI transparency and accountability (re)shapes algorithmic oversight. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 2025, 8(1), 554–566. [Google Scholar] [CrossRef]
- Charitha, C.; Hemaraju, B. Impact of artificial intelligence on decision-making in organisations. International Journal For Multidisciplinary Research 2023, 5(4), 5172. [Google Scholar] [CrossRef]
- Chawande, P. Model risk governance for AI-based compliance systems in investment banking. International Journal of Multidisciplinary Research and Growth Evaluation 2025, 6(3), 2027–2035. [Google Scholar] [CrossRef]
- Choi, D.; Lim, M. H.; Kim, K. H.; Shin, S.; Hong, K.; Kim, S. Development of an artificial intelligence bacteremia prediction model and evaluation of its impact on physician predictions focusing on uncertainty. Scientific Reports 2023, 13, 12866. [Google Scholar] [CrossRef]
- Coglianese, C.; Lehr, D. Transparency and algorithmic governance. Administrative Law Review 2019, 71(1), 1–56. Available online: https://scholarship.law.upenn.edu/faculty_scholarship/2123/.
- Cohen, T.; Suzor, N. P. Contesting the public interest in AI governance. Internet Policy Review 2024, 13(1). [Google Scholar] [CrossRef]
- Cristofaro, M.; Bao, Y. J.; Chiu, S.; Hernández-Lara, A. B.; Pérez-Calero, L. Editorial: Affect and cognition in upper echelons’ strategic decision making: Empirical and theoretical studies for advancing corporate governance. Frontiers in Psychology 2023, 13, 1081095. [Google Scholar] [CrossRef]
- Cronin, M. A.; George, E. The why and how of the integrative review. Organizational Research Methods 2020, 26(1), 168–192. [Google Scholar] [CrossRef]
- Cyert, R. M.; Feigenbaum, E. A.; March, J. G. Models in a behavioral theory of the firm. Behavioral Science 1959, 4(2), 81–95. [Google Scholar] [CrossRef]
- Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. October 2018. Available online: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
- De Dreu, C. K. W.; Nijstad, B. A.; van Knippenberg, D. Motivated information processing in group judgment and decision making. Personality and Social Psychology Review 2008, 12(1), 22–49. [Google Scholar] [CrossRef]
- De-Arteaga, M.; Dubrawski, A.; Jeanselme, V.; Chouldechova, A. Leveraging expert consistency to improve algorithmic decision support. Management Science 2025, 71(12), 10465–10485. [Google Scholar] [CrossRef]
- De-Arteaga, M.; Fogliato, R.; Chouldechova, A. A case for humans-in-the-loop: Decisions in the presence of erroneous algorithmic scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery, 2020; pp. 1–12. [Google Scholar] [CrossRef]
- Dican, L. Human-AI collaboration and its impact on decision-making. International Journal of Multidisciplinary Research and Growth Evaluation 2025, 6(2), 919–923. [Google Scholar] [CrossRef]
- Elish, M. C. The stakes of uncertainty: Developing and integrating machine learning in clinical care. Ethnographic Praxis in Industry Conference Proceedings 2018, 2018(1), 364–380. [Google Scholar] [CrossRef]
- Enarsson, P.; Klamberg, M.; Nyman-Metcalf, K. Algorithmic accountability and the rule of law. Journal of Cyber Policy 2021, 6(2), 270–288. [Google Scholar]
- Enarsson, T.; Enqvist, L.; Naarttijärvi, M. Approaching the human in the loop–legal perspectives on hybrid human/algorithmic decision-making in three contexts. Information & Communications Technology Law 2022, 31(1), 123–153. Available online: https://www.tandfonline.com/doi/abs/10.1080/13600834.2021.1958860.
- Falagas, M. E.; Pitsouni, E. I.; Malietzis, G. A.; Pappas, G. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: Strengths and weaknesses. International Journal of Medical Informatics 2008, 79(9), 769–776. [Google Scholar] [CrossRef]
- Feldman, M. S.; March, J. G. Information in organizations as signal and symbol. Administrative Science Quarterly 1981, 26(2), 171–186. [Google Scholar] [CrossRef]
- Funda, V. A systematic review of algorithm auditing processes to assess bias and risks in AI systems. Journal of Infrastructure, Policy and Development 2025, 9(2), 11489. [Google Scholar] [CrossRef]
- Ganesh, N. B.; Siddineni, D.; Reddy, V. V.; Lateef, K.; Sharma, R. Corporate governance in the age of AI: Ethical oversight and accountability frameworks. Journal of Information Systems Engineering and Management 2025, 10(1), 959. [Google Scholar] [CrossRef]
- Gigerenzer, G.; Reb, J.; Luan, S. Smart heuristics for individuals, teams, and organizations. Annual Review of Organizational Psychology and Organizational Behavior 2022, 9, 171–198. [Google Scholar] [CrossRef]
- Gomez, C.; Cho, S. M.; Ke, S.; Huang, C.-M.; Unberath, M. Human–AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. arXiv. 2023. Available online: https://arxiv.org/abs/2310.19778.
- Gomez, C.; Cho, S. M.; Ke, S.; Huang, C.-M.; Unberath, M. Human–AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review. Frontiers in Computer Science 2025, 6, 1521066. [Google Scholar] [CrossRef]
- Green, B.; Chen, Y. The principles and limits of algorithm-in-the-loop decision making. Proceedings of the ACM on Human-Computer Interaction 2019, 3(CSCW), Article 50, 1–24. [Google Scholar] [CrossRef]
- Greve, H. R.; Zhang, C. M. Is there a strategic organization in the behavioral theory of the firm? Looking back and looking forward. Strategic Organization 2022, 20(4), 698–708. [Google Scholar] [CrossRef]
- Gusenbauer, M.; Haddaway, N. R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods 2020, 11(2), 181–217. [Google Scholar] [CrossRef]
- Hadley, E.; Blatecky, A. R.; Comfort, M. L. Investigating algorithm review boards for organizational responsible artificial intelligence governance. AI and Ethics 2024, 5, 2485–2495. [Google Scholar] [CrossRef]
- Ibrahim, L.; Collins, K. M.; Kim, S. S. Y.; Reuel, A.; Lamparth, M.; Feng, K.; Ahmad, L.; Soni, P.; El Kattan, A.; Stein, M.; Swaroop, S.; Sucholutsky, I.; Strait, A.; Liao, Q. V.; Bhatt, U. Measuring and mitigating overreliance is necessary for building human-compatible AI. arXiv 2025, arXiv:2509.08010. [Google Scholar] [CrossRef]
- Isbah, M. F. Algorithmic exploitation: Understanding labor process and control among ride-hailing platform workers. Sosio e-Kons 2022, 21(2). [Google Scholar] [CrossRef]
- Jarrahi, M. H.; Newlands, G.; Lee, M. K.; Wolf, C. T.; Kinder, E.; Sutherland, W. Algorithmic management in a work context. Big Data & Society 2021, 8(2), 1–14. [Google Scholar] [CrossRef]
- Jeppesen, L. B.; Lakhani, K. R. Marginality and problem-solving effectiveness in broadcast search. Organization Science 2010, 21(5), 1016–1033. [Google Scholar] [CrossRef]
- Kahneman, D. Thinking, fast and slow; Farrar, Straus and Giroux: New York, NY, 2011. [Google Scholar]
- Katzenbach, C.; Ulbricht, L. Algorithmic governance. Internet Policy Review 2019, 8(4), 1–18. [Google Scholar] [CrossRef]
- Kawakami, A.; Coston, A.; Heidari, H.; Holstein, K.; Zhu, H. Studying up public sector AI: How networks of power relations shape agency decisions around AI design and use. Proceedings of the ACM on Human-Computer Interaction 2024, 8(CSCW2), Article 450, 1–37. [Google Scholar] [CrossRef]
- Kolbjørnsrud, V. Designing the intelligent organization: Six principles for human–AI collaboration. California Management Review 2024, 66(2), 44–64. [Google Scholar] [CrossRef]
- Kostick-Quenet, K. M.; Gerke, S. AI in the hands of imperfect users. npj Digital Medicine 2022, 5, 197. [Google Scholar] [CrossRef]
- Koulu, R. Human oversight and symbolic control in algorithmic governance. In Life and the law in the era of data-driven agency; Hildebrandt, M., O’Hara, K., Eds.; Edward Elgar: Cheltenham, UK, 2020; pp. 209–231. [Google Scholar]
- Koulu, R. Proceduralizing control and discretion: Human oversight in artificial intelligence policy. Maastricht Journal of European and Comparative Law 2020, 27(6), 720–735. [Google Scholar] [CrossRef]
- Kovari, A. AI for decision support: Balancing accuracy, transparency, and trust across sectors. Information 2024, 15(11), 725. [Google Scholar] [CrossRef]
- Lahoti, Y.; Kalshetti, P.; Anute, N.; Limbore, N. V. AI-enhanced business simulation models for strategic decision-making in uncertain environments. 2025 International Conference on Innovations in Intelligent Systems: Advancements in Computing, Communication, and Cybersecurity (ISAC3); IEEE, 2025; pp. 1–6. [Google Scholar] [CrossRef]
- Lambrecht, A.; Tucker, C. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science 2019, 65(7), 2966–2981. [Google Scholar] [CrossRef]
- Langley, A.; Mintzberg, H.; Pitcher, P.; Posada, E.; Saint-Macary, J. Opening up decision making: The view from the black box. Organization Science 1995, 6(3), 260–279. [Google Scholar] [CrossRef]
- Laux, J. Institutionalised distrust and human oversight of artificial intelligence: Towards a democratic design of AI governance under the European Union AI Act. AI & Society 2024, 39, 2853–2866. [Google Scholar]
- Lenders, D.; Pugnana, A.; Pellungrini, R.; Calders, T.; Pedreschi, D.; Giannotti, F. Interpretable and fair mechanisms for abstaining classifiers. arXiv 2025, arXiv:2503.18826. [Google Scholar] [CrossRef]
- Lerner, J. S.; Li, Y.; Valdesolo, P.; Kassam, K. S. Emotion and decision making. Annual Review of Psychology 2015, 66, 799–823. [Google Scholar] [CrossRef] [PubMed]
- Loi, M.; Spielkamp, M. Towards accountability in the use of artificial intelligence for public administrations. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society; ACM, 2021; pp. 757–766. [Google Scholar] [CrossRef]
- Madras, D.; Pitassi, T.; Zemel, R. Predict responsibly: Improving fairness and accuracy by learning to defer. In Advances in Neural Information Processing Systems 31 (NeurIPS 2018); 2018; Available online: https://arxiv.org/abs/1711.06664.
- Marabelli, M.; Newell, S.; Handunge, V. The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges. Journal of Strategic Information Systems 2021, 30(3), 101683. [Google Scholar] [CrossRef]
- Masrani, T. W.; Messier, G.; Voida, A.; Dimitropoulos, G.; He, H. A. Understanding data usage when making high-stakes frontline decisions in homelessness services. Proceedings of the ACM on Human-Computer Interaction 2025, 9(7), Article CSCW506, 1–32. [Google Scholar] [CrossRef]
- Millington, B.; Millington, R. “The datafication of everything”: Toward a sociology of sport and big data. Sociology of Sport Journal 2015, 32(2), 140–160. [Google Scholar] [CrossRef]
- Moreira, C.; Palatkina, A.; Braca, D.; Walsh, D. M.; Leihn, P. J.; Chen, F.; Hubig, N. C. Explainable AI systems must be contestable: Here’s how to make it happen. arXiv 2025, arXiv:2506.01662. [Google Scholar] [CrossRef]
- Mushkani, R. Right-to-override for critical urban control systems: A deliberative audit method for buildings, power, and transport. arXiv 2025, arXiv:2509.13369. [Google Scholar] [CrossRef]
- Natali, C.; Marconi, L.; Dias Duran, L. D.; Cabitza, F. AI-induced deskilling in medicine: A mixed-method review and research agenda for healthcare and beyond. Artificial Intelligence Review 2025, 58, 356. [Google Scholar] [CrossRef]
- Nimmy, S. F.; Hussain, O. K.; Chakrabortty, R. K.; Leshob, A. Quantifying the trustworthiness of explainable artificial intelligence outputs in uncertain decision-making scenarios. Engineering Applications of Artificial Intelligence 2025, 141, 109678. [Google Scholar] [CrossRef]
- Noble, S. U. Algorithms of oppression: How search engines reinforce racism; New York University Press: New York, NY, 2018. [Google Scholar]
- Noorani, S.; Kiyani, S.; Pappas, G. J.; Hassani, H. Human–AI collaborative uncertainty quantification. arXiv. 2025. Available online: https://arxiv.org/abs/2510.23476.
- Noti, G.; Donahue, K.; Kleinberg, J.; Oren, S. AI-assisted decision making with human learning. arXiv. 2025. Available online: https://arxiv.org/abs/2502.13062.
- Nwachukwu, P. S.; Chima, O. K.; Okolo, C. H. The artificial intelligence governance framework for finance: A control-by-design approach to algorithmic decision-making in accounting. Finance & Accounting Research Journal 2025, 7(8). [Google Scholar] [CrossRef]
- Ogunleye, O. S.; Kalema, B. M. Evaluation of algorithmic management of digital work platforms in developing countries. In Automation and Control; IntechOpen, 2020. [Google Scholar] [CrossRef]
- Okonji, P. S.; Fajimolu, O. C.; Onyemaobi, C. A. The role of organizational creativity between artificial intelligence capability and organizational performance. Business and Entrepreneurial Review 2023, 23(1), 157–174. [Google Scholar] [CrossRef]
- Olhede, S. C.; Rodrigues, R. Fairness and transparency in the age of the algorithm. Significance 2017, 14(2), 8–9. [Google Scholar] [CrossRef]
- Park, S.; Ryoo, S. How does algorithm control affect platform workers’ responses? Algorithm as digital Taylorism. Journal of Theoretical and Applied Electronic Commerce Research 2023, 18(1), 273–288. [Google Scholar] [CrossRef]
- Pessach, D.; Shmueli, E. Algorithmic fairness. arXiv 2020. [Google Scholar] [CrossRef]
- Powell, T. C.; Lovallo, D.; Fox, C. R. Behavioral strategy. Strategic Management Journal 2011, 32(13), 1369–1386. [Google Scholar] [CrossRef]
- Punzi, C.; Pellungrini, R.; Setzu, M.; Giannotti, F.; Pedreschi, D. AI, meet human: Learning paradigms for hybrid decision making systems. arXiv 2024, arXiv:2402.06287. [Google Scholar] [CrossRef]
- Rahman, M. A.; Hossain, M. S.; Mintoo, A. A.; Islam, S. A systematic review of intelligent support systems for strategic decision-making using human-AI interaction in enterprise platforms. American Journal of Advanced Technology and Engineering Solutions 2025, 1(1), 506–543. [Google Scholar] [CrossRef]
- Raisch, S.; Krakowski, S. Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review 2021, 46(1), 192–210. [Google Scholar] [CrossRef]
- Raji, I. D.; Xu, P.; Honigsberg, C.; Ho, D. E. Outsider oversight: Designing a third party audit ecosystem for AI governance. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society; ACM, 2022; pp. 557–571. [Google Scholar] [CrossRef]
- Rajkomar, A.; Oren, E.; Chen, K.; Dai, A. M.; Hajaj, N.; Hardt, M.; Dean, J. Scalable and accurate deep learning with electronic health records. NPJ Digital Medicine 2018, 1(1), 18. [Google Scholar] [CrossRef] [PubMed]
- Ramu, S.; Bansal, P. A study on AI’s transformative impact on strategic decision-making. IJSAT-International Journal on Science and Technology 2025, 16(2). Available online: https://www.researchgate.net/profile/Prerana-Bansal/publication/393975203_A_study_on_AI%27s_Transformative_Impact_on_Strategic_Decision-Making/links/688220aaf8031739e60869aa/A-study-on-AIs-Transformative-Impact-on-Strategic-Decision-Making.pdf.
- Cyert, R. M.; March, J. G. A behavioral theory of the firm; Prentice-Hall: Englewood Cliffs, NJ, 1963; pp. 169–187. [Google Scholar]
- Robinson, A. P.; Jarrahi, M. H.; Keegan, A.; Meijerink, J. Algorithmic management in limbo: Task-driven interweaving of hierarchy and market management. Human Resource Management 2026, 65(1), 117–131. [Google Scholar] [CrossRef]
- Romeo, G.; Conti, D. Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI & Society 2025, 41, 259–278. [Google Scholar] [CrossRef]
- Sargeant, H.; Jorgensen, M.; Shah, A.; Weller, A.; Bhatt, U. Unequal uncertainty: Rethinking algorithmic interventions for mitigating discrimination from AI. arXiv 2025, arXiv:2508.07872. [Google Scholar] [CrossRef]
- Saxena, D.; Badillo-Urquiola, K. A.; Wisniewski, P.; Guha, S. A framework of high-stakes algorithmic decision-making for the public sector developed through a case study of child-welfare. Proceedings of the ACM on Human-Computer Interaction 2021, 5(CSCW2), Article 287, 1–41. [Google Scholar] [CrossRef]
- Shin, D. The effects of explainability and causability on perception, trust, and acceptance of algorithmic decisions. Journal of Behavioral and Experimental Finance 2020, 28, 100454. [Google Scholar] [CrossRef]
- Shrestha, Y. R.; Ben-Menahem, S. M.; von Krogh, G. Organizational decision-making structures in the age of artificial intelligence. California Management Review 2019, 61(4), 66–83. [Google Scholar] [CrossRef]
- Sienkiewicz, Ł. Algorithmic human resources management–perspectives and challenges. Annales Universitatis Mariae Curie-Skłodowska, Sectio H Oeconomia 2021, 55(2), 95–105. Available online: https://www.ceeol.com/search/article-detail?id=997390.
- Simon, H. A. Administrative behavior: A study of decision-making processes in administrative organization; Macmillan: New York, NY, 1947. [Google Scholar]
- Singh, A.; Gupta, R. From Taylorism to algorithmic management: How digital systems reshape control. International Journal of Latest Technology in Engineering, Management & Applied Science 2025, 14(8), 1688–1695. [Google Scholar] [CrossRef]
- Smith, A.; van Wagoner, H. P.; Keplinger, K.; Celebi, C. Navigating AI convergence in human-artificial intelligence teams: A signaling theory approach. Journal of Organizational Behavior 2025, advance online publication. [Google Scholar] [CrossRef]
- Snyder, H. Literature review as a research methodology: An overview and guidelines. Journal of Business Research 2019, 104, 333–339. [Google Scholar] [CrossRef]
- Spera, C.; Agrawal, G. Reversing the Paradigm: Building AI-First Systems with Human Guidance. arXiv. 2025. Available online: https://arxiv.org/abs/2506.12245.
- Takayanagi, R.; Takahashi, K.; Sogabe, T. AI-assisted decision-making and risk evaluation in uncertain environment using stochastic inverse reinforcement learning: American football as a case study. Mathematical Problems in Engineering 2022, 4451427. [Google Scholar] [CrossRef]
- Terzis, P.; Veale, M.; Gaumann, N. Law and the emerging political economy of algorithmic audits. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency; ACM, 2024; pp. 1255–1267. [Google Scholar] [CrossRef]
- Torraco, R. J. Writing integrative literature reviews: Guidelines and examples. Human Resource Development Review 2005, 4(3), 356–367. [Google Scholar] [CrossRef]
- Torre, F.; Teigland, R.; Engstam, L. AI leadership and the future of corporate governance: Changing demands for board competence. In The digital transformation of labor: Automation, the gig economy and welfare; Larsson, A., Teigland, R., Eds.; Routledge, 2019; pp. 116–146. [Google Scholar] [CrossRef]
- Trunk, A. D.; Birkel, H.; Hartmann, E. On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research 2020, 13(3), 875–919. [Google Scholar] [CrossRef]
- Veale, M.; Van Kleek, M.; Binns, R. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18); Association for Computing Machinery, 2018; Paper 440, pp. 1–14. [Google Scholar] [CrossRef]
- von Krogh, G. Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries 2018, 4(4), 404–409. [Google Scholar] [CrossRef]
- Yeung, S.; Rinaldo, F.; Jopling, J.; Liu, B.; Mehra, R.; Downing, N. L.; Guo, M.; Bianconi, G. M.; Alahi, A.; Lee, J.; Campbell, B.; Deru, K.; Beninati, W.; Fei-Fei, L.; Milstein, A. A computer vision system for deep learning-based detection of patient mobilisation activities in the ICU. npj Digital Medicine 2019, 2, 11. [Google Scholar] [CrossRef]
- Zárate-Torres, R.; Rey-Sarmiento, C. F.; Acosta-Prado, J. C.; Gómez-Cruz, N. A.; Rodríguez Castro, D. Y.; Camargo, J. Influence of Leadership on Human–Artificial Intelligence Collaboration. Behavioral Sciences 2025, 15(7), 873. Available online: https://www.mdpi.com/2076-328X/15/7/873. [CrossRef] [PubMed]
- Zerilli, J.; Bhatt, U.; Weller, A. How transparency modulates trust in artificial intelligence. Patterns 2022, 3(4), 100455. [Google Scholar] [CrossRef] [PubMed]
- Zerilli, J.; Knott, A.; MacLaurin, J.; Gavaghan, C. Algorithmic decision-making and the control problem. Minds and Machines 2019, 29(4), 555–578. [Google Scholar] [CrossRef]
| Keyword family | Example search terms (combined with AND/OR) |
| --- | --- |
| Strategic decision-making (SDM) | “strategic decision-making” OR “strategic choice” OR “top management team” OR “board decision*” OR “executive decision*” |
| Organisational decision structure | “decision structure” OR “decision architecture” OR “delegation” OR “organisational design” OR “coordination” |
| AI and algorithmic decision systems (ADMS) | “artificial intelligence” OR “machine learning” OR “algorithmic decision*” OR “ADM” OR “decision automation” OR “decision support system*” |
| Human–AI collaboration | “human-AI” OR “human-in-the-loop” OR “hybrid decision*” OR “augmented decision*” OR “AI delegation” |
| Governance and accountability | “algorithmic governance” OR “accountability” OR “transparency” OR “explainab*” OR “fairness” OR “audit*” OR “trust” |
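The construction logic of the search string above (OR within a keyword family, AND across families) can be sketched programmatically. A minimal illustration in Python, using a subset of the example terms from the table; the function name and the truncated term lists are assumptions for illustration, and any database-specific field syntax (e.g. title/abstract restriction) would still need to be added per platform:

```python
# Build a Boolean search string from keyword families:
# terms within a family are OR-joined, families are AND-joined.

FAMILIES = {
    "SDM": ['"strategic decision-making"', '"strategic choice"', '"top management team"'],
    "Decision structure": ['"decision structure"', '"decision architecture"', '"delegation"'],
    "AI/ADMS": ['"artificial intelligence"', '"machine learning"', '"algorithmic decision*"'],
    "Human-AI": ['"human-AI"', '"human-in-the-loop"', '"hybrid decision*"'],
    "Governance": ['"algorithmic governance"', '"accountability"', '"transparency"'],
}

def build_query(families: dict) -> str:
    """OR-join the terms of each family, then AND-join the parenthesised blocks."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in families.values()]
    return " AND ".join(blocks)

print(build_query(FAMILIES))
```

Running this yields one parenthesised OR-block per keyword family, connected by AND, matching the combination rule stated in the table header.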
| ID | Author(s), Year | Study type / method | Core focus and relevance to the review |
| --- | --- | --- | --- |
| S1 | Green & Chen (2019) | Controlled experiments (algorithm-in-the-loop) | Shows that decision-makers mis-calibrate reliance on algorithms; foundational for understanding automation bias and oversight limits in hybrid SDM. |
| S2 | Koulu (2020) | Conceptual socio-legal analysis | Argues that “human oversight” in algorithmic systems often becomes symbolic, highlighting risks of empty managerial control in hybrid decision structures. |
| S3 | Enarsson et al. (2022) | Comparative legal / institutional analysis | Shows that hybrid human–algorithm systems blur accountability; links SDM structures to legitimacy and responsibility. |
| S4 | Noti et al. (2025) | Large-scale behavioural experiments | Demonstrates that timing and framing of AI advice affect performance; underpins the temporal orientation dimension. |
| S5 | Rahman et al. (2025) | Systematic literature review | Synthesises work on strategic AI systems and positions managers as validators and governors; central for framing strategic agency and governance roles. |
| S6 | Büber & Seven (2025) | Conceptual integration (strategy theories) | Argues that AI enhances analytical capability but does not remove the need for human contextual judgement; informs complementarity logics. |
| S7 | Zárate-Torres et al. (2025) | Qualitative literature review | Emphasises leadership as mediator of human–AI interaction; relevant for configuration-level interpretation of agency and control. |
| S8 | Ramu & Bansal (2025) | Conceptual / industry analysis | Describes shift from intuition-led to data-driven strategy, with humans retaining ethical control; supports human-dominant configuration discussion. |
| S9 | Thareja (2025) | Mixed-methods dissertation | Practitioner-oriented evidence that AI accelerates SDM but requires oversight for fairness and context sensitivity; illustrates hybrid strategic practice. |
| S10 | Kovari (2024) | Conceptual and cross-sector analysis of AI-based decision support systems (DSS), drawing on examples across industries | Examines how AI-driven DSS can enhance decision accuracy while maintaining transparency, explainability, and user trust. Highlights design principles for trustworthy, effective human–AI decision support and discusses how ethical standards and visibility requirements shape user acceptance. Relevant as evidence that AI can augment strategic and operational decision-making, provided that system transparency and explainability are explicitly engineered into decision architectures. |
| S11 | Sienkiewicz (2021) | Critical literature review of theoretical and empirical work on algorithmic human resources management (AHRM) | Reviews how algorithmic management technologies (AI, ML, big data, HR analytics) are being applied in HRM functions such as recruitment, performance management, remuneration and employment relations. |
| S12 | Vaassen (2022) | Conceptual analysis | Argues that AI opacity undermines personal autonomy and responsible decision-making, relevant for strategic AI governance. |
| S13 | Spera & Agrawal (2025) | Conceptual organisational analysis | Warns that AI-first organisations may become over-dependent on automation, weakening human skills and oversight capacity. |
| S14 | Saxena et al. (2021) | Ethnographic case study (child welfare) | Shows how high-stakes ADMS in child welfare interact with discretion and bureaucracy; anchors “augmentation vs automation” at the strategic apex. |
| S15 | Kawakami et al. (2024) | Elite interviews (public sector AI) | Demonstrates that AI adoption decisions are shaped by power relations and legal pressures, not just technical performance; key for agency redistribution. |
| S16 | Elish (2018) | Ethnography (clinical ML deployment) | Analyses how clinical and executive actors reinterpret ML outputs through authority structures; core for temporal and accountability dimensions. |
| S17 | Masrani et al. (2025) | Qualitative fieldwork (homelessness services) | Shows “data-outsourcing continuums” in frontline decisions; illustrates partial delegation and resistance to full automation. |
| S18 | Marabelli et al. (2021) | Conceptual / lifecycle framework (ADMS) | Develops a lifecycle model of organisational choices around ADMS; central for treating decision structures as designable configurations. |
| S19 | CFA Institute (2021) | Practice report / case-based analysis | Examines “T-shaped” teams and investment committees using AI; provides concrete examples of human-dominant and aggregated configurations. |
| S20 | Jarrahi et al. (2021) | Conceptual synthesis (algorithmic management) | Shows how AI reshapes coordination, authority, and information flows; informs redistribution of decision rights and power. |
| S21 | Smith et al. (2025) | Behavioural experiments (human–AI teams) | Analyses “AI convergence” and voluntary AI advice use; supports arguments about when and how humans defer to algorithmic recommendations. |
| S22 | Veale et al. (2018) | Empirical / design-needs study | Identifies fairness and accountability design needs in high-stakes public-sector ADMS; important for governance dimensions. |
| S23 | Gigerenzer et al. (2022) | Conceptual (heuristic decision-making) | Articulates “smart heuristics” for individuals, teams, and organisations; underpins human SDM side of bounded rationality vs analytics. |
| S24 | Hadley et al. (2024) | Empirical study of ARBs | Investigates algorithm review boards in finance and health; demonstrates conditions under which internal AI governance is substantive vs symbolic. |
| S25 | Torre et al. (2019) | Conceptual corporate-governance analysis | Argues that boards must develop AI operational and governance capabilities; anchors board-level oversight discussion. |
| S26 | Funda (2025) | Systematic review of algorithm audits | Reviews algorithm auditing approaches; highlights technical focus and need for organisational and participatory governance. |
| S27 | De-Arteaga et al. (2020) | Field study (child-maltreatment screening) | Shows that experts sometimes override erroneous algorithmic scores; complicates simple narratives of automation bias. |
| S28 | Alon-Barkat & Busuioc (2021) | Large-scale experiments | Finds selective adherence rather than blind automation bias; shows importance of stereotypes and framing in AI reliance. |
| S29 | Alon-Barkat & Busuioc (2020) | Pre-registered experiments | Demonstrates that people selectively adopt AI advice aligned with prior beliefs; deepens behavioural accounts of algorithmic influence. |
| S30 | De-Arteaga et al. (2021) | Hybrid decision framework / modelling | Uses expert consistency to improve algorithmic decision support; informs design of human-in-the-loop systems. |
| S31 | Punzi et al. (2024) | Taxonomy / conceptual framework | Proposes learning paradigms for hybrid decision systems (human-in/on/out-of-the-loop); directly supports configuration typology. |
| S32 | Romeo & Conti (2025) | Systematic review (automation bias) | Synthesises 35 studies on automation bias; clarifies conditions under which over-reliance emerges. |
| S33 | Zerilli et al. (2022) | Review (trust and transparency in AI) | Argues that both over-reliance and aversion are risks; motivates need for “algorithmic vigilance” in strategic settings. |
| S34 | Ibrahim et al. (2025) | Conceptual + measurement framework | Proposes metrics for over-reliance and human-compatible AI; supports governance recommendations on monitoring reliance. |
| S35 | Natali et al. (2025) | Mixed-method review (medicine) | Shows AI-induced deskilling in clinical decision-making; generalises to concerns about strategic deskilling. |
| S36 | Kostick-Quenet & Gerke (2022) | Behavioural-economics perspective | Discusses how user biases shape AI reliance and how interfaces can nudge critical engagement. |
| S37 | Zerilli et al. (2019) | Conceptual “control problem” analysis | Frames algorithmic decision-making as a human–machine control loop; emphasises complacency and diffidence risks. |
| S38 | Sargeant et al. (2025) | Experimental + legal analysis | Shows how selective abstention and friction reshape reliance and discrimination risk; informs override/abstention design. |
| S39 | Raji et al. (2022) | Institutional design / conceptual + cases | Proposes “outsider oversight” and third-party audit ecosystems; central for accountability beyond the firm. |
| S40 | Terzis et al. (2024) | Legal / political-economy analysis | Examines regulatory audit mandates and risks of audit capture; links firm-level structures to regulation. |
| S41 | Nwachukwu et al. (2025) | Conceptual AI-governance framework (finance) | Advocates “control-by-design” with embedded explainability and audit trails; informs replicability and traceability dimensions. |
| S42 | Chawande (2025) | Model-risk governance case (investment banking) | Details lifecycle governance for AI compliance systems; illustrates structural integration of validation and oversight. |
| S43 | Mushkani (2025) | Design / policy analysis (urban control systems) | Designs right-to-override and safe fallback states; provides concrete override architecture for high-stakes systems. |
| S44 | Moreira et al. (2025) | Conceptual framework (contestability) | Defines contestability for XAI systems and proposes criteria for operationalising it. |
| S45 | Cohen & Suzor (2024) | Legal / institutional analysis | Argues that public interest in AI requires contestation channels, separation of powers, and independent information access. |
| S46 | Loi & Spielkamp (2021) | Governance / delegation analysis | Discusses accountability in public-sector AI and imperfect delegation; underpins need for clear accountability chains. |
| S47 | Laux (2023) | Oversight theory (institutionalised distrust) | Distinguishes constitutive vs corrective oversight; emphasises overseer fallibility in human oversight of AI. |
| S48 | Chappidi et al. (2025) | Empirical study (record-keeping & oversight) | Shows how transparency and record-keeping reshape oversight practices and can generate accountability capture. |
| S49 | Ganesh et al. (2025) | Corporate-governance framework | Proposes integrated board-level AI governance with ethics committees and risk-management linkages. |
| S50 | Benlian et al. (2022) | Conceptual analysis (algorithmic management) | Shows how algorithms automate coordination and control in platform work, recentring authority in system design. |
| S51 | Robinson et al. (2025) | Empirical platform-work study | Analyses “algorithmic management in limbo”, showing dynamic calibration of hierarchy vs autonomy; relevant for power in hybrid systems. |
| S52 | Park & Ryoo (2023) | Empirical study (food-delivery platforms) | Describes algorithmic control as “digital Taylorism”, standardising discretion and intensifying monitoring. |
| S53 | Ogunleye & Kalema (2020) | Empirical evaluation (ride-hailing) | Documents “algorithmic despotism” on platforms in developing countries; illustrates centralisation of power in algorithms. |
| S54 | Isbah (2022) | Empirical case studies (ride-hailing) | Explores algorithmic exploitation and asymmetric power; informs arguments about hidden centralisation of strategic control. |
| S55 | Aloisi (2024) | Legal / labour-regulation analysis (EU) | Examines how algorithmic management intensifies employer discretion and how EU law seeks to rebalance power. |
| S56 | Singh & Gupta (2025) | Theoretical synthesis | Traces continuity from Taylorism to algorithmic management, showing how efficiency logics migrate into software. |
| S57 | Choi et al. (2023) | Clinical study (AI bacteremia prediction) | Shows AI improves decisions when model uncertainty is low and physician uncertainty high; illustrates task boundary conditions. |
| S58 | Takayanagi et al. (2022) | Experimental / inverse reinforcement learning framework | Demonstrates conditions under which AI outperforms experts in stochastic environments with well-defined rewards. |
| S59 | Nimmy et al. (2025) | Risk-management study | Proposes methods to quantify trustworthiness of XAI outputs; relevant for uncertainty and explainability alignment. |
| S60 | Lenders et al. (2025) | Algorithm design (abstaining classifier) | Develops interpretable and fair abstaining classifiers; concrete mechanism for safe abstention in high-stakes decisions. |
| S61 | Agard et al. (2025) | Clinical Bayesian-network study (sepsis) | Shows probabilistic, transparent models can improve decisions in ambiguous intensive-care environments. |
| S62 | Lahoti et al. (2025) | AI-enhanced business simulation | Demonstrates AI-supported scenario modelling for strategic decisions under uncertainty; links simulation to strategic SDM. |

| Dimension | Human SDM (top-management teams, boards) | Algorithmic SDM (AI / ADMS) |
|---|---|---|
| Interpretive authority | Contextual, narrative, politically negotiated; rich but biased | Model-based, data-bound, probabilistic; consistent but often opaque |
| Decision search-space structure | Tolerates ambiguity, conflicting goals, evolving frames | Requires explicit objectives, labels, constraints; hides design choices in metrics |
| Temporal orientation | Claims long-term, imaginative framing but pulled by short-term incentives | Extrapolative within chosen horizon; sensitive to structural breaks and data regimes |
| Accountability and traceability | Narratively explainable but vulnerable to blame shifting and organised hypocrisy | Log- and code-based traceability but complex accountability chains and potential audit capture |
| Replicability and scalability | Low replicability, bounded capacity, context sensitivity | High replicability and scale under stable conditions, but risk of large-scale consistent error |

| Structure | Dimension profile (high/low) | Typical applications | Salient risks |
|---|---|---|---|
| Human-dominant (AI advisory) | High human interpretive authority; loose search; mixed temporal orientation; narrative accountability; low scale | Boards, M&A, strategy offsites | Automation complacency; rhetorical AI; weak challenge |
| Sequential AI-to-human | Tight search; high scale; fast initial screening; human narrative accountability | Loan origination, triage, innovation contests | Omission errors; hidden bias; illusory oversight |
| Sequential human-to-AI | Human framing; small alternative set; intensive algorithmic optimisation | Sports analytics, ICU monitoring, risk optimisation | Deskilling; over-reliance; weak organisational learning |
| Aggregated human–AI governance | Parallel decisions; mixed interpretive authority; explicit aggregation; partial replicability | Investment committees, ARBs, strategic risk boards | Responsibility diffusion; weighting politics; governance opacity |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
