Submitted: 06 June 2025
Posted: 09 June 2025
Abstract
Keywords:
1. Introduction
2. Literature Review
2.1. AI Governance and the Challenge of Responsibility
2.1.1. Regulatory Gaps
2.1.2. Ethical and Technical Challenges
2.2. Legal and Ethical Responsibility in Autonomous Systems
2.3. Organizational Role Ambiguity and Responsibility Dilution
2.4. Conclusion of Literature Review
3. Theoretical Framework: The Ultimate AI Accountability Owner (UAAO)
3.1. Conceptualizing the UAAO
3.2. UAAO Mapping Across Organizational Structures
3.3. Normative Foundations
- Accountability Theory (Bovens, 2007): For accountability to exist, there must be identifiable actors, mechanisms for answerability, and enforceable consequences.
- Institutional Responsibility (Thompson, 1967): In complex systems, responsibility structures must be established at the institutional level rather than resting on individuals alone.
- Risk Governance (Castaños & Lomnitz, 2008): Effective governance requires clear assignments of risk ownership and control functions.
3.4. Analytical Utility of the UAAO Framework
- Diagnostically, it can be applied to past AI failures to identify where responsibility was lacking.
- Prescriptively, it informs governance structures that require a UAAO designation before deployment.
4. Methodology
- Conceptual Framework Development: This component proposes the UAAO as a new accountability structure for AI systems. It is based on organizational theory, legal and ethical scholarship, and risk governance literature, outlining the dimensions, functions, and reasoning behind UAAO designation. The framework incorporates principles for attributing responsibility and enforcing accountability.
- Comparative Case Study Analysis: The study tests and refines the framework through a comparative case analysis of AI applications in three critical sectors:
  - Automated hiring systems (human resources)
  - AI-driven financial services (trading and fraud detection)
  - AI in healthcare diagnostics (clinical decision support)
These cases were selected through theoretical sampling (Yin, 2018) because they involve AI systems with significant real-world impact, they raise important ethical, legal, and operational issues, and responsibility attribution in these contexts is often unclear.
Data sources
- Documented AI failures and controversies (media investigations, litigation records, regulatory reports)
- Policy documents and corporate governance materials (AI ethics guidelines, risk registers, audit reports)
- Academic literature and case studies
- Industry white papers and publicly available technical documentation
Analytical strategy
- Mapping the AI system’s lifecycle (design → deployment → oversight)
- Identifying the actors involved at each stage
- Examining how responsibility was assigned or neglected
- Determining where a UAAO could be located (see the sketch following this list)
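To make the mapping and gap-location steps concrete, the following minimal sketch represents each lifecycle stage with its actors and its documented accountable owner, and flags stages where no owner is recorded. The class, function, and actor names are illustrative assumptions introduced for this sketch, not data drawn from the cases.

```python
from dataclasses import dataclass, field

# Lifecycle stages used in the case analyses (design -> deployment -> oversight).
STAGES = ["design", "deployment", "oversight"]

@dataclass
class StageRecord:
    """Actors involved at one lifecycle stage and who, if anyone, owns its risks."""
    actors: list[str] = field(default_factory=list)
    accountable_owner: str | None = None  # None signals a responsibility gap

def locate_uaao_gaps(lifecycle: dict[str, StageRecord]) -> list[str]:
    """Return lifecycle stages with no clearly documented accountable owner."""
    return [
        stage for stage in STAGES
        if stage not in lifecycle or lifecycle[stage].accountable_owner is None
    ]

# Illustrative mapping for a hypothetical hiring-algorithm case.
hiring_case = {
    "design": StageRecord(actors=["ML engineers"]),
    "deployment": StageRecord(actors=["HR staff", "internal compliance"]),
    "oversight": StageRecord(actors=["senior executives"]),
}

print(locate_uaao_gaps(hiring_case))  # ['design', 'deployment', 'oversight']
```

A stage flagged by such a check is precisely where the framework would require a UAAO to be designated.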
Limitations of methodology
5. Study Analysis
5.1. AI in Hiring Systems: Amazon’s Recruiting Tool
Responsibility Gap
- Actors Involved: ML engineers, HR staff, senior executives, internal compliance.
- Challenge: Lack of a formal process to assess ethical risks and assign ownership of deployed model behavior. The system was treated as experimental, resulting in diffuse responsibility across departments.
- Consequences: Reputational damage, public backlash, and the need for internal process revisions.
5.2. AI in Financial Services: Algorithmic Trading and Market Disruption
Responsibility Gap
- Actors Involved: Quantitative developers, trading desk managers, Chief Risk Officer, regulatory compliance teams.
- Challenge: High-speed trading systems operate with little human oversight, making it hard to attribute failures after they occur. Risk and compliance teams are usually not integrated into the model development process.
- Consequences: Market instability, increased regulatory scrutiny, and financial losses.
5.3. AI in Healthcare Diagnostics: IBM Watson for Oncology
Responsibility Gap
- Actors Involved: IBM developers, hospital administrators, medical staff, and technology integration teams.
- Challenge: Watson acted as a decision-support system but lacked the necessary safeguards and transparency. Physicians retained formal decision-making authority, but the system's influence on their judgment raised ethical concerns.
- Consequences: Misdiagnoses, diminished trust, and the withdrawal of some hospital partnerships.
7. Discussion
7.1. Cross-Sectoral Patterns and Lessons
7.2. Clarifying the UAAO’s Value Proposition
- Pre-deployment assurance: Mandating UAAO designation before AI deployment ensures that risks are evaluated, oversight procedures are established, and accountability is clearly defined (a minimal gate-check sketch follows this list).
- Post-incident accountability: In the event of system failure, the UAAO structure enables investigators, regulators, and stakeholders to identify accountability without assigning blame arbitrarily.
- Institutional learning: A transparent ownership chain enhances institutional memory, promotes continuous improvement, and integrates feedback into AI governance.
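As a hedged illustration of the pre-deployment assurance point, the sketch below encodes a simple go/no-go gate: deployment proceeds only if a UAAO is on record and the associated risk assessment has been approved. The record fields and names are assumptions made for the sketch, not elements prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class UAAODesignation:
    """Minimal UAAO record; fields are illustrative, not prescribed."""
    owner_name: str
    role: str                       # e.g., "CHRO", "CIO", "CMO"
    reporting_line: str             # e.g., "Board AI Governance Committee"
    risk_assessment_approved: bool

def pre_deployment_gate(designation: UAAODesignation | None) -> bool:
    """No designated owner with an approved risk assessment means no go-live."""
    return designation is not None and designation.risk_assessment_approved

# Example: deployment is blocked until a UAAO designation exists and is approved.
print(pre_deployment_gate(None))  # False -> blocked
print(pre_deployment_gate(
    UAAODesignation("A. Rivera", "CHRO", "Board AI Governance Committee", True)
))  # True -> may proceed
```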
7.3. Toward Organizational Implementation
- UAAO Designation Protocols: Implement procedures for identifying and documenting UAAOs across the AI lifecycle stages (design, deployment, oversight) as part of internal policy.
- Executive Approval and Risk Registers: AI systems that exceed a specified risk threshold must receive executive approval, with UAAO assignments recorded in the risk governance documentation (an illustrative register entry is sketched after this list).
- Cross-Functional AI Ethics Committees: Establish committees to advise on UAAO assignments, especially in cases of shared or cross-functional responsibilities.
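A minimal sketch of how such a risk-register entry and escalation rule might look is given below; the field names, risk scale, and threshold value are assumptions introduced for illustration rather than requirements of the framework.

```python
# Illustrative risk-register entries with UAAO assignments and an escalation rule.
RISK_THRESHOLD = 3  # assumed 1-5 inherent-risk scale; the cut-off is illustrative

risk_register = [
    {
        "system": "resume-screening-model",
        "lifecycle_stage": "deployment",
        "inherent_risk": 4,
        "uaao": "CHRO",
        "executive_approval": False,  # not yet granted
    },
    {
        "system": "fraud-detection-scoring",
        "lifecycle_stage": "design",
        "inherent_risk": 2,
        "uaao": "CIO",
        "executive_approval": False,
    },
]

def needs_executive_approval(entry: dict) -> bool:
    """Systems above the risk threshold cannot proceed on UAAO sign-off alone."""
    return entry["inherent_risk"] >= RISK_THRESHOLD and not entry["executive_approval"]

for entry in risk_register:
    if needs_executive_approval(entry):
        print(f"{entry['system']}: escalate to executive approval (UAAO: {entry['uaao']})")
```

In practice the same record could live in whatever risk-management tooling the organization already uses; the point is only that the UAAO assignment and the approval status sit together in one auditable place.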
7.4. Anticipated Challenges in UAAO Implementation
7.5. Implications for Policy and Regulation
7.6. Jurisdictional Variation and Regulatory Alignment
8. Conclusion
Conflicts of Interest
Diffuse Authority in Global Organizations
Legal Liability Concerns
Appendix A. Key Sources and Data Points for Case Analyses



References
- Agapiou, A. (2024). A systematic review of the socio-legal dimensions of responsible ai and its role in improving health and safety in construction. Buildings, 14(5), 1469. [CrossRef]
- Aizenberg, E., & van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society, 7(2). [CrossRef]
- Asan, O., & Choudhury, A. (2021). Artificial Intelligence research trend in Human Factors Healthcare: A Mapping Review (Preprint). JMIR Human Factors, 8(2). [CrossRef]
- Batool, A., Zowghi, D., & Bano, M. (2024). Ai governance: a systematic literature review. [CrossRef]
- Binns, R. (2021). Human oversight in automated decision-making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379(2199), 20200360.
- Birkstedt, T., Minkkinen, M., Tandon, A. and Mäntymäki, M. (2023), "AI governance: themes, knowledge gaps and future agendas", Internet Research, Vol. 33 No. 7, pp. 133-167. [CrossRef]
- Bovens, M. (1998). The quest for responsibility: Accountability and citizenship in complex organisations. Cambridge University Press.
- Bovens, M. (2007). Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal, 13(4), 447–468. https://doi.org/10.1111/j.1468-0386.2007.00378.x.
- Castaños, H., & Lomnitz, C. (2008). Ortwin Renn, Risk Governance: Coping with Uncertainty in a Complex World. Natural Hazards, 48(2), 313–314. [CrossRef]
- Cavique, L. (2024). Implications of causality in artificial intelligence. Frontiers in Artificial Intelligence, 7. [CrossRef]
- Chaudhry, M., Cukurova, M., & Luckin, R. (2022). A transparency index framework for AI in education. [CrossRef]
- Chellappan, R. (2024). From algorithms to accountability: the societal and ethical need for explainable ai. [CrossRef]
- Edmondson, A. and McManus, S. (2007) Methodological Fit in Management Field Research. Academy of Management Review, 32, 1155-1179. http://dx.doi.org/10.5465/AMR.2007.26586086.
- Elish, M. C. (2019). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society, 5, 40–60. [CrossRef]
- Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. [CrossRef]
- Frimpong, V. (2025). The Impact of AI on Evolving Leadership Theories and Practices. Journal of Management World, 2025(3), 188–193. [CrossRef]
- Hafermalz, E., & Huysman, M. (2021). Please Explain: Key Questions for Explainable AI research from an Organizational perspective. Morals & Machines, 1(2), 10–23. [CrossRef]
- Jain R. (2024). Transparency in AI Decision Making: A Survey of Explainable AI Methods and Applications. 2(1), 1–10. [CrossRef]
- Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. [CrossRef]
- Leenes, R., Palmerini, E., Koops, B. J., Bertolini, A., Salvini, P., & Lucivero, F. (2017). Regulatory challenges of robotics: Some guidelines for addressing legal and ethical issues. Law, Innovation and Technology, 9(1), 1–44. [CrossRef]
- Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable ai: a review of machine learning interpretability methods. Entropy, 23(1), 18. [CrossRef]
- Maldonado-Canca, L., Cabrera-Sánchez, J., Molina, A., & Bermúdez-González, G. (2025). Ai in companies' production processes. Journal of Global Information Management, 32(1), 1-29. [CrossRef]
- Mittelstadt, B. D. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. [CrossRef]
- Mulgan, G. Artificial intelligence and collective intelligence: the emergence of a new field. AI & Soc 33, 631–632 (2018). [CrossRef]
- Nannini, L., Alonso, J., Català, A., Lama, M., & Barro, S. (2024). Operationalizing explainable artificial intelligence in the European Union regulatory ecosystem. IEEE Intelligent Systems, 39(4), 37–48. [CrossRef]
- Ng, K., Su, J., & Kai, S. (2023). Fostering Secondary School Students’ AI Literacy through Making AI-Driven Recycling Bins. Education and Information Technologies, 29(8). [CrossRef]
- Payne, K. (2025, May 21). In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights. AP News. https://apnews.com/article/ai-lawsuit-suicide-artificial-intelligence-free-speech-ccc77a5ff5a84bda753d2b044c83d4b6.
- Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20, 5–14. [CrossRef]
- Raji, I. D., Smart, A., White, R. N., et al. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), 33–44. [CrossRef]
- Rubel, A., Pham, A., Castro, C. (2019). Agency Laundering and Algorithmic Decision Systems. In: Taylor, N., Christian-Lamb, C., Martin, M., Nardi, B. (eds) Information in Contemporary Society. iConference 2019. Lecture Notes in Computer Science(), vol 11420. Springer, Cham. [CrossRef]
- Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, Article 15. [CrossRef]
- Silbey, S. S., & Agrawal, T. (2011). The illusion of accountability: Information management and organizational culture. Droit Et Societe, 77, 69–86.
- Strauß, S. (2021). “Don’t let me be misunderstood.” TATuP - Zeitschrift Für Technikfolgenabschätzung in Theorie Und Praxis, 30(3), 44–49. [CrossRef]
- Thompson, J. D. (1967). Organizations in action: Social science bases of administrative theory. McGraw-Hill.
- Tiwari, R. (2023). Explainable AI (XAI) and its applications in building trust and understanding in AI decision making. International Journal of Scientific Research in Engineering and Management, 07(01). https://doi.org/10.55041/ijsrem17592
- Wagner, B. (2019). Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems. Policy & Internet, 11(1), 104–122. [CrossRef]
- Weber-Lewerenz, B. (2021). Corporate digital responsibility (cdr) in construction engineering—ethical guidelines for the application of digital transformation and artificial intelligence (ai) in user practice. Sn Applied Sciences, 3(10). [CrossRef]
- Yin, R. K. (2018). Case Study Research and Applications: Design and Methods (6th ed.). Thousand Oaks, CA: Sage.
- Сулейманoва, С. (2024). Comparative legal analysis of the role of artificial intelligence in human rights protection: Prospects for Europe and the Middle East. PJC, (16.3), 907–922. [CrossRef]


| Existing Role Mandate | UAAO Incremental Duties | Line of Authority |
|---|---|---|
| Chief Human Resources Officer (CHRO) – Manages HR policy, talent acquisition, and recruitment, but does not audit fairness or biases in hiring algorithms. | Must approve algorithmic fairness reports (e.g., ensuring no gender bias); owns the diversity risk register for AI screening; regularly monitors post-deployment hiring outcomes for potential disparate impact. | UAAO (CHRO with UAAO duties) reports to the CEO/Board AI Governance Committee; CHRO (without UAAO remit) reports directly to the CEO. |

| Existing Role Mandate | UAAO Incremental Duties | Line of Authority |
|---|---|---|
| Chief Investment Officer (CIO) – Concentrates on portfolio strategy, risk limits, and infrastructure stability. Does not formally approve model-specific fairness or systemic risk audit checklists for trading algorithms. | Must approve pre-deployment risk assessments for each trading algorithm (e.g., volatility thresholds, kill-switch parameters); can halt live trading algorithms if drift or unexpected behaviors occur; ensures documentation of audit logs and backtesting reports. | UAAO (CIO with UAAO duties) reports to the CEO and Board’s Risk & AI Committee; CIO (without UAAO remit) reports to the CTO/CEO but lacks explicit AI-oversight sign-off authority. |

| Existing Role Mandate | UAAO Incremental Duties | Line of Authority |
|---|---|---|
| Chief Medical Officer (CMO) – Oversees clinical quality and patient safety but does not validate AI treatment recommendations or disclose vendor data limitations. | Reviews and approves all AI-generated treatment plans (e.g., verifies Watson’s recommendations against clinical guidelines and local patient data); requires documented AI performance audits (e.g., accuracy across diverse populations) before major system updates; coordinates with the vendor-side UAAO (e.g., IBM’s Chief AI Ethics Officer) to track algorithmic updates and limitations. | UAAO (CMO with UAAO duties) reports to the Hospital Board’s Clinical Governance Committee; CMO (without UAAO remit) reports to the CEO and lacks formal AI approval authority; the vendor UAAO (e.g., IBM’s Chief AI Ethics Officer) reports to IBM’s Product Governance Council and collaborates with the hospital CMO through co-signed risk registers. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).