Submitted: 14 January 2026
Posted: 14 January 2026
Abstract
Keywords:
1. Introduction
- How should responsibility for critical decisions be distributed among designers, operators, IAS themselves, and institutional frameworks?
- What are the ethical implications of IAS shaping and creating human habits?
- How can Digital Humanism guide the design, governance, and oversight of IAS-to-IAS control structures in ways that safeguard human agency and social values?
Through an interdisciplinary analysis, the article proposes a normative framework for multi-level responsibility attribution that accounts for both technical system complexity and behavioral effects on human users.
2. Background and Context
2.1. The Rise of Intelligent Autonomous Systems in Critical Infrastructure
2.2. IAS-to-IAS Control: From Human Oversight to Machine Governance
2.3. Behavioral Consequences: Intelligent Autonomous Systems Shaping Human Habits
2.4. The Need for a Digital Humanism Framework
3. Responsibility Delegation in Intelligent Autonomous Systems: Conceptual Foundations
- Causal responsibility: Who or what caused an event or outcome?
- Legal responsibility: Who is legally liable for the outcome?
- Functional responsibility: Which system component or agent was functionally tasked with the relevant action?
3.1. Delegation Chains and Responsibility Distribution
- Design Responsibility includes program developers and architects,
- Operational Responsibility concerns system operators and supervisors,
- Functional Execution Responsibility includes the IAS itself,
- Oversight Responsibility concerns institutional bodies (e.g., regulators).
3.2. Intelligent Autonomous Systems as Functional Agents
3.3. Emerging Challenges in IAS-to-IAS Responsibility Flow
3.4. AI as a Habit-Creating Force: Behavioral Implications
3.5. Mechanisms of AI-Driven Habit Formation
3.6. Ethical Risks: Manipulation and Erosion of Autonomy
- Loss of autonomy. Users may find themselves engaging in behaviors they did not consciously choose (Habermas 1984).
- Informed consent violations. Behavioral nudges are often embedded invisibly, bypassing traditional consent mechanisms (Morley et al. 2020).
- Behavioral lock-in. Algorithmic reinforcement may lead users into rigid behavioral loops, limiting freedom of choice (Zuboff 2019).
- Ethical drift. When IAS-to-IAS control structures propagate behavioral nudging at scale, small design biases may amplify across populations (Stahl 2021).
3.7. Potential Opportunities: Supporting Beneficial Habits
- Transparency (users should know when and how they are being influenced).
- Consent and control (users should have meaningful ways to opt out).
- Value alignment (human dignity, autonomy, and well-being are prioritized).
- Oversight in IAS-to-IAS chains (responsibility frameworks must address both systemic control and individual behavioral impact, a theme developed further in the next section).
4. Digital Humanism as an Ethical Framework
- Human-centric design (human needs and rights over technological efficiency)
- Value-sensitive innovation (embedding societal, cultural, and ethical values into technical systems)
- Transparency and accountability (ensuring that decision-making processes, both human and machine-based, remain understandable and traceable)
- Democratic control (advocating for collective societal oversight over the development and use of digital technologies).
4.1. Digital Humanism and Responsibility Distribution
4.2. Behavioral Impacts Within Digital Humanism
4.3. Digital Humanism-Informed Governance Model
- Design-time ethics: technical safeguards,
- Run-time oversight: organizational policies,
- Post-hoc accountability: regulatory and legal frameworks,
- Informed consent for use: user-centric behavioral governance.
5. Toward a Normative Framework for Responsibility Sharing
5.1. Multi-Level Responsibility Attribution
- Design responsibility: Engineers, software developers, and data scientists hold responsibility for embedding ethical constraints, explainability mechanisms, and safety layers within IAS.
- Operational responsibility: System operators and organizational users are responsible for monitoring system performance and intervening where necessary.
- Institutional responsibility: Organizations and regulatory bodies must provide oversight, establish clear accountability structures, and enforce compliance with ethical standards.
- Functional responsibility: IAS are assigned specific operational roles, tracked through audit trails and decision logs, but remain non-moral agents (Latour 1992; Dodig-Crnkovic et al. 2025).
- Behavioral responsibility: Designers and deployers of habit-shaping AI bear responsibility for assessing and mitigating potential manipulation and autonomy loss (Zuboff 2019).
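The audit trails and decision logs mentioned under functional responsibility can be made concrete. The sketch below is illustrative only (it is not from the article, and all class, field, and function names are hypothetical): it records one accountable party for each of the five responsibility levels of Section 5.1 alongside every logged IAS decision, so that attribution remains traceable after the fact and no level can be silently left unassigned.

```python
# Illustrative sketch (hypothetical names): an audit log that forces every
# recorded IAS decision to carry an attribution for all five responsibility
# levels named in Section 5.1.
from dataclasses import dataclass, field
from datetime import datetime, timezone

LEVELS = ("design", "operational", "institutional", "functional", "behavioral")

@dataclass
class ResponsibilityRecord:
    decision_id: str
    description: str
    # One accountable party per level, e.g. {"design": "vendor X dev team"}.
    attribution: dict[str, str]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self) -> None:
        # Reject records that leave any responsibility level unattributed.
        missing = [lvl for lvl in LEVELS if lvl not in self.attribution]
        if missing:
            raise ValueError(f"unattributed levels: {missing}")

audit_log: list[ResponsibilityRecord] = []

def log_decision(decision_id: str, description: str,
                 attribution: dict[str, str]) -> ResponsibilityRecord:
    """Validate and append a decision record to the shared audit log."""
    record = ResponsibilityRecord(decision_id, description, attribution)
    audit_log.append(record)
    return record
```

The design choice worth noting is that attribution is validated at write time: a decision simply cannot enter the log with a responsibility gap, which mirrors the framework's insistence that every level is assigned in advance rather than reconstructed after an incident.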
5.2. Ethical Design Recommendations
- Transparency. AI systems, especially those in IAS-to-IAS governance structures, must include mechanisms that allow decision-making processes to be inspected, audited, and explained (Floridi & Cowls 2019; Morley et al. 2020).
- Ethical constraints and value alignment. IAS should be designed, architected, and programmed with ethical boundaries, ensuring that autonomous decisions cannot violate human rights or ethical norms (Nida-Rümelin & Weidenfeld 2023; Spiekermann 2023).
- Behavioral safeguards. AI-driven habit formation processes should include clear behavioral intention disclosures, user opt-outs, and mechanisms for user self-reflection and agency preservation.
- Oversight of IAS-to-IAS interactions. Institutional bodies and human supervisors must maintain systemic oversight over how IAS monitor and control each other. This includes meta-governance systems, where dedicated AI governance layers track and regulate IAS-to-IAS interactions (Stahl 2021).
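The behavioral-safeguards recommendation above (intention disclosure plus meaningful opt-out) can be expressed as a simple gate in front of any habit-shaping intervention. The sketch below is a minimal illustration under assumed data shapes, not an implementation from the article; the function name and dictionary fields are hypothetical.

```python
# Illustrative sketch (hypothetical names): a gate enforcing two of the
# Section 5.2 behavioral safeguards -- disclosed intent and user opt-out --
# before a habit-shaping nudge may be delivered.

def may_nudge(user_prefs: dict, nudge: dict) -> bool:
    """Allow a nudge only if its behavioral intent has been disclosed to
    the user and the user has not opted out of this nudge category."""
    disclosed = bool(nudge.get("disclosed_intent"))
    opted_out = nudge.get("category") in user_prefs.get("opt_outs", set())
    return disclosed and not opted_out
```

Note that the default is refusal: a nudge with no disclosure record is blocked, which operationalizes the article's point that behavioral influence embedded invisibly bypasses consent.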
5.3. Managing Responsibility in Behavior-Shaping Systems
5.4. Systemic Governance and Institutional Roles
6. Discussion
6.1. The Complexity of Distributed Responsibility
6.2. The Ethical Significance of AI-Driven Habit Formation
6.3. Toward Institutional and Legal Innovation
6.4. Future Research Directions
- Multi-level multi-agent responsibility modeling. Development of tools for mapping and tracking responsibility flows in IAS networks
- Behavioral governance. Empirical research on how AI-driven habit formation affects user autonomy over time
- IAS-to-IAS oversight mechanisms. Technical research into self-regulating and meta-governance architectures within IAS ecosystems
- Legal innovation. New legal frameworks capable of addressing distributed and non-human agency in IAS operations
- Ethics of machine-to-machine behavior shaping. Deeper exploration of how IAS-to-IAS interactions can indirectly shape human behavior at scale.
7. Conclusion
Acknowledgments
Disclosure of Interests
References
- AFM (The Netherlands Authority for the Financial Markets). Machine learning in algorithmic trading: Application by Dutch proprietary trading firms and possible risks (Report). 2023. Available online: https://www.afm.nl/~/profmedia/files/rapporten/2023/report-machine-learning-trading-algorithms.pdf.
- Au, T.-C.; Zhang, S.; Stone, P. Motion planning algorithms for autonomous intersection management. In AAAI Workshop on Bridging the Gap Between Task and Motion Planning, 2010. Available online: https://cdn.aaai.org/ocs/ws/ws0611/2053-8456-1-PB.pdf.
- Binns, R. Fairness in machine learning: Lessons from political philosophy. In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 2018; vol. 81, pp. 149–159. Available online: https://proceedings.mlr.press/v81/binns18a.html.
- Calo, R. Robotics and the lessons of cyberlaw. California Law Review 2015, 103(3), 513–563.
- Çürüklü, B.; Dodig-Crnkovic, G.; Akan, B. Towards industrial robots with human-like moral responsibilities. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, Osaka, Japan, March 2–4, 2010; p. 85.
- Dennett, D.C. Mechanism and responsibility. In Essays on Freedom of Action; Honderich, T., Ed.; Routledge & Kegan Paul: Boston, 1973; pp. 143–163.
- Dodig-Crnkovic, G. Professional ethics in computing and intelligent systems. In Proceedings of the Ninth Scandinavian Conference on Artificial Intelligence (SCAI 2006), Espoo, Finland, October 25–27, 2006; pp. 11–18.
- Dodig-Crnkovic, G.; Persson, D. Sharing moral responsibility with robots: A pragmatic approach. In Tenth Scandinavian Conference on Artificial Intelligence (SCAI 2008), Frontiers in Artificial Intelligence and Applications; Holst, A., Kreuger, P., Funk, P., Eds.; IOS Press: Amsterdam, 2008; vol. 173, pp. 165–168.
- Dodig-Crnkovic, G.; Basti, G.; Holstein, T. Delegating responsibilities to intelligent autonomous systems: Challenges and benefits. Journal of Bioethical Inquiry 2025, advance online publication.
- Floridi, L.; Cowls, J. A unified framework of five principles for AI in society. Harvard Data Science Review 2019, 1(1).
- Floridi, L.; Sanders, J.W. On the morality of artificial agents. Minds and Machines 2004, 14, 349–379.
- Gawer, A.; Cusumano, M.A. Industry platforms and ecosystem innovation. Journal of Product Innovation Management 2014, 31(3), 417–433.
- Habermas, J. The Theory of Communicative Action: Reason and the Rationalization of Society; Beacon Press: Boston, MA, 1984; vol. 1.
- IOSCO. Artificial Intelligence in Capital Markets: Use Cases, Risks, and Challenges (CR/01/2025). International Organization of Securities Commissions, 2025. Available online: https://www.iosco.org/library/pubdocs/pdf/IOSCOPD788.pdf.
- Latour, B. Where are the missing masses? The sociology of a few mundane artifacts. In Shaping Technology / Building Society; Bijker, W., Law, J., Eds.; MIT Press: Cambridge, MA, 1992; pp. 225–258.
- Morley, J.; Floridi, L.; Kinsey, L.; Elhalal, A. From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics 2020, 26(4), 2141–2168.
- Nida-Rümelin, J.; Weidenfeld, N. Digital Humanism: For a Humane Transformation of Democracy, Economy and Culture in the Digital Age; Springer: Cham, 2023.
- Spiekermann, S. Value-Based Engineering: A Guide to Building Ethical Technology for Humanity; Walter de Gruyter: Berlin, 2023.
- Stahl, B.C. Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies; Springer: Cham, 2021.
- Vienna Manifesto on Digital Humanism; Digital Humanism Initiative, Vienna University of Technology: Vienna, 2019. Available online: https://caiml.org/dighum/.
- Introduction to Digital Humanism: A Textbook; Werthner, H., Ghezzi, C., Kramer, J., Nida-Rümelin, J., Nuseibeh, B., Prem, E., Stanger, A., Eds.; Springer: Cham, 2024.
- Perspectives on Digital Humanism; Werthner, H., Prem, E., Lee, E.A., Ghezzi, C., Eds.; Springer: Cham, 2022.
- Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, 2019.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).