Preprint
Article

This version is not peer-reviewed.

Responsibility, Habit, and Control: Digital Humanism and the Delegation of Critical Functions to Intelligent Autonomous Systems

Submitted: 14 January 2026
Posted: 14 January 2026


Abstract
As intelligent autonomous systems (IAS) assume increasingly central roles in safety- and mission-critical domains such as transportation, healthcare, finance, and infrastructure management, humans are becoming unable to monitor or intervene in real time. This shift is driven by the speed, data-processing capacity, and adaptivity of IAS. To manage this complexity, a new paradigm is emerging: IAS controlling and monitoring other IAS, a development that brings practical efficiency but also profound operational and ethical challenges.

This article explores the multi-layered delegation of responsibilities within IAS ecosystems, where decisions influencing human lives and well-being are made with minimal human intervention. One often-overlooked consequence of this delegation is the capacity of AI systems to shape and create new human habits, whether through personalized persuasion, behavioral feedback loops, or autonomous decision enforcement. As humans increasingly adapt their behaviors to machine-optimized environments, questions arise about autonomy, agency, and responsibility for the resulting behavior changes.

Drawing on insights from recent research on responsibility delegation in IAS and on AI-driven habit formation, the article critically examines how responsibility should be distributed across human actors, autonomous systems, and institutions. Framed within the principles of Digital Humanism, I argue for a value-sensitive governance model that ensures transparency, explainability, and human oversight even in complex IAS-to-IAS control scenarios.

I propose a normative framework for responsibility attribution that accounts for both the technical architecture of IAS networks and the behavioral effects these systems have on human users. The article concludes by addressing the ethical risks of diminished human agency, manipulation through behavioral design, and the need for institutional mechanisms that align IAS operations with fundamental human values.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.