Preprint Article, Version 1. This version is not peer-reviewed.

Safe Artificial General Intelligence via Distributed Ledger Technology

Version 1 : Received: 15 June 2019 / Approved: 16 June 2019 / Online: 16 June 2019 (11:19:23 CEST)

A peer-reviewed article of this Preprint also exists.

Carlson, K.W. Safe Artificial General Intelligence via Distributed Ledger Technology. Big Data Cogn. Comput. 2019, 3, 40.

Journal reference: Big Data Cogn. Comput. 2019, 3, 40
DOI: 10.3390/bdcc3030040

Abstract

Artificial general intelligence (AGI) progression metrics indicate that AGI will arrive within decades. No proof exists that AGI will benefit humans rather than harm or eliminate them. I propose a set of logically distinct conceptual components that are necessary and sufficient to (1) ensure that various AGI scenarios will not harm humanity and (2) robustly align AGI with human values and goals. By systematically addressing pathways to malevolent AI, we can induce the methods/axioms required to redress them. Distributed ledger technology (DLT, 'blockchain') is integral to this proposal; for example, 'smart contracts' are necessary to address the evolution of AI that will be too fast for human monitoring and intervention. The proposed axioms:

1) Access to technology by market license.
2) Transparent ethics embodied in DLT.
3) Morality encrypted via DLT.
4) Behavior control structure with values at the roots.
5) Individual bar-code identification of critical components.
6) Configuration Item (from business continuity/disaster recovery planning).
7) Identity verification secured via DLT.
8) 'Smart' automated contracts based on DLT.
9) Decentralized applications: AI software modules encrypted via DLT.
10) Audit trail of component usage stored via DLT.
11) Social ostracism (denial of resources) augmented by DLT petitions.
12) Game theory and mechanism design.
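The audit-trail axiom (10) can be illustrated with a minimal sketch of a hash-chained, append-only ledger of AI component usage. This is an illustrative assumption, not the paper's implementation: the `AuditTrail` class, its entry fields, and the component identifiers are hypothetical, and a real DLT would additionally distribute the chain across nodes with a consensus mechanism.

```python
import hashlib
import json


def _entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 hash of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class AuditTrail:
    """Append-only, hash-chained log of AI component usage.

    Each entry links to the hash of the previous entry, so any
    retroactive alteration of a recorded action breaks the chain
    and is detectable by verification.
    """

    def __init__(self):
        # Genesis entry anchors the chain with a fixed predecessor hash.
        genesis = {"component": None, "action": "genesis", "prev": "0" * 64}
        self.entries = [genesis]

    def record(self, component_id: str, action: str) -> dict:
        """Append a usage record for a component (e.g. a bar-coded module ID)."""
        entry = {
            "component": component_id,
            "action": action,
            "prev": _entry_hash(self.entries[-1]),
        }
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; returns False if any entry was tampered with."""
        for prev, cur in zip(self.entries, self.entries[1:]):
            if cur["prev"] != _entry_hash(prev):
                return False
        return True
```

Usage: after `trail.record("module-42", "invoke")`, editing any earlier entry changes its hash, so the next entry's `prev` field no longer matches and `trail.verify()` returns `False`.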

Subject Areas

artificial general intelligence; AGI; blockchain; distributed ledger; AI containment; AI safety; AI value alignment; ASILOMAR


