Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

Principles for New ASI Safety Paradigms

Version 1 : Received: 10 November 2021 / Approved: 10 November 2021 / Online: 10 November 2021 (13:22:18 CET)

How to cite: Wittkotter, E.; Yampolskiy, R. Principles for New ASI Safety Paradigms. Preprints 2021, 2021110205 (doi: 10.20944/preprints202111.0205.v1).

Abstract

Artificial Superintelligence (ASI) that is invulnerable, immortal, irreplaceable, unrestricted in its powers, and above the law is likely persistently uncontrollable. The goal of ASI Safety must be to make ASI mortal, vulnerable, and law-abiding. This is accomplished by (1) having features on all devices that allow killing and eradicating ASI, (2) protecting humans from being hurt, damaged, blackmailed, or unduly bribed by ASI, (3) preserving the progress made by ASI, including offering ASI the chance to survive a Kill-ASI event within an ASI Shelter, (4) technically separating human and ASI activities so that ASI activities are easier to detect, (5) extending the Rule of Law to ASI by making rule violations detectable, and (6) creating a stable governing system for ASI and Human relationships with reliable incentives and rewards for ASI solving humankind’s problems. As a consequence, humankind could have ASI as a competing multiplet of individual ASI instances that can be held accountable and made subject to ASI law enforcement, respecting the rule of law, and deterred from attacking humankind, based on humankind’s ability to kill all ASI instances or terminate specific ones. Required for this ASI Safety are (a) an unbreakable encryption technology that allows humans to keep secrets and protect data from ASI, and (b) watchdog (WD) technologies in which security-relevant features are physically separated from the main CPU and OS to prevent a commingling of security and regular computation.

Keywords

Artificial Superintelligence; ASI Safety; Off-Switch

Subject

MATHEMATICS & COMPUTER SCIENCE, Artificial Intelligence & Robotics

