Preprint Article, Version 2 (not peer-reviewed)

Avoiding AGI Races Through Self-Regulation

Version 1 : Received: 1 October 2018 / Approved: 2 October 2018 / Online: 2 October 2018 (15:32:31 CEST)
Version 2 : Received: 7 November 2018 / Approved: 8 November 2018 / Online: 8 November 2018 (10:52:39 CET)

How to cite: Worley III, G.G. Avoiding AGI Races Through Self-Regulation. Preprints 2018, 2018100030 (doi: 10.20944/preprints201810.0030.v2).

Abstract

The first group to build artificial general intelligence (AGI) stands to gain a significant strategic and market advantage over its competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. Such a race would be dangerous, however, because it would prioritize capabilities over safety and increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change these incentives to favor safety over capabilities and to encourage cooperation rather than racing.

Subject Areas

artificial general intelligence, AI policy, self-regulatory organization
