Article
Preserved in Portico. This version is not peer-reviewed.
Avoiding AGI races through self-regulation
Version 1 : Received: 1 October 2018 / Approved: 2 October 2018 / Online: 2 October 2018 (15:32:31 CEST)
Version 2 : Received: 7 November 2018 / Approved: 8 November 2018 / Online: 8 November 2018 (10:52:39 CET)
How to cite: Worley III, G. G. Avoiding AGI races through self-regulation. Preprints 2018, 2018100030. https://doi.org/10.20944/preprints201810.0030.v1
Abstract
The first group to build artificial general intelligence (AGI) stands to gain a significant strategic and market advantage over its competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. Such a race would be dangerous, however, because it would prioritize capabilities over safety and thereby increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change these incentives to favor safety over capabilities and to encourage cooperation rather than racing.
Keywords
artificial general intelligence, AI policy, self-regulatory organization
Subject
Computer Science and Mathematics, Computer Science
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.