Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

Avoiding AGI races through self-regulation

Version 1 : Received: 1 October 2018 / Approved: 2 October 2018 / Online: 2 October 2018 (15:32:31 CEST)
Version 2 : Received: 7 November 2018 / Approved: 8 November 2018 / Online: 8 November 2018 (10:52:39 CET)

How to cite: Worley III, G.G. Avoiding AGI races through self-regulation. Preprints 2018, 2018100030. https://doi.org/10.20944/preprints201810.0030.v1

Abstract

The first group to build artificial general intelligence or AGI stands to gain a significant strategic and market advantage over competitors, so companies, universities, militaries, and other actors have strong incentives to race to build AGI first. An AGI race would be dangerous, though, because it would prioritize capabilities over safety and increase the risk of existential catastrophe. A self-regulatory organization (SRO) for AGI may be able to change incentives to favor safety over capabilities and encourage cooperation rather than racing.

Keywords

artificial general intelligence, AI policy, self-regulatory organization

Subject

Computer Science and Mathematics, Computer Science
