Preprint Article, Version 1. This version is not peer-reviewed.

Solving the AI Race: Addressing the potential pitfalls of competition towards Artificial General Intelligence

Version 1 : Received: 30 September 2018 / Approved: 2 October 2018 / Online: 2 October 2018 (13:50:53 CEST)

How to cite: Schmidt, T. Solving the AI Race: Addressing the potential pitfalls of competition towards Artificial General Intelligence. Preprints 2018, 2018100024 (doi: 10.20944/preprints201810.0024.v1).

Abstract

AGI could arise within the next decades, promising a decisive strategic advantage to whoever develops it first. This paper discusses risks associated with the development of AGI: destabilizing effects on the strategic balance, underestimation of risks in the interest of winning the race, and the egoistic exploitation of the enormous benefits by a tiny minority. Furthermore, a developed AGI could be beyond human control: human goals might fail to be implemented, and an intelligence explosion to superintelligence could take place, leading to a total loss of control and power. If the competition for AGI is non-transparent, secret, uncontrolled, and unregulated, it is possible that its risks could not be managed and would lead to catastrophic consequences. The danger corresponds to that of nuclear weapons. It is therefore crucial that the key actors of a possible AI race agree at an early stage on the prevention and transparent regulation of such a race, similar to the measures taken to secure strategic stability: arms control, disarmament, and the prevention of the proliferation of nuclear weapons. The realization that an uncontrolled AI race could lead to the extinction of humanity, this time even independently of human will, requires analogous measures to contain, prevent, regulate, and secure an AI race within the framework of AGI development.

Subject Areas

Artificial General Intelligence; superintelligence; decisive strategic advantage; human goals; AI race; strategic stability; nuclear weapons; regulation
