Preprint Essay · Version 1 · Preserved in Portico · This version is not peer-reviewed

A Causal Framework for AI Regulation and Auditing

Version 1: Received: 17 January 2024 / Approved: 18 January 2024 / Online: 18 January 2024 (15:51:40 CET)

How to cite: Sharkey, L.; Ní Ghuidhir, C.; Braun, D.; Scheurer, J.; Balesni, M.; Bushnaq, L.; Stix, C.; Hobbhahn, M. A Causal Framework for AI Regulation and Auditing. Preprints 2024, 2024011424. https://doi.org/10.20944/preprints202401.1424.v1

Abstract

Artificial intelligence (AI) systems are poised to become deeply integrated into society. If developed responsibly, AI has the potential to benefit humanity immensely. However, it also poses a range of risks, including the risk of catastrophic accidents. It is therefore crucial to develop oversight mechanisms that prevent harm. This article outlines a framework for evaluating and auditing AI to provide assurance of responsible development and deployment, with a focus on catastrophic risks. We argue that responsible AI development requires comprehensive auditing that is proportional to AI systems’ capabilities and available affordances. This framework offers recommendations toward that goal and may be useful in the design of AI auditing and governance regimes.

Keywords

AI governance; AI evaluations; AI regulation; AI auditing

Subject

Computer Science and Mathematics, Other

