Preprint
Article

This version is not peer-reviewed.

Optimal Release Timing of AI Systems: A Strategic Analysis with Safety Externalities

Submitted: 31 March 2026

Posted: 31 March 2026


Abstract
We study the strategic release timing of frontier AI systems by competing firms. Each firm develops a model whose quality improves with development time, but faces incentives to release early to capture first-mover advantages. Premature release imposes safety externalities on society that firms do not fully internalize. We characterize the symmetric Nash equilibrium in a preemption game and show that equilibrium release occurs strictly before the social optimum. We analyze four policy interventions: (i) minimum quality standards, which can implement the first-best; (ii) mandatory release delays, which paradoxically reduce deployed model quality by shifting preemption to the announcement stage, where quality locks in before the mandated waiting period; (iii) voluntary safety commitments, which can sustain cooperative outcomes when observable and credible; and (iv) Pigouvian safety taxes, which partially correct the externality but cannot eliminate the preemption distortion alone. Our results speak to ongoing policy debates about frontier AI regulation and generalize to other technologies with safety externalities and first-mover advantages.
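The mechanism in the abstract — preemption pulling release strictly before both the private and the social optimum — can be illustrated numerically. The sketch below uses an assumed parameterization (exponential quality growth, discounting, a fixed first-mover/follower value split, and a linear safety-harm term), not the paper's exact functional forms; the rent-equalization step approximates the symmetric preemption equilibrium as the earliest time at which leading still weakly beats following.

```python
import math

# Illustrative sketch of the release-timing game (assumed parameterization,
# not the paper's exact model).
lam, r = 1.0, 0.1      # quality growth rate, discount rate
h = 1.0                # social harm per unit of missing model quality
A, B = 1.0, 0.4        # first-mover vs. follower share of the market value

def quality(t):        # model quality after t units of development time
    return 1.0 - math.exp(-lam * t)

def private_payoff(t): # discounted standalone payoff from releasing at t
    return math.exp(-r * t) * quality(t)

def welfare(t):        # society also bears harm h * (1 - quality)
    return math.exp(-r * t) * (quality(t) - h * (1.0 - quality(t)))

grid = [i / 1000.0 for i in range(50001)]          # t in [0, 50]
t_mono = max(grid, key=private_payoff)             # private optimum, no rival
t_social = max(grid, key=welfare)                  # social optimum

# Preemption via rent equalization: firms release at the earliest time where
# leading still beats waiting to become the follower at the rival's best time.
follower_value = B * private_payoff(t_mono)
t_preempt = next(t for t in grid if A * private_payoff(t) >= follower_value)

print(f"preemption equilibrium t = {t_preempt:.2f}")   # earliest
print(f"private optimum        t = {t_mono:.2f}")
print(f"social optimum         t = {t_social:.2f}")    # latest
```

Under these assumed parameters the ordering t_preempt < t_mono < t_social emerges: competition drags release earlier than even a monopolist would choose, and the uninternalized safety harm pushes the social optimum later still.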
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
