We study the strategic release timing of frontier AI systems by competing firms. Each firm develops a model whose quality improves with development time, but faces incentives to release early to capture first-mover advantages. Premature release imposes safety externalities on society that firms do not fully internalize. We characterize the symmetric Nash equilibrium in a preemption game and show that equilibrium release occurs strictly before the social optimum. We analyze four policy interventions: (i) minimum quality standards, which can implement the first-best; (ii) mandatory release delays, which paradoxically reduce deployed model quality by shifting preemption to the announcement stage, where quality locks in before the mandated waiting period begins; (iii) voluntary safety commitments, which can sustain cooperative outcomes when observable and credible; and (iv) Pigouvian safety taxes, which partially correct the externality but cannot by themselves eliminate the preemption distortion. Our results speak to ongoing policy debates about frontier AI regulation and generalize to other technologies with safety externalities and first-mover advantages.
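The gap between the preemption equilibrium and the social optimum can be illustrated numerically. The sketch below is not the paper's model: the functional forms (exponential quality growth, linear delay cost, linear safety harm) and all parameter values are hypothetical choices made purely for illustration. It applies the standard rent-equalization logic of timing games: preemption pushes the release date forward until the leader's payoff equals what a follower would earn, which lands well before the planner's preferred date once society's uninternalized safety harm is counted.

```python
import math

# Hypothetical parameters, chosen only for illustration:
# V = market value, sigma = leader's market share,
# c = weight on safety harm borne by society, r = per-period delay cost.
V, sigma, c, r = 1.0, 0.7, 2.0, 0.1

q = lambda t: 1.0 - math.exp(-t)  # model quality grows with development time t

# Social welfare: quality benefits minus uninternalized safety harm and delay cost.
W = lambda t: V * q(t) - c * (1.0 - q(t)) - r * t

# Private payoffs: the first mover captures share sigma, the follower 1 - sigma.
leader = lambda t: sigma * V * q(t) - r * t
follower = lambda t: (1.0 - sigma) * V * q(t) - r * t

def argmax(f, lo=0.0, hi=10.0, n=100_000):
    # Crude one-dimensional grid search; adequate for an illustration.
    return max((lo + (hi - lo) * i / n for i in range(n + 1)), key=f)

t_social = argmax(W)            # planner's release time
t_follow = argmax(follower)     # follower's best release time if it moves second
F_star = follower(t_follow)     # follower's equilibrium payoff

def bisect(g, lo, hi, tol=1e-9):
    # Find a root of g on [lo, hi], assuming g(lo) < 0 < g(hi).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Rent equalization: firms preempt each other until the leader's payoff is
# driven down to the follower's; the earliest such time is the equilibrium.
t_eq = bisect(lambda t: leader(t) - F_star, 0.0, t_follow)

print(f"equilibrium release t* = {t_eq:.2f}, social optimum = {t_social:.2f}")
```

Under these numbers the equilibrium release time is a small fraction of the socially optimal one, matching the abstract's claim that equilibrium release occurs strictly before the social optimum; the larger the uninternalized harm weight `c`, the wider the gap.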