As the world embarks on an artificial intelligence revolution, governments and supranational organizations are taking highly divergent approaches to regulating AI's effects. Although emerging educational theories propose that AI may precipitate a paradigm shift in how knowledge is produced, one that values human-AI co-creation [1], empirical studies of how states are actually managing this transition remain scarce. To fill this gap, this paper applies a qualitative comparative policy review of 35 representative excerpts drawn from seven authoritative legislative and strategic documents across China, Singapore, and the European Union. Using a six-dimensional framework (inter-coder reliability κ = 1.00), we investigate the extent to which these policies are framed around optimization or restructuring: prioritizing infrastructural scale and efficiency versus demanding systemic, pedagogical, and epistemic transformation. The findings reveal radically different policy imaginaries. The EU relies almost exclusively on restructuring-based legal mandates, guarding against high-risk algorithmic harms and enforcing strict ethical safeguards. China exhibits a distinctive temporal evolution, shifting from macroeconomic optimization in 2017 to a hybrid model that mandates interactive exploration and multimodal creation by 2025. Singapore, by contrast, charts a deliberate middle path, substantially restructuring human-centered pedagogical functions while confining optimization to scaling public-service infrastructure. Ultimately, this paper demonstrates that there is no single global model of AI education governance. We argue that negotiating this optimization-restructuring tension is key for institutions seeking to cultivate authentic student agency without eroding ethical safeguards.