Submitted: 28 June 2025
Posted: 30 June 2025
Abstract
Keywords:
1. Introduction
1.1. Background and Context
1.2. The Need for Legal Evolution
1.3. Objectives of the Study
1.3.1. To Assess the Limitations of Existing International Laws
1.3.2. To Explore the Risks of Algorithmic Escalation and Autonomy
1.3.3. To Evaluate Accountability and Responsibility Dilemmas
1.3.4. To Analyze the Role of Dual-Use Technologies
1.3.5. To Identify Normative and Policy Gaps
1.3.6. To Propose Legal and Policy Recommendations
1.3.7. To Contribute to Interdisciplinary Legal and Security Discourse
1.4. Key Research Questions
1.4.1. How Adequate Are Existing International Laws and Treaties in Addressing AI-Enabled Nuclear Threats?
1.4.2. What Unique Legal and Ethical Challenges Does AI Pose in the Context of Nuclear Warfare?
1.4.3. Who Bears Responsibility in the Event of Unlawful or Accidental Use of AI-Driven Nuclear Systems?
1.4.4. To What Extent Do Dual-Use Technologies Complicate Legal Regulation and Verification Protocols?
1.4.5. How Can International Law Be Reimagined or Revised to Respond to Algorithmic Warfare and Autonomy?
1.4.6. What Role Should Global Institutions Play in Governing AI-Nuclear Convergence?
1.4.7. How Do Emerging Security Doctrines and Military AI Strategies Influence Legal Interpretation and Application?
1.5. Methodological Approach
1.6. Historical Parallels and Evolution
1.7. Ethical and Philosophical Considerations
1.8. Geopolitical Fragmentation and Strategic Competition
1.9. Structural Challenges to Lawmaking
2. Literature Review
2.1. Introduction to the Literature Landscape
2.2. Technological Evolution and Strategic Doctrine
2.2.1. Automation, Autonomy, and Escalation Risk
2.2.2. Predictive Analytics and AI Bias
2.3. Legal and Ethical Discourse
2.3.1. International Humanitarian Law (IHL) and AI
2.3.2. Responsibility, Accountability, and State Sovereignty
2.4. Arms Control Regimes and Treaty Gaps
2.4.1. The NPT and Technological Modernization
2.4.2. Other Initiatives: TPNW, OST, and Regional Frameworks
2.5. Emerging Norms and Non-State Actors
2.5.1. Private Tech Firms and Defense Contractors
2.5.2. Civil Society and Epistemic Communities
3. Theoretical Framework
3.1. Introduction to Theoretical Grounding
3.2. Technological Determinism and Sociotechnical Systems
3.2.1. Technology as a Shaping Force
3.2.2. Black-Box Epistemology and Legal Invisibility
3.3. Constructivist International Relations and Norm Diffusion
3.3.1. Norm Construction in Emerging Technologies
3.3.2. Legalization of Norms and the Problem of State Consent
3.4. Legal Positivism vs. Legal Pluralism in International Law
3.4.1. Legal Positivism and Treaty Formalism
3.4.2. Legal Pluralism and Adaptive Jurisprudence
3.5. Just War Theory and Ethics in the Age of AI
3.5.1. Jus ad Bellum and Strategic AI Dilemmas
3.5.2. Jus in Bello and Meaningful Human Control
3.5.3. Machine Ethics and Computational Morality
3.6. Integrative Model: A Hybrid Theoretical Approach
3.7. Theoretical Limitations and Future Avenues
4. Research Methodology
4.1. Introduction
4.2. Research Philosophy and Ontological Assumptions
4.3. Hypotheses
4.4. Legal Doctrinal Methodology
4.5. Case Study Approach
4.6. Expert Virtual Interviews and Qualitative Insights
4.7. Technology Assessment Framework
4.8. Normative and Ethical Analysis
4.9. Limitations of the Methodology
4.10. Ethical Considerations in Research
5. Case Study Analysis
5.1. Introduction
5.2. United States
5.2.1. Strategic Context
5.2.2. NC3 and AI Integration
5.2.3. Legal Review Mechanisms
5.2.4. Normative Discourse and Accountability
5.3. Russia
5.3.1. Strategic Context
5.3.2. NC3 and AI Integration
5.3.3. Legal and Institutional Framework
5.3.4. Normative Resistance and Disinformation
5.4. China
5.4.1. Strategic Context
5.4.2. AI-Nuclear Integration
5.4.3. Legal Doctrine and Treaty Interpretation
5.4.4. Epistemic and Technocratic Governance
5.5. Israel
5.5.1. Strategic Ambiguity and Technological Innovation
5.5.2. NC3 and Cyber-AI Overlap
5.5.3. Legal Framework and Secrecy
5.5.4. Civil-Military Fusion and Dual-Use Dynamics
5.6. Comparative Insights and Cross-Case Patterns
5.6.1. Legal Ambiguity as Strategic Asset
5.6.2. Soft Law and Ethical Guidelines
5.6.3. State Sovereignty and Treaty Resistance
5.6.4. Epistemic Communities and Civil Society
5.7. Summary of Case Study Findings
5.8. Iran
5.8.1. Strategic Context
5.8.2. AI Integration in Military and Strategic Domains
5.8.3. Legal and Normative Frameworks
5.8.4. Geopolitical Isolation and Legal Exceptionalism
5.8.5. Risk Amplification through AI and Strategic Ambiguity
5.8.6. Civil-Military Research Nexus
6. Discussion and Policy Recommendations
6.1. Introduction
6.2. The Legal Vacuum and Strategic Risk Multiplication
6.3. Ethical Challenges and Normative Ambiguities
6.4. Fragmentation of Governance Regimes
6.5. Comparative Legal Analysis: Converging Gaps
6.6. Policy Recommendations
6.6.1. Establish an AI-Nuclear Governance Framework
6.6.2. Strengthen Article 36 Review Mechanisms
6.6.3. Clarify and Operationalize ‘Meaningful Human Control’
6.6.4. Promote Dual-Use Research Oversight
6.6.5. Expand Multilateral Dialogue and Confidence-Building Measures
6.6.6. Support Norm Entrepreneurship by Non-State Actors
6.7. Anticipating and Mitigating Future Risks
7. Conclusions
7.2. Final Reflections
References
- Abbott, K., & Snidal, D. (2000). Hard and soft law in international governance. International Organization, 54(3), 421–456. [CrossRef]
- Acton, J. M. (2018). Escalation through entanglement: How the vulnerability of command-and-control systems raises the risks of an inadvertent nuclear war. International Security, 43(1), 56–99. [CrossRef]
- Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709. [CrossRef]
- Bhaskar, R. (2008). A realist theory of science. Routledge.
- Boulanin, V., & Verbruggen, M. (2017). Mapping the development of autonomy in weapon systems. Stockholm International Peace Research Institute.
- Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. Future of Humanity Institute.
- Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12.
- Cordesman, A. H. (2021). Iran's military forces and warfighting capabilities: The threat in the northern Gulf. CSIS.
- Crootof, R. (2015). The killer robots are here: Legal and policy implications. Cardozo Law Review, 36(4), 1837–1915.
- Department of Defense. (2012). Directive 3000.09: Autonomy in Weapon Systems. U.S. DoD.
- Department of Defense. (2018). Summary of the 2018 Department of Defense Artificial Intelligence Strategy.
- Docherty, B. (2019). Stopping killer robots: Country positions on banning fully autonomous weapons and retaining human control. Human Rights Watch.
- Ekelhof, M. (2019). Moving beyond semantics on autonomous weapons: Meaningful human control in operation. Global Policy, 10(3), 343–348. [CrossRef]
- Esfandiary, D., & Fitzpatrick, M. (2016). Iran's nuclear program and international law: From confrontation to accord. IISS.
- Finnemore, M., & Sikkink, K. (1998). International norm dynamics and political change. International Organization, 52(4), 887–917. [CrossRef]
- Hart, H. L. A. (1961). The concept of law. Oxford University Press.
- Horowitz, M. C. (2019). Artificial intelligence and the future of warfare. Belfer Center for Science and International Affairs.
- Horowitz, M. C., Kahn, L., & Scharre, P. (2020). Artificial intelligence and international stability: Risks and confidence-building measures. Center for a New American Security.
- International Atomic Energy Agency. (2022). Verification and Monitoring in the Islamic Republic of Iran in Light of United Nations Security Council Resolution 2231 (2015). IAEA.
- International Committee of the Red Cross. (2019). Autonomous weapon systems: Implications of increasing autonomy in the critical functions of weapons. Geneva: ICRC.
- Kania, E. B. (2019). Battlefield singularity: Artificial intelligence, military revolution, and China's future military power. Center for a New American Security.
- Kristensen, H. M., & Korda, M. (2023). Status of world nuclear forces. Federation of American Scientists.
- Lewis, J., & Xue, L. (2020). China's strategic posture in the age of AI. Science and Global Security.
- MacKenzie, D., & Wajcman, J. (1999). The social shaping of technology (2nd ed.). Open University Press.
- Merry, S. E. (1988). Legal pluralism. Law & Society Review, 22(5), 869–896.
- OECD. (2019). OECD Principles on Artificial Intelligence. Organisation for Economic Co-operation and Development.
- Pagallo, U. (2013). The laws of robots: Crimes, contracts, and torts. Springer.
- Roff, H. M., & Moyes, R. (2016). Meaningful human control, artificial intelligence and autonomous weapons. Article 36 & International Committee for Robot Arms Control.
- Sachs, N. (2019). Cybersecurity and Israel's national security strategy. Israel Journal of Foreign Affairs, 13(3), 231–244.
- Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. W. W. Norton & Company.
- Schmitt, M. N. (2013). Autonomous weapon systems and international humanitarian law: A reply to the critics. Harvard National Security Journal, 4(1), 1–45.
- Searle, J. R. (1995). The construction of social reality. Simon and Schuster.
- Shelton, D. (2000). Commitment and compliance: The role of non-binding norms in the international legal system. Oxford University Press.
- Singer, P. W., & Brooking, E. T. (2018). LikeWar: The weaponization of social media. Houghton Mifflin Harcourt.
- Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
- Sparrow, R. (2016). Robotic weapons and the future of war. In Ethics and Emerging Technologies (pp. 311–323). Palgrave Macmillan.
- UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. United Nations Educational, Scientific and Cultural Organization.
- UNIDIR. (2020). The weaponization of increasing autonomy in future warfare. United Nations Institute for Disarmament Research.
- UNIDIR. (2022). Artificial Intelligence and the Future of Deterrence: A Survey of Issues. United Nations Institute for Disarmament Research.
- Vincent, R. J., & Packer, J. (2020). State responsibility and autonomous weapon systems. Journal of Conflict and Security Law, 25(3), 357–385.
- Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press.
- Wendt, A. (1999). Social theory of international politics. Cambridge University Press.
- Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.
- Work, R., & Brimley, S. (2014). 20YY: Preparing for war in the robotic age. Center for a New American Security.
- Zeng, Y., Lu, E., & Huangfu, C. (2020). Linking AI principles. Nature Machine Intelligence, 2(1).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).