Technological progress has always reshaped the landscape of international security, from the advent of nuclear weapons to the rise of cyberspace. Artificial Intelligence (hereinafter: AI) represents the latest frontier, offering capabilities that range from predictive analysis and humanitarian aid to autonomous targeting systems and advanced cyberattacks. Unlike many previous technologies, AI is characterised by its dual-use potential: the same systems that can prevent famine or assist in post-conflict recovery can also destabilise global security if deployed irresponsibly.
This inherent tension has propelled AI governance to the centre of international debate. The United Nations Security Council (hereinafter: UNSC), tasked with safeguarding peace and security, has begun grappling with the opportunities and risks posed by this technology. Its discussions reveal both the promise of AI and the sharp divisions between states on how — or whether — it should be governed collectively.
The UNSC Debate on AI
On 24 September 2025, the UNSC convened a high-level open debate on the implications of AI for international peace and security. Chaired by the President of the Republic of Korea, Lee Jae Myung, the meeting sought to address both the transformative potential and the risks of AI. United Nations Secretary-General António Guterres underscored AI’s dual capacity to prevent crises and amplify them. He pointed to positive applications such as forecasting food insecurity and aiding de-mining operations, but also warned of dangers including autonomous weapons, cyberattacks on critical infrastructure and deepfakes capable of manipulating public opinion or disrupting diplomacy. “Humanity’s fate cannot be left to an algorithm,” he remarked, urging urgent safeguards.
Calls for Restraint: The Question of Autonomous Weapons
The Secretary-General outlined four priorities for state action: ensuring human control over the use of force; establishing coherent international frameworks; protecting information integrity during conflict; and addressing the widening gap in AI capacity between states. In particular, he reiterated his call for a ban on Lethal Autonomous Weapons Systems (hereinafter: LAWS) that operate without human oversight. Proposing the conclusion of a legally binding instrument within a year, he emphasised that decisions involving nuclear arsenals or the use of deadly force must remain firmly in human hands.

Expert testimony reinforced these concerns. Professor Yejin Choi of Stanford University highlighted how the concentration of AI development in a small number of companies and states risks leaving most of the world excluded, warning of growing inequalities in access and influence.
Divergent Positions on Global Governance
The debate revealed a fundamental divide over how far international governance should extend. The United States of America (hereinafter: USA), represented by Michael Kratsios, Assistant to the President and Director of the Office of Science and Technology Policy, cautioned against centralised oversight. He stated that the USA would “totally reject all efforts by international bodies to assert centralised control and global governance of AI,” arguing that overregulation could hinder innovation and inadvertently strengthen authoritarian control. Washington’s emphasis remains on promoting American AI standards while enabling partners and allies to develop sovereign ecosystems.

Other states, including Pakistan and Greece, stressed the dangers of unregulated proliferation. They warned of an arms race in AI-enabled weapons and the erosion of human authority over critical decisions, aligning with the Secretary-General’s appeal for caution and human-centred regulation.
Broader Implications for Global Security
The debate underscored the challenge of reconciling rapid technological advances with the slower, consensus-driven processes of multilateral institutions. While some states prioritise sovereignty and innovation, others call for immediate regulation to address ethical and security concerns. This divergence risks leaving a governance vacuum at precisely the moment when the technology is advancing at speed.
Scenario One: Growing Inequality in AI Capacity
If consensus on governance remains elusive, powerful states and their allies are likely to pursue independent AI strategies. This would widen the capacity gap between technologically advanced and resource-constrained nations, increasing the vulnerability of the latter to cyberattacks, disinformation and AI-enabled coercion.
Scenario Two: Fragmented Norms and Regional Approaches
Alternatively, governance may develop outside the UN framework, through regional coalitions, export control regimes or technical standards bodies. While these efforts could establish some safeguards, they risk creating a patchwork of incompatible frameworks. Such fragmentation could complicate interoperability, weaken collective security and allow technological progress to outpace ethical and legal standards.
The UNSC debate illustrates that AI governance is no longer a theoretical concern but an urgent matter of international politics. The lack of consensus highlights a core dilemma: the world’s most powerful states disagree on where authority should lie. Whether the future brings deepening inequality or fragmented governance structures, one reality is clear — AI will shape the architecture of global security, whether or not international institutions succeed in shaping AI.