The UN Security Council in the AI Era

UN Security Council members issue a joint statement on the implications of AI for international peace and security

As 2025 drew to a close, four United Nations Security Council (UNSC) members issued a joint statement on artificial intelligence (AI) and international peace and security. The timing was no accident. From Ukraine to Gaza, to India and Pakistan, conflicts and crises increasingly have digital or technological layers. The question facing the Council is no longer whether technologies like AI affect international security, but how the UN’s primary body for maintaining peace can respond effectively.

The December 31 statement, initiated by Slovenia as it concluded its two-year term and supported by France, Denmark, and Greece, recognizes AI’s transformative potential for international peace and security while acknowledging significant risks from its irresponsible use.

It rightly recalls the Security Council’s “primary responsibility for the maintenance of international peace and security” and commits members to “remain seized of the implications of AI to this mandate,” but it does not take positions on AI governance directly. This ambiguity opens a door for interested Council members to find ways to contend with and monitor the security implications of AI without wading into the politics of jurisdiction. Importantly, the statement stresses that the use of AI must accord with international law, including international humanitarian law and international human rights law, in order to prevent destabilizing effects and unintended harm.

Timing and Context

The statement emerges from increased Council interest in technology and security issues. In 2024 and 2025 alone, members convened high-level open debates on cybersecurity (June 2024) and AI (September 2025), alongside more focused Arria-formula meetings on commercial spyware, ransomware attacks against medical facilities, and broader cybersecurity concerns. These sessions revealed both the promise and peril of emerging technologies for the Council’s work.

Digital technologies like AI offer genuine benefits for UNSC mandates, as the joint statement acknowledges and as multiple interventions at the September 2025 high-level AI debate demonstrated. For example, AI-enhanced surveillance can improve civilian protection in UN peace operations. Machine learning algorithms can help humanitarian actors predict food insecurity or identify populations at risk.

Yet, new technologies create new vulnerabilities. United Nations peace operations and humanitarian actors increasingly rely on digital systems for communication, logistics, and coordination. This dependence creates attack surfaces that hostile actors can exploit. A cyberattack that disables communication systems could hamper operations in a crisis zone. Compromised data systems could expose the locations of vulnerable civilian populations or humanitarian workers. Just last week, the UNSC held an emergency meeting to discuss the recent U.S. military operation in Venezuela — where cyber operations reportedly played a supporting role.

The AI-Cyber Nexus

Although focused primarily on AI, the joint statement makes reference to AI as an “enabler” of cyber and information threats, among others.

While cyber and AI governance are treated as distinct issues in diplomatic circles, including at the UN where cyber governance has a much longer history and unique political dynamics, the technological connection between the two is clear. AI amplifies existing cyber threats in several ways; for example, machine learning can automate vulnerability discovery, allowing attackers to find and exploit software weaknesses at greater scale. AI-generated phishing messages are increasingly sophisticated and harder to detect. Deepfake technology enables impersonation attacks that can undermine trust and introduce uncertainty.

The flip side is equally important: AI represents a powerful tool for cyber defense. Anomaly detection can spot intrusions faster than human analysts, and AI systems can respond to attacks at a greater speed. For the UNSC’s purposes, the question becomes how to help member states and UN operations build the cybersecurity capacity needed to safely deploy AI technologies.

Bridging the Digital Divide

During the September 2025 high-level debate on AI convened by the Republic of Korea, a recurring theme emerged from developing countries: the fear of being left behind. As Stimson’s commentary on the debate noted, “AI has the potential to either deepen global inequalities and jeopardize democratic processes or become a transformative driver to better serve the UNSC’s international peace and security objectives.”

The concern is very real. If AI governance frameworks and regulatory standards are developed primarily in the global North, they may not address the needs or reflect the values of the global South. If advanced AI capabilities remain concentrated in a handful of wealthy nations, power imbalances will deepen. If capacity-building efforts focus on AI adoption without adequate attention to building even fundamental national cyber resilience, developing countries may import vulnerabilities faster than capabilities.

What Next?

The four-country statement arrives at an important moment for AI governance and tech diplomacy. The General Assembly established an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance in August 2025. In December 2024, it adopted a resolution specifically addressing AI in the military domain. In addition, in 2025, the UN General Assembly established a permanent “Global Mechanism” for cybersecurity to replace the temporary Open-ended Working Groups and Groups of Governmental Experts that handled cyber topics in the past. The UN Convention on Cybercrime opened for signature in 2025, and the WSIS+20 review process on the future of digital governance reached an outcome.

The Security Council statement helps bridge these efforts by opening the door to future recognition of AI issues within the Council without staking any governance claims. The statement’s careful language reflects divisions among UN member states about whether the UNSC should play any role in AI governance at all. Instead of taking a position on that question, it urges UN members and stakeholders to collaborate closely to “anticipate, assess and address emerging risks.”

The four countries behind the statement have created space for continued dialogue and risk assessment without forcing premature decisions on contested issues. Whether this measured approach can keep pace with the speed of technological change remains to be seen. What is certain is that AI will continue reshaping conflict, crisis, peace operations, and other areas in the Council’s mandate. As we have argued elsewhere, downplaying or continuing to overlook the impacts of emerging technologies brings into question the future viability of the UNSC, in the AI era and beyond.

For more on Stimson’s work on cyber, AI, and technology in the UN Security Council, visit https://www.stimson.org/project/cyber-security-in-the-un-security-council/.
