AI on the Agenda
By Allison Pytlak • Kathleen Scoggin
Emerging Technology
It can feel as though we are reaching an AI inflection point. Regulation and governance decisions made today will have implications well into the future. Along with its many benefits, AI presents risks and can amplify existing threats, as well as digital divides. Policymakers, tech leaders, and other actors now have the opportunity to lay the groundwork for responsible and safe AI use and development — a rare chance for forward-looking action. But can an organization notorious for slow decision-making and lengthy processes keep pace with technological development? What role can and should the UNSC play in an already busy governance space?
Such questions and topics formed the basis of statements delivered at a recent UN Security Council High-level Debate on AI, which was convened by the Republic of Korea and featured remarks from UN Secretary-General Antonio Guterres; Yoshua Bengio, Professor at Université de Montréal and Co-President and Scientific Director of LawZero (via videoconference); and Yejin Choi, Professor of Computer Science and Senior Fellow at Stanford University’s Institute for Human-Centered AI.
When the United Nations Security Council (UNSC) convened to debate artificial intelligence (AI) on September 24, a certain irony infused the discussion: An institution forged in the ashes of the Second World War, whose permanent membership still reflects the geopolitical realities of 1945, must now grapple with technologies that evolve faster than diplomatic consensus can form. The UN is increasingly struggling with this challenge — many of its deliberative processes were designed in an era of telegrams but are now attempting to govern peace and security challenges that emerge at the speed of algorithms.
“Eighty years ago, the UN’s central concern at its founding was how the international community would manage the emerging threat of nuclear weapons,” said South Korean President Lee Jae Myung, who chaired the Debate. “Now it is time to explore new governance structures to address the new challenges and threats posed by AI.”
Indeed, AI and tech governance more broadly were hot topics during the opening week of the UN General Assembly’s 80th session, during which the UN Security Council’s Debate on AI occurred. This is not surprising considering the wide range of initiatives and AI governance fora underway within the UN, most of which have sprung up only in the last two years. Many world leaders referenced these topics in their UNGA Debate statements, and a high-level meeting was held to launch the Global Dialogue on AI Governance, an outcome of agreements reached during the 2024 UNGA session.
It is evident that both experts and governmental policymakers agree that the “world is at an extraordinary inflection point,” to use the words of Yejin Choi, Professor of Computer Science and Senior Fellow at Stanford University’s Institute for Human-Centered AI, who briefed the Council. But what does this look like practically, and what are the politics at play?
Here are our three big takeaways from the Debate:
The potential for good is enormous — but the potential for misuse is real.
One of the objectives of the Debate was to encourage discussion on mitigating the risks and maximizing the benefits of AI in the context of international peace and security. Although there were varying interpretations of how AI governance should move forward, the “dual use” nature of the technology was widely commented on, including sometimes in relation to UNSC mandates and priorities. Several countries emphasized AI’s transformative potential for peacekeeping operations, with France and the UK highlighting enhanced early warning and data analysis capabilities and Slovenia noting that AI could eliminate information redundancies and optimize logistics for complex missions. Kenya and Guyana pointed to AI’s promise for strengthening health systems and climate response, factors that often influence the need for peacekeeping missions in the first place. The mentions of these advantages, along with the potential for economic growth and progress toward the Sustainable Development Goals (SDGs), reflected broad optimism about the technology.
However, numerous countries also raised concerns about the misuse of AI, spanning a spectrum of threats. These concerns encompassed direct misuse such as AI-generated disinformation that could undermine democracies and endanger peacekeepers, as well as cyberattacks on critical infrastructure and the exacerbation of online extremism. Somalia and Sierra Leone emphasized how AI’s uneven spread creates disparities that leave vulnerable nations at risk, while Algeria noted that limited internet coverage and a lack of ICT regulations across Africa compound these challenges. The message overall was clear: AI has the potential to either deepen global inequalities and jeopardize democratic processes or become a transformative driver to better serve the UNSC’s international peace and security objectives.
Throughout, countries made reference to national and regional policies or acts on various aspects of AI, or their involvement in non-UN processes. It is evident that while only a short time has passed since generative AI came into the mainstream, policymakers are sitting up and taking notice.
But…what about AI?
Another feature of the Debate was the very wide range of AI-related threats and concerns referenced by states, some of which verged into other technologies and areas of interest. This raises the question: what about AI is pertinent to the UNSC? In addition to the many risks and benefits already described above, numerous countries spoke of AI in the military domain, referencing in particular the initiative being led by South Korea and the Netherlands on this topic. Autonomous weapons systems (AWS) were frequently cited as well. AWS have been the focus of a UN Group of Governmental Experts based in Geneva for over a decade but have recently gained traction in the UNGA, including through work toward a legally binding instrument on AWS. UN Secretary-General Guterres used the opportunity of the Debate to renew his call for a ban, by 2026, on fully autonomous weapons systems operating without human control.
There were also points made about how AI facilitates cyberattacks and references to national cybersecurity strategies, global norms, and information integrity. Positively, many countries spoke strongly about upholding and using AI in conformity with international law and the importance of meaningful human control, as well as about accountability and transparency in developing and utilizing AI, whether in civilian or military contexts.
The diversity of AI-related issues and concerns demonstrates the ubiquity of the technology itself but also highlights a need for the Council to refine its focus — should it take forward more work on this topic. More on that below.
A Role for the Council?
Another point that States were invited to comment on during the Debate was the possible role of the UNSC in AI governance. States are divided here, and some of the big players are changing their views. Notably, the U.S. was clear in rejecting “efforts by international bodies to assert centralized control and global governance of AI,” which struck a different tone from what it promoted less than one year ago as host of the Council’s second-ever formal Debate on AI. Russia maintained its long-standing view that work in the Council would duplicate efforts elsewhere, such as the GGE on autonomous weapons systems and the newly established Global Mechanism for cyber-related issues in the General Assembly. China is increasingly advocating a “people-centered approach” with accountability mechanisms and a grounding of AI in international law and shared values.
It is not surprising that views on the role of the Council diverge, considering the sheer number of other governance initiatives that exist both within the UN system and externally. Even countries like France that generally support the Council discussing or debating AI cautioned about the need to act collaboratively with other processes, including the series of AI Action Summits, the most recent of which France hosted. A practical way forward may be in line with the suggestion from Slovenia that the Council receive regular informational briefings on how AI affects international peace and security and UNSC agenda items, or those from Guyana about using AI to monitor the implementation of UNSC resolutions, support peacekeeping mandates, and address food insecurity and other related issues. Australia suggested that the Council can lead by example in promoting transparency, accountability, and peaceful uses of emerging technologies, and said it intends to bring lessons from its national experience to its campaign for a UNSC seat.
The Bigger Debate
This Debate and others are not simply about investigating a new technology but also about determining who holds power in an AI-driven world. Whether focused on military or civilian AI, there are important interests at stake: Will AI development remain concentrated in a few nations, or will it be accessible to all? Will governance frameworks be imposed by the technologically advanced or co-created through inclusive dialogue?
This concern was palpable from several Global Majority countries, including Guyana and Somalia, which warned about the risks of “digital colonialism,” and from Algeria, which cautioned against Africa being turned into a “lab rat” for testing technologies. There is a real risk that advanced economies will have greater opportunities than developing countries to leverage the benefits of AI, which would only widen the existing digital divide.
In remarks delivered at a pre-UNSC Debate briefing, Stimson’s Giulia Neaher, a Research Analyst with the Strategic Foresight Hub, stressed that “It is not enough, however, to recognize the variety of AI’s applications in a vacuum. We must also consider how this variety is presented across global contexts, and particularly in regions that are historically underrepresented in international governance and technology development.”
In his opening remarks, UN Secretary-General Antonio Guterres stated that innovation must serve, and not undermine, humanity. AI is a tool, after all — the safeguards and norms we put around it will determine which of those outcomes becomes our reality.