The Militarization of Artificial Intelligence

Building understanding among stakeholders about AI technology and developing responsive solutions to mitigate risks.

Militaries are developing systems that use AI for missions ranging from logistics and decision support to command and control, and even lethal force. Those capabilities appear to be advancing faster than discussion of their risks – including whether certain applications could raise safety concerns, stoke arms-race dynamics, or remove safeguards against the outbreak of nuclear war.

The Stanley Center, the United Nations Office for Disarmament Affairs, and the Stimson Center partnered on a workshop and series of papers to facilitate such a discussion among stakeholders. The workshop, held in August 2019 at UN Headquarters, brought together experts from member states, industry, academia, and research institutions. These papers capture that conversation, sharing assessments of the topic from US, Chinese, and Russian perspectives. In publishing them, we aim to help expand this dialogue to include more stakeholders.


Multistakeholder Perspectives on the Potential Benefits, Risks, and Governance Options for Military Applications of Artificial Intelligence

Few developments in science and technology hold as much promise for the future of humanity as the suite of computer science-enabled capabilities that falls under the umbrella of artificial intelligence (AI). AI has the potential to contribute to the health and well-being of individuals, communities, and states, as well as to aid fulfillment of the United Nations’ 2030 Agenda for Sustainable Development and its Sustainable Development Goals. As with past revolutionary technologies, however, AI applications could affect international peace and security, especially through their integration into the tools and systems of national militaries. In recognition of this, UN Secretary-General António Guterres, in his agenda for disarmament, Securing Our Common Future, stresses the need for UN member states to better understand the nature and implications of new and emerging technologies with potential military applications and the need to maintain human control over weapons systems. He emphasizes that dialogue among governments, civil society, and the private sector is an increasingly necessary complement to existing intergovernmental processes.

Such an approach is particularly relevant for AI, which, as an enabling technology, is likely to be integrated into a broad array of military applications but is largely being developed by private sector entities or academic institutions for different, mostly civilian, purposes. To facilitate a conversation among disparate stakeholders on this topic, the UN Office for Disarmament Affairs, the Stimson Center, and the Stanley Center for Peace and Security convened an initial dialogue on the intersection of AI and national military capabilities. Over two days at UN Headquarters in New York, experts from member states, industry, academia, and research institutions participated in a workshop on The Militarization of Artificial Intelligence.

Discussion within the workshop was candid and revealed that the implications of AI’s integration into national militaries for international peace and security remain largely unclear. Consequently, uncertainty about the domains in which and the purposes for which national militaries will use AI poses practical challenges to the design of governance mechanisms. This uncertainty generates fear and heightens perceptions of risk. These dynamics reflect the early stage of discourse on military applications of AI and reinforce the need for active and consistent engagement.

Workshop participants were mindful of the need for precision when referring to the large body of tools grouped under the term “AI,” most notably by distinguishing between machine-assisted decision making and machine autonomy. The result was a rich discussion that identified three topical areas in need of ongoing learning and dialogue among member states and other stakeholders:

  • Potential Risks of Military Applications of AI: Applications of AI within the military domain undoubtedly pose risks; it is important, however, not to be alarmist in addressing these potential challenges.
  • Potential Benefits of Military Applications of AI: There is a need to consider more fully the potential positive applications of AI within the military domain and to develop state-level and multilateral means of capturing these benefits safely.
  • Potential Governance of Military Applications of AI: These emergent technologies pose considerable challenges to international governance, and the primary work of stakeholders will be to devise constructs that balance innovation and the positive effects of AI against mitigating or eliminating the risks of military AI.

Potential Risks of Military Applications of Artificial Intelligence

The risks of introducing artificial intelligence into national militaries are not small. Lethal autonomous weapon systems (LAWS) receive popular attention because such systems are easily imagined and raise important security, legal, philosophical, and ethical questions. Workshop participants, however, identified multiple other risks from military applications of AI that pose challenges to international peace and security.

Militaries are likely to use AI to assist with decision making, whether by providing information to humans as they make decisions or by taking over the execution of decision-making processes entirely. The latter may happen, for example, in communications-denied environments or in environments such as cyberspace, in which action happens at speeds beyond human cognition. While this may improve a human operator’s or commander’s ability to exercise direct command and control over military systems, it could also have the opposite effect. AI affords the construction of complex systems that can be difficult to understand, creating problems of transparency and of knowing whether a system is performing as expected or intended. Where transparency is sufficiently prioritized in AI design, this concern can be reduced. Where it is not, errors in AI systems may go unseen—whether such errors are accidental or caused deliberately by outside parties using techniques like hacking or data poisoning.

Participants debated whether AI can be used effectively to hack, distort, or corrupt the functions of command-and-control structures, including early warning systems for nuclear weapons. Specific note was made, however, that the integration of multiple AI-enabled systems could make it harder to identify command-and-control malfunctions. Such integration is a likely direction for advancement in military applications of AI.

Participants also discussed how advances in AI interact with human trust in the machine-based systems they use. Increasing complexity could make AI systems harder to understand and, therefore, encourage reliance on trust rather than transparency. Increased trust means that errors and failures are even less likely to be detected.

The concern was also expressed that the desire for—or fear of another’s—decision-making speed may create pressure to act quickly on information aggregated and presented by AI. Such pressure can increase the likelihood that decision makers will fall prey to known automation biases, including the rejection of contradictory or surprising information. So too might the addition of speed create pressures that work against caution and deliberation, with leaders fearing the consequences of delay. Speed can be especially destabilizing in combat, where increases in pace ultimately could surpass the human ability to understand, process, and act on information. This mismatch between AI speed and human cognition could degrade human control over events and increase the destructiveness of violent conflict.

Although participants worry about the potential for lone actors to use AI-enabled tools, these concerns are moderated by such actors’ limited ability to apply the tools at large scale. More problematic to participants is the potential for national-level arms racing. The potential ill effects of AI arms racing are threefold. First, arms race dynamics have in the past led to high levels of government spending that were poorly prioritized and inefficient. Second, arms racing can generate an insecurity spiral, with actors perceiving others’ pursuit of new capabilities as threatening. Third, the development of AI tools for use by national militaries is in a discovery phase, with government and industry alike working to find areas for useful application. Competition at the industry and state levels might, therefore, incentivize fast deployment of new and potentially insufficiently tested capabilities, as well as the hiding of national AI priorities and progress. These characteristics of arms racing—high rates of investment, a lack of transparency, mutual suspicion and fear, and a perceived incentive to deploy first—heighten the risk of avoidable or accidental conflict.

Potential Benefits of Military Applications of Artificial Intelligence

For national militaries, AI has broad potential beyond weapons systems. Often described as suited to jobs that are “dull, dirty, and dangerous,” AI applications offer a means to avoid putting human lives at risk or assigning humans to tasks that do not require the creativity of the human brain. AI systems also have the potential to reduce costs in logistics and sensing and, where transparency is prioritized as a design value, to enhance communication and transparency in complex systems. In particular, as an information and communications technology, AI might benefit the peacekeeping agenda by more effectively communicating the capacities and motivations of military actors.

Workshop participants noted that AI-enabled systems and platforms have already made remarkable and important enhancements to national intelligence, surveillance, and reconnaissance capabilities. The ability of AI to support capturing, processing, storing, and analyzing visual and digital data has increased the quantity, quality, and accuracy of information available to decision makers. They can use this information to do everything from optimizing equipment maintenance to minimizing civilian harm. Additionally, these platforms allow for data capture in environments that are inaccessible to humans.

Participants shared broad agreement that capturing the benefits of military applications of AI will require governments and the private sector to collaborate frequently and in depth. Specifically, participants advocated for the identification of practices and norms that ensure the safety of innovation in AI, especially in the testing and deployment phases. Examples include industry-level best practices in programming, industry and government use of test protocols, and government transparency and communication about new AI-based military capabilities.

Agreement also emerged over the need for better and more comprehensive training among technologists, policymakers, and military personnel. Participants expressed clearly that managing the risks of AI will require technical specialists to have a better understanding of international relations and of the policymaking context. Effective policymaking and responsible use will also require government and military officials to have some knowledge of how AI systems work, their strengths, their possibilities, and their vulnerabilities. Practical recommendations for moving in this direction included the development of common terms for use in industry, government, and multilateral discourse, and the inclusion of the private sector in weapons-review committees.

Potential Governance of Military Applications of Artificial Intelligence

The primary challenge to multilateral governance of military AI is uncertainty—about the ways AI will be applied, about whether current international law adequately captures the problems that use of AI might generate, and about the proper venues through which to advance the development of governance approaches for military applications of AI. These uncertainties are amplified by the technology’s rapid rate of change and by the absence of standard and accepted definitions. Even fundamental concepts like autonomy are open to interpretation, making legislation and communication difficult.

There was skepticism among some, though not all, participants that current international law is sufficient to govern every possible aspect of the military applications of AI. Those concerned about the sufficiency of today’s governance mechanisms noted that specific characteristics of military applications of AI may fit poorly into standing regimes—for example, international humanitarian law—or that applying standing regimes to them may produce unintended consequences. This observation led to general agreement among participants that many governance approaches—including self-regulation, transparency and confidence-building measures, and intergovernmental approaches—ultimately would be required to mitigate the risks of military applications of AI. It should be noted that workshop participants included transnational nongovernmental organizations and transnational corporations—entities that increasingly play diplomatic roles.

The workshop concluded with general agreement that the UN system offers useful platforms within which to promote productive dialogue among disparate stakeholders and through which to encourage the development of possible governance approaches. All participants expressed the belief that, beyond discussions on LAWS, broad understanding of and discourse about potential military applications of AI—their benefits, risks, and governance challenges—are nascent and, indeed, underdeveloped. Participants welcomed and encouraged more opportunities for stakeholders to educate each other, to communicate, and to innovate around the hard problems posed by military applications of AI.

The full report was originally published by the Stanley Center.
