Technological Change and the UN Framework of Analysis for Atrocity Crimes

Developments in new and emerging technologies present key reasons to revise the UN Framework of Analysis for Atrocity Crimes.

By Federica D’Alessandra and Ross James Gildea


Executive Summary

How might developments in new and emerging technologies (NETs) shape our understanding of the risk factors associated with atrocities? This paper analyzes the potential need to revise and update the UN Framework of Analysis for Atrocity Crimes (FAAC). The FAAC, published in 2014, provides “an integrated analysis and risk assessment tool for atrocity crimes,” helping to identify areas where atrocity threats may be most pressing.1 However, despite the value of the FAAC, particularly as a tool guiding UN analysis and forecasting in the context of its approach to mass atrocity prevention and response (MAPR), rapid changes in the technological landscape concerning both the commission and prevention of atrocity crimes suggest there may be good reasons to revisit the framework. Such revisions would not be unprecedented. The current FAAC replaced an earlier, more limited version published in 2009, taking account of “recent developments and new research into the processes that lead to those crimes.”2 To assess the need for a future revision, we conducted a systematic evaluation of the FAAC, selecting six risk factors as representative cases. We initially focused on three common risk factors pertaining to each atrocity crime. We also examined individual risk factors that relate to each crime category — genocide, crimes against humanity, and war crimes. In addition, we carried out an initial examination of NETs’ potential in detection efforts. Because the FAAC is intended to aid monitoring and early warning in anticipation of atrocity threats, we examine whether a future edition may benefit from incorporating information on how indicators of risk factors could be measured and assessed.

We contend that NETs and a growing body of research on their development and impact may be shifting our understanding of MAPR once again.

Our analysis indicates that while the current iteration of the FAAC continues to be useful, with many risk factors and indicators reflecting enduring issues related to atrocity crimes, the rapid development of NETs may have shifted the MAPR landscape sufficiently to warrant revision. In particular, NETs have markedly expanded the tools available to states and groups to commit atrocities, altering the dynamics and processes which may lead to such crimes. One example is NETs with surveillance applications, which can enhance the ability of perpetrators to track and monitor targets, increasing their capacity to commit atrocity crimes. Tech developments can also significantly contribute to both upstream and downstream elements of the MAPR toolkit. These findings are timely, given the 2021 report from the UN Secretary-General, Advancing Atrocity Prevention: Work of the Office on Genocide Prevention and the Responsibility to Protect, which re-emphasized the responsibilities not just of member states but also of technology and social media companies in addressing atrocities.3 Our paper identifies considerable scope for the revision of the FAAC to include a detection component, including details on how NETs and other tools can be applied in assessing indicators.

Introduction

The proliferation of NETs in the past decade appears to have far-reaching implications for MAPR. Technology is not only shaping the dynamics of interstate relations but has also elevated the role of non-state actors in global politics.4 On the one hand, technological innovation is generating new challenges, providing would-be perpetrators with new means to incite and organize identity-based mass violence. This includes the growing use of AI-powered surveillance technology, information and communication technologies (ICTs) such as social media and other digital platforms, and automated weapons systems. In Myanmar, for example, authorities have used “surveillance drones, phone cracking, and computer hacking software, to track citizens and steal data,” contributing to systematic attacks on civilians.5 The Myanmar military has also used digital platforms to incite and instigate identity-based mass violence.

On the other hand, ongoing contextual changes also broaden the tools available to policymakers, enhancing our ability to prevent and respond to atrocity threats.6 Geospatial intelligence technology such as satellite imagery has expanded the available tools for the investigation and documentation of crimes. It is being used to track the actions of perpetrators in South Sudan, the Central African Republic, the Democratic Republic of the Congo, Niger, and Myanmar, among other cases.7 Given these tech-driven changes to the MAPR landscape, it is imperative to revisit policy frameworks such as the FAAC to ensure they are fit for purpose.

The FAAC was created by the United Nations Office on Genocide Prevention and the Responsibility to Protect to inform evaluations of global atrocity risks. Published in 2014, the framework builds on a prior iteration developed in 2009 by the United Nations Office of the Special Adviser on the Prevention of Genocide, which in turn emerged from foundational work on genocide prevention by United Nations Secretary-General Kofi Annan in 2004.8 Its purpose is to serve as a “guide for assessing the risk of genocide, crimes against humanity and war crimes.”9 It is regularly (although not yet systematically)10 used by various UN entities and agencies, as well as researchers and practitioners in the field of atrocity prevention,11 as part of the early warning and analysis tools currently available to carry out essential forecasting and risk assessments, both of current atrocity threats and of how situations may deteriorate over time.12 It comprises 14 risk factors — eight common and six specific — and associated indicators. To evaluate the FAAC, we conducted a systematic review of the document, selecting six risk factors as representative cases. We focused on three common risk factors pertaining equally to each crime. To further generalize the implications of our findings, we also examined three specific risk factors which relate to each category of crime — genocide, crimes against humanity, and war crimes (see Table 1). To enhance representativeness, we chose risk factors affecting broad and varying aspects of atrocity threats cited in the FAAC. For instance, Risk Factor 4 (Motives and incentives) pertains to the decision-making process of perpetrators, while Risk Factor 5 (Capacity to commit atrocity crimes) speaks to the material capabilities that underpin atrocity crimes.

Table 1: Risk Factors Selected for Analysis

Common Risk Factors
Risk Factor 4: Motives and incentives
Risk Factor 5: Capacity to commit atrocity crimes
Risk Factor 7: Enabling circumstances or preparatory action
Specific Risk Factors
Risk Factor 10: Signs of an intent to destroy in whole or in part a protected group (Genocide)
Risk Factor 12: Signs of a plan or policy to attack any civilian population (Crimes against humanity)
Risk Factor 13: Serious threats to those protected under international humanitarian law (War crimes)

In addition, we conducted an initial probe of the potential for NETs to be leveraged for detection efforts. Because the FAAC is intended to aid monitoring and early warning in anticipation of atrocity threats, we examined whether a future edition could benefit from a systematic treatment of how indicators could be measured and assessed. To this end, in each section, we briefly comment on the potential utility of applying a tech lens to a detection component of the FAAC.

Our analysis shows the current iteration of the FAAC remains valuable, with many risk factors and indicators reflecting enduring issues related to atrocities. But the rapid development of NETs may have shifted the MAPR landscape sufficiently to warrant revision. Thanks to NETs, states and groups have access to markedly expanded tools, which can make atrocity crimes more likely. Improvements in technology have also greatly contributed to upstream and downstream elements of MAPR, from early warning systems to the mitigation of and response to atrocities. Also relevant to our study is the fact that the 2021 report of the UN Secretary-General, Advancing Atrocity Prevention: Work of the Office on Genocide Prevention and the Responsibility to Protect, has recently emphasized that, in addition to the ongoing responsibilities of UN member states, technology and social media companies also bear responsibilities to address atrocities.13 In light of these developments, and precisely to support better public-private sector cooperation around monitoring and to bolster early warnings, our paper identifies how the FAAC could be revised to include a detection component, including details on how NETs and other tools can be applied in assessing indicators. These conclusions apply across common and specific risk factors.

Common Risk Factors

Motives and incentives (Risk Factor 4)

This risk factor refers to the “reasons, aims or drivers that justify the use of violence against protected groups, populations or individuals, including by actors outside of State borders.”14 In what ways might technological developments affect the motives and incentives leading to atrocity crimes? The core motivations behind atrocity crimes likely remain unaffected by NETs: the foundational role of ideology and perceptions of grievances and interests remains central to these crimes. Nonetheless, NETs may exacerbate or create new forms of inequality that re-shape actor motivations.

To illustrate this point, consider one indicator of Risk Factor 4, pertaining to the unequal distribution and control of economic resources along identity-group lines. Incentives to retain or exploit the status quo as a dominant group, or to reverse existing grievances, may heighten the probability of mass atrocities. Economic deprivation has been directly linked to participation in armed violence.15 Previous work has connected Hutus’ participation in the 1994 Rwandan genocide, for instance, to socio-economic status.16 As technology plays an increasingly important role in society, the “digital divide”17 and differentiated use patterns could dramatically affect existing inequalities. Prior studies have suggested a nexus between technology, poverty, and widening wealth gaps.18 Unequal access to modern digital technology may equally influence the ability of groups to benefit from expanded opportunities in education, employment, healthcare, and other critical social outcomes.

Additionally, such technologies can be deployed to organize and mobilize sectors of society, thus helping to consolidate political power and potentially disenfranchise politically weaker groups. Scholars of genocide have long posited a relationship between technological and bureaucratic architectures and the mentality of intent they can produce in those who wield them, as examined in studies of Holocaust perpetrators.19 While the core motives and incentives outlined in the FAAC remain applicable today, certain associated indicators, such as those related to political and economic cleavages – for example, indicators 4.1 on political motives, 4.2 on economic interests, and 4.8 on the politicization of past grievances – may benefit from incorporating a technological dimension.

Regarding detection, NETs could prove useful in identifying the specific political and economic sources of inter-group animosity and, ultimately, mass violence. For instance, tracking differentiated patterns of access to critical technologies with empirical links to salient socio-economic and political outcomes (access to public services, jobs, political office, and so on) could allow researchers to anticipate growing inequalities and improve predictive capacities. Of course, scholars should avoid conflating deeply contextual challenges with correlations in aggregate data. For ethical reasons, in interpreting their findings, researchers should remain cognizant of the potential impact of identifying false positives or exacerbating inter-group tensions.
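A minimal sketch of such an access-gap indicator follows, assuming hypothetical per-group access rates; the group names, figures, and window are invented for illustration and are not drawn from the FAAC or any real dataset.

```python
# Illustrative sketch only: a crude "digital divide" indicator computed from
# hypothetical per-group technology-access rates. All names and figures are
# invented for demonstration purposes.
from statistics import mean

# Hypothetical share of each group with internet access, by year.
access = {
    "group_a": [0.41, 0.47, 0.55, 0.63, 0.72],
    "group_b": [0.38, 0.40, 0.41, 0.43, 0.44],
}

def access_gap(series_a, series_b):
    """Absolute gap in access rates between two groups, year by year."""
    return [abs(a - b) for a, b in zip(series_a, series_b)]

def widening(gaps, window=3):
    """Flag a widening divide: recent mean gap exceeds the earlier mean."""
    if len(gaps) <= window:
        return False
    return mean(gaps[-window:]) > mean(gaps[:-window])

gaps = access_gap(access["group_a"], access["group_b"])
print("gap by year:", [round(g, 2) for g in gaps])
print("widening divide:", widening(gaps))  # True here: 0.03 -> 0.28 over five years
```

Aggregate flags of this kind are, at best, prompts for contextual analysis, echoing the caution above about correlations in aggregate data.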

Capacity to commit atrocity crimes (Risk Factor 5)

This risk factor pertains to “the ability of relevant actors to commit atrocity crimes.”20 As the FAAC suggests, atrocities are typically challenging to commit, requiring a certain level of personnel and resources. Currently, the FAAC primarily and appropriately focuses on material capacities such as personnel, arms, training, and financing. Insightfully, it also highlights important though less tangible factors such as cultures of obedience and group conformity. While these conceptualizations are valuable in their current form, adopting insights related to technology may enhance our understanding of both.

Indicator 5.3, which relates to perpetrators’ ability to recruit and mobilize supporters, provides an instructive entry point for the utility of applying a tech lens. Previous studies have found that the diffusion of NETs, such as information and communication technology, can lead to increased organized violence. One proposed mechanism driving this connection is that technology can radically diminish collective action challenges by lowering barriers to communication and organization. NETs may thus improve perpetrators’ capacity to recruit, mobilize, and coordinate supporters.21

The relationship between NETs and the capacity to commit atrocity crimes is also apparent with surveillance technology. By surveillance, we refer to the collection and use of personal data to observe, influence, or manage those whose data has been collected.22 In China, a large-scale surveillance regime has been developed to gather information on and track the domestic population as part of the repression of Uyghurs and other minorities.23 Similar challenges have been identified in Myanmar, where access to surveillance technology has facilitated the military’s targeting of victims.24 NETs with surveillance applications can enhance perpetrators’ ability to track and monitor targets, increasing their capacity for atrocity crimes. To date, this dimension of state capacity is largely absent from the FAAC, with Risk Factor 5 including no reference to capacities to track and target victims.

NETs are also directly relevant to cultures of obedience and group conformity. The use of social media platforms, for instance, can “limit the exposure [of users] to diverse perspectives and favor the formation of groups of like-minded users framing and reinforcing a shared narrative.”25 Digital platforms may not only lead to polarized exchanges and intergroup distrust but also serve as a tool to incite hatred and violence, especially in societies with identity-based divisions, as was the case with the targeting of the Rohingya in Myanmar.26 Incorporating a tech lens into the FAAC may enhance our understanding of the dynamics producing cultures of obedience and group conformity and of how these may be conducive to committing atrocities.

In terms of detection, identifying actors’ investment trends in tech infrastructure, particularly infrastructure with surveillance applications, could be a valuable signifier of future threats. Using tech to monitor digital platforms, in particular for dehumanizing speech (a potential predictor of atrocities) or incitement by government officials and non-state actors, may also prove fruitful in pre-empting the orchestration of atrocity crimes.
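What such platform monitoring might involve can be gestured at with a toy example. The sketch below flags unusual spikes in daily counts of posts matching a dehumanizing-language lexicon; the lexicon, counts, and threshold are all invented, and operational systems rely on expert-curated, multilingual lexicons and trained classifiers rather than simple keyword matching.

```python
# Illustrative sketch only: flag spikes in dehumanizing language in a stream
# of (hypothetical) public posts. Lexicon and counts are invented.
import re
from statistics import mean, stdev

DEHUMANIZING_TERMS = {"vermin", "cockroaches", "infestation"}  # toy lexicon

def matching_posts(posts):
    """Count posts containing at least one lexicon term."""
    pattern = re.compile("|".join(DEHUMANIZING_TERMS), re.IGNORECASE)
    return sum(1 for post in posts if pattern.search(post))

def spike(daily_counts, threshold=2.0):
    """True if the latest daily count sits more than `threshold` standard
    deviations above the historical mean (a crude early-warning signal)."""
    history, latest = daily_counts[:-1], daily_counts[-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (latest - mu) / sigma > threshold

daily_counts = [3, 2, 4, 3, 2, 3, 19]  # hypothetical counts of flagged posts
print("possible incitement spike:", spike(daily_counts))  # True
```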

Enabling circumstances or preparatory action (Risk Factor 7)

Risk Factor 7 relates to “events or measures, whether gradual or sudden, which provide an environment conducive to the commission of atrocity crimes, or which suggest a trajectory towards their perpetration.”27 The FAAC provides a substantial list of 14 indicators for this risk factor, including measures related to domestic laws, the procurement of arms and other capacities for state violence, growing rights violations, and the targeting of marginalized groups.

Although the FAAC currently includes a comprehensive account of the environmental conditions that may enable atrocities, applying a tech lens may helpfully augment and add further specificity to this risk factor. Consider, for instance, Indicator 7.3, which refers to the “strengthening of the security apparatus” in states.28 The global proliferation of artificial intelligence (AI) surveillance technology illustrates the benefits and limitations of this perspective. AI tech represents an increasingly important part of states’ security infrastructure, and it would be erroneous to conflate its increasing and lawful use with atrocity threats. While a focus on the strengthening of security apparatuses is helpful, we might pay closer attention to the presence or absence of institutional safeguards before and after tech systems are in place. Experts suggest that algorithmic biases in AI-driven tech can compound and amplify societal inequalities, including identity-based marginalization.29 Without adequate checks and balances, this tech also provides would-be perpetrators with powerful and direct means of targeting protected groups and individuals. Given the rapid spread of AI-driven surveillance, this is likely to be a growing challenge that requires additional focus on the norms and institutional frameworks surrounding the development and application of NETs.

A related challenge with AI technology, with direct relevance to “preparatory action” under this risk factor, is the emergence of “deepfakes.”30 The FAAC cites indicators such as increases in the politicization of past events, the use of propaganda, and inflammatory language.31 Deepfakes add another dimension to these risks, allowing creators to generate sophisticated audio and visuals of events and individuals, such as political leaders, that appear realistic. The inauthenticity of this content can be difficult for the public to detect. Deepfakes could depict dangerous scenarios such as fabricated declarations of war by leaders or incidents of identity-based violence.32 Deepfakes, therefore, have the potential to cause profound societal unrest and rationalize mass violence, helping perpetrators to quickly mobilize support and resources for committing atrocities. This feeds into the broader challenge of technology-enabled misinformation involving states and third parties.33 Moreover, while misinformation is hardly new, deepfakes and related kinds of misinformation, when combined with digital platforms and messaging applications, can now be generated and spread with unprecedented speed. In the coming years, this NET may prove to be a crucial form of preparatory action employed by perpetrators, which policymakers must carefully examine and respond to.

Regarding detection, technology can be used to evaluate indicators associated with Risk Factor 7. For instance, tech-based tools can help track various enabling factors, from the import of arms and ammunition to the development of AI surveillance and trends in inflammatory rhetoric among political leaders and online. Researchers are also countering deepfakes by creating software tools that can detect manipulations in images and video.34 Given the rising sophistication of deepfakes, such tools will require continued development to combat misinformation. The ongoing campaign of disinformation by the Russian government to generate consent among its citizens for its actions in Ukraine is one case where disinformation may be helping to facilitate serious crimes.35 Tracking trends in political deepfakes could be an important aid in identifying future atrocity risks.
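A minimal sketch of such trend-mapping follows, assuming invented annual arms-import figures; real analyses could draw on sources such as the SIPRI Arms Transfers Database.

```python
# Illustrative sketch only: rank (hypothetical) states by growth in annual
# arms-import value, a crude proxy for procurement-related enabling factors.
imports_musd = {  # invented annual import values, millions of USD
    "state_a": [120, 130, 125, 410, 520],
    "state_b": [300, 310, 305, 300, 295],
}

def growth_ratio(series, recent=2):
    """Mean of the most recent `recent` years over the earlier baseline mean."""
    baseline, latest = series[:-recent], series[-recent:]
    return (sum(latest) / len(latest)) / (sum(baseline) / len(baseline))

ranked = sorted(imports_musd, key=lambda s: growth_ratio(imports_musd[s]), reverse=True)
for state in ranked:
    print(f"{state}: growth ratio {growth_ratio(imports_musd[state]):.2f}")
# state_a: 3.72 (sharp acceleration, worth analyst attention); state_b: 0.98
```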

Specific Risk Factors

Signs of an intent to destroy in whole or in part a protected group (Risk Factor 10)

This risk factor describes an intent “to destroy all or part of a protected group based on its national, ethnical, racial or religious identity, or the perception of this identity.”36 In contrast to the general factors above, Risk Factor 10 specifically relates to the crime of genocide. Again, while the FAAC works well to map potential indicators of relevant intent, there may be areas where it could be revised. Indicator 10.1 is an example. This indicator cites “official documents, political manifestos, media records, or any other documentation” from which genocidal intent might be revealed or inferred.37 However, this focus on official documentation and media records may ignore certain realities of political action and information diffusion in the digital age. In the recent Myanmar case, political leaders and military officials used online platforms to wage a “systematic campaign” to incite hatred and justify violence against the Rohingya minority.38 Much of this campaign was propagated through unofficial accounts and channels. In countries such as Myanmar, digital platforms are often seen as synonymous with the internet as a whole and are an important source of information.39 Focusing on traditional political sources, such as party documentation or official statements, may obscure how perpetrators use technology to orchestrate mass atrocities such as genocide.

An additional challenge pertains to the platforms where perpetrators demonstrate their intent to dehumanize and attack vulnerable identity groups. Facebook’s failures in Myanmar are perhaps the most notorious example. Investigations have revealed how the platform’s failures around content moderation for users supporting armed violence against the Rohingya significantly bolstered the military’s campaign. This facilitated the Myanmar Armed Forces’ propaganda strategy and allowed them to drum up public support and recruitment for their activities. Even worse, Facebook’s algorithm was found to have amplified military messaging by promoting pages and posts used by the military to incite violence.40 It is therefore desirable to update the FAAC to reflect the new official and unofficial mediums through which perpetrators may exhibit their intent to engage in identity-based violence.

Moreover, the framework could be leveraged to inform (and even draw from) the indicators that digital platforms use to assess and forecast crisis scenarios. This would, of course, require partnerships between the private sector, the UN, and experts in civil society. However, such an exercise could lead to the gradual development of industry-wide standards for the identification and removal of content indicating an environment conducive to genocidal or other atrocity violence.

Thankfully, these issues are increasingly recognized by the international community and the private sector. At a May 2022 meeting of the UN Security Council, the UN Under-Secretary-General for Political and Peacebuilding Affairs, Rosemary DiCarlo, said that “malicious use of digital technologies for political or military ends has nearly quadrupled since 2015,” and she urged states to take action in this area.41 Similarly, learning from the failures in Myanmar, industry actors such as Twitter and Facebook have begun to revise their terms, policies, and practices to detect and remove more quickly online content that could lead to serious identity-based offline harm.42

To illustrate the applicability of a tech lens in respect of detection, we may refer to Indicator 10.7, which concerns “expressions of public euphoria at having control over a protected group and its existence.”43 As noted, digital platforms are increasingly the avenue through which socio-political communication is conducted by political leaders, military officials, and the public. Monitoring this kind of speech poses technical challenges, and researchers have developed various tools to track and analyze such content effectively.44 NETs may therefore be a critical resource in measuring key indicators related to crimes such as genocide (among others).
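For a flavor of what such tooling can look like, the sketch below trains a toy text classifier to route posts to human review. The four training examples, labels, and threshold logic are invented; the automated monitoring tools cited above require large, carefully curated, multilingual datasets and continuous human oversight.

```python
# Illustrative sketch only: a toy supervised classifier for routing posts to
# human review, in the spirit of automated hate-speech monitoring research.
# Training data here are invented and far too small for real use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "they are vermin and must be removed",
    "drive them out of our land for good",
    "the market reopens tomorrow morning",
    "heavy rain is expected this weekend",
]
train_labels = [1, 1, 0, 0]  # 1 = flag for human review, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

score = model.predict_proba(["remove the vermin from this town"])[0, 1]
print(f"flag probability: {score:.2f}")  # high scores go to human analysts
```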

Signs of a plan or policy to attack any civilian population (Risk Factor 12)

This risk factor concerns “facts or evidence suggestive of a State or organizational policy, even if not explicitly stipulated or formally adopted, to commit serious acts of violence directed against any civilian population.”45 As with previous risk factors, the FAAC provides an extensive list of indicators, several of which remain applicable without revision to take account of tech developments. For instance, Indicator 12.9, which refers to “widespread or systematic violence against civilian populations or protected groups,” and Indicator 12.10, which alludes to the “involvement of State institutions or high-level political or military authorities in violent acts,” continue to be pertinent independent of NETs and do not require adjustment.46

That said, some areas may benefit from the application of a tech lens. For example, Indicator 12.2, which relates to the “adoption of discriminatory security procedures against different groups of the civilian population,” could be updated given recent developments in surveillance technology.47 With technologies such as facial recognition — now used widely in airport screening, law enforcement, and areas of social policy such as health and housing decisions — a significant challenge is inherent bias against minority groups.48 As such, tech which draws on AI systems and machine learning algorithms can significantly affect governance and societies, potentially widening pre-existing cleavages. Worryingly, such technology commonly fails along dimensions such as gender and race,49 with direct relevance to identity-based violence. This example alone highlights the potential for two tech-based updates to Indicator 12.2:

  1. Processes of discrimination may arise in areas beyond security policy, such as health and housing, which may directly feed into wider societal inequalities and lead to the securitization of vulnerable minority populations.
  2. The adoption of discriminatory procedures may be unintentional and hard to detect, yet nonetheless consequential in fostering an environment that may enable atrocity crimes.

Moreover, this kind of updating may apply to other indicators. For instance, Indicator 12.5, which concerns the “preparation and use of significant public or private resources, whether military or other kinds,” is currently vague in relation to crimes against humanity. Added specificity on this indicator could incorporate the detection of NETs in the commission of atrocities. For instance, reference to growing investment in, and operation of, AI-powered surveillance infrastructure, or to access to phone cracking or computer hacking capabilities through spyware such as Pegasus or similar tech with demonstrable harmful applications, would be a useful addition.50 Related to this, Indicator 12.8 concerns the facilitation or incitement of violence against civilians. Again, looking to norms and regulations around NETs, particularly information and communication technologies, could add important detail to this risk factor.

When it comes to detection and early warning signs, the issue of bias in AI-driven systems is instructive. Researchers are currently developing frameworks to identify the causes of biases in the development and implementation phases of tech such as facial recognition, along with best practices to limit negative consequences.51 Incorporating these insights could help establish novel indicators and inform revisions of the FAAC regarding detection, thereby aiding our ability to pre-empt tech-driven moves toward crimes against humanity and other crimes.
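As a hedged example of what such a bias check might involve, the sketch below compares the false positive rates of a hypothetical face-recognition system across two demographic groups; the outcome data are invented, and real audits follow far more rigorous protocols than this single metric.

```python
# Illustrative sketch only: measure demographic disparity in a (hypothetical)
# face-recognition system's false positive rate. Outcome data are invented.
def false_positive_rate(decisions):
    """Share of truly non-matching pairs that the system wrongly accepted.
    `decisions` is a list of (predicted_match, true_match) booleans."""
    negatives = [pred for pred, truth in decisions if not truth]
    return sum(negatives) / len(negatives) if negatives else 0.0

outcomes = {  # hypothetical verification outcomes by demographic group
    "group_a": [(False, False)] * 95 + [(True, False)] * 5,   # FPR 0.05
    "group_b": [(False, False)] * 80 + [(True, False)] * 20,  # FPR 0.20
}

rates = {group: false_positive_rate(d) for group, d in outcomes.items()}
disparity = max(rates.values()) / min(rates.values())
print(rates, f"disparity ratio: {disparity:.1f}x")  # large ratios warrant audit
```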

Serious threats to those protected under international humanitarian law (Risk Factor 13)

Risk Factor 13 pertains to “conflict-related conduct that seriously threatens the life and physical integrity of those protected under international humanitarian law.”52 Under this risk factor, which relates solely to conduct during armed conflict, the FAAC includes a broad list of 18 indicators. Many speak to fundamental aspects of war crimes, such as evidence of orders for violent attacks that would leave no survivors or otherwise contravene rights under international humanitarian law.53 These indicators are of enduring relevance, independent of the evolution of NETs.

How might a tech lens be applied to this risk factor? A revised FAAC could address the spread of lethal autonomous weapons systems (LAWS), also known as “killer robots,” in conflict.54 Although LAWS are not unlawful per se under humanitarian law, analysts have expressed doubt that this form of weaponry could ever “replicate human judgment and comply with the legal requirement to distinguish civilian from military targets.”55 LAWS also pose serious challenges for criminal accountability, given that they can “select and attack targets without direct human input.”56 Under Indicator 13.10, the FAAC cites the use of “weapons, projectiles, materials or substances which are by their nature indiscriminate or cause superfluous injury or unnecessary suffering to people” as a sign of prospective war crimes.57 The use of LAWS in war may add another layer to this risk factor, creating ambiguity about when criminal accountability applies and to whom. Experts, therefore, speak of an “accountability gap,” pointing to “hurdles to holding anyone responsible for the actions of this type of weapon.”58 For these reasons, there has been a growing campaign to ban such systems.59 These challenges are far from resolved, and new ones might still emerge as weapons technology evolves. There thus appears to be scope to explore, within the context of the FAAC, the distinct issues presented by LAWS and other NETs, and to better capture current challenges in anticipating and addressing war crimes.

Beyond this, and perhaps more promisingly, technology could also provide a valuable detection function across several indicators in Risk Factor 13. For instance, software for analyzing information and communication technologies can be used to examine patterns of radicalization and extremism among opposing parties in a conflict (Indicator 13.3), the promotion of particular ethnicities and religions as signs of allegiance (Indicator 13.4), or the dehumanization of groups within the population (Indicator 13.5). Evolving technology in the atrocity space, such as video and satellite imagery, could prove valuable, as it has in cases like Myanmar,60 in tracking the movement of troops and logistical support (Indicator 13.9) and the mobilization of weapons (Indicator 13.10). In conclusion, these examples suggest that emphasizing the detection of early warning signs and indicators, as well as applying a tech lens to aid detection efforts, could complement and enhance the existing FAAC.
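Before turning to conclusions, a minimal sketch of the satellite-imagery idea, assuming two synthetic, co-registered image bands; operational imagery analysis adds georeferencing, cloud masking, and expert review.

```python
# Illustrative sketch only: crude change detection between two co-registered
# satellite image bands (e.g., before/after suspected village burnings).
# The arrays are synthetic and the threshold arbitrary.
import numpy as np

rng = np.random.default_rng(seed=0)
before = rng.uniform(0.2, 0.4, size=(100, 100))  # synthetic "before" band
after = before.copy()
after[40:60, 40:60] += 0.5                       # simulate a localized burn scar

diff = np.abs(after - before)
changed = diff > 0.3                             # illustration-only threshold
print(f"changed pixels: {changed.mean():.1%}")   # 4.0%, concentrated in one block
```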

Conclusion

In this paper, we analyzed the potential need to revise and update the UN Framework of Analysis for Atrocity Crimes in light of rapid developments in NETs since the document’s publication in 2014. Our findings can be categorized under two rubrics. First, the risk factors and associated indicators in the FAAC make clear that the framework remains a pertinent tool for assessing risks of atrocity crimes such as genocide, crimes against humanity, and war crimes. Indeed, we find that the FAAC continues to do an admirable job identifying core characteristics of atrocity threats. Risk Factor 4, pertaining to “motives and incentives,” astutely describes the role of ideology and perceptions of grievances and interests, which seem foundational to atrocity threats, independent of the development of NETs. Likewise, indicators highlighted in Risk Factors 12 and 13, regarding evidence of orders or direct attacks on civilian populations, are also of enduring relevance.

While recognizing the continued value of the FAAC, we also identified several areas that may be revised and enhanced through the application of a tech lens. For example, under Risk Factor 10, the FAAC cites “official documents, political manifestos, media records, or any other documentation” as key sources from which to identify intent to commit genocide. However, in a digital age, where perpetrators increasingly use online and informal channels to orchestrate mass violence, as we have seen in Myanmar and elsewhere, a focus on traditional documentation of this kind appears outmoded. More generally, our paper suggests that the advent of a “digital divide,” AI-driven surveillance such as facial recognition technology, deepfakes, and lethal autonomous weapons systems (LAWS), among other tech-related phenomena, may be profoundly changing MAPR. NETs may not only widen pre-existing social cleavages between groups but — as with LAWS — create new layers of complexity in assessing risks of atrocities and holding individuals accountable for crimes. It seems apparent from a growing body of research on their nature and impact that NETs are altering the dynamics of MAPR, a process likely to continue in the coming years. As we approach the FAAC’s 10-year mark, we conclude that it may be time to incorporate insights on NETs into a revised version.

Second, we also conducted a preliminary analysis of the potential to augment the FAAC to include a detection component. We contend that, given that the FAAC is intended to aid monitoring and early warning to anticipate atrocity threats, the lack of information on detection — such as methods of evaluating indicators — should be addressed in a revised edition. This is the case both in general and with specific reference to NETs. For instance, our paper identifies the use of deepfake technology as a preparatory action that perpetrators may increasingly use to mobilize support and orchestrate violence in the coming years. In an updated FAAC, it seems logical to include information on deepfakes as an indicator and to provide information on tools by which audio-visual manipulations may be identified. In a similar vein, given their ability to re-shape the dynamics leading to atrocity crimes, guidance on assessing the use of AI-driven surveillance, LAWS, and checks and balances in information and communication technologies would be helpful. These steps would no doubt enhance the utility of the FAAC for practitioners involved in MAPR and should be considered in a future iteration of the document.

Recommendations

  1. That common Risk Factors 4, 5, and 7 of the FAAC be revised to better reflect the impact and potential effects of technology tools and their distribution on perpetrators’ motives, incentives, capacity, preparatory actions, and the enabling circumstances that might lead to the commission of atrocity crimes;
  2. That specific Risk Factors 10, 12, and 13 of the FAAC be revised to better reflect the impact and potential effects of existing and emerging technology tools, including systems of surveillance and specific weaponry, on the risk of commission of genocide, crimes against humanity, or war crimes;
  3. That a broader study be carried out, building on this scoping exercise, to systematically review how every risk factor and indicator under the FAAC might be affected by the application of a technology lens, and whether revisions might be necessary to any additional risk factors or related indicators, common or specific to any sub-set of atrocity crimes;
  4. That the overall FAAC be expanded to incorporate a technology-driven detection component, to improve existing early warning systems and forecasting;
  5. That public-private sector partnerships be fostered to develop industry-wide standards for the prompt identification and removal of online content that could lead to offline harm in the form of identity-based mass violence;
  6. That internationally accepted norms be developed, informed by public-private sector partnerships, on the ethical development, procurement, sale, and use of technological tools such as AI-powered surveillance infrastructures or weaponry systems such as LAWS that, if misused, could increase the risk of atrocity crimes.

About the authors

Federica D’Alessandra is the Deputy Director of the Oxford Institute for Ethics, Law, and Armed Conflict (ELAC) and the founding Executive Director of the Oxford Programme on International Peace and Security (IPS) of the Blavatnik School of Government, University of Oxford.

Ross James Gildea is a Postdoctoral Research Fellow at the Oxford Programme on International Peace and Security (IPS) and Oxford Institute for Ethics, Law, and Armed Conflict (ELAC), Blavatnik School of Government, University of Oxford.

Acknowledgements

The authors are grateful to all colleagues who have provided food for thought for this publication over the years, including Shannon Raj Singh, Talita DeSouza Dias, Kirsty Sutherland, Cecilia Jacob, Jennifer Welsh, Brianna Rosen, Rhiannon Neilsen, and Gwendolyn Whidden, among many others. We are also grateful to the Stimson Center, particularly Jim Finkel, Lisa Sharland, Ilhan Dahir, and Shakiba Mashayekhi, for supporting this publication; and to members of the Institute for Ethics, Law and Armed Conflict and the Blavatnik School of Government for their support and invaluable input. All errors remain the authors’ own.

Notes

  • 1
      Note: “Framework of Analysis for Atrocity Crimes: A Tool for Prevention” (United Nations, 2014), 5, https://www.un.org/en/genocideprevention/documents/about-us/Doc.3_Framework%20of%20Analysis%20for%20Atrocity%20Crimes_EN.pdf.  
  • 2
      Note: Ibid.  
  • 3
      Note: “United Nations Office on Genocide Prevention and the Responsibility to Protect,” United Nations, n.d., https://www.un.org/en/genocideprevention/key-documents.shtml.  
  • 4
      Note: Joseph S Nye, “The Information Revolution and Soft Power,” Current History 113, no. 759 (2014): 19–22, https://nrs.harvard.edu/urn-3:HUL.InstRepos:11738398; Daniel W Drezner, “Technological Change and International Relations,” International Relations 33, no. 2 (June 1, 2019): 286–303, https://doi.org/10.1177/0047117819834629.  
  • 5
      Note: Ross James Gildea and Federica D’Alessandra, “We Need International Agreement on How to Handle These Dangerous Technologies,” Slate, March 7, 2022, https://slate.com/technology/2022/03/dual-use-surveillance-technology-export-controls.html.  
  • 6
      Note: Federica D’Alessandra and Kirsty Sutherland, “The Promise and Challenges of New Actors and New Technologies in International Justice,” Journal of International Criminal Justice 19, no. 1 (September 13, 2021): 9–34, https://doi.org/10.1093/jicj/mqab034.  
  • 7
      Note: D’Alessandra and Sutherland, “The Promise and Challenges of New Actors and New Technologies in International Justice.”  
  • 8
      Note: Adama Dieng et al., “Assessing the Risk of Atrocity Crimes,” Genocide Studies and Prevention 9, no. 3 (February 2016): 4–12, https://doi.org/10.5038/1911-9933.9.3.1392.  
  • 9
      Note: “FAAC”, iii.  
  • 10
      Note: For example, the recent reports of the UN Human Rights Council appointed Commission of Inquiry on Burundi and Fact-Finding Mission on Myanmar have both used the FAAC to identify risk factors and potential triggers for war crimes, crimes against humanity, ethnic cleansing, or genocide. “Report of the Commission of Inquiry on Burundi” (United Nations Human Rights Council, August 12, 2021), https://documents-dds-ny.un.org/doc/UNDOC/GEN/G21/223/37/PDF/G2122337.pdf; “Report of the Independent International Fact-Finding Mission on Myanmar” (United Nations Human Rights Council, September 12, 2018), https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/274/54/PDF/G1827454.pdf.  
  • 11
      Note: Ryan X D’Souza, “A Reoriented Approach to Atrocity Prevention in UN Peace Operations,” White Paper (Oxford Programme on International Peace and Security, June 2020), https://elac.web.ox.ac.uk/sites/default/files/elac/documents/media/a_reoriented_approach_to_atrocity_prevention_in_un_peace_operations_dsouza.pdf?time=1591798580367.  
  • 12
      Note: Dieng et al., “Assessing the Risk of Atrocity Crimes.”  
  • 13
      Note: “United Nations Office on Genocide Prevention and the Responsibility to Protect.”  
  • 14
      Note: “FAAC”, 13.  
  • 15
      Note: Patricia Justino, “Poverty and Violent Conflict: A Micro-Level Perspective on the Causes and Duration of Warfare,” Journal of Peace Research 46, no. 3 (May 1, 2009): 315–33, https://doi.org/10.1177/0022343309102655.  
  • 16
      Note: Willa Friedman, “Local Economic Conditions and Participation in the Rwandan Genocide” (APSA 2011 Annual Meeting, Rochester, NY: Social Science Research Network, 2011), https://papers.ssrn.com/abstract=1900726.  
  • 17
      Note: The unequal access to critical digital technologies like the internet.  
  • 18
      Note: M. Usman Mirza et al., “Technology Driven Inequality Leads to Poverty and Resource Depletion,” Ecological Economics 160 (June 2019): 215–26, https://doi.org/10.1016/j.ecolecon.2019.02.015.  
  • 19
      Note: Akio Kimura, “Genocide and the Modern Mind: Intention and Structure,” Journal of Genocide Research 5, no. 3 (September 2003): 405–20, https://doi.org/10.1080/1462352032000154633.  
  • 20
      Note: “FAAC”, 14.  
  • 21
      Note: Jan H. Pierskalla and Florian M. Hollenbach, “Technology and Collective Action: The Effect of Cell Phone Coverage on Political Violence in Africa,” American Political Science Review 107, no. 2 (May 2013): 207–24, https://doi.org/10.1017/S0003055413000075.    
  • 22
      Note: David Lyon, Surveillance Society: Monitoring Everyday Life, Issues in Society (Buckingham [England] ; Philadelphia: Open University Press, 2002).  
  • 23
      Note: Omar Shakir and Maya Wang, “Mass Surveillance Fuels Oppression of Uyghurs and Palestinians,” Human Rights Watch (blog), November 24, 2021, https://www.hrw.org/news/2021/11/24/mass-surveillance-fuels-oppression-uyghurs-and-palestinians.   
  • 24
      Note: Thin Lei Win et al., “EU Spy Tech Serves Myanmar Junta,” Lighthouse Reports (blog), June 14, 2021, https://www.lighthousereports.nl/investigation/eu-spy-tech-serves-myanmar-junta/.  
  • 25
      Note: Matteo Cinelli et al., “The Echo Chamber Effect on Social Media,” Proceedings of the National Academy of Sciences 118, no. 9 (March 2, 2021): e2023301118, https://doi.org/10.1073/pnas.2023301118.  
  • 26
      Note: Alexandra Stevenson, “Facebook Admits It Was Used to Incite Violence in Myanmar,” The New York Times, November 6, 2018, sec. Technology, https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html.  
  • 27
      Note: “FAAC”, 17.  
  • 28
      Note: Ibid.
  • 29
      Note: Trishan Panch, Heather Mattie, and Rifat Atun, “Artificial Intelligence and Algorithmic Bias: Implications for Health Systems,” Journal of Global Health 9, no. 2 (December 2019): 010318, https://doi.org/10.7189/jogh.09.020318.  
  • 30
      Note: This refers to “content, generated by an artificial intelligence, that is authentic in the eyes of a human being.” Yisroel Mirsky and Wenke Lee, “The Creation and Detection of Deepfakes: A Survey,” ACM Computing Surveys 54, no. 1 (January 31, 2022): 1–41, https://doi.org/10.1145/3425780; “Deepfakes in 2021,” Witness Media Lab (blog), April 2, 2020, https://lab.witness.org/backgrounder-deepfakes-in-2021/; “Weaponised Deep Fakes,” Australian Strategic Policy Institute, April 29, 2020, https://www.aspi.org.au/report/weaponised-deep-fakes.  
  • 31
      Note: “FAAC”, 17.  
  • 32
      Note: Bobby Chesney and Danielle Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2019): 1753, https://heinonline.org/HOL/Page?handle=hein.journals/calr107&id=1789&div=&collection=.  
  • 33
      Note: Sarah Kreps, “The Role of Technology in Online Misinformation” (Washington: Brookings Institution, June 2020), https://www.brookings.edu/wp-content/uploads/2020/06/The-role-of-technology-in-online-misinformation.pdf.  
  • 34
      Note: Leo Kelion, “Deepfake Detection Tool Unveiled by Microsoft,” BBC News, September 1, 2020, sec. Technology, https://www.bbc.com/news/technology-53984114.  
  • 35
      Note: Stephanie Diepeveen, Olena Borodyna, and Theo Tindall, “A War on Many Fronts: Disinformation around the Russia-Ukraine War,” ODI: Think change, March 11, 2022, https://odi.org/en/insights/a-war-on-many-fronts-disinformation-around-the-russia-ukraine-war/. In another example, hate speech and disinformation circulating online were found to have played a role in the deadly communal unrest (and anti-Muslim violence) that shook Sri Lanka in 2018: Debbie Stothard et al., “Preventing Hate Speech, Incitement, and Discrimination: Lessons on Promoting Tolerance and Respect for Diversity in the Asia Pacific” (Global Action Against Mass Atrocity Crimes, n.d.), https://www.gaamac.org/media-uploads/Regional%20Initiatives%20publications/_APSG_REPORT_FINAL.pdf.  
  • 36
      Note: “FAAC”, 19.  
  • 37
      Note: Ibid.  
  • 38
      Note: Paul Mozur, “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” The New York Times, October 15, 2018, sec. Technology, https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.
  • 39
      Note: Ibid.    
  • 40
      Note: Billy Perrigo, “Facebook Tried to Ban Myanmar’s Military. But Its Own Algorithm Kept Promoting Pages Supporting Them, Report Says,” Time, June 24, 2021, https://time.com/6075539/facebook-myanmar-military/.  
  • 41
      Note: United Nations Security Council, 9039th Meeting (UN Web TV, 2022), https://media.un.org/en/asset/k1r/k1ryrk0n4o. Equally, pursuant to UN General Assembly resolution 75/240, two substantive sessions have already convened for the Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies.  
  • 42
      Note: Kate Conger, “Twitter Expands Content-Moderation Rules to Cover Crises like War and Disasters.,” The New York Times, May 19, 2022, sec. Business, https://www.nytimes.com/2022/05/19/business/twitter-content-moderation.html; Neil Murphy and Paul Carey, “Facebook Announces Changes to Content Moderation,” The National, February 25, 2021, sec. World, https://www.thenationalnews.com/world/europe/facebook-announces-changes-to-content-moderation-1.1173392; Simon Allison, Samuel Gebre, and Claire Wilmot, “Facebook Fails to Curb the Spread of Hate Speech in Ethiopia,” The Mail & Guardian, November 22, 2021, https://mg.co.za/africa/2021-11-22-facebook-fails-to-curb-the-spread-of-hate-speech-in-ethiopia/.  
  • 43
      Note: “FAAC”, 19.  
  • 44
      Note: Salla-Maaria Laaksonen et al., “The Datafication of Hate: Expectations and Challenges in Automated Hate Speech Monitoring,” Frontiers in Big Data 3 (2020), https://www.frontiersin.org/article/10.3389/fdata.2020.00003.  
  • 45
      Note: “FAAC”, 21.  
  • 46
      Note: Ibid.    
  • 47
      Note: Ibid.  
  • 48
      Note: Alex Najibi, “Racial Discrimination in Face Recognition Technology,” Science in the News (blog), October 24, 2020, https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/.  
  • 49
      Note: Davide Castelvecchi, “Is Facial Recognition Too Biased to Be Let Loose?,” Nature, November 18, 2020, https://www.nature.com/articles/d41586-020-03186-4.  
  • 50
      Note: David Pegg and Sam Cutler, “What Is Pegasus Spyware and How Does It Hack Phones?,” The Guardian, July 18, 2021, sec. News, https://www.theguardian.com/news/2021/jul/18/what-is-pegasus-spyware-and-how-does-it-hack-phones.  
  • 51
      Note: Nicol Turner Lee, Paul Resnick, and Genie Barton, “Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms” (Brookings Institution, May 22, 2019), https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.  
  • 52
      Note: “FAAC”, 22.  
  • 53
      Note: Ibid.  
  • 54
      Note: Kelsey Piper, “Death by Algorithm: The Age of Killer Robots Is Closer than You Think,” Vox, June 21, 2019, https://www.vox.com/2019/6/21/18691459/killer-robots-lethal-autonomous-weapons-ai-war.  
  • 55
      Note: “Mind the Gap: The Lack of Accountability for Killer Robots” (Human Rights Watch, April 9, 2015), https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots.  
  • 56
      Note: James Igoe Walsh, “Political Accountability and Autonomous Weapons,” Research & Politics 2, no. 4 (October 1, 2015): 2053168015606749, https://doi.org/10.1177/2053168015606749.  
  • 57
      Note: “FAAC”, 22.  
  • 58
      Note: “Mind the Gap.”  
  • 59
      Note: Robert F. Trager and Laura M. Luca, “Lethal Autonomous Weapons Systems Are Here—and We Need to Regulate Them,” Foreign Policy, May 11, 2022, https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/; Adam Satariano, Nick Cumming-Bruce, and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing.,” The New York Times, December 17, 2021, sec. World, https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html.  
  • 60
      Note: “Myanmar: Video and Satellite Evidence Shows New Fires Still Torching Rohingya Villages,” Amnesty International, September 22, 2017, https://www.amnesty.org/en/latest/news/2017/09/myanmar-video-and-satellite-evidence-shows-new-fires-still-torching-rohingya-villages/.  
