Social Media Mis-/Disinformation and the Risk of Atrocities in the United States

Exploring opportunities to enhance atrocity prevention and early warning through improved tracking and understanding of online false narratives

By Roudabeh Kishi

The spread of social media mis-/disinformation (SMM) can increase the risk of violence against marginalized populations, or the risk of mass atrocities more broadly. If data collection efforts existed to more systematically capture online trends around SMM, it would be possible to better understand how, when, and in what ways online false narratives lead to offline violent activity. This could in turn better inform early warning and early action — especially during contentious periods. This issue brief explores what such systematic tracking and data collection could look like in the U.S. context. Drawing on expert interviews, it offers recommendations to assist researchers and other audiences in better tracking and understanding online SMM so as to be better able to identify early warning signs of offline violence/mobilization.

Introduction

Social media disinformation is a tool used to perpetuate narrative strategies developed by political actors to shape public perception and to mobilize potential supporters. Its spread, even without such strategic intent,1See definitions in the inset box. With the rise of artificial intelligence (AI) have come new forms of fabricated content creation that capitalize on AI and machine learning to create digital forgeries. Deepfakes, for example, are “computer-generated, photorealistic media created via cutting-edge artificial intelligence technology,” which can be used by malicious actors to fool viewers into believing things that never happened — aiding in their propaganda attempts. A known politician in power making a claim about COVID-19 being fake and created as a nefarious tool, and a not-real person alleging they are a teacher who uses critical race theory for indoctrination, both represent powerful hypothetical disinformation that may fool the untrained eye, making them likely candidates to be shared widely as misinformation. See M. Mitchell Waldrop, “Synthetic Media: The Real Trouble with Deepfakes,” Knowable Magazine, March 16, 2020. often targets a marginalized population, further “othering” them and/or portraying them as the source of hardship or as a threat to livelihood. While social media disinformation may be useful in its ability to mobilize supporters, it can also increase the risk of violence against marginalized populations, or the risk of mass atrocities more broadly. As a result of the latter, a burgeoning field of research focuses on the relationship between social media mis-/disinformation (SMM) online (i.e., disinformation, including its spread without strategic/malicious intent) and its direct and indirect impacts on violence and mobilization offline.2See, for example, Kristina Hook and Ernesto Verdeja, “Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities,” Stimson Center, 2022, https://www.stimson.org/2022/social-media-misinformation-and-the-prevention-of-political-instability-and-mass-atrocities/. (See the inset box for more details around definitions.) If data collection efforts existed to more systematically capture online trends around SMM, it would be possible to better understand how, when, and in what ways such online false narratives (and SMM more broadly) lead to offline violent activity. This could in turn better inform early warning and early action (EW/EA) — especially during contentious periods.

Definitions

Social media disinformation refers to misleading information created and/or disseminated via social media with a deliberate intent to cause harm, used to shape public perception and in mobilization. Social media misinformation refers to the dissemination of disinformation without the deliberate intent to cause harm. With the latter, individuals may contribute to spreading fabricated narratives originally manufactured toward strategic ends by political actors, without being cognizant of those strategic ends. The two — social media mis-/disinformation, or SMM for short — are by definition related and often appear in conjunction. While social media misinformation might not have strategic/malicious intent, its result — the spread of harmful narratives that serve to (further) marginalize certain populations — still contributes to the heightened risk of violence toward such populations. As a result, the two are explored jointly throughout this brief.

This issue brief seeks to explore what such systematic tracking and data collection could look like. In this way, the findings explored here are intended to be methodological in scope. This brief does not seek to address the more substantive debates around mis-/disinformation and questions of free speech; shortcomings in content moderation by social media companies; the need for legislative regulation to incentivize such moderation; the ethics around new developments in artificial intelligence (AI), such as the rise in deepfakes and other forgeries; and more. While these are important topics that intersect in various ways with the question of online SMM and offline harms, they are beyond the scope of this short study.

To bound the scope of this study, a single case study is explored: online SMM in the U.S. context. This allows for deeper understanding of the relationship between the very robust online SMM environment in the U.S. and the effects it has on offline violence and atrocities involving violent actors. Being able to better understand this relationship to inform EW/EA efforts monitoring the risk of atrocities in the U.S., especially in the lead-up to the presidential election in 2024, is crucial. The findings of this study — which offer potential fruitful research strategies around data collection — are intended primarily for a research audience. However, secondary audiences include donors, who could support such work, recognizing the importance of the subject matter; U.S. policymakers, who would benefit from the findings of such research in informing legislation; and social media companies, who would better understand the relationship between social media platforms online and violence/mobilization offline.

Research Methods

False narratives rely on media to take shape and build momentum, from online forums and social media to major outlets like Fox News in the U.S. The self-reinforcing cycle of mis-/disinformation underpinning these narratives can further perpetuate and entrench them. Political and social elites may use these narratives (and SMM campaigns more broadly) as starting points to build out propaganda and to inform legislation and policies.3E.g., the dissemination of SMM about coronavirus mandates in efforts to challenge them. Political entrepreneurs may develop and/or propagate such false narratives via SMM to drive political organizing and mobilization more broadly.4E.g., the development and dissemination of the conspiracy theory that the 2020 presidential election in the U.S. was “stolen.” And groups and/or movements may latch onto such narratives, as they can help them to meet goals around recruitment, infiltration, networking, alliance building, and more.5E.g., the Proud Boys (a violent, far-right group engaged in political violence in North America, and the U.S. in particular) rarely engaged in mobilization, such as engagement at demonstrations, around schools — until the summer of 2021, when they began latching onto SMM around critical race theory (CRT), which branded teaching about racism in America as a radical threat to white children. The lack of major recent mobilization around schools before SMM around CRT took shape suggests that the group became engaged in such activity not based on ideology alone, but as a strategic means to an end — recognizing the opportunities co-opting such a popular narrative could provide. For example, “on 6 July 2021, the Proud Boys protested at a Granite School Board meeting in Salt Lake City, Utah, attacking CRT, despite the fact that CRT is already banned from school curricula in the state,” suggesting that their actions were driven by strategy/opportunity rather than ideology. See Roudabeh Kishi, “Far-Right Violence and the American Midterm Elections: Early Warning Signs to Monitor Ahead of the Vote,” ACLED, 2022, https://acleddata.com/2022/05/03/far-right-violence-and-the-midterm-elections-early-warning-signs-to-monitor-ahead-of-the-vote.

Despite the key role social media platforms play in the dissemination of mis-/disinformation and the early formation of harmful narrative strategies, they have fallen short of effective content moderation and have failed to quell the spread of false claims and conspiracy theories — many of which ultimately make it all the way to cable news and the mainstream. Given the effectiveness of SMM, and the lack of effective barriers thwarting its spread, it is likely to not only continue, but to continue to be used toward such strategic ends.

As noted above, SMM has the ability to mobilize supporters, and to increase the risk of violence against marginalized populations (i.e., those perceived as responsible for the harms that false narratives may put forth). However, despite the opportunities to inform early warning and early action (EW/EA) work by better understanding the relationship between online SMM and offline violence, there has been limited research conducted on the topic. The research that has been conducted tends to be qualitative or anecdotal in nature — not allowing for long-term longitudinal analysis or baseline comparisons. While long-term, longitudinal efforts exist to track offline violence (e.g., the Armed Conflict Location & Event Data Project, or ACLED),6Even such long-term, longitudinal efforts to track offline violence do not capture all forms of violence; they are limited to violence of a political nature (excluding criminal violence) that occurs in public fora (excluding domestic or interpersonal violence). there are no complementary efforts around long-term, longitudinal data collection capturing the characteristics of online trends that may lead to such offline outcomes.

To assess what such systematic tracking could look like — as well as what information and/or structural gaps,7Structural gaps refer to cases where the information exists yet is not being captured in a way that makes it usable in analysis (i.e., the structures do not exist, as opposed to the information not existing). or other barriers, exist — discussions were held with 14 experts via informal/off-the-record interviews.8This represents a 70% response rate to interview requests (i.e., interview requests were sent out to 20 experts). Such a response rate is considered to be very strong; a good response rate to surveys, for example, is considered to be between 5% and 30%. The experts are individuals engaged in research in this space, though they come to such research with a variety of different lenses. Experts were assured that discussions would remain anonymous, in hopes of cultivating more frank discussions about the status of the field, existing gaps and barriers, and how they might be best surmounted.

Experts include those engaged (or who have previously engaged) in monitoring of social media on behalf of various types of institutions, including NGOs (both those operating at the national level and those with a state-specific mandate), university research centers, academia, and government entities, to allow for insights resulting from different mandates. They include those who train others in such monitoring work — both in academic and practitioner settings — as well as those who engage in this work via the private sector tech field, through the lens of journalism, or as an activist. This allows for a broad spectrum of viewpoints, coming at research questions from different positions and angles. Substantively, expertise includes various analytical foci, ranging from a focus on narratives, to a focus on actors (e.g., militias), to allow for discussion of the utility of different units of analysis for such work. Some experts turned to this work in more recent years and are more familiar with the current spectrum of platforms that dot the landscape today. Others have been engaged in the field for some time, having witnessed the significant evolution of the social media landscape over the years, and how actors have adapted alongside such changes. And while the emphasis was to speak with those engaged in such work from a U.S. perspective (given the case study considered in this issue brief), some experts engage in research with a focus on other Western countries as well, allowing for a comparative lens.

The unique perspectives of the experts involved in the interviews allowed for a number of rich discussions around various pieces of the puzzle, which together help to illuminate the best steps forward in systematically tracking — and thereby better understanding — the relationship between online SMM and offline violence and mobilization.

Discussion, Based on Interviews

Tracking Online SMM Campaigns

To systematically track online SMM — if one is to better understand its relationship to offline violence/mobilization — it is important to first consider what to track. Experts were asked what types of indicators they (sub)consciously look for in their own SMM monitoring: in other words, what types of indicators they feel are most salient for someone to track or consider if trying to deduce information about offline activity from online monitoring. Responses fell into two broad camps: (1) what to consider when monitoring more indirect or longer-term risks of violence via narrative tracking (i.e., how false narratives that emerge via SMM are used for mobilization offline), and (2) what to consider when trying to deduce more direct and immediate credible threats of violence or more credible plans of mobilization. Experts agree that the two go hand in hand, with offline mobilization or targeted violence being quite responsive to false narratives and SMM online.

Narrative Tracking

False narratives, shared online via SMM, are used to shape the conversation, which can in turn inspire violence and/or mobilization offline. They are often used in efforts to build “in-groups.” Stemming from research in social psychology, in-group favoritism refers to the tendency to favor members of one’s in-group — those with whom one feels they share commonalities — over others (out-group members). In-group favoritism becomes increasingly worrying when coupled with out-group negativity — i.e., the development of negative stereotypes of out-group members, resulting in animosity toward them. In short, this is the establishment and entrenchment of an “us vs. them” mentality. Framing gains by an out-group as losses for the in-group (i.e., a zero-sum game), or painting a direct threat to the in-group by out-group members, helps to establish an existential threat. Such threats are effective in recruitment efforts, both into a larger movement more generally and into specific named groups. With the U.S. a “nation of joiners”9Arthur M. Schlesinger, “Biography of a Nation of Joiners,” American Historical Review 50, no. 1 (1944): 1-25. in spirit, individuals may already be seeking such belonging in groups or movements larger than themselves, especially in an age where many are “bowling alone”10Robert D. Putnam, “Bowling Alone: America’s Declining Social Capital,” Journal of Democracy 6, no. 1 (1995): 65-78. — individuals without robust communities in which to take part. Experts noted that narratives specifically targeting marginalized groups tend to present an easy way to identify an out-group, which in turn aids in building in-groups. Similarly, experts noted that they often look for narratives that rely on particularly emotive content, which can be used to establish existential threats and the necessary urgency of action.

While such false narratives are important to monitor given the risk they pose for offline violence/mobilization, it is difficult to track them. Such narratives evolve in real time, meaning that their substantive tracking must evolve quickly as well — a difficult feat for an iterative process. In addition to these substantive difficulties, there are logistical obstacles: the reach of SMM is boundless, with new content (posts, videos, etc.) generated constantly. Without the use of automation, it would be impossible to even scratch the surface of all content. However, SMM includes the use of symbols, humor (e.g., through memes), dog whistles, coded language, emojis, text as images, and more — all of which present difficulties in automated moderation. In other words, there are entire “languages” being used by in-groups that may not be understood by those outside of such groups.
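As one illustration of why automated screening struggles here, and of a common partial mitigation, the sketch below normalizes a small set of known coded terms to analyst-facing labels before any keyword matching is attempted. This is a minimal sketch in Python; the lexicon entries are illustrative, and real monitoring would require a continuously updated, context-aware mapping maintained by subject-matter experts (and would still miss novel or purely visual codes such as memes or text rendered as images).

```python
# Illustrative entries only: a real lexicon would be far larger, continuously
# updated, and sensitive to context in ways a lookup table cannot capture.
CODED_LEXICON = {
    "day of the rope": "violent_threat",   # far-right meme invoking executions
    "1488": "white_supremacist_signal",    # well-documented numeric hate code
    "(((": "antisemitic_echo",             # the "echo" punctuation code
}

def flag_coded_language(post: str) -> list[str]:
    """Return analyst-facing labels for any known coded terms found in a post."""
    text = post.lower()
    return [label for term, label in CODED_LEXICON.items() if term in text]

print(flag_coded_language("they keep joking about the day of the rope"))
# -> ['violent_threat']
```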

Identification of Credible Threats

When it comes to monitoring of more direct and immediate credible threats of violence, or more credible plans of mobilization, stemming from SMM, a number of themes and specific considerations emerged across experts.11See also Jared Holt and Katherine Keneally, “A Field Guide for Assessing Chances of Online-to-Offline Mobilization,” Institute for Strategic Dialogue, 2023, https://www.isdglobal.org/digital_dispatches/a-field-guide-for-assessing-chances-of-online-to-offline-mobilization/. First, experts noted that they avoid looking simply for “anger,” as that is pervasive and hence not a reliable signal, in and of itself, of violence to come. Rather, they noted that looking for “calls to action” instead — i.e., what people “ought to do” or “should do” as a result of a (perceived) threat, or details about where to mobilize in order to combat or stand up against a (perceived) threat — proved more useful. What came up consistently in discussions with experts was the recommendation to look for specificity. Looking for more specific indications of mobilization — such as what preparations in particular are being made for mobilization (i.e., descriptions of specific named locations, or specific dates or recommended timing of where and when to show up in person) — helps in distinguishing aspirational requests or threats from actual organizing/mobilization and increased risk of violence stemming from SMM.
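To make the specificity heuristic concrete, here is a minimal sketch that counts how many categories of concrete detail (a date, a time, imperative “show up” phrasing, a named place) appear in a post. The regular expressions are illustrative assumptions; a production system would instead rely on named-entity recognition and proper date parsing.

```python
import re

SPECIFICITY_PATTERNS = {
    "date": re.compile(r"\b(?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)"
                       r"[a-z]*\.?\s+\d{1,2}\b", re.I),
    "time": re.compile(r"\b\d{1,2}(?::\d{2})?\s*(?:am|pm)\b", re.I),
    "call_to_action": re.compile(r"\b(?:show up|meet (?:at|up)|be there)\b", re.I),
    # Capitalized phrase after a locative preposition, as a crude place proxy.
    "place": re.compile(r"\b(?:at|outside|in front of)\s+(?:the\s+)?"
                        r"[A-Z][\w']*(?:\s+[A-Z][\w']*)*"),
}

def specificity_score(post: str) -> int:
    """Count how many categories of concrete detail a post contains (0-4)."""
    return sum(bool(p.search(post)) for p in SPECIFICITY_PATTERNS.values())

print(specificity_score("Someone should really do something about this!"))     # 0
print(specificity_score("Show up at the Granite School Board, July 6, 5pm."))  # 4
```

A post scoring zero or one category reads as aspirational; a post hitting several categories would warrant closer human review.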

Experts also noted the importance of where plans for mobilization or threats of violence were taking place in response to SMM, as that can help in pointing to the intended audience, which in turn helps to signal the intent of a “call to action.” For example, a discussion around planned organizing (e.g., showing up to a specific overpass at a specific time for a planned banner drop) that takes place in a closed, private chat among members of a group (e.g., a closed Telegram channel) has a high likelihood of occurring. Given the audience on the channel (i.e., predominantly group members), and the assumption of those on the channel that information shared therein is privileged by virtue of it being a closed/private channel, monitors who may have access to such channels can assume that discussions are more logistics-driven12This is especially so if considered together with the past conversation rate of the private group and/or poster; this is explored in further detail in the following section. — and hence have a higher likelihood of occurring offline. Individuals who are members of such channels do not need the inspiration per se to mobilize; rather, they simply need the logistics to do so. Meanwhile, information shared on more public-facing platforms has various audiences, and hence different intents. Some public-facing content is intended to appeal to “new converts,” striving to reach those who may be more generally in the same milieu (e.g., those on the Right, broadly understood) in an effort to draw them further into a movement or a specific group (e.g., broad narratives about attacks on the Right). Other public-facing announcements may be intended for those who may be more peripherally involved in a movement, aimed at bringing them further into the fold (e.g., a public “call to action” to assemble at a specific place and time, wearing all black).

An additional useful heuristic noted by experts is situating content within the larger context. For example, if SMM around anti-LGBTQ+ rhetoric is already high, a monitor might more reliably believe plans around violent organizing during Pride month in particular, as such timing signals a confluence of opportunity to leverage existing narratives around an event or period of significance. Similarly, identifying where networked harassment may already be happening can also be a useful strategy in identifying credible threats of violence. For example, if members of the LGBTQ+ community have been facing heightened harassment as a result of a rampant anti-LGBTQ+ SMM campaign, threats of violence (especially those with other characteristics noted above, such as specificity) could be considered more credible.

Identifying the credible threats and risks through the noise is an important yet difficult endeavor — even more so if trying to identify the risk of actions by individuals (“lone wolves”) rather than groups. Individuals do not tend to plan or organize their actions online, for fear of being thwarted; often, if a digital manifesto exists, for example, it is shared in the immediate lead-up to an attack,13On average, such manifestos are posted online less than two hours before an attack. Thomas James Vaughan Williams, Calli Tzani, and Maria Ioannou, “Foreshadowing Terror: Exploring the Time of Online Manifestos Prior to Lone Wolf Attacks,” Studies in Conflict & Terrorism, 2023, https://doi.org/10.1080/1057610X.2023.2205973. making it quite difficult to take early action against such a hardly-early warning. Furthermore, the success rate of early identification of credible threats involving individuals is difficult to assess, as it depends on counterfactuals.

When it comes to mass mobilization, however, such organizing is more likely to occur online: if numerous people are to be involved in an event, there is a need for networking and coordination among them. As a result, monitoring and identifying credible threats and risks involving groups is more consistent and predictable than seeking to do the same for individual actors.14There are, of course, challenges in defining “credible threats and risks”; for some (e.g., law enforcement), this represents only the threat of bodily harm or illegal action, while others may consider a broader spectrum of harm, or may be most interested in the risk of elevating conflict dynamics, for example. Such definitional choices have significant impacts on the measurement and assessment of threats and risks. Nevertheless, it is not without difficulty — and in fact has been growing increasingly difficult. The media environment has evolved considerably in recent years, with many more platform options now, many with considerably laxer content moderation and/or a proliferation of private spaces for coordination. Groups/movements and individuals have also continued to evolve as they adapt not only to this media landscape, but also to the political landscape; post-January 6, 2021, in the U.S., for example, there was a “great scattering” of individuals (especially those with more extremist positions who are arguably core conduits for the propagation of SMM) off of mainstream platforms.15Jared Holt, “After the Insurrection: How Domestic Extremists Adapted and Evolved After the January 6 US Capitol Attack,” DFRLab, Atlantic Council, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/report/after-the-insurrection-how-domestic-extremists-adapted-and-evolved-after-the-january-6-us-capitol-attack/. While steps by social media companies and online platforms toward more rigorous content moderation standards are welcome, the result has also been that some organization, coordination, and mobilization has shifted to closed spaces, which can in turn make it more difficult for analysts to monitor. Some experts — those who have been in the field for some time — pointed to this shift and the resulting need for monitoring strategies to evolve.

Identifying the Next SMM Campaign

What most of the strategies outlined above have in common is their reactive — rather than anticipatory — nature. In other words, they rely on a certain SMM campaign to have already taken hold, and to have already begun resulting in offline violence/mobilization, in order to allow for more reliable identification of offline activity that might stem from it. This raises the question: would it be possible to identify a new SMM campaign (that may have the potential to inspire offline violence/mobilization) when it first begins to emerge — which would allow for a more anticipatory response?

Common Themes and Strategies

When experts were asked, some consensus emerged around common themes that are often prevalent in “successful” SMM campaigns.16Arguably, such themes and strategies may be prevalent in successful campaigns of all types, beyond SMM alone. These common themes — along with the strategies commonly used in shaping narrative campaigns relying on such themes — are described below and outlined in a table at the end of this section. These lists are not exhaustive.

For example, narratives capitalizing on victimhood are common, relying on fear to motivate captive listeners, who are painted to be “victims.” A recent example is the resurgence of Great Replacement Theory, the conspiracy theory positing that the influx of immigrants (and people of color more generally) in the U.S. will result in the extinction, or “replacement,” of the white race.

Another common theme noted by experts is the portrayal of an existential threat — language and framing underlining the imperative of taking a “last stand” or seizing an opportunity as a “last chance.” A recent example is the Stop the Steal movement following the 2020 presidential election, the conspiracy theory positing that the election had been stolen. The movement reached an inflection point on January 6, 2021, with the deadly attack at the U.S. Capitol; mass turnout in response to the SMM campaign had been mobilized via framing of the vote confirmation in Washington, D.C., that day as the “last chance” to “take action” and to take a “last stand” against the “steal.”

Saviorism is also a common theme that emerged in discussions with experts. Capitalizing on moral outrage, combined with a reliance on heightened emotional tenor — such as claims that only the reader/viewer/follower is able to stop something egregious from happening — results in a powerful concoction when it comes to inspiring action. A common victim needing saving in such cases tends to be children — a group universally seen as helpless, innocent, and in need of protection. Expressing outrage against (perceived) threats to children — framed as a moral injury or a noble cause — is hence common in SMM campaigns. Recent examples include conspiracy theories put forward by the QAnon movement, alleging the sex trafficking and satanic sacrifice of children by Left elites, or the “groomer” narrative targeting the LGBTQ+ community, alleging that LGBTQ+ individuals are engaged in child grooming and pedophilia.

Experts also pointed to the use of certain strategies, employed by content creators and political entrepreneurs, in shaping narrative campaigns relying on such themes. For example, some experts pointed to the utility of speculative framing by content creators — asking questions or putting forth unrelated things and simply questioning the link between them as a means of planting a new conspiratorial idea (whether done consciously or not). A recent example is the link posited (without evidence) between football player Damar Hamlin’s cardiac arrest during a football game and his COVID-19 vaccination, used to further cement the conspiracy theory around the “dangers” of the COVID-19 vaccine.

Another common strategy for mobilizing is the intersection of outrage and the use of humor or the comedic. Memes in particular offer the intertwining of comedy, entertainment, and politics while indirectly commenting on the last of these. Their accessibility and ability to exacerbate in-group cohesion and out-group negativity (as outlined above) make them a popular tool, especially in certain circles.17See, for example, Hampton Stall, David Foran, and Hari Prasad, “Kyle Rittenhouse and the Shared Meme Networks of the Armed American Far-Right: An Analysis of the Content Creation Formula, Right-Wing Injection of Politics, and Normalization of Violence,” Terrorism and Political Violence, 2022, https://doi.org/10.1080/09546553.2022.2074293. More generally, a reliance on absurdity can be a useful tool in inspiring followers. A recent example includes mockery of the use of pronouns (e.g., “My pronoun is Patriot”) in response to advocacy around more normalized gender identity signaling via pronouns by the LGBTQ+ population and their allies.

Narratives also tend to be recycled — drawing on historical trends, perhaps at times framed in a new way. For example, the recent resurgence of the anti-LGBTQ+ SMM campaign — which centers a “groomer” narrative and opposition to drag events, for example — may offer a slightly new focus for its vitriol, but targeting of the LGBTQ+ community is nothing new. Experts pointed to the importance of considering historical trends and precedent in identifying which emerging SMM campaigns might have more “staying power.” Considering things like whether a “new” narrative echoes past narratives, or how similar themes have played out previously, can hence offer useful insight.

Lastly, experts pointed to the continued ebb and flow of the media environment. Once an SMM campaign has captured attention for some time, it is not uncommon to see a new narrative emerge to combat the “fatigue” that may be associated with a “now-old” narrative. Given the amount of content online, it can be difficult to capture the attention of readers/viewers/followers for long periods of time. Content creators and savvy entrepreneurs know and capitalize on this to hold onto their followings.18For a discussion of the shifts in mobilizing narratives used by the far right in the U.S. in recent years, see Roudabeh Kishi, “From the Capitol Riot to the Midterms: Shifts in American Far-Right Mobilization Between 2021 and 2022,” ACLED, 2022, https://acleddata.com/2022/12/06/from-the-capitol-riot-to-the-midterms-shifts-in-american-far-right-mobilization-between-2021-and-2022/. For example, in early 2022, opposition to COVID-19 mandates, exacerbated by SMM campaigns around the “harms” caused by vaccines and mandates, captured the attention of the far right — manifesting as “Freedom Convoys,” inspired by organizing by Canadian truckers. Come the summer of 2022, “new” SMM campaigns targeting the LGBTQ+ community, perpetuating the “groomer” narrative, took hold, with mobilization around COVID-19 mandates waning.

Common Themes and Strategies

| Type | Theme/Strategy | Description | US-Based Example |
| --- | --- | --- | --- |
| Theme | Victimhood | Relying on fear to motivate captive listeners, who are painted as “victims” | Great Replacement Theory: conspiracy theory positing that the influx of immigrants (and people of color more generally) in the U.S. will result in the extinction, or “replacement,” of the white race |
| Theme | Existential threat | Language and framing underlining the imperative of taking a “last stand” or seizing an opportunity as a “last chance” | Stop the Steal movement: conspiracy theory following the 2020 presidential election positing that the election had been stolen, which reached an inflection point on January 6, 2021, with the deadly insurrection at the U.S. Capitol, the mass turnout for which was mobilized in response to an SMM campaign framing the vote confirmation in Washington, D.C., that day as the “last chance” to “take action” and to take a “last stand” against the “steal” |
| Theme | Saviorism | Moral outrage, combined with a reliance on heightened emotional tenor, with claims that only the reader/viewer/follower is able to stop something egregious from happening | QAnon movement: conspiracy theory alleging the sex trafficking and satanic sacrifice of children by Left elites; “groomer” narrative targeting the LGBTQ+ community: conspiracy theory alleging that LGBTQ+ individuals are engaged in child grooming and pedophilia |
| Strategy | Speculative framing | Asking questions or putting forth unrelated things and questioning the link between them | “Dangers” of the COVID-19 vaccine: the link posited (without evidence) between football player Damar Hamlin’s cardiac arrest during a football game and his COVID-19 vaccination |
| Strategy | Use of humor | The intersection of outrage and the comedic (e.g., absurdity), used to increase accessibility while exacerbating in-group cohesion and out-group negativity | Mockery of the use of pronouns: “My pronoun is Patriot,” used in response to advocacy around more normalized gender identity signaling via pronouns by the LGBTQ+ population and their allies |
| Strategy | “Recycling” | Drawing on historical trends, perhaps at times framed in a new way | Recent resurgence of the anti-LGBTQ+ SMM campaign: centering a “groomer” narrative and opposition to drag events as a slightly new focus for “traditional” LGBTQ+ targeting |
| Strategy | Combating “fatigue” | Emergence of a new narrative in response to an SMM campaign “getting old,” having captured attention for some time already | Waning of SMM campaigns in opposition to COVID-19 mandates: fatigue associated with opposition to COVID-19 mandates (which had been exacerbated by SMM campaigns) in early 2022 contributed to its waning and to the emergence of “new” SMM campaigns targeting the LGBTQ+ community, perpetuating the “groomer” narrative, in summer 2022 |

The Role of Branding and Communication

Experts underlined that while the kinds of common themes and strategies outlined above may (in most cases) be necessary ingredients of the next SMM campaign — shaped by content creators and political entrepreneurs, and capable of inspiring mobilization and/or violence — they are not in and of themselves sufficient for identifying it.

For example, the summer of 2022 saw the (re)emergence of both anti-LGBTQ+ rhetoric, shaped by an SMM campaign painting members of that community as “groomers,” and anti-abortion rhetoric, fueled by SMM around fetal heartbeats, in light of the landmark Dobbs v. Jackson Women’s Health Organization decision by the Supreme Court overturning Roe v. Wade and federal abortion protection, as well as in response to pro-reproductive-health organizing. There was mobilization around both narratives, with groups/movements seen capitalizing on both — e.g., attending demonstrations around both topics. However, many of the violent groups/movements that sought to mobilize around both narratives have since prioritized one (anti-LGBTQ+) over the other (anti-abortion), with mobilization by such groups around the former ramping up and around the latter dwindling. This is despite the fact that both narratives exhibit a number of the themes of “successful” SMM campaigns mentioned above (e.g., saviorism around “save the children”).

While reasons specific to these two narratives may exist as to why one was “stickier” than the other in capturing the attention of both groups/movements and individual followers alike (the “aesthetics” associated with each narrative, the current legal and political climate, etc.), experts pointed to the important role of the messaging and communication of a narrative. How something is packaged and “sold” is integral — with a general consensus across experts that it may in fact be the most important determinant of how “successful” a narrative ultimately is. This is especially the case given the recycling of narratives, meaning that most narratives are merely a repackaging of an older variant.

In the age of social media, platforms offer content creators access to real-time “focus grouping” of their messaging. What may have taken advertisers days or weeks, and significant resources, to accomplish before can now be done in near real-time via looking at views, likes, follows, and more as metrics — with creators able to determine quite quickly which messaging “works” or has the ability to elicit a response. In an environment that is already saturated with content, such feedback can be invaluable. Other strategies “borrowed” from the world of advertising and communications include establishing a keyword or phrase, allowing for instant recognition (recent examples include “groomer” in reference to members of the LGBTQ+ community, or “mule” in reference to election “stealers”); repetition, especially after the establishment of such a keyword or phrase; and endorsement, especially by someone in power, such as political elites/politicians.

In short, identifying salient SMM narratives is difficult, in that identifying narratives is subjective, and the narratives themselves are ephemeral. This, combined with the fact that incentives exist around the continued emergence of ever-new narratives, suggests that trying to systematically track narratives over time — especially in an anticipatory rather than reactive fashion — is not feasible. In other words, considering SMM narratives as the unit of analysis, in an effort to systematically and quantitatively track factors online that contribute to offline violence/mobilization, is not a sound strategy.

Nodes of Influence as Unit of Analysis

Instead of considering the substance of SMM narratives, experts pointed to other factors that they consider to be more reliable signals around which SMM campaigns will be “stickier” and hence more likely to contribute to offline violence/mobilization. These common factors are described below, and outlined in a table at the end of this section.

A common theme that arose in discussions with experts around how to identify which online SMM campaigns may be most salient in fueling offline violence/mobilization was the importance of who is sharing the content — and especially their position vis-à-vis other similar voices in the ecosystem. In other words, this refers to the role of influencer culture: a phenomenon wherein those with more influence or clout have a considerable effect on the decisions and actions of their followers. (“Influencer” here refers not only to “internet celebrities” or those who have purposefully acquired or developed their fame online. Rather, it refers to any person who has influence, which can also include politicians or other elected leaders.) For example, some experts pointed out that if SMM was shared by an “influencer” holding a more “intellectual” position (or even if merely perceived as such), the narrative became more likely to “stick” and to eventually become more mainstream.

While no one indicator can measure “influence,” experts pointed to a number of indicators that may together be insightful. These include money, connectedness to politicians and/or political power, and connectedness to already known “influencers.” On the last, it can also be useful to consider who such influencers are in turn influenced by. Social media statistics can also be useful, especially in tandem with other indicators and when treated as a measure of reach rather than influence alone. In that vein, it is also useful to consider the extent of content that one generates, as those producing massive amounts of content tend to be rewarded algorithmically.
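As one illustration of how these indicators might be combined, the sketch below computes a weighted composite “influence score” from pre-normalized indicators. The indicator names and weights are assumptions made for illustration; any real index would need validation against observed offline outcomes.

```python
# Assumed weights for illustration; not calibrated values.
INDICATOR_WEIGHTS = {
    "funding": 0.25,          # access to money
    "political_ties": 0.30,   # connectedness to politicians/political power
    "influencer_ties": 0.25,  # connectedness to already known influencers
    "reach": 0.10,            # normalized follower/view statistics
    "content_volume": 0.10,   # volume of content generated (algorithmic reward)
}

def influence_score(indicators: dict[str, float]) -> float:
    """Weighted sum of indicators, each pre-normalized to [0, 1]."""
    return sum(INDICATOR_WEIGHTS[k] * indicators.get(k, 0.0) for k in INDICATOR_WEIGHTS)

# A hypothetical profile: politically well connected, modest raw reach.
print(influence_score({
    "funding": 0.4, "political_ties": 0.9, "influencer_ties": 0.7,
    "reach": 0.2, "content_volume": 0.6,
}))  # -> ~0.63
```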

Similarly, experts agreed that it is important to also consider where the information is shared by the influencer — i.e., what platform is used. This was flagged as more important to consider than social media metrics of follower counts or reposts alone. For example, a video posted by an influencer sharing SMM on a niche Rumble account (an “alt-tech” online video sharing platform, popular among the far right in the U.S.) may be seen by only 20 individuals — and yet, experts noted that they might expect nearly all of those viewers to mobilize in response. Meanwhile, a post shared via X (formerly known as Twitter)19Until recently, this would have been referred to as a tweet shared via Twitter. On July 22, 2023, Twitter’s owner, Elon Musk, suddenly rebranded the company to X. sharing similar SMM may garner 100 views — and yet, experts noted that they might expect only a few of those on a more broadly followed X account to actually mobilize. In short, one should not assume that more “views” or “engagement” on social media necessarily signals a greater threat. It is important, instead, to consider who may be behind those views; where information is shared can be a helpful proxy in capturing who may be behind those views, as it can signal an influencer’s intended audience.
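The Rumble-versus-X example amounts to weighting raw views by a platform-specific conversion rate. A minimal sketch, with assumed (not empirical) conversion values:

```python
# Illustrative conversion rates only; real values would have to be estimated
# from matched online/offline data and would vary by group and context.
ASSUMED_CONVERSION = {
    "rumble_niche_channel": 0.90,  # small, committed, already-mobilizable audience
    "x_public_account": 0.03,      # broad, mostly passive audience
}

def expected_mobilizers(platform: str, views: int) -> float:
    """Views weighted by an assumed platform-specific mobilization rate."""
    return views * ASSUMED_CONVERSION[platform]

print(expected_mobilizers("rumble_niche_channel", 20))  # ~18 of 20 viewers
print(expected_mobilizers("x_public_account", 100))     # ~3 of 100 viewers
```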

It is also useful to understand how SMM shifts across platforms, considering the relationship across influencers across these different online spaces, and which influencers/platforms are historically upstream or downstream from others. For example, in the U.S. context, many narratives that emerge on InfoWars (home to far-right influencer Alex Jones) later end up on Fox News (former home to far-right influencer Tucker Carlson, among others), as the latter is downstream from the former. Understanding such relationships across platforms can hence provide an early indication of what new narratives may eventually become more mainstream.

In a similar vein, it is important to consider the narratives that emerge — and especially those that have staying power — via influencers on more fringe platforms, which are upstream from most other spaces. Influencers peddling narratives in such fringe spaces often do not see their narratives become mainstream immediately, especially given the backlash that such an immediate shift from the extreme fringe to the mainstream could mean for mainstream influencers. However, experts pointed to how narratives propagated by influencers that remain on such fringe platforms for some time do eventually move downstream — slowly shifting across platforms before becoming more mainstream, which in turn allows mainstream influencers more distance from their origin.

These factors suggest that a more useful unit of analysis in systematically and quantitatively tracking online factors that contribute to offline violence/mobilization may be nodes of influence rather than the SMM narratives themselves. In other words, who shares the content (i.e., the node of influence) — taking into consideration where and its evolution across platforms — may be more important in assessing the impact of that content than the substance of that content alone.
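A minimal sketch of how such upstream/downstream relationships could be encoded and used: given first-seen records for a narrative on each platform, the platforms directly downstream of wherever it already appears become candidate venues to watch. The edges and monitoring records below are illustrative assumptions, not empirical mappings.

```python
from datetime import date

DOWNSTREAM_OF = {  # illustrative edges only
    "4chan_pol": ["infowars", "telegram_channels"],
    "infowars": ["fox_news"],
    "telegram_channels": ["x_public"],
}

first_seen = {  # hypothetical monitoring output: (narrative, platform) -> date
    ("groomer_narrative", "4chan_pol"): date(2022, 1, 10),
    ("groomer_narrative", "infowars"): date(2022, 3, 2),
}

def predicted_next_hops(narrative: str) -> list[str]:
    """Platforms directly downstream of wherever the narrative already appears."""
    seen_on = {p for (n, p) in first_seen if n == narrative}
    hops = set()
    for platform in seen_on:
        hops.update(DOWNSTREAM_OF.get(platform, []))
    return sorted(hops - seen_on)

print(predicted_next_hops("groomer_narrative"))
# -> ['fox_news', 'telegram_channels']: candidate venues to watch next
```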

Alternative Factors

| Alternative Factor | Description |
| --- | --- |
| Measure of influence | Who is sharing the SMM content, and their position vis-à-vis other similar voices in the ecosystem, is a measure of “influencer culture.” Those with more influence or clout have a considerable effect on the decisions and actions of their followers. “Influence” can be measured in a variety of ways, such as money; connectedness to politicians and/or political power; connectedness to already known “influencers” (i.e., who other “influencers” are influenced by); holding an “intellectual” position (or being perceived as such); extent of content being generated; etc. |
| Platform used | Understanding where SMM is shared by an influencer (i.e., what platform they use) helps to indicate their intended audience, which is a useful proxy for capturing who may be behind the views on social media. Better understanding who is actually viewing/engaging with the content can in turn signal their likelihood of mobilizing. |
| Relationships | Understanding the relationships among influencers can signal what new narratives may eventually become more mainstream, especially when considering relationships across different online spaces and which influencers/platforms are historically upstream or downstream from others (i.e., how SMM shifts across platforms). |

Understanding the Relationship between Online SMM and Offline Violence

What might be useful steps forward in attempts to better understand the relationship between online SMM and offline violence/mobilization? In discussions with experts, and building on ideas stemming from those discussions, some short- to long-term initiatives are proposed below — each of which may be able to better inform early warning, and in turn early action. These initiatives are intended primarily for a research audience, though secondary audiences such as donors, U.S. policymakers, and social media platforms may also benefit.

Systematic Monitoring of Specific Online Spaces

One option that could be implemented in the short term considers the point shared above regarding fringe online spaces. While narratives by influencers emerging in fringe, extreme spaces online may not become mainstream immediately, narratives that remain on such fringe platforms for some time do seem to eventually move downstream, eventually reaching — and potentially inspiring violence/mobilization in — a more mainstream audience. Most SMM monitors already regularly monitor such spaces, but more systematic monitoring and documentation — and appropriate, regular sharing with other researchers — could be useful.

For example, monitoring of the /pol/ board on 4chan could allow for better identification of emerging narratives with staying power. (4chan is an anonymous political discussion imageboard website, widely considered to be a fringe corner of the internet, home to political extremism; /pol/ is its most active board.) This could in turn point to what narratives may emerge downstream in months to come, given the indirect relationship between influencers on 4chan and those inhabiting more mainstream spaces. This would allow for early warning of which marginalized populations may be at heightened risk in the near future, which could in turn allow for early action. Of course, the monitoring of extremist, fringe spaces online comes at a personal cost, and researchers doing such work must be adequately supported to bolster their resiliency. 
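As a rough illustration of what more systematic monitoring could mean here, the sketch below flags terms on a monitored board whose weekly mention counts grow steadily across consecutive weeks, a crude proxy for "staying power" as opposed to a one-off spike. The counts, window, and growth threshold are all illustrative assumptions; real counts would come from an archival scrape of the board, and flagged terms would be routed to human analysts for contextual assessment.

```python
def has_staying_power(weekly_counts: list[int], weeks: int = 4,
                      growth: float = 1.2) -> bool:
    """True if a term grew >= `growth`x week-over-week for `weeks` straight weeks."""
    recent = weekly_counts[-(weeks + 1):]
    if len(recent) < weeks + 1:
        return False
    return all(earlier > 0 and later >= growth * earlier
               for earlier, later in zip(recent, recent[1:]))

print(has_staying_power([3, 5, 9, 14, 22]))  # True: sustained growth
print(has_staying_power([2, 40, 3, 2, 1]))   # False: a one-off spike
```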

Reliability Assessment of Reporting

A more medium-term option that could be implemented is to establish a better understanding of the reliability of reporting around future events. Monitors often come across reports of potential future events — e.g., calls to action, such as to show up at a certain school board meeting; or flyers advertising a mass mobilization event; or logistical plans regarding upcoming activities on a closed channel. It can be difficult to determine whether such planned actions will in fact take place. Anecdotally, some experts noted how they have come to know (after years of monitoring) that if an event is shared by a certain individual or group, or shared on a certain channel comprising certain individuals/influencers, they feel confident that the event will take place — that it is not just “noise.” But such efforts have not been tracked systematically.

Potential future events (e.g., events planned, reported, or advertised online), if tracked, could ultimately be compared to events that have already occurred.20The latter is already being collected by projects such as ACLED. Over time, this could help to establish the reliability of reporting by various sources of information (individuals, groups, channels, etc.). Better understanding of the reliability of reporting could in turn serve as an early warning indicator. For example, if it is determined that reporting of future events by a certain group on a certain channel is very reliable (e.g., taking place 85% of the time), and if that group is known to often engage in violence, then forthcoming reports of future events by that group on that platform could be considered a credible early warning signal. Understanding when to give merit to reporting of planned events can allow for early action, especially given finite resources (community support, law enforcement, etc.).

It is important, however, to consider that reporting, especially by groups/movements, can change over time, especially in response to changes in the political landscape. There may be incentives for groups to alter their reporting to over- or under-inflate the extent of their actions. Continued monitoring of potential future events, and continued comparisons to events that have occurred, will ensure that calculated “reliability scores” remain accurate.
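A minimal sketch of how such a reliability score might be computed, assuming a record of announced events already matched (manually, or against ACLED-style event data) to whether they occurred, with a rolling window to address the caveat above that reporting behavior changes over time. The record format and window length are assumptions.

```python
from datetime import date, timedelta

def reliability_score(announcements: list[dict], window_days: int = 365,
                      today: date = date(2023, 8, 1)) -> float | None:
    """Share of a source's recently announced events that actually occurred."""
    cutoff = today - timedelta(days=window_days)
    recent = [a for a in announcements if a["announced_on"] >= cutoff]
    if not recent:
        return None  # no recent track record; no basis for a score
    return sum(a["occurred"] for a in recent) / len(recent)

history = [  # hypothetical record for one group's announcements on one channel
    {"announced_on": date(2023, 1, 5), "occurred": True},
    {"announced_on": date(2023, 3, 9), "occurred": True},
    {"announced_on": date(2023, 5, 2), "occurred": False},
    {"announced_on": date(2023, 6, 20), "occurred": True},
]
print(reliability_score(history))  # 0.75: announcements usually pan out
```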

In addition to considering the credibility of an event occurring via the reliability of the reporting source/platform, one can consider the activity landscape of the location in which an event is reported to occur. For example, if a future event is reported to occur in Atlanta, one can consider what Atlanta looks like in terms of current mobilization, the presence of active groups/movements there or nearby, participation by groups/movements in other events in or near Atlanta, etc. If reporting of a future event suggests that “Group-X” will be active in Atlanta, reporting of future events by “Group-X” is quite reliable, and past reporting makes it clear that “Group-X” is quite active in Atlanta or nearby, then the credibility of such a reported future event can be considered quite high. Such contextual details — especially if systematically, quantitatively tracked — can offer useful early warning signals of where mobilization, and potential violence, could take place. Qualitative efforts to understand such interactions are currently underway at a handful of organizations and via one-off collaborations (for example, at the Bridging Divides Initiative [BDI]); more systematic, quantitative efforts to monitor such trends could not only advance understanding of potential threats, but could also make identification of early warnings quicker, allowing for better early action.

Further, if the early warning signs noted above suggest that a reported future event has a high likelihood of occurring (as a function of the reliability of reporting by the source and/or contextual factors), better tracking of offline mobilization measures could help to further assess its likelihood of occurrence. For example, if the (potentially violent) event that has a high likelihood of occurrence is meant to elicit a wide draw (i.e., drawing individuals from nearby locations), one could better assess the likelihood of said draw by identifying things like whether local hotels are being booked up, whether local buses are being chartered at nearby coalescing points, etc.
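Pulling these threads together, the sketch below folds source reliability, the local activity landscape, and observed offline preparations into a single, human-readable credibility flag for an announced event. The weights and thresholds are placeholders; a real system would need calibration against historical outcomes.

```python
def event_credibility(source_reliability: float, group_active_locally: bool,
                      offline_preparations: int) -> str:
    """offline_preparations counts observed signals: hotel bookings,
    chartered buses at coalescing points, permit applications, etc."""
    score = source_reliability
    score += 0.15 if group_active_locally else -0.15
    score += min(offline_preparations, 3) * 0.05  # cap the preparation bonus
    if score >= 0.85:
        return "high: flag for early action"
    if score >= 0.50:
        return "medium: continue monitoring"
    return "low: likely noise"

# "Group-X" announces an Atlanta event: reliable reporter, active nearby,
# two observed preparation signals.
print(event_credibility(0.85, True, 2))  # -> high: flag for early action
```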

An additional component to systematically track could entail further exploration of the framing behind the events that did occur, especially those that turned violent. A better understanding of the differences in the SMM behind events that occurred and turned violent, relative to events that occurred and did not turn violent, or events that did not occur at all, could help to shed light on specific themes, strategies, and/or factors that may be most salient, especially in certain contexts.
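One simple quantitative starting point for this comparison is term-level log-odds between the SMM content preceding events that turned violent and content preceding events that did not. The sketch below uses naively smoothed log-odds on two toy corpora; real analysis would require substantially larger corpora and more robust estimators.

```python
import math
from collections import Counter

# Toy placeholder corpora of pre-event posts.
violent_posts = ["last stand take action now", "last chance defend our children"]
peaceful_posts = ["meet at the park bring signs", "peaceful rally bring signs"]

def token_counts(posts: list[str]) -> Counter:
    return Counter(tok for post in posts for tok in post.split())

def log_odds(term: str, a: Counter, b: Counter, alpha: float = 0.5) -> float:
    """Positive values mean `term` leans toward corpus `a` (violent, here)."""
    pa = (a[term] + alpha) / (sum(a.values()) + alpha)
    pb = (b[term] + alpha) / (sum(b.values()) + alpha)
    return math.log(pa / pb)

v, p = token_counts(violent_posts), token_counts(peaceful_posts)
for term in ["last", "children", "signs"]:
    print(term, round(log_odds(term, v, p), 2))
# "last" and "children" skew toward the violent corpus; "signs" the peaceful one.
```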

Network of Influencers

A longer-term option that could be implemented would be to identify and build a network of “influencers” (at all levels) and their profiles across platforms — i.e., a tracking system in which the unit of analysis is the node of influence. In short, instead of a systematic, quantitative effort trying to track SMM narratives, this would be a systematic, quantitative effort to track nodes of influence — using those to then assess what narratives are being propagated by whom. While narratives may be ephemeral and ever-changing, a list of influencers is (relatively) more stable, allowing for longitudinal analysis. Instead of waiting for a new SMM campaign to begin to emerge in order to then try to assess whether the newly emerged narrative is salient enough both to have staying power and to inspire mobilization/violence, such a system would determine when certain influencers begin perpetuating a new narrative — a shift that would arguably signal more meaningful change.

Such an initiative also has the benefit of being more bounded in scope. Trying to track SMM online at large is effectively an infinite task: platforms abound, users are many, and content is ever-increasing. Trying to digest all of it accurately, even via automation, is not feasible. Instead, identifying a list of influencers across platforms and then digesting their content specifically is a bounded task, rather than an infinite one. Reviewing content propagated by a certain list of actors — even if that list is long — becomes more feasible, and automation can be capitalized on in a more targeted way (e.g., trained to identify calls to action, or the specificity of events, shared by such actors, via use of natural language processing [NLP]).
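A minimal sketch of such a bounded pipeline, assuming a hypothetical fetch_recent_posts collection function (standing in for per-platform scraping or API code) and using an off-the-shelf zero-shot classifier from the Hugging Face transformers library to triage posts for human review; the handles, candidate labels, and threshold are illustrative.

```python
from transformers import pipeline  # Hugging Face transformers

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["call to action", "news commentary", "humor"]

INFLUENCER_LIST = ["influencer_a", "influencer_b"]  # the bounded unit of analysis

def fetch_recent_posts(handle: str) -> list[str]:
    """Hypothetical stand-in for platform-specific collection code."""
    return ["Everyone needs to show up at the capitol steps Friday at noon."]

for handle in INFLUENCER_LIST:
    for post in fetch_recent_posts(handle):
        result = classifier(post, candidate_labels=LABELS)
        if result["labels"][0] == "call to action" and result["scores"][0] > 0.7:
            print(f"review queue <- {handle}: {post!r}")
```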

Such an effort may surmount some of the technical barriers raised above, but there would be important associated legal and ethical issues to consider. Developing a list of individuals may be legal in the U.S. context (for now), but one would face legal barriers in some other contexts — such as in the European Union, where the General Data Protection Regulation (GDPR) would hinder attempts to collect, use, or store personal data (e.g., information relating to an identifiable person). Further, a list of influencers that may represent a list of “individuals to monitor” for one person may be a list of “enemies of the state” in the hands of another. When considered in conjunction with concerns around state surveillance — especially in light of debates around free speech — this poses important ethical concerns to consider for future research.

Recommendations

In conclusion, the suggestions put forward above help to inform where resources could be targeted to better understand the relationship between online SMM and offline violence/mobilization, and how to harness that information to better inform atrocity prevention and early warning. Below, recommendations are put forward primarily for the research field — the primary target audience of this brief — though some are also noted for other, secondary audiences. Stemming from discussions with experts, these recommendations seek to provide suggestions toward the better tracking and understanding of online SMM, so as to be better able to identify early warning signs of offline violence/mobilization.

For example, some experts highlighted the need for those monitoring SMM to have an (always improving) understanding of history. Historical precedent impacts trends, as many emerging (new) SMM campaigns are in fact repackaged versions of prior narratives. For that reason, having an improved understanding of what happened the last time a narrative targeting a specific population emerged can provide lessons learned. A better grasp of the histories of marginalized groups (immigrants, indigenous groups, etc.) is also important, especially as such histories may be used to fuel future narratives and SMM campaigns.

There also needs to be better monitoring of the variety of types of media that exist within the research field — both by humans and via automation/AI. Much of the focus has been on the consumption of text-based media, with far less emphasis on visual media (symbols, images, text as image, etc.), including video. Such media can be quite emotive and therefore effective in calls to action, for example — and yet they can be time-consuming to digest, with much nuance lost if simply transcribed into text (pauses or humor, for example, may be lost). There was likely a reason, after all, that the original influencer chose to deliver their information via video rather than a text-based medium. So there must be improvements in the digestion and assessment of such media. Furthermore, monitoring must remain ever-evolving and responsive to the types of media that exist, especially given the emergence of new technologies that may be harnessed by extremists and political entrepreneurs. For example, just as text-based bulletin boards have given way to the sharing of visual media via social media, one must consider the nefarious opportunities that the metaverse may provide in the future.21Suraj Lakhani, “When Digital and Physical World Combine: The Metaverse and Gamification of Violent Extremism,” International Centre for Counter-Terrorism, 2022, https://pt.icct.nl/article/when-digital-and-physical-world-combine-metaverse-and-gamification-violent-extremism.

A common theme that emerged across many experts was the need for better linkages within the research field, and the need to work together more — e.g., more sharing of lessons learned, or better synergies between technical experts and those with contextual expertise. Even within the smaller world of monitors, many disparate individuals and groups are currently doing the same monitoring. Working together — instead of the current piecemeal approach — could minimize redundancies while maximizing the precious resource of human capacity and contextual expertise. As things stand now, many individuals and groups vie for the same funding and survival (and, some may even argue, the same clout) within the same field. An environment or “hub” that helps to bring together such entities is hence a key recommendation.

Hand in hand with better support within the research field is a recommendation for better support of the research field by donors. As the online ecosystem continues to expand and SMM becomes increasingly pervasive, there is an ever-greater need for effective monitoring, which requires donor support. Only with continued support, especially ahead of and during contentious periods such as the upcoming 2024 U.S. presidential election, when early warning and early action would be most imperative, will systematic tracking of the relationship between online SMM and offline violence/mobilization improve, or be possible at all. Given the slow pace of public funding for this type of research, and in light of both real and perceived concerns regarding the government's place in monitoring individual free speech, private foundations and donors have a unique role to play in supporting such innovation and in helping to fill this gap in the short term.

Another recommendation is for social media companies to standardize data availability and accessibility across platforms, given the difficulties that current access regimes pose for monitoring and assessment. Experts noted that platforms are making it increasingly hard to engage in monitoring at all, with growing limitations on data access. As one example, X now charges half a million dollars or more per year for access to a very small fraction of the company's posts, a staggering increase given that academics previously accessed the Twitter API for free, making it nearly impossible for most to (continue to) access data from the social media giant.22Chris Stokel-Walker, “Twitter’s $42,000-per-Month API Prices Out Nearly Everyone,” Wired, March 10, 2023, https://www.wired.com/story/twitter-data-api-prices-out-nearly-everyone/.

(As Twitter continues to make significant changes to its platform as part of its rebranding to X, more obstacles for research are expected. Recently, for example, the company sued the non-profit research center that had used the platform's data to demonstrate a stark rise in hate speech following its purchase by Elon Musk, alleging the center “falsely claim[ed] it had statistical support showing the platform is overwhelmed with harmful content” via “unlawfully accessing data.”23Hayden Field, “Twitter, Now Called X, Sues Researchers Who Showed Rise in Hate Speech on Platform After Musk Takeover,” CNBC, August 1, 2023, https://www.cnbc.com/2023/08/01/x-sues-ccdh-for-showing-hate-speech-rise-on-twitter-after-musk-deal.html.) Making such information inaccessible makes it increasingly difficult to understand the relationship between online engagement and offline activity.
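For reference, the sketch below illustrates the kind of lightweight query academics could once run against the free Twitter API tier, here using the tweepy library. The bearer token, search query, and fields are placeholders, and collection at this scale now sits behind the paid tiers discussed above.

```python
# Illustrative sketch only: the sort of small-scale collection researchers ran
# against the Twitter API before steep paid tiers were introduced. Assumes the
# tweepy package; the credential and query are placeholders.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

response = client.search_recent_tweets(
    query='"critical race theory" -is:retweet lang:en',  # hypothetical monitoring query
    tweet_fields=["created_at", "public_metrics"],
    max_results=100,
)
for tweet in response.data or []:
    # Print a timestamp, reach proxy, and text snippet for each matching post.
    print(tweet.created_at, tweet.public_metrics["retweet_count"], tweet.text[:80])
```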

Another concern that arose was around documentation, given content removal by social media companies. It can be difficult to document information that surfaces during monitoring efforts in a way that ensures it will be admissible in court, a concern that arose for the United States House Select Committee to Investigate the January 6th Attack on the United States Capitol, for example. The ephemeral nature of online content also makes it difficult to access information once it is taken down. Such is the case in the immediate aftermath of atrocities, such as mass casualty events, when social media companies remove the digital footprint of perpetrators. While removing such content is important in thwarting violent propaganda, it can make it far more difficult for monitors to gather information on a perpetrator's inspiration and motives when it matters most. Those in the human rights space have proposed “digital evidence lockers” as a model for archiving digital information.24Alexa Koenig, Shakiba Mashayekhi, Diana Chavez-Varela, Lindsay Freeman, Kayla Brown, Zuzanna Buszman, Rachael Cornejo, Amalya Dubrovsky, Sofia Jordan, Sang-Min Kim, Lucy Meyer, Pearlé Nwaezeigwe, Sri Ramesh, Maitreyi Sistla, Eric Sype, and Ji Su Yoo, “Digital Lockers: Archiving Social Media Evidence of Atrocity Crimes,” UC Human Rights Center, 2023, https://humanrights.berkeley.edu/publications/digital-lockers-archiving-social-media-evidence-atrocity-crimes.
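As a toy illustration of one ingredient such a locker would need, the sketch below hashes captured content and records timestamped provenance metadata so that later tampering is detectable. It is an assumption-laden simplification, not the archiving architecture proposed in the cited report; the captured markup and URL are placeholders.

```python
# Illustrative sketch only: hashing captured content and recording provenance
# metadata, a basic building block of integrity-preserving archival. Uses only
# the Python standard library.
import hashlib
import json
from datetime import datetime, timezone

def archive_record(content: bytes, source_url: str) -> dict:
    """Build an archival record whose hash allows later integrity checks."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "source_url": source_url,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(content),
    }

# Hypothetical capture of a post before it is taken down.
post = b"<html>...captured post markup...</html>"
record = archive_record(post, "https://example.com/post/123")  # placeholder URL
print(json.dumps(record, indent=2))
```

A real locker would add authenticated storage, access controls, and chain-of-custody logging, but even this minimal record shows how capture-time hashing lets investigators later demonstrate that archived content was not altered.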

The scope of this brief has been to identify systematic approaches to understanding the online-offline continuum for improved prevention/early warning, and these recommendations speak specifically to means toward that end. Recommendations outside that scope, such as improvements to content moderation by social media companies, regulatory legislation by U.S. policymakers, or early actions to be taken by various stakeholders in response to early warning indicators, are not explored here.

About the Author

Roudabeh Kishi is Chief Research Officer at the Bridging Divides Initiative (BDI), a non-partisan research initiative that tracks and mitigates political violence in the United States, based at the School of Public and International Affairs (SPIA) at Princeton University and co-hosted by the Empirical Studies of Conflict Project (ESOC). An expert in data methodologies and measuring the risk of political violence, she regularly consults on a range of projects around conflict and data for clients, including multilateral, humanitarian, peacebuilding, and research organizations. Previously, she was the Director of Research & Innovation at the Armed Conflict Location & Event Data Project (ACLED), where she oversaw global data collection and analysis tracking political violence and protests around the world.

Notes

  • 1
    See definitions in the inset box. With the rise of artificial intelligence (AI) has come new forms of fabricated content creation that capitalize on AI and machine learning to create digital forgeries. Deepfakes, for example, are “computer-generated, photorealistic media created via cutting-edge artificial intelligence technology,” which can be used by malicious actors to fool viewers into believing things that never happened — aiding in their propaganda attempts. A known politician in power making a claim about COVID-19 being fake and created as a nefarious tool, and a not-real person alleging they are a teacher who uses critical race theory for indoctrination, both represent powerful hypothetical disinformation that may fool the untrained eye, making them likely candidates to be shared widely as misinformation. See M. Mitchell Waldrop, “Synthetic Media: The Real Trouble with Deepfakes,” Knowable Magazine, March 16, 2020.
  • 2
    See, for example, Kristina Hook and Ernesto Verdeja, “Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities,” Stimson Center, 2022, https://www.stimson.org/2022/social-media-misinformation-and-the-prevention-of-political-instability-and-mass-atrocities/.
  • 3
    E.g., the dissemination of SMM about coronavirus mandates in efforts to challenge them.
  • 4
    E.g., the development and dissemination of the conspiracy theory that the 2020 presidential election in the U.S. was “stolen.”
  • 5
    E.g., the Proud Boys (a violent, far-right group engaged in political violence in North America, and the U.S. in particular) rarely engaged in mobilization, such as engagement at demonstrations, around schools — until the summer of 2021, when they began latching onto SMM around critical race theory (CRT), which branded teaching about racism in America as a radical threat to white children. The lack of major recent mobilization around schools before SMM around CRT took shape suggests that the group became engaged in such activity not based on ideology alone, but as a strategic means to an end — recognizing the opportunities co-opting such a popular narrative could provide. For example, “on 6 July 2021, the Proud Boys protested at a Granite School Board meeting in Salt Lake City, Utah, attacking CRT, despite the fact that CRT is already banned from school curricula in the state,” suggesting that their actions were driven by strategy/opportunity rather than ideology. See Roudabeh Kishi, “Far-Right Violence and the American Midterm Elections: Early Warning Signs to Monitor Ahead of the Vote,” ACLED, 2022, https://acleddata.com/2022/05/03/far-right-violence-and-the-midterm-elections-early-warning-signs-to-monitor-ahead-of-the-vote.
  • 6
Even such long-term, longitudinal efforts to track offline violence do not capture all forms of violence; they are limited to violence of a political nature (excluding criminal violence) that occurs in public fora (excluding domestic or interpersonal violence).
  • 7
    Structural gaps refer to cases where the information exists yet is not being captured in a way that makes it usable in analysis (i.e., the structures do not exist, as opposed to the information not existing).
  • 8
This represents a 70% response rate to interview requests: requests were sent to 20 experts, 14 of whom responded. Such a response rate is considered very strong; a good response rate to surveys, for example, is considered to be between 5% and 30%.
  • 9
    Arthur M. Schlesinger, “Biography of a Nation of Joiners,” American Historical Review 50, no. 1 (1944): 1-25.
  • 10
    Robert D. Putnam, “Bowling Alone: America’s Declining Social Capital,” Journal of Democracy 6, no. 1 (1995): 65-78.
  • 11
    See also Jared Holt and Katherine Keneally, “A Field Guide for Assessing Chances of Online-to-Offline Mobilization,” Institute for Strategic Dialogue, 2023, https://www.isdglobal.org/digital_dispatches/a-field-guide-for-assessing-chances-of-online-to-offline-mobilization/.
  • 12
    This is especially so if considered together with the past conversation rate of the private group and/or poster; this is explored in further detail in the following section.
  • 13
    On average, such manifestos are posted online less than two hours before an attack. Thomas James Vaughan Williams, Calli Tzani, and Maria Ioannou, “Foreshadowing Terror: Exploring the Time of Online Manifestos Prior to Lone Wolf Attacks,” Studies in Conflict & Terrorism, 2023, https://doi.org/10.1080/1057610X.2023.2205973.
  • 14
    There are, of course, challenges in defining “credible threats and risks”; for some (e.g., law enforcement), this represents only the threat of bodily harm or illegal action, while others may consider a broader spectrum of harm, or may be most interested in the risk of elevating conflict dynamics, for example. Such definitional choices have significant impacts for the measurement and assessment of threats and risks.
  • 15
    Jared Holt, “After the Insurrection: How Domestic Extremists Adapted and Evolved After the January 6 US Capitol Attack,” DFRLab, Atlantic Council, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/report/after-the-insurrection-how-domestic-extremists-adapted-and-evolved-after-the-january-6-us-capitol-attack/.
  • 16
    Arguably, such themes and strategies may be prevalent in successful campaigns of all types, beyond SMM alone.
  • 17
See, for example, Hampton Stall, David Foran, and Hari Prasad, “Kyle Rittenhouse and the Shared Meme Networks of the Armed American Far-Right: An Analysis of the Content Creation Formula, Right-Wing Injection of Politics, and Normalization of Violence,” Terrorism and Political Violence, 2022, https://doi.org/10.1080/09546553.2022.2074293.
  • 18
    For a discussion of the shifts in mobilizing narratives used by the far right in the U.S. in recent years, see Roudabeh Kishi, “From the Capitol Riot to the Midterms: Shifts in American Far-Right Mobilization Between 2021 and 2022,” ACLED, 2022, https://acleddata.com/2022/12/06/from-the-capitol-riot-to-the-midterms-shifts-in-american-far-right-mobilization-between-2021-and-2022/.
  • 19
    Until recently, this would have been referred to as a tweet shared via Twitter. On July 22, 2023, Twitter’s owner, Elon Musk, suddenly rebranded the company to X.
  • 20
    The latter is already being collected by projects such as ACLED.
  • 21
    Suraj Lakhani, “When Digital and Physical World Combine: The Metaverse and Gamification of Violent Extremism,” International Centre for Counter-Terrorism, 2022, https://pt.icct.nl/article/when-digital-and-physical-world-combine-metaverse-and-gamification-violent-extremism.
  • 22
Chris Stokel-Walker, “Twitter’s $42,000-per-Month API Prices Out Nearly Everyone,” Wired, March 10, 2023, https://www.wired.com/story/twitter-data-api-prices-out-nearly-everyone/.
  • 23
    Hayden Field, “Twitter, Now Called X, Sues Researchers Who Showed Rise in Hate Speech on Platform After Musk Takeover,” CNBC, August 1, 2023, https://www.cnbc.com/2023/08/01/x-sues-ccdh-for-showing-hate-speech-rise-on-twitter-after-musk-deal.html.
  • 24
    Alexa Koenig, Shakiba Mashayekhi, Diana Chavez-Varela, Lindsay Freeman, Kayla Brown, Zuzanna Buszman, Rachael Cornejo, Amalya Dubrovsky, Sofia Jordan, Sang-Min Kim, Lucy Meyer, Pearlé Nwaezeigwe, Sri Ramesh, Maitreyi Sistla, Eric Sype, and Ji Su Yoo, “Digital Lockers: Archiving Social Media Evidence of Atrocity Crimes,” UC Human Rights Center, 2023, https://humanrights.berkeley.edu/publications/digital-lockers-archiving-social-media-evidence-atrocity-crimes.
