Making the Nation Safer Through Social Science

The Psychology of It All

With security issues tied to human behavior, cutting-edge psychological science can be an indispensable tool for effective policy

By  Arie W. Kruglanski  •  Michele J. Gelfand

Security issues are intimately tied to human behavior. Whether it is waging wars, carrying out insurgencies, yielding to or resisting terrorism, conducting negotiations, or combating extremism, it is individuals (leaders, policymakers, voters, generals, and foot soldiers) who make these happen. They are governed by the factors, structures, and processes that constitute what is broadly known as human nature, the quintessential subject matter of psychology. Indeed, social scientists' recommendations to policymakers have almost without exception contained implicit psychological assumptions.

In this paper, we discuss the conditions under which psychology's insights into human behavior can make effective contributions to policy. Essential in this regard is a close collaboration of basic scientists with relevant policymakers, who can clearly articulate their particular concerns, and with contextual experts, who can identify the constraints that a given policy recommendation must accommodate. Problem-focused teams that bring together policymakers, basic scientists, and context experts are needed to jointly forge the best, most evidence-based interventions and programs in the security domain and beyond.

"Politics is too serious a matter to be left to the politicians," famously quipped French President Charles de Gaulle. But if not politicians, then who? For better or worse, it is politicians, after all, whose job it is to govern and carry out policy. How can they do it well? Plato proposed the ideal of the "philosopher king," a studious scholar endowed with wisdom, intelligence, and reliability as well as modesty and self-effacement. Equipped with these laudable attributes, the ruler was expected to use power wisely and to lead the people with a steady hand to safety and prosperity. Yet whereas some great rulers of the past were philosophically inclined (e.g., Marcus Aurelius, Catherine the Great, Frederick the Great), the ideal Plato envisaged has been in short supply. To compensate for their imperfections, rulers, as far back as history reaches, have typically been tutored, assisted, and counseled by knowledgeable advisors. These counselors provided them with relevant information, analysis, and opinion on topics of concern.

The need for competent policy advice is particularly acute in today's complex world, faced as it is with a gathering tsunami of problems that threaten the stability and security of societies around the globe. One would think that today's policymakers have a big advantage over the rulers of yesteryear: they can call on the advice of sophisticated scientists possessed of an immense, empirically validated reservoir of knowledge in nearly all domains of human endeavor. This should make the task of policy much easier, more systematic, and less dependent on the vagaries of impulse and intuition than in the past. But does it?

Here we discuss the role of the social sciences in the shaping of governmental policy and the conditions under which their input is most likely to bear fruit. We focus in particular on the security domain, with which we have had first-hand experience, and on the role of psychology, our own discipline, in this enterprise.

It has been said that the First World War was the war of the chemists, the Second World War was that of the physicists, and the Cold War belonged to the social scientists.[1] Indeed, eminent social scientists the likes of Thomas Schelling, Henry Kissinger, Walter Rostow, McGeorge Bundy, and Zbigniew Brzezinski had significant impact on U.S. security policy during the second half of the 20th century. Their substantial influence was hardly unblemished. In fact, these social scientists were often criticized for what were perceived as significant policy debacles of the administrations they advised, in particular the disastrous course and outcomes of the Kennedy, Johnson, and Nixon administrations' involvement in the Vietnam War (1955-1975).

Following this "golden age" of social scientists' (economists' and political scientists' in particular) involvement in policy, a renewed impetus for engaging social scientists in policy was occasioned by the 9/11 attacks by Al Qaeda operatives on the World Trade Center in New York and on the Pentagon in Washington, DC. In the aftermath of these assaults, President George W. Bush appealed to the U.S. National Academy of Sciences (NAS) to provide him with the best available science on the causes and dynamics of terrorism. In response, the NAS set up various disciplinary panels, including one in the social and behavioral sciences (of which one of us [AK] was a member). Their work resulted in the 2002 report by the National Research Council (NRC), titled "Making the Nation Safer: The Role of Science and Technology in Countering Terrorism."

At about the same time, in November 2002, the Department of Homeland Security (DHS) was created. DHS set up several centers of excellence for the disciplinary study of terrorism at some of the nation's universities. One of these centers, devoted to the social and behavioral aspects of terrorism, was founded at the University of Maryland: the National Consortium for the Study of Terrorism and Responses to Terrorism (START), in whose research and educational activities both of us have participated.[2]

In 2008, the Minerva Research Initiative was launched by the U.S. Department of Defense with the explicit aim of supporting basic social science relevant to DoD's central mission. The scientific community responded enthusiastically to the Minerva call and proceeded to carry out a considerable amount of high-quality research under the program's aegis. The present authors were among the program's highly involved participants. We served as co-PIs on two separate Minerva grants ("The Motivational, Ideological and Social Aspects of Violent Extremism" and "Syrian Refugees and Their Potential for Radicalization"), both of which produced key insights on the psychology of terrorism, disseminated in multiple scientific articles as well as a number of books.

Our experience as Minerva scholars was highly rewarding. We benefited from the Minerva-sponsored work both as basic scientists and as researchers seeking to understand the role of social science in the advancement of America’s national interests. In what follows, we summarize what we have learned on this topic, and discuss the challenges that the interface of basic science and policy entails, as well as the conditions under which social science contributions to policy may live up to their promise. 

Challenges and Obstacles

When all is said and done, the rationale for allotting defense funding to basic research in social science lies in the expectation that such research will make concrete and palpable contributions to national security and provide valid answers to burning questions facing the government in this domain. Unfortunately, the rich history of social science involvement in security policy reveals a rather checkered record. Numerous programs, initiatives, and institutes set up for that purpose over the years produced a rather meager yield,[3] and the social scientists with the greatest access to and influence on policy during the Cold War era (Schelling, Rostow, McGeorge Bundy, and others) were blamed in part for the immense price in lives, resources, and prestige that the U.S. paid for the policy mistakes that characterized its entanglement in Vietnam.

What went wrong?

Method over substance

In commenting on the wrong turns, the trials and tribulations, and the dead ends that have historically bedeviled the relations between the basic research community and the policy establishment, critics[4] have identified as a major culprit a basic tension between rigor and relevance. Rigor in this context refers to the insistence on elegant formalisms, mathematical models, computer simulations, and sophisticated statistics and neuroscience, all presumably intended to highlight the truly scientific character of the social sciences and their kinship with the highly successful natural sciences, the "unity of science" notion advocated by the Russell Sage Foundation.[5] According to critics, such methodological "purism" ushered in a distaste for the messiness that inquiry into policy-relevant problems inevitably entails, and reduced the appeal of work on such problems across the social sciences.

Now, there is nothing wrong with rigorous practices as such. Quite the contrary: modeling, mathematics, and the like may contribute much to the honing of theoretical ideas, the derivation of unexpected implications from precise concepts, and the emergence of new insights from computer-assisted simulations combined with real-world data, as we have shown in our own work.[6] The problem arises when the method begins to drive the substance of inquiry, the tail wagging the dog, as it were. After all, the logic of scientific discovery[7] requires that the substantive idea come first and that the method of representing it (e.g., formally or mathematically) or testing it (e.g., via sophisticated statistical techniques) be secondary. Method serves as a means of clarifying or validating the substantive idea, but it should not replace the idea as the reason for carrying out the inquiry in the first place.

Yet it can happen that the cart is put before the horse and, for a given scientific community, the method takes precedence over the substance. This has been, and still is, happening in the social sciences to an appreciable degree. Formal modeling may be preferred by analysts because it seems "more scientific" and worth the research investment that needs to be continually justified. Yet falling in love with method can have a major adverse consequence: it may bias the choice of substantive problems away from their intrinsic interest or relevance to broader issues and toward questions that lend themselves to an appropriately "rigorous" treatment. As in the parable of the person who limits the search for a lost dime to the illuminated area under the street lamp, the scientist who insists on rigorous methods of a given kind, to the exclusion of all others, is in danger of missing out on important insights and ending up learning more and more about less and less.

This isn't even good basic science. As the Nobel Prize-winning physicist Percy Williams Bridgman pithily put it, "The scientific method, as far as it is a method, is nothing more than doing one's damnedest with one's mind, no holds barred." This definition encompasses all kinds of representations as well as diverse types of evidence, including qualitative evidence from interviews, systematic observations, and case studies. In short, science at its best should consist of practicing the art of the possible in the interest of discovery.

Substantive inadequacy

Ironically, the major mistakes of U.S. Vietnam policy attributed to the counsel of social scientists like Thomas Schelling or Walter Rostow did not result from the application of sophisticated mathematical models or other complicated tools of rigor. Sophisticated scientists though they were, their actual policy proposals were simple-minded and basic rather than derived from arcane mathematical calculations. Schelling's idea of deterrence through demonstrated commitment to anti-communism and zero tolerance of communist insurgency (as well as the derivative idea that any deviation from that policy would signal weakness and invite further aggression, the basis of the so-called "domino theory") is commonsensical enough. So is Rostow's idea that coercion works and that, if it hasn't worked yet, it is because there hasn't been enough of it, an idea that motivated the escalating coercive bombing campaign he advocated in Vietnam.

Apparently, the persuasive power of simple, readily understandable ideas isn't restricted to the realm of social science. As Desch noted: "It is not even clear that natural scientists have been most influential when they have employed their most rigorous and mathematically sophisticated approaches, at least in the national security realm. Indeed, there is more evidence that they have been most influential when they have offered practical solutions to real-world problems. These solutions have often come from scientifically uncertain and incomplete data. These are the hallmarks of much of the best of qualitative social science."[8]

So the failures of the strategic advice that social scientists offered to successive U.S. administrations in the Cold War era did not necessarily stem from methodological considerations related to disciplinary "scientism." Rather, they stemmed from the substantive insufficiency of those proposals, and more specifically from their inadequate appreciation, indeed outright neglect, of the psychology of the situation. Schelling's insistence on demonstration of commitment as the sine qua non of deterrence ran afoul of the fact that commitment is a two-way street and that the North Vietnamese commitment could prove stronger and more enduring than the American one.

Miscalibration of the adversary's commitment relative to one's own has also characterized more recent U.S. policies in Afghanistan (and, to some extent, in Iraq and Syria), where after two decades of fighting the U.S. finally decided that enough was enough and withdrew its forces from the region, largely leaving the field to the Taliban, whose patience, captured in the saying "while the Americans have the watches, we have the time," appears to have been sadly vindicated.

Similarly, Rostow's idea that coercion and intimidation are unconditionally effective ignores the psychology of challenge and reactance: standing up to a powerful adversary arouses the motivation to resist the aggressor and to demonstrate resilience, courage, admirable fortitude, and above all one's honor, a concept we have examined at length in our Minerva efforts. Time and again in history we see this dynamic illustrated, for instance by Londoners' resilience during the Blitz and by the mid-20th-century successes of various anti-colonial insurgencies of the "wretched of the earth"[9] in Algeria, Israel, and Kenya, among others.

In other words, it is not the methodological rigor of basic researchers that produced social scientists' misguided recommendations to policymakers, but rather the fact that their substantive ideas were inadequate and ill-fitted to the context to which they were applied. That said, we do not mean to suggest that the social scientists whose recommendations did not pan out were therefore lacking in ability, talent, acumen, or proper training. Science, even at its best, isn't clairvoyant. In Popper's framing, it consists of "conjectures and refutations."[10] Progress in basic research, and its consequent ability to effectively inform policy, depends on our readiness to learn from our mistakes. It is the latter, perhaps, that has been lacking in much of the basic social science research of potential relevance to policy in the security domain. And the problem, in our view, has been insufficient attention to human psychology. In what follows, we elaborate this view in some detail.

Culture clashes

While we believe that psychological science has a lot to offer the national security community, we also recognize that there have been fundamental culture clashes between the two communities, reflecting different values and temporal perspectives, that have limited progress.[11] Scientists doing work relevant to the defense community are not necessarily trained in the art of communicating their findings beyond the walls of academia. Moreover, the culture of academic psychology until recently generally viewed "applied" problems as secondary (if not "selling out"), and rewards for such endeavors were few and far between. Likewise, scientists in academic psychology have been socialized to be very conservative in applying their findings and to be wary of providing guidance prematurely. The defense community, on the other hand, faces clear time pressure to implement scientific findings. Often, too, the good advice that social scientists offer is ignored by policymakers because of hubris, misunderstanding, or time constraints that privilege the old ways of doing things over suggested innovations. As we will discuss later, managing the culture clash between the social science and policy communities is a key mandate in the years to come.

The Centrality of Psychology in National Defense

Security issues are intimately tied to human behavior. Whether it is waging wars, carrying out insurgencies, yielding to or resisting terrorism, conducting negotiations, or combating extremism, it is individuals (leaders, policymakers, voters, generals, and foot soldiers) who make these happen. They are governed by the factors, structures, and processes that constitute what is broadly known as human nature, the quintessential subject matter of psychology. Interestingly, social scientists' recommendations to policymakers have historically, almost without exception, contained implicit psychological assumptions. The problem has been, however, that such assumptions were often overly general and undifferentiated. They woefully lagged behind what psychological science has discovered in the past half-century.

In the domain of terrorism research, for instance, hypotheses were initially framed in terms of macro-level variables (poverty, political oppression, or poor education) suspected of constituting "root causes" of the phenomenon. Underlying these hypotheses, presumably, are the implicit psychological assumptions that political oppression or poverty are frustrating and that frustration tends to breed aggression, or that poorly educated people are persuadable and readily swayed by terrorists' rhetoric, whereas the well-educated know better than to accept patent falsehoods. Yet empirical research flatly refuted all of these "root cause" hypotheses.

Does that mean that poverty, feeling politically oppressed, or holding a low-level job because of one's poor education are necessarily irrelevant to succumbing to the appeal of terrorism? Not necessarily. Each of these is likely to reduce an individual's sense of personal significance and, in that sense, constitutes a contributing factor to terrorism.[12] We expand on the psychological process underlying this potential contribution later on.

The rationality fallacy

A recurrent issue with psychological assumptions made by economists and other social scientists is the universalization of their own rationality and its "bestowal" upon others. As Desch notes, even relatively simple game-theoretic frameworks, such as Schelling's models of coercive bargaining, "assumed both sides share the same cost/benefit preferences."[13] Often such assumptions have been wrong. For instance, a frequent assumption of game-theoretically oriented scholars is that it is rational to yield to overwhelming force rather than suffer massive casualties, the destruction of property, and economic devastation. Yet time and again it has turned out that people are ready to suffer such calamities rather than face the shame of surrender. Is such behavior "irrational"? According to definitions of rationality by such eminent theorists as Max Weber, Emile Durkheim, and Herbert Simon, rationality concerns the relation of means to ends: a means is rational if it serves the individual's goals and irrational if it does not. This highlights the essential subjectivity of rationality and its fundamental dependence on the "eye of the beholder." After all, not everyone has the same ends, and for some people and cultures[14] honor can be more important than health, wealth, and even life itself.[15]

The work of Scott Atran similarly demonstrates that upholding and protecting values sacred to one's group can constitute an end that supersedes all others.[16] By this analysis, suicide bombers aren't irrational: they carry out their deeds in order to do something significant and worthwhile that makes them heroes or martyrs. Nor are resistance fighters who initiate uprisings against an occupying force in full knowledge that their attempt will be crushed at a heavy price. The 1943 Warsaw Ghetto uprising comes to mind as an instance par excellence of this phenomenon. Closer to home, yet relatedly: "North Vietnam and the United States had very different concepts of what was at stake in the conflict. The former fought to liberate their homeland; the latter merely sought an opportunity to demonstrate its resolve to defend more important regions of the world" (Desch, 2019, p. 174).

The universalization fallacy

The "egocentric" universalization of preferences that engenders misguided conceptions of rationality and irrationality relates to a fundamental challenge of psychological science: to determine and clearly demarcate what is universal and invariant about human nature and what is malleable and protean. Clearly, some of the invariances scholars have assumed and used as bases for policy recommendations in the security domain were invalid. The assumptions that people generally value survival or economic well-being above all else, and that one can therefore coerce people into anything or monetize everything, were patently wrong. The former, as already noted, gave rise to the notion that political conflicts (insurgencies, terrorism) can be quashed by military force alone (the notion underlying the concept of coercive bargaining); the latter, to the notion that nations will inevitably succumb to economic sanctions, an assumption that has been repeatedly refuted by empirical facts (see Kruglanski & Gelfand, 2017).

Recent decades of psychological research demonstrate that people's motivations (their goals and means) fluctuate considerably across situations and are also shaped by individuals' personalities and the cultures of which they are members. So it is not the case that people in general want this or that: that people in general are more averse to losses than attracted to gains (see Higgins et al., 2011), that people in general prefer heuristics over statistics (see Kruglanski & Gigerenzer, 2011), that people in general adhere to their cultural norms (see Gelfand et al., 2011), or that people in general crave certainty and are averse to uncertainty (e.g., Kruglanski, 2004). All these tendencies are highly variable rather than general. Ignoring this is likely to backfire and to inform misguided policy recommendations.

Significance quest theory and the 3Ns of radicalization

Our work, supported by DoD's Minerva Initiative, has addressed the distinction between the universals of human nature and their diverse manifestations in the realm of violent extremism and radicalization. We started with the universalist assumption that all humans share the same set of basic needs, some biological (nutrition, hydration, rest) and others psychological (the needs to belong, to have knowledge, to find meaning in life, etc.; see, e.g., Maslow, 1943; Fiske, 2010; Higgins, 2012). We focused on one such fundamental need, the need for significance and dignity, which has particularly broad implications for social action (Kruglanski et al., 2021). Our work demonstrated that this need underlies violent extremism and constitutes the common denominator shared by a variety of previously identified motives for extremism, such as the perks of paradise, identification with the leader, and vengeance (see, e.g., Bloom, 2004; Gambetta, 2005; Kruglanski, Gelfand et al., 2014; Kruglanski, Belanger & Gunaratna, 2019).

Motivation science of recent decades also shows that for a need (any need) to foment behavior, it must be activated or aroused. The need for significance can be activated either by (1) a significance loss, or the likelihood of such loss, through threatened or experienced humiliation, disempowerment, discrimination, bullying, and the like, or by (2) a perceived opportunity for significance gain, by becoming a hero, a martyr, or a superstar revered by all.

In and of itself, however, need activation is insufficient for understanding how the need will affect behavior. The missing element is the culturally accepted means for need fulfillment, which can vary widely across cultures and circumstances. In the case of the need for significance, the means of satisfaction is contained in the cultural narrative that links given patterns of behavior to significance. Because significance pertains to one's perceived social worth, it is attained through the enactment of worthy (i.e., value-serving or value-representing) behaviors (Atran, 2010, 2021). In the case of violent extremism, such behavior involves aggression directed at the alleged enemy seen to imperil an important value, e.g., the safety or welfare of one's cherished group or institution (one's nation, one's religion) or another "sacred" value such as freedom, democracy, homeland, or honor (Gelfand, Severance, Lee et al., 2015; Nowak et al., 2016).

The notion of the cultural narrative entails the importance of its being supported by one's social network: one's ingroup of trusted persons, esteemed leaders, family, and friends. The network validates the narrative and turns it into a shared reality (Higgins, 2019); furthermore, it dispenses rewards to individuals who realize the narrative's dictates through their value-serving actions. The triad of Need, Narrative, and Network thus constitutes the 3N framework for understanding radicalization into violent extremism.

By now, the 3N model has been supported by ample empirical research, both quantitative and qualitative. Among other projects, it has guided our field research on the deradicalization of the Liberation Tigers of Tamil Eelam in Sri Lanka (Webber, Chernikova, Kruglanski, Gelfand et al., 2018), on the radicalization of Islamist extremists in a Philippine prison (Kruglanski, Gelfand et al., 2016), and on the journey of German neo-Nazis to the fringe and back into the mainstream (Kruglanski, Webber, & Koehler, 2019). While the contexts we explored had vastly different cultures, we applied the 3N model locally, with the help of collaborators, to study the emic manifestations of this general framework.

The 3N framework represents an attempt to build on prior insights and go beyond them. Whereas prior scholars emphasized either the need (motivational) aspect of radicalization (e.g., Speckhard & Akhmedova, 2005), the narrative (ideological) aspect (e.g., Atran's (2021) work on sacred values), or the social network aspect (Sageman, 2004, 2008), the 3N model integrates these essential elements and depicts their functional interdependence.

The plight of immigrants and refugees

Radicalization that progresses into violent extremism requires the conjunction of all of the 3Ns; according to the theory, the mere activation of the need for significance would not produce such an effect in the absence of a radicalizing narrative and a supportive network. We recently studied from this perspective the plight of Muslim immigrants in the U.S. (Lyons-Padilla et al., 2015) and of Syrian refugees in the Middle East and in Europe, in order to estimate their potential for radicalization (cf. Jasko et al., 2021; Webber et al., 2021). Our findings attest that immigrant and refugee status indeed entails a substantial loss of significance relative to individuals' prior social standing in their countries of origin. Yet our data suggest that this alone does not promote radicalization. Those who felt particularly insignificant and humiliated in the host country were more likely to desire either to leave it and settle elsewhere or to return to their country of origin, but we found no evidence of radicalization or of support for violence. So, in the absence of a radicalizing narrative supported by a committed network, significance loss as such does not produce radicalization. Nonetheless, the loss of significance that refugee status often induces, compounded by discrimination on the part of the locals, creates a susceptibility to radicalizing narratives and networks. Such vulnerability could well be exploited by extremists for recruitment purposes.

Our Minerva-supported work is a contribution to basic science, but how can it inform policy? In the following, penultimate, section of our essay, we consider the conditions that must be met for this to happen.

Making Social Science Useful

In essence, we identify two such general conditions. The first is that the policy requirement be clearly spelled out. The second is that the general theories and principles that basic research has generated and supported be properly translated into the context in which the policy requirements are meant to apply. We consider these in turn.

Knowing the policy concerns

Often the funding of basic research is made conditional on the investigator describing the "broader impact" of the anticipated findings, or their potential policy implications. This requires the scientist to speculate about possible ways in which the findings could apply to imagined policy issues. Whereas such speculations can be suggestive and interesting in and of themselves, they can also be completely off track in relation to the actual problems a given policy community is grappling with. After all, the basic researcher doesn't necessarily have access to what is on the policymaker's mind without being explicitly told, that is, without being informed where, and in what specific way, their help is needed. To wit, the so-called "golden age" of social science impact on policy during the Cold War era occurred when major social scientists were in close contact with policymakers. They had the access and the audience that allowed them to see clearly what policies were at stake and how they could use their science to shape them effectively.

Translating basic science

Typically, the findings of basic research are de-contextualized, or insufficiently contextualized, to be useful to a policymaker whose concerns may address a particular geopolitical situation unfolding in a specific cultural environment at a specific time and place. It is unlikely that a basic researcher alone could effectively translate their findings into specific policies without input from experts (e.g., area experts) who are well-versed in the specifics of the situation and can make valid judgments about available options. To twist what they say about politics, the application of basic science is "the art of the possible," and only people familiar with a given region of intended application can define what is in fact "possible" under the circumstances. On the other hand, such contextual experts may often be ill-equipped to understand the implications of the basic science, and so unable to apply them effectively to the situation at hand. All of this implies the need for problem-focused teams involving policymakers (who can articulate their specific policy concerns), contextual experts (familiar with the nitty-gritty of a specific region of application), and basic scientists (familiar with the cutting-edge science relevant to the policy in question). It also suggests that we need to build a culture of collaboration between academics and policymakers that involves frequent and rewarding interaction.

Conclusion

The foremost necessary condition for the effective application of basic science to policy in the security domain is the use of the right basic science. By this we mean cutting-edge knowledge framed at the right level of analysis. Science, after all, is a dynamic enterprise that continually develops, driven as it is by novel findings, theories, and models. It is therefore important that, to the extent possible, the application of science to policy reflect the current scientific consensus rather than rest on notions that prior work has already invalidated and discarded.

Relatedly, the correct level of (basic) scientific analysis is crucial to successful policy recommendations. Because issues in the security domain primarily involve human behavior, the psychological level of analysis is, in our opinion, critical to informing effective policies in nearly all security matters, including the conditions for war and peace, extremism, effective messaging, negotiations, and public diplomacy. Any proposed macro-level policies should thus be carefully evaluated in terms of their likely psychological impact on the security-relevant behaviors of individuals and groups.

Having the right basic science, though necessary, is insufficient for fashioning effective policy in a given region of application. Also needed to that end is a close collaboration of basic scientists with relevant policymakers, who can clearly articulate their particular concerns, and with contextual experts in the pertinent domains, who can identify the constraints that a given policy recommendation must accommodate. Problem-focused teams that bring together policymakers, basic scientists, and context experts are needed to jointly forge the best, most informed interventions possible in the given circumstances.

About the Authors

Michele Gelfand is the John H. Scully Professor in Cross-Cultural Management and Professor of Organizational Behavior at Stanford Graduate School of Business. She is the author of Rule Makers, Rule Breakers: How Tight and Loose Cultures Wire the World (Scribner, 2018) and co-editor of Values, Political Action, and Change in the Middle East and the Arab Spring (Oxford University Press, 2017), The Handbook of Conflict and Conflict Management (Taylor & Francis, 2013), and The Handbook of Negotiation and Culture (Stanford University Press, 2004).

Arie W. Kruglanski is a Distinguished University Professor, a recipient of numerous awards, and a Fellow of the American Psychological Association and the American Psychological Society. His work in the domains of human judgment and belief formation, the motivation-cognition interface, group and intergroup processes, and the psychology of human goals has been disseminated in over 300 articles, chapters, and books, and has been continuously supported by grants from the National Science Foundation, NIMH, the Deutsche Forschungsgemeinschaft, the Ford Foundation, and the Israeli Academy of Science. As a founding Co-PI and Co-Director of START (the National Consortium for the Study of Terrorism and Responses to Terrorism), Kruglanski also conducts research, supported by grants from the Department of Homeland Security and the Department of Defense, on the psychological processes behind radicalization, deradicalization, and terrorism.

Notes

1. Desch, 2019.
2. Kruglanski was START's founding co-PI and co-director.
3. Desch, 2019.
4. E.g., Desch, 2019; Lyons, 1969.
5. Desch, 2019, p. 72.
6. E.g., Roos et al., 2015.
7. E.g., Popper, 1959.
8. Desch, 2019, p. 249.
9. Fanon, 1961.
10. Popper, 1962.
11. Gelfand discussed this clash at the Carnegie Relevance Workshop "The Ivory Tower, the Beltway, and the Fourth Estate," sponsored by the DoD in January 2014.
12. Kruglanski & Fishman, 2006.
13. Desch, 2019, p. 174.
14. Gelfand et al., 2011.
15. Nowak et al., 2015.
16. Atran, e.g., 2010, 2021.
