The Science of Ending Conflict
War has long been the subject of history, philosophy, and poetry. Now, science may be revealing the hard truth about why men fight—and what could make them stop.
By David Berreby. Published June 30, 2015; last reviewed on June 9, 2016.
In a sparsely appointed trailer in northern Iraq, close to the sandbagged front line where Kurds faced the advancing forces of the Islamic State, fighters sat on the floor last spring and talked to Lydia Wilson about war. “Here,” one would say, pointing to his neck, “is where I was wounded—and here, and here.” Another trailed off from his own story to tell her about the wars in which his father and grandfather had fought in defense of their ethnic identity. Others praised their French allies’ efficiency in carrying out air strikes—the Americans, they said, took too long to arrive and flew away too soon. Some wondered out loud whether the coming night would bring suicide attackers driving trucks laden with explosives toward their position. Daytime offered quiet and some respite in the trailer, but by nightfall, they knew, ISIS would be back.
Wilson, an ebullient English research fellow at the Centre for the Resolution of Intractable Conflict at Oxford, sat beside the men asking questions, listening intently, and scribbling in her notebook. At certain points she focused her attention on two men in particular. They had been led into the trailer in handcuffs and with their eyes down and at first had little to say. These men, the Kurds told her, had been working undercover for ISIS, planting car bombs and plotting assassinations. They had already been tried in Mosul and would soon be executed. For Wilson, the opportunity to talk with them could offer valuable information for her study of what motivates young men to kill or die in war.
These conversations were a notable departure from what she had intended to do when she moved to the Middle East 10 years earlier. Wilson had arrived in Damascus planning to learn Arabic as part of her doctoral research in medieval Arabic philosophy, but quickly discovered that when living there, “you can’t help but get absorbed in the politics.” At a conference, she happened to meet anthropologist Scott Atran, who for 20 years has studied people who participate in violent action on behalf of a group or a cause. When Atran offered Wilson an opportunity to collaborate with him, she jumped at the chance.
Since then, she has worked in several conflict zones in the region, and among the things she has learned is to adapt her research technique to local mores. For instance, while psychologists often ask subjects to rate their feelings on a scale of 1 to 10, such blunt quantification is “an unheard of way of answering a question” in the Middle East. “They look at you and say, ‘Let me tell you a story.’ So we do not ask with scales anymore. We’ll use images of fighters or we’ll ask them a question, and when they say yes or no, we’ll say, ‘Yes very, very much? Or yes only a bit?’ Or, ‘Are you not quite sure—right in the middle?’”
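For readers curious how such verbal answers become data, here is a minimal sketch, in Python, of how responses like “yes, very much” or “right in the middle” might be coded onto a numeric scale. The category labels and numeric values are assumptions made for illustration, not the researchers’ actual coding scheme.

```python
# Illustrative only: one way verbal answers such as "yes, very much" might be
# coded onto a numeric scale for analysis. The labels and values below are
# assumptions for this sketch, not the coding scheme the researchers use.

VERBAL_TO_SCORE = {
    "no, not at all": 0,
    "no, only a bit": 1,                # hypothetical intermediate category
    "not quite sure / in the middle": 2,
    "yes, only a bit": 3,
    "yes, very much": 4,
}

def code_response(answer: str) -> int:
    """Map a verbal answer to its numeric code, raising on unknown answers."""
    key = answer.strip().lower()
    if key not in VERBAL_TO_SCORE:
        raise ValueError(f"Unrecognized answer: {answer!r}")
    return VERBAL_TO_SCORE[key]

if __name__ == "__main__":
    print(code_response("Yes, very much"))  # prints 4
```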
With the captured ISIS fighters, she plunged into her experiment, with its seemingly odd questions and images on flash cards. From their fellow believers, the men were used to hearing that establishing the Islamic State’s caliphate was God’s work. From their enemies, they were used to hearing the opposite—that ISIS was bad. Wilson managed to engage them by showing that she was not expecting either of these rote answers. “We’re not coming in and saying, ‘What’s it like to live under ISIS?’ or any of the questions they’re very used to. We’re asking something that makes them say ‘What?’ It gets them out of whatever prepared answers they had.” Measuring their true feelings about the relative strength of different groups in the region was the first step in the experiment. After that, other tests would measure the extent to which the fighters identified with groups and their values. Despite the alien nature of Wilson’s line of inquiry, she found that it seized the men’s attention and they freely offered answers.
For millennia, philosophers and poets, historians and political economists have offered explanations for why men fight, with theories primarily based on rhetoric, ideology, or emotion. Yet Wilson and Atran are part of a growing number of researchers who are bringing the tools of science to bear on the study of conflict. That may sound unremarkable, but it’s actually a revolutionary approach to understanding the ancient scourges of war, genocide, and other manifestations of intergroup hatred and violence. Employing systematic research methods, these pioneering scholars are examining the pathology of war in much the same manner that biologists examine the pathology of disease. They hope to do nothing less than decipher the origins of conflict—and ultimately find new ways to stop it.
The Banality Of Evil
One benefit of applying scientific methods to the question of why we fight is that it can clear away misconceptions—the things that “everyone knows” about conflict but that have rarely been tested and, when they are, often prove to be false. One of the most common misconceptions, beloved of partisans in all conflicts as well as many who support combatants from a distance, is that people on the “other side” are abnormal—deluded, cruel, perhaps even insane. In a recent interview, for example, John Brennan, the director of the CIA, said of ISIS: “They are terrorists; they’re criminals. Most of them are psychopathic thugs—murderers who use a religious concept and mask themselves in that religious construct.”
But researchers have established that the image of the “crazy enemy” simply isn’t true. Rather, it’s a reflection of outgroup bias, one of the oldest and most robust findings from the annals of social psychology. It shows that we prefer people we perceive as members of our group—however loosely that may be defined—and are biased against those who are not. When groups fight, the natural bias against an outgroup is further exacerbated by fear and resentment.
In contrast to how we perceive them, the majority of terrorists, insurgents, and perpetrators of mass killing who have been tested by scientists have proven to be basically like the rest of us. That’s not to say that participating in a genocide or blowing oneself up in a crowded market is normal, but such behavior is not evidence of personality disorder or other serious psychopathology; rather, it’s an adaptive state of mind that mentally healthy people are entirely capable of adopting. In the case of Islamist terrorist groups, Atran says, “most foreign volunteers and supporters fall within the mid-ranges of what social scientists call the normal distribution of attributes like empathy, compassion, idealism, and wanting to help rather than hurt other people.” It’s proof of what the mid-20th-century political theorist Hannah Arendt famously called “the banality of evil” in her consideration of the seeming ordinariness of Nazis who committed atrocities in World War II.
“Ours is a ‘banality of evil’ approach,” says Hammad Sheikh, a psychologist at the New School for Social Research and collaborator with Atran and Wilson. Sheikh’s personal interest in the psychological origins of group violence began when he was growing up in Germany. “I could never believe that the Nazis were these evil people who had taken over. Millions of ordinary people had followed Hitler, and I met them. They had been fanatics. But in my childhood, they were nice old people shaking my hand and giving me chocolate.”
Not only are perpetrators of conflict not the cold-blooded psychopaths they’re often assumed to be; they may actually be distinguished by an unusually high degree of compassion. In his studies of the neural mechanisms of prejudice and empathy, Emile Bruneau, a cognitive neuroscientist at MIT, has found that some terrorists scored higher than average on measures of empathy. Their intense empathy is limited, however, to members of their own group. “The problem is not that they lack empathy,” Bruneau says. “They have plenty. It’s just not distributed evenly.”
Why We Fight
If terrorists and genocidal murderers are not insane or intrinsically wicked, it doesn’t necessarily follow that they are coolly rational either. Such is the equal and opposite error often made in thinking about mass violence. Leaders of modern states frequently assume that their opponents are out to maximize their largely material rewards and minimize their pain. They are thought to respond to incentives (“We’ll give you food and other aid”) and avoid disincentives (“We’ll bomb you”). But Atran, who has talked to far more terrorists and likely received far more death threats than any other social scientist, has found that this kind of horse-trading is usually anathema to people in conflict zones.
In fact, it’s anathema to most of us. That is because people of all cultures hold “sacred values”—things that are too cherished to be compromised. For example, you might relinquish a weekend day to work for money. But if your religion prohibits working on the Sabbath, no amount of money can compel you to do so. Anything—a nation, a religious landmark, a legal status—can be construed as sacred, at which point defending it is perceived as a matter of right and wrong, not of costs and benefits.
Negotiating transactionally with people who are motivated by moral imperatives is bound only to infuriate them. As Jeremy Ginges, a psychologist at the New School for Social Research, wrote in a paper published last year, “Regardless of the specific issue (whether it concerns the right to make salt or to protect an old growth rain forest, a ‘holy’ city, or a national boundary), all sacred values appear to be defined by a taboo against material trade-offs.”
Although the human distinction between values and costs is universal, what we assign to each category varies widely. Something that’s venerated on one side of a dispute may be meaningless to the other, leaving plenty of room for misunderstandings or good-faith offers that can make conflict worse, not better. Ominously, a survey of some 1,400 Iranians conducted a few years ago by Atran and his colleagues found that 14 percent of them saw the maintenance of their country’s nuclear program as sacred.
Worse yet, two sides can regard the same thing as hallowed, setting the stage for a serious impasse. Such is the case in a number of confrontations around the world that seem fundamentally intractable: India and Pakistan fighting over Kashmir, Russia and Western allies over Ukraine, and, most obviously, Israel and the Palestinians over their disputed land. A survey by Sheikh, Ginges, and Atran in 2013 found that 86 percent of Palestinians consider “protecting Palestinian rights over Jerusalem” as a value ranked just slightly less than “protecting the family” and equal to “fairness to others.” The “right of return”—the demand of Palestinians to be able to return to the ancestral homeland from which their families fled during Israel’s establishment in 1948—was held sacred by 78 percent.
These findings may sound like grounds for despair, but the researchers argue that acknowledgment of an adversary’s sacred values—even if they conflict with one’s own—can make negotiations more successful. This is not just because it allows negotiators to avoid the error of offering to horse-trade over an issue that’s impervious to negotiation. It’s because people often respond well to having their sacred values acknowledged, even if that recognition comes in the form of a gesture that makes no practical difference. As Atran and the political scientist Robert Axelrod wrote several years ago, by making “symbolic concessions of no apparent material benefit”—for example, an apology for a past wrong or an acknowledgment of the other side’s legitimate right to its position—negotiators “might open the way to resolving seemingly irresolvable conflicts.” In some cases, an apology means more than a very large pile of money.
A thorough understanding of the nature of deeply held values could also eliminate time-wasting and posturing. After all, every side in a political conflict claims that it’s fighting for something fundamental, be it the right to eat whale meat in Japan or to walk around naked in San Francisco. Accurate surveys of people’s attitudes could separate a true values clash from a situation where leaders are just lobbing rhetoric.
What’s more, measuring the degree to which a value is perceived as essentially holy can shed light on behavior that has traditionally been considered impossible to predict or quantify. That was Wilson’s objective in her recent survey in Iraq, which was aimed at testing a method that could predict “willingness to fight”—the spirit of self-sacrifice and ruthless pursuit of victory that, for example, makes ISIS fighters so determined. United States officials believe that this quality is, as President Obama described it last year, “imponderable.”
Atran, Wilson, and Sheikh have argued that willingness to fight is actually quite possible to ponder, and even predict, if two things are known: the extent to which an individual feels his personal identity is fused with a collective identity, and the extent to which he thinks the fight is in defense of sacred values.
With a combination of images and questions, Wilson asked the men she met in the trailer how intensely they identified with various values and group labels (“democracy,” “Kurdishness,” “Iraq,” “Islam”). Later, she asked them to rate how much they would sacrifice for the values and identities that meant the most to them. Would they be willing to die? To kill someone? To kill a child? Crunched as data, the answers to these questions may reveal something critical and measurable about a person’s willingness to fight in a war.
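To make the idea concrete, here is a small illustrative sketch in Python of the two quantities being measured, identity fusion and the sacredness of a value, along with one hypothetical way they might be combined into a single index. The combination rule and the numbers are assumptions made for this sketch, not the researchers’ actual model.

```python
# Toy illustration of the two ingredients the researchers measure: how fused a
# person feels with a group, and how sacred they hold the values at stake.
# The combination rule below (a simple product) is an assumption made for this
# sketch; the actual statistical models are not described in the article.

from dataclasses import dataclass

@dataclass
class FighterResponse:
    group: str                # e.g. "Kurdishness", "Islam"
    fusion: float             # 0.0 (no identification) to 1.0 (fully fused)
    value_sacredness: float   # 0.0 (negotiable) to 1.0 (absolutely sacred)

def willingness_score(r: FighterResponse) -> float:
    """Hypothetical index: high only when fusion AND sacredness are both high."""
    return r.fusion * r.value_sacredness

# Invented example values, for illustration only.
responses = [
    FighterResponse("Kurdishness", fusion=0.9, value_sacredness=0.95),
    FighterResponse("democracy", fusion=0.4, value_sacredness=0.3),
]
for r in responses:
    print(f"{r.group}: {willingness_score(r):.2f}")
```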
Cure For Conflict?
Understanding how people become mass murderers, terrorists, and exploiters, or even how they come to support barbarousness from the sidelines, is only half the challenge. The other half, of course, is understanding what gets people out of those ranks. Here, too, the problem is not that we lack for theories, but that we have too many explanations on offer, few of which have been tested.
Several years ago, political scientist Donald P. Green of Yale and psychologist Elizabeth Levy Paluck, now at Princeton, looked at more than one thousand studies aimed at reducing conflict and concluded that almost none passed scientific muster. “A small fraction speak convincingly to the questions of whether, why, and under what conditions a given type of intervention works,” they wrote, concluding that “the causal effects of many widespread prejudice-reduction interventions, such as workplace diversity training and media campaigns, remain unknown.”
Paluck set out to change this. In Rwanda and Eastern Congo, she tested antiviolence campaigns similar to the way that pharmaceutical makers test new medications—with randomized controlled trials. In the case of a drug, its effect is tested on one group of randomly chosen patients and compared to the effect of something else—a placebo, an alternate medication, or nothing—on another similarly random group. If patients who took the experimental drug live twice as long as those who didn’t, researchers can reasonably credit the drug. Of course, the element of “control” is more complex when a treatment isn’t a pill but a psychological tactic, and the participants aren’t patients in a monitored clinic but people scattered across hundreds of miles of countryside. Yet Paluck has conducted trials for conflict intervention strategies that convincingly demonstrate a cause and effect.
Consider her work in Rwanda, where in 1994 members of the country’s Hutu majority slaughtered hundreds of thousands of their Tutsi neighbors, along with Hutus who were deemed too moderate. The Hutu militants driving the genocide used radio—which all Rwandans listen to—to broadcast the message that being a responsible Hutu included a duty to kill Tutsi. The broadcasts didn’t just send out detailed instructions about when, where, and whom to kill—they normalized slaughter. Part of this effect came about because Rwandans habitually listen to the radio in groups, so any message from a broadcast is echoed and reinforced by people agreeing with it. “That’s a setting where we instantly see what our peers think about norms,” Paluck says.
A widely held misconception about mass violence is that it arises because people’s beliefs change—in this case, the conventional wisdom held that the radio broadcasts caused the Hutu to change their minds about political issues. Paluck, however, hypothesized that altered beliefs weren’t really the force that pushed people into participating in genocide. What the broadcasts quickly accomplished, she guessed, was a shift in norms as listeners took in the notion that being a good Hutu required killing Tutsi and their supposed allies.
Instead of leaving her theory on the page, Paluck put it through a real-world trial. If radio had such fast-acting power to encourage genocide, she thought, it would follow that radio could promote humane values like tolerance and reconciliation as well. In essence, she says, “we thought we would use radio to reverse-engineer that process.”
In Rwanda, it turned out, she had an experimental “treatment” already available. In the aftermath of the genocide, a group of writers and broadcasters created a radio soap opera called “Musekeweya,” or “New Dawn,” about two communities struggling to live together. Though the labels “Hutu” and “Tutsi” could not be used because of a Rwandan law that bars depictions of ethnic conflict, the two groups were instantly recognizable to any listener. On the show they compete and fight, as well as talk to each other across boundaries and speak up against leaders who demand violence.
To test her theory that listening to the show changed people, Paluck loaned radios to 12 villages around Rwanda, randomly assigning each community to listen to one of two programs, either “Musekeweya” or a “placebo” radio show. After a year, the experimenters returned to each village with a gift—a portable stereo radio for them to keep, which confronted them with a choice: How would they share the new radio?
The difference between the two types of villages was striking. Those who had heard the placebo broadcast did what Rwandans traditionally do: “They said, ‘Let’s give it to our leader and he’ll take care of it,’” Paluck recalls. But in the villages that had received the “treatment”—the “Musekeweya” program—there was open debate. “A hand would go up and someone would say, ‘But our leader doesn’t listen to the radio,’ or ‘Why not give it to a woman?’ ” The experiment showed that the program indeed interfered with their characteristic obedience to authority, which had in part triggered the 1994 genocide.
Bruneau, at MIT, has taken a different tack in testing the efficacy of conflict interventions by focusing on the processes in the brain that underpin bias and empathy and seeing how they respond to prompts that involve perceiving the pain of others. “If we can get a neural signature that corresponds with these processes, then we get a measure we can’t get otherwise, one that allows us to assess the effect of interventions that are aimed at decreasing bias,” he says.
In the past he has taken brain scans of Palestinians and Israeli Jews as well as Mexican immigrants and white Americans in Arizona; he’s currently looking at the neural patterns of Hungarians as part of an effort to combat their widespread prejudice against ethnic Roma people. By establishing a baseline measure of how empathy operates in the brain, he says, neuroscience can provide a means of testing whether a strategy to elicit empathy is working—and it might also show that a strategy that works for some people doesn’t work for others. Knowing how empathy and indifference work in the brains of different people, he believes, might make it possible to tailor interventions to individuals based on their unique bias imprints. “In science, we often discard things because there is no measurable effect on the population,” Bruneau says. “But if it works for 10 percent of the population, then it is worth keeping for those people.”
Finding what works for just 10 percent of a population may indeed be a best-case scenario. Neither Bruneau, Paluck, nor any other scientist researching conflict thinks anything close to a panacea exists. Paluck’s own work offers a cautionary tale about assuming that a treatment’s success can be replicated in different settings. In Eastern Congo, where an estimated 5.4 million people have been killed since 1998, she tried an experiment similar to the one that worked in Rwanda and found it had an opposite effect, making people more angry and xenophobic instead of less. The reason for the outcome, she speculates, is that “it’s very different to generate discussion within an ongoing conflict than it is in a postconflict situation.” The larger lesson she gathers is that searching for broad ways to manipulate human behavior will get a researcher only so far. “Social scientists think like theoreticians in terms of the universal,” she says. “But psychologists who want to address real-world questions have to be more like engineers.”
The possibility of engineering people away from their natural prejudices and impulses sounds like the plot of a science-fiction story. It’s exhilarating to imagine a scenario where the causes of a suicidal willingness to fight could be identified and eliminated, where propaganda promoting group violence could be instantly negated by a well-tested antidote, and where psychological profiles help tailor a perfect anticonflict message to each person’s distinct biases. We’re a long way from there, and no researcher is operating under the fantasy of discovering a magic bullet. But addressing these possibilities with scientific inquiry so far appears to be a push in the direction of a more humane future.