Viva Behavioral Science
A philosopher has questioned the usefulness of behavioral research. He is wrong.
Posted Jun 01, 2012
Gary Gutting, a professor of philosophy at the University of Notre Dame, questioned the value of the social sciences in a piece on the Opinionator blog of the New York Times ("How Reliable Are the Social Sciences?"). He has been criticized, and rightly so, for his broadside attack on entire disciplines. See, for example, Jamil Zaki's nice rebuttal.
The unfortunate thing about Gutting's piece is that he actually made some reasonable points about how to study human behavior. But he doesn't seem to recognize that a great deal of research is already doing exactly what he suggests.
What Gutting really objects to, it turns out, is the failure to use the experimental method to study people. The reason that much social science research fails to produce precise predictions, he argues, is because “such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved.”
He is sadly mistaken on this last point, having missed entire disciplines (such as mine, social psychology) that use the experimental method to study human behavior. Nor does he mention the vast knowledge that has accrued through experimentation, including novel discoveries that have reduced human suffering. Just a few examples:
• Recent experimental work in schools shows that simple social psychological interventions can reduce the achievement gap by 40 percent.
• Getting high school students to do community service reduces teenage pregnancies and improves academic performance.
• A simple psychological intervention has been found to dramatically reduce child abuse.
Each of these findings is based on research that used the experimental method, with random assignment to the "treatment" or control conditions. I discuss them, and several other examples, in Redirect: The Surprising New Science of Psychological Change.
One point on which I wholeheartedly agree with Gutting is that we need to "find ways of injecting more experimental data into government decisions." As he notes, social and educational policies have often been based on the flimsiest of evidence. But this is not due to a grand failure of the social sciences; rather, it reflects a failure by policy makers (and yes, some social scientists) to appreciate the value of a good experiment.
But this is changing, as evidenced by the use of the experimental method to debunk some popular programs: Critical Incident Stress Debriefing, an intervention used to prevent post-traumatic stress disorder in people who have witnessed horrific events; the D.A.R.E. anti-drug program; and Scared Straight programs designed to deter at-risk teens from criminal behavior. All three of these interventions have been shown, in solid experimental studies, to be ineffective or, in some cases, to increase the very behaviors they are trying to prevent. As a result, these programs have become less popular or have changed their methods.
The same is true of educational programs. Gutting is right that, too often, they have not been tested rigorously. He is wrong that they can't be tested with the experimental method. They can be, and increasingly, they are. See, for example, an experimental test of a teacher training program that successfully improved teacher quality and student performance, which Science Magazine, the premier journal in all of the sciences (hard or soft), saw fit to publish.
Clearly, Gutting is not familiar with vast areas of psychological and educational research that do precisely what he suggests. Too bad he didn’t read more widely in the disciplines he dismissed.