Is Science Showing Diminishing Returns?
Are there too many social scientists?
Posted April 4, 2018
Science is supposed to be self-correcting. Yet in recent years in some areas – biomedicine and social science, especially – the process seems to be failing. Many published studies turn out to rely on flawed methods or even fraud. One cause is the bad incentives under which most scientists operate. But a deeper problem, not easily cured, is that science, like every other human activity, may be subject to diminishing returns.
A major failure is the so-called replication crisis: researchers in social and biomedical science cannot reliably repeat an experiment and get the same result. Since replicability is the criterion for truth in experimental science, failure to replicate is a serious problem. In 2016 the prestigious international science journal Nature published a survey which showed that “More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.” In 2011 the Wall Street Journal described how the pharmaceutical company Bayer attempted to replicate a number of drug studies and failed nearly two-thirds of the time. The situation may be even worse than these results suggest, because in social science, especially, replication is rarely attempted. It follows that many published conclusions about diet, drugs, bias, prejudice, and the right way to teach are false.
False findings inevitably become the basis for flawed practice and the pursuit of scientific dead ends. Researcher A learns from the literature that X is true. He infers that if X is true, then Y must follow. He tests Y (usually inadequately) and finds it to be true. Rinse and repeat with Researcher B, who builds on finding Y... If X is in fact false, this trail leads nowhere. Flawed research is not something that can be ignored: it has a real and potentially growing cost.
How do false findings get published? A couple of examples may help. Professor Brian Wansink is head of the Food and Brand Lab at Cornell University. The Lab has had a number of problems; several published papers have had to be retracted. One of the Lab’s more trivial problems is this (from The Chronicle of Higher Education):
Wansink and his fellow researchers had spent a month gathering information about the feelings and behavior of diners at an Italian buffet restaurant. Unfortunately, their results didn’t support the original hypothesis. “This cost us a lot of time and our own money to collect,” Wansink recalled telling the graduate student. “There’s got to be something here we can salvage.”
Four publications emerged from the ‘salvaged’ buffet study.
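The statistical hazard of ‘salvaging’ is easy to demonstrate. The sketch below is my own illustration, not the Lab’s actual analysis: it simulates a dataset in which nothing real is going on, then tests twenty different hypotheses against it. With the conventional 5% significance threshold, at least one spurious ‘finding’ turns up most of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_hypotheses, alpha = 10_000, 20, 0.05

salvaged = 0
for _ in range(n_sims):
    # Two groups of 'diners' drawn from the SAME distribution:
    # any difference between them is pure chance.
    group_a = rng.normal(size=(n_hypotheses, 30))  # 20 measures, 30 diners each
    group_b = rng.normal(size=(n_hypotheses, 30))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue
    if (p_values < alpha).any():  # anything 'significant' to publish?
        salvaged += 1

print(f"At least one spurious finding in {salvaged / n_sims:.0%} of null datasets")
# Roughly 1 - 0.95**20, about 64%, of datasets with no real effect
# still yield something 'publishable'.
```

Test enough hypotheses against one dataset, in other words, and a ‘significant’ result is nearly guaranteed, whether or not anything is there.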
The root problem, the probable source of all of Wansink’s other troubles, may be the drive to produce publications. By this measure, his research group is exceedingly successful: 178 peer-reviewed journal articles, 10 books, and 44 book chapters in 2014 alone.
The drive to publish is not restricted to Professor Wansink. It is universal in academic science, especially among young researchers seeking promotion and research grants. One way to inflate a publication list is to add authors: multi-author papers have become much more common in recent years. Another is to publish as soon as you have any ‘significant’ result. The LPU (“least publishable unit”), a perennial joke among researchers, is that elusive, irreducible quantum of results that will suffice for a publication. A new industry of ‘pop-up’ journals has arisen to meet this need to publish.
Here’s another example, from a recent science blog. The issue was the so-called significance level a researcher should use as a criterion for the truth of his result. If the probability that the result would occur by chance alone is less than X%, he may accept it as true: 5% is the conventional value for X. The expert’s (correct) answer was as follows: “There is no authoritative reference for using 0.05 as significance level. Au contraire… the level of significance has to be chosen based on the whole context…” The 5% standard is far too generous, as it turns out.
But more revealing than the answer is the question – from an unembarrassed gentleman at the University of Oslo: “How can I justify the use of significance at the 10%?” In other words, this guy is interested not in the truth of his result, but in what it would take to get it published. It is hard to imagine a clearer demonstration of the decline of scientific method.
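A quick simulation makes clear what is at stake in that question. In the sketch below (again my own illustration), every ‘experiment’ compares two samples drawn from the same population, so every significant result is a false positive. Loosening the threshold from 5% to 10% simply doubles the supply of publishable illusions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 20_000

# Every experiment below tests a true null hypothesis: both groups
# come from the same population, so 'significant' means 'false positive'.
a = rng.normal(size=(n_experiments, 25))
b = rng.normal(size=(n_experiments, 25))
p = stats.ttest_ind(a, b, axis=1).pvalue

for alpha in (0.05, 0.10):
    print(f"alpha = {alpha:.2f}: "
          f"{(p < alpha).mean():.1%} of null results look 'significant'")
# By construction the false-positive rate tracks alpha:
# about 5% at the 0.05 level, about 10% at the 0.10 level.
```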
Too few good questions, too many scientists?
Why this drive to publish? Most researchers now are salaried employees. They need publications because that is how they are evaluated. The problem is that at any time the number of scientific openings, of fruitful questions – questions that lead to new insights, not dead ends – is limited. It may not have kept pace with demand. There may be too few good questions for the number of scientists seeking them. What, then, determines the number of scientists?
In 1945 Vannevar Bush, engineer and public intellectual, wrote an influential report that led to the creation of the National Science Foundation. In Science, the Endless Frontier, Bush declared: “Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” [my emphasis] Bush believed that the field of science is essentially infinite, that the opportunities to make new discoveries are unlimited. In short: the more scientists the better!
But is that true? Bush’s ambitious claim has come under attack recently, partly because of the replication crisis and the other problems with the research product just described. Attempts are being made to remedy these problems, but their source may lie outside our control.
Bad incentives are part of the problem, but the poor incentive structure of modern science may be an effect rather than the root cause. The real cause may be the nature of science itself. Vannevar Bush promised scientific advance on a “broad front.” “Broad,” yes, but not infinite. As each problem is solved, new questions open up. There may be no end to this process, but the number of fruitful research lines at any given time may well be finite. The natural reaction to this may be a relaxation of scientific standards. The growing number of pseudo-scientific missteps we have witnessed in recent years may be not just a testament to human frailty, but a reflection of the fact that the number of fruitful lines of inquiry has not kept pace with the growing number of scientists.
This disparity is not disastrous. There are still answers to be found; advance continues. But the mismatch does mean that the ratio of unsuccessful to successful experiments will increase.
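A back-of-the-envelope model shows why. In the toy sketch below, every number is an illustrative assumption of mine, not data: the pool of fruitful questions is held fixed while the workforce grows, and the share of scientists who can be working on something fruitful falls in proportion.

```python
# Toy model: a fixed pool of fruitful questions, a growing workforce.
# All numbers here are illustrative assumptions, not data.
FRUITFUL_QUESTIONS = 100

for scientists in (100, 500, 1_000, 10_000):
    # At best, one scientist per fruitful question; the rest work dead ends.
    fruitful_share = min(FRUITFUL_QUESTIONS, scientists) / scientists
    print(f"{scientists:>6} scientists -> {fruitful_share:6.1%} on fruitful lines, "
          f"{1 - fruitful_share:6.1%} headed for failure")
```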
Failure in science is career suicide
A high rate of failure is not in itself a problem, scientifically speaking. Failure is a necessary part of science, essential to its advance. Many of the major advances in science, from Darwin’s theory to the Higgs boson, came only after many years of often fruitless search for confirming evidence. The problem is that repeated failure is not compatible with career advancement: science is now, for most scientists, a career rather than a vocation. Darwin could persist because he was independently wealthy. The search for the Higgs was part of the collective enterprise of the Large Hadron Collider, a necessarily long-term investment. But failure, especially individual failure, doesn’t play well with research administrators. An ambitious scientist cannot afford to fail.
And that has created a major problem, one that threatens to erode the very foundations of science. Anxious researchers will be drawn to research methods that look enough like science to become accepted practice, but are guaranteed to get publishable results at least some of the time.
In other words, the replication crisis and other problems of science, like the apparent slowdown in the rate of discovery of new therapeutic drugs, may reflect something more than human susceptibility to bad incentives. Perhaps the problem is not people, but nature? Perhaps there are simply too many scientists for the number of soluble problems available? Perhaps we have taken the low-hanging fruit and what is left is too tough to harvest without abandoning rigor?
There can be too much of anything. There must be an optimal number of scientists, and it is surely less than one hundred percent of the adult population. Beyond that optimum, the scientific community begins to generate noise rather than signal, and advance is impeded. Are we at that point now in areas like social science and biomedicine? Vannevar Bush’s inspiring prose was appropriate at the end of WW2 and led to great advances in government-supported pure and applied science. But the situation now may be very different. We should at least be thinking about whether we need not more, but fewer, social and biomedical scientists.