I begin with a true and tragic story. Many years ago I was a graduate student conducting research in one of the top biopsychology laboratories in the country. The lab chief was one of a handful of the world's most prominent research psychologists at that time, and many in the lab believed he was headed for a Nobel Prize.
As is often the case, this lab head was not doing hands-on research himself. He was busy writing articles and grant proposals and traveling around giving speeches. A fleet of graduate students and postdoctoral fellows conducted the research. He would put his name on reports of research that he had helped to design but that others had conducted. He didn't even fully understand the equipment that was used in those experiments.
A certain postdoctoral fellow in the lab--I'll call him Henry--was getting most of the fabulous results. At about the same time that I received my Ph.D. and took an assistant professorship at a more humble institution, Henry accepted an offer to become full professor at one of the most prestigious psychology departments in the country. The task of continuing his line of research was then turned over to an excellent, conscientious graduate student in the lab he had left. That graduate student could not replicate any of Henry's famous findings. This led to repeated calls for Henry to come back and demonstrate how he got those fabulous results, calls that were never satisfactorily answered. With continued failures to replicate, and with continued defensiveness and evasiveness on Henry's part, the suspicion grew, usually unstated, that Henry might have made up those findings. And then the tragedy happened. Henry committed suicide.
What a shock that was to me. I can't say that I really liked Henry; his ambition was such that he rubbed those who were beneath him, including me, the wrong way. But I knew him and felt I understood him. He was a real flesh and blood person to me, and when I heard of his suicide I cried. I could see him as a frail person--despite his burly physique and blustering style--caught up in a drive toward self-advancement, in a lab that was rewarding the "right" findings and had little interest in the "wrong" ones. He was not, in truth, a scientist at all. He wasn't interested in the questions he was supposedly pursuing in the lab. When the foundation for his self-advancement was pulled out from under him he toppled; he could no longer see any purpose in living.
I've been thinking lately about the whole question of cheating in science. It has been brought to mind, of course, by the recent media coverage of the Marc Hauser case at Harvard. Hauser is accused of fabricating data in at least some of his celebrated experiments on the cognitive abilities of monkeys. The Hauser case is reminiscent of another case of scientific fraud that also occurred in the Harvard Psychology Department. In the late 1990s, fast-rising Harvard psychologist Karen Ruggiero was found guilty of fabricating five experiments, which had been published in two articles, and of altering the data that appeared in a third article. Her career was destroyed.
How common is scientific fraud? Nobody really knows. Defenders of science's purity often argue that such fraud is very rare, the product of a tiny number of "bad apples." But I doubt that. My suspicion is that the cases of fraud that are exposed are just the tip of the iceberg.
I've heard people argue that it would be against anyone's self-interest to cheat in science because cheating will be caught when someone tries to replicate the experiment and fails. But, in truth, replication is rare in most areas of science. Most scientists want to do something new, and funding agencies rarely provide grants to repeat already published experiments. Even when replications are conducted and fail, there are almost always ways to explain the discrepancies without suggesting fraud. No experiment can possibly be an exact replication of a previous one. This is especially true in the behavioral sciences. The subjects are different (different people, or rats, or ant colonies), the time in history is different, the ambient conditions (temperature, barometric pressure, color of the walls) are different, and so on. Failure to replicate may well be taken to indicate that the original findings are not as "robust" as previously believed, but it is almost never taken as evidence of fraud.
Even in the case of Henry, where every attempt was made to keep conditions exactly the same as those in the original experiments, the researchers continued to "explain" the failure, at least publicly, in terms of hypothetical changed conditions. They suggested in one article, for example, that the company from which they obtained the rats may have been breeding the animals in a way that had altered their behavioral reactions. My guess is that if Henry had remained alive and had been formally accused of fraud, nobody would have been able to prove it.
Proof of fraud in science rarely if ever comes from failure to replicate. It comes, most often, when the perpetrator of the fraud becomes so brazen that he or she fabricates or alters data in ways that make the fraud obvious to others. Hauser was caught, apparently, because he began to pressure his graduate students to get the results he wanted, which led them to become whistleblowers, which, in turn, led to an investigation revealing that his recorded data did not match that in his published papers.
A graduate student complaint also triggered the investigation that led to Karen Ruggiero's downfall. The student had asked Ruggiero for a copy of the original data for a certain experiment, and Ruggiero had refused. This led the student to suspect that the data might not exist, which led to the investigation. If Ruggiero had taken the trouble to produce a false paper record to "support" her falsified experiments, the investigation would not have happened.
Some other scientists have been caught cheating because their fabricated data, quite literally, were too good to be true. There is always a certain degree of random variability in real data, and repeated data sets that show little or no variability are powerful evidence of fabrication. You have to be either very brazen or very stupid to get caught cheating in science.
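The "too good to be true" idea can be made concrete with a simple screen for implausibly low scatter. The sketch below is my own illustration, not any actual investigator's method: it flags a data set whose coefficient of variation (standard deviation divided by the mean) falls below a chosen threshold, on the reasoning that real measurements of real subjects almost always show more spread than that. The function name and the 5% cutoff are hypothetical choices for the example.

```python
import statistics

def variability_flag(datasets, min_cv=0.05):
    """Flag data sets whose spread is implausibly small.

    For each data set, compute the coefficient of variation
    (stdev / |mean|) and flag it if the value falls below min_cv.
    This is a crude screen for suspicion, not proof of fraud.
    """
    flags = []
    for data in datasets:
        mean = statistics.mean(data)
        stdev = statistics.stdev(data)
        cv = stdev / abs(mean) if mean != 0 else float("inf")
        flags.append(cv < min_cv)
    return flags

# A realistic sample shows natural scatter from subject to subject;
# a suspiciously "clean" one repeats nearly identical values.
realistic = [12.1, 9.8, 14.3, 11.0, 8.7, 13.5]
suspicious = [11.99, 12.01, 12.00, 12.02, 11.98, 12.00]
print(variability_flag([realistic, suspicious]))  # [False, True]
```

In practice, of course, a real inquiry would use far more careful statistical tests than this, but the underlying logic is the same: genuine measurements carry noise, and its absence across repeated experiments is itself a signal.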
Over the years a number of surveys have been conducted in which scientists were asked to report, on an anonymous questionnaire, on their own fraudulent behavior. A recent meta-analysis of those surveys reveals that, on average, about 2% of scientists admitted to fabricating or falsifying data, and 14% said that they had personal evidence of such behavior in one or more of their colleagues. The percentage admitting to fraud was highest among scientists doing pharmaceutical, clinical, and other medical research, which either means that researchers in those fields fabricate lab data more often or lie less often on questionnaires than do researchers in other fields.
As the author of the meta-analysis, Daniele Fanelli, points out, the 2% figure is the lowest possible estimate of the percentage of scientists who have deliberately falsified data. No respondents would say that they had behaved fraudulently if they hadn't, but many, even on an anonymous questionnaire, might be expected to lie in the opposite direction. The meta-analysis also revealed that a full third of the respondents to the surveys admitted to more subtle forms of scientific cheating, such as failing to report data that contradicted their theories or dropping data points from analyses because of a "gut feeling" that they were inaccurate.
The purpose of science is to discover truths. Cheating completely defeats the purpose. Why, then, do scientists cheat? In my next post I'm going to delve more deeply into this question and suggest that many so-called scientists are not, in their heads, really scientists. Instead, they are still students, going through one hoop after another to reach the next level. To them, cheating in science is just like cheating in school, and "Who doesn't do that?"
See my new book, Free to Learn.
Price, M. (2010). "Sins against science." APA Monitor, 41(7), 44.
Wade, N. (2010, August 27). "Harvard researcher may have fabricated data." New York Times.
Fanelli, D. (2009). "How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data." PLoS ONE, 4(5), 1-11.