
Is This Psychology's Most Ironic Research Fraud?

Did best-selling social scientist Dan Ariely fake data on honesty?

Key points

  • Evidence suggests faked data in Dan Ariely's 2012 honesty study.
  • Scientific reform movements should consider strategies to combat active fraud, not just errors in research methods.
Evidence suggests researcher Dan Ariely faked data on honesty.
Source: Photo by Thirdman from Pexels

[Update: Reporting has revealed that Dan Ariely has asked for the manuscript describing the study below to be retracted. Further reporting provides context about past problems with research ethics involving Ariely: he was suspended from MIT (and ultimately left) because he did not get approval from an ethics oversight board before conducting electric shock experiments on students; he repeatedly claimed that dentists can't agree on what counts as a cavity, based on evidence he could not produce; and he conducted a study asking college undergraduates to masturbate at the MIT Media Lab, a study that itself appears to have statistical (as well as potential ethical) issues. Anonymous colleagues are quoted as saying "[Ariely is] an excellent storyteller, and he twists and fiddles with his studies so that they match the big story" and "Ariely likes to cut corners, and he doesn't think he needs to follow the rules like everyone else. He didn't think he'd get caught."]

In perhaps the most ironic case of academic misconduct to date, new evidence convincingly shows that data from a prominent honesty study were faked. The study appeared in a 2012 paper on honesty published in a prestigious scientific journal. The evidence was documented and disseminated by a group of well-known social scientists who have used statistical methods to identify scientific fraud in the past. This is the latest in a long line of flashy findings from top-tier psychology researchers, stretching back over the past decade, to be exposed as fake.

First, the study. One of three studies in the 2012 paper, conducted by noted behavioral economist Dan Ariely, manipulated whether people signed an "integrity statement" ("I promise that the information I am providing is true") at the top or the bottom of a car insurance form. Ariely found that people who signed at the top reported driving more miles than those who signed at the bottom, implying they were reporting more honestly, since drivers are motivated to under-report miles driven to insurance companies. This fit the rising interest at the time in behavioral economics "nudges": small changes that push people toward more socially desirable behavior.

Driving data from a major study had several big problems.
Source: Photo by Mike from Pexels

Data for this study were available online, and a group of anonymous junior researchers interested in the effect found several unusual properties. (That junior researchers were afraid to reveal their identities after doing this careful follow-up of influential work, much less claim credit for it, is a whole separate problem we will get to below.)

First, not a single person in the data drove more than 50,000 miles over the two-year period, and every mileage under 50,000 miles was equally likely. In real driving data (such as from the UK Department for Transport), more people drive a medium amount than drive very little or very much. The pattern here looks nothing like previously reported driving data, but it is exactly what you would get from Excel's random number generator.
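
For concreteness, here is a minimal sketch of what that first check might look like in Python, assuming a hypothetical claims.csv file with a miles_driven column (both names are illustrative, not the actual study file):

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names, for illustration only.
miles = pd.read_csv("claims.csv")["miles_driven"].to_numpy()

# Real mileage has a long right tail; a hard cap at 50,000 is a red flag.
print("max miles reported:", miles.max())

# Kolmogorov-Smirnov test against a uniform distribution on [0, 50000].
# Failing to reject uniformity (a high p-value) means the data look like
# Excel's RAND() * 50000 rather than real driving behavior.
result = stats.kstest(miles, stats.uniform(loc=0, scale=50_000).cdf)
print(f"KS statistic = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```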

Further, the data contain odd pairs of numbers. Half are in one font and half in another, and the members of each pair are nearly identical (except for a slight jitter that could come from the Excel random number generator). This is consistent with someone copying and pasting the data and then slightly changing the new, fake values, without realizing that the pasted rows carried a different font setting in Excel.
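
One way to quantify that pairing, sketched under the same assumptions (plus a hypothetical font column recovered from the spreadsheet, with illustrative values "calibri" and "cambria"), is to measure how close each value in one font group sits to its nearest neighbor in the other group:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("claims.csv")  # hypothetical file, as above
a = np.sort(df.loc[df["font"] == "calibri", "miles_driven"].to_numpy())
b = np.sort(df.loc[df["font"] == "cambria", "miles_driven"].to_numpy())

# Distance from each value in one font group to the closest value in the
# other. Copy-paste-plus-jitter predicts these distances pile up inside a
# narrow jitter window rather than spreading out like independent draws.
idx = np.clip(np.searchsorted(b, a), 1, len(b) - 1)
nearest = np.minimum(np.abs(a - b[idx - 1]), np.abs(a - b[idx]))
print("median distance to nearest cross-font value:", np.median(nearest))
```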

Finally, there is no rounding in the section of data that looks like it was copied and pasted. This is very odd, because when people report how many miles are on their odometer, they tend to round to the nearest 1,000 (or 500). People don't usually say, "my car has 26,342 miles on it"; they say, "my car has 26,000 miles on it." That rounding was absent from the copied data.
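
The rounding check is the simplest of the three to sketch, again with the hypothetical file and column names from above:

```python
import pandas as pd

miles = pd.read_csv("claims.csv")["miles_driven"]  # hypothetical, as above

# Share of odometer readings that are round multiples of 1,000 or 500.
# Genuine self-reports should spike at round numbers; uniform random fakes
# would land on them only ~0.1% and ~0.2% of the time by chance.
for step in (1_000, 500):
    share = (miles % step == 0).mean() * 100
    print(f"multiples of {step:,}: {share:.1f}%")
```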

According to the study authors, only Dan Ariely and the insurance company accessed the original data. Ariely provided a brief response after reviewing the post documenting the evidence, in which he says that he didn't alter the data but also didn't look for abnormalities in it. This implies that the insurance company falsified the data to support his hypothesis before he received it. That, or he's not being honest.

This episode raises several issues about psychological research. First, there has been much progress on assessing research credibility in the last decade, enough to make us think that perhaps this kind of data faking wouldn't get through peer review. But no part of peer review involves forensically investigating data to catch fraud. And while Ariely posted the data publicly (a reform pushed by members of the open science and Credibility Revolution communities), which is what allowed the fraud to be caught, catching it was not automatic. As much as psychology and other behavioral fields have embraced reform, there's still a long way to go before we can really say we're doing everything possible to vet reported results.

Second, this was a relatively sloppy case of fraud: exploring the public Excel file yielded several telling traces. There are more careful ways researchers can commit fraud and avoid detection, such as refusing to post data publicly, or simply fabricating data more carefully than copying and pasting rows in two fonts.

Third, and related to this point, we often assume the problem with unreplicable psychology is merely a lack of proper understanding of statistics or proper research methods. But even famous big-name researchers who have tenure, TED talks, best-selling pop-science books, and documentaries about them may commit fraud. While scientific research rests on trust, we as a field should not be so naive as to believe that every research finding can be trusted, or that interventions to prevent fraud (as best we can) shouldn't be part of our larger conversation about reforming science.

We may need more safety precautions with our data and science.
Source: Photo by Pixabay/Pexels

Fourth, how does our field treat researchers who do the careful follow-up work required to uncover fraud? The researchers who did this data detective work strengthen the social science literature by cleaning it up, yet they were too afraid to reveal their names.

As I’ve written here, this hostility to valid criticism may be one reason why young science reformers are opting out of jobs in academic research. Since academia is built on peer review, there are many ways that senior academics and their networks of collaborators can strike back at those who challenge them.

While the original study seemed to show that honesty can be easily increased through “one simple trick,” it turns out acting honestly is more complicated than that. There are larger systemic issues related to incentives for dishonesty, disincentives for follow-up work challenging established researchers, and the difficulty of doing careful investigations in the first place. As it turns out, there may be more to learn from the behavior of the honesty researchers themselves than from the content of the work they did.

[Note: In a previous version of this post, I stated that the fraudulent data had been available online since 2012, when the manuscript was published. That is not true: the data were only posted in 2020, when a team whose replication study failed to find the original effect posted both the original 2012 data and their new 2020 data together online.]
