Spinning Science and the Elusive Quest for Objectivity
New data about data shows how bias continues to color scientific articles.
Posted October 2, 2019 | Reviewed by Kaja Perina
We like to think of scientific studies, and the articles that come from them, as cold, hard, objective facts. Sure, scientific data can be spun and cherry-picked by pundits trying to advance particular agendas—but that is thought to come from other people who introduce bias after the fact, not in the actual production of scientific evidence itself.
Such a perspective regarding the purity of scientific information did take a hit a couple of decades ago in the world of medicine, when it came to light that studies of medications and medical devices were often skewed to show products in their most favorable light. This was done through a number of techniques, ranging from statistical maneuvering to simply not publishing negative papers in the first place. Many of the authors of these studies had financial ties to the products they were investigating. If you were a pharmaceutical company excited about promoting a promising new antidepressant—and your most recent clinical trial showed the drug was a complete flop—maybe you could, for example, just reveal your data at the annual New Guinea Entomology Conference or, better yet, simply pretend the study never happened at all.
These little tricks prompted some serious changes to the way scientific data are produced and published. Scientists started to be required to disclose, in writing, all financial relationships that could present a conflict of interest, and journals began insisting that treatment studies be registered and described in detail before they started in order to qualify for publication.
Now it’s all on the up-and-up, right? Well, maybe not so fast. A recent scientific study about scientific studies looked at the degree to which bias, or “spin,” is still present in our literature. The authors pulled 116 clinical trial articles from prominent psychology and psychiatry journals that tested specific treatments, such as a medication or a type of psychotherapy. For an article to qualify, the main pre-defined outcome needed to be negative—meaning that, overall, the active treatment was found not to be statistically different from a placebo or control group. Then the authors looked at the published summary of each article, called the abstract, to see whether this negative result was communicated fairly or was twisted with language that downplayed the result or interpreted it in a much more positive manner.
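As a toy illustration of what hunting for spin might involve in practice (this is my own sketch, not the coding scheme the study’s authors used, and the phrase list is invented), one could flag abstracts whose conclusions sound upbeat even though the primary outcome came up negative, and then hand those to a human reader:

```python
# A toy illustration only -- NOT the coding scheme used by the study's authors.
# It sketches how one might flag abstract conclusions whose wording sounds
# upbeat even though the pre-defined primary outcome was negative.
# The phrase list is invented for this example.
POSITIVE_SPIN_PHRASES = [
    "well tolerated", "promising", "clinically meaningful",
    "improved", "trend toward", "may be effective",
]

def flag_possible_spin(conclusion: str, primary_outcome_significant: bool) -> bool:
    """Flag a conclusion for human review if the trial was negative
    but the abstract still sounds like good news."""
    text = conclusion.lower()
    sounds_positive = any(phrase in text for phrase in POSITIVE_SPIN_PHRASES)
    return (not primary_outcome_significant) and sounds_positive

# Example: a null trial whose conclusion still reads like a win.
print(flag_possible_spin(
    "The drug was well tolerated and showed a trend toward improvement.",
    primary_outcome_significant=False,
))  # True -> worth a closer human read
```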
The result was that over half of the articles (56 percent) contained spin, which was most commonly placed in the Conclusions section of the abstract (where people who can’t even be bothered to read the entire summary go to get a quick answer). The most common type of spin was focusing on “secondary” outcomes that were positive at the expense of primary ones that weren’t.
In other words, say you were conducting a study of a medication to treat anxious adults and measuring anxiety with two different rating scales. Before carrying out the study, you would have to pick one of them as your primary measure, the one that would ultimately determine whether or not your medication worked; you’d also be allowed a secondary measure (or two) on the argument that these scales capture something a little different. If your primary measure showed no effect from the medication but one of your secondary ones did, the spin would lie in devoting a disproportionate amount of attention to the positive secondary scale.
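To see why a positive secondary outcome next to a null primary one is shaky ground, consider a quick simulation (my own illustrative sketch, not part of the study, with made-up numbers: 50 participants per arm, three rating scales treated as independent for simplicity): even when a medication does nothing at all, testing several scales gives chance several opportunities to hand you a “significant” result on one of them.

```python
# A minimal simulation (my own sketch, not from the study) showing why a
# "positive" secondary outcome next to a null primary one is weak evidence:
# even when a drug does nothing, testing several scales gives chance several
# opportunities to cross p < 0.05. Scales are treated as independent here
# for simplicity; in a real trial they would be correlated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 10_000        # simulated studies of a drug with zero true effect
n_per_arm = 50           # participants per arm
n_scales = 3             # one primary + two secondary anxiety scales

false_positive_any = 0
for _ in range(n_trials):
    hit = False
    for _ in range(n_scales):
        drug = rng.normal(0, 1, n_per_arm)       # no real benefit
        placebo = rng.normal(0, 1, n_per_arm)
        _, p = stats.ttest_ind(drug, placebo)
        if p < 0.05:
            hit = True
    false_positive_any += hit

print(f"Chance at least one of {n_scales} scales looks 'positive': "
      f"{false_positive_any / n_trials:.2%}")    # ~14%, vs. ~5% with one scale
```

With a single pre-registered primary scale, the false-positive rate sits near 5 percent; give yourself three chances and it roughly triples, which is exactly why the primary outcome is supposed to be named in advance and given the spotlight.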
Interestingly, having a study funded by a commercial entity such as a pharmaceutical company, as opposed to a government agency, did NOT predict which studies contained spin. Indeed, the vast majority of studies with spin were not industry-funded.
Some important caveats are worth noting. First, one person’s “spin” is another person’s expanded information. The presence of spin as defined in the study certainly does not equate to the intentional manipulation of data or an effort to “fool” the reading public. Devoting some attention to secondary outcomes may be entirely appropriate and worthwhile. If you are doing a study on the treatment of ADHD, for example, it might be important to mention a treatment’s effect on anxiety levels—even if that is not the main focus of the study.
At the same time, however, this study should serve as a reminder to both authors and readers that truly objective scientific data is still hard to achieve, and can be influenced by more than just financial ties to a drug company. Many readers of scientific information are well aware of the phenomenon in which prominent scientists—who are known to have a particular position on a controversial topic—often seem to be able to conduct studies that serve to confirm their beliefs. Again, this does not mean that there is scientific fraud, but rather that the human element may be more difficult to remove than we think.
Subtle bias can come from non-financial sources as well, including a researcher’s own prior scientific positions. Once a person goes on public record with a view (or a book) that, for example, video games lead to violence (or that they don’t), or that cognitive-behavior therapy is the best treatment for depression, it can be very hard to shift even a little from that position, lest that person be seen as (gasp) incorrect or, even worse, wishy-washy.
What to do about all this? From a regulatory standpoint, we might consider expanding the definition of what constitutes a potential conflict of interest. If someone gives a public lecture about how great or how terrible psychiatric medications are, for example, maybe we need to know not only whether they are on a speaker’s bureau for a drug company but also whether they sit on the board of an advocacy group against psychiatric medications. If you are writing to argue for the legalization of cannabis or about the dangers of alcohol, it might be important to disclose not only whether you work for a cannabis or alcohol company, but also whether you consume the substance yourself.
Perhaps even more important, however, we may need to shift our culture in a way that gives people a little more space to be scientifically flexible and moderate. We should celebrate, rather than attack, those who, in good faith, change their position based on new scientific data. We may also need to devote a little more attention to people with moderate stances on subjects, as opposed to our current tendency to give the most airtime to those with the most extreme and polarizing perspectives.
As long as human beings are the ones conducting and interpreting science, I doubt we’ll ever get rid of spin entirely. But if we care about people actually believing what we say, a little more dedication to truly “following the data” could take us a long way.
References
Jellison S, Roberts W, et al. Evaluation of spin in abstracts of papers in psychiatry and psychology journals. BMJ Evidence-Based Medicine. 2019 Aug 5. doi: 10.1136/bmjebm-2019-111176. [Epub ahead of print]