Everything You Need to Know About Conflicts of Interest
Is transparency the only solution?
Posted Jan 11, 2017
Note: This is the first installment of a 3-part blog post on conflicts of interest and bias in medicine and science. This entry is mostly about financial conflicts of interest. The second entry is about other kinds of hidden biases that plague scientists and doctors, and the third will offer suggestions about what consumers and scientists can do to combat these problems.
In September of 2016, a shocking exposé in The New York Times revealed that everything we thought we knew about sugar, fat, and heart disease was wrong. And not only was it wrong, but the information we had been using to guide our decisions about what to eat and what to feed our kids had been manipulated in what can only be described as a conspiracy between scientists and the sugar industry.
Needless to say, people were outraged. As one reader of The New York Times article commented, “This was a conspiracy of scientific FRAUD. The sugar companies that did this should be sued for $BILLIONS for the health harm that they caused.” It wasn’t long before comparisons to the tobacco industry started: “Sugar is the new tobacco and has been for a while. The article is just the tip of the iceberg,” commented another NYT reader.
And then, in the midst of election season, came the conspiracy theories: “FYI.. Hillary very well funded by Big Sugar so you can bet nothing will happen as a result of these findings. With Hillary in the White House, we'll all be eating cake anyway- It's a win win for everyone!”
This news was definitely unsettling. In an age of increasing transparency and availability of information, it is troubling to think that we might still be making important decisions about our nutrition based on evidence tainted by serious conflicts of interest. And while the Harvard paper referenced in the New York Times article did appear before modern rules about disclosure of conflict of interest in scientific research, another study published just this year questioning the validity of the new WHO sugar guidelines was also found to be heavily funded by the sugar industry, and the scientists involved were less than completely forthcoming about how this funding may have affected their viewpoint.
In the wake of the sugar debacle, it’s become clear that we need to take a step back and re-examine our current system of handling conflict of interest in science and medicine. Is it working? Well, we have achieved much greater levels of transparency around industry funding of science and medicine. But transparency on its own does not seem to be enough.
In the decades since the Harvard sugar study, it has become increasingly clear that scientists and doctors are prone to biases based on financial incentives and industry sponsorship. Those data seem relatively clear and do not need to be rehearsed here. The standard response to this problem has been simply to include disclosure statements at the end of scientific papers.
But does reading the words: “This study has been funded in part by Kraft Foods” actually help journal editors, reviewers, science journalists, and the general reader understand whether and how the study might be biased?
The answer is: probably not.
Because on its own, transparency doesn’t help us very much. In reality, outright fraud like we saw in the sugar study is thankfully quite uncommon, but biases that subtly affect scientific studies are common. Of course, these types of subtle influence and bias are much more difficult to detect, especially for an average consumer of scientific studies and news. Not to mention that these disclosures focus solely on financial conflicts of interest, leaving the consumer no opportunity to understand what kinds of powerful non-financial biases may be affecting the study results.
So we desperately need the answers to two interrelated questions:
1. How should potential biases be communicated to the general public in a way that maximizes consumer awareness but minimizes opportunities for extreme suspicion that can ultimately lead to science denial?
2. How do we account for non-financial conflicts of interest in science and medicine?
What is conflict of interest?
First, we need to understand what is really meant by “conflict of interest” and how it can differ from accusations of outright fraud. “Conflict of interest” has to do with a person’s involvement with an entity that could corrupt his or her decision-making or judgment about the issue at hand.
Going back to the sugar example: if the Harvard authors had simply been funded by the sugar industry to undertake a study evaluating the relationship between sugar and heart disease, but there was no evidence of any further involvement by the industry in the study itself, then we could say that a conflict of interest might be present.
A conflict of interest is by nature speculative. When authors declare “competing interests” to a medical journal, they are not admitting some form of guilt. They are simply disclosing the fact that other interests could have influenced their study.
In the healthcare field, these “other interests” are very broad. They refer to literally anything other than an interest in improving the health of the public through “pure” scientific findings.
Theoretically, we could imagine a scenario in which a principal investigator’s study of a new cancer drug is biased by the fact that his mother has cancer and he is desperate to find a new medication that might increase survival time. Of course, this is not the kind of “conflict” that scientists are asked to report to scientific and medical journals. Instead the focus has been on transparent reporting of monetary ties that could theoretically result in biased study results.
Are transparency and disclosure the only answers to conflict of interest in medicine and science?
On an intuitive level, disclosure makes great sense, and of course transparency is always a good thing. But when something like the sugar “conspiracy” is uncovered, it becomes immediately clear that most people are utterly confused about what industry sponsorship of scientific research really means, what the true threats to unbiased science are, and basically how to interpret information about potential “conflicts of interest” when evaluating scientific evidence.
When the NYT broke the story about sugar in September, an outpouring from the general public revealed that the distinction between “conflict of interest” and “fraud” is often blurred. It was as though people were assuming that every case of industry sponsorship must be exactly like the sugar example: that industry sponsorship in and of itself will always result in outright fraud.
The fact that the sugar industry sponsored the Harvard study may or may not have been the problem in and of itself. What really indicted the industry and the scientists in that case was the correspondence revealing that sugar executives had blatantly asked the scientists to manipulate their data toward a conclusion the company favored.
This is obviously an extreme example and one that is thankfully uncommon. More often we are faced with situations in which industry sponsorship of science is present and that sponsorship has the capacity to subtly influence the opinions and behaviors of scientists and healthcare professionals.
What do we know about how money influences scientists and doctors? In other words, can money or gifts from a certain industry really bias scientists and doctors that much, even in the absence of direct requests that studies funded by industry show favorable results, as we saw with the sugar example?
The answer to this question is yes. There are already plenty of studies showing that money from industry—even relatively small amounts—influences scientific conclusions and physician prescribing. This realization has led to ever more stringent reporting and disclosure policies in science and medicine.
Is industry money the only thing that can bias scientists and doctors?
In the past 30 years or so, we’ve seen not only many laws around disclosure but also a plea for more public sector funding for scientific research to avoid the undue influence industry interests may have on science and medicine.
But do these solutions really solve the problem? They are a step in the right direction, but they are not the complete answer. Why? Because “influence” is actually a quite complex and nuanced phenomenon.
Interestingly, one of the main respondents to another sugar debacle this year was Dean Schillinger, who wrote an editorial criticizing the methodology used in an industry-sponsored review article in Annals of Internal Medicine about the new WHO sugar guidelines. In his response to that review article, he disclosed that he had been a paid expert witness for the city of San Francisco the previous year, in a lawsuit brought by the beverage industry after the city mandated warning labels on soft drinks. Government funding of scientific research might be slightly better than industry sponsorship, but we should not believe for a minute that this approach completely eliminates bias.
And in fact, there are many sources of potential bias that do not get disclosed under current transparency policies for science and medicine. For example, what if a prominent doctor writes a popular book about the importance of early and frequent screening for cancer? What if that doctor is known for her position that annual mammograms are essential? When that same doctor publishes a paper showing that annual mammograms result in reduced rates of metastatic breast cancer, isn’t it possible that a very significant bias may have crept in here? Indeed, that bias could even be viewed as financial, since perhaps those positive findings could translate into increased book sales for the doctor. But this kind of bias would not need to be disclosed in the paper.
Or, to take another example, what if a psychiatrist has a big private practice based on a certain type of treatment, say cognitive-behavioral therapy (CBT)? Could that be a potential source of conflict of interest in that doctor’s new study on the superiority of CBT compared to drug therapy? Should someone who does a lot of back surgeries be allowed to write a paper in which back surgery is compared to physical therapy? These biases are in fact all ultimately tied to financial gain, but they don’t get disclosed in papers. And this is not to mention a whole range of other kinds of biases that are entirely non-financial in nature but potentially very influential.
So what can we do to better understand all the potential drivers of conflicts of interest and biases in medicine and science? How can healthcare professionals and scientists alike better monitor their own biases? And how can informed consumers determine whether and how conflicts of interest and biases might be playing a role in the science they consume and the medical care they receive?
These are not easy questions to answer, but we will try to take them on next month in Part II of this series on conflicts of interest in science and medicine. Stay tuned. In the meantime, if you want more information, we recommend following two excellent sites devoted to issues around transparency, fraud, and science gone wrong in general, Retraction Watch and Bad Science.