Everything You Need to Know About Conflicts of Interest

Unconscious bias in science and medicine

Posted Mar 08, 2017

In Parts I and II of this series on conflicts of interest, we discussed the myriad ways in which both financial and non-financial conflicts present serious threats to the integrity of the scientific process. We argued that non-financial conflicts, and particularly what we call “emotional conflicts of interest,” can be just as significant as financial conflicts, even though the system of transparency and disclosure we currently have does not really account for these types of conflicts. We called for more research and for further consideration of whether “transparency” is truly the only solution to conflicts of interest in science and medicine.

Now we’d like to turn our attention to a very important, but oddly often less discussed, question: What can we do to prevent conflicts and bias in the first place? And an equally important related question: What can the average citizen do to both detect these biases and conflicts and to understand what real effect they might have on scientific findings and medical treatment?

These are definitely not simple questions to answer, and that’s probably part of the reason why they’re not discussed nearly enough. But there are some promising approaches already in existence, and we’d like to outline a few of them and then lay out how we can take these approaches even further.

Tackling Unconscious Bias in Medicine

A 47-year-old African American man with no medical history visits his doctor complaining of severe back pain. A 47-year-old Caucasian man with no medical history visits the same doctor complaining of severe back pain. The African American man is referred to a chiropractor. The Caucasian man is given a narcotic painkiller and sent for an MRI scan.

Mounting evidence suggests that physicians make quite different treatment decisions depending on whether the patient is white, black, or Asian, male or female, and even whether the patient is overweight.


The culprit here is thought to be something called “implicit bias.” Implicit bias refers to assumptions people make based not on an accurate assessment of the situation but rather on stereotypes that the person carries in his or her head, of which he or she may not even be aware. Physicians are, of course, human just like everyone else, and medicine is in no way an exact science, no matter what anyone tells you. Medicine requires a series of calculated judgments and decisions on the part of the physician. We’re already aware that factors such as sleep deprivation, stress, and even “likeability” of a patient can have serious effects on the decisions physicians ultimately make.

But it’s becoming more and more clear that the biases people carry probably also exert an undue influence on medical decision-making. And major decisions are certainly not exempt: a doctor’s bias could mean the difference between a diagnosis of schizophrenia, including prescription of drugs with serious side effects and possibly hospitalization in some cases, and a diagnosis of a “transient psychotic state,” which entails less invasive interventions.

This is clearly not the kind of medical environment for which we all strive. We want our doctors to be making evidence-based decisions as much as possible and to solicit the judgment of others in cases where they’re unsure. As Laura Castillo-Page, Association of American Medical Colleges (AAMC) senior director for diversity policy and programs, noted: “If we want to address disparity and quality of care, we have to tackle bias.” We couldn’t agree more.

So what is to be done? There are several promising programs that attempt to address these unconscious biases in medical practice. One intriguing program is led by Howard Ross, author of the book Everyday Bias and chief learning officer of Cook Ross, Inc. Ross has developed training programs and workshops in conjunction with the AAMC to make health professionals more aware of their implicit biases and to give them some tools to manage them. These workshops have been introduced across the country at many medical schools, including UCSF, Ohio State University, and University of Texas Medical Center, among others.

It’s important to recognize that workshops like these are not designed to “resolve” or get rid of implicit biases. In fact, expunging implicit biases from our psyches entirely might even be close to impossible (or just require an incredible expenditure of time and mental energy that most people can’t spare). But in many cases, simply making people more aware of the existence of unconscious bias and how it may be affecting their decisions is enough to effect meaningful change in how they behave.

This kind of thinking is the basis for the University of Massachusetts Medical School’s requirement that all new medical students take the “Implicit Association Test” (IAT). The IAT was developed by Anthony Greenwald of the University of Washington and Mahzarin Banaji of Harvard, co-authors of the book Blindspot: Hidden Biases of Good People. The test is a rapid assessment that uses reaction times to measure the degree to which we make nearly automatic assumptions about people belonging to certain groups. Over time the test has uncovered all sorts of biases among physicians, from preference for thin over fat to more generous attitudes toward people with lighter skin versus darker skin.
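The core logic behind IAT-style scoring can be illustrated with a short sketch. In broad terms, the test compares how quickly a respondent sorts stimuli when two categories share a response key in a stereotype-“compatible” pairing versus an “incompatible” pairing, and scales the difference in mean latency by the variability of all responses. The function below is a simplified, hypothetical illustration of that idea, not the official scoring algorithm (the published procedure involves additional trial exclusions and block-level computations):

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms, cutoff_ms=10_000):
    """Simplified illustration of an IAT-style D score.

    compatible_ms / incompatible_ms: response latencies in milliseconds
    from the two pairing conditions. Trials slower than cutoff_ms are
    discarded as lapses. The score is the difference in mean latency
    (incompatible minus compatible) divided by the pooled standard
    deviation of all retained trials. A larger positive score means the
    respondent was markedly slower in the incompatible pairing,
    suggesting a stronger automatic association with the compatible one.
    """
    comp = [t for t in compatible_ms if t < cutoff_ms]
    incomp = [t for t in incompatible_ms if t < cutoff_ms]
    pooled_sd = stdev(comp + incomp)
    return (mean(incomp) - mean(comp)) / pooled_sd

# Illustrative (fabricated) latencies: responses are ~200 ms slower
# in the incompatible pairing, yielding a clearly positive score.
score = iat_d_score([600, 650, 700, 620], [800, 850, 900, 820])
```

The key design point is the scaling step: dividing by each respondent’s own response variability makes scores comparable across people who are simply faster or slower overall.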
 

Slowing Down

These workshops and the emerging requirements to grapple with implicit bias in the first days of medical school are encouraging steps in the right direction. But we must resist the temptation to stop there if we want to see substantive change in the way doctors and scientists make decisions and manage bias and conflicts. In a pointed critique of these approaches to managing bias in medicine, Martin Escandon notes that in order to truly solve issues of implicit bias in medicine, “we must move further upstream.” In other words, we have to deal with the fact that most physicians come from relatively well-off backgrounds and that racial minorities are still underrepresented in medicine.

We agree. In addition, however, we also need to understand that “implicit bias” and poor decision-making in medicine are not just about racial biases. They are about a wide array of attitudes and beliefs that have become commonplace in medicine and are rooted in a structure that’s still too hierarchical, too “crisis-driven,” and allows too little time for reflection.


Sure, if someone is bleeding profusely in front of you, it’s not the time to take a step back and think about what biases might affect your approach to that patient. But physicians are so used to operating in “crisis” mode that very few pay attention to the value of “slowing down.” So beyond offering training on implicit biases at the start of medical school, we desperately need practical strategies that allow people to “slow down,” reflect, and access what they know about their own biases. This will also require some realignment of the payment system for medical care, which currently punishes providers for slowing down to think and reflect on their patients’ well-being.

Indeed, one way to “slow down” and prevent bias from taking over in science is actually very simple: write it down. There’s a whole movement in science now to get scientists to pre-register trials. According to Chris Chambers, who chairs the Registered Reports committee at the Center for Open Science, pre-registration “ties scientists’ hands.” And in fact, pre-registered trials are much less likely to show positive results than trials that were not pre-registered. Of course we still need a much deeper understanding of how this works. But there is good evidence from other fields that writing things down helps slow automatic thought processes that could be laden with bias. Something like this could perhaps also be implemented in the medical setting, although the process of “slowing down” by writing things down must be weighed against the need to act quickly in actual emergencies.

There's also a need for some cultural shifts in medicine, as alluded to above. Medicine should be less hierarchical and more collaborative. Younger physicians should have more leeway to question the choices of senior physicians without detracting from high-quality and timely patient care. And the problem of constant urgency in medical settings also needs to be addressed. First we would need to understand whether the sense of urgency is always warranted. Beyond that, if doctors are constantly overwhelmed by a string of emergencies, perhaps there are issues with the efficiency of the hospital as well as with human resources. Are physician assistants, pharmacists, and nurses being under-utilized? Are there things the doctor does that could in part be done by a non-physician, freeing the doctor to think more carefully about treatment decisions? Without some of these structural and cultural changes, it might be difficult to prevent unconscious thought processes from determining important treatment decisions.

What Can the Average Consumer Do to Detect and Understand Unconscious Bias and Conflicts of Interest?

So far we’ve focused heavily on doctors and scientists. But there are things average consumers of scientific information and patients can do to understand whether conflicts and bias are skewing the information they are getting.

At the most basic level, average consumers of scientific information need to understand the various ways in which subtle bias can influence scientific results, beyond major revelations of conflicts of interest and fraud. This kind of education must start from a very young age, and we must do a better job of exposing children to science as it actually unfolds, rather than presenting only a formulaic version of the scientific method that doesn’t reflect how research really works.

More broadly, students need to be taught rigorous critical thinking skills and to pay attention to “peripheral” cues when reading a new piece of information. By peripheral cues we mean asking not just what the article says but, often more importantly, who wrote it. What else do we know about this person? Is it possible that he or she has any motivation, aside from the data itself, to reach this conclusion? This line of questioning is not meant to make people paranoid and overly suspicious. In fact, quite the contrary: the more we teach people how to rigorously and accurately question new information, the less susceptible they will be to conspiracy theories and exaggerated claims about supposed fraud in science and medicine.

None of these solutions is easy, and probably none of them will solve the problem on its own. We’ve made great progress in making science and medicine much more transparent about funding sources and possible conflicts. But now we need to take the next step and tackle some of these more hidden, insidious threats to the validity of science and medicine. Some good solutions already exist. Now we must work together to improve and enact them.

If you’re interested in better ways of communicating conflicts of interest and similar issues in science communication and public perception of science, please also visit our new online community Critica. Subscribe to the newsletter and join us in protecting science and improving the American public’s perception of science.
