
Evidence-Based Practice: The Misunderstandings Continue

A recent essay displays startling misconceptions regarding science and therapy.

The three components of evidence-based practice

There’s plenty of room for legitimate debate in clinical psychology and allied fields, such as psychiatry, social work, and counseling, but at least one proposition should not be particularly controversial: The state of mental health care is in shambles. Surveys show, among other things, that only about 20 percent of people with major depressive disorder (a life-threatening condition associated with a markedly increased risk of suicide) receive anything close to optimal treatment; that many or most practitioners who treat clients with eating disorders are not administering scientifically supported therapies; that about two-thirds of individuals with autism spectrum disorder are receiving scientifically unsupported interventions, such as facilitated communication, sensory-motor integration, chelation therapy, and gluten-free diets; that large proportions of clinicians who treat obsessive-compulsive disorder do not implement the clear-cut treatment of choice for this condition, namely exposure and ritual (response) prevention; and on and on…and on.

Enter evidence-based practice into the fray – and perhaps at least partly to the rescue. Evidence-based practice is a concept imported into psychology from medicine, where it originated in the 1990s (we in psychology almost always seem to be a few decades behind our colleagues in medicine), that attempts to minimize error in treatment selection and administration by grounding clinical decisions in the best available research evidence (Sackett & Rosenberg, 1995). Like its predecessor, evidence-based medicine, evidence-based practice is traditionally conceptualized as a three-legged stool. Specifically, it attempts to integrate (1) the best available research evidence bearing on the efficacy (how well treatments work in rigorous controlled trials) and effectiveness (how well treatments work in real-world settings) of clinical interventions with (2) clinical judgment and expertise, and (3) client preferences and values. In an influential report, a task force of the American Psychological Association (2005) regrettably declined to explicitly address the question of whether these stool legs should be weighted equally. Nevertheless, many researchers, myself included, believe that the research leg of the stool should be accorded the highest priority in the decision-making hierarchy. When the rubber meets the road, that is, when well-designed studies demonstrate that Intervention X works better than Intervention Y but a clinician’s intuition tells him or her to use Intervention Y, we should side with the research evidence unless there is a clear-cut reason to do otherwise.

No serious scholar believes that evidence-based practice is a panacea. Nevertheless, it is an essential and long-overdue step in the right direction, because it reduces – although of course does not and probably cannot eliminate – errors in clinical inference. By constraining treatment selection to interventions that have at least a modicum of research support, evidence-based practice increases the chances that clients will receive treatments that work and decreases the chances that clients will be exposed to interventions that are ineffective or that can cause harm.

Yet, there has been a good deal of resistance to evidence-based practice. As social work scholar Eileen Gambrill and her colleagues have noted (e.g., Gibbs & Gambrill, 2002), much of this resistance stems from misunderstandings and misconceptions. For example, some opponents maintain that evidence-based practice (a) eliminates all clinical judgment (as reflected in the three-legged definition above, it doesn’t); (b) prevents practitioners from administering unvalidated interventions (not so; it implies only that they should give their clients full informed consent when administering experimental interventions); (c) mandates a cookie-cutter approach to treatment administration (no, it doesn’t; so long as treatments are explicitly laid out, they can be modified or tailored to individual clients); (d) only considers evidence from randomized controlled trials (false; it considers evidence from observational studies, quasi-experimental designs, and well-designed within-subject designs, although it appropriately weights well-controlled studies more highly than other sources of evidence); (e) is equivalent to the concept of empirically supported therapies (as many authors have noted, it is not; empirically supported therapies constitute only one operationalization of the research leg of the evidence-based practice stool); and (f) applies only to groups of clients, not to individual clients (groups are composed of individuals, so this objection doesn’t hold water; all things being equal, an intervention found to be more effective than no treatment or an alternative treatment for a group of individuals with Disorder X is more likely to be effective for a given individual with Disorder X; see Lilienfeld, Ritschel, Lynn, Cautin, & Latzman, 2013, for a discussion of these and other misconceptions regarding evidence-based practice).

Still, not all objections to evidence-based practice are unreasonable or based on dubious logic. Perhaps the most persuasive argument against evidence-based practice in its present form comes from physician Kimball Atwood (2008) and his colleagues at their superb blog, Science-Based Medicine. As they observe, evidence-based medicine relies too heavily on the results of controlled trials and not sufficiently on theoretical plausibility. From the standpoint of Bayes’ theorem, a well-known mathematical formula that requires us to take the prior probability of a claim into account when evaluating the evidence for it, medical and psychological practices should be judged by the quality of both (a) the research evidence in their favor and (b) the plausibility of their theoretical rationale. When a treatment’s theoretical rationale is dubious, the reasoning goes, we need more – and more persuasive – research evidence to accept this treatment as valid than when its theoretical rationale is well established. As sociologist Marcello Truzzi and, later, astronomer and science writer Carl Sagan noted, extraordinary claims require extraordinary evidence (see David & Montgomery, 2011, and Lilienfeld, 2011, for further discussions). Even if this friendly amendment is correct (and I am sympathetic to it), it does not imply that evidence-based practice per se is flawed; it implies only that we need to expand the definition of evidence-based practice to incorporate theoretical plausibility along with rigorous research evidence.
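To make the Bayesian reasoning concrete (the notation below is my own illustration, not Atwood’s), the probability that a treatment is genuinely efficacious after a positive trial depends not only on the trial result but also on the treatment’s prior plausibility:

\[
P(\text{efficacious} \mid \text{positive trial}) \;=\; \frac{P(\text{positive trial} \mid \text{efficacious})\, P(\text{efficacious})}{P(\text{positive trial})}
\]

When the prior probability P(efficacious) is very low, as for a treatment whose proposed mechanism conflicts with well-established science, even a statistically significant trial yields only a modest posterior probability, which is why such treatments demand stronger and more extensive evidence.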

I had assumed that the universe of possible misconceptions surrounding evidence-based practice had been exhausted until last week, when I read an essay by well-known psychologist and consultant Gary Klein on the website, “The Edge.” Along with 175 other eminent invited contributors, Klein responded to the provocative question, posed by prominent book agent and science advocate John Brockman, “What Scientific Idea is Ready for Retirement?” See:

http://www.edge.org/response-detail/25433

To my surprise, Klein responded “Evidence-Based Medicine.”

My surprise only grew, however, as I read on:

“…we should only trust EBM [evidence-based medicine] if the science behind best practices is infallible and comprehensive [italics in original], and that’s certainly not the case.”

This statement reflects a jaw-dropping misunderstanding of evidence-based practice. No science, including the science underlying evidence-based practice, is or ever will be infallible. The goal of evidence-based practice, like that of all science, is not to eliminate all error; it is to minimize error. Indeed, one of the major advantages of evidence-based practice is that, like all good science, it is in principle self-correcting. As better treatments become available, they will eventually displace less effective ones. Crucially, by sorting the wheat from the chaff, evidence-based practice can also tell us which treatments are extremely unlikely to be effective – and thereby decreases the odds that clients will be harmed directly (by iatrogenic interventions) or indirectly (by the opportunity costs incurred by the loss of time, energy, effort, and resources that could otherwise have been invested in effective interventions).

Klein continues:

“Practitioners shouldn’t believe a published study just because it meets the criteria of randomized controlled design. Too many of these studies cannot be replicated.”

Of course. But nothing in evidence-based practice implies that treatment decisions should be based exclusively on the results of single studies; quite the contrary. Instead, the rationale is that, all else being equal, treatments that have been shown to work in multiple, independently replicated, well-designed studies (especially when confirmed by meta-analyses, that is, quantitative summaries of the literature) should be accorded higher priority in treatment selection than treatments that haven’t. Klein appears to be attacking a straw-person version of evidence-based practice, one that implies that practitioners should blindly and robotically be beholden to the results of the latest randomized controlled trial. That is certainly not the case.
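For readers unfamiliar with what such a quantitative summary involves, here is a minimal sketch of a fixed-effect meta-analysis (my own illustration; the effect sizes are made up, not drawn from Klein’s essay or any actual literature). Each study’s effect size is weighted by the inverse of its variance, so larger and more precise studies count for more toward the pooled estimate:

```python
# Minimal fixed-effect meta-analysis sketch with hypothetical effect sizes.
import math

# Hypothetical standardized mean differences (Cohen's d) and their variances
# from five independent trials of the same intervention.
studies = [
    {"d": 0.45, "var": 0.020},
    {"d": 0.30, "var": 0.015},
    {"d": 0.55, "var": 0.040},
    {"d": 0.25, "var": 0.010},
    {"d": 0.40, "var": 0.030},
]

weights = [1.0 / s["var"] for s in studies]            # inverse-variance weights
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))              # standard error of the pooled effect

print(f"Pooled effect size d = {pooled_d:.2f} "
      f"(95% CI {pooled_d - 1.96 * pooled_se:.2f} to {pooled_d + 1.96 * pooled_se:.2f})")
```

The point of the exercise is simply that no single trial, replicable or not, carries the day on its own; the summary estimate reflects the weight of the whole body of evidence.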

The many misconceptions continue, but I will address only two more:

“Many patients suffer from multiple problems… The protocol that works with one problem may be inappropriate for the others.”

Right. But again, that in no way vitiates the rationale underlying evidence-based practice. It implies only that such practice must be attuned to so-called “comorbidities.” Evidence-based practice that cavalierly ignores this problem is poor evidence-based practice. If studies show that a treatment protocol works well for Problem X but not for “comorbid” Problem Y, this can and should be built into practice guidelines. All sophisticated advocates of evidence-based practice are well aware of this consideration.

“A treatment that is generally ineffective might still be useful for a sub-set of patients.”

Again, correct. But that is precisely what moderator analyses, those that ascertain whether interventions are more effective for certain subgroups of clients than for others, are designed to detect. Moderators can easily be accommodated into evidence-based practice guidelines, e.g., “If a client has major depressive disorder, in general you should first consider administering interventions X and Y. But if this client has major depressive disorder and also has a family history of bipolar disorder, evidence suggests that intervention Z is generally the treatment of first choice.” Again, Klein’s arguments are hardly a reason to “retire” evidence-based practice; they are a reason to ensure that evidence-based practice is nuanced and attentive to scientific reality, which scores of researchers are currently striving to do.
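For readers curious about the mechanics, a moderator analysis typically tests a treatment-by-subgroup interaction term in a regression model. The sketch below uses simulated data with hypothetical variable names (treatment, family_history, outcome); it is an illustration of the general technique, not a reconstruction of any particular study:

```python
# Minimal moderator-analysis sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
treatment = rng.integers(0, 2, n)          # 0 = control, 1 = intervention
family_history = rng.integers(0, 2, n)     # hypothetical moderator

# Simulated outcome: the intervention helps overall, but less so when the
# moderator is present; the interaction term is what captures that difference.
outcome = (0.5 * treatment
           - 0.4 * treatment * family_history
           + rng.normal(0, 1, n))

df = pd.DataFrame({"treatment": treatment,
                   "family_history": family_history,
                   "outcome": outcome})

# "treatment * family_history" expands to both main effects plus their
# interaction; a reliable treatment:family_history coefficient suggests the
# treatment's benefit depends on the subgroup.
model = smf.ols("outcome ~ treatment * family_history", data=df).fit()
print(model.summary().tables[1])
```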

In fairness to Klein, in a few places, he acknowledges the utility of evidence-based practice:

“Sure, scientific investigations have done us all a great service by weeding out ineffective remedies.” He then gives the apt example of arthroscopic surgery, which controlled studies have found to be no more effective than sham surgery for osteoarthritis of the knee.

Nevertheless, he then undermines this valid point by immediately qualifying it:

“But we are also grateful for all of the surgical advances of the past few decades (e.g., hip and knee replacements, cataract treatments) that were achieved without randomized controlled trials and placebo conditions.”

But wait. How did we determine that these surgical advances worked in the first place? By randomized controlled trials that are then built into evidence-based medicine. Klein’s logic here does not withstand close scrutiny.

I urge readers to apply the same level of scrutiny to the remainder of Klein’s essay.

Klein’s essay is a sobering reminder that even highly intelligent, well-educated individuals can misunderstand the principles of applied science, in this case the rationale underlying evidence-based practice. It is also a reminder that mental health and medical professionals need to speak out whenever they encounter such misconceptions, as public misstatements like Klein’s have the potential to mislead practitioners, students, and the general public.

Which is why I took an hour out of my Sunday evening to compose this blog posting.

References

American Psychological Association. (2005). Report of the 2005 presidential task force on evidence-based practice. Washington, DC: Author.

Atwood, K. (2008, February 15). Prior probability: The dirty little secret of “evidence-based alternative medicine.” Science-Based Medicine. http://www.sciencebasedmedicine.org/prior-probability-the-dirty-little-…

David, D., & Montgomery, G. H. (2011). The scientific status of psychotherapies: A new evaluative framework for evidence‐based psychosocial interventions. Clinical Psychology: Science and Practice, 18(2), 89-99.

Gibbs, L., & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections. Research on Social Work Practice, 12, 452-476.

Lilienfeld, S. O. (2011). Distinguishing scientific from pseudoscientific psychotherapies: Evaluating the role of theoretical plausibility, with a little help from Reverend Bayes. Clinical Psychology: Science and Practice, 18, 105-112.

Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2013). Why many clinical psychologists are resistant to evidence-based practice: Root causes and constructive remedies. Clinical Psychology Review, 33, 883-900.

Sackett, D. L., & Rosenberg, W. M. (1995). On the need for evidence-based medicine. Journal of Public Health, 17, 330-334.
