Psychologically Minded
Bamboozled by Bad Science

The first myth about "evidence-based" therapy

Media coverage of psychotherapy often advises people to seek "evidence-based" therapy.

Few outside the mental health professions realize that the term "evidence-based" has become a form of marketing or branding (see my previous blog post). It refers to therapies conducted by following instruction manuals, originally developed to create standardized treatments for research trials. These pre-scripted or "manualized" therapies are typically brief, highly structured, and almost exclusively identified with cognitive behavioral therapy, or CBT.

Academic researchers routinely extol the "evidence-based" therapies studied in research settings and denigrate psychotherapy as it is actually practiced by most therapists. Their comments range from the hysterical ("The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment."–Professor Walter Mischel, quoted in Newsweek) to the seemingly cautious and sober ("Evidence-based therapies work a little faster, a little better, and for more problematic situations, more powerfully."–Professor Steven Hollon, quoted in the Los Angeles Times). Even former American Psychological Association president Alan Kazdin jumped on the bandwagon, telling Time magazine that psychotherapy is "overrated and outdated" and lamenting that it is hard to find referrals for "evidence-based treatments like cognitive-behavioral therapy."

One might assume from such comments that strong scientific evidence shows that "evidence-based" (read: manualized) therapy is superior to psychotherapy as practiced by most clinicians in the real world.

Does scientific evidence really show this?


Myth #1: “Evidence-based” therapy is more effective than other psychotherapy 

Nearly all the evidence supporting "evidence-based" therapy comes from studies that compare it to no therapy, or to control groups that receive sham therapies, foils that are not designed to be serious alternatives.

This research tells us only that “evidence-based” therapy is better than doing nothing (or doing something that is not meant to be a serious alternative). It does not tell us how "evidence-based" therapy compares to real-world psychotherapy that a person would receive from a qualified mental health professional.

What about studies that compare “evidence-based” therapies to legitimate alternative therapies?  Such studies are scarce but their results are clear and consistent: they show no advantage for “evidence-based” therapies. An analysis published in the prestigious Clinical Psychology Review explored the topic in depth. As control groups more closely approximate legitimate psychotherapy provided by qualified mental health professionals (any kind of legitimate therapy), any apparent advantage for “evidence-based” therapy vanishes. Writing in careful academic language, the authors conclude: “There is insufficient evidence to suggest that transporting an evidence-based therapy to routine care that already involves psychotherapy will improve the quality of services.”1

The same article offers a truly disturbing glimpse into psychotherapy research trials. Interventions provided to control groups and labeled “Treatment As Usual” by the original researchers “were predominantly ‘treatments’ that did not include any psychotherapy.” In other cases, so-called “Treatment As Usual” involved hobbled pseudo-therapy, where therapists were prevented from providing the treatment they normally provide. The authors expressed their frustration with these misleading research practices in, again, understated academic tones: “Training therapists to prevent them from using certain therapeutic actions that are typically employed in their practice cannot logically be classified as a Treatment As Usual.”

Another way to evaluate how "evidence-based" therapies compare to real-world therapy is through naturalistic studies. These studies follow patients treated by ordinary clinicians in their practices. The patients are evaluated before and after treatment to measure improvement, expressed as an effect size. That effect size can then be compared to the effect sizes for "evidence-based" therapies in published research trials.

An especially rigorous naturalistic study, reported in the Journal of Consulting and Clinical Psychology, followed 5,704 depressed patients who received real-world therapy from licensed clinicians covered by their health insurance plans.2 The clinicians were not specially trained or qualified; they were ordinary practitioners with master’s degrees or higher in psychology, marriage and family therapy, clinical social work, psychiatry, or psychiatric nursing—not a “high power” group by any means. The results obtained by the real-world clinicians were no different from those for “evidence-based” therapies in controlled research trials. Five published studies used similar methods to evaluate real-world therapy. Not one showed an advantage for “evidence-based” therapy.

Even these studies overestimate the real benefits of “evidence-based” therapy because published effect sizes for "evidence-based" therapy are skewed by “publication bias”: favorable research findings tend to get published and unfavorable findings tend to be suppressed. Publication bias plagues many areas of research and creates the impression that treatments work better than they really do.

In research on "evidence-based" therapy, the level of publication bias is shocking: an analysis in the British Journal of Psychiatry calculated that published effect sizes for CBT are exaggerated by 60% to 75% due to publication bias.3 In other words, the real benefits are just a fraction of what the research literature portrays. If "evidence-based" and real-world therapy are compared on a level playing field by adjusting for publication bias, real-world therapy appears to be more effective.

Reality: 

Claims that "evidence-based" therapy is more effective than real-world therapy lack scientific basis. Academic researchers have been selling a myth, one that enhances their own careers and reputations, but not necessarily the well-being of patients.

It is not just my conclusion that the therapies promoted and marketed as "evidence-based" confer no special benefits. It is the official scientific conclusion of the American Psychological Association, based on a comprehensive review of psychotherapy research by a blue-ribbon expert panel, and it is spelled out in an official APA policy resolution.

 

Jonathan Shedler, PhD is a Clinical Associate Professor at the University of Colorado School of Medicine. He lectures to professional audiences nationally and internationally and provides weekly clinical supervision and consultation by videoconference to mental health professionals worldwide.



Note: For readers who want more in-depth information about the misunderstandings surrounding "evidence-based" therapy, I am providing a list of key scholarly articles below. They provide the background to evaluate the research literature for yourself:

Wachtel, P. L. (2010). Beyond "ESTs": Problematic assumptions in the pursuit of evidence-based practice. Psychoanalytic Psychology, 27, 251-272.

Parker, G., & Fletcher, K. (2007). Treating depression with the evidence-based psychotherapies: A critique of the evidence. Acta Psychiatrica Scandinavica, 115, 352-359.

Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631-663.

Beutler, L. E. (2009). Making science matter in clinical practice: Redefining psychotherapy. Clinical Psychology: Science and Practice, 16, 301-317.

American Psychological Association (2013). Recognition of psychotherapy effectiveness. Psychotherapy, 50, 102-109.

Duncan, B. L., & Miller, S. D. (2006). Treatment manuals do not improve outcomes. In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence-based practices in mental health: Debate and dialogue on the fundamental questions (pp. 140-149). Washington, DC: American Psychological Association.

 

______________
1Wampold, B. E., Budge, S. L., Laska, K. M., Del Re, A. C., Baardseth, T. P., Fluckiger, C., Minami, T., Kivlighan, D. M., & Gunn, W. (2011). Evidence-based treatments for depression and anxiety versus treatment-as-usual: A meta-analysis of direct comparisons. Clinical Psychology Review, 31, 1304-1312.

2Minami, T., Wampold, B. E., Serlin, R. C., Hamilton, E. G., Brown, G. S., & Kircher, J. C. (2008). Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology, 76, 116-124.

3Cuijpers, P., Smit, F., Bohlmeijer, E., Hollon, S. D., & Andersson, G. (2010). Efficacy of cognitive-behavioural therapy and other psychological treatments for adult depression: Meta-analytic study of publication bias. British Journal of Psychiatry, 196, 173-178.

 

© 2013 by Jonathan Shedler
