Don’t get me wrong. Cognitive-behavioral therapy, or CBT, is a perfectly good treatment, about as effective as other psychotherapies, which is also to say, about as good as psychotherapeutic medications.
But if you think that research has proved CBT is special — that it’s better than what you can get from a competent, dedicated, old-fashioned therapist using what comes to hand, some updated version of Freud, Winnicott, Kernberg, Kohut, and the rest — I want to suggest that the new emperor needs a new wardrobe.
Readers of this blog, or of my books, stretching back to my early writing, will know that I have long had a crotchet about CBT. I will explain these stubborn prejudices in future postings, but for now — and repeatedly in this space as new studies emerge — I want to look at some evidence.
This past spring, Stefan Hofmann, of Boston University, and Jasper Smits, of Southern Methodist, performed a meta-analysis, a mathematically sophisticated roundup review, of research on CBT in the treatment of anxiety disorders in adults. Their results were widely reported as showing that CBT works. It does. But how well?
One answer is that we don’t know. Another might be: it’s a bit of a disappointment.
CBT was developed as an alternative to psychodynamic psychotherapy, the offshoot of psychoanalysis whose main function was the treatment of neurosis, largely what today are called anxiety disorders. In examining the efficacy of cognitive approaches to anxiety, researchers are looking at the core indication for CBT.
The investigators report that after what they call two decades of research — arguably, the history goes back further — they could find only six studies that meet rigorous criteria for quality, or eight, if you lower the standards a bit. (Hofmann and Smits call this cull rate “surprising and concerning.”) In these more scientific studies, the ones that take into account patients who drop out of treatment, CBT proved modestly useful.
For those who know about effect size, a measure that I have mentioned occasionally in these posts, the result for the therapy was .33 when you look at improvement in anxiety symptoms, and apparently lower for depressive symptoms. Effect size measures how well an intervention does relative to the intractability of the problem under study. One informal interpretation has it that an effect size of .2 is small, .5 is medium, and .8 is large. With an effect size of .33, three quarters of treated patients, even if doing somewhat better, would continue to experience symptoms in the range suffered by untreated patients. Early work on psychotherapy found long-term effect sizes of about 1.1, or three times what is here reported for CBT.
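For readers who want to see what an effect size implies in practice, here is a minimal sketch. It assumes normally distributed outcomes in both groups (a common simplification, not something the meta-analysis itself reports) and uses Cohen’s U3: with a standardized effect size d, the fraction of treated patients who end up better off than the average untreated patient is the normal cumulative distribution evaluated at d.

```python
from statistics import NormalDist

def better_than_untreated_average(d: float) -> float:
    """Cohen's U3: fraction of treated patients scoring better than
    the average untreated patient, assuming normal outcome
    distributions with equal spread in both groups."""
    return NormalDist().cdf(d)

# With d = .33 (the figure reported here for CBT), only about 63% of
# treated patients do better than the average untreated patient —
# barely above the 50% you would get from no treatment effect at all.
print(round(better_than_untreated_average(0.33), 2))  # ~0.63

# With d = 1.1 (the early psychotherapy figure), about 86% do.
print(round(better_than_untreated_average(1.1), 2))   # ~0.86
```

The contrast makes the arithmetic in the paragraph above concrete: the difference between .33 and 1.1 is not a matter of degree on the page but the difference between a treatment that barely separates the two groups and one that moves most patients past the untreated average.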
So, an effect size of .33, while positive, is unimpressive. It comes in at about the level of effect sizes for antidepressants tested for depression in the poorly executed drug company trials submitted to the FDA, the ones that have come under such criticism in both the scientific literature and the popular press. For its primary indication, the ailments it was developed to cure, CBT looks like an indifferent treatment.
The reason that reporters were able to say that CBT performed well is that the researchers also looked at less carefully designed studies, ones that ignore attrition rates. Because they suggest where CBT works best, those results are also of interest. I will discuss them in an upcoming post.