
Why I Don’t Read Empirical Studies About Psychotherapy

The analogy between depression and a virus, or anxiety and a bacterial strain, does not stand up to scrutiny.

As psychology bends over to scoop up the dollars that manage to slip past pharmaceutical companies and health insurers, the necessity of entry into the medical establishment for access to that money has led to a call for “evidence-based treatment” and “empirically supported therapies.” Responsiveness to evidence was the great leap forward in medicine. Many people thought, for example, that consuming citrus fruit would prevent scurvy, but it was not until 1747, in the first-ever clinical trial, that this was proven. The field of medicine being populated by, well, people, it took several decades for the idea of an evidence-based approach to catch on. A practitioner could avoid all sorts of trouble by doing what Aristotle or some other luminary said to do, as opposed to worrying about what works. In 1799, George Washington’s physicians bled him to death because, even though he was obviously suffering from blood loss, bleeding was the accepted treatment.

Unfortunately for those who prefer simple treatments, psychological disorders are rarely simple. The analogy between depression and a virus, or between anxiety and a strain of bacteria, does not stand up to scrutiny; every depression is different and context-specific. Researchers running clinical trials for psychological disorders cannot infect randomly selected people with the disorders, a step that was crucial to proving that penicillin works, and it is impossible to construct a double-blind study in which the therapists don’t know what sort of treatment they are actually providing. Outcome measures in medical research involve, for example, inspecting blood samples for the presence of a virus or scans for the size of a tumor. In psychology, outcome studies typically involve asking patients whether they feel better, disregarding any motivations the patients might have to answer one way or the other.

Most published studies in the social sciences are simply incorrect (Ioannidis, 2005). The reasons have to do in large part with Bayesian probabilities: unlikely findings are more publishable, and low pre-existing base rates work against the findings standing up over time. Publishability also underlies other sources of error, especially flexibility in research designs (since researchers are not as careful as they might be if their career goals could wait for them to get it right), researching hot topics (since a larger number of studies is likely to create more false positives), and bias (since positive findings advance the researcher’s career in a way that negative findings do not). To this list, Shedler (2002) adds that clinical researchers are pressured first by tenure considerations and then by granting agencies to get results quickly, leading them to examine shorter, easily coded treatments. He also notes the hostility that researchers often feel toward clinicians who are trained and experienced in ways the researchers are not. How many clinical researchers have put in the 10,000 hours of practice with feedback that it takes to get really good at something? Many respond by scoffing at the idea that there is anything to get good at.
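To make the base-rate point concrete, here is a minimal sketch of the arithmetic behind the Ioannidis (2005) argument. The positive predictive value formula is his; the particular prior odds, power, and significance threshold below are hypothetical, chosen only to show how quickly a low prior probability erodes the credibility of “significant” results.

```python
# Sketch of the base-rate arithmetic in Ioannidis (2005).
# PPV = probability that a "statistically significant" finding is actually true,
# given the prior odds R that the tested hypothesis is true, the study's power
# (1 - beta), and the significance threshold alpha.
# The numbers below are illustrative, not taken from the paper.

def positive_predictive_value(prior_odds, power=0.8, alpha=0.05):
    """Post-study probability that a significant result reflects a true effect."""
    return (power * prior_odds) / (power * prior_odds + alpha)

for prior_odds in (1.0, 0.25, 0.05):  # 1:1, 1:4, and 1:20 odds of a true effect
    ppv = positive_predictive_value(prior_odds)
    print(f"prior odds {prior_odds:>5}: PPV = {ppv:.2f}")

# With 1:20 prior odds (a "hot," unlikely hypothesis), PPV is about 0.44:
# most significant findings would be false even with decent power and no bias.
```

Under these assumed numbers, a hypothesis with even odds of being true yields a PPV around 0.94, but a long-shot hypothesis of the kind that makes for an exciting publication yields a PPV below 0.50, which is the sense in which most such findings can be false.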

Researchers and even clinicians get recognition for developing something new (usually referred to by a three-letter acronym) rather than for refining what is already known. A good example is the behavioristic development of FAP (functional analytic psychotherapy), which is an exact replica of psychoanalytic therapy as practiced in the 1970s; because it disdains psychoanalytic thinking, it has to wait for behaviorists to discover on their own what is already known about transference, framing, and intersubjectivity. Research on FAP outcomes will imply that psychoanalytic treatment is not evidence-based, even though it will be psychoanalytic principles (in different clothing) that this research supports. (I’m a behaviorist, by the way.)

One of the strangest perversions arising from applying a medical model of validation to psychological treatments is the insistence that empirically supported treatments follow a manual. Presumably, this is to ensure that the treatment claiming validation is the treatment that was actually provided. The idea is that the therapist’s knowledge of psychology, wisdom about life, and empathy for the client are all irrelevant, even though these are surely the three qualities you’d most want in your own therapist.

The push for an “evidence base,” inappropriate to the kinds of problems psychologists address, also leads to a conformity of practice, turning us from chefs into sous-chefs, following treatment regimens as if they were recipes. The danger is that we will lose our ability to innovate and to base our work on understanding rather than on manuals. To remind yourself of the fallibility of research evidence, you might keep in mind that the only treatment relevant to our field that has won a Nobel Prize is the lobotomy.

Ioannidis, J.P.A. (2005). Why most published research findings are false. PLoS Med, 2(8): e124. doi:10.1371/journal.pmed.0020124.

Shedler, J. (2002). Why the scientist-practitioner schism won’t go away. The National Psychologist, July/August.

Mostly reprinted from The Colorado Psychologist.
