Did Brief Psychotherapy Keep Psychotic Mental Patients From Returning to a State Hospital?
Did brief therapy reduce mental patients' re-hospitalization by half?
Posted Jul 28, 2011
Claims that a few sessions of psychotherapy can keep psychotic patients out of a state mental hospital grab attention. A report that four or fewer sessions of Acceptance and Commitment Therapy (ACT) significantly reduced the rates of rehospitalization of psychotic patients reverberated in peer-reviewed journals, and actually got noted in Time Magazine.
Patients in a mental hospital who were delusional or hearing voices were randomized to either an ACT intervention or treatment as usual [TAU]. Patients assigned to ACT received up to four sessions designed to teach them to notice their hallucinations or delusions without struggling or arguing with them, or taking them to be true. Instead, they should try to remain focused on goals important to them, like getting on with their lives. The abstract of the paper stated that the patients receiving ACT therapy had "a rate of rehospitalization half that of TAU participants over a 4-month follow-up period."
The 2002 Journal of Consulting and Clinical Psychology article in which the claims were made has already been cited over 100 times in peer-reviewed journals, making it a citation classic. The claim has also proven helpful in promoting Acceptance and Commitment Therapy as a distinctive and powerful new therapy that ushers in "the third wave" of behavior therapy.
I am not aware of any authors citing this paper having expressed doubts about whether keeping patients from being rehospitalized was the primary intent of the trial, or whether significant reductions in rehospitalization actually occurred.
At the time the paper was published, the reporting of results of clinical trials in JCCP was substandard. The journal also had a policy of not accepting critiques of papers published there, no matter what flaws were later discovered in them. The ban on letters to the editor or other critical commentaries was only lifted this year, and it is only retroactive to articles published in the past six months. So, we cannot rely on finding published critiques in the journal where the article was published, but a hundred citations of the paper would seem to provide an ample opportunity for anyone citing the article to express doubts about it.
Yet, most serious flaws in published peer-reviewed papers do not get noted in print, especially when the papers have become highly cited. For instance, I took a critical look at a classic psychotherapy study after it had been cited over a thousand times. I could find only a half-dozen or so hints that authors who cited it had detected important flaws in the report. I went on to publish a detailed analysis whose conclusion is now generally accepted: there was no significant effect in the published paper. I have even raised doubts about whether citations of high-profile papers actually reflect authors having read the papers they cite. To see for yourself, click on the links I have provided.
Anyone interested in learning to be a skeptical sleuth can take three simple criteria and apply them in reading the paper carefully. For convenience, here is a link where you can download single copies of the paper.
The goal is to decide whether the claim of half the rate of rehospitalization is a fair and accurate summary of the results of the trial. In a blog post that I will publish in a week, I will provide my analysis, and you can compare your conclusion with mine or dispute it.
And here are the questions to keep in mind in reading the paper:
What was the primary outcome measure? The gold standard is that investigators document ahead of time - preferably before they have even conducted the clinical trial - that they have settled on one or at most two outcome variables as the primary ones crucial to evaluating the intervention. They should also commit to exactly when this outcome will be assessed, such as immediately after the conclusion of the trial or, alternatively, three months later.
It is not always easy for readers to decide what the primary outcome was in a published study. Investigators often do not designate a primary outcome until after they have seen the results, which is a no-no. Some classic psychological intervention studies have even resorted to inventing a new primary outcome after the investigators examined the results. With the new outcome, a trial without positive results was transformed into one with a positive outcome. Many investigators are reluctant to concede that their favored intervention apparently has not worked, and instead explore alternative endpoints with subgroups of patients measured at different time points. Sooner or later such efforts can often uncover a positive result, but it is likely to be a chance occurrence, unlikely to be replicated in the next study.
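A quick back-of-the-envelope calculation shows why endpoint fishing so reliably "works." This is a minimal sketch with hypothetical numbers, not data from any trial: it simply computes the probability of at least one spurious p < .05 finding when an ineffective treatment is tested against many independent outcomes, subgroups, or time points.

```python
def chance_of_false_positive(n_comparisons, alpha=0.05):
    """Probability of at least one false positive among n independent
    comparisons when the treatment truly has no effect at all."""
    return 1 - (1 - alpha) ** n_comparisons

# With one pre-specified outcome the false-positive risk stays at 5%,
# but it climbs quickly as more endpoints are explored.
for n in (1, 5, 10, 20):
    print(n, round(chance_of_false_positive(n), 2))
```

With twenty exploratory comparisons, the odds of finding at least one "significant" result by chance alone are close to two in three, which is why pre-registration of a single primary outcome matters.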
In recognition of this potential problem, many journals now require registration of the primary outcome before the first patient is even enrolled if the results of the trial are to be published there. If the outcome was not registered ahead of time, the results of the trial cannot later be published in that journal, no matter how good they may be. However, this requirement does not extend to the journals that typically publish reports of psychological interventions, and often a reader has no corroborating evidence of the primary outcome except what was reported in the paper.
Are all patients accounted for? The gold standard for reporting results of clinical trials is the intent-to-treat analysis, in which results for every patient who was randomized, i.e., intended to be treated, are entered into the analyses. Such a strategy allows a direct, simple answer to the question 'What are the effects of assigning patients to this treatment?' There is a high risk of bias if analyses are limited to only those patients who complete treatment or who remain available for follow-up. A classic study of the effects of psychotherapy on the survival of cancer patients excluded one patient who died before receiving treatment and another who was deemed too depressed to benefit from treatment. Widely cited claims that this treatment increased survival time depend on the exclusion of these patients; if they were included, analyses would find no significant effect, which is now the accepted interpretation of this trial: no effects were found on survival.
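The arithmetic behind this bias is simple enough to sketch. The numbers below are entirely hypothetical, not taken from the trial discussed above: they just illustrate how dropping a few randomized patients from the analysis can flatter an intervention that did nothing.

```python
def event_rate(events, n_randomized):
    """Proportion of patients with the event (e.g., rehospitalization)."""
    return events / n_randomized

# Hypothetical trial: 25 patients randomized per arm, 5 rehospitalized
# in each arm. Under intent-to-treat, the two arms look identical.
itt_treatment = event_rate(5, 25)  # 0.20
itt_control = event_rate(5, 25)    # 0.20

# Now suppose 2 treated patients who were rehospitalized are dropped
# from the analysis, as a "completers-only" analysis might do.
completers_treatment = event_rate(3, 23)  # about 0.13

print(itt_treatment, itt_control, round(completers_treatment, 2))
```

Nothing about the treatment changed between the two analyses; only the accounting did, yet the completers-only rate makes the intervention look like it cut rehospitalization by a third.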
What treatment was received by the comparison patients who did not receive the intervention, and are there any other differences in the treatment received by patients in the intervention versus the comparison group that might explain differences in outcomes? Basically, we want to know that differences in outcome between the intervention and comparison groups can be attributed to the patients in the intervention group having received the intervention. We want to be confident that if the control patients had received the intervention, they would have done just as well.
A key threat to validity is that the increased attention and symptom monitoring of patients assigned to an intervention can result in these patients getting more care of other kinds. One trial of psychotherapy for cancer patients involved the therapist meeting with the patient every day and the therapist reporting back to the treatment team the patient's psychological and medical status. Not surprisingly, the patients receiving psychotherapy got more intensive medical care and lived longer, but it was not possible to answer the question of whether psychotherapy itself extended a patient's life by means of some psychological processes. Maybe the effect was simply due to the patient being monitored more carefully and getting more timely and intensive medical treatment.
Having decided on the primary outcome, the number of patients for whom outcome data should be analyzed, and whether the intervention and control groups are comparable, can we accept the authors' claim as fair and accurate? Of course, disagreement with the authors on the answers to any of the questions outlined above could result in our coming to a conclusion different than the one they expressed in their paper.
Does this all sound simple? It is not in practice, as you will readily see when you try your hand at answering these questions as you read the paper. You need to read carefully.
This whole exercise came about when my colleagues and I were trying to decide what outcome data for this trial should be entered into a meta-analysis, which is a synthesis of a number of clinical trials that seeks to summarize their overall effect.
Good luck, skeptical sleuths!