The Skeptical Sleuth

Applying a healthy dose of skepticism to new findings about health and psychology.

More on the Acceptance and Commitment Therapy Intervention That Failed to Reduce Re-Hospitalization

We should have been told that patients died and went to jail.

My last blog post analyzed a classic study of a brief Acceptance and Commitment Therapy (ACT) intervention that was touted in Time Magazine as having reduced by half the re-hospitalization of hallucinating and delusional patients being discharged from a state mental hospital. Although the study had attracted no public criticism, I concluded that the intervention did not have a significant effect on re-hospitalization. My blog post was visited by hundreds of readers, but the lively discussions of it were confined to Facebook and the ACT listserv.

In this post I will continue to discuss the suicides and the jailing of patients that occurred during the study, why these events should have been reported in the article, and why they are so important for interpreting the study. The broader issues are how we should responsibly report the results of clinical trials and how fallible the peer review process can be. We need to be skeptical sleuths even when reading the best scientific journals.


The abstract of the study clearly stated that the re-hospitalization rate for patients who received the ACT intervention was half that of the patients who received treatment as usual. This claim was based on 7 of the patients getting ACT being re-hospitalized versus 14 of the patients in standard care (7/14 is half). Even if we accept the authors' claim that there were 35 patients in each group, this difference is not statistically significant, and so it should not have been reported in the abstract without noting that it was not significant. Simply put, if results are not significant, there are effectively no differences to be reported, and the statistic "half" becomes misleading. But the issues are more complicated than that...
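As a quick check on that claim, here is a minimal sketch in Python of a significance test on the counts as reported (7 of 35 versus 14 of 35). The choice of Fisher's exact test is my assumption for illustration; I am not reproducing whatever analysis the authors ran:

```python
# Quick significance check on the reported re-hospitalization counts:
# 7 of 35 ACT patients vs. 14 of 35 treatment-as-usual patients.
from scipy.stats import fisher_exact

act = (7, 35 - 7)      # (re-hospitalized, not re-hospitalized)
tau = (14, 35 - 14)

odds_ratio, p_value = fisher_exact([act, tau])
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
# The two-sided p-value comes out above .05, so the "half" difference
# trumpeted in the abstract is not statistically significant.
```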

There were not 35 patients assigned to each group; there were 40. In passing, the authors state that "four participants in each condition moved out of the area, and one in each condition died." These patients were dropped from the analyses. However, best practices for reporting clinical trials require what are called "intention to treat" analyses - analyses that include all of the patients who were randomized.

Why is that? First, dropping patients undermines the equivalency of groups that randomization is supposed to achieve. We cannot assume that the patients who are missing are just like the ones who are still around. Second, randomized trials are designed to answer the question, "What is the effect of assigning patients to this treatment?" It is relevant and important information if patients are not around to be assessed after treatment.

However, the problems caused by the authors dropping these patients get worse when we investigate exactly what is meant by "moving out of the area" and "dying." The authors don't tell us in the paper, but I got a copy of the original doctoral thesis and found that 2 of the patients who "moved away" actually went to jail and the 2 patients who "died" actually committed suicide. These are negative outcomes that are certainly relevant to evaluating whether avoiding re-hospitalization was such a good idea.

What should the authors have done? First, they should have told us about the jailings and suicides so we could decide for ourselves. Second, they should have taken the conservative approach of counting missing data as negative outcomes. Adding the five missing patients in each group to the re-hospitalization counts gives 12 versus 19 patients considered "rehospitalized," which is less impressive than 7 versus 14.

How much do these "missing" patients matter? Effect sizes depend on the number of events being explained - in this case, re-hospitalizations. The 5 patients missing from the ACT group represent 5/7, or 71%, of that group's reported re-hospitalizations, and the 5 patients missing from the standard care group represent 5/14, or 36%. These are numbers we cannot afford to ignore.
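To make the arithmetic concrete, here is the same check redone as a conservative intention-to-treat sketch, with each group's five missing patients counted as negative outcomes (again assuming Fisher's exact test; the counts come from the paper and the thesis):

```python
# Conservative intention-to-treat recount: the 5 patients missing from
# each group (jailed, suicide, or moved away) are counted as negative
# outcomes alongside the reported re-hospitalizations.
from scipy.stats import fisher_exact

n = 40                 # 40 patients were randomized per arm, not 35
act_bad = 7 + 5        # 12 negative outcomes in the ACT group
tau_bad = 14 + 5       # 19 negative outcomes in standard care

_, p_value = fisher_exact([(act_bad, n - act_bad), (tau_bad, n - tau_bad)])
print(f"ACT {act_bad}/{n} vs. TAU {tau_bad}/{n}, two-sided p = {p_value:.3f}")
# Still non-significant. The missing patients also loom large relative
# to the reported re-hospitalizations:
print(f"missing / ACT events: 5/7 = {5/7:.0%}")     # 71%
print(f"missing / TAU events: 5/14 = {5/14:.0%}")   # 36%
```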

So, this clinical trial is a negative one, in that it did not demonstrate that an ACT intervention reduced re-hospitalizations. Yet the trial went on to be published in the Journal of Consulting and Clinical Psychology and cited in Time Magazine. Why didn't the reviewers at JCCP notice? It did not help that the manuscript did not mention the suicides or the patients going to jail, but then again, the review process is quite fallible at even the best journals and -- as I will show again and again in future blog posts -- reviewers often don't read carefully, and neither do editors. I strongly suspect that the reviewers and the editor were awed by the claim in the abstract and didn't look further. Sometimes audacity wins, even when the authors are exaggerating or simply mistaken!

Sadly, if the authors had not made exaggerated claims of a positive effect for their intervention, we might never have read about it, or at least we wouldn't have read about it in the Journal of Consulting and Clinical Psychology. Results of all clinical trials should be available, whether positive or negative. We need to know about clinical trials that did not obtain an effect, but there are strong biases against publishing negative trials. Furthermore, with a small trial like this one, the argument can be made that the trial was too small for an effect to be expected, and so negative results are dismissed as uninteresting. So there is a confirmatory bias in publishing small trials: we only find out about them if the authors claim that the intervention was successful. That's why small published trials notoriously fail to replicate, but that's the subject of another blog post.


Postscript

Questions About Patient Safety and Human Subjects Approval. Two suicides and two incarcerations among the 80 patients enrolled in the Bach and Hayes ACT study raise some important issues.

Even outside of a clinical trial, two suicides among 80 discharged patients would precipitate a review of procedures at many hospitals. Likely questions would include "Are patients being adequately screened for suicidality before discharge?" "Could these suicides have been prevented and what procedures need to be in place to avoid them in the future?"

Similarly, hallucinating and delusional patients being incarcerated may reflect police response to bizarre behavior that should have kept patients from being discharged from the hospital. Like suicides, these events should be investigated.

I would have basic questions about the adequacy of treatment as usual in this state hospital. Treatment as usual is not equivalent to standard care if it does not meet conventional standards. I would be unsatisfied with a list of the treatments that are potentially available to patients; I would want to see evidence that patients can reasonably access these treatments with adequate intensity and follow-up during what is undoubtedly a very brief hospitalization.

Most importantly, there is little or nothing to be learned about the specific effectiveness of an intervention if it is being compared to inadequate care. Rather than demonstrating the effectiveness of a specific intervention, a comparison with inadequate care may simply reflect the effects of correcting the inadequacies, not the efficacy of the intervention and certainly not proof of its purported mechanism.

This was not an NIH-funded study, but if it had been, a data safety monitoring board (DSMB) would have been required. One of its functions would have been to investigate negative events such as these as they occurred and determine whether they were relevant to the evaluation of the intervention. Shutting down the trial or substantially modifying procedures would be one possibility. Such decisions should not be left to investigators, but should be made independently by qualified professionals without a vested interest in completing the trial. Moreover, the exacerbation of symptoms that was shown in the intervention group might in itself have warranted investigation or a stopping of the trial.

If I were on the IRB committee asked to review this study for human subjects concerns, I would have asked what evidence there is that such a brief intervention with hallucinating and delusional patients could be expected to reduce their re-hospitalization. There is something implausible about psychotic patients with such severe symptomatology paying attention to, comprehending, or accurately remembering what they were told in a few sessions of psychotherapy. I would be unimpressed by the investigators' theoretical belief that re-hospitalization could be reduced without a reduction in symptoms. And I would wonder about the ethics of reducing hospitalization in the absence of a reduction in symptoms. Overall, if there is insufficient evidence to anticipate a positive effect for this intervention, then patients should not be recruited to a randomized trial evaluating its effectiveness. We are talking about vulnerable mental patients whose rights need to be protected, not laboratory rats.

I would also be concerned about whether reducing re-hospitalization was necessarily a praiseworthy achievement. After all, the deinstitutionalization movement produced thousands of inadequately treated, homeless patients by simply discharging them from state hospitals. If the study were to be approved at all, I would have required that it incorporate better monitoring of patients after they are discharged.

 

Jim Coyne, Ph.D., is a clinical health psychologist and Professor in the Department of Psychiatry at the University of Pennsylvania.
