When researchers begin a clinical trial in medicine, they are meant to pre-identify the outcomes that most interest them. The idea, as health journalist Julia Belluz wrote last year, “is that researchers won’t just publish positive or more favorable outcomes that turn up during the study, while ignoring or hiding important results that don’t quite turn out as they were hoping.”
Clearly, medicine and the public rely on data and evidence that we must be able to trust. Life-or-death decisions cannot rest on published results that were never accounted for in a trial's protocol, or on outcomes that simply did not occur.
But as Belluz noted, revelations last year about the now-infamous “Study 329” of Paxil—which had described the popular antidepressant as “well tolerated and effective” for children when data in the trial indicated the opposite—have shed light on a troubling, widespread problem. Such revelations have also become the centerpiece of efforts to restore scientific integrity to medical trials by correcting data that can deceive doctors and the public.
Thanks to these efforts, we can now begin to gauge precisely how much misreporting is taking place. Earlier this week, the British Medical Journal published a study indicating that almost a third of prespecified outcomes in research protocols had not been reported in submitted papers, including in articles the journal went on to publish. A second, ongoing study, conducted by medical researchers at Oxford as Project Compare: Tracking Switched Outcomes in Clinical Trials, brings even more damaging results to light, including about data published in journals as influential as The Lancet and the BMJ.
Of the 67 clinical trials so far investigated by Project Compare, under the direction of Ben Goldacre, only nine were found to have reported their outcomes perfectly. The remaining 58, the vast majority, showed often-egregious outcome switching, nonreporting, or silent fixing: a total of 301 prespecified outcomes went unreported. Just as significant, in the same batch of trials, 357 new outcomes had been silently added.
The researchers were able to identify these omissions and post-hoc adjustments by counting “how many of the outcomes pre-specified in the protocol or registry were never reported.” They also flagged how many new outcomes were silently added.
On average, they found, each trial “reported just 62% of its specified outcomes.” Meanwhile, and also on average, each trial had “silently added 5.3 new outcomes.”
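The per-trial averages follow directly from the totals cited above. A quick back-of-envelope check, using only the counts reported in this article (the 62% figure cannot be recomputed here, since it depends on the total number of prespecified outcomes, which the article does not give):

```python
# Sanity-check the Project Compare per-trial averages from the
# aggregate counts quoted in the article.
trials_checked = 67        # clinical trials assessed so far
outcomes_added = 357       # new outcomes silently added, all trials combined
outcomes_unreported = 301  # prespecified outcomes never reported

added_per_trial = outcomes_added / trials_checked
unreported_per_trial = outcomes_unreported / trials_checked

print(f"silently added per trial: {added_per_trial:.1f}")  # matches the 5.3 cited
print(f"unreported per trial: {unreported_per_trial:.1f}")
```

Dividing 357 silently added outcomes by 67 trials does indeed yield the 5.3 new outcomes per trial that the researchers report.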
Yet of the 58 follow-up letters sent to medical journals alerting them to these errors, only 6 were published (a figure since updated to 15). Sixteen of the letters were rejected outright, and 31 more had not been published after four weeks.
In the BMJ study, undertaken by investigators at the universities of Liverpool and Oxford and at Queen's University Belfast, the researchers focused on 311 trial manuscripts received by the journal between September 2013 and June 2014, 21 of which were subsequently published. They found that "27% of the prespecified outcomes in the protocol were not reported in the submitted paper and 11% of reported outcomes were not prespecified." In the sample of 21 trials rejected by the BMJ, moreover, "19% of prespecified outcomes went unreported and 14% of reported outcomes were not prespecified."
Like the researchers at Project Compare, the authors of the BMJ study recommend that it and other leading journals mandate "the prospective registration of a trial." They also urge that researchers be required to upload their protocols when submitting trial articles for review.
Doing so would increase transparency and lower the risk of bias, distortion, and misreporting. Yet the ongoing revelations by Project Compare suggest that, currently, only a handful of medical journals seem to mind that the clinical trials forming the backbone of their articles report, on average, just 62% of their specified outcomes while silently adding as many as 5.3 new ones.
On the strength of these studies, concern about bias and insufficient transparency in medical research appears to be well justified. If the results of both studies can be replicated across medicine, as Project Compare aims to discover, the scale of the problem may turn out to be vast.
Slade, E., H. Drysdale, and B. Goldacre. "Discrepancies between prespecified and reported outcomes." BMJ, 11 November 2015. http://www.bmj.com/content/351/bmj.h5627/rr-12
Weston, J., K. Dwan, D. Altman, M. Clarke, C. Gamble, S. Schroter, P. Williamson, and J. Kirkham. "Feasibility study to examine discrepancy rates in prespecified and reported outcomes in articles submitted to The BMJ." BMJ Open 2016;6(4):e010075. doi:10.1136/bmjopen-2015-010075; http://bmjopen.bmj.com/content/6/4/e010075