Early this year, the New England Journal of Medicine showcased research concluding that when it came to trials of antidepressant drugs, only studies finding positive results were likely to find their way into print. This week, Eli Lilly cried foul. The pharmaceutical giant, best known as the manufacturer of Prozac, sent a mass mailing to doctors and other health care providers saying that two negative studies the Journal counted as “unpublished” had, in fact, been published twice. In addition, the letter continued, the inconclusive data (from trials in which Cymbalta, or duloxetine, failed to demonstrate efficacy) had been presented at medical meetings and posted on Lilly’s own website.
Who’s right? The skeptics or the drug industry? The answer turns on the meaning of the word published.
I reviewed the Journal study when it appeared. The authors—Erick Turner, of Oregon Health & Science University, and others—had examined fifteen years of data submitted to the Food and Drug Administration about new antidepressants and found that while only half of the early trials demonstrated that the medications worked, almost all the studies in print showed positive results. But as I indicated at the time, Turner was finicky about what he counted as a published study. I wrote: “if researchers aggregated data, combining a highly positive study and an inconclusive one into a pooled analysis yielding overall moderate results, Turner counts the inconclusive study as having been misreported, even when the monograph indicates up front that data sets have been merged.”
It turns out that Turner was yet more demanding. In its fashion, Lilly, or researchers working with its drugs, had published accounts of the two trials in which duloxetine did poorly. The scientists reported the inconclusive trials within overview papers, but in the graphs and text of those papers, some of the data on the trials did appear separately, not merged with the results of other research. Since Turner counted only full monographs devoted to a single study, he listed the two inconclusive trials as “unpublished.”
When the press reported on the Turner study, they tended to focus on the word Prozac. But as I wrote in my critique, Prozac was not at issue. All the FDA data on Prozac had long since been made public. Lilly was on the hook only for Cymbalta, a new drug that resembles the old Elavil in affecting two different neurotransmitters, serotonin and norepinephrine. So: were the data on Cymbalta available?
Yes and no. In the fall of 2002, Charles Nemeroff and others published an article in Psychopharmacology Bulletin summarizing six studies of duloxetine. There, in a table, are “Study 4” and “Study 5” over columns filled (in the case of study 4) or dotted (study 5) with “NS,” for “not significant,” indicating measures on which the effects of duloxetine could not be distinguished from those of placebo. Study 6 was similarly unimpressive at a lower dose of duloxetine, 40 milligrams daily, although there, 80 milligrams looked effective. In every study, the drug did better than placebo, though not always at a level that would assure scientists that the result was not due to chance. The report contains at least one odd analysis. The scientists calculated the likelihood that depression would “remit”—that an episode would end cleanly—based only on the trials in which duloxetine worked (omitting studies 4 and 5). But though the wording in the report is not crystal clear, the other major analyses appear to have included all the data. In particular, Nemeroff and his colleagues calculated effect sizes, the measure that forms the basis for Turner’s later critique.
The other, probably more widely circulated report—it appeared in a 2003 “Primary Care Companion” to the Journal of Clinical Psychiatry—was less forthcoming. It mentioned the inconclusive studies, but its charts were largely based on the trials in which duloxetine looked stronger. Like the prior report, this one also devoted space to the more marginal issue of bodily pain accompanying depression, a “hook” that Lilly used to market Cymbalta to primary care physicians. (My impression is that it remains unclear whether Cymbalta has an advantage over other antidepressants in helping with bodily pain.)
The FDA approved Cymbalta in August of 2004. In December of 2004, Lilly posted the full data from the inconclusive trials on its website.
Was Lilly wronged? Or did the New England Journal authors merely set the bar for transparency appropriately high?
The published articles were timely, and they disclosed every trial. But they did not contain extensive data. The 2003 article in particular put the drug in a highly favorable light; once mentioned, the inconclusive studies were largely set aside. Certainly, outside statisticians would have had difficulty reanalyzing the reported results or integrating the findings with those of other studies. Then again, the complete data were made available to the scientific community within months of Cymbalta’s coming to market. For now, it appears that Lilly was the most “transparent” of the pharmaceutical houses whose reporting came under scrutiny in the Journal article.
As for the rights and wrongs of the dispute, fair judges might hand down a split decision. The Journal article failed to indicate how forthcoming Lilly had been, but the researchers who studied Cymbalta could have produced a more even-handed account of the evidence.