The $3 million DSM 5 Field Trials have been a disaster from start to finish. First, there was the poor choice of design. The study restricted itself to reliability – the measurement of diagnostic agreement among different raters. Unaccountably, it failed to address two much more crucial questions: who would be diagnosed under DSM 5, and how much its dramatic lowering of diagnostic thresholds would increase the rates of mental disorder in the general population. There was no excuse for not asking these simple-to-answer and vitally important questions. We have a right to know how much DSM 5 will contribute to the already rampant diagnostic inflation in psychiatry, especially since this risks even greater overuse of psychotropic drugs.
Second problem – the design of the DSM 5 field trial had a byzantine complexity that could be dreamed up only by people with no experience in real-life field testing. One look made clear that there would be serious implementation problems and that it would be impossible to complete within the time allotted. The first stage of the field trial limped in eighteen months late, having taken twice as long as scheduled. APA then had to choose between delaying the publication of DSM 5 and canceling its planned second stage of field testing, which was meant to provide desperately needed quality control. APA decided to cancel the trial and is instead rushing ahead with the premature publication of DSM 5 next May – publishing profits clearly trumped concern for the quality and integrity of the product. Fiduciary responsibility was thrown out the window.
According to the authors, 14 of the 23 disorders had “very good” or “good” reliability; 6 had questionable but “acceptable” levels; and just 3 had “unacceptable” rates. This sounds okay until you look at the actual data and discover that the cheerful words used by the DSM 5 leaders simply don't fit their extremely disappointing results. The paper is a classic example of Orwellian “newspeak”. When DSM 5 failed to achieve acceptable reliability by historical standards, the DSM 5 leadership arbitrarily moved the goal posts in and lowered the bar in defining what is “acceptable”. In fact, only 5 of the 23 DSM 5 diagnoses achieved kappa levels of agreement between 0.60 and 0.79 – the range that would have been considered merely “good” in the past. DSM 5 cheapens the coinage of reliability by hyping these merely okay levels as “very good”. Then it gets much worse. The 9 DSM 5 disorders in the kappa range of 0.40–0.59 previously would have been considered just plain poor, but DSM 5 puffs these up as “good”. Then DSM 5 has the chutzpah to call “acceptable” the 6 disorders that achieved absolutely unacceptable reliabilities, with kappas of 0.20–0.39. DSM 5 finally finds unacceptable only the 3 diagnoses with kappas below 0.20, which is barely better than chance.
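For readers unfamiliar with the statistic: kappa measures agreement between raters corrected for the agreement that would occur by chance alone, so 1.0 is perfect agreement and 0 is chance level. A minimal sketch of Cohen's kappa for two raters follows; the rating data are made up for illustration and are not drawn from the field trials:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of cases where the two raters gave the same rating.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of each rater's
    # marginal frequency for that label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0)
                   for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings: 1 = diagnosis present, 0 = absent.
a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Note how deceptive raw agreement can be: these raters agree on 80% of cases, yet the chance-corrected kappa is only 0.58 – a level DSM 5 now labels “good” but that historically would have been considered poor.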
Major Depressive Disorder and Generalized Anxiety Disorder were among those that achieved unacceptable kappas in the 0.20–0.39 range. This makes sense for GAD, because its DSM 5 definition was so poorly done. But how to explain the ridiculously low level of agreement for MDD? DSM 5 made no changes to the MDD definition, whose reliability has been studied hundreds of times in the past 30 years and has always achieved kappas about twice as high. The only possible explanation for the egregiously poor MDD result is amateur incompetence in how the DSM 5 field trials were conducted – and this throws into doubt all of the other results (and all of DSM 5).
It is sad that the American Journal of Psychiatry agreed to publish this sleight-of-hand interpretation of the remarkably poor DSM 5 field trial results. Clearly, AJP has been forced into the role of a cheerleading house organ, not an independent scientific journal. AJP is promoting an APA product instead of critically evaluating it. Scientific journals all have some inherent conflicts of interest, but this is ridiculous.
The DSM 5 field trial fiasco and its attempted cover-up are more proof (if any were needed) that APA has lost its competence and credibility as custodian of the DSM. A diagnostic system that affects so many crucial decisions in our society cannot be left to a small professional association whose work is profit driven, lacking in scientific integrity, and insensitive to the public weal.