False Assumptions in Personality Disorder Research, Part IV

Differences may not be abnormalities, and not every group member is average.

Posted Aug 17, 2018

Source: Wikimedia Commons, Face Recognition by NIH, public domain

This is the fourth and last in a series of posts discussing false and unacknowledged assumptions that are rampant in the personality disorders research literature and that lead to false or misleading conclusions. I presented this information during a panel discussion on personality research at the 2018 annual meeting of the American Psychiatric Association in New York City.

False Assumption #6: Confusion between differences and abnormalities: Ignorance of neural plasticity in fMRI interpretation.

fMRI machines can map both brain structure and brain function: they measure magnetic fields, and the iron in the blood passing through the brain creates a magnetic field of its own. Researchers use fMRI scans to compare certain brain structures and levels of brain activity, particularly in the primitive part of the brain called the limbic system, in some diagnostic group with matched controls or "normals." For instance, an important brain structure called the left amygdala is smaller, on average, in patients who exhibit the signs of borderline personality disorder (BPD) than in "normals."

Of course, they are comparing averages, so the left amygdala in some BPD patients is larger than that of the average "normal." Notice also that the scientists only occasionally compare different diagnostic groups with each other. Differences in amygdalar size and activity are found in any number of different diagnostic groups in psychiatry.
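To see how a difference in group averages can coexist with heavy individual overlap, here is a minimal sketch in Python. The volumes, means, and spreads are purely hypothetical and chosen only for illustration; they are not taken from any actual study.

```python
# Minimal sketch (hypothetical numbers): two groups can differ "on average"
# while many individuals in the "smaller" group exceed the other group's mean.
import random
import statistics

random.seed(0)

# Assumed, illustrative left-amygdala volumes in mm^3; same spread, different means.
normals  = [random.gauss(1700, 200) for _ in range(1000)]
patients = [random.gauss(1600, 200) for _ in range(1000)]  # smaller on average

normal_mean = statistics.mean(normals)
print(f"mean volume, normals:  {normal_mean:.0f} mm^3")
print(f"mean volume, patients: {statistics.mean(patients):.0f} mm^3")

# Despite the lower group mean, a large share of individual "patients"
# still have a left amygdala larger than the average "normal."
share_above = sum(v > normal_mean for v in patients) / len(patients)
print(f"patients above the normal average: {share_above:.0%}")
```

In a simulation like this, roughly a third of the "patients" come out above the "normal" average, which is exactly why a statement about group means tells you very little about any one person in the scanner.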

An even bigger source of misleading conclusions is that when a difference is found between a diagnostic group and "normals," that difference is automatically labeled an abnormality. If patients have an abnormality, then of course they must have a brain disease. Actually, these scientists do not know whether what they have found is an abnormality at all. What makes the use of the term "abnormality" totally misleading is that the brain, particularly its limbic system structures, is plastic. This means that, in the normal brain, these structures can change in size and activity level to reflect behaviors that have become important to a given individual. The changes can be very quick and substantial.

A recent article by Siddarth et al., for example, found that people who were sedentary had reduced thickness of the medial temporal lobe, a region of the brain linked to memory, something that also occurs in normal aging. In the February 2010 issue of the Archives of General Psychiatry (Volume 67 [2], pp. 133-143), Pajonk, Wobrock, Gruber, et al. found that after just three months of a vigorous exercise program, the size of a brain structure called the hippocampus increased by an average of 16 percent in normals! It is also true that the part of the brain that controls finger movements is, on average, much larger in concert violinists than in non-musicians. Last time I looked, neither being a concert violinist, exercising, nor being sedentary is a disease.

False Assumption #7: The Ecological Fallacy: Different people do not respond in an identical way to a specific psychotherapy intervention.

The ecological fallacy is a logical fallacy in which inferences are made about individuals based on data that characterize the entire group to which those individuals belong, typically the group's average on various measures. I will illustrate this with a prime example: the fallacy is rampant in studies that attempt to compare how successful two different types of psychotherapy are for the same disorder.

This type of study is actually relatively uncommon, as most psychotherapy outcome studies compare a treatment not with a second specific type of treatment but with a "control" condition such as being on a waiting list or "treatment as usual" (letting patients get whatever treatment they want, or none at all). Those control conditions are also of highly questionable validity, but that is a matter not relevant to this discussion.

In those few studies that compare one school of therapy with another, an interesting statistic is that 85 percent of the time, the treatment favored by the person designing the experiment "wins" and outperforms the other treatment [Luborsky, L., Diguer, L., Seligman, D. A., et al. (1999). "The Researcher's Own Therapy Allegiances: A 'Wild Card' in Comparisons of Treatment Efficacy." Clinical Psychology: Science and Practice, 6, 95-106]. This is due to something called the allegiance effect: the more enthusiastic a therapist in a study is about their own school, the better the patient tends to do.

But even setting aside this clear-cut sign that research conclusions in comparative outcome studies are inherently misleading, let us suppose that in one arm of the study, 45 percent of the patients improve significantly on some characteristic, while in the other arm, only 30 percent do. The conclusion of the researchers: the first treatment is superior. Wrong.

This conclusion presupposes that all patients react to the treatments in roughly the same way, despite the fact that the majority (or at least a significant percentage) of patients in both arms of the study did not improve. It is quite likely that some patients are more comfortable, and do much better, with one of the treatments than with the other. There is no way to know for certain, but it is quite possible that the 45 percent of people who respond to the allegedly superior treatment differ in many respects from the 30 percent who respond to the allegedly inferior one. In fact, the subjects who responded to the supposedly inferior treatment might have had no response at all to the supposedly superior one.
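Here is a minimal sketch in Python of how that could work. It assumes, purely for illustration, three latent patient "types" that neither arm of the study records: patients who respond only to treatment A, patients who respond only to treatment B, and patients who respond to neither. The proportions are hypothetical, chosen to mirror the 45 percent versus 30 percent example above.

```python
# Sketch of the ecological fallacy in a two-arm outcome study (hypothetical numbers).
# Latent patient types, invisible to the researchers:
#   "A-responder"   (45%): improves only with treatment A
#   "B-responder"   (30%): improves only with treatment B
#   "non-responder" (25%): improves with neither
import random

random.seed(1)
TYPES = ["A-responder", "B-responder", "non-responder"]
WEIGHTS = [0.45, 0.30, 0.25]

def improvement_rate(treatment, n=10_000):
    """Fraction of patients in one arm who improve, given the latent types."""
    improved = 0
    for _ in range(n):
        patient_type = random.choices(TYPES, WEIGHTS)[0]
        if patient_type == f"{treatment}-responder":
            improved += 1
    return improved / n

print(f"Arm A improvement rate: {improvement_rate('A'):.0%}")  # roughly 45%
print(f"Arm B improvement rate: {improvement_rate('B'):.0%}")  # roughly 30%
# The group-level comparison says "A is superior," yet every B-responder in this
# model would have gotten nothing out of treatment A.
```

Under these assumptions, the group averages reproduce the "A beats B" headline, even though a third of the improvers in the whole sample improve only with the "inferior" treatment. The data call for matching patients to treatments, not for crowning a winner.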

A well-designed study, on the other hand, would recognize these differences. It would look for commonalities and response patterns, or the lack of them, within the four groups that make up such a study: those who got better with treatment A, those who got better with treatment B, those who responded poorly to treatment A, and those who responded poorly to treatment B. The researchers could then match each patient with the type of therapy they seemed to do best with, and then, and only then, compare outcomes. Even then, researchers would not be figuring in the behavior, demeanor, tone of voice, and even the appearance of the therapist delivering the interventions, all of which greatly affect a patient's reaction to them. There is almost no way to quantify that with any degree of validity.

In other words, as I have pointed out elsewhere, no matter what therapy intervention you use, some people will improve with it, while others either will not improve or may even get worse! Different strokes for different folks, people.