There's Madness in Our Methods

Part 4. Matching methods to important questions is the key to understanding.

Posted Nov 01, 2019

This is the fourth article in a five-part series discussing different aspects of the current mainstream, biomedical approach to mental health diagnosis. Earlier versions of these five articles were part of an online forum called the “Global Summit on Diagnostic Alternatives,” which ceased operating in 2015 and is no longer accessible.

When scientific methods are used appropriately, that is, when the method matches the question, the results can be extraordinary. We have learned how to send small numbers of people to outer space and large numbers of people from continent to continent. We can build bridges that span great distances and withstand the vagaries of weather. We have also made humongous advances in understanding and treating serious conditions, such as poliomyelitis, smallpox, and tuberculosis.

Our achievements in mental health, however, are much less monumental. There is still no widespread consensus on what mental health problems are, much less what causes them. And we are still a long way from being able to systematically match treatments with problems to produce high rates of effective and efficient outcomes.

One reason for our lack of substantial progress could be the misalignment of methods and questions. Currently, in many areas, we are using the wrong methods for the questions we are interested in. There is nothing wrong with the methods per se, and there is nothing wrong with the questions being asked. There is something very wrong, however, in expecting methods to answer questions they are not designed to answer.

It's a bit like going to a Chinese restaurant and ordering a pizza. There's nothing wrong with dining in a Chinese restaurant, and there's nothing wrong with ordering pizza. The problem lies in the mismatch between the order and the setting.

In research, we need to ensure our methods line up with the questions we are asking to guarantee we serve up the best results possible.

When methods are used for inappropriate purposes, the quality of the knowledge that is generated is severely compromised. Our current state of fragmented, contentious, and limited understandings may be a direct result of using our research tools inappropriately. It would be considered silly to use a hammer to chop firewood; using research tools for jobs they were never designed to do should be considered just as silly.

One area in which our findings are fundamentally and fatally flawed is the study of DSM disorders. The investigations typically go something like this:

1. Allocate people to various diagnostic categories, such as depression and schizophrenia, according to the diagnostic criteria that are believed to be important and relevant to each category.

2. Look for characteristics that are similar within groups and different between groups.

3. Use the findings as evidence for the legitimacy of the categories.

Research that uses the very criteria under investigation to create the categories it then investigates presupposes the authority of those categories. The reasoning is circular, so such studies cannot provide meaningful information about whether the categories are correct.
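For readers who like to see this circularity spelled out, here is a small illustrative sketch in Python. The "symptom scores" are entirely invented noise; the point is only that groups built from a criterion will always differ on that criterion, which tells us nothing about whether the groups are real entities.

```python
# A toy demonstration of the circularity: even with entirely random data,
# groups formed from a set of criteria will, of course, differ on those criteria.
import random
import statistics

random.seed(2)

# Invented "symptom scores" for 1,000 people: pure noise, no real categories.
people = [{"low_mood": random.gauss(0, 1), "sleep": random.gauss(0, 1)}
          for _ in range(1000)]

# Step 1: allocate people to a "category" using the criteria themselves.
category_a = [p for p in people if p["low_mood"] > 0.5]
others = [p for p in people if p["low_mood"] <= 0.5]

# Step 2: look for characteristics that differ between the groups.
mean_a = statistics.mean(p["low_mood"] for p in category_a)
mean_o = statistics.mean(p["low_mood"] for p in others)
print(f"Mean low-mood score, category A: {mean_a:.2f}; everyone else: {mean_o:.2f}")

# Step 3: the groups differ, but only because we built them that way, so the
# difference is not evidence that "category A" picks out a real condition.
```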

The groups of people who participate in research (called "samples") also influence how widely the results of that research can be generalized. People who volunteer for research may differ, in important ways, from people who don't volunteer. Perhaps the most important point, though, is that we can never be sure how different they are, or in what ways they differ. This isn't necessarily dreadful, but it does mean the results might not apply to all people in any particular population.

If we survey people in a mental health service, for example, and 90 percent of the people who complete the survey report that when they were in primary school, they were “bedoodled,” are we justified in concluding that 90 percent of people with mental health problems are bedoodled in childhood? No, we're not. Even if all of the people accessing this mental health service completed the survey, we would still only know that 90 percent of the people accessing this mental health service reported experiencing bedoodling. We don't know anything about people accessing other mental health services, and we don't know anything about people not accessing mental health services. We don’t even know how many of the 90 percent of people reporting that they experienced bedoodling actually were bedoodled.

I’m not suggesting that research participants intentionally lie to researchers on any widespread scale, but I am suggesting that there might be reasons for endorsing bedoodling on a mental health survey other than the fact that the bedoodling actually occurred. My point here is that it is often an unquestioned assumption that the information participants provide on surveys is completely accurate. People have purposes, and there could be numerous purposes for completing a survey other than providing an accurate portrayal of a segment of one's past.

Unfortunately, the vast majority of research ignores purposes and focuses, instead, on what can be observed or what is reported, whereas genuine and important progress might come from exactly the opposite approach. It is the study of purpose, what it is, and how it works, that should underpin our research (Marken, 2014). 

Although techniques can be used to improve the representativeness of samples, a sample will only ever be representative according to certain variables identified as important and relevant by the researcher. We might not even know what the most important variables are to ensure that people who volunteer for research and endorse bedoodling on a survey are representative of people who do not volunteer, or of people who do volunteer, but who are reluctant to indicate they were bedoodled in childhood. If, for example, we somehow discovered that 90 percent of the general population also report childhood bedoodling, then we wouldn’t have learned very much about the manifestation of severe psychological distress.
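To make the sampling problem concrete, here is a toy simulation in Python. Every number in it is invented (and "bedoodling" remains as fictitious as ever); it simply shows how a self-selected survey can return a 90 percent figure even when the rate in the wider population is much lower.

```python
# A toy illustration of self-selection: invented numbers only.
import random

random.seed(1)

POP_SIZE = 100_000
TRUE_RATE = 0.30  # assumed rate of childhood "bedoodling" in the general population

# Hypothetical chances of ending up in the survey (attending this service and
# completing the questionnaire), depending on whether the experience occurred.
P_SURVEYED_IF_BEDOODLED = 0.020
P_SURVEYED_IF_NOT = 0.001

responses = []
for _ in range(POP_SIZE):
    bedoodled = random.random() < TRUE_RATE
    p_surveyed = P_SURVEYED_IF_BEDOODLED if bedoodled else P_SURVEYED_IF_NOT
    if random.random() < p_surveyed:
        responses.append(bedoodled)

survey_rate = sum(responses) / len(responses)
print(f"Rate in the population: {TRUE_RATE:.0%}")
print(f"Rate among survey respondents: {survey_rate:.0%} (n = {len(responses)})")
# Typical output: around 90 percent among respondents, despite a 30 percent
# rate in the population the respondents came from.
```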

We must be careful to ensure that the conclusions we draw are appropriate, given the information we obtained from the people who participated.

Even when we carefully match our methods and our questions and are also very circumspect about the conclusions we draw, there is still a certain madness in the relationship between the methods we use and the information we want. Statistical methods are designed to help us understand populations. The direction of inference is from the sample to the population (Blampied, 2001).

Inferring from a sample to a population, however, runs in exactly the opposite direction from the inference we are generally most interested in making. Mostly, we want to make inferences about individuals from the research we have conducted. That is, the direction in which we would most like to infer is from the sample to the individual. Statistical methods will not allow us to do that.

So, with very high-quality research, we might be able to draw conclusions about a certain treatment being effective, in general, for particular problems. We might also be able to conclude that certain experiences from the past are reliably associated, in a general sense, with current problems. The qualifier "in a general sense" is often overlooked, but is, in fact, critical.

Our current methods do not allow us to specify, with any level of precision, the likelihood that a given treatment will work with a particular individual. These methods also won’t enable us to predict with high levels of accuracy the extent to which individuals have or do not have certain experiences in their past—or anything else, for that matter.

Phil Runkel’s (1990) excellent book Casting Nets and Testing Specimens is a detailed and masterful analysis of some of these problems. Runkel describes statistical approaches to research as “casting nets” methodologies. According to Runkel, statistical methods are the most appropriate means of finding out how much something occurs in a population.

Very well-conducted statistical research will enable us to infer, with specified levels of precision, the rate at which we could expect a particular event or characteristic to occur in a population. Statistical methods, however, won’t allow us to specify how likely it is that any particular individual will have that characteristic or will have experienced the specified event. Statistical methods also won’t allow us to learn how certain events relate to particular behaviors.
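By way of illustration, here is a minimal sketch, with invented figures, of the kind of inference casting-nets methods do support: estimating a population rate from a sample, using a simple normal-approximation confidence interval.

```python
# Estimating a population rate from a sample (invented figures, for illustration).
import math

n = 200      # hypothetical number of randomly sampled respondents
count = 180  # hypothetical number endorsing the item

p_hat = count / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
z = 1.96     # roughly a 95 percent confidence level

lower, upper = p_hat - z * se, p_hat + z * se
print(f"Estimated population rate: {p_hat:.0%} (95% CI {lower:.0%} to {upper:.0%})")

# The interval is a statement about a rate in a population, and it only holds if
# the sample really was drawn at random from that population. It says nothing
# about whether any particular individual had the characteristic or experience.
```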

To understand accurately and precisely how things occur or why things happen in particular ways, we need to use “testing specimens” methodology. In this methodology:

1. An explanation of why or how something occurs is proposed.

2. The explanation is expressed in terms that allow a functional model to be built.

3. The behavior of the model is compared to the behavior being investigated.

If there is not a very close match between the actual behavior and the behavior of the model, then it is assumed that the explanation is wrong, and the researcher returns to the drawing board to modify and improve the model.

To make major advances in our understanding and treatment of psychological distress, we need to remove the madness from our methods and restrict our use of casting nets methodology to the purposes for which it was designed. We also need to begin to incorporate much more testing specimens methodology into our research practices. We need to demand the construction of functional models rather than relying solely on conceptual or statistical models for the generation of knowledge. Perceptual Control Theory (PCT; Powers, 2005) is an excellent example of what is possible with a model-building approach. 
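For the curious, here is a bare-bones sketch of what those three steps can look like in practice, using a simple PCT-style tracking model written in Python. Everything here is invented for illustration, including the "observed" trace, which in real research would come from a participant rather than from a second run of the model.

```python
# A minimal "testing specimens" loop with a simple control model of tracking.
import math
import random

random.seed(0)

def target(t):
    # A smoothly moving target that the person is asked to keep the cursor on.
    return math.sin(t / 20.0) + 0.5 * math.sin(t / 7.0)

def run_control_model(gain, slowing, steps=600, noise=0.0):
    """Simulate a control loop: output changes so as to keep the perceived
    cursor position close to the target (the reference)."""
    output = 0.0
    trace = []
    for t in range(steps):
        perception = output                # the cursor position the person sees
        error = target(t) - perception     # discrepancy from the reference
        output += slowing * (gain * error - output) + noise * random.gauss(0, 1)
        trace.append(output)
    return trace

# Steps 1 and 2: propose an explanation and express it as a functional model.
model_trace = run_control_model(gain=8.0, slowing=0.10)

# Stand-in for observed behavior (a "participant" whose parameters we don't know).
observed_trace = run_control_model(gain=6.0, slowing=0.12, noise=0.02)

# Step 3: compare the model's behavior with the behavior being investigated.
rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model_trace, observed_trace))
                 / len(model_trace))
print(f"Root-mean-square difference between model and observed traces: {rmse:.3f}")
# A poor fit sends the researcher back to revise the explanation and the model,
# rather than back to collect a larger sample.
```

The point is not this particular model; it is the workflow. The model either reproduces the behavior or it doesn't, and when it doesn't, it is the explanation that gets revised.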

PCT is a scientific approach to understanding human functioning that is, in many ways, swimming against the tide, because the focus of PCT is purpose (or control) rather than the observed or reported behavior that, as I mentioned earlier, is the current focus of most research. There is a growing literature on research informed by PCT, which provides, along with Powers’ pioneering and groundbreaking work, an excellent foundation from which to coax the flourishing of a new science of what we do and how we do it (e.g., Marken, 2014; Carey, Huddy, & Griffiths, 2019).

It is madness to use methods to answer questions they were not designed to answer. By using methods for the purposes for which they were designed, mental health research might begin to build its own sturdy bridges between research and practice and shoot for stars that are currently impossibly out of reach.

References

Blampied, N. M. (2001). The third way: Single-case research, training, and practice in clinical psychology. Australian Psychologist, 36(2), 157-163.

Carey, T. A., Huddy, V., & Griffiths, R. (2019). To mix or not to mix? A meta-method approach to rethinking evaluation practices for improved effectiveness and efficiency of psychological therapies illustrated with the application of Perceptual Control Theory. Frontiers in Psychology, 10, 1445.

Marken, R. S. (2014). Doing research on purpose: A control theory approach to experimental psychology. St. Louis, MO: New View Publications.

Powers, W. T. (2005). Behavior: The control of perception (2nd ed.). New Canaan, CT: Benchmark.

Runkel, P. J. (1990). Casting nets and testing specimens: Two grand methods of psychology. New York: Praeger.