As a member of the Board of the New Jersey Association for Infant Mental Health, I recently received an e-mail urging me to distribute some material about an intervention that was described as "evidence-based". Although the intervention seemed sensible and by no means potentially harmful, I was intrigued by the sender's stress on the term "evidence-based". Where does this label fit into the therapeutic and educational world nowadays? Why do we hear it so often? When some methods are described this way, does that imply that there is simply no evidence behind other treatments?
Over the last ten or fifteen years, a movement toward the use of treatments deemed evidence-based has been on the rise in the medical field. Much effort has gone into defining the term with respect to medicine, and the criteria are quite strict, involving experimental studies in which patients are assigned randomly to treatments, as well as double-blind conditions in which patients do not know what treatment they are receiving, and neither do the staff members who evaluate them nor the statisticians who make comparisons between the groups. Medical treatments are not properly described as "evidence-based" unless they are supported by such careful studies. The fact that a physician used a method before, and liked it, is not acceptable as evidence of this kind.
A second factor in the development of the "evidence-based" approach was that both private and public health insurers have taken a serious interest in this idea. Insurance payment or reimbursement for treatment is much less likely if research evidence does not support the effectiveness of the method.
When psychologists began to apply the concept of evidence-based treatment to their own work, some problems of research design became apparent. Whether people were studying psychotherapies, early intervention techniques, or teaching methods, it was often difficult or impossible to meet the standards applied in medical research. For example, it is not always possible to assign people randomly to treatments, especially if they are paying for the treatment they want. Adults usually have a reasonably good idea about the type of treatment they or their children are receiving, and of course the practitioner working with them knows what he or she is doing. If information is collected from small clinics or private clients, it may be quite difficult to employ an evaluator who assesses the outcome of the treatment but knows nothing about the client. As a result, only large, well-funded, well-planned research projects can assess the evidence for a treatment method in anything comparable to the way medical outcomes are studied.
Because of this, discussions of evidence for the effectiveness of psychological methods often focus on the quality of the existing evidence, rather than considering only two categories (evidence-based treatments and non-evidence-based treatments). Discussions of the quality of evidence often use the expression "levels of evidence", implying that high levels of evidence give us confidence in the usefulness of a treatment, while lower levels do not.
A couple of years ago, Monica Pignotti and I published a paper suggesting a list of levels of evidence ranging from the highest level of treatment acceptability to the lowest (Mercer, J., & Pignotti, M. (2007). Shortcuts cause errors in systematic research syntheses: Rethinking evaluation of mental health interventions. Scientific Review of Mental Health Practice, 5(2), 59-77). We suggested that the term "evidence-based" be reserved for treatments supported by research evidence drawn from studies with random assignment to treatments, along with other important characteristics such as corroboration by independent researchers, not just by enthusiastic advocates. But, because very few psychological interventions have established research evidence at that level, we also suggested that a "research-supported" category be used. This would allow consideration of studies in which clients had chosen a method for themselves or for their children, so that the outcome might be influenced by their expectations, which is less likely under random assignment.
In addition to several other lower categories, we also included as our lowest level the designation "potentially harmful treatments" (previously suggested by Scott Lilienfeld), applying to treatments that were not only without evidence of effectiveness but had done harm or could be expected to do so. We considered this level to be of greater concern than an apparently harmless treatment that was simply lacking in any research support. One of the most important reasons for insisting on evidence-based treatments is the possibility (rarely realized, we hope) that an unresearched intervention can actually be harmful. Calling a technique "a therapy" does not necessarily make it therapeutic.
The term "evidence-based treatment" is still in the process of redefinition and expansion, but it does have a meaning, and we all need to be careful about using it, whether we are asserting or denying that a particular intervention has a good effect.