Child Myths

Straight talk about child development.

Evidence-based Treatment: When Organizations Misunderstand Evidence

Organizations need caution when repeating treatment effectiveness claims.

In my last two posts, I commented on the problem of defining "evidence-based treatment" with respect to psychotherapy, education, and similar interventions. I described the concept of "levels of evidence" and pointed out that not all evidence is created equal; a testimonial may be a form of evidence, but it has nothing like the power of a properly conducted randomized controlled trial. I also pointed out the deceptive nature of conclusions drawn from weak research designs such as pre- and post-tests, which cannot do a good job of demonstrating the effectiveness of interventions.

When we as individuals become confused by the evidence offered in support of a treatment, we probably cannot do a great deal of damage-- although we may harm ourselves or our loved ones by making wrong choices about interventions. When large organizations fail to understand the level of evidence for a treatment, however, they can cause a lot of trouble, wasting money and other resources and encouraging families to make choices that are potentially harmful.

Today, I want to describe two situations in which organizations failed to understand what is meant by "evidence-based treatment," either using the term incorrectly or stating conclusions about effectiveness that could not be drawn from the evidence available. I am not going to mention the name of the first one, because it is a struggling group that may soon figure out the mistakes that have been made. The group is offering an across-the-board program related to child welfare which its announcements describe as "evidence-based". However, as one goes back through preliminary document after preliminary document, it becomes quite clear that the program being suggested has never been evaluated in terms of its effectiveness for the situations for which it is being proposed. It was, however, based on a similar program that was used in a different way for another group of people and reported to be effective. After that, various practitioners used the program for various purposes without any formal evaluation.

The problem with the procedures described in the last paragraph has to do with an issue called "transportability". This refers to the question of whether a treatment that was useful for one kind of population can simply be assumed to work for other kinds of people-- for instance, if an educational program works for healthy middle-class children, can we assume that it will work as well for poor children with many risk factors in their lives? Or, equally importantly, if a program works for infants, will it also work for school-age children? Generally speaking, there needs to be some evidence that the conclusions about effectiveness are "transportable", and that evidence depends on how similar the groups of people are.

There may be absolutely no harm in applying an intervention to a new kind of population; it may even work well. The problem that concerns me is the tendency to describe a program being applied to a new population as being "evidence-based", when the evidence available may or may not be relevant to the new way of using the program. A similar tendency, using slightly different language, is to refer to a treatment as "effective" when the evidence presented does not support that claim. I consider this to be a problem because a) it uses a questionable description to make the program seem more attractive to potential consumers, and b) it further confuses the already confused understanding of this term. I don't believe such misuse of the term is intentionally deceptive, and I suspect that the choice of language is made by enthusiastic advocates rather than by researchers. It is easy for this to happen in a large organization where people have different skills and different jobs.

Now I'd like to mention another organization that has made some quite inappropriate claims about the effectiveness of treatments. The Evan B. Donaldson Adoption Institute, in its September 2009 Adoption Institute E-Newsletter (http://www.adoptioninstitute.org/newsletter/2009_09.html#evaluation), noted two published articles that claimed evidence for the effectiveness of psychosocial interventions. One of these (Wimmer, J., Vonk, E., & Bordnick, P. [2009]. A preliminary investigation of the effectiveness of attachment therapy for adopted children with Reactive Attachment Disorder. Child and Adolescent Social Work, vol. 26(4)) was a simple before-and-after, pre-test/post-test study of the type whose flaws I discussed in the post previous to this one. In addition, the researchers employed a test, the Randolph Attachment Disorder Questionnaire, which is not only very poorly validated but whose inaccuracy has been shown in at least one empirical investigation. Nevertheless, the E-Newsletter headlined its summary of this study in the following words: "Evaluation finds Georgia attachment therapy program is effective", a conclusion that could not possibly be drawn on the basis of the level of evidence provided.

In the same issue of the E-Newsletter, a second publication was noted (Becker-Weidman, A. [2009]. Effects of early maltreatment on development... Child Welfare, vol. 88(2)). This article was not a report of work on the effectiveness of treatment, but employed its discussion section to repeat the claim that a particular treatment, Dyadic Developmental Psychotherapy, is "evidence-based", in spite of the fact that work supporting the effectiveness of the method has been repeatedly criticized as failing to reach a suitable level of evidence. The E-Newsletter commented in its summary of the paper that it was appropriate to use "interventions having evidence of effectiveness-- Hughes' Dyadic Developmental Psychotherapy is recommended [in Becker-Weidman's paper]". The author of this summary evidently accepted the assertions of effectiveness stated in the paper without examining the background of the claims, and repeated those claims.

It's understandable that editors and staff of newsletters, or staff of advocacy groups, may not feel they have the time or resources to examine every claim that comes across their desks. Nevertheless, the potential harmful impact of mirroring and multiplying inappropriate assertions is so great that there are ethical obligations to pay attention to these issues... and these apply most strongly to groups that stand up for children's health and welfare.

Jean Mercer is a developmental psychologist with a special interest in parent-infant relationships.
