Why Are So Few Programs for Survivors of War Scaled Up?
Intervention studies conducted under real-world conditions are greatly needed.
Posted Jan 19, 2018
There's an interesting trend among researchers concerned with the impact of war on mental health toward conducting efficacy studies of innovative interventions. Efficacy in this context refers to studies designed to assess the impact of treatments or interventions under the most favorable circumstances the context allows. This means extensive training and supervision of lay (non-specialist) therapists or group facilitators, because most such programs are implemented by community members who are not mental health professionals. There’s a scarcity of mental health professionals in those parts of the world where most survivors of armed conflict live, so the use of trained community members is a practical necessity. As it turns out, it’s also a promising alternative to professional mental health care for many people.
Efficacy studies generally prioritize strict adherence to carefully developed intervention manuals or treatment protocols, with extensive monitoring to ensure that fidelity is maintained (i.e., that the program is being implemented precisely as designed). Priority is also given to maximizing attendance in the intervention, to ensure that participants are adequately exposed to the essential elements of the program.
All of this usually costs quite a lot of money. You have to pay facilitators and trainers for their time, train local supervisors to supervise the facilitators, and provide those supervisors with their own supervision until they’ve mastered both the intervention and the art of supervising lay facilitators. You need to pay people to monitor the fidelity of the intervention across all aspects of the implementation, ensuring that what is actually done matches what is supposed to be done, and to provide refresher trainings as needed for facilitators who are struggling. In addition, transportation and material incentives may be needed to maximize participation in the intervention.
None of this is generally a problem in efficacy studies because funders are willing to invest heavily in the development and testing of compelling interventions for civilian populations affected by armed conflict.
There’s just one problem.
After the highly resourced interventions have demonstrated their impact (and a few have shown impressive results, with effect sizes that equal or exceed those found in rigorous studies of mental health treatments in high-income countries), a manual is typically finalized and the results are published in a prestigious journal. That’s when non-governmental organizations (NGOs) with limited resources and local mental health centers with sparse budgets are faced with the daunting task of trying to replicate those great results from the efficacy studies. Only now they must do so without the resources needed to provide extensive training of facilitators. Donors are often reluctant to fund trainings that run beyond the bare minimum, and NGOs know they are more competitive for intervention grants if they limit the duration of the trainings they provide, regardless of what is actually needed to implement effectively. So the training of lay therapists or group facilitators is cut significantly. Supervision is cut back as well, because many organizations lack the local expertise to provide the quality of supervision offered during the efficacy study, and the researchers who provided long-distance supervision to the local supervisors may no longer be available: the research is done, efficacy has been established, and the next project beckons.

Fidelity? It’s often unknown. Perhaps the program is well implemented, hopefully so, but too often NGOs lack the resources to carefully monitor what happens during the intervention once the highly resourced efficacy study is done. Implementation staff may deviate from the intervention manual or program design, but whether modest deviations actually matter is generally unknown, since efficacy studies assess the effects of strict adherence rather than flexible implementation.
And so, what happens in the real world of lower-resourced NGOs and health centers may, or may not, resemble what happens in the highly resourced world of efficacy studies.
Now, enter a different concept: effectiveness studies (also referred to as pragmatic trials). These are meant to assess the impact of interventions conducted under “real world” conditions, the conditions in which most organizations actually work: limited budgets and field teams that are often understaffed, with limited capacity to train, supervise, and monitor interventions. Effectiveness studies ask: “What are the effects of Intervention X under these real-world, lower-resourced, and complex conditions, and what is the minimum level of resources needed to achieve a meaningful impact? To what extent can flexible implementation or partial attendance still yield meaningful benefits to participants?” We might, for example, find that by strengthening staff capacity through a modest increase in training and supervision, and adding a few more monitoring visits to ensure reasonable program fidelity, meaningful impacts can be achieved in terms of reducing distress and improving mental health. The gains might be less impressive than those found in efficacy studies, but they are also more attainable and replicable, and therefore more scalable, because they can be achieved under real-world rather than ideal conditions.
Efficacy studies are extremely useful in medical research, where treatments under ideal conditions often resemble treatments under “real world” conditions. Testing the effects of a daily dose of aspirin on cardiovascular health looks pretty much the same in efficacy and effectiveness studies because generic aspirin is widely available and easy to take. In the “real world,” most people can easily take a daily aspirin with little difficulty. So the benefits found in efficacy studies (in which people are randomly assigned to take an aspirin or a placebo every day for a fixed period of time), are easily replicated in the real world.
But for mental health interventions in war-affected settings, the resemblance between efficacy and effectiveness conditions is far thinner. And this raises an interesting question: what is the role of efficacy studies of mental health programs in such contexts, if they study the effects of programs under conditions so highly resourced and rigorous that they cannot be reproduced except in other efficacy studies? They tell us what can be accomplished under conditions that simply don’t exist outside of their own well-funded research process.
Perhaps that’s one reason why so few mental health interventions have been scaled up in war-affected settings.
If we want to create programs that can be brought to scale, we need to study the impact of those programs under conditions that are within the reasonable grasp of the organizations and institutions that will implement them on a daily basis, and that have the potential to bring them to scale—expanding their impact from a small number of research participants to large numbers of people in need of support.
Here’s one approach: three-arm studies in which Arm 1 is Control (or “treatment as usual”), Arm 2 is modest training, supervision, and fidelity assurance (the effectiveness arm), and Arm 3 is extensive training, supervision, and fidelity assurance (the efficacy arm). This approach would allow us to assess the impact of an intervention under both ideal and real-world conditions, and to see whether meaningful results can still be attained with less than ideal resources. Alternatively, we might drop efficacy studies altogether, and focus instead on assessing the effectiveness of interventions within the constraints of real-world contexts.
Some experts (e.g., Mundt et al., 2014) have questioned whether the randomized controlled trial (RCT) is, in fact, a viable “gold standard” for establishing effectiveness in the chaotic and poorly resourced settings in which refugees and other war-affected people live, suggesting that qualitative and quasi-experimental methods might be more appropriate. Certainly, qualitative methods give rich insight into the meaning and experience of interventions and their specific components. And quasi-experimental designs, in which already existing groups are compared (without random assignment to condition), are useful when randomization is simply not possible. However, as my colleague Andrew Rasmussen has pointed out, there are numerous examples of well-conducted effectiveness RCTs in challenging environments, and I am reluctant to abandon the methodological power that randomization affords.
I’m also not suggesting that we should accept real-world constraints as inherently inalterable, and lower our expectations of what is possible based on those constraints. Donors need to be educated about the necessity of sufficiently thorough training and adequate supervision. If they want meaningful impact, donors need to be willing to invest in the resources that can produce it. And NGOs, for their part, need to broaden their focus from outputs (“Are services delivered?”) to outcomes (“Are services effective?”) with a correspondingly greater investment in local expertise that can ensure quality implementation.
Finally, it’s essential to explore cost-effective methods of achieving scalable, high-impact results. As suggested by Daisy Singla and her colleagues, digital platforms allowing for remote training and supervision, greater use of peer supervision (intervision), and greater attention to employing implementation staff who possess well-established qualities of effective helpers may all contribute to achieving more effective interventions for those adversely affected by the destruction and chaos of war.