Post by Dr. Joseph M. Lucyshyn, University of British Columbia
For the past 15 years, the fields of education and psychology have been active participants in the evidence-based practice (EBP) movement that began in medicine in the early 1990s. An ongoing problem in this movement, however, is a research-to-practice gap. In recognition of this issue, a number of recent APA Division 15 blogs have addressed the importance, and the challenge, of adopting EBPs in school settings (Cook, 2015; Schutz, 2016). Despite the development of many EBPs, few have been implemented and sustained by practitioners in school and mental health settings. In addition, when practitioners do adopt an EBP, implementation fidelity is often low, undermining the practice’s effectiveness. The gap between research and practice has been attributed to many proximal factors, including inadequate practitioner training, a poor fit between treatment requirements and existing organizational structures, insufficient administrative support, and practitioner resistance to change (Gotham, 2006).
This proximal analysis has been recently supplemented by a more systemic analysis of the problem. Research scientists have recognized that the way in which they pursue the development of EBPs often interferes with their adoption by practitioners. In the world of research in education and psychology, there are essentially three types of studies: efficacy studies, effectiveness studies, and dissemination studies. Efficacy studies involve the investigation of a practice under ideal conditions. Effectiveness studies involve the investigation of a practice under real-world conditions. Dissemination studies involve investigating whether an effective practice can be implemented at a large scale by practitioners in real-world conditions. The systemic problem is that, by far, the majority of research to date has been efficacy studies, with far fewer effectiveness studies, and very few dissemination studies.
Bruce Chorpita and Eric Daleiden, clinical psychologists, have examined the research-to-practice gap and offered a more in-depth analysis that also suggests a promising solution (Chorpita & Daleiden, 2014). Borrowing from information science, they argue that there is a fundamental imbalance between design-time and run-time when a practitioner attempts to implement an EBP in a real-world setting. Design-time refers to the period in which the researcher designs and tests the practice under ideal conditions; run-time refers to the period when a practitioner attempts to implement (i.e., run) the practice under real-world conditions. Because researchers control sources of variability during design-time to maximize effects, they do not take into account circumstances in the natural service setting that require practitioners to adapt the practice. Chorpita and Daleiden therefore argue for the adoption of a new model of research to practice called collaborative design, in which researchers and practitioners work in partnership to ensure that an EBP is adapted to real-world conditions in a manner that preserves the design-time features essential to effectiveness while allowing for practitioner feedback and adaptation to run-time conditions.
The research-to-practice gap also has contributed to the development of a new discipline, implementation science: the study of the conditions that promote or hinder the implementation of an EBP. Dean Fixsen and colleagues, leaders in the field, argue that researchers must abandon “let it happen” and “help it happen” approaches to the dissemination of EBPs and instead adopt a “make it happen” approach informed by implementation science (Fixsen et al., 2010). “Making it happen” involves five key features: (1) a purveyor organization capable of empowering practitioners to implement an EBP; (2) EBP components that are clearly defined; (3) training methods that effectively teach practitioners to implement the EBP with fidelity; (4) organizational support for implementation; and (5) leadership throughout the organization, from adaptive leadership that champions the change to technical leadership that ensures long-term sustainability.
A contemporary example of the development of an EBP consistent with these innovations in addressing the research to practice gap is School-wide Positive Behavior Interventions and Supports (PBIS). As described by its founders, Robert Horner and George Sugai, PBIS is:
... a systems approach for establishing the social culture and individualized behavior supports needed for a school to be a safe and effective learning environment for students. ... [I]t is an approach designed to improve the adoption, accurate implementation, and sustained use of evidence-based practices related to behavior and classroom management and school discipline systems (Sugai & Horner, 2009, p. 309).
From its beginnings in schools in Oregon in the late 1990s, PBIS is now being implemented in over 21,000 schools throughout the United States, and is being adopted in schools in Canada, Europe, and Australia. In practice, PBIS involves a multi-tiered system of positive behavior support that includes universal supports for all students, targeted supports for some students, and intensive supports for the relatively few students who are unresponsive to the first two tiers of support. For example, at the universal tier, school-wide expectations are defined and explicitly taught. At the targeted tier, a small group of students may participate in a social skills training intervention. At the intensive tier, a student may receive function-based, multicomponent positive behavior support within a wraparound, interagency service delivery model.
The remarkable growth in the dissemination of PBIS may be attributed to the founders’ and their colleagues’ application of design-time/run-time thinking in the design and refinement of PBIS, and to their use of implementation science when scaling up research and dissemination to the school district and state levels. For example, PBIS research from its inception has included collaborative dialogue between researchers and school educators and administrators. This dialogue has allowed design-time and run-time considerations to reciprocally shape the approach. To empower school personnel to implement PBIS with fidelity and to scale up research and dissemination, researchers have developed regional purveyor groups that support implementation, articulated a blueprint that defines the components of the approach, utilized a train-the-trainer coaching model to build local capacity, and worked with administrators to build organizational support for implementation. Each of these activities represents the use of implementation science to bring PBIS into the lives of thousands of educators and millions of students in the US, and now educators and students in Canada, Europe, and Australia.
The innovative research methods described above suggest the value of educational psychologists conducting research in collaboration with education professionals so that EBPs are more likely not only to be effective but also acceptable, feasible, and adaptable in educational settings. The innovative dissemination principles and practices illuminated by implementation science offer educational psychologists a clear pathway to the adoption of EBPs by practitioners in real-world settings. When educational psychologists integrate these innovations into their own lines of research, they are likely to build a broad and sturdy bridge between research and practice.
This post is part of a special series curated by APA Division 15 President Nancy Perry. The series, centered around her presidential theme of "Bridging Theory and Practice Through Productive Partnerships," stems from her belief that educational psychology research has never been more relevant to practitioners' goals. Perry hopes the blog series will provoke critical and creative thinking about what needs to happen so that researcher and practitioner groups can work together collaboratively and productively. Those interested can learn more—and find links to the full series—here.
Chorpita, B. F., & Daleiden, E. L. (2014). Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child & Adolescent Psychology, 43(2), 323-338.
Cook, B. C. (2015, June 2). The importance of evidence-based practice: Identifying evidence-based practices can be tricky, but well-worth the effort. [Web log post]. Retrieved from https://www.psychologytoday.com/blog/psyched/201506/the-importance-evide...
Fixsen, D. L., Blase, K. A., Duda, M. A., Naoom, S. F., & Van Dyke, M. (2010). Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future. In J. R. Weisz & A. E. Kazdin (Eds.), Evidence-based psychotherapies for children and adolescents (2nd ed., pp. 435-450). New York: Guilford Press.
Gotham, H. J. (2006). Advancing implementation of evidence-based practices into clinical practice: How do we get there from here? Professional Psychology: Research and Practice, 37(6), 606-613.
Schutz, P. (2016, May 23). Spreading the word: Science isn’t just for scientists anymore. [Web log post]. Retrieved from https://www.psychologytoday.com/blog/psyched/201605/spreading-the-word
Sugai, G., & Horner, R. H. (2009). Defining and describing schoolwide positive behavior support. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support. New York: Springer.