
The Importance of Evidence-based Practice

Identifying evidence-based practices can be tricky, but well worth the effort.

Post by Bryan G. Cook, University of Hawaii at Manoa

A few internet searches (conducted on May 23, 2015) hint at the ubiquity of evidence-based reforms in contemporary education: Searches of “evidence-based” and “education” returned about 26,100,000 and 1,390,000 hits on Google and Google Scholar, respectively, and 68 presentations at the 2015 AERA conference included “evidence-based” in the title or abstract. However, despite (or perhaps because of) their popularity, considerable confusion exists as to just what evidence-based practice is (and what evidence-based practices are).

The logic of evidence-based reforms is relatively straightforward (Cook, Smith, & Tankersley, 2012). In virtually every professional field (including medicine, where contemporary evidence-based reforms originated), a research-to-practice gap exists: some practices shown to be effective by scientific research are seldom used in applied settings, whereas some commonly implemented practices are not empirically validated and may be ineffective or even harmful. Because scientific research (especially when synthesized across multiple, high-quality, experimental studies) is generally recognized as the most valid source of evidence for determining what works, prioritizing such evidence-based practices over relatively ineffective approaches should improve learner outcomes. However, despite the promise of evidence-based reforms, the devil of realizing that potential lies in the details.

One vexing detail related to evidence-based reforms is terminology; many stakeholders simply don’t understand what evidence-based practice/s means. The term is commonly used in two related but distinct ways. First, evidence-based practice can denote a decision-making process in which practice (i.e., instruction) is informed by credible research evidence. It is important to note that evidence-based decision-making is informed, not dictated, by research evidence. Indeed, advocates of evidence-based practice emphasize that factors such as clinical expertise and stakeholder values should be considered alongside research evidence when making decisions (e.g., Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996).

Second, evidence-based practices (EBPs) refer to the specific instructional programs and practices supported as effective by credible research. Although some EBPs are scripted or manualized, they need not be; an EBP can be any practice or program that features replicable instructional procedures. Broad policies (e.g., inclusion), instructional frameworks (e.g., universal design for learning), and educational philosophies (e.g., behaviorism) should not be considered EBPs because they can be implemented in any number of ways. Organizations have identified EBPs for children and families (e.g., the Promising Practices Network), in schools (e.g., the What Works Clearinghouse), and for specific groups of learners (e.g., the National Professional Development Center on Autism Spectrum Disorders). EBPs (the latter use) are, then, an important (but not the sole) consideration in the process of evidence-based practice (the former use). For clarity, I use terms such as evidence-based education or evidence-based psychology to denote the decision-making process informed by research evidence in a particular field of study, and EBPs when referring to specific practices and programs supported by credible research evidence as effective. Other terms, such as empirically validated treatments, are also used to refer to specific practices supported by credible research as effective, adding to the proliferation of terminology.

Further confusion exists because EBP is sometimes used synonymously with terms such as research-based practice and best practice. Though I know of no official definitions, I think of best practices as interventions that are recommended for any number of reasons (e.g., ethics, personal experience, expert opinion, research). Although some best practices are empirically validated, many are not; indeed, some approaches touted as “best practice” may actually be ineffective or even harmful. I take research-based practice to mean that the practice is supported by research. This sounds great, but it may mean less than one might think: sometimes the research supporting a practice is flimsy and unconvincing, and oftentimes research supports only specific elements of a practice or program rather than showing that the practice or program as a whole is effective. Whereas systematic standards are not used to determine best or research-based practices, professional organizations and groups of scholars use objective and rigorous standards to identify EBPs (e.g., Chambless et al., 1998; Council for Exceptional Children, 2014; What Works Clearinghouse, 2014). Although these standards differ, they generally set criteria for:

  • Research design: Typically, only studies using experimental research designs that establish causality (including but not limited to randomized controlled trials or RCTs) are considered.
  • Research quality: Typically, only studies determined to be of sufficient methodological rigor or quality are considered.
  • Number of studies: Typically, multiple high-quality, experimental studies must support a practice as effective for it to be considered an EBP (some standards also specify that no or few high-quality, experimental studies can show negative or no effects).

Although best practice, research-based practice, and EBP are not synonyms, they are often used indiscriminately to refer to practices believed to be effective. Unfortunately, as the popularity of evidence-based reforms has grown, “evidence-based” is increasingly used as a marketing tool rather than as an indication of valid research support. To be clear about the level of research support for any practice being promoted, I recommend asking whether it is supported by a body of research and what standards, if any, were used to identify the practice as an EBP.

Some of the other devilish details related to evidence-based practice include:

  • EBPs are supported by bodies of research indicating that they are generally, not universally, effective. There are no magic bullets (but this doesn’t mean that some practices aren’t generally more effective than others).
  • Standards for identifying EBPs vary, meaning that a practice might be identified as an EBP by one organization but not another.
  • Identifying EBPs is just a start; practitioners need considerable support to implement and maintain the use of EBPs.
  • Critical elements of EBPs need to be implemented with fidelity, or as designed, to attain the positive effects reported in research studies.
  • Practitioners may need to adapt non-critical elements of EBPs to optimize their fit.
  • Implementing EBPs should occur within the context of effective teaching and effective schools to bring about desired improvements in learner outcomes.
  • EBPs are tools that can make good practitioners even more effective, and should not be used to control instruction or curriculum.

For readers interested in more information on evidence-based practice, I recommend the IRIS Center’s online modules on the topic. Although much work remains (such as conducting more high-quality, experimental research examining the effectiveness of instructional practices, examining the external validity of EBPs, and supporting the implementation and maintenance of EBPs), evidence-based practice represents a potentially powerful approach for addressing Harris’ (2014) call to develop “a powerful repertoire for teaching and learning across the life span.”

This post is part of a special series contributed in response to Karen R. Harris’ Division 15 Presidential theme, “Impacting Education Pre-K to Gray.” President Harris has emphasized maintaining and enriching the ways in which educational psychology research enhances and impacts education at all ages. Such impact depends upon treating competing viewpoints with thoughtfulness and respect, thus allowing collaborative, cross- and interdisciplinary work that leverages what we know from different viewpoints. She has also argued that we need to set aside paradigm biases and reject false dichotomies as we review research for publication or funding, develop the next generation of researchers, support early career researchers, and work with each other and the larger field.

References

Chambless, D. L., Baker, M. J., Baucom, D. H., Beutler, L. E., Calhoun, K. S., Crits-Christoph, P., ... & Woody, S. R. (1998). Update on empirically validated therapies, II. The Clinical Psychologist, 51(1), 3–16.

Cook, B. G., Smith, G. J., & Tankersley, M. (2012). Evidence-based practices in education. In K. R. Harris, S. Graham, & T. Urdan (Eds.), APA educational psychology handbook, volume 1 (pp. 495–528). Washington, DC: American Psychological Association.

Council for Exceptional Children. (2014). Council for Exceptional Children standards for evidence-based practices in special education. Retrieved from www.cec.sped.org/~/media/Files/Standards/Evidence%20based%20Practices%2…

Harris, K. R. (2014, Summer). Getting to know our incoming President. Newsletter for Educational Psychologists. Retrieved from http://apadiv15.org/wp-content/uploads/2014/06/NEPSummer2014.pdf

Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72.

What Works Clearinghouse. (2014). Procedures and standards handbook (Version 3.0). Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v3_0_…
