Using What Works
Evidence-based practice is changing both education and educational psychology.
Posted September 22, 2015
By Robert E. Slavin, Johns Hopkins University
When you are ill, do you seek medicines and treatments that have been rigorously evaluated?
When you buy a car, do you expect that it has been rigorously tested for safety?
Do you support research in agriculture to increase the supply and quality of healthy food for the world?
Just about everybody would say that they would rather receive proven medical services, purchase safe and effective products, travel on safe trains and airplanes, and so on. Evidence plays a central role in agriculture, where proven practices, seeds, stock, and veterinary care have enormously raised productivity, lowered costs, and improved food access and quality throughout the world.
Yet in education, evidence has long played a minimal role in practice. A teenager’s acne cream has had to prove its safety and effectiveness. So why isn't his algebra curriculum scrutinized in the same way? One could argue that the rapid pace of progress in fields such as medicine, agriculture, and technology has a lot to do with the respect for evidence in those fields, while the slow pace of progress in education reflects the opposite.
Of course, there is a great deal of research in education and educational psychology, and principles derived from that research do influence practice. Yet research to support (or refute) general principles of practice, while essential, does not guide educators in choosing particular texts, programs, software, or methodologies. Until research directly supports particular approaches, educators are likely to pick out the parts of the body of received knowledge they like and ignore the rest. That’s better than ignoring research entirely, but it does not lead to progressive adoption of more and more effective strategies, as is the norm in medicine, agriculture, technology, and other evidence-driven fields.
In recent years, the field of education has begun to embrace a more central role for evidence in educational practice and decisions. This movement is called Evidence-Based Practice, or EBP. Bryan Cook recently introduced this topic and defined some of the key terms. My intention is to go one step further, to discuss how evidence-based practice is beginning to transform educational practice and educational psychology—and how, as President Karen Harris suggests, we can truly “impact education pre-K to gray.”
What Is Considered “Rigorous Research”?
One of the key requirements of evidence-based practice is a large and robust set of programs and practices with “rigorous evidence of effectiveness.” What is meant by this varies in different circumstances, but the “gold standard” of evidence for evidence-based practice is the randomized clinical trial, or RCT (Shadish, Cook, & Campbell, 2002). In an RCT, students, teachers, or schools who agree to participate in an experiment are assigned at random to experience a given treatment, or to serve as a control group (often, those in the control group receive the treatment after the study is over). Typically, students in both the experimental and control groups are pretested before the program begins, and then post-tested one or more times. The post-tests are then statistically compared, controlling for pretests and possibly other factors.
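To make that analysis step concrete, here is a minimal sketch in Python, using pandas and statsmodels; the data file and column names (pretest, posttest, treatment) are hypothetical, and a real trial would involve additional covariates and checks.

```python
# Minimal sketch of an RCT analysis: compare post-test scores across
# conditions while controlling for the pretest (an ANCOVA-style model).
# The file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per student; "treatment" coded 0 = control, 1 = program.
data = pd.read_csv("student_scores.csv")

# Regress the post-test on treatment assignment, adjusting for the pretest.
model = smf.ols("posttest ~ treatment + pretest", data=data).fit()
print(model.summary())

# The coefficient on "treatment" estimates the program's effect on the
# post-test, holding pretest performance constant.
print("Estimated treatment effect:", model.params["treatment"])
```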
The beauty of the RCT is that, because of random assignment, the experimental and control groups can be considered equivalent at the outset, not only on observable factors, such as pretest scores, but also on unobservable factors, such as motivation or orientation. Therefore, if outcomes differ significantly at the end of the experiment, the difference can be attributed to the treatment.
Pragmatically, RCTs can be difficult to implement. It is often hard to get students, teachers, and school leaders to agree to be randomly assigned. Because students are taught in schools and classes, it is difficult to randomly assign individual students within schools and classes, except for individually delivered treatments such as tutoring. Therefore, whole classes or schools are often randomly assigned in what are called cluster randomized trials (CRTs). These preserve the group basis of teaching, but such experiments typically require many schools, usually around 25 per treatment, so that statistical methods that take clustering into account have adequate power.
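As a rough illustration of what a cluster randomized trial involves, the sketch below (again with hypothetical file and column names) assigns whole schools at random and then fits a model with a school-level random effect, so that the analysis respects the clustered design.

```python
# Sketch of a cluster randomized trial: schools, not students, are the unit
# of random assignment, and the analysis accounts for that clustering.
# The data file and column names are hypothetical placeholders.
import random

import pandas as pd
import statsmodels.formula.api as smf

random.seed(42)

# One row per student, with a school identifier and pre/post scores.
data = pd.read_csv("student_scores.csv")

# Randomly assign each participating school to treatment or control.
school_ids = sorted(data["school_id"].unique())
random.shuffle(school_ids)
half = len(school_ids) // 2
assignment = {sid: ("treatment" if i < half else "control")
              for i, sid in enumerate(school_ids)}
data["condition"] = data["school_id"].map(assignment)

# A mixed model with a random intercept for each school keeps the school as
# the clustering unit while still using student-level pre- and post-tests.
model = smf.mixedlm("posttest ~ C(condition) + pretest",
                    data, groups=data["school_id"]).fit()
print(model.summary())
```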
Less-than-gold-standard research designs that are also usually acceptable in evidence-based practice include quasi-experiments (QEDs), in which experimental and control groups are matched in advance on factors such as pretests and demographics instead of being randomly assigned. A variation of this is what I call a randomized quasi-experiment (RQE), in which groups are randomly assigned to treatments but there are too few groups to permit use of statistics that take clustering into account.
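For the quasi-experimental case, matching can be as simple as pairing each program school with the comparison school closest to it on pretest scores. The sketch below illustrates that idea; the file, the columns, and the one-to-one nearest-neighbor approach are illustrative assumptions, not a prescribed method.

```python
# Sketch of forming a matched comparison group for a quasi-experiment (QED):
# each program school is paired with the unmatched comparison school whose
# mean pretest score is closest. File and column names are hypothetical.
import pandas as pd

# Columns assumed: school_id, group ("program" or "comparison"), mean_pretest.
schools = pd.read_csv("school_pretests.csv")
program = schools[schools["group"] == "program"]
candidates = schools[schools["group"] == "comparison"].copy()

matches = []
for _, row in program.iterrows():
    # Pick the closest remaining comparison school on mean pretest score.
    distances = (candidates["mean_pretest"] - row["mean_pretest"]).abs()
    best = distances.idxmin()
    matches.append((row["school_id"], candidates.loc[best, "school_id"]))
    candidates = candidates.drop(index=best)  # match without replacement

for program_school, comparison_school in matches:
    print(program_school, "is matched with", comparison_school)
```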
What About Research Other Than RCTs?
The evidence-based practice movement values all types of research: correlational, ethnographic, and descriptive, as well as experimental. Its insistence on RCTs applies only to questions of “what works.” For example, if you want to know if a given science program increases science achievement more than common practice, an experiment is essential. However, for other questions, other methods may be preferable. Many RCTs also use qualitative methods to characterize what the experimental and control classes were doing and how students and teachers perceived the treatments.
Adding Proven Programs
The most rapid progress in evidence-based reform has involved the development and evaluation of programs at all levels of education, especially pre-K to 12. With funding from the Institute of Education Sciences (IES), National Science Foundation (NSF), and other agencies, numerous programs have been supported. In more recent years, the Obama administration’s Investing in Innovation (i3) program has invested more than $1.5 billion in development, validation, and scale-up of proven programs. A total of 143 programs have been funded so far. In the U.K., the Education Endowment Foundation (EEF) is making similar investments.
What IES, NSF, i3, and EEF are finding is that it is not so easy to improve student achievement, but it can be done. The majority of funded programs do not produce significant differences, but the minority that do have enormous potential to transform educational practice and policy because their findings are typically obtained in realistic school settings and can (in principle) be replicated broadly.
Summarizing Outcomes
In order to make the findings of rigorous research easily available and easy to interpret, it is important to have reviews that use consistent and fair standards to summarize the evidence. In education, the U.S. Department of Education maintains the What Works Clearinghouse (http://ies.ed.gov/NCEE/wwc). Our group at Johns Hopkins University produces the Best Evidence Encyclopedia, or BEE (www.bestevidence.org). Recently, definitions of moderate and strong evidence of effectiveness for education programs were added to the Education Department General Administrative Regulations (EDGAR).
Is Evidence-Based Practice Affecting Educational Policy and Practice?
Until recently, the evidence-based practice movement had little effect on educational practice beyond the schools participating in the research itself. However, this may be changing.
One area of impact is i3. In order for programs to receive larger validation and scale-up grants, they're required to show evidence of effectiveness. Collectively, these programs have served thousands of schools.
Another developing area of impact is federal School Improvement Grants (SIG), which support the transformation of the lowest-performing schools in each state. Recent legislation defined a new category schools could choose: evidence-proven whole-school programs. It will be interesting to see how many schools make this choice.
Encouragements to use proven programs are showing up in many parts of federal legislation. For example, in Title II SEED proposals, applicants must show evidence for their programs.
Politics may intervene, but over time it seems likely that evidence requirements will become increasingly common in federal, state, and local policies.
How Will Evidence-Based Practice Affect Educational Psychology?
I believe that evidence-based practice will be very good for all of educational research and educational psychology. If it takes hold in policy and practice, this will greatly increase interest in research by educators, policy makers, and funders at all levels. The strength of evidence-based practice in medicine does not just increase interest and funding for RCTs, but also supports a vast array of research using all sorts of methods. The same could happen in education.
Education is an applied field. Government, educators, and the public care about the outcomes of education, and they are far more likely to care about (and fund) educational psychology research and development when they can see the impacts on children.
There is plenty of legitimate controversy within educational psychology about research methods, measures, and purposes of research. This debate is healthy and necessary. However, if you take the view that the goal of educational psychology as a discipline is improving education—especially for children at risk—then the evidence-based practice movement offers real hope to make our field more impactful for kids. Isn’t that what matters?
This post is the final of a special series contributed in response to Karen R. Harris’ Division 15 Presidential theme, “Impacting Education Pre-K to Gray.” President Harris has emphasized the importance of impacting education by maintaining and enriching the ways in which Educational Psychology research enhances and impacts education at all ages. Such impact depends upon treating competing viewpoints with thoughtfulness and respect, thus allowing collaborative, cross/interdisciplinary work that leverages what we know from different viewpoints. She has also argued that we need to set aside paradigm biases and reject false dichotomies as we review research for publication or funding, develop the next generation of researchers, support early career researchers, and work with each other and the larger field.
References
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.