A paper describing some research on the way science is taught was recently published by one of the most high-profile research journals in the world. This has raised a lot of issues about how education research is valued.
My research is on basic concepts in neuroscience and psychiatry. But I also do a lot of teaching, mostly to medical students. I am really interested in applying some of my research findings, and modern neuroscience in general, to teaching. The last couple of decades have seen huge advances in our understanding of learning and memory but it seems that these are often not applied to how students learn and remember. In my personal encounters I have also run across many folk who seem actively opposed to the application of neuroscience to education, perhaps turned off by memories (and misconceptions) of B.F. Skinner and his 'teaching machines'.
Applying findings from the science of learning and memory also seems like it could be an interesting and useful area of research. In fact, such research should really be essential - I am a big fan of evidence-based education and I don't think it's right to make significant changes to a student's education without evaluating them first. I know many enthusiastic academics who feel the same.
However, that enthusiasm is, I'm afraid to say, often undermined by practical realities. Most research (of any type) is funded by grants, and doing research properly is expensive: my experience of research grants in neuroscience and psychiatry is of budgets of hundreds of thousands of US dollars, many running into the millions. I was recently involved in a successful bid for funding to generate novel teaching materials for students. The budget, I was startled to discover... 4,000 USD. Grants for teaching research are tiny, yet doing any sort of research properly is expensive.
The second, related, practical reality: almost all academics are under pressure to publish in high-impact journals, as this then attracts more research money. Positive feedback. For those not familiar with Impact Factors, they're a simple (but controversial) measure of how 'important' the research in a journal is. Very simply, they measure how often the work in a particular journal has been cited in other journals. There is no such thing as a high-impact education journal. In the field of medical education, the eponymously titled journal Medical Education carries the highest impact factor, a respectable but distinctly 'un-high'... 2.7. This is a shame: Medical Education has a lot of great research in it and has had a really positive influence on the way medical education is carried out around the world. Is there anyone in the real world who would argue that improving the way medicine is taught to student doctors is a low-impact area? Unfortunately, those who fund research would appear to disagree.
Thus, for research into education, there is a largely closed loop - no funding means no high-impact research means no funding... and so on.
Imagine my surprise when I discovered that a piece of education research had recently made it into Science, one of the most prestigious scientific journals in the world, with an impact factor of a whopping 29.7 - more than 10 times that of Medical Education! The research was carried out by the Carl Wieman Science Education Initiative, which has a budget of 12.25 million USD! I was amazed!
The paper, authored by Louis Deslauriers, Ellen Schelew and Carl Wieman himself, described the application of some principles from cognitive neuroscience and constructivist learning theory to the teaching of physics. Teaching sessions were structured around the principle of 'deliberate practice', in which students were kept actively thinking about the principles being taught using quizzes, interactive elements and constant feedback from two new teachers who roamed the class, never actually 'lecturing'. Interaction, interaction, interaction. The control group 'just' had conventional lectures, albeit given by a highly rated teacher. After three one-hour sessions, the groups were tested on their knowledge of the principles taught. The results were startling: the scores of those in the 'interactive' group were twice those of the 'lecture' group. Amazing!
Many sound scientific principles were applied - the study had a control group, and both groups were matched on pre-experimental performance.
But. There's a but. Well, a few buts actually. The study has generated some controversy, a lot of it articulated in a New York Times article and centred on some methodological issues: the new teachers in the 'interactive' class were also involved in the study design and publication, meaning that they were necessarily 'new and enthusiastic' and may also have been 'teaching to the test'. It seems to me that this concern is rooted in another, more basic concern about experimental design - three elements changed at once: the number of teachers (1 vs 2), the identity of those teachers, and the way the students were taught. How do you know which of these produced the massive increase in test scores? Is it a combination of the changes? It is difficult to draw conclusions from any experiment where you change more than one element under study.
A related issue of study design is that it's difficult to be absolutely confident that the two groups of students were the 'same' before the experiment began. They may have scored equally on tests before the study started, but from what I can tell the two classes existed as separate entities for at least 11 weeks beforehand, being taught by different teachers. If, as is actually implied by the authors, the teacher of the group which eventually became the 'lecture' group was 'better', then it is possible that the students who then went into the 'interactive' group, with two new teachers, simply benefited from having a 'better' teacher during the study (and underperformed in the pre-study test due to having a teacher who was not as effective).
However, for me there are two bigger issues which this paper highlights. The first is the nature of the material in the 'interactive' class. According to the paper, the interactive group received "preclass reading assignments, preclass reading quizzes, in-class clicker questions with student-student discussion, small-group active learning tasks, and targeted in-class instructor feedback". All elements of the interactive tasks were piloted beforehand and then changed accordingly.
That is a lot of work.
Buried in the supplementary material for the paper is a key piece of 'data'. The first lesson took an estimated '20 person-hours' to construct. For a one-hour session. That's a lot of time and, although I have absolutely no reason to doubt that it took the authors 20 hours, that sort of material takes many teachers even longer to generate. No matter how enthusiastic a teacher is, that amount of time is a lot of money, and money, in the current educational climate, is limited. We've already seen that there is precious little research money out there to develop and test these methods further... for now, at least.
This, hopefully, is the biggest point. Overall, I am thrilled that 'learning science' has actually been applied and tested, and that the results have received recognition in one of the world's most high-profile scientific journals. The paper has generated massive interest in the press (with some taking the line that 'postdocs can be trained to be more effective than senior faculty!'). Blogs from all over education have also been taking an interest and, as mentioned previously, the paper made the New York Times. Would this have happened if it had been in a regular education journal? I doubt it. Maybe this is smart on the part of the journal Science - journals are themselves under huge pressure to keep (or make) their impact factors high. This paper has certainly generated interest and is likely to be heavily cited in the future.
Maybe the closed loop has been broken and educational research is finally going to get the recognition, and thus funding, it deserves? I hope so!
Image credit: Grant Cochrane