Quizzes, tests, exams, midterms, final exams—call them what you will. Virtually all of these “tests” are summative assessments or evaluations of what has been learned, retained, understood, and possibly applied by students. Tests are a fact of life in most secondary educational settings and likely the majority of postsecondary—college and university—settings.
Testing always generates a bit of controversy because of its summative and, effectively, “high stakes” nature. If a first-year college student is enrolled in an introductory psychology class at a mid- to large-sized university, there is a very good chance that her knowledge of the subject will be measured by two, possibly three, tests and no more. Why? Consider the problem of teaching a large lecture course with 100, 200, 300, or even more students. Even if computer-graded tests are used, that’s still quite a bit of grading (although the instructor may have a teaching assistant or two to help). The problem? Poor performance on the midterm leaves little opportunity for academic salvation: some students literally have to “ace” the final in order to pass the class.
Moreover, the sheer size of a large lecture course can hamper more active pedagogies. It can be a challenge to run a discussion in a class that size, and writing assignments, if any, are apt to be brief. (Even then, imagine grading, say, 200 five-page papers; now imagine assigning three of those, or 600 papers in a semester, which comes to 3,000 pages to grade. You get the idea.) So, many large lecture courses rely on the classic two-exam (i.e., midterm and final) or multi-exam (test 1, test 2, and a final) model. But these approaches have their limits because the exams still retain an “all or nothing” quality: if a student does poorly on the midterm or the final (or, worse, both), his grade suffers, and surely he learned something besides what these exams demonstrated?
The student audience also matters. One obvious problem, particularly for first-year students, is unfamiliarity with college-level testing, not to mention the testing style of a given professor. Thus, high-stakes, summative assessments may be problematic. Is there a better way to assess learning while maintaining the course’s integrity and helping students to learn (and retain!) topical information?
Enter the so-called testing effect, in which taking tests designed to assess what one knows actually ends up enhancing later retention. Practically speaking, this can entail testing students on course material regularly, so that repetition and familiarity with testing strengthen memory, assuming students learn from their mistakes (i.e., looking over missed questions to learn the correct answers). Another practical effect is that students know when they will be tested on course material (say, weekly), so they are more motivated to keep on top of course readings and to study the material routinely than if they anticipated taking only one, two, or three big tests (that first exam always seems way off in the distance, until it isn’t, and cramming reading and study into a day or two beforehand is never effective).
A recent study by James Pennebaker, Samuel Gosling, and Jason Ferrell explored the impact of giving students quizzes at the start of every class meeting of introductory psychology at the University of Texas at Austin. There were just over 900 students in the class, and each brought a laptop computer to class for online quizzing purposes. The good news? Class attendance improved (it’s risky to cut class when you know a quiz is happening), as did overall performance on the quizzes, thereby demonstrating a nice variation of the testing effect. Further, the teacher-researchers showed that under some circumstances, computers in the classroom can be a pedagogical aid rather than a tempting distraction. However, the really intriguing finding was that this testing effect was particularly strong among students from lower-income households.
Pennebaker and Gosling added an important element to the quizzing that teachers should consider: Each quiz had eight questions, seven that all students in the class answered and one targeting each individual student, typically an item that student had missed on a previous quiz. Students knew about this feature, so they had an opportunity to review and mentally correct past mistakes, thereby updating their memories.
Besides the impact of the testing effect, or rather in concert with it, students knew, just knew, that they had to keep up with the reading or suffer the consequences. (Gosling waggishly suggests that the need to study curtailed the students’ accustomed bar-hopping on school nights.) Regular quizzing also motivated them to be attentive to what was happening in class each day.
There is something liberating about moving away from the midterm/final approach, isn’t there? I think so. In my Human Adjustment course this term, I am trying weekly quizzes composed of ten objective items drawn from assigned reading and class discussions. As I am teaching fewer than 30 students (not 900!), I also grade on active class participation/discussion and weekly self-reflection exercises. And I will give a final exam but no midterm; however, the final will be an essay test (I think a writing-based assessment is crucial in every course) based on themes repeated throughout the course. That test is meant to be more formative than summative, as I am interested to learn what concepts students have mastered and how they have applied them to their own experience.
Well, then: Can you use the testing effect to good effect in your teaching? How?