Let’s make a few assumptions about teaching ability. The first of these is that the ability to be an effective teacher (a broad trait, to be sure, comprised of many different sub-traits), as measured by your ability to, roughly, put knowledge into people’s heads in such a way that they can recall it later, is not an ability that is evenly distributed throughout the human population. Put simply, some people will make better teachers than others, all else being equal. The second assumption is that teaching ability is approximately normally distributed: a few people are outstanding teachers, a few people are horrible, and most are a little above or below average. This may or may not be true, but let’s just assume that it is to make things easy for us. Given these two assumptions, we might wonder how many of those truly outstanding, tail-end teachers end up being instructors at the college level. The answer to that question depends, of course, on the basis on which teachers are being hired.
Glasses AND a sweater vest? Seems legitimate enough to me.
Now, having never served on any hiring committees myself, I can offer little data or direct insight on that matter. Thankfully, I can offer anecdotes. From what I’ve been told, many colleges seem to look at two things when making their initial cut of the dozens or hundreds of resumes they receive for the single job they are offering: publications in academic journals (more publications in “better” journals is a good thing) and grant funding (the more money you bring in, the better you look, for obvious reasons). Of course, those two factors aren’t everything when it comes to who gets hired, but they at least get your foot in the door for consideration or an interview. Nor does the importance of those two factors end post-hiring, as they later become relevant for such minor issues as “promotions” and “tenure”. Again, this is all gossip, so take it with a grain of salt.
However, to the extent that this resembles the truth of the matter, it would seem to tilt the incentive system away from investing time and effort into becoming a “good” teacher, as such investments in teaching (as well as the teaching itself) would be more of a “distraction” from other, more-important matters. How does this bear on our initial question? Well, if college professors are being hired primarily on their ability to do things other than teach, we ought to expect that the proportion of professors drawn from the upper tail of the distribution of teaching ability will end up lower than we would prefer (that is, unless teaching ability correlates pretty well with one’s ability to do research and get grants, which is certainly an empirical matter). I’m sure many of you can relate to that issue, having both had teachers who inspired you to pursue an entirely new path in life, as well as teachers who inspired you to get an extra hour of sleep instead of showing up to their class. The difference between a good teacher (and you’ll know them when you see them, just like porn) and a mediocre or poor one can be massive.
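To make that worry concrete, here’s a purely illustrative toy simulation in Python, built on the post’s own assumptions: both abilities are normally distributed, and the correlation r between teaching and research ability is entirely made up for the sake of the example. It asks: if hiring selects the top 5% of researchers, what fraction of those hires also sit in the top 10% of teachers?

```python
import math
import random

def tail_overlap(r: float, n: int = 200_000, seed: int = 0) -> float:
    """Fraction of 'hires' (top 5% by research ability) who also land in
    the top 10% of teaching ability, when the two traits are bivariate
    normal with correlation r."""
    rng = random.Random(seed)
    teach, research = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        teach.append(z1)
        # Standard construction for a correlated normal variable
        research.append(r * z1 + math.sqrt(1 - r * r) * z2)
    teach_cut = sorted(teach)[int(0.90 * n)]       # top-10% teaching threshold
    res_cut = sorted(research)[int(0.95 * n)]      # top-5% research threshold
    hires = [t for t, q in zip(teach, research) if q > res_cut]
    return sum(t > teach_cut for t in hires) / len(hires)

for r in (0.0, 0.3, 0.6):
    print(f"r = {r:.1f}: {tail_overlap(r):.0%} of hires are top-decile teachers")
```

When r = 0, hiring on research is no better than chance at finding top-decile teachers (about 10% of hires); the overlap only grows to the extent that the two abilities actually correlate, which, as noted above, is an open empirical question.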
So why ask this question about teaching ability? It has to do with a recent meta-analysis by Freeman et al. (2014) examining what the empirical research has to say about the improvements in educational outcomes that active learning classes have over traditional lecture classes in STEM fields. For those of you not in the know, “active learning” is a rather broad umbrella term for a variety of classroom setups and teaching styles that go beyond strictly lecturing. As the authors put it, the term “...included approaches as diverse as occasional group problem-solving, worksheets or tutorials completed during class, use of personal response systems with or without peer instruction, and studio or workshop course designs”. Freeman et al. (2014) wanted to see which instruction style had better outcomes for both (1) standardized tests and (2) failure/withdrawal rates from the classes.
“Don’t lecture him, dear; just let the active learning happen”
The results found that, despite this exceedingly broad definition of active learning, the method produced a marked improvement in learning outcomes relative to lecture classes. With respect to the standardized test scores, the average effect size was 0.47, meaning that, on the whole, students in active learning classes tended to score about half a standard deviation higher than students in lecture-based classes. In simpler terms, a student in an active learning class should be expected to earn about a B on that standardized test, relative to the lecture student’s B-. While that might seem neat, if not terribly dramatic, the effect on the failure rate was substantially more noteworthy: specifically, students in lecture-only classes were about 1.5 times more likely to fail than students in active learning classes (roughly a 22% failure rate in active learning classes, relative to lecture’s 34%). These effects were larger in small classes than in large ones, but held regardless of class size or subject matter. Active learning seemed to be better.
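For the curious, the arithmetic behind those numbers is easy to check. A minimal sketch in Python: the conversion from effect size to a percentile assumes normally distributed test scores, and the effect size (0.47) and failure rates (22% vs. 34%) are the figures quoted above from the meta-analysis.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d = 0.47                    # average effect size on standardized tests
percentile = normal_cdf(d)  # where the average active-learning student falls
                            # in the lecture-class score distribution
risk_ratio = 0.34 / 0.22    # lecture failure rate over active-learning rate

print(f"Average active-learning student outscores ~{percentile:.0%} of lecture students")
print(f"Lecture students fail ~{risk_ratio:.2f}x as often")
```

A d of 0.47 puts the average active-learning student at roughly the 68th percentile of the lecture-class distribution, and 34/22 works out to the roughly 1.5x failure risk mentioned above.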
The question of why active learning seems to have these benefits is certainly an interesting one, especially given the diversity of methods that fall under the term. As the authors note, “active learning” could refer both to a class that spent 10% of its time on “clicker” questions (real-time multiple-choice questions) and to a class that was entirely lecture-free. One potential explanation is that active learning per se doesn’t actually have much of a benefit; instead, the results might be due to the “good” professors being more likely to volunteer for research on the topic of teaching, or more likely to adopt the method in the first place. This explanation, while it might have some truth to it, seems to be contradicted by the data reported by Freeman et al. (2014), which suggest that the active learning effect isn’t diminished even when the same professor does the teaching in both kinds of courses.
We might also consider that there’s a lot to be said for learning by doing. When students have practice answering questions (along with feedback) similar to those which might appear on tests, either of the professor’s making or the standardized variety, we might expect them to do better on those tasks when it counts. After all, there’s a big difference between reading a lot of books about how to paint and actually being able to create a painting that bears a resemblance to what you hoped it would look like. Similarly, answering questions about your subject matter before a test might get you to answer questions on the test better. Simple enough. While this is an exceedingly plausible-sounding explanation, the extent to which active learning facilitates learning in this manner remains unknown. In the current study, as previously mentioned, active learning could involve anything from a few quick questions to an entire class without lecture; the duration and type of active learning weren’t controlled for. Learning by doing seems to help, but past a certain point it might simply be overkill.
Which is good news for all you metalhead professors out there
Another potential explanation that occurs to me returns to our initial question. If we assume that many professors do not receive their jobs on the basis of their teaching ability (at least not primarily), and if increasing one’s skill at teaching isn’t often or thoroughly incentivized, then it’s quite possible that many people placed in teaching positions are not particularly outstanding when it comes to their teaching ability. If student learning is in some way tied to teaching ability (likely), then we shouldn’t necessarily expect the best learning outcomes when the teacher is the only source of information. What that might mean is that students could learn better when they are able to rely on something other than their teacher to achieve that end. As the current study might hint, that “something” might not even need to be very specific; almost anything might be preferable to a teacher reading PowerPoint slides which they didn’t make and which restate the textbook verbatim, as seems to be popular among many instructors who currently lecture. If some professors view teaching as more of a chore than a pleasure, we might see similar issues. Before calling the lecture itself a worse format, I would like to see more discussion of how it might be improved and whether there are specific variables that separate “good” lectures from “bad” ones. Perhaps all lectures will turn out to be equally poor, and teaching ability has nothing at all to do with students’ performance in those classes. I would just like to see that evidence before coming to any strong conclusions about their effectiveness.
References: Freeman, S., Eddy, S., McDonough, M., Smith, M., Okoroafor, N., Jordt, H., & Wenderoth, M. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415. doi: 10.1073/pnas.1319030111