Education
Fighting the Passive Learning Trap
Understanding the difference between consumption and production.
Updated June 4, 2025 | Reviewed by Kaja Perina
Key points
- Passive learning feels productive but often hides gaps in real understanding.
- True learning requires active engagement—solving problems and making mistakes.
- Challenge yourself with simulated "oral exams" to break the passive learning habit.
Understanding a proof in a math textbook is one thing; being able to reconstruct it without help is another beast entirely. My classmates and I learned this the hard way at university. Most of our exams were oral, and nothing exposes a lack of deep knowledge faster than trying to explain a concept to someone.
Trying to explain an idea that you think you understand, only to watch the explanation fall apart with every word you speak, is an absolutely gut-wrenching feeling.
Unlike written exams, where visual learners may be able to parrot back memorized notes that they barely understand, an oral test demands creative thinking in real time. When presented with a conjecture, students not only need to recall relevant definitions and theorems, but they also need to apply them—sometimes in ways they never anticipated.
That brings us to the obvious question: How does one prepare for such an exam? Or, to put it another way: How can you tell if you’ve studied enough to truly understand a subject? Or, to rephrase yet again, this time in the words of Monty Python’s John Cleese, “If you’re very, very stupid, how can you possibly realize that you’re very, very stupid? You’d have to be relatively intelligent to realize how stupid you are.”
And indeed, as psychologist David Dunning, one of the discoverers of the Dunning-Kruger effect, describes in his book Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself: “If you’re incompetent, you can’t know you’re incompetent [...] The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is.”
Consuming information does not develop skills
Students often fall into a familiar trap: They read the textbooks, highlight key passages, review lecture notes, and nod along with explanations. These activities feel productive—and they do genuinely contribute to learning—but when exam day arrives, students often realize that their passive familiarity with the material doesn’t translate into the ability to apply it. Nor does it translate into a good grade.
One painful lesson about learning: Consuming information is not the same as developing skill. The cognitive psychologist Daniel T. Willingham captures the distinction between passive and active learning when he writes, “Memory is the residue of thought.” In other words, we forget most of what we encounter and remember only what we think about.
Reading about calculus doesn’t automatically create the neural pathways needed to solve calculus problems. Those pathways only form through deliberate practice: by solving problems, making mistakes, and experiencing the struggle.
As mathematician Paul Halmos famously advised, “Don’t just read it; fight it!” When studying a complex subject, look for examples, try to discover alternative proofs, and ask your own questions: Is the hypothesis necessary? Would the opposite of the statement also be true?
Do we know what we don’t know?
We still haven’t answered the question: How do we know what we don’t know? It turns out that artificial intelligence researchers have been working hard on the same problem, and with AI being a rather profitable business, the question is quite literally a million-dollar one.
When the first large language models (LLMs) started spreading like wildfire, the primary criticism was that they “hallucinate.” LLMs generate responses based on probability distributions rather than on “true knowledge.” As a result, they can generate answers that sound confident, authoritative, and even detailed—all without any of their “facts” actually being true.
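To make that mechanism concrete, here is a deliberately tiny sketch of next-word sampling. It is not how any real LLM is built, and the vocabulary and probabilities are invented; the point is only that sampling from a distribution of plausible continuations produces a fluent sentence with no check on whether it is true.

```python
import random

# A toy next-word table: for each current word, a distribution over continuations.
# The words and probabilities are invented for illustration; they encode what
# "sounds likely," not what is true, which is how a fluent falsehood can come out.
model = {
    "<start>":   [("The", 1.0)],
    "The":       [("capital", 1.0)],
    "capital":   [("of", 1.0)],
    "of":        [("Australia", 1.0)],
    "Australia": [("is", 1.0)],
    "is":        [("Sydney.", 0.7), ("Canberra.", 0.3)],  # fluent either way, often wrong
}

def generate() -> str:
    word, sentence = "<start>", []
    while word in model:
        options, weights = zip(*model[word])
        word = random.choices(options, weights=weights)[0]
        sentence.append(word)
    return " ".join(sentence)

print(generate())  # e.g. "The capital of Australia is Sydney." (confident and false)
```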
To borrow from John Cleese: If AI generates something very, very stupid, how can it possibly realize that it’s very, very stupid? AI would have to be relatively intelligent to realize how stupid it is, but for a machine that can’t think the way humans do, this remains a deeply challenging problem.
However, the most sophisticated systems today are explicitly designed to recognize the boundaries of their knowledge.
One key approach is called uncertainty quantification: in essence, the system measures its own confidence by looking at the range of responses it could give. Uncertainty shows up as entropy in the output distribution. For example, when asked the same question multiple times, a confident model will show a strong preference for certain words, while an uncertain one will produce a wider variety of responses.
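As a rough illustration (a minimal sketch of the general idea, not any particular system’s method, with made-up answers), you can ask the same question several times and score how spread out the sampled answers are with Shannon entropy:

```python
import math
from collections import Counter

def response_entropy(responses: list[str]) -> float:
    """Shannon entropy (in bits) of the empirical answer distribution.
    0 means every sample agreed; higher values mean the answers were spread out."""
    counts = Counter(responses)
    total = len(responses)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# Hypothetical samples: the same question asked five times to two imaginary models.
confident = ["Paris", "Paris", "Paris", "Paris", "Paris"]
uncertain = ["Paris", "Lyon", "Marseille", "Paris", "Nice"]

print(response_entropy(confident))  # 0.0   (strong preference, low uncertainty)
print(response_entropy(uncertain))  # ~1.92 (spread-out answers, high uncertainty)
```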
Such self-measurement mirrors a check our own minds need but rarely perform, because people are not good at interrogating themselves. When facing knowledge gaps, we substitute confidence for competence and familiarity for mastery. The cognitive psychologist Daniel Kahneman calls this “substitution,” or unconsciously replacing difficult questions with easier ones. In his book Thinking, Fast and Slow, Kahneman writes: “Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”
The illusion of competence
As a general rule, the harder your brain works during practice, the better it performs when it matters. To optimize long-term retention and transfer, embrace challenges and accept the slow, often frustrating, pace of true learning.
This illusion of competence becomes particularly dangerous in our modern information environment. With podcasts, online courses, and books more accessible than ever, we can easily feel productive simply by absorbing content. Hours spent watching instructional videos or reading tutorials provide dopamine hits without developing the neural architecture necessary for building a skill.
Passive learning is deceptive. It feels like we’re making progress, but without actually applying what we learn, the knowledge remains abstract and untested, and it slips away quickly. We feel like we’re improving because the content makes sense to us in the moment, yet when it comes time to use that knowledge, we fail to solve problems on our own.
The oral math exam was valuable precisely because it broke this illusion. Standing in front of the professor, faced with a problem, we had nowhere to hide: we could no longer mistake recognition for recall or familiarity for fluency. The pressure to produce mathematics on the spot revealed the true state of our understanding.
To truly know what we don’t know, we should simulate our own oral exams: imagine a professor’s probing questions and force ourselves to articulate complete answers without so much as a peek at the textbook.
References
Dunning, D. (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself (1st ed.). Psychology Press. https://doi.org/10.4324/9780203337998
Halmos, P. R. (1985). I Want to Be a Mathematician: An Automathography (1st ed.). Springer New York, NY. https://doi.org/10.1007/978-1-4612-1084-9
Willingham, D. T. (2009). Why Don't Students Like School?: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom (1st ed.). Jossey-Bass. https://doi.org/10.1002/9781118269527