Three Myths About “Reading Levels”
And why you shouldn’t fall for them…
Posted Feb 28, 2017
Psychologists love to measure things, and perhaps nothing has been measured as much by psychologists as reading—both texts and readers. Many different instruments measuring text readability have been devised and used over the past century, as have many standardized tests of readers’ abilities. Though their results are often first reported as numerical scores that are difficult to interpret without a key, most instruments also translate these into more widely understood grade-level reading scores. These are typically reported as year-and-month scores: a book scoring at reading level 8.1 is said to be written at the early eighth-grade level, while a student scoring at reading level 4.6 is judged to be reading at the level of the average student in the sixth month of fourth grade. Two common reading-level systems are exceptions: Fountas and Pinnell’s Guided Reading program uses letters from A to Z, while the increasingly popular Lexile leveling scheme rates both texts and readers from 0L to approximately 2000L (there is actually no upper limit). Both do, however, offer rough grade-level conversion charts on their websites.
However measured, reading levels can be a generally useful guide to whether a particular text will be far too difficult for a particular reader. For example, the student who scored 4.6 on a recent, valid reading test will probably have significant difficulty reading and understanding a text written at the 8.1 reading level.
Unfortunately, though, the ubiquity and precision with which these reading levels are now being tested and reported have led to their increasingly inappropriate use, especially in schools. For example, professional development materials accompanying the Common Core initiative instruct teachers to “match” texts to readers based on Lexile level, staying within a narrow range of only 50L above to 100L below each student’s tested Lexile level. Most school reading incentive programs require students to read texts within a restricted range of their measured reading skill levels, either within the Lexile range just mentioned or, if using another rating system, within five months of their measured reading levels. For example, that student who tested at 4.6 might only receive credit for reading books leveled from 4.1 to 5.1. Many schools now even restrict the books students can check out from the school library to those at such “appropriate” levels, and in some cases, parents are being told to “concentrate on material within his or her Lexile range” when offering books to their children at home.
Such misguided policies and practices are based on three very prevalent myths about reading levels:
Myth #1: Each text has a discrete, accurately measurable reading level.
Because reading levels are often reported very precisely, one might think that measuring the readability of a text was an exact, agreed-upon science. Nothing could be further from the truth. Almost all text level measures are based upon some combination of word difficulty (measured by number of letters, number of syllables, and/or frequency of use) and sentence complexity (measured by length in words and/or number of phrases). However, the specific formulas and differing emphases of various measures can result in very different reading levels for a single text.
Take, for example, the 2017 Newbery Medal winner, The Girl Who Drank the Moon by Kelly Barnhill. The Lexile corporation rates this award-winning book for young teens at 640L, equivalent to somewhere between a 3rd- and 5th-grade reading level according to its conversion tables. ATOS, the reading-level system used by the Accelerated Reader program, rates it at 4.8 (i.e., 4th grade, 8th month). Run through the Free Readability Calculator, the first 200 words of Chapter 2 yield the following levels from seven more of the best known and most respected readability scales, all for exactly the same words from the same book:
- Automated Readability Index: RL 5.6
- The SMOG Index: RL 5.8
- Flesch-Kincaid Grade Level: RL 5.8
- Linsear Write Formula: RL 6.6
- Dale-Chall Formula: RL 6.9
- Gunning Fog Index: RL 7.6
- The Coleman-Liau Index: RL 8.0
That readability scores for this book, based on well-accepted, research-based formulas, span more than three grade levels is not an unusual result; try it for yourself using 150 words or more from any text. And of course, none of these scales can measure how well the text is actually written—whether the author uses transitions effectively, provides rich descriptions, or explains things clearly, all of which certainly affect the readability of a text.
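This divergence is easy to demonstrate, because most of these scales are simple arithmetic over surface features of the text. The sketch below implements two of the published formulas from the list above, the Flesch-Kincaid Grade Level and the Automated Readability Index, and applies both to a short made-up passage (not from Barnhill’s book). The syllable counter is a rough vowel-group heuristic; real analyzers use pronunciation dictionaries, so treat the exact numbers as illustrative only:

```python
import re

def syllables(word):
    # Rough heuristic: count runs of consecutive vowels (including y).
    # A real tool would use a pronunciation dictionary; this approximates.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _counts(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def flesch_kincaid(text):
    # Published formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences, words = _counts(text)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * sum(syllables(w) for w in words) / len(words)
            - 15.59)

def ari(text):
    # Published formula:
    # 4.71 * (characters/words) + 0.5 * (words/sentences) - 21.43
    sentences, words = _counts(text)
    chars = sum(len(w) for w in words)
    return (4.71 * chars / len(words)
            + 0.5 * len(words) / len(sentences)
            - 21.43)

sample = ("The moon rose over the quiet forest. "
          "A small girl watched it climb, wondering what magic kept it aloft.")

print(f"Flesch-Kincaid grade level: {flesch_kincaid(sample):.1f}")
print(f"Automated Readability Index: {ari(sample):.1f}")
```

Even on this tiny sample, the two formulas disagree by roughly a grade level, because one weights syllable counts and the other letter counts. Over a full 150-word passage, gaps of two or three grade levels between scales, as in the list above, are common.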
Myth #2: Each reader has a discrete, accurately measurable level of reading skill.
Tests of reading skill levels, even those that are carefully designed and commonly used, suffer from many of the same difficulties as measures of text readability, in that each uses differing contents and criteria for measuring reading ability.
Some use lists of common words at differing grade levels to judge decoding ability, while others require readers to pronounce a list of nonsense words as well. Some include measures of oral reading fluency and/or vocabulary, while others do not. Some tests ask readers to answer questions after reading a passage, while others measure comprehension through a cloze procedure in which readers are asked to supply missing words in a passage, and still others require readers to recall salient details of a text as a means of judging comprehension. Some tests are given by a trained psychologist, one-on-one, while others are given to groups of students in regular class settings. Some are timed, while others are not.
Even the criterion for scoring “at” a certain reading level differs among tests. Some allow readers to attempt all the test questions (or as many as time allows) and score based on the number of correct answers; others subtract a certain percentage of incorrect answers from the number of correct answers before scoring; still others arrive at a reading level by stopping the test after the reader has missed a set number of questions.
Not surprisingly, readers will often score quite differently on different reading tests.
In addition, while texts stay the same from day to day, readers do not. People’s performance on many tasks, including reading tests, can be significantly affected by whether they are tired or well-rested, hungry or full, healthy or sick, distracted or focused, anxious or confident. Children are perhaps even more affected by such external circumstances, since they lack the maturity and self-regulation skills that allow many adults to maintain more even performance levels in varying circumstances.
Myth #3: Readers should (almost always) read texts very near their reading level.
Given the margins of error implied by the variability in both text readability and reading skill measurement described above, the expectation that texts can be closely “matched” to readers’ skill levels is clearly unfounded. The idea that texts should always be matched to readers’ tested levels is equally problematic, for a number of reasons.
First, reading is an interactive process, so the difficulty or ease with which a particular reader can read a particular text depends in part on his or her prior knowledge related to the text and motivation for reading it.
Reading research has repeatedly demonstrated the effects of prior knowledge on reading comprehension; simply put, it is easier to read and understand texts that talk about things you already know a lot about. When readers have a good bit of prior knowledge on a topic, even difficult texts can be easier to read and understand, because such readers can draw on their own knowledge to fill in any gaps in their comprehension. Conversely, many of us have experienced the effects of a lack of prior knowledge when first reading in an unfamiliar subject area, whether it is economics or art or environmental science. It is not so much that we cannot read or do not know the words in the texts, as it is that we don’t have the background knowledge to make sense of what they say. Similarly, readers demonstrate higher “reading levels” in genres that are familiar to them. For example, a complex science fiction novel, very difficult to read for someone who rarely reads in that genre, may be considerably less difficult for a frequent science fiction reader, who is familiar with common science fiction conventions and story lines, and perhaps even with previous books by the same author.
Interest and motivation also greatly affect a reader’s ability to read a text. An excellent example of this is the recent Harry Potter phenomenon, in which thousands of children as young as second grade read and enjoyed 300- and 400-page books written at fifth- to seventh-grade reading levels, simply because they were fascinated by the magical world created by J.K. Rowling in her series. Most teachers can likewise recount stories of students like the struggling high school reader who tests at a fourth- or fifth-grade reading level, but somehow manages to read and understand a motorcycle repair manual written at the tenth-grade level, because he wants to fix up his motorcycle. And of course, resources can be made available to help truly motivated readers read difficult texts, from dictionaries, online resources, and audio texts to the time-honored strategy of simply asking someone else, a teacher, parent, sibling, or friend, about the words you don’t know. We might also point out here that most very early simple books (those with highly repetitive text) simply cannot give children a full sense of the joy of reading.
Also, there is good evidence for the benefits of reading texts both above and below one’s official reading level. Reading easier texts can help novice readers, especially, gain fluency and confidence, and can develop readers’ enjoyment of reading at all levels. Consider how many of us enjoy relaxing with a good romance or murder mystery that is well below our adult “reading level”! On the other hand, researchers are finding that challenging students to tackle more complex texts, while supporting them in their efforts to do so, leads to greater growth in reading than simply having them read texts at or just above their current reading levels.
Finally, full comprehension is not necessary for a reader to enjoy and benefit from a book. Certainly students should not be assigned and required to learn from a book that is significantly too difficult for them to read, but most of us, again, remember reading and enjoying in childhood books like The Secret Garden or A Christmas Carol or Captains Courageous without, at first, fully understanding the historical contexts or social issues involved. Indeed, it is often as we grow, and read and then reread such books, that we first gain a sense of history or start to value equality or self-reliance or charity, from stories and characters we have grown to love.
Reading levels should never be used to limit the texts children may access or try to read. Neither reading tests nor readability measures are anywhere near exact enough to predict which individual child will best be able to read or benefit from which individual book or magazine or online text. There is nothing wrong with letting a child try to read a text, and then abandon it if it is too hard, or too simplistic, or simply boring—after all, adults do this all the time. Limiting reading selections based on reading levels too often results in children having too few texts available to them that they want to read, and thus discourages them from reading altogether.
Passion, curiosity, and knowledge are at least as important as reading levels in helping children find good things to read. This is especially true because we know that motivation and knowledge can increase effective reading levels, while, conversely, readers can enjoy and gain a lot from texts that interest them, even if they are “too easy.”
Children should not be required or expected to independently read and learn from texts that are considerably above their reading levels. Ironically, textbooks are perhaps the most common source of such demands, since they are frequently written well above the reading levels of the grades for which they are intended. But disparity between student reading levels and text readability levels does not necessarily mean we should discourage or prevent such students from engaging with such texts. Rather, it should prompt educators to provide support for reading as necessary, including vocabulary- and knowledge-building tools, partner reading, group discussions, audio and video texts, and simple availability for questions. Indeed, since reading levels are not exact measures, and are also affected by prior knowledge and interest, such supports will probably be helpful to many other students as well.
To read more about this topic, check out these clear but nuanced discussions of readability and leveled reading by top researchers in the field:
Hiebert, E. H. (2012). Readability and the Common Core’s staircase of text complexity. Retrieved from http://ransonbalancedliteracy.cmswiki.wikispaces.net/file/view/Text-Matters_Readability-and-Complexity.pdf
Shanahan, T. (2011). Rejecting instructional level theory. Retrieved from http://www.shanahanonliteracy.com/2011/08/rejecting-instructional-level-...
Benjamin, R. G. (2012). Reconstructing readability: Recent developments and recommendations in the analysis of text difficulty. Educational Psychology Review, 24(1), 63-88.