A Novel Idea: The History of the Science of Reading
Host Meg Mechelke explores the history of the science of reading and literacy instruction in the United States.
Episode 4: Phonics Fights Back
In this episode, we take a deep dive into the work of several influential researchers in the early days of the science of reading. These pioneers of literacy research paved the way for today’s evidence-based instruction and played major roles in advocating for effective, equitable literacy instruction for all students. Plus, hear from:
- Natalie Wexler (Twitter: @natwexler), author, The Knowledge Gap
- Dr. Fumiko Hoeft (Twitter: @FumikoHoeft), director, University of Connecticut Brain Imaging Center
- Dr. Maryanne Wolf, author, Proust and the Squid: The Story and Science of the Reading Brain and Dyslexia, Fluency, and the Brain
- Nina Lorimor-Easley, assistant director for education and outreach, Iowa Reading Research Center
- ... & more!
Episode transcript and sources
https://irrc.education.uiowa.edu/transcript-and-sources-novel-idea-episode-4
Between the 1930s and the late 60s, the look-say method dominated classrooms across America. Explicit decoding instruction was actually banned in many schools, and teachers instead used the look-say approach, teaching children to read by looking at pictures and making guesses based on context clues, leaving the country’s literacy proficiency rates to suffer the consequences. At this point, you might be wondering: why did it take so long for someone to figure out that something was wrong? Wasn’t anyone doing methodical, scientific research to back up these so-called “foolproof” ideas about teaching reading?
Actually, someone was, long before Rudolf Flesch penned his inflammatory book hypothesizing why exactly it was that Johnny couldn’t read. That man was Dr. Samuel T. Orton. And he warned educators about the dangers of the look-say method years before most people even thought to ask.
Samuel Orton was a pathologist who spent the early part of his career caring for adult patients with moderate-to-severe brain damage, many of whom struggled with language-based disabilities. As a result of this work, Orton began to develop an active interest in the interactions between brain function and language processing. In 1913, he traveled to Germany to study neuropsychiatry and neuropathology under Alois Alzheimer, the psychiatrist for whom Alzheimer’s disease was later named. In 1914, Orton returned to the U.S. and was appointed clinical director of the Pennsylvania Hospital for Mental Diseases.
Following his brief stint on the East Coast, Orton moved to Iowa City, where he became the director of the State Psychopathic Hospital, a building that still stands today, less than a mile from where I’m recording this episode. Orton was also the founder of the state’s Mobile Mental Hygiene Clinic, a service that provided in-the-field psychiatric care to Iowa residents. It was through this work that Orton met M.P., a 16-year-old who would inspire him to dedicate the rest of his life to researching language and learning disabilities.
In his writings, Orton describes M.P. as a bright learner who simultaneously displayed extreme difficulty with learning to read and write. M.P. scored low on standard intelligence tests of the era, but Orton noted in his case files that he was “strongly impressed with the feeling that this estimate did not do justice to the boy’s mental equipment.” For example, while M.P. was unable to read even basic texts, he was able to explain the mechanics of a car engine quickly and clearly. Intrigued, Orton administered a series of additional intelligence tests and arrived at a surprising result. M.P.’s scores improved significantly when literacy-based tests were administered orally, and he actually scored in the superior-intelligence category when tested in areas such as pictorial completion and mechanical assembly.
From the Iowa Reading Research Center, I’m Meg Mechelke, and this is A Novel Idea. In this episode, we’re taking a closer look at the beginnings of a wide range of research that makes up the science of reading while also learning more about some of the movement’s most influential pioneers.
Today, most people take for granted that poor reading skills do not necessarily equate to low overall intelligence. However, in the early 20th century, this was not the case.
From the 1920s through the 1950s, there was a commonly held misconception that a significant portion of the population was mentally incapable of learning to read and write. By the 1940s, some even claimed that the solution to the country’s literacy problem was to stop trying to educate these students at all and to focus instead on those who possessed the “innate aptitude for learning.” For example, in 1946, a Delaware school principal wrote an editorial to Harper’s magazine in which he claimed that “at least a third of the entire secondary school population” lacked the ability to become literate. Furthermore, he suggested that the presence of these students in American high schools was the reason that the country’s education system had “virtually collapsed.”
Nina Lorimor-Easley: The opportunity cost of that attitude is… you can't calculate how much you're giving up on.
Nina Lorimor-Easley is the assistant director for education and outreach as well as my colleague here at the Iowa Reading Research Center. She is an outstanding advocate for students with reading disabilities and is the president of the Iowa chapter of Decoding Dyslexia. When we spoke, I told her what I had found about this early 20th-century attitude toward students with an alleged “inability” to learn. Here’s what she had to say:
Nina Lorimor-Easley: There are so many of these kids that if we give up early… Number one, when we send that message, they give up on themselves. That’s by far the biggest cost that happens. When you have a student who’s in first grade, and that first grader has decided that they’re not worth the effort—they themselves are not worth the effort. That is damage that will take years and years and years to overcome, if we can, right? Once somebody starts to fall behind, they’re going to continue to fall further and further behind. It’s kind of the “rich get richer and the poor get poorer,” right? So, as that gap continues to grow, that negative self-perception is also going to continue to grow, and it, it just… Problems compound problems, and so then… There comes a point in time there where reading is no longer the biggest issue that everybody has to deal with. We have to deal with behaviors and negative impact in school and at risk and truancy and all that stuff that will manifest. So that attitude of “we’re going to give up on you”… The opportunity cost on that almost can’t be calculated.
Modern research suggests that most students do possess the ability to learn to read. In fact, the National Institutes of Health estimates that approximately 95% of students have the capacity to become literate when given the appropriate instruction and support.
Nina Lorimor-Easley: There aren't any students out there that I'm going to give up on. We can teach them to read. We've got the tools.
Samuel Orton was an early advocate of this exact same idea. He believed that reading difficulties were physiological in nature, meaning that they were connected to a physical, bodily function, rather than a lack of intellect. Therefore, in Orton’s mind, reading difficulties could be alleviated with proper instruction and intervention. In 1925, he conducted a 2-week study evaluating M.P. and other Iowa students who were facing similar academic struggles. When tested, all of these students demonstrated average or above-average intelligence, struggling only with tasks that required visualizing, forming, or decoding letters. This disproved the common theory that low reading scores were a side-effect of low intelligence. Instead, Orton argued that for learners like M.P., reading difficulties were the result of physical differences in the way their brains functioned. He called this issue “strephosymbolia,” meaning “twisted symbols.” Today, we refer to this condition as dyslexia.
The concept of “twisted symbols” has carried over into a common contemporary myth about dyslexia: the idea that this condition is simply an issue of seeing certain letters in reverse.
Nina Lorimor-Easley: So, commonly reversals get lumped into, say, a stereotypical view of dyslexia. And it’s true that reversals are one common characteristic of dyslexia, but to say that dyslexia is just reversing b’s and d’s is way underestimating the magnitude of dyslexia as it occurs for different individuals.
I also spoke with Lorimor-Easley about the differences between modern and historical understandings of dyslexia. Even before our conversation, I knew that the concept of dyslexia just being reversed letters was a vast oversimplification. What I didn’t know was that even this one potential symptom has a complex neurological background.
Nina Lorimor-Easley: It’s not tied to vision in any way, shape, or form, so it's not about seeing b’s or d’s backwards. It's about not being able to process the fact that that shape—a ball with a stick next to it if you will—when that turns in space, it changes what it is.
According to Lorimor-Easley, many students with dyslexia are strong visual processors. Some actually demonstrate an above-average ability for three-dimensional thinking, or an awareness of how geometric figures exist in space. Remember how M.P. scored in the superior category for pictorial completion and mechanical assembly? This strength can help them in many fields, such as architecture, construction, and design. However, it can also compound some types of reading difficulties.
Nina Lorimor-Easley: So when you're a strong visual processor and you understand that an object in space remains an object in space, no matter how you turn it or which direction it's facing. It’s not like something when it’s right-side up is an ink pen, and when you turn it upside down it turns into a remote control, right? It's always an ink pen. Doesn’t matter which way it’s facing. Well, they get to school, and they see the shape—a “b,” a “d,” a “p,” a “q”—and this whole concept of when you turn it in space, it becomes something else. That's tough to process.
While Samuel Orton’s “twisted symbol” understanding of dyslexia may be a bit of an oversimplification, he was still leaps and bounds ahead of other researchers when it came to understanding this condition.
Orton frequently collaborated with his wife, June Lyday Orton, the field organizer of the mobile psychiatric unit he had founded. Lyday Orton, herself a brilliant researcher, would eventually go on to found the organization that is now the International Dyslexia Association. Together with collaborators Lee Edward Travis, Marion Monroe, and Lauretta Bender, the Ortons completed several studies on reading and language disabilities in children.
Prior to Orton, the limited studies that had been done on the subject of reading disabilities had hypothesized that these problems were due to a visual impairment. However, Orton suspected that strephosymbolia might actually be the result of the left and right hemispheres of the brain failing to work together in the way they were supposed to. Influenced by the work of California psychologist Dr. Grace Fernald, Orton hypothesized that multisensory instructional methods would engage both sides of the brain and thus help his patients read more successfully. As it turns out, Orton was correct—at least partially. Although his hypothesis about the left and right hemispheres wasn’t quite accurate, his support for multisensory instruction was right on. Multisensory activities, such as forming letters out of clay or writing words in sand, can be beneficial for students with reading disabilities. Because they require the use of more than one sense, these practices engage multiple parts of the brain. Studies have shown that this can lead to better learning and retention of information.
Nina Lorimor-Easley: So we know that multisensory increases impact in all kinds of things, and that's just logical. If you think about, like, those flashbulb memories from when you were young, and a specific smell, or you hear a specific song on the radio, and all of the multisensory input that was happening right? That makes for stronger learning, stronger memories, stronger impact when all of the senses are involved. And we know that neurologically, across the board. That when you involve the senses, the impact is just deeper across the board. And so we take that science and inform reading with that.
Today, teachers incorporate multisensory learning into literacy instruction and intervention for students with dyslexia in many creative ways. For example, they may have students spell a word using colored alphabet tiles, or practice writing by tracing letters in rice.
Nina Lorimor-Easley: When we use colors, when we use tactile, when we use tangible, movable elements in instruction, it makes for deeper learning. In addition, it really informs explicit learning. So when you have a student and you're trying to teach them a concept, when they have the opportunity to see the concept laid out in front of them and actually physically manipulate elements of the concept, we know that that learning goes so much deeper than just sitting here talking about it.
In addition to his early hypotheses around the benefits of multisensory instruction, Orton was also right in his understanding that dyslexia and other reading disabilities are not the result of low intelligence. Rather, they indicate a physical difference in the way that the brain functions during reading.
Fumiko Hoeft: There are some common themes that are more likely to be observed in individuals with dyslexia.
This is Dr. Fumiko Hoeft, the director of the University of Connecticut’s Brain Imaging Center, describing how dyslexia presents itself in the brain.
According to Hoeft, there are two primary areas of the brain that are often associated with dyslexia. The first is a system of neural pathways sometimes referred to as the dorsal network. These neurons allow the brain to take in visual and auditory stimuli and to interpret these stimuli as things like colors or sounds. This pathway in the brain is also linked to phonological processing, which describes a person’s ability to hear and manipulate the sounds of language. This is the system that allows us to hear the word “cat,” break it into the sounds /c/ /a/ and /t/, and then convert those sounds into the letters “c,” “a,” and “t.” According to Hoeft, dysfunction in this area of the brain is often observed in individuals with dyslexia.
Another area of the brain—the occipital-temporal cortex—is responsible for word recognition. It’s found on the lower left side of the brain, and it allows us to see written characters and recognize them as units of meaning.
Fumiko Hoeft: There's a lot of subtleties there, but this occipital temporal area and the connections around it are thought to be either weaker in activation, smaller in brain volume, connections being weaker around that pathway, in individuals with dyslexia.
To summarize, a person with dyslexia may process sounds and written letters differently than a person who doesn’t have dyslexia, and this can cause problems with language processing. Samuel Orton did not have access to the advanced technology that has allowed contemporary researchers to make these discoveries. However, he was persistent in his belief that dyslexia and other reading disabilities were neurological conditions rather than problems with work ethic or intellect, which was a major step forward for our understanding of reading disabilities at the time.
Soon, Orton began collaborating with an educator and psychologist by the name of Anna Gillingham. Gillingham worked extensively to design and publish literacy instructional materials that provide the basis of many evidence-based methods used today. With the help of fellow educator Bessie Stillman, Gillingham combined Orton’s research with her own and organized it into a usable instructional model. In 1935, this work was published in a book titled The Gillingham Manual, which outlined strategies for the “remedial training [of] students with specific disability in reading, spelling, and penmanship.” This methodology has evolved into today’s Orton-Gillingham approach, which successfully combines multisensory learning with systematic phonics lessons.
Nina Lorimor-Easley: Sam Orton and Anna Gillingham… what they brought to the field was massive. We have learned tons and tons and tons from the work that they did, and, you know, they were some of the first ones to really get into that explicit instruction and what that really looks like, how it needs to look, the routines around what is now structured literacy… I mean, Orton-Gillingham, their work was, you know, foundational.
While the Orton-Gillingham approach works particularly well for students with dyslexia, it can help students of all ability levels. The introduction of this approach to the field of literacy instruction harkened back to the phonics-centered ideals of early literacy advocates like Noah Webster and paved the way for the emergence of what would later be known as the “science of reading.”
Nina Lorimor-Easley: It's interesting now, the majority of the work that they've done is holding true. Science is moving, evidence is moving, we're still moving forward, but most of their work is still holding true today. Yes, we're making additions to it, we know more, and as we know better, we have to do better, but their work is standing strong.
The development of the Orton-Gillingham approach marked a revolutionary step forward in the world of literacy instruction. However, as we heard in the previous episode, classrooms continued to be dominated by the look-say method through the 1950s. Orton actually warned against the dangers of look-say literacy teaching in a 1929 article titled “The ‘Sight Reading’ Method of Teaching Reading, as a Source of Reading Disability.” In this article, published in the Journal of Educational Psychology, Orton argues that the look-say approach does nothing to support the learning of students with disabilities. Furthermore, he describes the look-say method as an “actual obstacle to reading progress,” suggesting that this technique might result in a greater number of students being incorrectly diagnosed with learning disabilities when, in reality, they simply weren’t receiving sufficient instruction. Orton also offers a compelling appeal on behalf of learners with reading disabilities, whom he describes as a group of “considerable educational importance.” He argues that a failure to provide adequate instruction to these students not only does them an academic disservice but may actually have negative long-term impacts on their emotional development. He is firm in his belief that reading and writing must be taught in a way that serves all students, regardless of ability.
Today, Lorimor-Easley echoes this sentiment.
Nina Lorimor-Easley: We have enough instructional strategies and enough assessment tools and assessment strategies, and we know enough about language and how language is processed and how reading happens, that with almost any student we can really dig in and assess and pinpoint where those weaknesses are and instruct directly to those weaknesses. So, there will be a small percentage of students for whom language is excruciatingly hard. There can be lots of additional diagnoses that will make literacy very, very difficult inherently. But outside of a small percentage… and I can't, off the top of my head, I can't give you a specific number. I know that a lot of times we sort of generalize about 5%? Not talking about students who have other major physical maladies going on, or physical diagnostics going on, but just students who really, truly are going to struggle horribly with reading should be about 5%. We should be able to help everybody else get there.
Unfortunately, this idea was not popular in Orton’s time, and it would take decades for his strategies for teaching reading to be implemented on a wide-reaching level. It wasn’t until 1967, with the publication of yet another highly controversial book on the subject, that the education field began to give the issue of phonics instruction the attention it needed.
Though Rudolf Flesch’s Why Johnny Can’t Read had few tangible impacts on the types of instruction used in American classrooms, it was a cultural flashpoint that ignited debate over the best way to teach children how to read.
In 1961, as a result of this controversy, the Carnegie Corporation of New York commissioned Harvard Professor Jeanne Chall to conduct an in-depth study of reading instruction in the United States.
Jeanne Chall’s study took three years to conduct. In this time, she visited over 300 classrooms across the United States and England, interviewed a wide range of educators and experts, and reviewed 67 existing reading research studies conducted between 1912 and 1965.
Chall published the results of her study in a 1967 book titled Learning to Read: The Great Debate. Her primary finding? The existing research regarding literacy instruction was “shockingly inconclusive.” However, this did not mean that Chall was unable to collect any useful information from her work.
Chall categorized literacy instruction into two types: meaning-emphasis and code-emphasis. For example, she categorized the look-say approach as meaning-emphasis, as it focused on comprehending the general meaning of a text over decoding each individual word. In contrast, explicit phonics instruction emphasized “breaking the code of language” and accurately reading every word. Chall noted that from 1930 to 1960, meaning-emphasis instructional strategies had been considered the “one best way” to teach reading to children by nearly all professionals in the field. However, in her review of reading research, Chall discovered that an overwhelming majority of studies favored code breaking as the only surefire way to teach children to read. The studies’ results showed that early instruction in decoding almost always produced better word recognition, spelling, and reading comprehension than instruction that was solely meaning-emphasis, especially for students with reading disabilities.
Interestingly, according to colleagues, Chall began her study assuming that the research would support meaning-emphasis instruction. Nonetheless, the evidence was clear.
In a 1999 interview with The Los Angeles Times, Dr. Marilyn Jager Adams said:
“Most people would bow to their expectations and filter and arrange the information so it made sense in their own world view. Jeanne turned hers upside-down and said: ‘What do you know? We are making a mistake.’” (Woo, 1999)
Adams was a student of Chall’s and an influential education and cognitive researcher in her own right. Her 1990 book Beginning to Read echoed many of the sentiments found in Chall’s work. Adams and many others have lauded Chall for her willingness to adapt her beliefs when faced with new evidence.
Following the publication of her study, Chall quickly became an outspoken advocate for explicit phonics instruction, even in the face of sharp criticism from her colleagues. This is an excerpt from a 1969 review of Chall’s work published in the American Educational Research Journal:
“As expected, reviews from the Establishment have not been warm. What is most bothersome about these reviews, however, is that the reviewers do not examine the evidence Chall has gathered but merely quote her conclusion that the research is inadequate. This somehow allows them to ignore and deny the findings that did appear and then go right back to recommending a ‘meaning-emphasis’ approach on the basis of their own intuitive justifications.” (Williams, 1969)
This passage highlights one of the most common critiques leveled at Chall’s work: that if the research results were inconclusive, the rest of what she had to say was irrelevant. Other critics argued that Chall had not placed enough of the blame for low literacy scores on the efforts and talents of individual students and teachers. However, Chall had educators’ backs. While she acknowledged that well-trained teachers were vital to the success of American students, it was her opinion that the low literacy rates had to do with the curriculum itself rather than the failures of individual educators.
Despite the backlash from many mainstream voices in the education field, Chall refused to back down. She remained an advocate of the idea that “easy does not always mean good,” suggesting that even if the look-say method seemed simpler on the surface than explicit phonics instruction, it did not necessarily produce better results down the line. Chall also found that explicit phonics instruction significantly improved reading outcomes for low-income students. She was one of the first to point out that these students often suffered the most from ineffective teaching practices, as their families lacked the resources to provide tutoring and other interventions outside of the classroom.
However, unlike Rudolf Flesch, Chall did not necessarily believe that phonics alone was the best way to teach reading. Instead, she was one of the first to clearly state that explicit and systematic instruction in both decoding strategies and language comprehension strategies was vital to the development of proficient readers. Chall’s contemporaries noticed this distinction and commented that her research marked the beginning of a change in the literacy instruction debate. Rather than arguing over whether literacy instruction should be meaning-only or phonics-only, experts were now divided over whether or not decoding skills specifically were a significant element of early reading. These tend to be the sort of arguments you hear today when it comes to the science of reading. Very few—if any—science of reading advocates believe that phonics should be taught in isolation. Rather, they suggest that both decoding and reading comprehension skills should be taught systematically and explicitly to all students, which is exactly what Chall was saying in 1967. Consider this quote from Marilyn Adams’ 2001 tribute to her late mentor:
“Still today, after many hundreds more pages, many thousands more experimental hours and subjects, and many millions more dollars worth of work, we have certified much but learned little more. The conclusions of our scientific efforts to understand beginning reading remain point for point, virtually identical to those at which Jeanne Chall had arrived on the basis of her classroom observations and interpretive reviews of the literature.” (Adams, 2001)
Like Samuel Orton before her, Chall was also an advocate for students with reading disabilities. She agreed with Orton’s assessment that children’s ability to read had more to do with whether or not they could decode letters than it did with their overall intelligence, and she advocated fiercely for the implementation of teaching methods that served all students, regardless of ability level. In 1986, Chall wrote that the best way to solve the issue of adult illiteracy in the United States was to make a difference much earlier in life by providing “better instruction and services for all children, and particularly for those that tend to lag behind.”
Through their insistence that with proper support even students with reading disabilities could be taught to read and write proficiently, Chall and Orton both found themselves wading into a longstanding debate amongst reading specialists. Was reading an innate, natural human ability? Or was it a skill that could only be learned through explicit instruction? In the 1960s, many reading specialists leaned towards the former, believing that children could learn to read in the exact same way they learned to speak: through repeated exposure to whole words. Some experts, including author and education correspondent Natalie Wexler, believe that this idea can be traced back to the writings of philosopher Jean-Jacques Rousseau, who we discussed in Episode 2.
Natalie Wexler: One result of this sort of philosophy was that reading was looked at as a natural process, like learning to speak, and that kids would really, you know, you, you don't really need to teach them systematically how to do it.
Here, Wexler is referring to the instructional philosophy described by Rousseau in his 1762 treatise Émile.
Natalie Wexler: And that, we now know, is completely wrong, based on a lot of more recent evidence, and Rousseau of course was not working on any evidence at all.
This idea is echoed by neuroscientist and reading researcher Dr. Maryanne Wolf.
Maryanne Wolf: If you look at the brain, you have vision, you have hearing, you have you know all these things that were developing over evolution for language, but you don’t have anything for reading. And so teaching is part of reading. Because reading is an invention; it’s not an ability. There are people who seem to burst out with reading, you know. They have been observing, and they’re just gonna try and figure it out themselves. But the reality is, that’s not natural either. They’re just so good at making the connections that the majority of us need to have people instruct us.
The idea that reading is a natural process is a sentiment held by many contemporary whole language advocates, including Drs. Ken and Yetta Goodman, who we will discuss at length in our next episode.
However, not everyone agreed with this perspective, even in the 60s. In his 1966 book Teaching to Read, Historically Considered, linguist Mitford Mathews humorously declared, “Words are not like tadpoles or flowers or horses. Words are manmade,” further claiming that “nature has never taught anyone to read and never will.”
As it turns out, Mathews was not too far off. According to contemporary estimates, only around 30% of children will learn how to read without explicit and direct instruction in foundational literacy skills. But why is this? Why is explicit instruction, specifically explicit phonics instruction, so vital to early literacy development?
Well, in the early 1960s, a researcher named Dr. Isabelle Liberman had the exact same question. It all started in 1967, with research being done by her husband, Dr. Alvin Liberman.
At this time, Alvin Liberman was working at Haskins Laboratories, an influential linguistics laboratory affiliated with Yale. His team was attempting to create a machine that could read texts aloud for blind students. While doing so, he came across a startling discovery about the nature of phonemes, the tiny units of sound that make up every word. Prior to this research, it was believed that the phonemes in a word were inherently processed as distinct auditory signals. That is to say, researchers thought that upon hearing a one-syllable word such as “sit,” all listeners consciously recognized the word as being made up of three separate sounds, /s/ /i/ and /t/. However, Alvin Liberman’s 1967 article “Perception of the Speech Code” argued that this was not exactly correct.
Reid Lyon: What makes that difficult, the Haskins folks found, was that as you and I are talking, as I’m saying my words to you now, I am not spelling out the words for you, right?
This is Dr. Reid Lyon, a neuroscientist, former Chair of the Department of Education Policy and Leadership at Southern Methodist University, and a well-respected reading researcher, describing Liberman’s discovery.
Reid Lyon: I'm not spelling out for you the sounds of the language. And the fact is, though, I have to be aware of those sounds to be able to read unknown words.
The minute I say a word, let's say like “big,” I don't say /b/ /i/ /g/. I say “big.” The ear only hears “big.” It doesn't hear /b/ /i/ /g/ unless the brain is able to untangle the constituent sounds within the word.
According to Lyon, proficient readers need to be able to differentiate these sounds; it’s what allows us to understand that the sound “big” equates to the letters “b-i-g.” Without this ability, reading and writing become exponentially more difficult. But if most human brains do not instinctually differentiate the phonemes in spoken words, how could we expect students to do it without explicit instruction? This is exactly what Isabelle Liberman wanted to know.
In the late 60s, Isabelle Liberman was busy conducting her own research while working in the Remedial Reading Department at the University of Connecticut. After seeing her husband’s work, Liberman hypothesized that his theory about phonemes would have a tangible impact on reading instruction. To see whether or not she was correct, she decided to run a series of tests.
In 1974, she visited local preschool, kindergarten, and first-grade classrooms, bringing with her a list of simple two- and three-sound words. Liberman instructed the students to listen as she spoke each word and then tap out how many sounds they heard. For example, if she said the word “at,” /a/ /t/, the students would tap twice and say “two.” If she said the word “pot,” /p/ /o/ /t/, the students would tap three times and say “three.” She then repeated this task with a different set of words, asking the students to tap out the syllables, rather than the sounds, to make sure that they understood the underlying task.
Across the board, students struggled significantly more with the phoneme counting task. At the preschool level, no students were able to complete this activity successfully, though around half were proficient in syllable counting. Similarly, while around 50% of kindergarteners could segment words into their respective syllables, only 17% succeeded when it came to phonemes. Even by the end of first grade, the number of students struggling with phoneme counting was three times higher than the number struggling with syllables. Additionally, at all three grade levels, Liberman observed that students who were allowed multiple attempts learned to segment syllables accurately much more quickly than they learned to segment phonemes.
As it turns out, Liberman’s suspicion that this lack of ability to distinguish the sounds in spoken words could have a direct impact on students' reading abilities was absolutely correct. The following year, she discovered that 85% of the kindergarten students who had performed well on the phoneme counting test were reading successfully. The students who had struggled to count phonemes were almost all reading below grade level.
Liberman published the results of this study in a 1974 article titled “Explicit Syllable and Phoneme Segmentation in the Young Child.” It was here that she first began to draw attention to what is now considered a fundamental element of early literacy instruction: phonemic awareness.
Phonemic awareness describes an individual’s ability to isolate and manipulate individual sounds within a spoken word. Though this is one of the most crucial skills necessary for early reading, it is frequently difficult for students to grasp. While some students will figure out how to break words into phonemes independently, many others won’t, and these students will continue to struggle with reading and writing tasks as they grow. Liberman was one of the first researchers to really articulate this fact, making her work hugely influential to the field of literacy instruction.
And this was not the end of Liberman’s contributions to reading research. She would later go on to publish a piece titled “Phonology and the Problems of Learning to Read and Write,” in which she argued that the difficulties of struggling readers were linked most clearly with a failure of phonological memory, not a lack of syntactic competence. Like Orton and Chall, Liberman believed that reading difficulties were not related to a general lack of intelligence. Rather, she insisted that they were indicative of a specific pattern of brain activity that made certain literacy tasks more challenging than they might be for others. Similarly, because of her belief that phonological awareness was a skill, rather than an innate ability, Liberman argued that explicit instruction in this and other foundational literacy skills could help all students, including those with disabilities, achieve reading proficiency.
In 1988, Liberman was awarded the Samuel T. Orton Award from the Orton Dyslexia Society—now known as the International Dyslexia Association—for her outstanding contributions to the study of reading disabilities and her dedication to the cause of helping every child, regardless of ability, achieve reading proficiency.
It’s worth noting that Liberman was not alone in her work. Several other researchers, including Pat Lindamood and Nanci Bell, were conducting similar research on phonemic awareness throughout the early 70s. While we don’t have time to explore the contributions of every reading researcher of the era, it is important to recognize that their collective work has provided a vital foundation for the type of instruction science of reading advocates endorse today.
However, even with this boom of reading research being conducted across the country, there was still a significant number of educators and experts who believed that phonics just wasn’t all it was cracked up to be. We’ll cover some of those figures and their contributions to the literacy debate in our next episode of A Novel Idea.
A Novel Idea is a podcast from the Iowa Reading Research Center at the University of Iowa. It’s written, produced, and mixed by me, Meg Mechelke. Editing by Sean Thompson, and expert review by Nina Lorimor-Easley and Lindsay Seydel, with additional review provided by Grace Cacini, Natalie Schloss, and Olivia Tonelli. Fact checking by Maya Wald.
For further credits, including audio and music attribution, please see the link in the show notes.
Visit us online at irrc.education.uiowa.edu to find more episodes and additional literacy resources for educators and families. Again, that’s irrc.education.uiowa.edu. You can also follow us on Twitter at @IAReading.
If you want to help spread the word about A Novel Idea, subscribe, rate, and leave us a review wherever you get your podcasts. Institutional support for this podcast comes from the University of Iowa College of Education and the Iowa Department of Education.