Friday, July 22, 2016

What Is the Best Way to Assess Early Literacy?

Recently a teacher wrote to me about her school district's plans to scrap Running Records and use a combination of DIBELS and AIMSweb for assessing early literacy abilities. She asked for my help in fighting this ill-advised policy move, and I provided some research-based resources. I am pleased to say that this teacher, through her informed and impassioned action, was able to prevail, and the plan to eliminate Running Records was shelved.

This incident got me thinking, and I decided to write this post to clarify some issues in early literacy assessment. As I travel to schools around the country, I have noted the proliferation of DIBELS and AIMSweb. Both DIBELS and AIMSweb are called "curriculum-based measures," or CBMs. This term sounds good, of course, and may help administrators sell the product to school boards, but as literacy assessment expert Peter Johnston points out, neither of these measures is curriculum based, unless we consider counting correctly pronounced words in a one-minute reading a part of the curriculum.

I would like to compare these assessments of early literacy to two assessments that I think are more enlightened and effective: Running Records and The Observational Survey, both devised by literacy researcher/educator Marie Clay.

Both DIBELS and AIMSweb take a "bits and pieces" approach to early literacy assessment. Children are asked to identify the letters of the alphabet, to read nonsense words, to segment the sounds in words, to perform a timed reading for fluency, and to use given vocabulary in a sentence, all under rigidly timed constraints. All of these abilities contribute to a child's ability to read and are evident in readers, but to suggest that these "components of reading" tell us much about readers is an error. As literacy researcher Michael Pressley has said, DIBELS

mis-predicts reading performance on other assessments much of the time, and at best is a measure of who reads quickly without regard to whether the reader comprehends what is read.

The Running Record and Observational Survey take a more holistic approach. The Observational Survey is generally used with children after the kindergarten year and specifically with students who are struggling to take on beginning reading instruction. The Running Record is typically used with students who have begun to read, but is also a part of the Observational Survey.

The Running Record requires the teacher to listen to a child read a chosen text on a particular reading level. While the student reads, the teacher notes words read correctly; words read incorrectly, substituted, or omitted; self-corrections; general fluency; and so on. The Running Record session may end with a few comprehension-check questions or, more appropriately in my view, with a retelling.

The Observational Survey has six components:

  • Concepts about print to discover what the student understands about the way spoken language is represented in print.
  • Letter identification to find out which alphabetic symbols the student recognizes.
  • Word reading to indicate how well the student is accumulating a reading vocabulary of frequently used words.
  • Writing vocabulary to determine if the student is building a personal resource of known words that can be written.
  • Hearing and recording sounds in words to assess phonemic awareness and spelling knowledge.
  • Running Records to provide evidence of how well the student is learning to use knowledge of letters, sounds, and words to understand messages in text.

So how do DIBELS and AIMSweb measure up against Running Records and The Observational Survey? When judging early literacy assessments, I am guided by six principles, many of which come from the work of Peter Johnston:

  • Early literacy assessment must inform instruction in productive ways.
  • Early literacy assessment is best when it resembles actual reading.
  • Early literacy assessment is best when the child's teacher does the assessment.
  • Early literacy assessment must help teachers identify what a child can do independently and with different kinds of support.
  • Early literacy assessment must help teachers provide useful information to parents on their child's developing literacy.
  • Early literacy assessment must inform the school on how well a program is meeting the goals of optimizing student literacy learning.

Informing Instruction

The chief purpose of assessment is to inform instruction. Teachers use assessment to give themselves actionable information that will help them design instruction for learners. As literacy researcher P. David Pearson has pointed out, DIBELS (and by extension the similar AIMSweb) not only fails to inform instruction but leads to bad instruction. In Pearson's view, DIBELS-like assessments drive teachers to focus on the bits and pieces of reading rather than on actual reading. DIBELS becomes the driver of the curriculum, and the curriculum is narrowed in unproductive ways as a result. To quote Pearson directly:

I have decided to join that group of scholars and teachers and parents who are convinced that DIBELS is the worst thing to happen to the teaching of reading since the development of flash cards.

In contrast, taking a Running Record of a child's oral reading provides the teacher with real, actionable information. By listening to a child's reading and taking notes on that reading, teachers get a window into what a reader knows and is able to do in reading. When the reader makes an error (or, in Kenneth Goodman's terminology, a miscue), the teacher gets invaluable information for future teaching points.

Following a Running Record, teachers can assess comprehension by asking the child to retell what was read. If a student has sketchy recall, the teacher can ask probing questions to get a fuller understanding of what the reader can recall with teacher assistance. All of this helps the teacher develop future instructional targets. I discussed probing for understanding in this post from two years ago.

Resembling Actual Reading

A Running Record, of course, has face validity because it is actual reading. With DIBELS and AIMSweb, students complete many "components of reading" tasks and read a passage to assess rate of reading, but they do not do actual reading. As I pointed out earlier, this focus on the components of reading and reading speed can and does lead to wrongheaded instruction.

Teacher Administered

Running Records are administered regularly by the classroom teacher. While AIMSweb and DIBELS assessments are often given by the classroom teacher, the information they yield is not actionable for improving instruction.

The Observational Survey is typically administered only to students who are struggling to learn to read and take on reading behaviors at the end of kindergarten or the beginning of first grade. These surveys are administered by specially trained teachers who will use the information to inform their individual or small group instruction.

Identify What a Child Can Do Independently and With Support

Running Records provide the opportunity to assess students independently, but they can be modified to see what a student can do with support. Support can take the form of a book introduction and practice read or prompting at the point of difficulty.

In assessing comprehension, the teacher may ask for an unaided retelling or provide prompts to assist the student in retelling; either way, the results inform instruction.

Provide Useful Information to Parents

DIBELS and AIMSweb are sorting tools that provide teachers with numbers. Teachers can report these numbers to parents and explain where a child falls on a normative scale, but the numbers don't really help the teacher say much about the child's reading.

A teacher who administers a Running Record has immediate evidence of what the child can and cannot do in reading, what instruction is needed, and where the child stands in reading compared to other children of the same grade or age. This kind of specific information is more useful to parents than number scores on "reading-like" activities.

Informing the School on Literacy Progress

I suspect this is where DIBELS and AIMSweb have their greatest appeal. They provide numbers that can be easily reported on graphs and charts that schools must provide to local and state accountability agencies. These assessments have the same attraction as an SAT score: they reduce complex processes to simple numbers. They also have all the problems of SAT scores, because they reduce complex processes to numbers that reveal little about what a student knows and does not know, or about what a teacher needs to do in designing instruction.

Systematic, documented Running Records, recorded over time, can provide numbers for these reports as well as more detailed information about students' actual reading progress.

AIMSweb and DIBELS do have the appeal of speed, but we need to ask at what cost. Does it really make sense to administer these screening tests to normally developing readers? The Observational Survey taps into the same kinds of information as these other assessments, but it takes about 45 minutes to administer. This would be prohibitive if every child needed to be assessed in this way, but the Observational Survey is given only to those students who are struggling to take on reading. For these students, it provides invaluable information and clear targets for instruction.

DIBELS and AIMSweb are artifacts of an assembly line view of reading. Children are placed on the conveyor belt, and doses of phonemic awareness, phonics, fluency, and vocabulary are screwed on and tightened until the conveyor belt spits out a reader. This very American view of production may work for widgets, but it does not recognize the complexity of language or reading development. Rather than an assembly line metaphor, we need to think of the single-cell embryo that develops increasing complexity as it grows. For children, increasing language facility develops by interacting with the world, by experiencing language, first orally and then in writing, and then by gradually making sense of that complexity so that they listen with understanding and read with comprehension. Parents and teachers can guide this development, and thoughtful assessments that acknowledge this complexity can inform our instruction.
