Clare’s mother always used to tell her, “It’s not what you say that is the problem, it’s how you say it!” This same thought rings through our heads as we talk with teachers, administrators, and coaches about assessment. Most of the time, the problem is not which assessments are being used; it is how they’re being used. Assessment is not an end but rather a means to an end. What is learned in the process of assessing students is every bit as important as the final outcome of the assessment. Here are some ideas to help make the most of the assessments you are using.
An Assessment Is Not a List of Numbers
Required district or school assessments are often viewed as a list of numbers that need to be provided to a certain person by a certain date. The goal is to get them done, recorded, and delivered. Kids are then sorted by scores, and resources for extra services are allocated. The quantitative performance is emphasized and is often the only aspect of the assessment that is considered.
There is so much more information we can gather from these assessments. It is a shame we rarely take enough time to use them to inform our classroom instruction. We need to look at the actual assessments (not just a compiled chart of scores) and analyze how the students performed both quantitatively and qualitatively. We look for patterns in a student’s performance and determine which strategies the child used while taking the assessment.
In a running record/DRA/Fountas-and-Pinnell-type of assessment, we analyze miscues, self-corrections, and the quality of the accuracy score achieved by the student. For example, if a student receives an accuracy score of 99 percent but had 13 self-corrections and 10 rereads, we reconsider how “easy” that text really was for the student. It seems to us that the student had to do quite a bit of work to accurately decode the text, and we wonder if he or she could easily read this text level when working independently in the classroom.
We are also concerned with the amount of data being lost when teachers and coaches do not take the time to analyze the miscues and self-corrections. This type of analysis gives us a window into understanding how a student approaches texts and monitors for meaning. These assessments can help a teacher determine the type of small-group and whole-class instruction that needs to be done to support her readers in using strategies effectively and flexibly. This type of analysis is typically not required—only the list of levels needs to be turned in. Yet conducting this type of analysis is essential for making the best use of our students’ time and maximizing the use of the assessment tool.
In a phonetic or phonemic type of assessment, such as the DIBELS, we look at how the student achieved his or her score. Some students perform the tasks we ask of them very quickly, and this speed allows them to reach a benchmark score even though they may have been fairly inaccurate. Other students work slowly and meticulously and may have a higher percentage of accuracy overall, but do not reach a benchmark score because of their slow pace. Some students reach a benchmark score by quickly giving the first sound of each word, whereas others reach a benchmark by giving the initial, medial, and ending sounds—same scores, but qualitatively very different readers. What about the student who can tell you all the letters in the alphabet when there is no time constraint, but then does not meet a benchmark score when asked to perform the same task with limited time?
When we simply look at these students based on the numerical score they achieve on these assessments, we lose a lot of data. Knowing a student or group of students did not reach a benchmark helps us determine that these kids need support, but it does not tell us the type of support they need. The students in the examples listed above might require very different types of instruction to help them acquire the skills they need.
Required Assessments Count as a Conference
In several districts we are coaching teachers on how to use conference notes to plan whole-class, small-group, and individual lessons. Lately in these meetings, teachers have sheepishly admitted that they did not do any conferring in the last two weeks because they had to administer assessments. We emphatically reassure these teachers: “Assessing students counts as a conference!” Lucy Calkins describes the structure of a conference as research, decide, teach. When we spend time assessing our students, we are “researching” what they need to learn. When we take this information and use it to decide what our students need to learn and then organize that data to help us form whole-class, small-group, and individual lessons, we are conducting an essential part of a conference. It may be a long conference, but it sets us up for lots of “teach.”
You can use the required assessments you are gathering in many different ways in your day-to-day teaching. For example, once you complete DRAs or running records, you might take the time right then to build children’s independent reading bags with them. You have current information on their interests and reading levels—why not use it to match texts to readers? Many teachers are now conducting their DRAs/RRs near their library area so they can have easy access to books.
Another way to translate assessments into day-to-day teaching is to use your conferring notebook while you are doing your required assessments. That way, once you have completed and analyzed the assessments, you can jot down notes to help you focus your upcoming lessons. For example, if we notice a student is missing details in a retelling or is reading word by word, we would write those notes in our conferring notebook to guide future small-group or individual lessons. It is very helpful to take a little extra time after each assessment to think about what you learned and how you can use that data tomorrow to lift the quality of your instruction.
Make the Most of What You Have
We are frequently asked which assessments we think are the best, and whether a district should switch from an assessment they are using to a new assessment that is being marketed. We could debate the pros and cons of each assessment for hours. In the end, we believe that what is most important is that you can assess the full profile of a reader and that you use the assessment data to inform your teaching. If a new assessment comes on the market, you have to be sure it is worth the money to purchase and the time it will require for teachers to learn how to use it well. The critical question is this: Will the new assessment truly give us information that our current assessment cannot give us? If the answer isn’t a resounding “yes!” then the new assessment is likely a waste of time and money. There will always be something out there that is new and advocated by colleagues. If we buy each new product that comes out, we will never take the time—often years—to have an entire staff master the use of the assessments already in place. Sometimes it is better to stay the course with the tools we have, understanding that this is the best decision for our district at this point.
We have the opportunity and privilege to work with many schools using a range of different assessments. Our experience in these schools reinforces that there is not one “right way” to go about assessing our students. This work is messy and rarely precise. Yet we are convinced that almost any assessment can be valuable for teachers if you use the data to inform your instruction and monitor student learning.