NAEP Results: Less “Bang for Our Buck” (But Plenty of Whimpers)
“Between the idea and the reality . . . falls the shadow.” —T.S. Eliot
A new report from our friends at the National Assessment of Educational Progress (NAEP), also known as the “Nation’s Report Card,” provides data on student performance in reading and mathematics across multiple grade levels nationwide. This latest report shows us how well American twelfth graders performed in 2015 as compared with the last test administration in 2013.
Those were two years of contentious Common Core adoption in many states, or resistance to Common Core and reliance on existing standards in other states—two years of hard work at educational reform and improvement, wherever you lived, whether you were changing course or staying the course; two years of teaching, reaching, explaining, begging, and maybe even bribing students to achieve.
So, what’s the result of our efforts over the past two years? Well, according to NAEP:
In comparison to 2013, the national average mathematics score in 2015 for twelfth-grade students was lower and the average reading score was not significantly different.
In comparison to the first year of the current trendline, 2005, the average mathematics score in 2015 did not significantly differ. In comparison to the initial reading assessment year, 1992, the 2015 average reading score was lower.
Not what we wanted to hear, is it? Either nothing changed, or things got a little worse—hence the quote at the top of the page. Between our aspirations and our actuality, there always seems to be a little shadow—a little gap—a little, “Sorry, not quite.”
Why? What went wrong? Or, more accurately, what’s going wrong, day after day, week after week? Why is that shadow falling between what we’re trying to do and what we’re getting?
MIND THE GAP
Part of the performance gap comes from an implementation gap—the shadow that falls between our plans and the way we put those plans into effect. New standards or textbooks or pieces of whiz-bang software may be brilliant and revolutionary in theory, but if they’re rolled out to schools ineffectively, or haphazardly, or without real buy-in and understanding from teachers, they eventually wind up on the trash heap and reinforce our feeling that nothing ever works.
Were all of those abandoned programs and initiatives really terrible? Probably not. In fact, most of them were probably fine—maybe even better than fine…in theory. We just didn’t use them properly, or hold onto them long enough to see a result. Anything new requires a little patience, a little persistence. You would never buy a packet of apple seeds on Monday, plant them on Tuesday, and expect a glorious, fruit-laden tree by Friday. But that’s pretty much what we do in our schools, year after year. If the new thing doesn’t work in its first year of implementation, we give up on it and go back to whatever it was we were doing before. Our “flavor of the month” approach to reforms and resources may be one of our problems.
Another problem is that we aim for real thought from our students, but too often settle for mere response. If we’re not aware of that gap, it can cast a lethal shadow over all of our “college and career readiness” initiatives. Here are a few recent examples I came across in my travels:
THE VOCABULARY QUIZ
I visited with a high school teacher several weeks ago—a bright and capable young man who teaches Japanese in an excellent private school. He has been successfully teaching Japanese for a number of years already; his kids get good grades and their parents have been happy. He usually teaches in a fairly traditional style—a lot of lecture, a lot of worksheets, a lot of rote memorization. Pretty standard stuff.
But after a PD session at his school, he decided, as an experiment, to change how he assessed vocabulary. Instead of giving his students a traditional quiz on the words they had been given to learn (here’s the word; choose the correct definition from among four choices), he asked his students to use each word in a couple of sentences. The result was disastrous. They could identify the meaning of the words, but they couldn’t use them. They couldn’t do anything with what they knew.
WHAT’S IN A NAME?
In an elementary classroom at another school, I saw a teacher leading an activity in which students generated lists of nouns, adjectives, and verbs, which the teacher placed up on a large, color-coded chart. The students were asked to create sentences by choosing a couple of the nouns, one of the adjectives, and one of the verbs. When I arrived in the classroom, the students were almost done; most of them were drawing pictures to illustrate their sentences. All the kids looked happy and successful.
But when I leaned over and asked one of the girls which word was the verb, she had no idea…even though it was colored blue on her paper, just as it was colored blue on the chart, under the title, “Verbs.” When I asked her what a verb did in a sentence, she didn’t know. When I told her what a verb did in a sentence and then asked her which word in her sentence was doing that, she didn’t know. And she wasn’t the only one who was having this problem. For many of the students, the sentences looked fine. Their sentences were, in fact, correct. They were able to respond successfully to the instructions and complete the activity to the teacher’s satisfaction. But they couldn’t talk about what they were doing, and they seemed not to understand what it was they had done.
SUNK IN CARBON
When I came home from my most recent travels, I saw my sixth grader hard at work on a science assignment. I asked him what he was doing, and he showed me a worksheet about the carbon cycle. The question he had just completed read, “How does deforestation affect the carbon cycle?” His answer was, “Trees are carbon sinks.” He was very happy with his answer, because it was factually correct. He could even show me, in his textbook, where that factual detail lived. But his answer, while true, didn’t respond to the question he had been asked. It took several minutes of (gentle) browbeating and asking “so what?” to get him to connect his fact to the idea of deforestation. He had a lot of facts ready at hand—in his brain and in his notes—but he wasn’t sure what to do with them.
It’s an easy miss on a homework assignment, but it’s exactly the kind of thing we need our teachers to tease out with students. Are our teachers taking the time to help students connect thought to thought, and idea to action, in a way that helps them make the things they’re learning useful?
WHAT DOES THE DATA MEAN?
Every test is a transfer task—you have to take what you learned in the classroom and apply it somewhere else. But no test prep can prepare you for every question or question type you may encounter in the world. You have to be able to come at any challenge with a deep understanding of the relevant content and an ability to be flexible in the way you use it. You have to be ready to improvise at a moment’s notice. You have to be able to think about what you know.
This is why athletes need more than drills. They need scrimmages—practice games—to get the experience of making decisions and using their skills in the crazy, unpredictable, changeable context of a game.
The question for us is: are we deficient in our skills drills, or in our scrimmages?
I don’t believe that a stagnation or slight downturn in NAEP scores means that our teaching is deteriorating, or that a particular class of students isn’t as bright as the class that came before it. We’ve been at the lower levels of Bloom’s Taxonomy—teaching, questioning, and testing at the levels of basic knowledge—and we’re still pretty good at that level. But we’re less effective at getting kids to think about what we’re teaching them so that they can use what they learn confidently and in a variety of new ways.
The more our assessments move away from basic question-and-response—the more they try to present students with authentic thinking and reasoning tasks—the more likely we are to see a shadow fall between what we’ve taught students and what they can do with what they know.