Thursday, January 6, 2011

Meet the New Test -- Same as the Old Test

In a recent Washington Post piece, Secretary of Education Arne Duncan repeated a refrain we have heard far too often on the subject of testing.

In a nutshell, Duncan admits that No Child Left Behind and its reliance on standardized, fill-in-the-bubble, multiple-guess tests both dumb down and narrow instruction. Surprise, surprise.

The Secretary goes on to promise that new consortiums working on assessments will produce “a new test” he claims will “measure what children know across the full range of college and career-ready standards, and measures other skills, such as critical-thinking ability.”

Allow me to express a bit of doubt. For starters, I hope he doesn’t define “new test” as “one test,” because that will never accomplish what he claims he wants: an assessment that measures a broad spectrum of student abilities. Further, unless these new tests are uncoupled from the high stakes they currently carry—such as punishments for schools and teachers—they will be just another set of standardized, easily scored exams that tell us little about what is really going on in our classrooms.

I am thinking about all of this today because it is the first of two days of our semester performance assessments at our school. At the end of each semester, teachers engage students in extensive, half-day performances of what they have learned. The idea is to have them show what they know through performance rather than filling in bubbles or choosing the right answer from a list of choices.

Here are several examples:

We give our American Government students two opinion pieces that take opposing viewpoints on the recently passed health-care legislation. They are first asked to read the pieces using a literacy strategy called text-marking, and they are assessed on how well they have read. Then, using resources they accessed in class and information they glean from work in the media center, they are asked to draft their own position paper and submit it as an opinion piece for the local newspaper. Finally, they will engage in a Socratic Seminar discussing both the pieces they read and their own writing. A team of teachers using the rubrics students have used all semester long will evaluate each piece of the work.

Chemistry students begin class with a paper-and-pencil activity in which they balance chemical equations and solve for missing quantities. They then move to the lab where they will find a station with the requisite materials to conduct an experiment. The task: complete the experiment and produce a full lab report that includes the hypothesis tested, the results gained, and implications for further research.

And in physical education our students spend the first part of the period using a website to determine their metabolic levels and basic data. Then they select a candy bar from their teacher’s desk. Now, the kicker: they are asked to design and then perform a 60-minute exercise program, based on what they have learned this semester, that will work off those calories!

I could go on, but you get the drift.

We are not perfect at this work. That’s why we take time after the assessment days are over to review how the assessments went and, most important, what we learned about our students’ abilities. What we find impacts our teaching next time around.

Compare this kind of assessment to our students’ experience with the Ohio Graduation Test, which is used for both state and federal accountability reporting. Each test -- reading, math, writing, science and social studies -- takes the same amount of time. But rather than ask students to do something with what they know, these tests ask them to regurgitate what they have heard. In fairness, there is some writing on the tests, but most of it still involves selecting the right answer from a list of givens.

Add to this shallowness the fact that our faculty never gets to see full student results or the writing samples and how they were graded. The results do little to inform teaching except to let teachers know they should spend more time on the Boxer Rebellion, photosynthesis, or two-step equations—inferences based only on aggregate scores of the entire class.

Duncan applauds NCLB for disaggregating data—but bad data disaggregated is still bad data. And let’s be honest. All this so-called “data-driven decision-making” talk should really be called what it is: test-driven decision making. Ohio’s school report cards consist of 26 “data” points, and 24 of them (92%) are test scores.

By the end of this week we will have mountains of information on our students, their achievement, and our teaching. All schools could have the same information. The New York Performance Assessment Consortium has demonstrated time and time again that performance assessments, teacher-designed and evaluated, have led to higher rates of student success both in and after school.

I wish I could believe that the new Congress, in some yet-to-be-found bipartisan spirit, would end the reliance on standardized tests—much like the higher-achieving nations we point to with admiration. But when Duncan continues to talk about one standardized, high-stakes evaluation, I have more than a few doubts.

In the meantime, our school will continue to use what we learn about our kids through performance assessments to improve instruction and prepare them for the years after high school -- years where what you can do will count for a lot more than what you can memorize.
