thirteen
assessment: drunk tetris
fewer tests mean more assessment
On another test, he asks, "I'm supposed to say how many people will be at the party. If I count myself, this works out just fine. But if I don't, then it won't work. The test question doesn't ask, but I think it's rude to not attend your own birthday party." He's right. The correct answer could be 12 or 13, which is B or C.
He raises his hand five times when the question reads, "Which is the best question for . . . " and says,
"They're making the subjective into something objective. Why can't they just let me write my own question and judge that instead?"
No one asks him to defend his answers. No one gives him a chance to clarify a question. Given his special education accommodations, I can re-read a question, but I can't explain it. The system is set up to efficiently measure critical thinking, and few people seem to question whether higher-order questioning belongs in a low-order format (multiple choice).
I don't deny that he has a hard time reading. His mind meanders in often bizarre directions. As a result, he can answer a critical thinking question but miss a simple comprehension question. He over-analyzes answers that he believes are vague. In math, he can explain a complex concept and then make a simple math error that ruins the entire problem.
Doyle thinks that politicians should have to pass a series of tests regarding real-life skills. Do they use public libraries and public drinking fountains? Do they know how to fix things in their house when they break? Have they ever baked bread? Only then should they be able to create meaningless tests like this.
I'm not suggesting that this story resembles most students. Yet I have seen many students who fit this description. They are great thinkers and lousy test takers. Still, I'm banking on this hope: that some of the kids who fail the test will find a way to succeed in life.
* * *
It's my first year of teaching and I have the task of analyzing ethnic demographic data in relation to achievement scores. Basically, it's stereotyping with good intentions (if one considers raising test scores to be good intentions; I'm not so sure). Each person must give a potential cause and solution.
“Given the achievement gap between various subgroups, a few potential causes come to mind. First, the Hispanic group (a label that is actually a bit offensive to many who call themselves Latino) might have lower scores due to a lack of language acquisition. Another option might be cultural bias in the test itself, given the . . . "
The Sultan of Standards cuts me off here. “Those are excuses, not causes. I need you to produce, not excuse.” She has a tendency to restate herself with rhymes, even when the words don't really rhyme.
Her favorite is "collaboration not coblabberation." Perhaps it is a side effect of watching the OJ Simpson trial, but I cringe when someone rhymes advice to me – unless that person is Dr. Seuss.
“Well, I guess one solution would be to contextualize the subject to the student's culture. It would be part of differentiated instruction. Perhaps we could study ethnography or sociology and maybe implement a little Paulo Freire.”
“We're not going to overhaul the curriculum,” she responds.
Another teacher agrees. “John, you may not like it, but it's a necessary evil.” Let's avoid using “necessary evil.” If you're convinced that evil is necessary, there is a serious flaw in your worldview.
I shoot back, “Okay, then our only other option would be to get white kids. They did really well on the AIMS test. I hear they like casserole, so maybe we start adding tater tot casserole to the cafeteria. Oh, and make them feel really guilty. Whites are suckers for that. Oh, and sweater parties. White people love sweater parties. Just get some ugly sweaters and they'll join in.”
“I'm not sure I see where you are going with this,” the Sultan responds.
“Oh, I just thought that, you know, as long as we were labeling the minorities and blaming the teachers instead of thinking critically about the test, we might as well do some stereotyping of whites as well. Like NASCAR. White people love NASCAR. I bet we could even do a scientific poll to prove it. Then we don't have to call it a stereotype if it's data-driven.”
As I look back at this conversation, I wonder what happened to me. Maybe I grew less idealistic. Maybe I just learned to shut up and carve out my own little world. Maybe they quietly reprimanded me too many times. Or maybe I learned that if I use fewer assessments and focus less on achievement, my students will do just fine on the tests and the people in power will leave me alone.
* * *
I'm sitting at a team meeting during my first year of teaching. In order to prove that we are "data-driven" (as opposed to, say, "student-driven," which would be my preference), we sift through pages of graphs and charts, each with its own set of acronyms and abbreviations. RCBMS, AIMSWeb, AIMS, Galileo, DRA, DRP. The Special Ed kids have a test called the Woodcock-Johnson, and apparently I'm not supposed to snicker when I hear the name.
"What general trends do you see?" asks a specialist. I'm not sure exactly what specialists are supposed to do, but I know that they specialize; which is essentially the problem. As a teacher, I don't specialize. I teach whole subjects to whole students. I don't see a score, I see a child.
“I guess the general trend I see is that our kids don't read very well,” I say. Honestly, I could determine this trend with a simple five-minute glance into my classroom.
“I need something more specific. What is the data saying to you?” she asks.
A teammate ventures out, “I think the picture is incomplete. We don't have enough data to prove anything. We have a few tests, but I'm not sure we can determine what is causing our students to fail at reading.” I use the term "teammate," but I don't really consider it a team unless we are allowed to scratch ourselves and spit sunflower seeds.
“It's like a puzzle. We have most pieces, but we don't have the complete picture, right?” I add, so our team begins to discuss ideas of how to gather more data and gain a more complete picture.
Finally our special education teacher speaks up. “It's not like a puzzle. It's like Tetris. Ideally the data fits together, and when it clears, it's transparent. It's smooth. The pieces fit nicely. But we have too much data. We have too many tables and charts, and it's like we're on the ninth level and we're so overwhelmed that we're ready to throw our hands in the air and say, 'Game Over.' We have the data. What we lack is a lack of data.”
The curriculum specialist adds, "It's like we're playing Tetris drunk. Sometimes I wonder if we get drunk on data and miss out on why we need data in the first place.” I start to feel bad about my mental mockery of her being a specialist, because she is able, in the moment, to see whole students.
So she offers ideas for us to change our approach. Like Tetris pieces, we see each test as a different shape and potentially a great diagnostic tool for seeing both individual information and general trends. We begin to conference with students one-on-one and ask for student input. At times we bring in other test scores. More often, we have students self-assess their own work. The process is flexible and organic - less like a puzzle and more like a plant. Moreover, as we move toward “less data” and “less assessment,” we begin to move from judgment to feedback.
Most learning, and by that I mean deep thinking and profound wisdom, cannot be measured. Most. However, some skills need to be measured. I begin to see that “data” is not inherently evil, but often a valuable aspect of knowing my students. I tell myself that it's “information” or “feedback” rather than “data,” because of the scientific connotations. However, Javi reminds me, “This is science. It's science at its best. It's inquiry and analysis and discovery. It's not a rigid structure, but an exploration.” At times it still gets clunky, but in most cases it's flexible and transparent, and, like a game of Tetris, we grow more transparent and eventually students move faster toward mastering their own learning.
* * *
I gradually began to take a more realistic look at the meaning of assessment. I began to see the need for fewer tests in order to do more assessment. The following are some paradigm shifts I made along the way:
- From grades to assessments. I now view all feedback as "assessment," meaning I check student work often but I don't record a grade until the end.
- From assignments to projects and portfolios. Instead of individual assignments, students do projects with a reflection piece and a portfolio.
- From anti-test to someone who sees tests as occasionally necessary for measuring discrete skills. The DRA helps me know a student's reading level. AIMSWeb is decent for testing fluency. I now see these as diagnostic rather than judgmental tools.
- From isolation to holistic assessment. In other words, I see all work that a student does as a part of the learning process. Assessment is relational and by getting to know a student, I can better tailor lessons to fit that student's needs.
- From a management to a leadership perspective - For example, I won't walk around to "manage" a class and see what they are doing. I'll spend that time having one-on-one student conferences. The result is that I know students better and I trust students to get work done without me nagging them.
- From either product (behavioral) or cognitive process to a combination of both. I want to see what a student knows by what they think and how they can demonstrate it (though I do cringe at the word "product," because it carries a certain business-like connotation).
- From testing knowledge to drawing out wisdom. I want to see how students use knowledge to make decisions. I want them to think about what they know, but also understand what they don't know.
I dug up a sage plant today after we had drowned it. It's hard to tell what to do in such a dry climate. Each plant is different. Arizona is dry, and I have a few others that are victims of starvation/dehydration. Photosynthesis sounds really predictable. Just some water and light. Still, I screw it up far too often.
My mind always wanders to teaching when I garden. I am humbled by the reality that it's way too easy to starve and over-water in assessment and instruction. With some students, I mark up their papers with so many questions and comments that they feel defeated. Others slip through the cracks. The garden forces me to face reality.
Vygotsky used the term “scaffolding,” as if it were a mechanical process of providing extra instruction. It's not easy, though. The reality is that it's a mystery. I think that's why people feel the need to construct rubrics or quantify it in a score. Mystery is hard. When do I let a kid discover truth on his own, and when do I correct ignorance? How much of her grammar do I correct? At what point do I hijack her writing so that it becomes my voice?
My hope is this: that I will know my students well enough to figure out when to step in and when to retreat, when to speak and when to listen. When this fails, my hope is that I'll be humble enough to apologize and that students will be gracious enough to accept.
BONUS
SAMPLE MULTIPLE CHOICE QUESTION
Multiple Choice Item #1
Which of the following conspiracy theories actually represents reality:
a. Pacifiers were created during the Cold War by Soviet scientists hoping to create a generation of American pacifists. Now we have millennial men who don tight pants and wear guy-liner and manscara.
b. NASCAR is real, but the fans are fake. Or at least they were in the beginning. It began with a secret meeting where car companies and advertisers wanted to create a way to get people to pay to watch moving advertisements. They figured that even the most illiterate bumpkins wouldn't be duped, so they started recruiting fake fans from professional wrestling venues.
c. Facebook is really a CIA operation that began as a way to track terrorist groups. However, when that failed, they sold it to agribusiness and the mafia who are using Farmville and Mafia data to inform their decisions. It has a real Wisdom of Crowds feel to it.
d. Large transnational corporations make billions of dollars creating a rigged testing system that merges their textbooks, state standards, consulting firms and tutoring subsidiaries in a way that will maximize profit. The criterion-referenced tests are designed to ensure that half the students fail (thus making them norm-referenced) and allow for more consulting, conferences and remedial testing material. Despite being a public institution, it is slowly becoming privatized through a culture of fear. Everything from the assessment to the curriculum to the teacher certification exams is run by the same oligopoly of two or three education companies.
e. Muppets are real and we are pretend. Jim Henson led a Muppet revolution in 1973 and since then we have lived in a Matrix-like reality. Sesame Street exists as a meta-narrative reality TV show. Muppet children watch human puppets who are learning alphabets through watching Muppets.