Had a wonderful conversation with Emily Johnson about assessment at AERA. It fits right into the second week of #rhizo15 as we discuss counting things.
I like counting things. I find factor analysis fun. Regression models are great. I like counting and making things count.
Even if I do (try) a qualitative study I will end up counting codes. I will then want to tweak the learning space to see if I can cause a statistical difference in those codes.
The conversation began with Emily sharing out a slide from Daniel Willingham. I agree with Willingham that content knowledge = comprehension. I also agree with him and Robert Pondiscio that reading assessment makes very little sense after basic decoding skills.
There are some universal comprehension skills that can and should be taught. Yet the effect sizes of these lessons wane as readers develop greater proficiency. They also rarely transfer to other texts. Michael Fagella-Luby likes to point out that this strategy instruction is critical for special education students. He is right, but strategy instruction should not be the crux of our reading programs.
I spent my doctoral career designing reading assessments. I had to design and validate seven different measures for my dissertation alone. I don't hate testing. I just think some things essential to schools can't be assessed.
We know reading motivation is a strong predictor of comprehension. Yet the word appears only once in the Common Core State Standards. Why? It is hard to measure.
Even more important is the love of the word. I want the students I teach to have a passion for playing with prose. I want them to have a library of reactionary gifs they can post on topics that matter to them.
I am not sure this is an outcome that can be measured.
My other issue is what happens when you take Willingham's and Pondiscio's position to its ultimate logical conclusion. If content knowledge matters most, then someone has to decide what knowledge. No government agency should be in the business of deciding a universal canon of knowledge. We already ignore the counter narratives of People of Color in our schools. We suppress stories of the oppressed. We already ignore the diverse multiliteracies of today's youth. Having a government decide what we need to know is no democracy I want to live in.
I agree here with James Paul Gee that students have islands of knowledge. A very young child may be able to understand a complex text about Minecraft or about baseball. This is not a matter of Lexile level but is governed by discourses.
Here Emily and I disagreed a little (I think; it is very easy to misconstrue positions and intentions on Twitter). I just do not think the assessment regime schools have lived under since NCLB's passage (or since A Nation at Risk was published) has been good for schools. If NAEP scores have been so steady in the era of accountability-based reform, why are we still wasting billions, possibly trillions, on the same path? Isn't replication the first step in ed research? Don't we have enough evidence that testing does little for schools? Could the billions being spent on the bad math of VAM and teacher evaluation be better spent?
So how could we do reading assessment?
What if teachers took a competency-based approach to comprehension assessment? I see it somewhat in schools. They have taken the CCSS grade level expectations and made report cards, but schools get these wrong. They often have a four-point scale ending in 4, exceeding grade level. My issue: since the CCSS are end-of-year expectations, what are you doing for the child who meets or exceeds this expectation on their report card halfway through the year? What about the child who finished last year meeting the GLE-based competency? Why not move on to the next year's expectations? Based on the assessment data we are wasting their time.
These are just some quick thoughts, but I was thinking about #openbadges, reading comprehension, and the Common Core. A digital badge is a visual representation of the data behind the image. What if a teacher picked a series of GLEs from the CCSS and created a learning pathway that could be represented by a badge? The CCSS were never meant to be taught in isolation anyway.
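To make the "data behind the image" idea concrete, here is a minimal sketch of what the metadata bundled with such a pathway badge might look like. It is loosely inspired by the Open Badges assertion idea; the pathway name, standards codes, and evidence entries are all hypothetical, illustrative choices, not part of any official schema.

```python
import json

# Hypothetical badge data for a teacher-designed CCSS reading pathway.
# The badge image a student displays would be backed by structured
# evidence data along these lines (all field names illustrative).
badge_assertion = {
    "badge": "Informational Text Pathway",        # hypothetical pathway name
    "standards": ["RI.4.1", "RI.4.2", "RI.4.9"],  # illustrative CCSS GLEs
    "evidence": [
        {"type": "reflection",
         "note": "Student reflected on growth along the pathway"},
        {"type": "work_product",
         "note": "Annotated article with tagged text moves"},
    ],
    "issuedOn": "2015-04-20",
}

# Serialize the assertion so it could travel with the badge image.
print(json.dumps(badge_assertion, indent=2))
```

The point is simply that the badge is a pointer to evidence, not a sticker: the reflections and tagged work products the next two paragraphs describe are what the image would actually represent.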
Teachers could then require the student to reflect on their growth along this pathway. The teacher could also collect and analyze evidence of student growth by tagging evidence in work products or student dialogue and text moves during the work process.
Then the students could be assessed on the vocabulary that matters in the discipline. They could complete concept maps pre and post to measure knowledge growth. I am sure these two assessments would go a long way in predicting how students would comprehend a text in any given discipline.
In terms of the harder things to measure: passion, engagement, etc., I do not think they can be counted but they could be cultivated. If you figure