What makes a Level 1 school?
State's measurements miss key factors
IN THE STATE of Massachusetts, the Department of Elementary and Secondary Education assigns all public schools a rating. Level 1 schools can boast they’ve received the state’s highest rating. Level 5 schools, by contrast, are branded failures. Educators and families take note.
But more than pride is at stake here. There are serious policy consequences attached to these levels – Level 4 schools, for instance, must enter into a turnaround process, while Level 5 schools enter into state receivership. Additionally, many parents make housing and enrollment decisions based on these levels. Consequently, even a small slip – from Level 1 to Level 2, for instance – can impact school demography, as well as that of the broader community.
Take, for example, John F. Kennedy Elementary School in Somerville.
For four years in a row, the Kennedy was a Level 1 school.
This year, the Kennedy is a Level 2 school because it missed its target with one subgroup – students with disabilities. For that group, the school’s CPI – Composite Performance Index – increased only from 70.8 to 73.6; the state’s target was a few points higher. Never mind the fact that the difference between the target score and the actual score is practically the definition of “margin of error.” More glaringly, this ostensible shortcoming doesn’t square with other figures – like the school’s Student Growth Percentile (SGP), which measures test score growth. SGP for special education students in English Language Arts (ELA) increased from 45 in 2015 to 52.5 in 2016; and in math, SGP increased from 47 to 58. For the whole school, students’ SGP went from 50 to 60 in ELA and from 53 to 59.5 in math.
In short, as measured by test scores, students at the Kennedy are learning. And special education students, who constitute 30 percent of the school’s population, are growing at a parallel rate to their peers within the school, and faster than their peers statewide. But according to the state, the scores should be higher.
If the Kennedy wants to get back to Level 1 status, there are several easy steps it could take. Some high-performing schools push lower-performing kids out. Others have ramped up the emphasis on test preparation, particularly with small subgroups of students. Still others have narrowed their aims, cutting back on any efforts disconnected from English and math tests. All of these moves would work, but any would be a shame.
Instead, leadership at the Kennedy is committed to doing what is best for its students, even if most of those efforts aren’t recognized by the state’s embarrassingly incomplete measurement system. Over the past two years, suspensions have declined to one-fifth of the previous figure, thanks in part to a restorative justice program and an emphasis on positive school culture. The school has adopted a mindfulness program that helps students cope with stress and develop the skill of self-reflection. A new “Maker Space” is being used to bring hands-on science, technology, engineering, and math into classrooms. The school’s drama club, offered free after school twice a week, now has more than 60 students involved. The inventory of achievements that don’t count is almost too long to list.
Yet we needn’t rely on anecdotes for evidence of performance. For the past two years, my research team has been building a more holistic measure of school quality for Somerville. This work has grown into a larger project – the Massachusetts Consortium for Innovative Education Assessment – which the Legislature funded through this year’s budget with a $350,000 appropriation. Seven districts across the state, including Boston, will join Somerville in working to measure school quality more holistically.
To be clear: test scores aren’t worthless. They do give us an indication of how students are performing with regard to basic competencies in English and math. But we should never rely on a single measure to make significant judgments about students or schools. And this is especially true when we know how narrow test scores are as a measure of school quality. To echo an oft-repeated comparison, relying on test scores to measure school quality is like relying on a thermometer to measure human health. It provides some useful information, but certainly not the full picture.
Dividing schools into quintiles – the top 20 percent, the next 20 percent, and so on – is an inherently problematic practice. It sends a message to educators and families that they are in competition with each other, and that only some schools can be good. But if the state is going to continue with this practice, it has a responsibility to do so as thoroughly and accurately as possible. It needs to do more than collect standardized test scores. Until that day comes, parents and policymakers should proceed with extreme caution when looking at the levels assigned to schools and districts. Levels, like looks, can be deceiving.

Jack Schneider is an assistant professor of education at the College of the Holy Cross in Worcester and the research director for the Massachusetts Consortium for Innovative Education Assessment. His latest book, about how to measure school quality, will be published by Harvard University Press in 2017. He is on the John F. Kennedy Elementary School Improvement Council.