State school rankings ‘virtually worthless’

System mostly captures family background, not school quality

THE FIRST DEADLINE for enrolling students next fall in a Boston school is this week. Consequently, parents are scrambling under the district’s controlled choice system to find the best fit for their children. As they do, many will turn to state-issued ratings, which can be found online and in the BPS registration guide.

Each year, the state classifies schools into one of five levels, with the “highest performing” designated Level 1. This practice, though distinct in its details, is in keeping with what the vast majority of states do. The theory behind such rankings, whether expressed as numerical scores, A-F grades, or narrative labels, is that parents and communities want a clear and simple indicator of school quality. Unfortunately, two inherent flaws make these levels virtually worthless.

The first and most obvious problem with state-issued ratings is that they rest primarily on a flawed measure: student standardized test scores. Test scores often reveal more about a student’s home life than about what he or she is learning in school. Testing is conducted in just two subject areas, English and math. And because standardized tests consist chiefly of multiple-choice questions, they capture only a small part of what students know and can do.

Even if the shortcomings of standardized tests were thoroughly accounted for, however, the state’s rankings would still fail to capture most of what parents care about. Last fall, MassINC conducted a poll of Boston parents and found that more than two-thirds of them identified as “very important” or “extremely important” all of the following: the quality of the teachers and administrators; school safety; the school’s approach to discipline; the school’s academic programming; college and career readiness; class sizes; facility quality; the values promoted by the school; and the diversity of teachers and administrators. These critical dimensions of school quality are mostly ignored in the vast majority of statewide rating systems, including the one used here in Massachusetts.

States, certainly, could include all of these factors in their summative ratings. But although that would be an improvement, such ratings would still face a second problem: schools are not uniformly good or bad. As most of us know from experience, schools, as structures, organizations, and communities, have different strengths and weaknesses. Schools that are struggling in some ways may be thriving in others. And schools with illustrious reputations often have a lot to work on. Insofar as this is the case, an evaluation system that combines the various components of school quality into a single score will almost certainly fail to inform, particularly if used, as the current system is, to rank-order schools.

Skeptics will argue that parents aren’t willing or able to interpret educational data, at least not without a summative ranking. They are wrong. Seventy-nine percent of respondents to MassINC’s poll said it was very important or extremely important to review information like graduation rates, college enrollment rates, student diversity, staff diversity, test scores, afterschool programs, and more. These were typical parents, not data analysts; nearly half had no education beyond high school. Still, they wanted rich and comprehensive information about their public schools. The same is true of other parents across the commonwealth and across the nation.

For the past generation we have been evaluating schools in a manner that is misleading at best: ranking schools according to incomplete criteria and fostering the misconception that schools are either “good” or “bad.” These ratings affect community morale, drive teacher turnover, shape district priorities, and trigger accountability interventions. Perhaps most importantly, they shape the decisions parents make about where to live and where to send their children to school. Roughly half of the schools in Boston are Level 3 schools, which ostensibly represent the bottom 20 percent of performers. But given the strong link between family income and test score performance, these ratings almost certainly indicate more about student demography than about school programming. And they communicate little about what else is happening inside those schools, many of which are excellent places to get an education.

Stakeholders are hungry for information about school quality, and they will use whatever they can get their hands on. That should be a call to arms for collecting and disseminating better data, a project that many, including leaders at the state and local level, are currently invested in. Equally, however, it should humble anyone publishing information about schools. The labels applied to our schools, however inaccurate they may be, tend to stick.

Meet the Author
Jack Schneider is an assistant professor of education at the College of the Holy Cross and director of research for the Massachusetts Consortium for Innovative Education Assessment. His latest book is Beyond Test Scores: A Better Way to Measure School Quality. You can follow him at @Edu_Historian.
