Moving beyond MCAS

Current measures of school quality are incomplete, inaccurate

HOW ARE THE PUBLIC SCHOOLS in Massachusetts doing? Are they nurturing engaged thinkers who value learning? Are they expanding the way young people see the world? Are they fostering creativity?

We want schools to do a great deal for young people. Yet most states, including ours, measure school quality chiefly through an absurdly narrow instrument: the standardized test. As a result, we know surprisingly little about the degree to which schools are succeeding on the full range of aims we have for them.

That would be bad enough if standardized tests effectively measured the one outcome they target—student academic competence. Yet, as research indicates, they don’t even really do that. Scores do tell us something about the factual knowledge students possess in tested subject areas. But they don’t tell us anything about untested subjects. Nor do they tell us about the development of cognitive skill. Worst of all, test scores tend to tell us far more about a school’s demography than about its quality. As research has found, school factors explain only about 20 percent of achievement scores—about a third of what student and family background characteristics explain.

This is not to suggest that measurement has no place in public education. Good information is the lifeblood of any successful organization. But the state’s current measures of school quality are incomplete and inaccurate, and the consequences have been deeply harmful for our schools.

Consider how current measurement systems have distorted the curriculum. Because schools are held accountable for a narrow set of scores—generally on math and reading tests in grades 3-8, as well as one year of high school—school leaders have responded rationally by limiting instruction in untested subjects. Meanwhile, time devoted to testing and test preparation has expanded, making school less engaging, more stressful, and often less educational.

Consider, too, how current measurement systems have unfairly punished and stigmatized schools. In most states, schools are responsible for bringing all subgroups of students to a defined level of “proficiency.” If they do not, the state intervenes. But because low-income students and students of color are likely to score lower on standardized tests, their schools are far more likely to be sanctioned, regardless of how much their scores have improved. Such penalties, which destabilize schools and undermine their autonomy, create churn in school staff, as teachers flee or are fired. They send a message to students that they are on a dead-end track. And by further scaring away well-resourced and quality-conscious parents, they intensify segregation.

Measurement is not inherently harmful. Accurate and comprehensive information about schools can empower parents and community members, strengthen teacher practice, and improve governance. It can make conversations clearer and more productive. And it can align stakeholders around shared aims. To realize this potential, however, data systems must begin measuring the things we actually care about in education, and they must stop measuring things that reveal more about demography than about school quality.

Our organization, the Massachusetts Consortium for Innovative Education Assessment, decided to take up this challenge by beginning at square one. Sure, we have access to test scores. But what kind of information would actually bring school quality into clear focus? What kind of data might highlight where schools need support, rather than just providing a means of ranking them? To answer such questions, we spent several months reviewing research on educational effectiveness and examining polling on what Americans want from their schools. Then, we turned to stakeholders for input. Conducting focus groups with parents, students, teachers, administrators, and community members in our six partner districts—Attleboro, Boston, Lowell, Revere, Somerville, and Winchester—we produced a school quality framework that outlines a clear and comprehensive vision of what good schools do.

Only after developing this framework did we begin to look for measurement tools. Our aim, after all, was to begin measuring what we value, rather than to merely place new values on what is already measured. For some components of our framework, we knew we could turn to districts, which often gather much more information than ends up being reported. For many other components, we could employ carefully designed surveys of students and teachers—the people who know schools the best. The biggest lift—measuring academic knowledge and skill—would require a shift away from multiple choice tests, and toward curriculum-embedded performance assessments designed and rated by educators rather than by machines.

In short, we had our work cut out for us. And, as we begin our second year, much remains to be done. That said, we are already producing answers to the questions that stakeholders actually have about the public schools. Are the schools encouraging problem solving and critical thinking? Are they fostering social and emotional health? Are they cultivating engaged citizens? The Massachusetts Department of Elementary and Secondary Education can’t tell you. But we can.

We hope the Legislature, which has twice allocated modest matching funds for this effort, will once again override the governor’s veto. And we hope that, unlike last year, he will allow the funds to be spent, rather than clawing them back at the first opportunity. The state spends over $30 million a year on developing, administering, scoring, and reporting standardized tests. But how comprehensive is the information they provide to parents and taxpayers? How accurate is the picture that test scores present to politicians and policy leaders? How well do standardized tests inform the work of educators and administrators?

The answers to these questions are increasingly obvious. And that’s exactly why we’re building a replacement.

Jack Schneider is an assistant professor of education at the College of the Holy Cross, the director of research for the Massachusetts Consortium for Innovative Education Assessment, and the author of Beyond Test Scores: A Better Way to Measure School Quality. Follow him on Twitter: @Edu_Historian