Time for school accountability reset
With so much change in state tests, a fresh start is needed in 2017
NOW THAT THE question of raising the charter school cap in Massachusetts has been resolved, at least for now, I am hopeful that we can begin to examine and discuss other necessary changes to our educational system here in the Commonwealth. I say this with a certain amount of trepidation, as it seems that every time those discussions arise the end results rarely align with what educators in the field believe is important to improve our system. However, there is an area of immediate concern that should be addressed.
The state accountability system in Massachusetts is currently in a state of disorder. Over the last three years students in the Commonwealth have taken three different state exams: MCAS, PARCC paper-based, and PARCC computer-based. Now, two new forms of the state exam, paper and computer based versions of MCAS 2.0, which draws on elements of both PARCC and MCAS, will be given in grades 3 through 8 in the spring of 2017.
I understand that the Department of Elementary and Secondary Education has done what it could to crosswalk the scores for these assessments in order to place schools in levels and assign percentile rankings. However, with many different assessment scores comprising districts’ accountability rating based on a four-year calculation, the validity of those calculations is very much in question, at least by many of us in the field. Furthermore, regardless of the efforts taken to be able to compare scores, growth, and achievement levels on the various exams, many important variables have not been taken into account.
For example, student performance across the state has demonstrated that students who took the PARCC exam on paper scored higher than those who took it on computers. Nevertheless, the state does not account for this disparity for accountability purposes, and those scores are compared as if they carried equal weight. Additionally, at least 40 schools across the state saw their accountability levels negatively impacted because opt-out students lowered participation rates. Opt-outs do more than depress participation rates; they also lower scores, as we have seen that the vast majority of opt-out students are our higher-achieving students. Thus, achievement in those schools is negatively impacted along with participation.
In addition to resetting accountability determinations, we should also investigate the development of a calculation to weight the paper- vs. computer-based exam for MCAS 2.0 during this transition from one mode to the other.
Some districts have decided to move immediately to full computer-based testing. Although they know it will hurt their scores in the short term compared with districts that remain with paper-based exams, they believe it will be a benefit in the long run as their students gain familiarity with the platform. Other school systems, even if they have the capability to take the MCAS 2.0 on computers, are reluctant to make that move before they must because of the likely impact on scores. It comes down to a choice between playing the long game and focusing on an immediate return. This dynamic is inhibiting the transition and creating an unequal playing field for districts.
These types of choices are not educational decisions, and they do not meaningfully affect the delivery of educational services to our students one way or the other. This is a calculation in strategic gamesmanship that all superintendents are being driven to consider because the assessment system has been in flux for years now. This should not be the case.
In looking to mitigate detrimental impacts to districts of this system in transition, the state education department has said school districts will be “held harmless” for accountability purposes. This is certainly not how those of us in the field view it — for two primary reasons.
First, districts are not truly held harmless because, if we continue with the current method of calculating accountability, an individual year’s scores are still factored into a four-year accountability determination. Thus, those scores continue to follow — and harm — us for four years.
Second, even in the current year we are not “held harmless.” The harm is not to our actual accountability rating, but to public perception of our schools. That perception is shaped by a school’s percentile ranking even more than by its accountability level. Consequently, since any drop in percentile ranking is still shown on the district profile, public perception of the district is diminished even if the system is “held harmless” for accountability purposes.

At a minimum, we need to reset accountability levels after the administration of the MCAS 2.0 in the spring of 2017 so that we all have a level playing field with the same assessment. Taking this action would mean that districts truly were held harmless during this transition. As part of this calculation, developing a method to weight computer- versus paper-based testing would add further validity to the system and help spur the transition to a fully computer-based system by removing districts’ incentive to delay moving to online testing.
Todd Gazda is superintendent of the Ludlow public schools.