Kent Pekel

Because assessment and evaluation in education is terrifically complicated stuff, Kent Pekel, executive director of the Search Institute, asked MinnPost — which contacted him in response to educator complaints — to publish his detailed critique of Minnesota’s Multiple Measurements Ratings. What follows is his full explanation of why he objects to the MMR:

1. MDE Commissioner Cassellius and the Dayton Administration deserve great credit for pursuing a waiver from No Child Left Behind when it was not yet clear (as subsequently proved to be the case) that the U.S. Department of Education would issue many waivers from a federal education law that had long outlived its usefulness and validity.

That said, those of us who served on the advisory committee that led to the creation of the MMR were repeatedly told by the MDE staff members who managed the committee that the system we were designing on the compressed federal timeline would be a transitional one.

We were told that after the waiver was approved, a more deliberate effort to design an accountability system for Minnesota schools that would truly support improvement in student achievement would be launched.

We were told that this subsequent effort would include much sounder technical analysis of actual student achievement data than had been available to the working group, and that it would include more representation from communities of color and low-income communities across the state — whose children were the primary focus of the new accountability system.

Many years later, that more data-driven and inclusive effort to design an accountability system has never been launched, and state officials now talk about the MMR as Minnesota’s standing accountability system. 

I, for one, endorsed the new system only as a transitional one. Had I known that all these years later Minnesota schools would still be using the MMR, I would have withheld my endorsement and started writing opposing op-eds immediately.


2. The MMR is incomprehensible even to experts, whereas promoting large-scale improvement requires simplicity at the center. If you don’t think this is true, ask a state official to describe how the MMR is calculated, and don’t allow that individual to use terms like “growth” as though they explain anything. How exactly is growth calculated in the system? If schools are to design and implement meaningful plans to improve the outcomes the system seeks, they must understand the system itself. Say what you want about the annual rankings of hospitals and colleges that US News publishes; an average person who reads the technical explanations printed in the back of the magazine can quite easily understand how those rankings were determined (though he or she might, of course, disagree with the factors that were used).

3. The MMR has not led to the identification and wide-scale adoption of any best practices that are being studied and replicated in other schools.  This many years into the new system, what have we learned from the schools that achieve the Reward designation that other schools are now adopting? 

4. Key elements of the MMR system compare the performance of some schools to the performance of other schools rather than to an objective standard such as preparing students for success in postsecondary education or achieving a year’s growth for a year’s time in the classroom. We are encouraging students today to measure their performance against objective standards such as college and career readiness, and we should do the same for their schools. 

5. The MMR doesn’t actually use multiple measures — it uses scores from the same test in multiple ways, plus high school graduation rates. Many measurement experts and organizations advise against using test scores in this way. In addition, the MMR ignores character skills such as perseverance and sociability, which research shows influence outcomes in education, the workplace, health and criminality as much as IQ does.

6. The MMR summarizes the performance of schools in a single number that is the result of combining four separate numbers. There is no research on high-performing schools that justifies the equal weighting of the separate numbers that influence this single summative number, which is increasingly being used by educators and funders across the state to justify both rewards and interventions. 

It’s like giving a standardized test to four children in the same family, combining their scores into a single score, and then ranking families against one another on how educationally successful they are. Does the composite number tell you something about that family’s educational outcomes? In a very general sense, yes, but not much. The composite number masks the differences in performance across the categories, and those differences are precisely what would be of real value in a sound state accountability system.
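To make the masking concrete, here is a minimal sketch in Python. The four domain labels follow the components described above; the scores, and the specific averaging formula, are invented for illustration and are not the MMR’s actual calculation:

```python
# Hypothetical composite of four domain scores, equally weighted.
# All scores are invented; this is not the MMR's actual formula.

def composite(proficiency, growth, gap_reduction, graduation):
    """Average four domain scores with equal weights."""
    return (proficiency + growth + gap_reduction + graduation) / 4

# School A: uniformly middling in every domain.
school_a = composite(proficiency=50, growth=50, gap_reduction=50, graduation=50)

# School B: strong proficiency and graduation, weak growth and gap reduction.
school_b = composite(proficiency=75, growth=25, gap_reduction=25, graduation=75)

print(school_a, school_b)  # 50.0 50.0 -- identical composites
```

The two schools receive the same summative number, yet they need entirely different improvement strategies; that is exactly the information the composite throws away.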

7. MDE has consistently reported the outcomes of the MMR in ways that support the policies of the department and the Dayton Administration, rather than as the objective agency that could serve as a convening force in Minnesota’s divisive educational debates.  

For example, MDE recently reported that two-thirds of Minnesota schools are on track to cut the achievement gap by 2017. That sound bite suggests that two-thirds of the Minnesota students who are on the wrong side of the achievement gap are making progress. It is misleading because each school counts just once in the “two-thirds” proportion, even though the schools that aren’t making progress serve a disproportionate number of the students in the state who are not meeting standards.
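A small worked example makes the weighting problem concrete. The enrollment figures below are invented; the point is only that a school-weighted proportion and a student-weighted proportion can tell very different stories:

```python
# Each tuple: (students below standards at the school, school on track?).
# The figures are invented to illustrate school- vs. student-weighted counting.
schools = [
    (50, True),    # small school, on track
    (50, True),    # small school, on track
    (400, False),  # large school serving many struggling students, not on track
]

share_of_schools = sum(1 for _, ok in schools if ok) / len(schools)
share_of_students = sum(n for n, ok in schools if ok) / sum(n for n, _ in schools)

print(f"{share_of_schools:.0%} of schools on track")    # 67%
print(f"{share_of_students:.0%} of affected students")  # 20%
```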

Similarly, MDE’s frequent suggestion that the state’s Regional Centers of Excellence are the source of improvement in the lowest performing schools wouldn’t withstand the slightest scrutiny from a serious researcher or observer.  

When a school is at the bottom of a distribution and you put it on a list and also give it extra resources, it will improve at least a bit. To my knowledge, the department has never reported how much the schools supported by the Regional Centers of Excellence have improved (was the improvement statistically significant?), or whether they improved more than a comparison group of statistically similar schools.
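The first point, regression to the mean, is easy to demonstrate. In the simulation below (all numbers invented), schools are selected for being at the bottom of a noisy distribution in one year, and they “improve” the next year with no intervention at all:

```python
import random

# Each school's observed score is a stable quality level plus year-to-year noise.
random.seed(1)
quality = [random.gauss(50, 10) for _ in range(1000)]
year1 = [q + random.gauss(0, 5) for q in quality]
year2 = [q + random.gauss(0, 5) for q in quality]

# Select the 50 lowest-scoring schools in year 1, then measure their "gain."
bottom = sorted(range(1000), key=lambda i: year1[i])[:50]
gain = sum(year2[i] - year1[i] for i in bottom) / len(bottom)
print(f"average gain with no intervention: {gain:+.1f} points")
```

Any evaluation of the Regional Centers of Excellence has to clear that bar before claiming credit for improvement.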

8. There are important technical problems with the MMR, including the fact that the system’s flaws initially produced wide year-to-year variability in the placement of schools in its various categories. That flaw in the way schools’ scores are calculated has since been addressed, and year-to-year stability has increased, but other flaws remain.

Among them is the fact that math and reading scores are nonsensically combined and averaged in the calculation of each school’s growth score. Schools almost always have significantly different levels of achievement in reading and math, and addressing one requires very different strategies than addressing the other.

In addition, the achievement targets that the system sets for each subgroup under the MMR are all or nothing. A school gets credit for a subgroup’s achievement only if the subgroup hits its target. If it misses the target, the school gets no credit, regardless of how well those students actually performed. If, for example, 45 percent of the African-American students in a school are proficient in math but the target is just a bit higher than that, the school gets no credit for any of the proficiency those students achieved.
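A short sketch shows the cliff this creates. The 45 percent figure comes from the example above; the target value is assumed for illustration, as is the proportional-credit rule shown for contrast:

```python
# All-or-nothing credit: a subgroup earns credit only by hitting its target.
def credit_all_or_nothing(rate, target):
    return 1.0 if rate >= target else 0.0

# A proportional rule, by contrast, would recognize partial progress.
def credit_proportional(rate, target):
    return min(rate / target, 1.0)

rate, target = 0.45, 0.47  # 45% proficient; target assumed just above it

print(credit_all_or_nothing(rate, target))         # 0.0 -- no credit at all
print(f"{credit_proportional(rate, target):.2f}")  # 0.96 -- nearly full credit
```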
