A right way and a wrong way to link teachers and student test scores?

A controversy is brewing in Los Angeles over whether a newspaper should publish teachers’ names along with an analysis of how well they do in raising their students’ standardized test scores.

The debate has generated heated assertions that transparency should prevail at all costs, or, on the other side, that it’s unfair to label individual teachers using possibly flawed statistics. But the bigger questions – such as the way to responsibly use these kinds of data – are being lost, say some analysts. They worry that anger over the forthcoming Los Angeles Times article will cause a backlash against so-called “value added” analysis of teacher performance – which is the method the Times uses.

“This [episode with the L.A. Times] is where the advocates for value-added are getting a bit ahead of themselves,” says Douglas Harris, an education professor at the University of Wisconsin in Madison. “Teachers are already feeling under the gun on this kind of thing. They’re willing to go in this direction if it’s done right, but this is an example where [the paper is] not being careful, and it could easily backfire and undermine the more productive uses.”

“Value-added data” is the latest trend in teacher accountability: the idea that a student’s gains from the previous year’s test – as opposed to his or her overall performance – can be measured and tied to the latest teacher. The hope is that the process weeds out a lot of external factors that can influence student achievement on the tests. While not a brand new idea – it was initially developed in Tennessee and Dallas, which have data going back many years – it’s only beginning to be used widely in districts and states, and many are just now getting the kind of data that makes such analyses possible.
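At its core, the idea is arithmetic: subtract last year's score from this year's and attribute the difference to the current teacher. A minimal sketch of that gain-score logic (illustrative only — real value-added models, including the one the Times used, apply regression adjustments for student background and measurement error, and all names and scores below are made up):

```python
# Hypothetical (teacher, last_year_score, this_year_score) records.
scores = [
    ("Ms. A", 410, 450),
    ("Ms. A", 380, 400),
    ("Mr. B", 500, 505),
    ("Mr. B", 470, 460),
]

# Collect each teacher's students' year-over-year gains.
gains = {}
for teacher, prev, curr in scores:
    gains.setdefault(teacher, []).append(curr - prev)

# A teacher's raw "value-added" estimate is the average gain.
for teacher, g in sorted(gains.items()):
    avg = sum(g) / len(g)
    print(f"{teacher}: average gain {avg:+.1f} points")
```

The point of using gains rather than raw scores is visible here: Mr. B's students score higher in absolute terms, but Ms. A's students improved more.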

The federal government has pushed for such systems, and it has urged states to link teacher evaluations and student performance.

But such evaluations remain controversial, particularly among teachers unions.

“There are too many variables [in the testing process],” says A.J. Duffy, president of United Teachers Los Angeles. He is particularly critical of the Times database – and is calling for a boycott of the paper. But he says he also opposes using value-added data in evaluations at all, although he acknowledges it could be a useful tool to give teachers feedback. “I believe in a system that emphasizes the whole student,” not just standardized tests, he says.

Proponents of value-added say that’s a valid criticism, agreeing that no one should expect that student gains on a standardized test could capture the creativity or broader enrichment that goes on in many teachers’ classrooms. That’s why it should be only one tool among several used in teacher evaluations, they say.

The District of Columbia, for example, which earlier this summer attracted controversy for its decision to fire teachers based in part on value-added data, uses that data for 50 percent of the evaluation, relying on other measures such as classroom observation for the rest.
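A weighting scheme like the one described for the District of Columbia is a simple weighted average. In this sketch, only the 50 percent value-added weight comes from the article; the other component names, their weights, and the 0-100 scale are hypothetical:

```python
# Only the 50% value-added weight is from the article; the rest is assumed.
weights = {"value_added": 0.50, "classroom_observation": 0.35, "other_measures": 0.15}

# Hypothetical component scores for one teacher, on a 0-100 scale.
component_scores = {"value_added": 62, "classroom_observation": 88, "other_measures": 75}

composite = sum(weights[k] * component_scores[k] for k in weights)
print(f"Composite evaluation score: {composite:.1f}")
```

Capping value-added at half the total means no teacher's rating rests on test-score gains alone, which is the proponents' stated intent.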

“No one is suggesting using it as a single measure of performance,” says Paige Kowalski, a senior associate at the Data Quality Campaign, which works with states to build high-quality data systems. Ms. Kowalski worries, though, that parents and others will interpret a database like the Times’ as doing just that.

Barnett Berry, president of the Center for Teaching Quality, is even more critical of the planned database, calling it “the equivalent of a newspaper indiscriminately listing the names of doctors, in rank, based on mortality rates, irrespective of the type of medicine they practice or the context in which they practice.”

Value-added data can be useful, he and others say, but it’s important to acknowledge its limitations. It doesn’t take into account, for instance, chronic student absenteeism and learning gains due to summer school, after-school programs, or supplemental teachers, such as reading specialists. In many cases – particularly in high-poverty schools with high student mobility – a teacher may be graded on the performance of students she never taught, if they moved classes or schools after she was “linked” to them in the fall.

Almost everyone acknowledges being shocked by this first decision to make such data so public, and to link it to individual teachers – and a bit curious as to what the result will be.

“The big question is, is this a game changer?” says Kati Haycock, president of the Education Trust, which advocates educational equity and urges better teachers for disadvantaged kids. “Or is the approach so riddled with flaws that it will bring everything down? Nobody knows which will happen.”


Comments (1)

  1. Submitted by Larry Copes on 08/19/2010 - 08:52 am.

    Before-and-after research designs such as “value added” generally provide more reliable statistics than single scores. But before it’s reasonable to give teachers public credit or blame for changes in the scores, other possible influences must be accounted for. The article mentions several.

    One of the most significant, it seems to me, is the possibility that mobile students may be linked to teachers they’ve rarely met. Perhaps that difficulty could be overcome if the students studied were limited to those who were with the teacher for the full year. In some schools, though, that number would be too small to provide significant information; I once volunteered in a school in which all but two of the (as I recall) 20 students who began the year with a teacher moved out of the school attendance area by the end of the year.

    The other difficulty I see as most important would be more difficult to surmount in a study: Standardized tests don’t measure much of the learning and growth that might take place in a classroom.
    When Einstein said that education is “what remains after one has forgotten everything he learned in school,” I think by “learned in school” he meant what’s measured on standardized tests. Until we can measure the real educational value added or subtracted by a teacher, and be sure it was due to the teacher and not other circumstances, we can’t really say how good a job the teacher is doing.
