New York City to Release Individual Teacher Effectiveness Ratings: A Discussion of “Value-Added”

The New York City Department of Education is telling reporters that on Friday, it will release the value-added ratings of 12,000 New York City teachers who teach tested subjects and grades: reading and math in grades 3-8.

Value-added is a complex mathematical technique, favored by economists, that attempts to measure a teacher's effectiveness by looking at the change in his or her students' test scores from one year to the next, while controlling (to the extent possible) for out-of-school factors like poverty and race.
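In its simplest textbook form, a value-added model regresses each student's current score on his or her prior score and background characteristics, and treats what is left over, averaged across a teacher's students, as that teacher's effect. A minimal sketch follows; the notation is illustrative only, and the DOE's actual model is considerably more elaborate:

```latex
% y_{i,t}      : student i's test score in year t
% X_i          : observed background characteristics (poverty, race, etc.)
% \theta_{j(i)}: the estimated effect of student i's teacher j -- the "value added"
\[
  y_{i,t} = \beta\, y_{i,t-1} + \gamma^{\top} X_i + \theta_{j(i)} + \varepsilon_{i,t}
\]
```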

When the Los Angeles Times published an online searchable database of teacher value-added rankings in August, it ignited a national debate about whether the publication of such sensitive, experimental data is appropriate. The L.A. Times used teachers' real names; it is unclear whether New York City plans to release the data with or without teachers' names attached. The United Federation of Teachers is negotiating with the DOE on this point, and suing to prevent any release of the data.

There are a few things we know about value-added. First, even its proponents admit that it is a volatile measure. About 20 percent of teachers who have a very high value-added rating this year will be in the bottom 40 percent next year, and vice versa. For this reason and many others, value-added measurements are most useful when they are averaged over several years.
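To see why averaging matters, here is a minimal simulation sketch, with made-up parameters rather than anything estimated from NYC data: each teacher gets a stable "true" effect, each year's rating adds noise, and a multi-year average tracks true quality far better than any single year does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up parameters for illustration only -- not estimates from NYC data.
n_teachers = 10_000
true_effect = rng.normal(0.0, 1.0, n_teachers)   # each teacher's stable quality
noise_sd = 1.5                                   # error in any single year's rating

def one_year_rating():
    """A single year's value-added estimate: truth plus sampling noise."""
    return true_effect + rng.normal(0.0, noise_sd, n_teachers)

year1, year2 = one_year_rating(), one_year_rating()

# How many teachers rated in the top 20 percent in year 1 fall into the
# bottom 40 percent in year 2?
top = year1 >= np.quantile(year1, 0.80)
flip_rate = (year2[top] <= np.quantile(year2, 0.40)).mean()
print(f"Top-20% teachers who land in the bottom 40% next year: {flip_rate:.0%}")

# A four-year average is a much better guide to true quality.
avg4 = np.mean([one_year_rating() for _ in range(4)], axis=0)
print("corr(single year, true effect): %.2f" % np.corrcoef(year1, true_effect)[0, 1])
print("corr(4-year avg,  true effect): %.2f" % np.corrcoef(avg4, true_effect)[0, 1])
```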

Unfortunately, while some of the 12,000 New York City teachers have up to four years of data available, many are new to the system or new to their current grade and subject assignment, meaning their ratings will rest on far less data than researchers agree is ideal.

Second, value-added measurements are based on flawed standardized tests. This is a particular problem in New York, where last spring the state concluded that its tests had become far too easy and told many schools that more of their students were below grade level than had previously been assumed.

Across the country, most states do not administer tests that are vertically aligned. Such tests are designed to be administered together, year after year, in order to chart student growth. They therefore test students on this year's skills, but also a bit on last year's and next year's, to get the best possible snapshot of how much a student knows at any given point in time. Students' scores over several years on vertically aligned tests will tell us more about them and their teachers than their scores on standardized tests that are not vertically aligned. 

On the upside, the New York City value-added system is quite sophisticated: it strives for fairness by comparing teachers only to other teachers who have a comparable number of years on the job and who work with demographically similar students.
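Here is a hypothetical sketch of what that kind of peer-group comparison can look like. The categories and figures are invented for illustration, not taken from the DOE's methodology: each teacher is ranked only within the cell of teachers who share an experience band and a student demographic profile.

```python
import pandas as pd

# Invented example data -- not actual DOE categories or figures.
teachers = pd.DataFrame({
    "teacher":      ["A", "B", "C", "D", "E", "F"],
    "experience":   ["0-3 yrs"] * 3 + ["4+ yrs"] * 3,
    "demographics": ["high-poverty"] * 3 + ["low-poverty"] * 3,
    "value_added":  [0.10, -0.05, 0.20, 0.00, 0.15, -0.10],
})

# Percentile rank within each (experience, demographics) peer group, so a
# novice in a high-poverty school is never measured against a veteran in
# a low-poverty one.
teachers["peer_percentile"] = (
    teachers.groupby(["experience", "demographics"])["value_added"]
            .rank(pct=True)
)
print(teachers)
```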

At this link, courtesy of GothamSchools, you can see a sample teacher value-added report. To learn more about value-added, I suggest the following two papers:

Cautiously in favor: Douglas N. Harris, "Would Accountability Based on Teacher Value-Added Be Smart Policy?"

Very much against: The Economic Policy Institute, "Problems With the Use of Student Test Scores to Evaluate Teachers"
