What Jack Shafer Gets Wrong About the L.A. Times Teacher Database Controversy

In an upcoming Nation magazine feature, I'll have more to say about the Los Angeles Times' decision to publish a searchable database of "value-added" teacher ratings. But I do want to respond here to Jack Shafer's Slate piece on the dust-up, a piece dripping with malice toward teachers' unions and anyone else who has questioned the paper's choice to publish this data, despite the fact (which Shafer ignores) that many sensible education reformers, including those critical of unions, are questioning exactly that.

To make a long story (relatively) short, the L.A. Times hired a RAND Corporation researcher, Richard Buddin, to track elementary school teachers' effectiveness at improving their students' performance on state standardized tests in math and reading. Buddin used value-added measurements, which work like this: If a student performed 10 points below the mean reading score in 3rd grade, but 10 points above the mean reading score in 4th grade, that 20-point improvement is attributed solely to the work of the 4th grade teacher, who is then rated "highly effective" and, in some school districts, may now be eligible for a salary bonus.
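For readers who want the arithmetic spelled out, here is a minimal sketch of that gain-score logic in Python, with invented numbers. (Real value-added models, including Buddin's, are regression-based and adjust for prior achievement and other factors, but the core move of crediting the year-over-year change to the teacher is the same.)

```python
# Toy illustration of the gain-score logic described above. All numbers
# are invented; actual value-added models are regression-based and
# adjust for student background, not a simple difference of means.

def value_added_gain(score_prev, mean_prev, score_now, mean_now):
    """Change in a student's position relative to the grade-level mean."""
    return (score_now - mean_now) - (score_prev - mean_prev)

# A 3rd grader who scored 10 points below the grade-level mean...
third_grade_score, third_grade_mean = 290.0, 300.0
# ...and 10 points above the mean in 4th grade.
fourth_grade_score, fourth_grade_mean = 320.0, 310.0

gain = value_added_gain(third_grade_score, third_grade_mean,
                        fourth_grade_score, fourth_grade_mean)
print(gain)  # 20.0 -- the improvement credited to the 4th grade teacher
```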

Despite Shafer's proclamation that those who oppose the L.A. Times database are "enemies of open inquiry, vigorous debate, critical thinking, and holding authority accountable," education experts across the ideological spectrum have expressed concerns. For starters, the idea that a student's improvement or backslide on a test can be attributed solely to their classroom teacher is highly problematic. Conflict at home or switching schools partway through the year can negatively affect a student's performance, while a dedicated after-school tutor can mean the difference between a student failing and passing a test, regardless of the classroom teacher's actions.

At The Quick and the Ed, the blog of the think tank Education Sector (which strongly supports teacher merit pay tied to student performance), Rob Manwaring argues that value-added measurements should be used to assess schools, not individual teachers. He points out that standardized test score data is available for only the one-third of teachers who teach math and reading in tested grades, and that state standardized tests are not known to be particularly stable measures of student achievement.

"I think that there is a middle ground that can raise the focus on student progress and teacher quality without publicly embarrassing specific teachers," Manwaring writes, adding, "Releasing a school summary of value-added will support the collaborative environment that teachers want. A school’s staff will want to have all of their teachers producing high growth, and will work to support each other to make that happen."

As the New York Times reports, last year the 13 researchers who make up the Board on Testing and Assessment of the National Academies wrote to the Department of Education that the Race to the Top grant competition placed “too much emphasis on measures of growth in student achievement that have not yet been adequately studied for the purposes of evaluating teachers and principals. … At present, the best use of [value-added] techniques is in closely studied pilot projects.”

Even William Sanders, the statistician known as the father of value-added measurement in education, told NPR's "Morning Edition" that parents might come to the wrong conclusions about teachers rated average in the L.A. Times database. "Can you distinguish within the middle? No you can't," Sanders said, "not even with the most distinguished value-added process that you can bring to the problem."
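To see Sanders's point in action, here is a toy simulation (my own illustration, with made-up noise levels, not his analysis): when teacher effects are estimated from a single class of about 25 students on a noisy test, teachers adjacent in the middle of the ranking have confidence intervals that overlap almost entirely.

```python
# Rough simulation of why mid-ranked teachers are statistically
# indistinguishable. Assumes (hypothetically) 25 students per class,
# small true teacher differences, and much larger student-level noise.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 100, 25
true_effect = rng.normal(0.0, 2.0, n_teachers)   # true teacher effects, in test points
noise_sd = 15.0                                  # student-level score noise

# Each teacher's estimated effect is the mean gain of one class.
scores = true_effect[:, None] + rng.normal(0.0, noise_sd, (n_teachers, n_students))
estimates = scores.mean(axis=1)
stderr = noise_sd / np.sqrt(n_students)          # about 3 points

# Compare 95% confidence intervals for two adjacent mid-ranked teachers.
order = np.argsort(estimates)
for t in (order[n_teachers // 2], order[n_teachers // 2 + 1]):
    lo, hi = estimates[t] - 1.96 * stderr, estimates[t] + 1.96 * stderr
    print(f"teacher {t}: estimate {estimates[t]:+.1f}, 95% CI ({lo:+.1f}, {hi:+.1f})")
# The two intervals overlap almost completely, so their relative ranking
# tells a parent essentially nothing.
```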

This is a far more complex story than Shafer makes it out to be. Believing in data-driven teaching and merit pay for good teachers does not mean believing that value-added, test-based measurements of teachers should be posted on the Internet.

Update: The indispensable cognitive scientist and Washington Post online columnist Dan Willingham has written perhaps the best explanation to date of why value-added assessments of teachers can be misleading, and why the social science community is so cautious about this measure.
