Using ratings is an iterative process

Today's New York Times carries an On Education column by Michael Winerip titled "Evaluating New York Teachers, Perhaps the Numbers Do Lie." In it, he describes a teacher, one whom past students and her principal call highly competent, whose tenure request is likely to be delayed, and possibly denied, because she ranks in a low percentile on one of three measures (despite quite high scores on the other two).

Since it began, New York's rating system for schools and teachers has weighted year-to-year increases in test scores heavily, and rightly so. The problem comes when a school or teacher was already successful in previous years. Once, say, 85% of students exceed state standards, there is very little room left for improvement. It makes no sense to penalize a school or teacher for minimal improvement from a high baseline, yet that appears to be what is happening in the case Winerip describes.

Experienced users of outcome measures know that putting measures in place is not the end of the job. Every couple of years you need to revise them to account for the performance you've been measuring: new realities, new baselines. For a school or teacher whose students are already 85% at expected levels, you don't look for a huge increase in performance. What you want to see is a small increase -- something that addresses the other 15% -- along with some measure of maintenance. Otherwise you get schools downgraded from A to F in the course of a year, or good teachers denied tenure, and that result risks undermining the credibility of the rating system itself. New York City's school raters should consider re-examining the ratings process for high-scoring schools and teachers.
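The ceiling effect at issue can be seen with a little arithmetic. Here is a minimal sketch in Python; the function names and all the numbers are hypothetical illustrations, not the state's actual rating formula. A raw year-over-year gain makes a teacher starting near the ceiling look far worse than one starting mid-range, while a gain expressed as a share of the remaining headroom treats the two more comparably.

```python
# Hypothetical illustration of the ceiling effect in year-over-year gain scores.
# All numbers are invented for this sketch; they are not NYC data, and these
# functions are not the state's actual formula.

def raw_gain(before, after):
    """Simple year-over-year change in percent proficient."""
    return after - before

def headroom_gain(before, after):
    """Gain expressed as a share of the remaining room for improvement."""
    return (after - before) / (100 - before)

# Teacher A starts near the ceiling; Teacher B starts near the middle.
a_before, a_after = 85, 87   # small raw gain, little headroom left
b_before, b_after = 50, 60   # large raw gain, lots of headroom

print(raw_gain(a_before, a_after))                  # 2
print(raw_gain(b_before, b_after))                  # 10
print(round(headroom_gain(a_before, a_after), 2))   # 0.13
print(round(headroom_gain(b_before, b_after), 2))   # 0.2
```

On the raw measure, Teacher B's gain is five times Teacher A's; on the headroom measure they are in the same neighborhood. A rating system that ranks teachers by raw gain will push high-baseline teachers toward the bottom percentiles almost by construction.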
