
Teacher-Evaluation Policies Have Flopped. Where Did They Go Wrong?


In a National Bureau of Economic Research (NBER) white paper from March on “Taking Teacher Evaluation to Scale,” five researchers offer a bottom line on the teacher-evaluation push that loomed so large in the Obama era. They conclude, with high statistical confidence, that the effort had no meaningful impact on student outcomes, regardless of program design, student characteristics, or local context.

For those who recall Race to the Top, federal dollars and directives, the Gates Foundation’s Measures of Effective Teaching push, grandiose state plans, the L.A. Times’ massive name-by-name look at teacher value-added scores, and the intense teacher-evaluation fights of the late aughts and early 2010s, the whole thing is a cautionary tale. Of course, none of this should be a surprise by now. After all, Brown University’s Matt Kraft (one of the co-authors of the new paper) has previously shown that nothing of import actually changed as a result of new teacher-evaluation laws. And RAND’s extensive evaluation of the Gates Foundation’s half-billion-dollar effort on teacher evaluation registered a similarly dismal verdict.

In the new NBER paper, Josh Bleiberg and his colleagues offer some thoughts on the familiar factors that help explain what happened, including political opposition and the U.S.’s decentralized system of public education. None of that should be at all surprising. Indeed, these challenges, and the broader problem of taking reform to scale, are old ones (see, for instance, Dick Elmore’s classic 1996 article).

This well-worn frustration is responsible for a recurring theme of this blog over the past 13 years: the conviction that it’s crucial to challenge the heedless enthusiasm, moral certitude, and blind confidence that loom so large in the DNA of school improvement.

As I observed several years ago, in Letters to a Young Education Reformer, “Policy can make people do things but it can’t make them do them well. Policy is a blunt tool that works best when making people do things is enough.” Education policies are most likely to deliver the hoped-for results when dealing with “musts” and “must nots,” as with things like compulsory attendance, required annual assessments, class-size limits, and graduation requirements.

Unfortunately, as I noted in Letters, “Policy is far less effective when it comes to complex endeavors where how things are done matters more than whether they’re done. This is because policy can’t make schools or systems adopt reforms wisely or well.” That’s why advocates promoting social-emotional learning requirements, “restorative” disciplinary policies, career and technical education directives, education savings accounts—or teacher-evaluation systems—need to be prepared for teeth-rattling bumps.

I want to be clear: The bumps (usually) aren’t due to ill intent on anybody’s part but to a series of banal factors. Educators in a given school or system may not be that invested in the effort. They may not know how to do it. Any training they receive may be slapdash, mediocre, or insufficient. Students or families in some locales may not like the measures. And, as the NBER paper authors note, proposals will encounter opposition (shocker!) or may flounder amid the byways of our decentralized system.

When improvement efforts don’t work out, those who pushed the change have the unlovely habit of acting as if no one could have anticipated the challenges that bedevil them—sounding a lot like a kid who leaves his new bike outside and unlocked and then gets furious when it’s stolen. Frustrated would-be reformers proceed to blame their frustrations on everyone else: parents, politicians, textbook publishers, educators, bike thieves. You name it.

There’s a tendency to insist that their idea was swell and that any issues are just “implementation problems.” Calling something an “implementation problem” is how those who dreamed up an improvement scheme let themselves off the hook. It’s a fancy way to avoid acknowledging their failure to anticipate predictable problems.

The upshot is that they didn’t realize how their idea would work in practice when adopted by lots of real people in lots of real schools ... and it turned out worse than they’d hoped.

I’ve said it many times before and I’ll say it again: There’s no such thing as an “implementation problem.” What matters in schooling is what actually happens to 50 million kids in 100,000 schools. That’s all implementation.

Responsible advocates and change agents prepare accordingly. They know that the measure of their idea is not how promising it seems in theory but how it works in practice. That’s a test that would-be reformers have too often failed. Going forward, whether we’re talking about SEL or education savings accounts, we need to do better. On that count, the teacher-evaluation boomlet has valuable lessons to teach.
