
The pitfalls of evaluation

Who knew? The most e-mailed article on the Stanford Social Innovation Review website is dated 2006 and titled "Drowning in Data." Whether you're a funder or a service provider, it's still a useful article to read five years after publication. Data collection, analysis, and reporting are hard, and the article outlines several of the reasons. One is that terminology is not yet standardized - my outcomes may look like implementation to you. Another is that organizations are often over-ambitious, wanting to know about outcomes that can occur only several years down the road, without providing the funding needed to develop that kind of information.

The main focus of the article is the disparate data requests generated by funders - often, each funder requires its own report form even when the data are similar. I've had to gather and report this kind of data, and it can be a problem, not least because the process may not generate data that's useful for managing a program.

The article distinguishes what it calls "summative" evaluations (did the intervention "work"?) from "formative" evaluations (does the evaluation help the organization "improve"?). You can see where I'm going - it's not clear, from this context, what it means to say that an intervention worked, or that an organization improved. One thing I always tell clients is that the process of developing measures is important, and that you're going to be doing it over and over again. (I know, that's really two things.)

I don't think that there's an argument here for not trying to evaluate. But there is very good reason to be thoughtful about doing so.

