Getting evaluation right: a five point plan

October 25, 2012

By Duncan Green

Final (for now) evaluationtastic installment on Oxfam’s attempts to do public, warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie).

Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for rigor in their evidence collection, and worse, they hardly ever publish negative evaluations. The internal wrangling and the pressure not to publish must have been intense:

  • ‘What will our donors say? How will we justify poor results to our funders and contributors?’
  • ‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
  • ‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
  • ‘Our field managers will get upset, angry and discouraged when they read these.’
  • ‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
  • ‘There are so many nuances on the ground. Detractors will misuse these scores and ignore those ground realities.’
The zeitgeist may indeed be transparency, but few organizations are actually doing it. So while Oxfam’s results are interesting, it is the transparent process that most deserves the applause. But as I read these documents, it was déjà vu. In the initiatives that used quasi-experimental methods, I was struck by Oxfam’s acknowledgement that they didn’t know the ‘why’ of some of the results. For the ones that used qualitative methods (the humanitarian portfolio, citizen voice and policy influencing), I kept asking myself: by how much did they do better? It seemed like a zero-sum game: one method meant the absence of the other. This was one source of familiar dissatisfaction.

As they say, once a ship has sunk, all the mice know how it could have been saved. So here’s the mouse in me. What can an organization do to answer the questions I (and it) have, and not wring its (collective) hands regretfully later? Here’s my five-point list of what all NGOs should think about before setting up an M&E system (or even after setting it up). It’s operational (I have put one into place), it’s not easy, but it has the potential to quieten most detractors (and people like me).

Point 1: Have a good theory of change/causal pathway/impact pathway, or whatever you want to call it. The name doesn’t matter (it’s a rose!). Theories of change are good for understanding the program, useful as schematics, and great communication tools too. An evidence-based theory of change can also help you decide where you need the most investigation, where a process evaluation is sufficient, where a counterfactual analysis of outcomes is required, and where simple tracking of indicators is enough.
  • Do: Set one up and ensure that everyone who needs to know the theory of change knows it, along with its risks and assumptions.

Point 2: Put in place monitoring and information systems. Track process, output and some outcome indicators across program areas. There should be a list of performance monitoring indicators that speak to the different sectors (four in the case of Oxfam).
  • Do: Put together a set of standard operating procedures for collecting information on process indicators. These should specify the frequency of collection, identify data sources (clinics, households, schools), specify respondents (teachers, nurses, women, children…) and clearly set out the methods for calculating indicators (even for simple indicators such as enrollment rates).
  • Do: Write and revise (and revise) a standard operating procedure manual till you have it pat.
  • Do: Have a management information system that includes algorithms for quality checks, and have a full-time person doing data review.
  • Do: Train your data collectors and your database managers.

Point 3: Think about measuring attributable change. Can you, for instance:
  • Assign the intervention randomly from the beginning without losing sight of your final goal?
  • Identify counterfactual sites and start collecting data there? Pros: great reporting to donors and rigorous information. Cons: more expensive than monitoring data alone, and it requires a high level of scrutiny in comparison sites, especially if you use ex post techniques.
  • Use other methods to establish causality? (Which ones?)
For all methods:
  • Do: Use protocols and register them (3ie will soon start to register them).
  • Do: Use rigorous surveys in implementation sites and in control sites (and get someone who knows how to do them; don’t do them yourself).
  • Do: Have standard operating procedures for site-level data entry and cleaning.
  • Do: Use anthropometric measures and bio-physical indicators to the extent possible.
  • Do: Use and write a field operations manual, write standard operating procedure manuals for data managers that contain range and logic checks for the data, and encourage double data entry.
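To make Points 2 and 3 slightly more concrete, here is a minimal sketch of what ‘range and logic checks’ plus a simple counterfactual comparison might look like in practice. This is not Oxfam’s or 3ie’s actual tooling: the record fields, thresholds and difference-in-differences shortcut are all illustrative assumptions.

```python
# Illustrative sketch only: basic range/logic checks on monitoring records,
# then a naive difference-in-differences comparison of an outcome between
# intervention and comparison sites. Field names are invented for the example.

from statistics import mean

def check_record(record):
    """Return a list of data-quality problems for one household record."""
    problems = []
    # Range check: an enrollment rate must lie between 0 and 1.
    if not (0.0 <= record["enrollment_rate"] <= 1.0):
        problems.append("enrollment_rate out of range")
    # Logic check: children enrolled cannot exceed children of school age.
    if record["children_enrolled"] > record["children_school_age"]:
        problems.append("more children enrolled than of school age")
    return problems

def diff_in_diff(data):
    """Naive difference-in-differences over records that carry a 'group'
    ('treatment'/'comparison'), a 'period' ('baseline'/'endline') and an
    'outcome' value."""
    def avg(group, period):
        values = [r["outcome"] for r in data
                  if r["group"] == group and r["period"] == period]
        return mean(values)
    treatment_change = avg("treatment", "endline") - avg("treatment", "baseline")
    comparison_change = avg("comparison", "endline") - avg("comparison", "baseline")
    return treatment_change - comparison_change
```

The point of the sketch is only that the checks and the counterfactual comparison are written down and repeatable. In a real study the estimate would come with standard errors, and the comparison sites would be chosen with far more care (randomization, matching or another identification strategy).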
Point 4: Undertake cost and cost-effectiveness studies. What are the priced and non-priced inputs in the project? Think about whether you want to use these projects in other places, or scale them up. (And no, it is not going to be calculated from your budget statements alone.)
  • Do: Put together a standardized template with cost categories and measurement methods. (E.g. how will you measure the cost of using good seeds for the farmer? It’s not just the cost of procurement or transportation, but also the cost of additional manure and the cost of storage for seed and post-harvest produce.)
  • Do: Ensure that everyone in the delivery chain understands and uses this template in the same way. (Train, train, train… train!)

Point 5: Focus on implementation research. This means systematically documenting implementation factors and putting together a protocol of questions relevant to informing all stages of the evaluation. This is where participatory methods, focus groups, observational scrutiny and process research should come in; they should also feed back into your theory of change.
  • Do: Set out a protocol at the beginning that lays out i) the questions you want answered; ii) what you’ll ask in your interviews to answer them; and iii) a plan for analyzing your qualitative information.

There are many more things one can do, but I believe that if you have these covered, you are on your way. A few more things to bridge that elusive evidence-policy gap:
  • Evidence is required for policy making, but most policy makers are looking to affirm (not inform) their opinions, as a recent article in Time says (see here for an excellent QJPS article also cited there).
  • Be circumspect about what evidence you advocate for. Not everything is worth fighting for (and fighting for everything often leads to evidence fatigue). When I have taught policy analysis, I have often used a rule of thumb long known to academic political scientists (sketched in code after this list): if a policy change leads to less than a 10% change in the outcome, it’s a flashing red (stop and think before translating that evidence into policy); if it’s a 10-25% change, it’s a lime (go for it, but think about transition costs); if it’s more than a 25% change, it’s a deep, loud green: adopt the policy, because the costs of transition will be surpassed by the benefits of the policy change.
  • Change the institutional incentives: Oxfam is on its way, but will program managers on the ground really adopt this culture change or will it continue to be top down? (See here for an excellent blog by Mead Over and Martin Ravallion.)
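As a small illustration of the rule of thumb in the second bullet above, here is a hedged sketch. The 10% and 25% thresholds come from the post; the function name and the wording of the advice strings are my own and purely illustrative.

```python
# Illustrative only: the classroom rule of thumb for whether evidence of a
# given effect size is worth translating into policy. Thresholds follow the
# post; everything else is invented for the example.

def policy_traffic_light(percent_change_in_outcome):
    if percent_change_in_outcome < 10:
        return "flashing red: stop and think before translating this evidence into policy"
    elif percent_change_in_outcome <= 25:
        return "lime: go for it, but think about transition costs"
    else:
        return "deep green: adopt the policy; benefits should outweigh transition costs"

print(policy_traffic_light(8))    # flashing red
print(policy_traffic_light(18))   # lime
print(policy_traffic_light(30))   # deep green
```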
