
Are we suffering from obsessive measurement disorder?

August 15, 2019

By Duncan Green

ODI’s Tiina Pasanen argues that more data doesn’t necessarily mean we make better decisions. Often it just means having more data that is never used.

Do any of these situations sound familiar to you?

  1. As an M&E manager, you worry that there’s a crucial aspect of your project that the current logframe doesn’t cover;
  2. As a programme manager, you spend most of your time on donor reporting rather than getting stuff done;
  3. As a donor, you have to make sure the evaluations you commission cover all the DAC evaluation criteria, plus your organisational needs, adding up to 25+ ‘key’ evaluation questions.

If so, you and/or your organisation are probably suffering from obsessive measurement disorder, a term introduced by Andrew Natsios almost a decade ago. It describes the belief that counting everything (in government programmes and beyond) will produce better policy choices and improved management. It’s a disorder that increasingly affects the international development community. I’m guilty too. Just last year I suggested we should start measuring learning. I know…

As a researcher and evaluator with a background in impact evaluation, I’m all for collecting data systematically and investigating the best ways to capture programme outcomes. Results-based management has pushed the development community to focus on outcomes, not just outputs. The focus on rigorous impact evaluations has helped us move on (at least to some extent) from evaluation reports that make sweeping claims of success based on the flimsiest of data. These days we need evidence on what works and what doesn’t, why and how, to make more informed decisions about where to invest our efforts.

But have we gone too far?

It’s like washing your hands. It is very good for us and for those around us, and we obviously shouldn’t stop doing it. But if it takes over our lives and stops us doing what we are supposed to do, it has become obsessive. Similarly, measurement becomes obsessive when it squeezes out implementation.

The consequences are worrying.

Donors may prefer to fund shorter-term or more ‘straightforward’ programmes, where it is easier to demonstrate (quick) linear attribution, over those that can show only a diffuse contribution or ‘movement in the right direction’. Measurement can be confused with judgement. And in practice, ‘implementers can’t deliver programmes effectively if their time is consumed with donor reporting’. It’s that simple.

Nonetheless, we see increasing pressure to count things and demonstrate success. The number of indicators in logframes and value-for-money frameworks just keeps growing, without a clear understanding of how they will help organisational decision-making. Several stories of change have to be produced per programme, even though we know that ‘impact’ rarely aligns with tight programme cycles. And many evaluation terms of reference include a laundry list of questions to be addressed within small budgets and tight schedules (but no long reports, thank you). Micro-measuring what we have done seems to matter more than what we actually do.

But the evidence for collecting all this evidence is very weak. Behavioural science has highlighted for decades how bounded our rationality is and how biased we all are when it comes to decision-making. Insights from evidence-informed policy-making underscore how our (and policy-makers’) decisions are also influenced by values, beliefs, previous experiences, context and, well… politics.

It all comes down to how we actually use the data. While adaptive management, learning and using data to improve programming are much discussed, the main purpose of data collection is often still accountability to donors. Don’t get me wrong – I understand the huge pressures donors are under. National development agencies such as DFID need to convince politicians and the public that taxpayer money is not being wasted (though I seriously doubt that any number of value-for-money indicators or impact evaluations will convince those categorically against aid spending). Similarly, philanthropic foundations may need to please risk-averse board members. It is a vicious cycle.

Could we measure less but increase usefulness?

We need to reflect on how to break this cycle. At present, many proposed solutions come across as tinkering around the edges rather than addressing the core problem. Programmes have focused more on learning, but that has often led to separate monitoring and learning systems working in parallel, creating even more pressure on programmes and teams.

Natsios makes a number of recommendations in his paper, including that we need to acknowledge that short-term quantitative indicators of results are not appropriate for all programmes. And while output data can be used to defend development programmes, it is not what development professionals need in order to make decisions about those programmes. I suggest we could start by:

  1. Recognising when we have gone beyond useful monitoring to the point where it significantly undermines implementation;
  2. Developing strategic, targeted monitoring and evaluation systems that focus on essential data needs. Guidance papers such as Ten reasons not to measure impact – and what to do instead can be helpful in thinking through when something is not needed or feasible;
  3. Building structured opportunities for joint sense-making and learning in programmes, with room to make changes based on this analysis. While doing this, we should draw on insights from behavioural science to mitigate the most common biases in decision-making (e.g. how to avoid groupthink, how people are influenced by their previous experiences, and the effects of polarised political contexts);
  4. Having honest (and yes, difficult) conversations with donors about whether there’s room to scale back excessive monitoring and oversight requirements and focus on what is really needed and useful. Let’s start by deleting ‘nice to have’ indicators and prioritising ‘must haves’ – keeping the few that donors require but otherwise concentrating on those needed to make programme improvements.

I’m still a fan of evidence-informed decision-making, but I think it is time to make a shift towards more targeted and user-focused data collection and analysis. In short, counting (only) what counts.
