Are we suffering from obsessive measurement disorder?

ODI’s Tiina Pasanen argues that more data doesn’t necessarily mean better decisions. Often it just means more data that goes unused

Do any of these situations sound familiar to you?

  1. As an M&E manager, you worry that there’s a crucial aspect of your project that the current logframe doesn’t cover.
  2. As a programme manager, most of your time goes on donor reporting rather than getting stuff done.
  3. As a donor, you have to make sure that the evaluations you commission cover all the DAC evaluation criteria, plus your organisational needs, amounting to 25+ ‘key’ evaluation questions.

If so, you and/or your organisation are probably suffering from obsessive measurement disorder, a term introduced by Andrew Natsios almost a decade ago to describe the mistaken belief that counting everything (in government programmes and beyond) will produce better policy choices and improved management. It’s a disorder that increasingly affects the international development community. I’m guilty too. Just last year I suggested we should start measuring learning. I know…

What he said

As a researcher and evaluator with a background in impact evaluation, I’m all for collecting data systematically and investigating the best ways to capture programme outcomes. Results-based management has pushed the development community to focus on outcomes, not just outputs. The focus on rigorous impact evaluations has helped us to move on (at least to some extent) from evaluation reports that make sweeping statements of success based on the flimsiest of data. These days, we need evidence on what works and what doesn’t, and why and how, to make more informed decisions about where to invest our efforts.

But have we gone too far?

It’s like washing hands: good for us and for those around us, and obviously something we shouldn’t stop doing. But if it takes over our lives and prevents us from doing what we are supposed to do, it has become obsessive. Similarly, measurement becomes obsessive when it squeezes out implementation.

The consequences are worrying.

Donors might prefer to fund more short-term or ‘straightforward’ programmes, where it is easier to demonstrate (quick) linear attribution, rather than more diffuse contribution or ‘moving in the right direction’. Measurement can be confused with judgement. And in practice, ‘implementers can’t deliver programmes effectively if their time is consumed with donor reporting’. It’s that simple.

Image: Chris Lysy

Nonetheless, we see increased pressure to count things and demonstrate success. The number of indicators in logframes and value-for-money frameworks just keeps increasing, without a clear understanding of how these will help organisational decision-making. Several stories of change must be produced per programme, even though we know that ‘impact’ is often not aligned with tight programme cycles. And many evaluation terms of reference include a laundry list of questions that need to be addressed within small budgets and tight schedules (but no long reports, thank you). Micro-measuring what we have done seems to have become more important than what we actually do.

But the evidence for collecting all this evidence is weak. Behavioural science has shown for decades how bounded our rationality is and how biased we all are when it comes to decision-making. Insights from evidence-informed policy-making underscore how our (and policy-makers’) decisions are also influenced by values, beliefs, previous experiences, context and, well… politics.

It all comes down to how we actually use the data. While adaptive management, learning and using data to improve programming are much discussed, data’s main purpose is often still accountability to donors. Don’t get me wrong – I understand the huge pressures donors are under. National development agencies such as DFID need to convince politicians and the public that taxpayer money is not being wasted (though I seriously doubt that any number of value-for-money indicators or impact evaluations will convince those categorically against aid spending). Similarly, philanthropic foundations may need to please risk-averse board members. It is a vicious cycle.

Could we measure less but increase usefulness?

We need to reflect on how to break this cycle. At present, many proposed solutions come across as tinkering around the edges without addressing the core problem. Programmes have focused more on learning, but that has often led to separate monitoring and learning systems working in parallel, creating even more pressure on programmes and teams.

Natsios presents a number of recommendations in his paper, including that we need to acknowledge that short-term quantitative indicators of results are not appropriate for all programmes. And while output data can be used to defend development programmes, it is not what development professionals need in order to make decisions about those programmes. I suggest that we could start by:

  1. Recognising when we have gone beyond useful monitoring and it is significantly undermining implementation;
  2. Developing strategic, targeted monitoring and evaluation systems that focus on essential data needs. Guidance papers such as Ten reasons not to measure impact – and what to do instead can be helpful in thinking through when something is not needed or feasible;
  3. Building structured opportunities for joint sense-making and learning into programmes, with room to make changes based on this analysis. While doing this, we should draw on insights from behavioural science to mitigate the most common biases in decision-making (e.g. how to avoid ‘groupthink’, how people are influenced by their previous experiences, and the effects of polarised political contexts);
  4. Having honest (and yes, difficult) conversations with donors to see whether there’s room to scale back from excessive monitoring and oversight requirements and focus on what is really needed and useful. Let’s start by deleting ‘nice to have’ indicators and prioritise ‘must haves’ – keeping a few that donors need but otherwise concentrating on those needed to make programme improvements.

I’m still a fan of evidence-informed decision-making, but I think it is time to make a shift towards more targeted and user-focused data collection and analysis. In short, counting (only) what counts.



7 Responses to “Are we suffering from obsessive measurement disorder?”
  1. Andy Brock

    Cogently argued. Dan Honig’s book “Navigation by Judgement” makes a related point about the relationship between control of aid programmes and their outcomes. For more messy, adaptive programmes more judgement is needed – that’s often difficult to reflect or capture in the monitoring / measuring.

  2. The real point here is that Results-Based Management has been promoted as something implementing agencies have to do to satisfy donors. In effect it is more results-based reporting, than results-based management. But focusing instead on how it can serve the people doing the work in the field, implementing agencies and their stakeholders, is more productive, and people will use it creatively if we can get past the jargon.

    • Thomas H. Norton

      That is a good point. The process is so bureaucratic that participants are really surprised at how useful it can be, when it is put into plain language.

  3. Craig Burgess

    Reminds me of Dr. Seuss’s ‘The Bee-Watcher’ (remembering that health workers spend 25–30% of their time filling in administrative forms for administrators). Are we all bee watchers, and who is the bee?
    “Oh, the jobs people work at! Out west near Hawtch-Hawtch there’s a Hawtch-Hawtcher bee watcher, his job is to watch. Is to keep both his eyes on the lazy town bee, a bee that is watched will work harder you see. So he watched and he watched, but in spite of his watch that bee didn’t work any harder not mawtch. So then somebody said “Our old bee-watching man just isn’t bee watching as hard as he can, he ought to be watched by another Hawtch-Hawtcher! The thing that we need is a bee-watcher-watcher!”. Well, the bee-watcher-watcher watched the bee-watcher. He didn’t watch well so another Hawtch-Hawtcher had to come in as a watch-watcher-watcher! And now all the Hawtchers who live in Hawtch-Hawtch are watching on watch watcher watchering watch, watch watching the watcher who’s watching that bee. You’re not a Hawtch-Watcher you’re lucky you see!”

  4. Jon Abbink

    Great stuff. But nothing will change, because funders/donors, etc. do not have the flexibility and imagination and real interest to allow it.
    Only massive collective refusal by researchers to the eternal, detailed measuring & counting will provoke something.