Payment by Results hasn’t produced much in the way of results, but aid donors are doing it anyway. Why?

March 23, 2016

     By Duncan Green     

I recently attended (yet another) seminar on the future of aid, where we were all sworn to secrecy to allow everyone (academics, officials etc) to bare their bosoms with confidence. So I can't quote anyone (even unattributed – this was 'Chatham House plus').

Simples. Or is it?

But that’s OK, because I want to talk about Payment by Results, which was the subject for my 10 minutes of fame.

When reading up, I was struck by the contrast between how quickly PbR has spread through the aid world and how little evidence there is that it actually works. In a way, this is unavoidable with a new idea – you make the case for it based on theory, then you implement, then you test and improve/abandon. In this case the theory, ably argued by CGD and others, was that PbR aligns incentives in developing country governments with development outcomes, and encourages innovation, since it does not specify how to, say, reduce maternal mortality, but merely rewards governments when they achieve it.

Centre for Social Justice, October 2012

Those arguments have certainly persuaded a bunch of donors. The UK government website says that this ‘new form of financing that makes payments contingent on the independent verification of results is a cross government reform priority’. DFID called its 2014 PbR strategy ‘sharpening incentives to perform’ and promised to make it ‘a major part of the way DFID works in future’. British Prime Minister David Cameron waxes lyrical on the topic (left).

Which made the prevailing scepticism of much of my reading all the more striking – see, for example, Paul Clist and Stefan Dercon: 12 Principles for PbR in International Development, or BOND, Payment by Results: What it Means for UK NGOs, or 3 studies by NORAD (the Norwegian aid agency).

Clist and Dercon's principles set out a series of situations in which PbR is either unsuitable or likely to backfire. For example, if results cannot be unambiguously measured, lawyers are going to have a field day when a donor tries to refuse payment by arguing they haven't been achieved. They also make the point that PbR makes no sense if the recipient government already wants to achieve a certain goal – in that case you should just give them the money up front and let them get on with it. There's also an interesting sleight of hand in the argument: the kind of incentive argument that might work for individuals is applied to institutions, even though it is not at all obvious how eventual PbR payments to governments will translate into improved performance by individual officials.

NORAD points out that if PbR is to be used when you are trying to persuade a government to do something it doesn't want to do, we already know from the whole aid conditionality experience how unlikely that is to succeed.

But what if it's a department that gets the bonus, not an individual?

BOND finds that PbR contracts with NGOs are plagued by micromanagement and often amount to little more than transferring risk from donor to recipient (no results, no dosh).

Even its originators, CGD, seemed pretty underwhelmed at that earlier discussion on PbR, so why has it got such momentum among donors?

The PbR hype cycle seems to follow a well-established pattern in the aid biz, which I call the 'microfinance syndrome' (policy entrepreneur comes up with whizzy new idea → massive overselling to donors → disillusion when it fails to produce the predicted miraculous results → reduced to a niche product as we learn when the new snake-oil might actually be worth applying). At best, it's a painful, inefficient way to innovate and improve the impact on poor people's lives. Why not try positive deviance or venture-capitalist-style multiple parallel experiments instead?

Then what exactly are these results, and who are we measuring them for? PbR pushes project implementation even further towards 'upwards accountability' – mainly developing country governments collecting and processing results (which can be an expensive business) in order to satisfy aid donors and their political backers and taxpayers. To what extent are those results any use for a) learning and improving or b) increasing accountability where it is really lacking – downwards to poor people and communities? My fear is that this will merely create a parallel system of results alongside the kinds of information practitioners need to learn and improve, diverting effort and money and doing nothing for downwards accountability.

Time horizons are a real problem: if (a big if) you could solve the problem of attribution becoming more attenuated with time, a 30-year PbR contract might indeed encourage innovation – certainly far more than a 3-year one, where there is too little time to experiment, take risks and find a better path to results. In fact, short-term PbR contracts appear to discourage innovation – there isn't time to learn by failing, so you stick with the tried and tested, even if it's not that good. But there is precious little appetite for longer project cycles – political and management timelines seem to be shortening instead.

Trawling through the interwebs for this piece, it looks like a lot of the PbR discussion comes from the health sector – could this be another problematic attempt to import medical thinking into development, like randomised controlled trials?

Anyway, even though the evidence seems pretty thin that introducing PbR achieves improved results (ironic, eh?), donors are jumping into PbR contracts with gusto. Why is that? Conspiracy theorists/political economists, please form an orderly queue.

Update: Excellent comments thread below, with some useful updates on the latest experience with PbR.
