How can we rate aid donors? Two very different methods yield interesting (and contrasting) results

November 21, 2018

By Duncan Green

Two recent assessments of aid donors used radically different approaches – a top-down technical assessment of aid quality, and a bottom-up survey of aid recipients. The differences between their findings are interesting.

The Center for Global Development has just released a new donor index of Quality of Official Development Assistance (QuODA), with a nice blog summary by Ian Mitchell and Caitlin McKee.

‘How do we assess entire countries? One way is to look at indicators associated with effective aid. The OECD donor countries agreed on a number of principles and measures in a series of high-level meetings on aid effectiveness that culminated in the Paris Declaration on Aid Effectiveness (2005), the Accra Agenda for Action (2008) and the Busan Partnership Agreement (2011). Our CGD and Brookings colleagues—led by Nancy Birdsall and Homi Kharas—developed QuODA by calculating indicators based largely on these principles and grouping them into four themes: maximising efficiency, fostering institutions, reducing burdens, and transparency and learning.’

QuODA’s 24 aid effectiveness indicators were then averaged to give scores to the 27 bilateral country donors and 13 multilateral agencies.
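For the technically minded, here is a minimal sketch of what that kind of composite scoring looks like: standardise each indicator across donors, then average. All donor names and indicator values below are invented for illustration – QuODA's real indicator list and methodology live with CGD, not here.

```python
# Minimal sketch of a QuODA-style composite score: standardise each
# indicator across donors, then average. All donor names and values
# here are invented for illustration, not QuODA's actual data.
import statistics

# Hypothetical indicator values per donor (higher = better).
donors = {
    "Donor A": {"efficiency": 0.8, "institutions": 0.6, "burdens": 0.7, "transparency": 0.9},
    "Donor B": {"efficiency": 0.5, "institutions": 0.7, "burdens": 0.4, "transparency": 0.6},
    "Donor C": {"efficiency": 0.9, "institutions": 0.5, "burdens": 0.8, "transparency": 0.7},
}
indicators = ["efficiency", "institutions", "burdens", "transparency"]

def zscores(values):
    """Standardise so no single indicator's scale dominates the average."""
    mean, sd = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / sd if sd else 0.0 for v in values]

standardised = {d: {} for d in donors}
for ind in indicators:
    raw = [donors[d][ind] for d in donors]
    for d, z in zip(donors, zscores(raw)):
        standardised[d][ind] = z

# Overall score = simple average of the standardised indicators.
scores = {d: statistics.mean(vals.values()) for d, vals in standardised.items()}
for d, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{d}: {s:+.2f}")
```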

I’d pick out two findings from this exercise:

Big is (on average) better – a line of best fit across donors suggests aid quality is higher for governments that commit a higher share of their national income (a rough sketch of that fit follows these findings). But there are outliers – New Zealand is small but beautiful; Norway is big and ugly.

The best performers are a mix of bilaterals and multilaterals, although there's a cluster of multilaterals just below the Kiwis at the top.
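To make the line-of-best-fit point concrete, here's a minimal sketch of that check: an ordinary least squares fit of quality score against ODA as a share of GNI. The data points are invented; the real chart is in the CGD blog.

```python
# Minimal sketch of the "big is (on average) better" check: fit a line
# of quality score against ODA as a % of GNI. Data points are invented.

# (ODA as % of GNI, quality score) for hypothetical donors; the last
# point plays the part of a small-but-beautiful outlier.
points = [(0.2, -0.4), (0.3, -0.1), (0.5, 0.1), (0.7, 0.3), (1.0, 0.5), (0.3, 0.6)]

n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n

# Ordinary least squares: slope = cov(x, y) / var(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
         / sum((x - mean_x) ** 2 for x, _ in points))
intercept = mean_y - slope * mean_x

print(f"best fit: quality = {slope:.2f} * (ODA/GNI) + {intercept:.2f}")
```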

But there’s another, completely different, approach – ask people on the receiving end what they think of the donors. AidData have been doing this for years, so I took a look at their most recent ‘Listening to Leaders’ report.

The report is based on a 2017 survey of 3,500 leaders (government, private sector and civil society) working in 22 different areas of development policy in recipient countries.

‘Using responses to AidData’s 2017 Listening to Leaders Survey, we construct two perception-based measures of development partner performance: (1) their agenda-setting influence in shaping how leaders prioritize which problems to solve; and (2) their helpfulness in implementing policy changes (i.e., reforms) in practice. Respondents identified which donors they worked with from a list of 43 multilateral development banks and bilateral aid agencies. They then rated the influence and helpfulness of the institutions they had worked with on a scale of 1 (not at all influential / not at all helpful) to 4 (very influential / very helpful). In this analysis, we only include a development partner if they were rated by at least 30 respondents.’ Sadly, New Zealand (top on QuODA) didn’t make the cut in the AidData analysis.
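As a rough illustration of that scoring rule, here is a minimal Python sketch: average the 1–4 ratings per development partner and drop anyone below the 30-respondent threshold. The partner names, respondent counts and ratings are all invented, not AidData's data.

```python
# Minimal sketch of the AidData-style helpfulness score: mean of the
# 1-4 ratings per partner, keeping only partners with >= 30 raters.
# All names, respondent counts, and ratings below are invented.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical respondent counts per partner.
counts = {"Multilateral A": 80, "Bilateral B": 55, "Bilateral C": 60, "Small Donor D": 12}

# Simulated raw survey rows: (partner, rating on the 1-4 scale).
rows = [(p, random.randint(1, 4)) for p, n in counts.items() for _ in range(n)]

ratings = defaultdict(list)
for partner, rating in rows:
    ratings[partner].append(rating)

MIN_RESPONDENTS = 30  # inclusion threshold quoted in the report

for partner, rs in sorted(ratings.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    if len(rs) < MIN_RESPONDENTS:
        print(f"{partner}: excluded ({len(rs)} respondents < {MIN_RESPONDENTS})")
    else:
        print(f"{partner}: mean helpfulness {sum(rs) / len(rs):.2f} over {len(rs)} respondents")
```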

In this exercise, the multilateral organizations clean up: the US is the top-rated bilateral donor, at number 8 on helpfulness (while the much-criticised European Union comes in at number 5).

I would love to hear views on why this might be. Off the top of my head, a few possible explanations:

  • Multilateral organizations are on average bigger, so they have more presence in the lives of people on the receiving end.
  • The two exercises were measuring different things – aid quality vs support for policy reform.
  • Multilateral organizations may do better on the soft stuff – technical assistance, but also partnership, dialogue, etc.

Thoughts?
