Value for money in UK aid: the good, the bad and the ugly

Cathy Shutt (left, on vintage phone) and Craig Valters unsugar a recent pill on DFID's approach to Value for Money

All aid programmes should be good ‘Value for Money’ – hard to argue with that, right? 8 years ago, DFID put this principle at the heart of its work. Here we reflect on a recent report by the UK aid watchdog, ICAI.

The good

At first glance, the review appears good news. DFID is portrayed as a ‘global champion’, using value for money tools and approaches to increase the returns on its investment while influencing the practice of other donors, implementers, partner governments and NGOs.

ICAI concludes that DFID staff and implementers have a good grasp of the 3E framework (see diagram), which is used across the UK government to describe and assess value for money at various stages in programme cycles. Furthermore, at the behest of a previous ICAI review, DFID added a fourth 'e', equity. This aimed to ward off concerns that penny pinching would lead to the prioritisation of programmes aiming to benefit large numbers of people, but missing the poorest, who are more expensive to reach. At a theoretical level, this is a good start.

In practice, value for money considerations have helped DFID curb waste, fraud and inefficiency. They've also driven down costs. It is sensible and right that this happens. Money wasted could instead be spent more effectively to help people abroad, or indeed at home. The report suggests that DFID has taken this seriously.

The bad

That’s about it for the good news. Within programmes, the value for money focus tends to be on economy and efficiency in delivery, with effectiveness analysis proving highly erratic. The only convincing story of a value for money argument increasing cost effectiveness cited by ICAI relates to a programme in Uganda, where DFID staff drew on global evidence and encouraged implementing partners to use cash transfers rather than food aid. Failing to focus on effectiveness across the board undercuts the promise of value for money: what’s the point of doing things cheaply and quickly without demonstrable evidence that they are having sustainable impacts?

What’s more, the value for money approach of DFID is a bit too Mystic Meg: that is, it assumes DFID can make financial and social predictions when it can’t; DFID often works in complicated contexts, on complex problems. But ICAI found few cases of programme managers or participants monitoring the costs and outcomes of ‘small bets’ with a view to evaluating and learning what worked best and then adapting their approach. Instead they found that what might best be described as DFID’s blueprint planning approach is in tension with its commitment to learning and adaptation, emphasised in the Smart Rules.

Another alarm bell: ICAI commented that DFID staff regularly set overambitious results targets in their business cases – presumably to get them authorised (or, more generously, due to 'optimism bias'). Suppliers then tend to reduce them once the stark reality of implementation kicks in. We worry this gaming of targets will be wrongly associated with adaptive management ('ooh look, now we've banked the donor cheque, we can reduce our targets and call it being adaptive'). Adaptive management involves accepting that we often don't have solutions to problems upfront and putting in place serious learning processes to work them out. It is not about duping the system.

The ugly

According to ICAI, DFID's value for money approach is narrow, focusing on individual projects rather than country or sector spending on complex and cross-cutting issues like climate change. If DFID is serious about tackling the causes of poverty and conflict, as suggested in their aid strategy, then this is a false economy. It's critical to consider the overall positive and negative contributions of DFID (and indeed wider government) in each country or region where they work. In the absence of this, ICAI's messages appear contradictory. DFID is a 'global leader' in value for money on the one hand, yet it resorts to the kind of bean counting that ignores issues of country strategy, complexity or long-term change on the other.

This accountability paradox is about methods – but also politics. Assessing the contribution of the British pound to a change process overseas is not easy, yet there are evaluation methodologies available (see here, here, and here). So, why haven't alternatives been taken up?

As one of us outlined here before, the reality is that over the past 10 years DFID's political leaders became more interested in demonstrating quantifiable results to taxpayers than in longer-term institutional change. The UK legitimately has a mix of development aims: from simple to complex, from short-term to long-term. But DFID's value for money approach reflects the narrower political priority. Hence, we have ended up with misleading methods that suggest DFID and its suppliers are in control of development outcomes, when they're not, and which ignore aid effectiveness principles of local ownership. This raises the perennial question: when it comes to value for money and accountability, whose values count, and accountability to whom?

What’s to be done?

ICAI have sent a strong message that current value for money approaches are inadequate. So, what are the options for DFID and its implementers moving forward?

Can they get away with business as usual? DFID and the UK aid community could play the politics of the value for money game – appearing to be a global leader in using value for money for accountability while continuing to fall short in practice. The immediate prioritisation of the 'global leader' message by DFID's communications department following ICAI's report looks like just such a response.

Or, should we scrap value for money altogether? Pablo Yanguas has recently argued that value for money language will only suit simple initiatives like vaccination programmes. It thus leaves no option but to continue to mislead the public about the risks of supporting local actors to pursue institutional change. According to Yanguas, scrapping the idea of aid as a value for money investment is the way forward. It could force a more honest conversation with the public about the realities of aid, development and social change.

For now, we propose a compromise. Making financial savings in aid is a good thing. As is seeking to understand which interventions, comparatively, can be most effective for the money being spent. It is also unlikely, and undesirable, that pressures to spend taxpayers' money well will disappear. So, the starting point must be that senior DFID leadership take this report seriously. A recent performance in front of MPs suggests they are doing so. The challenge will be for them to reframe how value is understood in the department and by its implementers: that may mean taking possibly unpalatable messages to politicians about the uncertainty of developmental change processes, rethinking evaluative methods, and telling a better story of the UK's role in long-term change.





8 Responses to “Value for money in UK aid: the good, the bad and the ugly”
  1. I object to the claim that revising business case indicators is necessarily “gaming the system”. I have encountered a number of DFID TORs or business cases with little to no theory of change and outcome indicators that are way beyond what can be attributed to an external programme. Giving ostensibly “adaptive” programmes the flexibility to review logframes during inception should be a cause for celebration, not suspicion. Every programme – adaptive or not – should work hard to establish a rigorous and realistic results framework, and the fact is a lot of the political and operational challenges become obvious during the inception phase.

    • Duncan Green

      Fair point Pablo, but how do you distinguish between genuine iteration, and gaming, eg by contractors who are really only interested in the bottom line?

      • Ex ante? You can’t, unless you add new regulations to prevent/punish gaming – which is costly. Ex post? You could do a 3rd-party evaluation of inception phases, before implementation, to determine which contractors are scamming the public and which are truly adaptive. Also costly, but perhaps more fair to bidders. The question is: would the findings be published, thus naming and shaming?

        I guess I worry about conflating the visible and invisible problems. The visible problem is gaming by private contractors; the invisible problem is DFID staff gaming their internal system to get programmes approved. If you just address the visible problem you are going to end up with a whole host of underperforming programmes. Consider also adverse selection. If the “DFID blueprint” does not allow for more flexible logframes and results frameworks, and if you force adaptive-minded implementers to go through ever more complicated hoops, you might end up precisely with the kinds of providers (like business consultancy firms) who are better at delivering “results”.

        • Cathy Shutt

Thanks for commenting, Pablo. We agree that flexibility to change results targets should be celebrated. If a knock-on effect of the ICAI report was more rigid logframes, that would be a shame. We think ICAI is sending mixed signals on this, as Craig said in this Twitter thread.

          However, we didn’t claim those revising business case indicators are ‘necessarily’ gaming the system. ICAI’s suspicion comes from the fact that 23 out of 24 were revised down. That indicates an issue and they draw attention to analysis suggesting target revisions are driven by the need for suppliers to score well in DFID’s performance reviews.

Even if downward revisions were due to a better understanding of the context or problem, we'd still question why designs are almost always wildly optimistic. Technical, apolitical assumptions in economic appraisals are often dodgy. I've been involved in a couple of challenge fund programmes that wrongly assumed competition among innovators with shovel-ready projects would enable rapid spending. This, in turn, was expected to help programme managers identify successful projects with scaling potential. In both cases, however, programme managers experienced problems meeting KPIs, which caused anxiety at performance review.

          Fortunately, they also realised that some key business case design assumptions were flawed. Hence they decided to adapt and use relational, brokering approaches to find innovators instead. But in one case there were other design issues stemming from the ‘invisible problems’ you mention that weren’t addressed. This was partly because no one returned to the business case during implementation. ICAI point out that VFM assessments of DFID programmes often fail to revisit value propositions in economic appraisals. I suspect this is because much VFM analysis tends to boil down to monitoring performance against discrete VFM indicator targets linked to the 3 or 4E framework. It doesn’t encourage much evaluative reasoning or learning.

          Alternative approaches address this indicator driven problem head on. Oxford Policy Management and Action Aid both propose robust frameworks that ‘break free’. Crucially they stress the need for VFM analysis to be driven by evaluative questions linked to value propositions. These are then tested using criteria selected through inclusive processes involving and encouraging accountability to a more diverse set of stakeholders. Of course there is a risk that such approaches will become bureaucratic hoops. But the ICAI report indicates an urgent need to do something to improve the quality of evaluative reasoning used to make VFM judgements on DFID programmes. Engaging with them seems a good place to start.

  2. Francesca D'Emidio

    Really interesting posts and comments. Thanks! When we were developing ActionAid's approach to VfM, we really struggled to work out how the whole VfM agenda could be useful for an organisation like ActionAid, focussed on human rights based approaches and on social and political processes that are often hard to measure and predict. In the paper we developed we summarise some of our learning, which mainly was about moving away from the 3/4Es, which ultimately we did not find useful for analysing VfM and learning from the analysis. We also moved away from a donor-focussed approach to VfM as a reporting requirement to one that focusses on engaging with the actors of the programmes to understand which areas of work are worth the investment according to their perspective and what they value. In this way VfM becomes a tool for participatory MEL, where the investment is analysed together with the changes that have been achieved in order to facilitate adaptive management. While the ICAI review does touch upon the importance of promoting local ownership, I feel it does not emphasise enough the point that you raise in this blog about whose value counts when we talk about VfM. Given the recent emphasis on beneficiary feedback, why not put it at the centre of VfM analysis to enable a more genuine use of VfM for learning, rather than a bureaucratic reporting requirement?