What does the evidence tell us about ‘thinking and working politically’ in development assistance?

July 2, 2019

By Duncan Green

We’re having an ‘Adaptive Management week’ on FP2P, because so much good material has been coming through recently.

First up is a new paper by Niheer Dasandi, Edward Laws, Heather Marquette and Mark Robinson, which I read on the way to the TWP conference in Washington that I wrote about recently. It really got me thinking.

The paper is pretty damning: ‘much evidence is anecdotal, does not meet high standards of robustness, is not comparative, and draws on self-selected successes reported by programme insiders.’ Ouch.

To arrive at this, the authors analysed and compared 44 case studies – a really useful contribution, which generates some important insights:

The dangers of a single, very influential piece of research: ‘If TWP is at its heart about illuminating contextual differences in order to move away from ‘cookie cutter’ best practice approaches, then we would expect to see variations in programme design, implementation and outcomes. However, while many different case studies have been published since Booth and Unsworth’s comparative study, there is very little, if any, variation between them along these factors. Indeed, given the similarities highlighted below, it is difficult, if not impossible, to discern if the patterns that are beginning to emerge from comparing the various cases genuinely reflect an emerging consensus, or if, in fact, they reflect growing ‘group think’ among TWP insiders about the necessary programme design characteristics.’

Blind spot on Fragility and Conflict: Of the 44 programmes that we identified as being the subject of TWP research, only seven are based exclusively in countries that are featured on the World Bank’s most recent Harmonised List of Fragile Situations. Given the growing concentration of aid from major donors, including DFID and the World Bank, in fragile and conflict-affected states, and the untested nature of these ways of working in such settings, a greater emphasis on TWP research in violent and unstable political contexts would seem important. [nb this is what we are trying to do in the A4EA research, but for some reason our Myanmar paper did not make it into the selection of case studies].

Unintended consequences of hiring connected lobbyists: Given the argument found in many of the case studies that effective programmes require politically well-connected staff, there has been surprisingly little analysis of how these staff are recruited, how their activities are assessed, or what this may mean in practice in politically divided societies. I sent this post in draft to the authors, and Heather Marquette said in reply:

‘I’m increasingly worried about potential unintended consequences from untested approaches that carry their own risks: what will happen if aid programmes start routinely embedding politically well-connected insiders with the ability to direct resources? Will ‘working with the grain’ help to speed up or slow down the closing of political space? What does any of this mean when programmes are being delivered by non-aid actors with close links to intelligence operating in conflict-affected areas? Basically, these programmes are experiments, but they’re not treated like experiments, with the sort of safeties you’d build into an experiment, both at the case study level and at the body of evidence level.’

All the case studies were reform programmes, with governance, justice and security, and infrastructure being the most common sectors: TWP might look similar, in terms of programme design, for reform programmes regardless of sector; whether or not that is useful for someone trying to design an infrastructure programme, or a service delivery one, is not clear.

Donor dominated: The literature on TWP programmes is focused primarily on the role of bilateral and multilateral donors. For the most part, the agencies in question are DFID, DFAT and the World Bank.

Weak evidence base: The literature continues to be almost entirely made up of single-programme case studies, with few attempts at comparison, written for the most part by programme insiders. Even [more rigorous efforts] rely largely on interviews and documentary analysis, or a form of action research, rather than methods more appropriate for establishing causal explanations, and approaches to triangulation are often unclear or entirely absent. As a result, in the case studies reviewed it is often hard to discern a direct causal relationship between TWP and the outcomes said to have been achieved. Only one study in our sample discusses counterfactuals, and very few discuss challenges faced in the programmes or areas that were unsuccessful… Studies rarely focus on outcomes, concentrating instead on the reform and/or programming process.

This is a really good challenge, but I have some serious questions about the paper too:

Much of what is TWP is not actually called TWP – for example, there are no case studies from the Building State Capability team and its work on ‘problem driven iterative adaptation’, or from the Action for Empowerment and Accountability programme, including papers I have written on donor theories of change and on Pyoe Pin in Myanmar.

But more interestingly, what would constitute evidence in the eyes of the authors? There are in-depth, warts-and-all case studies of the Coalitions for Change programme in the Philippines by an independent academic, John Sidel (also not on the review’s list) – are they evidence? Would process tracing help establish the degree of attribution a TWP programme can claim for a given ‘win’? In another version of the paper the authors call for ‘triangulation’ (not sure what they mean by that) and greater discussion of failures. In reply to my email, Niheer Dasandi expanded on this point:

‘The papers we reviewed imply causality – they clearly suggest that the use of TWP has led to improved outcomes. In fact, that’s at the centre of most of these papers. However, very few of them directly engage with the issue of causality. I think that’s the main problem, rather than them really trying and us simply dismissing their efforts as ‘anecdotes’. If you were to seek to justify the causal claims, then there are a whole range of things that might arise that will help you evidence the changes that have occurred as a result of incorporating TWP components into a programme. This might be spending a bit more time discussing similar programmes that were tried in the same context that did not take a TWP approach, and didn’t have the desired results. It might mean comparing different programmes. It might be presenting different forms of data/evidence that all point to a TWP aspect of a programme being critical for achieving the positive results – this is what we mean by ‘triangulation’. Using different forms of data – interviews (with a range of different actors – something often not done), focus groups, reports, media coverage, any available statistics (that show improved results), etc. Many of the studies we look at provide a few quotes from people involved in the programme, but don’t seek to really justify their claims with different forms of data.’

I actually think what they dismiss as ‘action research’ is probably the best way to provide convincing evidence that pays due attention to context, chance, and the interaction between political economy and human agency and leadership. How else can you ‘establish causal explanations’? But anyone who is predisposed to be sceptical will say ‘that’s just an anecdote’. Anyone got a better idea?

Tomorrow: Heather Marquette explores the differences between all the acronyms (TWP, DDD, PEA etc.) and why it matters.