The evidence debate continues: Chris Whitty and Stefan Dercon respond from DFID

January 23, 2013

By Duncan Green

Yesterday Chris Roche and Rosalind Eyben set out their concerns over the results agenda. Today Chris Whitty (left), DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon (right), its Chief Economist, respond.

It is common ground that “No-one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions.” Neither would anyone argue that power, politics and ideology are not central to policy and indeed day-to-day decisions. Much of the rest of yesterday’s passionate blog by Rosalind Eyben and Chris Roche sets up a series of straw men, presenting a supposed case for evidence-based approaches that is far removed from reality and in places borders on the sinister, with its implication that this is some coming together of scientists in laboratories experimenting on Africans, 1930s colonialism, and money-pinching government truth-junkies. Whilst this may work as polemic, the logical and factual base of the blog is less strong.

Rosalind and Chris start with evidence-based medicine, so let’s start in the same place. One of us (CW) started training as the last senior doctors to oppose evidence-based medicine were nearing retirement. ‘My boy,’ they would say, generally with a slightly patronising pat on the arm, ‘this evidence-based medicine fad won’t last. Every patient is different, every family situation is unique; how can you generalise from a mass of data to the complexity of the human situation?’ Fortunately they lost that argument. As evidence-informed approaches supplanted expert opinion, the likelihood of dying from a heart attack dropped by 40% over 10 years, and the research tools which achieved this (of which randomised trials are only one) are now being used to address the problems of health and poverty in Africa and Asia.

The consequences of moving from expert (ie opinion-based, seniority-based and anecdote-based) to evidence-based healthcare policy, far from being some sinister neocolonial experiment, have been spectacular. To quote a recent Economist headline, ‘Africa is currently experiencing some of the fastest falls in childhood mortality ever seen, anywhere’. It is a great example of the positive side of modern Africa that the current excellent Oxfam publicity campaign (right) is all about. This success is based on many small bits of evidence, from many disciplines, leading to multiple incrementally better interventions. Critically, it also involves stopping doing things which the expert consensus agreed should work, but which when tested do not. It is no accident that one of the most evidence-based parts of development is also one where development efforts have had some of their greatest successes.

Proper evidence empowers the decision-maker to make better choices. This is a good thing. In every discipline, in every country, where rigorous testing of the solutions of experts has started, many ways of doing things promoted by serious and intelligent people with years of experience have been shown not to work. International development is no different, except that the communities we seek to assist are more vulnerable, including to our bad choices.

Much of what we all do in international development has very limited evidence that it does any good (in this it is no different from many other policy areas) – which is not the same as saying it is pointless. Rather, we don’t know what is pointless. Some of what we do will work better than we think, much of it will work much less well than we hope, and some of it will be damaging the poorest without our realising it. In the evidence-light areas we just don’t know which is which.

We must have the humility to accept that we are all often wrong, however reflexive the practitioner, however deep their reading, experience and passion to do good. Evidence-based approaches are not about imposing a particular theory or view of the world. They are simply about taking any opportunity to test our own solutions in the best way available, using evidence honestly when it is available to inform (note the word) decisions, and, when the facts change, changing our minds.

This honesty includes saying to decision-makers when evidence is methodologically weak, mixed or missing so they know they are on their own, unable to rely on (or make a claim on) the evidence. The worst possible solution, which we know Chris and Ros would also deplore, is using the social power of the ‘expert’ to imply we know the answer when we actually have no solid evidential basis for our opinion or prejudice.

A few false assumptions about evidence-based decision making

Some of those who express unease about evidence-based policy and practice seem to assume that it is always based on randomised trials and quantitative methodologies: not so. Methods from all disciplines, qualitative and quantitative, are needed, with the mix depending on the context. Randomised trials are one tool amongst very many, although a good one in the right setting. The argument that evidence-based approaches can “only apply in cases of individual treatment and not the wider community level” ignores over 30 years of methodology which has done exactly that, with very convincing results.

A sterile argument between people who, on one side, believe that a randomised trial can answer any question (it can’t) and, on the other, people who do not appear to be aware of any methodological advances since the 1970s outside their own narrow field is a depressingly familiar experience. We know this does not apply to Rosalind and Chris, but listening to people passionately critiquing methodologies they have not taken the trouble to understand does no good to anyone. This applies both to a randomista who seems to believe that all there is to social research is a few focus groups and in-depth interviews, and to people from a more qualitative social science background who would have trouble explaining the difference between a cluster randomised and a stepped-wedge design but assume both are irrelevant to social research anyway (both can be used to measure societal rather than individual effects).
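To make that distinction concrete, here is a minimal illustrative sketch (ours, not from the post) of how treatment assignment differs between the two designs. The cluster and period counts are invented for illustration: in a cluster randomised trial, some clusters never receive the intervention, while in a stepped-wedge design every cluster eventually crosses over at a randomised time, so its earlier periods serve as controls.

```python
# Illustrative sketch only: toy treatment-assignment matrices for a
# cluster randomised trial versus a stepped-wedge design. Cluster and
# period counts are hypothetical.
import random

N_CLUSTERS = 6  # e.g. villages or clinics (invented for illustration)
N_PERIODS = 4   # measurement periods

def cluster_randomised(n_clusters, n_periods, seed=0):
    """Half the clusters are randomised to receive the intervention
    for the whole trial; the rest never receive it."""
    rng = random.Random(seed)
    treated = set(rng.sample(range(n_clusters), n_clusters // 2))
    return [[1 if c in treated else 0 for _ in range(n_periods)]
            for c in range(n_clusters)]

def stepped_wedge(n_clusters, n_periods, seed=0):
    """Every cluster eventually crosses over to the intervention, but
    the crossover period is randomised; pre-crossover periods act as
    controls."""
    rng = random.Random(seed)
    order = list(range(n_clusters))
    rng.shuffle(order)  # randomise the order of crossover
    per_step = max(1, n_clusters // (n_periods - 1))
    step = {c: min(1 + i // per_step, n_periods - 1)
            for i, c in enumerate(order)}
    return [[1 if t >= step[c] else 0 for t in range(n_periods)]
            for c in range(n_clusters)]

if __name__ == "__main__":
    for name, design in [("cluster randomised",
                          cluster_randomised(N_CLUSTERS, N_PERIODS)),
                         ("stepped wedge",
                          stepped_wedge(N_CLUSTERS, N_PERIODS))]:
        print(name)
        for row in design:  # one row per cluster, one column per period
            print(" ", row)
```

In both cases the unit of randomisation is the cluster (a village, clinic or district rather than an individual), which is what allows effects to be measured at the societal rather than the individual level.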

It is tempting to take issue with every point the authors make where we have concerns about its factual basis and logical framework, but we will take just three.

“Evidence-based approaches are pre-occupied with avoiding bias and increasing the precision of estimates of effect”. On less bias – generally true: please complete the sentence ‘More biased research is better because…’. On precision – no, that is incorrect: the range of situations where a more precise answer is a better answer is small.

One statement we would like to address head-on starts “Evidence-based approaches became linked to value for money concerns to deliver ‘results’…”. We agree – and this is a good thing. Doing a pointless thing, professionally delivered and passionately believed in, is always going to be poor value for money. Testing what works and what does not is therefore essential to value for money. More importantly, doing pointless things diverts very limited human and financial resources, in an ocean of need, away from those who could best use them – not what any of us are in international development to do.

Is it “technical approaches” on the one hand, and “power, political economy” analysis on the other?

Rosalind and Chris’ key criticism is that evidence-based approaches “deflect attention from the centrality of power [and] politics […] in shaping society”, and they offer “power analyses” as an apparent alternative to assessing rigorously what works. This creates a false dichotomy, as if a choice has to be made between a “technical, rational and scientific approach to development” and an approach that recognises politics and the role of power. It is easy rhetoric, but troubling and, if taken much further, even dangerous. Understanding power and politics and how to assist in social change also requires careful and rigorous evidence, and again, the results are not simply what experts would have expected a priori. Recent studies on the positive impacts of female leadership quotas in rural India are, for many of us, rather surprisingly good news, even if one can fairly worry about their applicability in other settings. Meanwhile, the struggle to find systematically a positive impact of decentralisation and community-driven development programmes is important to internalise in our actions for change, and highlights the importance of understanding contexts and politics. In these cases, it is not a matter of just RCTs, but of rigour, and of combining appropriate methods, including more qualitative and political economy analysis.

Strong analysis of politics and power without offering much in terms of what can be acted upon is similarly unhelpful. They criticise an evidence-focused agenda by stating that “to act ‘technically’ in a politically complex context can make external actors pawns of more powerful vested interests and therefore by default makes them, albeit unintentionally, political actors.” But all actions by external actors will interact with political forces and vested interests. In many of the settings where development actors want to make a difference, power and political institutions are biased against the poor. Being able to act on strong evidence of what works in constrained political settings is crucial.

A reductionist and misinformed view of evidence as purely ‘technical’ or as being only about “what works” is unhelpful – evidence is also about generating understanding (and learning) of why interventions and approaches may work, including understanding the social, political and economic factors that may enable or constrain the success of different approaches. Far from the search for evidence pushing us in a ‘technical’, apolitical direction, it has reinforced the importance of understanding and trying to tackle the underlying causes of poverty and conflict. There is agreement on the importance of politics and institutions in shaping growth, security and human development. However, the ability of external actors to influence institutions is much less clear, and this is where DFID research is now focussed. Ros and Chris have misread the context – the commitment to evidence has opened up the space fundamentally to challenge conventional, technical approaches to aid.

Why it matters for international development

There are large areas of international development where decision-makers are largely flying blind – forced to make decisions purely on gut feeling and ideology, not because they wish to but because they have no option. Try making difficult decisions in education policy compared to health policy and the difference in usable evidence is dramatic – yet both are complex, social and context-dependent parts of human life. It is always puzzling when people say airily ‘health is easy’ – it is not, and it is an intensely political and social subject requiring interventions at societal level.

Today we can eradicate rinderpest in cattle and build bridges over the Zambezi based on rock-solid evidence from many disciplines, but do not have anywhere near as clear an idea how to reduce violence against women or tackle police corruption. All are great challenges with social dimensions but in two of them people have set about finding and testing solutions in a systematic way over many decades.

Having robustly tested evidence-based solutions certainly does not eliminate politics: the decision whether to build a bridge, what sort and where, is an intensely political choice – but at least those making the choice can now fairly assume it will stand up, based on hundreds of years of incremental evidence. The evidence-barren areas in development are a collective, and in our view shameful, failure by us all in the academic and practitioner community. We should never excuse them with the feeble assertion that it is too difficult or complicated. Development is difficult and complicated – but the basis for making decisions will gradually improve if we are serious about improving it.

In conclusion, we collectively have the capacity to give our successors in every continent a far better basis on which to make their decisions for their lives than our generation has had. To imply it is not worth trying to provide the best and most rigorous evidence to those who need to make difficult decisions, because they will have other influences as well, is like saying to someone going for a walk in dangerous mountains that they do not need a map because there will be many other factors that determine where they go. That is true – but they are still less likely to fall off the cliff if they have one.

Where evidence is clear-cut we should be making that plain to decision makers – and where it is not we should say that as well, be honest about what is there and try to get better evidence for the future. That, in essence, is what evidence-based decision making is about – and all it is about. If the academic community is serious about trying to assist those working in the field (including in Oxfam), and above all empowering the most vulnerable communities to make the most informed possible decisions available for their own development, we should be putting our greatest efforts into supporting decision-makers to use the best evidence, and finding better methodologies in areas where we currently have very weak evidence. There are many, and this should be tackled as a matter of urgency.

Tomorrow, Chris Roche and Rosalind Eyben respond
