The Randomistas just won the Nobel Economics prize. Here’s why RCTs aren’t a magic bullet.

Lant Pritchett once likened Randomized Controlled Trials (RCTs) to flared jeans: on their way out, and soon we'd be wondering what on earth we'd seen in them.

Not so fast. Yesterday, three of the leading ‘Randomistas’ won the Nobel economics prize (before the pedants jump in, strictly speaking it’s the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel). Congrats to Esther Duflo, who becomes only the second woman to win the prize (yes really, the other was Elinor Ostrom), with Abhijit Banerjee and Michael Kremer.

There will be lots of plaudits in the press, and a fair amount of magic-bulletism about RCTs as the alleged ‘gold standard’ for evidence of what works in development. So, channelling Pulp Fiction, ‘allow me to retort’. For the ‘bah humbug’ corner, here (in reverse chronological order) are some FP2P links to various sceptical views of RCTs going back several years.

Naila Kabeer on Why Randomized Controlled Trials need to include Human Agency (2019)

Quote: ‘RCTs need to acknowledge the central role of human agency in enabling or thwarting project objectives at every stage of the processes they study. It is unlikely they will be able to do this by confining themselves to quantitative methods alone.’

How did the Randomistas get so good at influencing Policy? (2019)

See diagram

Stefan Dercon (2018) on Duflo and Banerjee: 'Everything has to be inductive, experimental. Lots of little solutions will move us forward. They have no big theory of what causes low growth, no big questions, just "a technocratic agenda of fixing small market failures". Getting institutions right is not crucial – we can do lots of bad policies in good institutional settings, and lots of good policies in bad institutional settings.'

Lant Pritchett v the Randomistas on the nature of evidence – is a wonkwar brewing? (2012)

Quote: 'RCTs are a tool to cut funding, not to increase learning.' 'Randomization is a weapon of the weak' – a sign of how politically vulnerable the argument for aid has become since the end of the Cold War. 'Henry Kissinger wouldn't have demanded an RCT before approving aid to some country.'

And a quote from me in that blog: ‘On one side are the ‘best fit’ institutionalists and complexity people, with their focus on path dependence, evolution and trial and error. On the other are the ‘universal law’ experimentalists, offering the illusory certainty of numbers, and (crucially) comfort to the political paymasters seeking to prove to sceptical publics that aid works. It’s hard to see how they can both be right, or happily coexist for long.’ 

Poor Economics – a rich new book from Abhijit Banerjee and Esther Duflo (2011)

Quote: ‘But is it a Big Book? Yes in terms of the approach – I think it will leave a lasting impact on its readers in showing the merits of a bottom-up, evidence-driven approach. But not, I think, in terms of content – lots of interesting, surprising facts and analyses, but no one big message. Given their suspicion of grand narratives, I’m sure the authors would be quite happy with that.’

Well, I got that wrong…..

And here’s another economics Nobel, Angus Deaton, on why RCTs aren’t all they’re cracked up to be (2m). 2017 paper with Nancy Cartwright here.

In my experience, those at the centre of a hype cycle are often highly aware of the limitations and weaknesses of their product. It’s the acolytes and spin doctors who remove all the caveats and nuances, and a magic bullet is born. Let’s hope we can generate a better discussion about the strengths and weaknesses of RCTs in response to the Nobel prize.

Please add your own links.

Update: and do please read the comments below – impassioned and thought-provoking. Thanks everyone, and keep them coming!



19 Responses to “The Randomistas just won the Nobel Economics prize. Here’s why RCTs aren’t a magic bullet.”
  1. Dolphie

    Tim Allen and Melissa Parker's paper on how a few RCTs have been cited to support large-scale deworming programmes in East Africa, with over-optimistic assertions of success. Quote: 'The results of a much-cited study on deworming Kenyan school children, which has been used to promote the intervention, are flawed, and a systematic review of randomized controlled trials demonstrates that deworming is unlikely to improve overall public health.'
    A blog post on this:

  2. Tina Wallace

    Thanks for this. I agree completely that RCTs are largely a red herring in trying to understand how change happens and what is needed to enable sustainable change for the poorest. For me it continues to be about power, rights, justice and addressing inequality, as well as watching the real politics in each context and understanding the important issues as they evolve – wars caused by poor US policy, for example, the drip feed of climate damage, the backlash against women and LGBTI people from right-wing governments, and more.
    Measuring how many women get loans and repay them, or how many girls get better exam results, tells us very little about whether these things make lives better.
    I'm so worried that the Nobel prize going to these approaches will increase the focus on them as a 'magic bullet'. So thanks for the rebuttal.

  3. Teni T.

    I don't think a silver bullet exists when it comes to development, but RCTs do add nicely to the body of knowledge. Like most other things, their usefulness and relevance will always be limited by context. It's a valuable contribution either way, if you ask me. But no one is asking me haha.

  4. Sina Odugbemi

    My problem is the overwhelmingly dominant position of economists in international development. They have some powerful but limited tools, yet they reign supreme. Worrying about randomistas is an internal fight within the discipline. As a wag once said when I worked for the World Bank, the only other place where one profession totally dominates is Israeli politics; it pays to be a general, maybe with an eye-patch.

  5. Stefan Dercon

    Hi Duncan, despite seemingly offering you a good quote for your blog, I am still very pleased they were rewarded with the Nobel prize. As researchers they have a genuine commitment to development, and may have a particular view on how development can come about that we may not all, or at all, agree with. But their contribution to stimulating thinking, research and debate in development and in economics has been tremendous: this was real innovation and challenged theory and practice in economics for the better. (Just compare with what some of my other dismal academic economics colleagues are up to.)
    The quote from my lecture was part of a 'compare and contrast' exercise with some other big names in development, and as you know from experience, lots of that lecture can be easily misunderstood. I know which side you are on in the current Nobel debate, and I'm glad you are so upfront about it. But their view, even as expressed in my tongue-in-cheek caricature that you quote, is really worthy of consideration and should not just be dismissed. I have been involved in too much aid spending not to have some sympathy with a search for some certainty, even if the problems are seemingly 'small' and do not touch on the 'grander' problems and challenges the world faces (which I really think we need to focus on as well, and more, and which will involve addressing political economy challenges that are not 'small'). Concluding that we need much more than just looking at small improvements does not mean starting from the 'small' is not worth doing, if only to gain insight.
    Finally, although there have been some better than others, I have been personally pleased with two Nobel prizes in economics in recent times: Angus Deaton's and the current one; the fact that they disagree on quite a lot of things is good for economics (and is good for making me think). And anything we can do to keep academic economists focused on the realities of developing countries is good for economics as an academic field.

  6. Ruth Levine

    I am both distressed and unsurprised that the news about the Nobel to Banerjee, Duflo, and Kremer is prompting many people to dust off well-rehearsed and impassioned critiques of RCTs in the context of global development. I'm unsurprised because people have been in their intellectual corners for a long while. Personal antipathy has overtaken professional respect in some cases, and both the "randomista" and the "non-rigorous" caricatures have become such resonant memes that it is difficult to build bridges across the divide. And there is a real and persistent tension between asking questions that are small enough to answer, on the one hand, and focusing on big, complex, important systems and structures that are often too messy to gain a handhold on, on the other. It's easier, apparently, to pick one camp and force people to choose sides than to see and appreciate the complementarities.

    I’m distressed because the critiques are getting in the way of what could be an occasion for us all to find common ground. We could all acknowledge and applaud the fact that these three highly regarded academics, who could easily spend their time on esoterica, instead are doing the very hard work of partnership and problem-oriented research. We could give a shout out to the value of research findings for real-world decisionmaking, even if it’s not the kind of research that we ourselves do. We could express delight that the award easily could have gone to people whose work primarily has benefited corporations or people in rich countries, and instead the spotlight turned toward some of the problems that affect most people in the world, and certainly most people living in poverty.

    Just as the hopes for what random assignment evaluations might bring to the world may have been exaggerated, so are the fears about how attention to those methods might distort our understanding of problems or solutions. It would be far better if we could focus on the shared agenda and what people holding different perspectives can learn from each other than on the old, tired, and unproductive debates.

    • Duncan Green

      Thanks Ruth, but I have to disagree – the comments from you, Stefan and others are really thought-provoking and I think this has been a worthwhile exchange that will hopefully reach the plateau at the end of the hype curve, where instead of uncritical praise, or kneejerk condemnation, we build some kind of consensus on where different methodologies should be applied and what they do and don’t tell us. That is a difficult, but essential, conversation, surely?

  7. Jonathan Glennie

    “In my experience, those at the centre of a hype cycle are often highly aware of the limitations and weaknesses of their product. It’s the acolytes and spin doctors who remove all the caveats and nuances, and a magic bullet is born.” This is spot on. I don’t think they ever over-claimed for RCTs but my god did some people believe! A new religion was underway… In fact, I think they did great work. There is a tension between those that like technical fixes and those that like to attack power, but both are necessary. Surely we can celebrate the brilliance of techniques to improve specific interventions as well as emphasising the need to challenge structures…

  8. Hi Duncan, thanks for raising these issues. I share much of your scepticism about RCTs, particularly the way they conceptualise development programmes as standard, replicable “treatments” without much consideration of human personality, creativity, error, self-interest, adaptability, and so on. They are also not well placed to find out about programmes’ unintended consequences, which can often be really important, both in a negative and a positive way. However, there was one thing I really liked about ‘Poor Economics’, and that was how the evaluations produced results which were surprising to Banerjee and Duflo and forced them to go and talk to poorer people to try to understand what was happening (I remember discussions on why people eat unhealthy foods…). I would argue it’s worth celebrating the randomistas’ openness to surprises, especially in a world where some political scientists’ and critical theorists’ frameworks somehow cannot be challenged by any empirical data. But I would still echo Naila Kabeer and others in arguing strongly for the value of qualitative methods in evaluating development programmes.
    (Speaking of which, a brief line of flagrant self-publicity: my PhD, currently under way, is an oral history of a rural development project which was implemented by a German NGO in Ethiopia from 1992 to 2007. I hope it will add to the case for qualitative evaluation based on in-depth conversations with aid workers, ‘beneficiaries’, and the wide range of individuals and groups in between).

  9. Duncan Green

    This from Scott Guggenheim:
    Hi Duncan,
    I'm glad you're encouraging this discussion of RCTs and their detractors. Let me chip in a bit too, as an anthropologist participant observer who can't follow the math and gets mad at some of the hegemonic (and sloppy!) practices that get the critics so mad. And yet, I'm a really big fan of Esther, Abhijit, Michael and the others, for three really big reasons. First, the results of their work have affected the lives of millions of poor people for the better. That already sets them a cut above an awful lot of economics Nobel prize-winning, winner-take-all, trickle-down economists. Esther, Abhijit, Michael etc. don't just do RCTs. They do RCTs on policy problems that matter. And they do them on problems whose solutions help fight malaria, target health insurance, and reduce corruption.
    Second, I've watched how they work. As a long-time collaborator of Ben Olken and Rema Hannah, I nevertheless barely see either of them when they come to Indonesia – they're always off in the field checking and re-checking not just their data, but the qualitative interpretations of their initial hypotheses. Esther Duflo came to check out one of our wilder experimental ideas (using vouchers to create "markets" for displaced people). She stopped in Jakarta long enough to pick up a smart young Indonesian also interested in the idea, and then she simply vanished to the Kalimantan swamps for the next two weeks to embed herself in context. How many economists do that?
    Last, nobody else I've seen invests as much in building long-term local capacities. Not just by training students, but through serious, long-term capacity building, such as running special short courses so that Indonesian officials understand evidence-based policy, or nurturing a JPAL-Jakarta program headed and staffed by really bright young Indonesians who know they can call on the JPAL network. Again, how many other economists make these investments? No wonder their work has impact.
    I share the critics' concerns about abuse of the methodology. In fact, my own critique of their work and training is that they are not sufficiently explicit about how important those investments in understanding the context are for interpreting their regressions. But it's easy to carp. Esther, Abhijit, Michael and their collaborators set a very high bar for the development economics field, and I was really pleased to see them get global recognition not just for their identification strategies and revitalization of applied microeconomic work, but also for their humanity and commitment to development and the poor.

  10. Cristina Bacalso

    My current project is this nexus between translating/connecting evidence in development for policy-makers, and this conversation has been immensely insightful on the role that RCTs should (or shouldn’t) be playing in evidence-informed policy-making. I have three questions:
    (1) Can anyone share studies on the actual uptake of RCT findings by policy-makers, programme implementers or funders in shaping decisions? While many are sceptical of RCTs as a "silver bullet" solution, I am wondering if this is indeed the way they are accepted, or if they are treated as one of many tools in our research toolbox – each method with its own limitations and blind spots, and considered alongside other methods (together, of course, with ideology, interest, politics and all the other non-evidence factors that come into the mix when making policy decisions).
    (2) What is the role of synthesis (namely systematic reviews) in adding further nuance to RCT findings, or is it felt that it simply compounds the problem (a "super silver bullet", so to speak)? In an ideal case, the role of synthesis would be to ensure that no single impact evaluation is the be-all and end-all, but that it is taken together with a breadth of findings. I am curious what the RCT detractors feel about this (not being a methodological expert myself!).
    (3) Some (including Kabeer in her interview) mention mixed-method approaches, which could be better suited to evaluation. Can anyone point me to methodological papers on what this looks like in practice? What I am struggling with is whether the critique of RCTs amounts to a call for wholesale abandonment of the approach, or whether there are ways in which it can be improved.

  11. This is a brilliant debate. Many thanks for provoking it, Duncan. I am currently working with some brilliant people in attempting to persuade the Tanzanian government to adopt a slightly heretical policy of paying teachers a bonus (all upside, no downside) for delivering learning. Twaweza's RCT (published in the August 2019 QJE) documented some promising results, and one arm of the double-barrelled treatment actually became official policy in January 2016. That doesn't mean that policy-makers are completely sold on the idea we proposed and tested. One very senior ministry official suggested that the notion of awarding teachers a bonus for ensuring their pupils achieve a minimum level of literacy and numeracy is akin to a bribe. So far, that is a minority view. Unfazed, we're working to scale up the cash-on-delivery idea using government delivery systems and civil servants. Will it work? Will we get the 0.2sd effect that the initial RCT found? Well, that's what the current RCT is trying to find out. Stay tuned.

  12. A question: according to almost all ethical rules for research, you have to inform the people you are going to include in your experiment about its purpose. I happen to think that corruption, and the problems related to it in the public sector, are the most serious problem in development. If you are going to do an RCT with, say, local schools or local health clinics to see what 'treatment' works against their corruption, you will, according to the ethical rules, have to inform them that you are doing research on what works against corruption. This would in all likelihood create a massive 'Hawthorne effect': the people who work in the schools or health clinics would abstain from corrupt practices while your experiment is ongoing (in both the control and the treatment organisations). How do the randomistas handle this, or do they simply avoid studying what works against corruption?