Adaptive rigour: bridging the art and science of adaptive management

Ben Ramalingam and Leni Wild share the thinking behind a new initiative to support adaptive management in aid.

Adaptive management seems to be everywhere these days – and is one of the most popular topics on this blog. It is increasingly seen as the best way to deal with a wide range of development and humanitarian problems, from addressing conflict to achieving political reforms to supporting better learning outcomes in schools.

Some bureaucracies are showing greater tolerance for experimentation and creating internal processes and systems to accommodate this – Yuen Yuen Ang’s work documents how experimentation has been central to China’s progress in poverty reduction, while in the UK, parts of local government and some organisations have begun embracing the need for change. In aid, donors, funders and implementers have been signing up to such approaches – the Global Learning for Adaptive Management (GLAM) initiative, which we direct, is one of the most significant investments in this area. Funded with just under £4 million by the Department for International Development (DFID) and the United States Agency for International Development (USAID), it aims to build greater uptake and stronger use of evidence in such approaches.


Much has been written about how to ensure effective and appropriate adaptation, and there is a growing diversity of perspectives. But there is a real risk that adaptive management will end up being dismissed as a passing fad with little genuine impact on how development and humanitarian efforts really work.

Making space for experimentation and adaptation within the realities of bureaucratic constraints means finding a way of bridging two worlds – the creative dynamism that characterises adaptive management and the considered, systematic approaches that can be documented, shared and improved – the art and the science.

It means showing that adaptive management can demonstrate accountability and rigour in its use of quality evidence. But it also means understanding rigour differently – accountability cannot be geared to pre-defined results and processes, and needs to enable experimentation rather than inhibit it. These core aspects of effective adaptive management need to be clear, intuitive and easy to grasp if they are to convince others of their merit. This is what we call adaptive rigour in a recently released GLAM briefing paper, which highlights three factors:

1: Embed monitoring, learning and evaluation throughout delivery – not just at the start and end

Taking an adaptive approach means, at its core, ensuring evaluative thinking is not just undertaken at the design and ex-post assessment stages, but is embedded throughout. That means recognising that measurement indicators and methods may need to change as something is being delivered – a country may experience major political change or a natural disaster leading to changing conditions for reform; evidence may highlight that the initial set of activities is not leading to intended outcomes and needs to adapt; or you may learn more about who really needs to be engaged in a change process, and shift networks and key relationships accordingly. While this might appear less ‘rigorous’ to some, undertaking these changes and documenting them is much more rigorous than insisting that an intervention be implemented in a uniform way across sites, over time and with the original design.

[Figure: The role of MEL across an adaptive programme cycle]

2: Ensure quality in how evidence for adaptations is gathered, assessed and used

Done well, adaptive approaches should demonstrate better use of evidence than more traditional top-down delivery models – but again, this needs to be practised differently. There are three principles for this:

  • Usefulness: How to ensure that evidence is actually used and informs decision making on an ongoing basis, for instance through regular strategy review and forecasting?
  • Practicality: How to ensure that the types of evidence used are diverse, involving different perspectives and reflecting tacit knowledge and processes that are often undervalued but can be key to success?
  • Timeliness: How to ensure that evidence is produced and used at the right time to inform ongoing decision making, rather than coming too late or not in a form that actually supports programme leaders to make better decisions?

3: Strengthen the enabling environment

While our first two points can help counter criticisms of ‘making things up without a plan’, change won’t really happen without more fundamental shifts in mindsets, behaviours and power balances, within government bureaucracies and aid organisations.

Ultimately, funders need to recognise that they are not playing conventional ‘commissioner’ roles (i.e. buying a particular service, output or reform). Rather, they need to act as ‘system enablers’, providing the right conditions for others to solve problems, work collectively and achieve ambitious outcomes. Organisations at different levels (funders, implementers) need to foster the right internal capacities and incentives to create space for change. We suggest a set of questions to identify the ‘institutional readiness’ of organisations:

  • Do senior leaders and managers foster an enabling working environment and shared mindsets around adaptive change? Are there safe spaces to recognise uncertainty, identify early failures and ensure action is taken to address them?
  • Are staff capacities such as curiosity, creativity, critical thinking and comfort with uncertainty valued, and reflected in key recruitment, rewards, training and promotion systems?
  • Are reporting and accountability mechanisms, including contracting, supportive of the need for adaptations through the implementation process?

The ideas presented in our paper provide an initial inventory (which can be downloaded separately here), and a set of ideas that can hopefully provide a platform for shared learning. We are already finding the approach to be of genuine value in our own work advising new and existing ‘adaptive management’ programmes and portfolios. But this is definitely just a preliminary attempt – and we warmly invite feedback, comments, and critique in the spirit of collaborative improvement.

In closing, from our perspective, strengthening monitoring, learning and evaluation systems is not a ‘nice to have’ or simply a technical endeavour. Building better and more rigorous feedback and learning systems lies at the very heart of the effective and adaptive endeavours that are needed ever more urgently around the world. We believe that better and more creative experimentation can help remake humanitarian and development organisations, ensuring our efforts better reflect and respond to realities on the ground. This means working hard to improve methods, mechanisms and modes of development – but also being aware that ultimately, this must result in changed mindsets. Ultimately, the effort to ‘do development differently’ begins with ourselves.

What do you think? Share feedback via our online survey.



4 Responses to “Adaptive rigour: bridging the art and science of adaptive management”
  1. Jake Allen

    Thanks for this and the generous sharing of your work. Couple of thoughts.
    Firstly, I think AM has definitely now gone beyond ‘passing fad’ stage. But why? Is it because it’s not passing or not a fad, or because actually what it’s about is an inherent feature of this work so was always there anyway?
    Secondly, in terms of embedding MEL, whilst I agree, the idea of embedding still suggests you have your intervention – programme, portfolio, whatever – and then you have this *other* thing about MEL, which you embed. I think it needs to go further and, in the spirit of our internalising and personalising of DDD, everyone in teams needs to know intimately how their work connects with others, so everything is embedded, and the whole becomes greater than the sum of the parts.
    But this rests on my third thought. The danger of the idea of selecting what is/are the most important aspect/s of AM is that it goes against what the reality is, IMHO, which is it is any and all of these at different times, and the skill is being able to navigate these against and within the complex contexts we work in. The third and a half additional point is that what I think is needed to achieve this is the empowerment and support to the ‘pinch point’ where top-down and bottom-up meet – maybe a team leader or a portfolio director, these needing to be the people who have the skills, abilities and personal qualities to do this navigation – which requires sometimes harder choices about who does this – can manage the bureaucracies, and protect and empower their teams to enable ‘doing’.

  2. Adaptive learning is most needed when field staff first begin to discover what is doable and worth doing in a specific local context. For that, they need evidence rather than assumptions, nuanced information about local relationships and capabilities rather than indicators and experiments, and answers to questions they are initially too clueless to ask. Qualitative methods are designed specifically for this situation – especially process tracing, which discovers cause-and-effect in a (static or dynamic) local context with high-powered unknown unknowns that are identified steadily as a project learns.

    All of that has been largely overlooked by the development community’s use of adaptive management. Your September 2018 ODI paper (“Building a global learning alliance on adaptive management”) is a rare exception. You identify process tracing as “particularly well suited to adaptive programs,” but you believe it requires “longer timeframes than the rapid feedback cycles that are core to adaptive management.” That is true of process tracing for rigorous academic research. However, the basic essentials are readily adapted for fast, simple learning by field staff in the early stages of a development project. See John Hoven (2018) “Adaptive Theories of Change for Peacebuilding”.

  3. This is very interesting. As someone who has moved in/out of the adaptive learning & evidence fields I definitely look forward to continuing to follow this work!

    Can you describe more about how you bring “rigor” while being practical about different types of knowledge? For example, I agree that documenting tacit knowledge is extremely important to building good quality learning systems, but how do you differentiate between good-quality tacit knowledge and poor-quality tacit knowledge?

  4. Really excellent blog. As a True Believer in the “fact” of program adaptation (it’s what we do/it’s what we’ve always done whether consciously and skillfully or not), I agree with most of the great points being made. But the idea of “adaptive rigor” did poke at a gnawing belief I’ve long held that in our natural desire to “counter criticism” of AD/AM we might inadvertently play on turf that is inconsistent with the new mindset being rightly invoked here. The phrase “adaptive rigor” is appealing because it flags to colleagues and donors that we aren’t just “making this stuff up,” but at the same time it might downplay/undermine the nature of dynamic, complex implementation environments that are rigor-resistant in a significant sense (even if we frame ‘rigor’ differently as you suggest). The most appealing feature of adaptive development, to me, is its refusal to ignore the fact that programs are complex and unpredictable, and its call on implementers to have the integrity to face that fact and accommodate uncertainty. Don’t get me wrong, I like rigor as much as the next guy at the right time and place, but reconciling any definition of rigor with (sometimes radical) flexibility is hard to pull off conceptually. Similarly, I sometimes find it difficult to speak of “evidence” to mainstream audiences with a completely straight face – of course, we use evidence extensively, but when we use the word we recognize that it is inevitably partial, tentative and its selection, subjective. Evidence is the foundation of AM (if anything it is interrogated more frequently and critically than in non-adaptive programs) but sometimes the word is used to allow positivist colleagues to think that we are all playing the same game when we aren’t. In bridging art and science, we may use terms that risk overemphasizing the scientism of AM to placate colleagues who are more comfortable with traditional practice.
Though rhetorically effective perhaps, this may have the effect of making it seem like we’re not really doing development so differently after all.
    In other words, I think we sometimes choose our words strategically so as not to appear *too* threatening to the status quo, but I wonder if this might not compromise the message of AD/AM and release pressure on the need for a genuine change of mindset. As I say, this is all back-of-the-mind stuff that just came to the fore when coming across the phrase “adaptive rigor.”