Can complex systems thinking provide useful tools for aid workers? Draft paper on some DFID pilot experiments, for your comments

Ben Ramalingam, who wrote last year’s big book on complexity and aid (Aid on the Edge of Chaos), has been doing some interesting work with DFID and wants comments on his draft paper (with Miguel Laric and John Primrose) summarizing the project. The draft is here: BestPracticetoBestFitWorkingPaper_DraftforComments_May2014 (just comment on this post; the authors will read and reply where necessary, and make sure any non-bonkers comments are reflected in the final version).

The project tries to answer a thorny question for complexity wallahs. Can the standard research tools for studying complex systems provide a useful toolkit for aid agencies, or is that an oxymoron? That is, is the whole point of complex systems that you can’t have standard approaches, only connected, agile people able to respond and improvise?

If the answer is ‘oxymoron’, then we may have a problem in thinking about and navigating complexity. But there is also an issue if the answer is ‘yes’: the ‘toolkit temptation’ is deeply rooted in the aid and development sector, often unhelpfully, and could lead to damaging ‘complexity silver bullets’ and tick boxes.

Ben and colleagues reckon that this circle can be squared, and that the right complexity toolkit:

‘can help navigate a middle ground in the face of complex problems: to ensure development professionals neither have to surrender to uncertainty on the one hand nor construct convenient but false and potentially unhelpful log-frame ‘fictions’ on the other.’

To explore this, they have been trying out some complexity tools, using various DFID wealth creation programmes as guinea pigs (pilots). The pilots were trade facilitation and girls’ empowerment in Nigeria, private sector development in DRC, and a cross-DFID programme management review.

The Nigerian trade programme gets the most coverage in the paper, and exemplifies some of the issues. The aim of the programme was to improve the volume and value of trade. Trade is a classic complex system, full of feedback loops – e.g. if an activity becomes more profitable, and there are no significant barriers to entry, more entrepreneurs jump in. That means that linear analyses (reduce tariffs by X and trade will rise by Y) are often proved wrong in practice, because they fail to include the effects of feedback. So the researchers built a simulation of the trading system, and used it to run a simulation game for DFID staff to illustrate how tweaking different activities would ripple through the trade system.
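To see why feedback defeats linear prediction, here is a toy sketch (my illustration, not the paper’s actual simulation): entrepreneurs enter a market while per-trader profit stays above an assumed entry threshold, so the system settles where the feedback allows rather than where a linear tariff-to-trade formula would put it. The numbers and the entry threshold are invented.

```python
# Toy feedback model: entry/exit responds to profit per trader, so the
# effect of a tariff change is mediated by competition, not linear.

def simulate(tariff, steps=50):
    traders = 10.0
    history = []
    for _ in range(steps):
        # Profit per trader falls as the tariff rises and as more
        # traders compete for the same demand.
        profit = (1.0 - tariff) * 100.0 / traders
        # Feedback: traders enter while profit exceeds an assumed
        # entry threshold of 5, and exit when it falls below.
        traders = max(traders + 0.5 * (profit - 5.0), 1.0)
        history.append(traders)
    return history

# A linear analysis would say: halving the tariff roughly doubles
# per-trader profit, so trade doubles. With entry feedback, new
# entrants compete the margin back down, and the number of traders
# (a crude proxy for trade volume) settles where the threshold allows.
low = simulate(tariff=0.1)
high = simulate(tariff=0.5)
print(low[-1], high[-1])
```

The point is not the particular equations but that the equilibrium is set by the feedback loop, which is exactly what a one-shot ‘reduce tariffs by X, trade rises by Y’ calculation misses.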

Their conclusion?

‘The best way forward, short of trying to analyse and predict the system in advance – which is likely to be impossible, is to employ a portfolio approach: identifying possible entry points for interventions, launching multiple parallel interventions and learn in ‘real time’ to ensure the appropriate sequence and mix of activities. Indeed, the method is designed to support such an evolutionary approach to programming.’
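One way to make that portfolio approach concrete is to treat each candidate entry point like an arm of a bandit: fund several interventions in parallel, observe results, and shift effort toward what is working. The sketch below uses epsilon-greedy selection purely as an illustrative stand-in for ‘learning in real time’; the intervention names and success rates are invented.

```python
import random

random.seed(1)

# Invented interventions with unknown-to-the-manager success rates.
true_success = {"tariff_reform": 0.2, "border_posts": 0.5, "trader_assoc": 0.35}

attempts = {k: 0 for k in true_success}
successes = {k: 0 for k in true_success}

def pick(epsilon=0.2):
    """Mostly back the best-performing intervention so far, but keep
    experimenting with the others some of the time."""
    if random.random() < epsilon or all(v == 0 for v in attempts.values()):
        return random.choice(list(true_success))
    return max(attempts, key=lambda k: successes[k] / max(attempts[k], 1))

for _ in range(500):
    arm = pick()
    attempts[arm] += 1
    if random.random() < true_success[arm]:
        successes[arm] += 1

# Over time, effort concentrates on the more effective entry points
# without ever having predicted the system in advance.
print(attempts)
```

This is only a caricature of adaptive programming: real interventions interact and their payoffs shift, which is precisely why the paper argues for ongoing real-time learning rather than a one-off allocation.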

The programme on girls’ empowerment used network analysis tools to explore the range of actors influencing girl power, and the interactions between those actors, producing an improved understanding of the ‘power ecosystem’ at play (the difference from traditional stakeholder mapping is an explicit analysis of the interactions between different stakeholders, rather than mapping them in isolation). The DRC private sector work did something similar. And I must admit I didn’t read the case study on internal DFID process reviews (can anyone blame me for that?!)
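A minimal sketch of what that kind of stakeholder network analysis involves: rather than listing actors in isolation, record who influences whom and ask which actors sit at the centre of the ‘power ecosystem’. The actor names and influence ties below are invented for illustration.

```python
from collections import Counter

# Directed ties: each actor maps to the actors it directly influences.
influence = {
    "ministry_of_education": ["head_teachers", "curriculum_board"],
    "head_teachers": ["teachers", "parents_assoc"],
    "parents_assoc": ["community_leaders"],
    "community_leaders": ["parents_assoc", "religious_leaders"],
    "religious_leaders": ["community_leaders", "head_teachers"],
}

def in_degree(graph):
    """Count how many actors directly influence each actor -
    a crude centrality measure."""
    counts = Counter()
    for source, targets in graph.items():
        counts.setdefault(source, 0)  # actors with no incoming ties still appear
        for target in targets:
            counts[target] += 1
    return counts

centrality = in_degree(influence)
# Actors influenced from several directions are candidates for indirect
# leverage that a flat stakeholder list would not reveal.
for actor, score in centrality.most_common():
    print(actor, score)
```

Real network analyses use richer measures (betweenness, brokerage), but even this crude count shows the difference from a flat stakeholder list: the structure of interactions, not just the roster of actors, carries the information.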

Across the four pilots, the research arrived at some findings that will ring true with many aid workers:

Lots of DFID staff are already working in ways that fit with a complexity approach, but ‘Examples of flexible and adaptable approaches in DFID were seen to happen despite corporate processes, rather than because of them.’

DFID staff want tools, mentors and permission to experiment.

They also want some way, at corporate level, for senior management to classify the nature of the problem – simple systems where traditional linear approaches are probably OK, or complex systems where they are not. ‘At present, [complex] problems are officially recognised as such only after several failed attempts to tame them.’

The researchers concluded that the benefits of a complexity toolkit include:

‘• ‘Getting inside the black box’ of the problems covered;

• Developing a sharper understanding of how wider contexts shape and influence a given problem;

• Providing more sophisticated analyses of the potential causal pathways through which a change process might unfold;

• Bringing multiple perspectives together to broker a common understanding;

• Supporting the development of strategies to cope with inherent complexity, thereby giving a more systematic way of working towards ‘best fit’;

• Providing an analytical platform for experimentation and learning and supporting a more adaptive management approach with appropriate evidence-based tools.’

While I thought the paper was excellent overall, I had to read it a couple of times to write this post – the language is often quite abstract, and I hankered for much more specific ‘so whats’: for example, what would you do differently compared to a traditional trade project?

My general takeaway is that the project focused on the analysis stage, helping us understand the nature and dynamics of complex systems. Even this is a big step forward from generating a log frame and blindly following it regardless of reality. But whether there are specific toolkits for what you then do, beyond ‘try lots of stuff, learn, adapt and iterate’, was less clear to me. Perhaps this should be the focus of follow-up work?

One other issue: this kind of approach seems to work best for the big guys (and gals) – organizations like DFID that are trying to influence an entire system, like a large chunk of Nigeria’s trade. For small players like Oxfam, there would have to be more focus on how other organizations are intervening (absent from this paper), and on which small, simple interventions are compatible with complex systems (see the ‘responding’ bit of my recent post on complexity and small island states).

Anyway, over to you. ODI, which is publishing the final version as a Working Paper, and DFID, which sponsored the work, are interested to see if posting the draft on this blog produces useful comments and feedback – please don’t shame me up/let me down!



55 Responses to “Can complex systems thinking provide useful tools for aid workers? Draft paper on some DFID pilot experiments, for your comments”
  1. Doug Reeler

    Very enlightening and curiously reassuring for practitioners who, perhaps more intuitively, navigate change in a more adaptive way, meeting the diversity of complexity through inclusiveness and keeping good focus on where the converging and diverging energies or wills are and working with these. Asking ourselves continually “Where is the real work here?” and “Who is participating in whose process?” I suspect we will continue to have to creatively reinvent (and discard) the most important tools but base ourselves on a continually improving understanding of the principles and values which best meet the complexities of human interaction and on good questions as the best tools to steer us forward.

    • Craig Russon

      Dear Writers

      I found interesting your assertion that “This ‘accountability revolution’ has made a useful contribution to how we think about and deliver aid . . .”

      One of my favourite quotes about accountability is by Lee Cronbach, who said

      ‘A demand for accountability is a sign of pathology in the social system,’ March (1972) has commented. Such a demand, each time it has occurred during the past century, has been a sign of discontent: those in charge of services are believed to be inefficient, insufficiently honest, or not self-critical. The demand may be colored by an idealist’s distrust of elected officials and the party system, by a technocrat’s distrust of ordinary human judgement, or by a businessman’s distrust of governmental efficiency.

      This quote leads us to conclude that calls for greater accountability often lead to a culture of distrust.

      Such distrust is inimical to the collaboration and cooperation that you later state are necessary to deal with wicked problems.



  2. Søren

    Thanks for sharing, Ben, Duncan and top dogs. Will read with interest.

    Two initial thoughts – that I hope other people too may want to consider while reading.

    – Seems to me that sorting systems on a continuum between stable and chaotic systems is more useful than the simple/complex classification.

    – Toolkit vs skill-set. The latter is paramount, so, are we producing manuals in the right order here?


  3. aldo matteucci

    In cosmology, all the issues worth addressing are articulated, according to string theory, in 11D. Alas, we can only observe 4D – so we “compact” the string theories down to 4. Sometimes it works (Higgs Boson), sometimes it does not.

    In order to compare “wants” we have compacted them into “money” – we have made them fungible (to do so, we have replaced quality by quantity). For many issues it is a good tool-kit. Amartya SEN has shown that in economic development money is often a poor guide.

    Why ask the question of principle, rather than simply going forward as best we can?

  4. Ben Ramalingam

    Hi Doug, Soren and Aldo, thanks for these early comments.

    Doug – glad it is reassuring and illuminating. I very much like the idea of creatively reinventing the methods – after all, every tool becomes a prison eventually. It reminds me of Robert Chambers’ adage that our approaches should be ‘permanently provisional’. But do you think it is okay to say this ‘creative reinvention’ should happen within specific domains of interest e.g. understanding systems, behaviours, relationships, etc? Or is this too limiting?

    Soren – on the continuum, yes, agreed: many different and more detailed classifications are available, as we note in footnote 2 of the paper. The one we chose was the one we felt worked best for this paper in terms of the argument and the research we undertook. That’s not to say others are not useful, or that we wouldn’t use them in other contexts. What’s your view on why the stable-chaotic is more useful?

    Aldo – I am intrigued by your point, could you say a bit more about what you think the implications are for this post and the paper?

  5. Cornelius Chipoma

    Duncan and the complexity team,

    I am not sure when we discovered complexity. For me, what is ‘wicked’ and wondrous is not the reality of a complex world but the inflexible nature of mind-sets in traditional development approaches. The preoccupation with force-fitting (one size fits all) development makes me want to drill down into aid’s DNA to find out when certain ideas were set in stone. Doing things differently (addressing complexity) is a state of mind that tools, while helpful, cannot completely materialize. The tools are guidelines rather than prescriptions. Reminds me of a conversation I had with a teacher who confidently spoke about how she did not do lesson plans because she had taught for over eight years. I said to her: but surely your students have changed each year. She was adamant, arguing that in the general gist of things, she could deliver her lessons. Never mind the particular child, just focus on the general group. With huge class sizes (60 plus) I could sympathize with her strategy and thought that she could make a fine development worker – efficient but not effective!

    Ultimately, as Søren notes, everything happens along a continuum (our lives, careers, understanding, etc.). The challenge is knowing where to start from. Looking at the list of references, I would suggest that the team reviews the work Ostrom et al. (2002) did for SIDA (Aid, Incentives, and Sustainability An Institutional Analysis of Development Cooperation MAIN REPORT Elinor Ostrom, team leader, Clark Gibson, Sujai Shivakumar, Krister Andersson). They may not have talked ‘complexity’ but were certainly addressing similar issues for another agency trying to do development well.

    Finally, another ‘wicked’ issue for me that will harm accountability and our ability to respond to complex situations is the exalted attention to rigorous evaluation that has consigned the project learning experience to outsiders – the independent experts – who unfortunately only appear ex post. Perhaps this is the key pitfall of the primary objective of learning from impact evaluations. The preoccupation with serving information needs to fix things not only alienates practitioners but also diminishes the importance of client learning. In other words, I have wondered how the people we serve, the poor people, the provincial, district, and school administrators (indeed teachers), might learn from technical interventions, let alone impact evaluations, so that they might generate their own local solutions.

  6. Adam Kessler

    Great blog, thanks a lot. As you say, a lot of current thinking focuses on the analysis and design phase. After this, I think it’s the programme implementers who really need to be encouraged to learn, adapt, and iterate. Static logframes and meaningless targets (“10 trade facilitation reforms by the end of 2015”) can be worse than useless – but it’s not clear what could replace them.

    The key is for programme managers to integrate monitoring into their management system: to collect small amounts of data, test ideas, and iterate their programmes. So the incentives need to be focused not on what they do or whether it succeeds or fails, but on the quality of their processes. In particular, I think DFID needs to set incentives for implementers to have a credible and honest monitoring system, and to use this information to inform programme management.

    The DCED has a Standard on this for private sector development, and field offices of DFID (including I believe in DRC) have been increasingly adopting it. We’d be very interested to hear from other programmes using similar approaches in other complex environments. More info:

  7. Philipp Grunewald


    thanks for this.

    I am in the oxymoron camp. As an academic I am comfortable saying that.

    If I were a consultant then I would probably say that “the right complexity toolkit ‘can help navigate a middle ground in the face of complex problems'”.

    However, in light of the status-quo any sort of consideration of (complex adaptive) systems thinking will lead to an improvement.

    Best wishes,

  8. Mark Wentling

    I am sorry. As much as I try, I cannot see how the complex content of this academic paper can possibly help us here in the field in Burkina Faso. I guess we who are struggling to help very impoverished people gain better access to the basic essentials are not the targeted audience of this paper. We need more simplicity, rather than the opposite. I agree that providing development assistance is a complex business, but to gain a wide readership for the nature of this business, things need to be kept as simple as possible.

    After over 40 years as a development practitioner in Africa, I increasingly believe that the best way to communicate widely the complexity of what we do is through a good fictional story that covers the myriad of factors that impinge on what we do. Think about it. “Communicate development via fiction.”

    Mark Wentling, author of “Africa’s Embrace,” the first book in his African Trilogy

  9. chris mallmann

    Dear All
    thanks for this blog, interesting and timely. After many years struggling in the field – with complexities and inertia in development organizations – I’ve now spent some time reading up and trying to understand the issue properly. Fat chance. I wrote an article as an organization thinker in the Jan 2009 issue of Zeitschrift für Organisationsentwicklung and still keep thinking. But before going any deeper: as long as we do not communicate properly – e.g. really ask the client what his problem is – and act in highly distorted “markets” (distorting markets ever further) where the taxpayer doesn’t want to know and the client never receives a bill, we are all residents of “islands of unaccountability” (Bill Easterly, a worthwhile resource on the adventures of economists in the tropics).
    As for in-flight “developmental evaluation” that encourages learning for practitioners during execution of programs, I recommend Patton’s Developmental Evaluation, a systemic and communication-based approach to real-time evaluation. best regards, chris

  10. chris mallmann

    ….okay, maybe that was a bit too blunt, and some might ask: what does that have to do with complexity? My point is that we are actually adding to the wrong kind of complexities, so to say. Instead of being part of the solution we all too often prove to be part of the problem. Not understanding complexities is part of it; misunderstanding the social systems we act in and our own role therein makes it worse. A small illustration if you allow: to increase the capacities for good governance (and strongly reduce complexities) we would love to create, for example, a court of accounts and/or institute proper property rights legislation. Alas, institutions cannot be imported; for better or worse, they have to be grown locally. We could contribute by delivering case stories to then be adapted by the client systems. Not impossible, but maybe not sexy enough (or not expensive enough). The whole and functioning solution thus cannot be supplied; the necessary knowledge to make it happen resides in the system, but might not be put to use. So the solution will always be co-created. This requires trust between the partners and transparent relations, including power relations and economic interests. The existing modes of delivery are not helpful here, to say the least.

    Our actual and personal interventions in the social systems we intervene in, however, do have results, if not the ones we wanted (or professed to want). Given the negative price for the advisory service on offer that we established and the competition that the diverse organizations put themselves in, we actually might serve to augment the complexities (e.g.of creating corruption in the public sector) of the country in question.

    So could it be the case that we do not help to reduce the real world complexities of intricate developmental issues while adding yet another layer? To be sure, on some level we do have control over the results (outputs like schoolbooks produced with our resources), but not on more intricate and multi-factor outcomes (better educated children). So why do we propose to solve something clearly out of our reach?

    Social systems do not lend themselves to outside instruction, we know this much. So we simply cannot know the outcome, and all the sorry logframing is just an excuse to help us through the day. We and others do intervene with some success, undoubtedly. How to make the relevant distinction then, between good and bad interventions? Chris Argyris (Intervention Theory, 1970) gives a rather simple definition: add relevant knowledge, thus increase the options for the system in question, and add to the internal commitment of the client system. I guess if we started to select our interventions through this simple logic and gave the client voice, we would go a long way in handling complexities better without increasing them unduly (although we would probably not be able to disburse the billions we are meant to spend in time).

    PS The man I recommended in my contribution above was Michael Quinn Patton, Developmental Evaluation, Guilford Press 2011 – a valuable counter to all the data series based robust empirical evaluation hype.

    So, that must make me the first contributor to leave a reply for his own reply. Enjoy and please do me the favour to comment? Chris

  11. Tracey Martin

    Thanks Duncan, Ben et al for the opportunity to comment on this paper. It is good to see that DFID is looking at alternatives to the logframe for complex issues (or is ‘wicked’ the new ‘complex’??).
    I have a number of comments:
    I agree that a continuum is better than either/or – but we also need to recognise that a ‘tame’ problem can quickly become wicked because people are involved and people are complex – for example, polio vaccination in Pakistan, bednet distribution without taking into account that cultural practice is to sleep together as a family round the fire, and so on.
    In practice, I think many NGOs already use some of the tools described, or variants of them, to develop projects – and then try to fit what they have learnt into a logframe imposed on them by DFID or another donor. The report says that the fact that DFID is a commissioning agency may limit its use of such tools – but if they allowed and encouraged those they commission to use such tools/practices then they would make a big difference: less money spent on tailoring things to them, support for NGOs and delivery agencies to use effective tools, and encouragement to take on ‘wicked’ issues rather than counting things that are easy to count so that they can be sure of continuing to get money.
    I understand that tools are useful and must be tested – but it is the way and when such tools are used that is crucial. We need to be looking at what enables a development practitioner at whatever level to work in complex systems and use such tools effectively – and make sure these skills are incorporated into training and development.
    It looks as though DFID does recognise this at a senior level – maybe there is some learning from that?
    Finally, one of the reasons the logframe has survived so long is that it is an easy way of representing a project. I see that some staff were concerned that the diagrams developed using the tools were too difficult to share with others – we need to find new ways of presenting approaches to complex issues, or rather we need to help people to read them. (In fact, logframes are quite difficult to read; it is just that we have learnt how to read them.) As Mark says, stories are a good way to represent complexity – not because they are simple but because they can show complexity in a simple way that is accessible to everyone.
    I look forward to the final version of the report – and DFID’s response to it.

  12. Chris Alford

    An interesting post – and really interesting to see DFID start to take some of these ideas up.

    A couple of initial comments based on Duncan’s summary and a quick skim read of the paper:

    – There’s no doubt that having tools can be useful. Ben’s book is full of great examples of useful tools that have been successfully applied to confront complex problems. But for me the biggest message of Aid on the Edge of Chaos was about changing our mentalities rather than the tools we use and I think here is where we’ve got to be careful about developing “complexity tool kits” as it could lead to putting the cart before the horse, so to speak. In my experience I find that, with tool kits at their disposal, many people just apply them blindly and uncritically without analysing first whether they might be appropriate for the context in which they are being applied (something which I can certainly say I’ve been guilty of in the past), which kind of goes against the principles needed to confront complex problems laid out in Ben’s books and ODI’s papers on the topic.

    I agree with Søren that perhaps it is important first of all to focus on skill-sets and also on changing the overall environment in which development practitioners operate in order to give them more freedom to innovate and adapt, rather than just pumping out manuals of “complexity tools”.

    – I think another important step forward is to push forward multi-disciplinary approaches to development projects. Most of the successful case studies in Aid on the Edge of Chaos came about when a group of people from diverse backgrounds (economists, computer scientists, engineers, sociologists, etc.) came together and combined their different perspectives, knowledge and expertise. I think that spurring these kinds of working environments within the aid sector would have a much greater impact than trying to produce universal tool kits that any aid worker can use, regardless of their background. People with different backgrounds can help others they are working with to challenge commonly held assumptions that can be detrimental to dealing with complex problems, and come up with new and innovative ways to progress forward.

    Another potential issue arising from this point is: if we start from the assumption that confronting complex problems requires thinking and skills that transcend any one discipline, how universally applicable can we assume “complexity tools” to be? Should any one person (or group of people from the same discipline) be able to learn how to use and apply complexity tools, or should they require input from various people with different backgrounds and expertise in order to be applied?

  13. Ben Ramalingam

    Cornelius: thanks for the references and suggestions – very useful indeed. I know the Ostrom Sida work – which is distinguished from ours by the depth and length of the study, and also by the fact that they were assessing programmes ex post while we were trying to support programme development ex ante. But still a very useful reminder to pick up on some of those issues. Very good point also about external evaluation, accountability and learning – a key message is that we need much more emphasis on operational research and learning done by and for programme managers.

    Adam: we are very keen to move on to work on the programming side of things – but our resources limited us to relatively light-touch pilots in the design stage. Next phase, perhaps! Thanks also for the DCED link – in fact Miguel was working on DCED stuff when he and John set about designing our project.

    Philipp: I guess I am glad we are in ‘any sort of consideration’ category!

    Mark: Very much agree about the power of stories to communicate complexity – but hard to do in this paper format…

    Tracey: very useful and constructive thoughts – thank you. Very useful points about the latitude given to implementors to use new tools, and the need to focus on skills as well as tools.

    Chris: Again, thanks for your very useful & thoughtful comments. I think we were aiming for mindset shifts in this project, but we also needed to be very modest about what we were able to achieve with this initial pilot. Interesting to hear your agreement with Soren about investing in skills first, rather than testing tools and techniques. Does it have to be in this sequence, do you think? Also, do you think skills in ‘how we work’ are supported well? I am thinking of other areas where the development sector has rapidly ‘skilled up’ – it is usually focusing on tools (such as RCTs) or a problem or issue (climate change, gender, etc), rather than on the ‘how’.

  14. Ben Ramalingam

    Sorry for missing Chris M’s post – it has only just popped up in my feed! Chris – I am a big fan of developmental evaluation and have written a guide on its use in humanitarian operations, and also led a major evaluation of donor work in fragile states which used the DE approach. Michael QP explicitly links developmental evaluation to working in complex adaptive contexts, and is also willing to bring quantitative and qualitative approaches together in the pursuit of real-time learning.

    I also very much agree on co-creation of change – and in the programme management pilot where we worked toward this across DFID teams, it was clear that the process of engagement was just as important as the tools we were employing.

    • chris mallmann

      Good morning and thanks for replying! I finally went through the paper quickly and would like to commend the authors on the approach taken and the highly readable result: DFID should be able to learn a lot from this overview. The discussion about complexity and management, however, is much bigger than the system dynamics school. Social systems theory proper, or the complex responsive processes school of Ralph Stacey, would be worthwhile to look at. At the end of this search and review, as I found out for myself, organizational development thinking will show up on the screen to help us understand which interventions in support of institution building are more or less likely to be successful. I believe it is not complexity per se that irks us, but the limits it sets to our own usefulness (as exemplified in the managing-for-results agenda and the search-for-impact discussions since Paris).

      As renovations of agencies’ toolkits go, GIZ of Germany decided some years ago to introduce a management approach called Capacity WORKS, which you might want to have a look at. It is based on systems theory and bundles a lot of known tools around a more complexity-accepting attitude. Even more interesting, I find, on the process and intervention level, is the use of a software-based facilitation tool called EIDOS toolsuite (formerly Thinktools), developed in the 80s by the Max Planck Institute and then marketed by one of the researchers involved. It helps you to bring groups of stakeholders together and then go through a facilitated joint factor analysis (the tool helps to identify the key drivers of the system by computing the values that the group identifies and introduces into the systemic landscape), goal-oriented option development, and evaluation to handle the identified challenges etc. The amazing thing is how complexity is first created and grows significantly before our eyes as different perspectives and valuations are invited and discussed, and then gets cut back as the group jointly decides what can realistically be done to solve the issues. Real-time adaptation, and growing ownership and buy-in on the side of the partners. The high-end tool requires licensing (European School of Governance, a small consulting arm of the Parmenides Foundation despite its grand name), but the approach can actually be implemented paper-based (a German consultancy called Denkmodell had once developed an analogue approach of 12 steps to handle complexity called SYMPHONY, if I remember correctly). It goes back to a considerable extent to the input-output factor matrix invented by the late Munich-based Prof. Frederic Vester, whom some of you might have heard of.

      regards, Chris

  15. Chris Alford

    Interesting point Ben about how promoting the use of new tools and methodologies can be an effective way to promote new skill sets, instead of the other way round. Brings us back to the argument that it’s easier to act your way into a new way of thinking rather than think you way into it…

    Perhaps the way forward with complexity then would be to leave a certain degree of vagueness / openness / flexibility in the tools that are developed, in order to compel people to think creatively and adapt them to their context when using them? This could help to avoid complexity tools becoming silver bullets or tick boxes (i.e. defeating the purpose of developing them in the first place!) and could help to motivate the types of thinking needed to confront complex problems.

  16. Chris Alford

    Also, whilst we’re on the subject of skill sets for complexity – do people know that the Santa Fe Institute offers free online courses on complexity? They have an introductory course starting in Fall 2014 that looks like it could be of use to aid workers wanting to explore the topic in greater depth.

    Here’s the link:

  17. Bob Williams

    Regarding the Santa Fé Institute stuff. A colleague of mine who knows a thing or two about CAS and its application put himself through the course and was very impressed. One thing of course to remember is that CAS is essentially a mathematical modeling process, whereas we use terms like “complexity” in more metaphorical terms.

  18. masood

    Even while working in highly complex areas like the FATA borderlands of Pakistan, one finds that there are parts of the programme that have to deal with complexity (government institutions, space for civil society, sectarian issues, insurgency, poverty and employment) and call for a flexible and adaptive approach. But within the complex environment that one has to grapple with, there will be parts of the programme which deal with a complicated or even a simple environment and are amenable to a more linear approach (like implementing a community infrastructure programme in a community once one has made a base there). Interesting examples of this from Pakistan can be found in a paper at this site:

  19. Nicholas Colloff

    The challenge of a complexity tool kit is that it will be understood and used at precisely the level of felt comfort that its particular users have with complexity. Unless you pay attention to this dimension – people’s attitudinal disposition to complexity – and build teams reflective and able to manage those differences (that cannot be ‘educated or trained’ out of people – except at the margins) all and every toolkit will be of fractional use.

  20. Søren

    Hi Ben
    I’ll write you once I have had a chance to actually read the paper. I feel a bit silly for commenting prematurely, but, in short answer to your question:

    My remark about the stable/chaotic continuum is really two different things – although they are closely related.
    One reason I find the continuum important is that I don’t think that any society, community, or whatever the object of intervention, will in fact be a simple system. I do, however, acknowledge the relevance (and justness) of sorting between more and less complex systems. I think a better way to address it, though, is to ask where we can, for analytical and practical purposes, justifiably establish some kind of closure to the system. That will naturally be a continuum, not a dichotomy.

    My strong preference for the stable / non-stable (chaotic) categorisation comes from (Prigogine actually but ..) the observation that any system, structure, institution etc. doesn’t exist but is continuously reproduced. That matters quite a lot when we are concerned with changing things. I’m not particularly happy with the analogy but it’s the best thing I can come up with on the spot. Think of the difference between a small stream and a footpath. Both would be considered ‘simple’, but changing their courses are two quite different operations. You can dig up a footpath section by section and place it somewhere else. That you cannot do with the stream. A lot of contemporary thinking in the social sciences has finally got that much right but seems to assume that you can shut the water off while you redirect the stream, rearrange the context or renegotiate the institutional rules of the game. Not so. The water will flow all along.

  21. Cornelius

    Ben: I thought the Ostrom et al. piece is useful because of their focus on incentives. What will motivate aid practitioners/agencies to embrace complexity especially because it requires working even harder? They also tackle issues related to information and power asymmetries that plague development agencies, recipient governments and aid beneficiaries, which I believe are relevant for complexity thinking.
    Finally, I want to add a little more on the perils of the association of rigorous evaluation with upward accountability. We need to think more dynamically so that even the people we serve (not just the paymasters) can hold us accountable. A good example of Tracey Martin’s ‘tame’ problem turned (truly) wicked is the following story of arsenic poisoning in Bangladesh from the archives.
    Costly error
    Bangladeshis have not always relied on wells for their water needs. In the 1970s and 1980s, the United Nations Children’s Fund (UNICEF) and other development agencies funded the Bangladeshi government to dig shallow tube wells across the country.
    They did so in order to stamp out the incidence of children dying from water-borne diseases found in ponds, which were traditionally used as a source of drinking water.
    The solution seemed simple enough at the time; sink tube wells in every village to provide the population with “clean” drinking water.
    But a grave error was made. Despite it being standard practice across the world, no one bothered to check the ground water for arsenic – and Bangladesh has the highest levels of naturally occurring arsenic in ground water in the world.
    Not only did those leading this project forget to test the ground water, they did not heed warnings from independent scientists and researchers.
    When arsenic poisoning was discovered in the adjoining Indian state of West Bengal in the 1980s, it took years for international aid agencies and the Bangladeshi government to begin tackling the problem.
    They only officially recognized the extent of the arsenic poisoning problem in Bangladesh in 1993. Carel de Rooy, the UNICEF country representative, calls this “a missed opportunity”.
    Read more:

  22. twecky

    A fascinating topic. Coming from a military rather than a development background, the problems of making sense of complex systems and how (and why) change occurs seem very similar, particularly in recent “messy” conflicts. One issue I couldn’t see much detail on (I may have missed it) is the challenge in system diagramming of representing lag or delayed change, especially when it forces inappropriate attention on quicker system changes which are not actually relevant.

  23. MJ

    Great to see this happening. Alas I do not have time to read the paper, but based on the above and a skim of the comments, here are my two cents’ worth:

    Nothing wrong with having a toolkit in my opinion, as long as the contents are all optional. If people blindly misuse the tools, become imprisoned by them, or mandate their use by others, then sack ’em or pack them off for re-training! Maybe a bit harsh, but you get my point … a bad workman blames his tools; a good workman makes do and/or switches to another one.

    A good project design is important, because anything too badly flawed will doom almost everything else that follows. So any kind of approach (even a toolkit!) that can help project designers to think better through wicked problems is to be welcomed. But at the end of the design phase we are only about 20% there.

    My prescription for the implementation phase is as follows:
    – Adaptive management
    – Adaptive management
    – Adaptive management
    (Which puts me in strong agreement with Duncan.) The key is agreeing a framework between the donor(s) and implementing agency(ies) to facilitate this without onerous formal proposal/budget amendment processes.

  24. Carl Jackson

    Pleased to know that this initiative took place and to have the opportunity to read reflections on it in the paper and threads above.

    The impression that I got from reading the paper was that the tools piloted helped partly because they created more time for a diversity of relevant perspectives to be heard and socialized. For organisations like DFID, with low staff-to-spend ratios, time is a scarce resource. Not all actors are time poor, though. So an adjunct to these pilots might be thinking about the level in the aid system at which there is likely to be sufficient staff resource to use these tools. My guess is that it’s at the sub-contracted national implementing organisation level.

    This also implies that an administrative tool that is compatible with these tools would be required for an organisation like DFID to be able to channel funding to these levels before design work is undertaken. Is that something like a design / preparation grant to the implementing organisation that may or may not lead to further funding (perhaps on a payment by results basis)? The lack of these design grants at the right organisational level at present tends to exacerbate the weaknesses of the logical framework approach, and would probably do the same for complexity tools.

  25. Ben Ramalingam

    Chris M – thanks for the pointers on complex responsive processes and the systems thinking tools.
    Chris A, Bob – great, useful reflections on tools and how to present them – I think we are agreed that tools may be necessary but are not sufficient. On SFI complexity courses – yes, I think everyone should sign up!
    Masood – very useful reminder that not everything is complex – resonates with Soren’s point about continuum.
    Nicholas – interesting reflection – so if training won’t work and tools won’t work, and we need to build new and better teams, what to do with existing staff?
    Cornelius – thanks for the clarification on the Ostrom work – I agree we are in the same general territory, but we were not able to do anything like as detailed a look at incentives, at least not directly. It certainly needs to be considered in follow-up work.
    Twecky – the issues of thresholds and lags do come up in the more detailed paper on the trade work – I can send that to you separately if you are interested.
    MJ – agreed 100% – and we also need some basis for adaptive management, which I think methods such as system dynamics, network analysis, simulations, etc do usefully provide. Or do you think otherwise?
    Carl – typically thoughtful and thought-provoking – thanks! Also very useful for our thinking on how to get these approaches applied at different points in the aid chain.

  26. Rick Davies

    Re section “5.2 Pilot 2: Nigeria Girls Empowerment – girls empowerment and network analysis” of Ben’s paper

    Readers who are interested to know more about the use of social network visualisation and analysis methods in relation to social change communications may find this more detailed report useful:

    The Use of Social Network Analysis Tools in the Evaluation of Social Change Communications By Dr Rick Davies, April 2009. 23 pages. Produced for the Communication for Social Change Consortium.

    Could this be included in the list of references on page 44?

  27. Nicholas Colloff

    Ben, the key point is to be able to assess teams for their current capability to manage complexity and to design how they work together in ways that are conscious of how people approach their work. This can be genuinely liberating because people understand the lens through which they seek to manage/navigate their work, identify the place of their best contribution, and work out of being in flow. It enables people to see how they might use tools, and which are the best combinations of people to exploit the tools in any given context (depending on the complexity of the challenges faced – which can also be mapped). Elliott Jaques was doing this in the 60s (without the benefits of complexity science) and the processes for mapping teams are readily available. see

  28. T

    Hey Ben,
    I read your post in KM4Dev, and I also read your draft version of the first chapters of “Aid on the Edge of Chaos”, which I found quite interesting. The paper here I have only scanned very quickly.

    Also in the Aid book I was wondering why all this discussion and these frameworks about complexity. If you are working near to the levels which receive aid, and farther from bureaucratic systems, it’s a simple line from donors, via organizations, to a territory or group of people where aid “ends” in something (maybe only in a one-day visit of a consultant). In the territory you can see in a simple way what works and what does not. It’s not that complex.
    It’s complex for people far away, that’s true.

    Somebody wrote in a post “We need more simplicity”. That’s the sentence I totally agree with.

    So my question: is this complexity discussion maybe made by the bureaucratic system itself, trying to focus on macro-level, too-big questions?
    ALWAYS when I ask local people who observe nearby politics, nearby corruption, nearby infrastructure projects, they have a very clear understanding of what is going on. It’s not that complex.

    We need more simplicity, and bureaucratic systems which help and don’t generate work for themselves.

  29. chris mallmann

    Hi T – I do agree, as most of us concerned will to a certain extent: more simplicity would be nice, and yes, this kind of thinking might seem more relevant to the planners and bureaucrats residing far above the fray and not dirtying their knees in the struggle….
    But at the same time, simple and seemingly straightforward proposals often prove to be a bit too short-sighted, if easier to implement. Just think of the dilemma that the humanitarian aid industry is constantly facing: doctors healing child soldiers repeatedly over time so that they can be sent back into the fighting; food aid agencies paying warlords only to be able to operate in the warzone, etc. One would like to think there is nothing more straightforward and efficiently humane than MSF or EU food aid, healing and feeding the poor. Open the systemic view a bit wider and more of the implications show, like unduly taking sides in violent conflicts or destroying local markets by flooding them with subsidized European surplus.
    So it might seem faintly superfluous to try to cope with complexities, but to be halfway able to defend our impact chain logic, we have to think harder, I am afraid. Yes, there are intervention levels that are less complex – though still complicated enough – and we could well be frank and only promise (and get paid for!) what we are sure we can deliver. But what can we really be sure of…..
    There are some of us who, after a lifetime of struggling with these double bind situations, finally concede defeat and stop trying. I would still like to answer with Billy Bragg: when asked if he really believed that writing songs against idiocy and fascism would stop them, he answered “f’course its not gonna stoppit, but it doesnt stop you trying, doesit”

  30. T Muller

    Interesting read. Makes me think about the Listening Project and its findings, and I wonder if perhaps the answer to tools for complexity isn’t to be found in development that is driven by the voice of the people at the start and the end of the process. How does the analysis of the current paper fit with the findings of the Listening Project? Slower, more inclusive aid might be the tool for managing complexity. Is that too simple? If it isn’t, what does it mean for how we represent complexity? Because if we represent complexity only for some, what is the benefit, and whose knowledge and ways of understanding and determining what is development are we empowering through complexity?

  31. rick davies

    Some more comments on section “5.2 Pilot 2: Nigeria Girls Empowerment – girls empowerment and network analysis” of Ben’s paper.

    How do you interpret network diagrams such as Figure 6 on page 26? This is a question that many readers may be asking. We can see the network diagram as a first draft of an actor-oriented theory of change, with the causal pathways being the linkages between the actors, connecting the Girl Hub project, via various intermediaries, with girls in northern Nigeria.

    A default interpretation of the diagram might be that influencing is likely to be most successful when all available pathways are used, especially when, as Ben notes, “all actors had some degree of influence on at least one of the hoped-for outcomes for girls”. The cheapest way to do this would be via a mass media communication campaign. However, there might be more specific pathways whose use might be more effective, and others whose use would be ineffective or even counterproductive. How can they be identified, and the network model refined to have an incrementally better fit with reality?
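To make the “specific pathways” idea concrete, here is a minimal sketch of enumerating influence pathways in a directed actor network. The actors and links below are hypothetical, not taken from the actual Girl Hub model in Figure 6.

```python
# A hypothetical actor network: edges point from influencer to influenced
project_graph = {
    "Girl Hub": ["radio drama", "religious leaders", "teachers"],
    "radio drama": ["mothers", "girls"],
    "religious leaders": ["fathers"],
    "teachers": ["girls"],
    "mothers": ["girls"],
    "fathers": ["girls"],
}

def all_paths(graph, start, goal, path=None):
    """Depth-first enumeration of every simple path from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # avoid revisiting actors (no cycles)
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

for p in all_paths(project_graph, "Girl Hub", "girls"):
    print(" -> ".join(p))
```

Each printed path is a candidate influence route; the refinement Rick describes amounts to scoring these routes against data rather than treating them all as equally effective.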

    Girl Hub has already carried out a baseline social survey of girls in northern Nigeria and others in their immediate circle, asking about their views on girls’ lives, expectations and rights. Some of the findings of this survey have probably already informed the structure of the network diagram that Girl Hub staff helped construct. But my impression was that, like many such surveys, the analysis that has been made represents only the tip of the iceberg of what could be done with the data. Two data mining methods could usefully be applied to the data: (a) clustering algorithms, to find clusters of girls with similar characteristics which don’t necessarily fit the predefined categories used in the survey instrument, and (b) association rule finding algorithms, which could identify which immediate actors, and combinations thereof, were perceived as most influential for a given category or cluster of girls. The results of these analyses would help clarify which influence pathways could be used for which purposes, and the design of more customised media messages for people along those pathways.

    Although I have indicated the possibility of this use of survey data to refine the network model, Girl Hub have not to my knowledge taken any steps to do further analysis of the survey data. At best, this is a lost opportunity. If the surveys were undertaken with DFID rather than Nike Foundation funds, it represents an inefficient use of public funds that is quite avoidable. The costs of further analyses of the survey data need not be significant. These days it is increasingly common to see data sets being made publicly available so that other parties can analyse them at their own expense, on condition that results are placed back in the public domain. Some online services organise data analysis competitions where the data provider poses a problem that needs to be solved by analyses of the existing data (see Perhaps in future DFID might make funding of surveys conditional upon the data being made available in the public domain. Or is this already the case, but not necessarily known to everyone?
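As a rough illustration of option (b) above, association rule finding at its simplest reduces to counting support and confidence for “influencer → outcome” pairs. The survey records and category names below are entirely invented; real tools (e.g. Apriori implementations) generalise this to combinations of influencers.

```python
from collections import Counter

# Each record: the influencers a girl named, plus whether the outcome held
records = [
    {"mother", "teacher", "stays_in_school"},
    {"mother", "radio", "stays_in_school"},
    {"teacher", "stays_in_school"},
    {"radio"},
    {"mother", "teacher", "stays_in_school"},
]

outcome = "stays_in_school"
support = Counter()        # how often each influencer appears at all
with_outcome = Counter()   # how often it co-occurs with the outcome

for rec in records:
    for item in rec - {outcome}:
        support[item] += 1
        if outcome in rec:
            with_outcome[item] += 1

for item in sorted(support):
    conf = with_outcome[item] / support[item]
    print(f"{item} -> {outcome}: support={support[item]}/{len(records)}, confidence={conf:.2f}")
```

High-confidence rules suggest pathways worth prioritising; low-confidence ones flag pathways that may be ineffective for that cluster of girls.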

    How does this all relate to complexity? Theories of change of what will happen are at best rough approximations that ideally are progressively refined through comparisons with new data, as it becomes available. Making sense of large data sets is no easy task, and hypothesis-led analysis is not enough. We also need wider search and pattern finding tools where we are less constrained by our existing mental models. That is why my more recent methodological interest has been in the potential of data mining methods for ex-post sense making.

  32. Pete Vowles

    Some great comments and thoughts within this blog and set of comments.

    I really like the theme weaving through here that the key is people not process. I’d agree that dealing with complexity requires a different kind of mindset – one where we know what is going on (feedback loops etc) and are empowered to iterate, adapt and respond to changing contexts. So simple, but amazing how DFID (and frankly many of our partners and their HQs) feel constrained by a log frame or particular direction. So we need to make a concerted effort to integrate adaptability at the start, but more importantly to develop the confidence to shift things around to do what will deliver better outcomes for poor people. A very crude illustration of the problem: when Goma was overtaken by M23 in November 2012, we really struggled to get a partner that we funded to down tools and use our programme funds to focus on responding to the humanitarian crisis. We all agreed in the end, but if donors and partners can’t adapt to even the most crude changes in context, wider complexity-based programming is a way off…

    The way we think about the log frame is one key challenge. Too often we see it as a description of an entire project rather than a dynamic monitoring tool. If we are going to use it properly, shouldn’t it connect to the theory of change and focus on things that actually matter (rather than being a way of showing how much we have achieved)? Is there a better monitoring tool DFID and others should be using?

    We are working hard to simplify processes in DFID and create more space (and permission) for flexible and adaptive programmes (based on the one pilot Duncan didn’t read!). We will need close collaboration between DFID and our partners to take this opportunity, build case law and demonstrate the practical implications on delivery. Challenge and ideas welcome. Drop me a line (


    • Doug Reeler

      Hi Pete, dropping you a line… a few comments on your post… you say “If we are going to use it (the logframe) properly shouldn’t it connect to the theory of change”. There are many theories of change, and the logframe is not simply a tool – it is embedded with a simple cause-and-effect theory of change that applies to conditions that are relatively stable, predictable and now-and-again adjustable. Donors love logframes because they give the promise (or illusion) of control and accountability. The problem is that most conditions of social change where development aid works are either emergent or crisis-ridden – in our organisation we talk about 3 key conditions of change – emergent, transformative and projectable (see – logframes are like a hammer, useful only for nails – but most situations require far more nuanced approaches, both learning and unlearning our way forward (not just tweaking logframes) – bottom-up, top-down, sideways, inside-out, in-the-middle. And so one of the biggest constraints we face is dealing with donor anxieties for accountability. This needs to be an inclusive conversation that ensures that self-honesty and learning are never sacrificed on the altar of (external) accountability

      But a much bigger conversation revolves around the enormously destructive separation of M&E from practice itself – like having M&E officers, and the whole evaluation industry that robs everyone else of learning together. Unless we build permanent evaluation into the work of every practitioner, from community leader, field-worker to donor, we shall continue to get tangled in M&E systems that undermine our capacities to deal with complexity.

  33. Eric Momanyi

    I am glad this issue is finally getting attention. I happen to have a dual background as a system dynamics modeller and program manager, and as the paper rightly points out, the thinking within the aid community has been conditioned by the linear assumptions underlying the LFA approach. Continued use of the LFA for wicked problems is due both to the lack of a better toolkit and to mindsets that are not prepared to struggle with complexity. Having used system dynamics and systems thinking tools in program design (as an experiment), I can confirm that they provide fresh new perspectives – the challenge, however, is that the LFA is the ‘gold standard’ for presenting project designs. I have had to conduct the dynamic analysis, then translate it into the LFA format for presentation.

    How good any system analysis will be will depend on what is known about the wicked problem (from observation, experience and research). Nevertheless, the insights gained from this kind of analysis are far greater than what you can get from the log frame approach.

    A number of results that I am happy to share regarding my ‘experiment’ include:

    1) Consensus building on potentially diverse opinions: The analytical process is iterative, and with good facilitation gives people a chance to weigh fairly the relative merit of each opinion. I previously had sessions with different government ministries together, and the realisation of how their problems are linked to each other was startling.

    2) Simplify the complexity: Complexity in wicked problems could be due either to detail complexity (too many variables interacting) or dynamic complexity (a few variables interacting, changing as a result of this interaction, thereby changing the nature of the interaction in subsequent periods). In my experience, simplifying the analysis by identifying key variables and their influence on observed behaviour over time gives some surprisingly non-obvious insights.

    3) Theory of Change formulation: These tools are excellent for deepening an understanding of the problems, selecting the most appropriate interventions and enabling users to define a truly credible theory of how their proposed solution(s) will transform the problem – in the process making assumptions explicit within the analysis.

    4) Integration of program components: For large programs – some that cut across sectors (like the Mwea Malaria control program cited elsewhere in this blog – for which I was involved in some way) – the system tools enabled visualisation of the relative role of each program action in achieving the overall objective. This has been very useful in getting staff handling project components under a large programme to have a shared understanding of their relative expected contribution to the overall programme goal.

    5) Advocacy: This analysis provides a powerful arsenal for advocacy. Linkages of issues at hand to issues of political/social interest help to formulate strong messages and make convincing logical advocacy presentations on issues of interest.

    6) Capacity for system analysis: Using system dynamics/systems modelling is possible but will need training to get users to appreciate the tools (Causal Loop diagrams, Stock and flow diagrams) and key concepts (feedback, non-linearities & delays) – this is a paradigm shift for most people, but commitment to learn and practice gets people making progress. However, having a knowledgeable facilitator helps to begin with.
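Eric's point about dynamic complexity and stock-and-flow tools can be shown with a minimal simulation: a single stock with a reinforcing entry loop and a balancing saturation loop, echoing the trade example in the post where profitability attracts new entrants. All parameters below are illustrative, not drawn from any actual programme model.

```python
def simulate(steps=40, dt=1.0):
    """Euler-integrate one stock governed by a reinforcing and a balancing loop."""
    traders = 10.0     # the stock: traders active in the market
    capacity = 200.0   # the market only supports so many (balancing loop)
    entry_rate = 0.15  # profitability attracts new entrants (reinforcing loop)
    history = []
    for _ in range(steps):
        inflow = entry_rate * traders * (1 - traders / capacity)
        traders += inflow * dt
        history.append(traders)
    return history

h = simulate()
# growth looks roughly exponential at first, then the balancing loop flattens it
print(f"step 10: {h[9]:.0f} traders, step 40: {h[39]:.0f} traders (capacity 200)")
```

A linear projection fitted to the early steps would badly overshoot the later ones, which is exactly the point about feedback defeating linear analyses.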

    It’s a coincidence that I have designed a 2-day training workshop on the pilot use of systems thinking tools for problem analysis, on 20th-21st August 2014 in Nairobi. I will be happy to share the brochure with complete information by email (

  34. Mischa Foxell

    Thanks for the post, and the link to the paper (very interesting – although it would be nice to have some ‘key messages’ bullet points on the first page for those who aren’t going to trawl through all 46 pages…). I manage a complex/wicked programme at DFID that works on conflict prevention in the DRC. The big challenge for us is less around learning to use complexity theory in design/problem analysis (we always understood the problem was knotty!), but in using complexity theory to inform how we monitor and adapt the programme through its lifetime. As we move forwards into the implementation stage, what are the best tools for us to monitor programme performance (and learn from lessons and adapt the programme in real time) when tackling wicked problems in a complex environment?

    • Jim Tanburn

      Mischa, Thanks for this post. As several have suggested, there is a danger in making this too complicated and theoretical.

      In practice, every intervention is based on an operating hypothesis or logic – which invariably involves some assumptions and simplifications (otherwise we would never act!). The key, as you suggest, is to monitor progress in real time, to see if things are unfolding in the way you thought they would.

      That means articulating the logic clearly in advance – which people initially find quite hard to do (it is in their heads but not on paper). Another challenge is to develop a programme culture of honest enquiry, rather than a drive to ‘hit the numbers’. Using monitoring data to improve the intervention design in real time is difficult; people get attached to their model.

      Adam Kessler (above) referenced one framework within which many M4P programmes do this (the DCED Standard) but perhaps there are others too. Putting this sort of thing into practice is much more important (IMHO) than developing more theories. There are capacity issues, a shortage of the right skills and a tendency to go for box-ticking rather than the aforementioned honest enquiry.

      But essentially it should be fairly straightforward to do much more in the field to accommodate complexity.

  35. Ben Taylor

    Thanks All. I think the biggest plus to come out of the paper is the clear acceptance of the issues at hand within DfID and the desire to do something about them. I do, however, have concerns.

    It is almost a decade since the M4P approach was codified in the DfID/SDC operational guide. This proposed not only a conception of development as complex and a framework for analysing problems, but also some guidance on what to do about it, which works at both a donor and a small-scale programme level. There were certainly flaws in the document, not least in the terminology itself, with some seeing markets as a dirty word. But as systemic approaches using the tools of M4P continue to grow in both number and scope, this paper, in its early sections, seems to be another example of the emperor’s-new-clothesism which occupies much valuable time and research effort. There is a significant community of academics and practitioners who share a common understanding of the nature of the problem. Perhaps by restating it in different ways we hope to expose new audiences to it, but equally there is a significant risk of creating tribalism, and of dissuading those yet to accept the complex nature of development problems, by creating complexity in the theories of complexity. Let’s move on?
    As Duncan says, this paper does little but recognise a problem. Efforts might be better directed at rigorously testing whether such approaches can work to generate policy buy-in and in doing so, thinking of better ways to operationalise measurement in complex systems.
    Perhaps the most important way to move the debate forward would be to assess what existing development programmes have done to cope with the problems at hand. Each has developed or adapted its own toolkit, some very successfully, and the failure to recognise these achievements and what can be learnt from them consigns us to more idle whimsy. Not least amongst these achievements is the reconceptualisation of the logframe such that outputs represent systemic changes, outcomes represent improved performance, and impacts are indicators of poverty reduction. Incentives are not aligned for such changes to be widely adopted owing to current output-based payment structures, but don’t get me started on that one…
    In sum, despite enjoying the paper overall, I feel better acknowledgement of the history and state of the field would have improved it and allowed for the more fundamental issues of addressing challenges to be tackled.

  36. W.J. Pels

    Hi All,
    My problem with this discourse is the amount of reading work it puts on top of my already large pile. I added CDRA Reeler and SIDA Ostrom (398 pages!!) and hope to skim them one day 🙂
    In respect to the paper. Write-ups are good but learning only happens if that information is being (re)-digested by – in this case – DfID people. They have to figure out what’s a better model of reality for the business (of development) they are in. The CDRA piece describes nicely what happened / happens on the ground.
    But sure, they need to practice (not use because than it will loose a play element | see ‘magic circle’ in PS) with tools, methods, information and get comfortable being uncomfortable not meeting logframe expectations and drowning in information (see also PS).
    Once it was remarked, I think by Bush, that ‘If you are not part of the solution, you are part of the problem’. Complexity teaches me you can only be part of the solution if you are part of the problem. So DfID has to ‘problemize’ itself to deliver – or better, contribute to – a solution it should shape with beneficiaries.
    Cheers, Jaap

    PS Some fun reading to correlate with complexity 🙂

  37. chris mallmann

    Hi W.J, Hi All
    that is an exceptional way to put it: DfID – i.e., by extension, all of us practitioners and bureaucrats, searchers and planners – must become part of the problem to be part of the solution. I like that! Not that many of our superiors would agree, I am afraid, but it opens up a truly systemic discussion about interdependency and circular causation. Maybe we should go that way some time, but not now.
    But I do take W.J.’s post as an invitation for humble acceptance: yes, we are part of the problem, and our realization of that simple fact will allow us to understand our possible participation on the solution side of this equation. As an old graffito on a railway bridge over a notoriously and perennially jammed highway famously put it in the 80s: No, you are not standing in a traffic jam, YOU ARE THE TRAFFIC JAM!

  38. Shawn Cunningham

    When we consider complexity, we have to shift from an ordered logic, where interventions have predictable or causal impact, to an understanding that we have to experiment with what is possible within the specific context. This is being done every day by savvy and determined development practitioners. The challenge is that development organizations have a very strong preference for certain kinds of outcomes, meaning that programmes are designed with a certain logic that may influence what they detect within the context. We need one set of tools at HQ and programme design level, and yet another set for the field. Then we have the further challenge that counterparts are by now also conditioned to how donors think, so they may even express a desire for particular interventions because a) they know they might get support, or b) they genuinely believe that a particular intervention may be desirable.

    In my own experience I have found that when I am contracted directly by the counterpart, they are much more likely to embrace even the more challenging aspects of a complexity sensitive approach.

    More comments to follow as I go through the paper.

    Best wishes,


  39. Søren

    Hi again.
    I don’t know if the ship has sailed, but I thought I should write my short comment after actually reading the paper, now that I said I would : )

    It really isn’t a lot that I can add while respecting the objective. I think there are a number of good suggestions in the report. If I understand you right, the aim is to drive a cognitive shift through complexity-sensitive methods. The portfolio approach seems very appropriate in that context.
    But I can’t help being sceptical of the change process really happening in that order. (Which is not to say the exact opposite is true – the two need to go hand in hand.) My presumption is that the feedback processes (formerly known, in part, as unintended consequences) won’t stand out unless you’re looking for them. Hence my initial call for skill-sets.

    Perhaps it’s a separate report but two things I missed.
    One is related to my preference for stable/chaotic rather than simple/complex, and would be to at least highlight that a lot depends on perspective. An area may rightfully be perceived as ‘simple’ from one perspective but is typically complex when scrutinized on its own – even vaccinations. The Taliban and Western ‘hippies’ come to mind as a very simple example.
    My other comment is whether it wouldn’t be a good idea to work on methods for deciding whether a problem should be approached as before or behind the edge of chaos, if you know what I mean. My own work suggests that it may be a good first step to see if a development challenge is primarily the effect of too few resources or too many. In the former situation it may be beneficial to add to the complexity; in the latter the overall aim is to bring more order and/or separate ‘systems’.

    Hope it makes sense. Best


  40. caitlin scott

    Great to see how much engagement this issue and paper has generated.
    I found this a very timely report, and was particularly interested in the Nigerian girls’ empowerment programme.

    My main comment would be that although interesting, it’s not 100% obvious why complexity offers a new perspective, as opposed to clarifying what processes could (and should) have been applied before. For example, a good situation and stakeholder analysis before the programme could have focused attention on the need to include an analysis of the networks through which the programme would necessarily unfold, as a key aspect of delivery – for, as the report identifies, these are key structural issues.

    The report’s suggestion that what may be needed are more tools, or skill sets, to take up Søren’s point, seems cogent. If what complexity is doing is broadening the remit of angles and domains that need to be covered as we understand the contexts in which programmes do or, more importantly, don’t work, then that has to be welcome. Cultural and social analysis are woefully underrepresented among the tools or approaches that agencies tend to use in programme development, despite the incursions of anthropologists across development over the last twenty years. Perhaps this is because culture is so ubiquitous and also intangible that it is taken as self-evident when in reality it is anything but. As such it can be the elephant in the room: so implicit it’s invisible, and often stubbornly intransigent.

    I and colleagues working on child rights governance are in the process of piloting a complexity-based ‘thinking tool’ which we hope will help with a number of challenges, including shifting political contexts, surfacing complex cultural structures, and encouraging the constant monitoring and re-development of plans. Underlining the point re skill sets vs tools, we are hoping that an approach rooted in helping people think differently will preclude the need to add more tools to the already-burdened shoulders of programme staff, and instead give them a lighter load that empowers them to react to what is going on around them.
    We look forward to sharing lessons and to learning with others working in this field!

  41. Matt Ripley

    Hi there, in Nepal on another DFID-funded market systems programme we’ve been working at putting “adaptive management and complexity-informed theories of change in PSD” into practice…and wrote a short paper to share our experiences of some of the toolkits that helped us do that. See:

    Our take-away (substitute ‘messy’ for ‘wicked’) resonates with the working paper: “Programmes seeking to truly develop market systems have to come to terms with ‘messiness’. They cannot implement a blueprint and must accept there is no perfect plan. Teams implementing interventions using this approach can deal with uncertainties – stemming from what will almost always be an incomplete understanding, at least initially – by planning iteratively and acting incrementally. However, this requires a set of tools and processes to help staff make sense of market systems…

    Yet for all this to be operationalised, a shift in cultural mind-set is required: one that takes seriously the application of structures for gathering, interpreting and reacting to data. Indeed, “the effective use of data – learning by measuring – is at the heart of how we should manage complexity” (a quote from Owen). That is, in market systems development, results can really only be as good as the use of the tools and processes that help teams learn, adjust and improve.”

    As Ben T says, there’s now a fair few programmes that have already been innovating around ways to cope with ‘wicked/messy problems’, so there would definitely be value in recognising these, taking a look at what they’ve done and trying to extract some practical learning about how to operationalise measurement in complex systems.

    We also covered the topic of ‘messy’ systems in a presentation at the recent DCED Seminar:

    (the case study is also linked above)

  42. David Week

    I’ve posted a rather extensive comment on the ODI paper here:

    In sum: The systems theorists who defined “wicked problems” did not think that operations research could solve them. Just the opposite. And what’s presented in the ODI paper as an approach to wicked problems is just first order cybernetics. There is good value in the ODI paper, but that value is obscured, rather than enhanced, by dressing it as an approach to wicked problems.

  43. chris mallmann

    Hi everybody, back again for one more comment on this:
    Not so sure about what David just posted: what’s wrong with being accountable? William Easterly (Economists’ Adventures and Misadventures in the Tropics, The White Man’s Burden), formerly of the World Bank, called our programs with some reason “islands of unaccountability”. Given that the client governments don’t pay (and often didn’t even ask for the program) and our taxpayers never saw any of it… isn’t there a real danger that things go wrong exactly because of a lack of oversight and transparency? Who is our client, really? Who are we answerable to in the end? Not to be dismissed out of hand too easily, I’m afraid.

  44. David Week

    Hi Chris. Your closing remarks go to the heart of the matter. If you look at the political economy of the aid industry, you’ll find that most NGOs and aid researchers are today servants of government. (Disclosure: me too.) Most of these governments—even those traditionally Labor or Social Democrat—now operate under a neoliberal intellectual regime. Most of what passes as “learning” is in fact learning what the governments want to know: how to ration aid according to those governments’ criteria of effectiveness; most of what passes as “accountability” is accountability to those governments.

    Note: I have heard that Oxfam has started to institute “downward accountability”. I haven’t looked at what they’re doing myself, but that seems like a good idea. On the other hand, it would also be an admission that the prime mode of operation to date has been “upward accountability.” The mental frame revealed by the metaphor, with donors as being “up” and the poor being “down”, is also unfortunate.

    Coincidentally, the Open Society Foundations (citation is not endorsement) just published this story:
    QUOTE: The Brekete Family Radio (BFR) is a reality radio program in Abuja, modeled after a public complaint forum or people’s court. Conducted in the local lingua franca (pidgin English), people call in to report on issues of impunity, whether public or private. The panel sitting in the studio discusses the issue and invites the public to give advice to the plaintiff.

    In some circumstances, the government official involved is actually called while the program is still on air to offer an explanation over an alleged act of impunity. This kind of on-air public accountability inquest has become very effective in putting a large number of public officers on the spot and has also achieved significant results in confronting impunity. END QUOTE

    This is traditional democratic public accountability: by independent actors (often the press) reporting directly to the public. Why do we have these channels in democracies? Because governments and corporations are inherently untrustworthy. They have their own interests to pursue and protect.

    In the KDP project in Indonesia (now PNPM Mandiri) the funder (the WB, under a TTL with a good grounding in political economy) mandated that the implementing agency (the Ministry of Home Affairs) channel program funds into a blind trust, managed by the (banned!) Indonesian Union of Journalists, to in turn commission investigative journalists to investigate and publish as they chose. I’m not sure how well that worked, but that’s an attempt at real accountability, to democratic standards.

    This is another account of an attempt at democratic accountability:
    In this film, poor Indian children map their neighbourhoods, analyse problems, and use the knowledge they gain to interrogate the people in charge.

    What passes as “accountability” in our industry is to a very weak standard. It is by financially beholden actors reporting to (from the poor’s perspective) a foreign power. All of these actors will frame their actions as being in the interests of the poor, without actually being accountable to the poor. A political-economic perspective suggests that one ignore such rhetoric, and instead look at economic interests, flows of money, and structures of power.

  45. chris

    David, absolutely. But as a humble student of economics (and a tax-paid government adviser for 20 years…) I do not even have to employ political context or systems thinking (which I am very fond of) to conclude that public money is prone to be misallocated if not properly overseen. As the production, distribution and oversight of other fuzzy or international public goods show, it is not always easy to substitute for or simulate a good client–producer relationship. Easterly comes up with the idea of giving the recipients of aid more say in monetizing allocations: vouchers from states to organizations could thus only be cashed for ex ante defined goods, deemed helpful for development. That might well prove morally hazardous in many ways but would doubtlessly increase voice and ownership (again, maybe of the wrong kind of people). So: no client, no information about what a good project is… As somebody here requested before: just ask them. Have a good day, chris

  46. David Week

    Hi Chris. As a person employed in providing such oversight, I am 100% in favour of oversight, monitoring, analysis, audits, you name it. But this is not either/or. What bothers me, and what I argued against, is the use of the word “accountability” in the very limited sense of “accountability to the financiers”, and the practice of undertaking “learning” in the very limited sense of “learning how to achieve what the financiers want to achieve.” That is the accountability you will find in any corporate enterprise. But I like to think that development is more than just a corporate enterprise, in which case “accountability” needs a much broader interpretation and approach, as does “learning.” You have a good day too. David

  47. chris mallmann

    Dear David, I know and I think I got your point. Craig above wrote about the “culture of distrust” that accountability engendered….

    The blog is all about complexities, systemic thinking and especially “wicked problems”, and I maintained in one of my first contributions above that donors and consultants rather contribute by adding new layers of complexity that we then marvel at. I don’t want to be seen as cynical, but some of those would need less academic deliberation and more straightforward and transparent behavior on our side. That would still leave loads of real problems and more complexity than we wish for, where we could muster the thinking, help and contributions of those we are meant to be supporting. For this, we would then have to ask them first what they think the problem was. When did DfID, USAID, EUAID, the World Bank and the like last ask where their help was really needed and appreciated? Why do so many of our efforts fail? Bad thinking or bad process? Who is the client of all this – are we answerable to anybody in those countries (let alone the poor whose poverty we allegedly help alleviate)? Or do we owe all those reports only to our agencies (which in turn do what they can not to really answer to our own legislators)?

    What I was trying to say is that very often those that are supposedly (or factually) benefiting from the provision of (social) public goods do not generally have a say – especially if there is a (multi-cultural?) culture of condescension involved (e.g. in providing social assistance to the poor in our own countries). How much harder if the poor are from distant cultures and in countries that partly depend on ODA for big chunks of their public budgets. So, yes, it would be very much needed and appreciated if it entailed more than accountability in the private sector – but I guess, contrary to what we profess, it would first have to be at least as accountable! In spite of many millions spent on harmonizing and coordinating ourselves since the glorious days of the Paris agenda etc., governments maintain whole ministries, and the valuable time of dozens or hundreds of public servants, to satisfy the demands of many donor agencies. We still mostly sit nicely on those “islands of unaccountability”, I am afraid. In a way this is how our delivery systems are structured: the taxpayer does not see what he pays for; the client receiving it might never have asked for it. We are piggy in the middle – and struggle. I guess missionaries owe more to the souls they collect (as a derivative of building schools, teaching and keeping up needed health posts in faraway places), because at least they must fear the final reckoning…

    So, please, we need more transparency, voice and ethics, not less. Maybe we even need less money to spend, not more. Maybe we should really have to prove our point, or need to compete for resources and create better productivity in our delivery systems – and then we would not have to compete with negative prices, luring public officials away from their poorly paid local jobs into our workshops with sitting allowances and the like.

    Yes, complicated and complex, but not impossible I should hope.
    Cheers, Chris

    • W.J. Pels

      Chris, yep, the problem is money. It is that simple. And the costs for development-derivative processes like admin / planning / nudging / (social) marketing / symposiums / UN-parade / satisfying ‘donors’ etc. take more and more out of the available budget. Yes, we monetized everything, even Twitter attention 🙂 But money flows are never complex!