Can impact diaries help us analyse our impact when working in complex environments?

One of the problems with working in a complex system is that not only do you never know what is going to happen, but you aren’t sure what developments, information, feedback etc will turn out (with hindsight) to be important. In these results-obsessed times, what does that mean for monitoring and evaluation?

One answer is to keep what I call an ‘impact diary’, where you dump any relevant info as it passes across your screen/life, so that you can reconstruct a plausible story of impact at a later date.

What might such a diary (more like a folder) contain?

  • Business cards (one former head of our Pan Africa programme used number of business cards collected by staff as a performance indicator, increasing the pressure on them to get out and network)
  • Feedback (emails, press coverage, conversations)
  • Feelings at the time – apprehensions, enthusiasms
  • Critical Junctures: big decisions, why, when, what
  • List of things that people say that need more thought (so they don’t get lost)

I ran this past some of our monitoring and evaluation wallahs. Claire Hutchings added some general guidelines on what to record, focussing on how to make it intelligible to people like her who come in later and have to evaluate a campaign that can often be heavy on jargon and wonky detail:

  • describe who changed, what changed in their behaviour, relationships, activities or actions, when, and where – explain why the outcome is important. The challenge is to contextualise the outcome so that a reader who does not have country and topical expertise will be able to appreciate why this change in a social actor is significant. How do you know that the outcome was a result – partially or totally, directly or indirectly, intentionally or not – of our activities?
  • It is also important for watershed moments to be recorded, along with some contextual information (what are the key factors, key actors etc.) that will help to explain them in the future.

Our campaigns MELista, Kimberly Bowman, focused on the how, rather than the what. If you set up an elaborate system that requires people to enter loads of data, you can be pretty sure it won’t happen. So the key is to redirect existing flows of information, chatter and analysis into a ‘bucket’ that evaluators or campaigners can analyse later, when things aren’t so frenetic. The simplest approach is to add a bucket email address to all the existing group emails, and do something similar for twitter and blogs.

Kimberly also suggested some new software (at which point, this post crosses my IT frontier and moves off into the outer darkness):

‘For the World Bank land freeze campaign (Oct-April this year), we used ‘Evernote‘ to collect all of the team’s internal emails. Evernote has a nice webclipping tool, apps for phones, group data-sharing options, tagging, and optical character recognition for searching PDFs and Word docs, so it seemed like an interesting thing to test.’

Any other suggestions, either on the what or the how?

correlation v causation cartoon



8 Responses to “Can impact diaries help us analyse our impact when working in complex environments?”
  1. Claire Hutchings

    Those of you familiar with Outcome Journals will recognize that credit needs to be given to Outcome Mapping for the steer on the kinds of details we want to be capturing, or dumping, as the case may be.

    Thanks Duncan, really helpful, and one of those areas that we continue to grapple with. No matter how clever we are about the ‘what’, though, it seems to me that what we really struggle with is the ‘how’: how to encourage busy people to take the time to dump those precious pieces of gold dust sitting in their blackberry, smart phone, email inbox, their heads… and in such a way that they can be usefully reviewed by them and others to piece the picture together after the fact. Kimberly and her colleagues have done some great work thinking through this challenge, and have tried piloting some more creative approaches to journalling, such as encouraging teams to audio-record 30 seconds of reflection on their phone during busy periods – concluding, among other things, that automation is key: it needs to be something so automatic that teams almost forget about it.

    But I’d be really interested to hear what others are trying – in terms of the ‘what’ question certainly, but would be particularly interested in hearing about any innovative approaches to the ‘how’ question – especially for or with governance and advocacy/ campaigns teams.

  2. Tracey Martin

    Having a ‘diary’ that is physical as well as virtual also helps people to reflect on what has happened over the day, week or month – at meetings set aside for this purpose. In a complex situation this also means that things that we didn’t think were important are not lost and can be picked up again later – not forgotten.
    The challenge is to get people into the habit of writing things down – their feelings and hunches as well as what happened. Often it is organisations that have less sophisticated IT systems and a need to communicate in other ways that are better at this.
    Having one person who encourages this, then synthesises reflections, tidies up the information is also very helpful – especially for the M&E wallahs.
    Writing itself is a way of reflecting. In complex situations we need to act quickly – but only after we have properly understood – i.e. reflected and learnt.

  3. Hi Duncan,

    Journals are the primary tool for data collection in the Outcome Mapping approach. The manual describes a process for customising three types of journal: one to collect data on outcomes, one on activities carried out and one on internal performance.

    A lot of people use pen and paper still but I’m hearing more and more who use mobile or web-based tools.

    You can read more here:
    – The OM manual:
    – A good example from Zimbabwe:
    – A discussion thread on the use of journals:


  4. Another quick comment after reading the other responses…

    The what and how questions are important but still only talk about data collection. The other side of the equation to balance is sense-making: how and when is all that data you’ve been collecting going to be synthesised and made sense of, and who is involved in that?

  5. Claire Hutchings

    Agree completely Simon! We find a spectrum ranging from those teams that are great at collecting and recording, dumping as it were, but struggle to find time or a meaningful process for reflection and sense-making (often because they’ve amassed SO MUCH information that the task feels overwhelming) and those teams who struggle to record the gold dust and for whom the challenge is to move the reflection and sense-making from an often unconscious individual process into a communal space.

  6. Your post has sparked a lot of thoughts. Thank you. I recently did an evaluation of a fraught project – one of those with a whisper campaign against it, people saying that it was a basket case or a failure. The diary that we had asked the project officer to keep was invaluable. It showed the obstacles, the ways she and others got over them (after several days of fuming) and, most importantly, it has a quote from a senior member of the community saying that he was proud of the project because it got the young men involved. She said it was a eureka moment and it was for us too. We used it as a central part of our evaluation (along with plenty of community voices). I think that having lots of people involved in a project keep a diary or record thoughts would be almost impossible to achieve and even harder to analyse, but getting one or two key people who are at the frontline of decision making and implementation can provide priceless insights and probably be therapeutic.

  7. Great post and comments. Yes, Outcome Mapping and Outcome Harvesting (1) are two useful Monitoring, Evaluation and Learning (MEL) approaches that lend themselves to the often complex (aka crazy-making) and unpredictable (aka head-spinning) nature of policy advocacy.

    One of the biggest challenges for MEL of policy advocacy is actually cultural – campaigners are constantly on the go, and looking ahead to what’s next. An anthropologist might say it’s more an “oral culture” than a “literate culture”. Marked by ephemeral exchanges, and by making knowledge memorable by making it witty, surprising, snarky, etc., oral culture is at its core conversational, where meaning comes through exchanges. (2) This description rings true for campaigners (and a fair number of their managers as well).

    For MEL folks, the takeaway is that campaigners are unlikely to compose or read lengthy evaluation reports, however well researched, documented and argued they are. So coming up with “how’s” for data collection, sense-making and knowledge sharing that match this oral culture is key. Two things we’re using with Oxfam campaign teams have been pretty helpful.

    First, we’ve found that After-Action Reviews are a great way to get busy campaigners to stop and take stock of what’s happened, to make sense of what led to the outcomes (our contribution, and that of other actors and factors), and to embed the knowledge among the team members. After-Action Reviews are also a privileged moment for data collection and documentation. But this is almost a side-benefit, serving the “literate culture” out there. It also, importantly, serves our desire to convey knowledge beyond those who participate in the immediate conversation. So writing up an After-Action Review report is key. It can be anything from a very brief summary note to a more formal report with data annexes and back-up documentation like press clips, quotes from politicians, etc. We find that campaign teams often go back to these when planning their next big action, to remind themselves what their brilliant insights were – what to do again, and what to not do again.

    Second, we’ve recently been experimenting with Yammer, an enterprise social network (3). It’s like Facebook for organizations and companies. It is a lot like the journal idea, but it’s in the cloud. You can post information and links, photos and videos. You can create groups and collaborate on documents. You can “like” and comment on posts, and follow and praise other members. Best of all, you can post from your cell phone, log onto the web site, or use their mobile app. Membership is based on your email address, so it’s got some built-in security around who can join and see your stuff. We’ve found it’s a great way to capture information on the fly (like the ‘bucket’ metaphor Kimberly uses). This is extremely useful when it comes time to gather up and synthesize all the information in preparation for an After-Action Review. Like any social network, we’re finding some people are super users, some are lurkers, and others are slow to “join the conversation” at all. So we have some staff who are ‘gatherers’, who make sure juicy bits of information are captured from the vast stream of information that flies around on emails among campaigners. So far, Yammer has proved pretty useful for the data collection bit (like journaling), and because of the social network aspect, it’s also been good for the knowledge-exchange bit – for those who are active users or lurkers. But you still need a live conversation for the sense-making bit. And for that, nothing beats face-to-face, real-live oral conversations.

    (1) Ricardo Wilson-Grau and Heather Britt have put together a useful guide on Outcome Harvesting, published last May by the Ford Foundation.
    (2) See Walter Ong’s work, and some interesting discussions on how oral culture is re-entering the public sphere through twitter, Facebook (and I’d add blogs), in Zeynep Tufekci’s post and in the Atlantic
    (3) We also looked into other options; Yammer provided a better set of features for what we were looking for, and a more friendly, intuitive interface.

  8. One approach I’ve trialled which both gives a view of impact and also gives evidence for emerging changes is simply to add to project meetings a section asking people whether they have noticed anything new or anything interesting. Field workers were then able to triangulate their observations at these meetings, and it threw up both changes in the wider context and changes connected to the project. It gave a sense of what to keep an eye on, where to make changes to the project, and where to monitor new themes, as well as giving evidence of impact where impact was not sought.