Participatory Evaluation, or how to find out what really happened in your project

October 23, 2014

By Duncan Green

Trust IDS to hide its light under a bushel of off-putting jargon. It took me a while to get round to reading ‘Using Participatory Process Evaluation to Understand the Dynamics of Change in a Nutrition Education Programme’, by Andrea Cornwall, but I’m glad I did – it’s brilliant. Some highlights:

[What’s special about participatory process evaluation?] ‘Conventional impact assessment works on the basis of snapshots in time. Increasingly, it has come to involve reductionist metrics in which a small number of measurable indicators are taken to be adequate substitutes for the rich, diverse and complex range of ways in which change happens and is experienced. [In contrast] Participatory Process Evaluation uses methods that seek to get to grips with the life of an intervention as it is lived and perceived and experienced by different kinds of people.

[Here’s how it works on the ground, evaluating a large government nutrition programme in Kenya]

It was not long after arriving in the area that we were to show the program management team quite how different our approach to evaluation was going to be. After a briefing by the program leader, we were informed as to the field sites that had been selected for us to visit. Half an hour under a tree in the yard with one of the extension workers was all it took to elicit a comprehensive matrix ranking of sites, using a series of criteria for success generated by the extension worker, on which three of the sites we had been offered as a “range” of examples appeared clustered at the very top of the list and one at the very bottom. That we were being offered this pick of locations was, of course, to be expected. Evaluators are often shown the showcase, and having a basket case thrown in there for good measure allows success stories to shine more brilliantly. Even though there was no doubt in anyone’s mind that this was an exceptionally successful program, our team had been appointed by the donor responsible for funding; it was quite understandable that those responsible for the program were taking no chances.

It was therefore with a rather bewildered look of surprise that the program manager greeted our request to visit a named list of sites, chosen at random from various parts of the ranked list, and not the ones we were originally due to visit. What we did next was not to go to the program sites, but to spend some more time at the headquarters. We were interested in the perspectives of a variety of people involved with implementation, from the managers to those involved in everyday activities on the ground. And what we sought to understand was not only what the project had achieved, but also what people had thought it might do at the outset and what had surprised them… These ‘stories of change’ offered us a more robust, rigorous and reliable source of evidence than the single stories that conventional quantitative impact evaluation tends to generate.

Our methodology consisted of three basic parts. The first was to carry out a stakeholder analysis that allowed us to get a picture of who was involved in the program. We were interested in hearing the perspectives not just of program “beneficiaries”, but also of others – everyone who had a role in the design, management and implementation of activities, from officials in the capital to teachers in local schools.

The next step involved using a packet of coloured cards, a large piece of paper and pieces of string. It began with an open-ended question about what the person or group had expected to come out of the program. Each of the points that came out of this was written by one of the facilitation team on a card, one point per card, and extensive prompting was used to elicit as many expectations as possible. The next step was to look at what was on the cards and cluster them into categories.

Each of these categories then formed the basis for the next step of the analysis, which was to look at fluctuations over time. This was done by using the pieces of string to form a graphical representation, with the two-year time span on the x axis and points between two horizontal lines representing the highest and lowest points for every criterion on the y axis. What we were interested in was the trajectories of those lines – the highs and lows, the steady improvement or decline and where things had stayed the same. We encouraged people to use this diagram as a way of telling the story of the program, probing for more detail where a positive or negative shift was reported.

The third step was to use this data as a springboard to analyse and reflect on what had come out of their experience of the project. We did this by probing for positive and negative outcomes. These were written onto cards. We asked people to sort them into two piles: those that had been expected, and those that were unexpected. We then spent some time reflecting on what emerged from this, focusing in particular on what could have been done to avoid or make more of unexpected outcomes and on the gaps, where they emerged, between people’s expectations and what had actually happened. We kept people focused on their own experience, rather than engaging in a more generalized assessment of the program.

[What kind of reaction did they get?] “We’ve never had visitors coming here who knew so much,” one said to us. Another confided that it’s easy enough to direct the usual kind of visitor towards the story that the program team wanted them to hear. Development tourists, after all, stay such a short time: “they’re in such a rush, they go to a village and say they must leave for Nairobi by 3 and they [the program staff] take them to all the best villages.”

[What kind of things did they uncover?] One of the most powerful lessons that the program learnt came from a very unexpected reaction to something that was utterly conventional: a baseline survey. A team of enumerators had set out to gather data from a random sample of households, such as height-for-weight and upper arm circumference measurements. At the same time, a rumour was sweeping the area about a cult of devil-worshippers seeking children to sacrifice. Families greeted the enumerators with hostility. People in the communities likened the measurement kits developed for ease of use to measuring up their children for coffins. The survey proved difficult to administer. In one place, the team were chased with stones.

To get things off the ground again, the program needed the intercession and the authority of the area’s chiefs to call their people and explain what the program was all about and what it was going to do for the area. What was so striking about the stories of this initial process of stumbling and having to rethink was that it simply had not occurred to the researchers that entering communities to measure small children might be perceived as problematic.’
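(A quick aside for the mechanically minded: the matrix ranking the extension worker produced under that tree boils down to scoring each site against a set of success criteria and sorting by the totals. Here is a minimal sketch of that logic; the site names, criteria and scores below are entirely invented for illustration – the real exercise was done on paper, not in code.)

```python
# Illustrative sketch of a matrix ranking: score each site against success
# criteria suggested by a local extension worker, then rank by total score.
# All sites, criteria and scores are hypothetical.

criteria = ["attendance", "uptake of practices", "community support"]

# Hypothetical scores (1 = worst, 5 = best) for each site against each criterion.
scores = {
    "Site A": [5, 4, 5],
    "Site B": [4, 5, 4],
    "Site C": [5, 5, 4],
    "Site D": [2, 3, 2],
    "Site E": [1, 1, 2],
}

# Rank sites from best to worst by their total score across all criteria.
ranking = sorted(scores.items(), key=lambda item: sum(item[1]), reverse=True)

for rank, (site, site_scores) in enumerate(ranking, start=1):
    print(f"{rank}. {site}: total {sum(site_scores)} {dict(zip(criteria, site_scores))}")
```

Run something like this on the evaluators’ own scores and you get exactly the pattern the team spotted: the showcase sites cluster at the top of the list and the “basket case” sinks to the bottom.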

Brilliant, and there’s lots more in the paper. Whenever I read anything by Andrea, I wish we had more ‘political anthropologists’ like her writing about development. But maybe we should lend them some subeditors to jazz up their titles a bit.
