What kind of evidence influences local officials? A great example from Guatemala

May 29, 2018

By Duncan Green

I met Walter Flores at a Twaweza seminar in Tanzania a couple of months ago, but have only just got round to reading his fascinating paper reflecting on 10 years of trying to improve public health in Guatemala. It is short (12 pages), snappily written, with a very crisp, hard-hitting thesis, so no need to do more than provide some excerpts to whet your appetites:

The Setting: ‘In Guatemala, because of decentralization, local public services are governed primarily by municipal governments and the Ministry of Health’s local and regional branches. This means that local authorities are actually in a position to address some issues of service quality, corruption, and abuse—though not deeper systemic issues, such as health budgets.

The Project: Over the last decade, the Centro de Estudios para la Equidad y la Gobernanza de los Sistemas de Salud (the Center for the Study of Equity and Governance in Health Systems, or CEGSS) has considered the question of how to use evidence to influence authorities and promote participation by users of public services in rural indigenous municipalities of Guatemala.

The Process: Our initial approach relied on producing rigorous evidence by surveying health care facilities using random samples. However, when presented to authorities, this type of evidence did not have any influence on them. In the follow-up phases, we gradually evolved our approach, employing other methods of collecting evidence (such as ethnography and audiovisuals) that are easier for the non-expert public and the users of public services to grasp.

We found that evidence collected, analyzed, and systematized by the users of the health system was key to engaging the authorities. This conclusion was based on a systematic analysis of the different evidence-gathering methods CEGSS used to document the conditions and user experience of local health services.

Since 2007, we have implemented five different methods for gathering evidence:

1) Surveys of health clinics with random sampling,

2) Surveys using tracers and convenience-based sampling,

3) Life histories of the users of health services,

4) User complaints submitted via text messages,

5) Video and photography documenting service delivery problems.

Each of these methods was deployed for a period of 2-3 years and accompanied by detailed monitoring to track its effects on two outcome variables:

1) the level of community participation in planning, data collection, and analysis; and

2) the responsiveness of the authorities to the evidence presented.

Our initial intervention generated evidence by surveying a random sample of health clinics, widely considered to be a highly rigorous method for collecting evidence. As the surveys were long and technically complicated, community participation was close to zero. Yet we expected that, given its scientific rigor, the authorities would be responsive to the evidence we presented. Instead, the government used methodological objections as a pretext to reject the service delivery problems we identified; it was clear that the authorities simply did not want to act.
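For readers unfamiliar with the sampling jargon, a minimal sketch of what a simple random sample of clinics looks like in practice (the registry, its size, and the sample size here are hypothetical illustrations, not details from CEGSS's actual protocol):

```python
import random

# Hypothetical registry of clinic IDs for one municipality.
clinic_registry = [f"clinic-{i:03d}" for i in range(1, 121)]

# Draw a simple random sample of 15 clinics to survey; fixing the
# seed makes the draw reproducible for later auditing.
random.seed(2007)
sampled_clinics = random.sample(clinic_registry, k=15)
print(sampled_clinics)
```

The rigor comes from the fact that every clinic has an equal chance of selection, so results generalize; the cost, as the paper notes, is that the method is opaque to non-experts and easy for officials to attack on technical grounds.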

Our next effort was to simplify the survey and involve communities in surveying, analysis, and report writing. However, as the table in the paper shows, participation was still “minimal,” as was the responsiveness of the authorities. Many community members still struggled to participate, and the authorities again rejected the evidence as unreliable, citing methodological concerns. Together with community leaders, we decided to move away from surveys altogether, so that authorities could no longer use technical arguments to disregard the evidence.

For our next method, we began collecting the life-stories of real patients and users of health services; the decision to adopt this method was taken together with the communities. Community members were trained to identify cases of poor service delivery, interview users, and write down their experiences. These testimonies vividly described the impact of poor health services: children unable to go to school because they needed to attend to sick relatives; sick parents unable to care for young children; breadwinners unable to go to work, leaving families destitute.

This type of evidence changed the meetings between community leaders and authorities considerably, shifting from arguments over data to discussing the struggles real people faced due to nonresponsive services. After a year of responding to individual life-stories, however, authorities started to treat the information presented as “isolated cases” and became less responsive.

We regrouped again with community leaders to reflect on how to further boost community participation and achieve a response from authorities. We agreed that more agile and less burdensome methods for community volunteers to collect and disseminate evidence might increase the response from authorities. After reviewing different options, we agreed to build a complaint system that allowed users to send coded text messages to an open-access platform.

We also wanted to continue helping communities tell their stories and experiences. Instead of presenting life-stories as text, we began supporting communities to use photography and video to document their stories. Audiovisual evidence proved a powerful way to attract the interest of traditional media and other civil society organizations. And by combining coded complaints sent via text message with electronic alerts and follow-up phone calls to authorities, we were able to draw attention to service delivery problems in real time. The result was a “high” level of both community engagement and responsiveness from officials.
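To make the coded-complaint flow concrete, here is a minimal sketch of how such a platform might decode a text message and route an alert to the responsible official. The message format, the problem codes, and the contact table are all hypothetical illustrations, not CEGSS's actual system:

```python
# Hypothetical message format: "<clinic code> <problem code>",
# e.g. "C012 P3" -> clinic C012 reporting problem P3.
PROBLEM_CODES = {
    "P1": "clinic closed during posted hours",
    "P2": "staff absent",
    "P3": "medicine stock-out",
    "P4": "user charged for a free service",
}

# Hypothetical routing table: clinic code -> responsible office.
OFFICIAL_CONTACTS = {"C012": "district-health-office@example.org"}

def decode_complaint(sms_text: str) -> dict:
    """Parse a coded SMS into a structured complaint record."""
    clinic, problem = sms_text.strip().upper().split()
    return {
        "clinic": clinic,
        "problem": PROBLEM_CODES.get(problem, "unknown problem code"),
    }

def alert_official(complaint: dict) -> str:
    """Compose the real-time alert sent to the responsible authority."""
    contact = OFFICIAL_CONTACTS.get(complaint["clinic"], "regional office")
    return (f"To {contact}: complaint at {complaint['clinic']} - "
            f"{complaint['problem']}. Please follow up.")

print(alert_official(decode_complaint("C012 P3")))
```

The design logic matches the paper's point: short codes keep the burden on community volunteers low, while structured records let the platform escalate problems to the right official immediately rather than waiting for a report.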

And the big takeaway conclusion? In contrast to theories of change that posit that more rigorous evidence will have a greater influence on officials, we have found the opposite to be true. A decade of implementing interventions to try to influence local and regional authorities has taught us that academic rigor itself is not a determinant of responsiveness. Rather, methods that involve communities in generating and presenting evidence, and that facilitate collective action in the process, are far more influential. The greater the level of community participation, the greater the potential to influence local and regional authorities.’

Great stuff. The big omission, for me, is any discussion of why this is the case. Why do officials find testimonies more convincing than hard data? Is it because they prefer a human narrative, or because these are the voters who can influence their political masters? More research needed, as ever.
