Shouting or cooperating? What’s the best way to use indexes to get better local government?

Went to an enjoyable panel at ODI last week, with the wonderful subtitle ‘Shouting at the system won’t make it work!’. It presented new research on how to improve the accountability of local government in Tanzania. Here’s the paper presented by two of the authors, Anna Mdee and Patricia Tshomba, the first of a series.

The research is about how you construct a local government performance index that means something to local people. It’s research rather than consultancy, so the task is not to produce an index for others to implement, but to work out how to do it.

And it turns out (surprise surprise) to be really difficult. According to one of the authors, Anna Mdee ‘We found a big gap between theory and practice. Practice is much more multiple, with parallel negotiations, the remains of a one party system, the formal state, extremely influential faith organizations, civil society organizations. So it’s hard to establish who is being held to account. We found lines of accountability, but also ‘lines of blame’ – everyone blames everyone else, not always along the same lines. Government tries to push blame all the way down to the villages (which have massive responsibilities and no resources).’

The researchers opted for building an index as a starting point for triggering conversations about problems and how to fix them. Through a combination of workshops, focus groups and interviews, they identified those problems as:

  • Politicians are not concerned with people’s problems
  • Councillors and other local representatives feel they are misjudged
  • Lack of communication, openness, cooperation and togetherness
  • No platform to bring together stakeholders
  • Lack of important documents and knowledge
  • Weak culture of reading
  • Shortage of funds (‘District officials reported that the social welfare department has many activities and manpower but has only a budget of 1 million Tanzanian Shillings (approx. £360) per month’).
Examples of indicators

They used the conversations to identify a draft set of indicators, went back to consult and refine them, and came up with the ones in the diagram, with the litmus test being ‘if you had this, would it make any difference? Would anyone use it?’

The purpose of this is definitely not advocacy of the finger-wagging variety, but rather ‘getting beyond the blame game’: ‘We see potential in using a Local Governance Performance Index as a collaborative problem-solving tool, that helps to move from a list of complaints about problems that local officials and representatives have limited capacity to resolve, to a collective understanding between citizens and local government about where blockages lie, and what they can do together to overcome them.’

Which is really interesting and echoes in part our own governance work in the Chukua Hatua programme: we started off helping local communities ‘demand accountability’ from local officials, but moved to something more collaborative when those officials said they would love to be more accountable, but had little idea what their roles and responsibilities actually were.

I’m a sceptic on finger wagging (see one of my favourite cartoons on ‘speaking truth to power’) and love the focus on collective problem solving, on getting local people to identify the right indicators, and the acknowledgement of the complexity of local governance. But I did worry that by aiming purely for using the index to trigger conversations, they were missing a trick.

In particular, they reject the idea of a single composite index, because they think that would prompt a slide into attempts to ‘game’ the index, and this would lose the emphasis on collective problem solving. And anyway, if you really listen to your interviewees, you are likely to come up with a different index for each village, which messes up any kind of comparison.

But we know from experiences like PAPI in Vietnam, and others in Uganda (as well as in lots of other settings from the UN to corporates) that a league table focuses the minds of leaders like nothing else – they really don’t want to be beaten by their neighbours and rivals. Anna replied to my question on this by saying ‘I’m an anthropologist, so instinctively cautious about quantification, and the distortion produced through overemphasis on proxy indicators’. Actually, there’s lots of quantification in the index; what she’s opposing is over-simplification, which is laudable but could carry a cost in lost influence.

Has anyone done a comparative study of different approaches to local government indexes – how they are designed, who uses them, what impact they have etc? If so, I’d love to read it.



5 Responses to “Shouting or cooperating? What’s the best way to use indexes to get better local government?”
  1. Finn

    Hi Duncan,

    Interesting post. In my experience, the question of whether “to index or not to index” depends quite a bit on who the main user of the results is. You’re right that PAPI and other league-table indices work well, but probably because there the main user is the (quite empowered and centralized) local administration (i.e. top down). However, when the main user is the local community, comparisons to other communities via league tables may not matter as much as an inclusive process which lets local actors create – or at least adapt – the framework so that they fully own it and its results (such as in the Tanzanian case). So my – simplified – take would be: comparative indices/league tables to instigate top-down reforms, contextualized assessment processes to help with collective-action-focused, bottom-up processes. Trying to achieve both at the same time, in my experience, usually leads to failure, but I’d be very curious to hear about counter-examples to my hypothesis.

    As for assessments of local governance performance frameworks, TI did an overview of what exists on local governance assessments some years ago. It’s here:

  2. Working with hundreds of community organisations to build up voice and deliver services, both through linkages with government and by helping communities deliver them on their own, we found intense pressure from donors, government and the World Bank to measure the institutional maturity of community organisations through indexes. A rigorous study was designed and implemented to assess this. The donors expected that the organisation would use the results of this sophisticated study for its planning. The front-line workers of the organisation, the social organisers, simply refused to do so. With their years of experience, deep knowledge of the field and use of intuition, they were synthesising problems to bring people together and get results. The study, in contrast, used analysis to break problems down into bits and pieces, and invariably missed some ingredients which the social organisers found to be the most important. Context and leadership, for instance, varied everywhere; this was not captured by the indexes but was understood by the social organisers. It is for this reason that we are doubtful about the utility of voice as the great change agent if taken in isolation, and doubt its ability to make the system responsive to people’s needs. The problems at local level are too complex and require a high level of cooperation and coordination among different stakeholders to resolve. Instead of an index identifying problems, an approach which focuses on how far local authorities have been able to resolve them collectively would be a better measure.

    A good example: we happen to be working with a town of about thirty thousand people which has had barely any electricity for the last fifteen years, while two public organisations work there to provide it. Voice, expressed through agitation, was there.

    We, as a non-profit organisation, decided to bring in resources to tap the town’s vast water resources and help set up a two-megawatt electricity station. Every stakeholder happily joined the dialogue to initiate the project, giving their approval, with thousands of community members participating in the final dialogue. But when it came to implementation, very few actually helped in raising the resources, implementing the project or resolving the conflicts it invariably threw up. What the State could not do in two decades was done in two years, at nominal cost, by us and a small part of the community. We thought the problem was over, but realised it had actually only begun once the electricity started flowing. The politicians, we found, never actually wanted it, because their whole power base had been built on the existence of the problem, not on resolving it; the two power organisations with the responsibility to deliver power for the State were too centralised to take advantage of the new power available, and not adaptive enough for us to link the new system up with their outdated one; and the communities who benefited the most started forgetting that we were only a voluntary organisation trying to help them and had only responded to their plight – they started holding us accountable, forgetting that it was the State whose responsibility it was to address their needs. Both the politicians and the power organisations preferred not to come together to smooth the power supply and put a sustainable system in place. So what should the index have been measuring: our compassion and values, or the inability of the local power structures to respond to a genuine demand? Voice was there, but its ability to make the system respond simply was not.

  3. Interesting discussion, Duncan, and a timely one in the light of the ‘indicator explosion’ going on in development and urban / ‘green city’ initiatives.

    Concerning the question of index design: Virtually all tools monitoring and ranking local government performance aim for outcome indicators (e.g. air quality). This has merits, as for most issues standardisation is easier and data collection cheaper. However, the claim and assumption that outcome information will automatically inspire learning (what to do to reduce air pollution..) usually doesn’t hold up: many rankings get a second of attention from the media and senior decision-makers, yet are ignored by practitioners.

    To counter this, a few indicator initiatives explicitly apply process indicators involving local actors (e.g., does our municipality have an air quality action plan meeting certain quality criteria..). This is much more directly related to policy learning, yet has distinct challenges: process indicators are more opaque, more labour intensive and expensive if evaluated by expert panels, and consequently often fizzle out. However, if designed well, such process monitoring can work.
    Studying one case of ‘positive deviance’, where huge numbers of Dutch local governments voluntarily participated over many years in self-administered rankings of their sustainability policy performance, we produced a paper that contains many lessons. The journal is unfortunately gated, but people are welcome to get the paper by writing to
    Our follow-up study, addressing sustainability reporting by local governments, is fortunately Open Access: in this paper, the comparison of approaches (e.g. tailor-made vs standardised indicators, yearly vs three-yearly reporting) also showed interesting trade-offs that are relevant for many types of city rankings.

  4. Per Tidemand

    Hi Duncan

    Very interesting.

    Over the years (since 1997) I have mainly worked in East Africa with central government systems for LG performance assessments – the main purpose of such assessments is to ensure LG compliance with rules, legislation and procedures, but also to assess whether LGs are responsive, transparent and accountable. They are often linked to forms of fiscal incentives within the LG grant systems. A colleague of mine (Jesper) provided an overview of most of these in 2010 – see

  5. Dear Duncan – sorry to be a bit late with my response. The Gender and Social Development Resource Centre did a good overview in 2010, which still reads as very relevant. Interestingly, they linked indices to assess local governance performance to leadership.

    In the Ethiopia Social Accountability program, after 4 years of SA experience, we have developed sector-based checklists for local communities – based on issues that came up regularly in SA processes all over the country. These lists are now used by local communities to monitor what was raised and what was solved in the SA process (and yes, they can and do add the issues that are missing). The checklists are collected and aggregated by CSOs at district level (quarterly at the moment) and sent to us (the management agency of the program) for compilation. We develop regional and national sector graphs – which CSOs are now using at regional level for sector dialogues. Quite unique in Ethiopia. For our latest data reports set see

    The reason we are doing the analysis ourselves at the moment is that the 86 CSOs in the program found it hard to imagine what would happen if they brought together their data about service issues raised and solved. They’re getting it now – aggregate data is powerful and has started to lead to some responsive regional sector actions. We can now gradually let go of the data analysis to see where it lands. Meanwhile, we are exploring with the sector ministries how the citizens’ lists could help streamline access to information about standards and budgets – like citizens’ charters for services.

    It’s important to recognise, respect and work with local diversity, but to tackle systemic issues, at some point you need to aggregate data. Our response to this duality has been to build an ‘index’ from the grassroots up. So far so good…