What’s the best way to measure empowerment?

Monitoring, Evaluation and Learning (MEL) used to send me into a coma, but I have to admit, I’m starting to get sucked in. After all, who doesn’t want to know more about the impact of what we do all day?

So I picked up the latest issue of Oxfam’s Gender and Development Journal (GAD), on MEL in gender rights work, with a shameful degree of interest.

Two pieces stood out. The first, a reflection on Oxfam’s attempts to measure women’s empowerment, had some headline findings that ‘women participants in the project were more likely to have the opportunity and feel able to influence affairs in their community. In contrast, none of the reviews found clear evidence of women’s increased involvement in key aspects of household decision-making.’ So changing what goes on within the household is the toughest nut to crack? Sounds about right.

But (with apologies to Oxfam colleagues), I was even more interested in an article by Jane Carter and 9 (yes, nine) co-authors, looking at 3 Swiss-funded women’s empowerment projects (Nepal, Bangladesh and Kosovo). They explored the tensions between the kinds of MEL preferred by donors (broadly, generating lots of numbers) and alternative ways to measure what has been going on.

They start by breaking down the fuzzword ‘empowerment’ into the ‘four powers’ (power within; power with; power to; and power over) model, best known from my Oxfam colleague Jo Rowlands’ 1997 book ‘Questioning Empowerment’ (although she claims not to have invented it) and used by everyone ever since.

When you disaggregate power in this way, you come up with an interesting finding:

‘[quantitative] M&E can capture some evidence of increased ‘power-to’ in numbers of people trained in a skill or knowledge, or able to market their products in a new way, or mobile phones distributed to enable women traders to share knowledge. However, ‘power-within’ is a realm of empowerment which does not directly lend itself to being captured by quantitative M&E methods.’

What’s more, while obviously women worry a good deal about income and putting food on the table, soft data (eg on feelings and perceptions), best collected by qualitative methods such as in-depth interviews:

‘Appear to be what the women value most in [these] projects. While this is probably a very obvious point for feminists working with women, it is noteworthy for practitioners who tend to focus on supporting ‘power-to’ through provision of material resources and other tangible changes.’

That certainly chimes with what I found when talking to women in Community Protection Committees in the DRC last month – the biggest personal impact of their participation was the palpable sense of pride and self-esteem that came from learning about their rights and how to exercise them, and then passing that knowledge on to their neighbours. Hard (though not impossible) to put a number on that. Listen to this interviewee from Bangladesh:

‘Before taking part in the project, I was not allowed to visit places outside my house. This all changed after I joined the producers’ group. My income and communication skills increased and improved. Due to the income and awareness, my husband allows me to attend different meetings of the producers’ group, village and district levels. Due to my involvement in the producers’ group, other producers encouraged me to run for a local government election as member in Union Parisad [UP – lowest tier of local government in Bangladesh]. I was motivated to try and finally was successful in winning the election. From a simple housewife, I am now an elected member of the UP.’

That last quote highlights another plus of qualitative methods – they really help communicate project impact (as do numbers, of course – maybe for different audiences). But Carter & co. want to move on from a crude ‘quant v qual’ dichotomy. They argue that quant and qual methods complement each other – and mixed methods can actually be the best way to tackle both the “did change happen” questions, as well as the why. For example, ‘Research methods associated with collecting qualitative data often actually reveal unexpected quantitative data, including changes in children’s school attendance, better nutrition, and so on.’

And how you do qualitative research matters. Yes, lots of qualitative research is pretty slapdash, but beware the temptation to ‘professionalize’ it and send for Rigorous Qual Consultants Inc: ‘If qualitative information is collected by an external agency with a clear, pre-determined mandate, there is the risk that much potentially interesting information is ignored as irrelevant to the task in hand, and not transmitted to the project staff.’ So NGOs may be better advised to build staff skills in qualitative research, rather than outsourcing.

In the end, they argue, ‘the best way to measure empowerment is to ask those directly concerned’. But it’s not quite as simple as that. Sure, it would be perverse in the extreme if we tried to measure empowerment by ignoring the nuance of voice and lived experience of those involved in order to generate another dry statistic. But equally, you just can’t rock up in a village and ask ‘do you feel empowered?’ and expect to get a useful result.

There are clearly difficulties with putting all this into practice, namely that ‘‘value for money’ seems to require quantifiable facts’. But the authors think that’s no excuse. ‘Nevertheless we wonder if better communication on the part of development professionals about the worth of qualitative evidence in demonstrating value could mitigate this demand.’

Excellent piece, and if you’re not already signed up to either the journal or its twitter feed (@GaDjournal), why not?

And here’s a blog introducing the GAD issue from one of the contributing authors, Kimberly Bowman.



15 Responses to “What’s the best way to measure empowerment?”
  1. Helen Lindley

    ‘But equally, you just can’t rock up in a village and ask “do you feel empowered?” and expect to get a useful result’
    Duncan – Perhaps not in those exact words, but the Chars Livelihood Programme has done some interesting work in Bangladesh using focus groups to ask different women and men ‘what an empowered woman looks like’, which can be turned into quantitative indicators http://tinyurl.com/k7qs34z

    Womankind Worldwide has adapted this to a project in Peru through asking women’s group members ‘what a powerful woman looks like’ which established criteria for scoring. The definitions developed by women included having self-respect and value, and also sharing information with others.
    For me a key issue to consider with such an approach (which establishes quantitative criteria from qualitative results) is the potential for women’s own definitions of empowerment to shift as they become involved in a programme; for example a woman may not value ‘increased decision making in the home’ as a sign of empowerment if she does not recognize her right to do so, hence the value of objective measures as described by Oxfam to complement this.
    Womankind has also found Outcome Mapping a useful framework to help engage partner organisations in discussion on change, including empowerment – see the fourth article for more information (shameless self plug!)
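The CLP/Womankind approach described above – focus groups defining what ‘an empowered woman looks like’, then scoring against those self-defined criteria – lends itself to a very simple quantitative index. A minimal sketch follows; the criteria names and ratings are purely illustrative, not taken from either programme:

```python
# Hypothetical sketch: turning self-defined empowerment criteria
# (elicited in focus groups) into a simple quantitative score.
# Criteria and data are illustrative, not from the actual projects.

CRITERIA = [
    "has self-respect and value",
    "shares information with others",
    "speaks in community meetings",
]

def empowerment_score(ratings):
    """ratings: dict mapping criterion -> 0/1, as scored by the group
    itself. Returns the share of the group's own criteria that are met."""
    return sum(ratings.get(c, 0) for c in CRITERIA) / len(CRITERIA)

baseline = {"has self-respect and value": 0, "shares information with others": 1}
endline = {c: 1 for c in CRITERIA}

print(round(empowerment_score(baseline), 2))  # 0.33
print(round(empowerment_score(endline), 2))   # 1.0
```

The point is that the *numbers* come last: the definitional work stays with the women themselves, which also addresses Helen’s caveat that definitions may shift over the life of a programme (the criteria list can be revisited).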

  2. Colm Moloney

    How to measure empowerment was a question I struggled with when I started an evaluation project of a social accountability programme in Ghana back in November (I wrote a little about some of the challenges here http://colmmoloney.com/2013/11/02/empowerment/).

    Although there was a certain amount of reluctance, given that I like numbers, I settled on a qualitative approach. I felt there were few indicators of change that could be quantified, and there was no other way to collect rich data that could really illustrate the change as it was experienced.

    However, I did find that it’s very difficult to accurately talk about the scale of change with qualitative information. E.g. if you’re thinking of an indicator like citizen participation in local governance, how do you measure the scale of change in that indicator? I ultimately acknowledged the limitations of the study in that respect and didn’t emphasise scale, but I think it’s an important question.

  3. Sabita Banerji

    Excellent piece, thank you, Duncan. I think this is probably what you meant, but I would rephrase “NGOs may be better advised to build staff skills in qualitative research” as “NGOs may be better advised to build staff skills in LISTENING”. Your piece highlights the difference between evaluating in order to understand the impact your work is having (which I think tends to be more qualitative) and evaluating to provide evidence of VFM – ie for donors (which tends to be more quantitative). So if we genuinely want to understand how the human beings we are purporting to help are affected by the things we are doing and asking them to do, we need to really listen to them. And then we need to resist the temptation to interpret what they are saying (or get Rigorous Qual Consultants Inc to interpret it!). To do this, I suspect that the SenseMaker approach (http://www.sensemaker-suite.com/smsite/index.gsp) is the most effective. Would you agree?

    • Kimberly B

      In response to Sabita’s comment:

      I think we need to distinguish between (1) the general ways that we behave, manage a programme, interact with people with different perspectives on the programme – and do more informal monitoring…and (2) our more formal approaches to understanding outcomes and impact – where well-designed evaluation, appropriately applied, can add tremendous value.

      On (1) – I think SenseMaker *can* be useful, in some cases. The downside for me is the reasonably large and formal infrastructure that needs to be built around it – and the associated cost (both time and money – it’s a product you buy). The idea of collecting large amounts of stories (or micro-narratives, or noise – whatever) from your ‘audience/beneficiaries/client base’ and analysing those on a regular basis is a great one – but it doesn’t always require fancy software behind it. And the reality is, we are ALWAYS going to add a level of our own interpretation/analysis to what ‘they’ are saying (to make sense of this in our own minds, to go from findings –> action), but I do like SenseMaker’s ability to allow people to interpret their own stories against pre-defined markers. The added layer of meaning is neat.

      Beyond the ‘LISTENING,’ we’re inevitably going to have blind spots and cognitive biases, etc. – which is where formal evaluation (often impact evaluation) is going to have to come in. I’d disagree strongly with the idea that evaluating to understand the impact of the work that you’re doing is more qualitative, while donor reporting is more quant. Quality evaluation is going to involve both, because they can deliver entirely different things. To be honest, I find this quant/qual discussion a bit frustrating and counter-productive. Use the methods that are going to help answer your questions best!

      (And then of course, take the evaluation findings back to the people who matter and ask them what they think.)

      • Sabita Banerji

        Fair points, Kimberly, and I actually agree with them all, including the qual/quant point. It’s just that in my experience, donors often ask for more quantitative evidence. I agree that it shouldn’t be that way, and also agree with you and Duncan that what is needed is a mix of whatever works best in the circumstances. NB I understand that the SenseMaker approach is free for anyone to use/adapt, but you do have to pay if you want to use their software. I hope that the concept of collecting micro-narratives and allowing self-categorisation will take off on its own, with or without SenseMaker software.

  4. Rick davies

    For those looking for an alternative to sensemaker software, find any free social network analysis/visualisation software and use it to find clusters of stories linked together by the same 1 or more self-signifiers. Email me for help if you need it
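Rick’s suggestion – link stories that share one or more self-signifiers, then look for clusters – doesn’t even need dedicated network software for small datasets. A minimal sketch, with made-up story IDs and signifiers, treating shared signifiers as graph edges and clusters as connected components:

```python
from collections import defaultdict

# Hypothetical sketch of clustering stories by shared self-signifiers.
# Two stories are "linked" if they carry at least one common signifier;
# a cluster is a connected component of that implicit graph.
stories = {
    "s1": {"pride", "income"},
    "s2": {"income", "mobility"},
    "s3": {"mobility"},
    "s4": {"violence"},
}

def story_clusters(stories):
    # Invert the index: signifier -> set of stories carrying it
    by_sig = defaultdict(set)
    for sid, sigs in stories.items():
        for sig in sigs:
            by_sig[sig].add(sid)
    # Flood-fill across stories that share a signifier
    seen, clusters = set(), []
    for sid in stories:
        if sid in seen:
            continue
        frontier, cluster = [sid], set()
        while frontier:
            cur = frontier.pop()
            if cur in cluster:
                continue
            cluster.add(cur)
            for sig in stories[cur]:
                frontier.extend(by_sig[sig] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

print(story_clusters(stories))  # s1–s2–s3 chain together; s4 stands alone
```

With hundreds of stories you would want proper network/visualisation software as Rick suggests, but the underlying idea is just this.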

  5. Danboyi Nuhu

    I think this is an eye-opening piece, Duncan. Both qualitative and quantitative M&E should be the focus of programmes in ascertaining impact, because only thorough methods such as these can reveal a true picture of the data. You also touched on some other variables that could either mar or make this practice work, especially how the quantitative outlook has an overriding issue with VFM.
    In my opinion, and if I have grasped the entire thrust of the piece, the argument is that as programmes embark on MEL, quantitative evaluation should reveal quality and vice versa. In fact, even in new critical theories, particularly in Literature, the argument tends to be that as we analyze meaning, style is naturally revealed. Therefore, it should not be a one-way thing if true impact is to be revealed in a programme.

  6. Claire Hutchings

    I think we’re all in agreement that quantitative and qualitative information gathering have value, offering distinct and complementary information in our efforts to understand change processes. And let’s recognise that decisions about appropriate approaches to data collection and analysis are not determined solely by the nature of the issue that you’re grappling with, but by the question you’re trying to answer – qualitative data gives us an incredibly rich picture and can help us understand an issue, problem, solution, change process etc in a deep and nuanced way, where quantitative analysis (of quant or qual data) can help us to look across the landscape to understand the average experience and generalise. And then of course, there are time and resource constraints which will have important implications for research and evaluation designs.

    For Oxfam’s effectiveness reviews, a very first challenge, before we even got on to evaluation design though, was to identify an appropriate measurement approach (and one, by the way, that could be used across a diverse portfolio of programmes in a myriad of country and sub-national contexts). Multi-dimensional was a given, but even then, what dimensions are important / core? What indicators are relevant? What weighting should they be given? And it is this question about how the concept of “women’s empowerment” is defined, and by whom, that is most interesting to me. While it feels right to me that self-definition be at the heart of empowerment, I’m conscious that there are aspects that I feel should be core to women’s empowerment irrespective, around personal safety and freedom from violence, around equal access to and benefit from education etc. But what to do when our own definitions of core issues are at odds with self-definition?

    All by way of saying that even from a conceptual / definitional standpoint women’s empowerment is an incredibly complex issue.
    While there are ongoing efforts to build more qualitative enquiry into Oxfam’s effectiveness reviews of ‘large n’ interventions, to be honest, it is this definitional piece that is still at the heart of what we are learning and our efforts to strengthen and evolve our approach.

  7. Rick davies

    You can design survey instruments where respondents provide weightings to their own choices/preferences/views and these can be aggregated on a one person = one vote basis
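One way to read Rick’s suggestion: let each respondent distribute points across options however they like, then normalise each person’s weights to sum to one before averaging, so everyone contributes exactly one vote regardless of how many points they handed out. A minimal sketch with invented data:

```python
def aggregate(responses):
    """responses: list of dicts mapping option -> points, each dict being
    one respondent's own weighting of the options. Each respondent's
    weights are normalised to sum to 1 (one person = one vote), then
    averaged across respondents."""
    totals = {}
    for r in responses:
        person_total = sum(r.values())
        for option, points in r.items():
            totals[option] = totals.get(option, 0.0) + points / person_total
    n = len(responses)
    return {option: weight / n for option, weight in totals.items()}

# Illustrative only: two respondents weighting what matters most to them
votes = [
    {"income": 60, "mobility": 40},
    {"income": 10, "self-confidence": 90},
]
print(aggregate(votes))  # income: (0.6 + 0.1) / 2 = 0.35
```

The normalisation step is what prevents an enthusiastic respondent who hands out more raw points from counting for more than anyone else.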

  8. George Cottina

    Thanks, this is exciting reading, and as one woman in Mtwapa, Mombasa, Kenya said it (if I paraphrase your statement a bit): you can’t parachute (read: drop) into a village and ask the women if they are empowered and expect any useful information.

    Having worked with the women who were involved in the Kindernothilfe Self Help Approach to develop a participatory self-monitoring methodology, and consequently to strengthen the participating sponsor NGOs’ monitoring and evaluation capacities and impact orientation, I am now a firm believer in mixed methods in M&E. We worked with the women to discuss and agree on what empowerment in their community would mean and how they would tell they are empowered. The women then agreed, as groups and also as individuals, on certain goals to work towards, with support from several stakeholders (www.ngo-ideas.net). The progress reflection meetings, which took place every six months for three years, generated a lot of interesting qualitative information on women’s empowerment, but also some quantitative measurements.

  9. Jane Carter

    Great to see our article featuring so prominently in your blog post, Duncan! You summarised our arguments very nicely – although as you know, we were not so simplistic as to suggest that you can ‘rock up in a village and ask, do you feel empowered?’ What we actually wrote was, “As Dee and Ibn Ali (2010) argue, the best way to measure empowerment is to ask those directly concerned.” This was a direct reference to the title of their very interesting SIDA publication, ‘Measuring Empowerment? Ask Them: Quantifying qualitative outcomes from people’s own analysis’. In it, they record how in Bangladesh, they asked groups of women and men living in poverty to represent their concept of empowerment through drama – in plays that they conceived and performed themselves. From these plays, rich in anecdotal detail, Dee and Ibn Ali developed a series of statements about empowerment that could be presented to other men and women to see if they resonated with their own experience. Thus they arrived at a quantitative tool – one that they argue can be widely used for evaluating projects that take a rights-based approach.

    In our paper, the three projects discussed were varied in their goals, and not necessarily conceived using a strictly rights based approach – although this is certainly one that our organisation uses widely (http://www.helvetas.org). Our comment was that, “The more that project ‘beneficiaries’ have the opportunity to be actively engaged in the process of knowledge production, the more space they will have to represent reality in their own terms.” As Claire Hutchings very rightly points out, “empowerment” is one of those terms that is extremely difficult to define. This makes it all the more important that we ensure that those concerned are actively involved in its measurement.

  10. Tarun Joshi

    Quite agree with the author here.
    As I see it:
    First we need to decide on why we are doing the M&E. We measure impact for two reasons. One is to show it to our donors, and this M&E could be a short and qualitative exercise. The second is to take measures from the M&E and develop improvement processes, working on specific areas throughout the year.
    Measuring subjective topics like empowerment, or measuring compassion among kids etc., is really difficult. One smart way is the qualitative one; and to get a quantitative output from these topics, there are standard psychometric tests, which will provide results in numbers.

  11. Kaustubh Pandharipande

    This question has been in my mind for many days – thanks for writing on it. I am observing that social change is becoming like service provision and income generation. This might be needed, but if we wish to bring real change it should come with empowerment. Sorry for commenting without reading the full text and the book ‘Questioning Empowerment’, but I will come back after reading those….

  12. rick davies

    One way of viewing empowerment is as an increase in choice. One way of measuring the degree of choice available is by measuring diversity of behaviour in a given area of interest. Ecologists have developed a number of quantitative measures of diversity which may be worth looking at. Back in 1998 at IDS, Andrew Stirling (‘On the Economics and Analysis of Diversity’) made a distinction between three aspects of diversity: (a) variety, being the number of kinds of things; (b) balance, being the relative numbers of each of these kinds of things; and (c) disparity, being the degree to which these kinds are different from each other. The latter is more difficult to measure, but it is possible, especially with ethnographic pile sorting methods.
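The first two of Stirling’s aspects are straightforward to compute. A minimal sketch, using a count of kinds for variety and Shannon evenness as one common proxy for balance (the data are invented for illustration):

```python
import math

def variety(counts):
    """Number of kinds observed (Stirling's 'variety')."""
    return sum(1 for c in counts if c > 0)

def balance(counts):
    """Shannon evenness in [0, 1]: 1 means all observed kinds are
    equally common (one common proxy for Stirling's 'balance')."""
    total = sum(counts)
    shares = [c / total for c in counts if c > 0]
    if len(shares) <= 1:
        return 0.0  # evenness is undefined for a single kind
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(shares))

# e.g. counts of distinct livelihood activities reported by a group
counts = [10, 10, 10]  # three activities, equally common
print(variety(counts), round(balance(counts), 2))  # 3 1.0
```

Disparity, as the comment notes, is harder: it needs a distance measure between kinds, which is where methods like ethnographic pile sorting come in.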

    Some more thoughts on this can be found here: