What do we know about when data does/doesn’t influence policy?

Josh Powell, Chief Strategy Officer at Development Gateway, weighs in on the Data and Development debate.

While development actors are now creating more data than ever, examples of impactful use are anecdotal and scant. Put bluntly, despite this supply-side push for more data, we are far from realizing an evidence-based utopia filled with data-driven decisions.

One of the key shortcomings of our work on development data has been our failure to develop realistic models for how data can fit into existing institutional policy/program processes. The political economy – institutional structures, individual (dis)incentives, policy constraints – of data use in government and development agencies remains largely unknown to “data people” like me, who work on creating tools and methods for using development data.

We’ve documented several preconditions for getting data used, which can be thought of as a cycle:

While broadly helpful, I think we also need more specific theories of change (ToCs) to guide data initiatives in different institutional contexts. Borrowing from a host of theories on systems thinking and adaptive learning, I gave this a try with a simple 2×2 model. The x-axis can be thought of as the level of institutional buy-in, while the y-axis reflects whether available data suggest a (reasonably) “clear” policy approach. Different data strategies are likely to be effective in each of these four quadrants.
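As a rough illustration, the 2×2 can be read as a lookup from two (in practice, qualitative) judgments to a candidate strategy. The sketch below is hypothetical – the boolean inputs stand in for assessments that are anything but binary in the real world – but it makes the quadrant logic explicit:

```python
# Hypothetical sketch of the 2x2 model: map (institutional buy-in,
# clarity of the policy signal in the data) to a candidate data strategy.
# Quadrant names follow the post; the binary inputs are a simplification.

def data_strategy(high_buy_in: bool, clear_policy_signal: bool) -> str:
    """Return the strategy suggested by the quadrant the situation falls in."""
    if high_buy_in and clear_policy_signal:
        return "Command and Control"   # top-right: enforce via indicators/budgets
    if high_buy_in and not clear_policy_signal:
        return "Analyze and Define"    # bottom-right: exploratory analysis
    if not high_buy_in and clear_policy_signal:
        return "Name and Shame"        # top-left: publicize the evidence
    return "Positive Deviance"         # bottom-left: find what already works

# e.g. strong central buy-in plus clear performance indicators:
print(data_strategy(high_buy_in=True, clear_policy_signal=True))
# → Command and Control
```

The point of the model, as the examples below show, is that a strategy lifted out of its quadrant tends to misfire.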

So what does this look like in the real world? Let’s tackle these with some examples we’ve come across:

Command and Control (top-right): In Tanzania and elsewhere, there is a growing trend of performance-based programs (results-based financing or payment by results) that use standard indicators to drive budget allocations. In Tanzania, this includes the health sector basket fund and a large World Bank results-based financing program. Note that this process of “indicator selection => indicator performance => budget allocation” provides a clear relationship between data and policy outcome, lending itself to a top-down approach (well characterized by a Tanzanian government official, who told me that “people do what you inspect, not what you expect”). Where fixed policies and actionable data are present, this command and control approach makes sense – but be careful before trying to move this approach to another quadrant.

Positive Deviance (bottom-left): Here the relationships between data and policy are much less clear, and neither lends itself to action. So why not drill down and find what is already emerging and working on the ground (positive deviance)? The approach: (i) identify high-performing districts, (ii) study the factors (both internal and external) that differ from the norm, (iii) develop specific theories of change, and (iv) use peer learning or other dissemination methods to test and learn from these “outliers”.
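Step (i) – flagging high-performing districts – can be sketched as a simple outlier screen. The district names and coverage figures below are invented for illustration, and a real screen would control for context before calling anything a deviant:

```python
from statistics import mean, stdev

# Invented example data: district -> coverage rate (%) on some health indicator
coverage = {"A": 52, "B": 48, "C": 55, "D": 81, "E": 50, "F": 47}

mu = mean(coverage.values())
sigma = stdev(coverage.values())

# Flag "positive deviants": districts well above the norm (threshold of
# 1.5 sample standard deviations is an arbitrary illustrative choice)
deviants = [d for d, v in coverage.items() if v > mu + 1.5 * sigma]
print(deviants)  # → ['D']
```

Steps (ii)–(iv) are where the real work happens: the screen only tells you where to look, not why district D differs.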

Name and Shame (top-left): Where evidence-based policy options exist but actors are either unaware of them or unwilling to adapt, good old-fashioned naming and shaming can work wonders. We saw this in Ghana: a District Health Director presented local (high) maternal mortality rates to the District Assembly, which rapidly led to an increase in health worker coverage and engagement with community education groups. When we spoke to the director recently, she reported that district maternal mortality rates had been cut in half over a 2-year period (of course, many factors may have contributed to this). Naming and shaming can be a powerful motivator when data suggest clear policy changes, but it can otherwise be difficult to replicate.

Analyze and Define (bottom-right): Here, relevant decision-makers are keen to solve a problem, but the data may not provide a clear-cut solution. This is where the “elbow grease” approach of exploratory analysis and comparison of inputs (e.g. aid allocation) and outcomes (e.g. poverty) comes in. As an example, Nepal’s Ministry of Finance struggled with planning its investments and use of external resources, and was frustrated by the transaction costs of working with 30+ development partners. Using data from its Aid Management Platform, the government did its own analysis to create a development cooperation policy, outlining the “rules of the game” for development partners in Nepal. Exploratory data analysis for policy should be accompanied by an adaptive policy mindset: perceived relationships within the data may turn out to be blind alleys, requiring flexibility to test and change course when theories are disproven.

So what? Putting some ToCs to the test

At Development Gateway, our Results Data Initiative is working with country governments and development agencies, using a PDIA approach. At country level, we’re convening quarterly inter-ministerial steering committees (authorizers/problem holders) to identify problems or decisions for which they want to use data, and technical committees (testers/problem solvers) to identify ways to use data to get at these issues. We plan to work only with the government’s existing data sources – to learn what they can (and cannot) do with what they’ve got. Throughout this program, we’ll be applying these ToCs, and will report back on what we learn. I know we’re not the only ones thinking about this, and would love to hear what others have done to use data to influence development programming.



7 Responses to “What do we know about when data does/doesn’t influence policy?”
  1. This is a great blog, and all the better with the fantastic comics!

    Your 2×2 framework is very useful. Too often, solutions are applied in the wrong context/quadrant (the classic hammer in search of a nail). When results-based finance is pushed in contexts of low political buy-in or, worse, limited usefulness of data (contexts where there’s more noise than signal), it’s likely to have all sorts of unintended consequences. Likewise, the ‘name & shame’ approach in contexts of high political buy-in may backfire and make policymakers more averse to using data to drive better decision making.

    One challenge is that entities/organizations oftentimes specialize (and for good reason) in one of these quadrants. An organization that’s great at naming and shaming may not be great at results-based finance. For PDIA to be successful, we need mechanisms for local problem-identifiers to flexibly and quickly access specialized knowledge to help them solve their own problems. Your approach in setting up these technical committees may be one way to do that.

  2. Varja Lipovsek

    Ah, the illusion of command and control! How we wish it were a real quadrant to inhabit! Alas, even in the case presented as an example (Tanzania health basket), there is a glaring gap between data-nimble policy phrasing at the national level and what actually goes on in the dispensaries, health centers, and other front-line public services. The donor community and the government have been pouring funding into health for years. The entire system of purchasing, storing, dispensing and managing essential drugs and medicines is very sophisticated, completely digitized and real-time – courtesy of significant donor dollars and many vertical programs. And yet! If you had checked the government’s own Medical Stores Department in late 2016, you would have found a 47% central stock-out of essential drugs and medicines. A local CSO made that a point of discussion with the media, to which the Ministry of Health responded by saying that figure is not correct – that, in fact, drug availability stands at 53%. It would be funny if it were not so depressing. A long story to say that even when data is there, of high quality, and on the surface meant to be used for a performance-based system in a “command and control” scenario, the reality is very different. As a running theme on this blog would have it, the real reasons have nothing to do with data, but with power dynamics and politics. So what I do really appreciate about this blog is the last (rather brief) point – on testing ToCs and possible solutions with inter-ministerial steering committees (authorizers/problem holders). That part I’d love to hear more about.

    • Hi Varja,
      Thanks very much for the thoughtful response! I certainly agree with your points on command and control systems – in fact the point I was hoping to draw out is that the top-right and bottom-left both should be at play within the Tanzania example: top-right is used at the central level to drive a particular decision (funding allocation based upon performance on specific methods), while bottom-left is likely the “more messy” approach taken to actually drive better performance. I think the key point here is to think of the 2×2 in terms of specific decisions, and that a particular system/policy may require working in different quadrants at different levels/moments of the system. Hopefully this makes sense!

      In terms of the PDIA/TOC testing at the bottom – currently working on a separate post that will be on the Development Gateway blog shortly. Very much hope you’ll read and engage!

  3. Josh – Thanks for the thoughtful piece. Before my current work, I was in the HQ office of PEPFAR, which in 2015 effectively mandated a command and control style of organizational change, shifting towards data-justified decisions. I think most would agree a shift of that magnitude–several US agencies, thousands of partners and staff globally–was unprecedented in foreign assistance; however, the verdict is still out on the ultimate benefits to program and health outcomes. I maintain that the reason PEPFAR was able to pivot so rapidly was simply because the requirement to use data was directly linked to the budget that was provided. Econ 101: line up your desired behavior with compelling incentives.

    The same success factor has been named in the Tanzanian Essential Health Intervention Program (TEHIP)–still what I consider to be one of the most successful health and development interventions yet. (Here’s a quick fact sheet: https://www.idrc.ca/sites/default/files/sp/Documents%20EN/Burden-of-disease.pdf, but I highly recommend the full book). District managers were encouraged to use data and were provided tools, but most importantly, they were given discretionary income to spend on filling gaps identified by their data analysis. Ultimately, the districts with TEHIP cut child mortality by 40% relative to all other districts in Tanzania.

    Though it can be effective in the short term, the main challenge I see with command and control is that the incentives are not really aligned for the worker bees (those doing the bulk of management and implementation). PEPFAR’s pivot has dramatically increased workload on staff at all levels, both requiring them to rapidly develop new skills and burn the candle at both ends to meet deadlines. Not inherently bad things, but many of these people were already quite stretched and that can lead to a demoralizing effect and tendency to do the minimum required.

    At country level, I’ve seen all too often that “political buy-in” for using data to drive programs varies tremendously within a country (despite official policy or guidelines) and really is driven more by specific agents of change. The degree to which those agents are in positions of authority and can either compel or inspire actual data-driven decisions, matched with effective accountability mechanisms, is the real question. We can bolster political cover for our work in the form of better standards and stated policies, but ultimately nurturing these agents of change and putting the right tools in their hands at the right times is where the real change will happen. Further, all the items in your bubble diagram above will have different answers depending on use case (e.g., community worker, clinician in health facility, district health manager, central program coordinator, central financial planner, etc.). I like the idea of the 2×2 as a start for simply explaining some of the challenges with data use, but feel it may need to be extended to capture the challenges related to decision type and use-case incentives.

    For the Malawi data use project, we took a different approach/starting point to measure the same phenomenon. We collected data on the most critical decisions being made from the community up to the central level for the HIV program, and then found out which data were used to inform these decisions. When you look at this quantitatively, only a small number of data elements are really valued by decision makers, which raises many questions. Check out the results of this initial work here: kuunika.org. Any thoughts you or others have as we further develop the project would be most welcome!

  4. Thanks Tyler!
    100% agreed – the 2×2 is hopefully just a starting point for an ongoing conversation driven throughout our work (and Kuunika and other programs like these). I think the key takeaway is hopefully to think more strategically about navigating fairly complex political systems with different tools for different decisions/moments. Definitely an admirer of the work you all have done on mapping decisions and we are trying to emulate a similar approach: start with the decisions/problems, understand the actors/constraints/incentives, THEN think about the method/tools/data to fit (and very much to your point that within a given “problem” we may need to think of different approaches for different actors at each level).

  5. Andrew

    Hi Josh,

    I take it you realise that your cycle for data is pretty much the Policy Cycle. A beneficial way to build more data-driven decision making would be to see how it augments each stage of the policy cycle. I work in a field called futures, and that is how we made strides with the policy community.

  6. Interesting blog indeed. A quadrant that needs to be central to the discussion is one where there is no relationship between data and policy at all. There are no fixed policies in place (or they are often ignored), and actionable data are never presented – and no one dares to ask. Policies are made based on the personal likes and whims of the leader/leadership. You cannot shame, because that only works with leadership that pays attention to it. I think we can all agree that leadership sets the tone at the top, and it filters down like water from a mountain to every corner of the organization or country. That tone or culture, and its inherent behaviors, co-opts even the most resistant. This phenomenon is clearly not limited to developing countries.
    My argument is simple: we will continue to face huge challenges with data demand and use if there is no meaningful accountability in a country. I would even dare to assert that, with few exceptions, a country’s development is directly correlated with its level of accountability. In donor-funded projects, participants are often incentivized to use data for decision making. Additionally, there is a potential price to pay – including losing one’s job – for not adhering to that requirement. Imagine the potential overnight increase in the demand for data if administrators in developing countries knew for certain that they would lose their jobs if data were not used for decision making! In the absence of real accountability, the question becomes: why would any leadership care about data-driven decision making when there is no real consequence for not applying it? It is really telling that there is no funding for qualified data expert positions in these countries.