Why did help arrive so late? Evidence v Incentives in the Horn of Africa drought.

January 18, 2012

By Duncan Green

Plenty of people saw a crisis looming in the Horn of Africa (and said so), but the system was largely unable to respond until people actually started dying. From the exec sum of A Dangerous Delay: The cost of late response to early warnings in the 2011 drought in the Horn of Africa:

“The 2011 crisis in the Horn of Africa has been the most severe emergency of its kind this century. More than 13 million people are still affected, with hundreds of thousands placed at risk of starvation. One estimate suggests that 50,000–100,000 people have died. This crisis unfolded despite having been predicted. Although brought on by drought, it was human factors which turned the crisis into a deadly emergency.

Tragically, the 2011 crisis is not an isolated case. The response to drought is invariably too little, too late, representing a systemic failure of the international system – both ‘humanitarian’ and ‘development’. In the Horn of Africa, there were indications that a crisis was coming from as early as August 2010. In November 2010, these warnings were repeated, and they became more strident in early 2011. Some actors did respond, but full scale-up only really happened after the rains had failed for a second successive time (see chart, below). By this time, in some places people were already dying. Many had lost their livelihoods, and many more – particularly women and children – were suffering extreme hardship. The scale of death and suffering, and the financial cost, could have been reduced if early warning systems had triggered an earlier, more substantial response.

Why was the international system so slow in responding to accurate early warnings? One reason is that raising large sums for humanitarian response currently depends on getting significant media and public attention – which did not happen until the crisis point was reached.

[Chart: Horn drought – warnings v dollars]

Decision makers are often not comfortable with uncertainty and forecasts, requiring hard data before initiating a response. So, while many people ‘on the ground’ in the region – representatives of many agencies and institutions, and communities themselves – were aware of the impending crisis and trying to set alarm bells ringing in January and February 2011, they were not always able to get traction ‘further up the chain’ from those who needed to act to avert another crisis.”

The underlying problem is that the incentives (or lack of them) prevent the system (both national governments and international aid agencies) from acting on early warnings, even though the early warning system itself is working pretty well. Why is that?

• “fear of getting it wrong – with both financial and reputational risk at stake;
• fear of being too interventionist – undermining communities’ own capacities to cope;
• fatigue – ‘there are droughts every year’ – encouraging an attitude of resignation to the high levels of chronic malnutrition, and an inability to react to the crisis triggers.”

What could change that?

“The decision to respond is ultimately a political one. National governments often see an emergency declaration as a sign of weakness, especially if there is a drive for food self-sufficiency. This can make it difficult for humanitarian agencies to declare an emergency themselves. Early response is more likely when there are clear links with those directly affected by the food crisis – thus multi-party democracy and a free press are necessary, but not always sufficient for the politically marginalized.
A strong, vibrant civil society voice is required to ensure that there is a political price for failure to respond. For the donors, their relationship with national governments is a key determinant of early response. Although humanitarian aid should be exempt from political conditionality, political differences can seriously delay the response, as in Somalia in 2011.”

In the end it all comes down to dealing with uncertainty (echoes of my recent reading…)

“Responding on the basis of forecasts instead of hard data requires a shift in dealing with uncertainty. Currently, uncertainty too often stifles action; one study in Kenya found that while forecasts allow for pre-positioning of food stocks, national decision makers often do not rely on them for scaling up a response. Forecasts involve uncertainty: they are inevitably based on data which is not totally comprehensive and are tinged with judgement; the earlier the warning, the less accurate it is likely to be. Yet this uncertainty is not unquantifiable – standard risk management techniques allow us to convert this uncertainty into risk, which can then be managed and minimised.

[Figure: risk probability/impact 2x2 chart]

The figure shows a typical risk impact/probability chart, which plots the probability that a hazard will occur against its impact. Clearly, the most dangerous risks are those with high impact and high probability; these are the risks that should be prioritised for action, and require the closest attention. Using this logic, it would have been clear from around January 2011 that the high probability of poor March–May rains in the Horn of Africa, magnified by the failure of the previous rains in late 2010, would constitute a critical risk that needed to be addressed immediately.

The principles of risk reduction and management are well accepted in other fields, such as insurance (where paying money upfront is regarded as a responsible approach to prevent high losses in the event of a crisis) and public vaccination campaigns (to prevent epidemics and reduce medical costs). These principles must be embedded in short-term emergency response, longer-term development work and government investment programmes.”

It’s on the front page of the Guardian, a top story on the BBC, and I’m on press duty – could be a busy day.
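
The report’s risk matrix logic is easy to make concrete. Below is a minimal sketch (in Python; the hazards, probabilities and impact scores are invented for illustration, not taken from the report) of how probability-times-impact scoring can turn uncertain forecasts into a ranked list that flags the high-probability, high-impact risks first:

```python
# A rough sketch of the probability x impact logic behind a standard risk matrix.
# All hazards, probabilities and impact scores below are invented for illustration.

def risk_score(probability: float, impact: float) -> float:
    """Classic risk score: likelihood of the hazard times its impact (both scaled 0-1)."""
    return probability * impact

def quadrant(probability: float, impact: float, threshold: float = 0.5) -> str:
    """Place a hazard in one quadrant of a 2x2 probability/impact chart."""
    prob_band = "high" if probability >= threshold else "low"
    impact_band = "high" if impact >= threshold else "low"
    return f"{prob_band} probability / {impact_band} impact"

# Hypothetical hazards as a decision maker might have scored them in January 2011.
hazards = {
    "March-May rains fail after failed late-2010 rains": (0.7, 0.9),
    "Localised livestock disease outbreak": (0.3, 0.5),
    "Fuel price spike raises food transport costs": (0.4, 0.3),
}

# Rank hazards by score so the critical (high/high) risks surface first.
for name, (p, i) in sorted(hazards.items(), key=lambda kv: risk_score(*kv[1]), reverse=True):
    print(f"{risk_score(p, i):.2f}  {quadrant(p, i):<30}  {name}")
```

With these assumed numbers, the failed-rains scenario dwarfs the others – which is the report’s point: by January 2011 it belonged firmly in the high-probability, high-impact corner of the chart.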
