As Global Handwashing Day approaches, the WASH Results MV team explores the challenges of verifying changes in handwashing behaviour.
Hygiene has tended to be the Cinderella of WASH (Water, Sanitation & Hygiene): relegated to the end of the acronym, with hygiene promotion often treated as an afterthought in larger water and sanitation programmes. But good hygiene is recognised as an essential component in addressing the burden of disease, and hygiene promotion can be a cost-effective way of improving health outcomes. In linking payment milestones to sustained handwashing behaviour, the WASH Results Programme is breaking new ground and pushing against some of the limits of understanding in the sector. The programme has faced key challenges in establishing what reasonable handwashing outcomes might look like, and the most appropriate way to measure them. This post draws on discussions within the Monitoring and Verification (MV) team on some of these issues, and on the resulting internal briefings.
What is a good handwashing outcome?
Across the WASH sector there is little established knowledge on the sustainability of WASH outcomes. Whilst the well-known (and well-resourced) SuperAmma campaign saw a sustained increase in handwashing practice of 28% after 12 months, it remains an open question what level of behaviour change it is reasonable to expect across different countries, contexts and projects. This matters because, in a payment by results (PBR) context, payment is linked to achieving targets. Set targets too high and suppliers risk being penalised for failing to reach unrealistic expectations. Set targets too low and donors risk overpaying.
How do we measure handwashing?
Compounding the uncertainty over what reasonable targets may be is uncertainty about how to measure achievement against them. There are several commonly accepted methodologies, but they all involve compromises. At one end of the scale there is structured observation of household handwashing behaviour. Considered the gold standard in measuring behaviour change, this can provide a wealth of data on the handwashing practices of individuals. But it is also prohibitively expensive to undertake at scale – the most rigorous designs can involve hundreds of enumerators making simultaneous observations. The feasibility of conducting such measurements regularly for a PBR programme is questionable.
A much simpler (and on the face of it, more objective) indicator might be the presence of handwashing facilities (a water container and soap). This is used by the Joint Monitoring Programme (JMP) to measure progress against SDG 6.2 because it is the most feasible proxy indicator to include in large, nationally-representative household surveys which form the basis for the Sustainable Development Goal monitoring (and typically collect data on WASH as just one of several health topics). However, these observations tell us nothing about individuals’ handwashing behaviour, or if the facilities are even being used.
Setting indicators for handwashing outcomes
Within the WASH Results Programme, the suppliers have tended to define handwashing outputs in terms of the reach of handwashing promotion (though evidencing how a programme has reached hundreds of thousands or even millions of beneficiaries leads to its own challenges). They have defined outcomes in terms of:
- knowledge of handwashing behaviour,
- self-reported behaviour, and
- the presence of handwashing facilities.
The suppliers have taken different approaches, considering these indicators separately or combining them into a composite indicator, in order to reach a reasonable conclusion about what the sustained change in handwashing practice has been. Each indicator has drawbacks (self-reporting, for example, has been shown to consistently over-estimate handwashing behaviour), but by triangulating across all of them a more reliable estimate may be reached.
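To make the idea of a composite indicator concrete, here is a minimal sketch of one possible combination rule. The field names and the rule itself (facility observed, plus at least one supporting behavioural indicator) are illustrative assumptions, not the actual method used by any of the suppliers:

```python
# Hypothetical sketch of a composite handwashing indicator.
# Field names and the combination rule are illustrative assumptions only.

def composite_handwashing(household: dict) -> bool:
    """Count a household as practising handwashing only if a facility is
    observed AND at least one behavioural indicator supports it."""
    facility = household["facility_observed"]        # water container + soap seen
    knowledge = household["knows_critical_times"]    # e.g. after defecation
    self_report = household["reports_handwashing"]   # known to over-estimate
    return facility and (knowledge or self_report)

households = [
    {"facility_observed": True,  "knows_critical_times": True,  "reports_handwashing": True},
    {"facility_observed": False, "knows_critical_times": True,  "reports_handwashing": True},
    {"facility_observed": True,  "knows_critical_times": False, "reports_handwashing": False},
]

coverage = sum(composite_handwashing(h) for h in households) / len(households)
print(f"Composite coverage: {coverage:.0%}")  # only 1 of 3 households qualifies
```

The point of requiring the observed facility as well as a behavioural signal is that any single indicator (especially self-report) over-counts; a conjunctive rule trades some sensitivity for a more defensible estimate.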
Where this process has become particularly challenging for the WASH Results Programme is in attempting to measure outcomes at scale and across highly diverse contexts. The devil is in the detail: small differences in how indicators are defined and measured can lead to large variations in results, and in turn to major differences in what suppliers are paid. For example, we may wish to define a standard for the presence of a handwashing facility as a water container and soap, but very quickly issues such as the following arise:
- In some countries, ash is commonly used instead of soap: national governments may actively promote the use of ash, and in some areas, it’s almost impossible to even find soap to buy. But at the same time there is evidence that using ash (or soil, or sand) is less effective than using soap: and for this reason, handwashing with soap is the indicator for basic handwashing facilities for SDG 6.2. Should payment targets follow national norms, or would this be a case of paying for lower-quality outcomes?
- In other areas households choose not to store soap outside because it can be eaten by livestock or taken by crows (a far more common problem than one might imagine). Do we adopt a strict definition and record that there is no soap (and hence no handwashing facility), or is it acceptable if a household can quickly access soap from within a building? Do we need to see evidence that this soap has been used?
- Local practice may mean that handwashing facilities do not have water in-situ, instead people carry a jug of water to the latrine from a kitchen. Recording a handwashing facility only if there is water present may risk a significant underestimate of handwashing practice, but how can we determine if people actually do carry water? And does this practice adversely impact on marginalised groups such as the elderly or people living with disabilities?
And this is just for one indicator. Across all three (knowledge of handwashing behaviour, self-reported behaviour, and the presence of handwashing facilities) the methodology for measuring results can quickly become unwieldy. There are tensions between adopting the most rigorous assessment of handwashing possible (to be more certain that the results reflect changed behaviour), what data it is feasible to collect, and what it is reasonable for the suppliers to achieve. The answers to these questions will depend on what the programme is trying to achieve, the results of baseline surveys, and the costs of measurement and verification of results.
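To see how such definitional choices can move the numbers, here is a minimal illustration using made-up survey records (all field names and data are hypothetical) comparing a strict facility definition against a lenient one:

```python
# Illustrative only: how strict vs lenient facility definitions change
# measured coverage. Records and field names are made-up assumptions.

surveys = [
    {"soap_at_station": True,  "soap_in_house": True,  "water_at_station": True},
    {"soap_at_station": False, "soap_in_house": True,  "water_at_station": True},   # soap kept indoors (crows)
    {"soap_at_station": False, "soap_in_house": True,  "water_at_station": False},  # water carried in a jug
    {"soap_at_station": False, "soap_in_house": False, "water_at_station": True},
]

def strict(record):
    """Soap and water must both be observed at the station."""
    return record["soap_at_station"] and record["water_at_station"]

def lenient(record):
    """Soap anywhere in the household counts; in-situ water is optional."""
    return record["soap_at_station"] or record["soap_in_house"]

print(f"strict:  {sum(map(strict, surveys))}/{len(surveys)}")   # 1/4 households
print(f"lenient: {sum(map(lenient, surveys))}/{len(surveys)}")  # 3/4 households
```

Even on four records, the two definitions give coverage estimates of 25% versus 75% – exactly the kind of gap that, in a PBR programme, translates directly into differences in payment.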
There may not be an easy answer to the question of how to measure handwashing outcomes, but the experience of the WASH Results Programme suppliers has provided useful learning on some of the aspects that need to be thought through. As the programme progresses, the suppliers are continuing to refine their understanding of how best to approach this issue in the countries and contexts they work in, and iterate the indicators they use. What’s next? Well, exploring how the various handwashing indicators relate to improved public health could be interesting…