
Beyond a burden: what value does verification offer?

Police officers, auditors, teachers marking homework and giving out detentions – just some of the unfavourable analogies we have heard applied to the role of the independent verification team in the WASH Results Programme. Catherine Fisher highlights the positive roles missing from these comparisons.

Our job is to verify that the achievements reported by Suppliers delivering the programme are accurate and reliable in order that DFID can make payment. It’s easy to see why the relationship between Verifier and Supplier can be an uncomfortable one, but in this post we look at the value of verification and what, if any, benefits it brings to Suppliers.

Why does the WASH Results Programme have a verification team?

Payment by Results (PbR) guru Russell Webster concluded from his review of the PbR literature:

“When commissioners devise a contract where payment is mainly contingent on providers meeting outcome measures, they need to be confident in the data relating to whether these measures are achieved. There are two main issues:

  • Is the provider working with the right people (i.e. not cherry picking those who will achieve the specified outcomes most easily)?
  • Are the data reliable?”

Let’s take each of these in turn.

All the Suppliers in the WASH Results Programme are international NGOs who have continued to pursue their commitment to values such as equity and inclusiveness, even where this has not been incentivised by the PbR mechanism. A central theme in our peer learning workshops has been the ongoing puzzle of how to place value (in both commercial/financial and Value for Money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. Suppliers and the Verification Team have been exploring how PbR can enable alignment with national systems and promote downward, as well as upward, accountability.

There has been no evidence of gaming in the WASH Results Programme. That is not to say it could never be an issue in other PbR contracts; the higher the risk of gaming, the greater the emphasis that needs to be placed on verification. So if verification has not identified any gaming, what value has it brought?

Are the data reliable?

Because the WASH Results Programme relies largely on Suppliers’ own monitoring data, the benefits of verification stem from the question of whether Suppliers’ data about their achievements are reliable. This has been a matter of great debate.

We have found that in some cases it is right not to rely unquestioningly on data that comes from Suppliers’ monitoring systems – those systems are not always as robust as Suppliers themselves thought. Verification has identified several situations where Suppliers could have gone on to inadvertently over-report results, which would have led to DFID paying for results that had not been achieved. Verification ensured DFID only paid for genuine results and helped Suppliers improve their monitoring. We explore the value of improved monitoring to Suppliers later in this post.

One of our Country Verifiers (members of the Verification Team based where the implementation is taking place) recently observed: “From my experience, the WASH Results programme is quite different from the traditional way of doing implementation – having someone who is independent, who checks the Suppliers’ results before they are paid for, makes it quite a good tool to hold Suppliers to account.”

So far, the obvious value that verification in the WASH Results Programme has brought to DFID is confidence in results, through third party information about those results, and a reduced risk of paying for results that were not achieved. But there are more, less apparent, benefits, which we describe towards the end of this post.

Can verification bring value to Suppliers?

Having explored the value of verification to the donor, we now turn to the value for Suppliers.

The same Country Verifier commented that while he felt some Suppliers were initially scared that the verifier was there to spot their mistakes, “I think with time they realise that the role of independent verification is just to check that what they’re reporting is what the reality is when the verifier goes out to sites where they’ve been working. You’re only checking.”

Although Suppliers often view verification as a “burden”, our team identified a set of potential returns for the Suppliers on the effort and investment they put into participating in the process (effects, we suspect, that donors would appreciate). We acknowledge that it can be hard to unpick the value of verification from the value of investing in better monitoring per se, but without overstating our role, we feel we have contributed to:

  • Identifying areas for improvement – verification has revealed flaws in a system thought by the Supplier to be strong and introduced tests that were not previously used. In one example, verification revealed problems with third party enumerators’ work and this prompted greater scrutiny of their data by the Supplier and changes to training processes.
  • Strengthening Quality Assurance – We have seen how the expectation of verifiers checking data can prompt Suppliers to improve their own Quality Assurance (QA) processes, for example, carrying out internal checks prior to submitting data for verification and introducing QA protocols.
  • Increasing the value of data – the process of verification counters the commonly-held belief that “no-one looks at this data anyway”, which, unchecked, can reduce the effort put into data collection and the usability of the data systems.
  • Reducing risk of failure (and withholding of payment) – The requirement to have more and better data can pre-empt some problems. For example, knowing that they would need to demonstrate to verifiers that they had met their water systems targets prompted one Supplier to check in advance whether the declared yield of sources would be enough to serve the population they were planning to reach.
  • Forcing deeper reflection – linking PbR to the achievement of WASH outcomes has forced Suppliers to think about WASH outcomes and how they can be measured and be clearer on definitions to a greater degree than in other, non-PbR, programmes. Verification has by no means driven that process but has contributed to it.
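The advance yield check mentioned in the bullets above amounts to simple arithmetic. As a rough sketch, with an assumed per-capita demand and pumping schedule (illustrative figures, not programme values):

```python
# Hypothetical sketch of an advance yield check: will a water source's
# declared yield be enough to serve the target population?
# The per-capita demand and pumping hours are illustrative assumptions,
# not figures from the WASH Results Programme.

LITRES_PER_PERSON_PER_DAY = 15  # assumed minimum basic-service demand


def population_served(yield_litres_per_hour, pumping_hours_per_day=12):
    """Estimate how many people a source can serve per day."""
    daily_yield = yield_litres_per_hour * pumping_hours_per_day
    return daily_yield // LITRES_PER_PERSON_PER_DAY


def yield_sufficient(yield_litres_per_hour, target_population):
    """True if the declared yield covers the planned population."""
    return population_served(yield_litres_per_hour) >= target_population


# A source declared at 1,000 litres/hour, pumped 12 hours/day, provides
# 12,000 litres, i.e. enough for 800 people at the assumed demand.
```

Running such a check before committing to a target population lets a Supplier spot an under-yielding source while there is still time to develop alternatives.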

We acknowledge that these may not always have felt like benefits to the Suppliers! In particular, some Suppliers have pointed out the trade-off between data collection and learning, and suggested that the burden of verification has stifled innovation and inhibited adaptive programming. Others, however, claim the opposite, which implies there may be other factors at play.

In spite of concerns, there is broad consensus that the PbR modality, of which verification is a part, has driven higher investment in and attention to programme M&E systems. PbR requires Suppliers to be clear about what they are trying to achieve, to collect good quality data to monitor their progress and to use that data to report on their progress regularly. Verification has helped to build confidence in the strength of systems and data on which those processes are based. There is an emerging sense that effective use of reliable M&E data by Suppliers has enabled rapid course correction and so contributed to high achievements across the WASH Results Programme.

And if that is not enough, we think there are benefits for other stakeholders in the countries in which WASH Results is operating. We have seen some benefits from capacity spillover – skills and knowledge acquired through working in or observing the data collection, analysis and verification in the WASH Results Programme are available to other programmes, e.g. among enumerators, Country Verifiers, programme staff and even Government agencies. Again, this is by no means all attributable to verification, but verification has contributed.

Value and the limits of verification

It can be hard to unpick the benefits of verification from benefits that stem from the greater emphasis on data collection inherent to PbR. In some contexts PbR is being used without third party verification. But, in contexts where reassurance is needed about the reliability of the data on outputs and outcomes, we believe verification offers value to the donor, to the Suppliers and, potentially to others in the country in which the programme is operating.

While we have argued for the benefits of verification, there are weaknesses in PbR that verification cannot solve. Verifiers, like police officers, don’t make the rules; they just enforce them. They verify results that have been agreed between the donor and the supplier. As one of our team observed recently: “Payment by Results makes sure you do what you said you would. It doesn’t make you do the right thing….”

However, if verification helps drive a “race to the top” in terms of quality of monitoring systems, the sector will begin to have better data on which to base decisions. Better data about what kinds of programmes produce what kinds of outcomes in which contexts could help donors to fund, and programmers to implement, more of “the right thing”. And the police officers will feel their job has been worthwhile.


Catherine Fisher, Learning Advisor, Monitoring and Verification Team for the WASH Results Programme. This post draws on a reflection process involving members of the Monitoring and Verification team for the WASH Results Programme (Alison Barrett, Amy Weaving, Andy Robinson, Ben Harris, Cheryl Brown, Don Brown, Joe Gomme and Kathi Welle).


Want to learn more about the experience of the WASH Results Programme? Join us in Stockholm during World Water Week for ‘The Rewards and Realities of Payment by Results in WASH’

Truly exceptional? Handling misfortune within Payment by Results

An exceptional event or a predictable adversity? The difference matters more in a Payment by Results (PbR) setting, as this blog post explores.

Conflict, political upheaval, epidemic, drought, flooding and earthquake: the WASH Results Programme has been hit by a wide range of disasters across the 12 countries in which it operates. All these adversities had an impact on the populations involved: some hit the programme’s implementation, some the sustainability of its achievements, and others affected the ability to monitor and verify those achievements.

Discussions on how to deal with these events have involved considering what is within the reasonable expectation of a Supplier to anticipate and deal with, and what is a truly exceptional event for which there should be flexibility around what Suppliers are expected to deliver – whether in the quantity, scope or timing of results.

The challenge of responding to exceptional events is not new for development programmers, but like many challenges, it takes a different shape in the context of a PbR programme. In such programmes, payment is linked to achieving and verifying results, and an impact on results ultimately leads to a financial impact for one or more of the parties. This is particularly challenging in the second phase of WASH Results, where Suppliers are paid for sustaining results achieved in the first “Outputs” phase. The passage of time means that programme achievements (whether physical infrastructure or behaviour change) are more likely to be affected by exceptional events, and suppliers may not have the resources (such as programme field staff) in place to undertake substantial mitigating actions.

Members of our team (Monitoring and Verification for the WASH Results Programme) recently met to discuss their experience of exceptional events in the programme and how these were tackled. Here are three of the examples they discussed followed by the team’s reflections on the issues they raise:

1) The moving population. In this example, a conflict-affected community relocated out of the proposed project area. In response, the Supplier closed the project in that area, but thanks to overachievement of results in other locations, the overall outputs of the programme (in terms of beneficiaries reached) were not reduced and the Supplier did not suffer financially. In this case, the flexibility of PbR meant the risk to Supplier, Donor and Verifier could be effectively mitigated, although some of the original intended beneficiaries were not reached.

2) Destroyed infrastructure, different decisions. In one instance, WASH infrastructure built in the first (output) phase of the WASH Results Programme was destroyed when a river eroded part of a village. Although there was a known risk of erosion, its timing could not be foreseen, nor could the risk be mitigated. The people living in this area were among the poorest and most vulnerable, whom the Supplier did not want to exclude from the programme. The erosion was considered extreme, as evidenced by newspaper reports and other local data, and it was agreed that the area with the destroyed infrastructure would not be included in the sample frame for outcome surveys and so would not affect outcome results.

Meanwhile, in the same country, infrastructure was damaged by flooding, but this was considered expected, not extreme. In contexts where flooding can be expected, the demand for sustained outcomes (in which payment is linked to the sustained use of infrastructure) requires that infrastructure is built in such a way that it can withstand expected levels of flooding, or that plans for reconstruction or repair in the case of damage should be integral to programming. Consequently, areas in which infrastructure was affected by flooding were to be included in the sample frame for the outcome survey, which was amended to include questions about flood damage and beneficiary priorities for reconstruction.

3) When verification is too risky. When conflict erupted in one project location, the programme was able to implement activities regardless and continued to achieve results. However, the security situation on the ground made it too risky (for programme staff and the verification team) for the results to be independently verified through a field visit. In this case, alternative, less representative forms of verification were accepted. There was no adverse impact on the results achieved, or reduction in payment to the Supplier, but there was increased risk around the confidence that could be placed in the results.

Making decisions about risk

In exceptional circumstances, decisions need to be made about who bears the risk (Donor, Supplier, Verifier, Beneficiaries) and what kind of risk (physical, financial, reputational). If financial risk falls exclusively on Suppliers, they need to factor that into their “price per beneficiary” across the programme. Alternatively, Suppliers may choose not to operate in riskier areas, with potential negative consequences for the equity of programme interventions. If donors accept all risk, there is little incentive for Suppliers to programme in ways that account for predictable risks, such as flooding, particularly over the longer term.

Reflections and suggestions emerging from the discussion of these cases included the following:

  • There are different types of impact to consider: effect on population, effect on ability to deliver activities, effect on achievement of results, and effect on ability to verify results. Being clear on the type of impact might aid decisions about who bears the risk and the mitigation strategy.
  • Discussions about risk need to happen during the design phase; one approach is to use a risk matrix that explores what level of risk each party is going to absorb (and so design into the programme) and what would be considered ‘exceptional’.
  • Programmes need to include within their design plans for responding to anticipated events e.g. in areas at risk of flood, include help for villages to cope with normal levels of flooding.
  • Suppliers can minimise their financial and operational risk by balancing their work across a range of fragile and more secure areas, so enabling them to pull out of more hazardous areas in extreme circumstances and achieve results elsewhere. However, if the Supplier commits to working with specific communities in conflict-affected areas, then incentives will need to be set up differently within the results framework.
  • In fragile contexts, a compromise may need to be made on rigour of verification and plans made for reliance on remote verification from the start, e.g. analysis of systems, remote data collection through phone or satellite, and beneficiary monitoring.
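A design-phase risk matrix like the one suggested above could be captured as simple structured data. The entries below are hypothetical examples of how events, risk bearers and planned responses might be recorded; they are not the programme's actual matrix:

```python
# Illustrative design-phase risk matrix. The events, classifications,
# risk bearers and responses are hypothetical examples only.

risk_matrix = [
    # (event, classification, financial-risk bearer, planned response)
    ("seasonal flooding", "expected", "Supplier",
     "flood-resistant construction; repair plans built into programming"),
    ("severe river erosion", "exceptional", "Donor",
     "exclude affected area from the outcome-survey sample frame"),
    ("armed conflict", "exceptional", "Shared",
     "remote verification; rebalance work towards secure areas"),
]


def events_classified(matrix, classification):
    """List the events the parties agreed to treat as `classification`."""
    return [event for event, cls, _, _ in matrix if cls == classification]
```

Agreeing such a table up front makes explicit which party absorbs each risk, and what counts as "exceptional", before any dispute arises.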

Our conclusions about exceptional events in the context of a PbR programme in WASH echo those in many of our previous blog posts. PbR throws a spotlight on an issue, raises questions of how risk is shared between stakeholders, and highlights the importance of planning at design phase and of flexibility, should the unforeseen occur.

If you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us

Reflections on our experience of DFID’s results agenda

As verifiers of a DFID Results Based Finance programme, ODI’s research on the UK’s results agenda prompted us to reflect on our experience.

 

Kakimat latrine eaten by goats

Why context matters when you focus on results #1: Some latrine building projects have to allow for the impact of hungry goats. Photo credit: Chamia Mutuku

 

In their report ‘The Politics of the Results Agenda in DFID: 1997 to 2017’, Craig Valters and Brendan Whitty argue that 2007 saw a new, explicit focus from DFID on aggressively implementing results-based management. Ten years later, we have WASH Results: a DFID-funded programme in which financial risk has been transferred entirely from UK taxpayers to the international NGOs who deliver the work, and who only get paid for results that have been checked by a third party – us. And, as its name promises, the programme is delivering results in water, sanitation and hygiene (WASH). DFID was able to read in the programme’s 2017 annual report (with great confidence in the figures), for example, that WASH Results had reached over 1.1 million people with improved water supply, more than 4.7 million people with improved sanitation, and over 14.9 million people with hygiene promotion.

In our role as Monitoring, Verification and Evaluation Team for the WASH Results programme, our attention is focused less on the politics of the results agenda and more on how results are monitored and verified, and on the very real impact that this approach has on ongoing programme delivery. However, we read the report and accompanying blog post by Valters and Whitty with great interest.

After more than three years of supporting the programme, how does our experience compare with the conclusions and recommendations of the ODI report? One key finding from the research is that some DFID staff have found ways to adhere to the results agenda while retaining flexibility. This theme of the ways in which both donors and programme implementers are working creatively around the “tyranny of results” was one that we heard during last year’s BOND Conference session ‘How to crack Results 2.0’.

How can PBR be adapted to address the imbalance in accountability?

We absolutely agree with Valters and Whitty about the importance of finding a balance between being accountable to UK citizens and to the beneficiaries (poor people abroad). This time last year, we shared our opinion that if verification was designed to include beneficiary feedback and this was linked to payment, Payment by Results (PBR) could actually generate more downward accountability than other funding modalities. However, our team of verifiers felt that the demands of verification for large scale, representative, quantitative information on which to base payment decisions may leave less time, money and inclination to undertake more qualitative work with beneficiaries. So, we suggested that a resource-effective solution to upholding downwards accountability through verification would be to include payment for the existence and effective functioning of a beneficiary feedback system (rather than the results of that system). Payment would be made on verification of the effectiveness of the system in promoting downwards accountability.

We welcome the authors’ call to “Create a results agenda fit for purpose”. Our first reflection would be that a results agenda, at least one hard-wired into a PBR modality, is not going to be appropriate in all contexts and for all intended outcomes, particularly those where outcomes are difficult to predict or challenging to measure. Our set of recommendations to commissioners of PBR programmes complements several of those made by ODI, for example their suggestion that DFID spend more time considering whether its aid spending has the right mix of risks, and the view that regular testing (that leads to course-correction) is important.

The challenge of communicating about costs and value

The authors also call on ministers to be honest with the British public about aid. Part of this, we feel, is making it clearer that Value for Money (VFM) is not synonymous with “cheap”. We feel that the results agenda, particularly a PBR model, should require donors/commissioners to clearly articulate the “value” they expect to see in VFM. Otherwise the importance placed by a donor on achieving clearly costed, verified results could risk squeezing out other values and principles that are central to development programming. A central theme in last year’s WASH Results learning workshop was the ongoing puzzle of how to place value (in both commercial/financial and VFM terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. This is particularly important in an increasingly commercialised aid context, where one supplier’s approach might be to parachute in and build many toilets very quickly and cheaply, whereas another’s is to take longer to work with local stakeholders, norms and materials. This articulation of value may not be as simple as it sounds, when every commitment in a PBR programme, such as reaching the poorest, gender equity, national ownership, sustainable outcomes, etc., needs to be reflected in meaningful and measurable indicators.

Payment By Results can aid course correction

Interestingly, one of the reforms that the authors call for may be an inherent feature of the results framework itself. They say that “interventions need to be based on the best available information, with regular testing to see if they are on the right track”. We have found that a product of the PBR modality is that much greater emphasis is placed on monitoring systems and the generation of reliable data about what is happening within programmes. In WASH Results we have seen cases where the rigorous (compulsive?) tracking of results has identified areas where programmes are failing to deliver and rapid action has then been taken to address that failure. As verification agents we argue that this is due not only to the link between results and payment but also the independent verification of data and systems that has led to better information on which to base decision-making.

Benefits of the results agenda

In this way we think that the focus on monitoring within the results agenda can, in some cases, enable flexibility and innovation. In its reliance on high quality data, it contains within it a driver that could improve the way that development work happens. The results agenda brings benefits – some of which we did not see reflected in the article – but it comes with risks: both ideological, about the ambitions for UK Aid, and practical, for those involved in its delivery. And so we welcome this debate.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

If you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us

Why is it so hard to measure handwashing with soap?

As Global Handwashing Day approaches, the WASH Results MV team explores the challenges of verifying changes in handwashing behaviour.

Hygiene has tended to be the Cinderella of WASH (Water, Sanitation & Hygiene): relegated to the end of the acronym and with hygiene promotion often having been treated as an afterthought in larger water and sanitation programmes. But good hygiene is recognised as an essential component in addressing the burden of disease, and hygiene promotion can be a cost-effective way of improving health outcomes. In linking payment milestones to sustained handwashing behaviour the WASH Results Programme is breaking new ground, and pushing against some of the limits of understanding in the sector. The programme has faced key challenges in how to establish what reasonable handwashing outcomes might look like, and the most appropriate way to measure them. This post draws on discussions within the Monitoring and Verification (MV) team on some of these issues, and resulting internal briefings.

What is a good handwashing outcome?

Across the WASH sector there is little established knowledge on the sustainability of WASH outcomes. Whilst the well-known (and well-resourced) SuperAmma campaign saw a sustained increase in handwashing practice of 28% after 12 months, it’s still an open question as to what level of behaviour change it might be reasonable to expect across differing countries, contexts and projects. This matters, because in a payment by results (PBR) context, payment is linked to achieving targets. Set targets too high and suppliers risk being penalised for failing to reach unrealistic expectations. Set targets too low and donors risk overpaying.

How do we measure handwashing?

Compounding the uncertainty over what reasonable targets may be is uncertainty about how to measure achievement against those targets. There are several commonly accepted methodologies, but they all involve compromises. At one end of the scale there is structured observation of household handwashing behaviour. Considered the gold standard in measuring behaviour change, this can provide a wealth of data on the handwashing practices of individuals. But it is also prohibitively expensive to undertake at scale – the most rigorous designs can involve hundreds of enumerators making simultaneous observations. The feasibility of conducting such measurements regularly for a PBR programme is questionable.

A much simpler (and on the face of it, more objective) indicator might be the presence of handwashing facilities (a water container and soap). This is used by the Joint Monitoring Programme (JMP) to measure progress against SDG 6.2 because it is the most feasible proxy indicator to include in large, nationally-representative household surveys which form the basis for the Sustainable Development Goal monitoring (and typically collect data on WASH as just one of several health topics). However, these observations tell us nothing about individuals’ handwashing behaviour, or if the facilities are even being used.

Setting indicators for handwashing outcomes

Within the WASH Results Programme, the suppliers have tended to define handwashing outputs in terms of the reach of handwashing promotion (though evidencing how a programme has reached hundreds of thousands or even millions of beneficiaries leads to its own challenges). They have defined outcomes in terms of:

  • knowledge of handwashing behaviour,
  • self-reported behaviour, and
  • the presence of handwashing facilities.

The suppliers have taken different approaches, considering these indicators separately or combining them into a composite indicator, to reach a reasonable conclusion about the sustained change in handwashing practice. Each of the indicators has drawbacks (self-reporting, for example, has been shown to consistently over-estimate handwashing behaviour), but by considering all of them, a more reliable estimate may be reached.
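As a purely illustrative sketch of how such a composite indicator might work (the weights and the self-report discount below are invented for the example, not values used by any Supplier):

```python
# Hypothetical composite handwashing indicator. Each input is the
# proportion of surveyed households (0-1) meeting that indicator.
# The weights and the self-report discount are illustrative assumptions.

def composite_handwashing_estimate(knowledge_rate, self_reported_rate,
                                   facility_rate,
                                   self_report_discount=0.7,
                                   weights=(0.2, 0.4, 0.4)):
    """Combine the three outcome indicators into a single estimate
    of sustained handwashing practice."""
    # Self-reporting consistently over-estimates actual behaviour,
    # so it is discounted before being combined with the others.
    adjusted_self_report = self_reported_rate * self_report_discount
    w_knowledge, w_self, w_facility = weights
    return (w_knowledge * knowledge_rate
            + w_self * adjusted_self_report
            + w_facility * facility_rate)
```

With knowledge at 90%, self-reported practice at 80% and facilities present in 50% of households, this illustrative weighting yields an estimate of roughly 60% – well below the self-reported figure, reflecting the discount applied to it.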

Where this process has become particularly challenging for the WASH Results programme is in attempting to measure outcomes at scale and across highly diverse contexts. All too often the devil is in the detail: small differences in how indicators are set and measured can lead to huge variations in results, in turn leading to major differences in what suppliers are paid. For example, we may wish to define the standard for presence of a handwashing facility as a water container and soap, but very quickly issues like the following arise:

  • In some countries, ash is commonly used instead of soap: national governments may actively promote the use of ash, and in some areas, it’s almost impossible to even find soap to buy. But at the same time there is evidence that using ash (or soil, or sand) is less effective than using soap: and for this reason, handwashing with soap is the indicator for basic handwashing facilities for SDG 6.2. Should payment targets follow national norms, or would this be a case of paying for lower-quality outcomes?
  • In other areas households choose not to store soap outside because it can be eaten by livestock or taken by crows (a far more common problem than one might imagine). Do we adopt a strict definition and record that there is no soap (so hence no handwashing facility) or is it acceptable if a household can quickly access soap from within a building? Do we need to see evidence that this soap has been used?
  • Local practice may mean that handwashing facilities do not have water in-situ, instead people carry a jug of water to the latrine from a kitchen. Recording a handwashing facility only if there is water present may risk a significant underestimate of handwashing practice, but how can we determine if people actually do carry water? And does this practice adversely impact on marginalised groups such as the elderly or people living with disabilities?

And this is just for one indicator. Across all three (knowledge of handwashing behaviour, self-reported behaviour, and the presence of handwashing facilities) the complexity of the methodology for measuring results can quickly become unwieldy. There are tensions between wanting to adopt the most rigorous assessment of handwashing possible (to be more certain that the results reflect changed behaviour), what data it is feasible to collect, and what it is reasonable for the suppliers to achieve. The answers to these questions will depend on what the programme is trying to achieve, the results of baseline surveys, and the costs of measuring and verifying results.

There may not be an easy answer to the question of how to measure handwashing outcomes, but the experience of the WASH Results Programme suppliers has provided useful learning on some of the aspects that need to be thought through. As the programme progresses, the suppliers are continuing to refine their understanding of how best to approach this issue in the countries and contexts they work in, and iterate the indicators they use. What’s next? Well, exploring how the various handwashing indicators relate to improved public health could be interesting…

Measuring progress towards SDGs: a Payment by Results perspective

Attending the 2017 WEDC Conference prompted our team members to share their reflections on measuring progress towards SDGs from a Payment by Results (PBR) perspective.

Some of the e-Pact Monitoring and Verification (MV) team recently attended the WEDC Conference – an annual international event focused on water, sanitation and hygiene (WASH), organised by the Water, Engineering and Development Centre (WEDC) at Loughborough University. One of the key themes this year was the Sustainable Development Goals (SDGs): what they are, how close we are to achieving them, and how we are going to monitor them. The SDGs are important for PBR programmes because they influence what programmes aspire to achieve and how they measure their progress.

The recent publication of the first report (and effective baseline) on SDG 6, covering drinking water, sanitation and hygiene, marked a watershed. With the shift towards universal and equitable access, the inclusion of hygiene, and the focus on ‘safely managed’ and ‘affordable’ access, the breadth and depth of data we aspire to have on water and sanitation services is unprecedented. But the first SDG progress report also highlights a yawning data gap: for example, estimates for safely managed drinking water are only available for one third of the global population, and we are only starting to get to grips with how to measure the affordability of services.

As part of the WASH Results Programme, the three consortia are constantly grappling with how to objectively measure complex outputs and outcomes linked to water, sanitation and hygiene. At the same time our MV team is trying to understand how we can verify such measures, and if they are sufficiently robust to make payments against. How do the SDGs influence this process? We have three reflections from our experience of verifying results under the WASH Results Programme:

Reflection 1: the relationship between the SDGs and PBR-programming can be mutually beneficial.

The SDGs help PBR programmes to set ambitious benchmarks

It’s clear that to track progress against the SDGs, the WASH sector is going to have to become a lot better at collecting, managing and analysing an awful lot of data. One of the learning points from the WASH Results Programme is that the verification process requires the consortia (and in-country partners) to take data far more seriously.

Compared to more conventional grant programmes, the Monitoring and Verification functions take on an importance similar to that of financial reporting. One effect of this is that everyone has more confidence that reported results (whether access to water, number of latrines built or handwashing behaviour) accurately reflect reality. As such, PBR programmes can help focus people’s attention on improving service levels.

Conversely, the SDGs help PBR programmes to set ambitious benchmarks and provide an orientation on how to measure them. This is proving important under the WASH Results Programme, which has, at times, struggled with aligning definitions, targets, indicators and how to measure them.

Reflection 2: some of the SDG targets are hard to incorporate into a PBR programme
Physical evidence of a handwashing facility doesn’t guarantee use at critical times

Measuring hygiene behaviour change illustrates this point neatly: the simplest way to understand whether people are washing their hands with soap may appear to be just to go out and ask them. Yet self-reported behaviour indicators are notoriously unreliable. Looking for physical evidence of a handwashing facility (with water and soap) is the approach currently suggested by the WHO/UNICEF Joint Monitoring Programme (JMP), but there is no guarantee that people use such facilities at the most critical times, for example, after defecation or before handling food.

Under a PBR programme (where implementers are paid against pre-defined results), the temptation to take the shortest route to success, namely focusing on getting the hardware in place, may be high. It may therefore be important to complement this indicator with a knowledge-related indicator to capture behaviour change, albeit crudely. This brings a further challenge: how to agree appropriate, payment-related targets when experience of accurately measuring behaviour change is still in its infancy?

Reflection 3: keeping indicators lean is challenging when faced with the breadth and depth of the SDGs

Hygiene behaviour change is just one indicator. Attempting to measure changes robustly across three consortia, eight result areas and two phases (output and outcome) has meant the MV team reviewing a large number of surveys, databases and supporting evidence since 2014.

Under the WASH Results Programme, the sustainability of services is incentivised via payment against outcomes: people continuing to access water and sanitation facilities and handwashing stations for up to two years after they gained access to improved services. Meanwhile, between the final MDG report and the initial SDG report, the number of data sources used by the JMP to produce its water, sanitation and hygiene estimates has more than doubled[1]. Increasingly, data is obtained not from traditional household surveys but from administrative sources such as utilities, regulators and governments.

How to marry these new data ambitions with the need to keep the number of indicators manageable under a PBR programme will be an interesting challenge going forward.

Katharina Welle and Ben Harris, MV Team, WASH Results

[1] Progress on drinking water, sanitation and hygiene: 2017 update and SDG baselines (p50) https://washdata.org/reports 

Can Payment by Results raise the bar for downward accountability?

Payment by Results (PBR) could encourage downward accountability if verification included beneficiary feedback that was linked to payment.

One of the criticisms levelled at PBR is that it promotes upward accountability to donors rather than downward accountability to beneficiaries. Members of the Monitoring and Verification (MV) team for the WASH Results Programme recently discussed whether this reflects their experience and what role there is for beneficiary feedback in verification processes. In this post we summarise those discussions.

The MV team agreed that upward accountability is not necessarily any worse in PBR programmes than under other funding modalities. In fact, the feeling was that the scale of verification is likely to provide a more accurate picture of what is happening across programmes than the glossy, not necessarily representative, human interest pieces that often emerge from grant-funded programmes. Indeed, if verification were designed to include beneficiary feedback, and this were linked to payment, PBR could actually generate more downward accountability than other funding modalities.

However, it is this link between beneficiary feedback and payment that is challenging. Data needs to be unambiguously verifiable, which limits the kinds of areas that can be explored. If payment is only made against very specific (technology-focused) outcomes, then there is a fair chance that some, if not all, of the qualitative and governance issues will be missed or under-emphasised. But it is difficult to come up with effective “soft” indicators that can be linked to payments, as these tend to be more subjective, with less certain targets, more easily affected by the facilitation/enumeration of the survey or measuring process, and more variable by context. So far in the WASH Results Programme, this has generally limited beneficiary feedback within verification to confirming that something happened when the supplier reported it (e.g. that a toilet was built in September 2014), not whether the beneficiary liked the toilet or even wanted it.

Opportunities to include beneficiary feedback in data collection

There is scope for using approaches such as satisfaction surveys or including questions about beneficiary satisfaction in household surveys, where results are triangulated with other methods. In addition, a lot of work is currently being undertaken on scorecard and feedback approaches in a wide range of sectors, including the WASH sector. The use of any of these approaches in a PBR context would require both donor and supplier to be very clear on what is being paid for (e.g. infrastructure development, service provision or behaviour change) and what the triggers are for payment or non-payment.

Including more qualitative approaches in verification, such as focus group discussions or individuals’ stories, is also achievable. The challenge is ensuring they are representative of the programme as a whole, which requires them to be randomly sampled. This, of course, requires more resources.

Managing the resource implications of beneficiary feedback

The question of resourcing is key – it is possible to verify almost anything with unlimited resources, but in the real world, different priorities need to be weighed up against each other. In practice, the demands of verification for large scale, representative, quantitative information on which to base payment decisions may leave less time, money and inclination to undertake more qualitative work with beneficiaries.

A resource-effective approach to upholding downward accountability through verification would be to pay for the existence and effective functioning of a beneficiary feedback system (rather than for the results of that system). Payment would be made on verification of the system’s effectiveness in promoting downward accountability.

This would only work in a systems-based approach to verification such as that used in the WASH Results Programme where verification is based on data generated by suppliers with the MV team assessing the strength of the systems that generate that data through ‘systems appraisals’.  In this scenario, assessment of any beneficiary feedback system would be an extension of the systems appraisal currently undertaken by the MV team; payments could be linked to the results of that appraisal, which is not currently the case.

Finally, it is worth highlighting that the results based reporting requirements, on which the PBR system relies, generate reports that are different from the more qualitative and narrative reports associated with other forms of aid modalities, such as grants.  If donors require human interest stories to communicate a programme’s results to the public, they will need to include this requirement within suppliers’ contracts and Terms of Reference.

In conclusion, our experience suggests that the PBR approach does not inherently promote upward accountability at the expense of downward accountability; it depends on how the contract is designed. We believe that including a requirement for beneficiary feedback as part of verification of results could help to promote downward accountability. We encourage donors to consider this when designing and negotiating PBR programmes.


* This blog post is based on a summary by Catherine Fisher of an online discussion held in June 2016 among members of the Monitoring and Verification team for the WASH Results Programme (Andy Robinson, Joe Gomme, Rachel Norman, Alison Barrett, Amy Weaving, Jennifer Williams and Don Brown).

 

Alignment, aid effectiveness and Payment by Results

To what extent does the Payment by Results approach of the WASH Results Programme follow the aid effectiveness principle of alignment?
One argument for Payment by Results (PBR) is that it can promote “alignment”, which is also an important principle in aid effectiveness. So that’s good, right? But a closer look at how this slippery term is used reveals differences in understanding that are particularly relevant to the use of PBR in international development.

According to some PBR commentators, PBR can bring advantages in situations where there is misalignment between the objectives of donors and implementers, though there is some debate about this argument (see, for example, CGD’s commentary on principle 5 of Clist & Dercon’s 12 principles of PBR). Either way, the alignment in question here is between the objectives of donor and implementer (or Supplier, as we call them in WASH Results; in our case either an individual NGO, SNV, or the SAWRP and SWIFT consortia of non-governmental organisations).

Compare this with the understanding in the Paris Declaration, in which alignment is one of the five principles of Aid Effectiveness. The first principle, Ownership, states: “Developing countries set their own strategies for poverty reduction, improve their institutions and tackle corruption.” The second principle, Alignment, builds on this: “Donor countries align behind these objectives and use local systems.” In this case, the alignment is that of donors behind national strategies and objectives.

The PBR funding mechanism for the WASH Results Programme is the type that DFID calls Results Based Finance. Under this approach, the contract is between a donor and a service provider, not recipient governments (DFID calls the latter Results Based Aid*). In this context, the term “alignment” as used in the PBR literature may be at odds with the concept of alignment in the Paris Declaration for Aid Effectiveness as it encourages alignment between service provider and donor rather than donor with national stakeholders and priorities. This has led some people to claim that PBR promotes upwards accountability to donors at the expense of accountability to national and local stakeholders.

Experience of alignment with national government stakeholders under the WASH Results Programme

At a WASH Results learning workshop held earlier this year, participants shared their views on alignment in the context of the WASH Results Programme. Over the last year of implementation, some concerns were raised that the PBR modality was a barrier to alignment with national priorities and stakeholders in the countries in which the WASH Results is being implemented. During the learning workshop a nuanced picture emerged of the programme’s experience to date as this extract from the workshop report demonstrates:


Alignment is happening, whether incentivised by PBR or not: All of the Suppliers work with local stakeholders as a matter of course. However, differences in programme design affected the extent to which this was incentivised or recognised in payment packages. All of the Suppliers had experienced positive reactions from local stakeholders to the principle of PBR, with one Supplier being asked by local government officials for support in rolling out PBR in one of their programmes.

Value of building alignment into results packages: There was some sense that the focus on outputs in the first phase of WASH Results had taken attention away from areas such as alignment that are not so easily linked to milestones and so opportunities for alignment had been missed. However, SNV took a different approach to other suppliers by building concrete items into their results packages to reflect their work with local partners, e.g. district plans in each of 60 districts in which they work. While this was felt to be a “smart” approach – SNV warned that there are also disadvantages: “We think we have found some meaningful ways to address elements of alignment, but let’s not be too optimistic about these instruments; they focus attention on direct deliverables” (Jan Ubels, SNV).

Flexibility supports alignment: Suppliers value being allowed to change their approach without going through a contract amendment process. In one case a Supplier was able to change definitions of results to better align with national government definitions.  However, there are potential risks in this approach: “Alignment to what? If the government has a much lower CLTS standard than the SDGs – is that still the alignment we are trying to encourage?” (Louise Medland, SAWRP) 

Challenge of timelines: WASH Results has tight deadlines and an emphasis on deliverables, while partners, e.g. water authorities, work to a different, longer timeline and may not deliver at the pace required. There is a limit to how much risk can be transferred to partners in this context.

PBR risks limiting Suppliers to existing relationships: participants agreed that PBR can only be introduced where existing relationships and social capital are in place, and that it would be risky to implement PBR where there is no established relationship.

Additional demands of monitoring for PBR: One Supplier felt that the kind of monitoring carried out for WASH Results could never be the same as that carried out at a local level; it would always be additional to, rather than aligned with, that of the government, although it might stimulate M&E at a local level.

Ways in which alignment could be promoted in future Results Based Finance forms of PBR

To support alignment within a PBR mechanism, participants in the workshop suggested:

  • Valuing alignment at tendering and contracting stage:   Alignment should be considered when costing at the tendering and contracting stage so that prospective Suppliers are competing on an equal basis, given the additional cost (and value) alignment brings.
  • Defining specific hard deliverables, perhaps during a pre-inception phase, that sit somewhere between the output and outcome phases, e.g. district plans.
  • Including specific rewards or incentives in the programme aimed at government, to encourage their buy-in to the programme.

See especially pages 9–10 of DFID WASH Results Programme: Learning Event, e-Pact Consortium, Hove, UK (2016).

Conclusions and looking forward

For PBR to be accepted as an effective form of aid financing it will need to follow all the principles of aid effectiveness, including alignment. The experience of WASH Results so far suggests that this is possible, but it requires careful consideration of how alignment can be promoted during programme design, contracting and tendering, definition of results, and design of verification systems.

Another, more macro, way of supporting alignment using PBR, is for the Independent Verifiers to work much more closely with the national government monitoring systems to verify results. In this model, significant support is given by the donor to improve the national systems and then recipient countries themselves can do the verification. In this case, the use of PBR to fund service delivery would act as a catalyst for strengthening monitoring systems at a national level (although the PBR programmes would need to be of significant scale to be an effective catalyst). This sector-strengthening approach requires long-term investment and a multi-pronged approach, within which PBR projects may only be one element, albeit a potentially catalytic one.

We have not seen much focus on this area so far in the debates around PBR (please alert us to it if we are wrong!). We hope that the experience of our programme will help contribute to that understanding. We will continue to share ongoing lessons learned from implementation as well as findings from the evaluation.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

* For an example of a Payment By Results programme in WASH that uses Results Based Aid (where payment goes from the donor to a recipient government), we suggest readers take a look at DFID’s Support to Rural Water Supply, Sanitation & Hygiene in Tanzania.

The report from the WASH Results learning workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

The paybacks and pains of Payment by Results (Part 2)

Our series of reflections on WASH Results’ learning continues by exploring value and costs in a Payment by Results (PBR) programme.

DFID has been clear from the outset about what it wants from the Water, Sanitation and Hygiene (WASH) Results Programme: WASH interventions delivered at scale within a short time-frame and confidence in the results being reported to the UK taxpayer. DFID got what it wanted, but at what cost? In this post we build on discussions at the WASH Results Programme’s learning event held earlier this year which looked beyond the numbers of people reached with interventions to explore some of the challenges faced in implementing the programme.

Can PBR frameworks be designed to incentivise suppliers to focus on the “harder to reach”?

A central theme in the workshop was the ongoing puzzle of how to place value (both in commercial/financial and Value for Money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. So, does the importance placed by a donor on achieving clearly costed, verified results risk squeezing out other values and principles that are central to development programming? Which values might end up being pushed aside and could this be mitigated through better design?

1. High quality programming

Suppliers hit a major challenge during tendering when DFID asked them to provide a price per beneficiary reflecting the cost to the suppliers of reaching that beneficiary. But calculating this cost is complex: potential suppliers have to decide which costs to fold into the price per beneficiary when bidding – the more costs included, the higher the bid. During the workshop one Supplier asked rhetorically: “Should we say $20 with alignment and $23 without?”

There is some apprehensiveness within the NGO sector about competing with the private sector in this commercial context, and NGOs are often advised to be cautious. Will they be undercut by commercial organisations submitting more attractive (read: cheaper) bids that lack the added benefits NGOs can bring: the social capital and ways of working that are difficult to put a commercial value on but that affect the quality of programming?

DFID has been clear that it does not equate Value for Money (VfM) with “cheap” and it is willing to pay for quality programming, whoever is best placed to deliver it. One improvement to the tendering process would be to articulate some of these added benefits (such as existing relationships and social capital in a programme area) as requirements for bidders. Potential suppliers would thus need to provide evidence within the bidding process.

2. Reaching the hardest to reach 

A criticism levelled at PBR is that by using a fixed “price per beneficiary” approach, it encourages suppliers to focus on people who are easier to reach, a practice sometimes described as “creaming” or “cherry picking”. Stakeholders in the WASH Results Programme are firmly committed to inclusion, and during the workshop they investigated how that could be better incentivised within a PBR framework. Options explored included multi-tiered price-per-beneficiary frameworks (as used in drug and alcohol recovery programmes in the UK), although these carry the risk of increasing complexity and reducing flexibility. Another suggestion for incentivising inclusion was careful selection and wording of the objectives and appropriate verification processes in the tender document; however, this may risk compromising the flexibility to negotiate targets and verification approaches in response to different contexts.
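To see why a multi-tiered framework weakens the "cherry picking" incentive, consider a minimal sketch of how such a payment calculation might work. This is purely illustrative: the tier names and prices below are invented for the example, not drawn from the WASH Results Programme or the UK recovery programmes mentioned above.

```python
# Hypothetical multi-tiered price-per-beneficiary schedule.
# Tier names and prices are invented for illustration only.
TIER_PRICES = {
    "standard": 20.0,       # easier-to-reach beneficiaries
    "hard_to_reach": 35.0,  # e.g. remote or marginalised groups
}

def payment_due(verified_counts: dict) -> float:
    """Sum payment across tiers, paying only for independently verified beneficiaries."""
    return sum(TIER_PRICES[tier] * count for tier, count in verified_counts.items())

# A supplier with 1,000 standard and 200 hard-to-reach verified beneficiaries
# earns more per head for the harder-to-reach group.
print(payment_due({"standard": 1000, "hard_to_reach": 200}))  # → 27000.0
```

The trade-off noted in the workshop is visible even here: every extra tier adds definitions to agree, targets to negotiate, and verification work to classify each beneficiary into the correct tier.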

3. Investing for the future

One related but distinct challenge that emerged during the workshop was that of placing commercial value on activities that invest in future work in the sector. This includes building the social capital to work with local stakeholders and investing in programmatic innovation (which some suppliers suggested had not been possible under the WASH Results Programme). Do the practical implications of PBR risk capitalising on suppliers’ previous investment without contributing to it in turn? This is perhaps not an issue while PBR contracts constitute a small proportion of aid financing, but it would become more so if PBR contracts started to dominate. On the other hand, the benefits that suppliers report, particularly in terms of strengthening monitoring and reporting systems to enable more rigorous real-time results tracking, may also spill over into other programmes, benefitting them in turn. It is too early to draw conclusions, but it may be that a range of different aid mechanisms is required, with the benefits and limitations of each clearly identified.

4. Confidence in results

Finally, it is worth observing a possible trade-off between the confidence in results that DFID values so highly for communicating with taxpayers, and the effectiveness of the aid spending that PBR achieves and the nature of the results it produces. Verification is undoubtedly costly (“someone paid you to come here just to look at that toilet?” a baffled resident of a beneficiary village reportedly asked a verification team member).

But there is another aspect of effectiveness: if PBR prompts suppliers to focus their efforts on what can be counted (i.e. what can be verified at scale without incurring prohibitive expense), this may shift their efforts away from development programming with longer-term and more uncertain outcomes. Put simply, this could equate to building toilets rather than working on sanitation behaviour change interventions, which are considered to generate more sustainable positive outcomes. Of course, there is no guarantee that other forms of aid financing will generate these results, and as there is likely to be less focus on measuring them, comparison would be difficult.

Advice for PBR commissioners

What might this mean for those considering PBR modalities and designing PBR programmes? The experience of WASH Results so far suggests that when designing a PBR programme, commissioners need to:

  • be clear on the value implied in “value for money” – consider all of the “values” that are important, including the value of donor confidence in results;
  • strike a balance between clearly specifying expected results (particularly for more vulnerable people) and being flexible to the contexts in which suppliers are operating;
  • think creatively and collaboratively about how long-term outcomes can be measured;
  • explore hybrid funding models but avoid creating the “worst of all worlds” that lacks the flexibility of PBR, increases complexity and imposes multiple reporting frameworks;
  • consider whether PBR is the right funding mechanism for the kinds of results you wish to achieve (tools are emerging that can help);
  • view the PBR component in the context of the broad spectrum of funding to the sector – seek to maximise linkages and mutual value across the sector.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

The report from the WASH Results learning workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

The paybacks and pains of Payment by Results (part 1)

Our series of reflections on the WASH Results Programme’s learning starts by identifying where Payment by Results has added value.

Payment by Results (PBR) has been “a highly effective means of incentivising delivery at scale” according to the people that deliver the WASH Results Programme. This finding taken from the report of a recent WASH Results learning event may surprise some PBR naysayers. However, as this first post in a series of reflections on the report shows, when the donors, suppliers and verifiers of WASH Results came together to reflect on their experience of actually delivering and verifying the programme, they were able to agree on several positives alongside their concerns.

Participants of the WASH Results 2016 Learning Workshop exploring areas of agreement.

The pros and cons of PBR in development are hotly debated online, but the Center for Global Development reminds us that when discussing PBR, we should be clear about who is being paid, for what, and how. The particular way in which WASH Results was designed has therefore influenced the experiences of its suppliers (SNV, and the SAWRP and SWIFT consortia). An important feature of the design (extrinsic to the PBR modality) is that delivery was tied to the water and sanitation target (Target 7.C) of the Millennium Development Goals. The programme began with an extremely time-pressured initial ‘output phase’ to December 2015 (focussing on installation of WASH infrastructure), followed by an ‘outcomes phase’ that started this year. Another key design feature is that WASH Results is 100% PBR. The nature of the results, however, was agreed on a case-by-case basis with each supplier, and results include outputs, outcomes and, in some cases, process-type activities.

Sharpening focus on results
It is certainly the case that the WASH Results Programme has delivered huge results within a very tight time-frame. Earlier this year, for example, SWIFT reported having reached close to 850,000 people with two or more of water, sanitation or hygiene services. During the workshop participants broadly agreed with the statement that PBR was an important factor in incentivising delivery. Some questioned the extent of the contribution of the PBR mechanism, highlighting instead their core commitment to delivery. However, others were clear that the PBR mechanism has sharpened the focus on achieving results:

“Grants have never made it so clear that you ought to deliver. Country directors have to deliver in ways that they have not necessarily had to deliver before, and this transpires through to partners, local governments and sub-contractors… Quite a number of these actors have started to applaud us for it.” (Jan Ubels, SNV).

Different consortia passed on the risk of PBR to partners in different ways and the SNV experience reflects their particular approach. But it is evident that the clarity of expectations and pressure to deliver across consortia has been effective in generating results. So, apart from the focus on delivery, what else did people value about the way that PBR has been implemented in the WASH Results Programme?

Flexibility in PBR design
In particular, participants valued the flexibility shown by DFID in setting targets and results milestones to reflect different programme approaches – including agreeing payments for process achievements in some cases. Flexibility in definitions also allowed alignment with local government definitions. The drawback of this flexibility was lack of clarity about expectations and lack of standardisation across different suppliers.

Flexibility in implementation
Suppliers have been able to reallocate resources in response to changing contexts and priorities, without negotiating with the donor. It has also been possible to spread risk across multiple sites of operation; overachieving in one location to offset lower results in another.

Clarity of reporting
The focus on results has driven investment and improvements in Monitoring and Evaluation, which is broadly thought to have value beyond the programme. Although reporting requirements focused exclusively on results are demanding, people welcomed not having to do the activity reporting that is a feature of many other forms of aid.

Some positives were identified during the discussions at the WASH Results workshop and there is much to celebrate. However, a central theme in the workshop was the ongoing challenge of how to place value (in commercial/financial and value for money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. These challenges will be explored in the next post.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

The report from the workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

What have we learned about Payment by Results (PBR) programmes from verifying one?

After 19 verification rounds, the WASH Results Monitoring and Verification team shares its suggestions for how to design future PBR programmes.

Martha Keega assesses a latrine in South Sudan

Verification in action: MV team member Martha Keega assesses a latrine in South Sudan

Verification is at the heart of the WASH Results Programme. Suppliers only get paid if we, the Monitoring and Verification (MV) team, can independently verify the results they are reporting. Usually we can: results are reported by Suppliers, verified by us, and Suppliers are paid by DFID to an agreed schedule. However, all Suppliers have received deductions at least once, which, although painful for everyone, is testament to the rigour of the verification process. Overall, the system is working and the results of the programme are clear. But the demands of verification are also undeniable, leading to some aspects of verification being labelled “Payment by Paperwork”, and like any process, it could be improved.

In January 2016 the team* came together to reflect on what we have learned so far from conducting 19 rounds of verification across the three Suppliers. Our discussions focused on verification but inevitably considered wider issues around design of a PBR programme. Here we share some suggestions for design of future PBR programmes, from a verification perspective.

  1. Ensure targets and milestones reflect high level programme objectives
  2. Be clear on targets and assumptions about their measurement
  3. Think carefully about enabling alignment with local government and other WASH stakeholders
  4. Reconsider the 100% PBR mechanism to avoid verification inefficiencies
  5. Consider payments for over-achievement of outcomes, but not of outputs
  6. Include provision for a joint Supplier and Verifier inception phase that will streamline verification
  7. Consider pros and cons of relying more on Supplier-generated evidence as opposed to independent evidence generation

1. Ensure targets and milestones reflect high level programme objectives
The WASH Results Programme has ambitions with regard to equity, gender and disability, and overall health benefits, that are not universally built into the targets and payment milestones agreed between DFID and Suppliers. As a consequence, these ambitions are not explicitly incentivised. Any future programme should think carefully about how its design – especially the targets set in the tender and agreed with Suppliers – upholds objectives based on good practice within the sector.

2. Be clear on targets and assumptions about their measurement
We have found that when payment decisions are riding on whether targets have been met, the devil is in the detail. During implementation, some discrepancies have emerged over targets and how to achieve them. Discussions have taken place about minimum standards for latrines (DFID or JMP definition) and hygiene targets (what does ‘reach’ mean?). In addition, there was occasionally a lack of clarity on how achievement of targets would be measured.

When working at scale, assumptions made about the average size of a household in a particular area, or the best way of measuring the number of pupils in a school, become subject to intense scrutiny. This is quite a departure from how programmes with different funding mechanisms have worked in the past, and the level of detailed evidence required may come as a shock to Suppliers and Donors alike. In response, we suggest that future programmes provide clear guidance on technical specifications relating to targets, and guidelines for evidencing achievements.

3. Think carefully about enabling alignment with local government and other WASH stakeholders
One concern that we discussed in the meeting was that the design of the WASH Results Programme does not sufficiently incentivise alignment with local government. We suspect that this was a result of the scale of the programme and the tight timelines, but also the demands of verification. The need to generate verifiable results can disincentivise both the pursuit of “soft” outcomes, such as collaboration, and working with government monitoring systems.

We suggest that PBR programmes think carefully about how to incentivise the devolution of support services from programme teams to local governments, and to other sector stakeholders, during the life of the programme – for example by linking payments to these activities. Programme designers should also consider how to encourage long-term strengthening of government monitoring systems.

4. Reconsider the 100% PBR mechanism to avoid verification inefficiencies
The merits or otherwise of the 100% PBR mechanism in the WASH Results Programme are subject to much discussion; we considered them from a verification perspective. We believe that, in response to the 100% PBR mechanism, some Suppliers included input- and process-related milestone targets to meet their internal cash flow requirements. In some cases, this led to verification processes that required high levels of effort (i.e. paperwork) with relatively few benefits.

We suggest that people designing future PBR programmes consider non-PBR upfront payments to Suppliers, to avoid the need to set early input and process milestones, and run a substantial inception phase that includes paid-for outputs for Suppliers and Verifiers. In the implementation phase of the WASH Results Programme, payment milestones have been mainly quarterly, requiring seemingly endless rounds of verification that put pressure on all involved, particularly Supplier programme staff. In response, we suggest that payments over the course of a programme be less frequent (and so possibly larger), requiring fewer verification rounds and allowing greater space between them. This may have implications for the design of the PBR mechanism.

5. Consider payments for over-achievement of outcomes, but not of outputs
The WASH Results Programme does not include payment for over-achievement. Over the course of the programme, some Suppliers have argued that over-achievement should be rewarded, just as under-achievement is penalised. As Verifiers, we agree that paying for over-achievement of outcomes would be a positive change in a future PBR design. However, there were concerns among our team that encouraging over-achievement of outputs could have unintended consequences, such as inefficient investments or short-term efforts to achieve outputs without sufficient attention to sustainability and the quality of service delivery.

6. Include provision for a joint Supplier and Verifier inception phase that will streamline verification
It is broadly accepted that the WASH Results Programme would have benefited from a more substantial inception phase with the Verification Team in place at the start. Our recommendations about how an inception phase could help streamline and strengthen verification are as follows:

  • Key inception outputs should include a monitoring and results reporting framework agreed between the Supplier and the Verification Agent. Suppliers and Verifiers could be paid against these outputs to overcome cash flow issues.
  • The inception phase should include Verification Team visits to country programmes to establish an effective dialogue between the Verifiers and Suppliers early on.
  • If Suppliers evidence their achievements (as opposed to independent collection of evidence by the Verification Agent – see below), assessment of, and agreement on, adequate results reporting systems and processes needs to be included in the inception phase.
  • Run a ‘dry’ verification round at the beginning of the verification phase where payments are guaranteed to Suppliers irrespective of target achievement so that early verification issues can be sorted out without escalating stress levels.

7. Consider pros and cons of relying more on Supplier-generated evidence as opposed to independent evidence generation
In the WASH Results Programme, Suppliers provide evidence against target achievements, which is subsequently verified by the Verification Team (we will be producing a paper soon that outlines how this process works in more detail). Is this reliance on Supplier-generated evidence the best way forward? What are the pros and cons of this approach as compared with independent (verification-led) evidence generation?

Indications are that the PBR mechanism has improved Suppliers’ internal monitoring systems, and has shifted the internal programming focus from the finance to the monitoring and evaluation department. However, relying on Suppliers’ internal reporting systems has required some Suppliers to introduce substantial changes to existing reporting systems and the MV team has faced challenges in ensuring standards of evidence, particularly in relation to surveys.

We have some ideas about pros and cons of Supplier-generated evidence as opposed to evidence generated independently, but feel this can only be fully assessed in conversation with the Suppliers. We plan to have this conversation at a WASH Results Programme Supplier learning event in March. So, this is not so much a suggestion as a request to watch this space!

Coming up…

WASH Results Programme Learning Event:  On March 7 2016 Suppliers, the e-Pact Monitoring & Verification and Evaluation teams, and DFID will convene to compare and reflect on learning so far. Key discussions at the event will be shared through this blog.

Verification framework paper: an overview of how the verification process works in the WASH Results Programme. This will present a behind-the-scenes look at verification in practice and provide background for future lessons and reflections that we intend to share through our blog and other outputs.

* About the MV Team: In the WASH Results Programme, the monitoring, verification and evaluation functions are combined into one contract with e-Pact. In practice, the ongoing monitoring and verification of Suppliers’ results is conducted by one team (the MV team) and the evaluation of the programme by another. The lessons here are based on the experience of the MV team, although members of the Evaluation team were also present at the workshop. Read more about the WASH Results Programme.