Measuring WASH sustainability in a payment by results context

Andy Robinson, Lead Verifier for SNV, reports back from a WEDC Conference panel session organised by the three WASH Results suppliers

The three suppliers in the DFID WASH Results programme (SAWRP consortium, SWIFT consortium and SNV) came together at a side event held during the WEDC Conference in Kumasi, Ghana (11-15 July 2016) to present their thoughts on “measuring WASH sustainability in a Payment by Results (PBR) context”.

As Lead Verifier on the SNV contract and a WEDC conference participant, I was invited to join the panel with the three suppliers and make a short presentation on behalf of the e-Pact consortium, explaining e-Pact’s role in WASH Results and sharing some of the initial learning from the perspective of our Monitoring and Verification (MV) team.

Kevin Sansom (WEDC, SAWRP) began by outlining the key differences between PBR and grant programmes. He noted that PBR programmes require significant pre-finance and carry higher risks (particularly when tight timelines are applied), but allow greater flexibility and encourage more rigorous monitoring and evaluation (both internally, within the implementing agencies, and externally, by the verification and evaluation teams).

SAWRP presentation

Mimi Coultas (Plan UK) detailed the sustainability monitoring system adopted by the SAWRP consortium, explaining that some of the elements (sustainability assessment frameworks, outcome implementation manuals and the learning framework) are not linked to payments, but are designed to meet DFID’s requirement for reporting against five different dimensions of sustainability (functional, institutional, financial, environmental and equity).

Mimi noted that there was a lack of clarity at the outset around the criteria for payment (and the criteria for disallowance of payments), which caused some problems and could have been avoided by agreeing these details during a longer inception phase. She also suggested that the sampling approach used by the MV team has the potential “to scale mistakes” by exaggerating the effect of any poor results included in the sample, suggesting problems larger than they actually are. Another comment was that the commercial pressures on the suppliers, all of whom are interested in bidding for any follow-on programmes, might have reduced collaboration and the sharing of lessons learned.

Nonetheless, the SAWRP consortium felt that the programme had produced “amazing results”, with a high level of confidence in the quality and reliability of the results due to the strong scrutiny provided by the MV team. Mimi also noted that the monitoring and evaluation (M&E) focus required by the programme was a positive outcome, leading to a strengthening of M&E systems and the development of better ways of measuring WASH outcomes and sustainability. However, a longer programme duration would have been better, including an inception period during which the results framework and verification approaches could be carefully designed and negotiated.

SNV presentation

Anne Mutta (SNV) talked about the critical importance of political engagement to WASH sustainability, with governance activities integrated into the SNV programme from the start to address this requirement. Where local government capacity for sanitation and hygiene is low, sustainable results will obviously be harder to achieve. She also noted practical sustainability problems, such as heavy rain and flooding (which can wash away sanitation facilities and constrain implementation), and changes in capacity, knowledge and commitment due to issues like government transfers or elections. Anne also agreed that the PBR programme required stronger progress monitoring, to track results and allow course corrections before the household survey results are verified.

SWIFT presentation

Rachel Stevens (TEARFUND) explained that the SWIFT consortium is using household, water point and latrine surveys, as well as local government and local service provider data, to assess sustainability (with two sets of surveys planned – one in mid-2016 and the other at end-2017). The SWIFT sustainability assessments use a similar traffic light system to those described by the other two suppliers, reporting against DFID’s five dimensions of sustainability.

Common challenges

The three suppliers had agreed on a list of common challenges, which were presented by Mimi Coultas (Plan UK). One of the most interesting of these was the risk that PBR encourages implementation in easier contexts – through the selection of less vulnerable and more accessible communities and project areas – in order to reduce both cost and risk.

The suppliers also questioned whether verification was appropriate for all aspects of sustainability, particularly the intangible and more qualitative factors (such as community empowerment), which are often important elements associated with the sustainability of sanitation and hygiene practices and outcomes.

Another potential issue is the reduced reporting burden: because evidence of results generally replaces the detailed progress reporting and evaluation required by conventional programmes, the lessons learned by the programmes may not be well captured or adequately documented.

Common opportunities

The suppliers agreed that, while some aspects of sustainability may be missed, the inclusion of payments for specific sustainability outcomes led to more attention to sustainability than in conventional programmes. Furthermore, the MV team’s work had encouraged greater transparency and accountability.

MV presentation

I made a short presentation on the role of the MV team and the key challenges and opportunities. After describing the composition of the e-Pact team, and introducing Bertha Darteh (Ghana country verifier for the SNV programme, who was in the audience), I explained that we were using “systems-based verification” rather than fully independent verification, which means that we are reliant on the data and reports produced by the suppliers’ M&E systems. As a result, we have to understand these systems well, and identify any weaknesses and any potential for errors, misreporting or gaming of results. DFID’s decision to adopt a systems-based verification approach was based on the assumption it would be cheaper than statistically sampled independent surveys (across such a large population), but the MV experience suggests that there are a lot of unforeseen costs (often to the suppliers) related to this systems-based approach.

Key verification challenges include the large number of closely spaced results, with little time between each verification cycle for the design, review and improvement of the verification process. The SNV programme includes nine country projects, with significant variations in context across the projects, which requires considerable flexibility in the verification system; whereas the other two suppliers’ programmes include multiple implementation partners, each of which has slightly different monitoring and reporting systems, and different priorities and targets, which in turn require adaptation of the verification systems.

I concurred that not enough time had been provided up front for the planning and design of the programme, including the MV framework and activities, which increased the pressure on all stakeholders during the first year of the programme, when suppliers were developing systems, implementing and reporting, with little time to respond to the additional demands of the verification process.

One positive outcome of the need for verified results has been the use of smartphone survey applications, which have greatly sped up and reduced the cost of the survey process; improved data processing and quality control; and made it much easier to verify large-scale results quickly. A key learning from the PBR programme is that these household surveys appear to be a far quicker and more effective way of evaluating programme outcomes than conventional evaluations.

Overall, the PBR approach appears to be improving M&E approaches and systems, encouraging more thinking about how to measure and evidence outcomes and sustainability, and providing reliable feedback on progress and performance at regular intervals during the life of the programme. This feedback enables regular improvements to be made to programme policy, planning and practice (unlike conventional programmes, which often are not rigorously evaluated until the end of the programme duration).

Questions from the floor

When the panel was asked whether the PBR approach encourages efficiency, the suppliers noted that both the programme and the approach encourage scale, which in turn encourages efficiency; however, the additional costs of verification and the related reporting were thought to partially offset the efficiency gains.

A similar question was asked about whether PBR encouraged value for money. The suppliers suggested that they are very confident of their results (compared with conventional programmes, which may over-report results), so the cost per result is clear. They also noted that there is an incentive to reduce costs, but that these reductions may not always be passed on; and, because there is no payment for over-achievement in this programme, any additional results appear to reduce the cost per outcome or result without changing the suppliers’ fixed costs (for example, a supplier paid a fixed amount for reaching 100,000 people who in fact reaches 110,000 lowers the apparent cost per person served, while receiving no additional payment).

Several Ghanaian participants expressed their confusion about the new terminology associated with PBR. Output-Based Aid (OBA) is common in Ghana, notably through a World Bank WASH programme (with payments linked to toilet construction), and it was suggested that there “was no need to introduce yet another acronym for the same thing”. Louise Medland (WEDC, SAWRP) responded that DFID differentiated between the two approaches by PBR’s focus on outcomes (whereas OBA focuses on outputs).

The final question was around PBR’s effect on innovation: the suppliers noted that the design was supposed to encourage innovation, but that the time pressure of the short implementation period limited the scope for innovation. I added that we have seen different outcomes in different contexts: in low-capacity settings, programme management generally provided firm guidelines to the project team to minimise risk, whereas in high-capacity settings there was evidence of innovation driven by the need to achieve results, especially in more difficult contexts where standard approaches were not working.

The general tone of the PBR session was positive, with the suppliers agreeing that the PBR approach has led to reliable and large-scale results, and that the need to report and verify results has led to significant improvements in M&E systems. A lot of learning has taken place, and the suppliers hoped that this learning will inform the design of any future WASH PBR programmes.

Andy Robinson, Lead Verifier on the SNV Contract, WASH Results MVE Team

The paybacks and pains of Payment by Results (part 1)

Our series of reflections on the WASH Results Programme’s learning starts by identifying where Payment by Results has added value.

Payment by Results (PBR) has been “a highly effective means of incentivising delivery at scale” according to the people that deliver the WASH Results Programme. This finding taken from the report of a recent WASH Results learning event may surprise some PBR naysayers. However, as this first post in a series of reflections on the report shows, when the donors, suppliers and verifiers of WASH Results came together to reflect on their experience of actually delivering and verifying the programme, they were able to agree on several positives alongside their concerns.

[Image: Participants of the WASH Results 2016 Learning Workshop exploring areas of agreement.]

The pros and cons of PBR in development are hotly debated online, but the Center for Global Development reminds us that when discussing PBR, we should be clear about who is being paid, for what and how. The particular way in which WASH Results was designed has therefore influenced the experiences of its suppliers (SNV, and the SAWRP and SWIFT consortia). An important feature of the design (extrinsic to the PBR modality) is that delivery was tied to the water and sanitation target (Target 7.C) of the Millennium Development Goals. The programme began with an extremely time-pressured initial ‘output phase’ to December 2015 (focussing on installation of WASH infrastructure), followed by an ‘outcomes phase’ that started this year. Another key design feature is that WASH Results is 100% PBR. The nature of the results, however, was agreed on a case-by-case basis with each supplier; the results include outputs, outcomes and, in some cases, process-type activities.

Sharpening focus on results
It is certainly the case that the WASH Results Programme has delivered huge results within a very tight time-frame. Earlier this year, for example, SWIFT reported having reached close to 850,000 people with two or more of water, sanitation or hygiene services. During the workshop, participants broadly agreed with the statement that PBR was an important factor in incentivising delivery. Some questioned the extent of the contribution of the PBR mechanism, highlighting instead their core commitment to delivery. However, others were clear that the PBR mechanism has sharpened the focus on achieving results:

“Grants have never made it so clear that you ought to deliver. Country directors have to deliver in ways that they have not necessarily had to deliver before and this transpires through to partners, local governments and sub-contractors… Quite a number of these actors have started to applaud us for it.” (Jan Ubels, SNV).

Different consortia passed on the risk of PBR to partners in different ways and the SNV experience reflects their particular approach. But it is evident that the clarity of expectations and pressure to deliver across consortia has been effective in generating results. So, apart from the focus on delivery, what else did people value about the way that PBR has been implemented in the WASH Results Programme?

Flexibility in PBR design
In particular, participants valued the flexibility shown by DFID in setting targets and results milestones to reflect different programme approaches, including agreeing payments for process achievements in some cases. Flexibility in definitions also allowed alignment with local government definitions. The drawback of this flexibility was a lack of clarity about expectations and a lack of standardisation across suppliers.

Flexibility in implementation
Suppliers have been able to reallocate resources in response to changing contexts and priorities without negotiating with the donor. It has also been possible to spread risk across multiple sites of operation, with over-achievement in one location offsetting lower results in another.

Clarity of reporting
The focus on results has driven investment in, and improvements to, monitoring and evaluation (M&E), which are broadly thought to have value beyond the programme. Although reporting requirements that focus exclusively on results are demanding, people welcomed not having to do the activity reporting that is a feature of many other forms of aid.

Some positives were identified during the discussions at the WASH Results workshop, and there is much to celebrate. However, a central theme of the workshop was the ongoing challenge of how to place value (in commercial/financial and value for money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. These challenges will be explored in the next post.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

The report from the workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.