Andy Robinson, Lead Verifier for SNV, reports back from a WEDC Conference panel session organised by the three WASH Results suppliers
The three suppliers in the DFID WASH Results programme (SAWRP consortium, SWIFT consortium and SNV) came together at a side event held during the WEDC Conference in Kumasi, Ghana (11-15 July 2016) to present their thoughts on “measuring WASH sustainability in a Payment by Results (PBR) context”.
As Lead Verifier on the SNV contract and WEDC conference participant, I was invited to join the panel with the three suppliers and make a short presentation on behalf of the e-Pact consortium – to explain e-Pact’s role in WASH Results and elaborate on some of the initial learning from the perspective of our Monitoring and Verification (MV) team.
Kevin Sansom (WEDC, SAWRP) began by outlining the key differences between PBR and grant programmes. He noted that PBR programmes require significant pre-finance and carry higher risks (particularly when tight timelines are applied), but allow greater flexibility and encourage more rigorous monitoring and evaluation (both internally, within the implementing agencies, and externally, by the verification and evaluation teams).
Mimi Coultas (Plan UK) detailed the sustainability monitoring system adopted by the SAWRP consortium, explaining that some of the elements (sustainability assessment frameworks, outcome implementation manuals and the learning framework) are not linked to payments, but are designed to meet DFID’s requirement for reporting against five different dimensions of sustainability (functional, institutional, financial, environmental and equity).
Mimi noted that there was a lack of clarity at the outset around the criteria for payment (and the criteria for disallowance of payments), which caused some problems and could have been avoided by agreeing these details during a longer inception phase. She also suggested that the sampling approach used by the MV team has the potential “to scale mistakes”: extrapolating from any poor results included in the sample can exaggerate their effect, suggesting problems larger than they actually are. Another comment was that the commercial pressures on the suppliers, all of whom are interested in bidding for any follow-on programmes, might have reduced collaboration and the sharing of lessons learned.
Nonetheless, the SAWRP consortium felt that the programme had produced “amazing results”, with a high level of confidence in the quality and reliability of the results due to the strong scrutiny provided by the MV team. Mimi also noted that the monitoring and evaluation (M&E) focus required by the programme was a positive outcome, leading to a strengthening of M&E systems and the development of better ways of measuring WASH outcomes and sustainability. However, a longer programme duration would have been better, including an inception period during which the results framework and verification approaches could be carefully designed and negotiated.
Anne Mutta (SNV) talked about the critical importance of political engagement to WASH sustainability, with governance activities integrated into the SNV programme from the start to address this requirement. Where local government capacity for sanitation and hygiene is low, sustainable results will obviously be harder to achieve. She also noted that practical sustainability problems arise from factors such as heavy rain and flooding (which can wash away sanitation facilities and constrain implementation), and from changes in capacity, knowledge and commitment caused by issues such as government transfers or elections. Anne also agreed that the PBR programme required stronger progress monitoring, to track results and allow course corrections before the household survey results are verified.
Rachel Stevens (TEARFUND) explained that the SWIFT consortium is using household, water point and latrine surveys, as well as local government and local service provider data, to assess sustainability (with two sets of surveys planned – one in mid-2016 and the other at end-2017). The SWIFT sustainability assessments use a traffic light system similar to those described by the other two suppliers, reporting against DFID’s five dimensions of sustainability.
The three suppliers had agreed on a list of common challenges, which were presented by Mimi Coultas (Plan UK). One of the most interesting of these was the risk that PBR encourages implementation in easier contexts – through the selection of less vulnerable and more accessible communities and project areas – in order to reduce both cost and risk.
The suppliers also questioned whether verification was appropriate for all aspects of sustainability, particularly the intangible and more qualitative factors (such as community empowerment), which are often important elements associated with the sustainability of sanitation and hygiene practices and outcomes.
Another potential issue is that the reduced reporting burden, with the production of evidence of results generally replacing the need for the detailed progress reporting and evaluation required by conventional programmes, may mean that the lessons learned by the programmes are not well captured or adequately documented.
The suppliers agreed that, while some aspects of sustainability may be missed, the inclusion of payments for specific sustainability outcomes led to more attention to sustainability than in conventional programmes. Furthermore, the MV team’s work had encouraged greater transparency and accountability.
I made a short presentation on the role of the MV team and the key challenges and opportunities. After describing the composition of the e-Pact team, and introducing Bertha Darteh (Ghana country verifier for the SNV programme, who was in the audience), I explained that we were using “systems-based verification” rather than fully independent verification, which means that we are reliant on the data and reports produced by the suppliers’ M&E systems. As a result, we have to understand these systems well, and identify any weaknesses and any potential for errors, misreporting or gaming of results. DFID’s decision to adopt a systems-based verification approach was based on the assumption that it would be cheaper than statistically sampled independent surveys (across such a large population), but the MV experience suggests that the systems-based approach entails many unforeseen costs (often borne by the suppliers).
Key verification challenges include the large number of closely spaced results, with little time between each verification cycle for the design, review and improvement of the verification process. The SNV programme includes nine country projects, with significant variations in context across the projects, which requires considerable flexibility in the verification system; whereas the other two suppliers’ programmes include multiple implementation partners, each of which has slightly different monitoring and reporting systems, and different priorities and targets, which in turn require adaptation of the verification systems.
I concurred that not enough time had been provided up front for the planning and design of the programme, including the MV framework and activities, which increased the pressure on all stakeholders during the first year of the programme, when suppliers were developing systems, implementing and reporting, with little time to respond to the additional demands of the verification process.
One positive outcome of the need for verified results has been the use of smartphone survey applications, which have greatly accelerated the survey process and reduced its cost; improved data processing and quality control; and made it much easier to verify large-scale results quickly. A key learning from the PBR programme is that these household surveys appear to be a far quicker and more effective way of evaluating programme outcomes than conventional evaluations.
Overall, the PBR approach appears to be improving M&E approaches and systems, encouraging more thinking about how to measure and evidence outcomes and sustainability, and providing reliable feedback on progress and performance at regular intervals during the life of the programme. This feedback enables regular improvements to be made to programme policy, planning and practice (unlike conventional programmes, which often are not rigorously evaluated until the programme ends).
Questions from the floor
When the panel was asked whether the PBR approach encourages efficiency, the suppliers noted that both the programme and the approach encourage scale, which in turn encourages efficiency; however, the additional costs of verification and the related reporting were thought to partially offset the efficiency gains.
A similar question was asked about whether PBR encouraged value for money: the suppliers suggested that they are very confident of their results (compared to conventional programmes, which may over-report results), so the cost per result is clear. They also noted that there is an incentive to reduce costs, but that these reductions may not always be passed on (and, because there is no payment for over-achievement in this programme, any additional results appear to reduce the cost per outcome/result, but do not change the suppliers’ fixed costs).
Several Ghanaian participants expressed their confusion about the new terminology associated with PBR. Output-Based Aid (OBA) is common in Ghana, notably through a World Bank WASH programme (with payments linked to toilet construction), and it was suggested that there “was no need to introduce yet another acronym for the same thing”. Louise Medland (WEDC, SAWRP) responded that DFID differentiates between the two approaches by PBR’s focus on outcomes (whereas OBA focuses on outputs).
The final question was around PBR’s effect on innovation: the suppliers noted that the design was supposed to encourage innovation, but that the time pressure of the short implementation period limited the scope for innovation. I added that we have seen different outcomes in different contexts – in low-capacity settings, programme management generally provides firm guidelines to the project team to minimise risk; but in high-capacity settings, there was evidence of innovation driven by the need to achieve results, especially in more difficult contexts where standard approaches were not working.
The general tone of the PBR session was positive, with the suppliers agreeing that the PBR approach has led to reliable and large-scale results, and that the need to report and verify results has led to significant improvements in M&E systems. A lot of learning has taken place, and the suppliers hoped that this learning will inform the design of any future WASH PBR programmes.
Andy Robinson, Lead Verifier on the SNV Contract, WASH Results MVE Team