Alignment, aid effectiveness and Payment by Results

To what extent does the Payment by Results approach of the WASH Results Programme follow the aid effectiveness principle of alignment?
One argument for Payment by Results (PBR) is that it can promote “alignment”, which is also an important principle in aid effectiveness. So that’s good, right? But a closer look at how this slippery term is used reveals differences in understanding that are particularly relevant to the use of PBR in international development.

According to some PBR commentators, PBR can bring advantages in situations where there is misalignment between the objectives of donors and implementers, although there is some debate about this argument (see, for example, CGD’s commentary on principle 5 of Clist & Dercon’s 12 principles of PBR). Either way, the alignment in question here is between the objectives of donor and implementer (or Suppliers, as we call them in WASH Results; in our case either an individual non-governmental organisation, SNV, or consortia of NGOs, SAWRP and SWIFT).

Compare this with the understanding in the Paris Declaration, in which alignment is one of the five principles of aid effectiveness. The first principle, Ownership, states: “Developing countries set their own strategies for poverty reduction, improve their institutions and tackle corruption.” The second principle, Alignment, builds on this: “Donor countries align behind these objectives and use local systems.” In this case, the alignment is that of donors behind national strategies and objectives.

The PBR funding mechanism for the WASH Results Programme is the type that DFID calls Results Based Finance. Under this approach, the contract is between a donor and a service provider, not a recipient government (DFID calls the latter Results Based Aid*). In this context, the term “alignment” as used in the PBR literature may be at odds with the concept of alignment in the Paris Declaration on Aid Effectiveness, as it encourages alignment between service provider and donor rather than of the donor with national stakeholders and priorities. This has led some people to claim that PBR promotes upwards accountability to donors at the expense of accountability to national and local stakeholders.

Experience of alignment with national government stakeholders under the WASH Results Programme

At a WASH Results learning workshop held earlier this year, participants shared their views on alignment in the context of the WASH Results Programme. Over the last year of implementation, some concerns were raised that the PBR modality was a barrier to alignment with national priorities and stakeholders in the countries in which WASH Results is being implemented. During the learning workshop a nuanced picture emerged of the programme’s experience to date, as this extract from the workshop report demonstrates:


Alignment is happening, whether incentivised by PBR or not: All of the Suppliers work with local stakeholders as a matter of course. However, differences in programme design affected the extent to which this was incentivised or recognised in payment packages. All of the Suppliers had experienced positive reactions from local stakeholders to the principle of PBR – with one Supplier being asked by local government officials for support in rolling out PBR in one of their programmes.

Value of building alignment into results packages: There was some sense that the focus on outputs in the first phase of WASH Results had taken attention away from areas such as alignment that are not so easily linked to milestones, and so opportunities for alignment had been missed. However, SNV took a different approach to the other Suppliers by building concrete items into their results packages to reflect their work with local partners, e.g. district plans in each of the 60 districts in which they work. While this was felt to be a “smart” approach, SNV warned that it also has disadvantages: “We think we have found some meaningful ways to address elements of alignment, but let’s not be too optimistic about these instruments; they focus attention on direct deliverables” (Jan Ubels, SNV).

Flexibility supports alignment: Suppliers value being allowed to change their approach without going through a contract amendment process. In one case a Supplier was able to change definitions of results to better align with national government definitions.  However, there are potential risks in this approach: “Alignment to what? If the government has a much lower CLTS standard than the SDGs – is that still the alignment we are trying to encourage?” (Louise Medland, SAWRP) 

Challenge of timelines: WASH Results has tight deadlines and an emphasis on deliverables, while partners, e.g. water authorities, are working to different, longer timelines and may not deliver at the pace required. There is a limit to how much risk can be transferred to partners in this context.

PBR risks limiting Suppliers to existing relationships: Participants agreed that PBR can only be introduced where there are existing relationships and social capital, and that it would be risky to try to implement PBR in places where there is no established relationship.

Additional demands of monitoring for PBR: One Supplier felt that the kind of monitoring carried out for WASH Results could never be the same as that carried out at a local level; it would always be additional to, rather than aligned with, that of the government, although it might stimulate M&E at a local level.

Ways in which alignment could be promoted in future Results Based Finance forms of PBR

To support alignment within a PBR mechanism, participants in the workshop suggested:

  • Valuing alignment at the tendering and contracting stage: Alignment should be considered when costing at the tendering and contracting stage so that prospective Suppliers are competing on an equal basis, given the additional cost (and value) alignment brings.
  • Defining specific hard deliverables, perhaps during a pre-inception phase, that sit somewhere between outputs and outcomes, e.g. district plans.
  • Including specific rewards or incentives in the programme aimed at government stakeholders to encourage their buy-in to the programme.
See especially pages 9–10 of DFID WASH Results Programme: Learning Event, e-Pact Consortium, Hove, UK (2016).

Conclusions and looking forward

For PBR to be accepted as an effective form of aid financing it will need to follow all the principles of aid effectiveness, including alignment. The experience of WASH Results so far suggests that this is possible, but requires careful consideration of how alignment can be promoted during the design of programmes, the contracting and tendering processes, the definition of results and the design of verification systems.

Another, more macro, way of supporting alignment using PBR is for the Independent Verifiers to work much more closely with national government monitoring systems to verify results. In this model, significant support is given by the donor to improve the national systems, and then recipient countries themselves can do the verification. In this case, the use of PBR to fund service delivery would act as a catalyst for strengthening monitoring systems at a national level (although the PBR programmes would need to be of significant scale to be an effective catalyst). This sector-strengthening approach requires long-term investment and a multi-pronged approach, within which PBR projects may only be one element, albeit a potentially catalytic one.

We have not seen much focus on this area so far in the debates around PBR (please alert us to it if we are wrong!). We hope that the experience of our programme will help contribute to that understanding. We will continue to share ongoing lessons learned from implementation as well as findings from the evaluation.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

* For an example of a Payment By Results programme in WASH that uses Results Based Aid (where payment goes from the donor to a recipient government), we suggest readers take a look at DFID’s Support to Rural Water Supply, Sanitation & Hygiene in Tanzania.

The report from the WASH Results learning workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

Measuring WASH sustainability in a payment by results context

Andy Robinson, Lead Verifier for SNV, reports back from a WEDC Conference panel session organised by the three WASH Results suppliers

The three suppliers in the DFID WASH Results programme (SAWRP consortium, SWIFT consortium and SNV) came together at a side event held during the WEDC Conference in Kumasi, Ghana (11-15 July 2016) to present their thoughts on “measuring WASH sustainability in a Payment by Results (PBR) context”.

As Lead Verifier on the SNV contract and WEDC conference participant, I was invited to join the panel with the three suppliers and make a short presentation on behalf of the e-Pact consortium – to explain e-Pact’s role in WASH Results and elaborate some of the initial learning from the perspective of our Monitoring and Verification (MV) team.

Kevin Sansom (WEDC, SAWRP) began by outlining the key differences between PBR and grant programmes. He noted that PBR programmes require significant pre-finance and engender higher risks (particularly when tight timelines are applied), but allow greater flexibility and encourage more rigorous monitoring and evaluation (both internally, within the implementing agencies, and externally, by the verification and evaluation teams).

SAWRP presentation

Mimi Coultas (Plan UK) detailed the sustainability monitoring system adopted by the SAWRP consortium, explaining that some of the elements (sustainability assessment frameworks, outcome implementation manuals and the learning framework) are not linked to payments, but are designed to meet DFID’s requirement for reporting against five different dimensions of sustainability (functional, institutional, financial, environmental and equity).

Mimi noted that there was a lack of clarity at the outset around the criteria for payment (and the criteria for disallowance of payments), which caused some problems and could have been avoided by agreeing these details during a longer inception phase. She also suggested that the sampling approach used by the MV team has the potential “to scale mistakes” by exaggerating the effect of any poor results included in the sample, suggesting problems larger than they are in reality. Another comment was that the commercial pressures on the suppliers, all of whom are interested in bidding for any follow-on programmes, might have reduced collaboration and sharing of lessons learned.

Nonetheless, the SAWRP consortium felt that the programme had produced “amazing results”, with a high level of confidence in the quality and reliability of the results due to the strong scrutiny provided by the MV team. Mimi also noted that the monitoring and evaluation (M&E) focus required by the programme was a positive outcome, leading to a strengthening of M&E systems and the development of better ways of measuring WASH outcomes and sustainability. However, a longer programme duration would have been better, including an inception period during which the results framework and verification approaches could be carefully designed and negotiated.

SNV presentation

Anne Mutta (SNV) talked about the critical importance of political engagement to WASH sustainability, with governance activities integrated into the SNV programme from the start to address this requirement. Where local government capacity for sanitation and hygiene is low, sustainable results will obviously be harder to achieve. She also noted that some practical sustainability problems arise, such as heavy rain and flooding (which can wash away sanitation facilities, and constrain implementation) and changes in capacity, knowledge and commitment due to issues like government transfers or elections. Anne also agreed that the PBR programme required stronger progress monitoring, to track results and allow course corrections before the household survey results are verified.

SWIFT presentation

Rachel Stevens (TEARFUND) explained that the SWIFT consortium is using household, water point and latrine surveys, as well as local government and local service provider data, to assess sustainability (with two sets of surveys planned – one in mid-2016 and the other at end-2017). The SWIFT sustainability assessments use a similar traffic light system to those described by the other two suppliers, reporting against DFID’s five dimensions of sustainability.

Common challenges

The three suppliers had agreed on a list of common challenges, which were presented by Mimi Coultas (Plan UK). One of the most interesting of these was the risk that PBR encourages implementation in easier contexts – through the selection of less vulnerable and more accessible communities and project areas – in order to reduce both cost and risk.

The suppliers also questioned whether verification was appropriate for all aspects of sustainability, particularly the intangible and more qualitative factors (such as community empowerment), which are often important elements associated with the sustainability of sanitation and hygiene practices and outcomes.

Another potential issue is that the reduced reporting burden, with the production of evidence of results generally replacing the need for the detailed progress reporting and evaluation required by conventional programmes, may mean that the lessons learned by the programmes are not well captured or adequately documented.

Common opportunities

The suppliers agreed that, while some aspects of sustainability may be missed, the inclusion of payments for specific sustainability outcomes led to more attention to sustainability than in conventional programmes. Furthermore, the MV team’s work had encouraged greater transparency and accountability.

MV presentation

I made a short presentation on the role of the MV team and the key challenges and opportunities. After describing the composition of the e-Pact team, and introducing Bertha Darteh (Ghana country verifier for the SNV programme, who was in the audience), I explained that we were using “systems-based verification” rather than fully independent verification, which means that we are reliant on the data and reports produced by the suppliers’ M&E systems. As a result, we have to understand these systems well, and identify any weaknesses and any potential for errors, misreporting or gaming of results. DFID’s decision to adopt a systems-based verification approach was based on the assumption it would be cheaper than statistically sampled independent surveys (across such a large population), but the MV experience suggests that there are a lot of unforeseen costs (often to the suppliers) related to this systems-based approach.

Key verification challenges include the large number of closely spaced results, with little time between each verification cycle for the design, review and improvement of the verification process. The SNV programme includes nine country projects, with significant variations in context across the projects, which requires considerable flexibility in the verification system; whereas the other two suppliers’ programmes include multiple implementation partners, each of which has slightly different monitoring and reporting systems, and different priorities and targets, which in turn require adaptation of the verification systems.

I concurred that not enough time had been provided up front for the planning and design of the programme, including the MV framework and activities, which increased the pressure on all stakeholders during the first year of the programme, when suppliers were developing systems, implementing and reporting, with little time to respond to the additional demands of the verification process.

One positive outcome of the need for verified results has been the use of smartphone survey applications, which have greatly sped up and reduced the cost of the survey process; improved data processing and quality control; and made it much easier to verify large-scale results quickly. A key learning from the PBR programme is that these household surveys appear to be a far quicker and more effective way of evaluating programme outcomes than conventional evaluations.

Overall, the PBR approach appears to be improving M&E approaches and systems, encouraging more thinking about how to measure and evidence outcomes and sustainability, and providing reliable feedback on progress and performance at regular intervals during the life of the programme. This feedback enables regular improvements to be made to programme policy, planning and practice (unlike conventional programmes, which often are not rigorously evaluated until the end of the programme duration).

Questions from the floor

When the panel was asked whether the PBR approach encourages efficiency, the suppliers noted that both the programme and the approach encourage scale, which in turn encourages efficiency; however, the additional costs of verification and the related reporting were thought to partially offset the efficiency gains.

A similar question was asked about whether PBR encouraged value for money: the suppliers suggested that they are very confident of their results (compared to conventional programmes, which may over-report results), so the cost per result is clear. They also noted that there is an incentive to reduce costs, but that these reductions may not always be passed on (and, because there is no payment for over-achievement in this programme, any additional results appear to reduce the cost per outcome/result but do not change the suppliers’ fixed costs).
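As a rough illustration of that last point, here is a minimal sketch, using invented figures rather than any actual WASH Results contract values, of how delivering more than the contracted results lowers the donor’s apparent cost per result while leaving the supplier’s payment (and its fixed costs) unchanged.

```python
# Hypothetical illustration (all figures invented) of cost per result under a
# fixed PBR contract with no payment for over-achievement.

contracted_results = 100_000   # beneficiaries the supplier is contracted (and paid) for
price_per_result = 20.0        # agreed price per beneficiary, in USD
contract_value = contracted_results * price_per_result  # donor pays this, and no more

for delivered in (100_000, 115_000):  # exactly on target vs. 15% over-achievement
    payment = contract_value          # payment is capped at the contracted results
    cost_per_result = payment / delivered
    print(f"delivered={delivered:,}  payment=${payment:,.0f}  "
          f"cost per result=${cost_per_result:.2f}")

# The donor's apparent cost per result falls from $20.00 to about $17.39,
# but the supplier's income is unchanged and it bears the cost of the extra delivery.
```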

Several Ghanaian participants expressed their confusion about the new terminology associated with PBR. Output-Based Aid (OBA) is common in Ghana, notably through a World Bank WASH programme (with payments linked to toilet construction), and it was suggested that there “was no need to introduce yet another acronym for the same thing”. Louise Medland (WEDC, SAWRP) responded that DFID differentiated between the OBA and PBR approaches by the PBR focus on outcomes (whereas OBA focuses on outputs).

The final question was about PBR’s effect on innovation: the suppliers noted that the design was supposed to encourage innovation, but that the time pressure of the short implementation period limited the scope for innovation. I added that we have seen different outcomes in different contexts – in low-capacity settings, programme management generally provides firm guidelines to the project team to minimise risk, but in high-capacity settings there was evidence of innovation driven by the need to achieve results, especially in more difficult contexts where standard approaches were not working.

The general tone of the PBR session was positive, with the suppliers agreeing that the PBR approach has led to reliable and large-scale results, and that the need to report and verify results has led to significant improvements in M&E systems. A lot of learning has taken place, and the suppliers hoped that this learning will inform the design of any future WASH PBR programmes.

Andy Robinson, Lead Verifier on the SNV Contract, WASH Results MVE Team

The paybacks and pains of Payment by Results (Part 2)

Our series of reflections on WASH Results’ learning continues by exploring value and costs in a Payment by Results (PBR) programme.

DFID has been clear from the outset about what it wants from the Water, Sanitation and Hygiene (WASH) Results Programme: WASH interventions delivered at scale within a short time-frame and confidence in the results being reported to the UK taxpayer. DFID got what it wanted, but at what cost? In this post we build on discussions at the WASH Results Programme’s learning event held earlier this year, which looked beyond the numbers of people reached with interventions to explore some of the challenges faced in implementing the programme.

Can PBR frameworks be designed to incentivise suppliers to focus on the “harder to reach”?

A central theme in the workshop was the ongoing puzzle of how to place value (both in commercial/financial and Value for Money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. So, does the importance placed by a donor on achieving clearly costed, verified results risk squeezing out other values and principles that are central to development programming? Which values might end up being pushed aside and could this be mitigated through better design?

1. High quality programming

Suppliers hit a major challenge during tendering when DFID asked them to provide a price per beneficiary that reflected the cost to the suppliers of reaching that beneficiary. But calculating this cost is complex. Potential suppliers have to think about what kinds of costs they should fold into that price per beneficiary when bidding: the more costs they include, the higher the bid. During the workshop one Supplier asked rhetorically, “Should we say $20 with alignment and $23 without?”

There is some apprehension within the NGO sector about competing with the private sector in this commercial context, and NGOs are often advised to be cautious. Will they be undercut by commercial organisations submitting more attractive (read: cheaper) bids that lack the added benefits that NGOs can bring: the social capital and ways of working that are difficult to put a commercial value on but will affect the quality of the programming?

DFID has been clear that it does not equate Value for Money (VfM) with “cheap” and it is willing to pay for quality programming, whoever is best placed to deliver it. One improvement to the tendering process would be to articulate some of these added benefits (such as existing relationships and social capital in a programme area) as requirements for bidders. Potential suppliers would thus need to provide evidence within the bidding process.

2. Reaching the hardest to reach 

A criticism levelled at PBR is that by using a fixed “price per beneficiary” approach, it encourages suppliers to focus on people who are easier to reach, a practice sometimes described as “creaming” or “cherry picking”. Stakeholders in the WASH Results Programme are firmly committed to inclusion, and during the workshop they investigated how that could be better incentivised within a PBR framework. Options explored included multi-tiered price-per-beneficiary frameworks (as used in drug and alcohol recovery programmes in the UK, and sketched below), although these carry the risk of increasing complexity and reducing flexibility. Another suggestion for incentivising inclusion was careful selection and wording of the objectives and appropriate verification processes in the tender document; however, this may risk compromising the flexibility to negotiate targets and verification approaches in response to different contexts.
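To make the tiered option more concrete, the sketch below contrasts a flat price per beneficiary with a hypothetical multi-tiered schedule that pays more for harder-to-reach groups. The categories and prices are invented for illustration and are not drawn from the WASH Results Programme or the UK recovery programmes mentioned above.

```python
# Hypothetical comparison of a flat vs. tiered price-per-beneficiary schedule.
# All categories and prices are invented for illustration.

beneficiaries = {                 # number reached in each (hypothetical) category
    "easy_to_reach": 60_000,
    "remote_rural": 30_000,
    "people_with_disabilities": 10_000,
}

flat_price = 20.0                 # single price per beneficiary
tiered_price = {                  # higher payment where delivery costs more
    "easy_to_reach": 15.0,
    "remote_rural": 25.0,
    "people_with_disabilities": 35.0,
}

flat_total = flat_price * sum(beneficiaries.values())
tiered_total = sum(tiered_price[k] * n for k, n in beneficiaries.items())

print(f"flat schedule pays:   ${flat_total:,.0f}")
print(f"tiered schedule pays: ${tiered_total:,.0f}")

# Both schedules pay the same total here, but under the flat rate every extra
# 'easy' beneficiary earns as much as a harder-to-reach one; the tiered schedule
# shifts the incentive towards the groups the programme most wants to include,
# at the cost of a more complex (and less flexible) payment framework.
```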

3. Investing for the future

A related but distinct challenge that emerged during the workshop was that of placing commercial value on activities that invest in future work in the sector. This includes building the social capital to work with local stakeholders and investing in programmatic innovation (which some suppliers suggested had not been possible under the WASH Results Programme). Do the practical implications of PBR risk capitalising on previous investment made by suppliers, without contributing to it in turn? This is perhaps not an issue while PBR contracts constitute a small proportion of aid financing, but it would become one if PBR contracts started to dominate. On the other hand, the benefits that suppliers report, particularly in terms of strengthening monitoring and reporting systems to enable more rigorous real-time results tracking, may also spill over into other programmes, benefitting them in turn. It is too early to draw conclusions, but it may be the case that a range of different aid mechanisms is required, with the benefits and limitations of each clearly identified.

4. Confidence in results

Finally, it is worth observing the possible trade-off between the value placed by DFID on confidence in results, which is so important for communicating with taxpayers, and the effectiveness of aid spending that can be achieved through PBR and the nature of the results it produces. Verification is undoubtedly costly (“Someone paid you to come here just to look at that toilet?” a baffled resident of a beneficiary village is reported to have asked a member of a verification team).

But there is another aspect of effectiveness: if PBR prompts suppliers to focus their efforts on what can be counted (i.e. what can be verified at scale without incurring prohibitive expense), this may shift their efforts away from development programming with longer-term and more uncertain outcomes. Put simply, this could equate to building toilets rather than working on sanitation behaviour change interventions, which are considered to generate more sustainable positive outcomes. Of course, there is no guarantee that other forms of aid financing would generate these results, and as there is likely to be less focus on measuring them, comparison would be difficult.

Advice for PBR commissioners

What might this mean for those considering PBR modalities and designing PBR programmes? The experience of WASH Results so far suggests that when designing a PBR programme, commissioners need to:

  • be clear on the value implied in “value for money” – consider all of the “values” that are important, including the value of donor confidence in results;
  • strike a balance between clearly specifying expected results (particularly for more vulnerable people) and being flexible to the contexts in which suppliers are operating;
  • think creatively and collaboratively about how long-term outcomes can be measured;
  • explore hybrid funding models but avoid creating the “worst of all worlds” that lacks the flexibility of PBR, increases complexity and imposes multiple reporting frameworks;
  • consider whether PBR is the right funding mechanism for the kinds of results you wish to achieve (tools are emerging that can help);
  • view the PBR component in the context of the broad spectrum of funding to the sector – seek to maximise linkages and mutual value across the sector.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

The report from the WASH Results learning workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

The paybacks and pains of Payment by Results (Part 1)

Our series of reflections on the WASH Results Programme’s learning starts by identifying where Payment by Results has added value.

Payment by Results (PBR) has been “a highly effective means of incentivising delivery at scale” according to the people that deliver the WASH Results Programme. This finding taken from the report of a recent WASH Results learning event may surprise some PBR naysayers. However, as this first post in a series of reflections on the report shows, when the donors, suppliers and verifiers of WASH Results came together to reflect on their experience of actually delivering and verifying the programme, they were able to agree on several positives alongside their concerns.


Participants of the WASH Results 2016 Learning Workshop exploring areas of agreement.

The pros and cons of PBR in development are hotly debated online, but the Center for Global Development reminds us that when discussing PBR, we should be clear about who is being paid, for what and how. The particular way in which WASH Results was designed has therefore influenced the experiences of its suppliers (SNV, and the SAWRP and SWIFT consortia). An important feature of the design (extrinsic to the PBR modality) is that delivery was tied to the water and sanitation target (Target 7.C) of the Millennium Development Goals. The programme began with an extremely time-pressured initial ‘output phase’ to December 2015 (focussing on installation of WASH infrastructure), followed by an ‘outcomes phase’ that started this year. Another key design feature is that WASH Results is 100% PBR. The results themselves, however, were agreed on a case-by-case basis with each supplier and include outputs, outcomes and, in some cases, process-type activities.

Sharpening focus on results
It is certainly the case that the WASH Results Programme has delivered huge results within a very tight time-frame. Earlier this year, for example, SWIFT reported having reached close to 850,000 people with two or more of water, sanitation or hygiene services. During the workshop participants broadly agreed with the statement that PBR was an important factor in incentivising delivery. Some questioned the extent of the contribution of the PBR mechanism, highlighting instead their core commitment to delivery. However, others were clear that the PBR mechanism has sharpened the focus on achieving results:

“Grants have never made it so clear that you ought to deliver. Country directors have to deliver in ways that they have not necessarily had to deliver before and this transpires through to partners, local governments and sub-contractors…Quite a number of these actors have started to applaud us for it.” (Jan Ubels, SNV).

Different consortia passed on the risk of PBR to partners in different ways and the SNV experience reflects their particular approach. But it is evident that the clarity of expectations and pressure to deliver across consortia has been effective in generating results. So, apart from the focus on delivery, what else did people value about the way that PBR has been implemented in the WASH Results Programme?

Flexibility in PBR design
In particular, participants valued the flexibility shown by DFID in setting targets and results milestones to reflect different programme approaches – including agreeing payments for process achievements in some cases. Flexibility in definitions also allowed alignment with local government definitions. The drawback of this flexibility was a lack of clarity about expectations and a lack of standardisation across different suppliers.

Flexibility in implementation
Suppliers have been able to reallocate resources in response to changing contexts and priorities, without negotiating with the donor. It has also been possible to spread risk across multiple sites of operation, overachieving in one location to offset lower results in another.

Clarity of reporting
The focus on results has driven investment and improvements in Monitoring and Evaluation, which are broadly thought to have value beyond the programme. Although reporting requirements that are focused exclusively on results are demanding, people welcomed not having to do the activity reporting that is a feature of many other forms of aid.

Some positives were identified during the discussions at the WASH Results workshop and there is much to celebrate. However, a central theme in the workshop was the ongoing challenge of how to place value (in commercial/financial and value for money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. These challenges will be explored in the next post.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

The report from the workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

Is it time for Results 2.0?

Catherine Fisher reports back on a Bond Conference panel discussion about the origins and future of the ‘results agenda’.

Aid should generate results. On the face of it, an indisputably good idea, but the ‘results agenda’ is anything but uncontroversial and can spark “epic” debates. In the WASH Results Programme, this agenda is manifest in the funding relationship – Results Based Financing (RBF) – a form of Payment By Results (PBR) through which DFID makes payments to Suppliers contingent on the independent verification of results. As the Monitoring, Verification and Evaluation (MVE) providers for WASH Results, we’re keen to exchange the programme’s insights with those of other people who have first-hand experience of the results agenda.


Credit: Bond/Vicky Couchman. Jessica Horn and Irene Guijt sharing their views on the ‘results agenda’ at the Bond Conference 2016.

One such opportunity arose this week at the Bond Conference, in a session entitled ‘How to crack Results 2.0’ chaired by Michael O’Donnell, Head of Effectiveness and Learning at Bond and author of ‘Payment by Results: What it means for UK NGOs’. The session considered the origins and implications of the results agenda and looked ahead to the next version. Catherine Fisher, Learning Advisor for the WASH Results MVE team, reports on a lively discussion about how results agendas could be aligned with work on social transformation, enable learning and reflection within programmes, and provide value for money themselves.

Looking back to the origins of results approaches in DFID

Opening the session, Kevin Quinlan, Head of Finance, Performance and Impact at DFID, explained how, in 2010, DFID encountered two opposing forces: increased funding in order to meet the UK’s legal commitment to spending 0.7 percent of national income on Overseas Development Aid, alongside the introduction of austerity measures that required cuts in, and increased scrutiny of, public spending. The results and transparency agendas were DFID’s response to those competing demands and marked a shift towards delivering results now rather than strengthening systems to deliver results in the future. This implied a corresponding shift to talking about the results DFID is going to support rather than the activities it would support to achieve results in the future. Six years on, DFID is reassessing its approach.

Can results approaches be reconciled with the “art of transformation”?

Earlier that day, Dr Danny Sriskandarajah, Secretary General of CIVICUS, told conference delegates that INGOs had become too focused on the “science of delivery” (which he described as the achievement of impact by any means) as opposed to the “art of transformation” – the work of bringing about social change. This theme re-emerged during the ‘Results 2.0’ discussion: how could the focus on hard results, embedded in results frameworks, be reconciled with the messy business of social transformation that is at the heart of struggles for equity and rights?

Jessica Horn, Director of Programmes at the African Women’s Development Fund, noted that results frameworks do not acknowledge power or monitor how it is transformed. Consequently, she and her colleagues resort to what she called “feminist martial arts” – twisting and turning, blocking and jabbing to defend the transformative work they do from the “tyranny of results”. Often, Jessica argued, the politics of the process are as important as the politics of the outcome, and she asked: “how does the results framework capture that?” Yet, as Irene Guijt, newly appointed Head of Research at Oxfam GB, argued, being forced to think about results even in the social transformation context helps to make things clearer. Between them, they had some suggestions about how it could be done.

Irene contended that there needed to be greater differentiation of what kind of data we need for different reasons, rather than a one-size-fits-all approach to accountability. She argued that “results” are too often about numbers and we need to bring humans back in and tell the story of change. Irene recommended using the tool SenseMaker to bring together multiple qualitative stories which, through their scale, become quantifiable. Jessica shared some frameworks for approaching monitoring and reporting on social transformation more systematically and in ways that consider power, such as Making the Case: the five social change shifts and the Gender at Work Framework.

Does focusing on monitoring results for accountability squeeze out reflection and learning?

This criticism is often levelled at results-based approaches and their associated heavy reporting requirements. Irene commented that “learning and data are mates but compete for space”. To align learning and reflection with results monitoring, she advised focusing on collective sense-making of reporting data, a process that enables evidence-based reflection and learning. She also suggested streamlining indicators, focussing on those with the most potential for learning, a point echoed by Kevin from DFID, who emphasised the need to select indicators that are most meaningful to the people implementing programmes (rather than donors).

Do results agendas themselves demonstrate value for money?

This question resonated with the participants, triggering musings on the value of randomised controlled trials and the cost of management agents from the private sector. One point emerging from this discussion was that often what is asked for in results monitoring is difficult to achieve. Indeed, this has, at times, been the experience of the WASH Results Programme, particularly in fragile contexts (see, for example, the SWIFT Consortium’s report [PDF]). Both Irene and Jessica talked of the need to use a range of different tools for different purposes, and Irene made reference to her recent work on balancing feasibility, inclusiveness and rigour in impact assessments.

What is the trajectory for DFID and the results agenda?

Kevin Quinlan took this question head on, agreeing that this is something DFID needs to decide in the next few months. He suggested that some of the areas for discussion were:

  • Getting to a more appropriate place on the spectrum between communication (to tax-payers) and better programme design; results are part of communicating to tax-payers but not the only part;
  • Reducing standard indicators in favour of flexible local indicators; each project would need at least one standard indicator to allow aggregation but there should be more local indicators to enable learning;
  • Alleviating the torture of results – “rightsizing” the reporting burden and reducing the transaction costs of results reporting; thinking about what results can do alongside other tools;
  • Adopting a principles-based approach rather than a set of rules.

Meanwhile, the Evaluation Team for WASH Results is investigating some of the issues raised during the panel, such as examining the effect of results verification on Suppliers’ learning and reflection, and seeking to explore the value for money of verification.

So it sounds like there will be more interesting discussions about the results agenda in the near future and we look forward to contributing insights from WASH Results*. Whether Results 2.0 is on the horizon remains to be seen.

* Please email the MVE Team if you would like us to let you know when our evaluation findings are available.

Latest views and news on Payment By Results

Our last post shared our lessons on Payment By Results (PBR); this time we’re taking a look at what other people are saying about it.

An overlooked benefit of PBR is sharpening the minds of donors and recipients, observes Joseph Holden, deputy lead of Monitoring and Evaluation (M&E) for the Fund Manager of DFID’s £350 million Girls’ Education Challenge (GEC) Programme. Writing at the end of 2015, Holden highlights PBR’s advantage over other funding models: reducing “fuzzy thinking” and over-statement of results by replacing self-reporting with independently verified, reliable and robust evidence. Writing about the approach and experience of the GEC, Holden comments: “This is not cheap, and complexities have meant the application is far from perfect, but the evidence produced is of a higher quality than for the vast majority of other development programmes”. You can read more about the GEC’s experience of verifying PBR in the report on our chat show at the UK Evaluation Society.

A new literature review [PDF] aims to help PBR stakeholders clarify the purpose of an individual PBR scheme, identify critical success factors, identify common issues which cause problems and difficulties, and be aware of how they might address or mitigate them. The review’s author, Russell Webster, is currently exploring the main themes of the review through a series of blog posts – the most recent ones ask whether PBR can improve outcomes and whether PBR can save money.

Finally, even as this post is being written, delegates at the BOND conference are discussing ‘How to crack Results 2.0‘. We look forward to finding out what they had to say…

Cheryl Brown, Communications Manager for WASH Results MVE

What have we learned about Payment by Results (PBR) programmes from verifying one?

After 19 verification rounds, the WASH Results Monitoring and Verification team shares its suggestions for how to design future PBR programmes.


Verification in action: MV team member Martha Keega assesses a latrine in South Sudan

Verification is at the heart of the WASH Results Programme. Suppliers only get paid if we, the Monitoring and Verification (MV) team, can independently verify the results they are reporting. Usually we can: results are reported by Suppliers, verified by us, and Suppliers are paid by DFID to an agreed schedule. However, all Suppliers have received deductions at least once, which, although painful for everyone, is testament to the rigour of the verification process. Overall, the system is working and the results of the programme are clear. But the demands of verification are also undeniable, leading to some aspects of verification being labelled “Payment by Paperwork”, and, like any process, it could be improved.

In January 2016 the team* came together to reflect on what we have learned so far from conducting 19 rounds of verification across the three Suppliers. Our discussions focused on verification but inevitably considered wider issues around design of a PBR programme. Here we share some suggestions for design of future PBR programmes, from a verification perspective.

  1. Ensure targets and milestones reflect high level programme objectives
  2. Be clear on targets and assumptions about their measurement
  3. Think carefully about enabling alignment with local government and other WASH stakeholders
  4. Reconsider the 100% PBR mechanism to avoid verification inefficiencies
  5. Consider payments for over-achievement of outcomes, but not of outputs
  6. Include provision for a joint Supplier and Verifier inception phase that will streamline verification
  7. Consider pros and cons of relying more on Supplier-generated evidence as opposed to independent evidence generation

1. Ensure targets and milestones reflect high level programme objectives
The WASH Results Programme has ambitions with regard to equity, gender and disability, and overall health benefits, that are not universally built into the targets and payment milestones agreed between DFID and Suppliers. As a consequence, these ambitions are not explicitly incentivised. Any future programme should think carefully about how the design of the programme, especially the targets set in the tender and agreed with Suppliers, upholds objectives based on good practice within the sector.

2. Be clear on targets and assumptions about their measurement
We have found that when payment decisions are riding on whether targets have been met, the devil is in the detail. During implementation, some discrepancies have emerged over targets and how to achieve them. Discussions have taken place about minimum standards for latrines (the DFID or JMP definition?) and hygiene targets (what does ‘reach’ mean?). In addition, there was occasionally a lack of clarity on how achievement of targets would be measured.

When working at scale, assumptions made about the average size of a household in a particular area, or the best way of measuring the number of pupils in a school, become subject to intense scrutiny. This is quite a departure from how programmes with different funding mechanisms have worked in the past, and the level of detailed evidence required may come as a shock to Suppliers and Donors alike. In response, we suggest that future programmes should provide clear guidance on technical specifications relating to targets and guidelines for evidencing achievements.
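To illustrate why such assumptions attract so much scrutiny under PBR, here is a minimal sketch, with invented figures, of how a small change in the assumed average household size moves a reported beneficiary number when results are claimed at scale.

```python
# Hypothetical sensitivity of a reported "people reached" figure to the
# assumed average household size. All numbers are invented for illustration.

households_with_new_latrines = 50_000   # households verified as having gained a latrine

for assumed_household_size in (4.5, 5.0, 5.5):
    people_reported = households_with_new_latrines * assumed_household_size
    print(f"assumed household size {assumed_household_size}: "
          f"{people_reported:,.0f} people reported")

# A difference of half a person in the assumption moves the claim by 25,000
# people on just 50,000 households, which is why the basis for such conversion
# factors needs to be agreed and evidenced up front.
```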

3. Think carefully about enabling alignment with local government and other WASH Stakeholders
One concern that we discussed in the meeting was that the design of the WASH Results Programme does not sufficiently incentivise alignment with local government. We suspect that this was a result of the scale of the programme and the tight timelines, but also of the demands of verification. The need to generate verifiable results can dis-incentivise both the pursuit of “soft” outcomes such as collaboration and working with government monitoring systems.

We suggest that PBR programmes need to think carefully about how to incentivise devolution of support services from programme teams to local governments and other sector stakeholders during the life of the programme, for example by linking payments to these activities. They should also consider how programme design could encourage long-term strengthening of government monitoring systems.

4. Reconsider the 100% PBR mechanism to avoid verification inefficiencies
The merits or otherwise of the 100% PBR mechanism in the WASH Results Programme are subject to much discussion; we considered them from a verification perspective. We believe that, in response to the 100% PBR mechanism, some Suppliers included input- and process-related milestone targets to meet their internal cash flow requirements. In some cases, this led to verification processes that required high levels of effort (i.e. paperwork) with relatively few benefits.

We suggest that people designing future PBR programmes consider non-PBR upfront payments to Suppliers to avoid the need to set early input and process milestones, and run a substantial inception phase that includes paid-for outputs for Suppliers and Verifiers. In the implementation phase of the WASH Results Programme, payment milestones have been mainly quarterly, requiring seemingly endless rounds of verification that put pressure on all involved, particularly Supplier programme staff. In response, we suggest that payments over the course of a programme should be less frequent (and so possibly larger), requiring fewer verification rounds and allowing greater space between them. This may have implications for the design of the PBR mechanism.

5. Consider payments for over-achievement of outcomes, but not of outputs
The WASH Results Programme does not include payment for over-achievement. Over the course of the programme, some Suppliers have argued that over-achievement should be rewarded, just as under-achievement is penalised. As Verifiers, we agree that paying for over-achievement of outcomes would be a positive change in a future PBR design. However, there were concerns among our team that encouraging over-achievement of outputs could have unintended consequences, such as inefficient investments or short-term efforts to achieve outputs without sufficient attention to sustainability and the quality of service delivery.

6. Include provision for a joint Supplier and Verifier inception phase that will streamline verification
It is broadly accepted that the WASH Results Programme would have benefited from a more substantial inception phase with the Verification Team in place at the start. Our recommendations about how an inception phase could help streamline and strengthen verification are as follows:

  • Key inception outputs should include a monitoring and results reporting framework agreed between the Supplier and the Verification Agent. Suppliers and Verifiers could be paid against these outputs to overcome cash flow issues.
  • The inception phase should include Verification Team visits to country programmes to establish an effective dialogue between the Verifiers and Suppliers early on.
  • If Suppliers evidence their achievements (as opposed to independent collection of evidence by the Verification Agent – see below), assessment of, and agreement on, what are adequate results reporting systems and processes need to be included in the inception phase.
  • Run a ‘dry’ verification round at the beginning of the verification phase where payments are guaranteed to Suppliers irrespective of target achievement so that early verification issues can be sorted out without escalating stress levels.

7. Consider pros and cons of relying more on Supplier-generated evidence as opposed to independent evidence generation
In the WASH Results Programme, Suppliers provide evidence against target achievements, which is subsequently verified by the Verification Team (we will be producing a paper soon that outlines how this process works in more detail). Is this reliance on Supplier-generated evidence the best way forward? What are the pros and cons of this approach as compared with independent (verification-led) evidence generation?

Indications are that the PBR mechanism has improved Suppliers’ internal monitoring systems, and has shifted the internal programming focus from the finance to the monitoring and evaluation department. However, relying on Suppliers’ internal reporting systems has required some Suppliers to introduce substantial changes to existing reporting systems and the MV team has faced challenges in ensuring standards of evidence, particularly in relation to surveys.

We have some ideas about pros and cons of Supplier-generated evidence as opposed to evidence generated independently, but feel this can only be fully assessed in conversation with the Suppliers. We plan to have this conversation at a WASH Results Programme Supplier learning event in March. So, this is not so much a suggestion as a request to watch this space!

Coming up…

WASH Results Programme Learning Event:  On March 7 2016 Suppliers, the e-Pact Monitoring & Verification and Evaluation teams, and DFID will convene to compare and reflect on learning so far. Key discussions at the event will be shared through this blog.

Verification framework paper: an overview of how the verification process works in the WASH Results Programme. This will present a behind-the-scenes look at verification in practice and provide background for future lessons and reflections that we intend to share through our blog and other outputs.

* About the MV Team: In the WASH Results Programme, the monitoring, verification and evaluation functions are combined into one contract with e-Pact. In practice, the ongoing monitoring and verification of Suppliers’ results is conducted by one team (the MV team) and the evaluation of the programme by another.  The lessons here are based on the experience of the MV team although members of the Evaluation team were also present at the workshop. Read more about the WASH Results Programme.

As the MDGs become SDGs, what progress has WASH Results made?

What has the DFID WASH Results Programme achieved so far and what lies ahead for the programme in 2016?

January 1st, 2016 is an important date for the DFID WASH Results Programme. The Sustainable Development Goals (SDGs) take the place of the Millennium Development Goals (MDGs) and ‘WASH Results’ moves into the second half of its funding period. At this stage the focus for the programme’s Suppliers (SAWRP, SNV and SWIFT) will start shifting from getting things done, to keeping things going and ensuring the sustainability of WASH outcomes. For e-Pact too, as the independent verifiers, it’s farewell to Outputs and hello to Outcomes.

Once the goals to provide access have been reached, attention turns to sustainability.

Before we look ahead to next year, let’s take a look at what’s been achieved so far. According to the UN, 2.6 billion people gained access to improved drinking water sources between 1990 and 2015 – a key part of MDG 7. Worldwide, 2.1 billion people have gained access to improved sanitation but 2.4 billion are still using unimproved sanitation facilities, including nearly 1 billion people who are still defecating in the open. This July, DFID was able to report it had exceeded its own target of supporting 60 million people to access clean water, better sanitation or improved hygiene conditions.

What contribution can we attribute to the WASH Results Programme? The full report on the programme’s 2015 Annual Review is available on the UKAid Development Tracker website. Here are some of the highlights:

  • The reviewers gave the programme an ‘A’ overall (for the second year running) and considered it to be “on track” to meeting its targets by the end of 2015.
  • By December 2014 the WASH Results Programme had reached 296,438 people with improved sanitation, 65,611 people with improved water supply, and over 1.25 million people with hygiene promotion.
  • The reviewers noted that “strong independent verification systems” have been established that also allow for adjustment and improvement based on learning from previous verification rounds.
  • WASH Results is generating significant policy knowledge around use of Payment by Results and programming for outcomes (sustainability) in the WASH sector.

We’ll come back to these results in early 2016 when the final numbers are in and compare them to the programme’s targets for December 2015 which are:

  • 968,505 people have access to clean drinking water;
  • 3,769,708 people have access to an improved sanitation facility;
  • 9,330,144 people reached through hygiene promotion activities through DFID support.

Large numbers, however impressive, don’t fully convey the effects that improvements in water, sanitation and hygiene are having on people’s lives. The Suppliers have been collecting stories of change from some of the people on the ground who are closely involved in delivering the WASH Results Programme or directly benefitting from its work. You can read about 70-year-old subsistence farmer Maria and mother-of-five Jacinta in a recent DFID blog post. The SWIFT consortium’s website is also packed with news and images from their involvement in the WASH Results Programme.

So what happens next?

From 2016 onwards, e-Pact starts answering a critical question for DFID: how many poor people continue to use improved water and sanitation facilities and are practising improved hygiene because of the WASH Results Programme? Right now we’re exploring how best to monitor, report on and verify these outcomes and look forward to sharing what we learn, with you.

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

Suggestions to donors commissioning Payment by Results programmes

Reflections by DFID WASH Results’ suppliers on the programme’s design and commissioning, summarised into six suggestions for commissioners.

Payment by Results (PbR) is a technically challenging form of contracting and one that is new for DFID and many of its partners (NAO 2015, p8; DFID 2015, p18). The WASH Results Programme was designed using the PbR financing mechanism, and therefore has great potential for learning about the impact of PbR on programming.

One year into the programme, the WASH Results suppliers met to share their experiences so far, particularly of how the programme was designed and commissioned and the effects PbR has had. This discussion has been distilled into a set of suggestions for people commissioning PbR programmes in development.

Six suggestions for commissioners of Payment By Results programmes

The discussions reflected the participants’ experiences as people leading supplier consortia, rather than those of people directly implementing the programme or of its beneficiaries, and although DFID staff were present, their role was largely that of observer. Discussions built on the early findings shared by the Oxford Policy Management team leading the formal evaluation of the programme for the e-Pact consortium. The suggestions that emerged are inevitably not as rigorous as the final findings that will come from this formal evaluation, nor as those from others’ extensive reviews drawing on multiple programmes and experience. However, our suggestions add support to some of the National Audit Office (NAO) recommendations on PbR that were published a month after our workshop, in particular the “Payment by Results: analytical framework for decision-makers” (NAO, 2015).

1. Consider what type of PbR is the right mechanism for achieving the desired outcomes.

There are multiple ways in which PbR programmes can be designed, and the selection made will influence who applies, how suppliers devise and manage their programmes, and the subsequent results. An important difference between PbR programmes is the proportion of payment that is “pure PbR”. WASH Results is designed as a 100% PbR programme, and this prompted some suppliers to take steps to minimise the risk of not achieving targets. Some set disbursement milestones early in the impact pathway to aid cashflow, rather than setting exclusively outcome-based milestones. Some rethought the areas they were working in, shifting away from the most challenging areas, and there is some indication that suppliers relied on tried and trusted methods rather than trying new approaches.
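
To make the cashflow point concrete, here is a minimal, purely hypothetical sketch. The milestone names, timings and payment shares are invented for illustration and are not drawn from any actual WASH Results payment package; it simply shows why a supplier in a 100% PbR contract might place disbursement milestones early in the impact pathway.

```python
# Hypothetical illustration only: milestone names, timings and payment shares
# are invented and are not taken from any actual WASH Results payment package.

# Each milestone is (quarter in which it is verified, share of total contract value).
output_weighted = [
    (2, 0.20),  # e.g. baseline and delivery plans verified
    (4, 0.30),  # e.g. facilities delivered (outputs)
    (6, 0.25),  # e.g. facilities in use (early outcomes)
    (8, 0.25),  # e.g. sustained use verified (outcomes)
]
outcome_only = [
    (6, 0.50),  # first payment only once outcomes are verified
    (8, 0.50),  # final outcome verification
]

def cumulative_payment(schedule, quarter):
    """Share of total contract value received by the end of a given quarter."""
    return sum(share for q, share in schedule if q <= quarter)

for quarter in range(1, 9):
    print(
        f"Q{quarter}: output-weighted {cumulative_payment(output_weighted, quarter):.0%}, "
        f"outcome-only {cumulative_payment(outcome_only, quarter):.0%}"
    )
```

In this toy example, the supplier under the outcome-only schedule receives nothing until the first outcome verification in quarter six, which is the kind of cashflow pressure described above.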

The workshop participants suggested that pre-financing may be necessary to achieve a genuine emphasis on outcomes, particularly in risky contexts or where innovation is important, so that risks are not borne solely by suppliers. Echoing other discussions on PbR (BOND, p13), participants discussed how the PbR mechanism may not be appropriate for programmes that are intended to target the most vulnerable and/or operate in fragile contexts; however, the success of the WASH Results Programme in the Democratic Republic of Congo demonstrates that it should not be ruled out. These issues need careful consideration when selecting the design of a PbR programme, and the NAO report provides an analytical framework for doing so.

2. Create spaces for learning about PbR within the commissioning process.

As the PbR modality is new to many organisations (suppliers and commissioners), it is important for commissioners and suppliers to explore options and learn from experience in order to design the most effective model. Ideally this will happen in an interactive way that helps to build relationships between stakeholders. Workshop participants stressed the importance of learning from the private sector about approaches to PbR, for example in areas such as how to assess and price risk. They also advocated a longer inception phase in programmes using multiple suppliers, to enable suppliers to talk to each other about the approaches they are considering adopting.

The NAO’s analytical framework for decision-makers advocates a learning approach to commissioning PbR programmes, and it encourages commissioners to engage in dialogue with potential providers from the outset. However, it does not explicitly recommend creating spaces for learning and relationship-building between stakeholders within the commissioning process; we think this would be a useful addition to the framework.

3. The tender documents need to be clear about what is meant by PbR, from the outset.

This includes providing the rationale for using PbR and sufficient detail about matters such as disbursement, results, and the processes and conditions around force majeure, particularly but not exclusively in fragile contexts. There should be clear guidance on how to define minimum standards: in sanitation, for example, do the minimum criteria for improved latrines allow for shared use? The NAO framework helps commissioners to think through high-level issues; however, the detail of sector-specific standards may need to be determined in pre-tendering discussions with potential suppliers and/or monitoring and verification providers.

4. Set up Monitoring, Verification and Evaluation (MVE) frameworks before implementation starts.

The MVE team for a PbR programme needs to be appointed before the suppliers, and the MVE framework needs to be created before implementation starts. This would allow standardisation of approaches across suppliers and enable them to build their monitoring and evaluation frameworks around the requirements of the verification process. This is perhaps the clearest suggestion to emerge from the workshop and once again accords with the NAO framework, which asks: “Is there an agreed system for measuring provider performance? Will this system be in place at the start of the scheme?” The WRP suppliers’ experience suggests that it should be!

5. Consider what impact the contracting process will have on opportunities for collaboration and learning.

The workshop participants felt that the tendering process for the WASH Results Programme turned international non-governmental organisations into competitors rather than collaborators, which can lead to a lack of transparency and learning between organisations. Given the broader move within aid commissioning from grants towards more competitively tendered contracts, this is an example of where it is difficult to attribute effects purely to the PbR mechanism.

As an alternative to a competitive tender, the WASH Results suppliers suggested commissioners pursue a “negotiated” procurement procedure during which the commissioner enters into contract negotiations with one or more suppliers.

6. Allow more time to commission PbR programmes than for more familiar contracting processes.

The complexity of PbR contracts and their relative unfamiliarity in the development sector imply the need to allow more time for commissioning than usual. How to use that extra time? The previous suggestions from our workshop indicate that this lengthier commissioning process should comprise the following phases:

  • Pre-bidding phase: in which the donor’s understanding of PbR is clearly laid out, and in which potential suppliers are able to access insights into the different models of PbR financing and the requirements and risks of implementing a PbR programme.
  • Bidding/contracting phase: recognising that this is a resource-intensive process for suppliers; possibly conducted as a negotiated process rather than a competitive tender.
  • Inception phase: with space for suppliers to share approaches and refine their programme design, in which a standardised MVE framework is shared and refined and means of verification agreed; milestones and payment schedules are agreed; programme expectations are agreed with partners; and consortia develop common understanding, language and approaches.
  • Implementation phase: implementation with appropriate verification cycles and disbursement points, ongoing learning and review.
  • Closing phase: for end-of-term evaluation, especially to draw out lessons learnt and to find ways of furthering the work beyond the project implementation period.

Have you tendered for, or commissioned a PbR programme?

Do our suggestions and observations reflect your experience?

What suggestions would you add to our list?

Comparative experiences of evaluating and verifying Payment by Results programmes

Highlights of the discussion about verification and evaluation of Payment By Results in our chat show at the UK Evaluation Society Conference.

A session at the UK Evaluation Society conference in May 2015 compared two Payment by Results (PbR) programmes in different sectors. The discussion revealed that although PbR programmes can be set up in very different ways, their management, evaluation and verification face similar tensions. These include balancing cash flows with the focus on “real outcomes” that PbR is designed to encourage.

Using an informal chat show approach, the session explored practical experiences of monitoring and verification of two DFID-funded PbR programmes: the Water, Sanitation and Hygiene (WASH) Results Programme and the Girls’ Education Challenge (GEC). The chat show was attended by one of the WASH Results Programme service providers, which brought a different perspective to the issues.

Chat show participants were:
Dr Katharina Welle, Deputy Team Leader, WASH Results Monitoring, Verification & Evaluation (MVE) Team (Itad)
Dr Lucrezia Tincani, Manager, WASH Results MVE Team (Oxford Policy Management)
Jason Calvert, Global Monitoring & Evaluation (M&E) Lead, Girls’ Education Challenge
The host was me, Catherine Fisher, Learning Adviser, WASH MVE Team (Independent Consultant).

The discussion focused on the practicalities of implementing programmes financed through a Payment by Results funding mechanism (specifically Results Based Financing, a term used by DFID, among others, when the payments from funders or government go to service providers or suppliers). This blog post shares a few of the areas that were discussed.

There are huge variations in the set-up of PbR programmes
Perhaps unsurprisingly for two programmes of different sizes in different sectors, the WASH Results Programme (£68.7 million over approximately four years) and GEC (£330 million over four years) are managed, monitored and evaluated in very different ways.

GEC is managed by PricewaterhouseCoopers (PwC), which manages funding to the 37 GEC projects, sets suppliers’ targets and creates M&E frameworks, while the suppliers themselves contract external evaluators to monitor and evaluate their progress. By contrast, the three WASH Results Programme supplier contracts are managed in-house by DFID and the Monitoring, Verification and Evaluation is contracted out to a consortium: e-Pact. There are no doubt pros and cons of each approach, which the chat show did not explore. But a key advantage of the GEC set-up (from the perspective of the e-Pact WASH MVE team) was the involvement of PwC from the start of the GEC programme. This meant PwC could shape the “rules of the game” for verification and was able to set standardised targets up front.

Differences in amount of funding subject to PbR
Another key difference between the two programmes was the amount of the supplier funding that was subject to PbR. For GEC suppliers it is on average 10%, whereas WASH Results Programme suppliers see 100% of their funding dependent on verified results. But this startling figure may mask some similarities…

GEC differentiates between payment against inputs (not called PbR) and payment against outcomes, the “real” PbR, which constitutes 10% of the total funding. By contrast, the WASH programme has a broader definition of results, one that includes outputs, inputs and outcomes and varies across suppliers. So in practice the programmes may be more similar than they appear.
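
As a rough back-of-envelope comparison of the headline figures above, and only as an illustration: the calculation below assumes the percentages apply to the full programme budgets quoted earlier, which is a simplification of how funding is actually allocated to suppliers.

```python
# Back-of-envelope only: assumes the headline percentages apply to the full
# programme budgets quoted above, which is a simplification.
gec_budget_gbp = 330_000_000    # GEC, over four years
gec_pbr_share = 0.10            # ~10% of supplier funding subject to PbR, on average
wash_budget_gbp = 68_700_000    # WASH Results, over approximately four years
wash_pbr_share = 1.00           # 100% of supplier funding dependent on verified results

gec_at_risk = gec_budget_gbp * gec_pbr_share      # about £33 million
wash_at_risk = wash_budget_gbp * wash_pbr_share   # about £68.7 million

print(f"GEC funding subject to PbR:          £{gec_at_risk:,.0f}")
print(f"WASH Results funding subject to PbR: £{wash_at_risk:,.0f}")
```

On these simplified assumptions, the absolute amounts at stake are of the same order of magnitude, which may be another reason the 10% versus 100% headline masks similarities.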

Balancing cash flow and adaptive programming in a PbR context
An important driver for the broader interpretation of “results” within the WASH Results Programme was the very real need for some suppliers to maintain a steady cash flow across the course of the programme in a 100% PbR context. The temptation for suppliers is to set many milestones that trigger payment. However, the need to stick to, and demonstrate achievement of, these milestones may inhibit flexibility in delivery. This tension between maintaining cash flow and the adaptive programming that PbR is intended to foster has been experienced by both programmes.

Rigour in measurement
The ability to measure results effectively is at the heart of PbR. For PwC, this means that every project subject to PbR is monitored through a Randomised Controlled Trial (RCT) or comparable quasi-experimental design that measures outcomes in the project site against a control group. One reason PwC insists on such a design for each project is to protect itself against the risk of being sued for incorrectly withholding payment.

Where an RCT is not possible, for example in Afghanistan, where security risks for implementers and cultural considerations mean that control groups are not feasible, the project is removed from PbR. A number of the 37 GEC projects have been taken off PbR because of cultural considerations and challenging environments.

The ability to measure results also depends on the existence of consensus and evidence about expected results and effective means of measuring them. Such consensus is stronger in the education sector than in the WASH sector, which makes setting targets and assessing progress towards them more difficult for a WASH programme, particularly for hygiene promotion activities such as those promoting hand washing.

As a result of these discussions, participants suggested the use of a spectrum (see Figure 1) that matches the proportion of programme funding dependent on PbR to how easy it is to measure results in that particular sector.

Figure 1: A potential PbR-measurement spectrum
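
To illustrate the idea, and only to illustrate it, here is a minimal sketch of such a spectrum. The mapping, scores and sector examples below are invented for illustration and are not taken from the workshop or from Figure 1.

```python
# Illustrative only: the measurability scores, examples and suggested shares
# below are invented to show the shape of the idea, not actual guidance.

def suggested_pbr_share(measurability: float) -> float:
    """Map ease of measuring results (0 = very hard, 1 = very easy) to a
    suggested proportion of funding made dependent on PbR."""
    if not 0.0 <= measurability <= 1.0:
        raise ValueError("measurability must be between 0 and 1")
    # A simple linear mapping from a 10% floor up to 100%.
    return 0.1 + 0.9 * measurability

examples = {
    "outcomes hard to measure (e.g. hygiene behaviour change)": 0.2,
    "outcomes moderately measurable": 0.5,
    "outcomes well evidenced and easy to measure": 0.9,
}

for description, score in examples.items():
    print(f"{description}: ~{suggested_pbr_share(score):.0%} of funding on PbR")
```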

Does PbR generate perverse incentives?
One of the much-talked-about cons of PbR is that it creates perverse incentives among stakeholders, driving them to behave in the opposite way to that intended. Participants shared stories that included examples of the PbR mechanism inhibiting innovation and encouraging suppliers to focus on “low-hanging fruit” rather than on greatest need. A review of PbR in Swiss cantons suggested it did not work at all in terms of generating efficiency and effectiveness.

PbR is not a one-size-fits-all solution; there remains much to learn about when and where it can work
There was consensus that PbR cannot be used effectively in all contexts. The risks and uncertainties of working in fragile states make them one setting in which PbR can be difficult (but not impossible?) to implement; however, even in more stable contexts, issues around organisational capacity and motivation can inhibit its effective implementation.

Participants agreed that there are probably sets of circumstances in which PbR can be effective. People working in countries and sectors in which PbR has been used for many years have spent time identifying what those contexts are – see, for example, this paper about PbR in UK community work. For those of us who are new to the challenges of commissioning, designing, implementing, monitoring, evaluating or verifying PbR projects, and are doing so in contexts in which it has not been tried, there is a lot to learn. This chat show illustrates that there is great value in coming together to do so.

What do you think?
What are you learning about managing, implementing or verifying Payment By Results programmes? Does this discussion reflect your experience? Would you like to learn with us? Please use the comment box to share your thoughts.

Catherine Fisher, Learning Adviser, WASH Results MVE Team.

See also: Payment-by-results: the elixir or the poison?, 6 January 2015, Jason Calvert