
Beyond a burden: what value does verification offer?

Police officers, auditors, teachers marking homework and giving out detentions – just some of the unfavourable analogies we have heard applied to the role of the independent verification team in the WASH Results Programme. Catherine Fisher highlights the positive roles missing from these comparisons.

Our job is to verify that the achievements reported by Suppliers delivering the programme are accurate and reliable in order that DFID can make payment. It’s easy to see why the relationship between Verifier and Supplier can be an uncomfortable one, but in this post we look at the value of verification and what, if any, benefits it brings to Suppliers.

Why does the WASH Results Programme have a verification team?

Payment by Results (PbR) guru Russell Webster concluded from his review of the PbR literature:

“When commissioners devise a contract where payment is mainly contingent on providers meeting outcome measures, they need to be confident in the data relating to whether these measures are achieved. There are two main issues:

  • Is the provider working with the right people (i.e. not cherry picking those who will achieve the specified outcomes most easily)?
  • Are the data reliable?”

Let’s take each of these in turn.

Is the provider working with the right people?

All the Suppliers in the WASH Results Programme are international NGOs who have continued to pursue their commitment to values such as equity and inclusiveness, even where the PbR mechanism has not incentivised this. A central theme in our peer learning workshops has been the ongoing puzzle of how to place value (both in commercial/financial and Value for Money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. Suppliers and the Verification Team have been exploring how PbR can enable alignment with national systems and promote downward, as well as upward, accountability.

There has been no evidence of gaming in the WASH Results Programme. That is not to say that it might never be an issue for other PbR contracts, and the higher the risk of gaming, the greater the emphasis that needs to be placed on verification. So if verification has not identified any gaming, what value has it brought?

Are the data reliable?

Because the WASH Results Programme relies largely on Suppliers' own monitoring data, the benefits of verification stem from the question of whether Suppliers' data about their achievements are reliable. This has been a matter of great debate.

We have found that in some cases it is right not to rely unquestioningly on data that comes from Suppliers' monitoring systems – those systems are not always as robust as Suppliers themselves thought. Verification has identified several situations where Suppliers could have gone on to inadvertently over-report results, which would have led to DFID paying for results that had not been achieved. Verification ensured DFID only paid for genuine results and helped Suppliers improve their monitoring. We explore the value of improved monitoring to Suppliers later in this post.
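To make the payment logic concrete, here is a minimal, hypothetical sketch in Python of how a verifier might compare a Supplier's reported figure against a spot-check sample before payment is released. The function names, sample sizes, tolerance threshold and payment rule are all invented for illustration; they are not the WASH Results Programme's actual verification method.

```python
# Hypothetical sketch: flag possible over-reporting by comparing a Supplier's
# reported total against a verifier's spot-check sample.
# The 95% tolerance and the pro-rata payment rule are invented assumptions.

def verified_estimate(reported_total: int, sample_size: int, confirmed: int) -> float:
    """Scale the reported total by the share of sampled results confirmed."""
    return reported_total * (confirmed / sample_size)

def payable_results(reported_total: int, sample_size: int, confirmed: int,
                    tolerance: float = 0.95) -> int:
    """Pay the full reported figure only if the confirmation rate meets the
    tolerance; otherwise pay on the verified (scaled-down) estimate."""
    rate = confirmed / sample_size
    if rate >= tolerance:
        return reported_total
    return int(verified_estimate(reported_total, sample_size, confirmed))

# A Supplier reports 10,000 beneficiaries; verifiers check a sample of 200.
print(payable_results(10_000, 200, 196))  # 0.98 >= 0.95, pay in full: 10000
print(payable_results(10_000, 200, 170))  # 0.85 < 0.95, pay pro rata: 8500
```

In practice a real scheme would need a defensible sampling strategy and dispute process, but even this toy version shows why unreliable monitoring data translates directly into withheld payment.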

One of our Country Verifiers (members of the Verification Team based where the implementation is taking place) recently observed: “From my experience, the WASH Results programme is quite different from the traditional way of doing implementation – having someone who is independent, who checks the Suppliers’ results before they are paid for, makes it quite a good tool to hold Suppliers to account.”

So far, the obvious value that verification in the WASH Results Programme has brought to DFID is confidence in results, through third party information about those results, and a reduced risk of paying for results that were not achieved. But there are more, less apparent, benefits, which we describe towards the end of this post.

Can verification bring value to Suppliers?

Having explored the value of verification to the donor, we now turn to the value for Suppliers.

The same Country Verifier commented that while he felt some Suppliers were initially scared that the verifier was there to spot their mistakes, “I think with time they realise that the role of independent verification is just to check that what they’re reporting is what the reality is when the verifier goes out to sites where they’ve been working. You’re only checking.”

Although Suppliers often view verification as a “burden”, our team identified a set of potential returns for the Suppliers on the effort and investment they put into participating in the process (effects, we suspect, that donors would appreciate). We acknowledge that it can be hard to unpick the value of verification from the value of investing in better monitoring per se, but without overstating our role, we feel we have contributed to:

  • Identifying areas for improvement – verification has revealed flaws in a system thought by the Supplier to be strong and introduced tests that were not previously used. In one example, verification revealed problems with third party enumerators’ work and this prompted greater scrutiny of their data by the Supplier and changes to training processes.
  • Strengthening Quality Assurance – We have seen how the expectation of verifiers checking data can prompt Suppliers to improve their own Quality Assurance (QA) processes, for example, carrying out internal checks prior to submitting data for verification and introducing QA protocols.
  • Increasing the value of data – the process of verification counters the commonly-held belief that “no-one looks at this data anyway”, which, unchecked, can reduce the effort put into data collection and the usability of the data systems.
  • Reducing risk of failure (and withholding of payment) – The requirement to have more and better data can pre-empt some problems. For example, knowing that they would need to demonstrate to verifiers that they had met their water systems targets, prompted one Supplier to check in advance if the declared yield of sources would be enough to reach the population they were planning to reach.
  • Forcing deeper reflection – linking PbR to the achievement of WASH outcomes has forced Suppliers to think about WASH outcomes and how they can be measured and be clearer on definitions to a greater degree than in other, non-PbR, programmes. Verification has by no means driven that process but has contributed to it.
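The advance yield check mentioned in the list above (checking whether declared source yields could actually serve the planned population) can be sketched as a simple calculation. All figures here are illustrative assumptions – the per-capita demand benchmark, yields and target are invented, not drawn from the programme.

```python
# Hypothetical sketch: before claiming a water-access result, check whether
# the declared daily yield of sources can serve the planned population.
# The 20 litres/person/day benchmark and all figures are illustrative.

LITRES_PER_PERSON_PER_DAY = 20  # assumed basic-service benchmark

def people_servable(declared_yield_lpd: float) -> int:
    """Number of people one source's declared daily yield can serve."""
    return int(declared_yield_lpd // LITRES_PER_PERSON_PER_DAY)

def check_target(sources_lpd: list[float], target_population: int) -> bool:
    """True if the combined yields cover the population the Supplier plans to reach."""
    capacity = sum(people_servable(y) for y in sources_lpd)
    return capacity >= target_population

# Three sources (litres/day) against a target of 2,500 people:
# 1000 + 750 + 600 = 2350 people servable, so the target is not yet met.
print(check_target([20000, 15000, 12000], 2500))  # False
```

Running this kind of check before reporting, rather than after verification fails, is exactly the pre-emption of withheld payment the bullet describes.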

We acknowledge that these may not always have felt like benefits to the Suppliers! In particular, some Suppliers have pointed out the trade-off between data collection and learning, and suggested that the burden of verification has stifled innovation and inhibited adaptive programming. Others, however, claim the opposite, which implies there may be other factors at play.

In spite of concerns, there is broad consensus that the PbR modality, of which verification is a part, has driven higher investment in and attention to programme M&E systems. PbR requires Suppliers to be clear about what they are trying to achieve, to collect good quality data to monitor their progress and to use that data to report on their progress regularly. Verification has helped to build confidence in the strength of systems and data on which those processes are based. There is an emerging sense that effective use of reliable M&E data by Suppliers has enabled rapid course correction and so contributed to high achievements across the WASH Results Programme.

And if that is not enough, we think there are benefits for other stakeholders in the countries in which WASH Results is operating. We have seen some benefits from capacity spillover – skills and knowledge acquired through working on, or observing, the data collection, analysis and verification in the WASH Results Programme are available to other programmes, for example among enumerators, Country Verifiers, programme staff and even Government agencies. Again, this is by no means all attributable to verification, but verification has contributed.

Value and the limits of verification

It can be hard to unpick the benefits of verification from benefits that stem from the greater emphasis on data collection inherent to PbR. In some contexts PbR is being used without third party verification. But, in contexts where reassurance is needed about the reliability of the data on outputs and outcomes, we believe verification offers value to the donor, to the Suppliers and, potentially to others in the country in which the programme is operating.

While we have argued for the benefits of verification, there are weaknesses in PbR that verification cannot solve. Verifiers, like police officers, don't make the rules; they just enforce them. They verify results that have been agreed between the donor and the Supplier. As one of our team observed recently: "Payment by Results makes sure you do what you said you would. It doesn't make you do the right thing…"

However, if verification helps drive a "race to the top" in the quality of monitoring systems, the sector will begin to have better data on which to base decisions. Better data about what kinds of programmes produce what kinds of outcomes in which contexts could help donors to fund, and programmers to implement, more of "the right thing". And the police officers will feel their job has been worthwhile.


Catherine Fisher, Learning Advisor, Monitoring and Verification Team for the WASH Results Programme. This post draws on a reflection process involving members of the Monitoring and Verification team for the WASH Results Programme (Alison Barrett, Amy Weaving, Andy Robinson, Ben Harris, Cheryl Brown, Don Brown, Joe Gomme and Kathi Welle).


Want to learn more about the experience of the WASH Results Programme? Join us in Stockholm during World Water Week for ‘The Rewards and Realities of Payment by Results in WASH’

The paybacks and pains of Payment by Results (Part 2)

Our series of reflections on WASH Results’ learning continues by exploring value and costs in a Payment by Results (PBR) programme.

DFID has been clear from the outset about what it wants from the Water, Sanitation and Hygiene (WASH) Results Programme: WASH interventions delivered at scale within a short time-frame and confidence in the results being reported to the UK taxpayer. DFID got what it wanted, but at what cost? In this post we build on discussions at the WASH Results Programme’s learning event held earlier this year which looked beyond the numbers of people reached with interventions to explore some of the challenges faced in implementing the programme.


Can PBR frameworks be designed to incentivise suppliers to focus on the "harder to reach"?

A central theme in the workshop was the ongoing puzzle of how to place value (both in commercial/financial and Value for Money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. So, does the importance placed by a donor on achieving clearly costed, verified results risk squeezing out other values and principles that are central to development programming? Which values might end up being pushed aside and could this be mitigated through better design?

1. High quality programming

Suppliers hit a major challenge during tendering when DFID asked them to provide a price per beneficiary that reflected the cost to the suppliers of reaching that beneficiary. But calculating this cost is complex. Potential suppliers have to think about what kinds of costs they should fold into that price per beneficiary when bidding: the more costs they include, the higher the bid. During the workshop one Supplier asked rhetorically, "Should we say $20 with alignment and $23 without?"

There is some apprehensiveness within the NGO sector about competing with the private sector in this commercial context, and NGOs are often advised to be cautious. Will they be undercut by commercial organisations submitting more attractive (read: cheaper) bids that lack the added benefits that NGOs can bring – the social capital and ways of working that are difficult to put a commercial value on but that will affect the quality of the programming?

DFID has been clear that it does not equate Value for Money (VfM) with “cheap” and it is willing to pay for quality programming, whoever is best placed to deliver it. One improvement to the tendering process would be to articulate some of these added benefits (such as existing relationships and social capital in a programme area) as requirements for bidders. Potential suppliers would thus need to provide evidence within the bidding process.
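The pricing dilemma described in this section – which cost lines to fold into a single price per beneficiary – can be made concrete with a small, hypothetical sketch. The cost categories, amounts and beneficiary numbers below are entirely invented for illustration and are not taken from any actual bid.

```python
# Hypothetical sketch of the bid-pricing dilemma: the unit price a bidder
# quotes depends on which cost lines they fold in. All figures are invented.

def price_per_beneficiary(costs: dict[str, float], beneficiaries: int,
                          include: set[str]) -> float:
    """Unit price implied by the cost lines the bidder chooses to include."""
    total = sum(v for k, v in costs.items() if k in include)
    return round(total / beneficiaries, 2)

costs = {
    "hardware": 1_200_000,      # e.g. pumps, latrine materials
    "delivery": 500_000,        # staff, logistics
    "alignment": 300_000,       # working through national systems
    "social_capital": 250_000,  # community engagement, relationships
}

core = {"hardware", "delivery"}
print(price_per_beneficiary(costs, 100_000, core))                  # 17.0
print(price_per_beneficiary(costs, 100_000, core | {"alignment"}))  # 20.0
print(price_per_beneficiary(costs, 100_000, set(costs)))            # 22.5
```

The point of the sketch is that intangible lines like "social_capital" either inflate the headline price (risking the bid) or get silently dropped (risking the programme quality) – precisely the tension the Supplier's rhetorical question captures.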

2. Reaching the hardest to reach 

A criticism levelled at PBR is that by using a fixed "price per beneficiary" approach, it encourages suppliers to focus on people who are easier to reach, a practice sometimes described as "creaming" or "cherry picking". Stakeholders in the WASH Results Programme are firmly committed to inclusion and during the workshop investigated how that could be incentivised better within a PBR framework. Options explored included multi-tiered price per beneficiary frameworks (as used in drug and alcohol recovery programmes in the UK), although these carry the risk of increasing complexity and reducing flexibility. Another suggestion for incentivising inclusion was careful selection and wording of the objectives and appropriate verification processes in the tender document; however, this may risk compromising the flexibility to negotiate targets and verification approaches in response to different contexts.
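A multi-tiered price-per-beneficiary framework of the kind explored in the workshop might look like the following minimal sketch. The tier names and unit prices are invented assumptions, chosen only to show how differentiated pricing blunts the incentive to cherry-pick.

```python
# Hypothetical sketch of a multi-tiered price-per-beneficiary framework:
# a higher unit payment for harder-to-reach groups. Tiers and prices invented.

TIERS = {
    "urban": 15.0,         # easiest to reach, lowest unit price
    "rural": 25.0,
    "remote_rural": 40.0,  # hardest to reach, highest incentive
}

def payment_due(verified: dict[str, int]) -> float:
    """Total payment implied by verified beneficiary counts per tier."""
    return sum(TIERS[tier] * count for tier, count in verified.items())

# Under this tariff, 1,000 remote-rural beneficiaries (40,000) are worth more
# than 2,500 urban ones (37,500), weakening the "cherry picking" incentive.
print(payment_due({"urban": 2500, "remote_rural": 1000}))  # 77500.0
```

As the post notes, the trade-off is real: every extra tier adds definitional and verification complexity, since verifiers must now confirm not just that a beneficiary was reached but which tier they belong to.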

3. Investing for the future

One related but different challenge that emerged during the workshop was that of placing commercial value on activities that invest in future work in the sector. This includes building the social capital to work with local stakeholders and investing in programmatic innovation (which some suppliers suggested had not been possible under the WASH Results Programme). Do the practical implications of PBR risk capitalising on previous investment made by suppliers without contributing to it in turn? This is perhaps not an issue while PBR contracts constitute a small proportion of aid financing, but it would become one if PBR contracts started to dominate. On the other hand, the benefits that suppliers report, particularly in terms of strengthening monitoring and reporting systems to enable more rigorous real-time results tracking, may also spill over into other programmes, benefitting them in turn. It is too early to draw conclusions, but it may be that a range of different aid mechanisms is required, with the benefits and limitations of each clearly identified.

4. Confidence in results

Finally, it is worth observing the possible trade-off between the value placed by DFID on confidence in results, which is so important for communicating with taxpayers, and the effectiveness of aid spending that can be achieved through PBR and the nature of the results it produces. Verification is undoubtedly costly ("Someone paid you to come here just to look at that toilet?" a baffled resident of a beneficiary village is reported to have asked a verification team member).

But there is another aspect of effectiveness: if PBR prompts suppliers to focus their efforts on what can be counted (i.e. what can be verified at scale without incurring prohibitive expense), this may shift their efforts away from development programming with longer-term and more uncertain outcomes. Put simply, this could equate to building toilets rather than working on sanitation behaviour change interventions, which are considered to generate more sustainable positive outcomes. Of course, there is no guarantee that other forms of aid financing would generate these results, and as there is likely to be less focus on measuring those results, comparison would be difficult.

Advice for PBR commissioners

What might this mean for those considering PBR modalities and designing PBR programmes? The experience of WASH Results so far suggests that when designing a PBR programme, commissioners need to:

  • be clear on the value implied in “value for money” – consider all of the “values” that are important, including the value of donor confidence in results;
  • strike a balance between clearly specifying expected results (particularly for more vulnerable people) and being flexible to the contexts in which suppliers are operating;
  • think creatively and collaboratively about how long-term outcomes can be measured;
  • explore hybrid funding models but avoid creating the “worst of all worlds” that lacks the flexibility of PBR, increases complexity and imposes multiple reporting frameworks;
  • consider whether PBR is the right funding mechanism for the kinds of results you wish to achieve (tools are emerging that can help);
  • view the PBR component in the context of the broad spectrum of funding to the sector – seek to maximise linkages and mutual value across the sector.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

The report from the WASH Results learning workshop is available to download from DFID’s Research for Development website. 

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

Is it time for Results 2.0?

Catherine Fisher reports back on a Bond Conference panel discussion about the origins and future of the ‘results agenda’.

Aid should generate results. On the face of it, an indisputably good idea, but the ‘results agenda’ is anything but uncontroversial and can spark “epic” debates. In the WASH Results Programme, this agenda is manifest in the funding relationship – Results Based Financing (RBF) – a form of Payment By Results (PBR) through which DFID makes payments to Suppliers contingent on the independent verification of results. As the Monitoring, Verification and Evaluation (MVE) providers for WASH Results, we’re keen to exchange the programme’s insights with those of other people who have first-hand experience of the results agenda.


Credit: Bond/Vicky Couchman. Jessica Horn and Irene Guijt sharing their views on the ‘results agenda’ at the Bond Conference 2016.

One such opportunity arose this week at the Bond Conference, in a session entitled ‘How to crack Results 2.0’ chaired by Michael O’Donnell, Head of Effectiveness and Learning at Bond and author of ‘Payment by Results: What it means for UK NGOs’. The session considered the origins and implications of the results agenda and looked ahead to the next version. Catherine Fisher, Learning Advisor for the WASH Results MVE team, reports on a lively discussion about how results agendas could be aligned with work on social transformation, enable learning and reflection within programmes, and provide value for money themselves.

Looking back to the origins of results approaches in DFID

Opening the session, Kevin Quinlan, Head of Finance, Performance and Impact at DFID, explained how, in 2010, DFID encountered two opposing forces: increased funding to meet the UK's legal commitment to spending 0.7 percent of national income on Overseas Development Aid, alongside austerity measures that required cuts in, and increased scrutiny of, public spending. The results and transparency agendas were DFID's response to those competing demands, and they marked a shift towards delivering results now rather than strengthening systems to deliver results in future. This implied a corresponding shift to talking about the results DFID would support rather than the activities it would fund to achieve results in future. Six years on, DFID is reassessing its approach.

Can results approaches be reconciled with the “art of transformation”?

Earlier that day, Dr Danny Sriskandarajah, Secretary General of CIVICUS, told conference delegates that INGOs had become too focused on the “science of delivery” (which he described as the achievement of impact by any means) as opposed to the “art of transformation” – the work of bringing about social change. This theme re-emerged during the ‘Results 2.0’ discussion: how could the focus on hard results, embedded in results frameworks, be reconciled with the messy business of social transformation that is at the heart of struggles for equity and rights?

Jessica Horn, Director of Programmes at the African Women's Development Fund, noted that results frameworks do not acknowledge power or monitor how it is transformed. Consequently she and her colleagues resort to what she called "feminist martial arts" – twisting and turning, blocking and jabbing – to defend the transformative work they do from the "tyranny of results". Often, Jessica argued, the politics of the process are as important as the politics of the outcome; she asked, "how does the results framework capture that?" Yet as Irene Guijt, newly appointed Head of Research at Oxfam GB, argued, being forced to think about results, even in the social transformation context, helps to make things clearer. Between them, they had some suggestions about how it could be done.

Irene contended that there needed to be greater differentiation of what kind of data we need for different reasons, rather than a one-size-fits-all approach to accountability. She argued that “results” are too often about numbers and we need to bring humans back in and tell the story of change. Irene recommended using the tool SenseMaker to bring together multiple qualitative stories which, through their scale, become quantifiable. Jessica shared some frameworks for approaching monitoring and reporting on social transformation more systematically and in ways that consider power, such as Making the Case: the five social change shifts and the Gender at Work Framework.

Does focusing on monitoring results for accountability squeeze out reflection and learning?

This criticism is often levelled at results-based approaches and their associated heavy reporting requirements. Irene commented that "learning and data are mates but compete for space". To align learning and reflection with results monitoring, she advised focusing on collective sense-making of reporting data, a process that enables evidence-based reflection and learning. She also suggested streamlining indicators, focusing on those with the most potential for learning – a point echoed by Kevin from DFID, who emphasised the need to select indicators that are most meaningful to the people implementing programmes (rather than to donors).

Do results agendas themselves demonstrate value for money?

This question resonated with the participants, triggering musings on the value of randomised controlled trials and the cost of management agents from the private sector. One point emerging from this discussion was that what is asked for in results monitoring is often difficult to achieve. Indeed, this has, at times, been the experience of the WASH Results Programme, particularly in fragile contexts (see, for example, the SWIFT Consortium's report [PDF]). Both Irene and Jessica talked of the need to use a range of different tools for different purposes, and Irene made reference to her recent work on balancing feasibility, inclusiveness and rigour in impact assessments.

What is the trajectory for DFID and the results agenda?

Kevin Quinlan took this question head on, agreeing that this is something DFID needs to decide in the next few months. He suggested that some of the areas for discussion were:

  • Getting to a more appropriate place on the spectrum between communication (to tax-payers) and better programme design; results are part of communicating to tax-payers but not the only part;
  • Reducing standard indicators in favour of flexible local indicators; each project would need at least one standard indicator to allow aggregation, but there should be more local indicators to enable learning;
  • Alleviating the torture of results – "rightsizing" the reporting burden, reducing the transaction costs of results reporting, and thinking about what results can do alongside other tools;
  • Adopting a principles-based approach rather than a set of rules.

Meanwhile the Evaluation Team for WASH Results is investigating some of the issues raised during the panel such as examining the effect of results verification on Suppliers’ learning and reflection, and seeking to explore the value for money of verification.

So it sounds like there will be more interesting discussions about the results agenda in the near future and we look forward to contributing insights from WASH Results*. Whether Results 2.0 is on the horizon remains to be seen.

* Please email the MVE Team if you would like us to let you know when our evaluation findings are available.