
Beyond a burden: what value does verification offer?

Police officers, auditors, teachers marking homework and giving out detentions – just some of the unfavourable analogies we have heard applied to the role of the independent verification team in the WASH Results Programme. Catherine Fisher highlights the positive roles missing from these comparisons.

Our job is to verify that the achievements reported by Suppliers delivering the programme are accurate and reliable in order that DFID can make payment. It’s easy to see why the relationship between Verifier and Supplier can be an uncomfortable one, but in this post we look at the value of verification and what, if any, benefits it brings to Suppliers.

Why does the WASH Results Programme have a verification team?

Payment by Results (PbR) guru Russell Webster concluded from his review of PbR literature:

“When commissioners devise a contract where payment is mainly contingent on providers meeting outcome measures, they need to be confident in the data relating to whether these measures are achieved. There are two main issues:

  • Is the provider working with the right people (i.e. not cherry picking those who will achieve the specified outcomes most easily)?
  • Are the data reliable?”

Let’s take each of these in turn.

All the Suppliers in the WASH Results Programme are international NGOs that have continued to pursue their commitment to values such as equity and inclusiveness, even where the PbR mechanism has not incentivised this. A central theme in our peer learning workshops has been the ongoing puzzle of how to place value (in both commercial/financial and Value for Money terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. Suppliers and the Verification Team have also been exploring how PbR can enable alignment with national systems and promote downward, as well as upward, accountability.

There has been no evidence of gaming in the WASH Results Programme. That is not to say that gaming could never be an issue in other PbR contracts – the higher the risk of gaming, the greater the emphasis that needs to be placed on verification. So if verification has not identified any gaming, what value has it brought?

Are the data reliable?
Because the WASH Results Programme relies largely on Suppliers’ own monitoring data, the benefits of verification stem from the question of whether Suppliers’ data about their achievements are reliable. This has been a matter of great debate.

We have found that in some cases it is right not to rely unquestioningly on data that comes from Suppliers’ monitoring systems – those systems are not always as robust as Suppliers themselves thought. Verification has identified several situations where Suppliers could have gone on to inadvertently over-report results, which would have led to DFID paying for results that had not been achieved. Verification ensured DFID only paid for genuine results and helped Suppliers improve their monitoring. We explore the value to Suppliers of improved monitoring later in this post.

One of our Country Verifiers (members of the Verification Team based where the implementation is taking place) recently observed: “From my experience, the WASH Results programme is quite different from the traditional way of doing implementation – having someone who is independent, who checks the Suppliers’ results before they are paid for, makes it quite a good tool to hold Suppliers to account.”

So far, the obvious value that verification in the WASH Results Programme has brought to DFID is confidence in results, through third party information about those results, and a reduced risk of paying for results that were not achieved. But there are more, less apparent, benefits, which we describe towards the end of this post.

Can verification bring value to Suppliers?

Having explored the value of verification to the donor, we now turn to the value for Suppliers.

The same Country Verifier commented that while he felt some Suppliers were initially scared that the verifier was there to spot their mistakes, “I think with time they realise that the role of independent verification is just to check that what they’re reporting is what the reality is when the verifier goes out to sites where they’ve been working. You’re only checking.”

Although Suppliers often view verification as a “burden”, our team identified a set of potential returns for the Suppliers on the effort and investment they put into participating in the process (effects, we suspect, that donors would appreciate). We acknowledge that it can be hard to unpick the value of verification from the value of investing in better monitoring per se, but without overstating our role, we feel we have contributed to:

  • Identifying areas for improvement – verification has revealed flaws in a system thought by the Supplier to be strong and introduced tests that were not previously used. In one example, verification revealed problems with third party enumerators’ work and this prompted greater scrutiny of their data by the Supplier and changes to training processes.
  • Strengthening Quality Assurance – we have seen how the expectation of verifiers checking data can prompt Suppliers to improve their own Quality Assurance (QA) processes, for example by carrying out internal checks prior to submitting data for verification and introducing QA protocols.
  • Increasing the value of data – the process of verification counters the commonly-held belief that “no-one looks at this data anyway”, which, unchecked, can reduce the effort put into data collection and the usability of the data systems.
  • Reducing risk of failure (and withholding of payment) – the requirement to have more and better data can pre-empt some problems. For example, knowing that they would need to demonstrate to verifiers that they had met their water systems targets prompted one Supplier to check in advance whether the declared yield of sources would be enough to serve the population they were planning to reach.
  • Forcing deeper reflection – linking payment to the achievement of WASH outcomes has forced Suppliers to think about what WASH outcomes are, how they can be measured, and how they are defined, to a greater degree than in other, non-PbR, programmes. Verification has by no means driven that process, but it has contributed to it.

We acknowledge that these may not always have felt like benefits to the Suppliers! In particular, some Suppliers have pointed out the trade-off between data collection and learning, and suggested that the burden of verification has stifled innovation and inhibited adaptive programming. Others, however, claim the opposite, which implies there may be other factors at play.

In spite of concerns, there is broad consensus that the PbR modality, of which verification is a part, has driven higher investment in and attention to programme M&E systems. PbR requires Suppliers to be clear about what they are trying to achieve, to collect good quality data to monitor their progress and to use that data to report on their progress regularly. Verification has helped to build confidence in the strength of systems and data on which those processes are based. There is an emerging sense that effective use of reliable M&E data by Suppliers has enabled rapid course correction and so contributed to high achievements across the WASH Results Programme.

And if that is not enough, we think there are benefits for other stakeholders in countries in which WASH Results is operating. We have seen some benefits from capacity spillover: skills and knowledge acquired through working on, or observing, the data collection, analysis and verification in the WASH Results Programme are available to other programmes, e.g. among enumerators, Country Verifiers, programme staff, even Government agencies. Again, this is by no means all attributable to verification, but verification has contributed.

Value and the limits of verification

It can be hard to unpick the benefits of verification from benefits that stem from the greater emphasis on data collection inherent to PbR. In some contexts PbR is being used without third party verification. But, in contexts where reassurance is needed about the reliability of the data on outputs and outcomes, we believe verification offers value to the donor, to the Suppliers and, potentially to others in the country in which the programme is operating.

While we have argued for the benefits of verification, there are weaknesses in PbR that verification cannot solve. Verifiers, like police officers, don’t make the rules; they just enforce them. They verify results that have been agreed between the donor and the supplier. As one of our team observed recently: “Payment by Results makes sure you do what you said you would. It doesn’t make you do the right thing…”

However, if verification helps drive a “race to the top” in the quality of monitoring systems, the sector will begin to have better data on which to base decisions. Better data about what kinds of programmes produce what kinds of outcomes in which contexts could help donors to fund, and programmers to implement, more of “the right thing”. And the police officers will feel their job has been worthwhile.


Catherine Fisher, Learning Advisor, Monitoring and Verification Team for the WASH Results Programme. This post draws on a reflection process involving members of the Monitoring and Verification team for the WASH Results Programme (Alison Barrett, Amy Weaving, Andy Robinson, Ben Harris, Cheryl Brown, Don Brown, Joe Gomme and Kathi Welle).


Want to learn more about the experience of the WASH Results Programme? Join us in Stockholm during World Water Week for ‘The Rewards and Realities of Payment by Results in WASH’

Reflections on our experience of DFID’s results agenda

As verifiers of a DFID Results Based Finance programme, ODI’s research on the UK’s results agenda prompted us to reflect on our experience.

Kakimat latrine eaten by goats (photo)

Why context matters when you focus on results #1: Some latrine building projects have to allow for the impact of hungry goats. Photo credit: Chamia Mutuku


In their report ‘The Politics of the Results Agenda in DFID: 1997 to 2017’, Craig Valters and Brendan Whitty argue that 2007 saw a new, explicit focus from DFID on aggressively implementing results-based management. Ten years later, we have WASH Results: a DFID-funded programme in which financial risk has been transferred entirely from UK taxpayers to the international NGOs who deliver the work, and who only get paid for results that have been checked by a third party – us. However, as its name promises, the programme is delivering results in water, sanitation and hygiene (WASH). DFID was able to read in the programme’s 2017 annual report – with great confidence in the figures – that WASH Results had reached over 1.1 million people with improved water supply, more than 4.7 million people with improved sanitation, and over 14.9 million people with hygiene promotion.

In our role as Monitoring, Verification and Evaluation Team for the WASH Results programme, our attention is focused less on the politics of the results agenda and more on how results are monitored and verified, and on the very real impact that this approach has on ongoing programme delivery. However, we read the report and blog post by Valters and Whitty with great interest.

After more than three years of supporting the programme, how does our experience compare with the conclusions and recommendations of the ODI report? One key finding from the research is that some DFID staff have found ways to adhere to the results agenda while retaining flexibility. This theme – the ways in which both donors and programme implementers are working creatively around the “tyranny of results” – was one that we heard during last year’s BOND Conference session ‘How to crack Results 2.0’.

How can PBR be adapted to address the imbalance in accountability?

We absolutely agree with Valters and Whitty about the importance of finding a balance between being accountable to UK citizens and to the beneficiaries (poor people abroad). This time last year, we shared our opinion that if verification was designed to include beneficiary feedback and this was linked to payment, Payment by Results (PBR) could actually generate more downward accountability than other funding modalities. However, our team of verifiers felt that the demands of verification for large scale, representative, quantitative information on which to base payment decisions may leave less time, money and inclination to undertake more qualitative work with beneficiaries. So, we suggested that a resource-effective solution to upholding downwards accountability through verification would be to include payment for the existence and effective functioning of a beneficiary feedback system (rather than the results of that system). Payment would be made on verification of the effectiveness of the system in promoting downwards accountability.

We welcome the authors’ call to “Create a results agenda fit for purpose”. Our first reflection would be that a results agenda, at least one hard-wired into a PBR modality, is not going to be appropriate in all contexts or for all intended outcomes, particularly those where outcomes are difficult to predict or challenging to measure. Our set of recommendations to commissioners of PBR programmes complements several of those made by ODI – for example, their suggestion that DFID spend more time considering whether its aid spending has the right mix of risks, and the view that regular testing (leading to course-correction) is important.

The challenge of communicating about costs and value

The authors also call on ministers to be honest with the British public about aid. Part of this, we feel, is making it clearer that Value for Money (VFM) is not synonymous with “cheap”: the results agenda, particularly a PBR model, should require donors/commissioners to clearly articulate the “value” they expect to see in VFM. Otherwise, the importance placed by a donor on achieving clearly costed, verified results risks squeezing out other values and principles that are central to development programming. A central theme in last year’s WASH Results learning workshop was the ongoing puzzle of how to place value (in both commercial/financial and VFM terms) on intangible aspirations and benefits, such as reaching the most vulnerable and investing in the processes and social capital that underpin effective programming. This is particularly important in an increasingly commercialised aid context, where one supplier might propose parachuting in to build many toilets very quickly and cheaply, while another proposes taking longer to work with local stakeholders, norms and materials. Articulating value may not be as simple as it sounds when every commitment in a PBR programme – reaching the poorest, gender equity, national ownership, sustainable outcomes, and so on – needs to be reflected in meaningful and measurable indicators.

Payment By Results can aid course correction

Interestingly, one of the reforms that the authors call for may be an inherent feature of the results framework itself. They say that “interventions need to be based on the best available information, with regular testing to see if they are on the right track”. We have found that a product of the PBR modality is that much greater emphasis is placed on monitoring systems and the generation of reliable data about what is happening within programmes. In WASH Results we have seen cases where the rigorous (compulsive?) tracking of results has identified areas where programmes were failing to deliver, and rapid action has then been taken to address that failure. As verification agents, we argue that this is due not only to the link between results and payment, but also to the independent verification of data and systems, which has led to better information on which to base decision-making.

Benefits of the results agenda

In this way, we think that the focus on monitoring within the results agenda can, in some cases, enable flexibility and innovation. In its reliance on high quality data, it contains within it a driver that could improve the way that development work happens. The results agenda brings benefits – some of which we did not see reflected in the article – but it also comes with risks: ideological risks about the ambitions for UK Aid, and practical ones for those involved in its delivery. And so we welcome this debate.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

If you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us

Can Payment by Results raise the bar for downward accountability?

Payment by Results (PBR) could encourage downward accountability if verification included beneficiary feedback that was linked to payment.

One of the criticisms levelled at PBR is that it promotes upward accountability to donors rather than downward accountability to beneficiaries. Members of the Monitoring and Verification (MV) team for the WASH Results Programme recently discussed whether this reflects their experience and what role there is for beneficiary feedback in verification processes. In this post, we summarise those discussions.

The MV team agreed that the bias towards upward accountability is not necessarily any worse in PBR programmes than in other funding modalities. In fact, the feeling was that the scale of verification is likely to provide a more accurate picture of what is happening across programmes than the glossy, not necessarily representative, human interest pieces that often emerge from grant-funded programmes. Indeed, if verification were designed to include beneficiary feedback and this was linked to payment, PBR could actually generate more downward accountability than other funding modalities.

However, it is this link between beneficiary feedback and payment that is challenging. Data needs to be unambiguously verifiable, which limits the kinds of areas that can be explored. If payment is only made against very specific (technology-focused) outcomes, then there is a fair chance that some, if not all, of the qualitative and governance issues will be missed or under-emphasised. But it is difficult to come up with effective “soft” indicators that can be linked to payments, as these tend to be more subjective, with less certain targets, more easily affected by the facilitation and enumeration of the survey or measuring process, and more variable by context. So far, in the WASH Results Programme, this has generally limited beneficiary feedback within verification to confirming that something happened when the supplier reported it (e.g. that a toilet was built in September 2014), not whether the beneficiary liked the toilet or even wanted it.

Opportunities to include beneficiary feedback in data collection

There is scope for using approaches such as satisfaction surveys or including questions about beneficiary satisfaction in household surveys, where results are triangulated with other methods. In addition, a lot of work is currently being undertaken on scorecard and feedback approaches in a wide range of sectors, including the WASH sector. The use of any of these approaches in a PBR context would require both donor and supplier to be very clear on what is being paid for (e.g. infrastructure development, service provision or behaviour change) and what the triggers are for payment or non-payment.

Including more qualitative approaches in verification, such as focus group discussions or individuals’ stories, is also achievable. The challenge is ensuring they are representative of the programme as a whole, which requires them to be randomly sampled. This, of course, requires more resources.
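To make the representativeness point concrete, here is a minimal sketch (purely illustrative: the programme does not publish a sampling script, and all names and numbers here are hypothetical) of drawing a simple random sample of programme sites for qualitative verification visits. Using a fixed seed means the draw can be re-run and audited, so the selection is demonstrably not hand-picked:

```python
import random

def sample_sites(site_ids, sample_size, seed=42):
    """Return a reproducible simple random sample of site identifiers."""
    rng = random.Random(seed)  # fixed seed so the same draw can be re-run and audited
    if sample_size >= len(site_ids):
        return list(site_ids)
    return rng.sample(list(site_ids), sample_size)

# Hypothetical example: pick 5 of 200 sites for follow-up qualitative visits
sites = [f"site-{i:03d}" for i in range(200)]
selected = sample_sites(sites, 5)
print(selected)
```

The same idea extends to stratified sampling (for example, drawing separately by district) where site populations or contexts differ markedly, at the cost of a slightly larger sample.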

Managing the resource implications of beneficiary feedback

The question of resourcing is key – it is possible to verify almost anything with unlimited resources, but in the real world, different priorities need to be weighed up against each other. In practice, the demands of verification for large scale, representative, quantitative information on which to base payment decisions may leave less time, money and inclination to undertake more qualitative work with beneficiaries.

A resource-effective approach to upholding downwards accountability through verification would be to include payment for the existence and effective functioning of a beneficiary feedback system (rather than the results of that system). Payment would be made on verification of the effectiveness of the system in promoting downwards accountability.

This would only work in a systems-based approach to verification, such as that used in the WASH Results Programme, where verification is based on data generated by suppliers and the MV team assesses the strength of the systems that generate that data through ‘systems appraisals’. In this scenario, assessment of any beneficiary feedback system would be an extension of the systems appraisal currently undertaken by the MV team; payments could be linked to the results of that appraisal, which is not currently the case.

Finally, it is worth highlighting that the results-based reporting requirements on which the PBR system relies generate reports that are different from the more qualitative and narrative reports associated with other aid modalities, such as grants. If donors require human interest stories to communicate a programme’s results to the public, they will need to include this requirement within suppliers’ contracts and Terms of Reference.

In conclusion, our experience suggests that the PBR approach does not inherently promote upward accountability at the expense of downward accountability; it depends on how the contract is designed. We believe that including a requirement for beneficiary feedback as part of verification of results could help to promote downward accountability. We encourage donors to consider this when designing and negotiating PBR programmes.


* This blog post is based on a summary by Catherine Fisher of an online discussion held in June 2016 among members of the Monitoring and Verification team for the WASH Results Programme (Andy Robinson, Joe Gomme, Rachel Norman, Alison Barrett, Amy Weaving, Jennifer Williams and Don Brown).