What can we learn from the Government Outcomes Lab?

Our Learning Advisor, Catherine Fisher, investigates a new research centre on outcomes-based commissioning and finds plenty of interest for the WASH Results Programme.

You know an idea has traction when the University of Oxford sets up a research centre to investigate it. This is the case for outcomes-based commissioning (aka Payment by Results), which is the focus of the new Government Outcomes Lab (GO Lab) based at the University of Oxford’s Blavatnik School of Government.

The GO Lab focuses on the interests and experience of government departments in procuring services in an outcomes-based way, rather than those of contractors (or suppliers, in WASH Results terminology) in providing them. It is a collaboration between the Blavatnik School of Government and Her Majesty’s Government. The centre’s research concentrates on outcomes-based commissioning that uses Social Impact Bonds (SIBs), a model in which external investors provide the initial investment for programme implementation and are repaid on achievement of outcomes. However, the GO Lab will also look at other models, presumably including the one used in the WASH Results Programme, in which the suppliers themselves provide the upfront investment.

The rationale for the GO Lab is as follows: “While numbers of, and funding for, outcomes-based approaches have increased over recent years, research has not kept pace with this speed of growth. Much is still unknown about whether outcomes-based commissioning will meet its promise… Through rigorous academic research, the GO Lab will deepen the understanding of outcomes-based commissioning and provide independent support, data and evidence on what works, and what doesn’t.” (GO Lab FAQ)

So far, the GO Lab has organised three “Better Commissioning” events looking at outcomes-based commissioning in different sectors in a UK context, namely Children’s Services, Older People’s Services and promotion of Healthy Lives.

A quick skim of the interesting post-event reports suggests that outcomes-based commissioning is seen as a way of promoting a greater focus on outcomes by providers (who may not already think in this way), of prompting innovation in service provision and of transferring the risk of new approaches from commissioners to socially-minded private enterprises. Similar themes occur in Results Based Aid discussions, although I’d suggest that the international development sector places a slightly greater emphasis on incentivising delivery, value for money and accountability to commissioners.

One aspect of the GO Lab work that caught my eye is their interest in the creation of Outcomes Frameworks, which were discussed at each event: “Developing an outcomes framework is a key part of any SIB or outcome based contract, but accessing data and articulating robust metrics that can be rigorously defined and measured is often seen as a challenge by commissioning authorities.” (Better Commissioning for Healthy Lives: a Summary Report, p. 13)

This process of articulating appropriate metrics and identifying indicators has been a key area of learning within the WASH Results Programme and continues to be discussed. It was reassuring to see others in different sectors grappling with similar challenges in creating appropriate indicators.

During this outcomes-focused period of the WASH Results Programme, we will be following the progress of the GO Lab with interest, and hope to find opportunities to exchange learning with them and with others researching innovative funding approaches. Our team is particularly interested in contributing to, and benefiting from, learning around:

  • independent monitoring and verification of outcomes-based contracts;
  • creating outcomes frameworks that reflect sustained outcomes in areas such as behaviour change (e.g. handwashing behaviours) and institutional change (e.g. ability of district stakeholders to manage water systems);
  • streamlining metrics and indicators while balancing the needs of all parties: beneficiaries, service providers, commissioners and, in the case of international development programmes, national stakeholders and global commitments such as the Sustainable Development Goals.

Catherine Fisher, Learning Advisor, WASH Results MVE Team

If you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.

Measuring WASH sustainability in a payment by results context

Andy Robinson, Lead Verifier for SNV, reports back from a WEDC Conference panel session organised by the three WASH Results suppliers.

The three suppliers in the DFID WASH Results programme (SAWRP consortium, SWIFT consortium and SNV) came together at a side event held during the WEDC Conference in Kumasi, Ghana (11-15 July 2016) to present their thoughts on “measuring WASH sustainability in a Payment by Results (PBR) context”.

As Lead Verifier on the SNV contract and a WEDC Conference participant, I was invited to join the panel with the three suppliers and make a short presentation on behalf of the e-Pact consortium – to explain e-Pact’s role in WASH Results and share some of the initial learning from the perspective of our Monitoring and Verification (MV) team.

Kevin Sansom (WEDC, SAWRP) began by outlining the key differences between PBR and grant programmes. He noted that PBR programmes require significant pre-finance and carry higher risks (particularly when tight timelines are applied), but allow greater flexibility and encourage more rigorous monitoring and evaluation (both internally, within the implementing agencies, and externally, by the verification and evaluation teams).

SAWRP presentation

Mimi Coultas (Plan UK) detailed the sustainability monitoring system adopted by the SAWRP consortium, explaining that some of the elements (sustainability assessment frameworks, outcome implementation manuals and the learning framework) are not linked to payments, but are designed to meet DFID’s requirement for reporting against five different dimensions of sustainability (functional, institutional, financial, environmental and equity).

Mimi noted that there was a lack of clarity at the outset around the criteria for payment (and the criteria for disallowance of payments), which caused some problems and could have been avoided by agreeing these details during a longer inception phase. She also suggested that the sampling approach used by the MV team has the potential “to scale mistakes”, because any poor results included in the sample are extrapolated and can suggest problems larger than they actually are. Another comment was that the commercial pressures on the suppliers, all of whom are interested in bidding for any follow-on programmes, might have reduced collaboration and sharing of lessons learned.
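To illustrate the “scaling mistakes” concern, here is a minimal sketch of how a failure rate observed in a verification sample might be extrapolated to a full claimed result, so that a handful of failed checks translates into a much larger deduction. The figures and the simple proportional rule are illustrative assumptions, not the MV team’s actual methodology.

```python
# Illustrative only: invented figures and a simple proportional rule,
# not the actual WASH Results verification methodology.

def extrapolated_result(claimed: int, sample_size: int, sample_failures: int) -> int:
    """Discount a claimed result by the failure rate observed in the sample."""
    failure_rate = sample_failures / sample_size
    return round(claimed * (1 - failure_rate))

claimed_beneficiaries = 50_000   # result reported by a supplier
sample_size = 200                # households checked by verifiers
sample_failures = 10             # checks that did not meet the agreed standard

verified = extrapolated_result(claimed_beneficiaries, sample_size, sample_failures)
print(f"Failure rate in sample: {sample_failures / sample_size:.1%}")
print(f"Payable result: {verified:,} of {claimed_beneficiaries:,} claimed")
# Ten failed checks (5% of the sample) reduce the payable result by 2,500 people,
# which is how a small number of mistakes can "scale" once extrapolated.
```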

Nonetheless, the SAWRP consortium felt that the programme had produced “amazing results”, with a high level of confidence in the quality and reliability of the results due to the strong scrutiny provided by the MV team. Mimi also noted that the monitoring and evaluation (M&E) focus required by the programme was a positive outcome, leading to a strengthening of M&E systems and the development of better ways of measuring WASH outcomes and sustainability. However, a longer programme duration would have been better, including an inception period during which the results framework and verification approaches could be carefully designed and negotiated.

SNV presentation

Anne Mutta (SNV) talked about the critical importance of political engagement to WASH sustainability, with governance activities integrated into the SNV programme from the start to address this requirement. Where local government capacity for sanitation and hygiene is low, sustainable results will obviously be harder to achieve. She also noted that practical sustainability problems arise, such as heavy rain and flooding (which can wash away sanitation facilities and constrain implementation) and changes in capacity, knowledge and commitment due to issues like government transfers or elections. Anne also agreed that the PBR programme required stronger progress monitoring, to track results and allow course corrections before the household survey results are verified.

SWIFT presentation

Rachel Stevens (Tearfund) explained that the SWIFT consortium is using household, water point and latrine surveys, as well as local government and local service provider data, to assess sustainability (with two sets of surveys planned – one in mid-2016 and the other at the end of 2017). The SWIFT sustainability assessments use a similar traffic light system to those described by the other two suppliers, reporting against DFID’s five dimensions of sustainability.
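As a rough illustration of what a traffic-light assessment against DFID’s five sustainability dimensions might look like in data terms, here is a hypothetical sketch; the scores, thresholds and ratings are invented for illustration and are not any supplier’s actual framework.

```python
# Hypothetical sketch of a traffic-light (RAG) rating against DFID's five
# sustainability dimensions. Scores and thresholds are invented for illustration.

DIMENSIONS = ["functional", "institutional", "financial", "environmental", "equity"]

def rag_rating(score: float) -> str:
    """Convert a 0-100 dimension score into a red/amber/green rating."""
    if score >= 75:
        return "green"
    if score >= 50:
        return "amber"
    return "red"

# Example scores for one project area (invented).
scores = {"functional": 82, "institutional": 64, "financial": 48,
          "environmental": 90, "equity": 71}

for dimension in DIMENSIONS:
    print(f"{dimension:<14} {scores[dimension]:>3}  {rag_rating(scores[dimension])}")
```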

Common challenges

The three suppliers had agreed on a list of common challenges, which were presented by Mimi Coultas (Plan UK). One of the most interesting of these was the risk that PBR encourages implementation in easier contexts – through the selection of less vulnerable and more accessible communities and project areas – in order to reduce both cost and risk.

The suppliers also questioned whether verification was appropriate for all aspects of sustainability, particularly the intangible and more qualitative factors (such as community empowerment), which are often important elements associated with the sustainability of sanitation and hygiene practices and outcomes.

Another potential issue relates to the reduced reporting burden: because evidence of results generally replaces the detailed progress reporting and evaluation required by conventional programmes, the lessons learned by the programmes may not be well captured or adequately documented.

Common opportunities

The suppliers agreed that, while some aspects of sustainability may be missed, the inclusion of payments for specific sustainability outcomes led to more attention to sustainability than in conventional programmes. Furthermore, the MV team’s work had encouraged greater transparency and accountability.

MV presentation

I made a short presentation on the role of the MV team and the key challenges and opportunities. After describing the composition of the e-Pact team, and introducing Bertha Darteh (Ghana country verifier for the SNV programme, who was in the audience), I explained that we were using “systems-based verification” rather than fully independent verification, which means that we rely on the data and reports produced by the suppliers’ M&E systems. As a result, we have to understand these systems well, and identify any weaknesses and any potential for errors, misreporting or gaming of results. DFID’s decision to adopt a systems-based verification approach was based on the assumption that it would be cheaper than statistically sampled independent surveys (across such a large population), but the MV experience suggests that there are many unforeseen costs (often to the suppliers) related to this systems-based approach.

Key verification challenges include the large number of closely spaced results, with little time between each verification cycle for the design, review and improvement of the verification process. The SNV programme includes nine country projects, with significant variations in context across the projects, which requires considerable flexibility in the verification system; whereas the other two suppliers’ programmes include multiple implementation partners, each of which has slightly different monitoring and reporting systems, and different priorities and targets, which in turn require adaptation of the verification systems.

I concurred that not enough time had been provided up front for the planning and design of the programme, including the MV framework and activities, which increased the pressure on all stakeholders during the first year of the programme, when suppliers were developing systems, implementing and reporting, with little time to respond to the additional demands of the verification process.

One positive outcome of the need for verified results has been the use of smartphone survey applications, which have greatly sped up and reduced the cost of the survey process; improved data processing and quality control; and made it much easier to verify large-scale results quickly. A key learning from the PBR programme is that these household surveys appear to be a far quicker and more effective way of evaluating programme outcomes than conventional evaluations.
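As an aside on what that quality control can look like in practice, here is a minimal, hypothetical sketch of automated checks applied to household survey records before they are accepted as evidence; the field names, rules and thresholds are my own assumptions, not the suppliers’ actual survey checks.

```python
# Hypothetical quality-control pass over household survey records collected on
# smartphones. Field names, rules and thresholds are illustrative assumptions,
# not the suppliers' actual survey checks.

from datetime import datetime

REQUIRED_FIELDS = {"household_id", "enumerator", "latrine_observed", "start", "end"}

def check_record(record: dict) -> list:
    """Return a list of quality-control flags for one survey record."""
    flags = []
    missing = REQUIRED_FIELDS - set(record)
    if missing:
        flags.append(f"missing fields: {sorted(missing)}")
    else:
        duration_min = (record["end"] - record["start"]).total_seconds() / 60
        if duration_min < 5:
            flags.append(f"implausibly short interview ({duration_min:.0f} min)")
    return flags

records = [
    {"household_id": "HH-001", "enumerator": "E12", "latrine_observed": True,
     "start": datetime(2016, 7, 1, 9, 0), "end": datetime(2016, 7, 1, 9, 25)},
    {"household_id": "HH-002", "enumerator": "E12", "latrine_observed": False,
     "start": datetime(2016, 7, 1, 9, 30), "end": datetime(2016, 7, 1, 9, 32)},
]

seen = set()
for record in records:
    flags = check_record(record)
    if record["household_id"] in seen:          # duplicate submissions are a common flag
        flags.append("duplicate household_id")
    seen.add(record["household_id"])
    print(record["household_id"], flags or "OK")
```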

Overall, the PBR approach appears to be improving M&E approaches and systems, encouraging more thinking about how to measure and evidence outcomes and sustainability, and providing reliable feedback on progress and performance at regular intervals during the life of the programme. This feedback enables regular improvements to be made to programme policy, planning and practice (unlike conventional programmes, which often are not rigorously evaluated until the end of the programme duration).

Questions from the floor

When the panel was asked whether the PBR approach encourages efficiency, the suppliers noted that both the programme and the approach encourage scale, which in turn encourages efficiency; however, the additional costs of verification and the related reporting were thought to partially offset the efficiency gains.

A similar question was asked about whether PBR encouraged value for money: the suppliers suggested that they are very confident of their results (compared to conventional programmes, which may over-report results), so the cost per result is clear. They also noted that there is an incentive to reduce costs, but that these reductions may not always be passed on (and, because there is no payment for over-achievement in this programme, any additional results appear to reduce the cost per outcome or result but do not change the suppliers’ fixed costs).
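To make the over-achievement point concrete, here is a small illustrative calculation (all figures invented): with a fixed payment for a fixed target and no payment for over-achievement, extra results lower the apparent cost per result without changing what the supplier receives or spends.

```python
# Illustrative arithmetic only: all figures are invented, not programme data.
payment_for_target = 1_000_000   # fixed payment agreed for hitting the target (GBP)
target_results = 100_000         # people the supplier is contracted to reach
actual_results = 115_000         # over-achievement: 15,000 extra people reached

print(f"Contracted cost per result: £{payment_for_target / target_results:.2f}")
print(f"Apparent cost per result:   £{payment_for_target / actual_results:.2f}")
# The payment stays at £1,000,000 however many extra people are reached, so the
# over-achievement lowers the cost per result on paper while the supplier's own
# (fixed) costs are unchanged.
```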

Several Ghanaian participants expressed confusion about the new terminology associated with PBR. Output-Based Aid (OBA) is common in Ghana, notably through a World Bank WASH programme (with payments linked to toilet construction), and it was suggested that there “was no need to introduce yet another acronym for the same thing”. Louise Medland (WEDC, SAWRP) responded that DFID differentiates between the two approaches because PBR focuses on outcomes, whereas OBA focuses on outputs.

The final question was about PBR’s effect on innovation: the suppliers noted that the design was supposed to encourage innovation, but that the time pressure of the short implementation period limited the scope for it. I added that we have seen different outcomes in different contexts – in low-capacity settings, programme management generally provides firm guidelines to the project team to minimise risk, whereas in high-capacity settings there was evidence of innovation driven by the need to achieve results, especially in more difficult contexts where standard approaches were not working.

The general tone of the PBR session was positive, with the suppliers agreeing that the PBR approach has led to reliable and large-scale results, and that the need to report and verify results has led to significant improvements in M&E systems. A lot of learning has taken place, and the suppliers hoped that this learning will inform the design of any future WASH PBR programmes.

Andy Robinson, Lead Verifier on the SNV Contract, WASH Results MVE Team

What have we learned about Payment by Results (PBR) programmes from verifying one?

After 19 verification rounds, the WASH Results Monitoring and Verification team shares its suggestions for how to design future PBR programmes.

Verification in action: MV team member Martha Keega assesses a latrine in South Sudan

Verification is at the heart of the WASH Results Programme. Suppliers only get paid if we, the Monitoring and Verification (MV) team, can independently verify the results they report. Usually we can: results are reported by Suppliers, verified by us, and Suppliers are paid by DFID to an agreed schedule. However, all Suppliers have received deductions at least once, which, although painful for everyone, is testament to the rigour of the verification process. Overall, the system is working and the results of the programme are clear. But the demands of verification are also undeniable, leading to some aspects of verification being labelled “Payment by Paperwork”, and, like any process, it could be improved.

In January 2016 the team* came together to reflect on what we have learned so far from conducting 19 rounds of verification across the three Suppliers. Our discussions focused on verification but inevitably considered wider issues around the design of a PBR programme. Here we share some suggestions for the design of future PBR programmes, from a verification perspective.

  1. Ensure targets and milestones reflect high level programme objectives
  2. Be clear on targets and assumptions about their measurement
  3. Think carefully about enabling alignment with local government and other WASH stakeholders
  4. Reconsider the 100% PBR mechanism to avoid verification inefficiencies
  5. Consider payments for over-achievement of outcomes, but not of outputs
  6. Include provision for a joint Supplier and Verifier inception phase that will streamline verification
  7. Consider pros and cons of relying more on Supplier-generated evidence as opposed to independent evidence generation

1. Ensure targets and milestones reflect high level programme objectives
The WASH Results Programme has ambitions with regard to equity, gender and disability, and overall health benefits, that are not universally built into the targets and payment milestones agreed between DFID and Suppliers. As a consequence, these ambitions are not explicitly incentivised. Any future programme should think carefully about how its design, especially the targets set in the tender and agreed with Suppliers, upholds objectives based on good practice within the sector.

2. Be clear on targets and assumptions about their measurement
We have found that when payment decisions are riding on whether targets have been met, the devil is in the detail. During implementation, some discrepancies have emerged over targets and how to achieve them. Discussions have taken place about minimum standards for latrines (the DFID or the JMP definition?) and hygiene targets (what does ‘reach’ mean?). In addition, there was occasionally a lack of clarity on how achievement of targets would be measured.

When working at scale, assumptions made about the average size of a household in a particular area, or about the best way of measuring the number of pupils in a school, become subject to intense scrutiny. This is quite a departure from how programmes with different funding mechanisms have worked in the past, and the level of detailed evidence required may come as a shock to Suppliers and Donors alike. In response, we suggest that future programmes should provide clear guidance on technical specifications relating to targets, and guidelines for evidencing achievements.
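To illustrate why such assumptions attract scrutiny at scale, the short example below (all numbers invented) shows how a small change in the assumed average household size shifts the number of beneficiaries claimed for the same count of latrines.

```python
# Invented figures: how the assumed average household size changes the number
# of beneficiaries claimed for the same count of latrines.
latrines_built = 40_000

for household_size in (4.5, 5.0, 5.5):
    beneficiaries = round(latrines_built * household_size)
    print(f"Assumed household size {household_size}: {beneficiaries:,} people reached")

# A difference of 0.5 people per household moves the claim by 20,000 people,
# which is why such assumptions need to be agreed and documented up front.
```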

3. Think carefully about enabling alignment with local government and other WASH Stakeholders
One concern that we discussed at the meeting was that the design of the WASH Results Programme does not sufficiently incentivise alignment with local government. We suspect that this was a result of the scale of the programme and the tight timelines, but also of the demands of verification. The need to generate verifiable results can disincentivise both the pursuit of “soft” outcomes, such as collaboration, and working with government monitoring systems.

We suggest that PBR programmes need to think carefully about how to incentivise the devolution of support services from programme teams to local governments and other sector stakeholders during the life of the programme, for example by linking payments to these activities. They should also consider how programme design could encourage the long-term strengthening of government monitoring systems.

4. Reconsider the 100% PBR mechanism to avoid verification inefficiencies
The merits or otherwise of the 100% PBR mechanism in the WASH Results Programme are subject to much discussion; we considered them from a verification perspective. We believe that, in response to the 100% PBR mechanism, some Suppliers included input- and process-related milestone targets to meet their internal cash flow requirements. In some cases, this led to verification processes that required high levels of effort (i.e. paperwork) with relatively few benefits.

We suggest that people designing future PBR programmes consider non-PBR upfront payments to Suppliers, to avoid the need to set early input and process milestones, and run a substantial inception phase that includes paid-for outputs for Suppliers and Verifiers. In the implementation phase of the WASH Results Programme, payment milestones have been mainly quarterly, requiring seemingly endless rounds of verification that put pressure on all involved, particularly Supplier programme staff. In response, we suggest that payments over the course of a programme should be less frequent (and so possibly larger), requiring fewer verification rounds and allowing greater space between them. This may have implications for the design of the PBR mechanism.

5. Consider payments for over-achievement of outcomes, but not of outputs
The WASH Results Programme does not include payment for over-achievement. Over the course of the programme, some Suppliers have argued that over-achievement should be rewarded, just as under-achievement is penalised. As Verifiers, we agree that paying for over-achievement of outcomes would be a positive change in a future PBR design. However, there were concerns among our team that encouraging over-achievement of outputs could have unintended consequences, such as inefficient investments or short-term efforts to achieve outputs without sufficient attention to sustainability and the quality of service delivery.

6. Include provision for a joint Supplier and Verifier inception phase that will streamline verification
It is broadly accepted that the WASH Results Programme would have benefited from a more substantial inception phase with the Verification Team in place at the start. Our recommendations about how an inception phase could help streamline and strengthen verification are as follows:

  • Key inception outputs should include a monitoring and results reporting framework agreed between the Supplier and the Verification Agent. Suppliers and Verifiers could be paid against these outputs to overcome cash flow issues.
  • The inception phase should include Verification Team visits to country programmes to establish an effective dialogue between the Verifiers and Suppliers early on.
  • If Suppliers evidence their achievements (as opposed to independent collection of evidence by the Verification Agent – see below), assessment of, and agreement on, adequate results reporting systems and processes needs to be included in the inception phase.
  • Run a ‘dry’ verification round at the beginning of the verification phase where payments are guaranteed to Suppliers irrespective of target achievement so that early verification issues can be sorted out without escalating stress levels.

7. Consider pros and cons of relying more on Supplier-generated evidence as opposed to independent evidence generation
In the WASH Results Programme, Suppliers provide evidence against target achievements, which is subsequently verified by the Verification Team (we will be producing a paper soon that outlines how this process works in more detail). Is this reliance on Supplier-generated evidence the best way forward? What are the pros and cons of this approach as compared with independent (verification-led) evidence generation?

Indications are that the PBR mechanism has improved Suppliers’ internal monitoring systems, and has shifted the internal programming focus from the finance to the monitoring and evaluation department. However, relying on Suppliers’ internal reporting systems has required some Suppliers to introduce substantial changes to existing reporting systems and the MV team has faced challenges in ensuring standards of evidence, particularly in relation to surveys.

We have some ideas about pros and cons of Supplier-generated evidence as opposed to evidence generated independently, but feel this can only be fully assessed in conversation with the Suppliers. We plan to have this conversation at a WASH Results Programme Supplier learning event in March. So, this is not so much a suggestion as a request to watch this space!

Coming up…

WASH Results Programme Learning Event: On 7 March 2016, Suppliers, the e-Pact Monitoring & Verification and Evaluation teams, and DFID will convene to compare and reflect on learning so far. Key discussions at the event will be shared through this blog.

Verification framework paper: an overview of how the verification process works in the WASH Results Programme. This will present a behind-the-scenes look at verification in practice and provide background for future lessons and reflections that we intend to share through our blog and other outputs.

* About the MV Team: In the WASH Results Programme, the monitoring, verification and evaluation functions are combined into one contract with e-Pact. In practice, the ongoing monitoring and verification of Suppliers’ results is conducted by one team (the MV team) and the evaluation of the programme by another.  The lessons here are based on the experience of the MV team although members of the Evaluation team were also present at the workshop. Read more about the WASH Results Programme.

As the MDGs become SDGs, what progress has WASH Results made?

What has the DFID WASH Results Programme achieved so far and what lies ahead for the programme in 2016?

January 1st, 2016 is an important date for the DFID WASH Results Programme. The Sustainable Development Goals (SDGs) take the place of the Millennium Development Goals (MDGs) and ‘WASH Results’ moves into the second half of its funding period. At this stage the focus for the programme’s Suppliers (SAWRP, SNV and SWIFT) will start shifting from getting things done, to keeping things going and ensuring the sustainability of WASH outcomes. For e-Pact too, as the independent verifiers, it’s farewell to Outputs and hello to Outcomes.

Once the goals to provide access have been reached, attention turns to sustainability.

Before we look ahead to next year, let’s take a look at what’s been achieved so far. According to the UN, 2.6 billion people gained access to improved drinking water sources between 1990 and 2015 – a key part of MDG 7. Worldwide, 2.1 billion people have gained access to improved sanitation but 2.4 billion are still using unimproved sanitation facilities, including nearly 1 billion people who are still defecating in the open. This July, DFID was able to report it had exceeded its own target of supporting 60 million people to access clean water, better sanitation or improved hygiene conditions.

What contribution can we attribute to the WASH Results Programme? The full report on the programme’s 2015 Annual Review is available on the UKAid Development Tracker website. Here are some of the highlights:

  • The reviewers gave the programme an ‘A’ overall (for the second year running) and considered it to be “on track” to meet its targets by the end of 2015.
  • By December 2014 the WASH Results Programme had reached 296,438 people with improved sanitation, 65,611 people with improved water supply, and over 1.25 million people with hygiene promotion.
  • The reviewers noted that “strong independent verification systems” have been established that also allow for adjustment and improvement based on learning from previous verification rounds.
  • WASH Results is generating significant policy knowledge around use of Payment by Results and programming for outcomes (sustainability) in the WASH sector.

We’ll come back to these results in early 2016 when the final numbers are in and compare them to the programme’s targets for December 2015 which are:

  • 968,505 people have access to clean drinking water;
  • 3,769,708 people have access to an improved sanitation facility;
  • 9,330,144 people reached through hygiene promotion activities with DFID support.

Large numbers, however impressive, don’t fully convey the effects that improvements in water, sanitation and hygiene are having on people’s lives. The Suppliers have been collecting stories of change from some of the people on the ground who are closely involved in delivering the WASH Results Programme or directly benefiting from its work. You can read about 70-year-old subsistence farmer Maria and mother-of-five Jacinta in a recent DFID blog post. The SWIFT consortium’s website is also packed with news and images from their involvement in the WASH Results Programme.

So what happens next?

From 2016 onwards, e-Pact starts answering a critical question for DFID: how many poor people continue to use improved water and sanitation facilities, and are practising improved hygiene, because of the WASH Results Programme? Right now we’re exploring how best to monitor, report on and verify these outcomes, and we look forward to sharing what we learn with you.

As always, if you have any ideas or observations about this topic, we encourage you to Leave A Reply (below), or email us.