Highlights of a chat-show discussion about the verification and evaluation of Payment by Results, held at the UK Evaluation Society Conference.
A session at the UK Evaluation Society conference in May 2015 compared two Payment by Results (PbR) programmes in different sectors. The discussion revealed that although PbR programmes can be set up in very different ways, those managing, evaluating and verifying them face similar tensions, including the need to balance cash flow with the focus on “real outcomes” that PbR is designed to encourage.
Using an informal chat show approach, the session explored practical experiences of monitoring and verification in two DFID-funded PbR programmes: the Water, Sanitation and Hygiene (WASH) Results Programme and the Girls’ Education Challenge (GEC). The chat show was also attended by one of the WASH Results Programme service providers, who offered a different perspective on the issues.
Chat show participants were:
Dr Katharina Welle, Deputy Team Leader, WASH Results Monitoring, Verification & Evaluation (MVE) Team, (Itad)
Dr Lucrezia Tincani, Manager, WASH Results MVE Team, (Oxford Policy Management)
Jason Calvert, Global Monitoring & Evaluation (M&E) Lead, Girls’ Education Challenge
The host was me, Catherine Fisher, Learning Adviser, WASH MVE Team, (Independent Consultant).
The discussion focused on the practicalities of implementing programmes financed through a Payment by Results funding mechanism (specifically Results Based Financing, a term used by DFID, among others, when the payments from funders or government go to service providers or suppliers). This blog post shares a few of the areas that were discussed.
There are huge variations in the set-up of PbR programmes
Perhaps unsurprisingly for two programmes of different sizes in different sectors, the WASH Results Programme (£68.7 million over approximately four years) and GEC (£330 million over four years) are managed, monitored and evaluated in very different ways.
GEC is managed by PricewaterhouseCoopers (PwC), who manage the funding to the 37 GEC projects, set suppliers’ targets and create M&E frameworks, while the suppliers themselves contract external evaluators to monitor and evaluate their progress. By contrast, the three WASH Results Programme supplier contracts are managed in-house by DFID, and the monitoring, verification and evaluation is contracted out to a consortium, e-Pact. There are no doubt pros and cons to each approach, which the chat show did not explore. But a key advantage of the GEC set-up (from the perspective of the e-Pact WASH MVE team) was the involvement of PwC from the start of the GEC programme: this meant PwC could shape the “rules of the game” for verification and set standardised targets up front.
Differences in amount of funding subject to PbR
Another key difference between the two programmes was the amount of supplier funding subject to PbR. For GEC suppliers it is 10% on average, whereas WASH Results Programme suppliers see 100% of their funding dependent on verified results. But this startling figure may mask some similarities…
GEC differentiates between payment against inputs (not called PbR) and payment against outcomes, the “real” PbR which constitutes 10% of the total funding. By contrast, the WASH programme has a broader definition of results that includes outputs, inputs and outcomes, and varies across suppliers. So in practice the programmes may be more similar than they appear.
Balancing cash flow and adaptive programming in a PbR context
An important driver for the broader interpretation of “results” within the WASH Results Programme was the very real need for some suppliers to maintain a steady cash flow across the course of the programme in a 100% PbR context. The temptation for suppliers can be to set many milestones that trigger payment. However, the need to stick to and demonstrate achievement of these milestones may inhibit flexibility in delivery. This tension between maintaining cash flow and the adaptive programming that PbR is intended to foster has been experienced by both programmes.
Rigour in measurement
The ability to measure results effectively is at the heart of PbR. For PwC, this means that every project subject to PbR is monitored through a Randomised Controlled Trial (RCT) or quasi-experimental equivalent that compares outcomes in the project site against a control group. One reason PwC insists on this for each project is to protect itself against the risk of being sued for incorrectly withholding payment.
Where an RCT is not possible, the project is removed from PbR; a number of the 37 GEC projects have been taken off PbR for this reason. In Afghanistan, for example, security risks for implementers and cultural considerations mean that control groups are not feasible.
The ability to measure results also depends on the existence of consensus and evidence about expected results and effective means of measuring them. Such consensus is stronger in the education sector than in the WASH sector, which makes setting targets and assessing progress towards them more difficult for a WASH programme, particularly for hygiene promotion activities such as those promoting handwashing.
As a result of these discussions, participants suggested the use of a spectrum (see Figure 1) that matches the proportion of programme funding dependent on PbR to how easy it is to measure results in that particular sector.
Does PbR generate perverse incentives?
One of the much-discussed downsides of PbR is that it can create perverse incentives that drive stakeholders to behave in ways contrary to those intended. Participants shared examples of the PbR mechanism inhibiting innovation and encouraging suppliers to focus on “low-hanging fruit” rather than on areas of greatest need. A review of PbR in Swiss cantons suggested it did not work at all in terms of generating efficiencies and effectiveness.
PbR is not a one-size-fits-all solution; there remains lots to learn about when and where it can work
There was consensus that PbR cannot be used effectively in all contexts. Fragile states, with their risks and uncertainties, are one setting in which PbR can be difficult (but not impossible?) to implement. Even in more stable contexts, issues around organisational capacity and motivation can inhibit its effective implementation.
Participants agreed that there are probably sets of circumstances in which PbR can be effective. People working in countries and sectors in which PbR has been used for many years have spent time identifying what those contexts are – see for example this paper about PbR in UK community work. For those of us who are new to the challenges of commissioning, designing, implementing, monitoring, evaluating or verifying PbR projects and are doing it in contexts in which it has not been tried, there is a lot to learn. This chat show illustrates that there is great value in coming together to do so.
What do you think?
What are you learning about managing, implementing or verifying Payment By Results programmes? Does this discussion reflect your experience? Would you like to learn with us? Please use the comment box to share your thoughts.
Catherine Fisher, Learning Adviser, WASH Results MVE Team.
See also: Payment-by-results: the elixir or the poison?, 6 January 2015, Jason Calvert