Catherine Fisher reports back on a Bond Conference panel discussion about the origins and future of the ‘results agenda’.
Aid should generate results. On the face of it, an indisputably good idea, but the ‘results agenda’ is anything but uncontroversial and can spark “epic” debates. In the WASH Results Programme, this agenda is manifest in the funding relationship – Results Based Financing (RBF) – a form of Payment By Results (PBR) through which DFID makes payments to Suppliers contingent on the independent verification of results. As the Monitoring, Verification and Evaluation (MVE) providers for WASH Results, we’re keen to exchange the programme’s insights with others who have first-hand experience of the results agenda.
One such opportunity arose this week at the Bond Conference, in a session entitled ‘How to crack Results 2.0’ chaired by Michael O’Donnell, Head of Effectiveness and Learning at Bond and author of ‘Payment by Results: What it means for UK NGOs’. The session considered the origins and implications of the results agenda and looked ahead to the next version. Catherine Fisher, Learning Advisor for the WASH Results MVE team, reports on a lively discussion about how results agendas could be aligned with work on social transformation, enable learning and reflection within programmes, and provide value for money themselves.
Looking back to the origins of results approaches in DFID
Opening the session, Kevin Quinlan, Head of Finance, Performance and Impact at DFID, explained how, in 2010, DFID encountered two opposing forces: increased funding to meet the UK’s legal commitment to spending 0.7 percent of national income on Official Development Assistance, alongside the introduction of austerity measures that required cuts in, and increased scrutiny of, public spending. The results and transparency agendas were DFID’s response to those competing demands, and marked a shift towards delivering results now rather than strengthening systems to deliver results in future. This implied a corresponding shift towards talking about the results DFID would support, rather than the activities it would fund to achieve results in future. Six years on, DFID is reassessing its approach.
Can results approaches be reconciled with the “art of transformation”?
Earlier that day, Dr Danny Sriskandarajah, Secretary General of CIVICUS, told conference delegates that INGOs had become too focused on the “science of delivery” (which he described as the achievement of impact by any means) as opposed to the “art of transformation” – the work of bringing about social change. This theme re-emerged during the ‘Results 2.0’ discussion: how could the focus on hard results, embedded in results frameworks, be reconciled with the messy business of social transformation that is at the heart of struggles for equity and rights?
Jessica Horn, Director of Programmes at the African Women’s Development Fund, noted that results frameworks do not acknowledge power or monitor how it is transformed. Consequently she and her colleagues resort to what she called “feminist martial arts” – twisting and turning, blocking and jabbing to defend the transformative work they do from the “tyranny of results”. Often, Jessica argued, the politics of the process are as important as the politics of the outcome, and she asked: “how does the results framework capture that?” Yet as Irene Guijt, newly appointed Head of Research at Oxfam GB, argued, being forced to think about results even in the social transformation context helps to make things clearer. Between them, they had some suggestions about how it could be done.
Irene contended that there needed to be greater differentiation of what kind of data we need for different reasons, rather than a one-size-fits-all approach to accountability. She argued that “results” are too often about numbers and we need to bring humans back in and tell the story of change. Irene recommended using the tool SenseMaker to bring together multiple qualitative stories which, through their scale, become quantifiable. Jessica shared some frameworks for approaching monitoring and reporting on social transformation more systematically and in ways that consider power, such as Making the Case: the five social change shifts and the Gender at Work Framework.
Does focusing on monitoring results for accountability squeeze out reflection and learning?
This criticism is often levelled at results-based approaches and their associated heavy reporting requirements. Irene commented that “learning and data are mates but compete for space”. To align learning and reflection with results monitoring, she advised focusing on collective sense-making of reporting data, a process that enables evidence-based reflection and learning. She also suggested streamlining indicators, focusing on those with the most potential for learning – a point echoed by Kevin from DFID, who emphasised the need to select indicators that are most meaningful to the people implementing programmes (rather than to donors).
Do results agendas themselves demonstrate value for money?
This question resonated with the participants, triggering musings on the value of randomised controlled trials and the cost of management agents from the private sector. One point emerging from this discussion was that what is asked for in results monitoring is often difficult to achieve. Indeed, this has, at times, been the experience of the WASH Results Programme, particularly in fragile contexts (see, for example, the SWIFT Consortium’s report [PDF]). Both Irene and Jessica talked of the need to use a range of different tools for different purposes, and Irene made reference to her recent work on balancing feasibility, inclusiveness and rigour in impact assessments.
What is the trajectory for DFID and the results agenda?
Kevin Quinlan took this question head on, agreeing that this is something DFID needs to decide in the next few months. He suggested that some of the areas for discussion were:
- Getting to a more appropriate place on the spectrum between communication (to tax-payers) and better programme design; results are part of communicating to tax-payers but not the only part;
- Reducing standard indicators in favour of flexible local indicators; each project would need at least one standard indicator to allow aggregation but there should be more local indicators to enable learning;
- Alleviating the torture of results – “rightsizing” the reporting burden and reducing the transaction costs of results reporting; thinking about what results can do alongside other tools;
- Adopting a principles-based approach rather than a set of rules.
Meanwhile, the Evaluation Team for WASH Results is investigating some of the issues raised during the panel, such as the effect of results verification on Suppliers’ learning and reflection, and the value for money of the verification process itself.
So it sounds like there will be more interesting discussions about the results agenda in the near future and we look forward to contributing insights from WASH Results*. Whether Results 2.0 is on the horizon remains to be seen.
* Please email the MVE Team if you would like us to let you know when our evaluation findings are available.