Results-based management and strategic collection of results data

On the basis of the recently completed evaluation of results-based management, senior advisers Ida Lindkvist and Anette Wilhelmsen argue in this comment that the intended use of results data needs to be considered in order to improve the quality of results information.

Written by Ida Lindkvist and Anette Wilhelmsen, senior advisers in the Evaluation Department in Norad. 

A recent evaluation of the Norwegian aid administration’s practice of results-based management, initiated by the Evaluation Department in Norad, found that despite stringent results-reporting requirements for partners, the Norwegian aid administration (i.e. the Ministry of Foreign Affairs and Norad) did not systematically use these data for its own management and learning. Furthermore, staff were uncertain about the quality of the results data, and the data were not presented in a form that could easily inform the administration’s own decisions.

While the aid administration had undertaken several measures to improve the quality of results data, the working assumption appeared to be that quality had to be improved before use could be considered.

While we acknowledge the logic of this strategy, we argue in this comment that it is risky, because the planned use may affect both the type and quality of the data collected. Unless the use of results data is clear up front, data quality may never come up to par.

Many different types of use exist; here we provide a few examples of results data collected for two of them: learning and accountability.

Use of results data for learning

While projects typically are implemented by partners, the aid administration often manages portfolios and programmes, i.e. groups of projects sharing the same overall objectives and underlying programme logic.

Depending on who will use results data and at what level, different questions are likely to be asked. For example, partners may want to know whether individual projects are effective and efficient, while the aid administration may want to know whether the right types of interventions and partners are funded to achieve overall objectives.

To date, the aid administration has focused on collecting data at the project level and has struggled to compile and aggregate data from several projects into an indication of whether overall goals are achieved. This is challenging, and questions about effectiveness at the programme or portfolio level may be better answered by evaluations, either instead of or in addition to aggregated indicators.

Either way, if data are collected for learning purposes without it being clear who should use the data and at what level, the wrong type of information may be collected.

Partners may also invest less in quality-assuring these data. It is therefore important to be clear about who is to learn and at what level before deciding what results data to collect and how to collect and analyse them.

Use of results data for accountability purposes  

The aid administration is required to report to Parliament on the use of public funds. Depending on the sector, such reporting need not necessarily rely solely on detailed results information from partners covering the entire results chain. If evaluations and research have already established that a type of project is effective, contribution to overall objectives may be demonstrated by documenting activities and drawing on that evidence to make reasonable contribution claims.

If this is the case, it may not be necessary to ask partners to document effects. If, on the other hand, the aid administration wishes to hold partners accountable for results, this is likely to require a heavy investment in both monitoring and evaluation, with careful consideration of potential unintended effects.

Many development aid projects are implemented in settings where corruption is prevalent, and an important part of the aid administration’s work is to set reporting requirements as a means to control and prevent abuse. However, reporting and control to prevent abuse of funds may look very different from reporting to answer questions of effectiveness and efficiency.

In some instances, it may be sufficient to establish whether activities have been implemented; in others, more resource-intensive public expenditure tracking surveys may be needed. Again, it is important to know for what purpose the data are to be collected.

Concluding remarks

We have argued above that different types of use require very different results information, including evaluations. Depending on the intended use, results reporting may be very simple, or it may be more complicated and costly.

Deciding when to do what is important for keeping reporting costs at a manageable level and for ensuring that results information is of high quality and contributes to improved transparency and development outcomes.

Published 10.04.2018