
How “Practical” or “Useful” Are Rapid Evaluations, Really?

- Khotso Tsotsotso, Monitoring & Evaluation Technical Specialist

There is no doubt that evaluations can play an integral role in decision-making. On the one hand, when the information is accurate, detailed, and insightful, evaluations can inform effective programme decisions, such as whether to continue, scale up, discontinue, or even whether to intervene at all. On the other hand, it is well understood that producing reliable and insightful evaluations requires significant investment.

Besides the financial burden associated with collecting programme data, evaluations can demand high levels of technical skill and long periods of time. Yet there are situations when decisions have to be made under time constraints and with limited access to technical expertise. Indeed, it has been argued that high pressure to deliver development imperatives, limited technical skills, constrained access to financial resources, and weak information systems are common features across African public institutions. As such, producing good evaluation information timeously can be a challenge.

As a counter to expensive, complicated studies with long turnaround times, rapid evaluations have been strongly promoted as a more practical alternative. They are meant to strike a balance between maintaining high levels of investigative rigour and containing the cost and technical requirements of evaluation.

“The elephant in the room”, though, is the reminder that evaluations are a means to a greater end: an evaluation report is not, in itself, the end. In other words, the real judgement of rapid evaluations lies in whether they truly accommodate the key functions of a good evaluation. Praxis teaches us that the value of evaluation must be demonstrated by its role across the programme or policy cycle, and if we are going to rely on rapid evaluations to be in any way helpful in decision-making, I argue it is only fair to assess their worth against those same functions. To facilitate my argument, I present the following five key evaluation functions:

1) The Improvement function, referring to the enhancement of the efficiency and effectiveness of the chosen programme strategies and how they are implemented.

2) The Coordination function, meaning that the evaluation process assesses the roles of different collaborating actors, allowing partners to know what the others are doing and how this links with their respective roles.

3) The Accountability function, assessing whether identified outcomes are actually being reached, and how wide the deviation is.

4) The Celebration function, celebrating the achievements of the underlying programme or policy.

5) The Legitimation function, which refers to the idea that evaluation serves to provide persuasive and credible data to justify the underlying programme or intervention logic. This function is critical in theory-based evaluation, where it is the primary purpose of the exercise.

The common thread connecting and enabling the attainment of each of the five functions is the ability of rapid evaluations to adhere to basic quality elements such as reliability, credibility, accuracy and relevance. With that said, a key point of contention surrounding rapid evaluations is their ability to deliver high-quality information within a shorter period of time, and in a cost-effective manner. The question is whether the ideal that evaluations are meant to be a systematic determination of a subject’s merit, worth or significance (Canadian Evaluation Society, 2015) can sufficiently be met by rapid evaluations. Can the balance between quick turnaround and comprehensive evaluative information actually be achieved in practice? Can evaluators provide rigorously researched answers within a limited amount of time and budget, and if so, in which situations?

The idea of reducing evaluation turnaround time and budget, while at the same time ensuring high levels of rigour by using multiple methods to collect data from multiple sources, rests on very direct assumptions about capacity. First, it assumes the existence of adequate technical skills to manage such levels of triangulation. Second, it assumes the existence of reliable secondary data. Finally, it assumes a well-designed programme with an acceptable monitoring framework to facilitate the evaluation process.

It is a reality that, in Africa, we operate in a context where systems of evidence are not necessarily supportive of evaluations: we seldom have integrated systems with complete data, or logically designed social programmes with clear monitoring frameworks. In fact, it is almost always the case that evaluations have to include primary data collection and can hardly rely on monitoring data (if it exists at all). Evaluation budgets are also usually constrained, and our continued commitment of resources to capacity development is itself evidence that our institutions have glaring gaps in the technical capacity needed to take on evaluation projects.

Any approach to rapid evaluation we employ should therefore take cognisance of our reality and be responsive to the levels of evaluation capacity as we experience them. My position is that an appropriate guide to rapid evaluation should be based on well-researched elements of what is actually needed, not on a theoretical ideal that might not fit the context.
