
The battle to return to evidence-based developmental practice and governance

- Dr Takunda J. Chirau (Senior M&E Technical Specialist), Ayabulela Dlakavu (M&E Technical Specialist) and Banele Masilela (Researcher)

Quality evaluations are paramount to evidence-based policy and decision making. Contemporary monitoring and evaluation debates among scholars contest what constitutes quality; we do not get bogged down in that debate here, but rather focus on how poor-quality evaluations can undermine evidence-based policy and decision making. There is not always a direct, positive relationship between the quality of an evaluation report and its utilisation in policy and decision making. The quality of evaluations is largely determined by the commissioning organisation's standards; moreover, quality begins at the programme design phase, with evaluability and the terms of reference. Countries such as South Africa have developed evaluation standards that seek to foster the quality of evaluations; however, such standards have become a mere tick-box exercise for compliance. In Kenya, the County Integrated Monitoring and Evaluation System provides for quality assurance through the County Monitoring and Evaluation Committee, which is responsible for formulating the county monitoring and evaluation policy and facilitating the allocation of resources for monitoring and evaluation, amongst other tasks. Commissioners of evaluations are involved at various stages of the evaluation process; nevertheless, some reports still end up being of poor quality regardless of the measures put in place.

We explored the quality of evaluations found in the African Evaluation Database, also known as AfrED. Even evaluation reports commissioned by the same donor are diverse in their curation, and the structure and presentation of reports differ across donors. The lessons emerging from the analysis are informed by the Quality Assessment Framework established by Spencer, Ritchie, Lewis and Dillon (2003), which aims to interrogate the quality of qualitative evaluation reports. We observed the following:

  • It is critical for evaluation reports to be verified against existing knowledge, yet the reports we reviewed are not well corroborated with the existing body of knowledge or literature. Supporting evaluation findings with literature increases the significance of the results by comparing them with what is already known.
  • Evaluation reports do not put sufficient weight on ethical considerations in the evaluation process. We speculate that there is insufficient understanding of the role of ethics in evaluation; moreover, evaluations are often allotted limited time, which does not allow for ethical clearance, hence the non-adherence to ethical processes.
  • Methodology and conceptual framework are equally critical. Arriving at credible conclusions requires methods and approaches to be explained and communicated clearly enough to allow replicability and to give readers adequate evidence that the interpretations are plausible.
  • The packaging of the report is not suitable for high-level political principals and technocrats to grapple with. Subject jargon is used excessively, to the extent that it distorts the message and impedes the use of the recommendations.

Although it is contested, good-quality evaluation reports should be grounded in the concepts of completeness, consistency and transparency in reporting (Weiss, 1998); the lack thereof may undermine the trustworthiness of evaluations. There are important questions the evaluation community should start thinking about: How do we move forward? How can we improve the quality of evaluations and their influence on use? How do we capacitate evaluators with reporting skills for different audiences?

These are our thoughts; what are yours?
