IPDET Nairobi 2025: A Milestone for Evaluation Capacity in Africa
- Talitha Hlaka
From November 3–7, 2025, the International Programme for Development Evaluation Training (IPDET) delivered its first-ever regional core course on African soil. Hosted in Nairobi through a collaboration between IPDET, the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA), and the Global Evaluation Initiative (GEI), the course brought together more than 90 participants from 28 countries. The initiative marks an important step toward more locally accessible, contextually grounded evaluation capacity building across the continent.
The opening ceremony set an inspiring tone. Dr. Bonface B. Makokha, Principal Secretary in Kenya’s National Treasury and State Department for Economic Planning, officially welcomed participants and affirmed the significance of hosting IPDET in Kenya. He noted that the program “marks a significant milestone in the country’s efforts to enhance monitoring and evaluation systems” and emphasized that such a program is “a first-of-its-kind in Kenya.” His remarks highlighted the government’s commitment to strengthening evaluation as a foundation for effective planning and evidence-driven development.
The core course was facilitated by Dr. Candice Morkel, Dr. Steven Masvaure, Dr. Taku Chirau, Mr. Siyabonga Sibiya, and Ms. Matodzi Amisi. Over five intensive days, they guided participants through the evaluation design process, evaluation theories and approaches, data collection and analysis methods, evidence use, and evaluation management. In her opening remarks, Dr. Candice Morkel stressed that “IPDET Nairobi is about more than transferring skills; it is about equipping evaluators to navigate complexity, influence decisions, and strengthen institutions across the continent.” CLEAR-AA’s director, Dr. Taku Chirau, described the Nairobi delivery as “a moment of ownership,” noting that Africa is no longer merely a recipient of global evaluation knowledge but an active shaper and host of it.
The cohort represented a wide range of institutions, including government ministries, NGOs, private enterprises, UN agencies, research institutes, development organizations, and academic institutions. Participants also brought varied evaluation responsibilities—from designing and conducting evaluations to supervising, managing, and using evaluation results in policy and programming. This diversity created a rich environment for shared learning and cross-country reflection.
Throughout the week, participants reflected on the transformative impact of the course. They emphasized how the training deepened their understanding of robust evaluation frameworks, strengthened their focus on accountability and learning cycles, and reinforced the importance of connecting evidence with planning and decision-making. Many highlighted the value of practical tools and examples that could be directly applied to real-world projects in their institutions.
As participant Daman Bogato noted, “This learning journey is giving me deeper insights into developing strong evaluation frameworks, strengthening accountability and learning cycle, connecting evidence with planning and decision-making, and applying practical research and evaluation approaches in real projects.”
A strong emphasis was placed on evaluation use. Participants noted how sessions on utilization helped them think more systematically about the factors that influence whether evaluation findings are actually used, and how to design processes that encourage uptake. They pointed to the importance of writing clear and concise executive summaries, formulating actionable recommendations, and actively engaging stakeholders so that they understand, own, and apply the findings.
As participant Ronald Kouago remarked, “The last module (utilization) is one of my preferred modules, as it aligns with my interest to support development managers in effectively using evidence for decision-making.”
The sense of connection extended beyond the classroom. Informal discussions, group exercises, and peer learning activities allowed participants to share country perspectives and common challenges. As participant Isiyaku Zainab observed, “Attending the IPDET Core Course in Nairobi was one of the most refreshing learning experiences I’ve had. The sessions strengthened my ability to design evaluations that inform real decisions, not just produce reports. Beyond the technical skills, the honest conversations, teamwork, and shared commitment to improving development programs made the week truly inspiring.”
The Nairobi edition also emphasized local ownership, accessibility, and sustainability. Participants appreciated that, while grounded in global evaluation standards, the course was delivered with African examples, experiences, and debates.
By the end of the week, it was clear that the impact of IPDET Nairobi would extend far beyond the training room. More than 90 professionals returned to their institutions equipped with new skills, networks, and perspectives, ready to strengthen evaluation practice in government, civil society, academia, and the private sector.
The success of the Nairobi delivery affirms the importance of regionalizing evaluation capacity building, and it marks the beginning of a new chapter for IPDET on the continent. With Kenya hosting this first edition and CLEAR-AA leading efforts to strengthen African evaluation systems, momentum for evidence-driven development in Africa is set to grow. Preparations for the next edition are progressing, and further details will be communicated once confirmed.
