Evaluation

Building our understanding of what works and why.

Evaluating projects, programmes and policies is central to efforts to achieve SDG4. Evaluations can inform and improve ongoing delivery of specific projects and programmes and contribute to the wider evidence base on what works in improving learning outcomes – for whom, in which contexts and why.

Fundamentals of our Approach 

Our guiding principle is that evaluations must be designed to be fit for purpose. Our starting point for each evaluation assignment is to understand what our partners need to know. We are not tied to specific evaluation designs or methodologies, but rather draw on our toolkit of evaluation best practice and data collection and analysis approaches to select the best tools for the job.

Our evaluation work draws on our world-leading assessment expertise. We are able to design and adapt learning outcome measures to accurately and robustly measure impacts on learning outcomes in a way that is proportionate to the evaluation questions and limits burden on participants.

We understand the importance of context. We know that understanding the context in which projects, programmes and policies are being implemented is crucial to asking the right questions, designing the right evaluation, and understanding our evaluation findings. We bring a strong understanding of both education systems and classroom realities within lower- and middle-income countries and often collaborate with in-country partners.

We work in close collaboration with funders and implementers - not only to understand what they need to know, but to understand what they are trying to achieve and explore the logic underpinning their project or programme. We often use theory-based approaches to understand how programmes have been delivered and how and why they have - or have not - achieved intended impacts.

We also engage with wider stakeholders. We know that close engagement throughout evaluation design and implementation encourages stakeholders to engage with and act upon our evidence.

Finally, we believe we have a role to play in “joining up the dots” across the sector. We aim to build learning between and across individual evaluations to build a wider understanding of “what works and why” that is greater than the sum of its parts.

What we do

  • Experimental and quasi-experimental impact evaluations

  • Theory-based evaluations

  • Implementation and process evaluations

  • Pilot evaluations

  • Developing and adapting tools for measuring learning outcomes in evaluations

Explore our other Technical Areas

Learning Assessment

Research

Organisational Learning and Growth