The Quest for the Missing Counterfactual: Transfer Project Trains African Researchers in Impact Evaluation

How do we know if a programme made a difference?

The answer to this question is not as straightforward as it seems, because we never know what would have happened without the programme.

This concept is referred to as the ‘missing counterfactual’ (or simply ‘the counterfactual’ since, by definition, a counterfactual is missing). Impact evaluation is the science of estimating the missing counterfactual; getting it right is the necessary first step in any evidence-based approach to policy design.

The science of impact evaluation was the subject of a two-week technical training workshop organized jointly by the Transfer Project and the African Economic Research Consortium (AERC) in Nairobi, Kenya, from 24 June to 4 July 2019. The training attracted 24 participants (15 of them women) from 22 African countries, selected by the AERC from over 350 applicants. Participants were primarily university lecturers working in applied and agricultural economics. The workshop is part of a larger collaboration between the Transfer Project and AERC, funded by the Hewlett Foundation, focused on promoting evidence-informed decision-making in sub-Saharan Africa.

The workshop content was structured around this core question (“How do we know if a programme made a difference?”) and the related fact that, unless we can invent a machine to take us back in time, we can never observe the same people’s outcomes both with and without the programme. The job of the evaluator is to estimate the “missing counterfactual” using the tools of research design and statistics.
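A quick simulation can make this logic concrete. The sketch below is illustrative Python only (the population size, outcome distribution, and effect size are invented for the example, not taken from any workshop material): it builds a population where, unlike a real evaluator, we know both potential outcomes, then shows that after random assignment the comparison group’s mean serves as a valid estimate of the treated group’s missing counterfactual.

```python
import random
import statistics

random.seed(42)

# Simulated population where we (unlike a real evaluator) know BOTH
# potential outcomes: y0 without the programme, y1 with it.
N = 10_000
TRUE_EFFECT = 5.0  # hypothetical programme effect, chosen for illustration

y0 = [random.gauss(50, 10) for _ in range(N)]  # outcome without the programme
y1 = [y + TRUE_EFFECT for y in y0]             # outcome with the programme

# Randomly assign half the population to the programme.
is_treated = [False] * N
for i in random.sample(range(N), N // 2):
    is_treated[i] = True

# The evaluator observes only ONE outcome per person; the other
# is the missing counterfactual.
observed = [y1[i] if is_treated[i] else y0[i] for i in range(N)]

# Because assignment was random, the comparison group's mean is an
# unbiased estimate of what the treated group would have looked like
# without the programme.
treated_mean = statistics.mean(observed[i] for i in range(N) if is_treated[i])
control_mean = statistics.mean(observed[i] for i in range(N) if not is_treated[i])
estimated_effect = treated_mean - control_mean

print(round(estimated_effect, 2))  # close to TRUE_EFFECT
```

Re-running with a different seed changes the estimate only slightly, which is the whole point: randomization makes the difference in means a good stand-in for the unobservable with-versus-without comparison.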

‘Greatest Hits’

During the first week of training, participants were taught the ‘greatest hits’ of estimating a counterfactual, beginning with randomized designs and followed by non-randomized designs such as propensity score matching, regression discontinuity, instrumental variables, and natural experiments. They also learned how to add ‘design elements’ to strengthen their studies – for example through baselines or repeated follow-up observations – and to estimate sample sizes in both simple and complex sampling scenarios. Theoretical lectures were complemented with hands-on computer-based demonstrations and case studies from Transfer Project impact evaluations in Ghana, Malawi and Zambia.
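To illustrate why a ‘design element’ like a baseline strengthens a study, the sketch below (illustrative Python; the groups, effect size, and time trend are all invented assumptions) simulates two groups that differ before the programme starts. A naive follow-up comparison is badly biased by that baseline gap, while a difference-in-differences estimate nets out both the gap and a common time trend.

```python
import random
import statistics

random.seed(7)

# Hypothetical panel: two groups observed at baseline and follow-up.
# The groups differ at baseline (e.g. the programme targeted poorer areas).
N = 5_000
treat_base = [random.gauss(40, 8) for _ in range(N)]
ctrl_base = [random.gauss(50, 8) for _ in range(N)]

TRUE_EFFECT = 3.0  # illustrative programme effect
TREND = 2.0        # common time trend affecting everyone

treat_follow = [y + TREND + TRUE_EFFECT + random.gauss(0, 2) for y in treat_base]
ctrl_follow = [y + TREND + random.gauss(0, 2) for y in ctrl_base]

# Naive comparison at follow-up only: contaminated by the baseline gap,
# it even gets the sign of the effect wrong here.
naive = statistics.mean(treat_follow) - statistics.mean(ctrl_follow)

# Difference-in-differences: the change over time in the treated group,
# minus the change over time in the comparison group.
did = ((statistics.mean(treat_follow) - statistics.mean(treat_base))
       - (statistics.mean(ctrl_follow) - statistics.mean(ctrl_base)))

print(round(naive, 2))  # negative: misleading
print(round(did, 2))    # close to TRUE_EFFECT
```

The baseline observations are what make the second estimate possible; without them, the evaluator is stuck with the biased follow-up comparison.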

In the second week, participants were given a hypothetical “Request for Proposals” for an impact evaluation and worked in groups to develop a response using the tools they had learned throughout the course. Final presentations included proposal designs for a new impact evaluation of the Ghanaian LEAP cash transfer, an evaluation of the Kenya NICHE programme, and an evaluation of the Government of China/UNICEF joint pilot conditional cash transfer programme for child nutrition. These are all actual programmes undergoing or seeking to undergo an evaluation—implementers should feel free to contact the study teams for their full proposals!

It was clear the participants had absorbed the course content when, by the end of the second week, we noticed an increasing number of jokes and statements using evaluation language. Patrice from Congo suggested that we didn’t know if we could replicate the success of the course because he and the other participants were “self-selected”; Isabelle from Côte d’Ivoire suggested we use ‘exact matching’ to figure out who should stand next to whom for the group photo; and everyone supported the idea that the order of group presentations should be randomly assigned… If nothing else, at least we have increased the number of nerdy evaluation jokes likely to be cracked across sub-Saharan Africa.

In the closing session, Gustavo Angeles from the University of North Carolina at Chapel Hill told participants that the workshop itself was an intervention, and he was expecting a steep upward slope in the intervention group relative to the counterfactual. AERC Executive Director Dr. Njuguna Ndung’u encouraged participants to make the most of the training by using impact evaluation techniques to make a difference in their home countries.

While increasingly used to inform policymakers about the effects of policies, impact evaluation methods are not always included in undergraduate economics curricula, and even when they are, the treatment is largely theoretical and covers only a few methods. Growing demand for comprehensive courses on impact evaluation has led to a proliferation of intensive workshops worldwide. In developing countries, however, the supply of such courses has not yet met the demand. The Transfer Project is beginning to close this gap with its capacity-building work, and the large number of applications for this training shows a strong appetite in the region for more instruction on this technical topic.

 

Assess your own knowledge of impact evaluation techniques—take our ‘Who Wants to be a Millionaire?’ quiz and see whether you would benefit from this course!


Ashu Handa, University of North Carolina: Ashu is Lawrence I. Gilbert Distinguished Professor in UNC’s Department of Public Policy.

Elsa Valli, UNICEF Innocenti: Elsa is a Research Analyst with UNICEF’s Office of Research – Innocenti in Florence.

Gustavo Angeles, University of North Carolina: Gustavo is the Senior Evaluation Advisor for MEASURE Evaluation & Associate Professor at UNC.

All three authors contribute to the work of the Transfer Project research and learning collaborative.
