Impact evaluations reap long-term benefits for children

We have an obligation to invest where it makes the most difference for children. But how do we decide what will reap the greatest benefits in the long term?

The dilemma of whether to invest in services that provide immediate benefits, or in evidence-generating initiatives with long-term payoffs, is a difficult one. The answer requires a careful analysis of the cost of not addressing immediate needs versus the potential future benefits of policy and budgetary change brought about by research and advocacy.

As countries climb up the GDP ladder, UNICEF’s assistance is less critical for basic service delivery. Increasingly, what decision makers from low- and middle-income countries seek is knowledge and evidence for the design of their own programmes and policies. Investment in sound data, research and evaluation is an essential component of guiding important decisions for years – and perhaps generations – to come.

The Impact Evaluation Series of methodological briefs and instructional videos, just released by the UNICEF Office of Research – Innocenti, is a contribution to building a “culture of research” at UNICEF and strengthening our capacity to provide evidence-based advice to partners. The series covers a range of impact evaluation designs and methods, including randomized controlled trials (RCTs), and discusses their pros and cons, ethical concerns and practical issues. It is primarily aimed at UNICEF programme staff but is also available to the public.

How often do we in UNICEF conduct rigorous impact evaluations and invest in evidence for the long term? What are some of the benefits when we do?

The Transfer Project is an example of how UNICEF’s investment in research contributes to evidence-based advice which motivates and empowers governments to effectively support children. This multi-country project runs experimental and quasi-experimental impact evaluations in sub-Saharan Africa, repeatedly providing evidence to governments about the positive effects of social cash transfers on children and their families. The methodological design of choice is the RCT, often considered the ‘gold standard’ of impact evaluation. It provides powerful answers to questions of causality by demonstrating that an impact is achieved as a result of a specific intervention (e.g. the cash transfer) and nothing else.
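The logic of an RCT can be made concrete with a minimal sketch: randomly assign eligible households to treatment or control, then compare mean outcomes. (This is an illustrative toy example with simulated data and made-up numbers, not the Transfer Project’s actual analysis, which is far more elaborate.)

```python
import random
import statistics

random.seed(42)  # reproducible assignment and simulated outcomes

# Hypothetical pool of eligible households
households = [f"hh_{i}" for i in range(200)]

# Random assignment: half receive the cash transfer (treatment),
# half serve as the comparison (control) group.
shuffled = random.sample(households, len(households))
treatment = set(shuffled[: len(shuffled) // 2])

# Simulated post-programme outcome (e.g. monthly food expenditure),
# with an assumed true transfer effect of +5 units.
def observe_outcome(hh):
    base = random.gauss(50, 10)
    return base + (5 if hh in treatment else 0)

outcomes = {hh: observe_outcome(hh) for hh in households}

# Because assignment was random, the two groups are comparable on
# average, so a simple difference in means estimates the impact.
treat_mean = statistics.mean(outcomes[hh] for hh in treatment)
control_mean = statistics.mean(
    outcomes[hh] for hh in households if hh not in treatment
)
impact = treat_mean - control_mean
print(f"Estimated impact: {impact:.1f}")
```

Random assignment is what licenses the causal claim: any systematic difference between the groups can be attributed to the transfer rather than to pre-existing differences.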

Video 1: Building blocks of impact evaluation

In Zambia, for example, an RCT conducted in three districts from 2010 to 2013 showed that the government’s cash transfer programme led to a wide range of health and nutrition benefits. It also contributed to an increase in productive activity and ownership of livestock. Encouraged by this evidence, the Government of Zambia boosted its budget allocation for the transfer programme from US$3.5 million in 2013 to US$30 million in 2014, with larger amounts expected in the coming years as the programme expands. The evaluation’s overall cost of US$5 million will ultimately leverage US$150 million for children over the next five years.

Similarly, in Kenya, the evaluation of the government’s cash transfer programme for orphans and vulnerable children showed impacts on consumption, diet diversity and secondary school enrolment. It was an important part of the dialogue on the scale-up of the programme, which now reaches over 160,000 families. The government’s own contribution to the programme increased from less than US$1 million in 2006 to US$12.5 million in 2013. The evaluation itself cost US$2 million and leveraged an increase of US$10 million from the treasury for the programme, securing benefits for children for years to come.
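The leverage in the Zambia and Kenya examples can be expressed as a simple ratio of funds mobilised to evaluation cost. A back-of-envelope calculation using only the figures cited above:

```python
def leverage_ratio(evaluation_cost_m, funds_leveraged_m):
    """Programme funds mobilised per dollar spent on evaluation (US$ millions)."""
    return funds_leveraged_m / evaluation_cost_m

# Zambia: US$5m evaluation, US$150m leveraged over five years
print(leverage_ratio(5, 150))   # 30.0 -> US$30 mobilised per evaluation dollar

# Kenya: US$2m evaluation, US$10m increase from the treasury
print(leverage_ratio(2, 10))    # 5.0 -> US$5 mobilised per evaluation dollar
```

These ratios are rough, since the leveraged amounts reflect many factors beyond the evaluations themselves, but they illustrate the scale of returns that rigorous evidence can help unlock.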

Video 4: Data collection and analysis

RCTs can be costly. They require a large sample size and cannot be undertaken retrospectively. The random assignment they rely on can sometimes be perceived as unethical or politically sensitive, and in such cases other options, such as quasi-experimental or non-experimental designs, need to be considered for evaluating impact.
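One common quasi-experimental alternative is difference-in-differences, which compares the change over time in a group that received the programme against the change in a comparison group, rather than relying on random assignment. A minimal sketch with hypothetical before/after figures (illustrative numbers only):

```python
def difference_in_differences(treat_before, treat_after, comp_before, comp_after):
    """Impact estimate: change in the treated group minus change in the
    comparison group, netting out trends that affect both groups."""
    return (treat_after - treat_before) - (comp_after - comp_before)

# Hypothetical secondary-school enrolment rates (%) before and after
# a transfer programme
impact = difference_in_differences(
    treat_before=60, treat_after=75,   # programme districts: +15 points
    comp_before=62, comp_after=70,     # comparison districts: +8 points
)
print(impact)  # 7 -> programme credited with a 7-point gain
```

The design rests on the assumption that, absent the programme, both groups would have followed parallel trends; the briefs discuss when such assumptions are plausible.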

The new Impact Evaluation Series briefs outline different options for conducting an impact evaluation, are written in accessible language, and use examples from UNICEF’s own work. They are accompanied by animated videos that are particularly useful for impact evaluation novices or for training purposes. The overarching aim of these tools is to contribute to building UNICEF’s capacity in research and evaluation, improving our ability to provide evidence-based, strategic guidance on children for the long term.

The materials were written by international evaluation experts from RMIT University, BetterEvaluation and the International Initiative for Impact Evaluation (3ie). An advisory board comprising UNICEF staff from the Evaluation Office, the Data & Analytics section, the Programme Division and many Country and Regional Offices guided the quality and relevance of the work. We hope that the materials will contribute to more UNICEF-supported, high-quality impact evaluations, and to more evidence-based decision-making by our partners.

Sudhanshu (Ashu) Handa is Chief of Social Policy & Economic Analysis, at UNICEF’s Office of Research – Innocenti and a Principal Investigator on the Transfer Project. Nikola Balvin is Knowledge Management Specialist at the Office of Research – Innocenti and was responsible for coordinating the Impact Evaluation Series.


  1. With the launch of the 2030 Agenda and its 17 Sustainable Development Goals (SDGs), the United Nations (UN) is called upon to build a ‘culture of research’, given that decision makers and donors seek evidence before investing in programmes and policies. Evaluators therefore have to systematically assess the outcomes of interventions against a set of standards. And, in my opinion, taking the values system into consideration is fundamental to improving humanitarian action in the world’s ongoing crises and engaging effectively with donors and partners.

    In evaluation, understanding the context is a must, as is attributing values, which tend to vary and, at times, even create conflict among stakeholders, especially when one set is imposed on another preferred by a marginalized group. Thus analysis is not enough; intuition, sensitivity and interaction are needed to understand the broader picture. The process of valuing is complex, emotional and variable; it is determining what is important. The difficulty is establishing what is good and making all stakeholders accept that it is a value, a priority. The main steps include: observation and sensitivity towards everybody’s feelings and emotions; then agreement on a specific set of values to be used as a standard for comparison; and finally the evaluation assessment.

    The evaluator has to assume a leadership role through creating appropriate situations for knowledge sharing and learning by doing but, at the same time, has to take into account some factors shaping the judgment, such as: personal values, experiences, expertise and commitment to scientific method. Evaluators have to preserve their professional reputation through being independent, relevant, helpful, and respectful. At the same time, for funders, programs are means for transforming the resources available in a community into something to foster or maintain, therefore evaluators have to include information on the types and amounts of resources used by programs as well as the possible multiple perspectives. Evaluators have to recognize the diverse interests of the stakeholders to provide credible useful information for effective management and oversight.

    Valuing depends on the context, which can be described as the sum of different dimensions: the evaluator, the stakeholders (the evaluated, the funders and others), the programme, and the world (which is affected by economic and socio-political externalities). These four dimensions vary over time both independently and in relation to each other. Therefore international and cross-cultural evaluations need deep sensitivity and intuition in order to properly value the context. A good evaluator should be able to set fear, resistance, anger, envy and other negative emotions aside while prioritizing values and later assessing the programme outcomes.

    David Hume famously said that ‘reason is and ought only to be the slave of the passions’, but I think that reason can also change the passions for the better if integrity, critical thinking, objectivity, sensitivity and compassion are exercised.

    Thank you in advance for keeping an eye on values while conducting evaluation.

    Warm regards, Laura Gagliardone