Are randomized controlled trials bad for children?

There was a time when UNICEF was known in development circles as the agency that “does everything but knows nothing.” Indeed, UNICEF is known for getting things done for children through persuasive advocacy, a human rights approach, and its presence on the ground. Today UNICEF is increasingly committed to evidence-based programming, and researchers around the world are studying the effectiveness of UNICEF’s work.

In my role at UNICEF Innocenti, I frequently have discussions with UNICEF country staff who want to know how their programmes are working. A typical discussion with those working on violence against children, poverty reduction, emergency response, nutrition, and more starts with colleagues telling me: “We want to rigorously test how well our programme works, but we don’t want to do a randomized controlled trial (RCT).” For many in UNICEF, RCT is a bad word. It conjures ideas of cold-hearted researchers arbitrarily withholding programme benefits from some households and villages for the sole purpose of racking up academic publications in journals no one will read. This thinking assumes that the other options are equally good, so we can simply take those evil RCTs off the table and select from the other, “pro-children” evaluation methods.


Other evaluation methods can provide powerful evidence on programme impacts, and RCTs are not always needed. But before choosing a method, we first need to understand, in the words of Rachel Glennerster and Shawn Powers, “what are we judging RCTs against?” While RCTs get the most attention when the ethics of impact evaluation are discussed, all methods come with ethical implications.

© Lusajo Kajula. To make a random selection of RCT treatment villages for an ongoing social protection programme impact evaluation in Tanzania, the names of villages were literally drawn blindly from this hat.

Both experimental (RCT) and quasi-experimental methods try to get at the causal impacts of programmes and policies. They do so by constructing a “counterfactual,” the term researchers use to describe what would have happened to beneficiaries had they not received the programme (also referred to as the “treatment” or “intervention”). Since we haven’t yet invented a time machine in which we could first give a group of people a treatment, see what happens, and then go back and observe what would have happened without the treatment, we have to use other techniques to measure the counterfactual.
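To make the time-machine problem concrete, here is a minimal sketch in Python. Every number in it is invented for illustration, including the assumed “true effect” of +10; the point is simply that for each person we only ever observe one of their two potential outcomes, and the other one, the counterfactual, stays hidden.

```python
# Illustrative only: every number here is invented, including the +10 effect.
import random

random.seed(1)

people = []
for _ in range(5):
    without_t = random.gauss(50, 5)   # outcome if this person gets no programme
    with_t = without_t + 10           # outcome if they do (assumed effect: +10)
    people.append((with_t, without_t))

# In the real world each person lands in only one arm, so we see only one
# of their two potential outcomes; the other is the unobserved counterfactual.
for with_t, without_t in people:
    arm, observed = random.choice([("treated", with_t), ("untreated", without_t)])
    print(f"{arm:>9}: observed outcome = {observed:.1f}")
```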

RCTs do this by determining who gets the treatment and who doesn’t by chance, which ensures that, on average, there are no systematic differences between the groups. For example, those who get the treatment aren’t getting it because they are more motivated, better informed, live closer to health facilities, or belong to a privileged political group. Quasi-experimental methods use other techniques to construct a comparison group of people who did not receive the treatment. However, with those techniques we cannot be as certain that the estimated impacts are a result of the treatment and not of other factors.
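A toy simulation, again with invented numbers, shows why chance assignment works: even when a background trait such as motivation also drives the outcome, randomization spreads it evenly across the two groups, so a simple difference in mean outcomes lands close to the assumed true effect of +10.

```python
# Toy simulation with invented numbers; not real programme data.
import random

random.seed(42)
TRUE_EFFECT = 10

def mean(xs):
    return sum(xs) / len(xs)

# "Motivation" is a background trait that also raises the outcome.
villages = [{"motivation": random.gauss(0, 1)} for _ in range(10_000)]
for v in villages:
    v["treated"] = random.random() < 0.5  # assignment by chance alone
    v["outcome"] = 50 + 5 * v["motivation"] + (TRUE_EFFECT if v["treated"] else 0)

treated = [v for v in villages if v["treated"]]
control = [v for v in villages if not v["treated"]]

# Chance assignment balances motivation across arms (both means near zero)...
print(round(mean([v["motivation"] for v in treated]), 3),
      round(mean([v["motivation"] for v in control]), 3))
# ...so the simple difference in mean outcomes lands close to the true +10.
print(round(mean([v["outcome"] for v in treated]) -
            mean([v["outcome"] for v in control]), 2))
```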

Of all these methods (non-experimental/observational, quasi-experimental, and experimental/RCT), RCTs provide the most credible evidence on programme impacts; however, they are not always possible. In my work at UNICEF with the Transfer Project, we use both RCTs and quasi-experimental methods. But non-experimental and quasi-experimental methods come with limitations.

If we use a poor comparison group (or no comparison group at all), we could end up overestimating or underestimating treatment impacts—and we often don’t know with certainty which is the case.

A non-credible or non-rigorous evaluation is a problem because underestimating programme impacts might mean we conclude a programme or policy doesn’t work when it really does (with ethical implications). Funding might be withdrawn and an effective programme cut off. Or we might overestimate programme impacts and conclude that a programme is more successful than it really is (also with ethical implications). Resources might be allocated to this programme over another that actually works, or works better.
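Here is a hedged sketch of one way this happens. Suppose more motivated households enroll themselves in the programme, and motivation independently improves the outcome; comparing participants with non-participants then bundles the two together and overstates the effect. All numbers are hypothetical.

```python
# Hypothetical numbers throughout; the true effect is set to +10.
import random

random.seed(7)
TRUE_EFFECT = 10

def mean(xs):
    return sum(xs) / len(xs)

households = [{"motivation": random.gauss(0, 1)} for _ in range(10_000)]
for h in households:
    h["enrolled"] = h["motivation"] > 0  # self-selection, not chance
    h["outcome"] = 50 + 8 * h["motivation"] + (TRUE_EFFECT if h["enrolled"] else 0)

enrolled = [h["outcome"] for h in households if h["enrolled"]]
others = [h["outcome"] for h in households if not h["enrolled"]]

# The naive comparison mixes the programme effect with the motivation gap,
# so it comes out far above the true +10 (around +23 here).
print(round(mean(enrolled) - mean(others), 1))
```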

© UNICEF/UN068833/Karimi. Mobile health teams provide essential basic health services and collect household data in remote and isolated communities, with a special focus on maternal and neonatal care in Afghanistan.

So if RCTs produce the most solid evidence, why don’t we use them everywhere? There are several reasons. Sometimes you just can’t randomize who gets a programme for implementation-related reasons (for example, every village in the district benefits from the same road or improved water system). Sometimes you can randomize, but programme staff are reluctant to do so because of perceived ethical concerns. In the first scenario, we turn to quasi-experimental methods where possible. Now let’s break down some of the concerns in the second scenario.

All research methods (not just RCTs) have ethical considerations to be mindful of. These include, among others, informed consent for research, the principle of ‘do no harm’, referrals to additional services where needed, and review by national and international ethics review boards to ensure ethical guidelines are adhered to. However, one concern unique to RCTs is that benefits are purposefully given to one group and not to another. Implementers need to consider whether this is in fact ethical. In many cases it is.

For example, if roll-out of the programme can’t reach all intended beneficiaries at the same time (say there’s a phased roll-out due to budgetary or capacity constraints), then we can take advantage of the group experiencing delayed roll-out and use them as a control group, as sketched below. Further, if we don’t know whether a programme is effective, it’s not unethical to randomize some individuals to not receive that programme (in fact, receiving an ineffective programme may do more harm than good). Finally, we must also ask ourselves: Is it ethical to pour donor money into projects when we don’t know if they work? Is it ethical not to learn from the experience of beneficiaries about the impacts of a programme?
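The phased roll-out design can be as simple as the hat draw in the photo above. A small sketch, using placeholder village names, of how such a random phase-in assignment might be made:

```python
# Placeholder village names; a real assignment would use the actual list.
import random

random.seed(2024)

villages = [f"Village {letter}" for letter in "ABCDEFGH"]

hat = villages[:]
random.shuffle(hat)    # the "names drawn blindly from a hat" step

phase_1 = hat[:4]      # receives the programme now (treatment group)
phase_2 = hat[4:]      # delayed roll-out (serves as the control group for now)

print("Phase 1 (treatment):", ", ".join(sorted(phase_1)))
print("Phase 2 (control until roll-out):", ", ".join(sorted(phase_2)))
```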

RCTs can be a powerful tool for generating evidence to inform policies and programmes that improve the lives of children. As with any type of study, researchers must adhere to ethical research principles. However, when choosing the right methodology to evaluate a programme, it’s important to keep the ethical implications of each option in mind, along with a clear understanding of all the alternatives, including the option of never knowing what impact your programme is making.

Tia Palermo is Social Policy Specialist in the Social & Economic Policy Section at UNICEF Innocenti, where she conducts research on social protection programmes in Sub-Saharan Africa with the Transfer Project.


Comments:

  1. Thank you for raising very valid issues that support using randomized control trials for assessing effectiveness.

    I found that the article undermined itself because of a couple of quite inaccurate sentences: “if we don’t know whether a programme is effective, it’s not unethical to randomize some individuals to not receive that programme” and “Is it ethical not to learn from the experience of beneficiaries about the impacts of a programme?”

    Not knowing effectiveness does not mean it is ethical to randomize individuals in or out – one has to consider “expected” values and rights as well. And “learning” is about much more than RCTs, which can sometimes fail to capture rich qualitative data about real, long-lasting impacts.

    1. Thank you for your comment and for joining the discussion. I absolutely agree with you that learning is “not just about RCTs”, as I highlight in the blog with the statement that “other evaluation methods can provide powerful evidence on programme impacts, and RCTs are not always needed.” There are times when key informant interviews, qualitative methods, or process evaluations are the most appropriate way to answer the question at hand. However, when trying to understand the impacts and effectiveness of new policies and interventions, RCTs provide the most compelling information and should at a minimum be considered as an option (even if that option is subsequently discarded in favor of others).

    This blog was written in response to experiences I have had as a researcher talking through evaluation options with implementers, who are often unwilling even to consider RCTs as an option due to misperceptions. My aim in this blog was to counter some of those misperceptions, so that we can consider all the options and select the best one for the question at hand. Indeed, complementary methods such as qualitative studies can also be helpful in RCTs, and in the Transfer Project we often embed qualitative studies within quantitative RCTs to help triangulate information.

    On your point that my statements undermine the blog, I disagree. I do not suggest that not knowing whether a programme is effective is alone a necessary and sufficient condition for conducting an RCT. Rather, that statement should be interpreted in the context of my other statements in the blog about the ethical conduct of research and the ethical implications of whichever method is chosen. Often we focus on the ethical implications of RCTs, but we need to keep in mind that all methods have ethical implications.

    With respect to your comment on expected rights, I would encourage you to read this paper by Douglas MacKay: http://onlinelibrary.wiley.com/doi/10.1111/bioe.12403/full. In this paper, MacKay argues that governments have obligations to their citizens to realize certain outcomes, and that they have a duty to implement the best proven policies that are morally and practically attainable (BPA) and sustainable. He argues that when there is no BPA policy or existing policy, and we don’t have good evidence on some pilot policy, it should be fine to have a control arm receiving nothing, with no violation of the principle of policy equipoise.
