Recently, I’ve found myself immersed in a number of intellectually stimulating and ethically challenging discussions on the role of research in international development. Two instances stand out, because they vividly capture some of the dilemmas encountered in ethical research management and evidence-informed decision making.
The first took place when I visited Dakar to co-deliver a five-day ‘Introduction to Research Management and Methods’ workshop to UNICEF staff from East and West African country offices. The course provided an overview of UNICEF research policies and procedures, an introduction to research ethics, a refresher on research designs, methods and basic statistics, and best practices in research communication and uptake. The debates were lively and thought-provoking, and participants seemed to head home with a deeper understanding of the complexities associated with commissioning and conducting research for children in international development.
As we covered the strengths and shortcomings of different research designs and methods, one participant recounted that during the sessions, one by one, he had crossed off every item on his list of commonly used research methods. The more he learnt, the more complicated or limited they seemed. And although the comment was made with a smile, it captures how overwhelming – and sometimes even paralyzing – methodological decisions can be for research managers.
I wished I never attended your workshop (LOL), now I am aware of the limitations of our usual studies and at the same time sensitized to the importance of it!
– Participant in UNICEF Introduction to Research Management and Methods workshop, Dakar, Senegal.
After Senegal I traveled to London where I chaired a session at the What Works Global Summit. The conference brought together hundreds of researchers, policy makers, practitioners and donors to discuss issues such as which research methods are most suitable under what circumstances.
My panel was organized by Jo Puri and Deo-Gracias Houndolo from the International Initiative for Impact Evaluation (3ie). It examined the Minimum Detectable Effect (MDE) – a parameter used in power calculations to determine how big a sample needs to be to detect a meaningful difference between the treatment and control/comparison group, if it indeed exists.
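To make the link between the MDE and sample size concrete, here is a minimal sketch (my own illustration, not something presented at the panel) of the standard two-arm power calculation. It uses the textbook normal-approximation formula for a two-sided test, with the MDE expressed in standard-deviation units; the function name and defaults are assumptions for illustration only.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(mde_sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed per arm to detect a standardized MDE.

    Normal-approximation formula for a two-sided, two-sample test:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / MDE^2
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # guards against Type I error (false positive)
    z_beta = z(power)           # guards against Type II error (false negative)
    return ceil(2 * (z_alpha + z_beta) ** 2 / mde_sd ** 2)

# A "small" effect of 0.2 SD needs roughly six times the sample
# of a "medium" effect of 0.5 SD:
print(sample_size_per_arm(0.2))  # → 393 per arm
print(sample_size_per_arm(0.5))  # → 63 per arm
```

The arithmetic shows why the choice of MDE is so consequential: halving the expected effect roughly quadruples the required sample, so an over-optimistic MDE set without consulting those who know the context can leave an evaluation underpowered from the start.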
One of the tensions around MDE is that it is often decided by researchers with technical expertise, while policy makers and practitioners might feel that their contextual knowledge puts them in a better position to decide what effect an intervention can reasonably be expected to have within a given time frame.
Once again, this discussion was lively and engaging. Some researchers argued that determining MDE should be a consultative process that includes technical experts and decision makers. Others went further to propose that beneficiaries’ input on what change would be considered “significant” in their lives also needs to be included. On the one hand, the issue is who should determine what MDE can be reasonably expected from an intervention. On the other, it is about who should indicate if an effect is large enough to be considered significant – i.e. who decides what a minimum change should look like?
Other panelists stepped back from the technical discussion, arguing that evidence can only go so far: in some cases policy makers will make the decisions they want, regardless of the effect size of a carefully evaluated intervention.
When discussing the complexities of Type I and Type II error in hypothesis testing and the frequent pressure to find significant effects, a senior government decision maker had a “light-bulb” moment. He reflected on the ethics of making decisions based on evaluations that were conducted in haste.
Sometimes the pressure to evaluate means that evaluations take place within short time frames, with restricted budgets and less than optimal sample sizes. They may assess a programme which has not fully matured and, based on unrealistic assumptions around the MDE, lead to decisions to terminate the programme because of its seemingly modest – or non-significant – effects. In such instances, the decision to proceed with an evaluation may be unethical, because it sets the programme up to be judged a failure, when in fact the fault lies in the evaluation approach.
In international development, research should never be conducted purely for the sake of research, and like programming, its usability, effectiveness and the validity of its findings need to be critically scrutinized. The research management workshop in Dakar did not cover the ethics around MDE, but I can see the complexity of this issue disheartening my UNICEF colleague even further.
The Dakar and London examples highlight how the culture of “evidence-based” policy and programming brings new dilemmas to well-meaning stakeholders who wish to use evidence for positive outcomes, but might be put off by its complexity. There are no easy solutions for how to alleviate these anxieties, but dragging non-experts into complex academic debates around reliability and validity will only alienate them further.
Researchers need to become better at translating complex methods and results with user-friendly tools like info-graphics, briefs, videos and data visualization. They should also consult practitioners on the decisions they are best-placed to make, such as the effect that an intervention can reasonably be expected to have after a certain period. The complexity of research methods should not be a reason for a lack of dialogue between researchers and practitioners.
Nikola Balvin is a Knowledge Management Specialist at the Office of Research – Innocenti. Prior to that she was a Research Officer on UNICEF’s flagship publication ‘The State of the World’s Children’ at the New York headquarters. The Office of Research – Innocenti is UNICEF’s dedicated research centre investigating emerging and current priorities to shape policy and practice for children. Access the UNICEF Innocenti research catalogue at: unicef-irc.org/publications. Follow UNICEF Innocenti on Twitter @UNICEFInnocenti. Subscribe to UNICEF Innocenti emails here.
The author wishes to thank Deo-Gracias Houndolo from the International Initiative for Impact Evaluation for his comments on an earlier draft of this blog.