Administrative Data: Missed opportunity for learning and research in humanitarian emergencies?

The use of administrative data for learning and research purposes in humanitarian emergencies is a relatively unexplored field. How can we make better use of these rich pools of data in humanitarian settings? And what are the potential pitfalls? It is a stylized fact that in more severe emergencies, more administrative data are available from donors, NGOs and authorities, especially when compared to the availability of standard survey data.

First a brief definition: While survey data are usually collected for research or M&E purposes, administrative data are typically collected for programming purposes as part of regular activities (e.g. recording children’s weight and height in routine health checks). Administrative data are collected by any institution involved in service delivery, be they government, development agencies or service providers.

Examples of administrative data collected in fragile, conflict or humanitarian settings include infant growth monitoring during mother-baby health clinics, family composition and employment status for safety net programs, school attendance and grade progression for education programs, or refugee status for civil registration programs.

We see several strengths of using administrative data for learning and research:

  • The main strength of administrative data is their immediate availability, at zero additional cost for analysts: administrative data already exist and can be used for research and learning. The potential of such data is even greater in humanitarian and fragile settings where collecting (research) data per se often presents a number of difficulties and there is pressure to act quickly.
  • While surveys may ask many questions, they typically sample only a fraction of all program beneficiaries. Administrative data, in contrast, cover fewer variables but do so for all program beneficiaries – and perhaps also for all applicants to a program, since applicants must be assessed for eligibility.
  • While survey data collection often requires an “extra” effort (and hence cost), administrative data collection is “built into” a program. Its collection is, of course, also costly but it is typically deemed a necessity and readily budgeted.

 

© UNICEF/UNI169484/Nesbitt. A health worker consults with Angelina Michael and her daughters, one of whom is suffering from severe acute malnutrition, in an emergency stabilization centre in South Sudan.

Several weaknesses affect the use of administrative data in fragile contexts:

  • In an emergency setting, development agencies at times collect data in the process of implementation without a clear learning purpose in mind. This translates into a weak data collection design and, ultimately, into poor data quality. At times, such weak administrative data then get recycled as a “baseline” months after the program has started, further weakening the learning opportunities.
  • It can be hard in practice to link administrative data about the same person or household from different sources, even within one agency. For example, different programmes may use different unique identifiers. In addition, privacy and security concerns may well prohibit such linking of administrative data across different datasets.
  • By design, administrative data do not typically include a control (i.e. non-beneficiary) group, as they are collected as part of program implementation to serve the beneficiaries. The lack of a control group may impede the design of an impact evaluation. Having said that, some programmes also collect data on all those applying for a specific service, thereby establishing data on rejected applicants or on members of a waiting list. Such data can then be used for impact evaluation designs based on a discontinuity design, which compares otherwise similar beneficiaries and non-beneficiaries.
  • Data quality can be a concern. Understanding how administrative data are collected might help to assess whether the data are accurate, reliable and valid. At the same time, realizing that administrative data matter for learning may also help elevate their role and standing, and may lead to more training being provided for their correct collection.
  • Sometimes data are not immediately available, as they are recorded with pen and paper and never transcribed into an electronic format. Very often these data relate to understudied themes such as nutritional outcomes. Adopting real-time monitoring using electronic devices and software may have clear benefits both for programming and for learning.
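The linkage problem raised above can be made concrete with a minimal sketch. It assumes two hypothetical registries (a nutrition registry and a school registry) that identify the same people with different IDs; all names, IDs and values are invented, and a real exercise would need fuzzy matching and a privacy review before any such join.

```python
# Two hypothetical registries for the same population, each with its own ID
# scheme. All records are invented for illustration.
nutrition_registry = [
    {"nut_id": "N-001", "name": "amina yusuf", "dob": "2015-03-02", "weight_kg": 11.4},
    {"nut_id": "N-002", "name": "john deng", "dob": "2014-07-19", "weight_kg": 13.1},
]
school_registry = [
    {"edu_id": "E-91", "name": "Amina Yusuf", "dob": "2015-03-02", "attendance": 0.82},
    {"edu_id": "E-92", "name": "Mary Akech", "dob": "2013-01-05", "attendance": 0.95},
]

def linkage_key(record):
    """Build a crude deterministic linkage key from fields both registries share."""
    return (record["name"].strip().lower(), record["dob"])

# Index one registry by the shared key, then probe it with the other.
school_index = {linkage_key(r): r for r in school_registry}
linked = []
for rec in nutrition_registry:
    match = school_index.get(linkage_key(rec))
    if match:
        linked.append({**rec, **match})  # merged record with both IDs

# Only one child appears in both registries, so one linked record results.
print(len(linked))              # prints 1
print(linked[0]["attendance"])  # prints 0.82
```

Without a shared unique identifier, the match rests entirely on noisy fields like name and date of birth, which is exactly why linking registries is hard in practice.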
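The discontinuity idea mentioned above can also be sketched briefly. The example assumes a hypothetical programme that admits households scoring below a poverty-index cutoff of 50 and compares mean outcomes just either side of it; all numbers are invented, and a real analysis would fit local regressions rather than simple means.

```python
# A minimal sharp regression-discontinuity sketch; values are invented.
cutoff = 50

applicants = [
    # (poverty_index, later outcome, e.g. food-consumption score)
    (44, 61.0), (46, 62.5), (48, 60.5), (49, 63.0),  # admitted (below cutoff)
    (51, 54.0), (52, 55.5), (54, 53.0), (56, 54.5),  # rejected (at/above cutoff)
]

bandwidth = 10  # compare only applicants close to the cutoff

below = [y for x, y in applicants if cutoff - bandwidth <= x < cutoff]
above = [y for x, y in applicants if cutoff <= x < cutoff + bandwidth]

# The jump in mean outcomes at the cutoff is a crude local estimate of the
# programme's effect, since applicants just either side are otherwise similar.
effect = sum(below) / len(below) - sum(above) / len(above)
print(round(effect, 2))  # prints 7.5
```

The key insight is that rejected applicants near the cutoff serve as the comparison group that administrative data otherwise lack.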

 

Nonetheless, we identified exciting opportunities to use administrative data for learning during a humanitarian emergency:

  • As administrative data are often routinely collected, there are very low marginal costs to adding a few variables that may serve the purpose of specific learning questions.
  • When precise location data are available, for example through GPS coordinates, it can be relatively easy to match administrative data with weather data or with data on shocks (such as conflict event data). This can be a very powerful avenue for the analysis of natural disasters and forced displacement.
  • Beneficiaries’ registries may reveal a high level of heterogeneity in programme implementation (dosage, frequency and modality, among others). This diversity of programme execution can be quite relevant both for a process evaluation and for an impact evaluation. Even the absence of a “pure” comparison group may be compatible with the latter analysis, as it allows impacts to be measured relative to a base group.
  • Census data are one possible source of information that has not been properly exploited for evidence generation in humanitarian emergencies. Understanding who was where before the onset of an emergency can reveal a lot about social structures and dynamics of relevance in the emergency and recovery phases.
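The GPS-matching opportunity above can be illustrated with a small sketch that attaches each beneficiary record to the nearest event in a hypothetical conflict-event dataset (in the spirit of sources such as ACLED); all coordinates and event names are invented for illustration.

```python
import math

# Hypothetical beneficiary records with GPS coordinates, and a hypothetical
# conflict-event dataset; all values are invented.
beneficiaries = [
    {"id": "B-1", "lat": 4.85, "lon": 31.60},
    {"id": "B-2", "lat": 9.55, "lon": 31.65},
]
conflict_events = [
    {"event": "clash-A", "lat": 4.90, "lon": 31.58},
    {"event": "clash-B", "lat": 9.50, "lon": 31.70},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Attach each beneficiary record to its nearest recorded conflict event.
for b in beneficiaries:
    nearest = min(
        conflict_events,
        key=lambda e: haversine_km(b["lat"], b["lon"], e["lat"], e["lon"]),
    )
    b["nearest_event"] = nearest["event"]
    b["distance_km"] = haversine_km(b["lat"], b["lon"], nearest["lat"], nearest["lon"])

print([(b["id"], b["nearest_event"]) for b in beneficiaries])
# prints [('B-1', 'clash-A'), ('B-2', 'clash-B')]
```

Once each record carries a nearest-event distance, exposure to conflict (or to a weather shock, with the same logic) becomes an analysable variable at essentially no extra data-collection cost.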

Finally, we highlight some threats to the use of registries for research purposes.

  • Setting up an information management system takes time and planning, and requires a definite skill set within the organization collecting the administrative data. Many such data are routinely collected by organizations, but standards vary and so does the quality of the data collected. Yet sub-standard administrative data are not just a threat to good learning – they are a threat to good implementation in the short term. Hence implementers have an incentive to collect good administrative data – perhaps more so than with research data.
  • Another threat is posed by concerns over privacy and security. These data, in their raw form, can be used to identify individuals and households. Abuses can arise, and become more pronounced when various datasets are combined and more information on the data subjects is retrieved. The role played by the dataset manager, as well as discussions with stakeholders, becomes fundamental to overcoming ethical concerns. As does the need for a clear data policy by implementing agencies, our next point.
  • Finally, we believe that the largest threat to the use of administrative data for learning and research in humanitarian settings is the absence, at many implementing agencies, of clear data policies on this topic. How do implementers choose to use administrative data for learning and research? There may be many “right” answers to this question, but we believe it is important that governments, agencies and NGOs are able to explain how they use administrative data to improve their service delivery and our understanding of the complex environments in which we work and live.

 

Administrative data can do much more than help deliver services or provide inputs for monitoring. We can also use administrative data for learning and research in humanitarian emergencies if agencies make their data available for analysis as part of an ethical, secure and deliberate strategy. The treasure chest is ready to be opened!

 

Suggested Further Reading:

“Can Rigorous Impact Evaluations Improve Humanitarian Assistance?”

“New Developments in Measuring the Welfare Effects of Conflict Exposure at the Micro-Level”

 

This blog was written by participants at the recent Workshop on Evidence on Social Protection in Humanitarian Situations, hosted by UNICEF Innocenti. For more information on UNICEF’s research on children in humanitarian settings, visit our dedicated Research Watch page.

 

Tilman Brück is a development economist at Leibniz Institute of Vegetable and Ornamental Crops and at the International Security and Development Center in Germany. He conducts micro-level research on how people cope with crises and emergencies. @tilmanbrueck

Elisabetta Aurino is a development economist at Imperial College London and research associate at Young Lives, University of Oxford, UK. Her research focuses on food insecurity, child and adolescent development and social protection in low- and middle-income countries. @elisabettaurino

Silvio Daidone is an applied economist in the Social Protection team at the Food and Agriculture Organization. His research focuses on social protection programs and rural development interventions. @DaidoneSilvio

Luisa Natali is a social policy consultant at the UNICEF Office of Research —Innocenti. She specializes in social policy and social protection, and in particular dynamics of child labor, education, and gender within evaluation of cash transfer programs in Sub-Saharan Africa. @luisanatali

Dr Benjamin Schwab is a development and health economist in the Department of Agricultural Economics at Kansas State University. He has collaborated on several large scale impact evaluations, and currently researches a variety of topics related to food security, agriculture and rural poverty in developing countries.
