Gender-based violence is notoriously under-reported, for understandable reasons. Experience of violence is highly stigmatized and victims are often shamed. Respondents may fear retaliation if perpetrators or others find out they have disclosed their experiences. There may also be cultural taboos around disclosing violence, which may be seen as a family issue. This is why most official statistics on gender-based violence are considered a lower bound on the true prevalence within a population.
In an earlier study we published using nationally representative data from 24 countries, we found that most women who experienced physical or sexual intimate partner violence (IPV) had never told anyone. Only a third of women disclosed to friends or family, and only 7 per cent formally reported to police, health facilities, social or legal services. Imagine attempting to improve IPV programming and policy while using data which represents only the tip of the iceberg.
Researchers have long invested in designing and implementing surveys to measure the incidence of violence while seeking both to maximize disclosure and to minimize harm. However, these careful fieldwork considerations come at a price. For example, survey instruments that ask behaviorally specific questions about IPV (commonly collected in the DHS and currently the gold standard) can be lengthy to complete. In addition, survey teams must undergo specialized ethics training, provide anonymous referrals to services, employ same-sex enumerators, and implement sampling methods that ensure women and perpetrators are not interviewed in the same household (or, at times, in the same communities). All of this comes at a cost, which may overburden multi-topic surveys that are not primarily focused on violence, or prove logistically infeasible for them.
What if there were a way to collect information about violence that reduces under-reporting, without directly asking about violence?
In a recent publication in Health Economics, we implemented a list randomization to assess the impact of an unconditional cash transfer program on IPV in rural Zambia. List randomization, also known as the list experiment, is not a new technique. Political scientists have used it for decades to examine public opinion on sensitive topics that respondents are likely to misreport (think racial prejudice, abortion opinions, support for insurgents in war). Economists have become increasingly interested in the method, and others have blogged about methodological considerations. In basic terms, list randomization aggregates the response to a sensitive question with responses to non-sensitive questions, thereby masking the respondent’s specific answer to the sensitive question. By randomizing lists with and without the sensitive question, researchers can identify the prevalence or incidence of the sensitive item within the population, or differences between groups (for example, treatment and control), but cannot attribute the sensitive response to any individual. If respondents believe their sensitive answer is not disclosed to the interviewer, they may be more likely to report private behaviors, such as experience of violence.
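The logic of the estimator can be illustrated with a small simulation. In this hypothetical sketch (the sample size, item probabilities and true prevalence are invented for illustration), half the sample receives a "short" list of four non-sensitive items and the other half receives the same list plus the sensitive item; the difference in mean item counts between the two arms estimates the prevalence of the sensitive item, even though no individual ever states their answer to it.

```python
import random

random.seed(0)

N = 100_000                             # respondents per arm (hypothetical)
P_NONSENSITIVE = [0.6, 0.3, 0.5, 0.2]   # hypothetical "yes" rates for innocuous items
P_SENSITIVE = 0.15                      # assumed true prevalence of the sensitive item

def count_items(include_sensitive):
    """Return one respondent's total count of 'yes' items (not which items)."""
    total = sum(random.random() < p for p in P_NONSENSITIVE)
    if include_sensitive:
        total += random.random() < P_SENSITIVE
    return total

control = [count_items(False) for _ in range(N)]   # short list
treated = [count_items(True) for _ in range(N)]    # long list (adds sensitive item)

# Difference in mean counts recovers the sensitive item's prevalence in
# expectation, without revealing any individual respondent's answer.
estimate = sum(treated) / N - sum(control) / N
print(f"estimated prevalence: {estimate:.3f}")     # should be close to P_SENSITIVE
```

The privacy protection and the estimator are two sides of the same design choice: because only totals are recorded, individual answers are masked, and only the randomized contrast between arms identifies the sensitive item.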
In our study, we asked one sensitive question of female primary caretakers of children under the age of five at the start of the study: “In the past 12 months, have you ever been slapped, punched, kicked or physically harmed by your partner?” This was combined with four non-sensitive questions with the same recall period, for example: “In the past 12 months, have you ever taken care of a sick relative who is unable to care for themselves?” Respondents were asked to report how many of the items they had experienced in total (but not which specific items). The data collection was part of the four-year follow-up of a longitudinal randomized controlled trial in which the beneficiary group received an unconditional child grant program (CGP) provided by the Government of Zambia. We were interested in two things. First, was it feasible to implement the list randomization in a large multi-topic survey: would enumerators collect the data correctly, and would the resulting prevalence of violence be credible? Second, since the CGP had been highly successful in meeting its main poverty-related objectives and had increased the financial standing of women, we wanted to know whether the cash transfer also affected IPV (a potential that has been demonstrated elsewhere).
What do we find? First, the list randomization appeared to function as planned, with no evidence of ‘too low’ or ‘too high’ reporting across groups of questions (referred to as floor and ceiling effects). We estimated that 15 per cent of women had been exposed to physical IPV in the past year. This is lower than the DHS estimate for Zambia in the same year (2014) of 21 per cent prevalence of past-year physical IPV. This is not surprising, as the DHS asks seven questions about specific violent acts, which are aggregated to produce that figure. Moreover, when we analyze the impacts of the program, we find that after four years the CGP resulted in no measurable increase or decrease in IPV. Despite these interesting findings, we are not able to assess the level of under-reporting for IPV specifically, for the same reason we could not implement a full IPV module: because the evaluation was not focused on violence or gender topics, there was no room for the complex ethical and logistical arrangements needed to collect direct IPV measures.
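The ceiling and floor diagnostic is simple to compute from the distribution of long-list counts. In this hypothetical sketch (the counts are invented), a respondent reporting the maximum of 5 items on a list of four non-sensitive items plus one sensitive item has revealed a ‘yes’ to the sensitive item, and a respondent reporting 0 has revealed a ‘no’; large shares at either extreme would signal ceiling or floor effects that undermine both privacy and the estimator.

```python
from collections import Counter

# Hypothetical item counts from the long-list arm of a list experiment
# with 4 non-sensitive items + 1 sensitive item (so the maximum is 5).
counts = [2, 1, 3, 2, 0, 4, 2, 1, 3, 2, 5, 1, 2, 3, 2, 1, 0, 2, 3, 2]

dist = Counter(counts)
n = len(counts)

# Respondents at the maximum reveal a "yes" to the sensitive item (ceiling);
# respondents at zero reveal a "no" (floor).
ceiling_share = dist[5] / n
floor_share = dist[0] / n
print(f"ceiling: {ceiling_share:.2f}, floor: {floor_share:.2f}")
```

In practice, non-sensitive items are chosen so that most respondents sit away from the extremes (some items with high, some with low expected ‘yes’ rates), precisely to keep these shares small.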
However, two other recent working papers have experimented with this approach and shed light on measurement bias.
The first paper, by Joseph and colleagues [Underreporting of Gender-based Violence in Kerala, India], examines two types of violence: domestic violence and physical harassment on buses. The authors measure both questions at the household level (e.g. at least one woman in my household has faced physical aggression from her husband; at least one woman/girl in my household has faced physical harassment while traveling on public/private buses). They find that under-reporting is over nine percentage points for IPV, but negligible for harassment. They also find that a number of sub-groups are more likely to under-report: urban households, poorer households and female respondents, for example, are more likely to under-report IPV. Although a useful demonstration of reporting bias, the analysis is complicated by the fact that the questions are asked at the household level, and thus are not cleanly comparable to the gold standard of women’s self-reports. However, it does suggest that respondents in general are more likely to disclose an incident of ‘public’ harassment than ‘private’ abuse, as hypothesized by the literature on under-reporting.
The second paper, by Agüero and Frisancho [Misreporting in Sensitive Health Behaviors], conducts a list randomization for physical and sexual IPV among female micro-credit clients in peri-urban and rural areas of Lima, Peru. A strength of this paper is that the authors conduct nine separate list experiments, one for each IPV item, allowing them to compare items separately (being pushed, being slapped, being threatened with a knife, gun or other weapon, etc.). Comparing women who were given the list experiment with those who were asked the direct question, they find no significant differences. However, digging deeper, they find differences by the woman’s level of education. In particular, women with completed tertiary education report higher levels of IPV under the list experiment than under direct questioning. The implication is that reporting bias may differ by women’s characteristics, thus changing our conclusions about who suffers from violence, or how interventions affect different sub-groups of women. In this case, the authors conjecture that more educated women may face larger costs (real or perceived, including higher stigma) of being exposed, and require higher levels of confidentiality to feel safe. However, there are no differential effects by other characteristics, including age, marital status, employment status or memory scores, suggesting that overall differences between the two methods are limited.
Taken together, these three papers build on efforts across disciplines to assess the usefulness and applicability of list randomization for collecting data on violence. In the best case, we may be able to assess under-reporting, as researchers have done with other methodologies, including self-administered surveys and qualitative methods. In other cases, we may be able to leverage the method as a ‘light touch’ way to monitor potential backlash, or increases or decreases in violence, in multi-topic and non-sectoral evaluations which would otherwise not collect violence information at all. We encourage further experimentation and creativity to deepen our understanding of how best to measure, respond to and program for the reduction of violence.
Amber Peterman and Tia Palermo are Social Policy Specialists at the UNICEF Office of Research—Innocenti working on health, gender and social protection evaluations under the Transfer Project, a multi-organization research and learning initiative evaluating Government cash transfers in sub-Saharan Africa.