This is the third entry in our ‘Mind the Mode’ series on the mVAM blog. We are constantly assessing our data collection modalities to better understand what produces the most accurate results and what biases may be present. One of our recent experiments took us to Mali, where we compared food consumption scores collected through face-to-face (F2F) interviews with those collected through mVAM live calls.
It’s all in the details
To do this, the WFP team first conducted a baseline assessment in four regions of the country in February and March. As part of the baseline, we collected phone numbers from participants. Approximately 7-10 days later, we re-contacted the households that had phones, reaching roughly half of those interviewed face-to-face; the remaining households could not be reached. To ensure the validity of the results, we kept the questionnaire identical across the F2F and telephone interviews, since any differences in wording or in the way the questions were asked could adversely affect our analysis.
The findings from our analysis were quite interesting. We found that food consumption scores (FCS) collected via the mVAM survey tended to be slightly higher than those collected via the face-to-face survey. The graph below illustrates this shift towards higher scores between the two rounds. Higher FCS in mVAM surveys than in F2F surveys is not unique to Mali: we've observed similar outcomes in South Sudan and other countries where mVAM studies have taken place.
Why could this be? There are two main explanations for the difference. It might be due to the data collection modality (i.e., people report higher food consumption scores over the phone), or it might be the result of selection bias. Remember that we were only able to contact roughly half of the F2F participants by telephone. It is therefore possible that the people who answered the phone calls are less food insecure, which would make sense, since we often see that the poorest of the poor either don't own a phone or have limited economic means to charge their phone or purchase phone credit.
To test these hypotheses, we dug a bit deeper.
Are people telling the same story on the phone versus face-to-face? Based on our results, the answer is yes! If we compare the same pool of respondents who participated in both the F2F and telephone survey rounds, their food security indicators are more or less the same. For example, the mean mVAM FCS was 56.21 while the mean F2F FCS was 55.65, with no statistically significant difference between the two.
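For readers who want to run this kind of check themselves, here is a minimal sketch of the paired comparison in Python. It assumes a hypothetical pandas DataFrame with one row per household that completed both rounds and columns named fcs_f2f and fcs_mvam; the numbers are made up for illustration and are not our survey data.

```python
# Minimal sketch of a paired comparison of FCS across survey modes.
# Assumes a DataFrame with one row per household that answered both rounds.
import pandas as pd
from scipy import stats

# Illustrative values only; the real data come from the survey datasets.
df = pd.DataFrame({
    "fcs_f2f":  [52.0, 61.5, 48.0, 58.5, 63.0],
    "fcs_mvam": [53.5, 60.0, 49.5, 59.0, 64.5],
})

# Paired t-test: the same households answered both rounds,
# so the observations are matched rather than independent.
t_stat, p_value = stats.ttest_rel(df["fcs_mvam"], df["fcs_f2f"])

print(f"mean F2F FCS:  {df['fcs_f2f'].mean():.2f}")
print(f"mean mVAM FCS: {df['fcs_mvam'].mean():.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```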
So what about selection bias? In the F2F round, there are essentially three groups of people: 1) those who own phones and participated in both the F2F and mVAM survey; 2) people who own phones but didn’t participate in the mVAM survey, because they either didn’t answer the calls or their phone was off; and 3) people who do not own a phone and thus couldn’t participate in the mVAM survey.
People who replied to the mVAM survey have higher FCS overall than those we were unable to contact. What we learned from this experiment is that bias does not come only from households that do not own a phone, but also from non-respondents (households that shared their phone number and gave consent but were not reachable later for the phone interview). Possible reasons they were not reachable include less access to electricity to charge their phone or living in areas with poor network coverage. The graph below illustrates the distribution by respondent type and their respective FCS.
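A quick way to see this pattern in the data is to compare mean FCS across the three respondent types. The sketch below assumes a hypothetical baseline DataFrame with an fcs column and a group column; the values are invented purely to illustrate the calculation.

```python
# Rough sketch: mean FCS by respondent type in the baseline (F2F) sample.
# Group labels and values are hypothetical.
import pandas as pd

baseline = pd.DataFrame({
    "fcs":   [62, 58, 55, 49, 47, 51, 42, 40, 45],
    "group": ["responded", "responded", "responded",
              "not_reached", "not_reached", "not_reached",
              "no_phone", "no_phone", "no_phone"],
})

# Selection bias shows up as systematic differences between the groups.
print(baseline.groupby("group")["fcs"].agg(["mean", "count"]))
```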
When you compare the demographics of people in these three groups based on the data collected in the baseline, you can see that there are significant differences, as in the example below. Notice that the education levels of respondents vary across the three groups: those without a phone tend to be less educated than those who own a phone and participated in the mVAM survey.
This study taught us a valuable lesson. While we are confident that there is no statistically significant difference between face-to-face and phone responses within the Mali context, there is a selection bias in mVAM-collected data. By not including those without phones, as well as those who did not respond, we are missing an important (and likely poorer) subset of the population, meaning that the reported FCS is likely higher than it would be if these groups were included. One way to reduce this bias is to ensure that telephone operators make repeated attempts to contact each household over the course of several days. The team is also studying how to account for this bias in our data analyses.
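One approach we could take, sketched below purely as an illustration and not as the team's adopted method, is to re-weight phone respondents so that their profile on a characteristic such as education matches the baseline (F2F) sample. The column names and values are hypothetical.

```python
# Illustrative post-stratification weighting, assuming the baseline (F2F)
# sample represents the population of interest. Names and values are
# hypothetical, not the actual Mali survey data.
import pandas as pd

baseline = pd.DataFrame({
    "education": ["none", "none", "primary", "primary", "secondary", "none"],
})
phone = pd.DataFrame({
    "education": ["primary", "secondary", "secondary", "primary"],
    "fcs":       [58.0, 63.0, 61.0, 55.0],
})

# Weight per stratum = population share / phone-sample share.
pop_share    = baseline["education"].value_counts(normalize=True)
sample_share = phone["education"].value_counts(normalize=True)
phone["weight"] = phone["education"].map(pop_share / sample_share)

unweighted = phone["fcs"].mean()
weighted   = (phone["fcs"] * phone["weight"]).sum() / phone["weight"].sum()
print(f"unweighted mean FCS: {unweighted:.2f}")
print(f"weighted mean FCS:   {weighted:.2f}")

# Note: strata with no phone respondents at all (here, 'none') cannot be
# re-weighted, which is one reason the bias can never be fully removed.
```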