Over the last ten months we have been testing whether we can use mobile phones to collect nutrition indicators. One of these experiments involved using SMS to ask questions about women’s diet quality via the Minimum Dietary Diversity – Women (MDD-W) indicator. The MDD-W asks women of reproductive age (15-49 years) simple questions about which of ten defined food groups they consumed; a woman meets the indicator if she consumed at least five of them. We were interested in using SMS surveys to measure MDD-W because SMS offers an opportunity to collect data regularly, at scale and at low cost.
From October 2016 to April 2017, we worked with GeoPoll to conduct five survey rounds on MDD-W and find a way to adapt the indicator to SMS. We analysed data from each round, identified gaps and refined the survey instrument. We were able to collect data quickly and identify strengths and weaknesses to make revisions through an iterative process. Through this process, we believe that we have successfully designed an instrument that can be used to monitor MDD-W trends by SMS. Here’s a short summary of what we learned:
1. Using a mix of open-ended and list-based questions helped people better understand our questions.
By using a mix of open-ended and list-based questions, we were able to significantly improve data quality. In the first few rounds, we had an unusually high number of respondents who scored either “0” or “10” on the MDD-W score, both of which are unlikely under normal circumstances. A score of “0” means that the respondent did not consume food items from any of the 10 core food groups the previous day or night, while a score of “10” means that the respondent consumed food items from all food groups. In the first round, scores of “0” or “10” accounted for 29 percent of all respondents, but by Round 5 these scores represented only 3 percent of responses. It seems that having respondents reflect on what they ate in the open-ended questions we introduced in later rounds helps them recall the food items they consumed and answer the subsequent list-based questions more accurately.
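The scoring rule itself is simple enough to sketch in a few lines of Python. This is only an illustration of the indicator logic, not our actual survey pipeline, and the food-group labels below are our own shorthand for the ten FAO-defined MDD-W groups:

```python
# Illustrative MDD-W scoring from list-based yes/no (1/0) responses.
# Group labels are shorthand for the ten FAO-defined food groups.
FOOD_GROUPS = [
    "grains_roots_tubers",
    "pulses",
    "nuts_seeds",
    "dairy",
    "meat_poultry_fish",
    "eggs",
    "dark_green_leafy_vegetables",
    "vitamin_a_rich_fruits_vegetables",
    "other_vegetables",
    "other_fruits",
]

def mddw_score(responses: dict) -> int:
    """Count how many of the 10 food groups the respondent reported eating."""
    return sum(1 for group in FOOD_GROUPS if responses.get(group) == 1)

def meets_mddw(responses: dict) -> bool:
    """A woman meets MDD-W if she consumed at least 5 of the 10 groups."""
    return mddw_score(responses) >= 5

# Example respondent who reported the first six food groups:
example = {group: 0 for group in FOOD_GROUPS}
for group in FOOD_GROUPS[:6]:
    example[group] = 1
print(mddw_score(example), meets_mddw(example))  # 6 True
```

Scores of “0” and “10” sit at the extremes of this count, which is why an unusual concentration at either end is a useful red flag for comprehension problems.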
2. Keep questions simple.
We originally asked people by SMS whether they ate food items from the core food groups that comprise the MDD-W score. For example, “Yesterday, did you eat any Vitamin A-rich fruits and vegetables such as mangos, carrots, pumpkin, …” Perhaps respondents thought that they needed to consume food items from both the fruit and vegetable groups in order to reply “yes” to this question. So instead, we split that question into two separate questions (one on Vitamin A-rich fruits and the other on Vitamin A-rich vegetables) to make it easier for the respondent to answer. We did the same for some of the other questions and found a very low percentage of women scoring “0” or “10” on the MDD-W score. Of course there is a trade-off here, and splitting too many questions might lead to a long and unwieldy questionnaire that could frustrate respondents.
3. Let respondents take the survey in their preferred language.
Comprehension remains a challenge in automated surveys, so asking questions in respondents’ own language helps safeguard data quality and limit non-response. In the Malawi study, translating the food items into the local language (Chichewa), while keeping the rest of the questionnaire in English, improved comprehension. We recommend giving respondents the option to take the survey in their preferred language.
4. Pre-stratify and pre-target to ensure representativeness.
SMS surveys tend to be biased towards people who have mobile phones; we reach a lot of younger, urban men, and relatively few women of reproductive age, our target group for surveys on women’s diet. To ensure we are reaching them, an MDD-W SMS survey should be designed or ‘pre-stratified’ to include a diverse group of respondents. In Malawi, we were able to pre-stratify according to variables that included age, level of education, location and wealth. This allowed us to include women from all walks of life.
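Mechanically, pre-stratification amounts to setting quotas per stratum and filling each quota separately rather than sampling the frame as a whole. Here is a minimal sketch of that idea; the strata (age band by location) and quota sizes are illustrative assumptions, not the actual Malawi design:

```python
import random

# Illustrative quotas per (age_band, location) stratum -- not the real design.
QUOTAS = {
    ("15-24", "rural"): 3, ("15-24", "urban"): 2,
    ("25-49", "rural"): 3, ("25-49", "urban"): 2,
}

def draw_stratified(frame, quotas, seed=0):
    """Draw a fixed number of respondents from each stratum of the frame."""
    rng = random.Random(seed)
    sample = []
    for (age_band, location), n in quotas.items():
        pool = [r for r in frame
                if r["age_band"] == age_band and r["location"] == location]
        sample.extend(rng.sample(pool, min(n, len(pool))))
    return sample

# Synthetic respondent frame: 10 candidates per stratum.
frame = [{"id": f"{age}-{loc}-{i}", "age_band": age, "location": loc}
         for (age, loc) in QUOTAS for i in range(10)]
sample = draw_stratified(frame, QUOTAS)
print(len(sample))  # 10 respondents, spread across all four strata
```

The point is that each stratum is guaranteed its quota, so under-represented groups (such as rural women of reproductive age) cannot be crowded out by the younger, urban men who dominate the raw frame.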
5. Post-calibrate to produce estimates that are more comparable to face-to-face surveys.
The MDD-W SMS surveys we conducted produced higher point estimates than those we would expect in face-to-face surveys. This suggests we may wish to consider calibration to adjust for sampling bias, the likely cause of the discrepancy. In survey research, calibration means reweighting respondents so that the weighted sample matches known population distributions (for example, by age, education and location), bringing estimates closer to what a representative face-to-face survey would find. We’re still working on this and hope to find a solution soon. In the meantime, we think we are able to track trends in MDD-W by SMS with some reliability.
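The simplest form of this reweighting is post-stratification: each respondent gets a weight equal to their stratum’s population share divided by its share of the sample. The sketch below uses made-up strata, population shares and a toy four-person sample purely to show the mechanics:

```python
# Illustrative post-stratification: population shares and the toy sample
# are assumptions for demonstration, not figures from the Malawi study.
from collections import Counter

POPULATION_SHARES = {
    ("15-24", "urban"): 0.10, ("15-24", "rural"): 0.35,
    ("25-49", "urban"): 0.15, ("25-49", "rural"): 0.40,
}

def poststrat_weights(respondents):
    """Weight = stratum's population share / stratum's share of the sample."""
    counts = Counter((r["age_band"], r["location"]) for r in respondents)
    n = len(respondents)
    weights = []
    for r in respondents:
        stratum = (r["age_band"], r["location"])
        weights.append(POPULATION_SHARES[stratum] / (counts[stratum] / n))
    return weights

def weighted_prevalence(respondents, weights):
    """Weighted share of respondents meeting MDD-W."""
    met = sum(w for r, w in zip(respondents, weights) if r["meets_mddw"])
    return met / sum(weights)

# Toy sample skewed towards urban respondents, who here score higher:
sample = [
    {"age_band": "15-24", "location": "urban", "meets_mddw": True},
    {"age_band": "15-24", "location": "urban", "meets_mddw": True},
    {"age_band": "25-49", "location": "rural", "meets_mddw": True},
    {"age_band": "25-49", "location": "rural", "meets_mddw": False},
]
weights = poststrat_weights(sample)
# Unweighted prevalence is 0.75; down-weighting the over-sampled urban
# stratum pulls the weighted estimate to roughly 0.6.
```

In practice this requires reliable population totals for the calibration variables, which is part of what we are still working through.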
Also published on Medium.