Mind the Mode 2: Settling the (Food Consumption) Score in South Sudan


POC 3
Photo: UNMISS/Nektarios Markogiannis

For the second installment of our ‘Mind the Mode’ series, we’re taking you to Juba, South Sudan, where we recently conducted a mode experiment to see how food security indicators compare when data is collected face-to-face versus through operators over the phone.

South Sudan is a complex setting for mobile surveys to begin with. The country has low cell phone penetration, estimated at only 20%. Network quality is a problem: calls often don’t go through, or the audio is poor. Last but not least, the country has been extremely unstable. While we have been using key informant phone interviews to date, we are investigating the feasibility of conducting phone surveys to collect household food security indicators. Given these complexities, starting with a test to evaluate mode-related biases seemed prudent.

Methodology

The mode experiment took place in “POC 3”, a Protection of Civilians (POC) camp in Juba near the main UN compound. POC 3 is the largest of three camps at the UN House site in Juba, with an estimated population of 20,000 people, according to the International Organization for Migration. People in the POC are there in search of protection against the violence and conflict that South Sudan has been experiencing. We’re hoping to use mobile phones to monitor food security indicators in POC communities. POC 3 happens to have good cell phone coverage – a 2014 survey estimated that some 70% of households in the camp had access to a phone.  

 

Photo: WFP/Silvia Passeri

We evaluated how mode affects the Food Consumption Score (FCS), a commonly used proxy for household food security that measures how frequently a household consumed different food groups during the 7 days before the survey. A higher score indicates better household food security.
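For readers unfamiliar with the indicator, here is a minimal sketch of how an FCS is typically calculated, using the standard WFP food-group weights; the function and variable names are ours, for illustration only:

```python
# Standard WFP FCS food-group weights
FCS_WEIGHTS = {
    "staples": 2.0,     # cereals, grains, roots and tubers
    "pulses": 3.0,      # beans, peas, nuts
    "vegetables": 1.0,
    "fruit": 1.0,
    "meat_fish": 4.0,   # meat, fish, eggs
    "milk": 4.0,        # milk and dairy products
    "sugar": 0.5,
    "oil": 0.5,         # oils and fats
}

def food_consumption_score(days_consumed: dict) -> float:
    """days_consumed maps each food group to the number of days (0-7)
    the household ate it during the past week."""
    return sum(
        min(days_consumed.get(group, 0), 7) * weight
        for group, weight in FCS_WEIGHTS.items()
    )

# Example: staples and oil every day, pulses twice a week
print(food_consumption_score({"staples": 7, "oil": 7, "pulses": 2}))  # 23.5
```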

We carried out two rounds of data collection: round 1 in March and round 2 in May 2016. In round 1, half of the respondents received a voice call survey and the other half participated in an identical interview face-to-face. The ‘treatment’ (voice call) was assigned at random. In round 2, some of the respondents who had received a voice call took the exact same survey face-to-face, and vice versa.

There were security-related challenges in the POC, and some of the respondents from March could not be found in the camp when we conducted the second round in May. As a result, we had 132 voice and 333 face-to-face interviews in round 1, but 138 voice and only 117 face-to-face surveys in round 2. This sample is smaller than we would have liked, but we think it’s indicative enough to tell us how responding to a phone survey differs from responding face-to-face.

Calls were placed by operators who were ‘converted’ enumerators – field monitors who usually carry out WFP’s post-distribution monitoring but were new to phone-based surveys. This meant that they were already familiar with the food security indicators and the camp community, but needed training on the protocol for phone-based surveys.

Results

We observed substantial mode effects in round 1. We obtained a mean FCS of 34 via face-to-face surveys, but a much higher mean of 45 through voice calls. Our regression analysis shows that mode alone accounted for 7 points of the difference (p<0.01), with other factors accounting for the remainder. In other words, a voice survey would inflate the FCS by roughly 20%, leading to a gross underestimation of the severity of food insecurity in the population of interest. During round 1, the voice FCS questions behaved almost like binary variables – we would get 1s and 7s, but very few 2, 3, 4 or 5 answers. That means many people said they ate a given food item one day or every day, but few intermediate answers were recorded.
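To make the analysis concrete, here is a hedged sketch of the kind of regression that produces such an estimate: an OLS of the FCS on a survey-mode dummy plus household covariates. The data below is synthetic and the covariate names are illustrative – this is not the actual round-1 dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the round-1 data (132 voice + 333 face-to-face)
rng = np.random.default_rng(0)
n = 465
survey_df = pd.DataFrame({
    "voice_mode": rng.integers(0, 2, n),      # 1 = voice call, 0 = face-to-face
    "household_size": rng.integers(1, 10, n),
    "female_head": rng.integers(0, 2, n),
})
# Build in a ~7-point mode effect so the sketch mirrors the finding
survey_df["fcs"] = 34 + 7 * survey_df["voice_mode"] + rng.normal(0, 10, n)

# OLS of the FCS on survey mode plus household covariates; the coefficient
# on voice_mode isolates the mode effect from the other factors
model = smf.ols("fcs ~ voice_mode + household_size + female_head",
                data=survey_df).fit()
print(model.params["voice_mode"], model.pvalues["voice_mode"])
```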

FCS results, round 1

In round 2, the difference between voice calls and face-to-face surveys diminished substantially and was no longer statistically significant. In fact, the slight remaining difference between the two groups was due to the respondent households’ socio-economic profiles, not to the mode we used to collect data.

 

FCS results, round 2

Lessons learned

For the Food Consumption Score, the mode-related differences between voice and face-to-face surveys were large in round 1, but vanished in round 2. This is a positive finding for us, as we are seeking to rigorously test and validate the data collected through mobile channels and to report on the results with some degree of confidence. We want to highlight a few lessons here that could help guide others in the right direction.

Lesson 1: Practice makes perfect. We suspect that the poor quality of the data collected in round 1 was due to our call center being brand new and experiencing ‘teething’ problems. When an in-house call center is first set up, it tends to be small-scale, comprising one or two operators. Resources permitting (and provided information needs grow), the call center may be expanded with additional operators who receive regular training and coaching. Our analysts have noted anecdotally that data quality improves as time goes by and the system becomes more established. We have a good illustration of that phenomenon here in South Sudan.

Lesson 2: Close supervision is required! Although our operators were familiar with data collection, it took time to train them to conduct quality surveys by phone. This again shows that operator selection, training, and supervision are key to obtaining good-quality data.

Lesson 3: Work with professional call centers. Overall, this experience encourages us to continue working with professional call centers when possible, and to resist the urge to do things in-house in a hurry – all too easy to give in to in an emergency setting.

We also think the method used in South Sudan could be applied elsewhere to help evaluate mode effects. We will post the survey design on the mVAM Resource Center for others to use.

How many pizzas does it take to build a chatbot?


Hackers are hungry
Photo: WFP/Pia Facultad

This week, government, business, academia and civil society leaders will gather at Davos to discuss solutions to the world’s biggest challenges – including how new technologies can be leveraged to solve some of the most serious problems we face. At mVAM, we continue to explore how some of these technologies could be used to help eliminate chronic hunger, malnutrition and food insecurity – most recently looking at how chatbots could help collect important information during a humanitarian response.

Last week, our collaborators at Nielsen – one of the early supporters of mVAM – organized a 24-hour hackathon at the Nielsen Tech Hub in New York City. As part of ongoing efforts through Nielsen Cares, the hackathon aimed to develop an open-source humanitarian chatbot that can collect real-time information about food security. This came at the right time for WFP – we’d developed and tested a prototype of the chatbot with InSTEDD, and Nielsen’s technology and development input helped bring in important new capabilities. Ultimately, our goal is to field-test a chatbot in Haiti in the next few months to help us track food security conditions as people recover from the impacts of Hurricane Matthew.

The event was open to the public. A diverse group of students, volunteer hackers, and Nielsen staff showed up to take on the challenge, despite the wintry weather. InSTEDD’s Director of Platform Engineering, Nicolás di Tada, also participated.

Much more than a chatbot

What the hackers built is much more than a chatbot: it is a bona fide chat-based data collection and reporting system. Rather than attempting to outdo each other (as is the case in most hackathons), the participants split into teams to build the different components of the system. The teams, made up of perfect strangers, communicated during the hackathon through Slack. After 24 hours, most components were fully coded, but there were still bugs in the orchestrator and the gateway that additional post-hackathon work will resolve.

The architecture of the system, as defined by Nielsen, includes:

  • a management interface that allows an analyst to set up a questionnaire, including skip logic and validation rules that prompt the user when they enter a wrong answer. The interface was built using the Angular 2 JavaScript framework;
  • a gateway that can interact with respondents through Facebook Messenger and, potentially, other chat applications. The Facebook gateway was built on top of the AWS Lambda service;
  • a natural language processing engine that analyzes text on the fly, allowing the chatbot to ‘interpret’ a user’s answers. For now, the NLP engine processes English-language text, although it includes a translation service and, by default, translates all languages to English for more advanced NLP tasks. The engine was built using the AWS Lambda service and leverages IBM Watson’s AlchemyLanguage service for text processing;
  • a set of ‘backend APIs’ that manage respondent and survey data, route respondents from each response to the next question, and provide data to user interfaces. The APIs were built using the Django framework for Python and deploy on the AWS Elastic Beanstalk service;
  • an ‘orchestration layer’ that maintains survey status and routes messages between the end user and the various backend services (see the sketch after this list). The orchestration service is built on top of the AWS Lambda service; and
  • a ‘reporting and data visualization engine’. Data vizzes were built using Highcharts, a JavaScript charting library. This allows an analyst to instantly see the results of the chatbot surveys.
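To make the orchestration idea concrete, here is a minimal sketch of what a Lambda-style handler for that layer might look like. The survey content, state store, and message format are all illustrative assumptions – the actual hackathon code keeps survey state behind the backend APIs described above:

```python
import json

# Illustrative survey definition and in-memory state; the real system
# stores both behind the Django backend APIs.
QUESTIONS = [
    "How many days did your household eat vegetables this past week?",
    "Did you borrow food or rely on help from relatives this past week?",
]
STATE = {}  # respondent_id -> index of the next question to ask

def lambda_handler(event, context):
    """Orchestration sketch: read the incoming chat message, advance the
    respondent's survey state, and return the next question for the
    gateway (e.g. Facebook Messenger) to deliver."""
    message = json.loads(event["body"])      # assumed gateway payload shape
    respondent_id = message["sender"]["id"]

    step = STATE.get(respondent_id, 0)
    if step >= len(QUESTIONS):
        reply = "Thank you - the survey is complete!"
    else:
        reply = QUESTIONS[step]
        STATE[respondent_id] = step + 1      # advance this respondent

    return {
        "statusCode": 200,
        "body": json.dumps({"recipient": respondent_id, "message": reply}),
    }
```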

 


 

Leveraging cloud services from the Amazon Web Services product catalog, the teams were able to build a scalable, cost-effective platform that can be deployed quickly to multiple locations globally.

Remember the humans

We also received tips from a chatbot specialist, Alec Lazarescu from Chatbots Magazine. He encouraged us to ‘onboard’ users with an initial message that gives people a clear idea of what the chatbot is for. He told us to avoid ‘dead ends’ and allow users to speak to a human being in case they get stuck.

We’re very grateful to Nielsen for their support and to all the participants for their energy and creativity. The next steps involve WFP and InSTEDD taking over the code and ironing out the kinks. We expect challenges with natural language processing in Haitian Creole, a language that is probably under-researched. Making the different parts of the chatbot work together seamlessly also appears to be an area we will still have to work on. And, of course, the final test will be to see whether our target group – people living in Haiti – find the chatbot engaging.

Can we reach rural women via mobile phone? Kenya case study


WFP/Kusum Hachhethu

 

A few months ago, we published a blog post on our plans to test collecting nutrition data through SMS in Malawi and through live voice calls in Kenya. We just got back from Kenya where we conducted a large-scale mode experiment with ICRAF to compare nutrition data collected face-to-face with data collected through phone calls placed by operators at a call center. But before we started our experiment, we did a qualitative formative study to understand rural women’s phone access and use.

We traveled to 16 villages in Baringo and Kitui counties in Kenya, where we conducted focus group discussions and in-depth interviews with women. We also conducted key informant interviews with mobile phone vendors, local nutritionists, and local government leaders.

So in Kenya, can rural women be reached via mobile phone?

Here are the preliminary findings from our qualitative study:

  1. Ownership: Women’s phone ownership is high in both counties. However, ownership was higher in Kitui than in Baringo, which is more pastoralist. From our focus group discussions and interviews, we estimate that 80-90% of women own phones in Kitui and 60-70% in Baringo.
  2. Access: The majority of women had access to phones through inter- and intra-household sharing even if they didn’t own one themselves. This suggests that even women who don’t own a phone personally have access to phones they may be able to use to participate in phone surveys.
  3. Usage: Women mostly use phones to make and receive calls, not to send SMS. This supports our hypothesis that voice calls, not SMS, would be the optimal modality to reach women through mobile surveys.
  4. Willingness: Women were enthusiastic about participating in phone surveys during our focus group discussions and in-depth interviews, implying that they are interested in phone surveys and willing to take part.
  5. Trust: Unknown numbers create trust issues, but they are not insurmountable. Women voiced concerns about receiving phone calls from unknown numbers. Despite these concerns, we were eventually able to conduct our phone surveys successfully after sensitizing the community through existing community and government administration structures.
  6. Network: Poor network coverage, not gender norms or access, is the biggest barrier to phone surveys in the two counties. Women identified network coverage as the biggest barrier to communication. Some parts of the counties had poor to no network coverage. However, we found that phone ownership was high even in these areas, and women would travel to find network hotspots to make or receive phone calls.

So in conclusion, yes, in Kenya it is possible to reach rural women by phone.
Our findings from Kitui and Baringo counties show that we can reach women in similar contexts with mobile methodologies to collect information on their diet as well as their child’s diet.

We are also analysing the quantitative data from our mode experiment to find out whether data on women and children’s diet collected via live phone operators gives the same results as data collected via traditional face-to-face interviews.

Crowdsourcing food prices in remote areas: a bridge too far?


Photo: WFP

We have had success using crowdsourcing to collect food prices by SMS in the refugee camps of Kenya. That experience made us curious about trying other methods that could help us deliver data quickly and efficiently from the remote, hard-to-reach geographies where WFP works.

We found out about a startup that specializes in crowdsourced data collection. Anonymous ‘contributors’ would carry out simple data collection tasks through a dedicated smartphone app, the sales pitch went. Intrigued, we decided to pilot this system to monitor food prices in a drought-affected area of Southern Africa, hoping to use the data to complement what traditional information systems produce. What did we learn?

The anonymous ‘citizen reporter’ is a myth. The company we worked with had to go through local organizations, such as NGOs, to find people able to collect the data for us. This is a far cry from the vision of sourcing data from an anonymous crowd. There is more to finding contributors than putting out some ads on social media and magically reaching masses of people. Our contributors were not really anonymous and were easily identified by traders. In the end, the activity looked a lot like traditional tablet-based data collection. The World Bank found the same thing when they contracted a private company for crowdsourcing; you can find more on their experience here.

Getting started is labor intensive. It’s going to be a learning process for both your organization and the company, and this means investing significant staff time. On our side, since we were unfamiliar with the methodology, there were a lot of iterations as we attempted to specify commodity types and data types. This is perhaps surprising, because we at WFP have been collecting food prices for a long time; it turns out we still needed to revisit commodity lists and specify unit measures – a process that required patience. On the company’s side, they had limited experience in the geographies of the pilot, which led to an overestimation of what was possible and how quickly.

Expect long ramp-up times. The ramp-up to the data volumes we wanted took months, because that time was needed to set up the system and recruit the local contributors. Our roll-out was planned with this in mind. Do not expect an army of anonymous contributors to materialize out of thin air.

It’s still hard to reach remote places. The crowdsourcing model is no silver bullet when it comes to reaching the remote places we were interested in monitoring. It proved hard, or even impossible, to source enough data from the more remote markets through the crowdsourcing service – perhaps because of low smartphone penetration in remote locations, the high cost of sending a contributor to such places, or poor connectivity. In contrast, collecting data from larger urban areas was much easier.

High costs are a barrier to handover in resource-poor environments. It became clear that the cost of the activity was higher than that of lower-tech alternatives. WFP works to enable the handover of information systems to national authorities or other local partners. For the moment, the cost of app-based crowdsourcing is perhaps out of the financial reach of our local partners.

After some trial and error, we were able to obtain good-quality data through crowdsourcing that was helpful to our field offices. Ultimately, however, we returned to our mVAM strategy – using phone calls to traders to collect food prices each week. Although our approach can’t cover as many commodities as the company’s crowdsourcing service did, it has its own advantages. It’s lower tech – there is no fancy app to download. There is no faraway company to deal with. Above all, it’s an approach we can hand over to our local partners.

Our 5 mVAM Highs from 2016


1. Awards for Remote Mobile Data Collection Work

At the Humanitarian Technology 2016 conference, our paper ‘Knowing Just in Time’ won Best Paper for Outstanding Impact. In the paper, we assessed mVAM’s contribution to decision-making by looking at use cases for mVAM in camps, conflict settings and vulnerable geographies. Check out our blog post Tech for Humanity for more on it and on our other conference paper, ‘mVAM: A New Contribution to the Information Ecology of Humanitarian Work’.

To close the year, we had a nice surprise from Nominet Trust, the UK’s leading tech-for-good funder: we made their list of the 100 most inspiring social innovations using digital technology to drive social change around the world.

2. New Tech

In this day and age there’s a lot of buzz around data visualization. We’ve been honing our skills with Tableau. Check out the data visualizations we did for Yemen and Haiti.

We’re also in the era of Big Data. We partnered with Flowminder, experts in analyzing call detail records, to track displacement in Haiti after Hurricane Matthew. Find out more in ‘After the storm: using big data to track displacement in Haiti’.

We’re also super excited about the chatbot we started developing for messaging apps, and about our roll-out of Free Basics in Malawi, which is allowing us to share the food prices we collect in mVAM surveys with people in Malawi. With mVAM, our main focus has been reaching people on their simple feature phones, but we know that smartphone ownership is only going to increase, and contacting people through internet-enabled phones opens up loads of new forms of communication and data collection.

3. Expansion!

mVAM expanded to 16 new countries facing a wide set of challenges: conflict, El Niño drought, hurricanes, extremely remote geographies. We’ve been tracking and learning what remote mobile data collection can add to food security monitoring systems and what its limits are in different contexts. For some of the highlights, check out our blog posts on Afghanistan, the Democratic Republic of Congo, Haiti, Nigeria, Papua New Guinea, and El Niño in Southern Africa.

4. Dynamic Partnerships

To have a lasting impact, we need to work with governments. We are really proud of our partnership with CAID, the Cellule d’Analyses des Indicateurs du Développement under the Prime Minister’s Office in the Democratic Republic of Congo. We collaborated on setting up a national market monitoring system, mKengela, that they are now running. We’ve had intensive technical sessions with the CAID team in Rome and Kinshasa to work on solutions that fit their data management and analysis needs. The CAID team even traveled to Johannesburg to share their remote mobile data experience with other African countries and help other governments use this technology.

We’re also working with Leiden University. Bouncing ideas off of their team at the Centre for Innovation helps us move forward on tricky challenges. We’re also collaborating with them to develop an online course where we’re going to share our methodologies and how to use remote technology to monitor food security. Check out Welcome to Vamistan for more.

We work in tech, so we can’t do our job well without partnering with the private sector. It’s definitely a dynamic area, and one where we at mVAM are learning what works best in melding our humanitarian goals with the exciting potential of private tech. Check out our blog post From the Rift Valley to Silicon Valley and our hackathon with Data Mission for more.

5. Learning – the never-ending process

In addition to trying out new technology, we’ve been trying to answer some important questions about the live calls, SMS, and IVR surveys that make up the bulk of mVAM data collection. We’re doing mode experiments to understand how people answer differently depending on which mode we use to contact them – check out our first Mind the Mode article, with more coming in 2017. In Kenya, we are looking into whether we can collect nutrition indicators through mVAM methods. A major challenge is reaching women through phone surveys, so we organized a gender webinar with partners to learn from what they are doing – check out our key gender takeaways. These are key questions and they can’t be resolved overnight. But we’re making steady progress in understanding them, and we’re excited for what more we’ll find out in 2017.

Thanks to everyone who has supported our work this year and kept up with our blog!

Calling all developers: Join us at #Hackforhunger


Have you ever wanted to help out the World Food Programme? Sign up for our Chatbot Hackathon!

When: Our partner Nielsen is holding a 24-hour ‘Hack for Hunger’ Chatbot Hackathon from Saturday, January 7 to Sunday, January 8, 2017 at their global headquarters. The Hackathon is sponsored by Nielsen, the world’s largest data and information company, its data science subsidiary eXelate, and the Nielsen Marketing Cloud. You can sign up at this link.

The Challenge:  Build an ‘emergency response chatbot’ to revolutionize how we get the information we need to respond during emergencies.  

The United Nations World Food Programme is the world’s largest humanitarian agency fighting hunger. When there is a crisis, we get bags of food or cash assistance to people to make sure they don’t go hungry. We currently assist around 80 million people in 80 countries around the world. But to do this well, we need to know where the people who need the most help are.

The chatbot you build will allow community members to report to WFP about food security conditions in their local area. This information can save lives after a disaster like Hurricane Matthew in Haiti where roads were destroyed and ports were closed for days. WFP can chat with community members and find out what is happening on the ground in order to get assistance to the areas that need it most.

Our Chief Economist Arif Husain will be at the hackathon to tell you more about the World Food Programme’s work in emergencies and technology’s potential to accelerate our response. For more info on the work we’ve been doing on chatbots with InSTEDD, read our blog post on the chatbot prototype.

So, are you up for the challenge?

To Register: Sign up at this link – we’re looking forward to seeing you there! Interested engineers or developers should be highly experienced in Java, JavaScript, PHP, Ruby, Hadoop, SQL and development in Android, HTML, and iOS. UI/UX, Product Management, QA and HTML/CSS experience is also welcome.

Our key takeaways from the gender webinar


As you know, we recently held our first-ever #data4food webinar – thanks to everyone who managed to join! But don’t worry if you didn’t get the chance: a recording is now available on our BigMarker account. We thought it would be good to share our takeaways.

We were lucky enough to have 4 great panelists who spoke to us about their experiences with data collection and gender:

  • Joyce Luma, Country Director of WFP South Sudan
  • Sangita Vyas, Managing Director of r.i.c.e. (Research Institute for Compassionate Economics)
  • Micah Boyer, University of South Florida
  • Kusum Hachhethu, mVAM Team and Nutritionist

Our key takeaways are:

  1. To get to Zero Hunger, it is critical to understand women’s experience. As Joyce explained, women are best placed to describe issues that matter – including child nutrition, feeding practices and household food consumption. Without women’s perspectives, it’s not possible to design programmes well. If mobile surveys are to play a role in delivering the information needed to design relevant hunger-alleviating programmes, we need to reach women.
  2. Understand your context. Using mobile technology to reach women is easier in some communities than in others. Micah explained that there are important barriers to women’s access to and use of mobile phones in many places in West Africa. In Kenya, on the other hand, Kusum found that many women either owned or had access to phones. A good practice is to conduct formative research to understand women’s access to mobile before launching your survey. You can then plan your questionnaire design and project around this information.
  3. Yes, it is possible to reach women who do not own phones. Sangita explained how r.i.c.e. asks mobile survey respondents to identify harder-to-reach demographics, including women from deprived backgrounds. Asking to speak to women members of the household even if a man answers, or going through shared phones, are ways to reach women. Similarly, Joyce pointed out that in these contexts you should simplify the questionnaire – and use voice rather than SMS to avoid literacy problems.
  4. Don’t push it. Does it really make sense to reweight a sample that is 95% male and 5% female? While mobile data collection is cheap and quick, in some cases – like when the biases are too large – we are better off collecting data face-to-face.
  5. Consider alternatives to representative statistics. More use of qualitative approaches would help. Joyce said that in South Sudan, mobile phone ownership is too low to carry out representative surveys; WFP South Sudan therefore uses key informants to obtain food security information. One could obtain information about women’s health and nutrition from health workers, for example.
  6. Continue investing in methodology. The potential of remote data collection to provide food security information in contexts like conflicts means it’s important to keep investing in methodology so that this information is as good as possible. Sangita pointed to the importance of thoroughly training enumerators to achieve quality results.

You can also track our conversation on Twitter by following the hashtag #data4food. Stay tuned for our next webinar!

Mind the Mode: IVR vs SMS in Zimbabwe

It’s all in the mode. Or is it? Would your response over the phone be different from your response with a person in front of you asking the question? When answering a question over the phone, would you respond differently to a friendly operator, a recorded voice, or an SMS you reply to? These are pretty key considerations when you are in the business of asking people questions from afar, and we get asked about them a lot.

So, welcome to the first edition of our ‘Mind the Mode’ series. We have been conducting mode experiments to find out whether people respond differently to different survey modes: live calls, IVR (Interactive Voice Response – that recorded voice asking you to press 1 for English or 2 for Spanish), SMS, or face-to-face. In this first edition, we look at IVR and SMS in Zimbabwe.

You might never have thought about it before, but it turns out that IVR and SMS compete. In the automated data collection space, there are two schools of thought: one favors data collection via SMS, the other IVR. The SMS advocates argue that a respondent can take the survey at a time of their choice and at their own pace. Proponents of IVR point out that voice recordings are easier to understand than a text message because you don’t need to be literate to take the survey. It is, therefore, the more ‘democratic’ tool.

At mVAM, we’ve mostly been using SMS, but in Zimbabwe we had the opportunity to compare these two modes. Food security data was collected by both SMS and IVR in August 2016. IVR responses were received from 1,760 randomly selected respondents throughout Zimbabwe, and 2,450 SMS responses were received from a different set of random respondents stratified by province. For both survey types, most responses came from Manicaland, Harare, Masvingo and Midlands, due to higher population densities, better network coverage and higher phone ownership in these areas.

Respondents were asked very similar questions in both surveys. Both surveys asked about:

  • demographics and location, such as the age and gender of the respondent, the gender of the head of household, and the province and district they lived in;
  • the type of toilet in their house (to gain a rough estimate of socio-economic status);
  • the daily manual labour wage; and
  • whether they used any of the five coping strategies (a proxy for food insecurity):
    1. Rely on less preferred or less expensive food due to lack of food or money to buy food?
    2. Borrow food, or rely on help from a friend or relative due to lack of food or money to buy food?
    3. Reduce the number of meals eaten in a day due to lack of food or money to buy food?
    4. Limit portion sizes at mealtime due to lack of food or money to buy food?
    5. Restrict consumption by adults so children could eat?

However, the surveys differed in a few respects. The SMS survey gave an incentive of USD 0.50 in airtime credit to respondents who completed the survey, whilst there was no incentive for the IVR one. In the IVR survey, respondents could choose between English and Shona (most chose Shona), whereas the SMS survey was conducted only in English.

So, what have we learned?

IVR and SMS reach different demographics.

Our IVR and SMS surveys reached different demographics. A higher proportion of IVR responses came from worse-off households, i.e. those with no toilets or with pit latrines, compared to SMS responses. Similarly, a higher proportion of households headed by women participated in the IVR survey than in the SMS survey. WFP generally finds that households headed by women are more food insecure. So IVR surveys appear to have greater reach to worse-off households, perhaps because they do not require literacy or knowledge of English, as SMS surveys do.

Fig. 1a: IVR respondents by toilet type

Fig. 1b: SMS respondents by toilet type

Fig. 1c: IVR respondents by head of household sex

Fig. 1d: SMS respondents by head of household sex

IVR surveys give higher food insecurity estimates than SMS. Spoiler: The reason is unclear.

In general, we found that IVR responses showed higher coping levels than SMS responses. The mean reduced Coping Strategies Index (rCSI) is used as a proxy for food insecurity: a higher rCSI means people are coping more in response to a lack of food or money to buy food, i.e. they are more food insecure. In Zimbabwe, the mean rCSI captured through IVR (21.9) was higher than that captured through SMS (18.3) for the entire country. This difference was consistent across cross-sections by sex of the household head and by province (Figs. 2 and 3).
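For reference, here is a minimal sketch of how the rCSI is computed from the five coping strategies listed earlier, using the standard severity weights; the variable names are ours, for illustration:

```python
# Standard rCSI severity weights for the five coping strategies
RCSI_WEIGHTS = {
    "less_preferred_food": 1,
    "borrow_food": 2,
    "reduce_meals": 1,
    "limit_portions": 1,
    "restrict_adults": 3,
}

def rcsi(days_used: dict) -> float:
    """days_used maps each strategy to how many of the past 7 days it was
    used; a higher rCSI means more (and more severe) coping."""
    return sum(min(days_used.get(strategy, 0), 7) * weight
               for strategy, weight in RCSI_WEIGHTS.items())

# Example: borrowing food on 2 days and cutting meals on 3 days
print(rcsi({"borrow_food": 2, "reduce_meals": 3}))  # 2*2 + 3*1 = 7
```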


Fig. 2: rCSI by sex of household head


Fig. 3: Mean rCSI by province

However, when the data was analysed by toilet type – our proxy indicator for wealth – we saw a slightly different pattern. Flush toilets are considered a proxy for the best-off households, followed by the Blair pit latrine (a ventilated pit latrine), then the pit latrine, and then no toilet. We also asked about composting toilets, but too few households had them to make any meaningful comparisons. The mean rCSI differed significantly between modes only for households with flush toilets and those with pit latrines (in both cases, IVR responses had higher rCSI). For the other two toilet categories (Blair pit latrine and no toilet), the mean rCSI results were not significantly different between the two survey types. Therefore, the overall difference between IVR and SMS responses is not observed across all wealth groups (Fig. 4).
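We don’t spell out the exact test here, but as an illustration, a two-sample comparison of mean rCSI between modes within one toilet-type group might look like the sketch below; the numbers are placeholders, not survey data:

```python
from scipy import stats

# Placeholder rCSI values for one toilet-type group; not actual survey data
ivr_rcsi = [24, 18, 30, 21, 27, 25, 19]
sms_rcsi = [15, 19, 12, 22, 17, 14, 20]

# Welch's t-test: are the two mode means significantly different?
t_stat, p_value = stats.ttest_ind(ivr_rcsi, sms_rcsi, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # 'significant' if p < 0.05
```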


Fig. 4: rCSI by toilet type

This suggests that the higher overall mean rCSI among IVR respondents is not coming from the fact that IVR reached more worse-off households. However, we say this with a big caveat: toilet type, as noted above, is a rough indicator and might not accurately identify which households are worse off. It’s possible that we would have seen different results with a different proxy indicator for wealth groups.

When we examine this a bit further and break the rCSI down into the individual coping strategies in Figure 5, we see that IVR respondents use more coping strategies, more frequently, than SMS respondents. This makes sense, because the individual coping strategies are what is used to calculate the rCSI, and we already observed a higher mean rCSI among IVR respondents.


Fig. 5: Percentage of households using different coping strategies

However, we also noticed something else when looking at responses to each coping strategy. There is much higher variation in coping strategy use among SMS respondents than among IVR respondents (see Figure 5). This suggests that IVR respondents may be ‘straightlining’, i.e. providing the same response to every question. Straightlining suggests that people just don’t respond well to a recorded voice over the phone. While SMS has its drawbacks for literacy reasons, it does give the respondent more control over the pace of the survey: respondents have as much time as they want to read (or re-read) the whole text and respond. With IVR, people have to go at the speed of the questions. They may get impatient waiting to hear all the answer options, or they may not have enough time to understand the question. In both cases, they might just start pressing the same answer to get to the next question. Thus IVR might not give quality results.
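A simple way to quantify this pattern is to measure the share of respondents who give the identical answer to every coping-strategy question. Here is a minimal sketch, assuming a DataFrame with one column per question (the column names are illustrative):

```python
import pandas as pd

# Illustrative column names for the five coping-strategy questions
COPING_COLS = ["less_preferred_food", "borrow_food", "reduce_meals",
               "limit_portions", "restrict_adults"]

def straightline_rate(df: pd.DataFrame) -> float:
    """Share of respondents who gave the identical answer to every
    coping-strategy question."""
    same_answer = df[COPING_COLS].nunique(axis=1) == 1
    return same_answer.mean()

# Comparing the rate across modes would flag the pattern we describe:
# straightline_rate(ivr_df) vs straightline_rate(sms_df)
```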

Interestingly, we saw a similar pattern in Liberia during the Ebola epidemic. We used both SMS and IVR to collect information during the emergency. IVR results showed very high rCSI with limited variation. SMS data consistently produced lower (and more credible) rCSI estimates, and the variation in the data was greater (perhaps a sign of greater data quality).

Different demographics or differences in user experiences (i.e. straightlining) could be contributing to different food security estimates in IVR and SMS.

The upshot is that different survey tools lead to different results, and we need to understand these differences as the use of automated mobile data collection expands. We are not sure whether the different demographics of IVR and SMS respondents are the cause of the higher food insecurity estimates for IVR, or whether the different user experiences are in play – especially since IVR respondents may be straightlining their answers rather than accurately reporting their coping levels. We suspect a bit of both.

Stay tuned for the next editions of our ‘Mind the Mode’ series as we continue to document our learning from the mode experiments.

mVAM recognized for innovation in the 2016 ‘Nominet 100’


We’re pleased to announce that mVAM has been recognised as one of 2016’s 100 most inspiring social innovations using digital technology to drive social change around the world. The competition, the NT100, is run by the Nominet Trust, the UK’s leading tech for good funder.

The 2016 NT100 was selected from 700 projects reviewed by Nominet Trust and a panel of partner organisations including: Big Lottery Fund, Cancer Research UK, Comic Relief, Nominet, Oxfam, Telefonica O2 and Skoll Centre for Social Entrepreneurship.

mVAM has been recognised for its contribution to humanitarian interventions by leveraging mobile technology to provide frequent, lower cost food security data.

If you want to find out more about other NT100 projects check out their Social Tech Guide, a comprehensive collection of inspiring ways tech pioneers are changing lives, communities and our world for the better.

#data4food: Join us for the first mVAM Webinar!


Join us on Monday for the first of our webinar series! We’ll be hosting a discussion with experts in the fields of mobile data collection, gender, and data analysis:

Addressing Gender-related Challenges in Remote Mobile Data Collection


12 December 2016
9am EST/2pm Dakar/3pm Rome/5pm Nairobi 

 

The discussion will explore some key issues that arise in remote mobile data collection, such as:

  1. Women’s Participation: How can we engage more women when conducting surveys via mobile phone? How can qualitative research help improve female participation rates?
  2. Analyzing Data for Zero Hunger: How do low female participation rates bias our data and thus our ability to design effective, evidence-based programmes? Given the barriers to women’s participation, what can we do right now to analyze our data in a way that better represents women’s experiences? Are we even asking the right questions?
  3. Mobile’s Potential: What are the untapped possibilities for using remote mobile data collection to collect information on both men’s and women’s experiences (e.g. protection issues like anonymous reporting of gender-based violence)? What are the limitations?

Panelists:

  • Joyce Luma, Country Director, WFP South Sudan (former head of WFP Trends and Analysis Service): Gender, mobile phone surveys, and data analysis for Zero Hunger
  • Sangita Vyas, Managing Director, r.i.c.e. (Research Institute for Compassionate Economics): Methodologies for capturing women’s experiences in mobile phone surveys in India
  • Micah Boyer, University of South Florida: Women, markets, and mobile phones in the Sahel
  • Kusum Hachhethu, WFP mVAM Team and Nutritionist: Qualitative research for using mVAM to reach rural Kenyan women

 

To join the webinar, connect via this link:

https://www.bigmarker.com/world-food-programme/Gender-Remote-Mobile-Data-Collection

 

Are you on Twitter? Participate in the discussion on Monday with the hashtag #data4food