What’s happening in Burundi?

WFP/Silvia Calo

This week, we were in Burundi to improve how we collect, manage, and visualize data. Specifically, we wanted to work on two surveys that we conduct in the country using mVAM: an early warning survey – “Système d’Alerte Précoce au Burundi” (SAP) – and price data collection, known as mMarket.

mVAM has been active in Burundi since October 2016 and has collected information on different early warning indicators every month since then. Given the very low mobile phone ownership rates in the country, it is not feasible to conduct household food security surveys using mVAM. However, we have been able to gather useful information by regularly calling 55 Burundi Red Cross volunteers there who make up the SAP. These volunteers are Burundian citizens who work closely with the communities we’re trying to reach. They organize weekly meetings with local community focal points, which gives them a good understanding of the food security situation.

Gathering information about food security in the communities through key informants has its challenges of course. Finding out how households are coping without interviewing them directly can sometimes be difficult.

We visited WFP’s Country Office in Burundi in order to combine the local team’s knowledge of the Burundian context with our experience of conducting phone surveys. The result? A new questionnaire that is shorter than the previous one, but still contains all the indicators needed for a meaningful early warning survey. Although the additional indicators we collected in the longer survey provided valuable information, very long questionnaires conducted over the phone carry their own risks – the length may lead to key informants dropping out or declining to participate in the survey at all. Even worse, key informants may rush through the survey without thinking carefully about their answers. We need to remember that the volunteers who provide information are often very busy assisting their local communities and may not have much time to speak over the phone!

WFP/Silvia Calo

The second objective of our trip was to improve the mMarket data collection, which uses information from traders in different geolocated markets in the country. We added some commonly consumed food items to the questionnaire, as well as some non-food items, such as the cost of fuel, which serves as an early warning indicator for a rise in food prices.

Both SAP and mMarket yield large amounts of data at high frequency. Since the added value of mVAM is providing valuable information in as close to real time as possible, we are always trying to find new ways of speeding up data analysis and the publication of bulletins. As key informant surveys like SAP and mMarket deliver qualitative rather than quantitative information, there is no magic statistical formula that can be used to make sense of the data. Hence, the only way to build a story around the data is to look at the data itself. We do this using data visualization tools like Tableau. Rather than simply looking at tables, we use dashboards to triangulate the indicators we collect, which enables us to track how the food security situation is evolving. In the future, we might also use Tableau to produce interactive bulletins, so that users can explore the data we collect in more depth.

Starting in November 2017, our revamped questionnaires will be used and we will publish new bulletins, which will include interactive data visualizations. Stay tuned!

South Sudan: communicating both ways

WFP/Hagar Ibrahim

We are back in South Sudan, where, in June, we identified two main areas of opportunity for employing a mobile Vulnerability Analysis and Mapping (mVAM) approach: using it to monitor urban food security and applying it to improve early warning systems.

This time, we are pleased to announce that the project is moving forward: we are collecting more and more phone numbers and getting closer to piloting an Interactive Voice Response (IVR) system, which will both boost the capacity of our in-house call centre and enable beneficiaries to access information and get answers to their questions.

WFP/Hagar Ibrahim

The food security situation in urban areas in South Sudan has been deteriorating. According to WFP’s latest urban food security assessment in Bor town, 85% of households are food insecure (of which 44% are severely food insecure, and 41% moderately food insecure). As the urban food security situation needs to be monitored frequently and there is better mobile phone coverage in urban than in rural areas, mVAM is stepping in to collect the data.

Through face-to-face assessments and via our partner agencies on the ground, we have collected over 400 phone numbers and used some of them to conduct food security live call interviews with households in urban centers mostly across Greater Equatoria.

WFP/Map 1: Number of surveyed households by county, September 2017

However, the context for conducting phone surveys in South Sudan continues to be challenging due to low mobile phone penetration rates and connectivity problems. As we reported last time, the main mobile network operators have downsized their businesses due to recurrent conflict. In our most recent round of phone surveys, we found that nearly 40% of the numbers were not reachable. Nevertheless, we were able to talk to over 240 households and ask them about their food consumption, negative coping behaviours, and the food security situation in their communities.

The goal of our latest mission was to provide technical support and assist with capacity building at our in-house call centre. We have configured an interactive voice response (IVR) system, a technology which allows users to access relevant information using the phone keypad and speech recognition. Through the pre-recorded voice response option, the system will be used to answer beneficiaries’ questions relating to, for example, the registration process, food distribution dates, and technical issues such as lost or damaged vouchers. Users will also be able to record their questions, which WFP will then follow up on. The IVR system can also initiate calls automatically and direct them to an operator only when a respondent picks up the phone, thereby saving operators time. This will help address a challenge that mVAM operators in South Sudan have long grappled with.
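The keypad-driven part of such a system boils down to routing a caller's input to the right pre-recorded message. The sketch below illustrates that routing logic in Python; the menu options and prompt texts are illustrative, not the actual South Sudan configuration.

```python
# Illustrative IVR keypad menu: each key maps to a pre-recorded response.
# Options and wording are hypothetical examples, not the deployed menu.
MENU = {
    "1": "Playing recording: how to register for assistance.",
    "2": "Playing recording: next food distribution dates.",
    "3": "Playing recording: what to do about a lost or damaged voucher.",
    "4": "Recording your question; WFP will follow up with an answer.",
}

def handle_keypress(key: str) -> str:
    """Route a caller's keypad input to the matching pre-recorded response,
    falling back to the main menu on invalid input."""
    return MENU.get(key, "Invalid option. Returning to the main menu.")

print(handle_keypress("2"))  # valid option: plays the matching recording
print(handle_keypress("9"))  # unknown key: falls back to the main menu
```

In a real deployment this dispatch would sit behind a telephony gateway rather than `print` calls, but the menu-to-recording mapping is the core of the design.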

The next steps for mVAM in South Sudan will involve deploying and improving the IVR system and expanding our contact information database of potential survey respondents with the help of WFP units and our cooperating partners in the field. Until the next time!

Designing a new communication channel – the Food Bot

WFP/Lucia Casarin

After missions to several field locations (including Nigeria, Haiti, and Kenya) aimed at assessing the feasibility of deploying chatbots in WFP’s operational contexts, the mVAM team concluded that they offer great potential for both the sharing and receiving of useful information on food security.  It is now time to take a step forward and actually build a chatbot for WFP – the Food Bot!

In case you haven’t been tracking our work on chatbots (about which you can learn more here and here), here’s a quick refresher. A chatbot is a computer program designed to simulate conversation with human users over the Internet; imagine an invisible robot living inside the Internet asking you questions.

Tailoring the chatbot to its users

The first step needed in designing a new tool is to garner a strong understanding of its users – who will be using the chatbot and for what purposes?

In our case, we are working simultaneously on two levels:

  1. Chatbot builder tool: this is an interface where WFP staff will be able to design, deploy, and manage customized chatbots. The primary users of the chatbot builder tool will be WFP staff in the field, who will use the platform to design contextually-appropriate chatbots for their location. As you can imagine, each WFP Country Office envisions using the chatbot for a specific purpose. In Kenya, for example, colleagues are eager to deploy a chatbot to share updated information about WFP food and cash distributions as well as other programmatic details. In Nigeria, on the other hand, staff want to share details on how to use nutritional supplements provided by WFP.
  2. Contents within the chatbot: this refers to the information the chatbot provides and the dialogues between the chatbot and its users. Targeted users for the chatbot are people living in marginalized and food insecure communities who can use the chatbot to receive information from WFP. They can also ask us questions about WFP’s programmes in their area and provide their feedback and complaints. WFP will develop different chatbots for different locations and target populations.

WFP/Lucia Casarin

To get to know our users better and start defining the design of the Food Bot, WFP and our technical partner InSTEDD (which has extensive experience designing innovative mobile tools) travelled once again to the Kakuma Refugee Camp, located in western Kenya, where we spent a few days collaborating with WFP staff and refugees to understand how to create a user-friendly chatbot that meets their needs.

We first worked with a small group of refugees to better understand how they use the chatbot technology. To do so, we employed a popular prototyping technique called ‘Wizard of Oz’. Under our supervision and guidance, refugees were asked to visit a Facebook page and start a conversation with what they believed was a WFP chatbot; in fact, they were chatting with one of our colleagues. Through this human-centered approach, we quickly learned what types of information the Kakuma refugees were interested in receiving, as well as how they phrased their questions. During the field test, we also confirmed our hypothesis that chatbot conversations need to be as light as possible (not using many pictures, menus, or emoticons) in order to minimize data charges and make conversations possible when network coverage is weak or the user is on Messenger Lite.

We then spent some time with our WFP colleagues in the Kakuma and Nairobi Offices brainstorming the ways in which the chatbot could complement existing activities and provide useful information for our work.

WFP/Lucia Casarin

An iterative design approach

We are now dedicating the next few months to developing the chatbot builder and refining the chatbot contents for a larger pilot project in Kenya. Building a new platform will require a lot of trial and error, and we know that we won’t get everything right on the first try. For this reason, we have adopted an iterative design approach, meaning that we will carry out multiple field tests along the way to further refine our product. This will allow us to collect valuable feedback from users at each stage of development so that we can mitigate potential issues early on.

Stay tuned during the coming months as we share additional information on the development of our very first Food Bot!

A new mVAM baby in Mali, weight: 7800 respondents!

WFP/Sebastien Rieussec

This week we’re reporting our latest news from mVAM in Mali, a landlocked country in the Sahel where chronic food insecurity and malnutrition are widespread – WFP has been present there since 1964. In the last few years Mali has been coping with numerous shocks – such as droughts, floods and a military coup – that led to a political and security crisis and increased food insecurity: by 2016, around 3.1 million people in Mali were food insecure. Households are particularly affected during the lean season, between June and September; this year WFP estimated that 3.8 million people were affected by food insecurity, of whom 601,000 were in urgent need of food assistance.

To monitor the food security situation, the Government of Mali, with WFP support, conducts two nationwide face-to-face surveys each year, in February and September. In between – and especially during the lean season, which falls during the summer in Mali – there was no data collection, so mVAM stepped in to fill the ‘data gap.’ We’ve previously blogged about the Mali mode experiment comparing data collected by live calls and face-to-face interviews. As the results showed little difference between the modes, in August the Country Office rolled out mVAM nationwide to get food security information from households affected by this particularly difficult period of the year. Phone numbers had been collected during the previous face-to-face survey; out of the 13,400 numbers collected, we reached over 7,800 households – mVAM’s largest-ever survey!

With each survey come different country-specific ‘problems’. There are many reasons why people might not want to take part in a phone survey – but in Mali, we found one of the biggest was mistrust. People are not used to taking surveys via mobile phone and suspect there is some form of trick behind them. Many reported that they know there are lots of mobile phone scams, and worried that a call from an unknown number purporting to be from WFP was just another one of these. One reason for their suspicion was the long gap between the number collection and the phone survey. This was a deliberate choice by the Country Office to ensure that the survey was not just a ‘follow-up’ to face-to-face data collection, like our mode experiment, but was gathering new information during this specific time period. What wasn’t foreseen was that people forgot they had given WFP their number and may not have fully understood why they did so in the first place.

WFP/Nanthilde Kamara

To get around this issue, the Country Office is planning several tactics. As well as using SMS and national radio to advertise the survey, the next time phone numbers are collected, more time will be spent explaining exactly what the purpose of the survey is. The annual September face-to-face food security survey is currently ongoing, so enumerators are now explaining that respondents might be called by WFP later this year. The call centre that supports mVAM in Mali calls everyone from the same unique number; this number will be shared with community leaders just before the survey so that they can inform people that they will be rung from this specific number and that it’s an official call from WFP. Respondents will then be able to save the number in their phone, so that when they get the call they know exactly who it is rather than seeing just an unknown number.

The analysis is still ongoing – we’re looking forward to the results!

New places, new tools: what’s up next for mVAM?

We’ve just got back from Rwanda where we were holding a workshop on using mVAM to expand real-time food security and nutrition monitoring with Internally Displaced Persons (IDPs) and refugee populations. The project, which is made possible by the support of the Korean International Cooperation Agency (KOICA), will be implemented in ten countries in sub-Saharan Africa where WFP works.

What’s the project?

The KOICA project has two aims. First, it aims to empower information exchange with marginalized populations, specifically IDPs and refugees. Second, it supports the collection of food security and nutrition data using the latest mobile and satellite technologies. This will happen in ten countries in sub-Saharan Africa: the Central African Republic (CAR), the Democratic Republic of the Congo (DRC), Kenya, Malawi, Niger, Nigeria, Rwanda, Somalia, South Sudan and Uganda.

How are we going to do this?

As you know, two-way communication systems are an important part of our work. As well as getting information that we can use to inform WFP programmes, we want to ensure that the line is open so that people in the communities we serve can contact us and access information that is useful to them. We’ve already been using Interactive Voice Response and live calls to share information with affected populations, and are now expanding our toolbox to include new technologies: Free Basics and a chatbot.

Remote data collection isn’t done only by mobile phone – VAM already uses other sources, such as satellite imagery analysis, to understand the food security situation on the ground. Under this project, we’ll also help countries incorporate similar analysis to complement two-way communication systems and provide a fuller picture of the food security situation.

Finally, we’re going to harness our knowledge of Call Detail Records analysis: de-identified metadata collected via cell phone towers about the number of calls or messages people are sending and which towers they are using. We have already used this technique in Haiti to track displacement after Hurricane Matthew, and we’re really excited to transfer these ideas to another context to ensure we get up-to-date information on where affected communities are so we can better target food assistance in the right locations.

What happened at the workshop?

Representatives from all ten country offices, three regional bureaus and staff from HQ came together to discuss the three main project components. During the workshop, the different country offices had the chance to learn more from members of the mVAM team about the specific tools they can harness to ensure that the data they collect is high quality, standardised and communicated effectively. However, the best part about bringing everyone together was that country teams could share their experiences about how they are already using mVAM tools. We heard from the Malawi country office about their Free Basics pilot, and Niger and Nigeria explained how they’re implementing IVR so affected communities can easily contact WFP, even after work hours. Sharing these different experiences and learning how different tools have worked in each context not only gave everyone an overview of what mVAM is doing so far, it also helped everyone understand the implementation challenges and how to overcome them.

What’s next for the KOICA project?

We’re really excited for the next stage of the project. Each country office has now planned what tools they’re going to use to increase their communications with affected communities and how they will improve their existing data collection systems. It’s going to be great to see the impact these tools will have not only on WFP’s response, but also how they will empower the communities we’re serving. 

Our experiment using Facebook chatbots to improve humanitarian assistance

Testing the chatbot in Nigeria

It must have been above 40 degrees Celsius that afternoon in Maiduguri, Nigeria. Hundreds of people were waiting to cash the mobile money they receive from the World Food Programme (WFP), sitting under tarps that provided some protection from the sun – in other words, the perfect time to sit and chat.

“How many of you have smartphones?” we asked. We waited for the question to be asked in Hausa, and out came mobile devices of all shapes and sizes. “How many of you have Facebook accounts?” Even before the question was translated, we saw nods all around.

“Of course we’re on Facebook – it’s the way we can message friends and family”

Displaced people in Nigeria, even those facing famine and in urgent need of aid, are connected and rely on messaging apps.

A leap of faith: from SMS to chatbot surveys

Collecting information in communities on the humanitarian frontline is dangerous, cumbersome and expensive, particularly in conflict settings. In north-east Nigeria, our assessment teams travel by helicopter or in convoys, and some locations are simply too insecure to visit at all. This means that decisions about emergency food assistance are sometimes made with very limited information.

But increasing access to mobile phones is changing this. WFP’s mobile Vulnerability Analysis and Mapping (mVAM) project has adopted SMS, Interactive Voice Response and call centres to collect food security information from communities enduring crises like Ebola or the Syrian civil war. Nielsen, a global information and measurement company, found that using SMS we are able to run our surveys 50% cheaper and 83% faster than face-to-face surveys, while putting no enumerators in harm’s way. The system’s success means we’re now using mobile tools to collect and share information in 33 countries.

Our successes with automated surveys meant we were keen to look into using chatbots (automated assistants programmed into messaging apps) to collect food security data. We were especially intrigued by the fact that a bot could let us ‘chat’ with thousands of people simultaneously and in real time, as others have done.

A sample chatbot interaction

To reach as many people as possible, we decided to create a bot that would operate on a popular messaging app, like Facebook Messenger or Telegram, so people could take our surveys on a platform they already use.

You might think it’s unreasonable to expect people in conflict settings to be connected at all. But, as our Nigeria example shows, their connection is a lifeline to normality. We also found that in many countries operators sell ‘social bundles’ that offer unlimited Facebook, WhatsApp or other social media for a single low price.

Where ‘Facebook Lite’ is available, people can even connect for free. All this means that communicating with vulnerable communities could happen in real time and at little to no cost to the respondent or WFP.

Introducing Food Bot

Last summer, we decided to try it out. InSTEDD developed a chatbot prototype that we demoed with Sub-Saharan African migrants in Rome. The demo asked the respondent to share information about food security in their community and allowed them to look up updated food prices.

Our testers liked the fact that talking to our bot felt like having a conversation with a real person. We felt like we were on to something! Earlier this year, Nielsen helped us further develop a chatbot design that calls for multiple gateways, natural language processing capabilities, and a reporting engine.

The current version of Food Bot is programmed to ask a predefined set of questions to the user – it does not rely on artificial intelligence yet. Food Bot goes through a simple questionnaire and saves the answers so that our analysts can process them.
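The scripted, non-AI behaviour described above can be sketched as a fixed sequence of questions whose answers are stored for later analysis. The question wording and the reply callback below are illustrative assumptions, not the actual Food Bot script or API.

```python
# Hypothetical scripted questionnaire in the spirit of the current Food Bot:
# a predefined question list, no natural language understanding.
QUESTIONS = [
    "How many meals did your household eat yesterday?",
    "What is the current price of maize in your local market?",
    "Has your household borrowed food in the past 7 days? (yes/no)",
]

def run_survey(get_reply) -> list[tuple[str, str]]:
    """Ask each question in order. `get_reply` stands in for the
    messaging-app gateway that delivers the question and returns the
    user's answer. The (question, answer) pairs are what analysts
    would later process."""
    answers = []
    for question in QUESTIONS:
        answers.append((question, get_reply(question)))
    return answers

# Simulate a user who gives the same reply to every question:
transcript = run_survey(lambda q: "test answer")
print(len(transcript))  # 3
```

A real deployment would replace the lambda with the Messenger (or other gateway) send/receive cycle, but the fixed question-and-store loop is the essence of a rule-based bot.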

The chatbot format also lets users ask us questions and is a channel for us to give useful information we’ve collected back to these communities. These include messaging on WFP programmes, food prices, weather updates, nutrition and disease prevention. The version we are using for testing currently runs on Facebook Messenger, but we want to make sure it works on all the relevant messaging apps.

No walk in the park

Before we get carried away, we need to consider some very real challenges. A timely report by the ICRC, Block Party and the Engine Room emphasizes the new responsibilities that humanitarian agencies assume as they use messaging apps to communicate with affected populations. Notably, using chat apps to collect information from people who have fled their countries or homes raises the important issue of responsible data practices. If we are ever hacked, people’s personal details – including names and pictures – could be put at risk. We will certainly have to review our existing data responsibility guide and continue obtaining advice from the International Data Responsibility Group (IDRG), as well as build an understanding of data responsibility principles in the field.

We also suspect that the audience we reach through Food Bot will be younger, better off, more urban and more male than the general population. The convenience of collecting data through a bot does not dispense with the hard task of seeking out those who are not connected and who are probably the most vulnerable. We want to explore ways to make our bot as accessible as possible, such as by translating text into local languages, using more icons in low-literacy settings, and working with civil society organisations that specialize in digital inclusion.

Finally, we realize that we must prepare to manage all of the unstructured information that Food Bot will collect. Colleagues in the field are already weary of collecting yet more data that won’t be analysed or used. As a result, the team is working on setting up the infrastructure that is needed to process the large volumes of free text data that we expect the bot to produce. This is where our work with automated data processing and dashboards should pay dividends.

This post was originally published on ICT Works as part of a series on humanitarian chatbots.

Mind the mode:

Who's texting & who's talking in Malawi?

Malawi mVAM respondent
WFP/Alice Clough

It’s time for another installment of our Mind the Mode series. For those of you who follow this blog regularly, you know that the mVAM team is continually evaluating the quality of the data we collect. Past Mind the Mode blogs have discussed our work in Mali looking at face-to-face versus voice calls, our comparison of SMS and IVR in Zimbabwe, and the differences in the Food Consumption Score (FCS) between face-to-face and Computer-Assisted Telephone Interview (CATI) surveys in South Sudan.

This month, we turn our attention to Malawi, where we recently completed a study analyzing the differences in the reduced Coping Strategies Index (rCSI) when it’s collected via CATI and SMS. This indicator helps measure a household’s food security by telling us what actions they might be taking to cope with stresses, such as reducing the number of meals a day or borrowing food or money from friends or family. From February to April 2017, around 2,000 respondents were randomly selected for an SMS survey and 1,300 respondents were contacted on their mobile phones by an external call centre to complete a CATI survey.
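For readers unfamiliar with the indicator, the rCSI is a severity-weighted sum over five coping strategies, each reported as the number of days (0-7) it was used in the past week. The sketch below uses the standard five strategies and severity weights; the exact questionnaire wording in the Malawi survey may differ.

```python
# Standard rCSI strategies and severity weights (a commonly used set;
# shown here as an illustration of how the index is computed).
RCSI_WEIGHTS = {
    "less_preferred_foods": 1.0,   # eat less preferred / less expensive food
    "borrow_food": 2.0,            # borrow food or rely on help from others
    "limit_portion_size": 1.0,     # limit portion sizes at meals
    "restrict_adult_intake": 3.0,  # restrict adult intake so children can eat
    "reduce_meals": 1.0,           # reduce the number of meals per day
}

def rcsi(days_used: dict) -> float:
    """days_used maps each strategy to the number of days (0-7) it was
    used in the past week; the rCSI is the severity-weighted sum."""
    for strategy, days in days_used.items():
        if not 0 <= days <= 7:
            raise ValueError(f"{strategy}: days must be in 0-7, got {days}")
    return sum(RCSI_WEIGHTS[s] * d for s, d in days_used.items())

# A household using no strategies scores 0; heavy reliance on all five
# pushes the score toward the maximum of 56 (sum of weights 8 x 7 days).
print(rcsi({s: 0 for s in RCSI_WEIGHTS}))  # 0.0
print(rcsi({s: 7 for s in RCSI_WEIGHTS}))  # 56.0
```

A score of zero means no negative coping at all, which is why the peak at zero in the CATI distribution discussed below is meaningful.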

People Profiling: who’s Texting and who’s Talking? 

Across all three rounds, a greater proportion of respondents in both modalities were men who lived in the South and Central Regions of the country and came from male-headed households. However, the respondents taking the SMS survey were much younger (average age 29) than those who took the CATI survey (average age 40). This probably isn’t surprising when you consider that young people across the world tend to be much more interested in new technologies and in Malawi are more likely to be literate.

The results from our mode experiment in Zimbabwe showed that IVR and SMS surveys reached different demographic groups so we figured we might see the same results in Malawi. However, this was surprisingly not the case: both CATI and SMS participants seemed to come from better-off households. In our surveys we determine this by asking them what material the walls of their home are made from (cement, baked bricks, mud, or unbaked bricks).

More respondents (60%) said they have cement or baked brick walls as opposed to mud or unbaked brick walls, an indicator of being richer.

Digging into the rCSI

So what about the results observed for the rCSI between the two modes? The CATI rCSI distribution shows a peak at zero (meaning that respondents are not employing any negative coping strategies) and is similar to the typical pattern expected of the rCSI in face-to-face surveys (as you can see in the two graphs below).

Density plot for CATI Feb-April 2017

Density plot for SMS Feb-April 2017

The SMS results, on the other hand, tend to show a slightly higher rCSI score than CATI, meaning that respondents to the SMS survey are employing more negative coping strategies than households surveyed via CATI. This runs counter to what we might expect, especially since the data illustrates that these households are not more vulnerable than CATI respondents. Presumably, they would actually be better educated (read: literate!) in order to respond to SMS surveys at all. We’re therefore looking forward to doing some more research into why this is the case.

Box plot for CATI rCSI

It’s All in the Numbers

Some interesting response patterns were also observed across the two modalities. SMS respondents were more likely to answer all five rCSI questions by entering the same value for each question (think: 00000, 22222… you get the idea!). At the beginning of the survey, SMS respondents were told that they would earn a small airtime credit upon completion of the questionnaire. We conjecture that some respondents may have entered numbers randomly to complete the questionnaire as quickly as possible and receive their credit. Keep in mind that entering the same value for all five rCSI questions via CATI is much harder, as the operator can ask additional questions to make sure the respondent clearly understands each question before entering the response. For SMS, there is no check stopping the respondent from dashing through the questionnaire and entering the same response each time.
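One simple quality check for this kind of 'straight-lining' is to flag any five-question block whose answers are all identical. The sketch below shows the idea; the flagging rule is an illustrative heuristic, not the actual screening the team applies.

```python
# Flag 'straight-lined' SMS responses: all five rCSI answers identical
# (e.g. 0,0,0,0,0 or 2,2,2,2,2). An illustrative data-quality heuristic;
# note a genuine all-zeros answer (no coping at all) is also plausible,
# so flagged records warrant review rather than automatic removal.

def is_straight_lined(answers: list[int]) -> bool:
    """True if a complete five-answer block contains one distinct value."""
    return len(answers) == 5 and len(set(answers)) == 1

responses = [
    [0, 0, 0, 0, 0],   # flagged
    [2, 2, 2, 2, 2],   # flagged
    [1, 0, 3, 2, 0],   # varied pattern, not flagged
]
flagged = [r for r in responses if is_straight_lined(r)]
print(len(flagged))  # 2
```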

We also saw that the percentage of respondents reporting between zero and four strategies was much lower among SMS respondents than CATI respondents across all three months of data collection. Conversely, more respondents in the SMS survey (three out of five) reported that they were using all five negative coping strategies than in the CATI survey. Again, this runs counter to what we would expect. It might mean that SMS respondents didn’t always correctly understand the questionnaire, that they didn’t take the time to reflect on each question and completed it as rapidly as possible to get their credit, or that they simply entered random numbers in the absence of an operator to validate their responses. The graphs below illustrate the differences in rCSI responses between CATI and SMS.

Figure 3: Distribution of the number of coping strategies reported by SMS and CATI respondents by months

From these results, you can see that we still have a lot to learn about how survey modality affects results. This is just the start of our research, so expect more to come as the team digs deeper to better understand these important differences.

Postcard from Dakar

mVAM workshop participants all smiles after learning more about IVR WFP/Lucia Casarin

During the last week of June, staff from WFP HQ’s mVAM team, the West and Central Africa Regional Bureau, and Nigeria and Niger Country Offices met in beautiful Dakar to work together on Interactive Voice Response (IVR) systems for two-way communication. (If you want to dig deep into all details IVR-related, check out the lesson in our mVAM online course!)

We’ve previously blogged about how WFP is responding to the needs of people displaced by the Boko Haram insurgency in both Nigeria and Niger. When we implemented these operations we also put communication channels in place so that beneficiaries can contact WFP. In Nigeria, the Maiduguri Field Office created a hotline. Its operators receive an average of 100 calls per day from beneficiaries asking food security-related questions and providing feedback on the operations. The problem is that the hotline is only available during working hours and can only handle a limited number of simultaneous calls. To work around this, the office is looking at how an IVR system can support the call operators dealing with high volumes and better manage calls that come in outside normal office hours. WFP Niger wants to set up a similar hotline, but without full-time phone operators: beneficiaries will call in to an automated IVR system, and their queries and feedback will be recorded by the system and followed up by the Country Office.

A Nigeria IT Officer working to install a GSM gateway for IVR usage in Maiduguri WFP/Lucia Casarin

During the workshop participants were trained by InSTEDD on how to physically deploy IVR using a GSM gateway (a fancy tool that automatically places phone calls) and Verboice, the free open source software they’ve developed to manage these systems. The team also discussed the nitty gritty technical aspects of the system, including creating and modifying call flows (the sequencing of questions), scheduling calls and downloading collected call logs and recordings. Most importantly, participants had the opportunity to share their experiences and challenges with experts in this field and discuss best practices, alternative deployments and technical solutions.
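To make the idea of a call flow concrete, here is a minimal sketch of question sequencing modelled as a keypress-driven state machine. This is not Verboice's actual configuration format; the menu options and prompts are invented purely for illustration.

```python
# Illustrative sketch of an IVR call flow as a simple state machine.
# NOT Verboice's real configuration format -- just a generic model of
# "sequencing of questions" with keypress-based branching.

CALL_FLOW = {
    "welcome": {
        "prompt": "Welcome to the WFP hotline. Press 1 for food "
                  "distribution dates, 2 to leave feedback.",
        "branches": {"1": "distribution_info", "2": "record_feedback"},
    },
    "distribution_info": {
        "prompt": "The next distribution is announced by your local "
                  "focal point. Press 2 to leave feedback, 9 to end.",
        "branches": {"2": "record_feedback", "9": "goodbye"},
    },
    "record_feedback": {
        "prompt": "Please record your message after the tone. Press 9 to end.",
        "branches": {"9": "goodbye"},
    },
    "goodbye": {"prompt": "Thank you for calling.", "branches": {}},
}

def run_flow(keypresses):
    """Walk the flow for a sequence of caller keypresses; return prompts played."""
    state, played = "welcome", []
    for key in ["start"] + list(keypresses):
        if key != "start":
            # Unrecognised keys replay the current prompt (stay in state).
            state = CALL_FLOW[state]["branches"].get(key, state)
        played.append(CALL_FLOW[state]["prompt"])
        if not CALL_FLOW[state]["branches"]:  # terminal state
            break
    return played

prompts = run_flow(["1", "9"])  # caller asks for info, then hangs up
```

Modifying a call flow then amounts to editing the dictionary rather than retraining operators, which is what makes scheduling and versioning automated calls manageable.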

The Country Office staff have now returned to Niger and Nigeria and they’ve already started testing the use of the IVR machines. We’re eager to begin logging data and hearing more from our beneficiaries. So stay tuned!

Mind the mode … and the non-response

How voice and face-to-face survey data compares in Mali

This is the third entry in our ‘Mind the Mode’ series on the mVAM blog. We are constantly assessing our data collection modalities to better understand which produce the most accurate results and what biases may be present. One of our recent experiments took us to Mali, where we compared food consumption scores collected through face-to-face (F2F) interviews with those collected through mVAM live calls.

It’s all in the details
To do this, in February and March, the WFP team first conducted a baseline assessment in four regions of the country. As part of the baseline, we collected phone numbers from participants. Approximately 7-10 days later, we re-contacted the households who had phones, reaching roughly half of those encountered during the face-to-face survey; we weren't able to contact the others. To ensure the validity of the results, we kept the questionnaire identical between the F2F and telephone interviews, since any differences in wording or in the way the questions were asked could adversely affect our analysis.

The findings from our analysis were quite interesting. We found that food consumption scores (FCS) collected via the mVAM survey tended to be slightly higher than those collected via the face-to-face survey. The graph below illustrates this shift to higher scores between the two rounds. Higher FCS via mVAM than via F2F surveys is not unique to Mali: we've observed similar outcomes in South Sudan and other countries where mVAM studies have taken place.

[Figure: Distribution of food consumption scores in the F2F and mVAM survey rounds]


Why could this be? Two main reasons could explain the difference. It might be due to the data collection modality (i.e., people report higher food consumption scores over the phone), or selection bias may be at play. Remember that we were only able to reach roughly half of the F2F participants by telephone. It's possible that the people who answered the phone calls are less food insecure, which would make sense, since we often see that the poorest of the poor either don't own a phone or lack the economic means to charge it or purchase phone credit.

To test these hypotheses, we dug a bit deeper.

Same same…
Are people telling the same story on the phone versus face-to-face? Based on our results, the answer is yes! If we compare the same pool of respondents who participated in both the F2F and telephone survey rounds, their food security indicators are more or less the same. For example, the mean mVAM FCS was 56.21 while the mean F2F FCS was 55.65, with no statistically significant difference between the two.

But different…
So what about selection bias? In the F2F round, there are essentially three groups of people: 1) those who own phones and participated in both the F2F and mVAM survey; 2) people who own phones but didn’t participate in the mVAM survey, because they either didn’t answer the calls or their phone was off; and 3) people who do not own a phone and thus couldn’t participate in the mVAM survey.

People who replied to the mVAM survey had, overall, higher FCS than those we were unable to contact. What we learned from this experiment is that bias comes not only from households that do not own a phone but also from non-respondents (households who shared their phone number and gave consent but were not reachable later on for the phone interview). Possible reasons they were not reachable include less access to electricity to charge their phone, or living in areas with bad network coverage. The graph below illustrates the distribution by respondent type and their respective FCS.

[Figure: Box plot of FCS by respondent type]

When you compare the demographics of these three groups based on the data collected in the baseline, you can see significant differences, as in the example below. Notice that the education levels of respondents vary among the three groups: those without a phone tend to be less educated than those who own a phone and participated in the mVAM survey.

[Figure: Education levels of respondents by respondent type]

This study taught us a valuable lesson. While we are confident that there is no statistically significant difference between face-to-face and phone responses within the Mali context, there is a selection bias in mVAM-collected data. By not including those without phones, as well as those who did not respond, we are missing an important (and likely poorer) subset of the population, meaning that the reported FCS is likely higher than it would be if these groups were included. One way to mitigate this bias is to ensure that telephone operators make repeated attempts to contact households over the course of several days. The team is also studying how to account for this bias in our data analyses.
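One standard way to account for such bias is post-stratification: re-weight phone respondents so that their profile matches the full F2F baseline on a known characteristic such as education. The sketch below uses entirely hypothetical shares and FCS values, not the Mali figures.

```python
# Sketch of post-stratification: re-weight phone respondents so their
# education profile matches the full F2F baseline. All numbers below
# are hypothetical, for illustration only.

baseline_share = {"none": 0.50, "primary": 0.30, "secondary+": 0.20}
phone_share    = {"none": 0.30, "primary": 0.35, "secondary+": 0.35}

# Weight = how under- or over-represented each group is by phone.
weights = {g: baseline_share[g] / phone_share[g] for g in baseline_share}

# Hypothetical phone-survey records: (education group, FCS)
respondents = [("none", 48), ("primary", 55), ("secondary+", 63),
               ("none", 50), ("secondary+", 66), ("primary", 57)]

unweighted = sum(fcs for _, fcs in respondents) / len(respondents)
weighted = (sum(weights[g] * fcs for g, fcs in respondents)
            / sum(weights[g] for g, _ in respondents))
# The weighted mean pulls the estimate toward the less-educated (and
# typically more food-insecure) groups under-represented by phone.
```

Weighting can correct for observable differences like education, but not for everything that distinguishes non-respondents, which is why repeated call attempts remain important.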

Trial and Error: How we found a way to monitor nutrition through SMS in Malawi

WFP/Alice Clough

Over the last ten months we have been testing if we can use mobile phones to collect nutrition indicators. One of these experiments involved using SMS to ask questions about women’s diet quality via the Minimum Dietary Diversity – Women (MDD-W) indicator.  The MDD-W involves asking simple questions about whether women of reproductive age (15-49 years) consumed at least five out of ten defined food groups. We were interested in using SMS surveys to measure MDD-W, because SMS offers an opportunity to collect data regularly at scale and at low cost.

From October 2016 to April 2017, we worked with GeoPoll to conduct five survey rounds on MDD-W and find a way to adapt the indicator to SMS. We analysed data from each round, identified gaps and refined the survey instrument. We were able to collect data quickly and identify strengths and weaknesses to make revisions through an iterative process. Through this process, we believe that we have successfully designed an instrument that can be used to monitor MDD-W trends by SMS. Here’s a short summary of what we learned:

1. Using a mix of open-ended and list-based questions helped people better understand our questions.

By using a mix of open-ended and list-based questions, we were able to significantly improve data quality. In the first few rounds, we had an unusually high number of respondents who scored either "0" or "10" on the MDD-W score, both of which are unlikely under normal circumstances. A score of "0" means that the respondent did not consume food items from any of the 10 core food groups the previous day or night, while a score of "10" means that the respondent consumed food items from all food groups. In the first round, scores of "0" or "10" accounted for 29 percent of all MDD-W respondents, but by Round 5 these scores represented only 3 percent of responses. It seems that having respondents reflect on what they ate in the open-ended questions we introduced in later rounds helps them recall the food items they consumed and answer the subsequent list-based questions more accurately.
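The scoring and flagging logic described above can be sketched as follows. The food-group labels follow the ten standard MDD-W groups, but the example answers are invented for illustration.

```python
# Sketch: compute an MDD-W score from ten yes/no food-group answers and
# flag implausible extremes. Example answers are invented.

FOOD_GROUPS = ["grains", "pulses", "nuts_seeds", "dairy", "meat_fish",
               "eggs", "dark_leafy_greens", "vit_a_fruit_veg",
               "other_veg", "other_fruit"]

def mddw_score(answers):
    """answers: dict of food group -> True/False for consumption yesterday."""
    return sum(1 for g in FOOD_GROUPS if answers.get(g, False))

def flag_extreme(score):
    """Scores of 0 or 10 are unlikely and may indicate comprehension problems."""
    return score in (0, 10)

answers = {"grains": True, "pulses": True, "dark_leafy_greens": True,
           "vit_a_fruit_veg": True, "dairy": True, "eggs": False}
score = mddw_score(answers)   # 5 food groups consumed
meets_mddw = score >= 5       # meets minimum dietary diversity
suspicious = flag_extreme(score)
```

Flagging extreme scores round by round is what let us see the improvement from 29 percent to 3 percent as the questionnaire was refined.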

2. Keep questions simple.

We originally asked people by SMS whether they ate food items from the core food groups that comprise the MDD-W score. For example, “Yesterday, did you eat any Vitamin A-rich fruits and vegetables such as mangos, carrots, pumpkin, …?” Perhaps respondents thought that they needed to consume food items from both the fruit and vegetable groups in order to reply “yes” to this question. So instead, we split that question into two separate questions (one on Vitamin A-rich fruits and the other on Vitamin A-rich vegetables) to make it easier for the respondent to answer. We did the same for some of the other questions and found a very low percentage of women scoring “0” or “10” on the MDD-W score. Of course there is a trade-off here, and splitting too many questions might lead to a long and unwieldy questionnaire that could frustrate respondents.

3. Let respondents take the survey in their preferred language.

Comprehension remains a challenge in automated surveys, so helping respondents by asking questions in their own language will ensure data quality and limit non-response. In the Malawi study, translating food items into the local language (Chichewa), while keeping the rest of the questionnaire in English, improved comprehension. We recommend providing the respondent with the option to take the survey in their preferred language.

4. Pre-stratify and pre-target to ensure representativeness.

SMS surveys tend to be biased towards people who have mobile phones; we reach a lot of younger, urban men, and relatively few women of reproductive age, our target group for surveys on women’s diet. To ensure we are reaching them, an MDD-W SMS survey should be designed or ‘pre-stratified’ to include a diverse group of respondents. In Malawi, we were able to pre-stratify according to variables that included age, level of education, location and wealth. This allowed us to include women from all walks of life.
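Pre-stratification boils down to setting quotas per stratum in proportion to the population, then recruiting until each quota is filled. Here is a minimal sketch under hypothetical population shares by location and age (not the Malawi figures):

```python
# Sketch of pre-stratified quota allocation: distribute a target sample
# across strata in proportion to (hypothetical) population shares.

population_share = {
    ("rural", "15-29"): 0.35, ("rural", "30-49"): 0.30,
    ("urban", "15-29"): 0.20, ("urban", "30-49"): 0.15,
}

def allocate_quotas(total, shares):
    """Proportional quotas; any rounding remainder is absorbed by the largest stratum."""
    quotas = {s: round(total * p) for s, p in shares.items()}
    quotas[max(shares, key=shares.get)] += total - sum(quotas.values())
    return quotas

quotas = allocate_quotas(1000, population_share)
# SMS invitations are then sent per stratum until each quota is filled,
# rather than accepting whoever happens to answer first.
```

In Malawi we stratified on more variables (age, education, location and wealth), but the mechanics are the same: fix the target composition before fieldwork rather than correcting for it afterwards.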

5. Post-calibrate to produce estimates that are more comparable to face-to-face surveys.

The MDD-W SMS surveys we conducted produced higher point estimates than we would expect from face-to-face surveys. This suggests we may need to calibrate our results to adjust for sampling bias, the likely cause of the discrepancy. In this context, calibration means re-weighting responses so that the sample's profile matches known characteristics of the population, reducing the effect of sampling bias on the estimates. We're still working on this and hope to find a solution soon. In the meantime, we think we are able to track trends in MDD-W by SMS with some reliability.