A new mVAM baby in Mali, weight: 7800 respondents!

WFP/Sebastien Rieussec

This week we’re reporting on our latest news from mVAM in Mali. In this landlocked country in the Sahel, chronic food insecurity and malnutrition are widespread – WFP has been present in Mali since 1964. In the last few years Mali has been coping with numerous shocks – such as droughts, floods and a military coup – that led to a political and security crisis and increased food insecurity: by 2016 around 3.1 million people in Mali were food insecure. Households are particularly affected during the lean season, between June and September; this year WFP estimated that 3.8 million people were affected by food insecurity, of whom 601,000 were in urgent need of food assistance.

To monitor the food security situation, the Government of Mali, with WFP support, conducts two nationwide face-to-face surveys each year, in February and September. In between, however – and especially during the lean season, which falls over the summer in Mali – there was no data collection, so mVAM stepped in to fill the ‘data gap.’ We’ve previously blogged about the Mali mode experiment comparing data collected by live calls with face-to-face data. As the results showed little difference between the modes, in August the Country Office rolled out mVAM nationwide so that it could get food security information from households during this particularly difficult period of the year. Phone numbers were collected during the previous face-to-face survey, and out of the 13,400 numbers collected we reached over 7,800 households – mVAM’s largest-ever survey!

With each survey come different country-specific ‘problems’. There are many reasons why people might not want to take part in a phone survey – but in Mali, we found one of the biggest was mistrust. People are not used to doing surveys via mobile phone and suspect there is some form of trick behind them. Many reported that they know there are lots of mobile phone scams and worry that a call from an unknown number purporting to be from WFP is just another one of these. One reason for the suspicion was the long gap between the number collection and the phone survey. This was a deliberate choice by the Country Office, to ensure that the survey was not just a ‘follow-up’ to the face-to-face data collection, as in our mode experiment, but was capturing new information during this specific period. What wasn’t foreseen was that people would forget they had given WFP their number, and might not have fully understood why they did so in the first place.

WFP/Nanthilde Kamara

To get around this issue, the Country Office is planning several tactics. As well as using SMS and national radio to advertise the survey, the next time phone numbers are collected more time will be spent explaining exactly what the purpose of the survey is. The annual September face-to-face food security survey is currently ongoing, so enumerators are now explaining that respondents might be called by WFP later this year. The call centre that supports mVAM in Mali calls everyone from the same unique number; this number will be shared with community leaders just before the survey so that they can inform people that they will be rung from this specific number and that it’s an official call from WFP. Respondents will then be able to save the number in their phone, so when the call comes they know exactly who it is rather than just seeing an unknown number.

The analysis is still ongoing: We’re looking forward to the results!

 

New places, new tools: what’s up next for mVAM?


We’ve just got back from Rwanda, where we were holding a workshop on using mVAM to expand real-time food security and nutrition monitoring with Internally Displaced Persons (IDPs) and refugee populations. The project, which is made possible by the support of the Korea International Cooperation Agency (KOICA), will be implemented in ten countries in sub-Saharan Africa where WFP works.

What’s the project?

The KOICA project has two aims. First, it aims to empower information exchange with marginalized populations, specifically IDPs and refugees. Second, it supports the collection of food security and nutrition data using the latest mobile and satellite technologies. This will happen in ten countries in sub-Saharan Africa: the Central African Republic (CAR), the Democratic Republic of the Congo (DRC), Kenya, Malawi, Niger, Nigeria, Rwanda, Somalia, South Sudan and Uganda.

How are we going to do this?

As you know, two-way communication systems are an important part of our work. As well as getting information that we can use to inform WFP programmes, we want to ensure that the line is open so that people in the communities we serve can contact us and access information that is useful to them. We’ve already been using Interactive Voice Response and live calls to share information with affected populations, and are now expanding our toolbox to include new technologies: Free Basics and a chatbot.

Remote data collection isn’t only done by mobile phone – VAM already uses other sources, such as satellite imagery analysis, to understand the food security situation on the ground. Under this project, we’ll also help countries incorporate similar analysis, which will complement two-way communication systems to provide a fuller picture of the food security situation.

Finally, we’re going to harness our knowledge of Call Detail Records analysis: de-identified metadata collected via cell phone towers about the number of calls or messages people are sending and which towers they are using. We have already used this technique in Haiti to track displacement after Hurricane Matthew, and we’re really excited to transfer these ideas to another context to ensure we get up-to-date information on where affected communities are so we can better target food assistance in the right locations.
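
To give a concrete (and deliberately simplified) idea of what this kind of analysis involves, here is a minimal sketch that aggregates de-identified call records by tower and day to flag unusual shifts in where subscribers are active. The file and column names are invented for illustration – a real CDR pipeline is considerably more involved.

```python
import pandas as pd

# Illustrative sketch: estimate day-to-day shifts in where subscribers are
# active, using de-identified call detail records. Columns are assumed to be
# caller_hash (anonymised ID), tower_id and timestamp.
cdr = pd.read_csv("cdr_sample.csv", parse_dates=["timestamp"])
cdr["date"] = cdr["timestamp"].dt.date

# "Main" tower per subscriber per day: the tower they used most that day.
daily_main = (
    cdr.groupby(["caller_hash", "date"])["tower_id"]
    .agg(lambda towers: towers.mode().iloc[0])
    .reset_index(name="main_tower")
)

# Count active subscribers per tower per day, then look at the change from
# the previous day to flag unusual inflows or outflows around each tower.
tower_counts = daily_main.groupby(["date", "main_tower"]).size().unstack(fill_value=0)
daily_change = tower_counts.diff()
print(daily_change.tail())
```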

What happened at the workshop?

Representatives from all 10 country offices, three regional bureaus and HQ came together to discuss the three main project components. During the workshop, the country offices had the chance to learn from members of the mVAM team about the specific tools they can harness and how to ensure the data they collect is high quality, standardised and communicated effectively. However, the best part of bringing everyone together was that country teams could share their experiences of how they are already using mVAM tools. We heard from the Malawi country office about their Free Basics pilot, and Niger and Nigeria explained how they’re implementing IVR so affected communities can easily contact WFP, even after working hours. Sharing these different experiences and learning how different tools have worked in each context not only gave everyone an overview of what mVAM is doing so far, it also helped everyone understand the implementation challenges and how to overcome them.

What’s next for the KOICA project?

We’re really excited for the next stage of the project. Each country office has now planned what tools they’re going to use to increase their communications with affected communities and how they will improve their existing data collection systems. It’s going to be great to see the impact these tools will have not only on WFP’s response, but also how they will empower the communities we’re serving. 

World Humanitarian Day 2017

Photo: WFP/Regional Bureau of Cairo

To celebrate World Humanitarian Day 2017, this week we interviewed one of the humanitarians who makes mVAM possible. Hatem works as a data scientist in the Cairo Regional Bureau so we asked him more about his work – remotely monitoring food security in conflict zones in the Middle East.

1. Duty station: Regional Bureau of Cairo (RBC)

2. Job title: Data Scientist (VAM)

3. What does your job entail? My job is mainly focused on the data analysis, aggregation and visualization of the monthly mVAM food security surveys in L3 countries (Yemen, Syria and Iraq). The process starts with the monthly data collection done by call centres or operators. I follow up with the call centres to make sure that the data they’ve collected is in good format and has minimal or no errors. I also make sure that they are following the sampling guidelines and methodologies designed by our team in headquarters. After that, I perform some data cleaning and validation before storing the data in our database. Then, I run statistical tests on different variables so that I can understand what significant changes there are in the data compared to previous months. Based on the analysis results, trends and statistical tests, as well as secondary data and news, my team and I gather the most important and significant findings and create a brief story that summarizes the food security situation in the country. The bulletin is usually 4-5 pages containing text narratives, charts, images and sometimes maps. There is also usually a qualitative analysis part based on the open text comments of the respondents. It is an interesting yet challenging process to find new ways of visualizing open-ended comments from respondents (usually around 1,000-2,000 comments).
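
Hatem’s actual scripts aren’t reproduced here, but a minimal sketch of the kind of monthly check he describes – basic cleaning and validation followed by a test for a significant month-on-month change – might look like the following. The file name, column names and validation rules are assumptions for illustration.

```python
import pandas as pd
from scipy import stats

# Minimal sketch of a monthly cleaning and comparison step; names are assumed.
df = pd.read_csv("mvam_round.csv")

# Basic validation before the data go into the database: drop duplicate
# interviews and keep only plausible FCS values (the FCS ranges from 0 to 112).
df = df.drop_duplicates(subset="respondent_id")
df = df[df["fcs"].between(0, 112)]

# Compare this month's FCS with last month's to flag significant shifts.
current = df.loc[df["month"] == "2017-08", "fcs"]
previous = df.loc[df["month"] == "2017-07", "fcs"]
t_stat, p_value = stats.ttest_ind(current, previous, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```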

4. How does your work help WFP’s response in conflict zones? The mVAM bulletins provide up-to-date, almost real-time data about people living in conflict zones whom we can’t reach by any means other than mobile phone. These bulletins inform the programme teams about their needs, the most vulnerable areas and the most vulnerable population groups, such as displaced people. This puts WFP in a better and more informed position to take programmatic decisions on who is affected by conflict, where they are and how to assist them most effectively.

5. What’s the most challenging part of your job? Creating a full story from raw data. As a data scientist I usually face technical difficulties – whether it’s in the data cleaning, storage, or analysis code. However, the most challenging part is usually correlating all the data from mVAM and other sources to represent them in a meaningful and complete story that briefly describes the situation in a specific country.

6. What’s the most rewarding part of your job? Working in the humanitarian sector is very rewarding, even if it is not directly with beneficiaries. Not to mention working with data related to conflict zones where there are rapid changes and up-to-date data is in high demand. The fact that I’m a part of a process that makes other people’s lives better, especially those who are in serious need is, in itself, a huge drive to make me do what I do.

Our experiment using Facebook chatbots to improve humanitarian assistance

Testing the chatbot in Nigeria

It must have been above 40 degrees Celsius that afternoon in Maiduguri, Nigeria. Hundreds of people were waiting to cash the mobile money they receive from the World Food Programme (WFP), sitting under tarps that provided some protection from the sun – in other words, the perfect time to sit and chat.

“How many of you have smartphones?” we asked. We waited for the question to be asked in Hausa, and out came mobile devices of all shapes and sizes. “How many of you have Facebook accounts?” Even before the question was translated, we saw nods all around.

“Of course we’re on Facebook – it’s the way we can message friends and family”

Displaced people in Nigeria, even those facing famine and in urgent need of aid, are connected and rely on messaging apps.

A leap of faith: from SMS to chatbot surveys

Collecting information in communities on the humanitarian frontline is dangerous, cumbersome and expensive, particularly in conflict settings. In north-east Nigeria, our assessment teams travel by helicopter or in convoys, and some locations are simply too insecure to visit at all. This means that decisions about emergency food assistance are sometimes made with very limited information.

But increasing access to mobile phones is changing this. WFP’s mobile Vulnerability Analysis and Mapping (mVAM) project has adopted SMS, Interactive Voice Response and call centres to collect food security information from communities enduring crises like Ebola or the Syrian civil war. Nielsen, a global information and measurement company, found that our SMS surveys are 50% cheaper and 83% faster than equivalent face-to-face surveys, while putting no enumerators in harm’s way. The system’s success means we’re now using mobile tools to collect and share information in 33 countries.

Our successes with automated surveys made us keen to look into using chatbots (automated assistants that are programmed into messaging apps) to collect food security data. We were especially curious about how a bot could help us ‘chat’ with thousands of people simultaneously and in real time, as others have done.

A sample chatbot interaction

To reach as many people as possible, we decided to create a bot that would operate on a popular messaging app, like Facebook Messenger or Telegram, so people could take our surveys on a platform they already use.

You might think it’s unreasonable to expect people in conflict settings to be connected at all. But, as our Nigeria example shows, their connection is a lifeline to normality. We also found that in many countries operators sell ‘social bundles’ that offer unlimited Facebook, WhatsApp or other social media for a single low price.

Where ‘Facebook Lite’ is available, people can even connect for free. All this means that communicating with vulnerable communities could happen in real time and at little to no cost to the respondent or WFP.

Introducing Food Bot

Last summer, we decided to try it out. InSTEDD developed a chatbot prototype that we demoed with Sub-Saharan African migrants in Rome. The demo asked the respondent to share information about food security in their community and allowed them to look up updated food prices.

Our testers liked the fact that talking to our bot felt like having a conversation with a real person. We felt like we were on to something! Earlier this year, Nielsen helped us further develop a chatbot design that calls for multiple gateways, natural language processing capabilities, and a reporting engine.

The current version of Food Bot is programmed to ask a predefined set of questions to the user – it does not rely on artificial intelligence yet. Food Bot goes through a simple questionnaire and saves the answers so that our analysts can process them.
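
Food Bot’s code isn’t shown here, but a scripted, non-AI questionnaire of this kind essentially steps through a fixed list of questions and stores each reply. The sketch below illustrates the idea; the question wording, field names and in-memory storage are invented for illustration.

```python
# Illustrative sketch of a scripted (non-AI) chat flow: a fixed question
# sequence, with answers stored per user. Question wording is invented.
QUESTIONS = [
    ("location", "Which town or camp are you in?"),
    ("meals_yesterday", "How many meals did your household eat yesterday?"),
    ("maize_price", "What is the current price of a cup of maize?"),
    ("comments", "Anything else you would like to tell WFP?"),
]

sessions = {}  # user_id -> {"step": int, "answers": dict}

def handle_message(user_id: str, text: str) -> str:
    """Return the next message to send back to this user."""
    session = sessions.setdefault(user_id, {"step": 0, "answers": {}})
    step = session["step"]
    if step > 0:  # store the reply to the previous question
        field, _ = QUESTIONS[step - 1]
        session["answers"][field] = text.strip()
    if step < len(QUESTIONS):
        session["step"] += 1
        return QUESTIONS[step][1]
    return "Thank you! Your answers have been recorded."
```

A real deployment would sit behind the messaging platform’s webhook and save answers to a database rather than an in-memory dictionary, but the flow is the same.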

The chatbot format also lets users ask us questions and is a channel for us to give useful information we’ve collected back to these communities. These include messaging on WFP programmes, food prices, weather updates, nutrition and disease prevention. The version we are using for testing currently runs on Facebook Messenger, but we want to make sure it works on all the relevant messaging apps.

No walk in the park

Before we get carried away, we need to consider some very real challenges. A timely report by the ICRC, Block Party and the Engine Room emphasizes the new responsibilities that humanitarian agencies assume as they use messaging apps to communicate with affected populations. Notably, using chat apps to collect information from people who have fled their countries or homes raises the important issue of responsible data practices. If we were ever hacked, people’s personal details could be put at risk, including names and pictures. We will certainly have to review our existing data responsibility guide and continue obtaining advice from the International Data Responsibility Group (IDRG), as well as build an understanding of data responsibility principles in the field.

We also suspect that the audience we reach through Food Bot will be younger, better off, more urban and more male than the general population. The convenience of collecting data through a bot does not dispense with the hard task of seeking out those who are not connected and who are probably the most vulnerable. We want to explore ways to make our bot as accessible as possible, such as translating text into local languages, using more icons in low-literacy settings and working with civil society organisations that specialize in digital inclusion.

Finally, we realize that we must prepare to manage all of the unstructured information that Food Bot will collect. Colleagues in the field are already weary of collecting yet more data that won’t be analysed or used. As a result, the team is working on setting up the infrastructure that is needed to process the large volumes of free text data that we expect the bot to produce. This is where our work with automated data processing and dashboards should pay dividends.

This post was originally published on ICT Works as part of a series on humanitarian chatbots.

Mind the mode:

Who's texting & who's talking in Malawi?

Malawi mVAM respondent
WFP/Alice Clough

It’s time for another installment of our Mind the Mode series. For those of you who follow this blog regularly, you know that the mVAM team is continually evaluating the quality of the data we collect. Past Mind the Mode blogs have discussed our work in Mali looking at face-to-face versus voice calls, our comparison of SMS and IVR in Zimbabwe and the differences in the Food Consumption Score (FCS) for face-to-face versus Computer-Assisted Telephone Interviews (CATI) interviews in South Sudan.

This month, we turn our attention to Malawi, where we recently completed a study analyzing the differences in the reduced Coping Strategies Index (rCSI) when it’s collected via CATI and SMS. This indicator helps measure a household’s food security by telling us what actions they might be taking to cope with stresses, such as reducing the number of meals eaten in a day or borrowing food or money from friends or family. From February to April 2017, around 2,000 respondents were randomly selected for an SMS survey and 1,300 respondents were contacted on their mobile phones by an external call centre to complete a CATI survey.
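
For readers meeting the indicator for the first time: the rCSI is a weighted sum of how many days in the past week a household used each of five coping strategies. A minimal sketch, using the severity weights commonly cited for the reduced CSI (worth confirming against current WFP guidance before reuse):

```python
# Reduced Coping Strategies Index: days (0-7) each strategy was used in the
# past week, multiplied by a severity weight, then summed. The weights below
# are the commonly cited standard values; confirm against current guidance.
RCSI_WEIGHTS = {
    "less_preferred_foods": 1,
    "borrow_food": 2,
    "limit_portion_size": 1,
    "restrict_adult_consumption": 3,
    "reduce_meals": 1,
}

def rcsi(days_used: dict) -> float:
    """days_used maps each strategy name to the number of days (0-7) it was used."""
    return sum(RCSI_WEIGHTS[s] * days_used.get(s, 0) for s in RCSI_WEIGHTS)

# Example: a household that borrowed food on two days and skipped a meal once.
print(rcsi({"borrow_food": 2, "reduce_meals": 1}))  # -> 5
```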

People profiling: who’s texting and who’s talking?

Across all three rounds, a greater proportion of respondents in both modalities were men who lived in the South and Central Regions of the country and came from male-headed households. However, the respondents taking the SMS survey were much younger (average age 29) than those who took the CATI survey (average age 40). This probably isn’t surprising when you consider that young people across the world tend to be much more interested in new technologies and in Malawi are more likely to be literate.

The results from our mode experiment in Zimbabwe showed that IVR and SMS surveys reached different demographic groups, so we expected to see something similar in Malawi. Surprisingly, this was not the case: both CATI and SMS participants seemed to come from better-off households. In our surveys we determine this by asking respondents what material the walls of their home are made from (cement, baked bricks, mud, or unbaked bricks).

Figure: wall types reported by respondents (better-off versus worse-off households), Malawi

Most respondents (60%) said they have cement or baked-brick walls rather than mud or unbaked-brick walls, an indicator of being better off.

Digging into the rCSI

So what about the results observed for the rCSI between the two modes? The CATI rCSI distribution shows a peak at zero (meaning that respondents are not employing any negative coping strategies) and is similar to the typical pattern expected of the rCSI in face-to-face surveys (as you can see in the two graphs below).

Density plot for CATI Feb-April 2017

 

Density plot for SMS rCSI, Feb-April 2017

The SMS results, on the other hand, tend to show slightly higher rCSI scores than CATI, meaning that SMS respondents report employing more negative coping strategies than households surveyed via CATI. This runs counter to what we would expect, especially since the data suggest that these households are not more vulnerable than CATI respondents – if anything, they should be better educated (read: literate!) simply to be able to respond to SMS surveys. We’re therefore looking forward to doing some more research into why this is the case.

Box plot of CATI rCSI

It’s All in the Numbers

Some interesting response patterns were also observed in both modalities. SMS respondents were more likely to answer all five rCSI questions by entering the same value for each question (think: 00000, 22222…you get the idea!). At the beginning of the survey, SMS respondents were told that they would earn a small airtime credit upon completing the questionnaire. We suspect that some respondents may simply have entered numbers at random to finish the questionnaire as quickly as possible and receive their credit. Keep in mind that entering the same value for all five rCSI questions is much harder via CATI, as the operator can ask additional questions to make sure the respondent clearly understands each question before entering the response. For SMS, there’s no check preventing the respondent from dashing through the questionnaire and entering the same response each time.
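
Flagging this ‘straight-lining’ pattern in the raw SMS data is straightforward; below is a sketch that assumes one column per rCSI question (file and column names are illustrative).

```python
import pandas as pd

# Flag SMS records where the respondent gave the same value to all five rCSI
# questions (e.g. 0,0,0,0,0 or 2,2,2,2,2). File and column names are assumed.
RCSI_COLS = [
    "less_preferred_foods", "borrow_food", "limit_portion_size",
    "restrict_adult_consumption", "reduce_meals",
]

sms = pd.read_csv("malawi_sms_round.csv")
sms["straight_lined"] = sms[RCSI_COLS].nunique(axis=1) == 1
print(f"{sms['straight_lined'].mean():.1%} of SMS respondents gave one value for all five questions")
```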

We also saw that the percentage of respondents reporting between zero and four coping strategies was much lower among SMS respondents than CATI respondents across all three months of data collection. Conversely, more SMS respondents (three out of five) reported using all five negative coping strategies than CATI respondents. Again, this runs counter to what we would expect. It might mean that SMS respondents didn’t always understand the questionnaire correctly, didn’t take the time to reflect on each question and completed it as rapidly as possible to get their credit, or simply entered random numbers in the absence of an operator to validate their responses. The graphs below illustrate the differences in rCSI responses between CATI and SMS.

Figure 3: Distribution of the number of coping strategies reported by SMS and CATI respondents, by month

From these results, you can see that we still have a lot to learn on how survey modality affects the results. This is just the start of our research; so expect more to come as the team digs deeper to better understand these important differences.

Postcard from Dakar

mVAM workshop participants all smiles after learning more about IVR
WFP/Lucia Casarin

During the last week of June, staff from WFP HQ’s mVAM team, the West and Central Africa Regional Bureau, and the Nigeria and Niger Country Offices met in beautiful Dakar to work together on Interactive Voice Response (IVR) systems for two-way communication. (If you want to dig deep into all the IVR-related details, check out the lesson in our mVAM online course!)

We’ve previously blogged about how WFP is responding to the needs of people displaced by the Boko Haram insurgency in both Nigeria and Niger. When we set up these operations we also put communication channels in place so that beneficiaries can contact WFP. In Nigeria, the Maiduguri Field Office created a hotline. Its operators receive an average of 100 calls per day from beneficiaries asking food security-related questions and providing feedback on the operations. The problem is that the hotline is only available during working hours and can only handle a limited number of callers at a time. To work around this, they’re looking at how an IVR system can support the call operators dealing with high volumes and better manage calls that come in outside normal office hours. WFP Niger wants to set up a similar hotline, but without full-time phone operators: beneficiaries will call in to an automated IVR system, and their queries and feedback will be recorded by the system and followed up by the Country Office.

A Nigeria IT Officer working to install a GSM gateway for IVR usage in Maiduguri
WFP/Lucia Casarin

During the workshop, participants were trained by InSTEDD on how to physically deploy IVR using a GSM gateway (a device that lets the system place calls over the mobile network) and Verboice, the free open-source software InSTEDD has developed to manage these systems. The team also discussed the nitty-gritty technical aspects of the system, including creating and modifying call flows (the sequencing of questions), scheduling calls and downloading collected call logs and recordings. Most importantly, participants had the opportunity to share their experiences and challenges with experts in this field and discuss best practices, alternative deployments and technical solutions.

The Country Office staff have now returned to Niger and Nigeria and they’ve already started testing the use of the IVR machines. We’re eager to begin logging data and hearing more from our beneficiaries. So stay tuned!

 

 

 

Mind the mode … and the non-response

How voice and face-to-face survey data compares in Mali

This is the third entry in our ‘Mind the Mode’ series on the mVAM blog. We are constantly assessing our data collection modalities to better understand what produces the most accurate results and what biases may be present. One of our recent experiments took us to Mali, where we compared the food consumption score from face-to-face (F2F) interviews with that from mVAM live calls.
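
As a quick refresher, the FCS is a weighted sum of how many days in the past week a household consumed each of eight food groups, giving a score between 0 and 112. A minimal sketch using the commonly cited WFP weights (worth verifying against current guidance):

```python
# Food Consumption Score sketch: days (0-7) each food group was eaten in the
# past week, multiplied by the group's weight, then summed (maximum 112).
# The weights are the commonly cited WFP values; confirm against current guidance.
FCS_WEIGHTS = {
    "staples": 2, "pulses": 3, "vegetables": 1, "fruit": 1,
    "meat_fish": 4, "milk": 4, "sugar": 0.5, "oil": 0.5,
}

def fcs(days_eaten: dict) -> float:
    """days_eaten maps each food group to the number of days (0-7) it was eaten."""
    return sum(FCS_WEIGHTS[g] * min(days_eaten.get(g, 0), 7) for g in FCS_WEIGHTS)

# Example: staples every day, pulses twice, vegetables on five days, oil daily.
print(fcs({"staples": 7, "pulses": 2, "vegetables": 5, "oil": 7}))  # -> 28.5
```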

It’s all in the details
To do this, in February and March the WFP team first conducted a baseline assessment in four regions of the country. As part of the baseline, we collected phone numbers from participants. Approximately 7-10 days later, we re-contacted the households that had phones, reaching roughly half of those interviewed during the face-to-face survey; we weren’t able to contact the rest. To ensure the validity of the results, we made sure the questionnaire was exactly the same for the F2F and telephone interviews, since any differences in wording or in the way the questions were asked could adversely affect our analysis.

The findings from our analysis were quite interesting. We found that food consumption scores (FCS) collected via the mVAM survey tended to be slightly higher than those collected via the face-to-face survey. The graph below illustrates this shift to higher scores between the two rounds. Higher FCS via mVAM than via F2F surveys is not unique to Mali – we’ve observed similar outcomes in South Sudan and other countries where mVAM studies have taken place.

Figure: distribution of FCS, face-to-face versus mVAM

Why could this be? There are two main explanations. It might be due to the data collection modality (i.e., people report higher food consumption scores on the phone), or a selection bias may be at work. Remember that we were only able to contact roughly half of the participants from the F2F survey by telephone. So it’s possible that the people who answered the phone calls are less food insecure – which would make sense, since we often see that the poorest of the poor either don’t own a phone or have limited economic means to charge their phone or purchase phone credit.

To test these hypotheses, we dug a bit deeper.

Same same…
Are people telling the same story on the phone versus face-to-face? Based on our results, the answer is yes! If we compare the same pool of respondents who participated in both the F2F and telephone survey rounds, their food security indicators are more or less the same. For example, the mean mVAM FCS was 56.21 while the mean F2F FCS was 55.65, with no statistically significant difference between the two.
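
Because the same households answered in both modes, the natural check is a paired comparison of their two FCS values. A sketch of that test (file and column names are assumptions):

```python
import pandas as pd
from scipy import stats

# Paired comparison of FCS for households interviewed both face-to-face and
# by phone. File and column names are illustrative.
f2f = pd.read_csv("mali_f2f.csv")    # columns: hh_id, fcs
mvam = pd.read_csv("mali_mvam.csv")  # columns: hh_id, fcs
both = pd.merge(f2f, mvam, on="hh_id", suffixes=("_f2f", "_mvam"))

t_stat, p_value = stats.ttest_rel(both["fcs_f2f"], both["fcs_mvam"])
print(f"mean F2F = {both['fcs_f2f'].mean():.2f}, "
      f"mean mVAM = {both['fcs_mvam'].mean():.2f}, p = {p_value:.3f}")
```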

But different…
So what about selection bias? In the F2F round, there are essentially three groups of people: 1) those who own phones and participated in both the F2F and mVAM survey; 2) people who own phones but didn’t participate in the mVAM survey, because they either didn’t answer the calls or their phone was off; and 3) people who do not own a phone and thus couldn’t participate in the mVAM survey.

People who replied to the mVAM survey have overall higher FCS than those that we were unable to contact. What we learned from this experiment is that bias does not only come from the households that do not own a phone but also from non-respondents (those households who shared their phone number and gave consent but then were not reachable later on for the phone interview). Possible reasons why they were not reachable could be that they have less access to electricity to charge their phone or that they live in areas with bad network coverage. The graph below illustrates the distribution by respondent type and their respective FCS.

Figure: FCS distribution by respondent group

When you compare the demographics of people in these three groups, based on the data collected in the baseline, you can see significant differences, as in the example below. Notice that the education levels of respondents vary across the three groups – those without a phone tend to be less educated than those who own a phone and participated in the mVAM survey.

Figure: respondent education levels by group

This study taught us a valuable lesson. While we are confident that there is no statistically significant difference between face-to-face and phone responses within the Mali context, there is a selection bias in mVAM-collected data. By not including those without phones, as well as those who did not respond, we are missing an important (and likely poorer) subset of the population, meaning that the reported FCS is likely higher than it would be if these groups were included. One way to mitigate this bias is to ensure that telephone operators attempt to contact each household numerous times over the course of several days. The team is also studying how to account for this bias in our data analyses.

Trial and Error: How we found a way to monitor nutrition through SMS in Malawi

WFP/Alice Clough

WFP/Alice Clough

Over the last ten months we have been testing whether we can use mobile phones to collect nutrition indicators. One of these experiments involved using SMS to ask questions about women’s diet quality via the Minimum Dietary Diversity – Women (MDD-W) indicator. The MDD-W involves asking women of reproductive age (15-49 years) simple questions about which of ten defined food groups they consumed; women who consumed at least five meet the minimum dietary diversity threshold. We were interested in using SMS surveys to measure MDD-W because SMS offers an opportunity to collect data regularly, at scale and at low cost.

From October 2016 to April 2017, we worked with GeoPoll to conduct five survey rounds on MDD-W and find a way to adapt the indicator to SMS. We analysed data from each round, identified gaps and refined the survey instrument. We were able to collect data quickly and identify strengths and weaknesses to make revisions through an iterative process. Through this process, we believe that we have successfully designed an instrument that can be used to monitor MDD-W trends by SMS. Here’s a short summary of what we learned:

1. Using a mix of open-ended and list-based questions helped people better understand our questions.

By using a mix of open-ended and list-based questions, we were able to significantly improve data quality. In the first few rounds, we had an unusually high number of respondents who scored either “0” or “10” on the MDD-W, both of which are unlikely under normal circumstances. A score of “0” means that the respondent did not consume food items from any of the 10 core food groups the previous day or night, while a score of “10” means that the respondent consumed items from all food groups. In the first round, scores of “0” or “10” accounted for 29 percent of all MDD-W respondents, but by round 5 these scores represented only 3 percent of responses. It seems that having respondents reflect on what they ate in the open-ended questions we introduced in later rounds helps them recall the food items they consumed and answer the subsequent list-based questions more accurately.
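
The scoring logic itself is simple: count how many of the ten food groups the respondent reported, and flag the implausible all-or-nothing scores. Here is a sketch assuming one 0/1 column per food group (the column names only approximate the official group labels).

```python
import pandas as pd

# MDD-W sketch: one column per food group, holding 1 (eaten yesterday) or 0.
# Column names only approximate the ten official food-group labels.
FOOD_GROUPS = [
    "grains_roots_tubers", "pulses", "nuts_seeds", "dairy", "meat_fish",
    "eggs", "dark_leafy_greens", "other_vit_a_fruit_veg", "other_veg", "other_fruit",
]

df = pd.read_csv("mddw_round.csv")
df["mddw_score"] = df[FOOD_GROUPS].sum(axis=1)          # 0-10 food groups reported
df["meets_mddw"] = df["mddw_score"] >= 5                # minimum dietary diversity met
df["all_or_nothing"] = df["mddw_score"].isin([0, 10])   # the implausible extremes

print(f"{df['meets_mddw'].mean():.1%} meet MDD-W; "
      f"{df['all_or_nothing'].mean():.1%} scored 0 or 10")
```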

2. Keep questions simple.

We originally asked people by SMS whether they had eaten items from the core food groups that make up the MDD-W score – for example, “Yesterday, did you eat any Vitamin A-rich fruits and vegetables such as mangoes, carrots, pumpkin…?” Perhaps respondents thought that they needed to have consumed items from both the fruit and the vegetable group in order to reply “yes”. So we split that question into two separate questions (one on Vitamin A-rich fruits and the other on Vitamin A-rich vegetables) to make it easier for the respondent to answer. We did the same for some of the other questions and subsequently found a very low percentage of women scoring “0” or “10” on the MDD-W. Of course there is a trade-off here: splitting too many questions could lead to a long and unwieldy questionnaire that frustrates respondents.

3. Let respondents take the survey in their preferred language.

Comprehension remains a challenge in automated surveys, so helping respondents by asking questions in their own language will ensure data quality and limit non-response. In the Malawi study, translating food items into the local language (Chichewa), while keeping the rest of the questionnaire in English, improved comprehension. We recommend providing the respondent with the option to take the survey in their preferred language.

4. Pre-stratify and pre-target to ensure representativeness.

SMS surveys tend to be biased towards people who have mobile phones; we reach a lot of younger, urban men, and relatively few women of reproductive age, our target group for surveys on women’s diet. To ensure we are reaching them, an MDD-W SMS survey should be designed or ‘pre-stratified’ to include a diverse group of respondents. In Malawi, we were able to pre-stratify according to variables that included age, level of education, location and wealth. This allowed us to include women from all walks of life.
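
In practice, pre-stratification can be as simple as drawing a fixed quota from each cell of the stratification variables before the survey is sent out. A sketch, assuming the panel file carries the respondent’s sex, age, region and wealth group (all names are illustrative):

```python
import pandas as pd

# Sketch of pre-stratified sampling from a subscriber panel; the file and its
# columns (sex, age, region, wealth_group) are assumptions for illustration.
panel = pd.read_csv("sms_panel.csv")

# Keep women of reproductive age, then draw an equal quota from each
# region x wealth-group cell so that no single group dominates the sample.
women = panel[(panel["sex"] == "female") & panel["age"].between(15, 49)]
sample = (
    women.groupby(["region", "wealth_group"], group_keys=False)
    .apply(lambda cell: cell.sample(min(len(cell), 50), random_state=1))
)
print(sample.groupby(["region", "wealth_group"]).size())
```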

5. Post-calibrate to produce estimates that are more comparable to face-to-face surveys.

The MDD-W SMS surveys we conducted produced higher point estimates than we would expect from face-to-face surveys. This suggests we may wish to consider calibration to adjust for sampling bias, the likely cause of the discrepancy. In survey terms, calibration means re-weighting responses so that the sample better reflects known characteristics of the population. We’re still working on this and hope to find a solution soon. In the meantime, we think we are able to track trends in MDD-W by SMS with some reliability.
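
A bare-bones version of that re-weighting is post-stratification: each respondent gets a weight equal to her group’s share of the population divided by the group’s share of the sample. A sketch (the population shares, file and column names are invented for illustration):

```python
import pandas as pd

# Post-stratification sketch: re-weight SMS respondents so that the weighted
# sample matches known population shares. All names and shares are invented;
# meets_mddw is assumed to be a 0/1 indicator.
POPULATION_SHARE = {"urban": 0.16, "rural": 0.84}

df = pd.read_csv("mddw_round.csv")
sample_share = df["residence"].value_counts(normalize=True)
df["weight"] = df["residence"].map(lambda r: POPULATION_SHARE[r] / sample_share[r])

unweighted = df["meets_mddw"].mean()
weighted = (df["meets_mddw"] * df["weight"]).sum() / df["weight"].sum()
print(f"unweighted estimate: {unweighted:.1%}, weighted estimate: {weighted:.1%}")
```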

 

Postcard from Bangui

It may not be new and super large, but the owner claims this phone has a week-long battery life!
WFP/Dominique Ferretti

Greetings from the Central African Republic (CAR)! Our team recently visited Bangui and Kaga-Bandoro to help the Country Office team assess how to enhance the current mVAM system and see what other mVAM technologies we might be able to deploy. CAR is a unique context because there’s little to no cell phone reception outside the main towns. Only 26% of the population own a phone, one of the lowest rates in the world according to the World Bank. This means that collecting data remotely takes some creativity. The CAR team uses a key informant system: they contact approximately 200 people around the country each month to collect information on basic commodity prices, market access, population movements and security issues. The collected information is then shared with the humanitarian community, who appreciate the data, as it’s the only national-level food security data currently collected regularly!

A local woman in Kaga-Bandoro selling a great source of protein and a central African delicacy—caterpillars!
WFP/Dominique Ferretti

The only downside of the key informant system is that it doesn’t give us household-level food security information. The CAR team has therefore decided to try a small pilot using household questionnaires in the city of Kaga-Bandoro. Courtesy of UNHAS, we visited the city (more like a very small town!) and the two IDP camps it hosts during our day trip. While not that many people had cell phones, enough community members and displaced persons did that we’ll be able to get some idea of the food security situation.

Stay tuned for more as the pilot unfolds…!

Chatting with community members as they collect water
WFP/Dominique Ferretti

If you’re not human then who are you?

Experimenting with chatbots in Nigeria and Haiti

Testing the bot in Haiti – WFP/Lucia Casarin

Readers of this blog know that the team has been experimenting with chatbots to communicate with disaster-affected communities – read our previous posts about our prototype and the Nielsen Hackathon.

As part of this effort, during recent missions to Haiti and Nigeria, our team went out to talk to communities to find out whether a chatbot would be right for them.

Would a chatbot be a stretch in these communities?

Well it’s not that much of a stretch.

In north-east Nigeria, most displaced people live in Maiduguri, a city of over 1 million people. In this ‘urban’ setting connectivity is good, most people own cell phones and many young people use social media and messaging apps. Mobile operators offer services that let people access the internet cheaply, selling ‘social bundles’ (unlimited social media access sold in very small increments) and providing some services for free, including Facebook Lite and Facebook Messenger.

In Haiti, three-quarters of the population live in the capital, Port-au-Prince, where 3G connectivity is good and most people use messaging apps to communicate with friends and family. Even in rural and difficult-to-reach communities, leaders and young people own smartphones and connect to the internet. There is a lot of competition between mobile operators so the prices for mobile data are very low. This means that most people can afford to access the internet either via their own smartphone or from shared smartphones.

A mobile phone charging station on the road from Léogane Peri to Port-au-Prince
WFP/Lucia Casarin

A bare-bones demo

In both countries we tested a simple chatbot that asks people about food prices and what the food security situation is like in their community. The survey we used was much more basic than our usual mobile questionnaires, as we felt it was important to keep things simple at this stage.

For Nigeria, the bot demo was initially in English but we soon translated it into Hausa, the primary language spoken by displaced persons in Maiduguri. In Haiti we made it available in both Creole and French. The chatbot was very responsive on 3G and even worked over slower 2G connections, so the technology works in these contexts. But this was only the starting point: what we really wanted to know was what ‘real’ people thought about the bot.

We organized focus group discussions with displaced people in Maiduguri and with community representatives in Haiti. We helped people access the WFP bot via their Facebook accounts, and they began chatting away.

Sounds cool, but what are the limitations?

Here’s what people said:

First of all, people thought the bot was a convenient, quick and easy way to get in touch directly with WFP, and they really liked that it allows them to speak to WFP without intermediaries. They had a lot to tell us, particularly through the open-ended question, where they typed out detailed responses.

In Nigeria, testers did tell us that our (somewhat wordy) English-language demo should be translated into Hausa, as that would make it easier for everyone to use. Our first group of testers were young people who were already Facebook users and familiar with Messenger, so it was no surprise that they interacted smoothly with the bot and got through our questionnaire in minutes.

Testing the bot in Nigeria – WFP/Jean-Martin Bauer

In Haiti, people started interacting with the bot as if it were a human rather than an automated questionnaire, so they got stuck pretty fast when it wasn’t as naturally responsive as they’d expected. This means we either need to give people clearer instructions or add Natural Language Processing capabilities to our bot.

There are of course other barriers. In both countries women appeared to be less likely to own a smartphone. This means that bot users will likely be overwhelmingly young, male and better educated than other people – hardly ‘representative’ of WFP’s target affected population. The free version of the bot is also not always available: in Nigeria only Airtel subscribers can access it, while in Haiti the free service doesn’t exist yet.

This means that the bot would need to be a complement to the other tools we have. We might use data from the bot to obtain a quick situation update, but we will continue relying on other sources for more representative data.