Mind the Mode: Who’s Texting & Who’s Talking in Malawi?

Malawi mVAM respondent
WFP/Alice Clough

It’s time for another installment of our Mind the Mode series. For those of you who follow this blog regularly, you know that the mVAM team is continually evaluating the quality of the data we collect. Past Mind the Mode posts have discussed our work in Mali comparing face-to-face interviews with voice calls, our comparison of SMS and IVR in Zimbabwe, and the differences in the Food Consumption Score (FCS) between face-to-face and Computer-Assisted Telephone Interviews (CATI) in South Sudan.

This month, we turn our attention to Malawi, where we recently completed a study analyzing the differences in the reduced Coping Strategies Index (rCSI) when it’s collected via CATI versus SMS. This indicator helps measure a household’s food security by telling us what actions the household is taking to cope with stress, such as reducing the number of meals eaten per day or borrowing food or money from friends or family. From February to April 2017, around 2,000 respondents were randomly selected for an SMS survey and 1,300 respondents were contacted on their mobile phones by an external call centre to complete a CATI survey.
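For readers unfamiliar with the indicator, here is a minimal sketch of how an rCSI is typically computed, using the standard five coping strategies and their severity weights (the dictionary keys are our own shorthand, not the official question wording):

```python
# Standard rCSI: each of five coping strategies is reported as the number of
# days (0-7) it was used in the past week, multiplied by a severity weight,
# then summed. Higher scores mean more (and more severe) coping.
WEIGHTS = {
    "less_preferred_food": 1,
    "borrow_food_or_money": 2,
    "limit_portion_size": 1,
    "restrict_adult_consumption": 3,
    "reduce_meals_per_day": 1,
}

def rcsi(days_used):
    """days_used maps each strategy to the number of days (0-7) it was used."""
    return sum(WEIGHTS[s] * days for s, days in days_used.items())

print(rcsi({s: 0 for s in WEIGHTS}))  # 0: no coping at all
print(rcsi({
    "less_preferred_food": 7,
    "borrow_food_or_money": 2,
    "limit_portion_size": 3,
    "restrict_adult_consumption": 1,
    "reduce_meals_per_day": 4,
}))  # 7*1 + 2*2 + 3*1 + 1*3 + 4*1 = 21
```

A household scoring zero reports no negative coping at all, which is why a peak at zero in the CATI distribution discussed below is meaningful.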

People Profiling: Who’s Texting and Who’s Talking?

Across all three rounds, the majority of respondents in both modalities were men who lived in the South and Central Regions of the country and came from male-headed households. However, respondents taking the SMS survey were much younger (average age 29) than those who took the CATI survey (average age 40). This probably isn’t surprising when you consider that young people across the world tend to be much more interested in new technologies and, in Malawi, are more likely to be literate.

The results from our mode experiment in Zimbabwe showed that IVR and SMS surveys reached different demographic groups, so we figured we might see the same in Malawi. Surprisingly, this was not the case: both CATI and SMS participants seemed to come from better-off households. In our surveys, we determine this by asking respondents what material the walls of their home are made from (cement, baked bricks, mud, or unbaked bricks).

Wall type reported by respondents (better off vs worse off), Malawi

Most respondents (60%) said they have cement or baked brick walls rather than mud or unbaked brick walls, an indicator of relative wealth.

Digging into the rCSI

So what about the results observed for the rCSI between the two modes? The CATI rCSI distribution shows a peak at zero (meaning that respondents are not employing any negative coping strategies) and is similar to the typical pattern expected of the rCSI in face-to-face surveys (as you can see in the two graphs below).

Density plot for CATI Feb-April 2017

 

SMS rCSI

The SMS results, on the other hand, tend to show a slightly higher rCSI score than CATI, meaning that SMS respondents report employing more negative coping strategies than households surveyed via CATI. This is counter-intuitive, especially since the data illustrate that these households are not more vulnerable than CATI respondents; presumably, they would also need to be better educated (read: literate!) in order to respond to SMS surveys at all. We’re therefore looking forward to doing more research into why this is the case.

Box plot of CATI rCSI

It’s All in the Numbers

Some interesting response patterns were also observed across the two modalities. SMS respondents were more likely to answer all five rCSI questions by entering the same value for each question (think: 00000, 22222… you get the idea!). At the beginning of the survey, SMS respondents were told that they would earn a small airtime credit upon completing the questionnaire. We suspect that some respondents may have entered numbers randomly to finish the questionnaire as quickly as possible and receive their credit. Keep in mind that entering the same value for all five rCSI questions via CATI is much harder, as the operator can ask follow-up questions to ensure that the respondent clearly understands each question before entering the response. For SMS, there is no check preventing the respondent from dashing through the questionnaire and entering the same response each time.
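Straight-lined responses like these are trivial to flag during analysis. A minimal illustrative check (a generic sketch, not our actual processing pipeline) might look like this:

```python
# Flag "straight-lining": a respondent who entered the same value for every
# question in a block (e.g. 0,0,0,0,0 or 2,2,2,2,2 across the five rCSI items).
def is_straightliner(answers):
    """True if every answer in the list is identical."""
    return len(set(answers)) == 1

print(is_straightliner([0, 0, 0, 0, 0]))  # True
print(is_straightliner([2, 2, 2, 2, 2]))  # True
print(is_straightliner([1, 3, 0, 7, 2]))  # False
```

Note that an all-zeros response can also be a genuine answer (no coping at all), so a flag like this marks records for review rather than automatic exclusion.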

We also saw that the percentage of respondents reporting between zero and four strategies was much lower among SMS respondents than CATI respondents across all three months of data collection. Conversely, a larger share of SMS respondents (three out of five) reported using all five negative coping strategies than in the CATI survey. Again, this runs counter to what we would expect. It might mean that SMS respondents didn’t always understand the questionnaire correctly, that they didn’t take the time to reflect on each question and completed it as rapidly as possible to get their credit, or that they simply entered random numbers in the absence of an operator to validate their responses. The graphs below illustrate the differences in rCSI responses between CATI and SMS.

Figure 3: Distribution of the number of coping strategies reported by SMS and CATI respondents by months


From these results, you can see that we still have a lot to learn about how survey modality affects results. This is just the start of our research, so expect more to come as the team digs deeper to better understand these important differences.

Trial and Error: How we found a way to monitor nutrition through SMS in Malawi

WFP/Alice Clough


Over the last ten months, we have been testing whether we can use mobile phones to collect nutrition indicators. One of these experiments involved using SMS to ask questions about women’s diet quality via the Minimum Dietary Diversity – Women (MDD-W) indicator. The MDD-W involves asking simple questions about whether women of reproductive age (15-49 years) consumed at least five out of ten defined food groups. We were interested in using SMS surveys to measure MDD-W because SMS offers an opportunity to collect data regularly, at scale, and at low cost.
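As a rough sketch, the MDD-W tally works like this (food-group names are abbreviated from the standard ten-group list; the function names are our own):

```python
# MDD-W: a woman meets minimum dietary diversity if she consumed food items
# from at least 5 of the 10 defined food groups the previous day or night.
FOOD_GROUPS = [
    "grains_roots_tubers", "pulses", "nuts_seeds", "dairy",
    "meat_poultry_fish", "eggs", "dark_green_leafy_veg",
    "other_vitamin_a_rich", "other_vegetables", "other_fruits",
]

def mddw_score(consumed):
    """Count how many of the 10 food groups were consumed (yes/no per group)."""
    return sum(1 for g in FOOD_GROUPS if g in consumed)

def meets_mddw(consumed):
    """Minimum dietary diversity: at least 5 of the 10 groups."""
    return mddw_score(consumed) >= 5

sample = {"grains_roots_tubers", "pulses", "eggs", "dairy", "other_fruits"}
print(mddw_score(sample))  # 5
print(meets_mddw(sample))  # True
```

Because the score is just a count of yes/no answers, implausible tallies of exactly 0 or exactly 10 are easy to spot, which is what the data-quality checks described below rely on.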

From October 2016 to April 2017, we worked with GeoPoll to conduct five survey rounds on MDD-W and find a way to adapt the indicator to SMS. We analysed data from each round, identified gaps and refined the survey instrument. We were able to collect data quickly and identify strengths and weaknesses to make revisions through an iterative process. Through this process, we believe that we have successfully designed an instrument that can be used to monitor MDD-W trends by SMS. Here’s a short summary of what we learned:

1. Using a mix of open-ended and list-based questions helped people better understand our questions.

By using a mix of open-ended and list-based questions, we were able to significantly improve data quality. In the first few rounds, we had an unusually high number of respondents who scored either “0” or “10” on the MDD-W, both of which are unlikely under normal circumstances. A score of “0” means that the respondent did not consume food items from any of the 10 core food groups the previous day or night, while a score of “10” means that the respondent consumed food items from all 10 groups. In the first round, scores of “0” or “10” accounted for 29 percent of all MDD-W respondents, but by Round 5 these scores represented only 3 percent of responses. It seems that having respondents reflect on what they ate in the open-ended questions we introduced in later rounds helps them recall the food items they consumed and answer the subsequent list-based questions more accurately.

2. Keep questions simple.

We originally asked people by SMS whether they ate food items from the core food groups that make up the MDD-W score, for example: “Yesterday, did you eat any Vitamin A-rich fruits and vegetables such as mangoes, carrots, pumpkin, …” Perhaps respondents thought that they needed to consume food items from both the fruit and vegetable groups in order to reply “yes” to this question. So instead, we split that question into two separate questions (one on Vitamin A-rich fruits and the other on Vitamin A-rich vegetables) to make it easier for the respondent to answer. We did the same for some of the other questions and subsequently saw a very low percentage of women scoring “0” or “10” on the MDD-W. Of course, there is a trade-off here: splitting too many questions might lead to a long and unwieldy questionnaire that could frustrate respondents.

3. Let respondents take the survey in their preferred language.

Comprehension remains a challenge in automated surveys, so asking questions in respondents’ own language helps ensure data quality and limit non-response. In the Malawi study, translating food items into the local language (Chichewa), while keeping the rest of the questionnaire in English, improved comprehension. We recommend giving respondents the option to take the survey in their preferred language.

4. Pre-stratify and pre-target to ensure representativeness.

SMS surveys tend to be biased towards people who have mobile phones; we reach a lot of younger, urban men, and relatively few women of reproductive age, our target group for surveys on women’s diet. To ensure we are reaching them, an MDD-W SMS survey should be designed or ‘pre-stratified’ to include a diverse group of respondents. In Malawi, we were able to pre-stratify according to variables that included age, level of education, location and wealth. This allowed us to include women from all walks of life.

5. Post-calibrate to produce estimates that are more comparable to face-to-face surveys.

The MDD-W SMS surveys we conducted produced higher point estimates than those we would expect from face-to-face surveys. This suggests we may wish to consider calibration to adjust for sampling bias, the likely cause of the discrepancy. In this context, calibration means statistically adjusting (weighting) survey results so that the sample better matches the population on known characteristics. We’re still working on this and hope to find a solution soon. In the meantime, we think we are able to track trends in MDD-W by SMS with some reliability.
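One common calibration technique is post-stratification weighting. The sketch below uses made-up urban/rural shares and scores, purely to illustrate the mechanics:

```python
# Post-stratification: reweight respondents so the sample's distribution over
# a known characteristic (here urban/rural) matches the population's.
population_share = {"urban": 0.16, "rural": 0.84}  # assumed census shares
sample_counts = {"urban": 600, "rural": 400}       # hypothetical SMS sample
n = sum(sample_counts.values())

# Weight = population share / sample share, per stratum
weights = {s: population_share[s] / (sample_counts[s] / n) for s in sample_counts}

stratum_mean = {"urban": 8.0, "rural": 14.0}       # e.g. mean indicator score
unweighted = sum(stratum_mean[s] * sample_counts[s] for s in sample_counts) / n
weighted = sum(stratum_mean[s] * sample_counts[s] * weights[s] for s in sample_counts) / n

print(round(unweighted, 2))  # 10.4  (skewed towards over-sampled urbanites)
print(round(weighted, 2))    # 13.04 (closer to a population-level estimate)
```

The catch is that weighting only corrects for characteristics we can measure in both the sample and the population, which is why we are still investigating the discrepancy rather than calling it solved.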

 

Can we reach rural women via mobile phone? Kenya case study


WFP/Kusum Hachhethu

 

A few months ago, we published a blog post on our plans to test collecting nutrition data through SMS in Malawi and through live voice calls in Kenya. We just got back from Kenya where we conducted a large-scale mode experiment with ICRAF to compare nutrition data collected face-to-face with data collected through phone calls placed by operators at a call center. But before we started our experiment, we did a qualitative formative study to understand rural women’s phone access and use.

We traveled to 16 villages in Baringo and Kitui counties in Kenya, where we conducted focus group discussions and in-depth interviews with women. We also conducted key informant interviews with mobile phone vendors, local nutritionists, and local government leaders.

So in Kenya, can rural women be reached via mobile phone?

Here are the preliminary findings from our qualitative study:

  1. Ownership: Women’s phone ownership is high in both counties, though higher in Kitui than in Baringo, which is more pastoralist. From our focus group discussions and interviews, we estimate that 80-90% of women own phones in Kitui and 60-70% in Baringo.
  2. Access: The majority of women had access to phones through inter- and intra-household sharing even if they didn’t own one themselves. This suggests that even women who don’t personally own a phone may be able to use a shared one to participate in phone surveys.
  3. Usage: Women mostly use phones to make and receive calls, not to send SMS. This supports our hypothesis that voice calls, not SMS, would be the optimal modality to reach women through mobile surveys.
  4. Willingness: Women were enthusiastic about participating in phone surveys during our focus group discussions and in-depth interviews, implying that they are interested in phone surveys and willing to take part.
  5. Trust: Unknown numbers create trust issues, but they are not insurmountable. Women voiced concerns about receiving phone calls from unknown numbers. Despite these concerns, we were eventually able to conduct our phone surveys successfully after sensitizing the community through existing community and government administration structures.
  6. Network: Poor network coverage, not gender norms or access, is the biggest barrier to phone surveys in the two counties. Women identified network coverage as the biggest barrier to communication; some parts of the counties had poor to no coverage. However, we found that phone ownership was high even in these areas, and women would travel to network hotspots to make or receive calls.

So in conclusion, yes, in Kenya it is possible to reach rural women by phone.
Our findings from Kitui and Baringo counties show that we can reach women in similar contexts with mobile methodologies to collect information on their diet as well as their child’s diet.

We are also analysing the quantitative data from our mode experiment to find out whether data on women and children’s diet collected via live phone operators gives the same results as data collected via traditional face-to-face interviews.

Our 5 mVAM Highs from 2016

collage

1. Awards for Remote Mobile Data Collection Work

At the Humanitarian Technology 2016 conference, our paper ‘Knowing Just in Time’ won Best Paper for Outstanding Impact. In the paper, we assessed mVAM’s contribution to decision-making by looking at use cases for mVAM in camps, conflict settings and vulnerable geographies. Check out our blog Tech for Humanity for more on it and on our other conference paper, ‘mVAM: a New Contribution to the Information Ecology of Humanitarian Work’.

To close the year, we had a nice surprise from Nominet Trust, the UK’s leading tech-for-good funder: we made their list of the 100 most inspiring social innovations using digital technology to drive social change around the world.

2. New Tech

In this day and age there’s a lot of buzz around data visualization. We’ve been honing our skills with Tableau. Check out the data visualizations we did for Yemen and Haiti.

We’re also in the era of Big Data. We partnered with Flowminder, experts in analyzing call detail records, to track displacement in Haiti after Hurricane Matthew. Find out more in ‘After the storm: using big data to track displacement in Haiti’.

We’re also super excited about the chatbot we started developing for messaging apps, and about our roll-out of Free Basics in Malawi, which is allowing us to share the food prices we collect in mVAM surveys with people in Malawi. With mVAM, our main focus has been reaching people on their simple feature phones. But we know that smartphone ownership is only going to increase, and contacting people through internet-enabled phones opens up loads of new forms of communication and data collection.

3. Expansion!

mVAM expanded to 16 new countries facing a wide set of challenges: conflict, El Niño drought, hurricanes, and extremely remote geographies. We’ve been tracking and learning what remote mobile data collection can add to food security monitoring systems and what its limits are in different contexts. For some of the highlights, check out our blogs on Afghanistan, the Democratic Republic of Congo, Haiti, Nigeria, Papua New Guinea, and El Niño in Southern Africa.

4. Dynamic Partnerships

To have a lasting impact, we need to work with governments. We are really proud of our partnership with CAID, the Cellule d’Analyses des Indicateurs du Développement under the Prime Minister’s Office in the Democratic Republic of Congo. We collaborated on setting up a national market monitoring system, mKengela, which they are now running. We’ve had intensive technical sessions with the CAID team in Rome and Kinshasa to work on solutions that fit their data management and analysis needs. The CAID team even traveled to Johannesburg to share their remote mobile data experience with other African countries and help other governments use this technology.

We’re also working with Leiden University. Bouncing ideas off of their team at the Centre for Innovation helps us move forward on tricky challenges. We’re also collaborating with them to develop an online course where we’re going to share our methodologies and how to use remote technology to monitor food security. Check out Welcome to Vamistan for more.

We are in the field of tech, so we can’t do our job well without partnering with the private sector. It’s definitely a dynamic area, and one where we at mVAM are learning what works best in melding our humanitarian goals with the exciting potential of private tech. Check out our blog From the Rift Valley to Silicon Valley and our hackathon with Data Mission for more.

5. Learning: a Never-ending Process

In addition to trying out new technology, we’ve been trying to answer some important questions about the live calls, SMS, and IVR surveys which make up the bulk of mVAM data collection. We’re doing mode experiments to understand how people answer differently depending on which mode we use to contact them: check out our first Mind the Mode article, with more coming in 2017. In Kenya, we are looking into whether we can collect nutrition indicators through mVAM methods. A major challenge is reaching women through phone surveys, so we organized a gender webinar with partners to learn from what they are doing: check out our key gender takeaways. These are key questions and they can’t be resolved overnight, but we’re making steady progress in understanding them, and we’re excited for what more we’ll find out in 2017.

Thanks to everyone who has supported our work this year and kept up with our blog!

Prince Charming: A Triplex Tale

img_4427_resize

Welcome to “Sorland”! (Photo: WFP/Jennifer Browning)

The mVAM team sent a team member, Jen, to Triplex, the largest humanitarian emergency simulation in the world. mVAM was thrilled to join over 400 military, UN, government and NGO participants who travelled to Lista, Norway, for training in how to respond to a humanitarian emergency. In the pre-exercise stage, we presented our work on mVAM, and we hope that our participation will help to increase our engagement with such a diverse group of partners. There were also interesting presentations on shelter, supply chain, data analysis, and new tools. 

Our favorite session was on smart assessments. Lars Peter Nissen, Director of ACAPS, offered important wisdom that we should always strive to follow with mVAM. He warned against getting trapped in your own small study and losing what he termed “situational awareness,” or the bigger picture.

His three rules for humanitarian analysts to live by:

  1. “Know what you need to know.”
  2. “Make sense, not data.”
  3. “Don’t be precisely wrong, be approximately right.”

In thinking about how we can apply these three gems to our work on remote data collection, we need to make a constant effort to collect data that will really help improve humanitarian responses. Like all data nerds, we can sometimes get bogged down in calculating exact bias estimates or making sample size calculations, risking losing sight of the bigger picture from down in the weeds of our small mVAM survey in one country. But we need to remember to look at the wider situation to ensure we are collecting useful information.

img_4406

Presenting mVAM (Photo: WFP/Lucy Styles)

Then we need to make sense of our data by triangulating with what others are doing and what we already know. In our mVAM bulletins, we need to communicate clearly in a way that makes data quickly understandable to decision-makers. We need to pay attention to what the trends from our mVAM data are telling us, while not forgetting the limitations of the remote mobile data collection methodology.

After a couple of days of introspection (or, as we would find out later, the calm before the storm), the two-day pre-exercise ended and we embarked on the natural disaster simulation phase. We boarded buses or “flights” and travelled to Base Camp in “Sorland”, a fictional developing country that had just been hit by a hurricane and where the simulation would take place. For the next 72 hours we would do our best to respond, learning along the way.

The organizers made a herculean effort to have the 72 hours be as realistic as possible. We were sleeping in (admittedly high tech) tents and crossing a road jammed with huge supply trucks and lines of land rovers. The scale was impressive. Prince Harry even flew a helicopter in to observe the exercise and play the role of a Minister from the Sorland government. The organizers couldn’t have planned it, but at one point, the winds became dangerously high, almost making it necessary to really evacuate us.

img_4433_resized

The Minister of “Sorland” played by Prince Harry (Photo: WFP/Jennifer Browning)

In these conditions, as in any real-life emergency, it was inevitable that we would run into problems. We had planned to deploy mVAM quickly. The organizers had provided us with a list of phone numbers of IDPs in “Sorland”, actually students from the United Nations University in Bonn who did a great job role-playing throughout the simulation. We wanted to contact them via SMS using Pollit, the in-house SMS survey tool developed by InStedd. We have used Pollit successfully in Goma to collect food prices, but for Pollit to work, you need a WiFi connection. (For more on Pollit, see our blog entries Pollit Customized and Ready to Go and Working with DRC Youth to Text Back Market Prices.) At Triplex, WiFi was supposed to be up and running the first evening, but conditions on the ground made it difficult to establish a connection. We didn’t get WiFi until the last night of the exercise, which was too late for us to use Pollit.

So instead, we participated in OCHA-led face-to-face surveys and in focus group discussions. Sometimes we get so caught up in remote data collection that these other data collection exercises can fall off our radar screen, but there is so much we learn from talking to local communities face-to-face and from coordinating with other partner agencies as they plan their own data collection. So perhaps because WiFi was such a problem, Triplex turned into a great experience to keep our coordination and face-to-face data collection skills sharp.

triplex-4

The Logistics Cluster explains access constraints (Photo: WFP/Ricardo Gonzalez)

In addition to collaborating with different organizations, working within a diverse team of WFP colleagues from different units pushed us to consult closely and understand what information they needed most. At WFP headquarters, we don’t generally have the opportunity to work this closely on a daily basis with colleagues from other branches like logistics, procurement, and cash-based transfers. As WFP considered a potential cash-based transfer response for the fictional Sorland, it became clear that, operationally, information on market functioning and food availability was very important. This meant that while we were not able to use existing mVAM tools per se, we recognized clear demand within WFP to address this critical information gap. Next time, we will keep these information needs, i.e. “knowing what we need to know”, clearly in mind. And we’ll also make sure to prepare for all types of scenarios, think about the limitations of our technology, and do our best to have a Plan B.

Even without WiFi and Pollit, the Triplex simulation ended up being very relevant and provided a great brainstorming session for what came later. During the 72 hour simulation, colleagues from Haiti and Cuba were receiving increasingly grim alerts about the approach of Hurricane Matthew. Through Triplex, we’d already identified some of the information that could be most relevant in responding to a hurricane. So our practice in Sorland turned out to be very useful in quickly deciding what questions to ask in Haiti where we are rolling out a remote market assessment. Stay tuned for more!

 

The El Niño Aftermath: Tracking Hunger in the Millions in Southern Africa

We’ve been writing a lot about how mVAM can help in conflict situations where whole areas are cut off because of violence or an epidemic (see our blogs on Yemen, Somalia and Iraq, and our article on Ebola). But over the past year, the world was disrupted by another type of event, a climatic one: El Niño. The El Niño weather pattern results from a warming of sea temperatures in the Pacific roughly every three to seven years, and this El Niño was one of the strongest on record. What made it so concerning is its global reach: it didn’t just affect the Pacific; places as far away as Guatemala, Pakistan, Indonesia and Ethiopia were all at risk of floods and/or droughts. While El Niño itself has abated, it has left millions hungry in its wake (current estimates are that 60 million people are food insecure globally), and a La Niña year is looming.

One area that has been particularly affected is Southern Africa. Across the region, this year’s rainfall season was the driest in the last 35 years. Most farmers are facing significantly reduced and delayed harvests.

El Niño hit when Southern Africa was already vulnerable to food insecurity. The region had already experienced a poor 2014-15 harvest season, meaning that food stocks were already depleted. Now, after El Niño, roughly 41 million people are classified as food insecure. On 13 June 2016 WFP categorized the region as an L3 emergency – a situation requiring the highest level of humanitarian support. We’re therefore dramatically expanding our national food security monitoring in the region so WFP can quickly provide as much relevant food security information as possible to effectively respond to the crisis.

Predictions that this El Niño would have a big impact had already started coming in during 2015, so we began setting up mobile monitoring in countries that were particularly vulnerable. We started in Malawi, which faced very disruptive weather patterns (potentially too much rain in the north and huge rainfall deficits in the south) and where we lacked current household data to track the impact on food security across the country.

To get information quickly and cheaply, we started a monthly SMS survey with GeoPoll in December 2015. And Malawians sure were quick to respond: within 24 hours, we had 1,000 completed questionnaires. When analyzing the results, we wanted to make sure people were understanding our texts. The adult literacy rate in Malawi is only 61.3%, so we kept the questionnaire short and as simple as possible. We included questions for one food security indicator, the reduced Coping Strategies Index (rCSI), which asks people about the coping strategies they use when they don’t have enough to eat. We also checked that the data made sense, and in general the rCSI behaved as we would expect: it was correlated with people’s messages about their community’s food security situation and with their wealth status. As with all of our surveys, we are continually improving them; in this case, we increased our sample size and district quotas to capture more people in rural areas.

Monitoring Maize Prices

Market prices, especially maize prices, are key to Malawians’ food security. Maize is the staple food, used to make nsima, which is consumed daily. So to monitor market prices in 17 hotspot districts, we collected phone numbers from over 100 traders in 51 markets throughout Malawi. We first tried asking them for prices by text message, but we didn’t receive many responses. It seems that sending back a series of texts is a bit too much to ask of traders who volunteered out of goodwill to participate in our market survey. We therefore set up a small call center in WFP’s country office and trained two operators, who were quickly placing calls to traders every week. When they could just answer a quick phone call instead of having to type in answers, traders willingly reported current commodity prices.

Our latest report, from June 2016, shows that maize prices are now between 50 and 100 percent higher than at this time last year. This is having a big effect on Malawians: as you can see from our word cloud, ‘not enough’ featured alarmingly prominently in responses to our open-ended question about maize.

word cloud_cropped

Nutrition Surveillance for the First Time

In most countries, we have been concentrating on household-level indicators like food consumption. But health centers treating malnutrition could potentially give us important indications of the nutrition situation in different parts of the country. In Malawi, WFP works with health centers to address moderate acute malnutrition (MAM) by providing fortified blended foods. So to make the most of our call center, we decided to call these health centers every two weeks and track malnutrition admission data for children (aged 6-59 months) and for adults with HIV/AIDS or tuberculosis. In the first six weeks of monitoring, we saw a big increase in the number of moderate acute malnutrition admissions for children, while severe acute malnutrition admissions did not show a clear pattern. We dug further and found that the Ministry of Health had initiated mass screenings to enroll malnourished children in nutrition programmes, and such screenings generally pick up moderately malnourished children. With health center admission data, it’s important to check what else is going on in the country. We’re hoping soon to pilot contacting mothers of malnourished children about their children’s progress, to gain additional insight into the nutrition status of vulnerable populations in Malawi.

Now that we have Malawi firmly established, we’ve started reporting on Madagascar and our data collection is ongoing in Zambia, Lesotho and Mozambique. So watch this space for more news about how we get on in these next few months.

Our 5 hacks for mobile surveys for 2015

WomanPhoneGoma

An mVAM respondent in Mugunga III camp, DRC.

  1. Gender matters. Design and run your survey in a way that promotes women’s participation. With mobile surveys, it’s hard to get as many women to respond as men. Make sure you’re calling at the right time and that you provide incentives. We also recommend having women operators. For more of our thinking on gender in mobile surveys, check out our blog entry on gender issues in West Africa.
  2. Validate mobile data against face-to-face data. Your mobile survey results may differ significantly. In many contexts, cell phone penetration has not reached the most vulnerable groups. In DRC, we had to provide phones to Internally Displaced Persons (IDPs) along with access to electricity; to learn more, check out our video and our blog entry. But it’s not always possible to distribute phones, so it’s important to check your results against other data sources. Also, people get tired of answering their phones all the time, so attrition and low response rates will affect your results.
  3. Mind the mode! Your results will differ according to whether the survey is done through SMS, IVR, or live calls by an operator. Live calls have the highest response rates, but you have to be ready to pay. For simpler data, we have found that SMS is effective and cheap. Just remember: the context matters. SMS works well for nationwide surveys, even in countries where literacy rates are not that high; check out our recent results in Malawi. However, SMS can be a problem in communities where literacy rates are very low or familiarity with technology is limited, as we found in DRC IDP camps. For Interactive Voice Response (IVR) surveys, which use voice-recorded questions, the jury is still out on their usefulness as a survey tool. IVR didn’t work as well as SMS in Liberia, Sierra Leone, and Guinea during the Ebola crisis (HPN June 2015). But IVR has potential as a communication tool to push out information to people. Check out our entry on our two-way communication system, where we use IVR to send distribution and market price information to IDPs in DRC.
  4. Keep the survey user-friendly and brief. Always keep your survey short and simple. Stay below 10 minutes for voice calls, or people will hang up. If you are texting people, we don’t recommend much more than 10 questions. Go back to the drawing board if respondents have trouble with some of your questions. With mobile surveys, you don’t have the luxury of explaining everything as you would in an in-person interview. It might take a few rounds to get it right. When we collect food prices, we’ve found we need to tweak food items and units of measurement in Kenya and DRC to best capture what people buy in local markets. Again, short and sweet should be the mobile survey mantra.
  5. Upgrade your information management systems. There is nothing as frustrating as collecting a lot of great data without being able to manage it all! Standardize, standardize, standardize! Standardize questions, answer choices, variable names, and encoding throughout questionnaires. Automate data processing wherever possible. Also, you’ll be collecting phone numbers. This is sensitive information, so make sure you have the correct confidentiality measures in place. Check out our Do’s and Don’ts of Phone Number Collection and Storage and our script for anonymizing phone numbers. Finally, share your data so others can use it! We’re posting our data in an online databank.
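The anonymization idea above can be sketched in a few lines. This is a minimal illustration, not the script mentioned in the post: it assumes a keyed hash (HMAC) is acceptable, so that the same phone number always maps to the same token (keeping panel rounds linkable) while the raw number never needs to be stored with the survey data. The function name and key are hypothetical.

```python
import hashlib
import hmac

def anonymize_msisdn(phone_number: str, secret_key: bytes) -> str:
    """Replace a phone number with a keyed, irreversible token.

    Using HMAC rather than a bare hash means the small space of
    possible phone numbers cannot be brute-forced without the key.
    The mapping is deterministic, so rounds of a panel survey can
    still be linked by token.
    """
    digest = hmac.new(secret_key, phone_number.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened token for readability

# The key must be stored separately from the anonymized dataset.
key = b"store-this-key-separately-from-the-data"
token = anonymize_msisdn("+265991234567", key)
assert token == anonymize_msisdn("+265991234567", key)  # deterministic
```

A design note: a bare SHA-256 of the number alone would be reversible in seconds by hashing every possible number, which is why the key matters.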

Can we use SMS for food security surveys in a Congolese IDP camp?

Blog entry originally posted in December 2014 on the Humanitarian Innovation Fund website.


Response rates to voice calls, DR Congo (Source: WFP)

Almost one year into data collection, we are now fairly confident that live voice calls, placed by operators, are a good way to stay in touch with people in the extremely vulnerable communities we work with.  Since January 2014, we have been able to conduct monthly rounds of phone surveys typically reaching between half and two-thirds of selected respondents, while collecting data of good quality. However, it’s not yet clear if either IVR or SMS offer the same advantages in our pilot contexts.

SMS: cool tool, wrong setting?

This month, we attempted to understand whether SMS surveys would work in an IDP camp. Using SMS is attractive because it is low-cost and easy to automate using free software. While we have had good results with SMS when running simple national or province-level food security surveys, we have yet to evaluate the tool’s suitability in a high-vulnerability IDP camp setting. In November, two Rome-based mVAM team members, Marie and Lucia, travelled to Goma to attempt to do just that. They helped the team in Goma organize a simple food security survey involving face-to-face interviews, live voice calls and SMS. The data collected from this exercise will allow us to understand the strengths and weaknesses of these different survey tools.

Residents in a Congolese IDP camp responding to text messages

In order to run the SMS survey, we used Pollit, a free, open-source tool. It is easily accessible online and requires only minimal hardware to function: a computer, a mobile phone and an internet connection. The tool was developed by InSTEDD, the same organization that developed Verboice, the programme we are using for IVR calls. In the future, Pollit may allow us to periodically run short SMS surveys in-house, bringing a lot of flexibility to our field teams. During the Goma test, Pollit proved to be a simple and flexible tool. It was easy to set up and worked smoothly during the six days of data collection.

However, response rates to SMS surveys turned out to be low, particularly compared to voice calls and face-to-face surveys. Our enumerators reported that people in the camp are not used to using the SMS function on their phones: they typically communicate by voice call, due to low literacy and habit. In some cases, the phones people owned were broken or had dirty screens, making it difficult to read and reply to the messages we were sending. These issues, however, do not prevent us from using voice calls, which seem to be the preferred modality amongst respondents in DR Congo. This suggests that we should stick to live calls for Mugunga 3 camp and use SMS questionnaires in other settings. We are now analyzing the data we collected in Goma to answer our remaining questions, including a comparison of data quality across the different survey modes. We’ll be sure to share those insights later.
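The mode comparison above boils down to a simple metric: of the respondents we attempted to reach with each tool, how many completed the survey? A minimal sketch of that calculation follows; the counts are hypothetical placeholders for illustration, not the Goma results.

```python
def response_rate(completed: int, attempted: int) -> float:
    """Share of attempted respondents who completed the survey."""
    return completed / attempted if attempted else 0.0

# Hypothetical attempt/completion counts, for illustration only.
modes = {
    "face-to-face": (95, 100),
    "voice call": (60, 100),
    "SMS": (20, 100),
}

# Rank modes from highest to lowest response rate.
for mode, (done, tried) in sorted(
    modes.items(), key=lambda kv: response_rate(*kv[1]), reverse=True
):
    print(f"{mode}: {response_rate(done, tried):.0%}")
```

Comparing rates this way only tells part of the story; as the post notes, data quality across modes has to be compared separately.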

At the other end of the line: voices from Guinea, Sierra Leone and Liberia

Blog entry originally posted in November 2014 on the Humanitarian Innovation Fund website.


Over the past couple of months, the mVAM project has grown, and fast. The catalyst has been the Ebola virus disease (EVD) outbreak.

When the mVAM project started in 2013, we envisioned the advantages of remote data collection in areas that are frequently affected by conflict, natural disasters, or inaccessibility (e.g. villages cut off by impassable roads during the rainy season). The thought of not being able to collect data due to an infectious disease outbreak never crossed our minds. Fast forward a year, and we are suddenly collecting food security data from quarantined areas of Sierra Leone, Guinea, and Liberia through short SMS and interactive voice response (IVR) surveys.

Staff in Rome testing new software we could use to conduct SMS surveys

As physical movement of our staff became increasingly restricted in EVD-affected areas, remote data collection began to seem like the most viable option. Overnight, the team in Rome went from analysing data from DRC and Somalia to planning with colleagues in our West Africa Office on the quickest way to conduct remote food security surveys on the ground. Using crowdsourced contact data in collaboration with a private company (meaning that we call or SMS households already registered in calling databases), our teams were able to begin data collection in late September. Data on food prices, households’ coping strategies, and other relevant topics is now being collected monthly in Sierra Leone, Liberia, and Guinea. The goal of this ongoing data collection is to observe if and how the EVD outbreak is affecting food prices and households’ food security. So far, two rounds of data collection have been conducted, and we are gearing up for round three in early December. We are pleased to see the learning we’ve accumulated over the past year put into practice for the Ebola response. For more information on our work in Liberia, Sierra Leone, and Guinea, please visit the mVAM monitoring website.

MEANWHILE IN DRC AND SOMALIA…

Despite the ongoing emergency in West Africa, our teams in DRC and Somalia are continuing with their monthly data collection and striving to improve the ways in which we implement the project.  Two of our Rome colleagues will be heading to Goma, DRC next week to work with the team on a review of SMS versus face-to-face surveys and provide technical support. So stay tuned for more from their trip soon!

In addition, this week the Rome team had a conference call with staff in Galkayo, Somalia, to discuss project progress. During the call, team members brainstormed how to address declines in response rates. Since the inception of the project, response rates in Somalia have slowly been decreasing with each round. The staff suggested that this could be because respondents’ cell phones are often switched off to save power, because respondents are losing interest, or for other reasons; the team is hoping to conduct focus group discussions to learn more. Based on the responses, possible solutions to boost response rates include launching a campaign to remind people to respond, arranging for respondents to charge their phones for free with a local vendor, or providing a one-time airtime credit to frequent respondents. Some decline in response rates is natural over time; still, we are confident that with a few creative solutions, response rates in Somalia will recover, and we will keep you posted on how it goes.
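Spotting the kind of round-on-round decline described above can be automated with a simple check over the per-round response rates. The sketch below is illustrative only; the rates and the function name are hypothetical, not the actual Somalia figures.

```python
def is_declining(rates: list[float], tolerance: float = 0.0) -> bool:
    """True if every round's response rate falls by more than
    `tolerance` relative to the previous round.

    A small positive tolerance ignores round-to-round noise and
    only flags sustained, meaningful drops.
    """
    return all(later < earlier - tolerance for earlier, later in zip(rates, rates[1:]))

# Hypothetical monthly response rates, for illustration only.
monthly_rates = [0.62, 0.57, 0.51, 0.48]

if is_declining(monthly_rates, tolerance=0.01):
    print("Sustained decline detected; consider reminders or airtime incentives.")
```

In practice a trend test over more rounds would be more robust, but even this simple rule is enough to trigger a follow-up such as the focus group discussions mentioned above.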