If you’re not human then who are you?

Experimenting with chatbots in Nigeria and Haiti


Testing the bot in Haiti – WFP/Lucia Casarin

Readers of this blog know that the team has been experimenting with chatbots to communicate with disaster-affected communities – read our previous posts about our prototype and the Nielsen Hackathon.

As part of this effort, during recent missions to Haiti and Nigeria, our team went out to talk to communities to find out whether a chatbot would be right for them.

Would a chatbot be a stretch in these communities?

Well, it’s not that much of a stretch.

In North East Nigeria, most displaced people live in Maiduguri, a city of over 1 million people. In this ‘urban’ setting, connectivity is good, most people own cell phones, and many young people use social media and messaging apps. Mobile operators sell ‘social bundles’ (unlimited social media access sold in very small increments) and offer some services for free, including Facebook Lite and Facebook Messenger.

In Haiti, three-quarters of the population live in the capital, Port-au-Prince, where 3G connectivity is good and most people use messaging apps to communicate with friends and family. Even in rural and difficult-to-reach communities, leaders and young people own smartphones and connect to the internet. There is a lot of competition between mobile operators so the prices for mobile data are very low. This means that most people can afford to access the internet either via their own smartphone or from shared smartphones.


A mobile phone charging station on the road from Léogane Peri to Port-au-Prince – WFP/Lucia Casarin

A bare-bones demo

In both countries we tested a simple chatbot that asks people about food prices and what the food security situation is like in their community. The survey we used was much more basic than our usual mobile questionnaires, as we felt it was important to keep things simple at this stage.

For Nigeria, the bot demo was initially in English, but we soon translated it into Hausa, the primary language spoken by displaced persons in Maiduguri. In Haiti we made it available in both Creole and French. The chatbot was very responsive on 3G and even worked over slower 2G connections, so the technology works in these contexts. But this was only the starting point: what we really wanted to know was what ‘real’ people thought about the bot.

We organized focus group discussions with displaced people in Maiduguri and with community representatives in Haiti. We helped people access the WFP bot via their Facebook accounts, and they began chatting away.

Sounds cool, but what are the limitations?

Here’s what people said:

First of all, people found the bot a convenient, quick and easy way to get in touch directly with WFP, and they really liked that it lets them speak to WFP without intermediaries. They had a lot to tell us, particularly through the open-ended question, where they typed out detailed responses.

In Nigeria, they did tell us that our (somewhat wordy) English-language demo should be translated into Hausa to make it easier for everyone to use. Our first group of testers were young people who were already Facebook users and so were familiar with Messenger. It was therefore no surprise that they interacted smoothly with the bot and went through our questionnaire in minutes.


Testing the bot in Nigeria – WFP/Jean-Martin Bauer

In Haiti, people started interacting with the bot as if it were a human rather than an automated questionnaire, so they got stuck pretty fast when it wasn’t as naturally responsive as they’d expected. This means we either need to give people clearer instructions or add Natural Language Processing (NLP) capabilities to our bot.
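As a rough illustration of the first option, a bot can catch unexpected free-text replies and restate its instructions instead of hitting a dead end. This is a minimal sketch; the answer options and wording are hypothetical, not the actual WFP bot’s logic:

```python
# Minimal sketch of a fallback for free-text replies (hypothetical logic,
# not the actual WFP bot). The bot expects a numbered answer; anything
# else triggers a gentle re-prompt instead of a dead end.

EXPECTED = {"1", "2", "3"}  # the numbered choices shown to the user

def handle_reply(text):
    """Return the bot's next message for a raw user reply."""
    token = text.strip().lower()
    if token in EXPECTED:
        return "Thanks! You chose option " + token + "."
    # The user chatted as if to a human: restate the instructions.
    return ("Sorry, I'm an automated survey. "
            "Please reply with one of the numbers: 1, 2 or 3.")
```

Even a check this crude would have caught most of the confusion we saw in Haiti; full NLP is the heavier-weight alternative.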

There are of course other barriers. In both countries women appeared to be less likely to own a smartphone. This means that bot users will likely be overwhelmingly young, male and better educated than other people – hardly ‘representative’ of WFP’s target affected population. The free version of the bot is also not always available: in Nigeria only Airtel subscribers can access it, while in Haiti the free service doesn’t exist yet.

This means that the bot would need to be a complement to the other tools we have. We might use data from the bot to obtain a quick situation update, but we will continue relying on other sources for more representative data.

How many pizzas does it take to build a chatbot?


Hackers are hungry
Photo: WFP/Pia Facultad

This week, government, business, academia and civil society leaders will gather at Davos to discuss solutions to the world’s biggest challenges – including how new technologies can be leveraged to solve some of the most serious problems we face. At mVAM, we continue to explore how some of these technologies could be used to help eliminate chronic hunger, malnutrition and food insecurity – most recently looking at how chatbots could help collect important information during a humanitarian response.

Last week, our collaborators at Nielsen – one of the early supporters of mVAM – organized a 24-hour hackathon at the Nielsen Tech Hub in New York City. As part of ongoing efforts through Nielsen Cares, the hackathon aimed to develop an open-source humanitarian chatbot that can collect real-time information about food security. This came at the right time for WFP – we’d developed and tested a prototype of the chatbot with InSTEDD, and Nielsen’s technology and development input helped bring in important new capabilities. Ultimately, our goal is to field-test a chatbot in Haiti in the next few months to help us track food security conditions as people recover from the impacts of Hurricane Matthew.

The event was open to the public, and a diverse group of students, volunteer hackers and Nielsen staff showed up to take on the challenge despite the wintry weather. InSTEDD’s Director of Platform Engineering, Nicolás di Tada, also participated.

Much more than a chatbot

What the hackers built is much more than a chatbot: it is a bona fide chat-based data collection and reporting system. Rather than attempting to outdo each other (as is the case in most hackathons), the teams split up to build the different components of the system. The teams, made up of perfect strangers, communicated throughout the hackathon via Slack. After 24 hours, most components were fully coded, but there were still bugs in the orchestrator and the gateway that additional post-hackathon work will resolve.

The architecture of the system, as defined by Nielsen, includes:

  • a management interface that allows an analyst to set up a questionnaire, including skip logic and validation rules that prompt the user when they enter an invalid answer. The interface was built using the Angular 2 JavaScript framework;
  • a gateway that interacts with respondents through Facebook Messenger and, potentially, other chat applications. The Facebook gateway was built on top of the AWS Lambda service;
  • a natural language processing engine that analyzes text on the fly, allowing the chatbot to ‘interpret’ a user’s answers. For now, the NLP engine processes English-language text, although it includes a translation service and, by default, translates all languages to English for more advanced NLP tasks. The engine was built using the AWS Lambda service and leverages IBM Watson’s AlchemyLanguage service for text processing;
  • a set of ‘backend APIs’ that manage respondent and survey data, route respondents from each response to the next question, and provide data to user interfaces. The APIs were built using the Django framework for Python and deploy on the AWS Elastic Beanstalk service;
  • an ‘orchestration layer’ that maintains survey status and routes messages between the end user and the various backend services. The orchestration service is built on top of the AWS Lambda service; and
  • a ‘reporting and data visualization engine’ that allows an analyst to instantly see the results of the chatbot surveys. Data vizzes were built using Highcharts, a JavaScript charting library.
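To make the flow concrete, here is a hedged sketch of the orchestration idea: a Lambda-style handler that receives a message relayed by the gateway, looks up the respondent’s place in the survey, and replies with the next question. The event shape, question text and in-memory `SESSIONS` dict are illustrative assumptions, not the hackathon code:

```python
# Illustrative sketch only: a Lambda-style handler standing in for the
# orchestration layer. A real deployment would persist session state in a
# datastore and record answers via the backend APIs (both omitted here).

SURVEY = [
    "How many meals did your household eat yesterday?",
    "Has the price of rice changed in the last month?",
]
SESSIONS = {}  # respondent id -> index of the next question to ask

def lambda_handler(event, context=None):
    """Advance the respondent's survey and return the next message."""
    sender = event["sender_id"]
    step = SESSIONS.get(sender, 0)
    if step >= len(SURVEY):
        return {"reply": "Thank you, the survey is complete!"}
    SESSIONS[sender] = step + 1
    return {"reply": SURVEY[step]}
```

In the real system, the gateway would translate this reply into the Messenger send format, and the validation rules defined in the management interface would run before the respondent is advanced.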

 


 

Leveraging cloud services from the Amazon Web Services product catalog, the teams were able to build a scalable, cost-effective platform that can be deployed quickly to multiple locations globally.

Remember the humans

We also received tips from a chatbot specialist, Alec Lazarescu from Chatbots Magazine. He encouraged us to ‘onboard’ users with an initial message that gives people a clear idea of what the chatbot is for. He told us to avoid ‘dead ends’ and allow users to speak to a human being in case they get stuck.

We’re very grateful to Nielsen for their support and to all the participants for their energy and creativity. The next steps involve WFP and InSTEDD taking over the code and ironing out the kinks. We expect challenges with natural language processing in Haitian Creole, a language that is under-researched in NLP. Making the different parts of the chatbot work together seamlessly is another area we will still have to work on. And, of course, the final test will be to see whether our target group – people living in Haiti – find the chatbot engaging.

Our 5 mVAM Highs from 2016


1. Awards for Remote Mobile Data Collection Work

At the Humanitarian Technology 2016 conference, our paper ‘Knowing Just in Time’ won Best Paper for Outstanding Impact. In the paper, we assessed mVAM’s contribution to decision-making by looking at use cases for mVAM in camps, conflict settings and vulnerable geographies. Check out our blog Tech for Humanity for more on it and on our other conference paper, ‘mVAM: A New Contribution to the Information Ecology of Humanitarian Work’.

To close the year, we had a nice surprise from Nominet Trust, the UK’s leading tech-for-good funder: we made their list of the 100 most inspiring social innovations using digital technology to drive social change around the world.

2. New Tech

In this day and age there’s a lot of buzz around data visualization. We’ve been honing our skills with Tableau. Check out the data visualizations we did for Yemen and Haiti.

We’re also in the era of Big Data. We partnered with Flowminder, experts in analyzing call detail records, to track displacement in Haiti after Hurricane Matthew. Find out more in ‘After the storm: using big data to track displacement in Haiti’.

We’re also super excited about the chatbot we started developing for messaging apps, and about our roll-out of Free Basics in Malawi, which is allowing us to share the food prices we collect in mVAM surveys with people in Malawi. With mVAM, our main focus has been reaching people on their simple feature phones, but we know that smartphone ownership is only going to increase. Contacting people through internet-enabled phones opens up loads of new forms of communication and data collection.

3. Expansion!

mVAM expanded to 16 new countries facing a wide set of challenges: conflict, El Niño drought, hurricanes and extremely remote geographies. We’ve been tracking and learning what remote mobile data collection can add to food security monitoring systems and what its limits are in different contexts. For some of the highlights, check out our blogs on Afghanistan, the Democratic Republic of Congo, Haiti, Nigeria, Papua New Guinea and El Niño in Southern Africa.

4. Dynamic Partnerships

To have a lasting impact, we need to work with governments. We are really proud of our partnership with CAID, the Cellule d’Analyses des Indicateurs du Développement under the Prime Minister’s Office in the Democratic Republic of Congo. We collaborated on setting up a national market monitoring system, mKengela, that they are now running. We’ve had intensive technical sessions with the CAID team in Rome and Kinshasa to work on solutions that fit their data management and analysis needs. The CAID team even traveled to Johannesburg to share their remote mobile data experience with other African countries and help other governments use this technology.

We’re also working with Leiden University. Bouncing ideas off of their team at the Centre for Innovation helps us move forward on tricky challenges. We’re also collaborating with them to develop an online course where we’re going to share our methodologies and how to use remote technology to monitor food security. Check out Welcome to Vamistan for more.

We are in the field of tech. So we can’t do our job well without partnering with the private sector. It’s definitely a dynamic area, and also one where we at mVAM are learning what works best in melding our humanitarian goals with the exciting private tech potential out there. Check out our blog From the Rift Valley to Silicon Valley and our hackathon with Data Mission for more.

5. Learning – the never-ending process

In addition to trying out new technology, we’ve been trying to answer some important questions about the live calls, SMS and IVR surveys that make up the bulk of mVAM data collection. We’re also running mode experiments to understand how people answer differently depending on which mode we use to contact them. Check out our first Mind the Mode article, with more coming in 2017. In Kenya, we are looking into whether we can collect nutrition indicators through mVAM methods. A major challenge is reaching women through phone surveys, so we organized a gender webinar with partners to learn from what they are doing – check out our key gender takeaways. These are key questions and they can’t be resolved overnight. But we’re making steady progress in understanding them, and we’re excited for what more we’ll find out in 2017.

Thanks to everyone who has supported our work this year and kept up with our blog!

After the storm: using big data to track displacement in Haiti

Photo: Igor Rugwiza – UN/MINUSTAH


This week’s blog is a guest entry by Gabriela Alvarado, the WFP Regional IT Officer for Latin America and the Caribbean. In the aftermath of Hurricane Matthew, Gaby led the IT Working Group in Haiti, which supported the humanitarian response through the provision of ETC connectivity services. The team from the Regional Bureau worked with mVAM and Flowminder to supply valuable, timely information to the operation.

 

Supporting Emergencies through Technology & Joint Efforts

It’s now been just over a month since Hurricane Matthew made landfall in Haiti, devastating the western side of the country. The hurricane has affected an estimated 2.1 million people, leaving 1.4 million in need of humanitarian assistance.

In the days following the hurricane, a rapid food security assessment was carried out to determine the impact of the hurricane on the food security of households and communities in the affected areas.  In the most-affected areas, the départements of Grande-Anse and Sud, people reported that crops and livestock, as well as agricultural and fishing equipment, were almost entirely destroyed.

 

Credit: WFP



We all know the challenges we face at WFP when collecting the information needed to determine the best response to circumstances on the ground. In the aftermath of the hurricane, which had destroyed infrastructure, caused flooding and temporarily knocked out telecommunications, gathering information from affected areas was especially difficult. So WFP’s Information Technology team in the Regional Bureau for Latin America and the Caribbean reached out to Flowminder, a non-profit organization that uses big data analysis to answer operationally relevant questions for governments and aid agencies responding to emergencies. Thanks to an existing agreement between WFP and Flowminder, WFP was able to quickly establish a working group and start data collection one day after the hurricane struck Haiti.

 

An aerial view of Jérémie following the passage of Hurricane Matthew (photo: Logan Abassi - UN/MINUSTAH)


Flowminder aggregates, integrates and analyses anonymous mobile operator data (call detail records), satellite and household survey data, which helps to estimate population displacements following a crisis. Displaced people are some of the most vulnerable following a hurricane, and knowing where people have gone helps to provide more effective assistance.

By 24 October 2016, Flowminder estimated that 260,500 people had been displaced within the Grande Anse, Sud and Nippes départements. In Les Cayes, the major city in Sud, the population grew by an estimated 42% in the aftermath of Hurricane Matthew, according to Flowminder’s analysis. In fact, the analyses suggest that many people moved toward cities, even Jérémie and Les Cayes, which were severely damaged by the hurricane.

 

Flowminder.org


So how exactly did Flowminder make these estimates with so many areas barely accessible? By analysing anonymized call detail records from Digicel, one of Haiti’s major cell phone network providers, and comparing where people placed calls before and after the hurricane, Flowminder was able to provide an estimate of the number of displaced people. Flowminder uses algorithms that look at where the last ‘transaction’ (phone call or SMS) took place each day in order to identify where people were living before the hurricane and where they subsequently moved. It makes sense – the last few calls or texts you make at night are often from your home. While Flowminder does not get exact locations from the call detail records, it can identify a general home location using the closest cell phone tower. After identifying the home location, Flowminder needs to determine how many people each phone represents. In poorer areas, not everyone may own a phone, and many people may not be able to charge and use their phones after a natural disaster like a hurricane. Flowminder uses formulas that take these factors into account and translate the number of phones into an estimate of the number of people displaced.
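The home-location idea can be sketched in a few lines. This is a toy version under strong assumptions (tiny in-memory records, the last transaction per day as the ‘nightly’ location, the modal tower as home), not Flowminder’s actual pipeline:

```python
# Toy sketch of CDR-based displacement detection. Record format
# (subscriber, day, time, tower) and all names are illustrative.
from collections import Counter, defaultdict

def nightly_towers(records):
    """records: list of (subscriber, day, 'HH:MM', tower).
    Returns {subscriber: [tower of the LAST transaction of each day]}."""
    last = {}
    for sub, day, time, tower in records:
        key = (sub, day)
        if key not in last or time > last[key][0]:
            last[key] = (time, tower)
    out = defaultdict(list)
    for (sub, _), (_, tower) in last.items():
        out[sub].append(tower)
    return out

def home_tower(towers):
    """Modal nightly tower = inferred home area."""
    return Counter(towers).most_common(1)[0][0]

def displaced(before, after):
    """Subscribers whose inferred home changed between the two periods."""
    homes_before = {s: home_tower(t) for s, t in nightly_towers(before).items()}
    homes_after = {s: home_tower(t) for s, t in nightly_towers(after).items()}
    return {s for s in homes_before
            if s in homes_after and homes_after[s] != homes_before[s]}
```

The step the sketch leaves out is the hard one described above: weighting each displaced phone by survey-derived factors (phone ownership, post-disaster charging) to turn phone counts into people counts.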

How will this further help?

With the information provided by Flowminder, WFP is able to estimate:

  • possible gaps in assistance in areas of the country which were not damaged by Hurricane Matthew, but which are experiencing an influx of people in need of food assistance following the hurricane;
  • uptake and community ‘acceptance’ of mobile money (one aspect is the availability of the service; the other is whether it is actually used in a given area);
  • the prevalence and spread of diseases (including cholera, which continues to pose a risk in the aftermath of the hurricane).

It has been a very challenging yet incredible opportunity to see where and how technology can be used to further support an emergency response under difficult conditions and to ensure that WFP can reach the most vulnerable after a disaster.

Prince Charming: A Triplex Tale


Welcome to “Sorland”! (Photo: WFP/Jennifer Browning)

The mVAM team sent a team member, Jen, to Triplex, the largest humanitarian emergency simulation in the world. mVAM was thrilled to join over 400 military, UN, government and NGO participants who travelled to Lista, Norway, for training in how to respond to a humanitarian emergency. In the pre-exercise stage, we presented our work on mVAM, and we hope that our participation will help to increase our engagement with such a diverse group of partners. There were also interesting presentations on shelter, supply chain, data analysis, and new tools. 

Our favorite session was on smart assessments. Lars Peter Nissen, Director of ACAPS, offered important wisdom that we should always strive to follow with mVAM. He warned against getting trapped in your own small study and losing what he termed “situational awareness,” or the bigger picture.

His three rules for humanitarian analysts to live by:

  1. “Know what you need to know.”
  2. “Make sense, not data.”
  3. “Don’t be precisely wrong, be approximately right.”

In thinking about how we can apply these three gems to our work on remote data collection, we need to make a constant effort to collect data that will really help improve humanitarian responses. Like all data nerds, we can sometimes get bogged down in calculating exact bias estimates or making sample size calculations, risking losing sight of the bigger picture from down in the weeds of our small mVAM survey in one country. But we need to remember to look at the wider situation to ensure we are collecting useful information.


Presenting mVAM (Photo: WFP/Lucy Styles)

Then we need to make sense of our data by triangulating with what others are doing and what we already know. In our mVAM bulletins, we need to communicate clearly in a way that makes data quickly understandable to decision-makers. We need to pay attention to what the trends from our mVAM data are telling us, while not forgetting the limitations of the remote mobile data collection methodology.

After a couple of days of introspection (or, as we would find out later, the calm before the storm), the two-day pre-exercise ended and we embarked on the natural disaster simulation phase. We boarded buses or ‘flights’ and travelled to Base Camp in ‘Sorland’, a fictional developing country that had just been hit by a hurricane and where the simulation would take place. For the next 72 hours we would do our best to respond, learning along the way.

The organizers made a herculean effort to make the 72 hours as realistic as possible. We were sleeping in (admittedly high-tech) tents and crossing a road jammed with huge supply trucks and lines of Land Rovers. The scale was impressive. Prince Harry even flew a helicopter in to observe the exercise and play the role of a Minister from the Sorland government. The organizers couldn’t have planned it, but at one point the winds became dangerously high, almost making it necessary to evacuate us for real.


The Minister of “Sorland” played by Prince Harry (Photo: WFP/Jennifer Browning)

In these conditions, as in any real-life emergency, it was inevitable that we would run into problems. We had planned to deploy mVAM quickly. The organizers had provided us with a list of phone numbers of IDPs in “Sorland” – actually students from the United Nations University in Bonn, who did a great job role-playing throughout the simulation. We wanted to contact them via SMS using Pollit, the in-house SMS survey tool developed by InSTEDD. We have used Pollit successfully in Goma to collect food prices, but for Pollit to work, you need a WiFi connection. (For more on Pollit, see our blog entries Pollit Customized and Ready to Go and Working with DRC Youth to Text Back Market Prices.) At Triplex, WiFi was supposed to be up and running the first evening, but conditions on the ground made it difficult to establish a connection. We didn’t get WiFi until the last night of the exercise, which was too late for us to use Pollit.

So instead, we participated in OCHA-led face-to-face surveys and in focus group discussions. Sometimes we get so caught up in remote data collection that these other data collection exercises can fall off our radar screen, but there is so much we learn from talking to local communities face-to-face and from coordinating with other partner agencies as they plan their own data collection. So perhaps because WiFi was such a problem, Triplex turned into a great experience to keep our coordination and face-to-face data collection skills sharp.


The Logistics Cluster explains access constraints (Photo: WFP/Ricardo Gonzalez)

In addition to collaborating with different organizations, working within a diverse team of WFP colleagues from different units pushed us to consult closely and understand what information they needed most. At WFP headquarters, we don’t generally have the same opportunity to work this closely on a daily basis with colleagues from other branches like logistics, procurement, and cash-based transfers. As WFP considered a potential cash-based transfer response for the fictional Sorland, it became clear that operationally, information on market functioning and food availability was very important. This meant that  while we were not able to use existing mVAM tools per se, we recognized clear demand within WFP to address this critical information gap. For next time, we will keep these information needs, i.e. “knowing what we need to know,” clearly in mind. And we’ll also make sure to prepare for all types of scenarios, think about the limitations of our technology, and do our best to have a Plan B.

Even without WiFi and Pollit, the Triplex simulation ended up being very relevant and provided a great brainstorming session for what came later. During the 72 hour simulation, colleagues from Haiti and Cuba were receiving increasingly grim alerts about the approach of Hurricane Matthew. Through Triplex, we’d already identified some of the information that could be most relevant in responding to a hurricane. So our practice in Sorland turned out to be very useful in quickly deciding what questions to ask in Haiti where we are rolling out a remote market assessment. Stay tuned for more!

 

Test Complete: 6 things we learned about online surveys


Photo: WFP/Catherine Clark

We told you in a recent blog post that we were testing online surveys in Haiti to get an idea of urban food security perceptions. This was very new for us so here’s a quick follow up on what we learned:

1. Get ready to experiment! For online surveys (and pretty much anything else we do), this is our approach. We don’t have much experience designing community-level questions, so we decided to throw in a variety to see what worked and what didn’t. Our question asking respondents to name the two main difficulties faced by poor people probably worked best, so we’ll keep that in. However, we also asked whether poor people’s quality of life had improved, worsened or stayed the same in the past three months. Almost everyone said ‘worsened’, but we weren’t sure if we were really capturing a trend. Given that our respondents were generally pretty wealthy, they might automatically give this response when asked about poor people.

2. Iterate! Your questions will need tweaking. In our case, street food is an important part of urban Haitians’ diet, so we asked a question on the price of a plate of spaghetti. The survey format meant respondents couldn’t free-type answers, so we gave them price ranges. It turns out these were a bit too wide, as almost all the responses fell in one range. However, now that we know the approximate range, in the next round we can offer specific numbers and allow respondents to select an exact value.

3. Tailor your questions to your respondents. You might not actually know that much about them the first time, so think about who your respondents will be and what they can realistically answer. Again, a lot of this comes through trial and error. We asked about wage rates for a male manual worker and for a female domestic worker, but our respondents were mostly young, male and well-educated, so they had no idea how much these workers get paid.

We’d also really like to know about migration so we tested a question. But daily migratory flows in Haiti aren’t very big so back to the drawing board with this idea.

4. Use partial responses. People will drop out of the survey as it goes on. However, we still wanted to use all of the information they did give us, especially since this was a qualitative survey.

5. Be patient. Expect a steady volume of responses, but it could take some time to reach your target. We averaged about 30 completed questionnaires a day. Our target was 750 completes so it took about 5 weeks.


6. Reaching poor areas is always a challenge. It’s still difficult to get enough responses from poor areas. We knew that going in, and sure enough, in Haiti’s biggest slum, we did not have many responses.

As always, we’ve learned a lot while experimenting. We’re looking forward to revising our questionnaire and giving online surveys another go!

Now testing: online surveys

Previous posts in this blog have described how WFP’s mVAM project implements mobile surveys using text messages, voice calls and interactive voice response. This month, we tried something different: online surveys. We’ve been curious for a while about the possibility of delivering surveys straight to people’s desktops or smartphones. While it’s clear that only a small minority of people in developing countries have access to mobile broadband (for more information see the World Bank’s ‘Digital Dividends’ report), more and more people are getting connected. In fact, the exponential growth of global internet access means that in a few years online surveys could become a viable tool for measuring food security. We were therefore looking for a way of testing out our ideas.

Here’s how it happened. There has recently been a drought in Haiti that has caused rising food prices, so a number of assessments have taken place to estimate the drought’s impact on the population’s food security. However, these assessments have focused only on rural areas, in a country where more than half of the population lives in an urban setting. At the same time, Haiti is also experiencing rapid inflation and political instability, factors that could increase food insecurity, particularly in urban areas. We therefore started looking for a way of monitoring food security dynamics in Haiti’s capital, Port-au-Prince. On investigation, it seemed that our standard tools like SMS or voice calls would take a long time to set up, and we needed the information as soon as possible. The result? We had the perfect opportunity to test out online surveys.

We designed a 10 question online survey that asks people about their perceptions of the food security situation in greater Port-au-Prince. It’s administered through RIWI, a company based in Canada. Their methodology involves showing surveys on links that are broken or that no longer exist. Instead of seeing the broken link, people see an invitation to start a survey. These randomly selected people then see and click/swipe through the survey on their desktop, laptop or mobile phone.

Of course, online surveys are biased towards well-educated, urban and better-off respondents. But this meant they might just work as a way of reaching respondents in Haiti’s capital city. We adjusted our techniques to this urban setting: rather than attempting to conduct a ‘representative’ survey, we simply asked people to comment on food security in their communities. This means they answered qualitative questions about their perceptions of food prices, migration and other issues.


One of the great things about online surveys is their flexibility. We were able to have two versions of the questionnaire, one in French and one in Creole, as people just select the language on the landing page. We were also able to insert pictures that people see when they are asked a question. For example, we used the picture below to help people understand our question about the price of a serving of spaghetti (a cheap street food in Port-au-Prince).  


Survey responses are also geolocated, and we’re able to know what operating system people are using to answer our questions. We can then use this information to better understand the socio-economic profile of the respondent. Results are now coming in and we will tell you what we learn!