Language data fills a critical gap for humanitarians

Until now, humanitarians have not had access to data about the languages people speak. But a series of open language datasets is about to improve how we communicate with communities in crisis. Eric DeLuca and William Low explain how a seemingly simple question drove an innovative solution.

“Do you know what languages these new migrants speak?”

Lucia, an aid worker based in Italy, asked researchers from Translators without Borders this seemingly simple question in 2017. Her organization was providing rapid assistance to migrants as they arrived at the port in Sicily. Lucia and her colleagues were struggling to provide appropriate language support: they often lacked interpreters who spoke the right languages, and they asked migrants to fill out forms in languages the migrants didn’t understand.

Unfortunately, there wasn’t a simple answer to Lucia’s question. In the six months prior to our conversation with Lucia, Italy registered migrants from 21 different countries. Even when we knew that people came from a particular region in one of these countries, there was no simple way to know what language they were likely to speak.

The problem wasn’t exclusive to the European refugee response. Translators without Borders partners with organizations around the world that struggle with a similar lack of basic language data.

Where is the data?

As we searched various linguistic and humanitarian resources, we were convinced that we were missing something. Surely there was a global language map? Or at least language data for individual countries?

The more we looked, the more we discovered how much we didn’t know. The language data that does exist is often protected by restrictive copyrights or locked behind paywalls. Languages are often visualized as discrete polygons or specific points on a map, which seems at odds with the messy spatial dynamics that we experience in the real world. 

In short, language data isn’t accessible, easily verifiable, or in a format that humanitarians can readily use.

We are releasing language datasets for nine countries

Today we launch the first openly available language datasets for humanitarian use. The release includes a series of static and dynamic maps and 23 datasets covering nine countries: DRC, Guatemala, Malawi, Mozambique, Nigeria, Pakistan, Philippines, Ukraine, and Zambia.

This work is based on a partnership between TWB and University College London. The pilot project received support from Research England’s Higher Education Innovation Fund, managed by UCL Innovation & Enterprise. With support from the Centre for Translation Studies at UCL, this project was the first of its kind in the world to systematically gather and share language data for humanitarian use.

The majority of these datasets are based on existing sources: census and other government data. We curated, cleaned, and reformatted the data to be more accessible for humanitarian purposes. We are also exploring ways to derive new language data for countries where none exists, and to extract language information from digital sources.

This project is built on four main principles:

1. Language data should be easily accessible

We started with existing government data because we realized that a lot of quality information was simply hard to access and analyze. The language indicators from the 2010 Philippines census, for example, were spread across 87 different spreadsheets. Many census bureaus also publish in languages other than English, making the data difficult to access for humanitarians who work primarily in English. We have gone through the process of curating, translating, and cleaning these datasets to make them more accessible.
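
To give a sense of the curation involved, here is a minimal sketch in Python, assuming a hypothetical directory of per-province spreadsheets that share a layout (the real census files are messier than this):

```python
from pathlib import Path

import pandas as pd

# Hypothetical directory of per-province census spreadsheets.
frames = []
for path in sorted(Path("census_2010_language").glob("*.xlsx")):
    df = pd.read_excel(path)
    df["source_file"] = path.name  # keep provenance for the metadata
    frames.append(df)

# One consistent table instead of 87 separate spreadsheets.
combined = pd.concat(frames, ignore_index=True)
combined.to_csv("philippines_language_indicators.csv", index=False)
```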

2. Language data should work across different platforms

We believe that data interoperability is important. That is, it should be easy to share and use data across different humanitarian systems. This requires data to be formatted in a consistent way and spatial parameters to be well documented. As much as possible, we applied a consistent geographic standard to these datasets. We avoided polygons and GPS points, opting instead to use OCHA administrative units and P-codes. At times this will reduce data precision, but it should make it easier to integrate the datasets into existing humanitarian workflows.

We worked with the Centre for Humanitarian Data to develop and apply consistent standards for coding. We built an HXL hashtag scheme to help simplify integration and processing. Language standardization was one of the most difficult aspects of the project, as governments do not always refer to languages consistently. The Malawi dataset, for example, distinguishes between “Chewa” and “Nyanja,” which are two different names for the same language. In some cases, we merged duplicate language names. In others, we left the discrepancies as they exist in the original dataset and made a note in the metadata.
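
To make this concrete, here is a sketch of what an HXL-tagged table can look like: the second row carries the hashtags, and locations are referenced by P-code rather than by polygons or coordinates. The specific hashtags, P-codes, and figures below are illustrative only, not the project’s published scheme:

```
region,pcode,language,iso_639_3,share
#adm1+name,#adm1+code,#language+name,#language+code,#population+pct
Central Region,MW2,Chewa,nya,0.62
Northern Region,MW1,Tumbuka,tum,0.57
```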

Even when language names are consistent, the spelling isn’t always. In the DRC dataset, “Kiswahili” is displayed with its Bantu prefix. We have opted instead to use the more common English reference of “Swahili.”

Every dataset uses ISO 639-3 language codes and provides alternative names and spellings to alleviate some of the typical frustrations associated with inconsistent language references.
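
A minimal sketch of how alternative names and spellings can be resolved to a single ISO 639-3 code; the lookup table here covers only the examples mentioned above:

```python
# Resolve alternative language names and spellings to ISO 639-3 codes.
ISO_639_3 = {
    "chewa": "nya",      # Chewa and Nyanja are the same language
    "nyanja": "nya",
    "chichewa": "nya",
    "swahili": "swh",    # individual language; "swa" is the macrolanguage
    "kiswahili": "swh",  # Bantu-prefixed spelling used in the DRC data
}

def to_iso_code(name: str):
    """Return the ISO 639-3 code for a language name, if known."""
    return ISO_639_3.get(name.strip().lower())

print(to_iso_code("Kiswahili"))                       # -> swh
print(to_iso_code("Chewa") == to_iso_code("Nyanja"))  # -> True
```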

3. Language data should be open and free to use

We have made all of these datasets available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (CC BY-NC-SA 4.0). This means that you are free to use and adapt them as long as you cite the source and do not use them for commercial purposes. You can also share derivatives of the data as long as you comply with the same license when doing so.

The datasets are all available in .xlsx and .csv formats on HDX, and detailed metadata clearly states the source of each dataset along with known limitations. 
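
Because every dataset follows the same layout, loading one for analysis is straightforward. A minimal sketch, assuming a hypothetical file name and that the HXL hashtag row sits directly beneath the header:

```python
import pandas as pd

# Hypothetical file name; download the real dataset from HDX first.
# skiprows=[1] drops the HXL hashtag row that sits under the header.
df = pd.read_csv("twb_malawi_language_data.csv", skiprows=[1])
print(df.head())
```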

Importantly, everything is free to access and use.

4. Language data should not increase people’s vulnerability

Humanitarians often cite the potential sensitivities of language as the primary reason for not sharing language data. In many cases, language can be used as a proxy indicator for ethnicity. In some cases, the two are effectively interchangeable.

As a result, we developed a thorough risk-review process for each dataset. This identifies specific risks associated with the data, which we can then mitigate. It also helps us to understand the potential benefits. Ultimately, we have to balance the benefits and risks of sharing the data. Sharing data helps humanitarian organizations and others to develop communication strategies that address the needs of minority language speakers.

In most cases, we aggregated the data to protect individuals or vulnerable groups. For each dataset, we describe the method we used to collect and clean the data, and specify potential limitations. In a few instances, we chose not to publish datasets at all.
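
To illustrate the kind of aggregation involved, here is a sketch with hypothetical P-codes and responses: individual-level records are rolled up to administrative units before anything is shared, so only group-level shares are published.

```python
import pandas as pd

# Hypothetical input: one row per survey respondent (never shared directly).
responses = pd.DataFrame({
    "adm2_pcode": ["MW201", "MW201", "MW201", "MW202"],
    "language":   ["nya", "nya", "tum", "nya"],
})

# Aggregate to admin level 2: publish shares, not individuals.
counts = responses.groupby(["adm2_pcode", "language"]).size().rename("speakers")
shares = counts / counts.groupby(level="adm2_pcode").transform("sum")
print(shares)
# adm2_pcode  language
# MW201       nya         0.666667
#             tum         0.333333
# MW202       nya         1.000000
```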

How can you help?

This is just the beginning of our effort to provide more accessible language data for humanitarian purposes. Our goal is to make language data openly available for every humanitarian crisis, and we can’t do it alone. We need your help to:

  1. Integrate and share this data. We are not looking to create another data portal. Our strategy is to make these datasets as accessible and interoperable as possible using existing platforms. But we need your feedback so we can improve and expand them.
  2. Add language-related questions into your ongoing surveys. Existing language data is often outdated and does not necessarily represent large-scale population movements. Over the past year, we have worked with partners such as IOM DTM, REACH, WFP, and UNICEF to integrate standard language questions into ongoing surveys. This is essential if we are to develop language data for the countries that don’t have regular censuses. The recent multi-sectoral needs assessment in Nigeria is a good example of how a few strategic language questions can lead to data-driven humanitarian decisions.
  3. Use this language data to improve humanitarian communication strategies. As we develop more data, we hope to provide the tools for Lucia and other humanitarians to design more appropriate communication strategies. Decisions to hire interpreters and field workers, develop radio messaging, or create new posters and flyers should all be data-driven. That’s only possible if we know which languages people speak. An inclusive and participatory humanitarian system requires two-way communication strategies that use languages and formats that people understand.

Clearly, the answer to Lucia’s question turned out to be more complicated than any of us expected. This partnership between TWB and the Centre for Translation Studies at UCL has finally made it possible to incorporate language data into humanitarian workflows. We have established a consistent format, an HXL coding scheme, and processes for standardizing language references. But the work does not stop with these nine countries. Over the next few months we will continue to curate and share existing language datasets for new countries. In the longer term we will be working with various partners to collect and share language data where it does not currently exist. We believe in a world where knowledge knows no language barriers. Putting language on the map is the first step to achieving that.

Eric DeLuca is the Monitoring, Evaluation, and Learning Manager at Translators without Borders.

William Low is a Senior Data and GIS Researcher at University College London.

Funding for this project was provided by Research England’s Higher Education Innovation Fund, managed by UCL Innovation & Enterprise.
