
The data of the most vulnerable people is the least protected

How has biometric data collection caused harm in the context of humanitarian interventions and where do future risks lie?

Belkis Wille, Katja Lindskov Jacobsen

7 July 2023

Reading time: 13 minutes

Fingerprint collection by border control authorities.

Debates around protecting personal data, including biometrics, have intensified, as public and private entities increasingly deploy biometric technologies in contexts like policing, public spaces and schools.

Yet, there is one context in which biometric data is frequently collected and shared that is rarely considered in mainstream policy conversations: humanitarian responses. In these specific settings, data protection policies and practices tend to be insufficient, and vulnerable people, often fleeing atrocities, wars or food insecurity, can be exposed to further harm.

In this post, we look at data collection practices deployed during humanitarian interventions and the ensuing incidents of harm. We review the reasons why organisations collect personal data, especially biometric data, from vulnerable people, and the existing barriers to improving data protection.

Existing legal frameworks

There are currently few guardrails to protect the data rights of refugees and vulnerable people in need of humanitarian support. Moreover, many organisations that were established and funded to protect these communities collect their data without applying the safeguards required under international and, in some cases, national law.

The first reason this happens is that humanitarian crises – wars, other forms of violent conflict, natural disasters or famine – often take place in countries that have no meaningful data protection regulation. And where such regulation does exist, it may not be enforced during a humanitarian crisis.

Second, the millions of displaced people whose personal data is being collected may be refugees or asylum seekers who are no longer in their country of origin and have little to no leverage over the actions of their host government or the organisations they rely on to access fundamental services.

Third, the international organisations that collect the personal data of people receiving humanitarian support – predominantly UN agencies – are rarely bound to abide by the laws of the countries in which they operate.

UN agency staff are able to work in countries on the basis of privileges and immunities, meaning that even if a country had a meaningful data protection regime in place, UN personnel would not be held accountable in court for breaching it. The scale of the data involved makes this especially significant: as of late 2021, one UN humanitarian database stored the personal information of 21.7 million people, including the biometric data of 9.8 million, while another UN humanitarian agency, as of late 2020, controlled a system holding the personal data of roughly 63.8 million people.

When national law falls short, is not enforced or does not apply, international human rights law still requires that, consistent with the right to privacy, state authorities collect personal data only when it is necessary for and proportionate to a specific aim, and on legitimate legal grounds.

Additionally, the principles of data protection outline that every individual should be informed of what personal data is collected, by whom, why, for how long it will be held and what measures have been taken to keep it safe. The collection of biometric data comes with additional restrictions, as biometric data should be classified as a special category of sensitive data, as stipulated, for instance, in the EU’s General Data Protection Regulation (GDPR).[1]

In spite of these legal protections, our research, covering a wide variety of interventions, has shown that extensive amounts of personal data about vulnerable people receiving aid (beneficiaries) are collected without meaningful steps to inform them why their data is collected and through which mechanisms it is stored and shared.

Why do donor governments and UN agencies want to collect vulnerable people’s data?

Given the legal framework that should be applied, our research seeks to understand why UN agencies and other humanitarian organisations, often funded by the UN and acting as implementing partners, collect vulnerable people’s personal data, especially biometrics, in humanitarian crisis zones without adequate safeguards.

Data collection supported by emerging technological solutions, like the use of biometrics to match aid to beneficiaries, has become the norm. There is a strong narrative in the humanitarian sector that distributing bags of rice to someone based on their fingerprint, rather than their paper-based identity card, is more efficient and cost-saving, yields more accurate refugee population figures, reduces the risk of fraud and enhances accountability towards donors.

There are good reasons for addressing fraud in the context of humanitarian aid, but these must be balanced against the risk of harmful effects when designing beneficiary systems around biometric data. These trade-offs must be considered carefully in each context, to understand the potential effects of personal data collection on the safety of vulnerable people and take meaningful action to mitigate potential harm. In some cases, this may mean designing systems that do not use biometric data at all, because the stakes are too high.

Over the course of our research, we have interviewed many staff directly involved in collecting and processing data, including biometrics, at some of the largest humanitarian non-governmental organisations (NGOs) receiving UN funding. Interviewees have spoken confidentially of their discomfort with the amount of data they are obliged to collect and share with UN agencies, donor governments and host governments. They raised concerns about the legal basis for collecting data, including the lack of meaningful consent from beneficiaries, and the limited information provided about how the data will be used.

Consent to data collection is an especially contentious topic. Some humanitarian organisations have tried to justify collecting sensitive data like biometrics on the legal ground of ‘vital interest’ (i.e. collecting such data is ‘vital’ to the survival of the person it is collected from). Given the high threshold for making such an argument, however, most organisations rely on the legal justification of ‘informed consent’ (i.e. data is collected only after beneficiaries have freely agreed to it and to how the data will be used, fully understanding what they have agreed to). Indeed, agencies are keen to show that they are giving due consideration to best data practices, which centre on consent.[2]

Yet, vulnerable people, including traumatised refugees who have fled violence and hunger, are told that they need to provide extensive personal information (e.g. name, where they are from, information on family members, profession, etc., as well as their fingerprints, face scan and/or iris scan) simply to get access to food or receive a tent. In many instances, people are asked for consent at a time when they need humanitarian assistance urgently.

This means that the organisation providing assistance holds such power over them that the idea of informed consent is meaningless: ‘it’s like dangling a lollipop,’ as a humanitarian worker in Somalia noted.

Considering the position of vulnerability of the beneficiaries and the lack of alternatives, can consent be given genuinely? Can people know what exactly they consent to – having their personal data, including biometrics, stored, or having it shared? Can they really balance future risks ensuing from biometric data collection against their urgent present needs? And, if consent is not genuine, do humanitarian actors have the right to be capturing such data in these contexts?

The power dynamics in the sector between donors and organisations are also complex: staff members at humanitarian organisations said that they had been explicitly told by UN agencies and governments financially supporting their operations that refusing to collect and share data would lead to losing their funding.

This is all the more significant as some donors demand the personal data of NGOs’ beneficiaries for purposes unrelated to humanitarian support, including as part of financial regulation, counterterrorism and national security policies.

In one example, as part of a formalised agreement with the US Government, the UNHCR shares personal data, including biometrics, of thousands of refugees who are seeking resettlement in the USA, even if they are never resettled there. The US Government stores the data in systems that are shared with agencies working on immigration and border enforcement, criminal investigations and defence policy, rather than refugee protection.

Since 2020, Human Rights Watch has engaged with the UNHCR on its data collection practices globally and raised specific questions about potentially harmful effects for people in Bangladesh, Jordan and Kenya. In response, the UNHCR has cited its data protection policy, emphasising that it registers individuals and collects their data to support governments in their obligation to protect refugees and asylum seekers, and that governments have the right to know who resides in their territory.

There is a risk that law enforcement entities, which are exempt from some data protection regulations on national security grounds, may use the information that vulnerable people have given to humanitarian agencies for entirely different purposes than those originally stated. This could have a broad impact, including on non-discrimination rights, the right to a fair trial, free movement and other rights, depending on which state agencies the data is shared with and for what purposes.

What are the potential harms ensuing from humanitarian collection of personal data?

Personal data collected for any purpose brings risks. Hacks and leaks of vulnerable people’s personal data are a real concern, even where agencies collecting data have good practices. In one instance, the International Committee of the Red Cross, which takes a very narrow approach to the collection of biometric data and has invested in creating an extremely secure data storage system, experienced a hacking episode in January 2022, involving the personal data of more than half a million people.

However, it is important to recognise that harm sometimes results from an organisation’s decision to share data with another organisation, private sector entity, host or donor government. For example, in 2018, the UNHCR registered, on behalf of the Government of Bangladesh, the Rohingya population who had fled horrendous abuses at the hands of the Myanmar military. This enabled the local government to issue them biometric identity cards, deemed necessary to receive essential aid and services.

Some Rohingya leaders protested against biometric registration for various reasons, including fear that Bangladesh would share the data with Myanmar authorities – the very officials who had targeted, executed and raped so many people from their community.

Bangladeshi authorities did share the data collected with the Myanmar Government, in an effort to send some Rohingya refugees back to Myanmar, something UNHCR senior staff knew from the outset would happen. In the cases examined by Human Rights Watch, the UNHCR did not seek to obtain meaningful consent from Rohingya people, nor adequately inform them of exactly what would happen with their data.[3]

In other instances, systems built and paid for by donor governments and humanitarian organisations and containing sensitive personal data have fallen into the hands of unintended actors.

In Afghanistan, a multitude of donors, such as the United States, and international institutions, including UN agencies and the World Bank, funded and, in some cases, built or helped to build vast systems to hold the biometric and other personal data of various groups of Afghans for digital identity verification and payroll purposes. Some of these systems were built for the former Afghan Government, while others were designed for foreign governments and militaries.

While not a case of a humanitarian actor’s data collection, one of these systems – the Automated Biometric Identification System (ABIS) – is a powerful example of how unsafe mechanisms can lead to unforeseen harm. Initiated in 2004 by the US Department of Defence, it serves as a central repository for personal data, including biometrics, collected by US military officers and other departments’ staff. The ABIS includes data on those considered a US national security concern, among them detainees, people who applied to work on US military bases in Iraq and Afghanistan, and Iraqis and Afghans working on US-funded projects. At least some of the devices that US forces used to store and process the data held in the ABIS have been compromised.[4]

Moreover, following their takeover of Afghanistan in August 2021, the Taliban, who target people because of their past association with the former Afghan Government and its allies, have been able to access some of the devices holding ABIS data. They also have been able to access other systems established and/or paid for by the international community. This means that Taliban officials at checkpoints or in administrative offices can immediately identify and potentially harm individuals, including former Afghan government workers like former police and soldiers.

What are the barriers to the effective protection of people’s data in humanitarian contexts?

Humanitarian agencies and NGOs may collect data for efficiency purposes and to supposedly tackle fraud, but they are also funded by a donor community of states, who may have other interests in accessing the personal data of vulnerable communities.

Host governments also have their own interests in getting hold of beneficiaries’ personal data. Some make data-sharing a prerequisite for an organisation that wants to operate on their territory, leaving humanitarian workers with no choice. A government’s demands are likely to persist unless all humanitarian organisations operating in its country are willing to take a shared position and refuse to comply. Researchers have been told confidentially that this has indeed happened: in some places, aid organisations held a united position and successfully pushed back against authorities’ demands.

Beyond collecting too much data, a key barrier to NGOs’ compliance with data protection is financial. NGOs compete for grants from the UN or a donor country, and their mission binds them to deliver as much as possible, as cheaply as possible.

Staff at smaller humanitarian organisations, which usually have limited core funding, told us that, without dedicated resources, they cannot afford to hire enough data protection officers or pay for adequate training for staff involved in sensitive data collection. As a result, informed consent is often reduced to a pro-forma box-ticking exercise that a beneficiary is not even aware of. Critically, aid workers mentioned inadequate in-house resources to conduct meaningful data protection impact assessments before initiating projects that require data collection.

To overcome these barriers, humanitarian actors and their donors need to commit to the fundamental data protection principle of data minimisation, i.e. collecting the minimum amount of data necessary for the purposes of their assistance. And they should refuse to collect that data until donor governments and UN agencies provide adequate funding for them to abide by the tenets of data protection to the standard that all of us deserve.
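As a purely illustrative sketch, and not a description of any agency’s actual system, the Python below shows one way a registration tool could enforce data minimisation: each stated purpose is tied to the minimum set of fields judged necessary and a fixed retention period, and anything beyond that set, such as fingerprints, is rejected at the point of collection. All field names, purposes and retention periods here are assumptions made for the example.

```python
# Hypothetical sketch of data minimisation at registration time.
# Field names, purposes and retention periods are illustrative only;
# they do not describe any real humanitarian agency's system.

from dataclasses import dataclass, field
from datetime import date, timedelta

# For each stated purpose, the minimum set of fields judged necessary.
MINIMUM_FIELDS = {
    "food_distribution": {"household_id", "household_size"},
    "shelter_allocation": {"household_id", "household_size", "special_needs"},
}

# Retention is tied to the purpose, not left open-ended.
RETENTION = {
    "food_distribution": timedelta(days=180),
    "shelter_allocation": timedelta(days=365),
}


@dataclass
class RegistrationRecord:
    purpose: str
    data: dict
    collected_on: date = field(default_factory=date.today)

    @property
    def delete_after(self) -> date:
        # Deletion date follows from the purpose-specific retention period.
        return self.collected_on + RETENTION[self.purpose]


def register(purpose: str, submitted: dict) -> RegistrationRecord:
    """Keep only the fields necessary for the stated purpose; reject the rest."""
    allowed = MINIMUM_FIELDS[purpose]
    extra = set(submitted) - allowed
    if extra:
        # Refuse to hoover up attributes the purpose does not require
        # (e.g. biometrics, ethnicity).
        raise ValueError(f"Fields not necessary for '{purpose}': {sorted(extra)}")
    missing = allowed - set(submitted)
    if missing:
        raise ValueError(f"Missing required fields: {sorted(missing)}")
    return RegistrationRecord(purpose=purpose, data={k: submitted[k] for k in allowed})


if __name__ == "__main__":
    record = register("food_distribution", {"household_id": "H-0042", "household_size": 5})
    print(record.data, "delete after", record.delete_after)

    # Attaching biometric data to a purpose that does not need it is rejected.
    try:
        register("food_distribution", {"household_id": "H-0042",
                                       "household_size": 5,
                                       "fingerprint_template": b"\x00\x01"})
    except ValueError as err:
        print(err)
```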

As the discussion on greater transparency and accountability towards local communities intensifies among NGOs and agencies, humanitarian actors should remember that their duty to protect and do no harm extends to the full array of rights.

Footnotes

[1] Article 9 of the EU’s GDPR, for example, prohibits the processing of sensitive data like biometrics for the purpose of uniquely identifying natural persons, with few exceptions, because of their sensitivity and unique link to a person’s identity. The GDPR does not apply to government bodies and law enforcement when data are gathered and processed for the prevention, investigation, detection or prosecution of criminal offences, for the execution of criminal penalties, or for preventing threats to public safety.

[2] The GDPR, for example, stipulates that vital interest can only be used as a basis for the collection of ‘special categories of personal data’, where the data subject is physically or legally incapable of giving consent. These special categories of data are a subset of personal data that warrant special protection because of their nature (for example, data on racial or ethnic origins, political opinions, religious belief or sexual orientation). Biometric data is another of these special categories of personal data. The data being collected in relation to vulnerable migrants will fall under these categories and so, even where GDPR does not apply, consent represents the appropriate standard.

[3] The 24 Rohingya people that Human Rights Watch spoke to during its research, who knew their names were listed for repatriation, refused to board the buses lined up in the camps to take them back to Myanmar. They instead went into hiding. They feared being forcibly returned, as others had been in the past, to the authorities they had fled when soldiers killed their families and burned their villages. Recent events suggest returns may begin soon.

[4] In December 2022, German researchers were able to purchase on eBay several of these devices and discovered that the machines held the unencrypted personal data profiles of 2,632 Afghans and Iraqis.