For such a broad, diverse set of technologies, the public debate about biometrics can often appear perplexingly narrow. For the past few years, automated facial recognition, and especially its use by the police, has often seemed like the only biometric controversy in town.
In the UK, police trials (and de facto operational deployments) of facial recognition have stoked controversy, prompting calls for interventions ranging from a blanket ban on the technology, to better oversight and codes of practice, to an overhaul of the current law governing the use of the technology by the public sector – as well as spirited defences of its potential benefits.
In the USA, a patchwork of state-level regulation has begun to emerge in response to facial recognition, spurred in part by research (predominantly by Black women) on how biometric technologies can exacerbate existing inequalities, as well as create new ones. This legislation ranges from requirements for private entities to obtain consent before collecting biometric data, as in Texas and Washington, to more absolute restrictions on the use of the technology, such as the city of Portland, Oregon’s prohibition on the use of facial recognition by private entities. One of the most recent additions, Virginia’s House Bill 2031, has been presented as banning police use of facial recognition without approval from the state legislature.
In the European Union, where ambitions to develop a comprehensive regulatory response to digital technologies are perhaps strongest and most realised, the narrow focus on facial recognition is particularly striking. After flirting with, but ultimately walking back from, the idea of a moratorium on facial recognition, the European Commission has now published proposals on the regulation of high-risk forms of AI, which effectively single out live facial recognition, or ‘real-time remote biometric identification’, as the form of biometrics most in need of limitation. Though the proposals do make reference to other uses of biometric technology and data, notably the use of the technology for categorisation and emotion recognition, they arguably fall short of proposing adequate safeguards on the use of these technologies.
Biometrics is bigger than facial recognition
While it is easy to see why facial recognition remains the most explored deployment of biometric technology, its near monopoly on lawmakers’ attention and public discourse will be hard to justify indefinitely. Facial recognition is just one of an emerging set of biometric technologies enabled by recent advances in machine learning and sensor technology, the availability of biometric data and the increasing prevalence of digital human labour. As well as automated facial recognition, the emergence of effective systems for voice recognition, gait analysis, typing-signature analysis and emotion recognition (to name but a few) is set to pose difficult questions for policymakers.
As these forms of biometrics become more conspicuous, and as COVID-19 heightens the incentives of states and companies to use them, the need for public debate about whether and how they might be governed or regulated will become increasingly pressing. In the clamour to return to economic productivity, governments and companies may well be drawn to the capacity of biometric systems to monitor and assess public health, adherence to social distancing and worker performance. As these uses of the technology become more visible, it will become increasingly difficult to restrict the public discourse on biometrics to facial recognition.
But while a broadening out of the conversation around biometrics is urgently needed, just how far should it be opened out, and how are we to make sense of what we find?
For any politician or policymaker thinking about how to conceive of and respond to the emergence of this family of technologies, two questions are particularly difficult.
Firstly, we need to ask whether it makes sense to talk about biometric technologies as a truly cohesive set of technologies, presenting common challenges and admitting of common remedies, or whether they bear nothing more than a family resemblance, and will therefore resist systematic analysis or regulation. If there isn’t a unifying set of harms common to all biometric technologies, then it may well be justifiable to respond to the challenges posed by biometric technologies on a case-by-case basis – indeed, this may be the only workable approach.
Secondly, we need to understand the novelty of this set of technologies. Are there things that these biometric technologies can do, and harms they can present, that are really different from those made possible by other data-driven technologies? And if so, is the difference one of kind, or one of degree? If there is no principled or substantial difference between the kinds of risks presented by biometric information and those posed by the collection and processing of other forms of personal data, then a reasonable conclusion might be that solving the problem of biometrics is simply a case of getting data protection right as a whole (not that that’s a straightforward task).
To answer these questions convincingly, we need to be clear what we’re talking about when we talk about biometrics.
The first barrier to be overcome here is definitional. To understand the nature and novelty of the challenges posed by biometric technologies, we need to be working from a clear, shared understanding of what counts (or should count) as biometric data.
Of particular interest are questions of whether biometric data should have to be capable of uniquely identifying a natural person (and if so, what should count as unique identification), whether laws about biometric data should cover unprocessed, or ‘raw’, biometric data, such as photos of faces and voice recordings, and what kinds of behavioural data, if any, we should count as biometric data.
This task is complicated further by the fact that different jurisdictions answer these questions differently, and further still by the fact that many legal definitions of biometric data (bound up with historical uses of biometric technologies) seem at odds with our intuitive ideas about what should be in scope. The UK GDPR, for instance, defines biometric data as that which allows or confirms the unique identification of a natural person, thereby excluding potentially significant information about a person’s body that arguably should be the subject of regulation, such as that used by biometric emotion recognition systems. Working out what we mean (or should mean) when talking about biometrics isn’t as simple as consulting the relevant legislation.
The second barrier is more substantive. To talk precisely about biometrics, we need to establish the harms and challenges with which biometric technologies might be associated, and how these correspond to different uses and forms of the technology. Applications of biometric technologies prompt an array of related and often ill-defined worries, ranging from practical concerns about accuracy and bias, to deeper questions about privacy, power dynamics and the erosion of societally important forms of uncertainty – such as the unpredictability of people’s emotional reactions to one another, or of their propensity to behave in particular ways.
Disentangling the different conceivable harms posed by biometric technologies, and establishing which applications of the technology correspond to which harms, is a precondition for establishing whether there’s a common set of challenges posed by biometrics and, where harms aren’t uniform, which interventions are needed in which circumstances.
The rest of this blog series is devoted to exploring these two barriers in more detail, with the subsequent blog post exploring three kinds of definitional questions about biometric data – and their consequences for policy – and the final piece mapping the different kinds of harms posed by biometrics against current and conceivable future applications of the technology. In neither case is the intention to answer the question of how policymakers should concretely respond to the rise of biometrics. Rather, the aim is simply to clarify the different things we might be referring to when we talk about this vast, varied family of technologies, and what, specifically, is at stake.