
Containing the canary in the AI coalmine – the EU’s efforts to regulate biometrics

Exploring the gaps and risks relating to biometrics in the EU's draft AI regulation

Carly Kind

30 April 2021

Reading time: 13 minutes

[Image: surveillance camera on a brick wall]

The draft AI regulation published by the European Union last week is significant because it’s the first of its kind in the world – a comprehensive, cross-sectoral, supranational attempt to regulate artificial intelligence (AI) and algorithmic products across a range of ‘high-risk’ sectors.

While only a week old, the Commission’s proposal has already achieved an impressive feat: it has shifted the policy window away from a conversation about whether to regulate artificial intelligence, opening up a new discourse about how to regulate artificial intelligence.

Biometrics technologies feature heavily in the Commission’s draft, and for good reason. Controversial technologies such as facial recognition have acted as a canary in the AI coalmine, provoking individual alarm and societal debate about the role of AI in our societies. In the coming months, the proposed regulation will be critiqued from many perspectives, and there are patent limitations in several areas, but here we will analyse its potential effects on uses of biometric-identification technologies, including facial-recognition technologies.

The Ada Lovelace Institute’s national survey of public attitudes to facial recognition, published in 2019, demonstrated how widely those attitudes differed across a range of use cases and contexts, and revealed general support for more regulation, particularly of police use of the technology.

In the intervening years, as research, predominantly by Black women, has built an evidence base for the ways biometric technologies exacerbate existing inequalities and foster new harms, governments and technology companies have taken steps to limit or halt the use of facial recognition and other biometric identification technologies. But few legislatures have proposed stronger regulation of biometrics, with the State of Illinois being a prominent exception. To understand this dynamic in the UK, the Ada Lovelace Institute has commissioned Matthew Ryder QC to undertake an analysis of the regulatory framework for biometrics, and propose options for legislative reform.

In this light, the EU’s proposed regulation, which addresses biometrics from multiple perspectives, is instructive in both its approach and its flaws. A well-received feature of the regulation is that it gives biometrics technologies the prominence they deserve. In particular, the focus on transparency, including a requirement for biometrics (and other high-risk) systems to be recorded in the EU database of standalone high-risk systems, is a welcome inclusion. Of the more than 30 recommendations made by the Citizens’ Biometrics Council we convened in 2020, a good number are reflected in the Commission’s proposal, including the establishment of minimum standards of technology design. But the regulation also contains notable omissions, among them the absence of a single point of authoritative oversight – a strong recommendation that emerged from our Citizens’ Biometrics Council.

Below, we analyse the regulation from the perspective of six potential areas of use – predominantly policing (in both live contexts and in criminal investigations), but also schools, corporate recruitment, supermarkets, airports and public transport. These six use cases grounded the findings of Beyond Face Value, our national public attitudes survey conducted in July 2019, which surfaced for the first time the UK public’s nuanced responses to the use of biometrics technologies in different contexts.

Policing

Responding to public discourse and media coverage of high-profile facial recognition uses, the regulation focuses its firepower on police use of ‘remote real-time biometric identification systems’ in publicly accessible spaces. That is, systems that are used by police in publicly accessible spaces, and which:

  • operate at a distance, without the user knowing in advance whether the relevant person will be present in the area
  • capture biometric data and compare it with an existing sample or template without significant delay, and
  • are used specifically for the purpose of uniquely identifying an individual.

The regulation effectively prohibits police from using biometric identification systems for the detection of minor crimes or for generalised surveillance, and subjects other uses of biometrics to prior judicial authorisation. The current draft permits police to use remote real-time biometric systems to look for victims of crime, prevent specific and imminent threats to life, and detect and prosecute perpetrators or suspects of serious crimes (crimes punishable by a custodial sentence of at least three years) (Art. 5(1)).

It also permits the use of remote, real-time biometric categorisation systems – that is, AI systems that assign people to categories based on their physical appearance, such as ethnicity or sexual orientation – and remote, real-time emotion-recognition systems. It allows Member States to install the technical infrastructure for the use of such systems in publicly accessible spaces. And it permits ‘post remote’ biometric identification, meaning the use of biometric identification on, for example, CCTV footage, crime-scene images or other images.

It is not surprising that the regulation prohibits almost no uses of biometric technology by police; indeed, that approach accords with our research into public expectations, which found general support for police use of facial recognition in prescribed and regulated contexts. Yet the approach taken by the Commission here leaves numerous gaps and risks.

While the distinction of remote, real-time biometrics is a useful one, and speaks to real concerns about the use of covert biometric identification in public spaces, this framing does little to engage with the agency of the individuals being identified. The regulation doesn’t mention consent, for example, which our research has shown plays an important role for many people. The distinction between real-time and ‘post’ use also risks arbitrariness, not only because biometric categorisation using multiple features may in effect permit identification (for example, searching for brown middle-aged males passing a specific CCTV camera location), but also because the effects of remote biometric identification in public spaces on freedom of assembly and the normalisation of surveillance would be equivalent in both circumstances. Some of the most controversial uses of facial recognition would qualify as ‘post’ use, such as the Clearview AI tool sold to police forces internationally. And the disparity between the limitations imposed on biometric identification and those imposed on biometric categorisation or emotion recognition fails to account for the range of harms that might flow from the latter applications.

Aside from the provisions pertaining to remote, real-time biometric identification in publicly accessible spaces, the regulation imposes only a limited number of restrictions on the use of biometrics. ‘Post remote’ biometric identification, such as the use of facial recognition on recorded CCTV footage, is classified as a ‘high-risk’ activity (Art. 6(2)), meaning that the systems used by police must have undergone risk-assessment processes and a conformity assessment, and must adhere to a range of rules concerning dataset quality and provenance. However, provided biometric systems adhere to harmonised standards or common specifications, the system of conformity assessment rests primarily on self-certification by technology providers, rather than subjecting systems to independent authorisation or quality control. Police, as users of the technology, have few responsibilities to ensure that the systems adhere to any rules.

There is a requirement in the regulation that any identification resulting from the system must be verified and confirmed by two natural persons (Art. 14) before any decision is taken. However, this restriction applies only to biometric identification. Where police use AI for biometric categorisation – classifying people in public places as being of a certain ethnicity or political orientation, for example – they are under no obligation to include human oversight, or to notify people that the system is in use (police are explicitly exempt from the transparency obligations pertaining to biometric categorisation (Art. 52(2))).

Overall, police use of biometric categorisation for purposes other than identification does not even appear to be classified as a high-risk use of biometrics.1 And emotion recognition is categorically not considered a high-risk system, despite having been widely critiqued both for its potentially unsafe applications and for doubts about its accuracy. As such, police could use biometrics technologies to, for example, scan public spaces for people of a particular ethnicity or age, or of a particular sexual or political orientation, or could use systems purporting to analyse CCTV footage for people looking ‘suspicious’ or ‘nervous’, without any restriction, risk-management approach or oversight. This is a particularly concerning omission: our Citizens’ Biometrics Council expressed serious concern about these types of biometric applications, especially with regard to the risks they pose to marginalised communities.

There is one provision that acts as a fallback safeguard against AI systems ‘presenting a risk at a national level’ (Art. 65), which requires users of AI systems to suspend the use of a system and notify authorities where a product has the potential to adversely affect:

  • health and safety of persons in general
  • health and safety in the workplace
  • protection of consumers
  • the environment
  • public security, and
  • other public interests.

However, this provision is so broadly drawn that it seems unlikely that police users of biometric technologies would view it as requiring them to pause systems out of concern for broader potential societal harms – for example, the normalisation of surveillance, inhibiting effects on protests and demonstrations, or widespread discrimination.

Schools

In Beyond Face Value we interrogated public attitudes to the use of facial recognition in schools for the purpose of identification (e.g. in place of a ‘roll call’) and for the purpose of categorisation or emotion recognition, such as to monitor pupils’ facial expressions and their behaviour. Our research found that more than 65% of people surveyed were uncomfortable with the use of facial recognition in schools for any reason.

Both identification and emotion-recognition biometrics systems would be permitted under this regulation, although both would be considered high-risk systems. A biometric roll-call system would be classified as a high-risk biometrics system, requiring schools to ensure they procure products that have undergone a conformity assessment and received a CE marking. Although emotion-recognition systems do not fall within the category of high-risk biometric systems, they would be likely to fall within the definition of a system used for education and vocational training – in particular, as a system designed for ‘assessing students in educational and vocational training institutions’ – making them ‘high-risk’ systems with the attendant obligations.

However, as explained above, designation as a high-risk system in the regulation does not mean its use is prohibited or even restricted. It does not place any considerable obligations on the users or procurers of those systems – in this case, schools and educational facilities. And it would be unlikely to prompt schools and governments to consider the long-term societal risks and harms associated with the normalisation of technological surveillance of children.

Instead, the regulation focuses on shaping technologies before they enter the market, requiring developers of systems to take a range of steps, such as carrying out risk assessments, verifying data quality and governance, building in a range of human oversight and transparency measures, and ensuring that systems automatically generate logs. Developers are entitled to self-verify compliance with these rules (unless their systems don’t conform to harmonised standards or common specifications published by the Commission, in which case a third party has to verify conformity).

Corporate recruitment

As explained above, the regulation does not appear to classify emotion-recognition systems or biometric categorisation systems as high risk, meaning that none of the rules in the regulation mandatorily apply to them, with one exception – Article 52, which requires those deploying the system to inform people exposed to it that they’re interacting with emotion recognition technology.

However, the regulation also defines employment, workers management and access to self-employment as a high-risk area, so it is likely that emotion recognition used for recruitment would nevertheless fall within a high-risk category, even if not by virtue of being a biometric technology. Regardless, potential applicants would have no right to opt out of, or withdraw consent for, emotion recognition in recruitment under the regulation.

Supermarkets

Biometrics technologies are being used in two very different ways in supermarkets and high-street stores: for tracking shoppers as they move through the store (so that retailers can understand where shoppers spend the most time), and for uniquely identifying shoppers and comparing them against a ‘watchlist’ of previous shoplifters. Only the latter use would appear to be captured by the regulation, and – as remote, real-time biometric identification – would be classified as a high-risk system, but would not be prevented by the regulation. This permissiveness is a significant departure from the public attitudes our research revealed towards biometric identification by the private sector in commercial premises: only 7% of those we surveyed in Beyond Face Value thought facial recognition should be used in supermarkets to track shopper behaviour, for example, and the Citizens’ Biometrics Council expressed serious discomfort with this kind of application.

The regulation doesn’t require any disclosure by supermarkets that biometric identification systems are in operation. However, it would potentially require disclosure of the use of categorisation systems for tracking shoppers, depending on the features of the system.

Airports and public transport

Remote, real-time biometric identification in airports (in place of manual passport checks and check-ins) and on public transport (in place of a rail pass or bank card) would be permitted by the regulation, within the restrictions applicable to high-risk AI systems. Emotion-recognition systems could also be used to detect ‘nervous’ or ‘suspicious’ travellers, and categorisation systems could be used to track ‘fare dodgers’. However, airports and public transport would probably be considered ‘publicly accessible spaces’ and would therefore fall within the restrictions applicable to police use of biometric-identification systems on those premises.

Conclusion

If the regulation is a litmus test of current thinking around biometrics technologies, it’s notable that it suffers from an outdated distinction between biometric identification and categorisation, fails to adequately grapple with the risks of emotion recognition, and relies heavily on a system of self-regulation and certification. As the first attempt to contain the unregulated growth of AI applications, it is laudable for attempting what many thought impossible – a cross-sectoral framework for identifying and mitigating the risks AI systems pose. The focus on risk mitigation at the pre-market stage, however, inhibits real consideration of the wide range of potential use cases for biometrics technologies, and results in certain concerning applications falling through the cracks.

The Ada Lovelace Institute continues to work on the governance of biometrics, and intends to publish the independent review of the governance of biometrics by Matthew Ryder in summer 2021.

 

We will also be following the debate around the EU AI regulation as it connects to our work on the regulatory inspection of algorithms, and public sector use of AI.

Image credit: Artystarty

  1. On our reading of Article 6(2) and Annex III, biometric identification and categorisation systems only fall into the high-risk category when they are used for ‘biometric identification’, despite the definition of categorisation not containing an identification element. Either this is a drafting error – and biometric categorisation does qualify as a high-risk system even when not used for identification – or it is a substantial oversight in the regulation.
