
The human rights flaws in police facial recognition trials

Dr Daragh Murray explains how use of live facial recognition technology by the Metropolitan Police Service fails to comply with human rights law.

22 August 2019

Reading time: 5 minutes

Dr Daragh Murray

At the beginning of July, Professor Peter Fussey and I released a report on the Metropolitan Police Service’s use of live facial recognition technology to identify specific individuals in public places. Operating in real time, the system detects a person’s face, produces a digital copy of it and then uses biometric data to match it against a set of existing records, known as a watchlist.
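To make the matching step concrete, the sketch below shows one common way such a pipeline can work. It is not the Metropolitan Police Service’s actual system, whose internals were not published in the report: it assumes each detected face has already been reduced to a numeric embedding vector, and that a candidate counts as a match when its similarity to a watchlist record exceeds a threshold. All names, the 128-dimensional embeddings and the 0.8 threshold are hypothetical.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical cut-off; real systems tune this value


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_against_watchlist(face_embedding: np.ndarray, watchlist: dict) -> str | None:
    """Return the identifier of the best watchlist match above the threshold, or None.

    `watchlist` maps a record identifier to that record's stored embedding.
    """
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for record_id, stored_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, stored_embedding)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id


# Stand-ins for the detection and encoding stages, which in a live system
# would come from a camera feed and a face-embedding model.
rng = np.random.default_rng(0)
watchlist = {
    "record-001": rng.normal(size=128),
    "record-002": rng.normal(size=128),
}
# A noisy re-observation of the person behind record-002.
detected_face = watchlist["record-002"] + rng.normal(scale=0.05, size=128)

print(match_against_watchlist(detected_face, watchlist))  # -> "record-002"
```

The choice of threshold in such a system directly trades false matches against missed matches, which is one reason the report's questions about how the trials measured accuracy and utility matter.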

We joined the final six of ten live facial recognition test deployments, beginning our independent academic research in June 2018. We sat in on planning and debriefing sessions, had access to live facial recognition control rooms, and reviewed legal documents and the Metropolitan Police Service’s own research methodology.

The study found a fundamental lack of engagement with human rights throughout the live facial recognition trial process. This is a particular concern: our research led us to conclude that human rights compliance was not effectively built into the Metropolitan Police Service’s decision-making processes from the outset.

Our report also raised a number of concerns regarding specific issues arising from the trials. These concerns were shared with the Metropolitan Police Service, who declined to exercise their right of reply.

First, the research methodology adopted by the Metropolitan Police Service focused primarily on the technical aspects of the trial process. There was little clarity as to how the test deployments were intended to satisfy the non-technical objectives. In particular, it is unclear how the trial process intended to evaluate the utility of live facial recognition as a policing tool. As such, the overall benefit of conducting the trials – from a research perspective – was questionable.

Second, since live facial recognition interferes with human rights protections, its use must be ‘in accordance with the law’. A primary purpose of this requirement is to protect against arbitrary rights interferences, and to ensure that the exercise of State authority is foreseeable. However, no explicit legal basis exists which authorises and regulates the use of live facial recognition, and the implicit legal basis identified by the Metropolitan Police Service is both unclear and overly broad. This raises serious concerns regarding the legality of the live facial recognition deployments.

Third, the Metropolitan Police Service did not effectively engage with the ‘necessary in a democratic society’ requirement established by human rights law. This requirement is intended to ensure that the potential harm caused by the deployment of live facial recognition does not outweigh its utility.

Engaging with the ‘necessary in a democratic society’ test therefore requires some form of impact or risk assessment. The Metropolitan Police Service did prepare a number of impact and risk assessment documents, but these failed to engage fully with the likely impact of facial recognition technology. A key cause of this problem was the classification of live facial recognition as an overt surveillance tool, and a consequent failure to appreciate its invasiveness. For instance, the right to privacy analysis was restricted to the small subset of individuals on the watchlist rather than extending to everyone subject to biometric processing, even though any form of biometric processing engages the right to privacy.

Fourth, our report raised concerns about a lack of broader engagement with the public and civil society organisations, particularly at a point in time when this engagement could have informed the trial process.

Not just a data protection issue – the case for a moratorium

Ultimately, our report highlights the human rights concerns associated with the use of facial recognition technology. The only way to address these concerns is to effectively incorporate human rights considerations into decision-making processes – particularly pre-deployment – and to develop overarching regulation. As noted by the Surveillance Camera Commissioner, Tony Porter, this is not just a data protection issue, and ‘the use of technology enhanced surveillance has to be conducted and held to account within a clear and unambiguous framework of legitimacy and transparency.’

In this context we welcome the Ada Lovelace Institute’s call for a moratorium or ‘pause’ on the use of live facial recognition. A pause in police deployment is necessary both to ensure compliance with human rights law and to make time for an informed national public debate. It is important that this debate be nuanced: given the significantly different ways in which live facial recognition can be deployed, there is no ‘one size fits all’ approach, and each distinct deployment must be evaluated for human rights compliance. As a population, we must also decide whether we want this technology in our lives and, if so, under what circumstances.

Essentially, the police must demonstrate the necessity of the technology, weighing it against existing, less intrusive tools and ensuring that its deployment does not undermine democratic principles. If certain live facial recognition deployments can then be considered necessary in a democratic society, a sufficient legal basis must be established to guarantee that live facial recognition does not interfere with human rights arbitrarily. The police must also inform the general public of deployments in a meaningful way, paying careful attention to concerns within local communities.

The importance of a public debate is reinforced by recent investigations indicating the extent of private deployments of facial recognition in London and around the UK.

In the hope of fostering a national public debate, the ESRC Human Rights, Big Data and Technology project will host a public event at the Royal Society in London on 10 September, with invited speakers presenting on facial recognition technologies and human rights.

About the author

Dr Daragh Murray is a lecturer at the Human Rights Centre & School of Law at the University of Essex.