
Facial recognition and policing – a test case of technology and consent

Giles Herdale argues for urgent action on biometric technology to preserve the principle of policing by consent in a digital age.

Giles Herdale

9 September 2019

Reading time: 6 minutes


Sir Robert Mark, widely regarded as one of the greatest Metropolitan Police Commissioners of the post-war era, described policing as “the anvil on which society beats out the problems and abrasions of social inequality, racial prejudice, weak laws and ineffective legislation.”1 These comments were made almost 50 years ago, in the context of Mark’s relentless drive to reform and modernise a Met then associated with corruption and racism. And yet the same remarks could apply equally to the current controversy over police powers and trials of privacy-intrusive technology.

The focus on trials of automated facial recognition being carried out by police forces in the UK illustrates a long-standing source of controversy about police powers and capabilities. Indeed, this debate goes back to the foundation of policing as it is now understood, in the early 19th century, when there was deep unease in some quarters that policing would become a tool of repression of the citizen by the state. It is no coincidence that the first Commissioners of the Metropolitan Police set about winning public consent by setting out principles of prevention, public cooperation, minimum use of force, fairness and restraint.

These nine principles, attributed to Sir Robert Peel, set out the doctrine of policing by consent, which has shaped and differentiated British policing for nearly 200 years and remains central today. Policing must negotiate daily the tightrope between regard for liberty and the use of coercive powers, all the while maintaining the trust and support of the public.

Policing must adapt to keep up with offenders

We are now in a period of unprecedented flux, where demands and expectations of policing are changing faster than ever before. The traditional, locally organised and accountable model of policing envisaged at the time of Peel is challenged by the globalisation of digital communications and by increasing mobility and connectivity. Rapid rises in the prevalence of online offending and the growth of digital evidence have placed existing systems and processes for crime prevention and investigation under considerable stress. For example, referrals of online child abuse and exploitation have risen, and the volume of data associated with complex investigations increasingly exceeds what can be reviewed by humans alone.

It is therefore vital for the continued relevance and effectiveness of policing that it is able to engage with these changing requirements and adapt accordingly. This need for innovation has been highlighted by Sir Tom Winsor, Chief Inspector of Constabulary, the independent regulator of policing in England and Wales: “It is essential that the police are given the means to [invest in new technology]. For example, body-worn video, fully-functional hand-held mobile devices, facial recognition and artificial intelligence, and the connected systems and infrastructure to support them, are all things in which police forces must invest for the long term. If they don’t, they are left playing catch-up as offenders intensify and increase their abuse of modern technology to cause harm.”

New methods of crime prevention, investigation and evidence gathering are being developed in response to these changing demands, often involving privacy-intrusive technology, including (but not limited to) facial recognition. Other applications include downloading data from devices for investigative purposes, machine learning algorithms that search datasets against matching criteria, and predictive policing applications such as hotspot mapping and offender profiling. All depend on the collection and analysis of significant amounts of data, including sensitive personal data.
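To make concrete what searching a dataset against matching criteria typically involves in live facial recognition, the sketch below shows one common approach: comparing a face embedding extracted from a camera frame against a watchlist of embeddings, using a similarity threshold. It is a minimal, purely illustrative example; the embedding size, the names and the 0.6 threshold are assumptions for illustration, not a description of any system deployed by a police force.

```python
# Purely illustrative sketch of watchlist matching with face embeddings.
# Deployed automated facial recognition systems are far more complex;
# all names, sizes and the threshold here are hypothetical.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to the probe exceeds the threshold."""
    candidates = [(name, cosine_similarity(probe, emb))
                  for name, emb in watchlist.items()]
    matches = [c for c in candidates if c[1] >= threshold]
    return sorted(matches, key=lambda c: c[1], reverse=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical watchlist of 128-dimensional embeddings.
    watchlist = {f"person_{i}": rng.normal(size=128) for i in range(5)}
    # Simulate a camera frame producing a noisy embedding of person_2.
    probe = watchlist["person_2"] + rng.normal(scale=0.3, size=128)
    print(match_against_watchlist(probe, watchlist))
```

The threshold is where the accuracy concerns discussed below bite: set it lower and more people are wrongly flagged; set it higher and genuine matches are missed, and error rates may differ across demographic groups.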

It is in this context that trials by police forces of facial recognition technology need to be considered. There are significant concerns over the deployment of such technology, well described elsewhere: algorithmic bias; accuracy and reliability, especially in matching diverse facial characteristics; the collateral intrusion of conducting biometric scans in public places; and the broader question of whether such intrusive technology is necessary and proportionate. The lack of an explicit legal framework has led to calls for a moratorium on the use of facial recognition until these concerns have been addressed.

Mission-led, not technology-led

Last week the High Court ruled on the challenge brought against South Wales Police’s trials of an Automated Facial Recognition (Locate) system, finding that there was a lawful basis for the trial and that the police had complied with the requirements of the Human Rights Act. Nonetheless, questions about the proportionality and efficacy of such deployments are unlikely to go away, and the Chief Constable of South Wales, Matt Jukes, acknowledged as much in responding to the judgment: “I’m pleased that the court has recognised the responsibility that South Wales Police has shown in our programme. There is, and should be, a political and public debate about wider questions of privacy and security. It would be wrong in principle for the police to set the bounds of our use of new technology for ourselves.”

This debate about the use of such technology in policing has to start with clarity over the requirement. It must be mission-led, rather than technology-led, and include consideration of necessity, proportionality and collateral intrusion. It needs to be underpinned by transparency and independent oversight from the outset. Yet at present there is no consistent national framework to govern the deployment of privacy-intrusive capabilities by police and to ensure that their use is necessary, proportionate and effective. There has been a notable lack of policy direction from either Government or any of the national policing bodies. Instead it has been left to individual forces, supported in some cases by local ethics panels, to muddle through this complex and legally challenging landscape.

Having been involved with the Independent Digital Ethics Panel for Policing, which for the past five years has sought to provide guidance and direction to a range of practitioners operating in this space, I have seen that the lack of a shared framework has become increasingly untenable. It is clearly unrealistic to expect individual police forces (of which there are 50 across the UK) to separately navigate technological complexity and ethical ambiguity across a host of potential applications. As Peter Fussey and Daragh Murray highlight in the Independent Report on the London Metropolitan Police Facial Recognition Trials: “the absence of national leadership at government level, and clear lines of responsibility about whether trials should be conducted, and if so how, leaves police evaluation teams with the enormous task not only of conducting scientific evaluation, but also compensating for a lack of national leadership by recreating and reinterpreting policy anew.”

There is a strong public interest in maintaining police effectiveness and legitimacy in line with the principles set out by Peel. As such, the move by the Centre for Data Ethics and Innovation, in partnership with RUSI, to develop a draft code of practice for predictive policing and algorithmic use is to be welcomed. It will help set the policy direction that has been conspicuous by its absence, but implementation must involve a range of actors: government, police leaders and practitioners, oversight bodies and regulators, independent scrutiny and advisory bodies, and the public. Understanding the basis for public consent requires a breadth of independent research, review of legislation and public consultation, work that I am glad the Ada Lovelace Institute is pursuing. The speed of change in technology and its impact on fundamental rights make this a vital national priority, if the principle of policing by consent is to be preserved in a digital age.

About the author

Giles Herdale is an Associate Fellow at the Royal United Services Institute (RUSI) and Co-Chair of the Independent Digital Ethics Panel for Policing (IDEPP).

Footnotes

  1. Speech given by the Commissioner of the Metropolis at Bramshill Police College, 14 August 1975.