Carly Kind introduces the Ada Lovelace Institute’s emerging research on understanding public attitudes to facial recognition technologies, proposing a way forward for regulators, policymakers and industry in the UK.
Some of the most ethically challenging applications of data and AI are in the field of digital recognition and prediction technologies such as facial recognition, biometric identification, predictive policing, social credit scoring and consumer profiling. At stake in the public and private development and adoption of these technologies are not only individual rights and collective values, but fundamental understandings of identity, social mobility, and liberties.
As a result, we believe there needs to be an informed policy debate and public conversation on the ethical and legal conditions for the deployment of facial recognition technologies, and the other digital recognition technologies which will follow. We have taken the first step in catalysing this public conversation in the UK, partnering with YouGov to shine a light for the first time on public attitudes in Britain to public and private sector deployment of facial recognition technology. We expect to publish the early stage findings of that research in September 2019, followed up by a longer-term programme of wider research and public engagement.
What are the issues raised by facial recognition technology?
Facial recognition has emerged as the most urgent AI application that warrants better ethical understanding. Facial recognition is a visible and evocative embodiment of a data-driven future, with potentially widespread applications in areas of public and private life as diverse as border control, workforce productivity, policing of online and offline crime, and student performance. These technologies have become in some senses synonymous with and symbolic of the ethical questions posed by data-driven technologies more broadly.
The prominence of facial recognition in public and media discourse reflects the widespread commercial and public sector use of the technology. Indeed, limited applications of facial recognition have been used at borders and on online platforms for some time. As the sophistication of AI technologies has accelerated in recent years, more expansive applications of facial recognition technology have begun to be rolled out. Advancements in computer vision and probabilistic systems have improved the accuracy of facial recognition technologies, permitted their application in uncontrolled environments, and expanded their capabilities to include emotion detection and recognition.
Some examples of recent and current applications of facial recognition technologies include:
- Use in public spaces by public bodies such as the police: The UK’s Metropolitan Police have been running trials of live facial recognition systems over the past year, as have other police forces in the UK, including the South Wales Police. These trials have attracted extensive interest and are the subject of a judicial review challenge in Wales. Campaigners have argued the systems are inaccurate in 96% of cases.
- Use in airports for a wide range of purposes: While facial recognition has been in use at border entry points in airports for more than a decade, new technologies being piloted will soon permit travellers in UK and Australian airports to move through departures without encountering passport checks. These systems may be introduced for convenience and efficiency, but they also serve to enhance border control.
- Use in retail and supermarkets: American department stores and fast food chains in China are using facial recognition to map shopper responses and track customer purchases. In supermarkets across the UK, facial recognition is being used to estimate the age of shoppers when they buy age-restricted items.
- Use for commercial applications by big tech: Facebook and Google have built up their respective facial recognition systems, DeepFace and FaceNet, over the past five years, the commercial applications of which are seemingly endless. Amazon’s facial recognition product, Rekognition, is being sold to police forces, despite shareholder efforts to prevent the sales on ethical grounds.
- Use internationally to identify and profile individuals and groups: CCTV cameras across China are being equipped with facial recognition tech tasked with identifying ethnic minorities. Police are being kitted out with AI-enabled sunglasses which perform facial recognition tasks. Chinese schools are using facial recognition to support “intelligent education”. India’s Aadhaar digital identity system – the world’s largest, with more than a billion entrants – has recently been equipped with facial recognition. Numerous other countries, from Brazil to Cameroon to Indonesia, are following suit, seeking to build national digital identity systems that use facial recognition technology. AI facial recognition systems claiming to predict people’s sexual orientation with a high degree of accuracy have garnered much interest from the Russian government. Japan is also expected to use facial recognition for identification of authorised persons during the 2020 Olympic Games in Tokyo.
Despite the ongoing rollout of facial recognition technologies in the UK and internationally, there is a concerning lack of clarity about the legal and policy environment in which they are being used, in both the public and private sectors. Moreover, the ethical implications and considerations are complex, varied, and unresolved, and include not only concerns around the technology’s accuracy, but also its potentially discriminatory impact, public legitimacy, social licence, and societal consequences. This ethical and legal shortfall has resulted in, at one extreme, regulatory inaction, and at the other, blunt measures such as an outright ban on the tech.
The case for a moratorium on the use of facial recognition tech
We need to build the evidence base to provide the foundations upon which policy, regulation and technical development in the field of digital recognition technologies should be built. However, this will take time. Indeed, it should take time – technology policy and regulation should not be reactive or rushed, but rather fit for purpose and sufficiently adaptable to avoid being made redundant by technological advancements. But what should be done in the meantime?
We are engaging stakeholders in discussions about the establishment – by consensus – of a voluntary moratorium on future public and private sector deployment of facial recognition technology. Occupying the middle ground between inaction and prohibition, a moratorium provides time and space for informed thinking and the building of public trust. It has been used to good effect in the field of bioethics: the United Kingdom’s insurance sector agreed a moratorium, with specific and defined contours, on the use of genetic testing in insurance settings in 2001, and moratoria are also currently being discussed in the context of gene editing. A moratorium on facial recognition technology, and surveillance technology more generally, also enjoys support from civil society actors such as the think-tank Doteveryone and the UN Special Rapporteur on freedom of expression, David Kaye.
We’ll be taking conversations on a moratorium on facial recognition forward over the summer in the hope of progressing this idea. In the meantime, we’ll be bringing other voices into this conversation, and hope to share some of these with you on our blog. We firmly believe that the ethical take-up and dissemination of technologies must be accompanied by – and indeed, is dependent on – input from a diverse range of actors and voices, and we see part of our role as convening and providing a platform for those voices.
Get in touch if you would like to find out more about our work on biometrics and facial recognition.