More than 60% of the British public support ‘laws and regulations’ to guide the use of AI, according to a new national survey by the Ada Lovelace Institute and The Alan Turing Institute, with substantial input from LSE’s Methodology Department, published today (Tuesday 6 June).
The nationally representative survey of over 4,000 adults in Britain comes at a time when conversations around AI regulation and the need to mitigate its risks are intensifying. The survey highlights the complex and nuanced range of attitudes towards the use of AI across different contexts.
‘AI’ can be difficult to define*, subject to multiple interpretations and often poorly understood, which is why the survey asked about specific and clearly described uses of AI, from facial recognition and targeted advertising to driverless cars and robotic vacuum cleaners.
The survey found that the public see clear benefits for many uses of AI, particularly technologies relating to health, science and security. When offered 17 examples of AI technologies to consider, respondents thought the benefits outweighed the concerns for 10 of these. For example, 88% of the public say AI is beneficial for assessing the risk of cancer, 76% can see the benefit of virtual reality in education and 74% think climate research simulations could be advanced using the technology.
The survey also showed that people often think speed, efficiency and improving accessibility are the main advantages of AI. For example, 82% think that earlier detection is a benefit of using AI with cancer scans and 70% feel speeding up border control is a benefit of facial recognition technology.
However, attitudes do vary across different technologies. Almost two thirds (64%) are concerned that workplaces will rely too heavily on AI for recruitment, rather than using professional judgement, and 61% are concerned that AI will be less able than employers and recruiters to take account of individual circumstances.
Public concerns extend beyond the use of AI in the workplace. People are most concerned about advanced robotics: for example, 72% express concern about driverless cars and 71% about autonomous weapons. Over three quarters (78%) worry that the use of robotic care assistants in hospitals and nursing homes would mean patients missing out on human interaction, and over half (57%) are concerned that smart speakers gather personal information that could be shared with third parties.
Awareness of AI technologies also varies greatly depending on context: 93% are aware of the use of AI in facial recognition for unlocking mobile phones, but only 19% are aware of the use of AI for assessing social welfare eligibility.
The survey also asked about attitudes towards the governance of AI more generally. When asked what would make them more comfortable with the use of AI, almost two thirds (62%) chose ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’ and 59% chose ‘clear procedures for appealing to a human against an AI decision’.
The Ada Lovelace Institute and The Alan Turing Institute conducted this new survey to understand how people experience AI, their awareness of these technologies, the benefits and concerns they perceive, and how attitudes differ across different groups of people.
By informing AI researchers, developers and policymakers about the concerns and benefits that the public associate with AI, this research can help to maximise the potential benefits of AI.
Andrew Strait, Associate Director at the Ada Lovelace Institute, said:
‘AI technologies are developing faster than ever and more organisations, in both the private and public sector, are expanding their use of AI. However, it is important that companies and policymakers are aware of public expectations and concerns.
‘Our research provides a detailed picture of how the public perceive the use of AI across a range of contexts. We hope that it will help AI companies and policymakers understand and respond to the public’s nuanced attitudes towards AI and its regulation.’
Professor Helen Margetts, Programme Director for Public Policy at The Alan Turing Institute and Principal Investigator, said:
‘We conducted this survey to better understand people’s attitudes to AI technologies, at a time when AI has become entwined with many aspects of daily life. It’s important to delve into people’s perceptions of the possible benefits and concerns associated with the various uses of AI. The survey showed that for the majority of technologies, people saw more benefits than concerns. But their views of these technologies were highly nuanced, in that they could see benefits and concerns simultaneously.
‘Studies like this can be helpful in considering the development and deployment of AI, especially with the advent of newer generations of AI such as ChatGPT. People’s clear support for the regulation of AI showed how important it is to get the governance right, to ensure that the uses of these technologies embody fairness and transparency, and that people can benefit from them safely.’
*Participants were given the following definition of AI: ‘AI is a term that describes the use of computers and digital technology to perform complex tasks commonly thought to require intelligence. AI systems typically analyse large amounts of data to take actions and achieve specific goals, sometimes autonomously (without human direction).’
This report was co-authored by The Alan Turing Institute (Professor Helen Margetts, Dr Florence Enock, Miranda Cross) and the Ada Lovelace Institute (Aidan Peppin, Roshni Modhvadia, Anna Colom, Andrew Strait, Octavia Reeve) with substantial input from LSE’s Methodology Department (Professor Patrick Sturgis, Katya Kostadintcheva, Oriol Bosch-Jover).