Report

How do people feel about AI?

A nationally representative survey of public attitudes to artificial intelligence in Britain

Roshni Modhvadia

6 June 2023


Browse the findings on our dedicated microsite 'Attitudes to AI'

'Attitudes to AI' highlights key findings from 'How do people feel about AI?', with interactive charts and bite-size content.

Executive summary

Artificial intelligence (AI) technologies already interact with many aspects of people’s lives. Their rapid development has resulted in increased national attention on AI and surrounding policy. In November 2022, the Ada Lovelace Institute and The Alan Turing Institute conducted a nationally representative survey of over 4,000 adults in Britain, to understand how the public currently experience AI.

We asked people about their awareness of, experience with and attitudes towards different uses of AI. This included asking people what they believe are the key advantages and disadvantages, and how they would like to see these technologies regulated and governed.

While the term AI appears frequently in public discourse, it can be difficult to define and is often poorly understood, particularly as it encompasses a wide range of technologies that are used in different contexts and for distinct purposes. There is no single definition of AI, and the public may see the term applied in a wide variety of settings.

Making matters even more challenging is the fast pace of AI development. OpenAI’s ChatGPT was released two weeks after we began our fieldwork. The widespread media coverage of generative AI – AI that can generate content such as images, videos, audio and text – has probably already impacted public discourse, and this survey therefore reflects the attitudes of the British public before the surge of interest in this topic.

The multifaceted and continually evolving nature of AI can present a challenge for public attitudes research, as it can be difficult to ask people meaningfully how they feel about a complex topic which may evoke different interpretations. Taking this into account, we focused on asking people about specific technologies that make use of AI and we gave people clear descriptions of each.

We asked the British public about their attitudes towards and experiences with 17 different uses of AI. These uses ranged from applications that are visible and commonplace, such as facial recognition for unlocking mobile phones and targeted advertising on social media; to those which are less visible, such as assessing eligibility for jobs or welfare benefits; and applications often associated with more futuristic visions of AI, such as driverless cars and robotic care assistants.

For each specific use of AI, people were given the opportunity to express their perceptions of the benefits and their concerns about the technology, recognising that people may see potential benefit and concern simultaneously. We also offered people the chance to tell us how they thought each technology might yield both benefits and risks. Additionally, respondents were asked more general questions about their preferences for AI governance and regulation, including how explainable they would like AI decision-making to be.

Broadly, our findings highlight the complex and nuanced views that people in Britain have about the many different uses of AI across public and personal life. People’s awareness varies greatly across the different technologies we asked about, with the highest levels of awareness reported for everyday applications, such as facial recognition for unlocking mobile phones, and applications that are less commonplace but have received media attention, such as driverless cars. Public awareness is lowest for less visible technologies, such as AI for assessing eligibility for welfare or risk in healthcare outcomes. Key findings relating to public attitudes across these technologies are summarised below.

Key findings:

  • For the majority of AI uses that we asked about, people hold broadly positive views, but they also express concerns about some uses. Many people think that several uses of AI are generally beneficial, particularly technologies related to health, science and security. For 11 of the 17 AI uses we asked about, most people say they are somewhat or very beneficial. The use of AI for detecting the risk of cancer is seen as beneficial by nine in 10 people.
  • The public also express concern over some uses of AI. For six of the 17 uses, over 50% find them somewhat or very concerning. People are most concerned about advanced robotics such as driverless cars (72%) and autonomous weapons (71%).
  • People’s perceived benefit levels outweigh concerns for 10 of the 17 technologies, while concerns outweigh benefits for five of the 17. For two technologies, benefits and concerns are evenly balanced.
  • Digging deeper into people’s perceptions of AI shows that the British public hold highly nuanced views on the specific advantages and disadvantages associated with different uses of AI. For example, while nine out of 10 British adults find the use of AI for cancer detection to be broadly beneficial, over half of British adults (56%) are concerned about relying too heavily on this technology rather than on professional judgements, and 47% are concerned about the difficulty in knowing who is responsible for mistakes when using this technology.
  • People most commonly think that speed, efficiency and improving accessibility are the main advantages of AI across a range of uses. For example, 70% feel speeding up processing at border control is a benefit of facial recognition technology.
  • However, people also note concerns about AI replacing professional judgements, failing to account for individual circumstances, and lacking transparency and accountability in decision-making. For example, almost two-thirds (64%) are concerned that workplaces will rely too heavily on AI for recruitment rather than on professional judgements.
  • Additionally, for technologies like smart speakers and targeted social media advertisements, people are concerned about personal data being shared. Over half (57%) are concerned that smart speakers will gather personal information that could be shared with third parties while 68% are concerned about this for targeted social media adverts.
  • The public want regulation of AI technologies, though preferences for who should be responsible for it differ by age.
  • The majority of people in Britain support regulation of AI. When asked what would make them more comfortable with AI, 62% said they would like to see laws and regulations guiding the use of AI technologies. In line with our findings showing concerns around accountability, 59% said that they would like clear procedures in place for appealing to a human against an AI decision.
  • When asked about who should be responsible for ensuring that AI is used safely, people most commonly choose an independent regulator, with 41% in favour. Support for this differs somewhat by age, with 18–24-year-olds most likely to say companies developing AI should be responsible for ensuring it is used safely (43% in favour), while only 17% of people aged over 55 support this.
  • People say it is important for them to understand how AI decisions are made, even if making a system explainable reduces its accuracy. A more complex system may be more accurate, but its decisions may be harder to explain. When considering whether explainability is more or less important than accuracy, the most common response is that humans, not computers, should make ultimate decisions and be able to explain them (selected by 31%). This sentiment is expressed most strongly by people aged 45 and over. Younger adults (18–44) are more likely to say that an explanation should only be given in some circumstances, even if that reduces accuracy.

Taken together, this research makes an important contribution to what we know about public attitudes to AI and provides a detailed picture of the ways in which the British public perceive issues surrounding the many diverse applications of AI. We hope that the research will be useful in helping researchers, developers and policymakers understand and respond to public expectations about the benefits and risks that these technologies may pose, as well as public demand for how these technologies should be governed.

1.   How to read this report

If you’re a policymaker or regulator concerned with AI technologies:

  • The report highlights the nuance in the perceived benefits and concerns that adults in Britain identify across a range of AI uses. Section 4.2 presents an overview of the perceived benefits and concerns; and Section 4.3 provides more detail on the specific benefits and concerns for each type of technology.
  • Section 4.4 identifies a widely shared expectation for independent regulation that involves explainability and redress. It includes more detail on age differences and expectations of responsibility by different stakeholders.

If you’re a developer or designer building AI-driven technologies, or an organisation or body using them or planning to incorporate them:

  • Section 4.4 includes findings related to the expectations and trust the public have for different stakeholders, including private companies and government, and the views from the public on who is responsible for ensuring AI is used safely.
  • Sections 4.2 and 4.3 cover people’s perceived benefits and concerns for different AI uses, with insights on expectations around capabilities and risk.

If you’re a researcher, civil society organisation, public participation practitioner or member of the public interested in technology and society:

  • Section 3 includes an overview of the survey methodology. There is more detail in the appendices and the separate technical report.[1] In Appendix 6.1, we include the descriptions of each AI use that we shared with respondents before asking about their awareness and experience of the uses; and about their view of the potential benefits and concerns.
  • Section 4.1 includes an overview of people’s awareness and experience of different AI uses. An overview of overall net benefits and concerns for each technology can be found in Section 4.2. Section 4.3 includes specific perceived benefits and concerns about particular technologies.

2.   Introduction

Artificial intelligence (AI) technology, and its widespread use in many aspects of public and private life, is developing at a rapid pace. It is therefore crucial to understand how people experience the many applications of AI, including their awareness of these technologies, their concerns, the perceived benefits, and how attitudes differ across demographic groups. To effectively inform the design of policy responses, it is also important to understand people’s views on how these technologies should be governed and regulated.

To answer these questions, The Alan Turing Institute and the Ada Lovelace Institute partnered to conduct a new, nationally representative random sample survey of the British public’s attitudes towards, and experiences of, AI. While previous surveys have tackled related questions, there remain several gaps in our understanding of public attitudes to AI.

For example, other work has tended to ask about a single definition of AI or has only covered specific uses, meaning that findings regarding positive or negative sentiment toward AI are broad and somewhat ambiguous. Additionally, few large-scale studies elicit people’s preferences for how AI technologies should be regulated, or how explainable a decision made by an AI system should be.

Asking people about their views on AI in general can be difficult because the term is hard to define and often poorly understood. Previous surveys have tended to find that people’s knowledge of AI is low, and that few are able to define the term.

Only 13% of respondents in a 2022 Public Attitudes to Data and AI tracker survey[2] and 10% in a 2017 Royal Society survey[3] reported being able to give a full explanation of AI. However, the limited evidence available to date suggests that people tend to be aware of some specific applications of AI, including in healthcare, job application screening, driverless cars, and military uses.[4] [5] [6] [7] [8]

With these considerations in mind, we sought to examine attitudes towards a large and varied set of AI uses in society. We wanted to include routine uses that people may not typically think of as AI, and that are often excluded from other studies, such as targeted advertising and smart speakers, as well as uses more commonly associated with the term, such as advanced robotics.

Importantly, we aimed to capture the potential complexity of the public’s views. Previous studies suggest that people’s attitudes to AI are nuanced and vary according to specific uses and across countries.[9] For example, people tend to be more supportive of the use of AI where it enhances human decision-makers, such as in healthcare settings,[10] but are more negative where it is seen as replacing human decision-making, such as in cases of criminal justice and driverless cars.[11] We therefore sought to delve deeper into some of the factors underlying these differences, offering people the chance to express both benefits and concerns about uses of AI, recognising that people may simultaneously see positives and negatives in these technologies.

We also wanted to understand what people think about the specific benefits and risks associated with different AI uses. Other surveys have found that people report feeling concerned about the potential risks associated with AI, rather than feeling optimistic about the benefits. For example, less than half of the US public believe AI technologies will ‘improve things over the current situation’, and in particular they express high concern about the potential for AI to increase inequality.[12]

To build on these findings, we offered people the chance to express how they thought each technology might yield benefits and risks by selecting from a range of possibilities designed to reflect overall themes including accuracy, speed, bias, accountability, data security, job security and more. Our aim was to acknowledge that people may have nuanced views of all the possible benefits and concerns surrounding AI uses, rather than simply measuring positive or negative sentiment, or attitudes to only a few potential risks.

To effectively inform policy responses to public concerns surrounding the development and use of AI, it is crucial to understand attitudes towards its governance and regulation. Previous research shows some support for independent or government regulation of AI, with a 2019 UK Department for Business, Energy and Industrial Strategy (BEIS) report showing 33% favour an independent AI regulator, and 22% favour a government regulator.[13]

The same report showed that the UK public are not confident that UK data protection regulations can adapt to new technologies, expressing concerns over adequate regulation in the face of a fast-changing landscape. Additionally, citizens’ juries have found that people prioritise the explainability of an AI system over its accuracy,[14] and other work offers important resources and guidelines for aiding AI explainability.[15] However, there is currently little available evidence about explainability preferences from a large-scale and recent sample.

Through the results of this survey, we provide a detailed picture of how the British public perceive issues surrounding the many diverse applications of AI. We hope that the research will be useful for informing researchers, developers and policymakers about the concerns and benefits that the public associate with AI, thereby helping to maximise the potential benefits of AI.

3. Methodology

In this chapter we provide a summary of the key aspects of the study’s methodology. A technical report[16] containing full details of the methodological approach, including how we designed our questions for the study, can be accessed separately.[17]

Sample

The sample was drawn from the Kantar Public Voice random probability panel.[18] This is a standing panel of people who have been recruited to take part in surveys using random sampling methods. At the time the survey was conducted, it comprised 24,673 active panel members who were resident in Great Britain and aged 18 or over. This subset of panel members was stratified by sex/age group, highest educational level and region, before a systematic random sample was drawn.

We undertook fieldwork in November and December 2022, and issued the survey in three stages: a soft launch with a random subsample of 500 panel members, a launch with the remainder of the main panel members, and a final launch with reserve panel members.

A total of 4,010 respondents completed the survey and passed standard data quality checks.[19] The majority of respondents completed the questionnaire online, while 252 were interviewed by telephone either because they do not use the internet or because this was their preference.

Respondents were aged between 18 and 94. Unweighted, a total of 1,911 (48%) identified as male, and 2,096 (52%) as female, with no sex recorded for three participants. The majority (3,544; 88%) of respondents were white; 261 (7%) were Asian or Asian British; 90 (2%) were Black, African, Caribbean or Black British; and 103 (3%) were mixed, multiple or other ethnicities; with no ethnicity recorded for 12 participants.[20]

The data was weighted based on official statistics to match the demographic profile of the population (see technical report).[21] However, with a sample size of 4,010, it is not possible to provide robust estimates of differences across minority ethnic groups, so these are not reported here.
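
To illustrate how weighted estimates can be produced from data of this kind, the sketch below uses the R survey package. It is a minimal example only: the variable names (weight, benefit_cancer) and example values are hypothetical, and the actual weighting specification is described in the technical report.

```r
# Minimal sketch only: variable names and values are hypothetical; the actual
# weighting specification is described in the technical report.
library(survey)

# Hypothetical example data: a demographic weight and one attitude item
survey_data <- data.frame(
  weight         = c(0.8, 1.1, 1.2, 0.9),
  benefit_cancer = c(4, 3, 4, 2)   # e.g. 1 = "not at all" to 4 = "very beneficial"
)

# Declare the survey design, attaching the demographic weights to each respondent
design <- svydesign(ids = ~1, weights = ~weight, data = survey_data)

# Weighted estimate for the attitude item
svymean(~benefit_cancer, design, na.rm = TRUE)
```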

Survey

We told respondents that the questions focus on people’s attitudes towards new technologies involving artificial intelligence (AI), and presented the following definition of AI to them:

AI is a term that describes the use of computers and digital technology to perform complex tasks commonly thought to require intelligence. AI systems typically analyse large amounts of data to take actions and achieve specific goals, sometimes autonomously (without human direction).

Respondents then answered some general questions about attitudes to new technologies and how confident they feel using computers for different tasks. They were then asked questions about their awareness of and experience with specific uses of AI; how beneficial and concerning they perceive each use to be; and about the key risks and benefits associated with each.

The specific technologies we asked about were:

  • facial recognition (uses were unlocking a mobile phone or other device, border control, and in policing and surveillance)
  • assessing eligibility (uses were for social welfare and for job applications)
  • assessing risk (uses were risk of developing cancer from a scan and loan repayments)
  • targeted online advertising (for consumer products and political adverts)
  • virtual assistants (uses were smart speakers and healthcare chatbots)
  • robotics (uses were robotic vacuum cleaners, robotic care assistants, driverless cars and autonomous weapons)
  • simulations (uses were simulating the effects of climate change and virtual reality for educational purposes).

These 17 AI uses were chosen based on emerging policy priorities and increased usage in public life. See Section 4.1 or Appendix 6.1 for the descriptions of each use. See the technical report[22] for information about our questionnaire design.

To keep the duration of the survey to an average of 20 minutes, we employed a modular questionnaire structure. Each person responded to questions about nine of the 17 different AI uses. All participants were asked about facial recognition for unlocking a mobile phone and then responded to one of the two remaining uses of facial recognition.

For each of the remaining technology groups except robotics, they were then asked about one of the two uses. For robotics, which had four uses, each participant considered either robotic vacuum cleaners or robotic care assistants, and then either driverless cars or autonomous weapons. After responding to questions for each specific AI use, participants answered three general questions about AI governance, regulation and explainability.
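
One way to picture this modular routing is the minimal R sketch below. The simple random allocation within each technology group is an assumption for illustration only; the actual routing procedure is detailed in the technical report.

```r
# Illustrative sketch of the modular routing: each respondent sees nine of the
# 17 uses. The simple random allocation is an assumption for illustration; the
# actual procedure is detailed in the technical report.
assign_modules <- function() {
  c(
    "facial recognition: unlocking a mobile phone",          # asked of everyone
    sample(c("facial recognition: border control",
             "facial recognition: policing and surveillance"), 1),
    sample(c("eligibility: welfare benefits", "eligibility: jobs"), 1),
    sample(c("risk: cancer from a scan", "risk: loan repayment"), 1),
    sample(c("advertising: consumer products", "advertising: political"), 1),
    sample(c("virtual assistants: smart speakers",
             "virtual assistants: healthcare"), 1),
    sample(c("simulations: climate change research",
             "simulations: education"), 1),
    sample(c("robotics: vacuum cleaner", "robotics: care assistant"), 1),
    sample(c("robotics: driverless car", "robotics: autonomous weapons"), 1)
  )
}

set.seed(42)       # for a reproducible example
assign_modules()   # the nine uses shown to one respondent
```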

The survey was predominantly made up of close-ended questions, with respondents being asked to choose from a list of predetermined answers.

Analysis

We analysed the data between January 2023 and March 2023, using descriptive analyses for all survey variables, followed up with chi-square testing of differences across specific demographic groups. We then used regression analyses to understand relationships between demographic and attitudinal variables and the perceived benefit of specific technologies (see Appendix 6.3 for further information).

We analysed the data using the statistical programming language R, and used a 95% confidence level to assess statistically significant results. Analysis scripts and the full survey dataset can be accessed on the Ada Lovelace Institute GitHub site.[23]
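
As a hedged sketch of this approach, the R fragment below cross-tabulates a hypothetical awareness item against age group and applies a chi-square test at the 95% confidence level. The column names and values here are invented for illustration; the published analysis scripts on GitHub are the authoritative record.

```r
# Sketch only: 'age_group' and 'aware_facial_recognition' are hypothetical
# column names; the published analysis scripts are the authoritative record.
survey_data <- data.frame(
  age_group                = c("18-44", "18-44", "45-64", "65+", "65+", "45-64"),
  aware_facial_recognition = c(TRUE, TRUE, TRUE, FALSE, TRUE, FALSE)
)

tab <- table(survey_data$age_group, survey_data$aware_facial_recognition)
prop.table(tab, margin = 1)   # descriptive: awareness rate within each age group
chisq.test(tab)               # difference is significant at 95% if p < 0.05
```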

In this report, we generalise from a nationally representative sample of the population of Great Britain to refer to the ‘British public’ (sometimes shortened to ‘the public’) or ‘people in Britain’ (sometimes shortened to ‘people’) throughout. This phrasing does not refer to British nationals, but rather to people living in Great Britain at the time the survey was conducted.

4. Key findings

We asked about the uses of AI listed below. Detailed definitions for each technology can be found in Appendix 6.1.

  • Facial recognition: to unlock a mobile phone; at border control; for policing and surveillance
  • Assessing eligibility: for welfare benefits; for a job
  • Determining risk: of cancer from a scan; of repaying a loan
  • Targeted advertisements online: for consumer products; for political parties
  • Virtual assistant technologies: smart speakers; virtual assistants for healthcare
  • Robotics: robotic vacuum cleaners; robotic care assistants; driverless cars; autonomous weapons
  • Simulations: to advance climate change research; virtual reality for education

4.1. Awareness and experience of AI uses

To understand people’s awareness of and experience with each of the AI technologies included, participants were asked to indicate whether they had heard of each technology before and their self-reported personal experience with each. The question on personal experience was not included for autonomous weapons, driverless cars, robotic care assistants and simulation technologies for advancing climate change research, where direct experience would be unlikely for most respondents.

Overall, awareness of and experience with AI technologies varies substantially according to the specific use.

Awareness of AI technologies is mixed. For 10 of the 17 technologies we asked about, over 50% of the British public say they have heard of them before. Awareness is highest for the use of facial recognition for unlocking mobile phones, with 93% having heard of this before. People are also largely aware of driverless cars (92%) and robotic vacuum cleaners (89%).

People are least aware of the use of AI for assessing eligibility for welfare benefits, with just 19% having heard of this before. People are also less aware of robotic care assistants (32%), using AI to detect risk of cancer from a scan (34%), and using AI to assess eligibility for jobs or risks relating to loan repayments (both 35%). It is important to note that people’s awareness of technologies for assessing risk and eligibility is relatively low. Some of these technologies are already being used in public services,[24] and these results show that people may be largely unaware of the technologies that help make decisions which directly impact their lives.

Awareness of AI technologies differs somewhat according to age, with people aged 75 and over less likely to indicate they have heard of the use of facial recognition for unlocking mobile phones (69% reported being aware, compared to 95% of under 75s), border control (61% reported being aware, compared to 72% of under 75s), or for consumer social media adverts (68% reported being aware, compared to 89% of under 75s).

Our findings about people’s awareness of AI technologies align with those from other studies, which highlight gaps in awareness of AI that are less visible in day-to-day life or the media.

For example, a Centre for Data Ethics and Innovation (CDEI) 2022 mixed-methods study[25] found that the public have high levels of awareness of more visible uses of AI, such as recommendation systems, and of futuristic associations of AI based on media images, such as robotics. In contrast, the same study found low levels of awareness of AI in technologies that are ‘part of wider societal systems’, such as the prioritisation of social housing.

People report mixed levels of personal experience with AI technologies. Over 50% of the public report personal experience with four of the 13 technologies we asked about. People report most experience with targeted online adverts for consumer products (with 81% reporting some or a lot of experience), smart speakers (with 64% reporting some or a lot of experience), and facial recognition for unlocking mobile phones and at border control (with 62% and 59% respectively reporting some or a lot of experience).

People report least experience with AI for determining risk of cancer from a scan (8%), for calculating welfare eligibility (11%) and with facial recognition for police surveillance (12%).

Experience with some of the technologies differs according to age. People aged 75 and over report less experience with facial recognition to unlock mobile phones (23% report having some or a lot of experience compared to 67% of under 75s), facial recognition at border control (32% report having some or a lot of experience compared to 62% of under 75s), and social media advertisements for consumer products (51% report having some or a lot of experience compared to 84% of under 75s) and political parties (18% report having some or a lot of experience compared to 52% of under 75s).

Figure 1 shows level of awareness for each of the 17 AI uses, and Figure 2 shows how much personal experience people report having with the 13 AI uses for which experience level was asked.

4.2. How beneficial do people think AI technologies are, and how concerning?

To find out about overall attitudes towards different AI technologies, respondents indicated, for each technology they were asked about, the extent to which they think the technology will be beneficial and the extent to which they are concerned about it.

The extent to which AI is perceived as beneficial or as concerning varies greatly according to the specific use.

The British public tend to perceive facial recognition technologies, virtual and robotic assistants, and technologies with health or science applications as very or somewhat beneficial.

A majority says facial recognition for unlocking mobile phones, at border control and for police surveillance is somewhat or very beneficial. In addition, over half also say the following are beneficial: virtual assistants, both smart speakers and healthcare assistants; simulations to advance knowledge in climate change research and in education; risk assessments for cancer and loan repayments; and robotics, both vacuum cleaners and care assistants.

AI uses with the highest percentage of people indicating ‘very’ or ‘somewhat’ beneficial are cancer risk detection (88% think beneficial) and facial recognition for border control and police surveillance (87% and 86% respectively think beneficial).

These attitudes resonate with previous research, which found that people are positive about the role of AI in improving the efficiency of day-to-day tasks, the quality of healthcare, and the ability to save money on goods and services.[26] Figure 3 shows how beneficial people believe each use of AI to be.

The British public are most concerned about AI uses that are associated with advanced robotics, advertising and employment.

More than half of British adults are somewhat or very concerned about the use of robotics for driverless cars and autonomous weapons, the use of targeted online advertising for both political and consumer adverts, the use of AI for calculating job eligibility, and virtual healthcare assistants.

These findings complement those from previous studies that indicate concern around the use of AI in contexts that replace humans, such as driverless cars,[27] and in advertising.[28] Figure 4 shows the level of concern people have about each use of AI.

The proportion of the public selecting ‘don’t know’ in response to how concerned they are about each AI use is relatively small, suggesting little ambivalence or resignation towards AI across different uses.

The British public do not have a single uniform view of AI – rather, there are mixed views about the extent to which AI technologies are seen as beneficial and concerning depending on the type of technology.

To further understand these views, we created net benefit scores by subtracting the extent to which each respondent indicated the AI use was concerning from the extent to which they indicated the AI use was beneficial.

Positive scores indicate that perceived benefit outweighs concern, negative scores indicate that concern outweighs perceived benefit and scores of zero indicate equal levels of concern and perceived benefit. More detail on this analysis can be found in Appendix 6.3.
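
To make the construction concrete, the sketch below computes a net benefit score in R. It assumes the two Likert items have been coded numerically (for example, 1 = ‘not at all’ to 4 = ‘very’) with invented example values; the actual coding is documented in Appendix 6.3.

```r
# Assumes 'benefit' and 'concern' are Likert responses coded numerically,
# e.g. 1 = "not at all" to 4 = "very"; the actual coding is in Appendix 6.3.
benefit <- c(4, 3, 2, 4)
concern <- c(2, 3, 4, 1)

net_benefit <- benefit - concern   # positive: benefit outweighs concern;
                                   # negative: concern outweighs benefit;
                                   # zero: evenly balanced
mean(net_benefit)                  # mean net benefit score across respondents
```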

  • Benefit level outweighs concern for 10 of the 17 technologies. These are: cancer risk detection; simulations for climate change research and education; robotic vacuum cleaners; smart speakers; assessing risk of repaying a loan; robotic care assistants; and facial recognition for unlocking mobile phones, border control and police surveillance. These findings add to the Ada Lovelace Institute’s 2019 research into attitudes towards facial recognition, where findings showed that most people support the use of facial recognition technology where there is demonstrable public benefit.[29]
  • Concern outweighs benefit level for five of the 17 technologies. These are: autonomous weapons; driverless cars; targeted social media advertising for consumer products and political ads; and AI for assessing job eligibility.
  • Some technologies are seen as more divisive overall, with equal levels of concern and perceived benefit reported. This is the case for virtual healthcare assistants, and welfare eligibility technology. Figure 5 shows mean net benefit scores for each technology.

4.2.1. Individual and group level differences in perceptions of net benefits

We analysed whether perceived net benefits for each AI technology differed according to characteristics such as sex, age, education level, and how aware of, informed about and interested in new technologies people are.

  • The public think differently about facial recognition technologies depending on their level of education, how informed they feel about new technologies, and their age.
    • People who feel more informed about technologies or who hold degree-level qualifications are significantly less likely than those who feel less informed or do not hold degree-level qualifications to believe that the benefits of facial recognition technologies outweigh the concerns.
    • People aged 65 and over are significantly more likely than those under 65 to believe that the benefits of facial recognition technologies outweigh the concerns.
  • Awareness of a technology is not always a significant predictor of whether people perceive it to be more beneficial than concerning. For uses of AI in science, health, education and robotics, though, being aware of the technology is associated with perceiving it to be more beneficial than concerning. These include: virtual healthcare assistants, robotic care assistants, robotic vacuum cleaners, autonomous weapons, cancer risk prediction, and simulations for climate change and education.
  • However, awareness can also exacerbate concerns. Being aware of the use of targeted social media advertising (both for consumer and political ads) is associated with concern outweighing perceived benefits. Those who feel more informed about technology are also less likely to see targeted advertising on social media for consumer products as beneficial, compared with those who feel less informed.

Appendix 6.3 provides more information about the analyses outlined in this section, including further results showing the effects of demographic and attitudinal differences on perceived net benefit for each technology. It also includes a figure showing how the perceived net benefits for each AI technology differ according to sex, age, education level, and how aware of, informed about and interested in new technologies people are.
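
A hedged sketch of the kind of regression reported in Appendix 6.3 is shown below. The predictor names and synthetic data are assumptions for illustration; the full model specifications, and the actual estimation approach, are given in the appendix.

```r
# Sketch only: variable names and data are hypothetical; the full model
# specifications are given in Appendix 6.3.
set.seed(1)
n <- 200
survey_data <- data.frame(
  net_benefit    = sample(-3:3, n, replace = TRUE),
  sex            = factor(sample(c("male", "female"), n, replace = TRUE)),
  age_group      = factor(sample(c("18-44", "45-64", "65+"), n, replace = TRUE)),
  education      = factor(sample(c("degree", "no degree"), n, replace = TRUE)),
  feels_informed = factor(sample(c("more", "less"), n, replace = TRUE))
)

# Regression of perceived net benefit on demographic and attitudinal predictors
model <- lm(net_benefit ~ sex + age_group + education + feels_informed,
            data = survey_data)
summary(model)   # coefficients show how perceived net benefit varies by group
```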

These findings support existing research from the Ada Lovelace Institute into public attitudes around data, suggesting that public concerns should not simply be dismissed as reflecting a lack of awareness or understanding of AI technologies, and further that raising awareness alone will not necessarily increase public trust in these systems.[30]

More qualitative and deliberative research is needed to understand the trade-offs people make between specific benefits and concerns.

The nuanced impact of awareness on attitudes towards AI technologies is evident in the range of specific benefits and concerns people select in relation to each technology, described in the next section.

4.3. Specific benefits and concerns around different AI uses

To further understand how people view the possible benefits and concerns surrounding different uses of AI, we asked respondents to select specific ways they believe each technology to be beneficial and concerning from multiple choice lists.

The benefits and concerns included in each list were created to reflect common themes, such as speed and accuracy, bias and accountability, though each list was specific to each technology (see full survey for all benefits and concerns listed for each technology). Participants could select as many statements from each list as they felt applied, with ‘something else’, ‘none of the above’, and ‘don’t know’ options also given for each.

Overall, people most commonly identify benefits related to speed, efficiency and accessibility, and most commonly express concerns related to overreliance on technologies over professional human judgement, being unable to account for personal circumstances, and a lack of transparency and accountability in decision-making processes.

However, the specific benefits and concerns most commonly selected vary across technologies.

The following sections describe the specific benefits and concerns that people chose for each AI use. We cluster these by technology category: risk and eligibility assessments, facial recognition technologies, robotics, virtual assistants, targeted online advertising, and simulations for science and education.

Tables 1–12 show the three most commonly chosen benefits and concerns for each technology. A full list of benefits and concerns presented to participants and the percentage of people selecting each can be found in Appendix 6.3.4.

4.3.1. Risk and eligibility assessments

We asked about the following uses of assessing eligibility and risk using AI: to calculate eligibility for jobs, to assess eligibility for welfare benefits, to predict the risk of developing cancer from a scan, and to predict the risk of not repaying a loan.

The public’s most commonly chosen benefit for risk and eligibility assessments is speed (for example, ‘applying for a loan will be faster and easier’).

Speed is selected as a benefit by 43% for using AI to assess eligibility for welfare benefits, by 49% for job recruitment, and by 52% for assessing the risk of repaying a loan. An overwhelming majority of 82% think that earlier detection of cancer is a key advantage of using AI to predict the risk of cancer from a scan, a level of consensus not reached for any other technology.

In addition to speed, reduction of human bias and error are seen as key benefits of technologies in this group. For the use of AI in recruiting for jobs and for assessing risk of repaying a loan, the technologies being less likely than humans to ‘discriminate against some groups of people in society’ is the second most commonly selected benefit, selected by 41% and 39% respectively.

Reduction in ‘human error’ is the second most commonly selected benefit for the use of AI in determining risk of cancer from scans and for assessing eligibility for welfare benefits, selected by 53% and 38% respectively.

Overall accuracy relative to human professionals, however, is not selected as a key benefit of most uses of AI in this group. Less than one third of people in Britain perceive this to be a key benefit for the use of AI in determining risk for the repayment of loans (29% selected), determining eligibility for welfare benefits (22% selected) and determining eligibility for jobs (13% selected).

An exception to this pattern is in the use of AI to determine risk of cancer from scans, where 42% of people perceive a key benefit as improved accuracy over professionals.

The most common concerns the British public have about using AI for these eligibility and risk assessments include the technology being less able than a human to account for individual circumstances, overreliance on technologies over professional judgement, and a lack of transparency about how decisions are made.

These concerns are particularly high in relation to the use of AI in job recruitment processes, with 64% saying they think that professionals will ‘rely too heavily on the technology rather than their professional judgements’; 61% saying that the technology will be ‘less able than employers and recruiters to take account of individual circumstances’; and 52% saying that ‘it will be more difficult to understand how decisions about job application assessments are reached’.

These concerns add to findings from CDEI’s latest research into public expectations around AI governance, where people felt it was important to have a clear understanding of the criteria AI uses to make decisions in the case of job recruitment and to have the ability to challenge such decisions.[31]

The British public express repeated concerns around a lack of human oversight in AI technologies, even for the use of AI to determine cancer risk from a scan – a technology that is seen as largely beneficial.

As seen in the previous section, AI for predicting risk of cancer from a scan is perceived to be one of the most beneficial technologies in the survey.

Yet, over half of British adults (56%) still express concern about relying too heavily on this technology rather than professional judgements, while 47% are concerned that if the technology made a mistake it would be difficult to know who is responsible. These attitudes suggest that the public see value in human oversight in AI for cancer risk detection, even when this use of AI is perceived as largely positive.

4.3.2. Facial recognition

We asked the British public about the following uses of facial recognition technologies: for unlocking mobile phones, for policing and surveillance, and at border control.

Most of the British public feel speed is the main benefit offered by facial recognition technologies.

Over half of people (61%) say ‘it is faster to unlock a phone or personal device’ in relation to phone unlocking; 77% say ‘the technology will make it faster and easier to identify wanted criminals and missing persons’ in relation to policing and surveillance; and 70% identify ‘processing people at border control will be faster’ as a benefit in relation to border control.

Although around half of the public perceive accuracy to be a substantial benefit of these technologies, a similar proportion have concerns about the technologies making mistakes.

On the one hand, the technology being more accurate than professionals is the second most selected benefit for the use of facial recognition in policing and surveillance (chosen by 55% of people) and the use of facial recognition at border control (chosen by 50% of people). On the other hand, the most commonly selected concern for policing and surveillance is false accusations (54% of people worry that ‘if the technology makes a mistake it will lead to innocent people being wrongly accused’); while for border control, the most selected concern is related to accountability (‘if the technology makes a mistake, it will be difficult to know who is responsible for what went wrong’).

Therefore, while speed is seen by a majority as a benefit, there are a range of concerns that are mentioned by approximately half of people over the use of facial recognition for border control and police surveillance. A survey conducted by the Ada Lovelace Institute in 2019 found that a majority supported facial recognition technology when there was a demonstrable public benefit and appropriate safeguards in place.[32]

Very few people identify discrimination as a concern about the use of facial recognition in policing, surveillance and border control. However, there may be socio-demographic differences around these concerns.

The responses suggest that Black people, students and those with no formal qualifications might be more concerned about the discriminatory potential of these technologies.

However, it is important to note that our sample sizes for these subgroups are too small for such differences to be tested robustly for statistical significance, and we need to follow up these indicative findings through other research methods.

More research is also needed to understand the lived experiences of different groups and concerns about how these technologies can impact, or can be perceived to impact, people in different ways.

4.3.3. Robotics

In the case of robotics, specific benefits vary depending on the area in which the AI is applied, with accessibility and speed being the most common benefits.

Accessibility is the most commonly selected benefit for robotic technologies that can make day-to-day activities easier for people who otherwise might not be physically able to do them (driverless cars and vacuum cleaners), highlighting positive perceptions, and potentially high expectations, around AI making tasks easier for all of society.

People are concerned about a lack of human interaction in AI technologies, the potential overreliance on the technology at the expense of human judgement and issues of who to hold accountable when the technology makes a mistake. As with benefits, concerns also vary depending on where robotics are applied.

For robotic care assistants, people note significant advantages relating to efficiency (that is, being faster and more accurate). However, people are most worried about the potential loss of human interaction (78% worry that ‘patients will miss out on the human interaction they would otherwise get from human carers’), suggesting that people do not want AI-powered technologies to replace human-to-human care.

This is consistent with findings from the Public Attitudes to Science survey in 2019, which found that people were concerned that the use of AI and robotics in healthcare would reduce human interaction, and that the public were open to the idea of the use of this technology to support, rather than replace, a doctor.[33]

Nearly half of people identify concerns that the technology will lead to job losses among human caregiving professionals (46%), and that it would be difficult to assign responsibility for what went wrong if the robotic care assistant made a mistake (45%).

In the case of driverless cars, the most selected concerns relate to: lack of reliability (62% chose ‘the technology will not always work, making the cars unreliable’); accountability for mistakes (59% chose ‘if the technology makes a mistake, it will be difficult to know who is responsible for what went wrong’); and lack of clarity on how decisions were made (51% chose ‘it will be more difficult to understand how the car makes decisions compared to a human driver’).

Similarly, people’s concerns about autonomous weapons centre on overreliance on the technology (selected by 54%) and lack of clarity on who would be responsible if the technology made a mistake (selected by 53%).

4.3.4. Virtual assistants

In relation to virtual assistants, we asked specifically about smart speakers and about the use of virtual assistants in healthcare.

The British public most commonly chose accessibility and speed as benefits in relation to virtual assistants, a similar finding to the benefits chosen for robotics.

Accessibility (‘The technology will allow people with difficulty using devices to access features more easily’) is the most selected benefit of smart speakers, selected by 71% of people. To a lesser extent, accessibility is also the top benefit mentioned in relation to virtual health assistants (53% chose ‘The technology will be easier for some groups of people in society to use, such as those who have difficulty leaving their home’).

Speed is the second most selected benefit for both technologies. Over half of people (60%) selected speed as a benefit for smart speakers, while 50% selected it for virtual healthcare assistants.

People are most concerned about the gathering and sharing of personal data for smart speakers. This is also a common concern across other technologies that are more visible and commonplace in day-to-day lives, such as the use of facial recognition for unlocking mobile phones, and targeted online social media advertisements.

Over half (57%) of the British public selected ‘the technology will gather personal information which could be shared with third parties’ as a concern. This concern aligns with previous research into attitudes towards the use of personal data, where data security and privacy were felt to be the greatest risk for data use in society.[34] This concern is particularly salient among those who are more generally concerned by smart speakers, where the top two concerns relate to personal information. In this group, 79% are concerned that their personal information could be shared with third parties and 68% are concerned their personal information is less safe and secure. These concerns suggest that people see data security as more significant for AI technologies that are designed for more personal use, particularly in spaces like home or work.

The biggest concern in relation to virtual assistants in healthcare relates to the potential difficulty for some people to use it, and the technology not being able to account for individual differences.

Almost two thirds of the British public (64%) identify difficulty in use (‘some people may find it difficult to use the technology’) as a concern in relation to virtual assistants in healthcare, which is higher than the 53% who mention accessibility as a benefit. This concern reiterates the value people place on AI technologies working for all members of society. Another major concern raised around virtual assistants in healthcare is that the technology may not account for individual circumstances as well as human healthcare professionals (63%).

Those with experience of virtual assistants in healthcare are more likely than those without to report concerns, including the technology being less accurate than humans in suggesting diagnosis and treatment options; the difficulty of knowing who is accountable when the technology makes mistakes; and the technology being less effective for some members of society.

However, those with experience of these technologies are also more likely to report benefits relating to accessibility, helping the health system save money, personal information being secure and the technology being less likely than healthcare professionals to discriminate against some groups of people in society.

4.3.5. Targeted online advertising

While discovery of new and relevant content is the most mentioned benefit for the use of consumer or political targeted online advertising, the public identify invasions of privacy and personal information being shared with third parties as the most prevalent concerns, highlighting a tension between personalisation of content and privacy.

Half of the public (50%) chose ‘it will help people discover new products that might be of interest to them’ as a benefit of targeted online consumer advertising, while only one third (33%) selected the equivalent benefit for targeted online political advertising (‘It will help people discover new political representatives who might be of interest to them’). The pattern is similar for the relevance of adverts as a benefit, selected by 53% for consumer advertising and 32% for political adverts.

However, as seen in previous sections, people are highly concerned about these uses of AI.

Over two thirds of people (69%) identify invading privacy as a concern for targeted online consumer advertisements, while 51% identify this for political advertisements.

Similarly, 68% selected ‘the technology will gather personal information which could be shared with third parties’ as a concern for consumer adverts while 48% selected this concern for political adverts.

This suggests that while the public might find social media advertising more helpful in discovering relevant content, especially for consumer adverts, they are also less trusting of what is done with their personal information.

This resonates with the findings of a study of online advertising in the UK and France, which found that most participants were concerned about how their browsing activity was being used, even when they saw some of the benefits related to discovery. The study concluded that participants wanted their data, and their ability to choose how it is used, to be respected, and to be able to ‘practically, meaningfully, and simply curate their own advertising experience’.[35]

4.3.6. Simulations

We asked about two uses of AI simulations for advancing knowledge, one relating to the use of AI for climate change research and another around the use of virtual reality for educational purposes.

The public see the main benefits of simulations for science and education as making it faster and easier to enhance knowledge and understanding, as well as enabling a greater number of people to learn or benefit from research. However, the public are concerned about inequalities in access to the technology, meaning not everyone will benefit.

When asked about the use of new simulation technologies to advance climate change research, around two thirds of people said that these technologies would ‘make it faster and easier for scientists and governments to predict climate change effects’ (64%); ‘predict issues across a wider range of regions and countries, meaning more people will experience the benefits of climate research’ (64%); and ‘allow more people to understand the possible effects of climate change’ (63%).

In relation to the use of simulation technologies like virtual reality for education, the potential to ‘increase the quality of education by providing more immersive experiences’ (66%), and its potential to ‘allow more people to learn about history and culture’ (60%) are the most selected benefits (Table 11).

Overall, the public choose few concerns in relation to AI for climate change research.

People do not express many specific concerns about the use of simulation technologies for advancing climate change research. Over one third (36%) selected the risk that ‘the technology will predict issues in some regions better than others, meaning that some people do not experience the benefits of these technologies’. After this concern, however, the most selected answer is ‘None of these’ (26%), followed by inaccuracy, selected by 21%.

In relation to the development of virtual reality for education, the public are most concerned about inequalities in access and about control over educational narratives.

Over half (51%) of British adults are concerned that ‘some people will not be able to learn about history and culture in this way as they will not have access to the technology’. This is followed by concern about giving technology developers control over ‘what people learn about history or culture’, selected by 46% of people.

4.4. Governance and explainability

4.4.1. Explainability

To understand how explainable the British public think a decision made by an AI system should be when explainability trades off with accuracy, we first informed participants that: ‘Many AI systems are used with the aim of making decisions faster and more accurately than is possible for a human. However, it may not always be possible to explain to a person how an AI system made a decision.’ We then asked people which of the following statements best reflects their personal opinion:

  • Making the most accurate AI decision is more important than providing an explanation.
  • In some circumstances an explanation should be given, even if that makes the AI decision less accurate.
  • An explanation should always be given, even if that makes all AI decisions less accurate.
  • Humans, not computers, should always make the decisions and be able to explain them to the people affected.

When there are trade-offs between the explainability and accuracy of AI technologies, the British public value the former over the latter: it is important for people to understand how decisions driven by AI are made.

Figure 6 shows that only 10% of the public feel that ‘making the most accurate AI decision is more important than providing an explanation’, whereas a majority choose options that reflect a need for explaining decisions. Specifically, almost one third (31%) indicate that humans should always make the decisions (and be able to explain them), followed by 26% who think that ‘sometimes an explanation should be given, even if it reduces accuracy’ and another 22% who choose ‘an explanation should always be given, even if it reduces accuracy’.

People’s preferences for explainable AI decisions dovetail with the importance of transparency and accountability demonstrated by people’s specific concerns about each technology (described in Section 4.3). Here, for all technologies[36] (except for driverless cars and virtual health assistants) the proportion of concerns mentioning ‘it is unclear how decisions are made’ is higher than mentions of ‘inaccuracy’.

 

People’s preferences for explainability over accuracy change across age groups.

Older people choose explainability and human involvement over accuracy to a greater extent than younger people. For those aged 18–44, ‘sometimes an explanation should be given even if it reduces accuracy’ was the most popular response (Figure 7). At the youngest end of the age spectrum (18–24), ‘humans should always make the decisions and be able to explain them’ is the least popular response, whereas it becomes the most popular choice among those aged 45 and over, and is selected most often by respondents aged 65 and over.

4.4.2. Governance and regulation

To find out about people’s views on the regulation of AI, we asked people to indicate what (if anything) would make them more comfortable with AI technologies being used. Participants could select as many as they felt applied from a list of seven possible options.

Public attitudes suggest a need for regulation that involves redress and the ability to contest AI-powered decisions.

People most commonly indicated that ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’ would increase their comfort with the use of AI, with 62% in favour. People are also largely supportive of ‘clear procedures for appealing to a human against an AI decision’ (selected by 59%). Adding to the concerns expressed about data security and accountability, 56% of the public want to make sure that ‘personal information is kept safe and secure’ and 54% want ‘clear explanations of how AI works’.

Figure 8 shows the proportion of people selecting each option when asked what, if anything, would make them more comfortable with AI technologies being used.

We also asked participants who they think should be most responsible for ensuring AI is used safely from a list of seven potential actors. People could select up to two options.

The British public want regulation of AI technologies. ‘An independent regulator’ is the most popular choice for governance of AI.

Figure 9 shows that 41% of people feel ‘an independent regulator’ should be responsible for the governance of AI, the most popular choice of the seven presented. Patterns of preferred governance do not change notably depending on whether people feel well informed about new technologies or not.

These results add to a PublicFirst poll conducted in March 2023 with 2,000 UK adult respondents, which found that 62% of respondents supported the creation of a new government regulatory agency, similar to the Medicines and Healthcare products Regulatory Agency (MHRA), to regulate the use of new AI models.[37]

People’s preferences for the governance of AI change across age groups.

While people overall most commonly select ‘an independent regulator’, Figure 10 shows that 43% of 18–24-year-olds think that the ‘companies developing the technology’ should be most responsible for ensuring AI is used safely. In contrast, only 17% of people over 55 select this option.

This could reflect young people’s more in-depth experience with different technologies and their associated risks, leading them to place more responsibility on developers, especially as young people also report the highest exposure to technology-driven problems such as online harms.[38] That 18–24-year-olds most commonly say that the companies developing the technologies should be responsible for ensuring AI is used safely raises questions about private companies’ corporate responsibility alongside regulation.

To understand people’s concerns about who develops AI technologies, we asked people how concerned, if at all, they feel about different actors producing AI technologies. We asked this in the context of hospitals asking an outside organisation to produce AI technologies that predict the risk of developing cancer from a scan, and the Department for Work and Pensions (DWP) asking an outside organisation to produce AI technologies for assessing eligibility for welfare benefits.

We asked people how concerned they are about each of the following groups producing AI in each context:

  • private companies
  • not-for-profit organisations (e.g. charities)
  • another governmental body or department
  • universities/academic researchers.

For both the use of AI in predicting cancer from a scan and in assessing eligibility for welfare benefits, the British public are most concerned about private companies developing the technologies and least concerned about universities and academic researchers doing so.

For the development of AI that may be used to assist the Department for Work and Pensions in assessing eligibility for welfare benefits, the public are most concerned about private companies developing the technology, with 66% being somewhat or very concerned. Just over half (51%) of people are somewhat or very concerned about another governmental body or department developing the technology, and 46% are somewhat or very concerned about not-for-profit organisations doing so.

People are generally least concerned about universities or academic researchers developing this technology, with 43% somewhat or very concerned. While this is the lowest level of concern across the stakeholders presented, it is still a sizeable proportion of people expressing concern, which suggests that even these more trusted stakeholders need to be transparent about their role in, and approach to, developing technologies.

Regarding the development of AI that may help healthcare professionals predict the risk of cancer from a scan, the pattern of concern over who develops the technology is very similar. People are most concerned about private companies developing the technology, with 61% somewhat or very concerned, followed by a governmental body (44%). People are less concerned about not-for-profit organisations and universities or academic researchers developing the technology. The overall level of concern about developers was lower for technologies that predict the risk of cancer than for technologies that help assess eligibility for welfare.

Figure 11 shows the extent to which people feel concerned by the following actors developing new technologies to assess eligibility for welfare benefits and predict the risk of developing cancer: private companies, governmental bodies, not-for-profit organisations and universities/academic researchers.

While we asked about concerns over the development of a specific technology rather than overall trust, our findings resonate with results from the second wave of a CDEI survey on public attitudes towards AI, which found that on average, respondents most trusted the NHS and academic researchers to use data safely, while trust in government, big tech companies and social media companies was lower.[39]

5. Conclusion

This report provides new insights into the British public’s attitudes towards different AI-powered technologies and AI governance. It comes at a time when governments, private companies, civil society and the public are grappling with the rapid pace of development of AI and its potential impacts across many areas of life.

A key contribution of this survey is that it highlights complex and nuanced views from the public across different AI applications and uses. People identify specific concerns about technologies even when they see them as more beneficial than concerning overall, and acknowledge potential benefits of particular technologies even when they also express concern.

The public are aware of the use of AI in many visible, commonplace technologies, such as facial recognition for unlocking phones or targeted advertising on social media. However, awareness of AI technologies used in public services with a potentially high impact on people’s lives, such as AI for assessing welfare benefits eligibility, is low.

The public typically see the advantages of many uses of AI in terms of improved efficiency and accessibility. However, people worry about the security of their personal data, the replacement of professional human judgement, and the implications for accountability and transparency in decision-making. While applications of AI in health, science, education and security are perceived positively overall, applications in advanced robotics and targeted advertising online are viewed as more concerning.

There is a strong desire among the public for independent regulation, more information on how AI systems make decisions, and the ability to challenge decisions made by AI. Younger adults also tend to place responsibility on the companies developing AI to ensure that the technologies are used safely.

Future work will benefit from understanding how different groups of people in society are impacted differently by various uses of AI. Nevertheless, this study highlights important considerations for policymakers and developers of AI technologies, and for how they can help ensure these technologies work for people and society:

  • Policymakers and developers of AI systems must work to support public awareness and enhance transparency surrounding the use of less visible applications of AI used in the public domain. This is particularly true for areas that have significant impacts on people’s lives, such as in assessments for benefits, financial support or employment.
  • The findings show that the public expect many AI technologies to bring improvements to their lives, particularly around speed, efficiency and accessibility. It is important for policymakers and developers of these technologies to meet public expectations, work to strengthen public trust in AI further, and therefore help to maximise the benefits that AI has the potential to bring.
  • While people are positive about some of the perceived benefits of AI, they also express concerns, particularly around transparency, accountability, and loss of human judgement. As people’s interaction with AI increases across many areas of life, it is crucial for policymakers and developers of AI to listen to public concerns and work towards solutions for alleviating them.
  • People call for regulation of AI and would like to see an independent regulator in place, along with clear procedures for appealing against AI decisions. Policymakers working on AI regulatory regimes should consider the establishment of an independent regulatory body for AI technologies and ensure that the public have opportunities to seek redress if AI systems fail or make mistakes.
  • People in older age groups are particularly concerned about the explainability of AI decisions and lack of human involvement in decision-making. It is important for policymakers and civil society organisations to work to ensure older members of society in particular do not feel alienated by the increasing use of AI in many decision-making processes.
  • Lastly, policymakers must acknowledge that the public have complex and nuanced views about uses of AI, depending on what the technology is used for. Debates and policies will need to go beyond general assumptions or one-size-fits-all approaches to meet the demands and expectations of the public.

6. Appendix

6.1. Descriptions for each technology use case

The following definitions were provided to survey respondents:

Facial recognition

Facial recognition technologies are AI technologies that can compare and match human faces from digital images or videos against those stored elsewhere.

The technology works by first being trained on many images, learning to pick out distinctive details about people’s faces.

These details, such as distance between the eyes or shape of the chin, are converted into a face-print, similar to a fingerprint.

  • Mobile phone

One use of facial recognition technology is for unlocking mobile phones and other personal devices.

Such devices use this technology by scanning the face of the person attempting to unlock the phone through the camera, then comparing it against a saved face-print of the phone’s owner.

  • Border control

Another use of facial recognition technology is to assist with border control.

‘eGates’ at many international airports use facial recognition technologies to attempt to automatically verify travellers’ identities by comparing the image on their passport with an image of their face taken by a camera at the gate.

If the technology verifies the person’s identity, the eGate will open and let them through, otherwise they will be sent to a human border control officer.

  • Police surveillance

Another use of facial recognition technology is in policing and surveillance.

Some police forces in Britain and elsewhere use this technology to compare video footage from CCTV cameras against face databases of people of interest, such as criminal suspects, missing persons, victims of crime or possible witnesses.

Eligibility

Some organisations use AI technologies to help them decide whether someone is eligible for the programmes or services they offer.

These AI technologies draw on data from previous eligibility decisions to assess the eligibility of a new applicant.

The recommendations of the technology are then used by the organisation to make the decision.

  • Welfare eligibility

AI technologies that assess eligibility are sometimes used to determine a person’s eligibility for welfare benefits, such as Universal Credit, Jobseeker’s Allowance or Disability Living Allowance.

Here, AI technologies are trained on lots of data about previous applicants for similar benefits, such as their employment history and disability status, learning patterns about which features are associated with particular decisions.

Many applications will only be considered for the benefit once the computer has marked them as eligible.

  • Job eligibility

One use of AI technologies for assessing eligibility is for reviewing people’s job applications. The technology will look at a person’s job application or CV and automatically determine if they are eligible for a job.

Here, AI technologies are trained on lots of data from decisions about previous applicants for similar roles, learning patterns about which features are associated with particular hiring outcomes.

Many employers who use this technology will only read the applications that the computer has marked as an eligible match for the role.

Risk

AI technologies may be used by organisations to predict the risk of something happening.

When predicting the risk, these AI technologies draw on a wide range of data about the outcomes of many people to calculate the risk for an individual.

The recommendations these technologies make are then used by organisations to make decisions.

  • Cancer risk

One use of AI technologies for calculating risk is for assessing a medical scan to identify a person’s risk of developing some types of cancer.

Here, AI technologies are trained on many scans from past patients, learning patterns about which features are associated with particular diagnoses and health outcomes.

The technology can then give a doctor a prediction of the likelihood that a new patient will develop a particular cancer based on their scan.

  • Loan repayment risk

One use of AI technologies for calculating risk is to assess how likely a person is to repay a loan, including a mortgage.

Here, AI technologies are trained on data about how well past customers have kept up with repayments, learning which characteristics make them likely or unlikely to repay.

When a new customer applies for a loan, the technology will assess a range of information about that person and compare it to the information it has been trained on. It will then make a prediction to the bank about how likely it is that the new customer will be able to repay the loan.

Targeted online advertising

Targeted advertising on the internet tailors adverts to a specific user. These kinds of ads are commonly found on social media, online news sites, and video and music streaming platforms.

The technology uses lots of data generated by tracking people’s activities online to learn about people’s characteristics, attitudes and interests.

The technology then uses this data to generate adverts tailored to each user.

  • Targeted social media advertising for consumer products

Targeted adverts on social media are sometimes used by companies to suggest consumer products such as clothes, gadgets and food.

These ads are targeted at people according to their personal characteristics and previous behaviour on social media. They are intended to encourage people to buy particular products.

  • Targeted social media advertising for political parties

Targeted adverts on social media are sometimes used by political parties to suggest political content to users.

These ads are targeted at people according to their personal characteristics and previous behaviour on social media. They are intended to encourage people to support a specific political party.

Virtual assistant technologies

Virtual assistant technologies are devices or software that are designed to assist people with tasks like finding information online or helping to arrange appointments. The technologies can often respond to voice or text commands from a human.

The technologies work by being ‘trained’ on lots of information about how people communicate through language, learning to match certain words and phrases to actions that they have been designed to carry out.

  • Virtual assistant smart speakers

One example of a virtual assistant technology is a smart speaker.

These technologies are small computers that are connected to the internet and can respond to voice commands to do things such as turn appliances in the home on and off, answer questions about any topic, set reminders, or play music.

  • Virtual assistants in healthcare

One use of virtual assistant technologies is to assess information about a person’s health.

These AI technologies aim to respond to healthcare queries online, including about appointments or current symptoms.

The technologies are able to automatically suggest a possible diagnosis or advise treatment. For more serious illnesses, the technologies may suggest a person seeks further medical advice, for example by booking a GP appointment or by going to hospital.

Robotics

Robotic technologies are computer-assisted machines which can interact with the physical world automatically, sometimes without the need for a human operator.

These technologies use large amounts of data generated by machines, humans and sensors in the physical world to ‘learn to’ carry out tasks that would previously have been carried out by humans.

  • Robotic vacuum cleaners

One example of robotic technologies is the robotic vacuum cleaner, sometimes called a ‘smart’ vacuum cleaner.

This is a vacuum cleaner that can clean floors independently, without any human involvement.

Robotic vacuum cleaners use sensors and motors to automatically move around a room while being able to detect obstacles, stairs and walls.

  • Robotic care assistants

One example of robotic technologies is the robotic care assistant. These technologies are being developed to help carry out physical tasks in care settings such as hospitals and nursing homes.

Robotic care assistants are designed to support specific tasks, such as helping patients with mobility issues to get in and out of bed, to pick up objects, or with personal tasks such as washing and dressing.

When these technologies are used, a human care assistant will be on-call if needed.

  • Driverless cars

Another use of robotic technologies is for driverless cars. These are vehicles that are designed to travel on roads with other cars, lorries and vans, but which drive themselves automatically without needing a human driver.

Driverless cars can detect obstacles, pedestrians, other drivers and road layouts by assessing their physical surroundings using sensors and comparing this information to large amounts of data about different driving environments.

  • Autonomous weapons

Another use of robotic technologies is for autonomous weapon systems used by the military.

These include missile systems, drones and submarines that, once launched, can automatically identify, select or attack targets without further human intervention.

These technologies decide when to act by assessing their physical surroundings using sensors and comparing this information to large amounts of data about different combat environments.

Advancing knowledge through simulations

New computer technologies are being developed to advance human knowledge about the past and the future.

These technologies work by taking large amounts of data that we already have, and using this to create realistic simulations about how things were in the past, or how they might be in the future.

These ‘simulation technologies’ aim to allow people to study and learn about places and events that would otherwise be impossible or difficult to directly experience.

  • Climate change research

One example of using new simulation technologies for advancing knowledge is for research about climate change.

New simulation technologies can analyse large amounts of past data in order to simulate the future impacts of climate change in particular areas. This data could come from weather and environmental data, pollution data, and data on energy usage from individual homes.

For example, these technologies can help scientists and governments to predict the likelihood of a significant flood occurring in a particular region over the next 10 years, along with how the flood may impact agriculture and health.

  • Virtual reality for culture and education

One example of using new simulation technologies for advancing knowledge is the development of virtual reality for education.

Here, a person can wear a virtual reality headset at home or school that will show them a three-dimensional simulation of a museum or historical site, using a range of data about the museum or historical site.

These technologies are designed to allow people to learn more about history or culture through games, videos and other immersive experiences.

6.2. Limitations

While this study benefits from including a large, random probability sample representative of the population of Great Britain, the work is subject to several limitations, which we address here. As discussed in the methodology, the sample size of our survey is not sufficiently large to provide robust estimates for different minority ethnic groups. Our survey also does not include respondents from Northern Ireland, meaning findings from this report cannot be generalised to the United Kingdom as a whole.

We recognise the complexity of AI as the subject matter of our survey, and although we contextualised all the uses of AI included in the survey, we were still not able to capture the granularity of some of these uses. For example, we asked about autonomous weapons in a broad sense, but acknowledge that attitudes may vary depending on whether the weapons are framed as being used by participants’ own nation or by other nations, or whether the system is for defensive or offensive uses.

We also asked respondents about awareness and experience with uses of AI, but cannot gather from the survey alone what type of experience they have had with each technology. For example, in the case of AI to assess job eligibility, we do not know whether experience relates to using these services to recruit or to using these services when applying for a job.

The list of concerns and benefits we presented respondents with, though grounded in the literature surrounding AI, is also not exhaustive. While we left an open-text option for all benefit and concern questions, very few respondents filled these in. It was important to keep these questions short due to time restrictions for the survey overall, and therefore the benefits and concerns presented in this report are not definitive across the uses of AI we surveyed.

We asked generally about feelings towards the governance and regulation of AI technologies as a whole rather than for specific uses of AI. As discussed in the report, AI is complex and difficult to define, and our findings show that attitudes towards AI are nuanced and vary depending on the application of AI. Future research should look at public attitudes to regulating specific technologies.

Finally, although we used both online and offline methods, we recognise that we still may not have reached those who are truly digitally excluded in Great Britain, those with restrictions on their leisure time, and those with additional requirements that may have made participating in this survey challenging. Therefore, the ability to generalise our findings is limited.

Overall, we acknowledge that a survey alone cannot be a perfect representation of public attitudes. Attitudes may change depending on time and context and include trade-offs across different groups in the population and across different technologies that are difficult to explore using this method.

This survey was designed and fielded in November 2022, just before generative AI and large language models like ChatGPT became a widely covered media topic. It is probable that these advances have already shifted public discourse around some AI technologies since our survey. There is therefore a need for rich qualitative research to follow up the insights we have presented here.

6.3. Analysis and additional tables

In this section we provide more detail on the analysis conducted to understand in which cases the perceived benefits of each AI use outweighed concerns and vice versa (Section 6.3.1.), and on the regression analysis conducted to understand differences in the extent to which different groups see technologies as more or less beneficial (Section 6.3.2.).

Section 6.3.3. includes detail on the type of analysis conducted to derive some of the attitudinal variables used. Section 6.3.4. provides tables with the full list of specific benefits and concerns, and the percentage of respondents selecting these, for all 17 of the technologies included in the survey.

6.3.1. Net benefit analysis

A mean net benefit score was calculated for each technology by subtracting the concern score from the benefit score. When net benefit scores were negative, concern outweighed benefit. When scores were positive, benefit outweighed concern. Scores of zero indicated equal concern and benefit.

The benefit and concern variables were coded in the following ways:

To what extent do you think that the use of [AI technology] will be beneficial?

  • ‘Very beneficial’ was re-coded as 3
  • ‘Fairly beneficial’ was 2
  • ‘Not very beneficial’ was 1
  • ‘Not at all beneficial’ was 0.

To what extent are you concerned about the use of [AI technology]?

  • ‘Very concerned’ was re-coded as 3
  • ‘Somewhat concerned’ was 2
  • ‘Not very concerned’ was 1
  • ‘Not at all concerned’ was 0.

‘Prefer not to say’ and ‘don’t know’ options were re-coded as missing values.
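
As an illustration of this recoding and scoring, a minimal sketch in Python follows; the column names, response strings and use of pandas are assumptions for illustration, not a description of the actual analysis pipeline.

```python
import pandas as pd

# Recode ordinal responses to 0-3; anything outside the maps (e.g. 'Don't know',
# 'Prefer not to say') becomes NaN, i.e. a missing value, as described above.
benefit_map = {
    "Very beneficial": 3,
    "Fairly beneficial": 2,
    "Not very beneficial": 1,
    "Not at all beneficial": 0,
}
concern_map = {
    "Very concerned": 3,
    "Somewhat concerned": 2,
    "Not very concerned": 1,
    "Not at all concerned": 0,
}

# Hypothetical responses for one technology.
df = pd.DataFrame({
    "benefit": ["Very beneficial", "Don't know", "Not at all beneficial"],
    "concern": ["Not very concerned", "Somewhat concerned", "Very concerned"],
})

df["benefit_score"] = df["benefit"].map(benefit_map)
df["concern_score"] = df["concern"].map(concern_map)

# Net benefit: positive = benefit outweighs concern; negative = the reverse.
df["net_benefit"] = df["benefit_score"] - df["concern_score"]

print(df["net_benefit"].mean())  # mean net benefit score for this technology
```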

6.3.2. Regression analysis

To understand how demographics and attitudinal variables are related to the perceived net benefits of AI, we fitted linear regression models for each individual AI technology using the same set of predictor variables. The dependent variable in each model is ‘net benefit’, calculated as described above. The independent variables in the models (illustrated in the sketch that follows the list below) were:

  • Age (65 and older compared to younger than 65; this coding was chosen because it represents the main age difference across AI uses)
  • Sex (male compared to female)
  • Education (having a degree compared to not having a degree)
  • Social class (NS-SEC 1-3 compared to NS-SEC 4-7)
  • Awareness of the technology (aware compared to not aware)
  • Experience with using the technology (experience compared to no experience)
  • Tech interested (self-reported interest in technology)
  • Tech informed (self-reported informedness about technology)
  • Digital literacy (high compared to low)
  • Comfort with technology (high compared to low)

Figure 12 presents the results for all 17 regressions in a single plot. Each square in the plot represents the expected change in net benefit for a unit increase in the corresponding independent variable on the vertical axis, controlling for all other variables included in the model.

Statistically significant coefficients (p < 0.05) are shown in pink, while green denotes non-significant coefficients. Coefficient estimates above 0 indicate a higher net benefit and, conversely, coefficients below 0 are associated with lower net benefit (or higher concern) for a particular variable.

Taking the age variable as an example, people aged 65 and older are significantly more likely to see simulations for climate change research, cancer risk prediction, all three facial recognition technologies and autonomous weapons as net beneficial. On the other hand, this age group is significantly more likely to see consumer and political social media advertising, job eligibility and driverless cars as net concerning. There are no significant differences between age groups for the remaining AI uses.

Figure 12 illustrates how patterns of perceived net benefit vary substantially across demographic groups and attitudinal indicators. Only ‘comfort with technology’ shows a consistent relationship, with people who are more comfortable with technology significantly more likely to see net benefits across all 17 AI uses.

Being a graduate, on the other hand, is associated with expressing net concerns for most AI uses, although several coefficients are non-significant and one is in the opposite direction (graduates are more likely to see autonomous weapons as net beneficial). Sex shows a near-equal mix of positive, negative and non-significant associations across use cases. These results reinforce the conclusion from the descriptive analyses: public perceptions of AI are complex and highly nuanced, varying according to the specific technology and the context in which it is used.

6.3.3. Principal component analysis

The independent variables ‘digital literacy’ and ‘comfort with technology’ are summary measures of multiple items produced using principal component analysis. The ‘digital literacy’ measure is based on eight survey questions, each covering the level of confidence in a different information technology skill, ranging from using the internet to find information to setting up an online account to buy goods (see Table 13).

‘Comfort with technology’ is a measure derived from seven questions which cover attitudes towards new technologies and their impact on society, for example, whether the respondent finds it easy to keep up with new technologies or whether AI is making society better (see Table 14). The summary score for each measure is taken as the first principal component in a principal component analysis. Tables 15 and 16 include the factor loadings for each measure from the principal component analysis.
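
As an illustration, a first-principal-component summary score of this kind could be computed as follows. This is a sketch on synthetic data; the report does not specify the software used, and the loadings line assumes the common convention of scaling eigenvectors by the square root of their eigenvalues.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the eight digital literacy items (rows = respondents,
# columns = DIG_LIT_1 ... DIG_LIT_8), coded 1-4 from least to most confident.
rng = np.random.default_rng(0)
items = rng.integers(1, 5, size=(500, 8)).astype(float)

# Standardise the items, then take the first principal component as the
# respondent-level summary score.
scaled = StandardScaler().fit_transform(items)
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)

digital_literacy = scores[:, 0]  # 'digital literacy' summary measure

# Factor loadings as eigenvectors scaled by sqrt(eigenvalues), giving a
# loadings table in the shape of Tables 15 and 16 (8 items x 2 components).
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings.round(4))
```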

Table 13: Digital literacy scale (response options 1-4 recoded from least to most confident)

Variable name Question wording
DIG_LIT_1 Use the internet to find information that helps you solve problems
DIG_LIT_2 Attach documents to an email and share it
DIG_LIT_3 Create documents using word processing applications (e.g. a CV or a letter)
DIG_LIT_4 Set up an email account
DIG_LIT_5 Organise information and content using files and folders (either on a device, across multiple devices, or on the Cloud)
DIG_LIT_6 Recognise and avoid suspicious links in emails, websites, social media messages and pop-ups
DIG_LIT_7 Pay for things online
DIG_LIT_8 Set up an online account that enables you to buy goods and services (e.g. Amazon account, eBay, John Lewis)

Table 14: ‘Comfort with technology’ scale (response options scale 1-11, using a slider question approach)

Variable name Question wording
TECHSELF_1 Do not seek out new technologies or gadgets…When new technologies or gadgets are introduced, like to try them
TECHSELF_2 Overall, new technologies make quality of life worse…Overall, new technologies improve quality of life
TECHSELF_3 Find it difficult to keep up to date with new technologies…Find it easy to keep up to date with new technologies
TECHSELF_4 Do not like my online activity being tracked…Fine with my online activity being tracked
TECHSELF_5 So long as the technology works, don’t need to know how it works…Knowing how new technologies work is important
TECHSOCIAL_1 Are changing society too quickly…Are changing society at a good pace
TECHSOCIAL_2 Are making society worse…Are making society better

Table 15: Digital literacy: Principal Component Analysis

Factor Loadings
Variable name Component 1 Component 2
DIG_LIT_1 0.7861 0.2621
DIG_LIT_2 0.8691 -0.1814
DIG_LIT_3 0.8126 -0.4233
DIG_LIT_4 0.8343 -0.0224
DIG_LIT_5 0.8112 -0.3837
DIG_LIT_6 0.7034 0.2331
DIG_LIT_7 0.8126 0.3514
DIG_LIT_8 0.8609 0.2043


Table 16: Comfort with technology: Principal Component Analysis

Factor Loadings
Variable name Component 1 Component 2
TECHSELF_1 0.8297 0.3404
TECHSELF_2 0.8210 0.0611
TECHSELF_3 0.8357 0.3023
TECHSELF_4 0.5226 -0.4599
TECHSELF_5 0.6570 0.4723
TECHSOCIAL_1 0.7584 -0.4245
TECHSOCIAL_2 0.7234 -0.4607

6.3.4. Full list of specific benefits and concerns chosen for each technology

Table 17: Full list of specific benefits and percentage of respondents selecting for all 17 technologies

Technology Benefit option Percentage selecting
Cancer risk prediction The technology will enable earlier detection of cancer, allowing earlier monitoring or treatment 82%
  There will be less human error when predicting people’s risk of developing cancer 53%
  The technology will be more accurate than a human doctor at predicting the risk of developing cancer 42%
  The technology will reduce discrimination in healthcare 32%
  People’s personal information will be more safe and secure 11%
  Something else (please specify) 2%
  None of these 3%
  Don’t know 6%
Job eligibility Reviewing applications will be faster and easier for employers and recruiters 49%
  The technology will be more accurate than employers and recruiters at reviewing applications 13%
  There will be less human error in determining eligibility for a job 22%
  The technology will be less likely than employers and recruiters to discriminate against some groups of people in society 41%
  The technology will save money usually spent on human resources 32%
  People’s personal information will be more safe and secure 10%
  Something else (please specify) 1%
  None of these 13%
  Don’t know 10%
Loan repayment risk Applying for a loan will be faster and easier 52%
  The technology will be more accurate than banking professionals at predicting the risk of repaying a loan 29%
  There will be less human error in decisions 37%
  The technology will be less likely than banking professionals to discriminate against some groups of people in society 39%
  The technology will save money usually spent on human resources 31%
  People’s personal information will be more safe and secure 11%
  Something else (please specify) 0%
  None of these 8%
  Don’t know 12%
  Prefer not to say 0%
Welfare eligibility Determining eligibility for benefits will be faster and easier 43%
  The technology will be more accurate than welfare officers at determining eligibility for benefits 22%
  There will be less human error in determining eligibility for benefits 38%
  The technology will be less likely than welfare officers to discriminate against some groups of people in society 37%
  The technology will save money usually spent on human resources 35%
  People’s personal information will be more safe and secure 14%
  Something else (please specify) 0%
  None of these 12%
  Don’t know 14%
Facial recognition at border control Processing people at border control will be faster 70%
  People will not have to answer personal questions sometimes asked by border control officers 32%
  The technology will be more accurate than border control officers at detecting people who do not have the right to enter 50%
  The technology will be less likely than border control officers to discriminate against some groups of people in society 40%
  People’s personal information will be more safe and secure 18%
  The technology will save money usually spent on human resources 42%
  Something else (please specify) 1%
  None of these 4%
  Don’t know 3%
Facial recognition for mobile phone unlocking It is faster to unlock a phone or personal device 61%
  People’s personal information will be more safe and secure 53%
  Something else (please specify) 1%
  None of these 8%
  Don’t know 6%
Facial recognition for policing and surveillance The technology will make it faster and easier to identify wanted criminals and missing persons 77%
  The technology will be more accurate than police officers and staff at identifying wanted criminals and missing persons 55%
  The technology will be less likely than police officers and staff to discriminate against some groups of people when identifying criminal suspects 41%
  The technology will save money usually spent on human resources 46%
  People’s personal information will be more safe and secure 11%
  Something else (please specify) 0%
  None of these 3%
  Don’t know 4%
Autonomous weapons The technologies will enable faster military response to threats 50%
  The technologies will preserve the lives of some soldiers 54%
  The technologies will be more accurate than human soldiers at identifying targets 34%
  The technologies will be less likely than human soldiers to target people based on particular characteristics 26%
  The technologies will lead to fewer civilians being harmed or killed 36%
  The technology will save money usually spent on human resources 22%
  Something else (please specify) 1%
  None of these 9%
  Don’t know 15%
  Prefer not to say 0%
Driverless cars It will make travel by car easier 30%
   It will free up time to do other things while driving like working, sleeping or watching a movie 30%
   Driverless cars will drive with more accuracy and precision than human drivers 32%
   Driverless cars will be less likely to cause accidents than human drivers 32%
   It will make travel by car easier for disabled people or for people who have difficulty driving 63%
   The technology will save money usually spent on human drivers 19%
   Something else (please specify) 1%
   None of these 17%
   Don’t know 6%
Robotic care assistant  The technology will make caregiving tasks easier and faster 47%
   The technology will be able to do tasks such as lifting patients out of bed more accurately than caregiving professionals 45%
   The technology will be less likely than caregiving professionals to discriminate against some groups of people in society 37%
   The technology will save money usually spent on human resources 34%
   Something else (please specify) 0%
   None of these 12%
   Don’t know 11%
   Will benefit the care workers 0%
Robotic vacuum cleaner  The technology will do the vacuuming, saving people time 68%
   The technology will be more accurate than a human at vacuuming 12%
   It will make vacuuming possible for people who have difficulty doing manual tasks 84%
   Something else (please specify) 1%
   None of these 3%
   Don’t know 3%
Smart speaker  The technology will allow people to carry out tasks faster and more easily 60%
   The technology will allow people with difficulty using devices to access features more easily 71%
   People’s personal information will be more safe and secure 5%
   People will be able to find information more accurately 39%
   Something else (please specify) 0%
   None of these 7%
   Don’t know 6%
Virtual healthcare assistant  It is a faster way for people to get help for their health and symptoms than speaking to a healthcare professional 50%
   The technology will be more accurate than a healthcare professional at suggesting a diagnosis and treatment options 13%
   The technology will be less likely than healthcare professionals to discriminate against some groups of people in society 31%
   The technology will be easier for some groups of people in society to use, such as those who have difficulty leaving their home 53%
   The technology will save money usually spent on human resources 35%
   People’s personal information will be more safe and secure 8%
   Something else (please specify) 1%
   None of these 9%
   Don’t know 9%
Targeted online consumer ads  People will be able to find products online faster and more easily 39%
  The adverts people see online will be more relevant to them than adverts that are not targeted 53%
   It will help people discover new products that might be of interest to them 50%
   Something else (please specify) 0%
   None of these 17%
   Don’t know 3%
Targeted online political ads  People will be able to find political information online faster and more easily 35%
   The political adverts that people see online will be more relevant to them than political adverts that are not targeted 32%
   It will help people discover new political representatives who might be of interest to them 33%
   It will increase the diversity of political perspectives that people engage with 22%
   Something else (please specify) 0%
   None of these 22%
   Don’t know 12%
Simulations for climate change research  The technology will be more accurate than scientists and government researchers alone at predicting climate change effects 41%
   The technology will make it faster and easier for scientists and governments to predict climate change effects 64%
   The technology will predict issues across a wider range of regions and countries, meaning more people will experience the benefits of climate research 64%
   This technology will allow more people to understand the possible effects of climate change 63%
   Something else (please specify) 1%
   None of these 6%
   Don’t know 12%
Simulations for education  People will gain a more accurate understanding of historical events and how people lived in the past 57%
   The technology will make it easier and faster to learn about history and culture 58%
  The technology will increase the quality of education by providing more immersive experiences 66%
   The technology will allow more people to learn about history and culture 60%
   Something else (please specify) 1%
   None of these 6%
   Don’t know 10%
   Prefer not to say 0%

Table 18: Full list of specific concerns and percentage of respondents selecting for all 17 technologies

Technology Concern option Percentage selecting
Cancer risk prediction  The technology will be unreliable and cause delays to predicting a risk of cancer 17%
   The technology will gather personal information which could be shared with third parties 24%
   People’s personal information will be less safe and secure 13%
   The technology will not be as accurate as a human doctor at predicting the risk of developing cancer 19%
   The technology will be less effective for some groups of people in society than others, leading to more discrimination in healthcare 17%
   Doctors will rely too heavily on the technology rather than their professional judgements 56%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 47%
   It will be more difficult to understand how decisions about potential health outcomes are reached 32%
   Something else (please specify) 1%
   None of these 10%
   Don’t know 7%
Job eligibility  The technology will be unreliable and cause delays to assessing job applications 19%
   The technology will not be as accurate as employers and recruiters at reviewing job applications 39%
   The technology will be less able than employers and recruiters to take account of individual circumstances 61%
   The technology will be more likely than employers and recruiters to discriminate against some groups of people in society 19%
   The technology will gather personal information which could be shared with third parties 32%
   People’s personal information will be less safe and secure 19%
   It will lead to job cuts. For example, for trained recruitment staff 34%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 40%
   Employers and recruiters will rely too heavily on the technology rather than their professional judgements 64%
   It will be more difficult to understand how decisions about job application assessments are reached 52%
   Something else (please specify) 1%
   None of these 3%
   Don’t know 7%
Loan repayment risk  The technology will be unreliable and cause delays to assessing loan applications 18%
   The technology will gather personal information which could be shared with third parties 37%
   People’s personal information will be less safe and secure 21%
   Banking professionals may rely too heavily on the technology rather than their professional judgements 51%
   The technology will not be as accurate as banking professionals at predicting the risk of repaying a loan 21%
   The technology will be more likely than banking professionals to discriminate against some groups of people in society 16%
   It will be more difficult to understand how decisions about loan applications are reached 49%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 43%
   It will lead to job cuts. For example, for trained banking professionals 33%
   The technology will be less able than banking professionals to take account of individual circumstances 52%
   Something else (please specify) 1%
   None of these 4%
   Don’t know 8%
   Prefer not to say 0%
Welfare eligibility  The technology will be unreliable and will cause delays to allocating benefits 24%
   The technology will not be as accurate as welfare officers at determining eligibility for benefits 29%
   The technology will be more likely than welfare officers to discriminate against some groups of people in society 13%
   The technology will gather personal information which could be shared with third parties 32%
   People’s personal information will be less safe and secure 19%
   It will lead to job cuts. For example, for trained welfare officers 35%
   It will be more difficult to understand how decisions about allocating benefits are reached 45%
   Welfare officers will rely too heavily on the technology rather than their professional judgements 47%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 47%
   The technology will be less able than welfare officers to take account of individual circumstances 55%
   Something else (please specify) 1%
   None of these 5%
   Don’t know 10%
   Prefer not to say 0%
Facial recognition at border control  The technology will be unreliable and cause delays when it breaks down 44%
   The technology will not be as accurate as border control officers at detecting people who do not have the right to enter 20%
   The technology will gather personal information which could be shared with third parties 29%
   People’s personal information will be less safe and secure 15%
   The technology will be more likely than border control officers to discriminate against some groups of people in society 10%
   Border control officers will rely too heavily on the technology rather than their professional judgements 41%
   Some people might find it difficult to use the technology 42%
   It will lead to job cuts. For example, for trained border control officers 47%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 47%
   It will be more difficult to understand how decisions are reached 26%
   Something else (please specify) 1%
   None of these 6%
   Don’t know 4%
Facial recognition for mobile phone unlocking  The technology will be unreliable, making it take longer to unlock your phone or personal device 21%
   The technology will gather personal information which could be shared with third parties 40%
   The technology will make it easier for other people to unlock your phone or personal device 23%
   People’s personal information will be more safe and secure 19%
   Some people may find it difficult to use the technology 41%
   The technology will be less effective for some groups of people in society than others 33%
   Something else (please specify) 1%
   None of these 12%
   Don’t know 3%
Facial recognition for policing and surveillance The technology will be unreliable and will cause delays identifying wanted criminals and missing persons 15%
  The technology will not be as accurate as police officers and staff at identifying wanted criminals and missing persons 13%
   If the technology makes a mistake it will lead to innocent people being wrongly accused 54%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 48%
   The technology will be more likely than police officers and staff to discriminate against some groups of people in society 15%
   The technology will gather personal information which could be shared with third parties 38%
   People’s personal information will be less safe and secure 21%
   It will lead to job cuts. For example, for trained police officers and staff 30%
   Police officers and staff will rely too heavily on the technology rather than their professional judgements 46%
   Something else (please specify) 1%
   None of these 8%
   Don’t know 4%
Autonomous weapons  The technologies will be unreliable and may miss or not fire at targets 41%
   The technologies will lead to more civilians being harmed or killed 33%
   The technologies will not be as accurate at identifying targets as human soldiers 29%
   The technologies will be more likely than human soldiers to target people based on particular characteristics 22%
   Defence staff will rely too heavily on the technologies rather than their professional judgements 54%
   It will lead to job cuts. For example, for trained defence staff 25%
   If the technologies make a mistake, it will be difficult to know who is responsible for what went wrong 53%
   It is more difficult to understand how military decisions are reached 39%
   The technologies will lead to more soldiers being harmed or killed 21%
   Something else (please specify) 2%
   None of these 5%
   Don’t know 11%
   Prefer not to say 0%
Driverless cars  The technology will not always work, making the cars unreliable 62%
   Getting to places will take longer as the cars will be overly cautious 25%
   Driverless cars will not be as accurate or precise as humans are at driving 38%
   The technology will gather personal information which could be shared with third parties 22%
   The technology will be less effective for some groups of people in society than others 26%
   Some people may find it difficult to use the technology 46%
   It will lead to job cuts. For example, for truck drivers, taxi drivers, delivery drivers 44%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 59%
   It will be more difficult to understand how the car makes decisions compared to a human driver 51%
   Driverless cars will be more likely to cause accidents than human drivers 36%
   Something else (please specify) 2%
   None of these 4%
   Don’t know 2%
   Prefer not to say 0%
Robotic care assistant  The technology will be unreliable and cause delays to urgent caregiving tasks 34%
   The technology will not be able to do tasks such as lifting patients out of bed as accurately as caregiving professionals 37%
   The technology will be less effective for some groups of people in society than others 33%
   It will lead to job cuts. For example, for trained caregiving professionals 46%
   The technology will not be safe, it could hurt people 41%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 45%
   The technology will gather personal information which could be shared with third parties 20%
   Patients will miss out on the human interaction they would otherwise get from human carers 78%
   Something else (please specify) 1%
   None of these 3%
   Don’t know 7%
   Prefer not to say 0%
   Technology may miss subtle signs when assisting patients 1%
Robotic vacuum cleaner  The technology will be unreliable and not always work, for example, the motion sensors will not detect steps or surface change 45%
   The technology will not be as accurate as a human at vacuuming 42%
   The technology will be a safety hazard, you might trip on them 40%
   The technology will gather personal information which could be shared with third parties 12%
   People’s personal data will be less safe and secure 9%
   Some people may find it difficult to use the technology 39%
   The technology will be less effective for some groups of people in society than others 18%
   Something else (please specify) 1%
   None of these 14%
   Don’t know 5%
Smart speaker  The technology will be unreliable and cause delays to doing tasks 18%
   The technology will not always give accurate responses 51%
   The technology will be less effective for some groups of people in society than others 32%
   Some people may find it difficult to use the technology 44%
   The technology will gather personal information which could be shared with third parties 57%
   People’s personal information will be less safe and secure 41%
   Something else (please specify) 0%
   None of these 7%
   Don’t know 6%
Virtual healthcare assistant  The technology will be unreliable and cause delays to getting help 31%
   The technology will not be as accurate as a healthcare professional at suggesting a diagnosis and treatment options 51%
   The technology will be less able than healthcare professionals to take account of individual circumstances 63%
   The technology will be less effective for some groups of people in society than others 38%
   Some people may find it difficult to use the technology 64%
   The technology will gather personal information which could be shared with third parties 35%
   People’s personal information will be less safe and secure 24%
   It will lead to job cuts. For example, for trained healthcare professionals 38%
   If the technology makes a mistake, it will be difficult to know who is responsible for what went wrong 49%
   It will be more difficult to understand how decisions about diagnoses and treatments are reached 47%
   Something else (please specify) 2%
   None of these 2%
   Don’t know 6%
Targeted online consumer ads  The technology will be inaccurate and will show people adverts that are not relevant to them 29%
   The technology will gather personal information which could be shared with third parties 68%
   People’s personal information will be less safe and secure 50%
   The technology invades people’s privacy 69%
   Something else (please specify) 2%
   None of these 6%
   Don’t know 4%
Targeted online political ads The technology will be inaccurate and will show people political adverts that are not relevant to them 33%
   The technology will gather personal information which could be shared with third parties 48%
   People’s personal information will be less safe and secure 29%
   It will reduce the diversity of political perspectives that people engage with 46%
   The technology invades people’s privacy 51%
   Something else (please specify) 1%
   None of these 5%
   Don’t know 9%
Simulations for climate change research  The technology will be unreliable, making it harder to predict the impacts of climate change and extreme weather 17%
   The technology will not be as accurate as scientists and government researchers alone at predicting climate change events 21%
   The technology will gather personal information which could be shared with third parties 13%
   The technology will predict issues in some regions better than others, meaning that some people do not experience the benefits of these technologies 36%
   Something else (please specify) 1%
   None of these 26%
   Don’t know 18%
Simulations for education  Some people will not be able to learn about history and culture in this way as they will not have access to the technology 51%
   People will gain a less accurate understanding of historical events and how people lived in the past 17%
   The technology will gather personal information which could be shared with third parties 18%
   The technology will be unreliable, making it harder to learn about history and culture 11%
   The technology will allow those developing the technology to control what people learn about history or culture 46%
   Something else (please specify) 1%
   None of these 15%
   Don’t know 11%
   Prefer not to say 0%

6.4. Sample sizes

Table 19: Weighted and unweighted sample size of respondents for each technology 

Technology Unweighted sample size Weighted sample size
Facial recognition – Unlocking mobile phones 4,010 4,002
Facial recognition – Police surveillance 1,993 1,987
Facial recognition – Border control 2,017 2,015
Risk and eligibility – Welfare 2,015 2,012
Risk and eligibility – Loan repayment 1,999 1,991
Risk and eligibility – Job eligibility 1,995 1,990
Risk and eligibility – Cancer risk 2,011 2,011
Smart speaker – Virtual assistant 2,028 2,011
Smart speaker – Virtual healthcare assistant 1,982 1,991
Robotics – Robotic care assistant 1,985 1,973
Robotics – Robotic vacuum cleaner 2,025 2,029
Robotics – Driverless cars 1,992 2,021
Robotics – Autonomous weapons 2,018 1,981
Social media targeted advertising – Consumer ads 2,010 2,002
Social media targeted advertising – Political ads 2,000 2,000
Simulations – Climate change 2,036 2,015
Simulations – Education 1,974 1,987

Table 20: Weighted and unweighted sample size of respondents by various socio-demographic variables

Demographic Unweighted sample size Weighted sample size
Survey format Online 3,757 3,647
Telephone 253 355
Region England 3,520 3,461
Scotland 303 345
Wales 187 196
Age band 18–24 years 341 408
25–34 years 709 682
35–44 years 741 654
45–54 years 692 666
55–64 years 696 645
65–74 years 513 517
75+ years 318 431
Socio-economic status SEC1, 2 1,642 1,477
SEC3 555 494
SEC4 262 298
SEC5 165 173
SEC6, 7 634 664
SEC8 122 145
Students 201 209
NA 429 543
Education level Degree level qualification(s) 1,562 1,407
No academic or vocational qualifications 283 443
Non-degree level qualifications 2,155 2,139
NA 10 13
Ethnic group Asian or Asian British 261 296
Black British, Caribbean or African 90 103
White 3,544 3,476
Any other ethnic group 103 116
NA 12 12
Sex Female 2,096 2,037
Male 1,911 1,961
NA 3 4.4

Partner information and acknowledgements

This report was co-authored by The Alan Turing Institute (Professor Helen Margetts, Dr Florence Enock, Miranda Cross) and the Ada Lovelace Institute (Aidan Peppin, Roshni Modhvadia, Anna Colom, Andrew Strait, Octavia Reeve) with substantial input from LSE’s Methodology Department (Professor Patrick Sturgis, Katya Kostadintcheva, Oriol Bosch-Jover).

We would also like to thank Kantar for their contributions to designing the survey and collecting the data. This project was made possible by a grant from The Alan Turing Institute and the Arts and Humanities Research Council (AHRC).

About The Alan Turing Institute

The Alan Turing Institute is the national institute for data science and artificial intelligence (AI). Established in 2015, we are named in honour of Alan Turing, whose pioneering work in theoretical and applied mathematics, engineering and computing laid the foundations for the modern-day fields of data science and AI. Headquartered at the British Library in London, we partner with organisations across government, industry, academia and the third sector to undertake world-class research that benefits society.


Footnotes

[1] Kantar, ‘Technical Report: How Do People Feel about AI?’ (GitHub, Ada Lovelace Institute 2023) <https://github.com/AdaLovelaceInstitute>.

[2] Centre for Data Ethics and Innovation, ‘Public Attitudes to Data and AI: Tracker Survey (Wave 2)’ (2022) <https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-2>.

[3] The Royal Society and Ipsos MORI, ‘Public Views of Machine Learning’ (2017) <https://royalsociety.org/topics-policy/projects/machine-learning>.

[4] ibid.

[5] BEIS, ‘Public Attitudes to Science’ (Department for Business, Energy and Industrial Strategy/Kantar Public 2019) <https://www.kantar.com/uk-public-attitudes-to-science>.

[6] Centre for Data Ethics and Innovation (n 2).

[7] Ada Lovelace Institute, ‘Beyond Face Value: Public Attitudes to Facial Recognition Technology’ (2019) <https://www.adalovelaceinstitute.org/report/beyond-face-value-public-attitudes-to-facial-recognition-technology>.

[8] Baobao Zhang, ‘Public Opinion Toward Artificial Intelligence’ (Open Science Framework, 2021) preprint <https://osf.io/284sm>.

[9] European Commission, Directorate-General for Communication, ‘Citizens’ Knowledge, Perceptions, Values and Expectations of Science’ (2021) <https://data.europa.eu/doi/10.2775/071577>.

[10] BEIS (n 5).

[11] Centre for Data Ethics and Innovation (n 2) 2; The Royal Society and Ipsos MORI (n 3).

[12] Lee Rainie and others, ‘AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns’ (Pew Research Center, 2022) <https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence>.

[13] BEIS (n 5).

[14] Sabine N van der Veer and others, ‘Trading off Accuracy and Explainability in AI Decision-Making: Findings from 2 Citizens’ Juries’ (2021) 28 Journal of the American Medical Informatics Association 2128 <https://academic.oup.com/jamia/article/28/10/2128/6333351>.

[15] The Alan Turing Institute, ‘Project ExplAIN’ (2023) <https://www.turing.ac.uk/news/project-explain>.

[16] Kantar (n 1).

[17] ibid.

[18] Kantar, ‘Public Voice’ (2022) <https://www.kantar.com/uki/expertise/policy-society/public-evidence/public-voice>.

[19] The technical report specifies a total of 4,012, but one respondent aged 16 was removed from the dataset as the survey was for adults aged 18+, and another, who gave their sex as ‘other’, was removed because they were the only participant identifying in this way and would therefore have carried a very large weight. Further information is available in the limitations section.

[20] While participants indicated more specific ethnic identities at the time of recruitment to the Public Voice panel, we combine them into these broader categories to provide an overview of the sample.

[21] Kantar (n 1).

[22] Ada Lovelace Institute (n 7).

[23] ibid.

[24] Lina Dencik and others, ‘Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services’ (Data Justice Lab, 2018) <https://datajusticelab.org/data-scores-as-governance>.

[25] Britain Thinks and CDEI, ‘AI Governance’ (2022) <https://www.gov.uk/government/publications/cdei-publishes-research-on-ai-governance>.

[26] Centre for Data Ethics and Innovation (n 2).

[27] The Royal Society and Ipsos MORI (n 3).

[28] BEIS, ‘BEIS Public Attitudes Tracker: Artificial Intelligence Summer 2022, UK’ (Department for Business, Energy and Industrial Strategy 2022) <https://www.gov.uk/government/statistics/beis-public-attitudes-tracker-summer-2022>.

[29] Ada Lovelace Institute (n 7).

[30] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (2022) <https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/> accessed 12 December 2022.

[31] Britain Thinks and CDEI (n 25).

[32] Ada Lovelace Institute (n 7).

[33] BEIS (n 5).

[34] Centre for Data Ethics and Innovation (n 2).

[35] European Interactive Digital Advertising Alliance, ‘Your Online Voices’ (2022) <https://edaa.eu/your-online-voices-your-voice-your-choice>.

[36] For which both ‘inaccuracy’ and ‘unclear how decisions are made’ were among the given concerns to choose from.

[37] Jonathan Dupont, Seb Wride and Vinous Ali, ‘What Does the Public Think about AI?’ (Public First 2023) <https://publicfirst.co.uk/ai/>.

[38] Florence Enock and others, ‘Tracking Experiences of Online Harms and Attitudes Towards Online Safety Interventions: Findings from a Large-Scale, Nationally Representative Survey of the British Public’ (2023) SSRN Electronic Journal <https://www.ssrn.com/abstract=4416355>.

[39] Centre for Data Ethics and Innovation (n 2).
