Evidence review

What do the public think about AI?

Understanding public attitudes and how to involve the public in decision-making about AI

Octavia Reeve, Anna Colom, Roshni Modhvadia

26 October 2023


Understanding public attitudes towards artificial intelligence (AI), and how to involve people in decision-making about AI, is becoming ever-more urgent in the UK and internationally. As new technologies are developed and deployed, and governments move towards proposals for AI regulation, policymakers and industry practitioners are increasingly navigating complex trade-offs between opportunities, risks, benefits and harms.

Taking into account people’s perspectives and experiences in relation to AI – alongside expertise from policymakers and technology developers and deployers – is vital to ensure AI is aligned with societal values and needs, in ways that are legitimate, trustworthy and accountable.

As the UK Government and other jurisdictions consider AI governance and regulation, it is imperative that policymakers have a robust understanding of relevant public attitudes and how to involve people in decisions.

This rapid review is intended to support policymakers – in the context of the UK AI Safety Summit and afterwards – to build that understanding. It brings together a review of evidence about public attitudes towards AI that considers the question: ‘What do the public think about AI?’ In addition, it provides knowledge and methods to support policymakers to meaningfully involve the public in current and future decision-making around AI.

Introduction

Why is it important to understand what the public think about AI?

We are experiencing rapid development and deployment of AI technologies and heightened public discourse on their opportunities, benefits, risks and harms. This is accompanied by increasing interest in public engagement and participation in policy decision-making, described as a ‘participatory turn’ or ‘deliberative wave’.

However, there is some hesitation around the ability or will of policy professionals and governments to consider the outcomes of these processes meaningfully, or to embed them into policies. Amid accelerated technological development and efforts to develop and coordinate policy, public voices are still frequently overlooked or absent.

The UK’s global AI Safety Summit in November 2023 invites ‘international governments, leading AI companies and experts in research’ to discuss how coordinated global action can help to mitigate the risks of ‘frontier AI’.[1] Making AI safe requires ‘urgent public debate’.[2] These discussions must include meaningful involvement of people affected by AI technologies.

The Ada Lovelace Institute was founded on the principle that discussions and decisions about AI cannot be made legitimately without the views and experiences of those most impacted by the technologies. The evidence from the public presented in this review demonstrates that people have nuanced views, which change in relation to perceived risks, benefits, harms, contexts and uses.

In addition, our analysis of existing research shows some consistent views:

  • People have positive attitudes about some uses of AI (for example, in health and science development).
  • There are concerns about AI for decision-making that affects people’s lives (for example, eligibility for welfare benefits).
  • There is strong support for the protection of fundamental rights (for example, privacy).
  • There is a belief that regulation is needed.

The Ada Lovelace Institute’s recent policy reports Regulating AI in the UK[3] and Foundation models in the public sector[4] have made the case for public participation and civil society involvement in the regulation of AI and governance of foundation models. Listening to and engaging the public is vital not only to make AI safe, but also to make sure it works for individual people and wider society.

Why is public involvement necessary in AI decision-making?

This rapid review of existing research with different publics, predominantly in the UK, shows consistency across a range of studies as to what the public think about different uses of AI, and provides calls to action for policymakers. It draws important insights from existing evidence that can help inform just and equitable approaches to developing, deploying and regulating AI.

This evidence must be taken into account in decision-making about the distribution of emerging opportunities and benefits of AI – such as the capability of systems to develop vaccines, identify symptoms of diseases like cancers and help humans adapt to the realities of climate change. It should also be considered in decision-making to support governance of AI-driven technologies that are already in use today in ways that permeate the everyday lives of individuals and communities, including people’s jobs and the provision of public services like healthcare, education or welfare.

This evidence review demonstrates that listening to the public is vital in order for AI technologies and uses to be trustworthy. It also evidences a need for more extensive and deeper research on the many uses and impacts of AI across different publics, societies and jurisdictions. Public views point towards ways to harness the benefits and address the challenges of AI technologies, as well as to the desire for diverse groups in society to be involved in how decisions are made.

In summary, the evidence that follows presents an opportunity for policymakers to listen to and engage with the views of the public, so that policy can effectively navigate the complex and fast-moving world of AI with legitimacy, trustworthiness and accountability in decision-making processes.

What this rapid evidence review does and does not do

This review brings together research conducted with different publics by academics, researchers in public institutions, and private companies, assessed against the methodological rigour of each research study. It addresses the following research questions:

  • What does the existing evidence say about people’s views on AI?
  • What methods of public engagement can be used by policymakers to involve the public meaningfully in decisions on AI?

As a rapid evidence review, this is not intended to be a comprehensive and systematic literature review of all available research. However, we identify clear and consistent attitudes, drawn from a range of research methods, that should guide policymakers’ decision-making at this significant time for AI governance.

More detail is provided in the ‘Methodology’ section.

How to read this review

…if you’re a policymaker or regulator concerned with AI technologies:

The first part of this review summarises themes identified in our analysis of evidence relating to people’s views on AI technologies. The headings in this section synthesise the findings into areas that relate to current policy needs.

In the second part of the report, we build on the findings to offer evidence-based solutions for how to meaningfully include the views of the public in decision-making processes. The insights come from this review of evidence alongside research into public participation.

The review aims to support policymakers in understanding more about people’s views on AI and about different kinds of public engagement, and in finding ways to involve the public in decisions on AI uses and regulation.

…if you’re a developer or designer building AI-driven technologies, or a deployer or organisation using them or planning to incorporate them:

Read Findings 1 to 5 to understand people’s expectations, hopes and concerns for how AI technologies need to be designed and deployed.

Findings 6 and 7 will support understanding of how to include people’s views in the design and evaluation of technologies, to make them safer before deployment.

…if you’re a researcher, civil society organisation, public participation practitioner or member of the public interested in technology and society:

We hope this review will be a resource to take stock of people’s views on AI from evidence across a range of research studies and methods.

In addition to pointing out what the evidence shows so far, Findings 1 to 6 also indicate gaps and omissions, which can support the identification of further research questions to answer through research or public engagement.

Clarifying terms

The public

Our societies are diverse in many ways, and historic imbalances of power mean that some individuals and groups are more represented than others in both data and technology use, and more exposed than others to the opportunities, benefits, risks or harms of different AI uses.


There are therefore many publics whose views matter in the creation and regulation of AI. In this report, we refer to ‘the public’ to distinguish citizens and residents from other stakeholders, including the private sector, policy professionals and civil society organisations. We intentionally use the singular form of ‘public’ as a plural (‘the public think’), to reinforce the implicit acknowledgement that society includes many publics with different levels of power and lived experiences.

Safety

While the UK’s AI Safety Summit of 2023 has been framed around ‘safety’, there is no consensus definition of this term, and there are many ways of thinking about risks and harms from AI. The idea of ‘safety’ is employed in other important domains – like medicines, air travel and food – to ensure that systems and technologies enjoy public trust. As AI increasingly forms a core part of our digital infrastructure, our concept of AI safety will need to be similarly broad.[5]


The evidence in this report was not necessarily or explicitly framed by questions about ‘safety’. It surfaces people’s views about the potential or perceived opportunities, benefits, risks and harms presented by different uses of AI. People’s lived experience of and views on AI technologies are useful for understanding what safety might mean in its broader scope, and where policymakers’ attention – for example on national security – does not reflect diverse publics’ main concerns.

AI and AI systems

We use the UK Data Ethics Framework’s definition of AI systems, which describes them as technologies that ‘carry out tasks that are commonly thought to require human intelligence. [AI systems] deploy digital tools to find repetitive patterns in very large amounts of data and use them, in various ways, to perform tasks without the need for constant human supervision’.[6]


With this definition in mind, our analysis of attitudes to AI includes attitudes to data because data and data-driven technologies (like artificial intelligence and computer algorithms) are deeply intertwined, and AI technologies are underpinned by data collection, use, governance and deletion. In this review, we focus on research into public attitudes towards AI specifically, but draw on research about data more broadly where it is applicable, relevant and informative.

Expectations

Public attitudes research often describes public ‘expectations’. Where we have reported what the public ‘expect’ in this review, our interpretation of this term means what the public feel is required from AI practices and regulation. ‘Expectation’, in this sense, does not refer to what people predict will happen.


Summary of findings

What do the public think about AI?

  • 1. Public attitudes research is consistent in showing what the public think about some aspects of AI, which the findings below identify. This evidence is an opportunity for policymakers to ensure the views of the public are included in next steps in policy and regulation.
  • 2. There isn’t one ‘AI’: the public have nuanced views and differentiate between benefits, opportunities, risks and harms of existing and potential uses of different technologies.
    • The public have nuanced views about different AI technologies.
    • Some concerns are associated with socio-demographic differences.
  • 3. The public welcome AI uses that can make tasks efficient, accessible and supportive of public benefit, but they also have specific concerns about other uses and effects, especially when AI replaces human decision-making in ways that affect people’s lives.
    • The public recognise potential benefits of uses of AI relating to efficiency, accessibility and working for the public good.
    • The public are concerned about an overreliance on technology over professional judgement and human communication.
    • Public concerns relate to the impacts of uses of AI on jobs, privacy or societal inequalities.
    • In relation to foundation models: existing evidence indicates that the public have concerns about uses beyond mechanical, low-risk analysis tasks, and about the impact of these models on jobs.
  • 4. Regulation and the way forward: people have clear views on how to make AI work for people and society.
    • The evidence is consistent in showing a demand for regulation of data and AI that is independent and has ‘teeth’.
    • The public are less trusting of private industry developing and regulating AI-driven technologies than other stakeholders.
    • The public are concerned about ethics, privacy, equity, inclusiveness, representativeness and non-discrimination. The use of data-driven technologies should not exacerbate unequal social stratification or create a two-tiered society.
    • Explainability of AI-driven decisions is important to the public.
    • The public want to be able to address and appeal decisions determined by AI.
  • 5. People’s involvement: people want to have a meaningful say over decisions that affect their everyday lives.
    • The public want their views and experiences to be included in decision-making processes.
    • The public expect to see diversity in the views that are included and heard.

How can involving the public meaningfully in decision-making support safer AI?

  • 6. There are important gaps in research with underrepresented groups, those impacted by specific AI uses, and in research from different countries.
    • Different people and groups, such as young people or people from minoritised ethnic communities, have distinct views about AI.
    • Some people, groups and parts of the world are underrepresented in the evidence.


  • 7. There is a significant body of evidence that demonstrates ways to meaningfully involve the public in decision-making, but making this happen requires a commitment from decision-makers to embed participatory processes.
    • Public attitudes research, engagement and participation involve distinct methods that deliver different types of evidence and outcomes.
    • Complex or contested topics need careful and deep public engagement.
    • Deliberative and participatory engagement can provide informed and reasoned policy insights from diverse publics.
    • Using participation as a consultative or tick-box exercise risks the trustworthiness, legitimacy and effectiveness of decision-making.
    • Empirical practices, evidence and research on embedding participatory and deliberative approaches can offer solutions to policymakers.


Different research methods, and the evidence they produce

There are three principal types of evidence in this review:

  1. Representative surveys, which give useful, population-level insights but can be consultative (meaning participants have low agency) for those involved.
  2. Deliberative research, which enables informed and reasoned policy conclusions from groups reflective of a population (meaning a diverse group of members of the public).
  3. Co-designed research, which can embed people’s lived experiences into research design and outputs, and make power dynamics (meaning knowledge and agency) between researchers and participants more equitable.


Different methodologies surface different types of evidence. Table 1 in the Appendix summarises some of the strengths of different research methods included in this evidence review.


Most of the evidence in this review is from representative surveys (14 studies), followed by deliberative processes (nine) and qualitative interviews and focus groups (six studies). In addition, there is one study involving peer research. The smaller number of deliberative studies compared to quantitative research, alongside evidence included in Finding 7, may indicate the need for more in-depth public engagement methods.

Detailed findings

What do the public think about AI?

Finding 1: Public attitudes research is consistent in showing what the public think about some aspects of AI, which the findings below identify.


This evidence is an opportunity for policymakers to ensure the views of the public are included in next steps in policy and regulation.

Our synthesis of evidence shows there is consistency in public attitudes to AI across studies using different methods.

These include positive attitudes about some uses of AI (for example, advancing science and some aspects of healthcare), concerns about AI making decisions that affect people’s lives (for example, assessing eligibility for welfare benefits), strong support for the protection of fundamental rights (for example, privacy) and a belief that regulation is needed.

The evidence is consistent in showing concerns with the impact of AI technologies in people’s everyday lives, especially when these technologies replace human judgement. This concern is particularly evident in decisions with substantial consequences on people’s lives, such as job recruitment and access to financial support; when AI technologies replace human compassion in contexts of care; or when they are used to make complex and moral judgements that require taking into account soft factors like trust or opportunity. People are also concerned about privacy and the normalisation of surveillance.

The evidence is consistent in showing a demand for public involvement and for diverse views to be meaningfully engaged in decision-making related to AI uses.

We develop these views in detail in the following findings and reference the studies that support them.

Finding 2: There isn’t one ‘AI’


The public have nuanced views and differentiate between benefits, opportunities, risks and harms of existing and potential uses of different technologies

The public have nuanced views about different AI technologies

  • The public see some uses of AI as clearly beneficial. This was an insight from the joint Ada Lovelace Institute and The Alan Turing Institute’s research report How do people feel about AI?, which asked about specific AI-driven technologies.[7] In the nationally representative survey of the British public, people identified 11 of the 17 technologies we asked about as either somewhat or very beneficial. The use of AI for detecting the risk of cancer was seen as beneficial by nine out of ten people.
  • The public see some uses of AI as concerning. The same survey found the public also felt other uses were more concerning than beneficial, like advanced robotics or targeted advertising. Uses in care were also viewed as concerning by around half of people, with 55% either somewhat or very concerned by virtual healthcare assistants, and 48% by robotic care assistants. In a separate qualitative study, members of the UK public suggested that ‘the use of care robots would be a sad reflection of a society that did not value care givers or older people’.[8]
  • Overall, the public can simultaneously perceive the benefits as well as the risks presented by most applications of AI. More importantly, the public identify concerns to be addressed across all technologies, even when seen as broadly beneficial, as found in How do people feel about AI? by the Ada Lovelace Institute and The Alan Turing Institute.[9] Similarly, a recent survey in the UK by the Office for National Statistics found that, when people were asked to rank whether AI would have a positive or negative impact on society, the most common response was neutral, falling between the two ends of the scale.[10] In a recent qualitative study in the UK, USA and Germany, participants also ‘saw benefits and concerns in parallel: even if they had a concern about a particular AI use case, they could recognise the upsides, and vice versa’.[11] Other research, including both surveys and qualitative studies in the USA[12] [13] and Germany,[14] has also found mixed views depending on the application of AI.

This nuance in views, depending on the context in which a technology is used, is illustrated by one of the comments of a juror during the Citizens’ Biometrics Council:

‘Using it [biometric technology] for example to get your money out of the bank, is pretty uncontroversial. It’s when other people can use it to identify you in the street, for example the police using it for surveillance, that has another range of issues.’
– Juror, The Citizens’ Biometrics Council[15]

Some concerns are associated with socio-demographic differences

  • Higher awareness, levels of education and levels of information are associated with more concern about some types of technologies. The 2023 survey of the British public How do people feel about AI? found that those who have a degree-level education and feel more informed about technology are less likely to think that technologies such as facial recognition, eligibility technologies and targeted advertising in social media are beneficial.[16] A prior BEIS Public Attitudes Tracker reported similar findings.[17]
  • In the USA, the Pew Research Center found in 2023 that ‘those who have heard a lot about AI are 16 points more likely now than they were in December 2022 to express greater concern than excitement about it.’[18] Similarly, existing evidence suggests that public concerns around data should not be dismissed as uninformed,[19] which goes against the assumption that the more people know about a technology, the more they will support it.

Finding 3: The public welcome AI uses that can make tasks efficient, accessible and supportive of public benefit


But they also have specific concerns about other uses and effects, especially when AI replaces human decision-making in ways that affect people’s lives.

The public recognise potential benefits of uses of AI relating to efficiency, accessibility and working for the public good

  • The public see the potential of AI-driven technologies in improving efficiency including speed, scale and cost-saving potential for some tasks and applications. They particularly welcome its use in mechanical tasks,[20] [21] in health, such as for early diagnosis, and in the scientific advancement of knowledge.[22] [23] [24] [25] For example, a public dialogue on health data by the NHS AI Lab found that the perceived benefits identified by participants included ‘increased precision, reliability, cost-effectiveness and time saving’ and that ‘through further discussion of case studies about different uses of health data in AI research, participants recognised additional benefits including improved efficiency and speed of diagnosis’.[26]
  • Improving accessibility is another potential perceived benefit of some AI uses, although other uses can also compromise it. For example, How do people feel about AI? by the Ada Lovelace Institute and The Alan Turing Institute found that accessibility was the most commonly selected benefit for robotic technologies that can make day-to-day activities easier for people.[27] These technologies included driverless cars and robotic vacuum cleaners. However, there is also a view that these benefits may be compromised by digital divides and inequalities. For example, members of the Citizens’ Biometrics Council, who reconvened in November 2022 to consider the Information Commissioner’s Office (ICO)’s proposals for guidance on biometrics, raised concerns that while there is potential for biometrics to make services more accessible, an overreliance on poorly designed biometric technologies would create more barriers for people who are disabled or digitally excluded.[28]
  • For controversial uses of AI, such as certain uses of facial recognition or biometrics, there may be support when the public benefit is clear. The Citizens’ Biometrics Council that Ada convened in 2021 felt the use of biometrics was ‘more ok’ when it was in the interests of members of the public as a priority, such as in instances of public safety and health.[29] However, they concluded that the use of biometrics should not infringe people’s rights, such as the right to privacy. They also asked for safeguards related to regulation as described in Finding 4, such as independent oversight and transparency on how data is used, as well as addressing bias and discrimination or data management. The 2023 survey by the Ada Lovelace Institute and The Alan Turing Institute, How do people feel about AI?, found that speed was the main perceived benefit of facial recognition technologies, such as its use to unlock a phone, for policing and surveillance and at border control. But participants also raised concerns related to false accusations or accountability for mistakes.[30]

The public are concerned about an overreliance on technology over professional judgement and human communication

‘“Use data, use the tech to fix the problem.” I think that’s very indicative of where we’re at as a society at the moment […] I don’t think that’s a good modality for society. I don’t think we’re going down a good road with that.’
– Jury member, The rule of trust[31]

  • There are concerns in the evidence reviewed that an overreliance on data-driven systems will affect people’s agency and autonomy.[32] [33] [34] Relying on technology over professional judgement seems particularly concerning for people when AI is applied to eligibility, scoring or surveillance, because of the risk of discrimination and not being able to explain decisions that have high stakes, including those related to healthcare or jobs.[35] [36] [37] [38]
  • The nationally representative survey of the British public How do people feel about AI? found that not being able to account for individual circumstances was a concern related to this loss of agency. For example, almost two thirds (64%) of the British public were concerned that workplaces would rely too heavily on AI over professional judgement for recruitment.
  • Qualitative studies help to understand that this concern relates to fear of losing autonomy as well as fairness over important decisions, even when people can see the benefits of some uses. For example, in a series of workshops conducted in the USA, a participant said: ‘[To] have your destiny, or your destination in life, based on mathematics or something that you don’t put in for yourself… to have everything that you worked and planned for based on something that’s totally out of your control, it seems a little harsh. Because it’s like, this is what you’re sent to do, and because of [an] algorithm, it sets you back from doing just that. It’s not fair.’[39]
  • Autonomy remains important, even when technologies are broadly seen as beneficial. Research by the Centre for Data Ethics and Innovation (CDEI) found that, even when the benefits of AI were broadly seen to outweigh the risks in terms of improving efficiency, ‘the risks are more front-of-mind with strong concern about societal reliance on AI and where this may leave individuals and their autonomy’.[40]
  • There is a concern that algorithm-based decisions are not appropriate for making complex and moral judgements, and that they will generate ‘false confidence in the quality, reliability and fairness of outputs’.[41] [42] A study involving workshops in Finland, Germany, the UK and the USA gave as examples of these complex or moral judgements those that ‘moved beyond assessment of intangibles like soft factors, to actions like considering extenuating circumstances, granting leniency for catastrophic events in people’s lives, ‘giving people a chance’, or taking into account personal trust’.[43] A participant from Finland said: ‘I don’t believe an artificial intelligence can know whether I’m suitable for some job or not.’[44]
  • Research with the public also shows concerns that an overreliance on technology will result in a loss of compassion and the human touch in important services like health care.[45] [46] This concern is also raised in relation to technologies using foundation models: ‘Imagine yourself on that call. You need the personal touch for difficult conversations.’[47]

Concerns also relate to the impacts of uses of AI on jobs, privacy or societal inequalities

  • Public attitudes research also finds some concern about job loss or reduced job opportunities for some applications of AI. For example, in a recent survey of the British public, the loss of jobs was identified as a concern by 46% of participants in relation to the use of robotic care assistants and by 47% in relation to facial recognition at border control as this would replace border staff.[48] Fear of the replacement or loss of some professions is also echoed in research from other countries in Europe[49] [50] and from the USA.[51] [52] For example, survey results from 2023 found that nearly two fifths of American workers are worried that AI might make some or all of their job duties obsolete.[53]
  • The public care about privacy and how people’s data is used, especially for the use of AI in everyday technologies such as smart speakers or for targeted advertising in social media.[54] [55] For example, the survey How do people feel about AI? found that over half (57%) of participants are concerned that smart speakers will gather personal information that could be shared with third parties, and that 68% are concerned about this for targeted social media adverts.[56] Similarly, the 2023 survey by the Pew Research Center in the USA found that people’s concerns about privacy in everyday uses of AI are growing, and that this increase relates to a perceived lack of control over their own personal information.[57]
  • The public have also raised concerns about how some AI uses can be a threat to people’s rights, including the normalisation of surveillance.[58] Jurors in a deliberation on governance during pandemics were concerned about whether data collected during a public health crisis – in this case, the COVID-19 pandemic – could subsequently be used to surveil, profile or target particular groups of people. In addition, survey findings from March 2021 showed that minority ethnic communities in the UK were more concerned than white respondents about legal and ethical issues around vaccine passports.[59] In the workplace, whether in an office or working remotely, over a third (34%) of American workers were worried that their ‘employer uses technology to spy on them during work hours’, regardless of whether or not they report knowing they were being monitored at work.[60]
  • The public also care about the risk that data-driven technologies exacerbate inequalities and biases. Participants in deliberative engagements call for proportionality and a context-specific approach to the use of AI and data-driven technologies.[61] [62] For example, bias and justice were core themes raised by the Citizens’ Biometrics Council that Ada convened in 2021. The members of the jury made six recommendations to address bias, discrimination and accuracy issues, such as ensuring technologies are accurate before they are deployed, fixing them to remove bias and taking them through an Ethics Committee.[63]

‘There is a stigma attached to my ethnic background as a young Black male. Is that stigma going to be incorporated in the way technology is used? And do the people using the technologies hold that same stigma? It’s almost reinforcing the fact that people like me get stopped for no reason.’
– Jury member, The Citizens’ Biometrics Council[64]

Foundation models: existing evidence indicates that the public have concerns about uses beyond mechanical, low-risk analysis tasks and about the impact of these models on jobs

The evidence from the public so far on foundation models[65] is consistent with attitudes to other applications of AI. People can see both benefits and disadvantages relating to these technologies, some of which overlap with attitudes towards other applications of AI, while others are specific to foundation models. However, evidence from the public about these technologies is limited, and more public participation is needed to better understand how the public feel foundation models should be developed, deployed and governed. The evidence below is from a recent qualitative study by the Centre for Data Ethics and Innovation (CDEI).[66]

  • People see the role of foundation models as potentially beneficial in assisting and augmenting mechanical, low-stakes human capabilities, rather than replacing them.[67] For example, participants in this study saw foundation models as potentially beneficial when they were doing data synthesis or analysis tasks. This could include assisting policymaking by synthesising population data or advancing scientific research by speeding up analysis or finding new patterns in the data, which were some of the potential uses presented to participants in the study.


‘This is what these models are good at [synthesising large amounts of population data]… You don’t need an emotional side to it – it’s just raw data.’
– Interviewee, Public perceptions of foundation models[68]


  • Similar concerns around job losses found in relation to other applications of AI were raised by participants in the UK in relation to technologies built on foundation models.[69] There was concern that the replacement of some tasks by technologies based on foundation models would also mean workers lose the critical skills to judge whether a foundation model was doing a job well.
  • Concerns around bias extend to technologies based on foundation models. Bias and transparency were front of mind: ‘[I want the Government to consider] transparency – we should be declaring where AI has been applied. And it’s about where the information is coming from, ensuring it’s as correct as it can be and mitigating bias as much as possible.’ There was a view that bias could be mitigated by ensuring that the data training these models is cleaned so that it is accurate and representative.[70]
  • There are additional concerns about trade-offs between accuracy and speed when using foundation models. Inaccuracy of foundation models is a key concern among members of the public. This inaccuracy would require checks that may compromise potential benefits such as speed and make the process less efficient. As this participant working in education said: ‘I don’t see how I feed the piece of [homework] into the model. I don’t know if in the time that I have to set it up and feed it the objectives and then review afterwards, whether I could have just done the marking myself?’[71]
  • People are also concerned by the inability of foundation models to provide emotional intelligence. The lack of emotional intelligence and inability to communicate like a human, including understanding non-verbal cues and communication in context, was another concern raised in the study from the Centre for Data Ethics and Innovation, which meant participants did not see technologies based on foundation models as useful in decision-making.[72]


‘The emotional side of things… I would worry a lot as people call because they have issues. You need that bit of emotional caring to make decisions. I would worry about the coldness of it all.’
– Interviewee, Public perceptions of foundation models[73]


Finding 4: Regulation and the way forward: people have clear views on how to make AI work for people and society.


The evidence is consistent in showing a demand for regulation of data and AI that is independent and has ‘teeth’.

  • The public demand regulation around data and AI.[74] [75] [76] [77] In the specific context of biometric technologies, the Citizens’ Biometrics Council felt an independent body is needed to bring governance and oversight together in an otherwise crowded ecosystem of different bodies working towards the same goals.[78] The Council felt that regulation should also be able to enforce penalties for breaches of the law that are proportionate to the severity of such breaches, surfacing a desire for regulation with ‘teeth’. The Ada Lovelace Institute’s three-year project looking at COVID-19 technologies highlighted that governance and accountability measures are important for building public trust in data-driven systems.[79]
  • The public want regulation to represent their best interests. Deliberative research from the NHS AI Lab found that: ‘Participants wanted to see that patients’ and the public’s best interests were at the heart of decision-making and that there was some level of independent oversight of decisions made.’[80] Members of the Ada Lovelace Institute’s citizens’ jury on data governance during a pandemic echoed this desire for an independent regulatory body that can hold data-driven technology to account, adding that they would value citizen representation within such a body.[81]
  • Independence is important. The nationally representative public attitudes survey How do people feel about AI? revealed that a higher proportion of people felt that an independent regulator, rather than other bodies such as private companies or the Government, was best placed to ensure AI is used safely.[82] This may reflect differential relations of trust and trustworthiness between civil society and other stakeholders involved in data and AI, which we discuss in the next section.

The public are less trusting of private industry developing and regulating AI-driven technologies than they are of other stakeholders

  • Evidence from the UK and the USA finds that the public do not trust private companies as developers or regulators of AI-driven technologies, instead placing higher trust in scientists and researchers as developers, and in professionals and independent regulatory bodies as regulators.[83] [84] [85] [86] [87] For example, when asked how concerned they are with different stakeholders developing high-impact AI-driven technologies, such as systems that determine an individual’s eligibility for welfare benefits or their risk of developing cancer from a scan, survey results found that the public are most concerned by private companies being involved and least concerned by the involvement of researchers or universities.[88]
  • UK research also shows that the public do not trust private companies to act with safety or accountability in mind. The Centre for Data Ethics and Innovation’s public attitudes survey found that only 43% of people trusted big technology companies to take actions with data safely, effectively, transparently and with accountability, with this figure decreasing to 30% for social media companies specifically.[89]
  • The public are critical of the motivations of commercial organisations that develop and deploy AI systems in the public sector. Members of a public dialogue on data stewardship were sceptical of the involvement of commercial organisations in the use of health data.[90] Interviews with members of the UK public on data-driven healthcare technologies also revealed that many did not expect technology companies to act on anyone’s interests but their own.[91]

‘[On digital health services] I’m not sure that all of the information is kept just to making services better within the NHS. I think it’s used for [corporations] and large companies that do not have the patients’ best interests at heart, I don’t think.’

– Interviewee, Access Denied? Socioeconomic inequalities in digital health services[92]

The public are concerned about ethics, privacy, equity, inclusiveness, representativeness and non-discrimination, and about exacerbating unequal social stratification and creating a two-tiered society

  • The public support using and developing data-driven technologies when appropriate considerations and guardrails are in place. An earlier synthesis of public attitudes to data by the Ada Lovelace Institute shows support for the use of data-driven technologies when there is a clear benefit to society,[93] with public attitudes research into AI revealing broad positivity for applications of AI in areas like health, as described earlier in this report. Importantly, this positivity is paralleled by high expectations around ethics and responsibility to limit how and where these technologies can be used.[94] However, perceptions around innovation and regulation are not always at odds with each other. A USA participant from a qualitative study stated that ‘there can be a lot of innovation with guardrails’.[95]
  • There is a breadth of evidence highlighting that principles of equity, inclusion, fairness and transparency are important to the public:
    • The Ada Lovelace Institute’s deliberative research shows that the public believe equity, inclusiveness and non-discrimination need to be embedded into data governance during pandemics for governance to be considered trustworthy,[96] or before deploying biometric technologies.[97] The latter study highlighted that data-driven systems should not exacerbate societal inequalities or create a two-tiered society, with the public questioning the assumption that individuals have equal access to digital infrastructure and expressing concern around discriminatory consequences that may arise from applications of data-driven technology.[98]
    • Qualitative research in the UK found that members of the public feel that respecting privacy, transparency, fairness and accountability underpins good governance of AI.[99] Ethical principles such as fairness, privacy and security were valued highly in an online survey of German participants in the evaluation of the application of AI in making decisions around tax fraud.[100] These participants equally valued a range of ethical principles, highlighting the importance of taking a holistic approach to the development of AI-driven systems. Among children aged 7–11 years in Scotland, who took part in deliberative research, fairness was a key area of interest after being introduced to real-life examples of uses of AI.[101]
    • The public also emphasise the importance of considering the context within which AI-driven technologies are applied. Qualitative research in the UK found that in high-risk applications of AI, such as mental health chatbots or HMRC fraud detection services, individuals expect more information to be provided on how the system has been designed and tested than for lower-risk applications of AI, such as music streaming recommendation systems.[102] As mentioned earlier in this report, members of the Ada Lovelace Institute’s Citizens’ Biometrics Council similarly emphasised proportionality in the use of biometric technology across different contexts, with use in contexts that could enforce social control deemed inappropriate, while other uses around crime prevention elicited mixed perspectives.[103]
  • Creating a trustworthy data ecosystem is seen as crucial in avoiding resistance or backlash to data-driven technologies.[104] [105] Building data ecosystems or data-driven technologies that are trustworthy is likely to improve public acceptance of these technologies. However, a previous analysis of public attitudes to data suggests that aims to build trust can often place the burden on the public to be more trusting rather than demand more trustworthy practices from other stakeholders.[106] Members of a citizens’ jury on data governance highlighted that trust in data-driven technologies is contingent on the trustworthiness of all stakeholders involved in the design, deployment and monitoring of these technologies.[107] These stakeholders include the developers building technologies, the data governance frameworks in place to oversee these technologies and the institutions tasked with commissioning or deploying these technologies.
  • Listening to the public is important in establishing trustworthiness. Trustworthy practices can include better consultation with, listening to, and communicating with people, as suggested by UK interviewees when reflecting on UK central Government deployment of pandemic contact tracing apps.[108] These participants felt that mistrust of central Government was in part related to feeling as though the views of citizens and experts had been ignored. Finding 5 further details public attitudes around participation in data-driven ecosystems.

‘The systems themselves are quite exclusionary, you know, because I work with people with experiences of multiple disadvantages and they’ve been heavily, heavily excluded because they say they have complex needs, but what it is, is that the system is unwilling to flex to provide what those people need to access those services appropriately.’

– Interviewee, Access Denied? Socioeconomic inequalities in digital health services[109]

Explainability of AI-driven decisions is important to the public

  • It is important for people to understand how AI-driven decisions are made, for reasons relating to fairness and accountability, even if that understanding comes at the cost of the decision’s accuracy.[110] [111] [112] [113] [114] [115] The How do people feel about AI? survey of British public attitudes by the Ada Lovelace Institute and The Alan Turing Institute found that explainability was important because it helped with accountability and the need to consider individual differences in circumstance.[116] When balancing the accuracy of an AI-powered decision against an explanation of how that decision was made, or the possibility of humans making all decisions, most people in the survey preferred the latter two options. At the same time, a key concern across most AI technologies – such as virtual healthcare assistants and technologies that assess eligibility for welfare or loan repayment risk – was accountability for mistakes if things go wrong, and the need to consider individual and contextual circumstances in automated decision-making.
  • Exposing bias and supporting personal agency is also linked to support for explainability. In scenarios where biases could impact decisions, such as in job application screening decisions, participants from a series of qualitative workshops highlighted that explanations could be a mechanism to provide oversight and expose discrimination, as well as to support personal agency by allowing individuals to contest decisions and advocate for themselves: ‘A few participants also worried that it would be difficult to escape from an inaccurate decision once it had been made, as decisions might be shared across institutions, leaving them essentially powerless and without recourse.’[117]
  • However, people make trade-offs between explainability and accuracy depending on the context. The extent to which a decision is mechanical or subjective, the gravity of its consequences, whether it is the only chance at a decision, and whether information can help the recipient take meaningful action are some of the criteria identified in research with the public that shape when they favour accuracy over explainability.[118]
  • The type of information people want from explanations behind AI-driven technologies also varies depending on context. A qualitative study involving focus groups in the UK, USA and Germany found that transparency and explainability were important, and that the method for providing this transparency depended on the type of AI technology, use and potential negative impact: ‘For AI products used in healthcare or finance, they wanted information about data use, decision-making criteria and how to make an appeal. For AI-generated content, visual labels were more important.’[119]

The public want to be able to address and appeal decisions determined by AI

  • It is important for the public that there are options for redress when mistakes have been made using AI-driven technologies.[120] [121] When asked what would make them more comfortable with the use of AI, the second most commonly chosen option by the public in the How do people feel about AI? attitudes survey was ‘procedures in place to appeal AI decisions’, selected by 59% of people, with only ‘laws and regulation’ selected by more people (62%).[122] In line with the value of explanations in providing accountability and transparency, as previously discussed, workshops with members of the general public across several countries also found that explanations accompanying AI-made decisions were seen as important, as they could support appeals to change decisions if mistakes were made.[123] For example, as part of a study commissioned by the Centre for Data Ethics and Innovation, participants were presented with a scenario in which AI was used to detect tax fraud. They concluded that they would want to understand what information is used, outside of the tax record, to identify someone’s profile as a risk. As the quote below shows, understanding the criteria was important for addressing a potential mistake with significant consequences:

‘I would like to know the criteria used that caused me to be flagged up [in tax fraud detection services using AI], so that I can make sure everything could be cleared up and clear my name.’
– Interviewee, AI Governance[124]

The public ask for agency, control and choice in involvement, as well as in processes of consent and opt-in for sharing data

  • The need for agency and control over data and how decisions are made was a recurrent theme in our rapid review of evidence. People are concerned that AI systems can override people’s agency in high-stakes decisions that affect their lives. In the Ada Lovelace Institute’s and The Alan Turing Institute’s recent survey of the British public, people noted concerns about AI replacing professional judgements, not being able to account for individual circumstances and a lack of transparency and accountability in decision-making. For example, almost two thirds (64%) were concerned that workplaces would rely too heavily on AI for recruitment compared to professional judgements.[125]
  • The need for control is also mentioned in relation to consent. For example, the Ada Lovelace Institute’s previous review of evidence Who Cares what the Public Think? found that ‘people often want more specific, granular and accessible information about what data is collected, who it is used by, what it is used for and what rights data subjects have over that use.’[126] A juror from the Citizens’ Biometrics Council also referenced the importance of consent:

‘One of the things that really bugs me is this notion of consent: in reality [other] people determine how we give that consent, like you go into a space and by being there you’ve consented to this, this and this. So, consent is nothing when it’s determined how you provide it.’
– Jury member, The Citizens’ Biometrics Council[127]

  • Control also relates to privacy. Lack of privacy and control over the content people see in social media and the data that is extracted was also identified as a consistent concern in the recent survey of the British public conducted by the Ada Lovelace Institute and The Alan Turing Institute.[128] In this study 69% of people identified invasion of privacy as a concern around targeted consumer advertising and 50% were concerned about the security of their personal information.
  • Consent is particularly important in high-stakes uses of AI. Consent was also deemed important in a series of focus groups conducted in the UK, USA and Germany, especially ‘where the use of AI has more material consequences for someone affected, like a decision about a loan, participants thought that people deserved the right to consent every time’.[129] In the same study, participants noted consent is about informed choice, rather than just choosing yes or no.
  • The need for consent is ongoing and complicated by the pervasiveness of some technologies. Consent remained an issue for members of the Citizens’ Biometrics Council that the Information Commissioner’s Office (ICO) reconvened in November 2022. While some participants welcomed the inclusion of information on consent in the new guidance by the ICO, others remained concerned because of the increased pervasiveness of biometrics, which would make it more difficult for people to be able to consent.[130]
  • The demand for agency and control is also linked to demands for transparency in data-driven systems. For example, the citizens’ juries the Ada Lovelace Institute convened on health systems in 2022 found that ‘agency over personal data was seen as an extension of the need for transparency around data-driven systems. Where a person is individually affected by data, jurors felt it was important to have adequate choice and control over its use.’[131]

‘If we are giving up our data, we need to be able to have a control of that and be able to see what others are seeing about us. That’s a level of mutual respect that needs to be around personal data sharing.’
– Jury member, The rule of trust[132]

Finding 5: People’s involvement: people want to have a meaningful say over decisions that affect their everyday lives.


The public want their views and experiences to be included in decision-making processes.

  • There is a demand from participants in research for more meaningful involvement of the public, and of lived experience, in the development and implementation of, and policy decision-making on, data-driven systems and AI. For example, in a public dialogue for the NHS AI Lab, participants ‘flagged that any decision-making approaches need to be inclusive, representative, and accessible to all’. The research showed that participants valued a range of expertise, including the lived experience of patients.[133]
  • The public want their views to be valued, not just heard.[134] In the Ada Lovelace Institute’s peer research study on digital health services, participants were concerned that they were not consulted or even informed about new digital health services.[135] The research from the NHS AI Lab also found that, at the very least, when involvement takes place, the public want their views to be given the same consideration as the views of other stakeholders.[136] The evidence also shows an expectation of inclusive engagement and multiple channels of participation.[137]

There needs to be diversity in the views that are included and heard

  • A diversity of views and public participation need to be part of legislative and oversight bodies and processes.[138] The Citizens’ Biometrics Council that the Ada Lovelace Institute convened in 2020 also suggested the need to include the public in a broad, representative group of individuals charged with overseeing an ongoing framework for governance and a register on the use of biometric technologies.[139] Members of the Ada Lovelace Institute’s citizens’ jury on data governance during a pandemic advocated for public representation in any regulatory bodies overseeing AI-driven technologies.[140] Members of a public dialogue on data stewardship particularly stressed the importance of ensuring those who are likely to be affected by decisions are involved in the decision-making process.[141]

‘For me good governance might be a place where citizens […] have democratic parliament of technology, something to hold scrutiny.’
– Jury member, The rule of trust[142]

  • This desire for involvement in decisions that affect them is felt even by children as young as 7–11 years old. Deliberative engagement with children in Scotland shows that they want agency over the data collected about them, and want to be consulted about the AI systems created with that data.[143] The young participants wanted to make sure that many children from different backgrounds would be consulted when data was gathered to create new systems, to ensure outcomes from these systems were equitable for all children.

‘We need to spend more time in a place to collect information about it and make sure we know what we are working with. We also need to talk to lots of different children at different ages.’
– Member of Children’s Parliament, Exploring Children’s Rights and AI[144]

How can involving the public meaningfully in decision-making support safer AI?

Finding 6: There are important gaps in research with underrepresented groups, those impacted by specific AI uses, and in research from different countries.


Different people and groups, like young people or people from minoritised ethnic communities, have distinct views about AI.

Evidence points to age and other socio-demographic differences as factors related to varying public attitudes to AI.[145] [146] [147]

  • Young people have different views on some aspects of AI. For example, the survey of British public attitudes How do people feel about AI? showed that the belief that the companies developing AI technologies should be responsible for the safety of those technologies was more common among people aged 18–24 than in older age groups. This suggests that younger people have high expectations of private companies and some degree of trust in them carrying out their corporate responsibilities.[148]
  • Specific concerns around technology may also relate to some socio-demographic characteristics. Polling from the USA suggests worries around job losses due to AI are associated with age (workers under 58 are more concerned than those over 58) and ethnicity (people from Asian, Black and Hispanic backgrounds are more concerned than those from white backgrounds).[149] And although global engagement on AI is limited, the available evidence suggests that there may be wide geographical differences in feelings about AI and fairness, and in trust in both the companies using AI and the AI systems themselves to be fair.[150] [151]

Some people, groups and parts of the world are underrepresented in the evidence

  • Some publics are underrepresented in some of the evidence.[152] [153]
    • Sample size, recruitment, the methods used for taking part in research and other factors can affect how well research represents the views of different publics. For example, the How do people feel about AI? survey of public attitudes is limited in its ability to represent the views of groups of people who are racially minoritised, such as Black or Asian populations, due to small sample sizes. This is a common methodological limitation of representative, quantitative research, and it persists in the findings even though researchers recognise that these groups may be disproportionately affected by some of the technologies surveyed.[154] There is therefore a need for quantitative and qualitative research among those most impacted by, and least represented in research on, some uses of AI, especially marginalised or minoritised groups and younger age groups.
  • There is an overrepresentation of Western-centric views:
    • The existing evidence identified comes from English-speaking Western countries, with research often conducted by ‘a small group of experts educated in Western Europe or North America’.[155] [156] This is also reflected in the gaps of this rapid review, and the Ada Lovelace Institute recognises that, as a predominantly UK-based organisation, it may face barriers to discovering and analysing evidence emerging from across the world. In the context of global summits and discussions on global governance, and particularly recognising that the AI supply chain transcends the boundaries of nations and regions, there is a need for research and evidence that includes different contexts and political economies, where views and experiences may vary across AI uses.

Finding 7: There is a significant body of evidence that demonstrates ways to meaningfully involve the public in decision-making.

 

But making this happen requires a commitment from decision-makers to embed participatory processes.

As described in the findings above, the public want to be able to have a say in – and to have control over – decisions that impact their lives. They also think that members of the public should be involved in legislative and oversight processes. This section introduces some of the growing evidence on how to do this meaningfully.

Public attitudes research, engagement and participation involve distinct methods that deliver different evidence and outcomes 

Different methods of public engagement and participation produce different outcomes, and it is important to understand their relative strengths and limitations in order to use them effectively to inform policy (see Table 1).

Some methods are consultative, whereas others enable deeper involvement. According to the International Association for Public Participation (IAP2) framework, methods can embed the public more deeply in decision-making, increasing the impact they have on those decisions.[157] This framework has been further developed in the Ada Lovelace Institute’s report Participatory data stewardship, which sets out the relationships between different kinds of participatory practices.[158]

Surveys are quantitative methods of collecting data that capture immediate attitudes influenced by discourse, social norms and varied levels of knowledge and experience.[159] [160] They rely predominantly on closed questions that require direct responses. Analysis of survey results helps researchers and policymakers to understand the extent to which some views are held across populations, and to track changes over time. However, quantitative methods are less suited to answering ‘why’ or ‘how’ questions, and they do not allow for an informed and reasoned process. As others have pointed out: ‘surveys treat citizens as subjects of research rather than participants in the process of acquiring knowledge or making judgements.’[161]

Some qualitative studies, such as focus groups or interviews, can provide important insight into people’s views and lived experience. However, there is a risk that participation remains at the consultative level, depending on how the research is designed and embedded in decision-making processes.

Public deliberation can enable deep insights and recommendations to inform policy through an informed, reasoned and deliberative process of engagement. Participants are usually randomly selected to reflect the diversity of a population or groups, in the context of a particular issue or question. They are provided with expert guidance and informed, balanced evidence, and given time to learn, understand and discuss. These processes can be widened through interconnected events to ensure that civil society and the underrepresented or minoritised groups less likely to attend deliberative processes are included in different and relevant ways.[162] There is a risk that participants’ trust in these processes is undermined if their contributions are not seriously considered and embedded in policies.

Complex or contested topics need careful and deep public engagement

We contend that there is both a role and a need for methods of participation that provide in-depth involvement. This is particularly important when what is at stake is not a set of narrow technical questions but complex policy areas that permeate all aspects of people’s lives, as is the case with the many different uses of AI in society. The Ada Lovelace Institute’s Rethinking data report argued the following:

‘Through a broad range of participatory approaches – from citizens’ councils and juries that directly inform local and national data policy and regulation, to public representation on technology company governance boards – people are better represented, more supported and empowered to make data systems and infrastructures work for them, and policymakers are better informed about what people expect and desire from data, technologies and their uses.’[163]

Similar lessons have been learned from climate policymaking. The Global Science Partnership finds that: ‘Through our experience delivering pilots worldwide as part of the Global Science Partnership, we found that climate policy making can be more effective and impactful when combining the expertise of policymakers, experts and citizens at an early stage in its development, rather than through consulting on draft proposals.’[164]

Other research has also argued that some AI uses, in particular those that put civil and human rights at risk, are in greater need of public participation. For example, a report by Data & Society finds that AI uses related to access to government services and benefits, retention of biometric or health data, or surveillance, as well as uses that bring new ethical challenges, like generative AI or self-driving cars, require in-depth public engagement.[165]

Deliberative and participatory engagement can provide informed and reasoned policy insights from diverse publics

Evidence about participatory and deliberative approaches shows their potential for enabling rigorous engagement processes, in which publics who are reflective of the diversity of views in the population are exposed to a range of knowledge and expertise. According to democratic theorists, inclusive deliberation is a key mechanism to enable collective decision-making.[166]

Through a shared process of considered deliberation and reasoned judgement with others, deliberative publics are able to meaningfully understand different data-driven technologies and the impact they are having or can have on different groups.[167] [168]

Research on these processes shows that ‘deliberating citizens can and do influence policies’, and that such processes are being implemented in parliamentary contexts and by civil society, private companies and international institutions.[169]

Using participation as a tick-box exercise risks the trustworthiness, legitimacy and effectiveness of decision-making

Evidence from public participation research identifies the risk of using participation simply to tick a box to demonstrate public engagement, or as a stamp of approval for a decision that has already been substantially made. For example, participants in a deliberative study by the NHS AI Lab discussed the need for public engagement to be meaningful and impactful, and considered how lived experience could shape decision-making processes alongside the agendas of other stakeholders.[170]

There is a need to engage with the public in in-depth processes that are consequential in their influence on government policy.[171] Our Rethinking data report also referred to this risk:

‘In order to be successful, such initiatives need political will, support and buy-in, to ensure that their outcomes are acknowledged and adopted. Without this, participatory initiatives run the risk of ‘participation washing’, whereby public involvement is merely tokenistic.’[172]

Other lessons from public engagement in Colombia, Kenya and the Seychelles also point to the need for ‘deep engagement at all stages through the policymaking process’ to improve effectiveness, trust and transparency.[173]

Experiences and research on institutionalising participatory and deliberative approaches can offer solutions to policymakers

The use of participatory and deliberative approaches, and the evidence of their impact, are growing in the UK and many parts of the world[174] in what has been described as a ‘turn toward deliberative systems’[175] or ‘deliberative wave’.[176] [177] However, there is a need for policy professionals and governments to take the results of these processes seriously and embed them in policy.

Frameworks like the OECD’s ‘Institutionalising Public Deliberation’ provide a helpful summary of some of the ways in which this can happen, including examples like the Ostbelgien model, the city of Paris’s model and Bogotá’s itinerant assembly.[178]

Ireland’s experience of running deliberative processes that culminated in policy change,[179] and the London Borough of Newham’s experience with its standing assembly, offer other lessons.

At a global level, the Global Assembly on the Climate and Ecological Crisis held in 2021 serves as a precedent for what a global assembly or similar permanent citizens’ body on AI could look like, including civil society and underrepresented communities.[180] An independent evaluation found that the Global Assembly ‘established itself as a potential player in global climate governance, but it also spotlighted the challenges of influencing global climate governance on the institutional level.’[181] This insight underlines that such processes must be connected to decision-making bodies if they are to be consequential.

Deliberative and participatory processes have been used for decades in many areas of policymaking, but their use by governments to involve the public in decisions on AI remains surprisingly unexplored:

‘Despite their promising potential to facilitate more effective policymaking and regulation, the role of public participation in data and technology-related policy and practice remains remarkably underexplored, if compared – for example – to public participation in city planning and urban law.’[182]

This review of existing literature demonstrates ways to operationalise or institutionalise the involvement of the public in legislative processes, and lessons on how to avoid these becoming merely consultative exercises. We do not claim that all these processes and examples have always been successful, and we point to evidence that a lack of commitment from governments to implement citizens’ recommendations is one of the reasons why they can fail.[183] We contend that there is currently a significant opportunity for governments to consider processes that can embed the participation of the public in meaningful and consequential ways – and that doing this will improve outcomes for people affected by technologies and for current and future societies.

 

Conclusions

This rapid review shows that public attitudes research is consistent about what the public think of the potential benefits of AI, what their concerns are and how they think AI should be regulated.

  • It is important for governments to listen to and act on this evidence, paying particular attention to different AI uses and how they currently affect people’s everyday lives. AI uses affecting decision-making around services and jobs, or affecting human and civil rights, require particular attention. The public do not see AI as just one thing and have nuanced views about its different uses, risks and impacts. AI uses in advancing science and improving health diagnosis are largely seen as positive, and so are its uses in tasks that can be made faster and more efficient. However, the public are concerned about relying on AI systems to make decisions that impact people’s lives, such as in job recruitment or accessing financial support, either through loans or welfare.
  • The public are also concerned with uses of AI that replace human judgement, communication and emotion, in aspects like care or decisions that need to account for context and personal circumstances.
  • There are also concerns about privacy, especially in relation to uses of AI in people’s everyday lives, like targeted advertising, robotic home assistants or surveillance.
  • There is emerging evidence that the public have equivalent concerns about the use of foundation models. While these models may be welcome when they facilitate or augment mechanical, low-risk tasks or speed up data analysis, the public are concerned about trading off accuracy for speed. They are also concerned about AI uses replacing human judgement or emotion, and about their potential to amplify bias and discrimination.

Policymakers should use evidence from public attitudes research to strengthen regulation and independent oversight of AI design, development, deployment and uses, and to meaningfully engage with diverse publics in the process.

  • Evidence from the public shows a preference for independent regulation with ‘teeth’ that demands transparency and includes mechanisms for assessing risk before deployment of technologies, as well as for accountability and redress.
  • The public want to maintain agency and control over how data is used and for what purposes.
  • Inclusion and non-discrimination are important for people. There is a concern that both the design and uses of AI technologies will amplify exclusion, bias or discrimination, and the public want regulatory frameworks that prevent this.
  • Trust in data-driven systems is contingent on the trustworthiness of all stakeholders involved. The public find researchers and academics more trustworthy than the private sector. Engaging the public in the design, deployment, regulation and monitoring of these systems is also important to avoid entrenching resistance.

Policymakers should use diverse methods and approaches to engage with diverse publics with different views and experiences, and in different contexts. Engaging the public in participatory and deliberative processes to inform policy requires embedded, institutional commitment so that the engagement is consequential rather than tokenistic.

  • The research indicates differences in attitudes across demographics, including age and socio-economic background, and there is a need for more evidence from underrepresented groups and specific publics impacted by specific AI uses.
  • There is also a need for evidence from different contexts across the globe, especially considering that the AI supply chain transcends political jurisdictions.
  • The public want to have a say in decisions that affect their lives, and want spaces for themselves and representative bodies to be part of legislative and monitoring processes.
  • Different research and public participation approaches result in different outcomes. While some methods are best suited to consulting the public on a particular issue, others enable them to be involved in decision-making. Participatory and deliberative methods enable convening publics that are reflective of diversity in the population to offer informed and reasoned conclusions that can inform practice and policy.
  • Evidence from deliberative approaches shows ways for policymakers to meaningfully include the public in decision-making processes at a local, national, regional and global level, such as through citizens’ assemblies or juries and working with civil society. These processes need political will to be consequential.

Methodology

To conduct this rapid evidence review, we combined research with the public carried out by the Ada Lovelace Institute and studies by other research organisations, assessed against criteria that gave a high level of confidence in the robustness of the research.

We used keyword-based online searches to identify evidence about public attitudes in addition to our own research. We also assessed the quality and relevance of recent studies encountered or received through professional networks that we had not identified through our search, and incorporated them into the review when the assessment gave us confidence in their methodology. A thematic analysis was conducted to categorise and identify recurrent themes. These themes have been developed and structured around a set of key findings that aim to speak directly to policy professionals.

The evidence resulting from this process is largely from the UK, complemented by other research done predominantly in other English-speaking countries. This is a limitation for a global conversation on AI, and more research from across the globe and diverse publics and contexts is needed.

We focus on research conducted within recent years. The oldest evidence included dates from 2014, and the vast majority has been published since 2018. We chose this focus to ensure findings are relevant, given that events of recent years have had a profound influence on public attitudes towards technology, such as the Cambridge Analytica scandal,[184] the growth and prominence of large technology companies in society, the impacts of the COVID-19 pandemic and the popularisation of large language models including ChatGPT and Bard.

Various methodologies are used in the research we have cited, from national online surveys to deliberative dialogues, qualitative focus groups and more. Each of these methods has different strengths and limitations, and the strengths of one approach can complement the limitations of another.

Table 1: Research methodologies and the evidence they surface

Research method: Representative surveys
Type of evidence:
  • Understanding the extent to which some views are held across some groups in the population.
  • Potential to track how views change over time or how they differ across groups in the population.
Level of citizen involvement[185]: Consultation

Research method: Deliberative processes like citizens’ juries or citizens’ assemblies
Type of evidence:
  • Participants reflective of the diversity in a population reach conclusions and policy recommendations based on an informed and reasoned process that considers pros and cons and different expertise and lived experiences.
Level of citizen involvement: Involve, collaborate or empower (depending on the extent to which the process is embedded in decision-making and recommendations from participants are consequential)

Research method: Qualitative research like focus groups or in-depth interviews
Type of evidence:
  • In-depth understanding of the type of views that exist on a topic, either in a collective or individual setting, the contextual and socio-demographic reasons behind those views and an understanding of the trade-offs people make in their thinking about a topic.
Level of citizen involvement: Consultation

Research method: Co-designed research
Type of evidence:
  • Participants’ lived experience and knowledge are included in the research process from the start (including the problem that needs to be solved and how to approach the research), and power in how decisions are made is distributed.
Level of citizen involvement: Involve, collaborate or empower (depending on the extent to which power is shared across participants and researchers and the extent to which it has an impact on decision-making)

Acknowledgements

This report was co-authored by Dr Anna Colom, Roshni Modhvadia and Octavia Reeve.

We are grateful to the following colleagues for their review and comments on a draft of this paper:

  • Reema Patel, ESRC Digital Good Network Policy Lead
  • Ali Shah, Global Principal Director for Responsible AI at Accenture and advisory board member at the Ada Lovelace Institute
  • Professor Jack Stilgoe, Co-lead Policy and Public Engagement Strategy, Responsible AI UK

Bibliography

Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (2021) <https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/>

Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (2021) <https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/>

Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (2021) <https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/>

Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (2022) <https://www.adalovelaceinstitute.org/project/rethinking-data/>

Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/07/The-rule-of-trust-Ada-Lovelace-Institute-July-2022.pdf>

Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (2022) <https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/>

Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/09/ADALOV1.pdf>

Ada Lovelace Institute, ‘Listening to the Public. Views from the Citizens’ Biometrics Council on the Information Commissioner’s Office’s Proposed Approach to Biometrics.’ (2023) <https://www.adalovelaceinstitute.org/report/listening-to-the-public/>

Ada Lovelace Institute, ‘Regulating AI in the UK’ (2023) <https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 1 August 2023

Ada Lovelace Institute, ‘Foundation Models in the Public Sector: Key Considerations for Deploying Public-Sector Foundation Models’ (2023) Policy briefing <https://www.adalovelaceinstitute.org/policy-briefing/foundation-models-public-sector/>

Ada Lovelace Institute, ‘Lessons from the App Store’ (2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/06/Ada-Lovelace-Institute-Lessons-from-the-App-Store-June-2023.pdf> accessed 27 September 2023

Ada Lovelace Institute and Alan Turing Institute, ‘How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain’ (2023) <https://www.adalovelaceinstitute.org/report/public-attitudes-ai/> accessed 6 June 2023

American Psychological Association, ‘2023 Work in America Survey: Artificial Intelligence, Monitoring Technology, and Psychological Well-Being’ (https://www.apa.org, 2023) <https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring> accessed 26 September 2023

BEIS, ‘Public Attitudes to Science’ (Department for Business, Energy and Industrial Strategy/Kantar Public 2019) <https://www.kantar.com/uk-public-attitudes-to-science>

BEIS, ‘BEIS Public Attitudes Tracker: Artificial Intelligence Summer 2022, UK’ (Department for Business, Energy & Industrial Strategy 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1105175/BEIS_PAT_Summer_2022_Artificial_Intelligence.pdf>

BritainThinks and Centre for Data Ethics and Innovation, ‘AI Governance’ (2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146010/CDEI_AI_White_Paper_Final_report.pdf>

Budic M, ‘AI and Us: Ethical Concerns, Public Knowledge and Public Attitudes on Artificial Intelligence’, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (ACM 2022) <https://dl.acm.org/doi/10.1145/3514094.3539518> accessed 22 August 2023

‘CDEI | AI Governance’ (BritainThinks 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1177293/Britainthinks_Report_-_CDEI_AI_Governance.pdf> accessed 22 August 2023

Central Digital & Data Office, ‘Data Ethics Framework’ (GOV.UK, 16 September 2020) <https://www.gov.uk/government/publications/data-ethics-framework> accessed 23 May 2023

Centre for Data Ethics and Innovation, ‘Public Attitudes to Data and AI: Tracker Survey (Wave 2)’ (2022) <https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-2>

Children’s Parliament, Scottish AI Alliance and The Alan Turing Institute, ‘Exploring Children’s Rights and AI. Stage 1 (Summary Report)’ (2023) <https://www.turing.ac.uk/sites/default/files/2023-05/exploring_childrens_rights_and_ai.pdf>

Cohen K and Doubleday R (eds), Future Directions for Citizen Science and Public Policy (Centre for Science and Policy 2021)

Curato N, Deliberative Mini-Publics: Core Design Features (Bristol University Press 2021)

Curato N and others, ‘Global Assembly on the Climate and Ecological Crisis: Evaluation Report’ (2023) <https://eprints.ncl.ac.uk> accessed 26 October 2023

Curato N and others, ‘Twelve Key Findings in Deliberative Democracy Research’ (2017) 146 Daedalus 28 <https://direct.mit.edu/daed/article/146/3/28-38/27148> accessed 6 August 2021

Davies M and Birtwistle M, ‘Seizing the “AI Moment”: Making a Success of the AI Safety Summit’ (7 September 2023) <https://www.adalovelaceinstitute.org/blog/ai-safety-summit/>

Doteveryone, ‘People, Power and Technology: The 2020 Digital Attitudes Report’ (2020) <https://doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf> accessed 21 September 2023

Farbrace E, Warren J and Murphy R, ‘Understanding AI Uptake and Sentiment among People and Businesses in the UK’ (Office for National Statistics 2023)

Farrell DM and others, ‘When Mini-Publics and Maxi-Publics Coincide: Ireland’s National Debate on Abortion’ [2020] Representation 1 <https://www.tandfonline.com/doi/full/10.1080/00344893.2020.1804441> accessed 19 July 2021

Gilman M, ‘Democratizing AI: Principles for Meaningful Public Participation’ (Data & Society 2023) <https://datasociety.net/wp-content/uploads/2023/09/DS_Democratizing-AI-Public-Participation-Brief_9.2023.pdf> accessed 5 October 2023

Global Assembly Team, ‘Report of the 2021 Global Assembly on the Climate and Ecological Crisis’ (2022) <http://globalassembly.org>

Global Science Partnership, ‘The Inclusive Policymaking Toolkit for Climate Action’ (2023) <https://www.globalsciencepartnership.com/_files/ugd/b63d52_8b6b397c52b14b46a46c1f70e04839e1.pdf> accessed 3 October 2023

Goldberg S and Bächtiger A, ‘Catching the “Deliberative Wave”? How (Disaffected) Citizens Assess Deliberative Citizen Forums’ (2023) 53 British Journal of Political Science 239 <https://www.cambridge.org/core/product/identifier/S0007123422000059/type/journal_article> accessed 8 September 2023

González F and others, ‘Global Reactions to the Cambridge Analytica Scandal: A Cross-Language Social Media Study’ [2019] WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference 799

Grönlund K, Bächtiger A and Setälä M, Deliberative Mini-Publics: Involving Citizens in the Democratic Process (ECPR Press 2014)

Hadlington L and others, ‘The Use of Artificial Intelligence in a Military Context: Development of the Attitudes toward AI in Defense (AAID) Scale’ (2023) 14 Frontiers in Psychology 1164810 <https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1164810/full> accessed 24 August 2023

IAP2, ‘IAP2 Spectrum of Public Participation’ <https://iap2.org.au/wp-content/uploads/2020/01/2018_IAP2_Spectrum.pdf>

Ipsos, ‘Global Views on AI 2023: How People across the World Feel about Artificial Intelligence and Expect It Will Impact Their Life’ (2023) <https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report%20-%20NZ%20Release%2019.07.2023.pdf> accessed 3 October 2023

Ipsos MORI, Open Data Institute and Imperial College Health Partners, ‘NHS AI Lab Public Dialogue on Data Stewardship’ (NHS AI Lab 2022) <https://www.ipsos.com/en-uk/understanding-how-public-feel-decisions-should-be-made-about-access-their-personal-health-data-ai>

Kieslich K, Keller B and Starke C, ‘Artificial Intelligence Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of Artificial Intelligence’ (2022) 9 Big Data & Society 205395172210929 <https://journals.sagepub.com/doi/10.1177/20539517221092956?icid=int.sj-full-text.similar-articles.3#:~:text=The%20results%20suggest%20that%20accountability,systems%20is%20slightly%20less%20important.> accessed 22 August 2023

Kieslich K, Lünich M and Došenović P, ‘Ever Heard of Ethical AI? Investigating the Salience of Ethical AI Issues among the German Population’ [2023] International Journal of Human–Computer Interaction 1 <http://arxiv.org/abs/2207.14086> accessed 22 August 2023

Landemore H, Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many (2017)

Lazar S and Nelson A, ‘AI Safety on Whose Terms?’ (2023) 381 Science 138 <https://www.science.org/doi/10.1126/science.adi8982> accessed 13 October 2023

‘Majority of Britons Support Vaccine Passports but Recognise Concerns in New Ipsos UK KnowledgePanel Poll’ (Ipsos, 31 March 2021) <https://www.ipsos.com/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-uk-knowledgepanel-poll> accessed 27 September 2023

Mellier C and Wilson R, ‘Getting Real About Citizens’ Assemblies: A New Theory of Change for Citizens’ Assemblies’ (European Democracy Hub: Research, 10 October 2023)

Milltown Partners and Clifford Chance, ‘Responsible AI in Practice: Public Expectations of Approaches to Developing and Deploying AI’ (2023) <https://www.cliffordchance.com/content/dam/cliffordchance/hub/TechGroup/responsible-ai-in-practice-report-2023.pdf>

Nussberger A-M and others, ‘Public Attitudes Value Interpretability but Prioritize Accuracy in Artificial Intelligence’ (2022) 13 Nature Communications 5821 <https://www.nature.com/articles/s41467-022-33417-3> accessed 8 June 2023

OECD, ‘Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave’ (OECD 2021) <https://www.oecd-ilibrary.org/governance/innovative-citizen-participation-and-new-democratic-institutions_339306da-en> accessed 5 January 2022

OECD, ‘Institutionalising Public Deliberation’ (OECD) <https://www.oecd.org/governance/innovative-citizen-participation/icp-institutionalising%20deliberation.pdf>

Rainie L and others, ‘AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns’ (Pew Research Center 2022) <https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/>

Thinks Insights & Strategy and Centre for Data Ethics and Innovation, ‘Public Perceptions of Foundation Models’ (Centre for Data Ethics and Innovation 2023) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1184584/Thinks_CDEI_Public_perceptions_of_foundation_models.pdf>

Tyson A and Kikuchi E, ‘Growing Public Concern about the Role of Artificial Intelligence in Daily Life’ (Pew Research Center 2023) <https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/>

UK Government, ‘Iconic Bletchley Park to Host UK AI Safety Summit in Early November’ <https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november>

van der Veer SN and others, ‘Trading off Accuracy and Explainability in AI Decision-Making: Findings from 2 Citizens’ Juries’ (2021) 28 Journal of the American Medical Informatics Association 2128 <https://academic.oup.com/jamia/article/28/10/2128/6333351> accessed 3 May 2023

Woodruff A and others, ‘A Qualitative Exploration of Perceptions of Algorithmic Fairness’, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (ACM 2018) <https://dl.acm.org/doi/10.1145/3173574.3174230> accessed 22 August 2023

Woodruff A and others, ‘“A Cold, Technical Decision-Maker”: Can AI Provide Explainability, Negotiability, and Humanity?’ (arXiv, 1 December 2020) <http://arxiv.org/abs/2012.00874> accessed 22 August 2023

Wright J and others, ‘Privacy, Agency and Trust in Human-AI Ecosystems: Interim Report (Short Version)’ (The Alan Turing Institute)

Zhang B and Dafoe A, ‘Artificial Intelligence: American Attitudes and Trends’ [2019] SSRN Electronic Journal <https://www.ssrn.com/abstract=3312874> accessed 22 August 2023

[1] UK Government, ‘Iconic Bletchley Park to Host UK AI Safety Summit in Early November’ <https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november>.

[2] Seth Lazar and Alondra Nelson, ‘AI Safety on Whose Terms?’ (2023) 381 Science 138 <https://www.science.org/doi/10.1126/science.adi8982> accessed 13 October 2023.

[3] Ada Lovelace Institute, ‘Regulating AI in the UK’ (2023) <https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 1 August 2023.

[4] Ada Lovelace Institute, ‘Foundation Models in the Public Sector: Key Considerations for Deploying Public-Sector Foundation Models’ (2023) Policy briefing <https://www.adalovelaceinstitute.org/policy-briefing/foundation-models-public-sector/>.

[5] Matt Davies and Michael Birtwistle, ‘Seizing the “AI Moment”: Making a Success of the AI Safety Summit’ (7 September 2023) <https://www.adalovelaceinstitute.org/blog/ai-safety-summit/>.

[6] Central Digital & Data Office, ‘Data Ethics Framework’ (GOV.UK, 16 September 2020) <https://www.gov.uk/government/publications/data-ethics-framework> accessed 23 May 2023.

[7] Ada Lovelace Institute and Alan Turing Institute, ‘How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain’ (2023) <https://www.adalovelaceinstitute.org/report/public-attitudes-ai/> accessed 6 June 2023.

[8] James Wright and others, ‘Privacy, Agency and Trust in Human-AI Ecosystems: Interim Report (Short Version)’ (The Alan Turing Institute).

[9] Ada Lovelace Institute and Alan Turing Institute (n 7).

[10] Emily Farbrace, Jeni Warren and Rhian Murphy, ‘Understanding AI Uptake and Sentiment among People and Businesses in the UK’ (Office for National Statistics 2023).

[11] Milltown Partners and Clifford Chance, ‘Responsible AI in Practice: Public Expectations of Approaches to Developing and Deploying AI’ (2023) <https://www.cliffordchance.com/content/dam/cliffordchance/hub/TechGroup/responsible-ai-in-practice-report-2023.pdf>.

[12] Lee Rainie and others, ‘AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns’ (Pew Research Center 2022) <https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/>.

[13] Baobao Zhang and Allan Dafoe, ‘Artificial Intelligence: American Attitudes and Trends’ [2019] SSRN Electronic Journal <https://www.ssrn.com/abstract=3312874> accessed 22 August 2023.

[14] Kimon Kieslich, Marco Lünich and Pero Došenović, ‘Ever Heard of Ethical AI? Investigating the Salience of Ethical AI Issues among the German Population’ [2023] International Journal of Human–Computer Interaction 1 <http://arxiv.org/abs/2207.14086> accessed 22 August 2023.

[15] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (2021) <https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/>.

[16] Ada Lovelace Institute and Alan Turing Institute (n 7).

[17] BEIS, ‘Public Attitudes to Science’ (Department for Business, Energy and Industrial Strategy/Kantar Public 2019) <https://www.kantar.com/uk-public-attitudes-to-science>.

[18] Alec Tyson and Emma Kikuchi, ‘Growing Public Concern about the Role of Artificial Intelligence in Daily Life’ (Pew Research Center 2023) <https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/>.

[19] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (2022) <https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/>.

[20] Thinks Insights & Strategy and Centre for Data Ethics and Innovation, ‘Public Perceptions of Foundation Models’ (Centre for Data Ethics and Innovation 2023) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1184584/Thinks_CDEI_Public_perceptions_of_foundation_models.pdf>.

[21] Allison Woodruff and others, ‘“A Cold, Technical Decision-Maker”: Can AI Provide Explainability, Negotiability, and Humanity?’ (arXiv, 1 December 2020) <http://arxiv.org/abs/2012.00874> accessed 22 August 2023.

[22] Ada Lovelace Institute and Alan Turing Institute (n 7).

[23] Ipsos MORI, Open Data Institute and Imperial College Health Partners, ‘NHS AI Lab Public Dialogue on Data Stewardship’ (NHS AI Lab 2022) <https://www.ipsos.com/en-uk/understanding-how-public-feel-decisions-should-be-made-about-access-their-personal-health-data-ai>.

[24] BEIS (n 17).

[25] Woodruff and others (n 21).

[26] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[27] Ada Lovelace Institute and Alan Turing Institute (n 7).

[28] Ada Lovelace Institute, ‘Listening to the Public. Views from the Citizens’ Biometrics Council on the Information Commissioner’s Office’s Proposed Approach to Biometrics.’ (2023) <https://www.adalovelaceinstitute.org/report/listening-to-the-public/>.

[29] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[30] Ada Lovelace Institute and Alan Turing Institute (n 7).

[31] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/07/The-rule-of-trust-Ada-Lovelace-Institute-July-2022.pdf>.

[32] BritainThinks and Centre for Data Ethics and Innovation, ‘AI Governance’ (2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146010/CDEI_AI_White_Paper_Final_report.pdf>.

[33] Ada Lovelace Institute and Alan Turing Institute (n 7).

[34] Allison Woodruff and others, ‘A Qualitative Exploration of Perceptions of Algorithmic Fairness’, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (ACM 2018) <https://dl.acm.org/doi/10.1145/3173574.3174230> accessed 22 August 2023.

[35] Ada Lovelace Institute and Alan Turing Institute (n 7).

[36] Woodruff and others (n 21).

[37] Rainie and others (n 12).

[38] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[39] Woodruff and others (n 34).

[40] BritainThinks and Centre for Data Ethics and Innovation (n 32).

[41] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[42] Woodruff and others (n 21).

[43] ibid.

[44] ibid.

[45] ibid.

[46] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/09/ADALOV1.pdf>.

[47] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[48] Ada Lovelace Institute and Alan Turing Institute (n 7).

[49] Marina Budic, ‘AI and Us: Ethical Concerns, Public Knowledge and Public Attitudes on Artificial Intelligence’, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (ACM 2022) <https://dl.acm.org/doi/10.1145/3514094.3539518> accessed 22 August 2023.

[50] Kieslich, Lünich and Došenović (n 14).

[51] Rainie and others (n 12).

[52] American Psychological Association, ‘2023 Work in America Survey: Artificial Intelligence, Monitoring Technology, and Psychological Well-Being’ (https://www.apa.org, 2023) <https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring> accessed 26 September 2023.

[53] ibid.

[54] BEIS (n 17).

[55] Tyson and Kikuchi (n 18).

[56] Ada Lovelace Institute and Alan Turing Institute (n 7).

[57] Tyson and Kikuchi (n 18).

[58] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[59] ‘Majority of Britons Support Vaccine Passports but Recognise Concerns in New Ipsos UK KnowledgePanel Poll’ (Ipsos, 31 March 2021) <https://www.ipsos.com/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-uk-knowledgepanel-poll> accessed 27 September 2023.

[60] American Psychological Association (n 52).

[61] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[62] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[63] ibid.

[64] ibid.

[65] ‘Explainer: What Is a Foundation Model?’ <https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 26 October 2023.

[66] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[67] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[68] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[69] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[70] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[71] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[72] ibid.

[73] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[74] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[75] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[76] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[77] Doteveryone, ‘People, Power and Technology: The 2020 Digital Attitudes Report’ (2020) <https://doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf> accessed 21 September 2023.

[78] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[79] Ada Lovelace Institute, ‘Lessons from the App Store’ <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/06/Ada-Lovelace-Institute-Lessons-from-the-App-Store-June-2023.pdf> accessed 27 September 2023.

[80] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[81] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[82] Ada Lovelace Institute and Alan Turing Institute (n 7).

[83] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[84] Ada Lovelace Institute and Alan Turing Institute (n 7).

[85] Centre for Data Ethics and Innovation, ‘Public Attitudes to Data and AI: Tracker Survey (Wave 2)’ (2022) <https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-2>.

[86] Zhang and Dafoe (n 13).

[87] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[88] Ada Lovelace Institute and Alan Turing Institute (n 7).

[89] Centre for Data Ethics and Innovation (n 85) 2.

[90] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[91] Wright and others (n 8).

[92] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (n 46).

[93] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[94] ibid.

[95] Milltown Partners and Clifford Chance (n 11).

[96] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[97] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (Ada Lovelace Institute 2021) <https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/>.

[98] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[99] ‘CDEI | AI Governance’ (BritainThinks 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1177293/Britainthinks_Report_-_CDEI_AI_Governance.pdf> accessed 22 August 2023.

[100] Kimon Kieslich, Birte Keller and Christopher Starke, ‘Artificial Intelligence Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of Artificial Intelligence’ (2022) 9 Big Data & Society 205395172210929 <https://journals.sagepub.com/doi/10.1177/20539517221092956?icid=int.sj-full-text.similar-articles.3#:~:text=The%20results%20suggest%20that%20accountability,systems%20is%20slightly%20less%20important.> accessed 22 August 2023.

[101] Children’s Parliament, Scottish AI Alliance and The Alan Turing Institute, ‘Exploring Children’s Rights and AI. Stage 1 (Summary Report)’ (2023) <https://www.turing.ac.uk/sites/default/files/2023-05/exploring_childrens_rights_and_ai.pdf>.

[102] ‘CDEI | AI Governance’ (n 99).

[103] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (n 97).

[104] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[105] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[106] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[107] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[108] Wright and others (n 8).

[109] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (n 46).

[110] Ada Lovelace Institute and Alan Turing Institute (n 7).

[111] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[112] Sabine N van der Veer and others, ‘Trading off Accuracy and Explainability in AI Decision-Making: Findings from 2 Citizens’ Juries’ (2021) 28 Journal of the American Medical Informatics Association 2128 <https://academic.oup.com/jamia/article/28/10/2128/6333351> accessed 3 May 2023.

[113] Woodruff and others (n 21).

[114] Anne-Marie Nussberger and others, ‘Public Attitudes Value Interpretability but Prioritize Accuracy in Artificial Intelligence’ (2022) 13 Nature Communications 5821 <https://www.nature.com/articles/s41467-022-33417-3> accessed 8 June 2023.

[115] Woodruff and others (n 34).

[116] Ada Lovelace Institute and Alan Turing Institute (n 7).

[117] Woodruff and others (n 21).

[118] ibid.

[119] Milltown Partners and Clifford Chance (n 11).

[120] Woodruff and others (n 21).

[121] Ada Lovelace Institute and Alan Turing Institute (n 7).

[122] ibid.

[123] Woodruff and others (n 21).

[124] BritainThinks and Centre for Data Ethics and Innovation (n 32).

[125] Ada Lovelace Institute and Alan Turing Institute (n 7).

[126] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[127] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[128] Ada Lovelace Institute and Alan Turing Institute (n 7).

[129] Milltown Partners and Clifford Chance (n 11).

[130] Ada Lovelace Institute, ‘Listening to the Public. Views from the Citizens’ Biometrics Council on the Information Commissioner’s Office’s Proposed Approach to Biometrics.’ (n 28).

[131] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[132] ibid.

[133] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[134] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[135] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (n 46).

[136] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[137] ibid.

[138] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[139] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (n 97).

[140] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[141] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[142] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[143] Children’s Parliament, Scottish AI Alliance and The Alan Turing Institute (n 101).

[144] ibid.

[145] Ada Lovelace Institute and Alan Turing Institute (n 7).

[146] BEIS, ‘BEIS Public Attitudes Tracker: Artificial Intelligence Summer 2022, UK’ (Department for Business, Energy & Industrial Strategy 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1105175/BEIS_PAT_Summer_2022_Artificial_Intelligence.pdf>.

[147] Tyson and Kikuchi (n 18).

[148] Ada Lovelace Institute and Alan Turing Institute (n 7).

[149] American Psychological Association (n 52).

[150] Ipsos, ‘Global Views on AI 2023: How People across the World Feel about Artificial Intelligence and Expect It Will Impact Their Life’ (2023) <https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report%20-%20NZ%20Release%2019.07.2023.pdf> accessed 3 October 2023.

[151] Tyson and Kikuchi (n 18).

[152] Ada Lovelace Institute and Alan Turing Institute (n 7).

[153] Rainie and others (n 12).

[154] Ada Lovelace Institute and Alan Turing Institute (n 7).

[155] Wright and others (n 8).

[156] Woodruff and others (n 21).

[157] IAP2, ‘IAP2 Spectrum of Public Participation’ <https://iap2.org.au/wp-content/uploads/2020/01/2018_IAP2_Spectrum.pdf>.

[158] Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (2021) <https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/>.

[159] Lee Hadlington and others, ‘The Use of Artificial Intelligence in a Military Context: Development of the Attitudes toward AI in Defense (AAID) Scale’ (2023) 14 Frontiers in Psychology 1164810 <https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1164810/full> accessed 24 August 2023.

[160] Katie Cohen and Robert Doubleday (eds), Future Directions for Citizen Science and Public Policy (Centre for Science and Policy 2021).

[161] ibid.

[162] Claire Mellier and Rich Wilson, ‘Getting Real About Citizens’ Assemblies: A New Theory of Change for Citizens’ Assemblies’ (European Democracy Hub: Research, 10 October 2023).

[163] Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (2022) <https://www.adalovelaceinstitute.org/project/rethinking-data/>.

[164] Global Science Partnership, ‘The Inclusive Policymaking Toolkit for Climate Action’ (2023) <https://www.globalsciencepartnership.com/_files/ugd/b63d52_8b6b397c52b14b46a46c1f70e04839e1.pdf> accessed 3 October 2023.

[165] Michele Gilman, ‘Democratizing AI: Principles for Meaningful Public Participation’ (Data & Society 2023) <https://datasociety.net/wp-content/uploads/2023/09/DS_Democratizing-AI-Public-Participation-Brief_9.2023.pdf> accessed 5 October 2023.

[166] Hélène Landemore, Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many (2017).

[167] Nicole Curato, Deliberative Mini-Publics: Core Design Features (Bristol University Press 2021).

[168] OECD, ‘Institutionalising Public Deliberation’ (OECD) <https://www.oecd.org/governance/innovative-citizen-participation/icp-institutionalising%20deliberation.pdf>.

[169] Nicole Curato and others, ‘Twelve Key Findings in Deliberative Democracy Research’ (2017) 146 Daedalus 28 <https://direct.mit.edu/daed/article/146/3/28-38/27148> accessed 6 August 2021.

[170] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[171] Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (n 163).

[172] ibid.

[173] Global Science Partnership (n 164).

[174] Grönlund, Kimmo, Bächtiger, André, and Setälä, Maija, Deliberative Mini-Publics. Involving Citizens in the Democratic Process (ECPR Press 2014).

[175] Curato and others (n 169).

[176] OECD, ‘Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave’ (OECD 2021) <https://www.oecd-ilibrary.org/governance/innovative-citizen-participation-and-new-democratic-institutions_339306da-en> accessed 5 January 2022.

[177] Saskia Goldberg and André Bächtiger, ‘Catching the “Deliberative Wave”? How (Disaffected) Citizens Assess Deliberative Citizen Forums’ (2023) 53 British Journal of Political Science 239 <https://www.cambridge.org/core/product/identifier/S0007123422000059/type/journal_article> accessed 8 September 2023.

[178] OECD (n 168).

[179] David M Farrell and others, ‘When Mini-Publics and Maxi-Publics Coincide: Ireland’s National Debate on Abortion’ [2020] Representation 1 <https://www.tandfonline.com/doi/full/10.1080/00344893.2020.1804441> accessed 19 July 2021.

[180] Global Assembly Team, ‘Report of the 2021 Global Assembly on the Climate and Ecological Crisis’ (2022) <http://globalassembly.org>.

[181] Nicole Curato and others, ‘Global Assembly on the Climate and Ecological Crisis: Evaluation Report’ (2023).

[182] Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (n 163).

[183] Mellier and Wilson (n 162).

[184] Felipe González and others, ‘Global Reactions to the Cambridge Analytica Scandal: A Cross-Language Social Media Study’ [2019] WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference 799.

[185] Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (n 158).


Image credit: Kira Allman
