The Ada Lovelace Institute has today published a new rapid review of evidence to help policymakers – in the context of the UK AI Safety Summit and afterwards – build a robust understanding of public attitudes about AI and how to involve the public in AI policymaking.
Taking into account people’s diverse perspectives and experiences in relation to AI – alongside expertise from policymakers and AI developers and deployers – is vital for ensuring AI is aligned with societal values and needs in ways that are legitimate, trustworthy and accountable.
The Institute’s new report brings together a review of evidence on the question: ‘What do the public think about AI?’ In addition, it provides knowledge and methods to support policymakers to meaningfully involve the public in current and future decision-making around AI.
The review demonstrates that people have nuanced views about AI, which change in relation to perceived risks, benefits, harms, contexts and uses. It also demonstrates that there are some clear and consistent public views on AI. These include:
- People have positive attitudes to some uses of AI (for example, in health and science development).
- There are concerns about the use of AI in decision-making with substantial consequences for people's everyday lives (for example, job recruitment and access to financial support).
- There is strong support for the protection of fundamental rights (for example, privacy).
- There is a belief that regulation is needed.
Public views point to a need to harness the benefits and address the challenges of AI technologies, and to a desire for diverse groups in society to be involved in how decisions are made.
The review also draws on existing evidence and experiences of meaningful public participation in policymaking, such as lessons from existing deliberative democratic practices, and provides evidence-based solutions for meaningfully involving the public in decisions on AI.
Octavia Reeve, Associate Director at the Ada Lovelace Institute, said:
‘Understanding public attitudes towards AI, and how to involve people in AI decision-making, is becoming ever-more urgent in the UK and internationally.
‘Governing AI requires the meaningful involvement of people and communities, particularly those most affected by technologies. We hope that our rapid review will support and guide policymakers at this significant time for AI governance.’
Anna Colom, Public Participation and Research Lead at the Ada Lovelace Institute, said:
‘Decisions about AI cannot be made legitimately without the views and experiences of those most impacted. However, public voices are still frequently overlooked or absent in decision-making.
‘Our rapid review provides a timely synthesis of evidence. However, there is still a need for more extensive and deeper research and public participation addressing the many uses and impacts of AI across different publics, societies and jurisdictions.’