New uses of artificial intelligence (AI) technologies are rapidly making their way into our everyday lives – from deciding whether we are approved for a loan or hired for a job, to making healthcare diagnoses. It’s becoming increasingly clear that AI is not just about technology; it’s also about people. Yet people are often absent from decisions about how these technologies are used, and conversations around their potential benefits and harms.
Including the perspectives, experiences and visions of those affected by AI technologies is vital to ensure that the uses and regulation of these technologies are aligned with societal values, rights and needs. It is also an important part of ensuring that AI technologies are used in ways that are just, legitimate, trustworthy and accountable, and it will help create a more equitable society, as it is frequently marginalised people who are most impacted by data and AI.
As national and regional governments and supranational organisations (like the EU) try to establish adequate regulatory frameworks for the use of AI and data technologies, it is important to understand how those in power across policymaking and industry can engage the public effectively and meaningfully.
This post is the first in a series of blog posts that explores existing evidence on and experiences of public participation in policymaking in diverse contexts across different geographical regions. We hope this wide range of perspectives, voices and lessons learned can inform how to engage the public in present policy efforts to regulate AI.
To begin the series, we will set the scene by highlighting the current discourse around public participation in AI. We will look at the current policy landscape through the lens of civil society actors and academics who have recently been advocating for meaningful public participation in debates surrounding the use and governance of AI. And we will touch on a few well-known examples of public engagement on similarly complex topics.
Advocating for public engagement at the UK AI Summit
Just before the end of 2023, the UK’s AI Safety Summit showed how the private sector dominates mainstream discourses on AI. This was reflected in the meeting’s agenda, which was framed around ‘frontier’ AI models (a contested term which broadly encompasses systems with newer, more powerful and potentially dangerous capabilities) rather than the current benefits, risks and harms caused by uses of AI technologies that are already affecting the general public. It was telling that very few civil society organisations were invited to the Summit, despite the significant impact of these technologies on people and society.
As a counterpoint, speakers at the Summit as well as at the parallel fringe events stressed the need for the inclusion of diverse voices from the public. Marietje Schaake, member of the AI Advisory Board for the United Nations and former Member of the European Parliament, referenced the suggestion, already made by civil society representatives, to improve openness and participation by involving a random selection of citizens in any advisory body on AI.
In a piece published around the time of the Summit, Professors Hélène Landemore, Nigel Shadbolt and John Tasioulas argued that the event highlighted a fundamental question: how will genuine public deliberation and accountability be brought into AI-related decision-making processes, to ensure AI developments serve the common good?
Even more to the point, Professor Noortje Marres pointed out that there was no mention of mechanisms for involving citizens and affected groups in the governance of AI in the official Summit communiqué, stating that at present AI is ‘profoundly undemocratic’.
Similar calls to involve the public – especially those most affected by the use of AI technologies and those underrepresented in research and debates – were made in an open letter, signed by international organisations and co-ordinated by Connected by Data, the Trades Union Congress and Open Rights Group. A post-Summit communiqué by civil society and experts who attended the event also pointed out:
‘[It is critical that] AI policy conversations bring a wider range of voices and perspectives into the room, particularly from regions outside of the Global North’.
In the context of the Summit, perhaps the most compelling call for people to be more involved in shaping AI policy came from members of the public themselves. The People’s Panel on AI, a randomly selected, jury-style group of people broadly reflective of the diversity of the population in England, reviewed the AI Summit fringe events and wrote a set of recommendations for policymakers and the private sector. One of these focuses in particular on the need for ‘a system of governance for AI in the UK that places citizens at the heart of decision-making’.
What the research says on public involvement in AI decision-making
This Summit-related people’s panel is not alone in demanding more participation, and calls for greater public participation are growing.
The need to involve people in decisions on AI regulation, design and deployment has been identified in existing research on public attitudes towards data and AI. Just before the Summit, the Ada Lovelace Institute published a rapid evidence review, ‘What do the public think about AI?’, to take stock of existing public participation research.
A key finding of the review was that the public want to have a meaningful say on decisions that affect their everyday lives, not just to be consulted. More specifically, some of the research projects covered by the review show that people expect their views, in the full spectrum of their diversity, to be taken as seriously as those of other stakeholders, including in legislative and oversight processes. The evidence review also found that there are important gaps in the research conducted together with underrepresented groups, those impacted by specific AI uses, and in research from countries outside of Europe and North America.
As argued in the review, in-depth involvement of the public is ‘particularly important when what is at stake are not narrow technical questions but complex policy areas that permeate all aspects of people’s lives, as is the case with the many different uses of AI in society’.
Complex or contested topics – particularly those that can threaten civil and human rights – require in-depth public participation. Citing specific cases, Professor Gilman (University of Baltimore), among other scholars, has maintained that AI uses related to accessing government services, or those requiring the use of health and biometric data, demand serious and long-lasting engagement with the public. A similar point can be generalised to all uses of emerging AI technologies that raise new ethical challenges.
Looking forward: how can we meaningfully involve the public?
There is a significant body of evidence that shows different ways to meaningfully involve the public in policymaking.
Evidence from deliberative democracy, social movements and civil society, and public engagement in policy can offer some inspiration. Clearly, public participation requires commitment on the part of decision makers to embed participatory processes in ways that are truly consequential. At the same time, there should also be scope for bottom-up approaches from civil society to claim spaces for engagement and participation when people’s views are not taken into consideration by those developing, deploying and regulating novel technologies.
In the rapid evidence review, we draw on examples in which these processes are embedded in governance structures, such as those included in the Organisation for Economic Co-operation and Development’s (OECD’s) ‘Institutionalising Public Deliberation’ framework: Belgium’s Ostbelgien model, Paris’s model for a permanent citizens’ assembly and Bogotá’s itinerant assembly. These are different models for embedding continuous participation in the decision-making process. For example, the Ostbelgien model in the German-speaking community of Belgium includes a permanent citizens’ council that changes every 18 months and initiates, coordinates and monitors one to three citizens’ panels every year.
AI both affects and transcends countries and regions across its supply chain, making it even more important that we consider examples of public engagement from different places and contexts. Others have pointed out that AI is part of global interlocking systems of oppression and that its harms can therefore only be addressed ‘from the bottom up, from the perspectives of those who stand the most risk of being harmed’. However, as our evidence review identified, there is a gap in the evidence that is available or accessible from different parts of the world.
The next instalment of this blog series will share research on and experiences of public participation (bottom-up as well as top-down) in data and AI policymaking – from different countries and at different scales, from local to national to regional.
We hope these case studies will contribute ideas and lessons to support both public officials and civil society organisations in the task of engaging people in AI policymaking so their views truly count.