Exploring the role of public participation in commercial AI labs
What public participation approaches are being used by the technology sector?
As AI technologies become more prevalent and influential in people’s everyday lives, researchers, policymakers and practitioners are increasingly asking if – and how – the public should be able to participate in the development and oversight of these technologies.
In this project, we want to understand how public participation is, and can be, applied in commercial AI development. Through this research, we aim to explore what public participation is being conducted or explored by companies, what barriers they face in conducting it meaningfully, what best practice looks like, and what impact public participation projects have on the company and on participants.
Public participation brings a wide range of perspectives to complex, value-laden political and societal issues. Participatory approaches – from small-scale public deliberation to large-scale polling – are frequently used in domains like healthcare and within democratic institutions like local government to broaden opportunities for participation and benefit from ‘the wisdom of the crowd’.
Proponents of public participation argue that there should be increased participation in conversations around what AI technologies should look like, how they should be governed, and the kinds of societies we might want to build with them.
However, while there is growing interest in ‘participatory AI’, there is limited available evidence of public engagement and participation in commercial technology companies, where AI systems are formulated, designed and developed. The question of ‘what participatory approaches (if any) are being used by the technology sector?’ is underexplored.
Ada’s previous research has identified that:
- members of the public hold important perspectives about the design, implementation and regulation of emerging data-driven technologies, such as biometric technologies, and want opportunities to ensure products and research reflect their values
- some research, policy and communication teams within technology companies are interested in exploring how public participation could help them consider the possible societal implications of their work, but feel hindered by hurdles that prevent meaningful public participation in practice.
For the public to be meaningfully involved in AI development and oversight, across all stages of the product lifecycle, we first need to build the evidence base for how public participation in the private technology sector is structured in different companies, and, importantly, how it’s viewed.
As part of this research, we will be conducting interviews with both public participation experts and teams at tech companies who may be well placed to consider the use of public participation in their organisation. To gain rich insights into attitudes and approaches to public participation within technology design and development, and the empirical claims for public participation and deliberation, we are also completing a literature review and will publish an annotated bibliography for use as a community resource.
If you are interested in the ongoing development of this work, please get in touch with Lara Groves (email@example.com), lead researcher for this project.