
Public deliberation could help address AI’s legitimacy problem in 2019

The importance of public legitimacy was illustrated by a series of public, highly controversial events that took place in 2018.

Reema Patel

8 February 2019

Reading time: 5 minutes

The importance of public legitimacy – which refers to the broad base of public support that allows companies, designers, public servants and others to design and develop AI to deliver beneficial outcomes – was illustrated by a series of public, highly controversial events that took place in 2018.

Beyond the Cambridge Analytica/Facebook scandal, these events included: the first pedestrian death caused by a self-driving Uber car; a fatal crash involving Tesla's Autopilot; and the erroneous deportation of UK residents.

Such scandals may have imposed a direct reputational cost on Facebook, Tesla and Uber, but there has also been a wider legitimacy cost to those seeking to use AI and data for social good.

Here are twelve approaches technologists, policymakers and opinion-formers might wish to take to ensure 2019 is the year of greater public legitimacy for AI:

  1. Develop ‘deep’ public deliberation methods that can keep pace with, and ensure dialogue with, tech developers and designers. Public deliberation enables experts and technologists to collaborate with citizens on difficult problems that neither can solve without the other’s input. Novel approaches to public engagement are required so we can respond to the pace of technological change. Experimental teams of publics, public engagement experts, service designers and developers could be brought together to innovate new methods of public engagement.
  2. Scale up public deliberation to inform a national action plan for regulators and government. Citizens’ juries and citizens’ reference panels (such as those delivered by the RSA and DeepMind, and the forthcoming juries to be delivered by the Information Commissioner’s Office) can help maximise impact by scaling up this kind of ‘deep’ engagement to co-create a national action plan with government and regulators.
  3. Embed public involvement more deeply in how we think through the design, implementation and governance of tech. We need an ongoing and sustained infrastructure that incentivises technologists to respond to shifting public values. Good public engagement is a process that takes place at the inception, design, implementation and governance stages of new tech; we need to thread public involvement throughout.
  4. Develop a better analysis of the power dynamics and market forces that are shaping data and AI. Few people would have predicted that Facebook’s business model would draw upon advertising as its primary source of revenue, or that many others (e.g. Google) would do the same. To an extent, many of us have convinced ourselves that we are getting these services for free – but effective public engagement requires a better understanding of how data and AI are likely to affect power dynamics and market forces.
  5. Recognise that creating and shaping AI legitimacy will require some compromise and a willingness to slow down – a test for organisational cultures that have adopted a ‘move fast and break things’ rhetoric. How can we encourage organisational behaviour that brings others along and responds to public values, even at the cost of moving slightly more slowly – with a longer-term pay-off for both tech and society?
  6. Develop more trustworthy governance. There is a need for society itself to consider what it can do to ensure that developers engage with and listen to the needs of the wider public. Tech companies can help by designing good governance structures that shape organisational incentives and behaviour. It is crucial that the governance itself is trustworthy: good governance should align with public values, and society itself should be engaged in shaping governance structures.
  7. Support diverse publics to articulate their visions of the future. Entrepreneurs and innovators are expert storytellers who often have the ear of policymakers and decision makers. We must also ensure that futures thinking is something that all people (such as carers and taxi drivers) can engage in.
  8. Reach out to publics and civil society organisations. Whilst organised and invited spaces can be a very good way of getting answers to very specific questions, they are also costly. We could orientate ourselves differently by going to where these conversations are already taking place. It would require a total culture change, but by listening to these dialogues we would gain a better perspective on the issues.
  9. Engage with many publics – who do not always speak with a united, homogeneous voice. In a liberal individualist society, we are at risk of assuming that there is equality of voice between diverse publics, when in reality the power gradient is steeper than we might wish to acknowledge. Given this lack of homogeneity and the power imbalance, what matters is discovering the rich stories that bind us together. These seem both more important, and more realistic, than finding propositions to which we can all assent, which are often lowest common denominators.
  10. Understand AI ethics and data ethics by analogy. Are there analogous challenges or issues that we have confronted in the past, which arise again in a new context? What is data and AI most analogous to, and what can we learn from the past?
  11. Enable more effective civic and stakeholder action. Numerous examples of this have emerged recently – for instance, Google employees placing pressure on their employer not to bid for the Pentagon contract – with a view to shaping the ‘right’ choices. Trustworthiness of technology alone may not be sufficient for AI legitimacy, because it relies on those who hold power making the right choices in isolation.
  12. Build AI and data literacy to better communicate the relevance of AI and data to civil society and to publics. How can we best think through broadcasting, communicating and soliciting information in a mediated way, so as to increase public awareness of these issues beyond a group of self-selecting individuals?

Core to realising public legitimacy for AI in 2019, then, is the need to identify our vision for a society that involves and consults citizens. This requires us to understand a rapidly changing and dynamic context, built on deep and historic traditions. The pay-off would be a public deliberation infrastructure for data and AI that can help shape how we govern AI, and the incentives and behaviours around AI in both industry and practice.

We would like to thank the attendees of the Ada Lovelace Institute’s roundtable, ‘Finding Legitimacy’, for their valuable contributions in shaping the content of this article.