We face messy, subjective challenges: who gets to define what fair algorithms are? Can the uses of data be redefined for social good? How can we balance the beneficial potential of facial recognition with important societal values like privacy and agency?
When facing these questions – questions about ethics and values – it’s near impossible to settle on objective truths. We draw on philosophy, law and history, but may still struggle to build consensus, especially when ethics and values are intertwined with societal expectations and norms. But when there are no right answers, we must move towards better answers by drawing on a diversity of expertise and experience.
In practice, drawing on diversity doesn’t just build a more complete picture of an issue: it creates a more accurate one too. Studies demonstrate that diverse groups are better than homogenous expert groups at finding the solution to a specific challenge. And cases like Apple’s failure to include menstrual tracking in its Health app reveal that if technologists are going to meet the needs and wants of a broad range of users, they need a diverse range of people in their user research and their developer teams.
However, convening diverse voices isn’t just about trying to find better answers or building a company’s user base. Data and AI will affect people across society, and often will affect marginalised or underrepresented groups more seriously. Our research on attitudes towards facial recognition found that Black and Minority Ethnic respondents were more concerned about being able to consent to, or opt out of, the use of facial recognition. This is plausibly linked to concerns about bias in facial recognition systems, highlighted by studies which show that facial recognition algorithms are often less accurate for people of colour.
This is why we believe that convening diverse voices must include everyone: we need to create space in the debate for people from across the entire spectrum of social, cultural, economic, political and identity backgrounds.
Capturing complex issues and nuanced perspectives
If our goal is to include as many voices as possible, one approach is to run surveys and polls, to try to reach a broad and representative sample of people. Surveys can be a powerful tool for measuring mass trends in attitudes, establishing broad baselines in opinion, or understanding what proportion of the public agree with particular statements – they help us understand the pulse of a population’s attitudes.
For instance, in our research on attitudes to facial recognition technology (FRT), we learned that 55% of people think the government should limit police uses of FRT to specific circumstances, and that there’s more support for its use when there’s a clear public benefit, like making passport control a more efficient process.
But we know that the issues raised by technologies like facial recognition are deeply complex. The 55% of people who think there should be limits on police uses of FRT may still recognise its benefit in tackling crime. The other 45% might not think there should be limits, but may still hold reservations or questions about the technology’s use.
In other words, the ethical and societal challenges of data and AI cannot be represented as binary choices. But relying solely on quantitative methods, like surveys and polls, risks boiling these complex issues down into statements which can be answered with ‘yes’ or ‘no’, ‘strongly disagree’ or ‘not sure’. Instead of presenting issues as either-or choices (implement new technology despite concerns, or ban it altogether despite potential benefits) we need methods that capture the far-more-nuanced reality.
As well as reducing complexity in the issues, these approaches reduce complexity in people’s opinions. This makes it difficult to understand the frictions, contradictions and balances within an individual’s perspective: a person can see the benefits of a technology but be concerned about it at the same time. To fully explore issues and perspectives, we need to create spaces for people to deliberate on topics in depth.
People as co-creators, not data sources
Giving people a chance to say what they think through focus groups or engagement workshops captures the nuance in people’s opinions and creates complex data to analyse and describe in reports and recommendations. This qualitative research is useful for understanding and describing complex perspectives, and is justifiably relied on across social research, whether academic, market or user research. But this approach doesn’t necessarily convene people.
Imagine the problem we’re trying to solve is like a puzzle, except we’ve lost the picture on the front of the box, and the pieces are spread out across people in society. To solve the puzzle, we have to invite everyone to bring their pieces to the table. Then, when we’re putting the puzzle together, we shouldn’t just take everyone’s piece and send them away while we try to solve it by ourselves. That would be an extractive process: it would treat people as subjects to analyse, exploring complex topics but reducing people to pieces of information. It would also exclude the collective reasoning power that can help solve the problem. Instead, we should work together to fit the pieces into place.
A well-known spectrum of public participation describes the many forms of engaging and involving people: from informing them, through consulting their opinions, to empowering them to make decisions. Convening diverse voices should take many forms from across this spectrum, but to foster genuine public debate it must shift agency from those who have the resources to ask the question to those whom the answer affects.
Convening diverse voices at Ada
This ethos is behind work we’ve already been doing, like citizens’ juries with Wellcome’s Understanding Patient Data team and the Office for Life Sciences. It will also underpin future projects, like a citizens’ assembly on biometrics technology we’ll run in 2020. Building from our survey on attitudes to facial recognition, this assembly will create space for public deliberation on biometrics technologies. The goal is not just to explore issues in more depth, but to empower people to say what they want policy and practice around new AI and data technologies to look like.
Whenever we’re tackling messy, subjective challenges – ones which draw on people’s values and perspectives – surveys, focus groups, interviews and other powerful social research methods, done robustly, help us to measure widespread trends or understand people’s attitudes.
But convening diverse voices will deepen our ability to address these challenges. It will allow us to genuinely articulate what issues matter to people; align our research with these priorities; and effectively represent the interests of people and society to policymakers and technologists.
Even more than this: convening diverse voices supports our society’s democratic principles. If you’re building something that you want to work for everyone, then you need to include everyone in its design. Everyone should have a say, everyone should have their voice heard. Everyone should get to bring their piece of the puzzle into the picture.