
Making visible the invisible: what public engagement uncovers about privilege and power in data systems

Lived-experience insights from the Citizens’ Biometrics Council and Community Voice workshops show how technology can mediate power asymmetries and privilege.

Reema Patel, Aidan Peppin

5 June 2020

Reading time: 9 minutes

[Image: Post-it note with ‘Bias’ and ‘Racial Profiling’ written on it]

Recent events in the United States have a striking parallel with a societally defining moment in the UK – following an extensive inquiry, campaigning by Stephen Lawrence’s parents and widespread public and media coverage, the Macpherson report concluded that the Metropolitan Police was institutionally racist. The term originates in the work of Carmichael and Hamilton (1967), who argue that individual racism is often visible because of its overt nature, while institutional racism is less evident because it originates in historical practices, is perpetuated through longstanding practices in society, and as a consequence is subject to less challenge, scrutiny and public condemnation.

In our work to ensure data and AI work for people and society, addressing institutional racism, structural inequality and injustice is crucial. In our Rethinking Data prospectus, we make the claim that data is not, and never has been, neutral: how data is gathered, interpreted and used reflects accepted social norms.

The choices that societies make about the production and use of data often reflect an unequal distribution of power in our systems. Asymmetries of power can take different forms and can intersect – examples we are all familiar with include inequalities of race, class, gender, disability, health status and more. Systemic injustice can (through instances such as institutional racism) contribute to asymmetries of power.  

To take two examples: transgender and gender non-binary individuals have been disproportionately impacted by the emphasis on using biometric data to categorise people or make assumptions along a binary divide, often resulting in people being misgendered. And in our Beyond Face Value report, published last year, we found that 56% of people from black and minority ethnic backgrounds agreed that the public should have the opportunity to consent and/or opt out of the use of facial recognition technologies, versus 46% of all respondents.

To shine more light on perspectives towards biometrics technologies, we convened the Citizens’ Biometrics Council earlier this year. The Council is a long-form public deliberation, convening citizens from all walks of life to shape and inform perspectives about the governance of biometrics technologies in the UK. 

Significant concerns were raised about biometrics in the research literature and our survey, and to address these we wanted to ensure meaningful representation and inclusion in this process. We convened Community Voice workshops to explore in depth the disproportionate impact of biometrics technologies with individuals from black and minority ethnic backgrounds, as well as with disabled people and with LGBTQI individuals. Our aim was to better learn from and amplify the perspectives of those who have traditionally been marginalised from the debate about biometrics and facial recognition technologies.

The conversations were rich and varied – and many people spoke to the intersectional nature of their experience. Here are some key points that emerged.

1. Negative past experiences of discrimination shape perspectives towards technology 

“It might be families too who have young black sons, I have a son and we’ve had things with the police, not to do with biometrics, but when they introduce something new you reject it.”   

“Yeah, because of historical experience.” – citizens from Bristol Community Voice workshop

“The whole thing I’m scared of is the police, for me the terror comes from the police. The police misidentifying certain people. I don’t trust the police that much.” – citizen from Bristol Community Voice workshop

One of the most prominent themes that emerged was how negative past experiences of interactions with law enforcement, state, justice and other public and private institutions condition attitudes towards the use of technologies. When systemic injustice exists, the use of technologies and data systems perpetuates and amplifies these injustices: it does not ameliorate them. This is a well-studied phenomenon, and people’s experiences reflect the individual, human consequences. People shared personal stories which varied from incorrect data or algorithmic faults leading to errors in decisions, through to active discrimination at borders or in experiences with the police. When discussing the emergence of biometric technologies like facial recognition, which are often less accurate for people of colour, the concern was that these discriminatory experiences will only increase as a consequence. The sense that this was inevitable reflects the lack of trust in institutions to deploy technology responsibly.

2. Social prejudice, stigma and exclusion can be entrenched through the design, development and use of technology 

“There’s a big issue for trans people too, being scanned at an airport. My body will tell you I’m trans whether I want to or not.” – citizen from Brighton Community Voice workshop

“What’s to stop biometric testing being bolted on to your medical status? Certain countries bar you if you’re HIV positive.” – citizen from Brighton Community Voice workshop

In all our conversations, we heard how it is a privilege to experience technology without concern for how it will cause prejudice and stigma. For those whose identities are subject to discrimination – overtly or covertly, institutionalised or socially – technologies which categorise and make judgements across gender, racial or other identity-related factors will perpetuate those prejudices. We heard stories from people who had suffered challenges with law enforcement, discrimination in job recruitment or risk of physical harm at international borders because of the way technologies have categorised them. These concerns are pertinent to current debates around public health identity systems emerging in response to COVID-19. Will an individual’s health status become another reductive category which creates prejudice and discrimination?

3. Technology, when designed, developed and deployed poorly, can limit access and actively exclude

“Apps like Google home and Siri don’t always work if you have a speech impairment etc. This is another challenge – are we going to be maintaining appropriate and accessible services for people? Are there going to be people who cannot access all of these things?” – citizen from the Manchester Community Voice workshop

Many reflected on the problem of a ‘one size fits all’ mindset. When a single technology is deployed in a widespread manner, there’s risk of exclusion. Those who cannot engage with a biometric technology – perhaps because of a disability or lack of access to digital technology – may be excluded from the benefits or prevented from participating. This is especially concerning as those who are excluded in this way are often those who are already at greater risk from other harms, such as poorer health outcomes.

4. The use of technologies can undermine expressions of identity, crowding out difference 

“If there’s a CCTV camera, you’re less likely to act outside of what’s acceptable, because you’re under observation. So you modify your own behaviour, you stop being as wild, or as wonderful, or as kinky, or as strange, or as bizarre, as beautiful as you could possibly be […] And no-one has asked us if we want to live in that society.” – citizen from Brighton Community Voice workshop

“I had to fight very hard to get my passport changed from male to female 30 years ago, and I don’t want something on there to say this person was once a man, I just don’t want it. I want my recognition.” – citizen from Brighton Community Voice workshop

When technologies reduce the complexity of people’s social, cultural or gender identity into discrete categories, people feel that they ignore the individual and ignore the rich spectrum of identities that make us who we are: whether it is the colour of one’s skin, or a gender identity which does not conform to historic, stereotyped and false binaries. Once labelled by a biometric or identity system, the concern is that no other aspects of one’s identity will be considered equally. Technological identity systems which judge people by pre-defined ‘boxes’ disempower individuals to express and feel their identity uniquely. 

5. Engagement, participation and representation matters

“People don’t ask us if we want to use these technologies, tasers are just introduced by the Home Office. I’m not asked, it’s imposed on me by the country and the state. So, there’s this question of how do you build trust with people who don’t ask your consent in the first place?” – citizen from Bristol Community Voice workshop

“The inbuilt bias of the programmers, the coders, the policy – my partner’s a coder, his office is 95% white male – those biases are going into the programmes that they are writing, that we’re using every day […] It’s those systematic biases – not individual biases – that work their way into programmes.” – citizen from Brighton Community Voice workshop

There’s strong awareness that those involved in developing and deploying these systems do not adequately represent the diversity of the populations they affect. Participants showed acute knowledge both of the lack of diversity in the technology sector and of the power asymmetries reflected in who has the privilege to make decisions and influence institutional practice and policy. This lack of diverse representation permeates both those who develop technology and the pre-existing social biases embedded in many foundational datasets. Deploying these biased systems exacerbates existing social injustices and shapes who is able to set the narratives around the technology, creating a feedback loop that determines who enters the industry and which biases become embedded. This point is made in a joint report by the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence (2019).

Not only should there be more diversity, but those we spoke with felt strongly about the importance of public participation: people want to, and have the right to, engage with and shape decisions about technologies that have huge social impact. There is a wide range of ways through which this can happen – through deliberation, such as citizens’ juries, and through involving Black, Asian and minority ethnic, LGBTQI and disabled people in the design, development and deployment stages of technologies.

Making visible the invisible: acknowledging privilege and power

It is one thing to recognise asymmetries of power in the use, design and deployment of data-driven technologies. But reflections from these perspectives and the Community Voice workshops prompt us to further examine the concept of privilege – reminding us that not everybody’s experience of the benefits from, use of, and access to technologies is necessarily equal or fair. We are often reminded that data and AI, used effectively, can create public good and save lives. Data certainly has the potential to save lives, but what do we do about the fact that data may be more able to save some people’s lives than others? 

It is possible to be ‘technoprivileged’ – to be privileged in one’s ability to use and benefit from technologies, and to escape some of the adverse consequences of their deployment. Just as the power asymmetries behind the deployment of technologies are often invisible, so too is the privilege that some approaches to technology confer. As those we have engaged with made clear, understanding these interactions and ensuring the voices of those affected are heard are vital to ensuring that these technologies reduce, not amplify, injustice in society.
