
Society and people: systemic racial injustice and cleaning up our own house

What should the Ada Lovelace Institute do to address systemic racial injustice?

19 June 2020


[Image: paper stick figures holding hands]

The murder of George Floyd has sharpened the focus on systemic racial injustice. We all bear responsibility for the fact that it took the public death of a Black man at the hands of law enforcement officers to make visible what was already in plain sight, and we must mobilise around the only defensible course of action: to respond to George Floyd’s death and the Black Lives Matter protests by transforming into different, better ways of being, as people and as societies.

As an organisation, we stand in solidarity with Black people and with disempowered communities finding voices to express their own experience of structural inequalities. With our mission to make data and AI work for people and society, we commit through our work at the Ada Lovelace Institute to expose evidence of discrimination and inequalities, and to dismantle structures and systems that perpetuate injustice.

The unequal distribution of the benefits of digital transformation confirms that all people are not equal when viewed through technology’s lens. We already know that societal biases are replicated in algorithmic decision-making systems, and can predict that the effects of new technologies to stop the spread of COVID-19 and support social distancing will be distributed disproportionately among already disadvantaged population groups, exacerbating societal inequalities and increasing asymmetries of power.

For an organisation dedicated to ensuring that the benefits of data and AI are justly and equitably distributed, there are contradictory and rapidly evolving questions to disentangle. As the structures and institutions around us are being disrupted, how do we understand what that means for structural and institutional inequalities, and for the discrimination that extends from them? What will that mean for technology’s relationship with society? How do we apply that emerging knowledge to make sure data-driven technologies and data practices change in a direction that is good for society? And how can we anticipate and act to limit the effects on unfairly disadvantaged groups?

We will need transparency, legal structures and public consultation to introduce both new technologies and new applications of existing technologies into societies in ways that limit profiling and prevent discrimination. Even with these measures, we must guard against ‘technoprivilege’ operating to widen existing inequalities for digitally disadvantaged individuals and social groups – and be alive to intersectional inequalities compounding factors of ethnicity, gender, sexual orientation and socioeconomic status. Data and AI won’t work for people and society until they work for everyone.

Clean up your own house

Like many organisations, we have responded to the evidence of racial injustice with a groundswell of impetus to examine our own practice. In a whole-team meeting, we created a list of 56 things we should do differently: from the language we use, to hiring practices, to the intentional use of references to draw out authority, to how we understand inclusion, diversity and representation in our public engagement, to who we platform in our events and blogs, to reconsidering what the ‘mainstream’ can learn from currently marginalised communities. As we embed these into our ongoing practice, they will be small but important steps towards becoming the organisation we want to be.

For a small, funded civil society organisation with a mission to make data and AI work for people and society, there is a larger, and radical, question of how we take our work forward. Mitigating inequalities is embedded in our work to rethink data practices, narratives and regulations, to examine the state’s dependencies on algorithmic decision-making, and to ensure technology supports our rights to identities and liberties. Equity and diversity are strategic values that guide our work. The impetus to examine our own practices is good and necessary, and it must go to the root of our foundations and assumptions as an organisation.

But are we doing enough? And are we doing the right things? Are there new issues we must now address, and are there projects we were prioritising that are less important than understanding and mitigating emerging harms? This is a time to be agile and radical: we must interrogate our theories of change, and ask whether the ideas we propagate and the language we use are enabling us to make, or preventing us from making, the change we say we want.

Working collectively, smaller groups, organisations and voices can amplify evidence of injustices in social and technical infrastructures and systems, and shift power imbalances. Public opinion and activism have this week mobilised the evidence-based research of Joy Buolamwini, Timnit Gebru and Deborah Raji to convince IBM to pull out of the facial recognition technology market, citing concerns about ‘racial profiling and violations of basic human rights and freedoms’; Microsoft to stop selling facial recognition systems to police departments until there is a federal law; and Amazon to ban police use of its facial recognition technology for a year. We must celebrate and amplify those successes, and the voices of the authors of the research that underpins our work.

This brings us to the perspectives of those who will be most impacted by technologies: the public. As part of our evidence gathering to support policymaking and regulatory reform, we published Beyond Face Value in September 2019, a public attitudes survey that provided the first evidence base for what people in the UK wanted facial recognition technology to do, including detailed insights into differences of opinion between White and Black and Minority Ethnic respondents. Amplifying public voices and ensuring they are heard will be critical for trust and confidence in, and democratic legitimacy for, more widespread use of data-driven decision-making and algorithmic systems in our society and economy.

There are more questions than answers, and we recognise the need to do more in our work to make data and data-driven technologies work for people and society. We commit to interrogating these questions in our programmes and projects, and to convening gritty conversations with interdisciplinary experts from all communities to surface issues and amplify under-represented voices. And in doing so, we commit to doing our part to bring about systemic changes that eradicate racial injustice in data and AI, and to prevent data and AI being used to maintain structures of oppression or exacerbate existing inequalities.