Justice and equalities

Using, designing, deploying and governing data and AI in ways that reflect social, economic, racial and environmental justice

The core standpoint of the Ada Lovelace Institute is that the benefits of data and AI must be justly and equitably distributed, and their use must enhance individual and societal wellbeing.

This vision for just data and AI is still far from being a reality. Algorithmic systems struggle to take account of cultural and societal context, and often fail to value difference or deviation from the norm. Ensuring justice and fairness in these systems is a foundational challenge, and one that requires sociotechnical solutions that can only come through interdisciplinary work.

Algorithmic bias and discrimination, in which automated systems deliver differential outcomes for minority or underrepresented groups, remain a fundamental flaw in many AI applications. Unrepresentative and biased datasets continue to impede the development of algorithmic tools that respect and reinforce equalities.

At Ada, we see data and AI as integral elements of a functional society. We are working to ensure that everyone can participate in creating and achieving a positive vision of an equitable future, one in which the societal, intellectual, commercial and financial benefits of these technologies are shared by all.

Our work on justice and equalities aims to:

  • achieve racial justice in the use of data and algorithmic systems: This requires not only addressing problems of missing data and ensuring datasets are reflective of the societies in which we live, but also recognising the ways in which (even technically unbiased) technologies can exclude or objectify certain communities, reinforce racist practices or give an appearance of objectivity to discriminatory attitudes and norms.
  • strive for economic justice: This requires recognising the market incentives behind extractive data practices, which see individual privacy and data rights sacrificed for corporate gain, to the exclusion of public benefit. The long-term impacts of automation and AI on labour, work, productivity and social purpose must be anticipated and offset through the fair distribution and redistribution of the benefits that flow from automation, which warrants exploring innovative mechanisms such as cross-jurisdictional taxes on digital markets.
  • reinforce environmental justice: Through an agenda that interrogates the technosocietal infrastructures that create and perpetuate environmental hazards, we will consider the impact of our digital lives on our planet. In particular, by examining how corporations’ professional practices and ethical codes shape the resources available to marginalised communities, we can address and reduce the environmental impact of the technology sector.
  • understand and reconceptualise structural justice: By examining the experiences of the different groups affected by AI and algorithmic systems, we can ask whether these technologies and their applications enjoy a social licence, public trust and legitimacy. Viewing this through the lens of institutions, structures and accepted norms, we can question the sustainability of investment in data and AI and its implications for the use of public funds, and better understand the impact of data and AI on society.

Through our work we are addressing uses of AI that raise particular concerns around algorithmic bias and racial justice, such as facial recognition and other biometric technologies.

We are researching the impact of data-driven technologies on social and health inequalities, and seeking to understand how regulation should evolve to protect and ensure equalities in an AI-driven world. This includes expanding our notion of equalities to take account of data-driven discriminatory practices that treat individuals unfairly not only on the basis of their race or identity, but also on the basis of their digital activity.
