The core standpoint of the Ada Lovelace Institute is that the benefits of data and AI must be justly and equitably distributed, and their use must enhance individual and societal wellbeing.
This vision for just data and AI is still far from being a reality. Ensuring justice and fairness in algorithmic systems that struggle to take account of cultural or societal context, and which don’t value difference or deviation from the norm, is a foundational challenge, one which requires sociotechnical solutions that can only come through interdisciplinary work.
Algorithmic bias and discrimination, in which automated systems deliver differential outcomes for minority or underrepresented groups, remain a fundamental flaw in many AI applications. Unrepresentative and biased datasets continue to impede the development of algorithmic tools that respect and reinforce equalities.
At Ada, we see data and AI as integral elements in a functional society, and are working to ensure that everyone can participate in creating and achieving a positive vision for an equitable future in which everyone shares in the societal, intellectual, commercial and financial benefits of these technologies.
Our work on justice and equalities aims to:
- achieve racial justice in the use of data and algorithmic systems: This requires not only addressing problems of missing data and ensuring datasets are reflective of the societies in which we live, but also recognising the ways in which (even technically unbiased) technologies can exclude or objectify certain communities, reinforce racist practices or give an appearance of objectivity to discriminatory attitudes and norms.
- strive for economic justice: This requires recognising the market incentives behind extractive data practices, which see individual privacy and data rights sacrificed for corporate gain, to the exclusion of public benefit. The long-term impacts of automation and AI on labour, work, productivity and social purpose must be anticipated and compensated through the proper distribution and redistribution of the benefits that flow from automation, and warrant exploration of innovative mechanisms such as cross-jurisdictional taxes on digital markets.
- reinforce environmental justice: Through an agenda that interrogates the technosocietal infrastructures that create and perpetuate environmental hazards, we will consider the impact of our digital lives on our planet. In particular, by examining the effects of corporations whose professional practices and ethical codes shape the resources available to marginalised communities, we can address and reduce the environmental impact of the technology sector.
- understand and reconceptualise structural justice: By examining the experiences of different groups affected by AI and algorithmic systems, we can question whether technologies and their applications enjoy a social licence, public trust and legitimacy. Viewing this through the lens of institutions, structures and accepted norms, we can question the sustainability of investment in data and AI and its implications for the use of public funds, and better understand data and AI’s impact on society.
Through our work we are addressing use cases of AI that raise particular concerns around algorithmic bias and racial justice, such as facial recognition and biometric technologies.
We are researching the impact of data-driven technologies on social and health inequalities, and seeking to understand how regulation should evolve to protect and ensure equalities in an AI-driven world. This includes expanding our notion of equalities to take account of data-driven discriminatory practices that treat individuals unfairly not only on the basis of their race or identity, but also on the basis of their digital activity.
Our current projects include:

- Research into how the accelerated adoption of data-driven systems amidst COVID-19 might have affected inequalities.
- An independent legal review of the governance of biometric data, led by Matthew Ryder QC and commissioned by the Ada Lovelace Institute.
- Bringing together 50 members of the UK public to deliberate on the use of biometric technologies like facial recognition.
- Developing foundational tools to enable accountability of public administration algorithmic decision-making systems.
- An explainer on transparency mechanisms for UK public-sector algorithmic decision-making systems, written for government, local government, policymakers and researchers.
From the Ada blog
- Celebrating Ada Lovelace Day 2020 with six women computer scientists.
- When the direction of travel is towards more extensive use of biometrics and surveillance, do we need more or less oversight?
- New forms of technology are coming. How do we ensure they’re deployed in a way that complies with equality regulation?
- An online salon exploring AI ethics and near-future fiction.
- What will be the enduring impact of the COVID-19 crisis on surveillance practices?