Society, Justice & Public Services

Shaping how political choices and public services will be designed in the information age.

Who we are and what we do

The Society, Justice & Public Services research domain builds evidence on how people and society are affected by data and AI and seeks to shape how political choices and public services will be designed in the information age. We lead Ada’s research on questions of social justice and social policy.

As data and AI touch every aspect of our lives – from education and healthcare to elections and relationships – we explore how people’s lives and livelihoods are affected by these technologies. We amplify underheard voices, consider how social policy should respond to rebalance the power dynamics created by data and AI, and support individuals and communities to flourish. We examine how the use of data-driven technologies and AI affects our notions of, and expectations of, both the state and the ‘social contract’, and seek to articulate a positive vision for society in the information age.

In particular, we interrogate:

  • how inequalities (and our understanding of inequalities) are shaped by data and AI
  • how data affects notions of identity, community, diversity and solidarity
  • what impact data and AI have on access to information, agency and decision-making.

Through our work, we seek to influence the practice, policy and design of public services. Our research builds a sociotechnical understanding of AI in public services from the ground up, examining how the use of AI affects complex systems and attempts to deal with ‘wicked’ problems. We build evidence using a range of methods, including desk research, workshops and roundtables, ethnographic studies and public participation to understand people’s experiences and society’s needs. The evidence we gather is used to shape politics and policies to improve wellbeing, justice and inclusivity.

We engage with frontline professionals, public services, expert bodies, citizens and affected people, regulators and government departments to explore how different actors across the public sector could use AI in the public interest. Our public and expert deliberations aim to shape how public services should use AI effectively and with legitimacy, and how public services should evolve in light of developments in AI and data-driven technologies. We seek to build capacity and knowledge among those working in the public sector, enabling their critical engagement with, and informed decision-making about, emerging technologies.

What we are working on

We are currently working on the following projects:

Education and AI

The role of AI and data-driven technologies in primary and secondary education in the UK

Gender and AI

How the use of data-driven systems exacerbates inequalities in access to primary healthcare for transgender and non-binary people in the UK

AI and genomics futures

This joint project with the Nuffield Council on Bioethics explores how AI is transforming the capabilities and practice of genomic science.

Our impact

We want to see a world in which the public sector uses AI and data-driven technologies in the public interest. Below are some of the ways in which our recent work has helped move towards that goal.

Our programme of research and deliberation on COVID-19 technologies shaped the domestic use of vaccine passports and contact-tracing apps. During the early stages of the pandemic we mapped the rapidly evolving technical landscape, drew together multidisciplinary experts to deliberate on the conditions under which these technologies could be effective and ethical, and produced evidence from across 34 countries. We also engaged with individuals instrumental in the development, deployment and regulation of these tools in the UK; with European and US stakeholders; and with the World Health Organization. The final report, Lessons from the App Store: Insights and learnings from COVID-19 technologies, was published in July 2023 alongside a Data Explorer and policy briefing.

We have also conducted research and produced explainers on foundation models: the type of general-purpose model powering services like ChatGPT. Our research has examined the risks and opportunities of this technology and explored the principles, regulations and practices necessary to deploy foundation models in the public sector safely, ethically and equitably, and we have given evidence to the Committee on Standards in Public Life and the Public Accounts Committee. Our explainer What is a foundation model? has been widely cited by civil society and Government, notably referenced in the Government’s AI Safety Summit discussion paper and used to inform its AI white paper consultation response.

We have undertaken a two-year programme of work examining the intersection of data and health inequalities, culminating in our report Access denied?, produced in partnership with the Health Foundation. This report set out recommendations for policymakers to overcome several key challenges with data-driven health services, including digital exclusion, lack of public confidence in data use, and poor data quality. Using a participatory research method, we collaborated with a group of peer researchers with lived experience of poverty, who conducted interviews in their communities about digital health services, health data use and health inequalities. As a result of this project, Ada was invited to give oral evidence at a roundtable in March 2023 as part of the Independent Review on Equity in Medical Devices. In November 2023, Access denied? was referenced in a House of Lords debate on the COVID-19 Committee report.

Our three-year programme on biometrics aimed to disentangle the complex ethical and policy challenges raised by biometric technologies and explore the potential for regulatory and oversight reform. Working with other Ada research domains, we convened the Citizens’ Biometrics Council and commissioned an independent review of the governance of biometric data by Matthew Ryder KC. The 2022 Ryder Review identified a regulatory gap and fragmented, inadequate governance of biometrics. Ada combined this evidence with public engagement from the first national study of public attitudes to facial recognition technology, Beyond face value (2019), and the report of the Citizens’ Biometrics Council (2021), to produce Countermeasures in 2022, which made specific policy recommendations on biometrics to the UK Government. This work has helped shift media and policy conversations beyond police uses and the identification function of biometrics, and has given policymakers a better understanding of public opinion on biometrics use and governance.