
Accountability of algorithmic decision-making systems

Developing foundational tools to enable accountability of public administration algorithmic decision-making systems.

Through convening and research, this project aims to develop foundational tools to enable accountability of UK public-sector algorithmic decision-making, such as a typology and a public register.


Project background

From social care to policing, algorithmic decision-making systems (known as ‘ADMs’) are being deployed across a range of public services in the UK. At the level of service delivery, they are transforming the quality of services and, in turn, how we – the people these systems are meant to serve – relate to them.

There is currently little transparency about the ADMs in use. It is difficult for the general public, as well as researchers, regulators and advocacy groups, to obtain information about the rationale behind the decisions that underpin their adoption, and about their effects on services, communities and people.

This means that we are only able to respond to the negative effects such systems may have on people and society after they have occurred, and one at a time. To ensure these transformations don’t produce negative outcomes, we need to understand how ADMs work in practice.

Making information on ADMs public in a proactive way may contribute to much-needed democratic scrutiny of ongoing changes and challenges in public administration. However, at present there is no common language with which to categorise the ADM systems in use and systematically analyse their effects. Further, existing transparency mechanisms fail to capture key aspects of ADMs proactively and in context.

Project overview

Our research, Transparency mechanisms for UK public-sector algorithmic decision-making systems, aims to build meaningful transparency and accountability of ADMs by:

  • Blueprinting the preliminary structure for a typology, which we commissioned from Dr Michael Veale (UCL)
  • Reviewing current transparency mechanisms available in the UK for public services and their existing or potential application to ADMs
  • Establishing what a public register of ADMs that carry out local and central government functions should look like.

This research addresses the following questions:

  • What common terminologies should we use for ADMs, and how can we effectively categorise them?
  • What do we need to know in order to effectively understand the working of ADMs?
  • How could a register contribute to accountability and to popularising understanding of ADMs?
  • What should such a register look like to be most effective?
  • How should it relate to existing transparency mechanisms?
  • What should its governance be?

To create coherence in this landscape, and to systematically strengthen transparency practices across government, the research focuses on currently underused informational assets that, taken together (if not individually), can enable meaningful inferences about the way ADM systems are used.

A public register of ADMs?

To help establish what a public register of ADMs in use in local and central government might look like and how it should be administered, we convened an event with national and international experts looking at questions of mandatory reporting for public administration ADMs. You can watch the event and read the summary here.

Part of this research is developed through a research fellowship at Cardiff University’s Data Justice Lab.
