Algorithm accountability

Ensuring public oversight, enabling scrutiny and challenging asymmetries of power between those deploying algorithmic systems and those impacted by them

As algorithmic decision-making systems and AI are designed and deployed at unprecedented scale and speed across the public and private sectors, there is a pressing need to ensure public oversight, enable scrutiny of systems and challenge asymmetries of power between those deploying algorithmic systems and those impacted by them.

Algorithms and AI offer new possibilities for the delivery of public services, advancements in healthcare research, efficiencies in the labour market and the personalisation of online services.

But their ‘black box’ nature – the opacity with which they are designed and deployed – creates the appearance of an absence of human control and responsibility, and can bring them into conflict with important societal values concerned with just and fair decision-making, transparency and accountability.

The lack of transparency and accountability in relation to algorithmic systems not only undermines fairness and due process for individuals affected by those systems, but also corrodes political institutions, competitive markets, and the democratic legitimacy of governments and public bodies.

Of particular concern to Ada is the expansion of algorithmic decision-making systems across the public sector, from AI ‘streaming tools’ used in immigration applications and predictive analytics in policing, to risk-scoring systems that support welfare and social care decision-making in local councils.

Ongoing investigative efforts by researchers reveal the extensive use of predictive analytics in public services across the UK and have highlighted individual cases of concern. But there remains a persistent, systemic deficit in public understanding of where and how these systems are used – a deficit that needs to be addressed.

Opaque and unaccountable AI and algorithmic systems are not trustworthy, and their use will have negative effects on individuals – particularly those in underrepresented and vulnerable groups – and on societies.

The Ada Lovelace Institute is committed to understanding how AI and algorithms can be made more transparent, accountable and trustworthy. We see this problem as complex and multifaceted, one that cannot be effectively addressed through high-level principles alone. As such, we aim to:

  • Build the evidence: To understand how to make AI and algorithmic systems more transparent and accountable, we need to observe and document how, where and why they are being used, what their effects are, and the interface between technologies and governance mechanisms such as regulation, ethical codes and best practices.
  • Develop tools and methodologies: We are working to develop tools and methodologies that enable AI and algorithms to be assessed for societal impact and for regulatory and normative compliance. Algorithmic audits and impact assessments are two categories of tool that have received some attention but need further iteration, and we are developing best practices and frameworks so that they can be trialled and improved in specific use cases (see the illustrative sketch after this list).
  • Equip policymakers, regulators and oversight bodies: Achieving algorithmic accountability will require regulators to develop the skills and capabilities to inspect AI and other algorithmic systems for compliance with rules and norms. We work with regulators and experts to develop methodologies for inspecting and understanding algorithms in the public and private sectors.
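
To make the idea of an algorithmic audit concrete, the sketch below shows one narrow, quantitative check an auditor might run: comparing a system's selection rates across demographic groups. The records, group labels and the 0.8 (‘four-fifths’) threshold are illustrative assumptions chosen for this example, not a methodology endorsed by the Institute.

```python
# A minimal, hypothetical sketch of one narrow audit check: comparing a
# system's selection rates across demographic groups ("disparate impact").
# The toy records and the 0.8 threshold (the US "four-fifths rule") are
# illustrative assumptions, not a prescribed audit methodology.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (assumes max > 0)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Toy audit log: (demographic group, whether the system selected the person).
    decisions = [("A", True), ("A", True), ("A", False), ("A", True),
                 ("B", True), ("B", False), ("B", False), ("B", False)]
    rates = selection_rates(decisions)
    ratio = disparate_impact_ratio(rates)
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f} "
          f"({'flag for review' if ratio < 0.8 else 'within threshold'})")
```

Real audits go far beyond a single metric – they examine training data, deployment context and governance arrangements – but even a simple disparity check like this illustrates why auditors and regulators need access to a system's inputs and decisions.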
