Algorithm accountability
Ensuring public oversight, enabling scrutiny and challenging asymmetries of power between those deploying algorithmic systems and those impacted by them
As algorithmic decision-making systems and AI are designed and deployed at unprecedented scale and speed across the public and private sectors, there is a pressing need to ensure public oversight, enable scrutiny of systems and challenge asymmetries of power between those deploying algorithmic systems and those impacted by them.
Algorithms and AI offer new possibilities for the delivery of public services, advancements in healthcare research, efficiencies in the labour market and the personalisation of online services.
But their ‘black box’ nature – the opacity with which they are designed and deployed – creates the appearance of an absence of human control and responsibility, and can bring them into conflict with important societal values concerned with just and fair decision-making, transparency and accountability.
The lack of transparency and accountability in relation to algorithmic systems not only undermines fairness and due process for individuals affected by those systems, but also has corrosive effects on political institutions, competitive markets, and the democratic legitimacy of governments and public bodies.
Of particular concern to Ada is the expansion of algorithmic decision-making systems across the public sector, from AI ‘streaming tools’ in use in immigration applications and predictive analytics in policing, to risk scoring systems to support welfare and social care decision-making in local councils.
Ongoing investigative efforts by researchers have revealed the extensive application of predictive analytics in public services across the UK and highlighted individual cases of concern, but a persistent, systemic deficit in public understanding of where and how these systems are used remains to be addressed.
Opaque and unaccountable AI and algorithmic systems are not trustworthy systems, and their use will have negative effects on individuals, particularly underrepresented and vulnerable groups, and on societies.
The Ada Lovelace Institute is committed to understanding how AI and algorithms can be made more transparent, accountable and trustworthy. We see this problem as complex and multifaceted, one that cannot be effectively addressed through high-level principles alone, and so we aim to:
- Build the evidence: To understand how to make AI and algorithmic systems more transparent and accountable, we need to observe and document how, where and why they are being used, what their effects are, and the interface between technologies and governance mechanisms such as regulation, ethical codes and best practices.
- Develop tools and methodologies: We are working to develop tools and methodologies that enable AI and algorithms to be assessed for societal impact, and regulatory and normative compliance. Algorithmic audits and impact assessments are two categories of tools that have received some attention but need further iteration, and we are working on the development of best practices and frameworks to enable them to be trialled and improved in specific use cases.
- Equip policymakers, regulators and oversight bodies: Achieving algorithmic accountability will require regulators to develop the skills and capabilities to inspect AI and other systems for compliance with rules and norms. We work with regulators and experts to develop methodologies for inspecting and understanding algorithms in the public and private sectors.
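To make the idea of an algorithmic audit concrete, here is a minimal sketch of one narrow slice of such an assessment: checking a decision system's outcomes for demographic parity using the widely cited 'four-fifths rule'. The group names, data and threshold are illustrative assumptions, not a methodology endorsed by the source; a real audit would examine far more than a single outcome metric.

```python
def selection_rates(decisions):
    """Compute the positive-decision rate for each group.

    decisions: dict mapping group name -> list of 0/1 outcomes.
    """
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}


def demographic_parity_check(decisions, threshold=0.8):
    """Flag whether each group's selection rate is at least
    `threshold` times the highest group's rate (the 'four-fifths
    rule' heuristic). Returns dict mapping group -> bool (passed)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {group: rate / highest >= threshold
            for group, rate in rates.items()}


# Hypothetical approval decisions from a risk-scoring tool.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 37.5% approved
}
print(demographic_parity_check(decisions))
# → {'group_a': True, 'group_b': False}
```

Here group_b's approval rate is only half of group_a's, falling below the 0.8 threshold and flagging a disparity that an auditor would then investigate further.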
Related projects
Accountability of algorithmic decision-making systems
Developing foundational tools to enable accountability of public administration algorithmic decision-making systems.
Delivering responsible data through the National Data Strategy
Examining how the commitment to responsible data in the UK's National Data Strategy could be realised and what it misses.
Regulatory inspection of algorithmic systems
Establishing mechanisms and methods for regulatory inspection of algorithmic systems, sometimes known as 'algorithm audit'.
From the Ada blog
Accountability for algorithms: a response to the CDEI review into bias in algorithmic decision-making
Reviewing bias is welcome, and stopping the amplification of historic inequalities is essential.
Reinventing online platforms: is the new EU regulatory package enough?
Without mandatory interoperability, will the Digital Markets Act and Digital Services Act be enough to reset the rules for big tech?
Algorithms in social media: realistic routes to regulatory inspection
Establishing systems, powers and capabilities to scrutinise algorithms and their impact.
Related events
Regulating for algorithm accountability: global trajectories, proposals and risks
Exploring how we can ensure that algorithmic systems and those deploying them are truly accountable.
Almost Future AI
An online salon exploring AI ethics and near-future fiction.