
Examining the Black Box

Tools for assessing algorithmic systems

Identifying common language for algorithm audits and impact assessments

29 April 2020

A report by the Ada Lovelace Institute and DataKind UK clarifies the terminology around algorithm audits and algorithmic impact assessments, and surveys the current state of research and practice.

As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and for regulatory and normative compliance.

This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system.

Clarifying terms and approaches

Through a literature review and conversations with experts from a range of disciplines, we’ve identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment. But different communities do not always agree on what these terms mean: social scientists, computer scientists, policymakers and the general public bring different interpretations and frames of reference.

While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches those terms describe.

Clarifying which approach we’re referring to, as well as where further research is needed, will help policymakers and practitioners to do the more vital work of building evidence and methodology to take these approaches forward.

We focus on algorithm audit and algorithmic impact assessment. For each term, we identify two distinct approaches it can refer to:

  • Algorithm audit
    • Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias (see the illustrative sketch after this list)
    • Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
  • Algorithmic impact assessment
    • Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
    • Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use
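
To make the bias audit concrete, here is a minimal illustrative sketch of one common check: comparing the rates at which a decision system selects people from different demographic groups. The data and column names here are hypothetical, and a real bias audit would involve many more metrics and contextual judgement.

```python
# Minimal, illustrative bias-audit check (hypothetical data and names).
# Compares selection rates across demographic groups for a binary decision
# system and reports the demographic parity difference.
import pandas as pd

# Hypothetical audit sample: one row per decision, recording the affected
# person's demographic group and the system's outcome (1 = selected).
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Selection rate per group: the share of positive outcomes each group receives.
selection_rates = decisions.groupby("group")["outcome"].mean()
print(selection_rates)

# Demographic parity difference: the gap between the highest and lowest
# selection rates. A large gap flags the system for closer scrutiny; it does
# not, on its own, establish that the system is unfair or unlawful.
parity_gap = selection_rates.max() - selection_rates.min()
print(f"Demographic parity difference: {parity_gap:.2f}")
```

This is deliberately narrow: a bias audit of this kind is targeted and non-comprehensive, which is exactly what distinguishes it from the broader regulatory inspection approach described above.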

Further research and practice priorities

For policymakers and practitioners, it may be disappointing to see that many of these approaches are not ‘ready to roll out’, and that the evidence base and best-practice approaches are still being developed. However, this creates a valuable opportunity to contribute – through case studies, transparent reporting and further research – to the future of assessing algorithmic systems.

We look at the state of research and practice for each approach and make a series of recommendations, tailored to regulators, civil society, researchers, the public and private sectors, and data scientists on the ground. There is scope for a range of important work across sectors and approaches. We’re excited to see some of that work underway, such as Data & Society’s new paper on the challenges of translating existing impact assessment models to algorithmic systems, and to take some of it forward ourselves.

Our follow-on research and publications

Continuing our collaboration with DataKind UK, we are exploring how to translate some of these findings into practical, accessible advice and information for social change organisations. DataKind UK is an expert voice in the tech-for-good and non-profit sector, through its significant work building the capacity of the social sector to use data effectively and responsibly, and its network of pro bono data scientists.

In addition, the Ada Lovelace Institute is taking forward the discussion on regulatory inspection of algorithms (sometimes referred to as ‘algorithm audit’). In the coming months we will be hosting a series of cross-disciplinary workshops on regulatory inspection in three domains: digital media platforms, in collaboration with Reset; pricing and competition, in collaboration with Inclusive Competition; and equalities, in collaboration with the Institute for the Future of Work, following on from their recent report on AI in hiring and assessing impacts on equality. These workshops aim not only to further the conversation in their respective domains, but also to identify shared needs, methodologies, challenges and solutions for the regulatory inspection of algorithmic systems across sectors.
