
Working it out

Lessons from the New York City algorithmic bias audit law

Project lead
Lara Groves

Project background

As artificial intelligence (AI) systems increasingly affect our day-to-day lives, it is essential that the developers of these systems are held accountable for ensuring their products operate lawfully, safely and ethically. One way to test AI systems is through algorithm audits, which can help developers evaluate how a system performs and what kinds of biases it may exhibit against particular demographic groups.

Policymakers across the EU and North America are working to implement algorithmic auditing regimes in legislation, but it is not clear what steps they can take to implement these regimes effectively. There are currently no standardised or consistent practices for algorithm auditors to follow, and the ecosystem of potential auditors is still emerging.

In 2021, New York City (NYC) passed a bias audit law (Local Law 144) creating the first legally mandated algorithmic bias auditing regime. Under this law, employers and employment agencies using automated employment decision tools (AEDTs), such as CV-scanning or recruitment tools, must undertake an independent bias audit carried out by a second-party auditor. The results of these bias audits must be publicly listed and made available to individuals applying for roles in NYC.

What are algorithm audits for?

In our report Examining the Black Box, we distinguish between two types of audit:

  1. Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias.
  2. Regulatory inspection: a broader approach focused on an algorithmic system’s compliance with regulation or norms and requiring a number of different tools and methods.

While a regulatory inspection is typically undertaken by a regulator or an auditing/compliance professional, a bias audit can be undertaken by either a third party (independent researchers, investigative journalists or data scientists) or a second party, where a contracted company performs an audit on behalf of a customer. In the context of the NYC law, the customer is the employer or employment agency.
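To make the 'bias audit' category concrete: the rules implementing Local Law 144 ask auditors to calculate selection rates and impact ratios for different demographic categories. The sketch below, in Python, illustrates the basic arithmetic only; the data, category names and function names are hypothetical, and it is a minimal illustration rather than a description of any auditor's actual methodology.

    # Illustrative only: the kind of impact-ratio arithmetic a Local Law 144
    # bias audit involves. Category names and numbers are hypothetical.

    def impact_ratios(selected, applicants):
        """Selection rate per category, divided by the highest selection rate."""
        rates = {g: selected[g] / applicants[g] for g in applicants}
        highest = max(rates.values())
        return {g: rate / highest for g, rate in rates.items()}

    # Hypothetical outcomes from an automated CV-screening tool
    selected = {"category_a": 60, "category_b": 30}
    applicants = {"category_a": 100, "category_b": 80}

    for category, ratio in impact_ratios(selected, applicants).items():
        print(f"{category}: impact ratio = {ratio:.2f}")
    # category_a: 1.00 (highest selection rate); category_b: 0.62

An impact ratio well below 1 for a category flags a potential disparity for the auditor to report; the law leaves interpretation of the results, and any remediation, to the employer or employment agency.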

Project overview

In partnership with Data & Society, the Ada Lovelace Institute (Ada) will explore what lessons can be learned from the NYC algorithmic bias audit law for other governments implementing similar schemes.

We will conduct interviews with auditors working under the emerging NYC bias audit regime to explore their experiences and the dynamics between auditors, their clients and the developers of AEDTs.

The key questions we will seek to answer include:

  • What are the practical components of a bias audit in this context?
  • What aspects or components make for an effective bias auditing regime?
  • What are the experiences of auditors, and how can we use those experiences to inform wider policy and practice?

This project builds on existing Ada research, including our report Examining the Black Box.

This project is part of a wider forthcoming project series from Data & Society that will explore the NYC algorithmic bias audit law.


Image credit: LumineImages