
Regulating for algorithm accountability: global trajectories, proposals and risks

Exploring how we can ensure that algorithmic systems and those deploying them are truly accountable.

Date and time
7:00pm – 8:00pm, 3 December 2020 (GMT)
Location
Virtual event

Join the Ada Lovelace Institute, the Institute for the Future of Work and international experts in algorithm accountability to explore how we can ensure that algorithmic systems and those deploying them are truly accountable. The event will surface different global approaches, discuss them in relation to their governance landscapes, explore possible risks and consider regulatory options.

Co-chairs

  • Carly Kind

    Director, Ada Lovelace Institute
  • Anna Thomas

    Director, Institute for the Future of Work

Speakers

  • Benoit Deshaies

    A/Director, Data and Artificial Intelligence, Office of the Chief Information Officer, Treasury Board of Canada Secretariat
  • Albert Fox Cahn

    Executive Director of the Surveillance Technology Oversight Project and participant in the New York City Automated Decision Systems Task Force
  • Helen Mountfield

    Principal of Mansfield College, Oxford and Chair of the Institute for the Future of Work’s Equality Task Force
  • Craig Jones

    Deputy Chief Executive, Data System Leadership group, New Zealand

The extensive use of algorithmic decision-making in all domains of social life requires specific accountability mechanisms and regulations that ensure meaningful redress. This is an especially hard task when little information on the algorithms in use is available in the public domain, and when their implementation rationale, and the organisations ultimately responsible for their functioning, remain opaque.

While there is little consensus over what approach to take, countries across the world have started designing and applying different mechanisms to boost algorithm accountability.

In 2018, New York City launched a Task Force to make recommendations on how the city should manage automated decision-making systems. Earlier this year, New Zealand issued an Algorithmic Charter to be deployed in case of high-risk applications. Canada has developed a model of Algorithmic Impact Assessment, a scorecard that helps identify the level of risk of an algorithm and mitigation factors. More recently, the UK Institute for the Future of Work’s Equality Task Force has released a report highlighting gaps in legal protection and mechanisms for accountability, and calling for new legislation: an Accountability for Algorithms Act.

In this event, the Ada Lovelace Institute, the Institute for the Future of Work and international experts in algorithm accountability will surface key concerns and relate them to the governance and regulatory landscapes of different national contexts. We will ask:

  • How do we ensure that algorithmic systems and the agencies and organisations deploying them are truly accountable? Is new regulation necessary?
  • What can we learn from the different approaches in New Zealand, Canada and New York City?
  • How do they relate to their respective regulatory and administrative contexts?

We are using Zoom for virtual events open to more than 40 attendees. Although there are issues with Zoom’s privacy controls, when reviewing available solutions we found that there isn’t a perfect product and we have chosen Zoom for its usability and accessibility. Find out more here.

A recording and summary of the talk will be available on the Ada Lovelace Institute website shortly afterwards.

Image credit: Orbon Alija
