Virtual event

Regulating for algorithm accountability: global trajectories, proposals and risks

Exploring how we can ensure that algorithmic systems and those deploying them are truly accountable

Date and time
7:00pm – 8:00pm, 3 December 2020 (GMT)
Location
Virtual event

The Ada Lovelace Institute and the Institute for the Future of Work bring together international experts in algorithm accountability to explore how we can ensure that algorithmic systems and those deploying them are truly accountable. The event surfaces different global approaches, discusses them in relation to their governance landscapes, explores possible risks and considers regulatory options.

Watch the event in full below:

This video is embedded with YouTube’s ‘privacy-enhanced mode’ enabled, although playing it may still add cookies. Read our Privacy policy and Digital best practice for more on how we use digital tools and data.

Co-chairs

  • Carly Kind

    Director, Ada Lovelace Institute
  • Anna Thomas

    Director, Institute for the Future of Work

Speakers

  • Benoit Deshaies

    A/Director, Data and Artificial Intelligence, Office of the Chief Information Officer, Treasury Board of Canada Secretariat
  • Albert Fox Cahn

    Executive Director of Surveillance Technology Oversight Project and participant in the New York City Automated Decision Systems Task Force
  • Helen Mountfield

    Principal of Mansfield College, Oxford and Chair of the Institute for the Future of Work’s Equality Task Force
  • Craig Jones

    Deputy Chief Executive, Data System Leadership group, New Zealand

The extensive use of algorithmic decision-making in all domains of social life requires specific accountability mechanisms and regulations that ensure meaningful redress. This is an especially hard task when little information about the algorithms in use is available in the public domain, and when their implementation rationale, and the organisations ultimately responsible for their functioning, remain opaque.

While there is little consensus over what approach to take, countries across the world have started designing and applying different mechanisms to boost algorithm accountability.

In 2018, New York City launched a Task Force to make recommendations on how the city should manage automated decision-making systems. Earlier this year, New Zealand issued an Algorithm Charter to be applied to high-risk uses of algorithms. Canada has developed an Algorithmic Impact Assessment model, a scorecard that helps identify an algorithm’s level of risk and appropriate mitigation measures. More recently, the UK Institute for the Future of Work’s Equality Task Force has released a report highlighting gaps in legal protection and mechanisms for accountability, and calling for new legislation: an Accountability for Algorithms Act.

The Ada Lovelace Institute, the Institute for the Future of Work and international experts in algorithm accountability surface key concerns and relate them to the governance and regulatory landscapes of different national contexts. In this event we ask:

  • How do we ensure that algorithmic systems and the agencies and organisations deploying them are truly accountable? Is new regulation necessary?
  • What can we learn from the different approaches in New Zealand, Canada and New York City?
  • How do they relate to their respective regulatory and administrative contexts?

We are using Zoom for virtual events open to more than 40 attendees. Although there are issues with Zoom’s privacy controls, when reviewing available solutions we found that there isn’t a perfect product and we have chosen Zoom for its usability and accessibility. Find out more here.
