The Ada Lovelace Institute, the Institute for the Future of Work and international experts in algorithm accountability come together to explore how we can ensure that algorithmic systems, and those deploying them, are truly accountable. The event surfaces different global approaches, discusses them in relation to their governance landscapes, explores possible risks and considers regulatory options.
Watch the event in full below:
Carly Kind, Director, Ada Lovelace Institute
Anna Thomas, Director, Institute for the Future of Work
Benoit Deshaies, A/Director, Data and Artificial Intelligence, Office of the Chief Information Officer, Treasury Board of Canada Secretariat
Albert Fox Cahn, Executive Director of the Surveillance Technology Oversight Project and participant in the New York City Automated Decision Systems Task Force
Helen Mountfield, Principal of Mansfield College, Oxford and Chair of the Institute for the Future of Work’s Equality Task Force
Craig Jones, Deputy Chief Executive, Data System Leadership group, New Zealand
The extensive use of algorithmic decision-making across all domains of social life requires specific accountability mechanisms and regulations that ensure meaningful redress. This is an especially hard task when little information about the algorithms in use is available in the public domain, and when the rationale for their implementation, and the organisations ultimately responsible for their functioning, remain opaque.
While there is little consensus over what approach to take, countries across the world have started designing and applying different mechanisms to boost algorithm accountability.
In 2018, New York City launched a Task Force to make recommendations on how the city should manage automated decision-making systems. Earlier this year, New Zealand issued an Algorithm Charter to be applied to high-risk applications. Canada has developed a model of Algorithmic Impact Assessment, a scorecard that helps identify the level of risk of an algorithm and mitigating factors. More recently, the UK Institute for the Future of Work’s Equality Task Force released a report highlighting gaps in legal protection and mechanisms for accountability, and calling for new legislation: an Accountability for Algorithms Act.
In this event, the Ada Lovelace Institute, the Institute for the Future of Work and international experts in algorithm accountability surface key concerns and relate them to the governance and regulatory landscapes of different national contexts. We ask:
- How do we ensure that algorithmic systems and the agencies and organisations deploying them are truly accountable? Is new regulation necessary?
- What can we learn from the different approaches in New Zealand, Canada and New York City?
- How do they relate to their respective regulatory and administrative contexts?