Virtual event: Ethics and accountability in practice

From principles to practice: what next for algorithmic impact assessments?

We are convening experts from policy, industry, healthcare and AI ethics to discuss our recent case study and the future of AIAs.

Date and time
4:00pm – 5:00pm, 28 March 2022 (BST)
Partner
NHS AI Lab

From education to healthcare, finances to employment, AI systems are increasingly being used to make important decisions affecting our lives. As these technologies become more widespread, there is a pressing need to implement new mechanisms, processes and tools to ensure they are developed with oversight and scrutiny.

Over recent years, public- and private-sector organisations have invested significant resources into addressing potential AI harms, including designing and developing AI ethics principles and accountability mechanisms. But there is still work to be done to demonstrate exactly how these principles and mechanisms can be translated into practices for developers and policymakers to implement.

One emerging accountability mechanism generating significant interest, including from legislators around the world, is the ‘algorithmic impact assessment’ (AIA). AIAs have the potential to help build public trust, mitigate potential harms and maximise the potential benefits of AI systems by assessing possible societal impacts before implementation, including through meaningful public participation.

This online event draws from and builds on Ada’s recently published report, Algorithmic impact assessment: a case study in healthcare, to discuss the next steps for AIA research and implementation.

In the report, we outline the first known, detailed proposal for the use of an AIA for data access in a healthcare context – the National Health Service (NHS) in England. By trialling this process, the NHS will become the first health system in the world to use this approach.

Watch back the event:

This video is embedded with YouTube’s ‘privacy-enhanced mode’ enabled, although playing the video may still add cookies. Read our Privacy policy and Digital best practice for more on how we use digital tools and data.

Following on from this research, the Ada Lovelace Institute convened experts representing policy, industry, healthcare and AI ethics to discuss some of the wider questions and practical challenges raised by this case study, including:

  • Given the relative infancy of AIAs, what does ‘good’ look like?
  • Could AIAs be legally mandated? What would that look like? What would be required?
  • How can AIAs make institutions creating and developing AI systems accountable to the individuals and communities those systems affect?
  • How do AIAs relate to other types of mechanism, including other forms of impact assessment, such as data protection impact assessments (DPIAs)?
  • What needs to be developed to support the implementation of AIAs more widely across different sectors?
