From education to healthcare, finances to employment, AI systems are increasingly being used to make important decisions affecting our lives. As these technologies become more widespread, there is a pressing need to implement new mechanisms, processes and tools to ensure they are developed with oversight and scrutiny.
Over recent years public and private-sector organisations have invested significant resources into addressing potential AI harms, including designing and developing AI ethics principles and accountability mechanisms. But there is still work to be done in demonstrating exactly how these principles and mechanisms can be translated into practices for developers and policymakers to implement.
One emerging accountability mechanism generating a lot of interest, including from legislators around the world, is the ‘algorithmic impact assessment’ (AIA). AIAs have the potential to help build public trust, mitigate potential harms and maximise the potential benefits of AI systems by assessing possible societal impacts before implementation, including through meaningful public participation.
This online event draws from and builds on Ada’s recently published report, Algorithmic impact assessment: a case study in healthcare, to discuss the next steps for AIA research and implementation.
In the report, we outline the first known, detailed proposal for the use of an AIA for data access in a healthcare context – the National Health Service (NHS) in England. By trialling this detailed process, the NHS will be the first health system in the world to use this approach.
Watch the event back:
Following on from this research, the Ada Lovelace Institute convened experts representing policy, industry, healthcare and AI ethics to discuss some of the wider questions and practical challenges raised by this case study, including:
- Given the relative infancy of AIAs, what does ‘good’ look like?
- Could AIAs be legally mandated? What would that look like? What would be required?
- How can AIAs make institutions creating and developing AI systems accountable to the individuals and communities those systems affect?
- How do AIAs relate to other types of mechanism, including other forms of impact assessment, such as data protection impact assessments (DPIAs)?
- What needs to be developed to support the implementation of AIAs more widely across different sectors?