
Algorithmic impact assessment in healthcare

A research partnership with NHS AI Lab exploring the potential for algorithmic impact assessments in an AI imaging case study

10 March 2021

Reading time: 3 minutes


From automated diagnostics to personalised medicine, the healthcare sector has seen a surge in the use of data-driven technologies (including AI) to improve health outcomes. To accelerate innovation in this space, public-health agencies have sought to make the health data they control more accessible to researchers and private-sector firms.

But while data-driven technologies have the potential to revolutionise healthcare, they also pose serious risks to the public.

There is therefore a pressing need to understand and mitigate the potential impacts of data-driven systems before they are developed and deployed, including the risks:

  • that they will perpetuate ‘algorithmic bias’, exacerbating health inequalities by replicating entrenched social biases and racism in existing systems;
  • that they may operate at a level of abstraction or obfuscation that impedes transparent explanation of the systems, undermining public scrutiny and accountability;
  • that they may incentivise the collection of personal data, tracking and the normalisation of surveillance, creating risks to individual privacy.

Assessing these potential impacts before systems are built and deployed is the best way to mitigate harm. But while there is a growing academic literature on how public-sector agencies can conduct algorithmic impact assessments (AIAs),1 there remains a lack of case studies of these frameworks in practice, particularly in healthcare settings.

Building on Ada’s existing work on assessing algorithmic systems, this project will identify a best-practice impact assessment model for use in the NHS AI Lab’s programme on AI imaging. It will develop actionable steps on AIAs for the NHS AI Lab, as well as help inform wider AIA research and practice at the intersection of health and data science, in both the public and private sectors.

The AI imaging programme provides a novel case study because it sits at the intersection of two audiences for impact assessment: industry developers and the public sector. Existing research has tended to focus either on public-sector procurers of systems (such as the Canadian AIA) or on the developers of new technology themselves (such as the use of human rights impact assessments in industry). This case study offers a new lens through which to examine and develop algorithmic impact assessment across both contexts.

By the end of this project, we hope to have identified and tested an impact assessment model for use in the NHS AI Lab imaging programme, and to produce a report with generalisable findings for other public-sector agencies seeking to implement similar processes. This will provide a missing piece of the puzzle: practical evidence on the use of algorithmic impact assessments.

This research is being funded by a grant of £66,000 from NHS AI Lab and is governed by a memorandum of understanding (MOU), which is available here.

Further work and opportunities

If you are interested in this work, you can keep up to date by visiting our website, following us on Twitter and subscribing to our fortnightly newsletter.


Footnotes

  1. The term ‘algorithmic impact assessment’ has been interpreted in multiple ways, as we identified in Examining the Black Box. Here we refer to the understanding that has become most prominent: a process for assessing the possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised).
