Research in the fields of artificial intelligence (AI) and data science is often quickly turned into products and services that affect the lives of people around the world. These technologies are used in the provision of public services like social care, in determining which information is amplified on social media, in deciding what jobs or insurance people are offered, and even in judging who is deemed a risk to the public by police and security services.
Since products and services built with AI and data science research can have substantial effects on people’s lives, it is essential that this research is conducted safely and responsibly, and with due consideration for the broader societal impacts it may have.
To address this challenge, the Ada Lovelace Institute, the Institute for Data Science and AI at the University of Exeter, and the Alan Turing Institute have examined the role of academic and corporate Research Ethics Committees (RECs) in evaluating AI and data science research for ethical issues, investigating the kinds of common challenges these bodies face.
We are grateful to the Arts and Humanities Research Council, which sponsored this work with a £100k grant.
Looking before we leap
The fields of AI and data science have exploded in the last two decades, now accounting for 3% of all peer-reviewed journal publications and 9% of published conference papers across all scientific fields. AI and data science techniques are also driving advances in other academic domains, including history, economics, genomics and biology.
However, traditional research governance and ethical accountability processes have struggled to keep pace with the growth of these fields. Public and private institutions have faced challenges establishing processes to flag, review, and mitigate the unique ethical challenges posed by AI and data science research, which can include the treatment of data curators and labellers, dataset and model documentation standards, consent and privacy risks, the identification of indirect and long-term beneficiaries, and evaluations of potential collaborators, data sources and funders, amongst other concerns.
These failures come with high stakes – as several prominent researchers have highlighted, inadequately reviewed research can unleash a deluge of ethical risks that are carried downstream into subsequent products, services and follow-on research. They also pose a risk to the longevity of the field – if researchers fail to demonstrate due consideration for the ethical implications of their work, it may become a domain that future researchers find undesirable to work in, a challenge that both nuclear power and tobacco research have encountered. Worse still, poor practices today could set the precedent for what is treated as ‘acceptable’ research in future.
Research ethics committees (RECs, also known as institutional review boards, or IRBs, in the USA) are charged with ensuring that researchers comply with ethical norms and legal requirements, but their remit is often narrowly focused on research involving human beings as research subjects and on data protection issues. Beyond research methods, RECs may find themselves poorly equipped to evaluate the wider societal impacts of AI research, such as considerations of the public good and the ethics of incidental findings.
While nascent cross-industry initiatives have sprung up to broaden the discussion of how researchers can navigate ethical risks, and some researchers have begun to unpack the challenge of how to design and launch an AI research review committee, there remain few detailed and robust methodological resources for analysing and resolving AI and data ethics problems.
To address this challenge, the Ada Lovelace Institute, the Institute for Data Science and AI at the University of Exeter, and the Alan Turing Institute co-convened a series of workshops with experts from academia and industry who have experience and expertise establishing and managing institutional ethics review processes.
These workshops explored cutting-edge practical frameworks, data ethics review processes and research governance models that can be implemented to address the unique ethical risks that are emerging in association with data science and AI research. Building from shared experiences, challenges and insights into today’s best practices, the workshops are a contribution to the development of state-of-the-art institutional review procedures for AI research labs and university departments across the UK and EU.
Through this project, we have developed two primary outputs. The first is a report, Looking before we leap, which outlines the challenges RECs are facing and recommendations for how RECs can address these issues, including the types of frameworks, processes and governance practices that could support them in future. The second is six mock AI and data science research proposals that represent hypothetical submissions to a REC. These are for use by students, researchers, members of RECs, funders and other actors in the research ecosystem to further develop their ability to spot and evaluate common ethical issues in AI and data science research.