A new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge sets out a broad roadmap for work on the ethical and societal implications of technologies driven by algorithms, data and AI (ADA).
The roadmap identifies the research questions that need to be prioritised in order to inform and improve the standards, regulations and systems of oversight governing ADA-based technologies. Without this, the report’s authors conclude, the recent proliferation of codes and principles for the ethical use of ADA-based technologies will have limited effect.
We welcome this new report, which will be helpful in shaping the future work of the Ada Lovelace Institute.
The report is based on an examination of current research and policy literature on the ethical and societal implications of ADA-based technologies. These technologies bear on practically every question of public policy, but the report reveals a lack of consensus on the core ethical issues relevant to the use of data and AI, and on how those issues apply to specific situations. While there is some agreement on the values that should underpin an ethical approach, there is insufficient consideration of the tensions between those values. There is also a lack of evidence on the current uses and impact of ADA-based technologies, on their future capabilities, and on the perspectives of different groups of people.
To address these gaps, the roadmap sets out detailed research questions organised around three main tasks:
- Uncovering and resolving the ambiguity inherent in commonly used terms, such as privacy, bias, and explainability. This will require identifying how these terms are used in different disciplines, sectors, publics and cultures, and building consensus in ways that are culturally and ethically sensitive. Where consensus cannot be reached, there is a need to develop terminology to prevent different groups from talking past one another.
- Identifying and resolving tensions between the ways technology may both threaten and support different values. The roadmap identifies four central tensions:
  - Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment.
  - Reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship.
  - Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.
  - Using automation to make people’s lives more convenient versus promoting self-actualisation and dignity.
- Building a more rigorous evidence base for discussion of ethical and societal issues. This should include research on the impacts of ADA-based technologies on different groups, particularly those that may be disadvantaged or underrepresented. It should also include public engagement, so that the perspectives of different groups of people are understood.
Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence at Cambridge, said: “In recent years, there has been a lot of attention on how to manage these powerful new technologies. Much of it has centred on agreeing ethics ‘principles’ like fairness and transparency. Of course, it’s great that corporations, governments and others are talking about this, but principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it – because they are vague and come into conflict in practice. They also risk distracting from developing measures with real bite, like regulation. This report points the way to the hard thinking that we as a society must do in order to really harness these technologies for good and avoid the kind of scandals we saw so much of last year.”
Tim Gardam, Chief Executive of the Nuffield Foundation, said: “The report reveals just how far there is to go to address the question of how society should equitably distribute the transformative power and benefits of data and AI while mitigating harm. The questions identified will be valuable in stimulating new ideas for the Nuffield Foundation’s digital society research funding, and for informing the work of the Ada Lovelace Institute, a new independent research and deliberative body that we have established to ensure data and AI work for people and society.”