Your editorial “Making decisions that computers cannot” (May 22) identifies a central issue surrounding the relationship between the artificial intelligence sector and the ethical and social frameworks in which it operates. You ask who will be left to speak for the common good. The Nuffield Foundation is funding the Ada Lovelace Institute, in partnership with the Alan Turing Institute, the Royal Society, the British Academy, the Royal Statistical Society and others. It will bring together researchers, those in public policy, civil society organisations and, crucially, those in the tech sector who recognise the urgency of these challenges.
Their shared deliberation will result in independent academic research.
We need more informed public consideration, to enable exploration of the risks and trade-offs posed by the use of data and AI, as citizens will rightly demand a greater voice in the design and deployment of such technologies. We need better evidence about the impacts of technology; much of the current research is led either by investigative journalism, which is inevitably drawn to the worst cases, or by industry, which profiles the best outcomes. We need to build a more considered research base that explores who is most at risk of harm and the cumulative distributional social impacts. Above all, we need to embed ethical thinking in the tech industry, as an inherent part of its culture.
There are many in the sector who recognise the urgent need to establish common norms to translate ethical principles into practical decisions, as well as to explore the question of whether the underlying logic of any innovation reflects the values we want in a future society.
Chief Executive, Nuffield Foundation
This letter was originally published in the Financial Times on 29 May 2018.