What is the next digital revolution and how can the UK further embrace it to remain a world-leading digital economy? How can industry and government ensure citizens remain central to emerging tech and the changing world?
Carly Kind introduces the Ada Lovelace Institute’s emerging research on understanding public attitudes to facial recognition technologies, proposing a way forward for regulators, policymakers and industry in the UK.
A new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge sets out a broad roadmap for work on the ethical and societal implications of technologies driven by algorithms, data and AI (ADA).
The importance of public legitimacy – the broad base of public support that allows companies, designers, public servants and others to design and develop AI to deliver beneficial outcomes – was illustrated by a series of highly publicised and controversial events in 2018.
Picture a system that makes decisions with a huge impact on a person’s prospects and life course – or even decisions that are literally a matter of life and death. Imagine that system is hugely complex and opaque: it is very hard to see how it reaches its conclusions. Imagine, too, that it is discriminatory by nature: it sorts people into winners and losers, but the criteria by which it does so are unclear.
The Nuffield Foundation has appointed the first board members to lead the strategic development of the Ada Lovelace Institute, an independent research and deliberative body with a mission to ensure data and Artificial Intelligence (AI) work for people and society.
Over the past few years, debates about data have frequently made headline news. To help us better understand data, including its uses and ethical implications, various analogies have been used. Although analogies can help us grasp this complex issue, we should be wary of their limitations.
There is a growing expectation that technologies and algorithms should align with, and reflect, commonly held public and societal values. But to make this happen, society needs to be kept ‘in the loop’. Do we need an explicit social contract between those who develop and design the technology and those who may be affected by it?