At least four grants of £10,000 each will be awarded to researchers, scholars and creative practitioners to execute research or creative projects designed to surface alternative, critical and diverse perspectives on data and AI in relation to racial justice.
The monopolisation of AI is not just – or even primarily – a data issue. Monopolisation is driven as much by the barriers to entry posed by fixed capital, and by the ‘virtuous cycles’ that compute and labour are generating for AI providers.
How data, bodies and experience entwine – and how we might make a world where instead of predicting our individual risk we understand the complexities of living together, in situations of inequality and injustice.
In this long read, we highlight three issues that arise out of the European Commission’s data strategy, which we believe require further thought as the EU considers how to design a positive vision of data governance that works for people and societies.
In the current crisis, we are hearing a lot about digital tools to track, monitor and share data about people. In the workplace, technology has the potential to help us respond to the pandemic – but it also raises concerns about data, privacy and power.
by Andrew Pakes, Research Director, Prospect Union
Governments across the world have made technologies – such as digital contact tracing, symptom tracking and the creation of data stores – central to their strategies for responding to COVID-19.
How do the public expect the NHS and third-party organisations to steward their data? What are the conditions that enable effective data stewardship – and what are the conditions that don’t? (Or, as our recently convened Citizens’ Biometrics Council put it: ‘What is or isn’t okay?’)
This year’s International Women’s Day theme, ‘Each for Equal’, has particular resonance for black women, who experience discrimination for being female, for being black and, more specifically, for the unique identity of being black women.
New research led by Understanding Patient Data in association with the Ada Lovelace Institute shows the public will only support third parties using NHS-held health data when there is benefit to patients across the UK, rather than short-term financial gain for the NHS. Benefits to patients include things like improving disease detection or developing new medicines and treatments.
The Ada Lovelace Institute marked the beginning of an independent review it has commissioned on the governance of biometric data by hosting a debate on UK biometrics regulation. The Facial Recognition and Biometrics – Technology and Ethics event, held in London on 29 January 2020, was jointly organised with the Foundation for Science and Technology and chaired by Baroness Beeban Kidron OBE.
The Ada Lovelace Institute and the Arts and Humanities Research Council (AHRC) have today launched JUST AI, a network of researchers and practitioners, led by Dr Alison Powell from LSE, that will establish a multidisciplinary research base around ‘just AI’ – AI that is ethical, works for the common good and is effectively governed and regulated.
The Ada Lovelace Institute has commissioned Matthew Ryder QC to lead an independent review of the governance of biometric data. The review will examine the existing regulatory framework and identify options for reform that will protect people from misuse of their biometric data, such as facial characteristics, fingerprints, iris prints and DNA.
Today the Ada Lovelace Institute launches Rethinking Data, a programme of research and public engagement that will understand and evolve new data narratives, and learn from international data access and data partnerships – to inform best practice and create regulations that strengthen data rights.
The use of NHS health data to develop new technologies raises important questions for people and society – who ‘owns’ the data, where are the benefits (financial gains and new knowledge that can lead to better healthcare) recouped, and how are those benefits distributed?
What is the next digital revolution and how can the UK further embrace it to remain a world-leading digital economy? How can industry and government ensure citizens remain central to emerging tech and the changing world?
Carly Kind introduces the Ada Lovelace Institute’s emerging research on understanding public attitudes to facial recognition technologies, proposing a way forward for regulators, policymakers and industry in the UK.
A new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge sets out a broad roadmap for work on the ethical and societal implications of technologies driven by algorithms, data and AI (ADA).
The importance of public legitimacy – which refers to the broad base of public support that allows companies, designers, public servants and others to design and develop AI to deliver beneficial outcomes – was illustrated by a series of public, highly controversial events that took place in 2018.
Picture a system that makes decisions that have a huge impact on a person’s prospects and life course; or even that makes decisions that are literally life and death. Imagine that system is hugely complex, and also opaque: it is very hard to see how it comes to the conclusions it does. A system that is discriminatory by its nature: it sorts people into winners and losers; but the criteria by which it does so are not clear.
In words that are at one and the same time prophetic, optimistic and haunting, Ada Lovelace wrote: “A new, a vast, and a powerful language is developed for the future use of analysis, in which to yield its truths so that these may become of more speedy and […]
The Nuffield Foundation has appointed the first board members to lead the strategic development of the Ada Lovelace Institute, an independent research and deliberative body with a mission to ensure data and Artificial Intelligence (AI) work for people and society.
The Ada Lovelace Institute is an independent research and deliberative body with a mission to ensure data and AI work for people and society. Our new prospectus sets out how the Ada Lovelace Institute will promote informed public understanding of the impact of AI and data-driven technologies on different groups of people. It will guide […]
Over the past few years debates about data have frequently made headline news. To help us to better understand data, including its uses and ethical implications, various analogies have been used. Although analogies can help us get a better grasp of this complex issue, we should be wary of the limitations of these comparisons.
There is a growing expectation that technologies and algorithms should align with, and reflect, commonly held public and societal values. But to make this happen, society needs to be kept ‘in the loop’. Do we need an implicit social contract between those developing and designing the tech and those who may be affected by it?
Your editorial “Making decisions that computers cannot” (May 22) identifies a central issue surrounding the relationship between the artificial intelligence sector and the ethical and social frameworks in which it operates. You ask who will be left to speak for the common good. The Nuffield Foundation is funding the Ada Lovelace Institute, in partnership […]
Last week we surely reached peak hype on tech ethics with this headline: The Sun was responding to the House of Lords’ AI report, the latest of a growing number of interventions from government, industry and civil society seeking to grapple with the ethical and social issues technology is posing. Public debate on tech […]
I would like in a moment to set out the proposal for an independent Convention on Data Ethics and Artificial Intelligence. This has been developed by the Nuffield Foundation over recent months in partnership with The Alan Turing Institute, the Royal Statistical Society, the Nuffield Council on Bioethics, the Wellcome Trust, the Royal Society, the British […]