In 2022, alongside the compounding pressures of economic stress, war and political turbulence, and underscored by the climate crisis, the use and development of data and AI continued to proliferate. Ensuring that the opportunities of data and AI, and the technologies, systems and services that make use of them, truly and holistically benefit people and society is an increasingly vital consideration for policymakers, industry and researchers.
In our fourth year building the Ada Lovelace Institute to meet these challenges, we’ve worked with collaborators and partners to identify and respond to existing and emerging issues. From researching ethics and consequences for the people affected by technologies, to informing regulatory and industry practice, we have continued to aim to be robust, relevant, responsive and impactful.
Working with and connecting collaborators and partners
Through our research projects and processes, we have had the privilege of working with a host of varied and valued collaborators and partners – gathering and stretching the established bounds of what ‘expertise’ is and how it is used. We’ve worked with organisations like the ICO, who partnered on a new project with members of our Citizens’ Biometrics Council; the NHS, Health Foundation and a group of peer researchers affiliated with the APLE Collective on COVID-19 and lived experiences of healthcare technologies; and the Nuffield Council on Bioethics on genomics and AI futures.
We also connected researchers, thinkers and subject-matter experts together with convenings, public events and blog posts – interrogating established ideas and sharing new perspectives through our interdisciplinary approach. And we listened to the voices of the public – those living with the effects of technology – and represented their views to policymakers and technology developers.
Foundational and responsive research
Working with partners, we’ve had the opportunity to bring to fruition two foundational research projects launched in the early days of Ada: ‘Rethinking data’ and our work on biometric technologies.
Rethinking data interrogates many of the fundamental and pertinent questions that inspired Ada’s establishment: how do we move beyond tired data metaphors and extractive data practices to rethink the role of data in our societies, and articulate the social value of data? How can we develop more ambitious visions for how data should be used and governed?
Overseen by the Rethinking data expert working group, co-chaired by Professor Diane Coyle and Paul Nemitz, we used expert convenings, futures and foresight methodologies to develop a blueprint in which we propose four cross-cutting interventions that can re-centre people and society and contest the increasingly entrenched systems of digital power.
Throughout 2023 we’ll be running an event series to delve further into the opportunities evidenced in the research and identify avenues for our recommendations to inform future policy initiatives, including the reform of data protection rules in the UK.
The Ryder Review, an independent review of the governance of biometric data by Matthew Ryder QC, was published alongside Countermeasures, a report that links the Ryder Review’s findings to Ada’s policy recommendations for the UK Government.
This work on biometrics has made an important contribution to a national debate on the regulation of new AI technologies, and builds on previous research on biometrics undertaken by the Institute, including Beyond Face Value, the first national study of public attitudes to facial recognition technology, and the report of the Citizens’ Biometrics Council, a public deliberation initiative which engaged 50 members of the public in more than 300 hours of discussion and debate on the question, ‘What is and isn’t okay when it comes to biometrics?’
The rich debate around AI regulation has given rise to the first comprehensive legislation on AI, the draft EU AI Act, which covers biometric technologies and imposes strict limitations on their use, in line with some of our recommendations.
We’ve also published research on public sector recommendation systems, research ethics, and UK public attitudes to regulating data and data-driven technologies, among much else. And we’ve embarked on two new data and AI projects led by Visiting Senior Researchers on climate justice, and gender and AI.
Expanding our policy impact in the EU and UK
This year we continued to take our research and apply it directly to the policy world. Our evaluation of the landscape led us to expand our work in the EU, to connect and inform policy decision-making.
Since 2021, the Ada Lovelace Institute has operated an office in Brussels and has become a respected, expert voice in discourse and negotiations around the EU AI Act and accompanying policy, including the AI Liability Directive. With the vital input of Newcastle University Professor Lilian Edwards, who has worked with Ada over the past year, we contributed targeted research into the AI Act process.
Some of our recommendations have been directly reflected in the latest text of the Act as promulgated by the European Parliament, including new provisions for updating the list of high-risk AI applications, and inserting a requirement for the Commission to consult with groups affected by AI technologies.
In parallel to our work in the EU, we continued to share existing research with, and develop responsive research for, UK policymakers engaged in ongoing policy and regulatory developments. We’ve engaged consistently with the reform of the data protection framework over the past year, and in March 2022 produced Who cares what the public think?, an evidence review of studies of UK public attitudes to regulating data and data-driven technologies.
We’ve engaged at length with DCMS and the Office for AI around the forthcoming whitepaper on AI regulation, made numerous written submissions to Government consultations and inquiries, and given evidence to Parliamentary Select Committees.
Developing our practices
While working hard to deliver against our strategy and research agenda, we have spent considerable time reflecting together on our ways of working, research development process, peer review practices and communications strategies.
We have also been engaged in a year-long evaluation of the JUST AI racial justice fellowship programme that Ada established in 2020 with support from the AHRC. As the fellowships come to a close, we are working on externalising the lessons learned from the programme. We’re pleased to see the experiences of JUST AI and the fellowship programme translate into the new AHRC programme, Enabling a Responsible AI Ecosystem, led by Professors Shannon Vallor and Ewa Luger, for which Ada is a collaborating partner.
The Ada Lovelace Institute in 2023
What can you expect from the Ada Lovelace Institute in 2023?
As a research institute that wants to produce robust evidence that leads to change, we’re developing a major new initiative to refine and strengthen our research expertise. This means interrogating and expanding research methodologies, investing in research design and strengthening the structures that ensure the robustness, reflexiveness and credibility of our research.
Our research agenda will see us publishing reports on tech sector public participation initiatives and AI and genomics. We’ll produce a series of projects looking back at the legacy of COVID-19 technologies, and publish a major study of public attitudes to AI together with the Alan Turing Institute. We’ll be following developments on AI regulation in the UK, EU and beyond closely, and seeking to bring Ada’s unique interdisciplinary expertise and focus on people and society to those debates.
Alongside all of this, we’ll also be recognising that the return to ‘normality’ of 2022 looks very different to different groups, and that there is – more than ever – an imperative to ensure that data and AI support just and equitable social policies, and improvements to public services and infrastructure that particularly address the needs of the most disadvantaged.
Underpinning all our work we’ll be considering how Ada’s agenda can better address the social, ethical and technical issues of data and AI, to maximise benefits and minimise their negative impacts on people and society.
We’ll preserve the arm’s length distance from government and private-sector funding that underpins our independence and enables us to work in the areas of least resource, greatest salience and need. This wouldn’t be possible without our funders, and we are grateful to the Nuffield Foundation, the EU AI Fund, the Health Foundation, Generation Foundation, Open Society Foundations and the AHRC for their support.
As always, we’ll continue to seek out the support and input from you, our partners, collaborators and stakeholders, on how we can best achieve our mission.
Subscribe to our newsletter to stay in touch and up to date during what will be another important year for the Ada Lovelace Institute.