Impact

We are committed to delivering and measuring impact in our work, to achieve real change in the systems and approaches that govern data and AI.

How we understand impact

Impact is the beneficial change that the Ada Lovelace Institute produces in the world. We take a relational approach to impact, looking at our work as one contribution alongside those of other actors, and we are realistic about attributing impact to our activities within the ecosystem and the complex conditions we seek to change.

We look for indicators and evidence of influence or change in the world, resulting from Ada’s work towards several outcomes:

  • Building evidence through our research
  • Convening diverse voices
  • Shaping policy and practice on AI and data in the UK, EU and internationally
  • Amplifying the voices of people to ensure that public opinions, attitudes and concerns inform debates and decision-making about data and AI

Who we work with

As well as pursuing our own research agenda, we exert much of our influence by working in partnership with other organisations, an effective way to amplify our capacity, broaden our expertise and extend the impact of our work. We have entered into several new partnerships across academia, civil society and policy:

  • Alongside the University of Edinburgh and the BBC, Ada launched Bridging Responsible AI Divides (BRAID), a national research programme funded by the Arts and Humanities Research Council.
  • We contributed to the strategy group and public participation working group of the UKRI-funded Responsible AI UK (RAI UK) programme.
  • Ada joined the Partnership on AI, a non-profit that brings together diverse voices from across the AI community.
  • We partnered with the Digital Good Network and the Liverpool Civic Data Cooperative to launch a new Participatory and Inclusive Data Stewardship project.
  • Ada has embarked on an ambitious programme of collaborative research with the Nuffield Foundation as part of ‘Grown up? Journeys to adulthood’. Within this, Ada is leading on multidisciplinary research to understand the interface between young people’s online and offline lives.

Find out more about who we work with.

Recent achievements

Public and policymaker awareness of AI and its potential impacts on people and society has never been greater, and this has implications for the scope and direction of Ada’s work.

Despite growing enthusiasm for the promise of AI technologies to drive progress, we still lack adequate information about their reliability, efficacy, safety and impacts on people.

There is a growing need to ask the right questions about these technologies: First, do they work? Second, do they work well enough for everyone? And finally, do they work well in context – not just under test conditions, but in the real world, on the street, in the hospital or in the classroom?

Ada has sought to bring calm, caution and evidence to hype cycles.

We have done this by engaging in research and discussions about AI risk, safety and regulation, and by working to understand and amplify what people feel about, and want from, AI.

We have produced timely evidence in response to the needs of policymakers, informing and influencing emerging policy responses in both the UK and the EU.

Putting people at the centre of AI

Ada and the Alan Turing Institute’s nationally representative survey of public attitudes to AI is now in its second wave. This vital evidence will help us to understand people’s views of technologies, from autonomous weapons to cancer-predicting AI tools. It will also enable us to track attitudes over time and see where legitimacy and trust might be changing.

We reconvened the Citizens’ Biometrics Council for their views on the Information Commissioner’s Office’s (ICO) proposed approach to biometric data. The Council’s reflections on the proposals provide detailed recommendations on the practicalities of consent, transparency and accessibility, as well as on purpose, data collection and storage, and opt-out processes. The ICO confirmed that the Council’s perspectives and recommendations had informed its guidance on biometric technologies.

Influencing policy

Ada has pushed against the narrative that AI technologies are too fast-moving and complex to regulate.

The highest-profile forum for AI policy in recent years has been the global AI Safety Summit, attended by representatives from governments, industry, civil society and academia. The first summit was held at Bletchley Park in the UK, and Francine Bennett, Ada’s Interim Director, was one of a handful of civil society representatives to attend. It was followed by two further global meetings of international policymakers, in Seoul and San Francisco.

At the Seoul summit and the San Francisco convening, Ada argued for a renewed focus on context-specific evaluations of AI systems, carried out in collaboration with sectoral regulators, and for new statutory powers to replace the existing voluntary approach.

The UK Conservative government’s white paper on AI regulation, proposing a ‘contextual, sector-based regulatory framework’, cited Ada’s blog post on the value chain of general-purpose AI three times. Our initial response – welcoming the engagement with AI harms but highlighting the absence of legislation – was covered by BBC News, The Times and The Guardian, and we authored an opinion piece for the New Statesman. After a change in government, a new data bill was introduced, on which Ada briefed Parliament.

In Brussels, our EU policy engagement focused on the AI Act, the world’s first comprehensive AI legislation. Over half of our 18 recommendations were implemented in some form in the final text, and several policy outcomes relate to specific recommendations or themes identified in Ada’s research and engagement on the Act. The inclusion of ‘affected persons’ as a legally significant category, for example, is something Ada has been advocating for since 2021.

Other recommendations taken up in the Act include the establishment of a new AI Office to ensure coordinated regulatory oversight, the inclusion of general-purpose AI models so that accountability is distributed more logically along the value chain, and the requirement for public bodies to undertake fundamental rights impact assessments. The final text also reflects our recommendation to ensure diverse expertise in standards development.

Ada’s work did not stop with the passage of the Act, as preparation for implementation quickly got underway. We supported the development of the EU Code of Practice on General Purpose AI models, which will detail the obligations on providers of GPAI models via a co-regulatory approach, joining working groups covering transparency and copyright, risk assessment and mitigation, and corporate governance.

Building evidence

With the societal and democratic impacts of data-driven technologies brought into sharp relief – from the fallout of the Post Office Horizon scandal to concerns about, and the actual use of, AI in elections around the world – we also saw a strong desire for AI to solve longstanding issues in the delivery of public services.

Ada had the rare opportunity to look under the bonnet at the London Borough of Barking & Dagenham, publishing an observational study of the borough’s early use of the OneView data system and predictive analytics tools.

Throughout Ada’s research, it has become clear that getting procurement right is crucial if we want AI in public services to work well for people and society. Our procurement project recommended a National Taskforce for the Procurement of AI in Local Government to address the multiple challenges in this area in a joined-up way, and the proposal received an enthusiastic response from across the local government procurement landscape.

Ada’s work on foundation models (the general-purpose models powering systems like ChatGPT) examined the risks and opportunities of this technology and explored the principles, regulations and practices necessary to deploy these models in the public sector safely, ethically and equitably.

We also conducted and published research on the evaluation of foundation models, which found that current evaluations are not enough to prevent unsafe products from entering the market. This evidence underpinned the case we made at the Seoul summit and the San Francisco convening for context-specific evaluations and new statutory powers.

Our explainer ‘What is a foundation model?’ continues to be widely cited, notably in the Bletchley AI Safety Summit discussion paper. Government also took notice: Ada’s diagram of the foundation model supply chain was featured in the UK Conservative government’s consultation response on AI regulation.