
Building blocks: four recommendations to strengthen the foundations for AI in the public sector

How can government ensure trustworthy use of AI and deliver public value?

Elliot Jones, Imogen Parker

19 May 2025

Reading time: 11 minutes

It has been a busy start to the year for those working on public sector AI.

Prime Minister Keir Starmer set out his ambitions on the fundamental reform of the British state, describing AI as ‘a golden opportunity… an opportunity we are determined to seize’. The underlying mantra of this approach is that ‘no person’s substantive time should be spent on a task where digital or AI can do it better, quicker and to the same high quality and standard.’

This builds on the now-infamous desire to ‘mainline’ AI into the nation’s veins. As a colleague pointed out, it doesn’t take much imagination to extend the metaphor and wonder whether what is being mainlined will be good for us.

Rapidly deploying AI throughout the public sector has become one of the government’s top priorities, seemingly driven by (at least) three entangled factors.

First, there is a desire for AI to deliver a more ‘satisfying experience’. This is part of a long-standing drive for digitisation, exemplified in the Blueprint for Modern Digital Government. The Blueprint is well grounded in the difficulties people face when interacting with existing services, the challenges of legacy systems, and the aspiration to make the state’s interfaces more functional, connected and better designed.

A second factor is the desire to use AI to make public services more productive and efficient in order to meet budgetary pressures. The government hopes that new tools will yield savings through simplification and automation – such as tools used to transcribe internal and external meetings or automate consultation analysis – with the Department for Science, Innovation and Technology (DSIT) claiming there may be £36 billion in unrealised government savings. These aspirations extend to improving the productivity and quality of public-facing services through, for example, personalised healthcare and education technology.

Thirdly, the government wants to signal the UK’s willingness to embrace AI as a way to attract investment. Tech companies are a key part of its economic strategy, with Secretary of State for DSIT Peter Kyle characterising the present time as a ‘narrow window [of opportunity] to secure a stake in the future of AI’. This motivation intersects with decisions to step back from regulation, encourage regulators to make pro-growth choices, open up data assets through the National Data Library, and use the prize of government contracts to attract investment.

Against this backdrop, the moment is ripe to review our research from the last six years and put to use what we know about delivering public value through deploying data and AI in the public sector.

Lessons from six years of working on public sector AI

To that end, we published a policy briefing, Learn Fast and Build Things, which synthesises key points from over 30 reports examining the use of data and AI in healthcare, education, local government and policy, and cross-cutting work on transparency, accountability, biometrics, procurement and foundation models.

Our synthesis is not comprehensive but includes crucial findings which have emerged repeatedly throughout our work. These are grouped into four lessons for the successful deployment of AI.

Lesson 1: Contextualise AI

  • Lack of clear terminology about AI is inhibiting learning and effective use.
  • AI is only as good as the data underpinning it.
  • AI systems are not deployed in a vacuum – context is important.

Lesson 2: Learn what works

  • The public sector does not have a comprehensive view of where AI is being deployed in government and public services.
  • There is not enough evidence on the effectiveness and efficiency of AI tools.

Lesson 3: Deliver on public expectations and public sector values

  • Successful use of AI requires public licence.
  • Public procurement of AI is not fit for purpose.
  • Gaps in AI governance undermine the sector’s ability to ensure tools are safe, effective and fair.

Lesson 4: Think beyond the technology

  • The adoption of AI will have wider societal consequences.
  • See AI not just as an opportunity to automate the public sector, but to reimagine it.

Some of these findings are well-known, but not well addressed, and are worth reiterating. These lessons may also be useful beyond current policy questions, and indeed beyond the UK context.

In this blog, we consider how to act upon them. If our synthesis landed on the desk of those leading digital transformation within government – alongside an extensive to-do list and a myriad of competing pressures – where should they start?

We propose four targeted recommendations to strengthen the foundations for AI in the public sector, all of which could begin in 2025:

  1. Establish a What Works Centre for AI in Public Services
  2. Revisit, revise and strengthen the Algorithmic Transparency Recording Standard
  3. Create a taskforce for local government AI procurement to support practice, share expertise and scale negotiating power
  4. Prioritise, fund and leverage public attitudes research on AI and data

Establish a What Works Centre for AI in Public Services

The lack of evidence on AI’s efficacy, costs, and impacts on services and people is a major obstacle to its successful deployment across the public sector, and to ensuring it is fair, safe and delivers value for money.

The Public Accounts Committee (PAC) expressed concern that ‘there is no systematic mechanism for bringing together and disseminating the learning from all the pilot activity across government’. The PAC recommended that DSIT ‘set up a mechanism for systematically gathering and disseminating intelligence on pilots and their evaluation.’ A What Works Centre (WWC), based on the model of those already established in other domains of UK policy, would be an effective mechanism to perform this function.

The cross-cutting nature of AI means that it will likely impact policy areas covered by existing WWCs, e.g. education achievement under the Education Endowment Foundation or crime reduction under the College of Policing. However, none of the existing centres have the scope or expertise to oversee developing evidence on the performance and safety of AI tools, or to ensure that such evidence helps address the issues of public sector professionals, like social workers or planning officers deciding if a new tool is sufficiently reliable to support their work.

Inquiries by this new WWC could start from a public sector ‘problem’, rather than a technology, and compare data-driven approaches with existing alternatives or other novel interventions.

It could host a centralised repository of AI use cases in public services to document the lessons learned and share them back across government.

A WWC focusing on AI in the public sector could boost a growing landscape of institutions and policy tools. It could complement the Evaluation Task Force’s new annex to the Magenta book on evaluating AI interventions, and contribute to the ‘learn’ aspect of the government’s ‘test and learn’ approach to improving public services. It could interface with the independent Responsible AI Advisory Panel and bolster the work of the digital centre of government, supporting practitioners with access to best practice expertise to shape standards.

The new WWC should also contribute to the Government Digital Service (GDS)’s wider monitoring and evaluation strategy.

Revisit, revise and strengthen the Algorithmic Transparency Recording Standard

The digital centre of government has committed to empowering public servants to work in the open to improve services, build trust and enhance transparency by building on the Algorithmic Transparency Recording Standard (ATRS).

Now that the ATRS is being more extensively populated, the government should review its overall effectiveness and revisit its Mandatory Scope and Exemptions Policy.

As the ATRS is used by different stakeholders, including members of the public, civil servants, journalists and NGOs, the review should examine whether it meets their different expectations and promotes confidence in how the algorithms in use shape services.

The same review could establish whether and how to reflect in the ATRS the uses of novel AI technologies, from general purpose tools like ChatGPT to cross-departmental products, or ‘slipstream’ AI tools like Microsoft Copilot, which are being automatically integrated into software used across government.

A careful analysis of the ATRS may also contribute to the existing taxonomical classification of AI products produced by the UK Incubator for Artificial Intelligence. Those reviewing the ATRS could look for clusters of algorithms, with similar technical features, purposes, and contexts of use, and group them together. This will help address the lack of shared definitions for AI as well as establish a vocabulary that captures both technical and social aspects of deployment projects.

Create a taskforce for local government AI procurement

As we’ve outlined before, getting AI procurement right is vital for ensuring that it works effectively and in the public interest. This is especially urgent in local government, where the diversity of the sector, a fragmented purchasing landscape and constrained resources, as well as the upcoming reorganisation of many English local authorities, add to the challenge. Local authorities make important decisions about communities but often lack the support they need.

Our research, with the input of a roundtable of experts and feedback from regulators, industry and central government, has confirmed the ongoing need for a local-central government partnership. The roundtable, including regional and local government experts, academics and civil society representatives, also identified the desired initial outcomes for such a partnership, including:

  • Aligning the goals of central and local government, factoring in the contextual needs of local authorities and making room to address them directly.
  • Ensuring technology deployment programmes gain and are worthy of the trust of the communities they affect, with local government confidently holding suppliers accountable.
  • Enabling councils to share ideas among themselves, and work on procurement together.
  • Supporting the application of clear standards across the procurement process.
  • Establishing a decision-making model that is attentive to market concentration and its effects on AI procurement (e.g. vendor lock-in dynamics).

Prioritise, fund and leverage public attitudes research on AI and data

Building trustworthy AI that the public regards as legitimate is an objective in itself, but consulting the public is also a way to embrace technology with a shared sense of confidence and to avoid hampering otherwise helpful initiatives through a loss of public trust.

GDS should continue to fund and resource annual waves of the existing Tracker Survey on public attitudes to data and AI, and reassign responsibility for it now that its original owner, the Responsible Technology Adoption Unit, has been dissolved.

Future surveys should include questions about specific public sector use cases, mapping onto existing or proposed AI interventions, to develop a more granular understanding of public acceptability.

Alongside this, GDS could encourage and advocate for in-depth participatory or deliberative research with people and communities who will be affected by uses of high-profile, high-risk or highly sensitive AI applications.

Finally, the government should prioritise engaging with the public on the development of the National Data Library and other initiatives which repurpose data from its original intended use or let new actors, public or private, use it. These projects and new purposes will have to gain legitimacy and require careful consideration with input from the public, especially when they enable large-scale data monetisation.

Conclusion: explore what public interest AI means in public services

At present, despite the enthusiasm, the government seems to lack a clear vision for how and where technology should shape public services.

Secretary of State Peter Kyle has talked about the need for a more ‘satisfying experience’ for users, ensuring that people can interact with public services through similar channels to those found in private services like online banking.

Productivity and efficiency are often the priority focus of public sector AI evaluations. But these are, of course, means rather than ends for public service redesign.

And such evaluations leave the key question unanswered: how can the potential of AI be used to deliver public value?

Interesting pilots focusing on local issues are already taking place within the government’s own Test, Learn, Grow programme. Beginning with questions of public service reform, they catalyse public participation and insights from diverse perspectives to tackle specific problems. While these pilots may include the use of technology, they focus on ‘outcomes, not technology’.

This is the type of work we have underway at Ada. Starting with more relational public services, like education and social care, our programme of research will work with public service users, as well as different actors across specific services, to build a fuller picture of how the system might be improved or affected with or without AI.

Centring the users of a service and those who deliver it – their needs, expectations and priorities – should clarify where technology should be incorporated, and where it should be curtailed. It can help identify red lines, preferences and conditions for how people want technology to mediate their relationship with the state or the public – where the use of a generic tool to automate transcription, for example, might elicit different levels of comfort in a social-work meeting, a maternity appointment, reporting a crime, speaking to an MP, or in a meeting with a teacher.

Starting from the user communities, rather than the tool, will help make value distinctions about the role of technology in different scenarios – whether it should be designed to optimise for personalisation, precision or productivity.

Setting these distinctions will also highlight where competing priorities — of central government, the exchequer, procurement leads, frontline professionals, and members of the public — need to be acknowledged and balanced.

This research may not give us one perfect account of a ‘positive vision’ for the adoption of AI, but it will help build ‘case law’ around its use, identifying unexpected priorities and testing assumptions about where technology should be deployed. It will also offer a provocation, to those arguing for rapid AI adoption across public services, to articulate the vision they are aiming for.
