Press release

Nearly 9 in 10 people in the UK support independent regulation of AI

Our polling reveals that the UK public prioritise AI safety and positive social impacts over economic gains, speed of innovation and competition

4 December 2025

Reading time: 2 minutes


As momentum behind meaningful legislation on AI in the UK has appeared to stall, new research from the Ada Lovelace Institute shows that this delay – and the government’s broader shift away from regulation – is increasingly out of step with public attitudes.

The nationally representative polling examines not only whether the UK public support regulation of AI, but also how they expect it to function, and where gaps between public expectations and policy ambition may lie. Key findings include:

  • The public support independent regulation. The UK public do not trust private companies to self-regulate. There is strong public support (89%) for an independent regulator for AI, equipped with enforcement powers.
  • The public prioritise fairness, positive social impacts and safety. AI is firmly embedded in public consciousness and 91% of the public feel it is important that AI systems are developed and used in ways that treat people fairly. They want this to be prioritised over economic gains, speed of innovation and international competition when presented with trade-offs.
  • The public feel disenfranchised and excluded from AI decision-making, and mistrust key institutions. Many people feel excluded from government decision-making. 84% fear that, when regulating AI, the government will prioritise its partnerships with large technology companies over the public interest.
  • The public expect ongoing monitoring and clear lines of accountability. People support mechanisms such as independent standards, transparency reporting and top-down accountability to ensure effective monitoring of AI systems, both before and after they are deployed.

Nuala Polo, UK Public Policy Lead at the Ada Lovelace Institute, said:

“Our research is clear: there is a major misalignment between what the UK public want and what the government is offering in terms of AI regulation. The government is betting big on AI, but success requires public trust. When people do not trust that government policy will protect them, they are less likely to adopt new technologies, and more likely to lose confidence in public institutions and services, including the government itself.”

Michael Birtwistle, Associate Director at the Ada Lovelace Institute, said:

“Examples of the unmanaged risks – and sometimes fatal harms – of AI systems are increasingly making the headlines. Trust is built with meaningful incentives to manage harm. We see these incentives in food, aviation and medicines – consequential technologies like AI should not be treated any differently. Continued inaction on AI harms will come with serious costs to the potential benefits of adoption.”
