Press release

UK must strengthen its AI regulation proposals to improve legal protections, empower regulators and address urgent risks of cutting-edge models

The Ada Lovelace Institute today published a new report analysing the UK’s proposals for AI regulation.

18 July 2023


The Ada Lovelace Institute today published a new report analysing the UK’s proposals for AI regulation – the White Paper, as well as the Data Protection and Digital Information (DPDI) Bill, the Foundation Models Taskforce and the AI Safety Summit.

The report identifies three ‘tests’ for the UK’s approach and provides specific recommendations to ensure these can be met, drawing on extensive desk research, workshops with experts from across industry, civil society and academia, and independent legal analysis.

Coverage

The Government’s proposals devolve the regulation of AI to existing regulators, with support from ‘central functions’. However, many contexts in which AI is used – such as recruitment, policing, central government itself and parts of the private sector – are not comprehensively covered by regulators.

Independent legal analysis, commissioned by the Institute and conducted by data rights law firm AWO, has found that in many contexts, the main protections offered by cross-cutting legislation such as the UK GDPR and the Equality Act may often fail to protect people from harm or give them a viable route to redress.

To improve coverage, the Institute recommends considering an ‘AI ombudsman’ to directly support people affected by AI, reviewing existing protections, legislating to introduce better protections where necessary, and rethinking the DPDI Bill due to its implications for AI regulation.

Capability

Regulating AI is resource intensive. Regulators and other actors – such as civil society organisations and third-party AI assurance providers – must be given the right resources and powers, as part of an overall ecosystem of accountability.

To empower regulators, the Institute recommends establishing a new statutory duty for them to have regard to the AI principles, exploring a common set of powers for regulators, dramatically increasing their funding for AI, as well as directly facilitating and funding civil society involvement in AI regulation.

Urgency

The Government envisages a timeline of at least a year before implementation, with more time needed after that to evaluate and iterate. Under ordinary circumstances, that could be considered a reasonable schedule.

However, there are significant harms associated with AI use today, many of which are felt disproportionately by the most marginalised. The pace at which foundation models are being adopted risks scaling and exacerbating these harms.

To address urgent risks, the Institute is calling for robust governance of foundation models, underpinned by legislation, and for a review of how existing legislation can be applied to these models. The Institute also recommends mandatory reporting requirements for foundation model developers, pilot projects to develop better expertise and monitoring in government, and a more diverse range of voices at the AI Safety Summit.

Michael Birtwistle, Associate Director at the Ada Lovelace Institute, said: 

‘The Government rightfully recognises that the UK has a unique opportunity to be a world leader in AI regulation and the Prime Minister should be commended for his global leadership on this issue.

‘However, the UK’s credibility on AI regulation rests on the Government’s ability to deliver a world-leading regulatory regime at home. Efforts towards international coordination are very welcome, but they are not sufficient. The Government must strengthen its domestic proposals for regulation if it wants to be taken seriously on AI and achieve its global ambitions.’

Matt Davies, UK Public Policy Lead at the Ada Lovelace Institute, said:

‘We welcome the Government’s serious engagement with this difficult challenge. The UK’s current proposals have the potential to avoid the drawbacks of a “one size fits all” approach by establishing an adaptable and context-specific regime.

‘However, in their current form they risk failing to adequately protect people and society against the potential harms of AI. We call on the Government to address the current regulatory gaps, empower our regulators, and do more to tackle the urgent risks from cutting-edge foundation models.’

Alex Lawrence-Archer, Solicitor at AWO, said:

‘For ordinary people to be effectively protected from AI harms, we need regulation, strong regulators, rights to redress and realistic avenues for those rights to be enforced. Our legal analysis shows that there are significant gaps which mean that AI harms may not be sufficiently prevented or addressed, even as the technology that threatens to cause them becomes increasingly ubiquitous.’