
Regulating AI in the UK: three tests for the Government’s plans

Will the proposed regulatory framework for artificial intelligence enable benefits and protect people from harm?

Michael Birtwistle , Matt Davies

13 June 2023

Reading time: 8 minutes


It seems as if AI is everywhere you look right now – not only in new and emerging use cases across different business sectors, but also in every conversation about present and future societies.

Coverage of the ‘foundation models’ that power systems like ChatGPT, the potential for job displacement and the need for ‘guardrails’ is focusing public and political interest on how AI is regulated.

Against this noisy backdrop, jurisdictions around the world are publishing regulatory proposals for AI,[1] and the UK is no exception. The UK Government is currently consulting on its AI Regulation White Paper, as well as passing the Data Protection and Digital Information (DPDI) Bill, which will reduce rather than enhance AI-relevant safeguards.

The White Paper is an important milestone on the UK’s journey towards comprehensive regulation of AI, and at Ada we have welcomed the Government’s engagement with this challenge.

Its proposals will shape UK AI governance for years to come, affecting how trustworthy the technology will be, and – when things go wrong – how well people are protected and how meaningfully they can access redress.

It adopts a much more distributed model of regulation than those proposed elsewhere, which creates a more challenging path to achieving these outcomes while promising more proportionate governance that enables companies to innovate.

Ahead of the White Paper consultation closing on 21 June, we explain significant features of the UK Government’s proposals and how we intend to test them against three challenges: coverage, capability and urgency.

The White Paper framework

There is currently no holistic body of law governing the development, deployment or use of AI in the UK. Instead, developers, deployers and users abide by the existing patchwork of rules under the UK regulatory ecosystem. This includes ‘horizontal’ cross-cutting frameworks, such as human rights, equalities and data protection law, and ‘vertical’ domain-specific regulation, such as the regime for medical devices.

Where the EU takes a primarily rules-based approach to AI governance, the UK is proposing a ‘contextual, sector-based regulatory framework’, anchored in institutions and this patchwork of existing regulatory regimes.

The UK approach rests on two main elements: AI principles that existing regulators will be asked to implement, modelled loosely on those published by the OECD, and a set of new ‘central functions’ to support them in doing so.

The principles act as a set of instructions to regulators, describing outcomes they should ensure for AI use within their domains, such as fairness or the ‘appropriate transparency’ of AI systems.

The ‘central functions’ are intended to provide cross-cutting support to regulators by creating a common understanding of AI risks, foresight of future developments, better coordination and other mechanisms for improving regulatory capacity. It’s envisioned that, in the first instance, these will be delivered by Government in partnership with regulators and other AI actors.

In some ways, the UK Government is setting itself a harder regulatory challenge than other international legislators, as it’s often more difficult to achieve policy outcomes on a devolved or distributed basis than with a single accountable institution.

However, the prize, if the approach succeeds, is context-specific regulation, which means that the impacts of AI will be judged as closely as possible to the point of use, making governance more likely to be proportionate.

In preparation for our own submission to the consultation, we’ve developed three lenses through which we’ll be stress-testing the UK Government proposals.

Test 1: Coverage – How well does the UK’s regulatory patchwork address AI?

The UK’s existing mix of regulators may have difficulty overseeing all contexts in which AI might be used, given that the White Paper envisions no new powers for them, no new legal obligations for organisations deploying or using AI, and no new rights for affected individuals and groups.

Many parts of the UK economy are only partially regulated. Equalities and data protection law apply to contexts like recruitment and employment, but in neither case is there a domain-specific regulator that would, for example, be responsible for ensuring that the AI White Paper’s safety principle is enforced.

Many sectors, such as policing and education, are diffusely regulated, with myriad public bodies responsible for parts of the regulatory pie and no clear overall responsibility for enforcing principles.[2]

Additionally, many existing regulators focus on outcomes, meaning – in practice – that they’re only equipped to look at technology at the point of use or commercialisation. AI models (like GPT-4) are often the building blocks behind specific technological products that are available to the public (like Bing) and sit upstream of complex value chains. This means that regulators may struggle to reach the companies or other actors most able to address AI-related harms, creating uncertainty around legal liability for negative outcomes.

Finally, there are questions around the effectiveness of existing and prospective regulatory regimes to provide recourse and redress for AI-related harms. Our research on biometric data already demonstrates the insufficiency of the current legal framework to govern this sensitive type of data appropriately.

The UK’s data protection regime is being further watered down by the DPDI Bill, just as AI is hitting the economic and political mainstream. Among other changes, most elements of the existing accountability framework for personal data use will be required only for ‘high risk processing’, and the approach to automated decision-making will be more permissive – further eroding the coverage for AI provided by underlying regulation.

Test 2: Capability – How well-equipped are UK regulatory institutions to deliver the principles?

The powers and resourcing of UK regulators vary considerably, and so does their capability to implement the AI principles.

In terms of powers, regulators may be obliged to deprioritise or even ignore the AI principles, if they come into conflict with their statutory duties – a challenge acknowledged in the White Paper. Regulators may also need enhanced statutory powers to discharge some of their new responsibilities around AI. For example, to conduct technical audits of an AI system, regulators will need the ability to access, monitor and audit technical infrastructures, code and data, which many may not be empowered to do currently.[3]

In terms of the resources available to regulators, AI is likely to become a central component of the digital economy, and with the advent of ‘foundation models’ the notions of infrastructure and safety offer a useful framing for thinking about AI governance and how to fund it adequately.

If we look at regulators in other domains where safety and public trust are paramount and where underlying technologies form important parts of national infrastructure, their funding is in the region of tens of millions of pounds, if not higher.[4]

These sums form a useful point of comparison for the scale of the challenge of governing a general-purpose technology like AI. Regardless of whether resources are delivered centrally or on a distributed basis, it will be important to ensure that regulators – and policy capacity within government itself – are properly resourced.

Test 3: Urgency – Will the UK approach enable a timely response to urgent risks?

A final question is whether regulation will be implemented in time to address the urgent risks posed by some AI systems.

The UK Government envisions a timeline of at least a year before the first iteration of the new AI framework is implemented, with further time needed to evaluate its effectiveness and address any emerging limitations.

Under ordinary circumstances, that would be considered a reasonable schedule for establishing a long-term framework for governing an economically and societally cross-cutting technology.

But there are urgent risks associated with AI use today, particularly those arising from the capabilities of general-purpose AI systems, such as ‘hallucination’ (confidently asserting untrue statements) or the scaling of misinformation.

These risks should be considered critical because they will flow into a wide range of downstream uses, and because of the pace at which these systems are being integrated into the economy and our everyday lives: from search engines to productivity software, and through broadly available application programming interfaces (APIs) on which businesses can build their own services.

Unless Government intervention is timely, this unchecked distribution will only make the regulatory challenge harder to solve.

In summary, these three tests will be at the heart of Ada’s response to the White Paper. We’ll be focusing on interim and longer-term solutions needed to meet them, as without these the UK’s approach might fail to prevent the worst impacts of AI on people, undermining its international ambitions. We encourage everyone with an interest in UK AI governance to respond to the consultation, and to reach out to us at hello@adalovelaceinstitute.org to discuss your views.

Footnotes

[1] E.g. the EU AI Act and the White House’s ‘Blueprint for an AI Bill of Rights’.

[2] For example, Ofqual and Ofsted address exams and education provision respectively and the Department for Education oversees policy, but there is no clear responsibility for the governance of education technologies (‘Edtech’) in classrooms.

[3] ‘Regulate to innovate’, Ada Lovelace Institute, November 2021, https://www.adalovelaceinstitute.org/report/regulate-innovate/.

[4] The Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff.
