Project: The future of regulation

Emerging processes for frontier AI safety

The UK Government has published a series of voluntary safety practices for companies developing frontier AI models

Project lead
Andrew Strait

On 27 October 2023, the UK Government released a series of voluntary safety practices for companies developing frontier AI models. This paper builds on similar efforts from the White House, the G7, and the Partnership on AI’s Safety Protocols (which the Ada Lovelace Institute also contributed to).

‘Emerging processes for frontier AI safety’ lists a series of practices – from red-teaming and evaluations before launch, to responsible data collection and auditing – that frontier AI model developers should use during the design, development and deployment of these systems.

The paper states that it provides a ‘snapshot of promising ideas and emerging processes and associated practices in AI safety today. It should not be read as government policy that must be enacted, but is intended as a point of reference to inform the development of frontier AI organisations’ safety policies and as a companion for readers of these policies.’

What is a frontier AI model?

The Government’s paper defines ‘frontier AI’ as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.’ As our explainer notes, the term ‘frontier model’ was popularised by major technology companies to refer to cutting-edge AI models, for example those that may have newer or better capabilities than other foundation models. As new models are introduced, they may be labelled ‘frontier models’, and as technologies develop, today’s frontier models will no longer be described in those terms.

This makes the term ‘frontier AI’ problematic and contested, as it refers to a moving target with no agreed way of measuring whether a model is ‘frontier’ or not. What constitutes ‘advanced’ is unclear, and while the definition hinges on whether a model has new ‘capabilities’ or can perform new ‘tasks’, what counts as a ‘capability’ or ‘task’ is equally unclear: many capabilities of a model only become apparent in the context in which it is used. The term also excludes all AI systems currently in use, raising the concern that these policies apply only to AI systems that may never exist. This is why the Ada Lovelace Institute prefers the term ‘foundation model’, which is more clearly definable and applies to models in use today.

What is a frontier model provider?

It remains unclear – based on the Government’s description – what constitutes a ‘frontier model provider’, but the paper refers to ‘leading AI organisations’ that operate at the ‘frontier of AI capabilities.’ This would suggest only a small subset of large providers, such as OpenAI, Google, Google DeepMind and others. As the paper states: ‘It is [therefore] intended as a potential menu for a very small number of AI organisations at the cutting edge of AI development. While there may be some processes and practices relevant for different kinds of AI organisations, others – such as responsible capability scaling – are specifically developed for frontier AI and are not designed for lower capability or non-frontier AI systems.’

The Ada Lovelace Institute’s role

The Ada Lovelace Institute signed a Memorandum of Understanding with the UK Government’s Department for Science, Innovation and Technology to review two drafts of the document. We provided feedback on: issues with the industry-proposed risk management process known as ‘responsible capability scaling’; the lack of attention to data collection practices; the definition of key terms like ‘dangerous capabilities’ and ‘frontier AI’; and some key points around evaluations and watermarking.

While we can see some of our comments reflected in the final draft, we remain concerned about whether and how these voluntary practices will be implemented. As we have argued in previous work, we urgently need national regulation of all kinds of AI systems, including frontier AI.

This paper offers some ideas for practices that companies can put in place to reduce risks, but it is no substitute for regulation requiring adoption of practices that ensure AI systems operate safely, legally and ethically.

We therefore see this paper as an initial step in the journey towards regulation. It should be accompanied by a clear roadmap towards formal legal requirements with strong penalties for non-compliance, and followed up by tangible steps to implement and improve on the white paper released earlier this year. Anything less than this will be remembered as a failure to seize the current ‘AI moment’ and deliver a future in which these powerful systems are made to work for people and society.

Over the coming months, the Ada Lovelace Institute will be diving deeper into the topic of foundation model safety with research projects exploring the types of risks and harms associated with these systems; how they can be evaluated; and lessons from other sectors for regulating them.

If you would like to discuss our research in this area, please contact the team at hello@adalovelaceinstitute.org


Photo by Efe Kurnaz on Unsplash
