Press release

Legal analysis reveals urgent need for laws and regulations that guard against harms of Advanced AI Assistants

New paper from the Ada Lovelace Institute and AWO finds that existing legal rules offer no effective protection against a wide range of risks

1 December 2025


A new paper from the Ada Lovelace Institute, in partnership with law firm AWO, has found that the law in England and Wales provides no meaningful protection against the harms from Advanced AI Assistants (‘Assistants’).

Assistants are AI systems that act as intermediaries for their users. They are typically powered by foundation models like LLMs; are able to engage in fluid, natural-language conversation; show high degrees of user personalisation; and are designed to adopt particular human-like roles in relation to their users. Examples include AI therapists and legal advisors, agents, and companions. As they have become more widely used by the general public, Assistants have made headlines for encouraging suicide, for enabling delusional thought patterns and for their ability to sway people’s opinions.

The analysis from the Ada Lovelace Institute and AWO reveals that there is no legislation that specifically covers Assistants. Because they pose risks of nuanced, diffuse and social harms – such as harms to emotional wellbeing and the influencing of opinion – they are not well covered by existing laws (e.g. consumer protection law) or sectoral rules (e.g. financial services regulation or the regulation of legal professionals).

The paper also finds that Assistants present novel issues for law and regulation that break the rationale behind existing legal frameworks, such as the legal status of Assistant ‘decisions’ or the ability of Assistants to ‘market’ themselves in conversations with users.

These gaps in legal protection mean that people and businesses will find it difficult, if not impossible, to seek redress for harm, due to a lack of transparency and a lack of legal standards.

Julia Smakman, Senior Researcher at the Ada Lovelace Institute, said:

‘At a time when we are hearing more and more stories about the serious – and sometimes deadly – risks of Advanced AI Assistants, it is deeply concerning to see the lack of legal protections in the UK for people and businesses harmed by these technologies. Assistants have the potential to disempower consumers, lead to widespread deskilling, undermine people’s mental health, and degrade the quality of public and professional services. We need urgent action from policymakers and regulators to address these risks by filling in these significant legal gaps.’

Alex Lawrence-Archer, Solicitor at AWO, said:

‘This work shows that the law in England and Wales provides uneven and often inadequate protection against harms which may be caused by Advanced AI Assistants – including in areas which the public might assume are regulated. We found recurring barriers: unclear regulatory thresholds, gaps in duties, and real difficulty in evidencing AI failures. Effective protection depends on clear legal obligations and rights which can be practically enforced. Those are not available for now, and it is not clear whether the law can keep pace with the rapid adoption of these tools.’
