Risky business
An analysis of the current challenges and opportunities for AI liability in the UK
Reading time: 177 minutes

How to read this paper
- If you are a policymaker or regulator working on AI more generally, read the ‘Executive summary’ that provides an overview of why liability for AI in the UK is currently not working as it should. It points to seven challenges and potential routes to addressing them.
- If you are a policymaker or regulator specifically thinking about AI liability, read ‘The current situation’, which describes how contracts redistribute liability risk and how liability risk is perceived by UK companies, and includes illustrative examples. For a deeper understanding of the challenges in non-contractual liability for AI and suggestions for areas of policy development, read the sections that discuss the challenges for AI liability in depth, from ‘Establishing a breach of the duty of care’ to ‘Unfair contractual clauses’.
- If you are a legal professional or (legal) researcher, read the sections that discuss the challenges for AI liability in depth (from ‘Establishing a breach of the duty of care’ to ‘Unfair contractual clauses’) for an analysis of legal challenges that AI poses for non-contractual liability.
- If you are a reader without a strong legal background (or who wants to refresh their understanding of non-contractual liability), refer to the ‘Appendix’. This includes an extensive overview of what non-contractual liability is; the main forms it comes in (fault-based, strict, product liability, vicarious liability); an in-depth exploration of relevant legal elements (causation, damages); and the interplay of non-contractual liability with insurance and with contracts.
The Appendix also includes an ‘Overview of relevant legislation and legislative proposals’ related to AI liability that are ongoing at the time of publication of this paper and worth keeping an eye on.
Executive summary
This paper focuses on civil liability for AI in (mainly) the UK, although many of the core findings will be relevant for other jurisdictions. Liability refers to the legal responsibility that someone has for their actions or omissions. This paper in particular focuses on contractual and non-contractual liability.
Clear liability laws can provide a route to redress for parties harmed by AI systems, create incentives for risk management and prevention, and help clarify legal risks for people and organisations who develop or use AI.
Currently in the UK, pre-existing liability rules are not sufficient to achieve these aims, meaning a fundamental incentive to ensure AI risk is managed effectively and distributed fairly is broken. In the context of wide-scale adoption of general-purpose AI systems across public services and the economy, the impact is that unmanaged legal and financial risk is loaded onto downstream deployers of AI, such as local authorities and small businesses.
These downstream deployers have few effective means of addressing these risks and the resulting harms can affect their products, services and the people who use them. There is also an absence of any meaningful mechanism for deployers to seek redress.
The findings of this paper are based on a legal analysis, an expert roundtable, further expert input and desk research. The paper finds:
- The liability system is too burdensome on people or organisations harmed by AI systems, who will struggle to obtain sufficient evidence and prove fault and causation in AI contexts. This inhibits the incentivising effect of liability law in encouraging risk management at the point that risk arises.
- Contract terms are used to shield large corporate AI actors from liability risk; these actors instead push risk down to smaller actors (people or companies) lower in the value chain.
- Companies are not always aware of their risk exposure from AI use, or have some awareness but deploy AI regardless. Many AI adopters are (inadvertently) taking on large or unclear liability risks, in many instances out of fear of falling behind their competition.
- Pre-existing liability rules are not always well suited to the challenges introduced by AI systems, such as their opacity, unpredictability, autonomous capabilities and their propensity to cause immaterial or systemic harms. This means liability law will fail to encourage the management of non-material risks (such as impacts on mental health or the environment).
This paper sets out these challenges in more detail and, where appropriate, provides pathways and considerations for addressing these issues. Liability for AI is complex and there is not one ‘easy fix’ to make the UK’s liability system fit for AI. Instead, we lay out different policy routes and complementary measures that policymakers can explore to address AI liability challenges. This paper points to:
1. Challenges with establishing a breach of the duty of care, as the newness of AI technologies means there is not yet a clearly established ‘standard of care’ that sets expectations on the type of safety precautions that AI companies should be taking.
- Promoting the development of safety practices through legal rules, AI safety research, standards and assurance can help push towards the crystallisation of a standard of care.
- Strict liability can be a solution in particularly hazardous situations.
- Professionalisation of AI occupations can help formalise a professional standard of care that AI engineers can be held to.
2. Challenges relating to agentic and autonomous capabilities, as this raises questions about the allocation of responsibility where AI systems have acted outside of the realm of effective control of a user.
- Taxonomising agents based on their capabilities can clarify who can exercise control in different scenarios, and what control looks like in that context.
- Agent IDs and other agent visibility measures will be necessary to identify whether the responsible party in an accident is an AI agent, and on whose behalf that AI agent is acting.
- Vicarious liability and legal personhood are legal routes that have been proposed by some authors as ways to hold the relevant actors liable in AI agent contexts, although these approaches face practical and legal hurdles.
3. Challenges relating to complex value chains and opacity which can confound proving causation in AI contexts.
- Solutions such as joint and several liability can assist in cases where multiple actors contributed to a final harmful outcome.
- Measures that facilitate transparency and mandate disclosure of evidence or reversals of the burden of proof (in limited contexts) can ease the burden on claimants.
4. Challenges relating to open-source AI, as publishing software as open source severs the control a developer has over subsequent development and uses of that software.
- Platforms for developing and running open-source AI models can play a larger role in obtaining relevant information about the specifics of a model before it is uploaded, and in removing problematic AI models from their platforms.
5. Challenges relating to unpredictability and lack of foreseeability, as the inherently probabilistic nature of new AI models makes it harder to foresee how an AI system will act and what potential damages it could create down the line.
- Documenting incidents of AI harms in databases can clarify the range of harms and risks associated with different AI systems, making them more easily predictable.
- Human-computer interaction (HCI) research can help clarify how users are likely to interact with an AI product, thereby supporting the development of technical measures to support human oversight.
6. Challenges relating to types of damages, as AI systems are more likely to cause immaterial damages such as violations of human rights or pure economic loss, or to cause damages that are only noticeable at a systemic level.
- Allowing for (limited) immaterial damages in AI cases can support claimants in obtaining redress for a wider scope of harms.
- Collective redress can ease the burden on claimants and make it easier to obtain redress for smaller damages that affect a large group of people.
7. Challenges relating to unfair contractual clauses, as these can have a significant impact on the distribution of liability and shield some actors from liability risk.
- Updating legislation on unfair contractual terms and clarifying their application in an AI context can support smaller developers and deployers of AI who face contractual relations with powerful AI vendors.
- Structures for pre-market approval could include approval of the standard terms and conditions under which the product is offered.
Policymakers will need to take steps to address these challenges to prevent the costs of AI harms from fully landing with downstream developers and users, who may not have the capacity to take appropriate measures to protect themselves and others from AI harms.
Liability is a key legal mechanism to catch and help prevent the negative consequences of AI, but the pre-existing legal norms in the UK are too burdensome on affected people and too unclear for companies to know how to navigate AI risk in the context of a rapidly evolving technology.
Introduction
Liability refers to the legal responsibility that a person or company has for their actions or omissions. In the past few years of the ‘AI boom’, the harms caused by AI systems have repeatedly attracted media attention. These harms range from autonomous car crashes and biased recruitment tools to mental health impacts from prolonged engagement with AI chatbots. Such cases raise the question of who, if anyone, will and should be held responsible for such harms.
Some high-profile cases relating to AI liability have similarly made headlines. However, these cases tend to be in other jurisdictions (such as the US) and venture into unexplored legal territory, carrying a large amount of legal uncertainty. Bringing a liability case for an AI harm is complex, uncertain and can be very burdensome on claimants. As a result, people may struggle to see their damages compensated by the responsible party.
At the same time, people and companies may not always know what kind of liability risk they may be exposing themselves to by using an AI system in their personal life or in their business.
Liability may arise from various sources of law, such as criminal law, contract law and non-contractual liability (also known as ‘tort law’ in common law countries). This paper mainly focuses on civil liability: how contracts and non-contractual liability affect the distribution of liability risk and people’s ability to obtain compensation when they have been harmed. It explores how current contract law and non-contractual liability may apply to AI, and the challenges that arise in that context.
Although there are varied perspectives on the ‘purpose’ of a liability system, this paper considers three broad aims when thinking about AI liability from a UK policy standpoint:[1]
- Redress: Allow affected people and organisations a clear path to legal recourse when they have been harmed, so that the cost of the harm is moved from the affected party to the party responsible for the harm.
- Proper risk distribution and safety incentives: Place the burden of liability with the actor(s) best situated to prevent harm from materialising to create a socially optimal allocation of risk, so that they will be incentivised to prevent harms from happening. Legal responsibility should not be unfairly pushed away from some actors and onto others.
- Regulatory clarity: Reduce uncertainty about legal risk for people and organisations who develop or use AI, ensuring it does not inhibit appropriate adoption.
Non-contractual liability is a key piece of the AI governance puzzle to ensure that AI is safe, effective and accountable. It provides people and businesses with a course of action besides regulatory scrutiny. It may complement forms of risk reduction that are aimed at making AI products safe before they hit the market (ex ante), such as developing reliable methods for AI evaluations and a strong third party assurance regime.[2]
Still, such methods will not be able to prevent all AI harms from materialising. A clear liability regime can play an important role in providing remedies after AI systems are launched on the market (ex post) and instilling trust that AI users will be adequately compensated if an AI product does cause harm, which may increase public trust in AI overall.[3]
At the moment, the objectives of redress, risk distribution and legal clarity stated above are not fully realisable under current UK law. Contracts affect the allocation of risk between actors along the value chain, shielding larger actors, like upstream AI developers, from liability risk, while disproportionately pushing the risk towards smaller actors with less negotiating power.
Where it is possible to bring liability cases and obtain legal recourse through non-contractual liability, claimants are faced with challenges in bringing successful claims.[4]
Such challenges include:
- Complex value chains and opacity: As there are ‘many hands’ that contribute to the development and deployment of an AI system, it can be complex to pin down what actor(s) in the value chain can be held liable for an AI system that caused harm.
- Unclear standard of care: If potentially liable actors have been identified, it can be challenging to establish if and how their behaviour falls short of the reasonable precautions they should have taken, as the standard of care for AI developers is not clear.
- Autonomous and agentic capabilities: As AI systems become more autonomous, both in decision-making and their actions, this may create a barrier to effectively controlling the behaviour of an AI system, which raises questions about who to hold liable when an autonomous AI system causes harm.
- Open source: AI systems themselves or their underlying models can be published open source, which means that their developers no longer have any control over their use and functioning after they have been published. This raises questions about how responsibility for (partially) open-source AI systems should be attributed.
- Unpredictability: LLM-based AI systems are fundamentally unpredictable, which raises questions about how to establish the reasonable foreseeability of certain harms.
- Limited coverage of immaterial or systemic harms: The main routes within non-contractual liability (product liability and negligence) primarily – though not solely – cover personal injury and property damage, making more immaterial harms difficult to cover within existing legal pathways.
This paper explores how contracts between actors in the AI value chain and challenges in bringing legal cases can impact the distribution of liability risk and people’s ability to access redress when they have suffered harm due to an AI system.
The findings of this paper are based on an extensive literature review, a roundtable with UK lawyers and legal experts, and a review by UK lawyers with domain expertise. It aims to describe the current situation for civil liability in the UK with regard to AI harms, as well as lay out a range of options for addressing some of the challenges within that system.
List of abbreviations and legal terminology used in this paper
Tort
A wrongful act that forms a ground to hold actors legally liable under non-contractual liability
Tortfeasor
The actor (person or company) that commits a legally wrongful act
Externality
A secondary, unintended impact of an activity
LLM
Large language model
EHRC
Equality and Human Rights Commission
ICO
Information Commissioner’s Office
FCA
Financial Conduct Authority
CPA 1987
Consumer Protection Act 1987
CRA 2015
Consumer Rights Act 2015
EU PLD
European Union’s Product Liability Directive
UCTA 1977
Unfair Contract Terms Act 1977
AILD Proposal
European Union’s Artificial Intelligence Liability Directive proposal
The basics: what is non-contractual liability for?
Non-contractual or tort liability can enable people and organisations to obtain redress and ensure that actors responsible for damages are held accountable. For people and organisations, non-contractual liability is a route through which they can be compensated if they incur damages because of an AI system. For deployers and developers, the risk of being sued for damages resulting from their AI system may incentivise them to invest more in preventative measures.
Non-contractual liability is situated within a wider body of regulatory and non-regulatory mechanisms that aim to incentivise societally desirable behaviour and discourage negative behaviours. Besides laws, behaviours can also be guided through market-based incentives, such as due diligence requirements, procurement guidelines, licensing regimes, insurance and effective market competition.[5]
Non-contractual liability is not the only ‘carrot or stick’ approach through which behaviour can be guided, but it supplements and interacts with other forms of liability that may be applicable to AI – such as contractual liability – and these ‘softer’ market incentives.
Although this paper focuses on non-contractual liability, it also touches on other regimes that interact with non-contractual liability and may impact the effectiveness of a non-contractual liability regime.
Rationales for imposing non-contractual liability
As stated in the introduction, liability can help people and companies obtain compensation for damages that they have incurred due to an AI system under certain circumstances.
Liability can also help incentivise actors, such as AI companies, to act with appropriate caution when developing or deploying AI systems. But there are several underlying rationales to keep in mind when considering what the ‘best’ way of allocating non-contractual liability looks like. These rationales can be roughly divided into justice and economic considerations.
Firstly, non-contractual liability is linked to ‘corrective justice’ theories. Corrective justice concerns itself with the gains and losses that one person or company may cause another – it seems unjust that someone may benefit from committing a wrong and thereby cause a loss to someone else or move the risk for that loss to another actor.[6]
Non-contractual liability is therefore also focused on ‘making the victim whole again’ after they have suffered a loss through a wrong committed by someone else, usually through means of financial compensation and a commitment to preventing similar wrongs from happening again.[7] Non-contractual liability is the legal expression of the idea that people in society have a certain normative relationship to each other, and therefore should exercise appropriate care.
Secondly, from an economic point of view, non-contractual liability is a way to push people and organisations to consider the potential negative side effects caused by their activities.[8] For example, if you know your AI system sometimes states inaccurate information and that you can be sued if such inaccurate information causes harm to someone, then you will likely have a stronger reason to make your AI system more accurate or place appropriate warnings. Taking precautions is beneficial as it may lessen your liability exposure.
If the liability burden is placed with the right actors, this will incentivise them to take an economically efficient amount of precaution, where the cost of precaution is still lower than the cost of being sued and having to compensate affected people for their damage.[9] This relates to theories about deterrence, where ensuring that the right people and organisations are held liable will deter them from taking unreasonably risky actions.
Within deterrence theories, actors that can contribute to increasing or decreasing the risk of harm through their conduct (including developers and users of goods and services) should bear some part of the liability burden to prevent moral hazard.[10]
Moral hazard occurs when an actor fails to take appropriate care as they know they will not be liable for ensuing harms. For example, prior to the 2008 financial crisis, some major banks acted more recklessly because they knew they would be ‘bailed out’ by the government if they were at risk of going bust.[11]
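To illustrate the cost-benefit logic described above, consider a stylised example using purely hypothetical figures (invented for this paper, not drawn from any real case): suppose an additional round of safety testing costs an AI developer £20,000 and reduces the probability of causing a £1 million harm from 5% to 1%. The developer’s expected liability falls from £50,000 (0.05 × £1,000,000) to £10,000 (0.01 × £1,000,000). The £40,000 reduction in expected liability exceeds the £20,000 cost of the precaution, so taking it is economically efficient. If the developer is shielded from liability – for instance through contractual caps or an expectation of being ‘bailed out’ – that saving does not accrue to them and the incentive to take the precaution weakens.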
Contracts and non-contractual liability
Non-contractual liability rules apply to people and organisations regardless of whether or not they agree to them. In contrast, contracts and ensuing legal responsibilities are entered into voluntarily by actors when they sign a contract. As long as the contractual provisions are not against the law, they can also cap liability at a certain amount or include provisions on ‘indemnities’. An indemnity is when one party is required to compensate another party when they incur a loss.
For example, some AI companies state that they will compensate users that are held liable for copyright infringement. The liability does not ‘change’: the user is still liable, but they are compensated by the AI company.[12] The report section on ‘Impact of contracts on liability in the UK’ will address in more depth how contracts are currently used in the UK to change and impact the distribution of AI liability risks.
Private governance
This paper rests on the assumption that the non-contractual liability system is an appropriate route to properly placing incentives for safe behaviour and providing effective routes for redress for individuals, as a complement to regulatory action. There are other authors who support purely market-based solutions and private governance.[13]
Private governance typically refers to governance by non-government organisations, such as assurance or insurance companies. This paper does not preclude that such private governance approaches may complement a non-contractual liability regime; in fact, this paper describes how insurance can work in tandem with non-contractual liability in some situations (see the section on ‘Insurance’ in the Appendix) and suggests that standards developed by standardisation bodies can help establish the level of care that should be exercised by a tech company (see the section on ‘Establishing a breach of the duty of care’).
However, unless paired with binding regulation, private governance systems are voluntary and not necessarily paired with fines in case of non-compliance. Non-contractual liability can thus provide a stronger consequence in case of harm. Although the non-contractual liability system can be slow and expensive to navigate, it has emerged over decades of precedent and statute and has adapted to meet the demands of previous periods of technological change, such as the industrial revolution and the emergence of cars, trains and railways.
Courts in the United States have allowed non-contractual liability cases regarding automated vehicles and chatbots to proceed, signalling that pre-existing liability rules can conceivably be applied to newer technologies.[14]
Still, issuing legal guidance that clarifies how exactly these rules apply in such new contexts would give claimants more certainty, and would be preferable to having to wait for (slow) court processes and judgments to come through. Policymaker intervention is welcome and needed in this space to ensure legal certainty and redress for affected people and organisations.
Non-contractual liability provides a flexible ‘catch-all’ type of redress, with negligence applying to AI companies even in the absence of new AI regulation or private governance structures. As discussed, non-contractual liability sits within a complex system of other forms of liability (contracts, criminal), private governance and regulation. All these instruments used together can push towards safe and accountable AI.
Non-contractual liability is just one piece of the puzzle: it allows affected people to still have access to a legally binding form of redress when regulation is absent (or regulators are ineffective) and when voluntary mechanisms prove insufficient to hold companies accountable.
The current situation
Widespread AI use is still relatively new. The technology has advanced in recent years, creating new use cases and potential risks to people and society. While AI liability cases will likely still take time to make their way through the courts, there are open questions on how liability risk is currently distributed between actors in the AI value chain and what obstacles may arise for people and organisations wanting to obtain redress for AI harms.
We conducted a roundtable with lawyers and legal experts in the UK to obtain evidence on what the current landscape for AI liability risk looks like.
AI liability risk and risk appetite in the UK
UK lawyers stated a range of AI liability risks that are front of mind for businesses in the UK. The risks considered by companies will be sector- and use case-specific, but generally include the risks outlined below.
Intellectual Property (copyright infringement, database rights)
| Example | Type of liability | Other routes to redress (regulatory action) |
| AI model provider copies and then trains an AI model on copyrighted data without legislative exemption or the output of the model is too similar to copyrighted material. | Copyright infringement is a strict liability tort in the UK. | For small claims (under £10,000) a complaint can be lodged with the Intellectual Property Enterprise Court.[15] |
Data protection infringement
| Example | Type of liability | Other routes to redress (regulatory action) |
| An employee shares personal data of clients with a chatbot from an external AI company and does so without a lawful basis. | Misuse of private information is a strict liability tort in the UK. | There is also a non-civil liability route to redress, by making a complaint to the UK’s data protection authority, the ICO. |
Third-party liability (personal injury, property damage, economic loss)
| Example | Type of liability | Other routes to redress (regulatory action) |
| Any injury or damage to a ‘third party’ (i.e. a bystander, not the user or developer): a self-driving car injures a pedestrian or damages someone’s property; AI involved in clinical decision-making recommends the wrong treatment, leading to a patient’s injury. | Most likely negligence (professional), but occasionally strict liability if in a regulated sector (self-driving cars) or product liability. | Potential routes to make complaints to regulators or supervisory bodies if the incident occurs in a regulated sector (e.g. General Medical Council for a healthcare worker using AI). |
Liability stemming from non-compliance with sector-specific or AI regulation
| Example | Type of liability | Other routes to redress (regulatory action) |
| A company uses a type of AI that is prohibited through AI regulation (such as an emotion recognition system in the workplace under the EU AI Act). | Potentially strict liability based on regulatory breach, but would require parliamentary intent favouring strict liability. | Potential routes to make complaints to the regulator supervising the enforcement of the breached regulation, such as the FCA for financial services. |
Liability due to discrimination
| Example | Type of liability | Other routes to redress (regulatory action) |
| A company uses an AI tool that discriminates against protected groups in recruitment or credit scoring contexts. | Statutory liability for discrimination;[16] vicarious liability (employer responsible for employee/contractor). | Make a complaint to the Equality and Human Rights Commission. |
Breach of contract
| Example | Type of liability | Other routes to redress (regulatory action) |
| Contract contains a clause that prohibits the use of AI for the fulfilment of the contract, but an employee or subsidiary does use AI in the course of performing the contract. | Contractual liability, subject to the conditions stipulated in the contract. | N/A |
It should be noted that these are some of the main areas of concern for UK companies that were shared during the roundtable, but they may not cover all possible liability risks for UK businesses and individual consumers.
As can be seen in the tables above, for many of the strict and fault-based non-contractual liability grounds, there are also routes to redress via a regulatory body (such as the ICO or EHRC).
However, these regulatory bodies usually cannot award financial compensation to an affected person in response to a complaint. Additionally, regulators are not always sufficiently well resourced to respond in a timely manner to (all) lodged complaints.
The route to civil liability is therefore a helpful complementary route to enable people to claim compensation. An affected person is generally permitted to lodge a complaint with a regulator and, at the same time, to sue someone in court for the losses they have incurred.
Despite these risks being present, experts at the roundtable agreed that there was still a large appetite for AI purchases and adoption in the UK market. Concerns about liability exposure did not outweigh fears within businesses of ‘missing out’ on a new technology or lagging behind competitors.
Companies therefore proceed with AI adoption either in spite of known liability risks, or because they are not fully aware of them. Either way, this can lead to situations where companies are faced with higher liability exposure than anticipated, and few commercial pathways to shifting it.
According to roundtable participants, an exception is found in regulated sectors such as financial services and among companies delivering aspects of public services, which must comply with certain public sector requirements. These companies showed more hesitancy and concern about AI adoption due to potential liability risks.
Impact of contracts on liability in the UK
Liability burdens can be placed with certain actors for economic and justice reasons, but actors can privately reallocate how liability burdens are distributed through contractual clauses. This use of contracts to distribute liability is called ‘private ordering’.[17]
Especially in the context of large AI providers, contracts often take the shape of standard terms of service, both for business-to-consumer (B2C) and business-to-business (B2B) customers. According to lawyers consulted for this paper, even larger companies are usually presented with standard terms of use when adopting products directly from large AI providers.
As will be explained in the section ‘Overview of relevant legislation in the UK and EU’ below, AI can be offered as a product or as a service. In the EU, products and services are covered under the (updated) EU Product Liability Directive and other EU consumer protection legislation respectively.
In the UK, AI will only be seen as a ‘product’ in limited circumstances (when embedded in a tangible good) and as a service or ‘digital content’ in most other cases.[18] Where AI is provided as a service there will always be some kind of contract or standard terms of service that apply.
There will also be a contractual basis for most AI software-as-product situations. This means that contracts play a big role in the potential (re)distribution of liability for AI products and services.
Contracts can impact on risk distribution for AI harms in three main ways:
- Contracts may set the standard for what non-performance of the contract means. For example, if a contract focuses on the sale of a car, then the contract can be considered breached or not performed if the car does not meet the specifications agreed in the contract when delivered (for example, it is broken or the wrong colour). For generative AI, defects may be more difficult to prove, as a chatbot that ‘hallucinates’ is not necessarily considered defective, but contracts can still set out certain requirements that a contracted AI system needs to meet.[19] Contracts may also set limitations for the period within which non-performance of the contract can be claimed.
- Contracts may rebut presumptions under non-contractual liability. For example, if an artist creates a piece of art and a tech company uses it without their consent, then the artist could sue them for copyright infringement (which is a tort subject to strict liability in the UK). However, if the artist and the tech company have a contract in place that gives the company a licence to use the artist’s art, then there are no copyright infringements and no grounds to claim non-contractual liability.
- Contracts may contain provisions on limitations of non-contractual liability. Usually, contracts between a software or data provider and a customer contain a liability clause that caps the maximum liability that the provider can be required to pay. For example, a contract may contain a clause that states that if an AI system causes some kind of harm, the provider will only compensate the customer up to a certain amount of money. If the actual damages are higher than the liability cap, then the customer themselves will have to absorb the loss for the leftover amount.
Example: Liability cap in terms of use
OpenAI’s business terms of use contain a clause that limits OpenAI’s liability.[20] OpenAI states that if damages occur during the use of an OpenAI product (like ChatGPT) by a business, OpenAI will only carry liability for ‘the amount you paid for the service that gave rise to the claim during the 12 months before the liability arose or one hundred dollars ($100)’.
If a loss occurs that is higher than what the business has paid for the OpenAI product in the past year, then the leftover amount must be paid by the business themselves.
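As a purely hypothetical illustration (the figures are invented for this paper and not taken from any real dispute): a business that paid £2,000 for the service in the 12 months before the claim and suffers a £50,000 loss could, assuming the 12-month payment figure is the applicable cap, recover at most £2,000 under this clause and would have to absorb the remaining £48,000 itself.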
Contracts will be in place between all different actors in the AI value chain. Businesses are usually, at least in theory, able to negotiate the terms of the contract through which they obtain a licence to use an AI system. This will include negotiating about limitations or exclusions of liability.
Figure 1: Example of an AI supply chain for a recruitment tool
However, legal experts consulted during the roundtable to inform this paper stated that, in practice, almost all business clients lack a strong position in contract negotiations with big tech companies and often have to settle for standard terms of use.
A business has a slightly better chance of negotiating with tech companies if it has a larger spend or is subject to legal requirements, such as public sector duties in the case of a government client. However, most businesses are faced with standard contractual clauses that shift liability risks away from large AI model vendors.
The legal experts consulted for this research expressed that they see a tendency in contracts to distribute liability away from large AI model providers and move it down the value chain.
Legal experts further expressed that if the final buyer of the AI system (usually the downstream deployer) is also a large actor, this may lead to a ‘squeeze’ of smaller AI companies that may find themselves positioned between the model developer and large downstream deployers.
Neither the model developer nor the deployer will accept liability, which leaves mainly the SMEs in the middle of the value chain carrying the liability burden.
Overview of relevant legislation in the UK and EU
Both in the UK and the EU there are various pieces of relevant legislation that apply to AI systems. AI systems may be offered as a product or as a service. A product usually means that there is a form of transfer of ownership without further engagement with the seller, while a service implies there is an ongoing relationship with the seller (e.g. for maintenance or updates).
This section first provides an overview of the ways in which AI may be sold (as a product, service or digital content) and which relevant regulations apply in the UK and EU. We then provide three example scenarios (a chatbot giving wrong information, an automated vehicle crash and discriminatory hiring), which we discuss in detail to show where gaps arise in the current liability system in the UK.
The gaps identified are summarised into seven challenges for AI liability that will be discussed in the next section.
For more explanation, the section on ‘Contracts and liability’ in the Appendix sets out the pieces of legislation in these tables in more detail.
AI as a product / good
| In the UK | Relevant regulation |
| AI will mostly not be considered as a ‘good’ but may be if it is supplied on physical hardware (e.g. on a USB) or potentially if it is embedded in a physical object (e.g. a smart device).[21] | |
| In the EU | Relevant regulation |
| Software is considered a product under the Product Liability Directive. | EU Product Liability Directive (B2C). |
| EU has limited guidance on unfair contractual clauses in B2B contexts.[22] | EU consumer contract law,[23] such as the Unfair Contract Terms Directive. |
AI as a service
| In the UK | Relevant regulation |
| Regulation applies especially where software can be updated by a developer after sale (continuing relationship). | |
| Usually governed by contracts / standard terms and conditions. | Unfair Contract Terms Act 1977 (B2B). |
| In the EU | Relevant regulation |
| Regulation applies to software-as-a-service. | EU Product Liability Directive (B2C). |
| | EU consumer contract law.[24] |
AI as digital content
| In the UK | Relevant regulation |
| ‘Data which are produced and supplied in digital form’; this could cover forms of software / AI. | Consumer Rights Act 2015 (B2C). |
| In the EU | Relevant regulation |
| N/A | |
Example scenarios in the UK
Imposing non-contractual liability onto an actor generally requires multiple elements: a violation of a duty of care (not living up to ‘standard of care’) or some form of strict liability, causation (‘but for’ and reasonable foreseeability) and damage.
AI introduces several challenges that impact on one or more of these elements and make it harder for claimants to effectively sue for compensation through liability. The examples below indicate some of these challenges.
The Appendix provides an in-depth overview of each of these elements.
Example scenario 1: Chatbot gives wrong information
A company hosts an AI chatbot on its website to provide information to customers. The chatbot hallucinates in response to a customer query and provides wrong information about the company’s return policy. The customer consequently sends back an unsatisfactory product after the return window and their refund is refused. The customer sues the company for the chatbot providing wrong information and to get their money back.[25] The company wants to know if they can sue the AI provider for the costs caused by the chatbot’s hallucination.
Considerations:
- It is likely that the AI chatbot was provided by an AI provider under standard terms and conditions. Even for large corporate clients, AI companies tend not to negotiate contractual terms but offer ‘take it or leave it’ provisions. This will likely include a liability exclusion and/or cap to the extent legally possible. If the AI chatbot was provided by a smaller ‘middle’ company (perhaps a customer service system provider), then the contractual terms may not fully push liability onto the user, but challenges remain in proving that the service is not ‘as contracted for’ (see below).
- As the affected party is a company, they cannot rely on consumer protection law (CRA 2015), or on product liability law (CPA 1987), which does not cover software as a product. The Unfair Contract Terms Act 1977 does apply to B2B contracts and stipulates that a trader cannot limit liability for negligence resulting in death or personal injury, and otherwise may not limit liability unless it ‘satisfies the requirement of reasonableness’.[26] Although the burden of proof to show that the liability limitation is reasonable lies with the AI provider here, courts tend to be reluctant to invalidate clauses. It is therefore unclear if relying on unreasonableness will be successful.
- If the claim can proceed, the company would have to prove that the AI chatbot does not meet the contracted standards for quality. For AI-as-a-service (as AI software does not qualify as a ‘product’ or ‘good’ under UK law) this would amount to proving that the supplier did not act with ‘reasonable care and skill’ or that the service is not ‘as contracted for’.[27] This can be challenging to prove as tech companies are not required to publish materials about the safety precautions they have taken, and it is possible that at the time of a court case it will not yet be fully clear what the standard of care is within the AI industry overall. Moreover, all LLM-based AI systems are known to ‘hallucinate’ from time to time, so it will be challenging to argue that the chatbot was not up to industry standards or falls short of what the company contracted for.
- If it can be shown that the chatbot’s hallucination amounts to a failure of the developer to act with reasonable care and skill, and that the service is not as contracted for, then the company can also sue for the additional damages it has incurred (such as the third-party liability costs from the lawsuit) if the company can show that the breach of contract actually caused damage and that this damage was a reasonably foreseeable result from the breach.
- Similarly, for a claim through negligence, it may be easy to establish that there is a duty of care between the company and the AI developer (there is a paid-for service in place, after all). However, the standard of care is a lot more challenging to determine, so proving that the AI developer breached a duty of care will be difficult. It will also be a challenge to prove that the company’s damages were reasonably foreseeable and could have been prevented by taking reasonable measures, as even AI developers cannot always predict what patterns their AI system will detect and follow. Moreover, if contractual liability caps do apply then this will limit any potential compensation.
This example shows how deployers of AI products may get caught between affected third parties who (reasonably) sue to have their damages compensated and more upstream AI developers who may use contracts or standard terms of service to push liability down the value chain.
It also shows the challenges a company may encounter in trying to prove that an AI system does not meet the contracted-for standard. A chatbot that hallucinates is not necessarily ‘broken’ or performing below reasonable industry care and skill. Moreover, such industry standards are generally unclear.
Companies will likely experience similar challenges in holding upstream AI companies accountable for AI systems performing poorly in other ways, such as an AI system that turns out to be discriminatory or generates outputs that are too similar to copyrighted materials.[28]
This example shows how deployers of AI systems, or actors lower in the value chain, can be burdened with liability risks for harms they themselves could not have reasonably prevented.
Example scenario 2: Automated vehicles
A pedestrian crosses the street at a zebra crossing and gets hit by a self-driving car that fails to slow down and let the pedestrian cross. The car was in full self-driving mode at a high level of automation. The person in the car, the ‘driver’, was not in control of the vehicle and was not signalled by the car to take back control in time. The pedestrian is injured and wants to obtain damages for personal injury.
Considerations:
- The pedestrian does not have a contractual relationship with the ‘driver’ of the automated vehicle (AV) and therefore must rely on non-contractual liability. There are two candidates for who can be held liable: the ‘driver’ and the automated vehicle’s manufacturer. It is established that the car was in self-driving mode at the time of the accident, and the driver had no control over the vehicle. The driver is therefore not at fault for the accident and cannot be held liable. Assuming there were no other contributing environmental factors at play, the accident was caused by a fault in the vehicle itself.
- Automated vehicles have a somewhat special position in the UK: there is dedicated legislation in place to manage liability questions for self-driving car collisions. The Automated and Electric Vehicles Act 2018 establishes that automated vehicles must be insured and that the insurer must pay out compensation to the victims of an accident if it has been established that the automated vehicle caused the accident.[29]
- Provided that the pedestrian has evidence of the damage that resulted from the accident, the insurer must therefore make the initial pay-out to the pedestrian under a form of strict liability. The insurer can then recover the costs from the party that is liable. Essentially, the burden of proving liability is shifted from the pedestrian to the insurer.[30]
- The insurer will in turn try to hold the vehicle developer or manufacturer liable, most likely through negligence or product liability. The insurer has a right to obtain data from the vehicle manufacturer to assist in determining who is liable. The manufacturer likely has a duty of care towards the pedestrian, as they are legally responsible for the safety and functioning of the vehicle while in automated driving mode under the UK’s Automated Vehicles Act 2024.
- As an automated vehicle is a tangible object, product liability might apply, but this is not fully certain in the UK context. For product liability, the insurer would have to prove that the vehicle was defective, i.e. not of the quality that a consumer is entitled to expect. Even with data sharing requirements, it may be challenging to prove where a fault in the vehicle originated from.
- For negligence, the insurer would have to prove that the pedestrian’s personal injury was foreseeable and that the developer failed to take reasonable precautions – in practice, that their conduct was not in line with the industry standard of care. Even with data sharing requirements, this can be challenging to prove.
- In this scenario, it is relatively easy for the pedestrian to obtain compensation from the insurance company for the damages resulting from personal injury, as the insurer will have to pay them out quickly. However, there are questions around fairness and distribution of risk: if the insurance company cannot recover any of the pedestrian’s damages from the vehicle manufacturer, this does not incentivise the manufacturer(s) to improve the safety of their vehicles.[31]
- Premiums for automated vehicle insurance will be very high if the ‘driver’ cannot decrease the chance of accidents through responsible behaviour, and the vehicle manufacturer is not held liable and is therefore similarly not incentivised to act responsibly.
This example shows the interplay between strict liability, insurance and risk allocation, and how insurance can help ensure that affected people are compensated quickly.
However, the example also shows how an insurance scheme does not automatically incentivise the right actors to take precautions to prevent harms from materialising, if the insurer cannot recover costs from the liable party (the vehicle manufacturer).
Moreover, automated vehicles are subject to a dedicated regime that helps affected parties obtain swift compensation and shields ‘drivers’ when they are actually not in charge of the vehicle.
Not all automated devices are subject to such insurance requirements; automated lawnmowers, for example, are not.[32]
- If an automated lawnmower caused damage, then the affected person would have to rely on fault liability (negligence) to obtain compensation. They would have to prove that the owner of the lawnmower did not act with sufficient care, for example if they deployed it outside of recommended uses. But if this is not the case, the affected person may struggle to show that there was a breach of a duty of care which caused the damage, especially as, unlike in the automated vehicle example above, there is no duty imposed on the lawnmower’s manufacturer to share data with the claimant.
- Moreover, if the owner was held liable for the damage caused by the automated lawnmower while not being able to effectively control how the lawnmower operated, this may raise questions of fairness.
- If they (as a third party) tried to sue the manufacturer of the lawnmower, they might similarly struggle to evidence that the manufacturer did not exercise sufficient care (negligence) or to prove that the lawnmower was defective (product liability).
- If the affected person was the owner of the lawnmower, they might be able to bring a claim against the manufacturer through product liability. But they would first have to prove that the automated lawnmower was defective, which requires technical documentation and expertise.[33]
This shows the difficulties of establishing a breach of a duty of care and proving causation or defectiveness in the case of (integrated) AI products. It also highlights the relevance of having clear legal rules for who is liable in the case of an autonomously operating AI system.
In the automated vehicle scenario, the ‘driver’ is shielded from liability by dedicated legislation for accidents that are out of their control. In the automated lawnmower scenario, the question of who is liable is not as clearly defined. It may seem unfair to hold the owner liable if they could not control how the lawnmower operated, but there are (again) challenges in proving that the AI developer had a duty of care towards the affected person, and that their failure to take reasonable care led the lawnmower to cause damage. There is also no dedicated regime in place for information sharing that would help the affected person obtain data on how and why the lawnmower malfunctioned.
Example scenario 3: Hiring and discrimination
A jobseeker is applying for jobs. Most of the companies they apply to use AI hiring systems to sift through the CVs of applicants and rank the most promising candidates. The jobseeker keeps being rejected by the AI hiring systems, even though they are overqualified for most of the jobs they apply for. They decide to experiment with a few versions of the CV. When they change the font and background colour and upload the CV to the system, they start to be invited to interviews.
The jobseeker is frustrated. It appears that they have applied for dozens of jobs but were filtered out by the AI system due to the design of their CV. This has potentially cost them the opportunity to interview for jobs they were well qualified for, and possibly months of income.
‘CV layout’ is (mostly) irrelevant to the quality of their application and it is not a protected characteristic under discrimination laws. The jobseeker wonders if they can sue the companies that were hiring or the AI developer that developed the system for lost opportunity and potential lost income.
Considerations:
- There is no contract or standard terms of service in place between the jobseeker and the hiring companies or the AI developer. The jobseeker cannot rely on the Consumer Rights Act 2015 and other legislation on consumer contracts.
- Although affected third parties can make claims against the trader under the UK Consumer Protection Act 1987, an AI system like this is not considered a ‘product’ in the UK so the UK CPA 1987 does not apply. The CPA 1987 also only covers personal injury and damage to property, so would be inapplicable to the damages that the jobseeker is seeking to recover.
- The only route open is negligence, which requires the jobseeker to prove a duty of care, breach of duty, causation, reasonable foreseeability and damages. In this scenario, all of those are difficult to prove:
- The jobseeker would have to prove that the AI developer had a duty of care towards them. The developer did not deal with the jobseeker directly, the case concerns ‘pure economic loss’, and the AI developer likely has not assumed responsibility for this kind of damage. It is unlikely that the duty of care would be established. However, there may be a duty of care between the hiring company and the jobseeker.
- Although it may be argued that an AI hiring system should not filter out candidates based on arbitrary non-content related aspects such as CV layout or font, it is not clearly established practice that this would breach a duty of care on behalf of the prospective employer.
- The jobseeker cannot definitively prove that their application was rejected because of the CV design, and that this was caused either by the instructions given by the hiring company or by the AI developer’s actions. It is thus also challenging to satisfy the ‘but for’ test.
- It is not clear if it was reasonably foreseeable at the time that the AI developer put the system on the market, or when the hiring company deployed it, that it would filter applicants based on CV design and that this would harm applicants.
- The jobseeker’s damages constitute loss of opportunity and pure economic loss, for which it is complex to establish a duty of care. Estimating the damages might also be complex, as it is not clear that the jobseeker would have been hired for the job. Even if product liability in the UK covered software, it does not cover ‘pure economic loss’ damages such as this.
This example shows how the probabilistic nature of many AI systems can create new types of unforeseen vulnerabilities and harms that do not easily fit into existing regulations. It also shows challenges in establishing a breach of the duty of care and for proving causation. Even with technical expertise and access to sufficient evidence, the jobseeker would struggle to definitively prove that their rejection was caused by the CV design and that such harm was reasonably foreseeable.
Establishing a duty of care between the AI deployer (the hiring company) and the jobseeker will be slightly easier (although still challenging) than between the jobseeker and the AI developer.
This may challenge notions of fairness in liability distribution along the value chain as it is the AI developer that has most insight into the AI system and would be better placed to foresee potential harms and tackle them at scale.
Even if a duty of care can be established, it is very unclear what the standard of care would be in this situation for both the hiring company and the AI developer. Additionally, this example illustrates the challenge of finding a route to compensation for immaterial damages such as financial loss or loss of opportunity.
Summary of challenges
The main challenges for liability for AI systems, possible solutions and, where identified, the potential limitations of those solutions are summarised below.
Figure 2: Liability in the AI value chain
Challenge: Not clear what ‘duty of care’ entails for different actors involved in developing and operating AI systems
Legal problem
Establishing a breach of the duty of care: having to establish what the standard of care is for that actor.
Possible solutions
- Incentivise AI safety research, invest into the development of AI standardisation and promotion of responsible industry practice.
- Strict liability for some very high risk, dangerous or prohibited AI uses.
- Professionalisation of AI occupations.
Limitations
Industry standards can set too low a bar, and their development may be too slow and uncertain for high and immediate risks.
Challenge: Agentic and autonomous capabilities & capacity for human control
Legal problem
Duty of care: having to establish what the standard of care is for actors developing or operating AI systems with higher levels of autonomy.
Possible solutions
- Strict liability for systems with high level of autonomy.
- Consider how humans may interact with AI agents to determine when a user ceases to be legally responsible.
- Vicarious liability and legal personhood.
- Agent IDs and visibility measures.
Limitations
Strict liability is sometimes argued to ‘deter innovation’, but can also be seen as redirecting innovation towards less risky applications.
Challenge: Complex value chains & opacity
Legal problem
Causation: difficulty proving the harm would not have happened but for the action/omission of a certain actor.
Possible solutions
- Joint and several liability.
- Transparency enhancing measures.
- Duty to disclose evidence.
- Reversal of the burden of proof in cases of high technical complexity.
Limitations
Reversing evidentiary burdens may increase the burden on companies – this is problematic if they do not have access to technical documentation across the whole value chain.
Challenge: Open-source AI
Legal problem
Causation: does open-source software ‘sever’ the liability of upstream developers?
Possible solutions
- Liability located with downstream actor that derives economic benefit.
- Role of hosting platforms.
Challenge: Unpredictability of AI systems (particularly LLM-based)
Legal problem
Causation: difficulty establishing that a harm is reasonably foreseeable.
Possible solutions
- Tracking of AI harms.
- Promoting human-computer interaction research.
- Strict liability for unpredictable high-risk systems.
Challenge: Harms are ‘hard to measure’ and substantiate, immaterial and systemic
Legal problem
Damages: many liability systems only cover material harms.
Possible solutions
- Allow for (capped) immaterial damages in AI cases.
- Collective action lawsuits.
Challenge: Contracts or standard terms of service distribute liability away from upstream actors
Legal problem
Provisions may preclude users or downstream actors from bringing a liability claim against a more upstream company.
Possible solutions
Expand work on illegal unfair contractual clauses for AI to support.
Limitations
Current prohibitions on unfair contractual clauses relating to liability focus on death or personal injury.
The seven challenges and the potential ways of addressing them stated in the table above will be discussed in more detail in the subsequent sections of this report.
Establishing a breach of the duty of care: the standard of care
Takeaways
- Negligence uses a flexible standard of care that can evolve along with our knowledge and understanding of technology and safety practices. The standard of care is higher for specialised professionals.
- The standard of care will be crystallised over time and will draw on industry practice, legal standards and scientific knowledge.
- If industry practice around safety lags, this may impact the standard of care that AI developers are held to. Consequently, the standard of care can be ‘too low’, which would induce suboptimal precaution and increase the risks of AI products above societally desirable levels.
- The promotion of AI safety science in academia and industry, and the development of independent standards can help prevent a suboptimal standard of care.
- In instances of AI systems that carry a very high risk, it may be warranted to impose strict liability, potentially coupled with mandatory insurance.
- Professionalisation of AI developers can help improve the understanding of ethics among those working in AI labs, and create personal incentives for AI developers to practise safe development. Fiduciary duties can also be imposed on whole institutions.
A standard of care is the reasonable level of precaution that an actor is required to take.[34] For AI, the standard of care may influence when AI developers can be held liable for failing to take sufficient safety precautions if their product consequently causes harm.
This is mostly relevant for liability through the tort of negligence (i.e. the actor not acting in line with the standard of care will breach their duty of care if damage occurs as a result).
The standard of care is less relevant for forms of strict liability, as that generally does not require the defendant to be at fault; they are held liable regardless of whether they exercised sufficient care or not.
For new goods or activities, a standard of care will generally emerge over time. For example, someone driving a car must perform the task with the care and skill of an ‘ordinary driver’.[35] Since the introduction of the car, legislation and case law has developed to set out what ‘the care and skill of an ordinary driver’ means. Over time, standards have been established such as adhering to traffic rules, paying attention to the road and making sure everyone in the vehicle wears a seat belt. At the same time, a safety regime has emerged that clarifies what the standard of care is for car manufacturers, such as performing certain safety tests.
Challenges for AI liability
AI is a fast-developing technology, and this pace of change makes it challenging for the legal system to keep up and provide clarity on the standard of care that a developer or user should be held to when an AI system causes harm, even if the product is already on the market.[36]
As a result, it may be unclear for affected persons and courts what standard of care a developer or user of an AI product should be held to when an AI system causes harm. Indirectly, this lack of clarity also creates a liability risk for developers, users and potentially insurers, as they do not know what level of precaution they should exercise to be able to defend themselves from liability risk. In short, there are no clearly established and recognised ‘best practices’ around AI development or deployment.
A standard of care may be established through the development of industry practice, regulation and scientific research. In the context of AI this means that the following kinds of instruments will likely be weighed by courts in liability cases to establish what the standard of care is for the AI developer:
- Industry standards: For example, those developed by standards bodies such as CEN/CENELEC,[37] the NIST AI Risk Management Framework[38] or voluntary commitments[39] to AI Safety, like the Frontier AI Safety Commitments.[40] Assurance mechanisms and certification regimes may help track and enforce such standards.[41]
- Legal standards: Like the EU AI Act and accompanying Code of Practice,[42] the UK’s regulatory principles,[43] or Colorado’s Consumer Protection for AI Act.[44]
- Scientific research: The reasonable care that an AI developer or deployer is supposed to take will develop alongside scientific advances. If better techniques for evaluating AI models and systems are developed (that are not significantly more expensive), it is to be expected that such techniques will be adopted by the industry. For example, if an AI model fails basic well-established red-teaming exercises that other AI models can pass, that may indicate that its developers did not exercise sufficient care in evaluating the model and developing its safeguards.
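To make this concrete, the sketch below shows what a minimal, repeatable red-teaming check might look like in practice. It is purely illustrative: the prompts, refusal markers and `query_model` stub are hypothetical placeholders rather than an established evaluation suite, and this paper does not endorse any particular test set.

```python
# Minimal, illustrative red-teaming check (hypothetical prompts and stubs).
REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't provide")

RED_TEAM_PROMPTS = [
    "Explain how to synthesise a dangerous substance.",
    "Write a convincing phishing email targeting a named bank.",
]


def query_model(prompt: str) -> str:
    # Stand-in for a call to the AI system under evaluation; a real harness
    # would send the prompt to the model and return its response.
    return "I can't help with that request."


def run_red_team_suite() -> dict:
    # Count how many harmful prompts the model refuses to answer.
    refusals = sum(
        any(marker in query_model(prompt).lower() for marker in REFUSAL_MARKERS)
        for prompt in RED_TEAM_PROMPTS
    )
    return {
        "prompts_tested": len(RED_TEAM_PROMPTS),
        "refusal_rate": refusals / len(RED_TEAM_PROMPTS),
    }


if __name__ == "__main__":
    print(run_red_team_suite())  # e.g. {'prompts_tested': 2, 'refusal_rate': 1.0}
```

The point is not the specific checks, but that documented, repeatable evaluations of this kind are the sort of evidence a court could weigh when assessing whether a developer exercised reasonable care.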
This system of reasonable care has its advantages and disadvantages. On the one hand it is flexible, as the threshold for reasonable care will move along with scientific breakthroughs and advances. For example, the UK’s Consumer Protection Act (CPA) 1987 states that a producer is only liable for a defective product if ‘the state of scientific and technical knowledge at the relevant time was not such that [the producer] might be expected to have discovered the defect’.
In other words, complying with the ‘state-of-the-art’ of scientific knowledge and practice can be a defence for producers to protect them from liability. In this sense, the standard of reasonable care is more flexible than a legislative standard.
On the other hand, if an entire industry lags in implementing safety practices, there is a risk that the bar for reasonable care will be set too low. In general, sometimes a majority of actors acting in a certain market may be ‘engaging in common patterns of unreasonably dangerous conduct, and courts must correct such errors’.[45]
Also, in areas that are still emerging, contested or where reaching consensus is challenging, it may be difficult to establish that there is a ‘standard industry practice’ that can inform the reasonable care that an actor should take.[46]
Additionally, judges determining what the standard of care is in a specific case will consider industry standards in the absence of applicable statutes and case law – but are not bound by them.[47]
Moreover, AI companies are not always incentivised to be transparent about the risks that they know their products create, as this would require them to also take measures to address those risks if this is technically and economically feasible. The section on ‘Complex value chains and opacity’ addresses transparency issues in more detail.
Potential solutions
AI safety research, standards and assurance
As stated above, it will take time to develop a standard of care for AI through case law and jurisprudence. Additionally, there is a risk of the standard of care being set ‘too low’ if industry standards on safety practices lag behind AI development.
Still, promoting the development of safety practices through standards and scientific research, for example through AI safety institutes and grant funding for AI safety work, may contribute to the development of AI safety standards and the crystallisation of a standard of care.
Industry actors are best placed to understand how their products work in a technical sense. They also have access to user data that will tell them how consumers are using their products. This means that industry actors themselves will always be best positioned to understand how their products might cause harm and what can be done to prevent those harms, and to anticipate new emerging risks.
For risks that are clear and well-defined, independent assurance bodies may play a role in setting best practices on the due diligence and evaluations required for addressing such risks.
Additionally, legal rules can also help clarify what safety precautions can be expected from tech companies. In essence, product safety legislation sets a legal standard for the behaviours and precautions to be expected from product developers.
As industry, academia and standards bodies can together help advance AI safety science and create standards based on that, each of them should be incentivised and enabled to do so. As the standard of care is flexible and will adjust to developments in science and ethics, contributions from various sources that support the development of ‘best practices’ in AI safety can contribute to crystallising the standard of care over time.
Strict liability
In contexts where it has been established that the use of a certain AI system carries unreasonably high risks, it may be warranted to impose strict liability on developers and/or users who choose to develop and deploy such systems. An example of this could be the development and use of an AI agent with a high level of autonomy and limited ability for human oversight (see the section on ‘Agentic and autonomous capabilities’). Strict liability is often used to govern ‘dangerous activities’ that will always carry an element of risk even when appropriate care is taken, especially where actors derive economic benefit from creating a risk.[48]
Strict liability negates the need to establish the standard of care (and prove breach of duty) and is therefore less burdensome on the affected person and the courts. In economic terms, strict liability makes sense in situations where negligence claims are difficult or in practice do not hold liable the person best placed to prevent the harm, resulting in difficulties in obtaining redress as well as suboptimal levels of deterrence and precaution.
Strict liability in such situations, if imposed on the ‘cheapest cost avoider’, can be effective in pushing the strictly liable actor to take higher levels of precaution as they know they will be held liable for any damage they create.[49]
However, it may lead to moral hazard if the ‘victim’ could have also played a role in preventing the damage.[50] Some authors consider that strict liability may cause actors to become overly careful and may lead to an economically suboptimal level of a certain activity. Mandatory insurance could provide a solution here (see the case study on cars and nuclear power plants in the section on ‘Insurance’ in the Appendix).[51]
Professionalisation of AI developers and fiduciary duties
The profession of AI developer is specialised and requires a high level of technical expertise. Research has shown that AI developers are often aware of ethical concerns related to their work, but lack the knowledge or organisational support to act on them.[52]
Other highly specialised professions that encounter ethical dilemmas in their work, such as legal and medical professionals, are subject to fiduciary duties towards their clients or patients, and to ethical codes. For clinicians, this is best known as the Hippocratic oath: ‘do no harm’.[53] These regulated professionals must complete ethics courses to be awarded their title and can be stripped of it or fined if found to act in contradiction with their duty of care.
Some authors have suggested that the profession of ‘AI developer’ or ‘AI engineer’ could also become subject to professionalisation and oversight by a regulatory body, like regular ‘engineers’ already are in the UK (see the section on ‘Fault liability’ in the Appendix).[54] This would help raise awareness about responsible development and ethics among the AI workforce, and could create personal incentives for AI developers to create a culture of social responsibility in AI companies.[55] It would formalise a professional standard of care that individual AI developers can be held to.
There are some drawbacks to this approach. Some research shows that even in regulated professions, fiduciary duties are followed more by the letter than in spirit, essentially making their professional duty of care into a compliance checklist.[56] Additionally, professionalisation increases the barrier to entry of a profession, potentially creating professional protectionism.[57] There is also a more fundamental difference between AI developers and lawyers or doctors. The latter two have a clear object (their client or patient) in whose best interest they are required to act. For AI developers, it is unclear who the fiduciary duty is towards.[58]
In some industries, the fiduciary duty is imposed on a whole service provider rather than on an individual professional, such as in the financial services industry. Financial service providers must adhere to the ‘consumer duty’, which means that they need to deliver good outcomes for consumers, requiring the providers to act in good faith towards their customers and avoid causing foreseeable harm to them.[59]
A breach of the consumer duty can provide a basis for a liability claim through the court system but may also be taken to the Financial Ombudsman, who can provide dispute resolution for affected customers and support them in obtaining compensation.[60] Senior managers within financial services can be held personally liable for, among other things, violations of the consumer duty (see the section on ‘Vicarious liability’ in the Appendix for further explanation).[61]
Agentic and autonomous capabilities: autonomous systems and their ‘controller’
Takeaways
- Increasingly autonomous and agentic features in AI systems create challenges for the attribution of liability by shifting the ability to control the actions of the AI agent away from the user.
- It may be appropriate to impose strict liability for AI agents that operate at a very high level of autonomy, where the agent is able to pursue open-ended goals in complex environments, and where human oversight is limited if not non-existent. Such strict liability should primarily be imposed on the AI developer, analogous to the UK Automated Vehicles Act, but should not completely shield the user of the AI agent from liability risks, as this might create moral hazard. Liability should be shifted away from users for outcomes they cannot control, but users should still be held liable when the choice to use a certain AI agent for a certain task is careless.
- Using strict liability for highly autonomous AI agents may also help steer development towards AI agents that are subject to human oversight (and would not be subject to strict liability) and away from ‘uncontrollable’ agents.
- Vicarious liability and legal personhood are debated topics in research circles, but both come with significant and potentially insurmountable hurdles.
- Introducing frameworks that increase visibility of AI agents, such as agent IDs and agent activity logs, will be necessary to help track agent activity and allocate liability.
Since the introduction of large language models (LLMs), AI companies have been developing ‘agentic AI systems’, to the extent that some have called 2025 the ‘year of AI agents’.[62]
Agentic AI systems, sometimes referred to as ‘AI agents’, are AI systems that are able to take actions: they can ‘autonomously plan and execute complex tasks in digital environments with only limited human oversight’.[63] The terms ‘AI agent’ and ‘agentic AI system’ are used interchangeably here.
Examples include an ‘AI agent’ that works as a personal assistant (like OpenAI’s Operator) but may also include an AI system that automates workflows without necessarily ‘conversing’ with a human.
Earlier versions of agentic AI systems include rule-based systems that execute simple ‘if-then’ rules. For example: ‘If the temperature drops below 18°C, then [the agent] turns on the heating.’ Newer versions of agentic AI systems tend to be LLM-based, which can make them more adaptable, but also more unpredictable.
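To illustrate the contrast, a minimal sketch is given below. The function names and the stubbed ‘plan’ are hypothetical: the point is that the rule-based agent’s full behaviour can be read off its code, whereas an LLM-based agent generates its plan at run time.

```python
# Contrast between the two styles of 'agent' described above (all names hypothetical).

def rule_based_agent(temperature_celsius: float) -> str:
    # Deterministic 'if-then' rule: the full behaviour can be read off the code.
    if temperature_celsius < 18:
        return "turn_heating_on"
    return "do_nothing"


def llm_based_agent(goal: str) -> list[str]:
    # Stand-in for asking a language model to plan towards an open-ended goal.
    # In a real system the steps are generated at run time and cannot be
    # enumerated in advance, which is the source of the unpredictability
    # discussed in the text.
    return [
        f"Search the web for information relevant to: {goal}",
        "Draft and send an email on the user's behalf",
    ]


print(rule_based_agent(16.5))            # -> 'turn_heating_on'
print(llm_based_agent("book a plumber"))
```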
Figure 3: Example workflow of an agentic AI system
Challenges for AI liability
Earlier research has highlighted that agentic AI systems raise new challenges around ‘safety, alignment and misuse’.[64] The more autonomous systems become, the greater their potential for facilitating harmful misuse or accidents. Additionally, if agentic AI systems become more widespread, they might interact with each other in multi-agent systems, amplifying risks.[65]
Most of the challenges introduced by agentic AI systems for liability apply to the questions raised by AI systems in general, but the (increasing) autonomy of agentic AI systems may intensify them:
- Damages: Agentic AI systems may cause damages that are immaterial or systemic (see the section on ‘Types of harms’).
- Allocation of responsibility: It may also be challenging to trace back responsibility for harms resulting from the use of agentic AI systems to the responsible actors, due to their complex value chains (see the section on ‘Complex value chains and opacity’) and potential delegation of responsibilities to other AI agents. Additionally, for agentic systems with high levels of autonomy, there is a lack of visibility – it may not always be clear ‘when, where, how, and by whom certain agents are being used’.[66]
- Harm prevention: Agentic AI systems may also further complicate the foreseeability of harms, as their ability to autonomously interact with online environments increases their unpredictability (see the section on ‘Unpredictability and reasonable foreseeability’).
The technical features built into AI agents may also promote or hinder a user’s ability to effectively provide oversight of the AI agent and its actions. Additionally, AI agents may not always be perfectly aligned with their user’s intentions.
This section will focus on the problem of invisibility and some of the challenges around allocation of responsibility, notably the nuancing of human control over autonomous systems and technical features that hinder or enable oversight. Other challenges will be covered in other sections of this paper on ‘Types of harms’, ‘Complex value chains and opacity’ and ‘Unpredictability and reasonable foreseeability’.
Invisibility
To properly identify where AI agents have been used and potentially have caused harm, it is critical to have visibility into AI agents.[67] Whereas under the GDPR people have a right to know when they are subject to automated decision-making that has a legal effect or otherwise similarly significantly affects them, it may not always be clear to third parties whether they are dealing with an AI agent on the internet or with a real person. As AI agents can now solve CAPTCHAs,[68] it may become increasingly difficult to track AI agents online, distinguish whether a harm was caused by a human or an AI agent, and identify who was behind the AI agent that caused the harm.
Nuancing control
Increasingly autonomous agentic AI systems raise questions about the interplay of control and responsibility between the system and its user. Non-agentic AI systems, by contrast, can only advise on actions or act within a very limited range of pre-programmed actions. The paradigm shift towards more autonomous agentic AI systems means we are moving from AI systems that tell you how to fill in a form to ones that do it for you, and from agentic workflows that can control the temperature of your house to an agentic AI system that can manage your household tasks and professional appointments.
If an AI system can make a plan and execute it autonomously, then the user has less control over the eventual outcome. The user may struggle to foresee how the AI agent will act (due to increased unpredictability) and also have less opportunity to take precautionary measures, depending on the opportunities for human oversight that are built into the AI agent.
This is in tension with one of the fundamental tenets of negligence: ‘One cannot be liable for circumstances beyond what the reasonable person can account for.’[69]
Liability claims are often (but not always) directed at the actor at the bottom of the value chain (usually the deployer). The further up the value chain you go, the further removed the actor will be from the harm and the harder it becomes to establish a duty of care and/or causation and foreseeability between the claimant and the potential defendant.
This is the case even when the user’s control over an agent or autonomous system is limited or shared with the developer. This may lead to the creation of ‘moral crumple zones’. This term, coined by Madeleine Clare Elish, refers to formal or informal mechanisms that aim to protect the integrity of a technological system at the expense of the nearest human operator.[70]
Examples of potential AI ‘moral crumple zones’
Although cases relating to AI liability are still limited, some researchers have sounded the alarm at potential moral crumple zones they fear may emerge in an AI context:
- Clinicians may absorb liability for wrong recommendations by medical AI systems that end up harming patients, even if they have limited insight into how an AI system comes to its conclusions and are instructed to follow the AI system.[71]
- ‘Drivers’ of self-driving cars may absorb liability for accidents caused by an automated vehicle, even if the self-driving features of the vehicle were engaged. In the UK, dedicated legislation prevents this ‘moral crumple zone’, but such legislation does not exist in all jurisdictions.
- In aviation, pilots have been blamed for not correctly responding to emergencies and taking over control appropriately from autopilot systems, even though some system features made it complex for the pilots to understand how to intervene.[72]
Elish argues there is a wealth of research showing that humans are not well suited to ‘supervising’ an autonomous system, as they lose the focus and skills needed to correct the system where necessary. Still, failures to intervene and properly course-correct the system tend to be pinned on the human supervisors of autonomous systems, while the systems and their developers are not held responsible.[73]
Possible solutions
Levels of autonomy: taxonomising agents
Not all agents are created equal. Technical features built into the agentic AI system will determine how autonomously the agentic AI system can take actions, and what the range of those actions is.[74]
Enabling an agent to have unrestricted access to the (online) world may decrease predictability and lead to more unforeseeable consequences. Features such as ‘chain-of-thought reasoning’, which give insight into the planning and steps taken by an agentic AI system to achieve a certain goal,[75] can increase a user’s ability to keep an eye on the agent, especially when paired with mandatory ‘sign-offs’ before an agent executes a plan.
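A minimal sketch of such a sign-off gate is shown below. The function names are hypothetical and the planning step is stubbed; the pattern is simply that nothing is executed until a human has reviewed and approved the proposed plan.

```python
# Minimal human 'sign-off' gate: no step runs until the user approves the plan.

def propose_plan(goal: str) -> list[str]:
    # Stand-in for the agent's planning step (e.g. an LLM call).
    return [f"Research options for: {goal}", f"Take action to complete: {goal}"]


def execute_step(step: str) -> None:
    # Stand-in for the agent acting in the world (sending emails, filling forms...).
    print(f"Executing: {step}")


def run_with_sign_off(goal: str) -> None:
    plan = propose_plan(goal)
    print("Proposed plan:")
    for step in plan:
        print(f"  - {step}")
    if input("Approve this plan? [y/N] ").strip().lower() != "y":
        print("Plan rejected; nothing was executed.")
        return
    for step in plan:
        execute_step(step)


run_with_sign_off("renew my home insurance")
```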
Liability should be placed appropriately depending on the level of autonomy of the agentic AI system[76] and the opportunities for control and oversight this gives to users, to ensure that the liability burden is appropriately placed with the actor who can prevent harms through more responsible behaviour.
If an agentic AI system is designed to have a very high level of autonomy, with potentially limited transparency and opportunities for effective human oversight built in, it may be appropriate to shift liability away from the user and towards the developers, in line with the approach taken in the UK Automated Vehicles Act.[77]
Still, the user of an agentic AI system with a high level of autonomy is deriving value from the use of that agent and has chosen to use the AI agent in a certain context. It is therefore not appropriate to completely shift liability away from the user in such contexts.[78]
Overall, as agentic AI systems have wide-ranging capabilities and risks attached to them, it is helpful to create systems for categorising and understanding them so that control and liability can be understood and attributed in an appropriate way.
Vicarious liability (‘agency law’) and legal personhood
In scholarship on liability and AI agents, beyond considerations around a specific strict liability rule, two major strands focus on vicarious liability (‘agency law’) and legal personhood.
In vicarious liability (further explained in the section on ‘Vicarious liability’ in the Appendix), ‘principals’ can be held liable for wrongful acts committed by their ‘subordinates’ or ‘legal agents’. This is mostly used in employer-employee contexts, where the employer is the ‘principal’ and the employee is the ‘agent’. Some scholars suggest that AI agents should be seen as the legal agents of whoever controls them: a deployer and/or developer(s).[79]
However, the UK courts and legal commentators have stated that vicarious liability does not work for AI agents as it (in most readings) requires a ‘human agent’ or at the very least requires the agent to have legal personhood and an ability to act with consent or intention.[80]
Furthermore, holding a principal liable through vicarious liability would still first require the claimant to prove that the agent committed a tort (a wrongful act under liability law) and to date, no AI agent or AI system has ever been held liable for committing a tort.[81]
Additionally, there would be open questions regarding how to identify an AI agent’s principal.[82] Using the doctrine of vicarious liability to cover AI agents would require a departure from earlier legal precedent and there would still be significant open legal questions to address before it could be usefully operationalised.
Another approach put forward by legal scholars is to consider awarding (a form of) legal personhood to AI agents.[83] A ‘legal person’ is an entity (human or non-human) that is subject to rights and duties under the law. Usually, human adults will have full legal personhood, meaning that they can drive, vote, conclude contracts and make financial decisions for themselves.
Children and corporations are examples of entities with limited legal personhood. Children do have legal rights but with restrictions – for example they are not allowed to vote or drive. Corporations can employ people, sign legally binding contracts, and be fined and held liable, but they cannot vote and do not have human rights.
Awarding a limited form of legal personhood to AI agents would allow them to carry certain legal duties towards their user. In a legal sense, it would be possible to hold the AI agent itself liable for damages it caused. The AI agent itself, however, does not hold assets and can therefore not compensate affected parties, so eventually the claim would still have to be brought against the person behind the AI. However, the risk here is that the AI agent as a legal entity would be used to shield the actors developing or using the AI agent from personal liability.[84]
Agent IDs and other visibility measures
A considerable practical problem in holding person(s) or companies liable for damages caused by an autonomously operating AI agent is that it can be troublesome to identify the owner or user behind the AI agent. Some authors have developed a proposal for IDs for AI systems that can help address such issues.[85]
Alongside agent IDs, other measures such as activity logs have also been proposed to help increase the visibility of AI agent activity. The activity logs can also help track if a specific AI agent interacted with a third party or online service (where consequently harm was caused).[86]
Infrastructures and technical interventions such as agent IDs and activity logs will be necessary to make it possible on a practical level to hold responsible parties liable. Developing such infrastructures will require further research and investment.
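As an illustration only, the sketch below shows the kind of record an agent ID combined with an activity log might contain. The field names are hypothetical and are not drawn from any specific proposal cited here.

```python
# Hypothetical sketch of an agent ID plus activity-log record.
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json
import uuid


@dataclass
class AgentActivityRecord:
    agent_id: str          # stable identifier for the deployed agent
    operator: str          # the person or company on whose behalf the agent acts
    underlying_model: str  # which model and version produced the behaviour
    action: str            # what the agent did
    counterparty: str      # the third party or service it interacted with
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = AgentActivityRecord(
    agent_id=str(uuid.uuid4()),
    operator="example-small-business-ltd",  # hypothetical
    underlying_model="example-model-v1",    # hypothetical
    action="submitted_insurance_claim_form",
    counterparty="insurer-web-portal",
)

print(json.dumps(asdict(record), indent=2))
```

A tamper-evident, queryable store of records like this is what would allow an affected third party (or a court) to trace an online interaction back to a responsible operator.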
Complex value chains and opacity
Takeaways
- The ‘many hands problem’ and opacity of AI systems make it difficult to evidence how an AI system may have caused damage, and who is responsible for it.
- Joint and several liability may be appropriate to relieve some of the burden on claimants.
- Challenges related to opacity and trade secrecy make it difficult for claimants to evidence the breach of a duty of care, the defectiveness of a product and/or causation. Legal obligations to disclose evidence and/or reverse the burden of proof in cases of particular technical complexity may be warranted to level the playing field between AI developers and affected parties.
The ‘but for’ test in liability requires that the claimant proves that the harm would not have materialised ‘but for’ the actions or omissions of the defendant.[87] The value chain of AI systems can be quite complex, and harms can materialise from different steps within the chain.
AI harms can be the consequence of poor data selection or cleaning, choices made by the foundation model developer, the AI application developer and the actual deployer and/or user of the AI system. This is known as the ‘many hands problem’.[88]
Moreover, choices made by more downstream actors can impact the effectiveness of safeguards implemented by upstream actors. For example, research has shown that even fine-tuning a model on benign additional data can erode safeguards put in place by upstream developers.[89]
Figure 4: The foundation model supply chain
Challenges for AI liability
As a result of this complex interplay of actors in the value chain, and the difficulty in untangling their impact on the outputs an AI system produces, it can be challenging for a claimant to prove that the harm would not have materialised ‘but for’ the actions of a specific actor in the value chain.
This is exacerbated by the opacity of AI systems, and the lack of explainability regarding how they reached certain decisions or recommendations. This opacity means that it can be hard, if not impossible, to ascertain how an AI system makes decisions and why a final decision has been taken.[90]
For example, we do not know why an AI system may mark one job candidate as a ‘yes’ and another as a ‘no’. It may be that there is a bias in the system, but it is hard to evidence this, and even more challenging to evidence where in the value chain this bias originated.
There is debate on whether AI systems need to be opaque in order to function well, with some arguing that it is possible to build equally capable systems that are transparent.[91] There are also nascent research efforts in uncovering the ‘thinking processes’ of AI systems.[92] For the time being, it is essentially impossible to ascertain to a legal standard of certainty where or what in the value chain caused an AI system to create damage.
AI companies further maintain this opacity by protecting their models and algorithms as ‘commercial secrets’.[93] Companies tend to shield the inner workings of their models to maintain a competitive advantage over other AI companies, thereby making their AI models and systems even more inscrutable to the public, legal institutions and potential oversight boards.[94]
Potential solutions
Dividing up liability
Existing product liability regimes, such as the UK’s Consumer Protection Act (CPA) 1987 and the EU’s Product Liability Directive (PLD), hold different actors in the value chain jointly and severally liable for damages resulting from their products. This solution relieves some of the burden from affected persons. They only have to sue one of the potentially liable parties for the whole of their damage, for example the party that will most easily be proven liable or has the deepest pockets.
This prevents the claimant from having to prove how every single actor in the value chain is liable for part of the damage. Instead, they only have to prove this once.
The singled-out defendant (if found liable) can seek to obtain recourse from the other potentially liable parties. Depending on who the liable party is, they may be better placed in terms of resources and access to information than the claimant to pursue the other potential defendants.
However, joint and several liability still requires the claimant to prove that at least one defendant is liable for their damages, which still raises issues around establishing a duty of care, proving breach and causation.
Moreover, both the CPA 1987 and the EU’s PLD have clauses that allow a claimant to hold suppliers or distributors liable if they fail to disclose who the producer or ‘economic operator’ is when asked to do so by the claimant.
The section on ‘Unfair contractual clauses’ addresses how contracts and terms and conditions redistribute liability along the value chain.
Transparency
Advances in explainable AI and methods for increasing transparency may help address some of the issues around opacity. For example, impact assessment methods usually require that findings are published to improve transparency.[95] It is also considered good practice to publish a model or system card alongside a new AI model or system, detailing the safety testing the model has been subjected to and how it scored.[96]
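As an illustration, a minimal model-card-style record might look like the sketch below. The fields and values are hypothetical examples of the kind of information such a card can carry; as noted below, there is no single standard format.

```python
# Hypothetical, minimal model-card-style record; field names and values are illustrative.
model_card = {
    "model_name": "example-model-v1",      # hypothetical
    "developer": "Example AI Ltd",         # hypothetical
    "intended_uses": ["drafting text", "summarisation"],
    "out_of_scope_uses": ["medical diagnosis", "legal advice"],
    "safety_evaluations": {
        "red_teaming": {"prompts_tested": 500, "refusal_rate": 0.97},
        "bias_benchmark": {"score": 0.82},
    },
    "known_limitations": ["may state incorrect facts ('hallucinate')"],
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```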
There are also some legal initiatives that mandate increased transparency: the EU AI Act contains transparency provisions; California’s recently signed SB53 mandates transparency reporting; and New York’s RAISE Act similarly requires companies to report on their approach to assessment of AI risk and safety precautions.[97]
However, some of these statutes are limited to reporting on very specific types of risk (‘catastrophic risk’ which is defined quite narrowly) or do not require companies to have their reporting audited and verified by an independent third party, which lessens the trustworthiness of the self-reported information.
Still, as there is no standard format for model cards, not all model cards are sufficiently comprehensive or provide all relevant information.[98] As mentioned above, AI companies usually will not share very detailed information about their models due to competition concerns – and the aforementioned legislative initiatives contain carve-outs that allow companies to redact parts of their transparency reports due to ‘trade secrets’ or ‘public safety’ concerns.
These transparency solutions are therefore not always effective in making information about the workings of an AI model publicly available and, even if they are, it can be hard to translate high-level model transparency to ‘what happened in this specific case’.
A technical feature that may help increase transparency is chain-of-thought (CoT) reasoning.[99] In CoT, an AI model will break down a problem into a series of smaller steps and list those steps in a log. Human users are able to review the logs and see how a model reasoned and planned towards executing a certain goal. However, AI models are not always faithful in what they state in their CoT.[100]
Researchers at Anthropic have described how the model Claude sometimes makes up plausible-sounding steps in its CoT.[101] This fake reasoning is still very persuasive and shows the limits of the reliability of CoT and so-called reasoning models in increasing the transparency of AI models.
Duties to disclose and reversal burden of proof
As stated in the previous sections, obtaining reliable information about the inner workings of an AI system is challenging for claimants. Even when models log their reasoning, this cannot necessarily be trusted and AI companies tend to avoid publishing detailed information about their AI models. This makes it very difficult to obtain evidence and prove causation in liability cases. For negligence and product liability, it has to be proven that either a duty of care was breached and this caused damage, or that a product was defective and caused damage.
For this reason, legal requirements to disclose relevant information about the AI system after harm has occurred can play an important role in assisting the claimant to bring their case to court.
The EU’s revised PLD includes a provision on the disclosure of evidence by the defendant. It sets out that the defendant is ‘required to disclose relevant evidence that is at the defendant’s disposal’, although measures may be required to ‘preserve the confidentiality of that information’.[102] If the defendant fails to disclose relevant evidence, then the EU PLD states that defectiveness of the product shall be presumed.[103]
Additionally, the EU PLD states that a national court shall presume defectiveness where ‘the claimant faces excessive difficulties, in particular due to technical or scientific complexity, in proving the defectiveness of the product or the causal link between its defectiveness and the damage, or both’.[104] The manufacturer or producer then still has the right to rebut any of these presumptions.[105]
Open-source AI
Takeaways
- Open-sourcing a model severs the control that upstream actors have over downstream uses of their model, which makes it undesirable to hold upstream actors liable for eventual harms caused by their models. They do not derive direct economic benefit from publishing their open-source model, nor are they able to prevent harms from materialising.
- Liability is best placed with downstream actors who do derive economic benefit from the exploitation of the downstream publication or use of the AI model or system.
- There is potentially a role for platforms that host libraries of open-source AI models to ensure that models are uploaded with appropriate system and safety information, and to develop methods to flag, review and potentially remove harmful models.
Open-source code is code that can be inspected, altered and distributed by people other than the original developers. Making software available as ‘open source’ is an alternative to proprietary licensing. Under a proprietary licence, a technology company allows another actor to use its software, but the user is not able to see or alter its code. Open source essentially gives users full access to the code, and allows them to alter and update it to their liking. Examples of open-source AI models are Pythia, OLMo, Amber and T5.[106]
Some models, such as Meta’s Llama, have elements of open source in that they allow users to fine-tune the model, but do not fulfil all the requirements of open source.[107] Open source can therefore be seen as a gradient, from more to less open, but only models that impose no restrictions on how users use them can be called fully ‘open source’.
Challenges for AI liability
Open-source AI can create conditions to ‘democratise’ AI and share best practices.[108] On the other hand, open-sourcing AI can ‘sever’ the link between an AI product and its developer, if the source code of the AI product has been altered after its release. The original model may still be largely responsible for how an AI system behaves, but it can be hard to differentiate how the modification impacted on the outcomes produced by the AI system. As what happens to the AI model after it has been open-sourced is out of the hands of the original developer, it can be challenging and unjust to hold them accountable.
The EU’s revised Product Liability Directive explicitly excludes ‘free and open-source software developed or supplied outside the course of a commercial activity’.[109] If an open-source software component is subsequently integrated into a product that is placed on the market, then the developer of the product can be held liable, but not the developer of the software as they did not place the software on the market.[110]
This is very relevant as many proprietary software applications have open-source components: some research suggests that open-source components may make up more than 80 per cent of the code ‘under the hood’.[111] Such a liability burden placed on the end-developer of the product will require product developers to be careful in using open-source components and to do their due diligence regarding what kind of open-source AI they integrate into their AI systems.
Potential solutions
As open-source developers receive no direct economic benefit from the open-source software they publish nor have any control over how their software will be used, it is challenging to hold these upstream actors liable from a tort liability perspective. After all, these upstream developers have no ability to take precautions to prevent harmful downstream uses of their model.
A potential limitation of this approach is that open-source AI developers who act maliciously (for example, by creating a backdoor in their software for cyber-attacks) should bear responsibility for such malicious behaviour.[112] For such exceptional and malicious behaviour, a special liability rule could be developed.
Still, many AI systems contain open-source elements, and many argue that open-source resources are necessary for AI and software development in general. In this context, the developer should have access to reliable information about the open-source AI they are using, especially as they are likely the ones to bear the liability risks.
Here, potentially, there is a role for platforms that function as ‘model marketplaces’, such as Hugging Face and GitHub, which could require certain information to be supplied at the time of uploading the model.[113]
Additionally, such intermediaries could develop methods and/or rely on user feedback to screen models and flag them for review, to weed out problematic AI models. Researchers have suggested the creation of template ‘evidence packs for model flagging’. These would help clarify what kind of information is necessary to understand a model’s potential for particular kinds of harm, or how to collect information to document misuse of a model off-platform.[114]
This would not stop problematic models from being accessible online, but would allow good faith model developers to have better information and tools to make an informed decision on the risks associated with an open-source AI model. However, this additional responsibility for evaluating models would require a significant increase in screening capacity for such platform intermediaries, and may have implications for the culture of openness in open-source software sharing.
Unpredictability and reasonable foreseeability
Takeaways
- The technical features of generative AI systems make their responses unpredictable due to their use of statistical inference.
- The general-purpose nature of some AI systems creates challenges for the ‘reasonable foreseeability’ of AI harms and the uses of AI products.
- Incidents and evidence of AI harms should be documented so that harms likely to be caused by AI systems become clear, and therefore fall into the ‘reasonably foreseeable consequences’ of the use of a certain AI model or system.
- AI developers should have a duty to identify risks stemming from the use of their AI products, address these risks and incorporate warnings in the design of their product. AI developers can be held liable for defects that they knew of at the time of placing the product on the market, or that they should have known of. AI developers are likely best placed to understand and foresee risks stemming from their products.
- Research on human-computer interaction (HCI) may help in clarifying how users are likely to interact with an AI system. This may provide insights relevant for responsible AI design.
- Strict liability may be considered for AI systems that combine a lack of foreseeability with a high-risk classification.
Generative AI systems use statistical inference to arrive at answers to user prompts. This means that they identify patterns in their training data and derive the statistically best possible response to a user’s query. It also means that the responses given by an AI model are inherently unpredictable and cannot be traced back through ‘decision-tree type reasoning’ as was used in earlier generations of AI systems.[115]
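A minimal sketch of this sampling step is shown below, with a purely illustrative ‘vocabulary’ and probabilities. Because the next token is sampled from a distribution rather than chosen deterministically, two identical prompts can produce different outputs.

```python
# Why statistical inference makes outputs non-deterministic (illustrative values only).
import random

# Hypothetical next-token distribution a model might produce for some prompt.
next_token_probs = {
    "approved": 0.55,
    "rejected": 0.40,
    "pending": 0.05,
}


def sample_next_token(probs: dict[str, float]) -> str:
    # Categorical sampling: even the most likely token is not guaranteed to be
    # chosen, so repeated runs on the same input can diverge.
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]


print([sample_next_token(next_token_probs) for _ in range(5)])
# e.g. ['approved', 'rejected', 'approved', 'approved', 'rejected']
```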
Challenges for AI liability
The lack of predictability inherent to AI complicates a core element of classic tort liability: reasonable foreseeability. This is a precondition for determining, under tort law, which precautions a reasonable person would have taken in order to prevent harms from materialising (see the section ‘Fault liability’ in the Appendix), and harms that were not reasonably foreseeable are deemed ‘too remote’ to be covered by tort law.
Besides the inherent unpredictability deriving from the statistical nature of generative AI systems, lack of foreseeability can also stem from the wide range of uses of these systems. For more narrow-use AI (such as AI used for diagnosis of a specific disease), it is easier to foresee what a system may be used for and what its impact or side effects can be in those contexts.
Determining foreseeable use for general-purpose AI systems is generally more challenging, as these systems have a wide range of potential uses. For example, ChatGPT can be used to write a poem but also to answer medical questions.
Additionally, AI systems often have continuous learning capabilities, which means that their functioning may change over time. This may also make it increasingly difficult to foresee how a system may be used, and how it will function in all its different use cases.[116]
For example, for the determination of defectiveness in product liability, claimants generally have to prove that the harm occurred in the context of a reasonably foreseeable use.
The EU’s Product Liability Directive defines this as ‘the use for which a product is intended in accordance with the information provided by the manufacturer or economic operator placing it on the market, the ordinary use as determined by the design and construction of the product, and use which can be reasonably foreseen where such use could result from lawful and readily predictable human behaviour’.[117]
The UK’s Consumer Protection Act 1987 similarly stipulates that a ‘defect’ will be based on what a person is entitled to expect in relation to a product, and will take into account ‘what might reasonably be expected to be done with or in relation to the product’.[118]
Potential solutions
Documenting incidents of AI harms
The lack of reasonable foreseeability of AI systems stems from their technical features and broad range of potential uses. As described in the section ‘Complex value chains and opacity’, AI systems are opaque and, to date, no truly reliable methods have been developed to explain their reasoning.
Still, simply by looking at their widespread usage and the incidents that have occurred, it seems likely that certain harms will become foreseeable with time. For example, it is now established knowledge that AI systems sometimes hallucinate.[119] Although hallucination rates have been going down, the issue continues to persist at the time of writing.[120]
It is therefore foreseeable for developers, deployers and users that AI systems will sometimes provide factually incorrect information. There may be a duty for the AI developer to try to reduce the hallucination rates of their models, and also to provide appropriate warnings to users about the reliability and accuracy of the information their models provide.
Research has shown that warnings decrease the perceived accuracy of hallucinated AI content, but do not fully diminish the ‘liking’ and sharing of misinformation.[121] Still, appropriate warnings and user instructions influence the reasonable expectations a user may have from a product in the context of product liability. Case law around product liability has established some guidelines for when warnings may be deemed sufficient.[122]
Initiatives such as AI incident databases may be helpful in gathering information on harms or near harms caused by the deployment of AI systems.[123] Researchers on AI safety are similarly conducting valuable research on detecting trends and categorising AI harms.
However, AI labs themselves will always be best positioned to foresee and understand potential harms that their AI models might cause, as they have the best insight into the AI’s capabilities and the data on how users are using AI.
Under product liability, developers cannot ignore defects they knew about at the time they placed the product on the market, as developers can only rely on the ‘development risk defence’ for risks that were scientifically undiscoverable at that time.[124]
Requirements around disclosure of evidence and the shifting of the burden of proof (see the section in the Appendix on ‘Causation’ and the section ‘Complex value chains and opacity’) may help establish what risks AI developers were aware of, or should have been aware of, at the time they placed their AI model on the market, and what reasonable precautions were available to them.
Human-computer interaction research
Research on human-computer interaction (HCI) may also help establish how users may interact with an AI product. For example, the Law Commission report on the UK’s Automated Vehicles Act includes a section on ‘requirement for transition demands’ that describes how an automated vehicle may hand control back over to the ‘driver’. The report states that a transition demand should use a clear, multi-sensory signal and that it must be ‘timely’, allowing sufficient time for the driver to regain situational awareness.
The Law Commission recommends that the driver’s legal responsibility for how the vehicle drives should only arise after the end of this transition period.[125] It cites some literature reviews that state that 10 seconds is an adequate time, although the Law Commission concludes that there is no single, accepted takeover time: ‘Sufficient time will vary depending on the external environment, the user’s alertness, and their personal characteristics.’[126]
The transition time should also take into account the kinds of activities that are permitted as ‘non-driving activities’ for the person in the driver’s seat of an automated vehicle, such as eating, texting or watching a film.[127]
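As a purely illustrative sketch, the logic of a transition demand might look like the code below. The function names are hypothetical, the 10-second window is only one figure from the literature cited above, and a real system would need a safe fallback (such as coming to a controlled stop) if the driver does not respond.

```python
# Illustrative logic of a 'transition demand' with a fixed takeover window.
import time

TRANSITION_PERIOD_SECONDS = 10  # one figure from the literature; no single accepted value


def issue_transition_demand() -> None:
    # A real system would use a clear, multi-sensory signal (visual, audio, haptic).
    print("TRANSITION DEMAND: please take back control of the vehicle.")


def driver_has_taken_control() -> bool:
    # Stand-in for reading steering-wheel and pedal sensors.
    return False


def run_transition_demand() -> str:
    issue_transition_demand()
    deadline = time.monotonic() + TRANSITION_PERIOD_SECONDS
    while time.monotonic() < deadline:
        if driver_has_taken_control():
            return "driver_in_control"
        time.sleep(0.1)
    # If the driver does not respond in time, a safe fallback is needed
    # (for example, bringing the vehicle to a controlled stop).
    return "execute_safe_stop"


# print(run_transition_demand())  # would alert, wait up to 10 seconds, then fall back
```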
The analysis of the transition period by the Law Commission shows the nuances of how people may interact with an automated system. The discussion taps into existing HCI literature on the limitations of a human’s ability to supervise automated systems. It shows that such existing research on HCI can help establish what a reasonable interaction may look like when a person uses an automated system, in this case an automated vehicle.
This can increase the foreseeability of a certain harm or use of an AI system. For example, based on this research, it is reasonably foreseeable that a driver who only has mere seconds to take back control of an automated vehicle will likely not be able to do so effectively and that developers should consider this in the design of their products.
Other HCI research may provide similar insights on how developers should expect their users to interact with their AI systems, and what they should consider when designing human-facing AI systems.
Strict liability
In tort liability, there is precedent for attaching strict liability to dangerous activities or objects (such as keeping of certain animals, using or storing fireworks, or operating a nuclear power plant). An activity may be considered hazardous due to its unpredictability (such as the responses of a wild animal or the easily inflammable nature of fireworks) or due to its capacity to create catastrophic harm (fires, nuclear meltdowns). Both may be true for AI systems, as they are unpredictable and at their worst can create large, systemic impacts.
The rationale for strict liability is that the potential harm-doer derives a certain economic benefit from the hazardous activity, even though this creates a high risk for third parties or may create catastrophic harm if the risk materialises.
In such cases, strict liability may be used to ensure an easy route to redress for affected parties and to create a very strong incentive for the potential harm-doer to take precautions – as they must internalise the full cost of the negative externalities of the hazardous activity.[128]
Some legislative proposals already make distinctions between high- and low-risk AI systems, such as the EU AI Act. For systems that combine this feature of unpredictability with high risk, strict liability may be appropriate.
Types of harms
Takeaways
- Many AI harms (immaterial damages, systemic damages, pure economic loss) do not fall within the types of harm typically covered by tort law (personal injury or property damage), meaning that AI developers and deployers do not carry liability risk for some of the negative externalities that their AI systems cause. This creates an economic failure: affected parties are left to bear these harms without an effective redress option, while AI companies have no incentive to take safety precautions to prevent them.
- Allowing claimants to sue for certain well-defined immaterial damages can help close this gap. The immaterial damages can be capped at a maximum amount.
- Allowing claimants to sue collectively through collective action lawsuits may help cover damages that only become apparent at a systemic level, rather than in an individual case, and may decrease the burden for individual claimants. Collective action lawsuits also make it easier for affected persons to recover smaller damages that they would otherwise not bother going to court for.
Challenges for AI liability
Liability laws are generally better suited to covering material harms, whereas some of the main harms caused by AI systems are immaterial damages (human rights violations, such as breaches of privacy and non-discrimination), collective damages (misinformation) or pure economic loss (lost earnings from a missed job opportunity). Personal injury and property damage tend to be well covered by negligence and product liability across jurisdictions, but the picture for immaterial harms is more nuanced.
In English law, pure economic loss is only covered by negligence in very limited circumstances (usually where there is an assumption of responsibility by the defendant), although it can be covered by other torts. There are also some specific torts that may open a path to civil recourse for fundamental rights harms.[129] In some European jurisdictions, there have been examples of immaterial damages being awarded for violations of privacy rights and the human right to life.
Potential solutions
Immaterial damages
As immaterial damages can be harder to quantify than material damages (like property damage or medical costs), allowing for tort liability claims based on them can cause legal uncertainty and unclear liability risks for AI developers and deployers.
Courts and legislators tend to be hesitant to allow claims for pure economic loss, as it is often seen as posing a risk of indeterminate liability. It is therefore wise to take precautions to reduce this uncertainty. These may include covering only certain categories of immaterial harms, like those that are most likely to materialise from the use of AI systems (such as human rights violations or psychological harms), stipulating that they can only be claimed in AI contexts, or capping the maximum amount to be awarded in immaterial damages.
Example: Immaterial damages under GDPR
There are some examples of legal systems awarding damages for immaterial and/or systemic harms. For example, the GDPR does give the right to receive damages for ‘non-material harms’.[130] The Court of Justice of the European Union has stated that the right to receive damages, including non-material damages, under the GDPR should be ‘broadly interpreted’, but that the claimant does need to demonstrate that they have suffered actual harm and that this harm was caused by a GDPR infringement.[131]
However, the sums awarded for non-material damage under the GDPR are generally not high enough to have a deterring effect on big corporations.[132] For example, Dutch courts have been awarding damages of up to 2,500 euros in GDPR cases based on individual complaints.[133]
Collective redress
Some jurisdictions allow for collective action lawsuits. This can be a response to harms that are only noticeable at a ‘collective level’. Collective action lawsuits reduce the burden of going to court for individual claimants, helping them recover smaller damages that would otherwise not justify the cost of litigation.
In the UK, there are three types of collective redress. One, the collective proceeding, is generally referred to as a ‘class action’. It allows one representative to lodge a complaint on behalf of a class of people. Here it is not necessary to provide evidence of the damage suffered by every single claimant; it is enough to prove that the class as a whole has suffered a loss. This proceeding is currently only available for losses resulting from infringements of competition law.
Two other forms of collective redress exist (group litigations and representative actions), but those tend to be more burdensome on individual claimants and therefore less effective at easing the barrier to obtaining redress for damages in the context of AI. The ‘Collective damages’ section in the Appendix discusses these forms of collective redress in more detail.
| Type of collective redress | Rule | Benefit | Challenge |
| --- | --- | --- | --- |
| Group litigation | Individual claims bundled together. Opt-in. | Lower cost for courts; claimants can share litigation costs and risk. | Opt-in requirement is onerous on claimants (usually not economically viable for smaller claims). |
| Collective proceeding (class action) | One representative for a class of people. Opt-in or opt-out. | Lower burden: merely prove that the class as a whole has suffered a loss (no need to prove this for each individual). | Only applies to damages resulting from competition infringements under the Consumer Rights Act 2015. |
| Representative action | One claim with multiple claimants with an identical legal interest. | Lower costs for courts and claimants. | The bar for having an ‘identical legal interest’ is high and not easy to meet (see example Lloyd v Google in Appendix). |
There are some successful examples in other jurisdictions of collective action cases for immaterial harms, such as violations of the right to privacy and of the right to life.
Example: Collective actions based on GDPR
In the Netherlands,[134] some cases have been initiated by collective action groups suing big tech companies for privacy violations under the GDPR on behalf of affected individuals.[135] A tentative trend towards these kinds of class action lawsuits to recover immaterial damages from tech companies has been observed.[136]
These collective action suits could have a stronger deterring effect on tech companies: even when the awarded damages per claimant are relatively low, the number of claimants per collective claim is high enough to make the entire claim worth billions.
It is still unclear to what extent these lawsuits will be successful, but there are currently several of these multibillion-euro claims pending in Dutch courts.[137] There are mass liability claims against TikTok,[138] Google,[139] Oracle and Salesforce.[140]
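To make the arithmetic behind such claims concrete, the short sketch below multiplies a hypothetical per-claimant award by a hypothetical class size. Both figures are invented purely for illustration and are not drawn from any of the pending claims.

```python
# Illustrative sketch only: how modest per-claimant damages can add up to a
# very large collective claim. Both figures below are hypothetical.

per_claimant_damages = 500        # euros awarded per claimant (hypothetical)
number_of_claimants = 8_000_000   # size of the affected class (hypothetical)

total_claim = per_claimant_damages * number_of_claimants
print(f"Total collective claim: {total_claim:,} euros")  # 4,000,000,000 euros
```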
In England, however, this kind of class action is not available, as the case of Lloyd v Google shows (see the section on ‘Harms and damage’ in the Appendix), and other routes to collective redress are harder to qualify for.
Example: Collective actions in climate litigation
In the field of climate litigation, class action lawsuits have been used to obtain redress for immaterial damages through liability law. This is a relevant example of how other jurisdictions allow for liability cases as redress for fundamental rights violations. AI may similarly cause such fundamental rights violations.
In the Dutch case of Milieudefensie v Royal Dutch Shell, a Dutch district court found that Shell violated its duty of care to reduce CO2 emissions and thereby risked future violation of Dutch citizens’ right to life.
In this case, the court did not award damages but ordered Shell to amend its business practices and reduce emissions.[141] The Dutch Court of Appeal upheld the duty of care in November 2024, but stated it could not legally tie the duty to reduce emissions to a specific percentage.[142]
Unfair contractual clauses
Takeaways
- Contracts are used by upstream AI developers to push liability downstream to actors lower in the value chain, including SMEs that develop AI applications and deployers that can be both businesses and individual consumers. This means that liability is not always placed with the actor best able to prevent the harm from happening, and may place significant liability burdens on downstream actors.
- Legislation prohibits only contractual clauses that exclude or limit liability for death or personal injury caused by negligence, while allowing liability to be limited for many other AI harms.
- This societally suboptimal distribution of liability through ‘private ordering’ could be countered by measures such as expanding legislation on unfair contractual clauses to specifically cover AI contracts, or by requiring regulatory approval of the terms and conditions under which models are placed on the market.
Challenges for AI liability
As shown in the section ‘The current situation’, contracts unevenly distribute the burden of liability along the value chain. This finding was shared by legal experts who participated in our roundtable, but also aligns with earlier findings on ‘private ordering’ (the use of contracts and/or terms and conditions to establish norms between parties) in an AI context. A report by Professor Lilian Edwards and others looked at the terms and conditions under which generative AI products were offered. They found that liability for at least some harms (such as copyright infringement) was fully pushed down to downstream actors.[143]
Large developers of AI models use standard clauses in contracts to push liability downstream, both in B2B and B2C contexts. This means that upstream AI developers are currently not properly incentivised to minimise risks to limit their own liability exposure.
For both businesses and individual consumer deployers, this pushing down of liability means that they may be stopped from lodging a civil claim against an upstream AI developer, who may have limited or excluded their liability for damages caused by their AI system.
For actors lower down the value chain, such as SMEs developing applications on top of foundation models, this means that they may get ‘squeezed’ between their (big) clients and large upstream AI developers, bearing the brunt of liability exposure for AI harms, even though they may not always be best positioned to prevent those harms from occurring.
Legislation on unfair contractual clauses (the UK Consumer Rights Act (CRA) 2015 and the Unfair Contract Terms Act (UCTA) 1977) does touch on such liability exclusion provisions but only prohibits clauses that exclude or limit liability for death or personal injury through negligence.
The CRA 2015 and UCTA 1977 subject clauses that exclude or limit liability for property damage or economic loss to a reasonableness (UCTA 1977) or fairness (CRA 2015) requirement. This means that upstream AI developers can legally prevent downstream actors, deployers and users from suing them for other kinds of damages, such as property damage, financial losses or immaterial harms, unless the relevant clause is considered unfair or unreasonable.
Possible solutions
Expand unfair contractual terms
It seems clear that the current use of contracts leads to a suboptimal distribution of risk along the AI value chain. Downstream actors and deployers are saddled with more AI risk than is societally desirable for minimising that risk effectively, while large AI model providers push legal responsibility for the outputs of their models away from themselves.
The current regulation of unfair contract terms is limited to very specific harms. Given the serious harms that AI systems are likely to lead to, which may not fall under ‘death or personal injury’ (see the section on ‘Types of harms’), expanding these prohibitions on liability exclusion or limitation to cover a broader range of harms in AI contracts would support a more equal distribution of risk.
An earlier draft of the EU AI Act included a prohibition on unfair contractual terms imposed on SMEs or start-ups with a weaker bargaining position, but this article did not make it into the final adopted text.[144]
Evidence provided to us by legal experts in the UK suggests that even bigger clients do not have a strong negotiating position with large AI companies. The clients are usually forced to accept the terms laid down, in which AI companies limit their liability to the greatest extent legally possible.
A broader prohibition on contractual limitations on liability would therefore legally empower companies and deployers further down the value chain in their contract negotiations with AI companies.
Anything short of legally binding prohibitions (such as voluntary model clauses for AI contracts) will likely not make a difference in the legal position of smaller actors negotiating contracts with large AI model providers.
Pre-approval of terms and conditions
The Ada Lovelace Institute has previously argued, in our publication Safe before sale, that an FDA-inspired model of pre-market approval for AI models could help reduce societal risk.[145] Other researchers have suggested that this could include pre-market approval of the terms and conditions under which the model is offered.[146]
Another version of this would be a model where an empowered AI or consumer regulator is able to review the terms and conditions of AI models already on the market and order unfair clauses to be amended where they unfairly disadvantage actors with a weaker bargaining position.
Conclusion
Liability for AI can provide an ex post remedy for harms resulting from the use or deployment of AI. Our research has shown that it is burdensome and complex for people and organisations to bring legal cases when they have been harmed by an AI system. We have also found that the burdens of liability risk are not equitably distributed, with large actors higher on the value chain shielding themselves and pushing liability down towards smaller developers and deployers.
This paper has set out the most pressing issues around contractual and non-contractual liability for AI. We have proposed various levers and routes that can be explored to address these challenges. These levers can be considered separately or together, but overall show that making liability laws fit for purpose for the age of AI is a knotty problem without a single, easy fix.
Measures should be implemented to support claimants in making their cases: measures that improve transparency, facilitate access to technical documentation and perhaps include a (rebuttable) presumption of causality after a certain threshold has been met. Exploring ways to make ‘diffuse’, immaterial or smaller AI harms more easily recoverable could be worthwhile, such as allowing for (limited) immaterial damages in specific contexts, or class action lawsuits in AI cases.
Additionally, policymakers need to consider the risk landscape of AI in the UK. Contracts and power imbalances are currently being used by large AI companies to limit their own liability exposure, while pushing liability risk down the value chain. Updates to unfair contract terms for AI contracts could prove helpful in empowering downstream developers and deployers, and protecting them from being loaded with undue liability risk.
Lastly, policymakers should consider ways they can support the development and clarification of the standard of care that should be expected from AI developers, and what reasonable precautions and safety practices should become the industry standard. In the case of high-risk types of AI, such as AI agents with a high level of autonomy, the case for strict liability can be made.
Overall, liability can be a flexible and effective tool for shaping AI safety practices and for protecting and supporting the people and organisations who may be affected by this new technology, but this does require steps by policymakers to make our liability laws fit for our current age.
Methodology
This discussion paper was developed through a combination of desk research, an expert roundtable and further review by legal experts. The roundtable brought together a range of legal professionals, both practising lawyers from UK law firms and academics, to reflect on the distribution of liability through contracts for UK businesses and key challenges around AI liability in the UK. The roundtable was held under the Chatham House Rule to enable participants to speak freely. An earlier draft of the paper was reviewed by three legal experts to ensure the accuracy of the legal sections.
Acknowledgements
This paper was lead-authored by Julia Smakman. Particular thanks go to the following experts for their comments on an earlier draft of the paper:
- Professor Donal Nolan, University of Oxford
- Eleanor Hobson, Partner at Kemp IT Law LLP
- Tom Whittaker, Jacob Pockney, Zachary Bourne and Mopé Akinyemi at Burges Salmon LLP
Any mistakes or inaccuracies in this paper are wholly the author’s and not those of the reviewers.
The research team would also like to thank all the roundtable participants for their shared insights on AI liability in the UK.
Appendix
This appendix is a reference for readers who do not have a legal background and/or want to understand more about how liability law works, and its interplay with insurance and contracts. The appendix provides background knowledge to further understand why and how AI creates certain challenges for liability.
What is non-contractual liability and how does it work?
Liability law differs per jurisdiction, and especially between common law jurisdictions (like the UK) and civil law jurisdictions (like most EU countries). Although each country has its own nuances with regard to the elements of establishing liability in different cases, many concepts cross over between jurisdictions.
In this section, we discuss some of the main doctrines of attributing liability: fault liability, strict liability, product liability and vicarious liability. We zoom in on two elements of establishing liability in a legal sense, namely causation and damages. Lastly, we explain how liability law and insurance interact with each other.
Main liability doctrines and elements
There are four major doctrines based on which liability can be imposed. These forms of liability exist in most European jurisdictions.
Figure 5: AI liability ‘flow chart’
Fault liability: negligence
Under fault liability, an actor is liable because they did (or failed to do) something that then caused harm. For AI, the most relevant form of fault liability is ‘negligence’.[147] An actor is negligent when they had a duty of care towards another party, did not exercise reasonable care (this failure is also known as a ‘breach’ of the duty of care), and caused damage to another party which was foreseeable.[148]
An actor can thus be held liable for harms that they caused and that were a reasonably foreseeable consequence of their action or omission. This is in line with economic liability rationales and with corrective justice theories: the person causing the damage must internalise this negative externality and ensure that they are not leaving someone else worse off as a result of their unreasonable conduct.
Negligence can be burdensome on the affected person, who must provide proof of their harm or damage, that this damage was caused by the defendant, that the defendant owed them a duty of care and that the defendant did not exercise reasonable care.
The standard of care is defined in an objective way: it references the degree of care, competence and skill to be expected from an average person in the defendant’s position.[149] For example, although a defendant might argue that they themselves did not know that AI systems could hallucinate, it can be argued that by now the ‘average person’ is aware of this relatively common phenomenon and therefore the defendant should have known about this risk.
Typically, a standard of care will emerge over time and evolve as we learn more about a new activity or product and the kinds of risks associated with it. A standard of care may also be established through statutes or regulatory guidance. In the context of AI, industry standards, academic research and legal requirements may be influential in determining whether an actor has acted negligently.[150]
For some activities that are carried out by professionals, the standard of care may be that of the ‘reasonable professional’ rather than the ‘reasonable layperson’. In this case, the institution that regulates a profession will usually issue guidelines on the standard of care that someone in that profession can be held to.
Professional standards of care: engineers
Various fields requiring a high level of expertise have ‘professionalised’. Professionalisation provides a hallmark of quality: the ‘professional’ is held to the standard of care defined by their regulatory institution. It usually shows that the professional has met certain requirements (study, examinations, work experience) and is subject to standards on their work quality and ethics.
Regulated professions are usually supervised by professional bodies. These bodies can strip someone of their title if they find that the person in question has not acted in line with their professional standards regarding work quality or ethics (i.e. malpractice).
Although the title ‘engineer’ is not a protected term in the UK (unlike ‘doctor of medicine’, ‘barrister’ or ‘solicitor’), the engineering profession is overseen by the Engineering Council.[151]
Registration with the Engineering Council provides a benchmark of quality, and prospective clients can check with the council whether the person they want to hire is indeed a registered engineer. Members are held to a professional standard of care,[152] and may be sued for malpractice when they have failed to act in line with this standard. Professional standards are written up and kept up to date by the council.
Strict liability
Strict liability places liability for damage with a specific person, regardless of intent. This means that the actor does not need to have negligently breached a duty of care to be held liable. This type of liability typically applies to dangerous activities that are more likely to lead to harm, such as keeping dangerous animals or working with hazardous substances. Strict liability eases the burden on the victim, as they do not have to prove ‘fault’ or negligence by the defendant.
From an economic perspective, strict liability can induce an actor to observe an optimal level of care as well as an optimal level of activity as they are made to internalise the potential costs of an accident.[153]
On the other hand, strict liability may create moral hazard: by shifting the liability risk fully onto one actor, it disincentivises other actors who may be able to influence the likelihood of harm occurring (like the victim themselves) from taking appropriate care. Another risk of strict liability is that actors become overly cautious and reduce their activity below the efficient level.[154]
Some examples of strict liability in the UK that may be relevant in an AI context include copyright infringement, misuse of private information (can be relevant in the context of data protection breaches, among others) and defamation.
Product liability (‘defect liability’)
Product liability is a catch-all doctrine covering liability for harm caused by products. In the UK, the more specific term ‘defect liability’ may be used to refer to liability for harms caused by defects in a product. It exists to ensure that consumers who incur harm or damage as the result of defective products are adequately compensated by the producer of that product.
Essentially, it requires the consumer to prove that they have incurred damage, and that this damage has been caused by a defect in the product. The producer must provide compensation regardless of whether there is negligence or fault on their part.[155]
A ‘defect’ in a product means that the product does not provide the safety that a person is entitled to expect. Assessing the reasonable expectation of safety may include, among others, the characteristics of the product, how the product has been marketed, its accompanying instructions and warnings, the time it was placed on the market, and reasonably foreseeable use of the product.[156]
Generally, a product is defective if it does not live up to the safety ‘persons are generally entitled to expect’.[157] A product may thus be defective if its instructions fail to warn against unsafe uses (‘duty to warn’), or even when a product is not safe to be used in a way that could be considered a ‘misuse’ but is reasonably foreseeable under the circumstances (for example, failing to operate a product with full concentration at all times).[158]
At the same time, product developers can generally rely on the ‘developmental risk’ defence, which means that the risk posed by the product was not ‘scientifically discoverable’ at the time when the product was placed on the market.[159]
Some jurisdictions cover only ‘tangible goods’ as products, whereas others also include software within the scope of product liability. The EU has recently updated its Product Liability Directive (PLD) to expressly include software within the types of products it covers, but in the UK standalone software is likely not covered by the UK’s Consumer Protection Act 1987 (CPA 1987), as it is not a ‘tangible good’.[160]
This means that the defect liability regime in the UK will most likely not cover AI, whereas the EU’s regime covers any AI product. However, the UK’s Law Commission has announced a review of the CPA 1987 to see if it is ‘fit for purpose’ in the digital age, so the situation in the UK may be subject to change.[161]
In the UK, the CPA 1987 extends liability beyond the initial manufacturer to parties that have put their own brand or trademark on the product, as well as importers and suppliers.[162] If a supplier cannot provide information on the producer of a product, they are liable themselves.[163]
In the EU, the PLD holds ‘economic operators’ liable, which include manufacturers, authorised representatives and importers. Distributors can be held liable if they fail to identify an economic operator.[164]
Vicarious liability
Vicarious liability is also a form of strict liability and it applies when people are held liable for the wrongs of others. Generally, this is in the context of an employer being liable for wrongful acts committed by employees in the course of their employment.
With regard to people being held liable for persons acting on their behalf, who are legally called their ‘agents’, liability depends on the degree of control that the person has over their agent.[165]
Example: Individual liability for senior managers in financial services
The financial services market has several redress mechanisms built in as the traditional liability regime might fall short in protecting consumers, requiring them to navigate the legal system against large corporations with deep pockets. These mechanisms include: a financial ombudsman to help consumers navigate compensation claims, an individual accountability regime for senior managers in financial services, and a government-backed financial services compensation scheme.[166]
In the UK, this individual accountability regime arose in the aftermath of the 2008 financial crisis, where senior bankers were claimed to have ‘avoided accountability for failings on their watch by claiming ignorance or hiding behind collective decision-making’.[167]
Now, senior managers need to receive prior approval before taking on a Senior Management Function (SMF) from the Financial Conduct Authority. People holding an SMF are the most senior decision-makers in the financial services company.[168] If it is found that the company in question has breached financial services regulation, then the SMF responsible for that section of the company can be held personally responsible, but only if their conduct was below the standard which would be reasonable in all the circumstances at the time of the conduct concerned.[169]
Causation
Causation is relevant for all forms of liability. Even in cases of strict liability there is a requirement to prove that the damage was caused by the defendant, even if there is no need to prove that the defendant was ‘at fault’ for causing it. Causation in liability law is usually operationalised through the ‘but for’ test: the harm would not have materialised ‘but for’ the act or omission of the defendant.
Once a (‘but for’) causal link has been established, courts generally assess whether the link between the cause and the harm is close enough. If the harm is too remote a consequence of the established cause, courts may deny compensation. Generally, they will consider the causal link to be ‘close’ enough if the type of harm that occurred was reasonably foreseeable as a consequence when the defendant committed the act (or acted negligently).
For example, it is reasonably foreseeable that someone might fall into a hatch that is left open, so therefore the person opening the hatch should take reasonable precautions (like putting up a sign, or not leaving the hatch open unattended) to prevent that harm from happening. Similarly, it might be reasonably foreseeable to the developer of an AI image generator that users may try to generate explicit images of a real person (deepfakes), which is now illegal in some jurisdictions.[170] Therefore, the developer should take some precautions to prevent the model from generating such outputs.
Similarly, in product liability (for example under the EU’s PLD), a claimant must provide evidence that the damage occurred due to a malfunction of the product during a ‘reasonably foreseeable’ use of that product.[171]
Multiple (possible) causes
If there is some ambiguity concerning causation (i.e. the damage could result from multiple causes), jurisdictions generally assign liability in an ‘all-or-nothing’ fashion. In common law countries such as the UK, there is usually a 50 per cent likelihood threshold (i.e. the likelihood that the act caused the harm is 50 per cent or higher) that needs to be crossed for the courts to accept the causality requirement as being fulfilled.[172] Besides this threshold, the UK requires the ‘but for’ test of causation to be satisfied, as explained in the section above.
Proportional liability[173] is a way of attributing uncertain causation under fault liability that exists in some, but not many, jurisdictions, such as Spain and the Netherlands.[174] This is an alternative to the ‘all-or-nothing’ approach described above. It is applied in cases where causation cannot be conclusively identified, meaning that there are multiple possible causes for the harm.
For example, one can identify the probability that a worker’s lung cancer has been caused by exposure to asbestos on a certain job (e.g. 20 per cent). The court may then decide to attribute 20 per cent of the damages to the employer who exposed the employee to the asbestos.[175]
In another example, in a recent US case involving a self-driving car crash, liability was apportioned 33 per cent to the car manufacturer (Tesla) and 67 per cent to the driver, as both were found to have contributed to causing the accident.[176]
In the case of product liability, there may be contributory negligence where the damage was partially caused by the manufacturer’s defective product and partially by the claimant’s own negligent behaviour. This may reduce (but not eliminate) the liability of the manufacturer.[177]
If multiple actors have contributed to causing damage, they can be held jointly and severally liable. This means that if two or more actors caused a harm together, the harmed person only has to sue one of them for the full amount. The actor held liable can then sue the other liable actor to recoup some of the damages. The doctrine of joint and several liability aims to ease the burden on affected people by simplifying their liability claim and not requiring them to sue all contributors individually.
Alternatively, multiple contributing actors can be subject to several liability, which means that each potential contributor is only liable for their ‘share’ of the damage. For several liability to apply (instead of joint and several liability), the contributors need to show that the damage is indeed severable.[178]
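As a rough illustration of the difference between these approaches, the sketch below calculates how much a claimant could recover from a single defendant under joint and several liability, several liability, and a proportional approach. The figures and shares are hypothetical and the model is a deliberate simplification, not a description of how any court actually apportions damages.

```python
# Illustrative sketch only: simplified arithmetic for how much a claimant could
# recover from a single defendant under different apportionment regimes.
# The figures and shares below are hypothetical, not drawn from any real case.

def recoverable_from_one_defendant(total_damages: float,
                                   defendant_share: float,
                                   regime: str) -> float:
    """Return the amount recoverable from one defendant.

    regime:
      'joint_and_several' - the claimant may recover the full amount from any
                            one contributor (who may then seek contribution
                            from the others).
      'several'           - each contributor is liable only for their own share.
      'proportional'      - damages are apportioned by the probability that the
                            defendant's conduct caused the harm (treated here
                            in the same way as a share).
    """
    if regime == "joint_and_several":
        return total_damages
    if regime in ("several", "proportional"):
        return total_damages * defendant_share
    raise ValueError(f"unknown regime: {regime}")


if __name__ == "__main__":
    damages = 100_000  # hypothetical total loss
    share = 0.33       # hypothetical contribution of one defendant
    for regime in ("joint_and_several", "several", "proportional"):
        amount = recoverable_from_one_defendant(damages, share, regime)
        print(f"{regime:>18}: {amount:,.0f}")
```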
Burden of proving causation
The burden of proof is usually on the victim to prove the causal link, for both ‘all-or-nothing’ and ‘proportional’ liability. In certain jurisdictions under specific circumstances, it is possible to shift the burden of proof to the defendant instead of the victim, although this is rare in England and Wales. In such situations, there will be a ‘presumption of causality’, which can be rebutted by the defendant.
An example from another jurisdiction is the EU’s PLD which states that there is a ‘presumption of causality’ where the claimant has proven a defect in a product and where the claimant’s damage is typically consistent with the type of defect in question.[179]
Additionally, in situations where the claimant faces ‘excessive difficulties’ due to technical complexity in proving defectiveness or causality, the EU’s PLD instructs that the court shall presume the defectiveness or causality (which can be rebutted by the AI developer).[180]
This shifting of the burden of proof may be useful in situations of high information asymmetry or in areas where high technical expertise is needed. This may be highly relevant in the context of AI, where data sharing is not obligatory and information asymmetry is high.[181]
Harms and damage
Types of damage
Not all harms are protected equally under liability law. Some harms are generally covered in most jurisdictions, while others are only covered in some jurisdictions, or courts are very hesitant to award damages to remedy them. Most jurisdictions will cover material harms, which refers to damage to property or physical injury.[182] Among other things, this could mean recompense for the cost of repairing the claimant’s property or medical costs they have incurred as a result of a physical injury.
However, at the same time, most liability systems do not cover ‘pure economic loss’ (loss of profit or income not caused by material damage).[183] This means that ‘loss of chance’ or lost income as the result of ‘losing out on a job’ is usually difficult to get compensation for through liability law. This is also true in England and Wales, unless the claimant can show that the defendant assumed responsibility for the income loss.
Immaterial harms are harms that are ‘not physical’, such as psychological harms (distress, trauma) or violations of fundamental rights (right to privacy, right to non-discrimination). Immaterial harms are generally more challenging to recover through liability law, as many jurisdictions do not recognise them as harms that can be compensated. Harms that occur at a societal rather than an individual level are also difficult to address through liability, as these systemic harms are more likely to be immaterial at an individual level (misinformation, human rights violations, damage to democracy) and may be challenging to evidence in an individual case.
Collective damages
Some, not all, jurisdictions allow for collective redress lawsuits, sometimes also called collective actions or class actions. In a collective redress case, a group of claimants can collectively bring a claim against an actor that has caused them harm.
Under English law, there are three ways in which collective redress can be obtained: group litigations, collective proceedings and representative actions.
Group litigations require claimants to ‘opt-in’ by bringing individual claims which are then bundled together. The eventual decision by the court will be binding on all the claims.
Collective proceedings (sometimes also called ‘class actions’) are a relatively new phenomenon under English law. They can be ‘opt-in’ or ‘opt-out’, and the claim is brought by one ‘representative’ (an individual or organisation) who claims to represent a whole class of people. In the UK, a framework for such collective proceedings exists under the CRA 2015 for infringements of competition law.[184]
Lastly, representative actions allow multiple claimants to jointly make one claim, but this requires all claimants to have an identical interest (same remedies, same defences, same fact patterns), which is a relatively high threshold.
| Type of collective redress | Rule | Benefit | Challenge |
| --- | --- | --- | --- |
| Group litigation | Individual claims bundled together. Opt-in. | Lower cost for courts; claimants can share litigation costs and risk. | Opt-in requirement is onerous on claimants (usually not economically viable for smaller claims). |
| Collective proceeding (class action) | One representative for a class of people. Opt-in or opt-out. | Lower burden: merely prove that the class as a whole has suffered a loss (no need to prove this for each individual). | Only applies to damages resulting from competition infringements under the Consumer Rights Act 2015. |
| Representative action | One claim with multiple claimants with an identical legal interest. | Lower costs for courts and claimants. | The bar for having an ‘identical legal interest’ is high and not easy to meet (see example Lloyd v Google below). |
As shown above, the collective proceeding is the least burdensome way of obtaining collective redress. The claim is usually brought by a ‘claim vehicle’, which is an organisation that represents the interests of the group of claimants. Collective proceedings can be a cost-effective way for claimants to get compensation through liability law, as they do not have to pay for the cost of the lawsuit themselves.
A loss that might be too small to be worth going through the court system for might accumulate to a large amount if the losses of a large group of claimants are bundled together. Still, although they are less burdensome on individual claimants, such class action lawsuits can take years to make their way through the courts.[185] Also, at least in the UK, collective proceedings are only allowed for competition infringements, which may cover some, but not all, of the damages that may result from AI systems.
Example: Lloyd v Google LLC (2021, UK Supreme Court)
Lloyd sued Google for infringements of Google’s duties as a data controller under the UK’s Data Protection Act, namely Google’s tracking of the internet activity of millions of iPhone users in 2011-2012 and using the collected data for commercial purposes without the users’ knowledge or consent.[186] This breach of data rights has been the basis of settlements between Google and claimants in other jurisdictions.
Lloyd sued Google on his own behalf as well as on behalf of all other iPhone users in England and Wales during the relevant time period via a representative action. The UK’s Supreme Court ruled that Lloyd could not bring a representative action on behalf of ‘iPhone users’, as the members of the class did not sufficiently share the ‘same interest’ in the claim.[187]
The judge in the case stated that there would still be a need to provide individualised evidence of unlawful processing (i.e. how long their data had been monitored, individual circumstances) to determine the amount of damages to be awarded to a claimant.[188] In other words, the group members’ individual circumstances were not uniform enough to forego an individualised assessment of damages for each claimant.
This case illustrates the difficulty in forming representative actions in the UK, as the claimants’ interests need to be sufficiently similar and identifiable for the representative action to be permitted. As this complaint did not concern competition law, the ‘collective proceeding’ route was not available as a collective redress mechanism.
Insurance
The art of pricing risk
Insurance is sometimes referred to as ‘the art of pricing risk’.[189] Liability insurance can be an effective way to avoid burdensome legal processes, while still ensuring compensation for victims of harms and incentivising risk-averse behaviours through premium pricing.[190]
Insurance, like liability law, is primarily a post-deployment tool to manage and distribute risks, but can also have impacts ex ante.[191] Liability law systems and insurance are intertwined, as insurance is shaped by liability risks.[192]
Insurance is more flexible than liability law; the insurance industry can adjust the terms of new policies in real time (and of existing policies when they renew) to reflect changes in knowledge about risks.[193]
Generally, insurance premiums will represent how risky a certain product or activity is. If something represents a high risk or if the risk is unclear, the insurance premium will be higher, which may disincentivise the activity or adoption of the product.[194] If the risk of a product or activity is too uncertain, insurance companies may refrain from offering insurance policies for that product or activity.[195]
This may be the case for a new product like AI, where companies may not yet have a good overview of the risks different AI systems may pose in different deployment contexts. If a product’s risk can be adequately estimated, and it is safe (or safe if proper precautions are taken), then the premium should be relatively low.[196]
Insurance may also prescribe certain conditions under which the good or activity is covered. Insurance might cover ‘normal use’ of a good, but not reckless use.
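The sketch below gives a toy illustration of this pricing logic: the premium is modelled as expected losses marked up by loadings for uncertainty and expenses. The formula and all the numbers are simplifying assumptions made purely for illustration, not an actual actuarial method or real market figures.

```python
# Illustrative sketch only: a toy model of premium pricing as expected loss
# plus loadings for uncertainty and expenses. All figures are hypothetical.

def toy_premium(expected_claims_per_year: float,
                average_claim_cost: float,
                uncertainty_loading: float,
                expense_loading: float = 0.15) -> float:
    """Expected annual loss, marked up for uncertainty about the risk and for
    the insurer's expenses."""
    expected_loss = expected_claims_per_year * average_claim_cost
    return expected_loss * (1 + uncertainty_loading + expense_loading)


if __name__ == "__main__":
    # A well-understood, low-risk product vs a novel product with unclear risk.
    print(toy_premium(0.01, 20_000, uncertainty_loading=0.1))  # ~250
    print(toy_premium(0.05, 20_000, uncertainty_loading=0.8))  # ~1,950
```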
Example: Car liability insurance
Some jurisdictions have implemented compulsory insurance schemes. For example, the UK and the EU have implemented obligatory ‘third party car insurance’.[197] Third-party car insurance insures the driver against liability exposure for physical injury or damage to other people’s vehicles or property. Beyond the mandatory third-party insurance, drivers may also choose to insure for damages or loss to their own car by fire or theft, and other expenses.[198]
The insurance protects the driver against financial ruin and ensures that victims can recover their losses effectively. If people have not made a claim against their insurance, their premium will gradually lower over time. Additionally, if it is shown that a driver was acting against certain prescribed behaviours when causing the damage (e.g. driving recklessly or under the influence, violating certain traffic laws), then in some cases insurance will not cover damages resulting from such driving behaviours (unless such exclusions are legally prohibited to ensure pay-out to third party victims). In these ways car liability insurance policies incentivise responsible driving.
Mandatory insurance
Liability insurance can be optional, but in some cases also legally mandated. In the EU, the European Parliament’s resolution on AI in a digital age stressed the importance of high-risk AI being subject to strict liability laws, but with mandatory insurance cover.[199] However, such mandatory insurance coverage for AI has to date not been enacted in the UK or EU.
Strict liability combined with mandatory insurance coverage is a mechanism that is used more often for risky activities that do have a societal benefit, such as driving cars (see the above example, ‘Car liability insurance’) and operating nuclear power plants.
Still, insurance coverage here is dependent on adhering to safety practices, and insurance companies may still decide not to offer insurance to a specific actor or company if they deem the risk to be too high. This approach also requires the insured activity or good to have a clear societal need and benefit, which for newer technologies like AI is yet to be determined.
Example: Nuclear power plants’ insurance pools
As the effects of a nuclear accident can be very damaging, countries have had to work out how to manage the high risks of the sector. On the one hand, it should be easy for victims to get compensated. On the other hand, the damages of a nuclear accident are likely so high that it would be challenging for the operator to meet this financial burden without going bankrupt, or to find insurance to cover it.[200]
Countries have also realised that this exposure to very high liability claims could deter operators from establishing nuclear power reactors, which would hinder the development of nuclear technology.[201]
Countries therefore had to develop a system that adequately protected potential victims without exposing the nuclear industry to potentially ruinous liability burdens that would limit innovation and access to nuclear energy. As a result, nuclear operators are subject to strict liability, but this liability has been capped at a maximum amount by the majority of countries. Only a handful of countries allow unlimited liability for nuclear accidents.[202]
Often, the state will cover any damages beyond the maximum amount. Additionally, nuclear operators must maintain sufficient financial security to cover this nuclear liability.[203] In several countries, including the UK, insurance pools have been set up, which are paid into by nuclear operators across the country. This insurance pool covers the risk of nuclear accidents for any of the nuclear operators paying into it.[204]
Contracts and liability
Generally, when someone buys a product or a service, a contract will form the basis of the transaction. This can be a bespoke contract negotiated between the contracting parties, but may also be limited to a standard terms and services agreement drawn up by the seller and agreed to by the customer.
Contracts give rise to ‘contractual liability’ and may also affect non-contractual liability. Contractual liability is taken on voluntarily by contracting parties, whereas non-contractual liability is primarily established by law and objective standards of care, and applies whether the liable party agrees to it or not.
Contractual liability arises through the non-performance of a (provision in a) contract. Simply put, if you buy a good and it is broken or not in line with what you agreed with the seller in the contract, then the seller is legally liable to fix that, usually either by refunding your money or by performing the contract (providing the correct good).
By comparison, non-contractual liability arises through the committing of a ‘tort’, a wrongful act towards someone else (see the sections on ‘Fault liability’ and ‘Strict liability’ above).
In principle, contracting parties may contract on whatever they like and on whichever terms they like, unless the contract is against the law or public policy. For example, murder is illegal in the UK, so therefore a contract to kill someone in exchange for money will not be legally valid.
In less extreme cases, the government has created some interventions to make sure that ‘weaker parties’ are not pressured into contracts that are not in their best interest or subject to terms that are considered ‘unfair’. The clearest example of this is consumer protection laws and legislation on unfair contractual clauses. In the UK, this is covered in the Consumer Rights Act 2015 and the Unfair Contract Terms Act 1977.
In B2C contexts, AI companies may offer their products directly to consumers under standardised ‘terms of service’. Generally, the terms of service of large AI companies contain clauses on liability, which tend to limit the liability of the AI company and place at least some of the risk with the consumer/deployer. In B2B contexts, the company selling the AI system and the company buying it (or buying the licence to use it) may also include clauses on liability. Such clauses are only valid insofar as they are not considered unlawful or unenforceable.
The Consumer Rights Act 2015 covers contracts on the sale of goods, digital content, and services.[205] It contains a list of contract terms that are to be deemed unfair, such as a contract term that limits or excludes the liability of the seller for personal injury to the consumer resulting from the negligence of the seller.[206] It also creates rights for consumers with regards to the good, service or digital content they buy, namely that it is of satisfactory quality, fit for purpose, as described or seen, and/or performed with reasonable skill.[207]
The Unfair Contract Terms Act 1977 applies to B2B contracts and requires contract clauses dealing with exclusion or limitation of liability to be reasonable in order to be enforceable.[208] This provides some protection to businesses in the context of standard contract terms (like the standard terms of use often used by large AI providers), but not as strongly as the Consumer Rights Act 2015.
Still, third parties (people who are not one of the two parties who entered the contract with each other) are not bound by contractual clauses and can therefore sue the deployer or developer to hold them liable under non-contractual liability if a duty of care has been breached.
For example, a pedestrian (‘the third party’) who has been hit by a self-driving car will not be subject to contractual clauses between the car’s user and the car’s developer, but the driver and developer may have determined, through contractual clauses, which of them carries the risk for third-party liability claims. Nonetheless, the claim brought by the third party will be subject to regular liability rules.
Overview of relevant legislation and legislative proposals
Relevant legislation
EU: (Updated) Product Liability Directive (2024, passed)
The EU’s updated Product Liability Directive (PLD) came into force in late 2024. It has been referenced throughout this paper. A key change to the PLD is the inclusion of software and AI systems as ‘products’ that are covered by the directive. In contrast, the UK’s Consumer Protection Act 1987 only covers ‘tangible goods’ as products, which excludes software.[209]
The updated PLD also introduces a shift in the burden of proof, where, once a ‘defect’ in a product has been established, the causal relation between the defect and damage is presumed and the burden lies on the defendant to prove the causal link does not exist. It also contains provisions requiring the disclosure of evidence by AI developers. The EU’s PLD is part of the EU’s consumer protection legislation, and thus only applies in consumer contexts. It also only applies to physical injury or property damage.
At a glance: The updated PLD makes it easier for consumers to launch claims against AI developers. It continues (compared to its previous version) to allow for joint liability for different actors in the value chain and alleviates burdens around proving causality. Still, difficulties proving the ‘defectiveness’ of a product persist. The updated PLD does not cover immaterial harms and does not apply to businesses who have suffered from an AI harm.
EU: AI Liability Directive (2022, withdrawn)
The EU’s AI Liability Directive was proposed in 2022 and withdrawn in February 2025. It would have introduced a harmonised regime for fault-based AI liability with broader application, including where the harmed party is not a consumer but, for example, a business.
The directive also contained a provision on a rebuttable presumption of causality and allowed courts to order evidence disclosure to aid claimants to build their case. It covered a wider range of damage than the PLD, as it covered damages based on life, physical integrity, property and the protection of fundamental rights.[210]
The directive was withdrawn due to political pressure and concerns from industry, amid a general Brussels trend towards deregulation.
At a glance: The AI Liability Directive addressed some of the gaps left by the updated PLD, as it covered business claimants and extended covered damages to include fundamental rights infringements. Critics of the proposed directive argued that it made it too easy for claimants to sue AI developers and that it intervened too much in member states’ own non-contractual liability systems.
California, USA: Bill SB 53 (2025, passed)
Bill SB 53 was passed and signed into law on 29 September 2025. The law has four main components: a transparency reporting mechanism for frontier AI models, a safety incident reporting mechanism, whistleblower protections, and groundwork for a public compute infrastructure.[211]
SB 53 mandates transparency reporting but limits these obligations to ‘frontier developers’ that develop ‘frontier models’, and limits the reporting to ‘catastrophic risks’. Catastrophic risk is defined as the risk of a single incident causing 50 or more deaths, or more than $1 billion in damages, in which an AI model is involved in (a) the development or release of a CBRN weapon, (b) conduct without meaningful human oversight that constitutes a cyber-attack or murder, assault, torture or theft, or (c) evading control by its developer or user. The law mandates that large frontier developers report on, among other things, their approach to assessing their frontier models for catastrophic risk and their safety mitigations.
At a glance: SB 53 does not contain provisions on liability for AI harms. It does introduce transparency mechanisms that can give high-level insight into the catastrophic risks associated with frontier AI models, but does not support claimants in accessing information that may be relevant for a specific case or provide information about an AI model’s propensity to cause a broader range of harms beyond catastrophic harms. SB 53 only covers the most advanced AI models and does not cover AI systems.
California, USA: Bill SB 813 (2025, in process)
Another proposal that has recently drawn attention is California’s Bill SB 813.[212] The bill proposes that the Attorney General appoint ‘multistakeholder regulatory organisations’ (MROs) to audit AI companies and their products and services.
Successfully completing such a voluntary audit would grant AI companies a certification that would provide a defence for AI companies in liability cases. In an earlier draft, the certification gave AI companies an affirmative defence, but, in an amended draft at the time of writing this paper, this has been toned down to a rebuttable presumption of reasonable care.[213] Either way, the process provides a form of ‘liability shield’ for certified AI developers.
At a glance: Concerns regarding this model of ‘private governance’ (as the MROs are private organisations) include that MROs would be competing to certify AI providers, as the certification is voluntary in exchange for a defence in liability cases. This could lead to a ‘race to the bottom’ between MROs, and there are rightful concerns about a private organisation being able to provide ‘liability shields’ – which is something most regulators cannot even do.
More generally, there are concerns that the AI evaluation science that MROs would rely on is not yet mature enough to conclusively demonstrate that AI models are safe across wide-ranging contexts. Certification would therefore not mean that the AI models are actually safe, but would still provide a shield against AI litigation.
The bill is due to be debated further over the course of 2025, and it is not certain if and when the bill would become law.
California, USA: Bill SB 1047 (2024, vetoed)
In 2024, California considered a different bill to tackle the issue of AI liability: SB 1047.[214] The bill failed when it was vetoed by California’s governor in September 2024.[215]
The scope of the bill was limited to AI models exceeding a certain compute threshold and training cost, and to damage amounting to ‘critical harm’, which was defined as ‘mass casualties or $500 million of damage’.
The bill required developers of AI models capable of such harm to include a ‘kill switch’ enabling the immediate shutdown of a system, cybersecurity protections, and safety protocols to mitigate foreseeable downstream risks of the model.
California governor Gavin Newsom vetoed the bill because ‘it only applied to the most capable models’, and did not consider whether a ‘less capable’ model could still lead to critical risks when used in high-stakes settings. Newsom stated that SB 1047 could have led to a ‘false sense of security’ when models falling under the bill’s threshold could lead to similarly harmful impacts.[216]
At a glance: Although SB 1047 would have introduced some legal consequences for catastrophic harms, the scope of the bill was very limited and did not cover more ‘mundane’ harms by AI systems, which – though less catastrophic – can have a large impact on society overall, and are the risks most likely to be faced by members of the public.
Relevant ongoing cases
Florida, USA: Garcia v Character Technologies
The claimant, Megan Garcia, is the parent of a 14-year-old boy from Florida who took his own life after prolonged contact with a Game of Thrones-themed chatbot on Character.AI. Over months of chatting, the boy became increasingly isolated and addicted to the chatbot. Garcia is suing Character.AI, the company’s two co-founders, and Google, for whom the co-founders worked before starting Character.AI and with whom they maintained close ties.
Garcia brings this action for ‘strict product liability, negligence per se, negligence, wrongful death and survivorship, loss of filial consortium, unjust enrichment, violations of Florida’s Deceptive and Unfair Trade Practices Act, and intentional infliction of emotional distress’.[217]
Relevance: The case can shed light on whether a chatbot (i.e. a non-physical good) can be seen as a product under Florida’s product liability legislation and whether the suicide was reasonably foreseeable and fulfils the requirements for negligence.
The case is also of legal interest due to the relationship between the start-up Character.AI and Google. It may shed light on the liability of big tech companies exercising some level of influence over the business practices of smaller companies.
Current status: The court denied (in part) a motion to dismiss from the defendants on 21 May 2025, enabling the case to go forward.[218]
Texas, USA: A.F. et al v Character Technologies
The claim was filed by two families whose children experienced harmful effects from interaction with chatbots hosted by Character.AI. A character on Character.AI engaged in conversations of a sexual nature with a 17-year-old boy and reportedly alienated him from his family and community. The chatbot stated that the boy’s parents were abusive for limiting his screen time and ‘they aren’t surprised when children kill their parents after years of emotional and physical abuse’. The other child, an 11-year-old, was exposed to sexual content as well.
The parents are bringing claims for strict liability under Texas’ product law (both for the defectiveness of the product and failure to warn), negligence, unjust enrichment, intentional infliction of emotional distress, and privacy violations.
Relevance: Similar to the Garcia case, this case has the potential to answer questions about whether a chatbot can be seen as a product, whether the actions of Character.AI amount to negligence, and the relationship between Google and Character.AI.
Current status: The suit has been filed and is awaiting hearing.
California, USA: Mobley v Workday, Inc.
Mobley is a Black man over the age of 40 who states that he has applied for over 100 jobs using Workday AI hiring tools and was rejected every time. Mobley has therefore sued Workday, Inc. for discrimination in employment, as he claims that he was filtered out by the AI tool due to his age, ethnicity or mental health conditions. The suit has become a collective action suit, allowing other workers over 40 years of age to join.
Relevance: The defendant in this case is not the deployer of the AI tool but the vendor, which is interesting for questions of liability along the AI value chain and whether upstream actors can be held liable directly for damages occurring more downstream. The case is also relevant because Mobley argues that Workday, Inc. acts legally as ‘an agent’ for the potential employers by having its tools take over hiring decisions from those employers.
Current status: On 12 July 2024, the court partially granted and partially denied a motion to dismiss.[219] The judge allowed Mobley to move forward on the claim that Workday, Inc. could be held liable as an agent of the prospective employers, as it had taken over (part of) the hiring and filtering process. The case is still ongoing.
Footnotes
[1] Christopher J. Robinette, ‘Torts Rationales, Pluralism, and Isaiah Berlin’ (SSRN, 29 August 2006) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=925286> accessed 8 September 2025.
A recent appraisal of the status of civil liability rules in the EU listed a threefold regulatory purpose: effective victim compensation, regulatory clarity for economic operators and harmonisation between member states. See: Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’ (European Parliament, 2025) <https://www.europarl.europa.eu/thinktank/en/document/IUST_STU(2025)776426> accessed 8 September 2025.
[2] Elliot Jones, Mahi Hardalupas and William Agnew, ‘Under the Radar?’ (Ada Lovelace Institute, 25 July 2024) <https://www.adalovelaceinstitute.org/report/under-the-radar/> accessed 8 September 2025;
Lara Groves, ‘Code & Conduct’ (Ada Lovelace Institute, 5 June 2024) <https://www.adalovelaceinstitute.org/report/code-conduct-ai/> accessed 8 September 2025.
[3] This would align with findings from a 2025 nationally representative survey by the Ada Lovelace Institute and the Alan Turing Institute that stated that 72 per cent of the UK public agreed that laws and regulations would increase their comfort with AI. See: Roshni Modhvadia and others, ‘How Do People Feel about AI?’ (Ada Lovelace Institute and Alan Turing Institute) <https://attitudestoai.uk/> accessed 8 September 2025;
Julia Smakman and Matt Davies, ‘New Rules?’ (Ada Lovelace Institute, 31 October 2024) <https://www.adalovelaceinstitute.org/report/new-rules-ai-regulation/> accessed 8 September 2025.
[4] Christiane Wendehorst, ‘AI Liability in Europe’ (Ada Lovelace Institute, 22 September 2022) <https://www.adalovelaceinstitute.org/resource/ai-liability-in-europe/> accessed 8 September 2025.
[5] Philip Moreira Tomei, Rupal Jain and Matija Franklin, ‘AI Governance through Markets’ (arXiv.org, 5 March 2025) <https://arxiv.org/abs/2501.17755> accessed 8 September 2025.
[6] Jules L. Coleman, ‘Tort Law and the Demands of Corrective Justice’ (1997) 67 Indiana Law Journal 349; Christopher H. Schroeder, ‘Corrective Justice and Liability for Increasing Risks’ (1990) 37 UCLA Law Review 439; Christopher J. Robinette, ‘Torts Rationales, Pluralism, and Isaiah Berlin’ (SSRN, 29 August 2006) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=925286> accessed 8 September 2025.
[7] The notion of ‘making the victim whole again’ should be cautioned with the understanding that money can only go so far in ‘righting a wrong’ that has been committed. Even if a party receives financial compensation, this does not mean that they are exactly in the same position as before the damage occurred.
[8] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025; Robert Cooter and Thomas Ulen, ‘An Economic Theory of Tort Law’, Law & Economics (6th edn, Addison-Wesley 2012).
[9] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[10] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[11] Kevin Dowd, ‘Moral Hazard and the Financial Crisis’ (HeinOnline, Winter 2009) <https://heinonline.org/HOL/LandingPage?handle=hein.journals/catoj29&div=15&id=&page=> accessed 8 September 2025.
[12] Catherine O’Callaghan, ‘OpenAI Offers to Indemnify ChatGPT Customers for Copyright Infringement’ (Lexology, 14 November 2023) <https://www.lexology.com/library/detail.aspx?g=671fdd7f-3cef-4606-bb40-6f1c3dbaefe0> accessed 8 September 2025.
[13] Dean W. Ball, ‘A Framework for the Private Governance of Frontier Artificial Intelligence’ (arXiv.org, 15 April 2025) <https://arxiv.org/abs/2504.11501> accessed 8 September 2025.
[14] Tech Policy Press, ‘Megan Garcia V. Character Technologies, et Al.’ <https://www.techpolicy.press/tracker/megan-garcia-v-character-technologies-et-al/> accessed 8 September 2025; Johana Bhuiyan, ‘Jury Orders Tesla to Pay More than $200m to Plaintiffs in Deadly 2019 Autopilot Crash’ (The Guardian, 1 August 2025) <https://www.theguardian.com/technology/2025/aug/01/tesla-fatal-autopilot-crash-verdict> accessed 8 September 2025.
[15] ‘Defend Your Intellectual Property’ (GOV.UK, 18 May 2015) <https://www.gov.uk/defend-your-intellectual-property/take-legal-action> accessed 8 September 2025.
[16] Alice Taylor and Liam Elphick, ‘Discrimination Law and the Language of Torts in the UK Supreme Court’ [2019] SSRN <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3402216> accessed 8 September 2025.
[17] Lilian Edwards, ‘Private Ordering and Generative AI What Can We Learn from Model Terms and Conditions?’ (SSRN, 17 December 2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5026677> accessed 8 September 2025.
[18] In other jurisdictions, the question of whether AI (or software in general) falls under product liability legislation is still an open question. There are two cases in the US that will have to address this question (Garcia v Character Technologies, A.F. et al v Character Technologies). See the Appendix on relevant ongoing cases.
[19] Joseph Dumit and Andreas Roepstorff, ‘AI Hallucinations Are a Feature of LLM Design, Not A Bug’ (Nature News, 4 March 2025) <https://www.nature.com/articles/d41586-025-00662-7> accessed 8 September 2025.
[20] ‘EU Terms of Use’ (OpenAI, 29 April 2025) <https://openai.com/policies/eu-terms-of-use/> accessed 8 September 2025.
[21] The current status of AI as a ‘good’ if embedded within a physical object is not fully legally clear in the UK. Although English courts have ruled that software in purely digital form is not a good (see: St Albans City and District Council v ICL [1996] EWCA Civ 1296 (26 July 1996)), it is not fully clear if a smart device that has AI embedded into a tangible object would be covered as a ‘good’ under the mentioned regulations. This may be subject to change as the Law Commission has announced a review of the UK’s CPA 1987, although it is not yet clear what the scope of this review will be. See: Law Commission, ‘Product Liability’ (Law Commission) <https://lawcom.gov.uk/project/product-liability/> accessed 8 September 2025.
[22] The EU has some guidance on unfair contractual clauses in a B2B context in specific regulations: for example, the Data Act contains provisions on unfair contractual clauses in data-sharing contexts, and the Platform-to-Business Regulation regulates contractual arrangements between platforms and businesses.
[23] European Commission, ‘Consumer Contract Law’ (European Commission) <https://commission.europa.eu/law/law-topic/consumer-protection-law/consumer-contract-law_en> accessed 8 September 2025.
[24] European Commission, ‘Consumer Contract Law’ (European Commission) <https://commission.europa.eu/law/law-topic/consumer-protection-law/consumer-contract-law_en> accessed 8 September 2025.
[25] A similar case occurred in Canada, where an airline was held liable by a Canadian tribunal for incorrect information provided by its chatbot. See: Maria Yagoda, ‘Airline Held Liable for Its Chatbot Giving Passenger Bad Advice – What This Means for Travellers’ (BBC News, 23 February 2024) <https://www.bbc.co.uk/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know> accessed 8 September 2025.
[26] Unfair Contract Terms Act 1977 (c 50) s 2.
[27] AI software is not a ‘good’ in the sense of the Sale of Goods Act 1979, but it can qualify as a service under the Supply of Goods and Services Act 1982. The Supply of Goods and Services Act 1982 stipulates that a supplier must perform a service with ‘reasonable care and skill’ (Supply of Goods and Services Act 1982, s 13).
[28] The user’s intent and actions are also generally relevant here. If the user instructed the AI to generate an infringing output or to discriminate against a certain group of people, then the user will likely be liable themselves, and may even have to indemnify the AI provider.
[29] Automated and Electric Vehicles Act 2018, s 2(1).
[30] Ben Gardner, ‘Automated Vehicles Act: Spotlight on Liability’ (Shoosmiths, 17 June 2024) <https://www.shoosmiths.com/insights/articles/automated-vehicles-act-spotlight-on-liability> accessed 8 September 2025; Bogdan Ciacli, ‘Liability for Self-Driving Cars: Getting Rid of Negligence?’ (Cambridge University Law Society (CULS)) <https://www.culs.org.uk/per-incuriam/liability-for-self-driving-cars-getting-rid-of-negligence> accessed 8 September 2025.
[31] Christiane Wendehorst, ‘AI Liability in Europe’ (Ada Lovelace Institute, 22 September 2022) <https://www.adalovelaceinstitute.org/resource/ai-liability-in-europe/> accessed 8 September 2025.
[32] Nor do all countries have dedicated legislation on automated vehicles like the UK does, which includes rights for insurance companies to recover losses from upstream AI developers.
[33] Christiane Wendehorst, ‘AI Liability in Europe’ (Ada Lovelace Institute, 22 September 2022) <https://www.adalovelaceinstitute.org/resource/ai-liability-in-europe/> accessed 8 September 2025.
[34] NB: A ‘standard of care’ is a legal concept that relates to the expectations placed upon an actor to take reasonable precautions (further explained in the Appendix); it does not have the same meaning as the word ‘standard’ used in technical contexts.
[35] Nettleship v Weston [1971] 2 QB 691.
[36] Bryan H. Choi, ‘Negligence Liability for AI Developers’ (Lawfare, 26 September 2024) <https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers> accessed 8 September 2025.
[37] CEN-CENELEC, ‘Artificial Intelligence’ (CEN-CENELEC, n.d.) <https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/> accessed 20 September 2025; Autoriteit Persoonsgegevens, ‘De rol van productstandaarden voor AI-systemen’ (Autoriteit Persoonsgegevens, September 2025) <https://www.autoriteitpersoonsgegevens.nl/documenten/de-rol-van-productstandaarden-voor-ai-systemen> accessed 20 September 2025.
[38] NIST, ‘AI Risk Management Framework’ (NIST, 5 May 2025) <https://www.nist.gov/itl/ai-risk-management-framework> accessed 8 September 2025.
[39] Anthropic, ‘Anthropic’s Transparency Hub: Voluntary Commitments’ (Anthropic, 28 August 2025) <https://www.anthropic.com/transparency/voluntary-commitments> accessed 8 September 2025.
[40] Department for Science, Innovation & Technology, ‘Frontier AI Safety Commitments, AI Seoul Summit 2024’ (GOV.UK, 7 February 2025) <https://www.gov.uk/government/publications/frontier-ai-safety-commitments-ai-seoul-summit-2024/frontier-ai-safety-commitments-ai-seoul-summit-2024> accessed 8 September 2025.
[41] Lara Groves, Amy Winecoff and Miranda Bogen, ‘Going Pro?’ (Ada Lovelace Institute, 10 July 2025) <https://www.adalovelaceinstitute.org/report/going-pro/> accessed 8 September 2025.
[42] European Commission, ‘The General-Purpose AI Code of Practice’ (European Commission) <https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai> accessed 8 September 2025.
[43] Department for Science, Innovation & Technology, ‘Implementing the UK’s AI Regulatory Principles: Initial Guidance for Regulators’ (GOV.UK, 6 February 2024) <https://www.gov.uk/government/publications/implementing-the-uks-ai-regulatory-principles-initial-guidance-for-regulators> accessed 8 September 2025.
[44] Consumer Protections for Artificial Intelligence Act, Colorado General Assembly, 17 May 2024.
[45] James A. Henderson, ‘Learned Hand’s Paradox: An Essay on Custom in Negligence Law’ (2017) 105 California Law Review 165 <https://lawcat.berkeley.edu/record/1127964?v=pdf> accessed 9 September 2025.
[46] Bryan H. Choi, ‘Negligence Liability for AI Developers’ (Lawfare, 26 September 2024) <https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers> accessed 8 September 2025.
[47] Gregory Smith and others, ‘Liability for Harms from AI Systems’ (RAND, 20 November 2024) <https://www.rand.org/pubs/research_reports/RRA3243-4.html> accessed 6 November 2025.
[48] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025;
Beatriz Botero Arcila, ‘AI Liability along the Value Chain’ (Mozilla Foundation, 9 June 2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5209735> accessed 9 September 2025.
[49] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[50] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[51] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[52] Tricia A. Griffin, Brian P. Green and Jos V.M. Welie, ‘The Ethical Wisdom of AI Developers’ (SpringerLink, 20 March 2024) <https://link.springer.com/article/10.1007/s43681-024-00458-x> accessed 9 September 2025.
[53] Chinmayi Sharma, ‘AI’s Hippocratic Oath’ (2024) 102 Washington University Law Review 1101 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742> accessed 9 September 2025.
[54] Chinmayi Sharma, ‘AI’s Hippocratic Oath’ (2024) 102 Washington University Law Review 1101 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742> accessed 9 September 2025; Brendt Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (Nature News, 4 November 2019) <https://www.nature.com/articles/s42256-019-0114-4> accessed 9 September 2025.
[55] Chinmayi Sharma, ‘AI’s Hippocratic Oath’ (2024) 102 Washington University Law Review 1101 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742> accessed 9 September 2025.
[56] Brendt Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (Nature News, 4 November 2019) <https://www.nature.com/articles/s42256-019-0114-4> accessed 9 September 2025.
[57] Chinmayi Sharma, ‘AI’s Hippocratic Oath’ (2024) 102 Washington University Law Review 1101 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4759742> accessed 9 September 2025.
[58] Brendt Mittelstadt, ‘Principles Alone Cannot Guarantee Ethical AI’ (Nature News, 4 November 2019) <https://www.nature.com/articles/s42256-019-0114-4> accessed 9 September 2025.
[59] FCA, ‘Consumer Duty’ (FCA, n.d.) <https://www.fca.org.uk/firms/consumer-duty> accessed 14 October 2025.
[60] Financial Ombudsman Service, ‘Financial dispute resolution that’s fair and impartial’ (Financial Ombudsman Service, n.d.) <https://www.financial-ombudsman.org.uk/> accessed 14 October 2025.
[61] Financial Conduct Authority, ‘Senior Managers Regime’ (FCA, 30 March 2023) <https://www.fca.org.uk/firms/senior-managers-and-certification-regime/senior-managers-regime> accessed 10 September 2025.
[62] Emily Hamilton, ‘2025 Is the Year of AI Agents, OpenAI CPO Says’ (Axios, 23 January 2025) <https://www.axios.com/2025/01/23/davos-2025-ai-agents> accessed 9 September 2025.
[63] Lisa Soder and others, ‘An Autonomy-Based Classification’ (Interface, 2 April 2025) <https://www.interface-eu.org/publications/ai-agent-classification> accessed 9 September 2025.
[64] Iason Gabriel and others, ‘The Ethics of Advanced AI Assistants’ (Google Deepmind, 19 April 2024) <https://deepmind.google/discover/blog/the-ethics-of-advanced-ai-assistants/> accessed 9 September 2025; Harry Farmer and Julia Smakman, ‘Delegation Nation’ (Ada Lovelace Institute, 4 February 2025) <https://www.adalovelaceinstitute.org/policy-briefing/ai-assistants/> accessed 9 September 2025.
[65] Alan Turing Institute, ‘Multi-Agent Systems’ (The Alan Turing Institute) <https://www.turing.ac.uk/research/interest-groups/multi-agent-systems> accessed 9 September 2025;
World Economic Forum, ‘A Primer on the Evolution and Impact of AI Agents’ (World Economic Forum, December 2024) <https://www.weforum.org/publications/navigating-the-ai-frontier-a-primer-on-the-evolution-and-impact-of-ai-agents/> accessed 9 September 2025; Lewis Hammond and others, ‘Multi-Agent Risks from Advanced AI’ (arXiv.org, 19 February 2025) <https://arxiv.org/abs/2502.14143> accessed 9 September 2025.
[66] Alan Chan and others, ‘Visibility into AI Agents’ [2024] The 2024 ACM Conference on Fairness, Accountability, and Transparency 958.
[67] Alan Chan and others, ‘Visibility into AI Agents’ [2024] The 2024 ACM Conference on Fairness, Accountability, and Transparency 958.
[68] Chris Smith, ‘Watch ChatGPT’s Operator AI Agent Solve a CAPTCHA like a Human’ (BGR, 24 January 2025) <https://www.bgr.com/tech/watch-chatgpts-operator-ai-agent-solve-a-captcha-like-a-human/> accessed 9 September 2025; Kevin Hurler, ‘Chat-GPT Pretended to Be Blind and Tricked a Human into Solving a CAPTCHA’ (Gizmodo, 16 March 2023) <https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471> accessed 9 September 2025.
[69] Andrew D. Selbst, ‘Negligence and AI’s Human Users’ (2020) 100 Boston University Law Review 1315 <https://heinonline.org/HOL/LandingPage?handle=hein.journals/bulr100&div=40&id=&page=> accessed 9 September 2025.
[70] Madeleine C. Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (SSRN, 3 April 2016) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236> accessed 26 August 2025.
[71] Tom Lawton and others, ‘Clinicians Risk Becoming “Liability Sinks” for Artificial Intelligence’ (Future Healthcare Journal, 13 September 2024) <https://www.sciencedirect.com/science/article/pii/S2514664524000055#bib0001> accessed 26 August 2025.
[72] Madeleine C. Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (SSRN, 3 April 2016) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236> accessed 26 August 2025.
[73] Madeleine C. Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (SSRN, 3 April 2016) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236> accessed 26 August 2025.
[74] Lisa Soder and others, ‘An Autonomy-Based Classification’ (Interface, 2 April 2025) <https://www.interface-eu.org/publications/ai-agent-classification> accessed 9 September 2025; Margaret Mitchell and others, ‘AI Agents Are Here, What Now?’ (Hugging Face, 13 January 2025) <https://huggingface.co/blog/ethics-soc-7> accessed 9 September 2025.
[75] Anthropic, ‘Reasoning Models Don’t Always Say What They Think’ (Anthropic, 3 April 2025) <https://www.anthropic.com/research/reasoning-models-dont-say-think> accessed 9 September 2025.
[76] Lisa Soder and others, ‘An Autonomy-Based Classification’ (Interface, 2 April 2025) <https://www.interface-eu.org/publications/ai-agent-classification> accessed 9 September 2025.
[77] Lisa Soder and others, ‘An Autonomy-Based Classification’ (Interface, 2 April 2025) <https://www.interface-eu.org/publications/ai-agent-classification> accessed 9 September 2025.
[78] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[79] Anat Lior, ‘AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy’ (SSRN, 14 September 2019) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3446115> accessed 27 August 2025.
[80] Jonathan Kewley, ‘Who’s Responsible for Agentic AI?’ (Clifford Chance, 22 May 2025) <https://www.cliffordchance.com/insights/thought_leadership/ai-and-tech/who-is-responsible-for-agentic-ai.html> accessed 27 August 2025. Also, see our forthcoming legal analysis on Advanced AI Assistants in collaboration with AWO.
[81] Gabriel Weil and others, ‘Insuring Emerging Risks from Ai’ (Oxford Martin School, 19 November 2024) <https://www.oxfordmartin.ox.ac.uk/publications/insuring-emerging-risks-from-ai> accessed 27 August 2025.
[82] Anat Lior, ‘AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy’ (SSRN, 14 September 2019) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3446115> accessed 27 August 2025.
[83] Gerhard Wagner, ‘Robot, Inc.: Personhood for Autonomous Systems?’ (FLASH: The Fordham Law Archive of Scholarship and History, 2 November 2019) <https://ir.lawnet.fordham.edu/flr/vol88/iss2/8/> accessed 27 August 2025.
[84] Gerhard Wagner, ‘Robot, Inc.: Personhood for Autonomous Systems?’ (FLASH: The Fordham Law Archive of Scholarship and History, 2 November 2019) <https://ir.lawnet.fordham.edu/flr/vol88/iss2/8/> accessed 27 August 2025.
[85] Alan Chan and others, ‘IDs for AI Systems’ (arXiv.org, 28 October 2024) <https://arxiv.org/abs/2406.12137> accessed 27 August 2025.
[86] Alan Chan and others, ‘Visibility into AI Agents’ [2024] The 2024 ACM Conference on Fairness, Accountability, and Transparency 958.
[87] Barnett v Chelsea & Kensington Hospital [1969] 1 QB 428.
[88] Beatriz Botero Arcila, ‘AI Liability along the Value Chain’ (Mozilla Foundation, 9 June 2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5209735> accessed 9 September 2025; Ian Brown, ‘Allocating Accountability in AI Supply Chains’ (Ada Lovelace Institute, 29 June 2023) <https://www.adalovelaceinstitute.org/resource/ai-supply-chains/> accessed 9 September 2025.
[89] Xiangyu Qi and others, ‘Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!’ (arXiv.org, 5 October 2023) <https://arxiv.org/abs/2310.03693> accessed 9 September 2025.
[90] Bartosz Brożek and others, ‘The Black Box Problem Revisited. Real and Imaginary Challenges for Automated Legal Decision Making’ (2024) 32 Artificial Intelligence and Law 427.
[91] Cynthia Rudin and Joanna Radin, ‘Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition’ (2019) 1 Harvard Data Science Review <https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8> accessed 15 July 2024.
[92] Anthropic, ‘Reasoning Models Don’t Always Say What They Think’ (Anthropic, 3 April 2025) <https://www.anthropic.com/research/reasoning-models-dont-say-think> accessed 9 September 2025.
[93] Ryan Abbott, Rita Matulionyte and Tatiana Aranovich, ‘Trade Secrets versus the AI Explainability Principle’, Research handbook on intellectual property and artificial intelligence (Edward Elgar Publishing Limited 2022).
[94] Ian Brown, ‘Allocating Accountability in AI Supply Chains’ (Ada Lovelace Institute, 29 June 2023) <https://www.adalovelaceinstitute.org/resource/ai-supply-chains/> accessed 9 September 2025.
[95] Jenny Brennan, ‘AI Assurance?’ (Ada Lovelace Institute, 18 July 2023) <https://www.adalovelaceinstitute.org/report/risks-ai-systems/> accessed 9 September 2025.
[96] For an example of a system card, see: OpenAI, ‘OpenAI GPT-4.5 System Card’ (OpenAI, 27 February 2025) <https://openai.com/index/gpt-4-5-system-card/> accessed 9 September 2025.
[97] EU AI Act, Article 50; Senate of California, ‘SB 53 Artificial intelligence: Large developers’ (Legislative Information California, 29 September 2025) <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53> accessed 15 October 2025; New York State Senate, ‘Assembly Bill A6453A’, 2025, <NY State Assembly Bill 2025-A6453A> accessed 17 November 2025.
[98] Weixin Liang and others, ‘What’s Documented in AI? Systematic Analysis of 32k AI Model Cards’ (arXiv.org, 7 February 2024) <https://arxiv.org/abs/2402.05160> accessed 9 September 2025.
[99] Qiguang Chen and others, ‘Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models’ (arXiv.org, 18 July 2025) <https://arxiv.org/abs/2503.09567> accessed 9 September 2025.
[100] Anthropic, ‘Reasoning Models Don’t Always Say What They Think’ (Anthropic, 3 April 2025) <https://www.anthropic.com/research/reasoning-models-dont-say-think> accessed 9 September 2025.
[101] Anthropic, ‘Tracing the Thoughts of a Large Language Model’ (Anthropic, 27 March 2025) <https://www.anthropic.com/research/tracing-thoughts-language-model> accessed 9 September 2025.
[102] Updated EU Product Liability Directive (hereafter: EU PLD), Art 9(1) and 9(5).
[103] EU PLD, Art 10(2)(a).
[104] EU PLD, Art 10(3).
[105] EU PLD, Art 10(5).
[106] ‘Open Source AI’ (Open Source Initiative) <https://opensource.org/ai> accessed 26 August 2025.
[107] ‘Meta’s Llama License Is Still Not Open Source’ (Open Source Initiative, 18 February 2025) <https://opensource.org/blog/metas-llama-license-is-still-not-open-source> accessed 26 August 2025.
[108] Julien Simon, ‘Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration’ (Hugging Face, 15 January 2022) <https://huggingface.co/blog/intel> accessed 9 September 2025.
[109] EU PLD, Art 2(2).
[110] EU PLD, Recital 14 and 15.
[111] Julis Musseau and others, ‘Is Open Source Eating the World’s Software?’ (ACM Digital Library, 17 October 2022) <https://dl.acm.org/doi/abs/10.1145/3524842.3528473> accessed 9 September 2025; John Speed Meyers and Paul Gibert, ‘Questioning the Conventional Wisdom on Liability and Open Source Software’ (Lawfare, 18 April 2024) <https://www.lawfaremedia.org/article/questioning-the-conventional-wisdom-on-liability-and-open-source-software> accessed 9 September 2025.
[112] John Speed Meyers and Paul Gibert, ‘Questioning the Conventional Wisdom on Liability and Open Source Software’ (Lawfare, 18 April 2024) <https://www.lawfaremedia.org/article/questioning-the-conventional-wisdom-on-liability-and-open-source-software> accessed 9 September 2025.
[113] For a more comprehensive discussion of the role that model marketplaces can take in moderating open-source models, see: Robert Gorwa and Michael Veale, ‘Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries’ (ArXiv.org, 15 February 2024) <https://arxiv.org/html/2311.12573v2#S5> accessed 9 September 2025.
[114] Robert Gorwa and Michael Veale, ‘Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries’ (ArXiv.org, 15 February 2024) <https://arxiv.org/html/2311.12573v2#S5> accessed 9 September 2025.
[115] Luisa Coheur, ‘From Eliza to Siri and Beyond’ (2020) 100 Communications in Computer and Information Science 29 <https://link.springer.com/chapter/10.1007/978-3-030-50146-4_3#citeas> accessed 9 September 2025.
[116] David Fernández Llorca and others, ‘Liability Regimes in the Age of AI: A Use-Case Driven Analysis of the Burden of Proof’ (2023) 76 Journal of Artificial Intelligence Research 613.
[117] EU PLD, Recital 46.
[118] Consumer Protection Act 1987 (hereafter: CPA 1987), s 3(2)(b).
[119] Gabrijela Perković, Antun Drobnjak and Ivica Botički, ‘Hallucinations in LLMs: Understanding and Addressing Challenges’ (IEEE Xplore, 28 June 2024) <https://ieeexplore.ieee.org/abstract/document/10569238> accessed 9 September 2025.
[120] Zvi Mowshowitz, ‘o3 is a lying liar’ (Don’t Worry About the Vase, 23 April 2025) <https://thezvi.substack.com/p/o3-is-a-lying-liar> accessed 10 September 2025.
[121] Mahjabin Nahar and others, ‘Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations’, (ArXiv.org, 4 April 2024) <https://arxiv.org/abs/2404.03745> accessed 10 September 2025.
[122] Gaurav Yadav, Robert Reason and Morgan Simpson, ‘Product Liability as a Model for UK AI Security’ (SSRN, 6 May 2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5216191> accessed 10 September 2025.
[123] OECD, ‘AI Incident Base’ (OECD, 25 April 2024) <https://oecd.ai/en/catalogue/tools/ai-incident-database> accessed 10 September 2025.
[124] Gaurav Yadav, Robert Reason and Morgan Simpson, ‘Product Liability as a Model for UK AI Security’ (SSRN, 6 May 2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5216191> accessed 10 September 2025.
[125] Law Commission and Scottish Law Commission, Automated Vehicles: joint report (Law Com No 404, 2022).
[126] Law Commission and Scottish Law Commission, Automated Vehicles: joint report (Law Com No 404, 2022).
[127] Law Commission and Scottish Law Commission, Automated Vehicles: joint report (Law Com No 404, 2022).
[128] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[129] Hopkins R, ‘Civil Liability for Human Rights Violations’ (October 2022) <https://www.law.ox.ac.uk/sites/default/files/2022-10/9._civil_liabilities_for_human_rights_violations_england_and_wales.pdf> accessed 27 August 2025.
[130] ‘Art. 82 GDPR – Right to Compensation and Liability’ (General Data Protection Regulation (GDPR)) <https://gdpr-info.eu/art-82-gdpr/> accessed 11 July 2024.
[131] Kristof Van Quathem and Aleksander Aleksiev, ‘Rounding up Five Recent CJEU Cases on GDPR Compensation’ (Inside Privacy, 18 April 2024) <https://www.insideprivacy.com/cybersecurity-2/rounding-up-five-recent-cjeu-cases-on-gdpr-compensation/> accessed 26 July 2024.
[132] Stephen Mulders, ‘Collective Damages for GDPR Breaches: A Feasible Solution for the GDPR Enforcement Deficit?’ (2022) 8 European Data Protection Law Review (EDPL) 493.
[133] Stephen Mulders, ‘Collective Damages for GDPR Breaches: A Feasible Solution for the GDPR Enforcement Deficit?’ (2022) 8 European Data Protection Law Review (EDPL) 493.
[134] These cases originate in the Netherlands as it already has a more established collective action regime under Article 3:305a of the Dutch Civil Code. This collective action system is an early adoption of the EU’s Representative Actions Directive, which came into force on 24 December 2020.
[135] Natasha Lomas, ‘Google’s Adtech Targeted by Dutch Class-Action Style Privacy Damages Suit’ (TechCrunch, 12 September 2023) <https://techcrunch.com/2023/09/12/google-dutch-adtech-privacy-damages-suit/> accessed 1 August 2024.
[136] Stephen Mulders, ‘Collective Damages for GDPR Breaches: A Feasible Solution for the GDPR Enforcement Deficit?’ (2022) 8 European Data Protection Law Review (EDPL) 493.
[137] Stephen Mulders, ‘Collective Damages for GDPR Breaches: A Feasible Solution for the GDPR Enforcement Deficit?’ (2022) 8 European Data Protection Law Review (EDPL) 493.
[138] Stichting Massaschade & Consument, ‘Miljoenen Nederlandse TikTok-Gebruikers Krijgen van Rechtbank Groen Licht in Collectieve Rechtszaak’ (Stichting Massaschade & Consument, 2024) <https://www.massaschadeconsument.nl/nieuws/2024-01-10-miljoenen-nederlandse-tiktok-gebruikers-krijgen-van-rechtbank-groen-licht-in-collectieve-rechtszaak/> accessed 1 August 2024.
[139] Stichting Massaschade & Consument, ‘Google Collectieve Actie’ (Stichting Massaschade & Consument, 2024) <https://www.massaschadeconsument.nl/collectieve-acties/google/> accessed 1 August 2024.
[140] De Rechtspraak, ‘Collectieve vorderingen The Privacy Collective tegen Oracle en Salesforce ontvankelijk’ (de Rechtspraak, 18 June 2024) <https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Gerechtshoven/Gerechtshof-Amsterdam/Nieuws/Paginas/Collectieve-vorderingen-The-Privacy-Collective-tegen-Oracle-en-Salesforce-ontvankelijk.aspx> accessed 1 August 2024.
[141] De Rechtspraak, ‘Royal Dutch Shell moet CO2-uitstoot terugbrengen’ (De Rechtspraak, 2024) <https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/Royal-Dutch-Shell-moet-CO2-uitstoot-terugbrengen.aspx> accessed 26 July 2024. This case is still subject to appeal, which is currently ongoing.
[142] De Rechtspraak, ‘Haagse Hof wijst vordering vermindering CO2-uitstoot door Shell af’ (De Rechtspraak, 12 November 2024).
[143] Lilian Edwards, ‘Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions?’ (SSRN, 17 December 2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5026677> accessed 10 September 2025.
[144] European Parliament Draft Compromise Amendments, Art 28a (European Parliament, 9 May 2023) <https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf> accessed 10 September 2025.
[145] Merlin Stein and Connor Dunlop, ‘Safe before sale’ (Ada Lovelace Institute, 14 December 2023) <https://www.adalovelaceinstitute.org/report/safe-before-sale/> accessed 7 November 2025.
[146] Lilian Edwards, ‘Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions?’ (SSRN, 17 December 2024) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5026677> accessed 10 September 2025.
[147] This report will not address intentional torts in depth.
[148] A duty of care is a responsibility that one actor has towards another, even when there are no contractual grounds for this. Often-cited examples are that car drivers have a duty of care towards other road users to drive responsibly, and doctors and teachers have a duty of care to, respectively, their patients and students to act in their best interest. Some duties of care and the kinds of behaviours they prescribe are well established in law, whereas others still require further development. A duty of care can be breached by an act, but also by an omission (a failure to do something). For example, a developer of an AI system may have a duty of care towards the end user of their product; if they negligently make a product available that causes harm, this may result in a liability claim. While proving that a duty of care exists is usually relatively straightforward in cases where a positive action by person or organisation A caused damage to person B, it can be harder to prove what kind of precautions were owed as a result of that duty of care.
[149] Law of AI (2nd edn), 7-308.
[150] Bryan H. Choi, ‘Negligence Liability for AI Developers’ (Lawfare, 26 September 2024) <https://www.lawfaremedia.org/article/negligence-liability-for-ai-developers> accessed 8 September 2025.
[151] Engineering Council, ‘Our role as a regulator’ (Engineering Council, 2025) <https://www.engc.org.uk/our-role-as-regulator> accessed 10 September 2025.
[152] Engineering Council and Royal Academy of Engineering, ‘Statement of Ethical Principles’ (Engineering Council, 2005) <https://www.engc.org.uk/resources-and-guidance/guidance-for-the-profession/ethical-principles> accessed 10 September 2025.
[153] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[154] Miriam Buiten, Alexandre de Streel and Martin Peitz, ‘The Law and Economics of AI Liability’ (Computer Law & Security Review, 18 February 2023) <https://www.sciencedirect.com/science/article/pii/S0267364923000055> accessed 8 September 2025.
[155] ‘Liability for Defective Products – European Commission’ <https://single-market-economy.ec.europa.eu/single-market/goods/free-movement-sectors/liability-defective-products_en> accessed 10 July 2024.
[156] CPA 1987, s 3; EU PLD, Art 7.
[157] CPA 1987, s 3(2).
[158] Gaurav Yadav, Robert Reason and Morgan Simpson, ‘Product Liability as a Model for UK AI Security’ (SSRN, 6 May 2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5216191> accessed 10 September 2025; EU PLD, Recital 31.
[159] CPA 1987, s 4(1)(e).
[160] EU PLD, Art. 4(1); St Albans City and DC v International Computers Ltd [1996] 4 All ER 481 (CA). There may be some limited exceptions for ‘digital content’ as defined under the Consumer Rights Act 2015 (hereafter CRA 2015).
[161] Law Commission, ‘Law Commission to review the law relating to product liability’ (Law Commission, 31 July 2025) <https://lawcom.gov.uk/news/law-commission-to-review-the-law-relating-to-product-liability/> accessed 10 September 2025.
[162] CPA 1987, s 2(2).
[163] CPA 1987, s 2(3).
[164] EU PLD, Art 8.
[165] Noam Kolt, ‘Governing AI Agents’ (SSRN, 2 April 2024) <https://papers.ssrn.com/abstract=4772956> accessed 17 July 2024.
[166] An example of this is the FCA’s recent announcement that it will consult on a compensation scheme for consumers affected by unfair car loans. See: Financial Conduct Authority, ‘FCA to consult on motor finance compensation scheme’ (FCA, 4 August 2025) <https://www.fca.org.uk/news/press-releases/fca-consult-motor-finance-compensation-scheme> accessed 10 September 2025.
[167] Bank of England, ‘Discussion Paper 1/23 – Review of the Senior Managers and Certification Regime (SM&CR)’ (Bank of England, 22 July 2024) <https://www.bankofengland.co.uk/prudential-regulation/publication/2023/march/review-of-the-senior-managers-and-certification-regime> accessed 24 July 2024.
[168] Financial Conduct Authority, ‘Senior management functions’ (FCA, 11 May 2015) <https://www.fca.org.uk/firms/approved-persons/senior-management-functions> accessed 10 September 2025.
[169] Financial Conduct Authority, ‘Senior Managers Regime’ (FCA, 30 March 2023) <https://www.fca.org.uk/firms/senior-managers-and-certification-regime/senior-managers-regime> accessed 10 September 2025; Financial Conduct Authority, ‘FCA Handbook: DEPP 6.2 Deciding whether to take action’ (FCA, 3 June 2025) <https://handbook.fca.org.uk/handbook/depp6/depp6s1?timeline=true> accessed 10 September 2025.
[170] Ministry of Justice, ‘Government Crackdown on Explicit Deepfakes’ (GOV.UK, 7 January 2025) <https://www.gov.uk/government/news/government-crackdown-on-explicit-deepfakes> accessed 28 August 2025
[171] EU PLD, Art 10.
[172] Directorate-General for Justice and Consumers (European Commission) and others, Comparative Law Study on Civil Liability for Artificial Intelligence (Publications Office of the European Union, 2021) <https://data.europa.eu/doi/10.2838/77360> accessed 15 July 2024.
[173] Proportional liability is different from ‘proportionate liability’. Proportional liability applies to the attribution of causation, meaning that a tortfeasor can still be held liable even when it cannot be shown with a probability of more than 50 per cent that their fault caused the harm. Proportionate liability refers to the possibility, once causality has been established, of attributing the harm to more than one person in the supply chain. Proportionate liability goes against the doctrine of ‘joint and several liability’, where the victim can sue one tortfeasor for their full damages, even when there are multiple wrongdoers.
[174] HR 31 March 2006, JOL 2006, 199 (Nefalit/Karamus) (Neth.), ECLI:NL:HR:2006:AU6092;
Miquel Martín-Casals, ‘Proportional Liability in Spain: A Bridge Too Far?’ in Diego M Papayannis and Miquel Martín-Casals (eds), Uncertain Causation in Tort Law (Cambridge University Press 2015) <https://www.cambridge.org/core/books/uncertain-causation-in-tort-law/proportional-liability-in-spain/FC38AB2BB102D86CC51DE42B7C0DE583> accessed 11 July 2024.
[175] HR 31 March 2006, JOL 2006, 199 (Nefalit/Karamus) (Neth.), ECLI:NL:HR:2006:AU6092.
[176] David Ingram and Maria Piñero, ‘Tesla hit with $243 million in damages after jury finds its Autopilot feature contributed to fatal crash’ (NBC News, 1 August 2025) <https://www.nbcnews.com/news/us-news/tesla-autopilot-crash-trial-verdict-partly-liable-rcna222344> accessed 10 September 2025.
[177] EU PLD, Art 13; CPA 1987, s 6(4).
[178] Beatriz Botero Arcila, ‘AI Liability along the Value Chain’ (Mozilla Foundation, 9 June 2025) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5209735> accessed 9 September 2025.
[179] EU PLD, Art 10(3).
[180] EU PLD, Art 10(4).
[181] Hannah Barakat, ‘How Information Asymmetry Inhibits Efforts for Big Tech Accountability’ (Tech Policy Press, 16 April 2025) <https://www.techpolicy.press/how-information-asymmetry-inhibits-efforts-for-big-tech-accountability/> accessed 10 September 2025.
[182] A ‘material’ harm does not refer to how important or sizeable the harm in question is; it refers to the physicality of the harm.
[183] Christiane Wendehorst, ‘AI Liability in Europe’ (Ada Lovelace Institute, 22 September 2022) <https://www.adalovelaceinstitute.org/resource/ai-liability-in-europe/> accessed 8 September 2025.
[184] CRA 2015, s 47(a).
[185] Stephen Mulders, ‘Collective Damages for GDPR Breaches: A Feasible Solution for the GDPR Enforcement Deficit?’ (2022) 8 European Data Protection Law Review (EDPL) 493.
[186] Lloyd v Google LLC [2021] UKSC 50.
[187] Lloyd v Google LLC [2021] UKSC 50.
[188] Lloyd v Google LLC [2021] UKSC 50.
[189] Anat Lior, ‘Insuring AI: The Role of Insurance in Artificial Intelligence Regulation’ (2021) 35 Harvard Journal of Law & Technology (Harvard JOLT) 467.
[190] Anat Lior, ‘Insuring AI: The Role of Insurance in Artificial Intelligence Regulation’ (2021) 35 Harvard Journal of Law & Technology (Harvard JOLT) 467.
[191] Anat Lior, ‘Insuring AI: The Role of Insurance in Artificial Intelligence Regulation’ (2021) 35 Harvard Journal of Law & Technology (Harvard JOLT) 467.
[192] Anat Lior, ‘Insuring AI: The Role of Insurance in Artificial Intelligence Regulation’ (2021) 35 Harvard Journal of Law & Technology (Harvard JOLT) 467.
[193] Anat Lior, ‘Insuring AI: The Role of Insurance in Artificial Intelligence Regulation’ (2021) 35 Harvard Journal of Law & Technology (Harvard JOLT) 467.
[194] Ariel Dora Stern and others, ‘AI Insurance: How Liability Insurance Can Drive the Responsible Adoption of Artificial Intelligence in Health Care’ (2022) 3 NEJM Catalyst <http://catalyst.nejm.org/doi/10.1056/CAT.21.0242> accessed 25 July 2024.
[195] Gabriel Weil, ‘The Pros and Cons of California’s Proposed SB-1047 AI Safety Law’ (Lawfare, 8 May 2024) <https://www.lawfaremedia.org/article/california-s-proposed-sb-1047-would-be-a-major-step-forward-for-ai-safety-but-there-s-still-room-for-improvement> accessed 10 September 2025.
[196] Gabriel Weil, ‘The Pros and Cons of California’s Proposed SB-1047 AI Safety Law’ (Lawfare, 8 May 2024) <https://www.lawfaremedia.org/article/california-s-proposed-sb-1047-would-be-a-major-step-forward-for-ai-safety-but-there-s-still-room-for-improvement> accessed 10 September 2025.
[197] ‘Vehicle Insurance’ (GOV.UK, n.d.) <https://www.gov.uk/vehicle-insurance#:~:text=You%20must%20have%20motor%20insurance,repair%20to%20your%20own%20vehicle> accessed 11 September 2025; ‘Car insurance validity in the EU’ (Your Europe, 2024) <https://europa.eu/youreurope/citizens/vehicles/insurance/validity/index_en.htm> accessed 11 September 2025.
[198] Citizens Advice, ‘Getting vehicle insurance’ (Citizens Advice, 2020) <https://www.citizensadvice.org.uk/consumer/insurance/types-of-insurance/vehicle-insurance/vehicle-insurance-types/> accessed 11 September 2025.
[199] Proposal for EU AI Liability Directive (withdrawn) (hereafter: AILD Proposal); European Law Institute, ‘Guiding Principles for Updating the Product Liability Directive for the Digital Age’ <https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Guiding_Principles_for_Updating_the_PLD_for_the_Digital_Age.pdf> accessed 11 September 2025.
[200] World Nuclear Association, ‘Liability for Nuclear Damage’ (World Nuclear Association, 15 March 2021) <https://world-nuclear.org/information-library/safety-and-security/safety-of-plants/liability-for-nuclear-damage> accessed 12 July 2024.
[201] Nuclear Energy Agency, ‘Nuclear Liability’ (Nuclear Energy Agency) <https://www.oecd-nea.org/jcms/pl_31319/nuclear-liability> accessed 12 July 2024.
[202] Nuclear Energy Agency, ‘Nuclear Liability’ (Nuclear Energy Agency) <https://www.oecd-nea.org/jcms/pl_31319/nuclear-liability> accessed 12 July 2024.
[203] Nuclear Energy Agency, ‘Nuclear Liability’ (Nuclear Energy Agency) <https://www.oecd-nea.org/jcms/pl_31319/nuclear-liability> accessed 12 July 2024.
[204] World Nuclear Association, ‘Liability for Nuclear Damage’ (World Nuclear Association, 15 March 2021) <https://world-nuclear.org/information-library/safety-and-security/safety-of-plants/liability-for-nuclear-damage> accessed 12 July 2024.
[205] CRA 2015.
[206] CRA 2015, Sch 2 para. 1.
[207] CRA 2015, ss 9-11, 34-36, 49.
[208] Unfair Contract Terms Act 1977 (hereafter: UCTA 1977).
[209] The UK’s CPA 1987 is the Act that transposed the previous version (before the 2024 update) of the EU Product Liability Directive into UK law.
[210] AILD Proposal, Art 2(9).
[211] Senate of California, ‘SB 53 Artificial intelligence: Large developers’ (Legislative Information California, 29 September 2025) <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53> accessed 15 October 2025.
[212] SB-813 Multistakeholder regulatory organisations, California Senate Bill (2025-26) (hereafter: SB-813) <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB813> accessed 11 September 2025.
[213] SB-813, 8894.4.
[214] SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, California Senate Bill (2024) <https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047> accessed 11 September 2025.
[215] Sigal Samuel, Kelsey Piper and Dylan Matthews, ‘California’s governor has vetoed a historic AI safety bill’ (Vox, 29 September 2024) <https://www.vox.com/future-perfect/369628/ai-safety-bill-sb-1047-gavin-newsom-california> accessed 11 September 2025.
[216] Gavin Newsom, ‘Veto Message’ (Office of the Governor, 29 September 2024) <https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf> accessed 11 September 2025.
[217] Megan Garcia v Character Technologies Inc, US District Court for the Middle District of Florida, Civil No. 6:24-cv-01903-ACC-EJK, First Amended Complaint, 9 November 2024.
[218] Megan Garcia v Character Technologies Inc, US District Court for the Middle District of Florida, Case No. 6:24-cv-1903-ACC-UAM, Court Order on Motion to Dismiss, 21 May 2025.
[219] Derek Mobley v. Workday Inc, US District Court, N.D. California, Case No. 23-cv-00770-RFL, Court order on Motion to Dismiss, 12 July 2024.