Who pays for AI risks?
Exploring how industrial levies can shape market behaviour, counter harms and improve governance
2 July 2025
Reading time: 12 minutes
Investment in AI and the technical development of agentic systems are increasing at pace and, as with every new technology, there is a need to establish how the burden of AI’s risks should be borne across society.
In practical terms, this means asking who should be financially responsible for mitigating such risks and countering potential harm. One familiar instrument is the industrial levy, common across many sectors.
Recent proposals around AI levies focus on outputs and the redistribution of corporate profits: levies paid by generative AI users or developers to compensate artists whose work is substituted by AI outputs, and taxes imposed on companies’ excess profits.
This blog explores AI levies more broadly, drawing on the ‘polluter pays’ principle of environmental policy, according to which industries responsible for any kind of pollution are held financially accountable. Defined as taxes or fees imposed on, for instance, developers or deployers of AI models or systems, AI levies could help to shape market behaviour and improve the resourcing of existing regulatory systems.
The ‘polluter pays’ principle defines pollution based on specific criteria: where it occurs, how it manifests itself (from a known or diffuse source) and whom it impacts (local communities or everybody). The same criteria could be used to define ‘AI pollution’, or AI externalities, and to establish adequate levies to anticipate and counter the associated risks. For example, AI-generated disinformation on social media platforms could be understood in terms of where it occurs (on social media platforms), how it manifests itself (diffusely, as algorithms act across multiple networks) and whom it impacts (everybody, globally). What kind of levy would be appropriate to counter or respond to these or other specific types of harm?
In this blog, I look at four historical and contemporary case studies of levies, modelled on the notion of ‘polluter pays’ and applied or proposed in other industries − the pharmaceutical sector, digital media, fintech (cryptocurrency) and the petrochemical industry − to draw out key lessons for AI levies and raise questions for future research.
While the current political headwinds suggest little appetite for governing or taxing upstream AI developers, the same was true in the early days of other key technology industries, where different types of levies were nonetheless implemented.
Industrial levies in other sectors
The following case studies of levies have been selected to address issues that are relevant to AI. Each presents a model of taxation intervening in the context of a large-scale risk. The sectors behind two of the examples − digital platforms and cryptocurrency − have complex market dynamics like those of the AI industry, with services operating across jurisdictions and creating challenges for fair taxation and legal oversight. In all four examples accountability is essential: legal and financial responsibilities must be clearly assigned to those who create, deploy or benefit from a system, in line with the principle of Extended Producer Responsibility (EPR). This policy approach, which should also apply to AI, holds a producer responsible for its products during their entire lifecycle, including the management of the final or intermediate waste they generate.
The case studies are organised according to their purpose, the benefits they have delivered or could deliver in their sectors, and what we can learn from them to develop AI governance.
Comprehensive Environmental Response, Compensation, and Liability Act / Superfund (United States, 1980)
After several incidents in the 1970s exposed contaminated sites that threatened public health and the environment (such as Love Canal in the state of New York), the Comprehensive Environmental Response, Compensation, and Liability Act (1980) established the Superfund. This was financed through a levy on the chemical and petroleum industries and used to clean up orphaned toxic waste sites.
Purpose: Harm prevention.
Benefits: The levy helped remediate hundreds of contaminated sites, mitigating environmental damage and public health risks.
Lessons for AI: A levy can internalise an industry’s externalities by ensuring that only those responsible for negative impacts bear the costs of mitigating them. This is clearly relevant where AI might create large-scale societal risks. For example, an adequate AI levy could fund interventions to counter AI-enabled cyberterrorism attacks on critical digital infrastructure, whether by putting in place AI-enhanced cyber defences or compensation schemes for those affected.
The case of the Superfund also emphasises the importance of political will in establishing effective and durable levies. After the 1994 mid-term elections, the American political environment shifted towards a new pro-business strategy (known also as the ‘Republican Revolution’), aimed at cutting taxes, balancing the budget and dismantling various welfare programmes. Congress did not renew the Superfund levy, which expired at the end of 1995. The clean-up of toxic waste sites became increasingly underfunded in the 2000s and has been running on public funds. In 2021, the Bipartisan Infrastructure Law enabled a partial reinstatement of the Superfund levy, but not at its full original scale.
Cyclamed Programme (France, 1993)
Building on the EPR principle, the French Ministry of Health developed the Cyclamed Programme as part of its public health provisions for waste management, to reduce the risk of water pollution.
Cyclamed operates as a non-profit association comprising pharmaceutical companies, pharmacists and medicines distributors. Its funds, paid by pharmaceutical manufacturers, go towards the collection and safe disposal of unused or expired drugs. Although not a classic levy, which usually collects money through a taxation system, the programme operates as one, since companies pay fees into it proportionate to their market share.
Purpose: Harm prevention.
Benefits: Cyclamed’s funds pay for ‘take-back’ initiatives, through which pharmacies collect unused drugs from patients to prevent water pollution. The programme continually updates the list of medicines to be collected and has reduced drug residues in water, with an estimated collection rate of 60 per cent of all unused drugs.
Lessons for AI: As in the case of the Superfund, Cyclamed shows how targeted fees can implement industry oversight and mitigate industry-specific risks. In the AI context, the Cyclamed approach could involve requiring AI companies to fund ‘take-back’ programmes for obsolete AI models or datasets. Developers would be responsible for withdrawing models or datasets that are no longer in use or are deemed unsafe, and for guaranteeing their appropriate handling (e.g. deletion, archiving). The EPR principle could inspire an accountability model across the AI lifecycle, centred on producer-funded solutions.
Digital Services Tax (DST) (UK, 2020)
Introduced as part of the Finance Act 2020, the DST aims to tax large tech companies operating in the UK but based in another, usually low-tax, jurisdiction. If a company’s global revenue from digital services exceeds £500 million, and it derives more than £25 million from interactions between UK-based users and its digital products − for instance when people use search engines, social media or online marketplaces − it pays a 2 per cent tax on those UK revenues, with the first £25 million being exempt (a worked sketch of this calculation appears at the end of this case study).
Purpose: Market rebalancing.
Benefits: The DST has generated notable public revenue, while highlighting a model of taxation for global tech companies. While still in force at the time of writing, the levy featured in recent trade negotiations with the USA, offered in exchange for a ‘carve-out’ from proposed trade tariffs for UK companies selling products in the USA. Pressure on the UK government to make DST-related concessions might continue in future trade talks.
Lessons for AI: The DST aims at tax fairness; a similar levy could target large AI companies that operate in multiple jurisdictions and generate revenue through interactions between users and AI. This approach might offer a model for tackling cross-border market dynamics, which is especially relevant given the complexity of taxation and licensing agreements for AI models and of international AI infrastructure projects.
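To make the DST’s threshold logic concrete, here is a minimal sketch of the calculation in Python. The thresholds and the 2 per cent rate follow the description above; the function name and the example revenue figures are illustrative assumptions, not drawn from HMRC guidance.

```python
# Minimal sketch of the DST calculation as summarised in this post.
# Thresholds and rate follow the description above; names and example
# figures are illustrative.

GLOBAL_THRESHOLD = 500_000_000  # global digital services revenue (GBP)
UK_THRESHOLD = 25_000_000       # revenue derived from UK users (GBP)
UK_ALLOWANCE = 25_000_000       # first £25m of UK revenue is exempt
RATE = 0.02                     # 2 per cent

def dst_liability(global_revenue: float, uk_revenue: float) -> float:
    """Return the DST owed, or 0.0 if either threshold is not met."""
    if global_revenue <= GLOBAL_THRESHOLD or uk_revenue <= UK_THRESHOLD:
        return 0.0
    return RATE * (uk_revenue - UK_ALLOWANCE)

# Example: £600m global revenue, £100m from UK users
# -> 2% of (£100m - £25m) = £1.5m
print(dst_liability(600_000_000, 100_000_000))  # 1500000.0
```

The allowance is the key design choice here: taxing only revenue above the first £25 million keeps smaller players out of scope while scaling the charge with market presence.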
Cryptocurrency Mining Levy (Kenya, Barbados and France Coalition, 2023)
A cross-border coalition of states, co-led by Kenya, Barbados and France, recently proposed a levy on cryptocurrency mining activities to address energy consumption and other environmental impacts. As cryptocurrency mining can move across borders to avoid regulation, the initiative aims to establish a globally coordinated approach, with each country imposing a levy on mining activities based on energy use or mining output.
The proposal builds on previous research by the International Monetary Fund and argues for a levy on cryptocurrency transactions of 0.1 per cent of each transaction’s value (a back-of-the-envelope sketch appears at the end of this case study).
The levy, which is not yet active, is part of wider policy and institutional developments led by the Global Solidarity Levies Task Force and announced in a 2024 report presented at COP29 in Baku, Azerbaijan. Political negotiations around the levy are still ongoing at the time of writing.
Purpose: Harm prevention.
Benefits: This levy could reduce negative externalities, such as the industry’s growing carbon footprint and pressure on energy grids, by fostering accountability across the cryptocurrency mining industry. By tying fees directly to energy consumption, the levy would encourage clean energy usage and could fund environmental programmes to develop renewable energy infrastructure.
Lessons for AI: This initiative could provide a model for managing the cross-border energy consumption and environmental impacts of AI. Inputs, such as compute use, and outputs, such as agentic AI systems, could help determine which companies should pay the levy, ensuring that those who gain the most and generate the greatest risks bear the mitigation costs.
The type of levy that the Kenya, Barbados and France coalition has proposed in the context of cryptocurrency mining could address issues that are typical of the AI sector, such as companies moving to jurisdictions with weaker regulation to avoid oversight, and countries gradually lowering regulatory standards to attract private investment. For the initiative to work, however, in the context of cryptocurrency mining and AI alike, international collaboration is crucial.
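As a back-of-the-envelope illustration of the proposed rate, here is a minimal sketch in Python. Only the 0.1 per cent figure comes from the proposal; the function name and the example transaction values are hypothetical.

```python
# Illustrative sketch of the proposed transaction levy: 0.1% of value.
# Only the rate comes from the proposal discussed above; the example
# transaction values are hypothetical.

RATE = 0.001  # 0.1 per cent of each transaction's value

def transaction_levy(transaction_value: float) -> float:
    """Return the levy owed on a single transaction."""
    return RATE * transaction_value

# Example: a $50,000 transfer would owe $50, and $1 billion of mining
# proceeds moved on-chain over a year would owe $1 million in total.
print(transaction_levy(50_000))         # 50.0
print(transaction_levy(1_000_000_000))  # 1000000.0
```

Because the charge scales with transaction value rather than with energy use, this variant is simpler to administer but prices the environmental externality only indirectly; the energy-based alternative mentioned above would instead require per-jurisdiction metering of mining activity.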
Key insights
This brief analysis offers insights into the role that levies can play in AI governance.
- Levies vary in purpose and collection criteria.
While some address tangible externalities like pollution, others tackle market imbalances, as in the case of the DST. The collection criteria also vary. We see examples of companies responsible for harms then being taxed, in a straightforward application of the ‘polluter pays’ principle, as well as examples of corporations paying fees proportionate to their market share into organisations that address industry externalities. The approach to determining who is responsible for risks and damages, and therefore subject to levies, also varies. In the case of the Superfund, investigations into incidents indicated which companies should be targeted. In the case of Cyclamed, producers are instead required to pay for the collection of unused drugs before any risks arise. Finally, the UK’s DST takes a threshold-based approach: it targets only companies exceeding set revenue benchmarks, and therefore with significant market share.
- Levies tend to benefit both society and industry.
For instance, the DST has raised an average of £800 million a year, with funds flowing into the UK government’s Consolidated Fund. Levies mitigate large-scale risks without imposing additional burdens on taxpayers and can rebalance the market by ensuring all players pay a fair share of taxes.
- Political will is crucial to ensure the effectiveness and durability of levies.
In the examples above, the Superfund levy was allowed to expire after the 1994 ‘Republican Revolution’, and established levies such as the UK’s DST could be undermined by future trade negotiations. It may be easier to future-proof levies against political changes when they are conceived as public funds rather than as government policies. However, levies are usually applied through specific legal mechanisms and sometimes include regulatory loopholes, which allow politicians to exert influence and either renew or scrap their provision.
- Engaging all relevant stakeholders might improve a levy’s durability.
By operating through a non-profit association, Cyclamed − the most durable levy among our case studies − engages all stakeholders in the pharmaceutical sector, from manufacturing companies to pharmacists and distributors, and promotes an inclusive approach to decision-making. A similar AI levy could engage AI developers, downstream providers and AI assurance companies to achieve shared responsibility for the safe development and deployment of AI across the supply chain.
- Levy models have already been used for full lifecycle accountability.
An application of the EPR principle in the AI sector could help hold AI developers accountable for their AI systems from development to disposal, including the initial deployment phase, maintenance and post-deployment updates, and the retirement of models. Just as Cyclamed funds the safe disposal of expired drugs, an AI levy could support the safe retirement of models and the secure archiving or deletion of training datasets. It could also pay for public investigations into AI-related incidents.
- Multinational levies could establish frameworks for tackling cross-border risks.
The approach proposed in the context of the Cryptocurrency Mining Levy could help to manage the environmental risks arising from multinational AI infrastructure projects, like the construction and management of data centres.
Questions for future research
Designing effective AI levies will raise further questions for research. Among other issues, policymakers will need to consider what the criteria for levy collection should be: company size, compute power or the degree of systemic risk. To ensure transparency and accountability, policymakers must also consider how a government will ensure that the funds raised through levies are used responsibly and for the purpose of improving the AI sector, and what governance practices will support this (e.g. independent audits, oversight boards).
While this review highlights that levies could serve both as a deterrent (harm prevention) and as a reinvestment mechanism (e.g. Cyclamed, the Cryptocurrency Mining Levy), future research could provide more evidence to develop levy models that merge these aims, addressing the sector’s challenges while also improving its practices.
Finally, this analysis reveals that political will is crucial to ensure effective and durable levies. Given the perpetual risk that political shifts might alter the purposes and approaches of levies, future research should ask: how can we design future-proofed levies? What mechanisms, including meaningful public engagement, might build consensus around levies and their use? And which metrics should be developed to measure levies’ long-term success and effectiveness in reducing AI-driven harms?