
Ada Lovelace Institute responds to final General-Purpose AI Code of Practice

Gaia Marcus, Director of the Ada Lovelace Institute, has responded to the General-Purpose AI Code of Practice.

15 July 2025


[Image: EU flags in front of the Berlaymont building, headquarters of the European Commission in Brussels]

Responding to the publication of the final version of the EU’s General-Purpose AI Code of Practice, Gaia Marcus, Director of the Ada Lovelace Institute, said: 

“The Code of Practice is a crucial mechanism for clarifying and codifying the obligations under the EU AI Act of providers of general-purpose AI models that may pose systemic risk (the developers of the largest models), and for ensuring good social and economic outcomes from the most powerful models.

“The final version of this iteration of the Code is a mixed success: it comprises some meaningful protections, but they emerge from what has ultimately been an unsatisfactory process.

“We are pleased to see the mechanisms around public transparency and external assessment preserved, and a risk taxonomy that requires both the monitoring of specified risks and the identification of further risks. The work of the chairs in developing workable compliance mechanisms in previous drafts should be commended.

“However, the Commission’s claim that ‘the Code has been drafted in an inclusive and transparent process’ does not reflect the closed-door process conducted with industry after the official multi-stakeholder process ended, which granted regulated entities unprecedented influence and veto over the mechanisms that will define their compliance.

“This influence is clearly visible in the final version’s diluted safeguards. The range of risks covered is narrower than in earlier drafts, and providers retain a great deal of latitude in deciding how to weigh external risk assessments and in ultimately determining ‘risk acceptance’ and risk appetite themselves, which is unlikely to be sufficient to meaningfully mitigate systemic risks.

“Nevertheless, the Code represents one of the most robust tools we have so far for describing effective risk mitigation approaches. The next year of implementing these rules will benefit from the Commission’s clear support for the Code.

“The Code is intended to be a living guidance document, and the inclusiveness and responsiveness of its update process will be key to ensuring its provisions remain effective.  

“We should be relieved that important safety measures have survived intact, but cognisant that providers’ strategy has to date been to anchor expectations in borderline compliance, characterising basic safety mechanisms commonly used in other sectors as unworkable. 

“Policymakers will need to be much more ambitious in future iterations of the Code to address the emerging impacts of these technologies, and design a review process that is robust to undue influence. The Code will need to continually and rapidly raise the bar to reflect the state of the art in safety mechanisms and the affordances of the latest general-purpose AI systems.  

“We would expect the participation of academia and civil society in the update process to be formalised. It is also essential that updates can happen quickly and in a targeted manner, without re-opening every element of the first iteration. As Technology Law Professor Philipp Hacker recently pointed out, it may therefore be best to see these as three separate Codes. The three topics covered will have varying application, and it would make sense for the update process to reflect that. This would allow vital updates to be made on a per-chapter basis, ensuring that developments in the technology or safety incidents can be quickly reflected in the Codes.

“We look forward to the AI Office setting out its plans for provider adherence to the existing Code and establishing a timetable for the update process.”
