Following on from the political agreement reached between the EU institutions, Michael Birtwistle, Associate Director, Ada Lovelace Institute, said:
‘The Ada Lovelace Institute welcomes the political agreement reached in December 2023 on the EU AI Act. To ensure that AI is developed and deployed in the interest of people and society, we strongly encourage EU capitals and the European Parliament to approve the final text.
‘The Act was created with the aim of protecting health, safety and fundamental rights, and providing legal certainty and clarity. While not perfect, the final text represents a workable, pragmatic compromise that, if properly implemented, can help achieve these objectives. The legislation could support EU leadership by becoming an important and influential global blueprint for AI regulation.
‘There has understandably been a lot of attention on more contentious and potentially flawed elements of the AI Act in the final weeks of negotiations. Despite legitimate concerns around some aspects of the Act, it offers the EU the best possible option for mitigating the full range of risks that may arise from the development and deployment of AI.
‘The Act will ban certain uses of AI to help prevent some of the most dystopian AI scenarios. These prohibitions will at least set a common EU floor for human rights protections – but should not be seen as a ceiling. They will need to be updated over time as harms manifest, and national governments should take the initiative to go further to offer stronger safeguards, as allowed in the Act.
‘The Act’s obligations on high-risk AI, such as data governance and human oversight, will help reduce the very real risks of harm which could arise, for example, through automated monitoring in workplaces or schools, automated decisions for social benefits or computer vision techniques for medical diagnoses.
‘The Act’s transparency and disclosure rules for generative AI will act as an important safeguard, particularly as the risk of electoral manipulation continues to rise.
‘The AI Act represents the first serious attempt to mitigate the risks of general purpose AI models, which have the potential to pose wide-scale systemic harms and act as a single point of failure as more and more people interact with them in their daily lives. This is something recognised in international fora (the G7 Hiroshima Process and the Bletchley Declaration) and across jurisdictions (the UK, US, China).
‘In addition, the AI Act supports innovation; it doesn’t hinder it. Research shows many examples of regulation enabling greater innovation and competition. EU businesses will benefit from proper assurance of the underlying technology and greater certainty around its safety. Our own research shows that people expect AI products to be safe and want them to be regulated.
‘While the Act is inevitably a product of compromise, and its implementation will no doubt bring challenges, Europe has a rare opportunity to establish robust rules, institutions and processes to protect the interests of its people, businesses and society while maximising the potential benefits of AI.
‘Such an opportunity will not present itself again anytime soon. Blocking or delaying the AI Act until 2025 (and possibly beyond) would take the EU back to square one, weakening EU leadership and empowering the biggest AI companies to accelerate the development and deployment of AI without safeguards.
‘To protect its people and support its businesses, European lawmakers must pass the AI Act and turn urgently to implementation and enforcement.’