Ada in Europe
We examine how existing and emerging regulation in the EU strengthens, supports or challenges the interests of people and society – locally and globally.
As part of its ‘Digital Decade’ goal, the European Union has set out to regulate the key elements of the digital age, from artificial intelligence to online platforms. The emerging legislative approaches will have profound impacts on the use and experience of data and AI in Europe, and on those who seek to operate in the region.
Following the model of the GDPR, it’s likely that these regulatory approaches will have influence beyond the EU – and may even become global standards. The EU AI Act, for example, is the first comprehensive attempt to regulate these technologies.
Our Brussels office aims to understand how these developments affect the future of global regulation. To connect with our European Public Policy Lead, email Connor Dunlop.
Ada’s approach
We aim to consider how the interests of people and society are met by emerging and existing EU regulation, and to shape relevant legislative processes. To do this, we provide evidence-based research, including technical research exploring accountability mechanisms in practice, and piloting and evaluation projects. We bring together different disciplinary perspectives and engage with the public to deliberate on the use of data and AI.
Our work on European policy is partially responsive, undertaking ongoing convening and commissioning, but we also have four substantive priorities for policy research.
1. The EU AI Act
We undertake convening, policy analysis and engagement on the AI Act. See our explainer, policy briefing and expert opinion from Professor Lilian Edwards.
In 2022 we undertook further research on issues of AI regulation, with deep dives into AI and labour; emotion recognition and general-purpose AI; and technical mandates and standards, as well as exploring alternative global models for governance. We also commissioned analysis on how liability law could support a legal framework for AI. See our Future of Regulation programme for further details.
Our three-year programme on the governance of biometrics brings legal analysis and public deliberation to questions of use, risks and safeguards.
2. Rethinking data and rebalancing power
This three-year programme reconsidered the narrative, uses and governance of data. The expert working group, co-chaired by Professor Diane Coyle (Bennett Institute for Public Policy, Cambridge) and Paul Nemitz (Director, Principal Adviser on Justice Policy, EU Commission, and Member of the German Data Ethics Commission), sought to answer a radical question:
‘What is a more ambitious vision for the future of data that extends the scope of what we think is possible?’
The final report was published in November 2022. See Rethinking data and rebalancing power.
3. Ethics and accountability in practice
We monitor, develop, pilot and evaluate mechanisms for public and private scrutiny and accountability of data and AI. This includes our report on technical methods for auditing algorithmic systems, a collaboration with AI Now and the Open Government Partnership which surveyed algorithmic accountability policy mechanisms in the public sector, and a pilot of algorithmic impact assessments (AIAs) in healthcare. See the programme page for further details.
4. AI standards and civil society participation
In the AI Act, EU policymakers appear to rely on technical standards to provide the detailed guidance necessary for compliance with the Act’s requirements for the protection of fundamental rights.
In March 2023, we published a discussion paper exploring the role of standards in the AI Act and whether the use of standards to implement the Act’s essential requirements creates a regulatory gap in terms of the protection of fundamental rights. The paper goes on to explore the role of civil society organisations in addressing that gap, as well as other institutional innovations that might improve democratic control over essential requirements.
In May 2023, we hosted an expert roundtable on EU AI standards development and civil society participation, and we have published a write-up of the discussion.
Related content
An EU AI Act that works for people and society
Five areas of focus for the trilogues
Explainer: What is a foundation model?
This explainer is for anyone who wants to learn more about foundation models, also known as 'general-purpose artificial intelligence' or 'GPAI'.
AI assurance?
Assessing and mitigating risks across the AI lifecycle
Keeping an eye on AI
Approaches to government monitoring of the AI landscape
Expert explainer: Allocating accountability in AI supply chains
This paper aims to help policymakers and regulators explore the challenges and nuances of different AI supply chains.
Discussion paper: Inclusive AI governance
Civil society participation in standards development
The value chain of general-purpose AI
A closer look at the implications of API and open-source accessible GPAI for the EU AI Act
Rethinking data and rebalancing digital power
What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?
Expert explainer: AI liability in Europe
Legal context and analysis on how liability law could support a more effective legal framework for AI
Expert explainer: The EU AI Act proposal
A description of the significance of the EU AI Act, its scope and main points
People, risk and the unique requirements of AI
18 recommendations to strengthen the EU AI Act
Expert opinion: Regulating AI in Europe
Four problems and four solutions
Three proposals to strengthen the EU Artificial Intelligence Act
Recommendations to improve the regulation of AI – in Europe and worldwide
Technical methods for regulatory inspection of algorithmic systems
A survey of auditing methods for use in regulatory inspections of online harms in social media platforms
Algorithmic accountability for the public sector
Learning from the first wave of policy implementation
The Citizens’ Biometrics Council
Report with recommendations and findings of a public deliberation on biometrics technology, policy and governance
Upcoming and previous events
EU AI standards development and civil society participation
In May 2023, the Ada Lovelace Institute hosted an expert roundtable on EU AI standards development and civil society participation.
Inform, educate, entertain… and recommend?
Exploring the use and ethics of recommendation systems in public service media