As part of its ‘Digital Decade’ goal, the European Union has set out to regulate the key elements of the digital age, from artificial intelligence to online platforms. The emerging legislative approaches will have profound impacts on the use and experience of data and AI in Europe, and on those that seek to operate in the region.
Following the model of the GDPR, it’s likely that these regulatory approaches will have influence beyond the EU – and may even become global standards. The EU AI Act, for example, is the first comprehensive attempt to regulate these technologies.
Our Brussels office aims to understand how these developments affect the future of global regulation. To connect with our European Public Policy Lead, email Connor Dunlop.
We aim to consider how the interests of people and society are met by emerging and existing EU regulation and to shape relevant legislative processes. To do this, we will provide evidence-based research, including technical research to explore accountability mechanisms in practice, and piloting and evaluation projects, bringing together different disciplinary perspectives and engaging with the public to deliberate on the use of data and AI.
Our work on European policy is partly responsive, involving ongoing convening and commissioning, but we also have four substantive priorities for policy research.
1. The EU AI Act
We undertake convening, policy analysis and engagement on the AI Act. See our explainer, policy briefing and expert opinion from Professor Lilian Edwards.
In 2022 we undertook further research on issues of AI regulation, with deep dives into AI and labour; emotion recognition and general-purpose AI; and technical mandates and standards, as well as exploring alternative global models of governance. We also commissioned analysis on how liability law could support a legal framework for AI. See our Future of Regulation programme for further details.
Our three-year programme on the governance of biometrics brings legal analysis and public deliberation to questions of use, risks and safeguards.
2. Rethinking data and rebalancing power
This three-year programme reconsidered the narrative, uses and governance of data. The expert working group, co-chaired by Professor Diane Coyle (Bennett Institute for Public Policy, Cambridge) and Paul Nemitz (Director, Principal Adviser on Justice Policy, EU Commission, and Member of the German Data Ethics Commission), sought to answer a radical question:
‘What is a more ambitious vision for the future of data that extends the scope of what we think is possible?’
The final report was published in November 2022. See Rethinking data and rebalancing power.
3. Ethics and accountability in practice
We monitor, develop, pilot and evaluate mechanisms for public and private scrutiny and accountability of data and AI. This includes our report on technical methods for auditing algorithmic systems, a collaboration with AI Now and the Open Government Partnership which surveyed algorithmic accountability policy mechanisms in the public sector, and a pilot of algorithmic impact assessments (AIAs) in healthcare. See the programme page for further details.
4. AI standards and civil society participation
In the AI Act, EU policymakers appear to rely on technical standards to provide the detailed guidance necessary for compliance with the Act’s requirements for fundamental rights protections.
In March 2023, we published a discussion paper exploring the role of standards in the AI Act and whether the use of standards to implement the Act’s essential requirements creates a regulatory gap in terms of the protection of fundamental rights. The paper goes on to explore the role of civil society organisations in addressing that gap, as well as other institutional innovations that might improve democratic control over essential requirements.
In May 2023, we hosted an expert roundtable on EU AI standards development and civil society participation, and have published a write-up of the discussion.
Image credit: DKosig
Five areas of focus for the trilogues
This explainer is for anyone who wants to learn more about foundation models, also known as 'general-purpose artificial intelligence' or 'GPAI'.
Assessing and mitigating risks across the AI lifecycle
Approaches to government monitoring of the AI landscape
This paper aims to help policymakers and regulators explore the challenges and nuances of different AI supply chains
Civil society participation in standards development
A closer look at the implications of API and open-source accessible GPAI for the EU AI Act
What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?
Legal context and analysis on how liability law could support a more effective legal framework for AI
A description of the significance of the EU AI Act, its scope and main points
18 recommendations to strengthen the EU AI Act
Four problems and four solutions
Recommendations to improve the regulation of AI – in Europe and worldwide
A survey of auditing methods for use in regulatory inspections of online harms in social media platforms
Learning from the first wave of policy implementation
Report with recommendations and findings of a public deliberation on biometrics technology, policy and governance
Upcoming and previous events
In May 2023, the Ada Lovelace Institute hosted an expert roundtable on EU AI standards development and civil society participation.
Exploring the use and ethics of recommendation systems in public service media