The future of regulation
Forecasting and shaping the role of regulation in mediating power over data and AI for the good of people and society
The context
In the coming decades, countries and regions around the world will be confronted with a question to which there is not yet a clear answer: how should data and AI be regulated for the benefit of people and society?
With the outsized power and dominance of big tech companies, emerging AI capabilities that defy existing regulatory categorisation, and growing unease about an extractive and exploitative data economy, governments are being asked to consider whether existing regulatory regimes are sufficient to ensure that the benefits of rapid technological change are equally distributed, and that harms are minimised and accounted for.
The debate about the right way to manage and regulate data and digital systems is not new, and remains unresolved. In the European Union, the General Data Protection Regulation (GDPR) sets the global standard for data protection regulation, but in the absence of well-resourced enforcement bodies its full potential is yet to be determined. Following the UK’s exit from the European Union, the UK Government launched a consultation to explore changes to the UK’s data regime. The USA has yet to adopt a federal privacy law, although California’s Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) signal an appetite for greater data regulation. Meanwhile, antitrust authorities in all three jurisdictions are exploring novel approaches at the intersection of competition, consumer protection and data protection law to curtail the power of large tech companies.
Data and AI are the focus of digital strategies in various jurisdictions. The European Union has published the first comprehensive proposal for the regulation of AI writ large, and the UK has published a new AI strategy that signals an intention to explore regulatory frameworks for AI. The Cyberspace Administration of China has passed a set of draft regulations for algorithmic systems, government offices in the USA are outlining positions and principles for a national framework on AI, and Brazil has adopted principles and foundations for AI regulation.
As data and AI are increasingly viewed as opportunities to carve out economic and geopolitical advantage, the Ada Lovelace Institute begins an analysis of the regulatory landscape by asking which regulatory approaches can support a vision fit for the middle of the century: one that facilitates the responsible use of data and purpose-driven innovation, tackles data injustice and asymmetries of power, and strengthens data rights and regulation.
Ada’s approach
The future of regulation programme explores how regulation can be used to rebalance power over data and AI to the benefit of people and society. We will undertake responsive work to bring evidence to bear on new strategic and regulatory opportunities and debates, and consider emerging and potentially transformative approaches to the regulation of data and AI.
This programme focuses on the following interrelated areas:
- The future of data regulation
In January 2020 we launched the Rethinking data programme by setting up a working group to explore changes in the digital ecosystem that can enable countervailing visions for the use and governance of data. The final report was published in 2022. We are also engaging in a live policy debate in the UK on potential post-Brexit reforms to the data regulation framework.
- The future of AI regulation
With the publication of the EU’s draft AI regulation (the EU AI Act), and the forthcoming UK white paper on AI regulation, the question of whether, and how, AI should be regulated will be a live and contested one over the coming years. Analysing the European approach as it evolves, we seek to ensure the EU AI Act results in AI regulation that works for people and society. We will also translate developments in AI regulation at the European level for British and international audiences.
The impact we seek
The future of regulation programme enables us to achieve our strategic goals in the following ways:
- We are anticipating transformative innovations in the field of governance and regulation, equipping policymakers and practitioners with up-to-date analysis of the context, impact, scope and implications of proposed regulations.
- We are rebalancing power over data and AI by supporting the development of regulatory mechanisms for accountability and transparency through interpretation, scrutiny, expert evidence and convenings.
- We are promoting meaningful data stewardship by exploring transformative mechanisms for supporting responsible and trustworthy data governance.
- We are creating space for diverse scholarship through convening interdisciplinary conversations and equipping lawmakers and policymakers with access to expertise from across the sciences and humanities.
Projects
Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models
Working it out
Lessons from the New York City algorithmic bias audit law
Climate tech for all?
The equality implications of AI-for-climate solutions
Private-sector data for public good: modelling data access mandates
This project aims to model the legal backbone necessary to enable data access mandates in practice.
Rethinking data and rebalancing digital power
What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?
Reports
Keeping an eye on AI
Approaches to government monitoring of the AI landscape
AI assurance?
Assessing and mitigating risks across the AI lifecycle
Regulating AI in the UK
Strengthening the UK's proposals for the benefit of people and society
Discussion paper: Inclusive AI governance
Civil society participation in standards development
Rethinking data and rebalancing digital power
What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?
Events
Ada Lovelace Institute hosts ‘Taking back control of data: scrutinising the UK’s plans to reform the GDPR’
Exploring the foundational premises for delivering ‘world-leading data protection standards’ that benefit people and achieve societal goals
From the Ada blog
AI regulation and the imperative to learn from history
What can we learn from policy successes and failures, to ensure frontier AI regulations are effective in practice?
Seizing the ‘AI moment’: making a success of the AI Safety Summit
Reaching consensus at the AI Safety Summit will not be easy – so what can the Government do to improve its chances of success?
Regulating AI in the UK: three tests for the Government’s plans
Will the proposed regulatory framework for artificial intelligence enable benefits and protect people from harm?
What will the role of standards be in AI governance?
Why standards are at the centre of AI regulation conversations and the challenges they raise
The value chain of general-purpose AI
A closer look at the implications of API and open-source accessible GPAI for the EU AI Act
The Ada Lovelace Institute in 2022
Ada’s Director Carly Kind reflects on the last year and looks ahead to 2023
How does digital constitutionalism reframe the discourse on rights and powers?
A theoretical lens to understand digital policy developments
The role of collective action in ensuring data justice
Five preconditions to protecting people from data-driven collective harms
How the GDPR can exacerbate power asymmetries and collective data harms
Exploring how power asymmetries operate across the law and collective harms
The case for collective action against the harms of data-driven technologies
To what extent are the GDPR's data rights an effective tool for enabling collective action?
The way ahead on AI liability issues
Will the developing EU liability framework for regulating AI prove sufficient?
The political economy of data intermediaries
How do we build data institutions and intermediaries that work for everyone?
Two steps forward, one step back: the EU’s plans for improving gig working conditions
A critique of the EU’s proposed approach to employment status and algorithmic management for platform workers
Getting under the hood of big tech
Auditing standards in the EU Digital Services Act
Making interoperability work in practice: forms, business models and safeguards
Equitable interoperability as a standard
Three proposals to strengthen the EU Artificial Intelligence Act
Recommendations to improve the regulation of AI – in Europe and worldwide
Beyond the regulation of big platforms – supporting different visions for digital ecosystems
An introduction to the Rethinking data programme
From ‘walled gardens’ to open meadows
How interoperability could be the key to addressing platform power
Containing the canary in the AI coalmine – the EU’s efforts to regulate biometrics
Exploring the gaps and risks relating to biometrics in the EU's draft AI regulation
Is the goal of antitrust enforcement a competitive digital economy or a different digital ecosystem?
Antitrust ferment and opportunity in digital markets