
Three proposals to strengthen the EU Artificial Intelligence Act

Recommendations to improve the regulation of AI – in Europe and worldwide

13 December 2021

Reading time: 11 minutes


The European Commission’s Artificial Intelligence Act (‘the AI Act’) proposal is significant as the world’s first comprehensive attempt to regulate AI, addressing issues such as data-driven or algorithmic social scoring, remote biometric identification and the use of AI systems in law enforcement, education and employment.

Since its release in April 2021, the AI Act has not only created waves within industry, civil society and the institutions of the European Union (EU) but has also attracted global attention from anyone interested in how AI will be regulated – including those considering legislation and regulation in the UK.

Regulating AI is among the toughest of legal challenges worldwide, and the Commission’s willingness to engage with ensuring the development of ‘trustworthy AI’ is commendable. The potential of the AI Act to become a global standard in the regulation of AI, and to serve as inspiration for other regulatory initiatives around the world, is exciting. Conversely, there is a risk that the AI Act could fall short of its goal of providing a framework for ‘trustworthy AI’, offering solutions that are, in practice, unworkable or ineffective, and instead create a harmful global model. 

In the spirit of avoiding this latter outcome, we identify significant developments that will be required to the infrastructure around the Act, and set out three ‘big ideas’ to strengthen the proposal and maximise protection and promotion of the interests of people and society.

As the European Parliament concludes a long debate on its approach to the AI Act, this is a key moment to think about these issues and actively work to ensure that the goals of the proposal will be met. The final decision of the Conference of Committee Chairs to name the Internal Market Committee (IMCO), represented by Brando Benifei, and the Civil Liberties (LIBE) Committee, represented by Dragos Tudorache, as co-leads not only means that the work of the European Parliament can begin in earnest, but also sends a clear signal that the regulation of AI is not just about economics and the functioning of the internal market, but a sociotechnical challenge with crucial impacts on individuals, their rights and society.

Taken alongside the recently published Council progress report, the direction of travel appears to be towards making AI in Europe compatible with fundamental rights. 

Participating in the AI Act legislative process and recognising its significance  

Acknowledging the global significance of the Act, the Ada Lovelace Institute – an independent research institute that has worked predominantly in the UK since its inception in 2019 – has decided to widen its focus to consider EU regulatory developments comprehensively. We now have a base in Brussels, close to the heart of EU policymaking, and the capacity to mobilise our research to play a role in ensuring that this new regulatory framework will work for the benefit of people and society.

In early 2022, the Ada Lovelace Institute will publish two position papers relating to the AI Act. The first will suggest concrete ways in which EU legislators should strengthen the Act to guarantee user and societal interests. As well as helping to shape the final version of the Act, we will aim to clarify how it fits within the emerging EU digital regulatory framework and the EU legal framework as a whole. The second will take a more aspirational look at what a global model for regulating AI might look like in the future.

As an organisation that centres convening and collaboration in our research methodologies, we will encourage reflection on the big challenges around regulation and thinking outside the limits of the framework currently being envisaged. We will hold a series of events designed to explore how the AI Act interacts with instruments like the draft Digital Services Act, Digital Markets Act and product liability reforms, as well as existing consumer law. Going forward, we plan to engage leading European and global experts to discuss how legislation should tackle the challenges of the algorithmic society. 

As we outline in our recent report, Regulate to innovate, clear, unambiguous rules are necessary in order to create a digital world that benefits people and society. If it successfully navigates the pitfalls, the AI Act has the potential to create an invaluable global model.

In advance of the legal analysis, we are putting forward three areas where the proposal will need serious reflection: 

  1. Understanding the lifecycle of AI.
  2. Ensuring that those who use, and are impacted by, AI are empowered to participate in its regulation.
  3. Reshaping the AI Act to be truly ‘risk-based’.

1. Understanding the lifecycle of AI: who should the AI Act regulate? 

AI cannot be regulated as if it were a single product or service. Instead, it should be recognised and regulated as a process that operates through a lifecycle of construction, adaptation and deployment, with complex impacts on people and society.   

The AI Act draws its inspiration largely from existing product safety legislation, which conceives of AI ‘providers’ as the equivalent of manufacturers of real-world consumer products like microwave ovens or toys. But AI systems are not ordinary consumer products, nor are they ‘one-off’ services: they are complex processes delivered dynamically through multiple actors – ‘the AI supply chain’ or ‘AI lifecycle’.

By taking this approach, the AI Act as currently drafted places most duties on ‘manufacturers’ or ‘developers’ (called ‘providers’ in the proposal) at the point of the AI lifecycle when data-driven systems are first brought to market – it is their job to ‘get AI right’. This overlooks the extent to which these systems learn and evolve, creating differential impacts post-release, and the fact that downstream deployers can put them to use in ways that differ significantly from the developers’ original intentions. Deployers of AI (‘users’ in the terminology of the AI Act) can become providers when they substantially modify an AI system, but this will be difficult to enforce, and fails to capture the complexity of how AI is used in the real world.

To take an example, let’s imagine applying this principle to regulating ‘general purpose’ AI systems such as OpenAI’s large language model GPT-3, which, some studies have shown, can produce biased outcomes. Because reviews carried out before an AI system is brought to market cannot fully predict unintended consequences, providers may claim they cannot anticipate how these systems will be used in future deployments. Downstream deployers, by contrast, will be able to see actual uses and impacts, but may have neither the legal nor the practical resources to make these systems compliant with human rights.

Similarly, an AI system used to manage labour and review performance may seem equitable in the abstract, but could become a vehicle for discrimination when deployed in the workplace. For example, recent Uber disputes in the UK have shown how facial-recognition systems used to verify identity can discriminate against workers of colour, who make up the majority of the Uber workforce.

In our current ethics and accountability in practice work, we are investigating how algorithmic impact assessments (AIAs) can be used to enforce the ethics and accountability of AI systems before they are brought to market. Although AIAs are emerging as a methodology for identifying and documenting the potential impacts of an AI system, the AI Act lacks anything like a comprehensive ex ante impact assessment, and even the certification against ‘essential requirements’ applies only to ‘high-risk’ AI.

2. Ensuring that those who use, and are impacted by, AI are empowered to participate in its regulation, and that their voices are given meaningful consideration 

As noted above, the AI Act proposal frames ‘users’, confusingly, as the deployers of AI systems, rather than as ‘end-users’ in the more commonly understood sense – consumers, citizens or those ultimately affected by AI systems. Indeed, users in the sense of those affected by an AI system are left out of the proposal altogether. They are not given a voice in the shaping of the technical standards that define the Act, or empowered in any substantial way to complain about or challenge AI systems operating in the market.

This approach is incompatible with an instrument which, regardless of its Internal Market legislative basis, aims to enhance and promote the substance of the EU Charter of Fundamental Rights. Instead, we propose including end-users and their voices in the emerging framework that is meant to protect them.  

The proposal repeatedly fails to create meaningful space for individuals’ participation. It does not contain provisions to consult them at the very start, when providers of ‘high-risk’ AI systems are required to certify that they meet the requirements of the Act, even though it is these individuals who will suffer the potential negative impacts. Nor does it give end-users a voice in the decision-making process about what standards high-risk AI providers should meet – this is the purview of technical bodies, which in practice are often unelected and industry-dominated, and therefore lacking in democratic legitimacy.

Significantly, the proposal also does not give users any right to challenge or complain about AI systems in use, if or when they go wrong and harmful impacts occur. This is a missed opportunity to ensure effective enforcement. Experience with the General Data Protection Regulation (GDPR) shows, in areas like targeted advertising and data transfers out of the EU, that users as activists and complainants are as crucial to post-launch enforcement as regulators. The position paper of the European Consumer Organisation (BEUC) provides a particularly comprehensive vision of how a meaningful right to redress can be given to individuals, and we will build on some of the ideas in that paper in our full analysis in early 2022. 

Informed by our work on participatory data governance – understanding the spectrum of possible approaches from informing to involving those affected by technologies – our analysis will also consider ways to boost public participation in the technical committees that will set the standards that high-risk AI systems must meet, and in assessing the impacts systems have on rights before they are put on the market. We will also interrogate the resources that regulators would need to effectively enforce the AI Act, and that civil society would need to represent the voices of the real AI users – those affected by the technologies in use. 

3. Reshaping the AI Act to be truly ‘risk-based’, with categories of risk that reflect the interests of those affected by technologies – in particular marginalised groups – and putting measures in place to future-proof it.

As it stands, the proposal introduces a number of categories of risk, which form the foundational structure of the Act, and include:  

  1. AI systems giving rise to ‘unacceptable risks’, such as social scoring by public bodies, which are in principle prohibited. 
  2. ‘High-risk’ AI systems, including (some) AI used for law enforcement and private-sector credit scoring systems, which need to demonstrate that they meet ‘essential requirements’. 
  3. ‘Limited risk’ systems, such as deepfakes or AI-enabled chatbots, which are subject to relatively low-level requirements of transparency.  

There is currently no substantial justification in the Act to determine why a system would be allocated to a particular category. This lack of clear criteria for inclusion in a risk category is problematic, particularly when considering criteria for adding new systems to the list of high-risk systems. 

The criteria accompanying the various levels of risk should also be adequately future-proofed. Currently, the list of unacceptable risks is completely closed, and the mechanism that allows the list of high-risk AI systems to be expanded (Article 7) is very limited: the list cannot be materially extended by adding new categories, only partially modified by adding subcategories to what is already present.

In putting forward a list of clear criteria, which can be judicially reviewed, the EU legislators should go beyond risks to individual rights, to consider risks to groups, vulnerable individuals including migrants or ethnic minorities, as well as the rule of law and societal risks more broadly.  

Since AI is defined by its rapid and unpredictable evolution, there is a need to ensure that the scope of this mechanism is expanded, to help future-proof the Act.

These suggestions to strengthen the proposal are necessarily limited and partial, but they aim to provoke debate and anticipate the evidence that our forthcoming analysis of how EU legislators should strengthen the Act to guarantee user and societal interests will bring forward in early 2022.

In bringing forward these ideas, we build on the work done by other organisations on this topic. Most recently, 115 civil society organisations have put forward their recommendations on how to improve the proposal. Working collaboratively within the ecosystem of activity in this area, we aim to bring both our expertise and our unique perspective as an independent research organisation whose main focus is not digital rights, but ensuring technology works for the benefit of people and society.

The ideas presented in this post have been developed together with Prof. Lilian Edwards, Ada’s expert legal adviser on the EU AI Act.
