Virtual event

Accountable AI: a route to effective regulation

Exploring the foundational premises for delivering ‘world-leading data protection standards’ that benefit people and achieve societal goals

Uses of data: research, justice and medical research
Date and time
2:00pm – 3:00pm, 20 October 2021 (BST)

On 10 September, the UK Government published its proposal for amending the current data protection regime (the UK GDPR). The aim is to create ‘a pro-growth and pro-innovation data regime whilst maintaining the UK’s world-leading data protection standards’.

At the Ada Lovelace Institute, our mission is to ensure that data and AI work for people and society. In order to explore whether the Government’s plans will enable these aims, we are organising a series of five events, each looking at different sections, questions, statements and framing in the Government’s consultation and asking what benefits and challenges are brought by the proposals.

See the full event schedule.

Session 3: Accountable AI: a route to effective regulation

The third event in our series focuses on accountable AI. This theme runs through chapters 1 and 2 of the consultation, particularly the sections on automated decision-making and accountability frameworks, where the Government suggests accountability mechanisms present barriers to innovation.

This event interrogates the assumption that accountability measures are barriers, asks whether they can also deliver benefits to society, and explores questions including:

  • What does a robust accountability framework for data and AI systems look like?
  • How do we ensure accountability systems go beyond tick-box exercises and make a positive difference?
  • How can we design appropriate obligations, rights and controls to match the ethical, legal and societal implications of automated decision-making?

Scroll down for a summary of the key points discussed, and/or watch the event back here:

This video is embedded with YouTube’s ‘privacy-enhanced mode’ enabled, although playing it may still add cookies. Read our Privacy policy and Digital best practice for more on how we use digital tools and data.



  • David Erdos

    Co-Director, CIPIL, University of Cambridge
  • Lilian Edwards

    Professor of Law, Innovation & Society, Newcastle University
  • Cosmina Dorobantu

    Director for Public Policy, Alan Turing Institute
  • Jennifer Cobbe

    Senior Research Associate, Department of Computer Science and Technology, University of Cambridge
  • Divij Joshi

    PhD candidate, UCL Faculty of Laws

Key points discussed during the event:

  • Proposals to abolish Article 22 are not in line with the UK’s signature of, and preparations to ratify, the Data Protection Convention 108+.
  • Data protection impact assessments (DPIAs) are crucial for a risk-based system.
  • Removing all DPIAs and record-keeping requirements will come at a cost to accountability. The benefits, as a result, are likely to accrue to big tech companies, not the public.
  • Accountability mechanisms need to have meaningful community participation, and need to be holistic, not a one-off process.
  • Current systems may not work effectively because guidance and implementation are lacking and there are low levels of compliance – the ICO is not sufficiently resourced and has under-emphasised the importance of enforcement.
  • Data-protection law is important but does not necessarily need to be the law covering everything. Separate laws and/or regulations may be more appropriate for some circumstances.
  • Current data-protection law relies on an active, engaged individual data subject, and this problem is likely to worsen if DPIAs and data-protection officers are removed and charges for subject access requests are introduced.

Speaker interventions

Valentina Pavel, Legal Researcher at the Ada Lovelace Institute, introducing the event, noted that the Data: A New Direction consultation sees the current accountability framework as an unnecessary burden on organisations, which acts as a tick-box exercise, rather than helping to protect people and deliver societal benefits. Chapter 2 of the consultation document, she said, is aiming to ‘develop a more agile regulatory approach that supports innovation and protects citizens.’ The aim of today’s event is to explore what a meaningful and robust accountability framework for data and AI systems looks like in practice.

Asked about the role of accountability in innovation, David Erdos, co-director of the Centre for Intellectual Property and Information Law at the University of Cambridge, stated that he had ‘a lot of sympathy with the idea that the GDPR includes overburdensome red tape: but little with the idea that data protection is not first and foremost about protecting individuals.’ Innovation and growth, he said, were by-products of good regulation and data protection.

David talked about the importance of clarity about what personal data is at issue. Developers, he noted, are often very good at thinking about what is coming into a system, but may not think fully through what data is being generated, even when ordinary data is being used to generate sensitive data. Proactive transparency, as well as reactive transparency, can provide that clarity.

The Government, he said, is proposing to change the legislation for the purposes of training and testing AI systems, and for detecting and correcting bias in AI systems. Both, particularly the latter, are laudable aims. But the current research conditions – including non-particularity (not focusing on a particular data subject or treating them differently) and non-malfeasance – are important safeguards in this context, and if they are removed, he said, we need to look at other accountable safeguards. Minimal safeguards, he argued, include explaining the logic of processing to individuals, and retaining individual rights to object.

David also talked about the proposal to abolish Article 22 – removing the right to human intervention in ‘significant’ automated decision-making. He noted that the provision is in place because AI regulation is at a very nascent stage, and the Data Protection Convention 108+ says that data subjects must have the right to have their views taken into consideration. From the point of view of accountability under international law, he said, it is disturbing that the UK Government, while saying that it has signed and is preparing to ratify the Data Protection Convention 108+, does not mention that abolishing Article 22 is incompatible with Convention obligations.

Cosmina Dorobantu, Director for Public Policy at the Alan Turing Institute, talked about the growing literature on general principles to guide the development and deployment of AI systems. The principles tend to include things like fairness, sustainability and safety, but also overarching principles like accountability and transparency, which act as enablers for the other principles. As a result, she said, the ability to address considerations related to fairness, sustainability and safety depends crucially on the existence of effective accountability mechanisms and the availability of information about an AI system. The principle of accountability plays a key role in AI in general, and in data protection in particular.

But the million-dollar question, she said, is how we move from principles to practice. The current accountability framework offers one answer to this question for data protection. It is not a perfect solution, but there are no perfect solutions, and this does not mean the current system is a waste of time or energy – nor would trying to improve it be. But, Cosmina said, it is not clear that the proposed reform actually represents an improvement.

Cosmina talked about the work currently being done at the Alan Turing Institute on a practical and holistic approach to the responsible design, development and deployment of data-driven technologies. David Leslie and Christopher Burr are leading research into ethical assurance, which they organise into three categories:

  1. Properties that support reflection and deliberation, which provide concrete means for the anticipation and pre-emption of potential risks and harms.
  2. Properties that support documentation and accountability.
  3. Properties that build trust and confidence through transparent communication.

All three categories of properties are also crucial, she said, for data protection. By proposing a more flexible approach, the Government runs the risk of losing what we already have. The proposal to remove the requirement for DPIAs, for example, lessens the properties that support reflection and deliberation, while the proposal to remove record-keeping requirements lessens properties within the current system that support documentation and accountability.

The proposed reforms, Cosmina said, risk producing a system that has fewer of the properties we desire when it comes to responsible data use. The reforms come at a cost. The Government argues that these costs are offset by benefits in the form of reduced burdens on organisations, through more options and more flexibility. But a key question, Cosmina said, is: which organisations will benefit from this more flexible approach? Will it be the SMEs struggling to stay afloat in the pandemic and energy crisis? Or will it be the large online platforms, with nearly unlimited resources, which are able to develop their own privacy-management programmes?

Asked about key ingredients for an accountable approach that benefits both companies and citizens, Divij Joshi, PhD candidate in the UCL Faculty of Laws, noted that there is not a single solution that will fit every context. Divij was part of the team from Ada, AI Now and the Open Government Partnership that put together a recent comparative study. In the EU and the UK, he noted, there have been different variations of Article 22 implemented in different jurisdictions, overseen by different departments and different ministries, with different kinds of guidelines, impact assessments and audits. At the end of the day, however, everyone has been asking the same question: has this been effective at ensuring accountability at all? It’s a difficult question to answer, Divij said, because, as Cosmina said, this is a nascent field, but what is clear is that you need to have key enabling factors in place and then build on them. However, this is not what the Government is proposing in their consultation. Instead, he said, they argue for the removal of accountability measures that are not yet working; and they take a techno-libertarian position that accountability mechanisms are themselves opposed to innovation.

Divij pointed out that transparency and participation are at the core of building anything that can eventually be meaningful for public accountability. Mechanisms that are transparent about how data is being used and processed within the context of data protection (and indeed beyond) are important. But so too is participation, because people should know what is happening, both in terms of regulatory proposals and in terms of how accountability frameworks that are eventually meaningful are built. Divij used the example of impact assessments – a common accountability mechanism, but one that is limited when it only takes into account organisational perspectives. For example, he said, a human rights impact assessment conducted by a large social media company that does not adequately consult affected people is not going to be effective, because there will be a mismatch between how the organisation assumes people feel about the impact of a particular technology and what people actually experience.

Divij also noted that, as well as a lack of these enabling factors, there is a lack of internal regulatory capacity, political will, and incentives built into accountability frameworks, both in the UK and in other countries. While there may be procedural requirements – for example, Article 22 in the context of automated decision-making – if there is a lack of enforcement, it’s difficult to say whether they result in any accountability. That doesn’t mean, he said, that those procedural requirements are wrong, or even that they are unhelpful; it means that there has not been the capacity, will, or incentives to enforce them. Divij proposed an alternative approach: taking a step back to see why the current systems aren’t working. Is it simply that accountability is a barrier, or is it that accountability measures haven’t been implemented effectively enough?

Lilian Edwards, Professor of Law, Innovation & Society at Newcastle University and Expert Advisor on EU AI Regulation at the Ada Lovelace Institute, noted that the consultation document presents strong aims when it comes to accountability, but it also asserts that the current data-protection framework is disproportionately burdensome, a claim that is not backed up by evidence. If the problem is a lack of clarity, she said, then the solution is better guidance, or even making better use of the guidance that has already been produced by the ICO, the Alan Turing Institute, or the European Data Protection Board, amongst others. The risk of changing the legislation instead, Lilian said, is that it may take away the benefits we are trying to enhance.

Commenting specifically on Article 22, Lilian noted that it appears to be set up as the ‘sacrificial lamb’ in this consultation. It is an ill-drafted, old provision, dating back to the 1980s and 1990s, she said, and so it has little effect in either protecting individual rights or creating accountability. Having said that, throwing it out entirely would not be useful either, she said. It has been useful in the Dutch Uber cases, where workers – low-paid, often racialised workers who face fines, disciplinary action or sacking by their invisible, much more powerful, software overlords – had only two avenues of recourse: subject access requests, to find out what data was being collected about them and how it was used; and the right to object to solely automated processing.

It’s hard, Lilian said, to imagine a world in which we don’t think it’s a good idea for people to be able to object to truly automated processing which has these Kafkaesque, abusive effects. She cited other examples, such as facial-recognition systems, which may be used to decide whether someone is a likely terrorist or should be excluded from a public area; or systems which tell you that you cannot have a loan but not why; or the Ofqual algorithm, which decided whether a student went to the university they wanted. Not many people would say that these systems, with no recourse mechanisms, were accountable. Lilian noted that with the Ofqual algorithm, the Government backed down, so we never found out whether that was an Article 22 decision. But a worrying development in that episode was the suggestion that a decision does not count as ‘solely automated’ if a human inputs the data at the start of the process – a reading that would cover virtually all decisions.

Lilian said that she is in favour of rethinking Article 22 – it would be a lot more useful if it were better phrased. She is in favour of investigating what type of decisions we call ‘solely automated’ and which ones we consider ‘assisted’, as well as which ones need which remedies against them. This, she argued, needs a full review – perhaps by a Select Committee – not something rushed through in three pages of a consultation. She pointed out that we should look towards, for example, the draft EU AI regulation, which has a much more sophisticated set of proposals about what human oversight might really mean, and which puts the onus on the builders – not the deployers – of a system to build it in a way which is transparent enough to be overseeable by humans.

Lilian also pointed out that the consultation proposes a risk-based system instead of a one-size-fits-all one, but then goes against this in its proposal to remove the requirement for DPIAs, which are a crucial part of understanding what risks particular activities actually pose. She also pointed out that the data-protection officer (DPO) function can be provided as a service, which means it does not have to be a burden on your company. She finished her intervention by emphasising that it is not possible to have a database society with any degree of fairness, accountability or transparency without giving people the fundamental right to know what data is being collected about them and what is done with that data – and the proposal to impose charges for subject access requests works against this, particularly affecting those who are least empowered and most economically deprived.

Jennifer Cobbe, Senior Research Associate in the Department of Computer Science and Technology at the University of Cambridge, said that it was important to be clear about what we actually mean by algorithmic accountability and transparency. She noted that what we want to prioritise isn’t making the algorithms themselves transparent or accountable – although that had been a diversion in research and policy work a few years ago – but making the people and organisations responsible for those algorithmic systems accountable for their design, deployment and use. This, she said, would help regulators and other oversight bodies scrutinise algorithmic systems to identify problems before they go live, and spot developing problems in order to minimise harm. This kind of holistic accountability, Jennifer said, would make the whole process reviewable, as a form of ongoing accountability that is not just a single event or one-off objection but a process between the people developing, overseeing, and directly affected by systems.

Jennifer also agreed with Lilian that the GDPR – and particularly Article 22 – does not offer a route to accountability in its current form. Like Lilian, she is not opposed to reform – but the reforms outlined in the consultation don’t offer improvements, she said, and are misguided. She questioned the extent to which data-protection law is the right mechanism. It’s important and should be strengthened, but it may not be a tool that should be stretched to cover all technological issues. Instead, she said, data-protection law should provide a solid basis of widely applicable general protections for individuals in circumstances where their personal data is being processed. But on top of that should be a range of more specific, focussed laws and regulations.

Echoing some of the discussions in the first event of this series, Jennifer also argued for being more critical of the concept of innovation, and for more recognition that sometimes so-called barriers to innovation aren’t necessarily a bad thing at all. Not all new ideas, she pointed out, are good ideas. Jennifer agreed with David that the point of data-protection law is to protect individuals around the processing of their personal data – just because something is new or innovative doesn’t mean that it is desirable, or that it shouldn’t be subject to some kind of legal oversight.

Asked to comment on specific provisions in the consultation, David noted that despite the GDPR being a risk-based regulation, when you go through the specific provisions, the literalist approach leaves little space for an actual risk-based assessment. He gave the example of health data: most employers have some kind of sick-leave system, and so the record-keeping provision applies. David pointed to the list prepared by the ICO of situations when a DPIA is necessary, which he said is opaque, difficult to interpret, and could be very expansively interpreted. For example, the provision on matching personal data could be interpreted to cover any kind of comparison of any two pieces of data from different sources.

David argued that DPIAs are important, but only for truly high-risk processing. He also noted that DPOs do have a role, but that he would not oppose making the requirements for the public and private sectors more equal. The much more important issue, he said, is the lack of enforcement, which leads to low levels of compliance and to data protection not being treated as a priority. We really do need a more effective law, he said, in place of the current situation, which can be seen as a form of ‘privacy theatre’ where protections mean very little because companies know that the chances of them being enforced are very slim.

Cosmina agreed, saying that she feared the proposed reforms would lead to even less enforcement than we have seen thus far. She pointed out that even though the ICO has doubled in size – and now has about 800 staff – it still isn’t anywhere near resourced enough to tackle big-tech companies effectively. The more flexible approach proposed in the consultation, she said, would actually increase the burden on the ICO, as it would need to spend more time and resources understanding how each company is trying to meet its legal obligations, in order to establish whether it actually is.

Cosmina pointed out that in general, governance structures often kick in too late – they’re often focused on work that has already been done, rather than anticipating possible future problems. She feared that the consultation – which includes proposals to remove data protection impact assessments, and to remove requirements for prior consultation with the ICO – shifts the balance still further towards fixing issues once they arise, rather than preventing harms from happening in the first place. She agreed with Jennifer’s point that not all innovation is good or reasonable, citing the October 2021 news articles about facial-recognition technology being used to give children in the UK access to school lunches.

Divij made the point that when talking about regulating artificial intelligence, we need to be clear about what we are trying to regulate, and what we are trying to do with that regulation. Once we’ve done that, he said, we can think about where that regulation needs to be housed, and to what extent issues of personal-data protection converge with the potential harms of algorithmic decision-making, for example. Some countries have created new regulations, but the EU data protection regime that was transposed into the GDPR (and then into national legislation) goes back to 1995, and it’s not entirely clear how it covers artificial intelligence now. Procedural protections for automated decision-making, he said, don’t necessarily fit into data-protection regimes – there is definitely a role for data-protection regulators, but the GDPR may not necessarily cover discriminatory effects based on data that is neither individual nor personal.

Divij asked if data-protection experts are necessarily equipped to understand discrimination law, and the potential for bias and discrimination from AI technologies. This, he said, is something that needs to be worked out before deciding whether these obligations should fall under data-protection law, or whether there needs to be a separate regulation on AI.


Responding to questions raised by members of the audience, Lilian said that the draft EU AI regulation doesn’t use impact assessments, but does require self-certification for certain systems that are deemed to be high-risk. She also pointed out that under the current system, we only seem to have one framework for innovation – who can we sell or transfer our data to, so that they will invest in creating innovation that will benefit the UK? We would all like the products of AI innovation, she said, but we find again and again, in our public-engagement work, that people are not at all keen on their sensitive health data being transferred to private companies, for research or innovation or however it is framed. A better idea, she suggested, would be to consider different economic models of innovation and investment. One example, she suggested, would be to invest more in our existing public bodies – like the NHS – so that they can make use of data for innovation and social benefit, so we don’t have the uncertainty of further processing and dissemination of data, and we get the long-term benefits as well as control.

Lilian also talked about other potential frameworks for enforcement. As well as increasing resourcing for existing regulators so that there is ‘equality of arms’ with tech companies, she said, we could look at multi-regulator co-operation – we live in a world made of data, but the ICO cannot be the regulator of everything. The ICO doesn’t have the external expertise to make decisions about fundamental rights such as equality, or freedom of expression. Lilian pointed out that, without a statutory basis, regulators like the ICO, CMA, and FCA already co-operate with each other – that co-operation could be expanded and made statutory. She also suggested a potential Ombudsman for AI – a body that could respond to individual complaints in a world where people mostly don’t have a clear idea of how their data is being used or who the data controllers are, as well as refer complaints to the right regulator, track patterns of complaints, and perhaps even organise representative actions that are at present left to civil society. She is in favour of GDPR reform, she said, but it needs to be thoughtful reform.

Jennifer agreed with Lilian, and referred the audience to a blog post she had written last year calling for an Ombudsman on public-sector automated decision-making that would take up complaints and investigations. She pointed out that a major weakness of data-protection law is that it is too dependent on a neoliberal model of the ‘active, engaged data subject’ who monitors what is happening and lodges complaints accordingly. Jennifer noted that the consultation proposals to remove DPIAs and DPOs, and introduce charges for subject access requests, burden individuals still more and dissuade people from engaging with their rights. In her opinion, she said, much of data protection is not worth the paper it’s written on – if data subjects’ choices aren’t backed up by a regulator or regulatory regime that takes them seriously, then what is the benefit of giving them more control? More accountability isn’t a bad thing, but an effective regulator has to step in to make sure that these obligations are complied with.
