
Accountability for algorithms: a response to the CDEI review into bias in algorithmic decision-making

Reviewing bias is welcome, and stopping the amplification of historic inequalities is essential.

Anna Thomas

27 November 2020

Reading time: 6 minutes


The Centre for Data Ethics and Innovation (established with a unique mandate to develop a governance regime for data-driven technologies) was tasked by Government to advise on how to address ‘bias’ that may be created or amplified by algorithmic decision-making. A landscape summary was released in July 2019, followed by an interim report focused on statistical bias. The final report is published today (27 November 2020).

I have an interest to declare: I am one of three independent advisers to the CDEI Review (with Dr Reuben Binns and Robin Allen QC). And although I support the findings of the report, I think the report’s terms of reference (which hinge on the contested word ‘bias’) have led to a review that does not go far enough. Rather than advising the Government to provide leadership and coordination by seeking further guidance, the review should have advised that fresh legislation is needed to achieve its stated aims.

And I bring contextual research: the Institute for the Future of Work (IFOW), which I have the privilege to lead, published a report last month on bias, inequality and accountability for algorithms: Mind The Gap: How to Fill the Equality and AI Accountability Gap in an Automated World.

The pandemic has seen an explosion of digital technologies at work. Over the summer, public frustration about harms and the lack of accountability boiled over in the wake of the Ofqual A-level grading farrago. Even today, a new survey suggests 1 in 5 employers are tracking workers online or planning to do so.

Invisible and pervasive, automated technologies involving mass data processing have taken over an extraordinary variety of tasks traditionally carried out by people, including HR professionals, teachers and vast numbers of managers and public servants, in response to drives to meet new demands and increase efficiency.

Against this background, the CDEI Bias Review has much to offer. Three things, in particular, stand out:

First, the report rightly proposes moving from an after-the-fact approach, driven largely by individuals with limited access to relevant information after adverse impacts have hit, to pre-emptive action and governance by decision-makers, from the earliest point in the technology innovation cycle right through deployment. As the report says, a ‘more rigorous and proactive approach’ to identifying and mitigating bias is now required.

This is a significant shift – and the main message of IFOW’s Mind the Gap report, which recommends a new statutory duty requiring users and developers of algorithms to think ahead and systematically evaluate impacts on equality so they can make reasonable adjustments.

Second, it recognises the scale and breadth of both individual and collective harms posed by the use of automated technologies trained on data that embeds historic inequalities and patterns of behaviour and resources. In turn, as the scale and speed at which these tools are adopted increase, so too must the pace, breadth and boldness of our policy response to meet these challenges and rebuild public trust.

The report’s recognition of this issue should not be downplayed: it is a milestone. Challenges connected to the potential of algorithmic systems to amplify and project different forms of individual and collective inequality into the future have too often been minimised, or avoided altogether.

Understanding and responding to adverse equality (and other) impacts will mean building cross-disciplinary capabilities and expertise at the CDEI itself and more widely within Government, regulators and industry (as IFOW and the Ada Lovelace Institute have recently modelled).

The CDEI report recognises that ‘bias’ in algorithmic decision-making systems (which are inherently sociotechnical) reflects wider problems in society: ‘as work has progressed’, it says, ‘it has become clear that we cannot separate the question of algorithmic bias from the question of biased decision-making more broadly’.

The CDEI has demonstrated an admirable willingness to develop its own expertise and to engage a wider stakeholder base and the public as part of follow-up work, because society as a whole will need to be engaged in assessing the trade-offs and reasonable adjustments required to counter the harms caused by bias. The next step will be a formal forum and mechanisms for wider engagement to give effect to this purpose: to ‘foster effective partnerships between civil society, government, academia and industry’.

Third, many of the recommendations to improve public-sector transparency beyond the strict requirements of existing legal regimes – including a new, mandatory transparency duty – are strong, and supported by a detailed summary of existing legal requirements. But the report (while recognising that decision-making is dispersed and traditional divisions do not always stand up) stops short of extending this recommendation to the private sector.

This takes us to what I believe is the report’s Achilles’ heel. The truth is that voluntary guidance, coordination and self-regulation have not worked, and further advisory or even statutory guidance will not work either. In spite of striking moves in the right direction, today’s report, with its focus on ‘bias’ associated with individual prejudice, stops short of making the logical leap to regulation.

If strong, anticipatory governance is indeed crucial (as both IFOW and the CDEI say) then new regulatory mechanisms are required to ensure that the specific actions which have been identified as necessary are taken.

Our analysis of case studies – which shows the harms acutely felt by workers in the most insecure jobs – has shown that existing legal frameworks have not kept pace with the use of algorithmic systems trained on data that encodes structural inequalities. There is firm evidence that principles-based approaches do not translate into practical action: protection and enforcement are inadequate, and inconsistencies abound.

This is why our Equality Task Force has recommended fresh legislation in the public interest: an Accountability for Algorithms Act. Algorithms and artificial intelligence must be used in fair and transparent ways across the public and private sectors; equality impacts must be assessed over time, with reasonable adjustments made to counter any adverse impacts detected; the ethical principles which have given us a normative basis for regulation should be placed on a statutory footing; and human agency must be clearly affirmed. Humans must be properly accountable for their decisions about the design and use of algorithms.

Being first to regulate in this area could have considerable advantages for the UK. Regulating early should inspire better innovation, offer clarity and build trust. In the same way that medical regulation makes the UK an attractive proposition for the life sciences industry, thoughtful regulation would help foster new industries and jobs in responsible technology. These strengths should be leveraged to make the most of ‘first mover’ advantage.

The CDEI report is right that we have a small window of opportunity, and right to recognise that the law should be improved over time. We at the Institute for the Future of Work say: roll on the second phase of the debate.


-> Read the CDEI Review into bias in algorithmic decision-making

-> Read the Institute for the Future of Work’s Executive briefing on an accountability for algorithms act
