
Meaningful transparency and (in)visible algorithms

Can transparency bring accountability to public-sector algorithmic decision-making (ADM) systems?

Cansu Safak, Imogen Parker

15 October 2020

Reading time: 19 minutes


Over the last two months, we’ve seen the demise of three major public-sector algorithmic programmes – the Home Office visa streaming tool, the ‘Most Serious Violence’ programme under consideration by the West Midlands Police Ethics Board, and Ofqual’s A-level grading algorithm – following public outcry and ethical critique. In the background, local councils have been ‘quietly’ abandoning welfare and social care decision-making systems.

These high-profile retractions have taken place against a shift in public sentiment towards greater scepticism and mistrust of ‘black box’ technologies. That shift is evidenced in increasing awareness of the risks ADM systems pose to citizens – potentially invasive profiling, hard-to-interrogate decisions and the impact of those decisions on people’s lives – and in a rise in Government rejection rates for ‘right to know’ public records requests made under freedom of information legislation.

Research conducted primarily through investigative methods has revealed the extensive application of predictive analytics in public services across the UK,1 bringing into focus the increasing adoption of technological solutions to address governance problems like the distribution of benefits or the allocation of resources in social care.

Ongoing efforts by researchers reveal individual cases of concern, but there remains a persistent, systemic deficit in public understanding about where and how these systems are used. This points to a fundamental issue of transparency.

Transparency has a long tradition in anti-corruption work and is a central fixture in algorithmic accountability debates, both in the fields of regulation and data ethics. In recent years, there have been repeated calls for transparency of ADM systems, although with little effect – possibly because such calls have tended to take the form of high-level rhetorical positioning rather than substantive proposals.

Civil society organisations have gone a step further, suggesting that public-sector organisations should use transparency mechanisms such as mandatory reporting obligations and public registers.2

Read more on transparency registers, as well as other complementary mechanisms for algorithmic accountability in the public sector, in our report with the AI Now Institute and Open Government Partnership: Algorithmic accountability for the public sector.

Despite this debate, there remains little consensus in the UK or across Europe about the form transparency should take with respect to ADM systems. Debates about transparency often skip over a more fundamental conversation about what transparency is trying to achieve and fail to grapple with the challenges of implementing it in a systematic way.

In this article, we uncover some of the problems with transparency and point towards ways of thinking and practices that can help to support a robust system of governance for ADM systems in use in the public sector.

Read more about the different types of transparency mechanisms currently in use in the UK, and examine their contributions and limitations, in our Transparency mechanisms explainer. [119KB PDF]

Problems with transparency

The expansion of ADM systems across public service delivery has seen an equivalent proliferation of ethical principles and frameworks aiming to articulate conditions of accountability around these systems.3 One of the key concepts and demands emerging through this agenda has been transparency and, alongside it, related concepts like explainability and auditability.

Within this context, transparency has been used to cover a range of meanings, from the specific to the general: from recording third-party involvement to publishing specific, technical code; or from Google’s AdSetting transparency tool to government policy.

Over the last year, there have been commendable efforts to address the problem of enabling meaningful transparency and deepening the degree of engagement a person or group of people may have with information about algorithmically generated decisions. An important project in the UK context is Project ExplAIn – a collaboration between the Alan Turing Institute and the ICO that has led to new guidance on how to explain decisions made by AI to affected individuals. The guidance sets a high bar – and demonstrates a great deal of thinking – on what information should be (proactively) put forward to individuals, and how to communicate that information.

There’s an interesting question to explore about how the notion of explanation itself relates to understanding. To start from the point of understanding rather than explanation would mean to start from the interests, perspectives and frameworks of thought of the individual, rather than the position of the ‘object’ of study.

In ideal circumstances, people would have the capacity to understand and autonomously judge a system, including questioning the justification for its existence, from the ground up. Individuals, in this idealised scenario, would not simply receive information on how a system works, its application and related organisational responsibilities. Instead, they would access and work through that information as part of a collective body, and in a context in which the ‘social constructedness’ of values – and of the process of assigning meaning to decisions and explanations themselves, with their tensions and competitive interests – is acknowledged and accounted for as much as possible.

In the absence of these ideal circumstances – the reality in which we are working – we need at least to set up mechanisms that square the available information with the available resources to receive and discuss it critically. For meaningful transparency, this means we need to consider the ecosystem of information surrounding ADM systems: the structures, capabilities and institutions needed to make the information meaningful.

We are certainly not the first to explore the issue of meaningful transparency. Mike Ananny and Kate Crawford challenge the false equation of transparency with meaning in their article Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. They expose the fallacy of assuming a direct link between visibility and understanding. This connection is problematic because it assumes not only the legibility of information but also the competence of audiences to interpret and leverage information as an instrument of accountability.

Sun-ha Hong has also presented a forceful critique of transparency in his book Technologies of Speculation, emphasising the democratic consequences of the proliferation of data. Hong comments on the burden placed on the liberal democratic citizen, who has been enlisted as a ‘free auditor of the state’4 through the labour demanded by the task of translating ever-growing repositories of data and information into knowledge.

In the UK, the limitation of visibility as a mechanism of meaningful transparency is exemplified by the response to the key recommendation made by Nick Macpherson in 2013, following the modelling failure in the evaluation of bids for the InterCity West Coast franchise, that all Government departments should list their ‘Business Critical Models’ (BCMs).5

The implementation of the proactive reporting of BCMs, however limited, sets an important precedent on this issue, but also serves as a warning against tokenistic exercises. The five-year progress update states that many BCM lists have not been updated since 2014, and a review of the lists available online offers little insight into what type of analysis the models actually perform. To avoid the same fate for ADM transparency mechanisms, we must consider whether we can expect them to strengthen accountability structures.

The minimal objective of imposing a transparency obligation on ADM systems is to free the public, including the research community and civil society, of the burden of creating basic visibility of the systems, in order to develop a more ambitious and purposeful form of accountability. The main challenge to this goal is ensuring that transparency mechanisms implemented for this purpose do not become a substitute for meaningful, analytical assessment of ADM systems.

A further dimension of the problem of transparency in public-sector ADM systems is identifying exactly which systems are of interest. There is descriptive uncertainty around ADM systems, owing to the challenge of classifying them in a way that captures both their technical features and their areas of use.

Often these technologies take the shape of multifunctional data systems, built and iterated for exploratory purposes. This creates difficulties in posing meaningful enquiries, especially to public officials who may not be equipped to assess the salience of the various algorithmic systems in their domain.

Regulatory frameworks have attempted to bring some specificity to this problem through reference to the level of risk posed, or the degree of human intervention.6 These parameters, however, remain open to interpretation as a result of minimal enforcement, limited guidance and few legal precedents (to date).

Thinking about transparency targets

When systems are imprecisely defined and developed, resolving how they interact with other structures – and how to target transparency at those structures – can be complicated. Policy proposals have grappled with this issue by specifying particular targets for transparency. The European Parliament’s Panel for the Future of Science and Technology has offered a breakdown of the areas in which transparency may be demanded, listing: data, algorithms, goals, outcomes, compliance, influence and usage.
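To make this breakdown more concrete, the sketch below shows one hypothetical way a disclosure covering those seven areas might be structured as a record. The field names and example values are our own illustrative assumptions, not part of the Panel’s proposal.

```python
from dataclasses import dataclass

# A hypothetical disclosure record organised around the seven transparency
# targets listed by the European Parliament's Panel for the Future of
# Science and Technology. All field names and values are illustrative
# assumptions, not a standard.
@dataclass
class DisclosureRecord:
    data: str        # provenance and characteristics of the input/training data
    algorithms: str  # the model type and decision logic, in plain terms
    goals: str       # the policy objective the system is meant to serve
    outcomes: str    # measured effects on the people subject to decisions
    compliance: str  # legal bases, assessments and audit results
    influence: str   # third parties who shaped the system (vendors, consultants)
    usage: str       # where, how often and by whom the system is applied

# Entirely invented example, for demonstration only:
example = DisclosureRecord(
    data='Historic benefits claims held by the local authority, 2015-2019',
    algorithms='Logistic regression risk score, reviewed by a caseworker',
    goals='Prioritise fraud investigations',
    outcomes='Quarterly error-rate and disparity reporting',
    compliance='DPIA completed; GDPR Article 22 safeguards documented',
    influence='Built under contract by an external analytics vendor',
    usage='Used weekly by the fraud team to rank open cases',
)
```

Even in this toy form, the record makes clear that only one of the seven fields concerns the algorithm itself; the rest describe the human and institutional systems around it.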

This approach is reflective of much of the scholarly work that defines the conditions and targets of algorithmic transparency, which has recently been reviewed and expanded on by Margot Kaminski, Associate Professor at the University of Colorado Law School, in Understanding Transparency in Algorithmic Accountability.

Kaminski’s work is of interest to our investigation because it charts the evolution of debates around algorithmic transparency (in more nuance than is possible to address here) by identifying different framings (or ‘targets’, when thinking about transparency) of algorithmic systems: first, the algorithm itself; second, the ‘assemblage of human and non-human actors’ around algorithmic systems; and last, private-sector governance mechanisms, which she argues are missed by the first two framings.

1. The algorithm

The first target of transparency is the algorithm itself, which Kaminski notes has received considerable attention from both computer science and regulation. This target can be positioned as the focal point of explainability discourses and seen as a route towards making systems more legally compliant.

However, this framing has been regarded by numerous academics as insufficient in accounting for the human systems that problematise the use of algorithms. Ananny and Crawford have stressed that ‘there is no “single” system to see inside when the system itself is distributed among and embedded within environments that define its operation.’

Genuine understanding and engagement with algorithmic processes require the ability to situate them in the context of the larger systems and policies they are a part of and interact with. Algorithmically focused transparency efforts often fail to produce this understanding as a consequence of their concentration on the technical functions of systems. In response to this problem, Ananny and Crawford suggest that ‘rather than privileging a type of accountability that needs to look inside systems… we instead hold systems accountable by looking across them.’

2. The ‘assemblage of human and non-human actors’ around algorithmic systems

The second target builds on an understanding of the deficiency of this isolationist framing, also made evident by the examples of recent algorithmic scandals, which have demonstrated that most public-sector systems tend to be uncomplicated in their logic and require transparency of processes more than ‘black box’ explanations.

Kaminski describes how, through considerations of these limitations, many scholars have arrived at an expanded view of algorithmic accountability that takes as its target the ‘assemblage of human and non-human actors’ including the ‘organisations and firms around’ these systems.

But Kaminski critiques this second target, arguing that, ‘even this broadened framing misses a crucial insight about what systems need to be made visible and accountable’.

3. Private-sector governance mechanisms

So Kaminski proposes a third target system: the governance regime. This is based on the observation that new governance approaches (referred to in regulatory theory as ‘collaborative governance’ or ‘new governance’, describing the public-private partnerships that implement their own codes, standards and principles to create ‘soft’ accountability under legally vague terms) can easily fall prey to regulatory capture. Therefore, Kaminski argues, the third target of transparency must be the delegation of algorithmic governance power to the private sector.

Using Kaminski’s third framing, we can more clearly assess what we want to know: which specific organisations and institutions we want to seek transparency from, what obstructions prevent us from achieving this transparency, and how to resolve them.

What do we want transparency to achieve?

To develop transparency around ADM systems, we will need to gain insights into the decision-making processes that determine their goals, the impact of their outcomes, how they interact with social structures and what power relations they engender through continued use.

This may sound straightforward, but experience suggests that these are not static pieces of information that can be easily captured; they require ongoing investigation using a multitude of sources. The knowledge gaps evidenced by the inability to access complete, contextual information on ADM systems are often due to information being spread across different organisational structures, under different practices of documentation.

As Nicholas Diakopoulos has articulated: ‘technical systems are fluid, so any attempt at disclosure has to consider the dynamism of algorithms that may be continually learning from new data’. This dynamic character extends beyond data and into the human and governance systems identified above as transparency targets.

Any informational architecture set up to promote transparency must contend with the way systems relate to the real world, tracking the shifting influences of technology, governance and economics, and the public and private actors embedded within them. This requires an articulation of how to access information as well as what information to access.

Answering the ‘how’ question of transparency may require addressing the conditions around who executes it, with what motives and purposes. The historical context for the establishment of transparency as an ‘unalloyed good’,7 according to Hong, is primarily connected to ‘the political shifts in relations of trust and communication across the branches of government and media industries, such as a more adversarial model of journalism and the rise of public advocacy groups.’8

In fact, the popularisation of the term transparency is often credited to the Berlin-based NGO Transparency International. However, their consideration of naming alternatives such as ‘Honesty International’ and ‘Integrity International’ during their branding process demonstrates the difficulty of locating the principles underpinning an ideal of society that empowers citizens through openness.

This indicates that transparency may be better posed as a question of how to establish a relationship based on trust, rather than how to achieve visibility. The tension between visibility and accountability implies an expectation that these disclosures may be made without honest and sincere intentions. In considering how ADM systems – taken as expansive structures including humans and governance mechanisms – can be targeted, it will help to clarify the fundamental values we wish them to embody through their actions and activities.

Transparency as ‘publicness’

With this in mind, a way of conceptualising the objective of transparency in the context of algorithms and AI can be expressed as ‘building publicness’. Publicness is itself an abstraction for which there are competing definitions with distinct theoretical commitments in public administration discourses, but it can be broadly taken to denote the set of values that create separateness from the private sphere through an ‘attachment to public sector values… due process, accountability, and welfare provision’.

There is a danger, in deploying new technological systems, of thinking they require entirely new governance mechanisms (and, in fact, a tendency to rely on new mechanisms, as Kaminski points out). However, the positioning of ADM systems within longstanding structures of public service delivery provides a solid framework for evaluating and regulating them. Importantly, these existing mechanisms are more strongly founded in the philosophy and ethics of public accountability.

ADM systems are instruments of public service delivery and should be subject to the same scrutiny that applies to all public policy and decision-making. Given the deference to private-sector expertise observed across governmental digital transformation agendas (again highlighted by Kaminski), it seems the most effective way of ensuring ADM systems reflect ethical values is to ground algorithmic transparency in the standards of responsibility demanded of public services.

This line of thinking finds further depth in discussions of what constitutes information that is in the public domain, which according to ICO guidance is information that ‘is realistically accessible to a member of the general public at the time of the request. It must be available in practice, not just in theory.’ The current lack of practically available information in the public domain translates into a generalised opacity over the implementation of ADM systems in government, as well as a lack of expert analysis and informed public debate.

Taking these perspectives forward, we propose that meaningful transparency, or transparency as a form of ‘publicness’, can be conceptualised as providing the public with the necessary tools and information to assess and interact with ADM systems as public services. In practical terms this means amplifying existing mechanisms that keep public services in check and making information available to the public with the authentic intention of engaging them in decision-making processes.

Review of transparency mechanisms

In the accompanying document [119KB PDF], we have reviewed five existing transparency mechanisms and outlined where they need to be mandated or strengthened, to varying degrees, in order to establish strong public accountability and responsible engagement with ADM systems.

The purpose of identifying these mechanisms is to encourage greater standardisation of transparency practices and generate a holistic view of ADM systems. In the review we have focused on UK-specific documents and processes; however, we hope that our identification of these may offer insights for other national contexts and help develop a more nuanced understanding of transparency as it relates to ADM systems.

This review of transparency mechanisms can be understood as a preliminary outline of the types of information that should be contained within a register to bring the overarching view required to evaluate ADM systems in an expanded, sociotechnical way.

Towards a public register of ADM systems

The idea of a mandatory public register of ADM systems is emerging as a potential transparency mechanism that could be put into practice in the UK or abroad. A register could facilitate the organisation of different transparency targets under one resource that is easy to access and understand. If a register is established, we propose that it must be maintained as a living document that:

  • provides a means of retrieving and accessing the types of transparency documents produced within different domains and governance frameworks
  • is reviewed and updated regularly, to offer a continuous view of systems through their lifecycles (a sketch of what such an entry might contain follows this list).
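As a thought experiment, the sketch below shows the kind of minimal entry such a register might hold, assuming hypothetical field names of our own invention. It is intended only to illustrate the two properties above: links out to transparency documents produced within other domains and governance frameworks, and explicit review dates that keep each entry ‘living’ across a system’s lifecycle.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical sketch of a single entry in a public ADM register.
# All field names are illustrative assumptions, not a proposed standard.
@dataclass
class RegisterEntry:
    system_name: str
    public_body: str       # the organisation deploying the system
    purpose: str           # the decision or service the system supports
    lifecycle_stage: str   # e.g. 'pilot', 'deployed', 'retired'
    # Label -> URL links to transparency documents produced elsewhere
    # (DPIAs, contracts, equality impact assessments, model cards).
    documents: dict[str, str] = field(default_factory=dict)
    last_reviewed: date | None = None

    def is_stale(self, today: date, max_age_days: int = 365) -> bool:
        """Flag entries overdue for review, in support of the requirement
        that the register be maintained as a living document."""
        if self.last_reviewed is None:
            return True
        return (today - self.last_reviewed).days > max_age_days
```

On a structure like this, a researcher or civil-society auditor could filter the register for stale or undocumented entries, turning the review obligation into something checkable rather than rhetorical.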

To support this view of transparency and reinforce the ‘publicness’ of ADM systems, the Ada Lovelace Institute is working with Dr Michael Veale, UCL Faculty of Laws, to produce a transparency tool that could point the way towards a public register. The goal of this tool is to facilitate a common understanding of the salient features of ADM systems to inform mandatory disclosure schemes.

More immediately, on Thursday 29 October, we are hosting a discussion with international experts in public administration algorithms and algorithmic registers to explore other approaches to transparency of ADM systems. Find out more and register here.

Taking this work forward, it will be important to recognise that any mandatory reporting mechanism will need to constitute more than a base level of reporting. It will also need to sustain a type of transparency that can enable meaningful evaluation and facilitate democratic processes.

Image credit: Mika Baumeister

  1. See Data Justice Lab’s ‘Data Scores as Governance’ project. Available at: https://datajusticelab.org/data-scores-as-governance or TBIJ’s ‘Government Data systems’. Available at: https://www.thebureauinvestigates.com/stories/2019-05-08/algorithms-government-it-systems
  2. See for example Algorithm Watch and Access Now’s recent joint call for a public register in response to the EU White Paper Consultation. Available at: https://www.accessnow.org/cms/assets/uploads/2020/06/EU-white-paper-consultation_Access_Now_June2020.pdf. This is following Algorithm Watch’s earlier submission to the UN Special Rapporteur on extreme poverty and human rights. Available at: https://www.ohchr.org/Documents/Issues/Poverty/DigitalTechnology/AlgorithmWatch.pdf
  3. See for example Luciano Floridi and Josh Cowls’s proposal of explicability, ‘understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and in the ethical sense of accountability (as an answer to the question: ‘who is responsible for the way it works?’).’ Available at: https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/6
  4. Sun-ha Hong (2020). Technologies of Speculation: the Limits of Knowledge in a Data-Driven Society. New York: NYU Press. p.46
  5. The Macpherson Review defined BCMs as ‘a mechanism for analysing or investigating some aspect of the real world. It is usually a quantitative method, system or approach which applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.’ – Nick Macpherson (2013). Review of quality assurance of Government analytical models. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/206946/review_of_qa_of_govt_analytical_models_final_report_040313.pdf
  6. As exemplified by the risk-based approaches taken in the GDPR and the recent European Commission AI White Paper, available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
  7. Hong (2020) Technologies of Speculation, p.43
  8. Hong (2020) Technologies of Speculation, p.43
