Virtual event

What forms of mandatory reporting can help achieve public-sector algorithmic accountability?

A look at transparency mechanisms that should be in place to enable us to scrutinise and challenge algorithmic decision-making systems

Date and time
1:00pm – 2:00pm, 29 October 2020 (GMT)

This post summarises the key points of the discussion and debate from the webinar, which you can watch in full below:

This video is embedded with YouTube’s ‘privacy-enhanced mode’ enabled, although playing it may still add cookies. Read our Privacy policy and Digital best practice for more on how we use digital tools and data.

Due to issues with the recording, the introductory presentation from Matthias Spielkamp is missing. You can read a summary of what he said below and his slides are available here.

Over the last few months, high-profile cases of algorithmic decision-making (ADM) systems – such as the Home Office visa streaming tool and Ofqual’s A-level grading algorithm, which were both abandoned, and the ‘Most Serious Violence’ programme under consideration by the West Midlands Police Ethics Board – have featured in the headlines. And research revealing the extensive application of predictive analytics in public services across the UK is bringing into focus the increasing adoption of technological solutions to governance problems.

However, there remains a persistent, systemic deficit in public understanding about where and how these systems are used, pointing to a fundamental issue of transparency.

In this event, we look at the transparency mechanisms that should be in place to enable us to scrutinise and challenge ADM systems in use in central and local government, and the process of their deployment.

Among the proposals to achieve a baseline level of transparency is the possibility of instituting a public register as a mechanism for mandatory reporting of ADM systems. The proposal has been raised internationally, at a national and European level, and is now being tested in the cities of Amsterdam and Helsinki.

But while various options are considered for increasing algorithmic accountability in public-sector ADM systems in the UK, it is important to ask: what does effective mandatory reporting look like?

The Ada Lovelace Institute and international experts in public-administration algorithms and algorithmic registers surface key concerns and relate them to the governance landscapes of different national contexts. In this event we ask:

  • How do we ensure that information on ADM systems is easily accessible and that various actors, from policymakers to the broader public, can meaningfully engage with it?
  • What are the pros and cons of setting up a register? How should it be structured? Is it the best way to enforce mandatory reporting? How will different audiences be able to mobilise the information it collects?
  • How do we ensure that, whichever transparency requirement is in place, it leads to reliable accountability mechanisms?

Chair

  • Imogen Parker

    Associate Director (Society, justice & public services)

Speakers

  • Soizic Penicaud

    Etalab, France
  • Meeri Haataja

    Saidot, Finland
  • Natalia Domagala

    Head of Data Ethics, UK Cabinet Office
  • Matthias Spielkamp

    Algorithm Watch

IMOGEN:

The topic of today’s event – looking at practical tools and the policy development that we need around transparency – feels particularly timely.

In the UK context, it is great to see a direct reference to transparency in the Government’s newly published National Data Strategy, and of course, transparency has long been a key term in the data ethics debate. The word crops up in pretty much every set of data ethics principles we have seen in recent years and, in the public sector, there is a baseline presumption across Europe that there should be a level of scrutiny and democratic accountability over algorithmic decision-making systems.

Despite that, I think it is fair to say that we still lack a systematic understanding of what tools are in deployment, let alone answers to the crucial questions about their social impacts and about why transparency around algorithmic decisions is hard to achieve. This is not just because the systems in question are new or technical products, but first because we face a real challenge in articulating the object we are trying to be transparent about.

We have words like algorithm, automated decision systems, predictive analytics, data analytics and AI in the general sense, which may cover everything from an automated phone system in a GP practice to a wholesale transformation of a local authority’s data practices. Like the word ‘algorithm’, the word ‘transparency’ can cover many things. People’s desire for transparency may be motivated by an interest in seeing a system’s code, in scrutinising and understanding front-line decision-making, or in the decisions made by the organisations involved.

A third issue at stake is how we make transparency meaningful. By that I mean ensuring that a person can understand the algorithmic decision-making systems in use, that their use matters to them, and that transparency contributes to or facilitates some form of accountability. All of us are aware of transparency mechanisms that have not achieved these objectives, and this is a challenge to bear in mind. Alongside this, it is important to consider how we square the necessity for transparency with the limited resources we have to digest information and discuss it critically.

I, for one, am sure that the panellists welcome what you may see as a growing consensus in parts of Europe: that we need more systematic and proactive forms of transparency when it comes to public sector algorithms. So, I am really excited to have these speakers with us today, who can speak to or are grappling with some practical initiatives about how we create mechanisms for algorithmic transparency.

I would like to turn to Matthias from Algorithm Watch. You have argued for the institution of a public register of algorithmic decision-making systems. How did the idea come up and how does it sit with the work you have been doing at the national and international level?

MATTHIAS:

Slides from Matthias’ talk are available here.

The idea of having a public register for automated decision-making in the public sector started with a stock-taking exercise: a review of automated decision-making systems in Europe. We first looked at different European countries and tried to find out where automated decision-making systems are in use. We thought that the term ‘automated decision-making’ better captures the challenge at hand, which is about the impact socio-technical systems have on people and societies, not so much about technology. For example, when a meaningful decision about an individual’s life, or a decision that has an impact on the public good and society in general, is made through a system that includes algorithmic analysis and the use of big data, then we describe the system as performing automated decision-making. It does not have to be a live big-data analysis, but if the algorithms are trained on data to come up with a decision-making model, then our definition encompasses it.

In our work, we found that there are a lot of these systems in use all over Europe. I suppose that many of the people in the audience today are from the UK, and the UK has a tradition of automating public-sector services. Many people, however, are surprised to see that, for instance, in Germany, public services are being digitised and automated only very slowly. This presents us with an opportunity, because there have been many mistakes in various countries, such as the United Kingdom and the Nordic countries, which could still be avoided by states that lag slightly behind in this automation process.

Yesterday we released a second edition of the same review, adding a couple of country case studies and keeping the UK, although it is no longer part of the EU.

Clearly, the question we are trying to address is: how do we achieve transparency? We have a couple of policy recommendations, which were developed from the review. The first is the one you are familiar with – to increase the transparency of ADM systems by establishing a register of the systems used in the public sector. The second is to introduce legally binding frameworks to support and enable public research, as access to datasets is necessary to know how a system works, and then to create a meaningful accountability framework.

In other words, there needs to be a legal obligation for those responsible for an ADM system to disclose and document the purpose of the system, an explanation of the model, and information on who developed the system.
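
To make the shape of such a disclosure obligation concrete, here is a minimal sketch of what a single register entry covering those fields might look like. It is purely illustrative – all names are hypothetical and not drawn from any existing register:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ADMDisclosure:
    """Hypothetical minimal record for one entry in a public ADM register."""
    system_name: str            # what the system is called
    purpose: str                # the goal the system is designed to pursue
    model_explanation: str      # plain-language account of how the model works
    developer: str              # who built the system (in-house team or vendor)
    deploying_agency: str       # which public body uses it
    moral_constraints: List[str] = field(default_factory=list)  # constraints the design must respect

# Example entry with invented values
entry = ADMDisclosure(
    system_name="Benefit allocation support tool",
    purpose="Prioritise applications for human case review",
    model_explanation="Rule-based scoring over declared income fields",
    developer="In-house data science team",
    deploying_agency="Example city council",
    moral_constraints=["must not use protected attributes"],
)
```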

With regard to this, we are working with Michele Loi at the University of Zurich in Switzerland, who co-authored a paper that references the notion of ‘design publicity’. ‘Design publicity’ means that, in the first instance, we need information about the goal the algorithm is designed to pursue and the moral constraints it is designed to respect. In the second instance, we need performance transparency, meaning that we need to know how a goal is translated into a machine-learning problem and to be able to establish, through a conformity assessment, whether this translation complies with the set moral constraints and what decisions are taken by consistently applying the same algorithm in multiple scenarios.

IMOGEN:

I will now turn to Soizic from Etalab in France. One of your functions is to support central and local government on transparency in decision-making, so tell us more about the French landscape, the approach to governance and, in particular, your role there.

SOIZIC:

As Imogen mentioned, I work at Etalab, the French Government taskforce for open data, open source and data policy. We are not a regulatory body but a government body, and our work on transparency stems from our work on open data and open source. One of our aims is, indeed, helping agencies implement the legal framework that applies to the transparency of public-sector algorithms. The framework is grounded in public administrative law and the right of access to administrative information, and requires an administrative agency that uses an algorithmic system to make a decision about an individual or a legal entity to disclose certain information about that system. In particular, the legal framework requires every agency to inventory its main algorithmic treatments, disclose the rules and criteria used by the algorithm, and explain the way it is involved in the decision-making process.

We can see that this requirement alludes to the idea of a public registry. Indeed, a few weeks after Amsterdam and Helsinki’s registers went live, the French city of Nantes published the first French public registry of algorithmic decision-making systems. Notably, it showcases only two algorithms so far, and they are not AI but rule-based benefit-allocation algorithms.

One thing to say about Etalab is that we work with a variety of other government agencies and institutional actors involved in this space, including the regulatory body in charge of access to information, the regulatory body in charge of data protection and the ombudsman on discrimination. Indeed, among the difficulties of dealing with the topic of algorithmic transparency in public administration is the fact that it requires a lot of co-ordination: it is often difficult to pinpoint who we should work with in local and central government. Due to Etalab’s background in open data and open source, we have access to the people who are interested in those topics. Furthermore, we try to get in touch with data protection officers in government agencies, but it can be difficult to identify who to address, and a lot depends on how interested particular individuals are in the topic.

With regard to this, it is worth mentioning that working with local and central governments on this issue inevitably means choosing what to focus on. The field we are in is fast-moving and agencies have few resources to allocate to this issue. This is obviously not ideal, but the reality is that we need to acknowledge the circumstances and prioritise where we work and on what.

For instance, one of the main questions we get asked by the agencies we interact with is: which algorithmic treatments should we focus on? The risk is that agencies focus on the algorithms that are easier to explain – the ones that are open – and spend energy on tasks that do not help us tackle the more harmful or dangerous tools in place.

So, what we are trying to do now is to work with volunteer public servants to develop practical guides and tools that show examples of what we are looking for. For instance, we have been working on the topic of public registries with a few people who are trying to implement them in their agencies, and on establishing a list of potential algorithmic treatments to record, so that agencies can go through it and verify whether they are using any of the listed algorithms.

Another important thing to mention concerns the difficulties we have encountered due to the legal framework and its limits – something that Algorithm Watch’s Automating Society report also rightly mentions. One of the main limitations is its exceptions for algorithms pertaining to national security, fraud control and other areas of government that could be sensitive. The problem, obviously, is that these areas of government are often among the ones where the most dangerous and harmful algorithms are used.

One last thing I wanted to bring attention to is that, since our legal framework is relatively open, there is a risk that we only focus on making algorithms transparent after they are implemented. We also have to think about how to use tools, including registries, to promote transparency during the conception of algorithms, so that we do not lose track of the main goal we are trying to achieve, which is to protect citizens’ rights.

IMOGEN:

Now to Meeri. Could you talk to us about how things are evolving in Amsterdam and Helsinki, the city-focused approach you have taken and, in particular, Saidot’s role in the partnership?

MEERI:

I am CEO and co-founder of Saidot and, from the start of our activities, it has been clear to us that algorithmic transparency is a foundational principle and will be a very important factor in taking AI governance forward.

Soizic mentioned that public administrations have limited resources, and that is one of the driving factors of our register project, which consists of putting together a scalable and adaptable register platform that can be used by government agencies and private companies to collect information on algorithmic systems and share it flexibly with different stakeholder groups.

I wanted to start my intervention by showing an article that captures the sense of the work we have been doing. I really wanted to thank Floridi for this and to present the piece as a reference for everyone.

Our register-related work started from a common methodology applied in two cities, Amsterdam and Helsinki. Both published their registers about a month ago and everyone is welcome to visit them.

We conceived them as libraries of different algorithmic or AI systems – by the way, Amsterdam uses the word ‘algorithm’ while Helsinki uses ‘AI’ – and we look forward to learning from practice and feedback about where we draw the line on which systems we should bring into the register.

With this project, we wanted to offer an overview of the cities and collect and report on the systems that are important and influence the everyday lives of citizens.

When you access a register, you can click on the systems reported and get a sense of the whole: the systematic information we are providing and the application cases in which algorithmic systems and AI are used.

There are differences between the two registers as well. For example, the Amsterdam register links to systems’ source code on GitHub. This cannot be the case for all systems, as many are provided by third-party organisations; in Helsinki, for example, all the registered systems are provided by private companies.

During the process, we conducted research and interviews with clients and stakeholders in order to achieve a model that serves the wider public, meaning not only tech experts but also those who do not know much about technology and are not necessarily interested in it. Basically, what we have achieved is a layered model, where you can find more information based on your interest. Crucial to this process has been the driving input from the cities, which are looking for both transparency and citizen participation.

Clearly, in their current forms, the registers are only a teaser that we hope to develop and expand further, through feedback and on the basis of repeatable design patterns.

If I have to say where I see this process going, it is towards the development of the user interface and the back end, and towards diversification that can accommodate different metadata models suitable for different national contexts, cities and specific domains, such as education, which may have particular requirements.

The feedback so far has been very positive. People see this project as a major step forward for citizens’ trust in their governments and look forward to seeing where we go next with the Amsterdam and Helsinki registers and, possibly, with other cities.

IMOGEN:

Lastly, to Natalia. I am sure many organisations were pleased to see a commitment in the National Data Strategy around the public-sector use of algorithms, which is still at the consultation stage. We are not asking you to present a blueprint or to give any government new headlines, but it would be great to hear where you and the UK Government are on this: what you expect; whether there are certain countries you are looking to for models; the research you may do; or where you are on the journey towards some form of public-sector algorithmic transparency.

NATALIA:

I would like to start with an attempt to situate algorithmic transparency in the UK within its broader institutional context. Algorithmic transparency is a multidisciplinary area at the intersection of open data and transparency policies and data policy. Similarly to what was said earlier about the French governance landscape, our team comes from the wider transparency and openness movement.

Transparency and open data are nothing new in the UK. We have a long-standing tradition of both, and long-standing work with open-by-default policies, which apply to open data across all public-sector departments. This approach supports the benefits of transparency across a number of outcomes.

First, accountability, which drives trust in decision-making by being transparent about the evidence base for decisions and the deliberation behind policy development. This aligns with the reasoning for increasing algorithmic transparency.

The second aspect I want to stress is efficiency. If we release data and models in the open, it is easier to spot duplication and systemic issues – errors that can be reviewed and addressed collectively. Again, this is very much applicable to algorithmic models as well.

The third aspect concerns economic outcomes. By making data freely available, companies and public-sector agencies can use it as the basis for innovative products and other services.

So, the policy foundations for algorithmic transparency are here already. What we are doing now is building on this foundation, scoping and developing further work on it.

We have the Data Ethics Framework, launched in 2016 and refreshed in 2020. We have the Centre for Data Ethics and Innovation and their excellent research, and proactive transparency in the use of algorithms can be an organic extension of these activities.

The commitment we have in the National Data Strategy focuses on exploring the development of appropriate and effective mechanisms to deliver transparency on the use of algorithms. Governments around the world are looking at this, particularly in relation to the next phase of artificial intelligence development. Enhancing the capacity to assess and audit decisions made by algorithms should be an essential and integral part of scaling AI deployment in the public sector.

Currently, there is no standardised global model for algorithmic transparency, so the work on scoping the field should be composed of two key parts: the technical aspect and the organisational aspect.

We are looking at models from other countries and regions, and I am particularly interested in the following five issues.

First, the format and technical specifications. What kind of models of transparency tools exist? Are they isolated? Are they part of the wider impact assessment system, as in the case of Canada? What is the thought process behind each particular model?

The second issue is accessibility to non-expert audiences and any follow-up activities: can the public, without prior knowledge of data or algorithms, understand the information that is provided? I think this has been done really well in the case of Helsinki and Amsterdam, and also in Canada. The Helsinki register is clear, the website is simple and does not use expert language, and there are visuals and examples.

As a third point, what are the enforcement mechanisms? Again, in Canada, they have the directive on automated decision-making, and it is really interesting to hear about the legal framework established in France. In the UK, we are also looking into this.

Fourth, the wider organisational structure and legacy within which each transparency model functions is really important as well. In Canada, the directive applies to automated decision systems developed after a certain cut-off date. I wonder how this can work in other countries. How do we make sure that the measures we implement are really effective and cover as many algorithmic decision-making systems as possible?

Last, but not least, we need to think about accountability mechanisms. I really cannot stress this enough. What can citizens do with the published information? What kind of processes are in place to ensure that the citizens can challenge a process? As you have all just said, transparency on its own is not enough.

Q&A session

IMOGEN:

I have a question on a problem that we (at the Ada Lovelace Institute) have been grappling with: how do we define the object we want to bring transparency to? Is there a common set of terms for the sorts of things we are looking at? From a research perspective, we may want to spot trends. I wonder if any of the speakers can reflect on how that works in your own country contexts or in what you are proposing. Does your process start from a descriptive account of an algorithm, or are there some developing, underlying common frameworks that help classify what you are unpicking? For instance, Meeri, what exactly are you recording when you ask a public office to describe an algorithm? Is that description mainly free text, or do you have a typology or an underlying framework that helps you understand and distinguish different types of evolving practices?

MEERI:

We call what we have produced a metadata model. It includes documentation – information or images regarding the architecture of a system – but it is also a classification. It is a never-ending job of structuring information and we are learning from feedback. One important aspect is that the back end of the register needs to support the kind of complete transparency that is relevant to interested stakeholders and owners of the system. For example, from the perspective of accountability, what you see on the public website is the contact information of the organisations and departments in charge of a system. Then there is a contact for the person who holds a specific role of responsibility with regard to the system, and then there are the names of the suppliers. This is what you see on the website, and the underlying model reflects these entries plus the ones that have not yet been published for each specific system. Choosing which fields to register and which to publish is something we have to manage all the time, and we have decided through collecting feedback from experts and users. For example, cyber-security-related information cannot always be made public to everyone. What we need is a flexible model that can be customised.
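
As an illustration of a back end that records more fields than it publishes, here is a minimal sketch with per-field visibility. This is a hedged sketch, not Saidot’s actual schema; every name in it is hypothetical:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List

class Visibility(Enum):
    PUBLIC = "public"          # shown on the register website
    RESTRICTED = "restricted"  # recorded, but visible only to system owners and auditors

@dataclass
class RegisterField:
    name: str
    value: str
    visibility: Visibility

@dataclass
class RegisterEntry:
    system_name: str
    fields: List[RegisterField]

    def public_view(self) -> Dict[str, str]:
        """Return only the fields cleared for publication."""
        return {f.name: f.value for f in self.fields if f.visibility is Visibility.PUBLIC}

# Example entry with invented values
entry = RegisterEntry(
    system_name="Parking chatbot",
    fields=[
        RegisterField("owner_department", "City transport division", Visibility.PUBLIC),
        RegisterField("responsible_contact", "named official", Visibility.PUBLIC),
        RegisterField("supplier", "Example Vendor Ltd", Visibility.PUBLIC),
        RegisterField("security_architecture", "internal document reference", Visibility.RESTRICTED),
    ],
)
print(entry.public_view())  # the restricted field does not appear
```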

IMOGEN:

There is a question in the chat that relates to the previous one: what are the strengths and weaknesses of focusing on a local-level approach versus a national-level approach? Soizic, you engage with both levels; would you like to share your thoughts?

SOIZIC:

In my opinion, the tools for governance and transparency can stay the same for both levels. What will change are the types of algorithms used locally and centrally. Sometimes the most intrusive systems operate at the local level. In terms of work on public registers, we have been working with public servants from both the local and the national levels at the same time. Being a central government agency, we may have had a bias towards focusing on central-level transparency, but we need to consider that the local level, especially where there is a tradition of public participation, may actually be the place where these discussions can have more impact.

MATTHIAS:

I agree with you, Soizic, but, at the same time, as I showed in my quick intro, we think there is value in advancing European-wide legislation on public-administration algorithmic transparency. I have no idea at the moment how realistic that is, given the priorities the Commission has set, but it would be valuable because, in many cases, we are talking not only about small and medium-sized companies that provide services, but also about global companies. If they have the ability to deal in different ways with different civil servants, then this puts them in a position of power. If we had European-wide legislation, there would be a clear directive to follow.

NATALIA:

Yes, I just wanted to add a few thoughts. Obviously, I come to this from a national-level perspective, but I think that the biggest difference between the local and the central-level discourses is actually in terms of public awareness and how engaged the public is in the whole transparency process. At the local level, residents are invested in the subject and more likely to see the use of algorithms in their everyday lives. For example, if we look at the parking chatbot in Helsinki, I assume that most adult residents with cars will have used it, contributing to an increasing understanding of how AI is deployed. This can make them more willing to engage with algorithmic transparency measures and provide feedback.

MEERI:

On the one hand, we are interested in collecting and displaying the same type of information at both local and central government levels; the same information is relevant for private providers, because these companies serve public agencies at both levels. On the other hand, I think that local governments are in a position to drive transparency initiatives as they are actively implementing AI systems. Also, the issue of public trust is very concrete and tangible for them. However, again, the model we are building is scalable and, so far, I have not come across anything so unique that the same approach would not be applicable in a different and bigger context. Then, obviously, there is an important role for governments to play in showing examples of best practice.

IMOGEN:

Thank you. There is a question specifically for Natalia, asking what the government is doing to encourage or ensure that public bodies comply with the Data Ethics Framework. This is a wider question we can ask the other panellists as well: to what extent are actions around transparency or impact assessments to be thought of within an ethical framework? Would this be strong enough, or do we need regulation? Perhaps Natalia can start and, if the others wish to come in, please do so.

NATALIA:

Thank you very much. A great question. I am always happy to talk about the Data Ethics Framework, as we recently refreshed it. First of all, the Data Ethics Framework does not substitute for existing legislation. We have the Data Protection Act and the Equality Act, for instance, and the Data Ethics Framework is mainly a tool to enable our data scientists and data policymakers to innovate in a responsible way – to ensure that they are able to identify ethical considerations and mitigate them within their projects. In terms of the work we are doing now to ensure that the framework is used, there are a few strands. First of all, we are trying to embed the framework in as many Government processes as possible. For example, we have recently worked with the Crown Commercial Service on AI procurement systems, so suppliers that are bidding may be required to demonstrate they comply with the Data Ethics Framework. We are looking at challenges and needs in data ethics implementation and are also scoping work on how we can increase data ethics skills in the public sector. And this will naturally connect to whatever we end up doing on algorithmic transparency as well.

IMOGEN:

Another question has come in through the Q&A chat, around the procurement of technology. Often the systems in use are not developed purely in the public domain. I wonder if there are examples from your respective countries of where the issue of procurement sits in the transparency discourse.

MATTHIAS:

Mozilla and AI Now have put out a paper and a response to the European Commission’s consultation, asking for a change in procurement practices, and groups like ourselves and the Bureau of Investigative Journalism have been looking at the use of data in the public sector and procurement. We think this is a good approach to adopt. Changing procurement rules and implementing transparency requirements is a huge lever. But it is still a long way away and a long-term goal to pursue. From what I understand, there are very limited chances of revising the procurement directive at the moment.

MEERI:

I will quickly comment on this. Amsterdam started working on procurement a year ago, when it was buying data-intensive technology from third parties. These procurement terms for algorithmic systems are now available for anyone, and there is a very good document explaining the rationale behind them. I think this kind of transparency is very important. It is really the role of every person procuring AI in government to recognise that they are in a position to push for transparency through procurement, as this can have a major impact on the whole private sector. So far, vendor collaboration on AI transparency – in Helsinki, for example – has been smooth, with no significant issues. Clearly, this may be an opportunity for the vendors as well, and releasing this kind of information will be useful for other cities and government agencies.
