
Algorithms in social media: realistic routes to regulatory inspection

Establishing systems, powers and capabilities to scrutinise algorithms and their impact.

Jenny Brennan

3 November 2020

Reading time: 6 minutes

Inspecting algorithms in social media platforms is a joint briefing from Reset and the Ada Lovelace Institute, giving insights and recommendations towards a practical route forward for regulatory inspection of algorithms in social media platforms, sometimes known as algorithm audit.

The briefing is primarily aimed at policymakers, presenting insights from an expert workshop alongside our corresponding recommendations, with a focus on the UK and European Union. It should also be helpful for regulators and tech companies thinking about methods, skills and capacity for inspecting algorithmic systems.

As algorithms are designed and deployed at unprecedented scale and speed, there is a pressing need for regulators to keep pace: to ensure public oversight of algorithms and to challenge asymmetries of power between those deploying algorithmic systems and their users.

There have been widespread calls to establish systems, powers and capabilities to scrutinise algorithms and their impact – from mathematician Cathy O’Neil, who called for algorithm audits in Weapons of Math Destruction, to the UK Centre for Data Ethics and Innovation report on online targeting, which recommends that the UK Government’s new online harms regulator should have ‘information-gathering powers’, including ‘the power to give independent experts secure access to platform data to undertake audits’. Similarly, the EU’s Digital Services Act public consultation raises questions around auditing capacity for regulatory authorities.

While there is a range of internal inspection practice for compliance within tech companies, there is not yet a developed methodology for algorithm inspection by regulators. There is uncertainty – from policymakers, the tech industry and civil society – as to how regulatory inspection would work in practice.

The Ada Lovelace Institute is convening a series of expert workshops with partners in different domains to identify realistic routes to regulatory inspection, and the challenges to them. In addition to furthering the policy discussions in each area, we hope to identify commonalities to enable a broad framework for the powers, capacity, skills and methods needed for regulatory inspection of algorithmic systems.

Our first workshop focused on regulatory inspection of algorithms in social media platforms, as a pressing area of policy interest in the UK and Europe. On 6 August 2020, the Ada Lovelace Institute and Reset brought together a group of international, interdisciplinary experts to identify the technical and policy requirements for algorithm inspection of social media platforms, using the case study of COVID-19 misinformation. This builds on previous work by the Ada Lovelace Institute on methods for inspecting algorithmic systems, and by Reset on digital information market governance and the spread of information. The workshop identified three key insights, from which we’ve drawn three recommendations:

Insights from the expert workshop

  1. The current model of self-regulation is insufficient, and cements information asymmetry between social media platforms and the public. Currently, technology companies can launch, publicise and even reverse misinformation interventions at their discretion. External efforts document troubling gaps between companies’ publicised interventions and the realities of COVID-19 misinformation on their platforms, but public authorities and other relevant third parties cannot access the evidence needed to analyse harms related to these platforms.
  2. An algorithm inspection will require detailed evidence on companies’ policies, processes and outcomes, and new methods of access to evidence. Workshop participants identified the types of evidence – on policy, process and outcomes – they would need to analyse harms occurring on the platform and the platform’s expected behaviour in response to harms, and to verify platform claims about the role of algorithms in mitigating or increasing harms. They also suggested methods to access this evidence, from interviews with company staff to an inspector-specific API, many of which required some participation from technology companies.
  3. Algorithm inspection brings with it significant opportunity but will require careful design to deliver on its potential. Governments must develop and enact a public policy agenda that regulates the digital marketplace, and aligns its interests with those of democratic and social integrity. At the same time, audit regimes must be proportional to the types of companies under review, and governments should anticipate and mitigate associated risks, including the potential for abuse.

Recommendations for policymakers

The regulator responsible will need:

  1. Compulsory audit and inspection powers. An independent regulator should be empowered and resourced to enforce these obligations. This governance framework can only work on one condition: transparency between the platforms and the independent regulator. The regulator should have the power to demand the granular evidence necessary to fulfil its supervisory tasks, and enforcement powers for when platforms do not provide that information in a timely manner.
  2. Information-gathering powers that extend to evidence on policy, process and outcomes. The regulator must have the authority to request evidence on a social media platform’s policy, process and outcomes, and technology companies will need the capacity to respond to these requests, which could include methods such as interviews, APIs or disclosure of internal policy documentation.
  3. Powers to access and engage third-party expertise. An algorithm inspection requires a multidisciplinary skill set, although relevant expertise for any given inspection will vary based on context and industry. While the regulator should have some skills in-house, it will need the ability to access and instruct third-party expertise. This could include access by academics to conduct research in the public interest.

Regulatory inspection of algorithmic systems

This research forms part of the Ada Lovelace Institute’s wider work on regulatory inspection of algorithmic systems as a vital component of algorithmic accountability – necessary for a world where data and AI work for people and society.

With the UK’s online harms legislation and the EU’s Digital Services Act, social media platforms are a fast-growing area for regulatory inspection, with pressing policy developments. What we learn about how to do it has consequences for regulatory inspection of algorithms as a whole – how it is done, and whether it works.

We are therefore exploring the challenge of inspecting algorithms in different domains. With the Inclusive Competition Forum we looked at the role of algorithm inspection in competition cases, considering search, algorithmic pricing and collusion. We are also collaborating with the Institute for the Future of Work to examine regulatory inspection for equalities impacts.

We expect to report on approaches to regulatory inspection of algorithmic systems across domains later in the year. If you are interested in this work, you can keep up to date via our website, Twitter or our newsletter.

Image credit: fotosipsak
