Press release

New research highlights lessons learnt from first wave of global policy mechanisms to increase algorithmic accountability in the public sector

The first global study analysing the first wave of algorithmic accountability policy for the public sector

24 August 2021

Today, the Ada Lovelace Institute (Ada), AI Now Institute (AI Now) and Open Government
Partnership (OGP) publish the first global study analysing the ‘first wave’ of regulatory and
policy tools that aim to ensure accountability for algorithms used in public services.

Governments around the world are increasingly turning to algorithms to assist in urban
planning, prioritise social care cases, make decisions about welfare entitlements, detect
unemployment fraud, and make predictions about or surveil individuals and groups in
criminal justice and law enforcement settings. The use of algorithms is often seen as a way
to improve public services, increase their efficiency or lower their costs.

But there is growing evidence that the deployment of algorithmic systems by governments to
automate or support decision-making can cause harm, and frequently lacks transparency in
implementation. In recognition of this, regulators, lawmakers and government accountability
organisations have turned to a variety of policy mechanisms to provide accountability.

This new research highlights that, although this is a relatively new area of technology
governance, there is a high level of international activity, with governments using a variety
of policy mechanisms to increase algorithmic accountability. The report explores in detail
the policies taking shape in the public sector. They include:

  1. Principles and guidelines: Non-binding normative guidance, in the form of
    principles and values, for public agencies to follow, e.g. the UK Data Ethics
    Framework.
  2. Prohibitions and moratoria: Banning the use of particular kinds of ‘high-risk’
    algorithmic systems. These have been most prominently applied to facial
    recognition technologies used by law enforcement.
  3. Public transparency: Providing information about algorithmic systems to the
    general public so that individuals or groups can learn that these systems are in use,
    and demand answers and justifications.
  4. Algorithmic impact assessments (AIAs): Mechanisms intended for public
    agencies to better understand, categorise and respond to the potential harms or risks
    posed by the use of algorithmic systems, usually prior to their use.
  5. Audits and regulatory inspection: Mechanisms intended to help create an
    independent account of how algorithmic systems function, and to identify any flaws,
    biases or bugs in the system.
  6. External/independent oversight bodies: Designed to ensure accountability by
    monitoring the actions of public bodies, and making recommendations, sanctions or
    decisions about their use of algorithmic systems.
  7. Rights to hearing and appeal: Procedures intended to provide forums for affected
    individuals or groups to debate or contest particular decisions that affect them.
  8. Procurement conditions: Conditions applied when governments acquire algorithmic
    systems from private vendors, which limit the design and development of an
    algorithmic system (e.g. to ensure that a system considered for procurement is
    transparent and non-discriminatory).

The study also looked into the successes, challenges and limitations of these policy
mechanisms from the perspectives of the actors and institutions directly responsible for their
implementation on the ground. The six lessons it draws out provide a useful guide for
policymakers and industry leaders looking to implement algorithmic accountability policies
effectively. These are:

  1. Clear institutional incentives and binding legal frameworks can support consistent
    and effective implementation of accountability mechanisms, reinforced by
    reputational pressure from media coverage and civil society activism.
  2. Algorithmic accountability policies need to clearly define the objects of
    governance as well as establish shared terminologies across government
    departments.
  3. Setting the appropriate scope of application supports the adoption of policies.
    Existing approaches for determining scope, such as risk-based tiering, will need to
    evolve to prevent under- and over-inclusive application.
  4. Policy mechanisms that focus on transparency must be detailed and audience
    appropriate to underpin accountability.
  5. Public participation supports policies that meet the needs of affected
    communities. Policies should prioritise public participation as a core policy goal,
    supported by appropriate resources and formal public engagement strategies.
  6. Policies benefit from institutional coordination across sectors and levels of
    governance to create consistency in application and leverage diverse expertise.

Carly Kind, Director, Ada Lovelace Institute said:

This new joint report presents the first comprehensive synthesis of an emergent area of law and policy. What is clear from this mapping of the various algorithmic accountability mechanisms being deployed internationally is the growing recognition of the need to consider the social consequences of algorithmic systems.

Drawing on evidence from a wide range of stakeholders closely involved with the implementation of algorithms in the public sector, the report contains important lessons for policymakers and industry aiming to take forward policies that ensure algorithms are used in the best interests of people and society.

Amba Kak, Director of Global Policy and Programs, AI Now Institute said:

As government use of algorithmic systems grows rapidly around the world, so does recognition that there is a need for guardrails around whether, and how, such systems are used. This joint report is the first to take stock of this global policy response aimed at ensuring “algorithmic accountability” in government use of algorithmic systems through audits, impact assessments and more. The report makes the essential leap from theory to practice, by focusing on the actual experiences of those implementing these policy mechanisms and identifying critical gaps and challenges. Lessons from this first wave will ensure a more robust next wave of policies that are effective in holding these systems accountable to the people and contexts they are meant to serve.

Sanjay Pradhan, Chief Executive Officer, Open Government Partnership said:

Advancing transparency and accountability in digital policy tools should be a critical part of a country’s open government agenda. This joint report not only showcases some of the most ambitious and innovative policies on algorithmic transparency but also shows how platforms like the Open Government Partnership can be leveraged to move the needle towards healthy digital governance and accountability.

ENDS

Contact: Hannah Kitcher on 07969 209652 or hkitcher@adalovelaceinstitute.org

Notes

  1. This report presents the findings of a research partnership between the Ada Lovelace
    Institute (Ada), AI Now Institute (AI Now) and Open Government Partnership (OGP).
    It is the first global study to analyse the initial wave of algorithmic accountability
    policy for the public sector across jurisdictions. Its findings are based on:

    1. A database of more than 40 examples of algorithmic accountability policies at various stages of implementation, taken from more than 20 national and local governments.
    2. Semi-structured interviews with decision-makers and members of civil society closely involved with the implementation of algorithmic accountability policies in the UK, Netherlands, France, New Zealand, Canada and Chile, as well as at the local level in Amsterdam City and New York City.
    3. Feedback received at a workshop with members of the Informal Network on Open Algorithms who are implementing commitments focusing on algorithmic accountability through their OGP action plans.
    4. Feedback from participants of a private roundtable at RightsCon 2021 with
      public officials and members of civil society organisations from many of the
      countries reviewed in this report.
    5. A review of existing empirical studies on the implementation of algorithmic accountability policies in various jurisdictions.
  2. The research focused on North American and European policy contexts, where a
    greater number of policies have been implemented, and the authors recognise that it
    therefore misses critical perspectives from the Global South. The organisations
    encourage more research into wider and emerging policy contexts.
  3. The Ada Lovelace Institute (Ada) is an independent research institute and
    deliberative body with a mission to ensure data and AI work for people and society.
    Ada is funded by the Nuffield Foundation, an independent charitable trust with a
    mission to advance social well-being. Ada was established in early 2018, in
    collaboration with the Alan Turing Institute, the Royal Society, the British Academy,
    the Royal Statistical Society, the Wellcome Trust, Luminate, techUK and the Nuffield
    Council on Bioethics. Find out more: Adalovelaceinstitute.org | @adalovelaceinst
  4. For the Ada Lovelace Institute, this research forms part of its wider work on
    algorithmic accountability and the public-sector use of algorithms. It builds on
    existing work on tools for assessing algorithmic systems, mechanisms for meaningful
    transparency around the use of algorithms in the public sector, and active research
    with UK local authorities and government bodies seeking to implement algorithmic
    tools, auditing methods and transparency mechanisms.
  5. The AI Now Institute at New York University is the world’s first research institute
    dedicated to understanding the social implications of AI technologies. AI Now works
    with a broad coalition of stakeholders, including academic researchers, industry, civil
    society, policy makers, and affected communities, to identify and address issues
    raised by the rapid introduction of AI across core social domains. AI Now produces
    interdisciplinary research to help ensure that AI systems are accountable to the
    communities and contexts they are meant to serve, and that they are applied in ways
    that promote justice and equity. The Institute’s current research agenda focuses on
    four core areas: bias and inclusion, rights and liberties, labor and automation, and
    safety and critical infrastructure.
  6. About the Open Government Partnership (OGP): In 2011, government leaders and
    civil society advocates came together to create a unique partnership—one that
    combines these powerful forces to promote accountable, responsive and inclusive
    governance. Seventy-eight countries and a growing number of local
    governments—representing more than two billion people—along with thousands of
    civil society organizations are members of the Open Government Partnership (OGP).
