
Ethics and accountability in practice

Developing tools, mechanisms and processes that ensure AI systems work for people and society.

The context

From education to healthcare, finances to employment, AI and data-driven technologies are increasingly used to make important decisions about our everyday lives. As these technologies become more ubiquitous throughout society, there is a pressing need to develop and implement new mechanisms, processes and tools to ensure they are developed with proper oversight and scrutiny.

In the last few years, public and private-sector organisations have invested substantial resources in developing a large number of high-level AI ethics principles designed to address the risks AI systems may pose. But it remains to be seen how these principles can be translated into operational practices that developers, policymakers and others can implement.

‘Accountability’ is a common principle in the AI ethics discourse, and can be defined both as a normative virtue that developers of AI systems strive for, and as an institutional mechanism for holding a developer of these systems to account.1 Ada’s work focuses on the latter definition, in which accountability refers to a relationship between an ‘actor’ and a ‘forum’. According to this definition, the actor must explain and justify their conduct to the forum, which can pose questions and pass judgment, and the actor may face consequences.

Ada’s approach 

The Ethics and accountability in practice programme seeks to answer several key questions:  

  1. What does meaningful accountability look like in the context of developing and integrating AI systems?
  2. How can we establish incentive structures that address AI ethics?
  3. Who are the different actors and forums when it comes to the research, development, procurement and deployment of AI systems?
  4. What kinds of consequences, methods, tools, mechanisms and governance processes can developers implement to create meaningful accountability with those impacted by these systems?
  5. Are these practices effective? What kinds of externalities and outcomes do they achieve in different contexts? 

This programme uses a range of methods to answer these questions, including surveys, convenings, interviews, ethnography and case studies. We work with a wide range of actors, including industry, practitioners, civil society members, academics, policymakers and regulators. Examples of our work include:

Defining key terms, synthesising existing work and conducting high-level surveys of the field

  • In April 2020, we published Examining the Black Box, a seminal report outlining different methods for assessing algorithmic systems.

Building evidence and case studies

  • We worked with NHSX and several healthcare startups to develop an algorithmic impact assessment framework for firms to use when applying for access to a medical image dataset.  
  • We work with developers of AI systems to consider novel design approaches that support the best interests of people and society. Examples include a project with the BBC that explored how recommendation engines can be designed with public-service values in mind, and a project exploring participatory methods for data stewardship.

Convening experts and building capacity

  • We work with regulators, civil society organisations and members of the public to deepen their understanding of accountability practices. For example, we have advanced novel thinking on frameworks for transparency registers.
  • We’ve convened several workshops to bring together experts from industry, academia and government around key topics. These include a workshop series on the challenges that research ethics committees are grappling with in their reviews of AI and data science research, and a workshop series on regulatory inspection and auditing of AI systems.  

The impact we seek

Our Ethics and accountability in practice programme enables us to achieve our strategic goals in the following ways: 

  • We have anticipated transformative innovations in approaches to algorithmic accountability, publishing the first synthesis of emerging terms and practices, and the first global survey of algorithmic accountability policies in the public sector. 
  • We are rebalancing power over data and AI by developing, trialling and testing accountability mechanisms, ensuring they are designed and deployed in ways that consider their impact on a range of different communities and that their benefits are fairly and equitably distributed.
  • We are promoting sustainable data stewardship by suggesting concrete mechanisms for developing best practices in data stewardship – responsible and trustworthy data governance and practice.  
  • We are interrogating inequalities caused by data and AI by keeping a clear focus on the emergence of bias and discrimination in AI and algorithmic systems, and suggesting sociotechnical mechanisms for identifying and mitigating the impact of AI systems on inequalities. 

Projects


Reports

  • Looking before we leap: Expanding ethical review processes for AI and data science research (Report, 13 December 2022)
  • Examining the Black Box: Identifying common language for algorithm audits and impact assessments (Report, 29 April 2020)


Events

  • Looking before we leap? Ethical review processes for AI and data science research (Virtual event, 5:00pm – 6:15pm, 24 January 2023, GMT)
  • A culture of ethical AI: What steps can organisers of AI conferences take to encourage ethical reflection by the AI research community? (In-person event)


From the Ada blog

  • The Ada Lovelace Institute in 2022: Ada’s Director Carly Kind reflects on the last year and looks ahead to 2023 (Blog, Carly Kind, 20 December 2022)

Footnotes

  1. Bovens, M. (2006). ‘Analysing and Assessing Public Accountability. A Conceptual Framework’. European Governance Papers (EUROGOV) No. C-06-01. Available at: http://www.connex-network.org/eurogov/pdf/egp-connex-C-06-01.pdf.