Ethics and accountability in practice
Developing tools, mechanisms and processes that ensure AI systems work for people and society
The context
From education to healthcare, finance to employment, AI and data-driven technologies are increasingly used to make important decisions about our everyday lives. As these technologies become more widespread throughout society, there is a pressing need to develop and implement new mechanisms, processes and tools to ensure they are built with proper oversight and scrutiny.
In the last few years, public- and private-sector organisations have invested substantial resources in developing high-level AI ethics principles designed to address the risks AI systems may pose. But it remains to be seen how these principles can be translated into operational practices that developers, policymakers and others can implement.
‘Accountability’ is a common principle discussed in the AI ethics discourse, and can be defined both as a normative virtue that developers of AI systems strive for, and as an institutional mechanism for holding a developer of these systems to account.1 Ada’s work focuses on the latter definition, in which accountability refers to a relationship between an ‘actor’ and a ‘forum’. According to this definition of accountability, the actor must explain and justify their conduct to the forum, which can pose questions and pass judgment, and the actor may face consequences.
Ada’s approach
The Ethics and accountability in practice programme seeks to answer several key questions:
- What does meaningful accountability look like in the context of developing and integrating AI systems?
- How can we establish incentive structures that encourage organisations to act on AI ethics principles?
- Who are the different actors and forums when it comes to the research, development, procurement and deployment of AI systems?
- What kinds of consequences, methods, tools, mechanisms and governance processes can developers implement to create meaningful accountability with those impacted by these systems?
- Are these practices effective? What kinds of externalities and outcomes do they achieve in different contexts?
This programme uses a range of methods to answer these questions, including surveys, convenings, interviews, ethnography and case studies. We work with a wide range of actors, including industry, practitioners, civil-society members, academics, policymakers and regulators. Examples of our work include:
Defining key terms, synthesising findings and conducting high-level surveys of the field:
- In March 2020, we published Examining the Black Box, a seminal report outlining different methods for assessing algorithmic systems.
- In collaboration with AI Now and the Open Government Partnership, we co-published a survey of the first wave of algorithmic accountability policy mechanisms in the public sector.
- In December 2021, we published a survey of technical methods for auditing algorithmic systems.
Building evidence and case studies:
- We are working with NHSX and several healthcare startups to develop an algorithmic impact assessment framework for firms to use when applying for access to a medical image dataset.
- We work with developers of AI systems to consider novel design approaches that support the best interests of people and society. Examples include a project with the BBC to explore how recommendation engines can be designed with public-service values in mind, and a project exploring participatory methods for data stewardship.
Convening experts and building capacity:
- We work with regulators, civil-society organisations and members of the public to deepen their understanding of accountability practices. For example, we have pushed forward novel thinking on frameworks for transparency registers.
- We’ve convened several workshops to bring together experts from industry, academia and government around key topics. These include a workshop series on the challenges that research ethics committees are grappling with in their reviews of AI and data science research, and a workshop series on regulatory inspection and auditing of AI systems.
The impact we seek
Our Ethics and accountability in practice programme enables us to achieve our strategic goals in the following ways:
- We have anticipated transformative innovations in approaches to algorithmic accountability, publishing the first synthesis of emerging terms and practices, and the first global survey of algorithmic accountability policies in the public sector.
- We are rebalancing power over data and AI by developing, trialling and testing accountability mechanisms, ensuring AI systems are designed and deployed in ways that consider their impact on a range of different communities and that their benefits are fairly and equitably distributed.
- We are promoting sustainable data stewardship by suggesting concrete mechanisms for developing best practices in data stewardship – responsible and trustworthy data governance and practice.
- We are interrogating inequalities caused by data and AI by keeping a clear focus on the emergence of bias and discrimination in AI and algorithmic systems, and suggesting sociotechnical mechanisms for identifying and mitigating the impact of AI systems on inequalities.
Projects
Algorithmic accountability for the public sector
Research with AI Now and the Open Government Partnership to learn from the first wave of algorithmic accountability policy
Exploring principles for data stewardship
An open set of case studies exploring principles for data stewardship
The role of good governance and the rule of law in building public trust in data-driven responses to public health emergencies
A citizen jury deliberation on the trustworthiness of data-driven technologies used in a public health emergency
Reports
Algorithmic impact assessment: a case study in healthcare
This report sets out the first-known detailed proposal for the use of an algorithmic impact assessment for data access in a healthcare context.
Technical methods for regulatory inspection of algorithmic systems
A survey of auditing methods for use in regulatory inspections of online harms on social media platforms
Algorithmic accountability for the public sector
Research with AI Now and the Open Government Partnership to learn from the first wave of algorithmic accountability policy
Participatory data stewardship
A framework for involving people in the use of data
Examining the Black Box: Tools for assessing algorithmic systems
Identifying common language for algorithm audits and impact assessments
Events
From principles to practice: what next for algorithmic impact assessments?
We are convening experts from policy, industry, healthcare and AI ethics to discuss our recent case study and the future of AIAs.
Responsible AI research: challenges and opportunities
Exploring the foundational premises for delivering ‘world-leading data protection standards’ that benefit people and achieve societal goals
Redesigning fairness: concepts, contexts and complexities
Lessons learned from COVID-19: how should data usage during the pandemic shape the future?
Responsible innovation: what does it mean and how can we make it happen?
Exploring participatory mechanisms for data stewardship – report launch event
Involving people in the design, development and use of data and AI systems
Prototyping AI ethics futures: Ethics in practice
Part of a week-long series of events highlighting the new possibilities of a humanities-led, broadly engaging approach to data and AI ethics
Networking with care: Re-making networks and practices of data and AI ethics
Part of a week-long series of events highlighting the new possibilities of a humanities-led, broadly engaging approach to data and AI ethics
Prototyping AI ethics futures: Data walk with Dr Alison Powell
Part of a week-long series of events highlighting the new possibilities of a humanities-led, broadly engaging approach to data and AI ethics
Technology and civic engagement – double book launch with the Ada Lovelace Institute
Dr Alison Powell and Dr Daniel Greene in conversation to discuss their new books
From the Ada blog
Getting under the hood of big tech
Realising the potential of algorithmic accountability mechanisms
The role of the arts and humanities in thinking about artificial intelligence (AI)
How does structural racism impact on data and AI?
Why PETs (privacy-enhancing technologies) may not always be our friends
Disambiguating data stewardship
Why what we mean by ‘stewarding data’ matters