Browse the Ada Lovelace Institute website.
Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models
DNA.I.
Early findings and emerging questions on the use of AI in genomics
Keeping an eye on AI
Approaches to government monitoring of the AI landscape
AI assurance?
Assessing and mitigating risks across the AI lifecycle
Looking before we leap
Expanding ethical review processes for AI and data science research
Inform, educate, entertain… and recommend?
Exploring the use and ethics of recommendation systems in public service media
Voices in the Code: A Story about People, Their Values, and the Algorithm They Made
David G. Robinson in conversation with Professor Shannon Vallor
Getting under the hood of big tech
Auditing standards in the EU Digital Services Act
Algorithmic impact assessment: a case study in healthcare
This report sets out the first-known detailed proposal for the use of an algorithmic impact assessment for data access in a healthcare context
Algorithmic accountability for the public sector
Learning from the first wave of policy implementation
New research highlights lessons learnt from the first wave of global policy mechanisms to increase algorithmic accountability in the public sector
Algorithmic accountability for the public sector
Research with AI Now and the Open Government Partnership to learn from the first wave of algorithmic accountability policy.