Search
Browse the Ada Lovelace Institute website.
The trust problem
Conversational AI and the needs of child users
Risky business
An analysis of the current challenges and opportunities for AI liability in the UK
Nearly 9 in 10 people in the UK support independent regulation of AI
Our polling reveals that the UK public prioritise AI safety and positive social impacts over economic gains, speed of innovation and competition
Legal analysis reveals urgent need for laws and regulations that guard against harms of Advanced AI Assistants
New paper from the Ada Lovelace Institute and AWO finds that existing legal rules offer no effective protection against a wide range of risks
Let’s get real
The benefits and risks of immersive technologies for impacted communities
Synthetic data, real harm
Why is synthetic data used and what challenges does it raise for AI assurance mechanisms?
Will the UK AI Bill protect people and society?
Assessing the credibility of forthcoming legislative proposals
Ada Lovelace Institute responds to UK Compute Roadmap
Matt Davies, Economic and Social Policy Lead at the Ada Lovelace Institute, has responded to the UK Compute Roadmap.
Ada Lovelace Institute responds to final General-Purpose AI Code of Practice
Gaia Marcus, Director of the Ada Lovelace Institute, has responded to the final General-Purpose AI Code of Practice.
Who pays for AI risks?
Exploring how industrial levies can shape market behaviour, counter harms and improve governance
Tokenising culture: causes and consequences of cultural misalignment in large language models
How do AI systems embed cultural values, and what risks does this pose?
Mass facial recognition roll-out exists in ‘legal grey area’ due to inadequate governance, says the Ada Lovelace Institute
The UK’s approach to governing facial recognition and other biometric technologies is failing to provide legal certainty or safeguard the public.