Search
Browse the Ada Lovelace Institute website.
Return to reality
An exploration of immersive technologies.
Ada Lovelace Institute statement on the UK’s approach to AI regulation
Michael Birtwistle, Associate Director (Law & policy), comments on the UK Government's response to the AI Regulation white paper consultation.
Meaningful public participation and AI
Lessons and visions for the way forward
Ada Lovelace Institute statement on the EU AI Act
Michael Birtwistle, Associate Director (Law & policy), encourages EU capitals and the European Parliament to approve the AI Act.
Safe before sale
Learnings from the FDA’s model of life sciences oversight for foundation models
Post-Summit civil society communiqué
Civil society attendees of the AI Safety Summit urge prioritising regulation to address well-established harms.
Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models.
New, independent evidence review helps policymakers understand public attitudes about AI and how to involve the public in AI decision-making
The Ada Lovelace Institute has published a new rapid review of evidence on public attitudes about AI and how to involve the public in AI policy.
Foundation models in the public sector
AI foundation models are integrated into commonly used applications and are used informally in the public sector.
AI regulation and the imperative to learn from history
What can we learn from policy successes and failures to ensure frontier AI regulations are effective in practice?
Seizing the ‘AI moment’: making a success of the AI Safety Summit
Reaching consensus at the AI Safety Summit will not be easy – so what can the Government do to improve its chances of success?
Regulating AI in the UK
Recommendations to strengthen the Government's proposed framework