News

Ada Lovelace Institute curates ‘Ethics & Society’ at CogX 2021

The Ada Lovelace Institute curates a half-day of events at CogX 2021 – the international festival of AI.

10 June 2021

Reading time: 3 minutes


On Monday 14 June from 10:00 to 18:30 BST on the Ethics & Society stage at CogX 2021, Ada hosts a series of conversations exploring how governments, industry, civil society and academia can move from thinking about ethical principles of AI and technology towards specific practices that work better for people and society.

This builds on the theme of this year’s CogX festival – ‘How do we get the next 10 years right?’ – and will feature focused conversations on the current state of play with ethics and justice in tech, including what it will take to make meaningful progress over the next 10 years.

To join our sessions, register to get your free CogX 2021 standard pass here.

Andrew Strait, Associate Director (Research Partnerships) at the Ada Lovelace Institute, will moderate the day.

Session 1: Vaccine passports and the next generation of digital infrastructure

10:00 – 10:45 BST

As the pandemic response moves into vaccine distribution, vaccine certification and health status applications are being offered as a promising digital tool to enable freedoms. But what are the risks? This panel will evaluate the challenges these technologies pose and discuss how these technologies can be used responsibly.

Moderator: Imogen Parker, Ada Lovelace Institute. Panellists: Dr James Wilson, Professor of Philosophy and co-director of the UCL Health Humanities Centre, and Dr Linnet Taylor, Associate Professor at the Tilburg Institute for Law, Technology and Society.

Session 2: How do we get biometrics-based technologies right in the future?

11:00 – 11:40 BST

Biometrics technologies like facial and gait recognition systems are a source of ongoing controversy. Are there ways to design and build these systems to serve the needs of people they affect and limit potential harms? Is the law surrounding biometrics moving in the right direction? What’s needed to ensure these technologies are developed in ways that maximise potential benefits, without entrenching systemic harms or decreasing public trust?

Moderator: Aidan Peppin, Ada Lovelace Institute. Panellists: Matthew Ryder, senior QC at Matrix Chambers and lead of an independent review of the governance of biometric data, and Julie Dawson, Director of Regulatory & Policy, Yoti.

Session 3: What does the future hold for regulating social media?

12:00 – 12:40 BST

In the last year, sweeping new powers have been proposed for UK and EU regulators to inspect, assess and supervise social media platforms, and similar discussions are underway in the USA. But what will regulation look like in the years to come? What should audits and assessments involve, what are the pitfalls, and what should their scope include?

Moderator: Aparna Surendra, AWO. Panellists: Seyi Akiwowo, Founder and CEO, Glitch, and Mark Bunting, Policy Director, Online Harms, Ofcom.

Session 4: Closing the data divide: COVID-19 and beyond

14:00 – 14:40 BST

The data divide shapes who is represented by, and who has agency to shape, data-driven technologies at a time when tech use and adoption have accelerated. Globally, technologies such as symptom trackers, digital contact tracing apps and vaccine passports have challenged us to think through how to make data and AI work fairly and equitably for everyone. This discussion will explore the conditions needed to close the data divide, engendering more representative, proportionate and effective data governance in COVID-19 and beyond.

Moderator: Reema Patel, Ada Lovelace Institute. Panellists: Professor Ann Phoenix, Professor of Psychosocial Studies, UCL, and Dr Nicola Byrne, National Data Guardian for health and adult social care in England.

Session 5: Can you do ethical research in a corporate environment?

18:00 – 19:00 BST

The last decade has seen a proliferation of ethical principles that sought (on the face of it) to provide governments and industry with a pathway to the safe and responsible development of technology. This movement spurred the creation of teams like Google’s Ethical AI team, co-led by Margaret Mitchell and Timnit Gebru. Yet as Google’s decision to dismantle the Ethical AI team shows, this grand vision hasn’t gone to plan.

What does this moment tell us about the future of ethics teams within industry firms? How do we ensure technology is developed and studied in a safe, responsible and meaningful way in the next 10 years?

Chair: Carly Kind, Ada Lovelace Institute. Speaker: Margaret Mitchell, AI Research Scientist.


Register for your free CogX 2021 standard pass here.
