
Post-Summit civil society communique

Civil society attendees of the AI Safety Summit urge prioritising regulation to address well-established harms

1 November 2023

Reading time: 4 minutes

Image: The rotor dials from a ‘Bombe’ decryption machine used by the codebreakers at Bletchley Park to decrypt German ‘Enigma’ traffic during the Second World War.

Leading civil society organisations and AI experts based across the UK and USA have today signed a joint communique reflecting on their time at the UK AI Safety Summit so far. The communique calls on governments to prioritise regulation to address well-established harms that already impact people’s rights and daily lives, instead of narrowly focusing on ‘frontier’ models.


The undersigned civil society participants at the UK AI Safety Summit call for regulatory action to ensure that the current and future trajectory of AI serves the needs of the public. AI systems impact almost every aspect of our lives, often in highly significant ways – from algorithmic management systems that influence workers’ pay, wellbeing, and autonomy, to automated decision-making systems that determine access to social benefits and other resources, to biometric systems deployed in migration, security, and educational settings, to computer vision techniques that influence medical diagnoses. While the potential harms of ‘frontier’ models may have motivated the Summit, existing AI systems are already having significant harmful impacts on people’s rights and daily lives.

We call on governments to prioritise regulation that addresses the full range of risks AI systems can raise, including current risks already impacting the public. The first step must be real enforcement of the existing laws that already apply, alongside building new regulatory guardrails where they are needed. There is already a decade’s worth of evidence on the harms associated with existing AI systems: from discrimination, to security and privacy lapses, to competition concerns, to informational harms. Substantial expertise already exists on how to tackle many of these issues, and acting on it now is the critical groundwork needed to address potential risks that may arise further down the line. Society will be underprepared for the problems of tomorrow without the institutions, laws, powers, and accountability needed to address the issues of today. These guardrails include laws requiring the independent testing of AI systems at all stages of their development and deployment, modes of redress and legal liability for when things go wrong, and new powers for regulators to enforce sanctions where appropriate. These protections are long overdue.

Experience has shown that the best way to tackle these harms is with enforceable regulatory mandates, not self-regulatory or voluntary measures. As other sectors such as life sciences, automotive, and aerospace show, regulating to make products safe is not at odds with innovation: it enables it – and protects the public in the process.

There remain hard and pressing problems to solve, and it is critical to meaningfully involve the international community in any coordinated governance efforts to identify solutions. We urge three priorities for any such efforts, including for any proposals for national or international institutes that emerge from this Summit:

  • First, AI safety must be understood as more than a purely scientific endeavour to be studied in lab settings: AI systems do not exist in a vacuum, but co-exist with people and are embedded within institutions and power structures. It is critical that AI systems be examined in the contexts in which they are used, and that they be designed to protect the people on whom they will be deployed – including by questioning whether AI is the appropriate tool for a particular task in the first place.
  • Second, companies cannot be allowed to assign and mark their own homework. Any research efforts designed to inform policy action around AI must be conducted with unambiguous independence from industry influence, with ample controls to ensure accountability, and with a mandate requiring companies to provide independent evaluators with the access (to data and otherwise) they need, on terms established by regulators and researchers.
  • Third, the need for more research in some areas should not prevent us from taking practical steps on urgent policy priorities that we already have the tools to address. The evidence of harm from AI systems is already sufficient to justify baseline protections such as mandatory transparency and testing.

Because only a small subset of civil society actors working on AI issues were invited to the Summit, these are the perspectives of a limited few; they cannot adequately capture the viewpoints of the diverse communities impacted by the rapid rollout of AI systems into public use. Here, too, governments must do better than today’s discussions suggest. It is critical that AI policy conversations bring a wider range of voices and perspectives into the room, particularly from regions outside the Global North. Framing a narrow section of the AI industry as the primary experts on AI carries risks: further concentrating power in the tech industry, introducing regulatory mechanisms that are not fit for purpose, and excluding perspectives that would help ensure AI systems work for all of us.

Signatories:

Ada Lovelace Institute

AI Now Institute

Algorithmic Justice League

Alondra Nelson, Ph.D., Institute for Advanced Study (U.S.)

Camille Francois, Columbia University’s Institute for Global Politics 

Center for Democracy & Technology

Centre for Long-Term Resilience

Chinasa T. Okolo, Ph.D., The Brookings Institution

Deborah Raji, University of California, Berkeley; Mozilla Foundation

Marietje Schaake, Stanford Human-Centered AI

RealML

Responsible AI UK