A piece of the action
The questions we’re asking ahead of the AI Action Summit
7 February 2025
On 10-11 February, world leaders, tech luminaries, academics, and NGO and civil society representatives from around 80 countries will converge on Paris for the AI Action Summit.
One of the summit’s aims is to ‘commit together to develop the science, solutions and standards that will ensure that artificial intelligence serves the fundamental public interest’. While this aim echoes themes from the Bletchley Declaration and discussions in Seoul, it’s hard to ignore the conspicuous absence of ‘AI safety’ from the framing of the Paris summit. As the USA rejects regulation of these technologies and the UK promises to ‘unleash AI’s potential’, the outcomes may turn on how much this kind of ‘action’ overshadows conversations about preventing harms to people and society.
Many predictions are being made about what will be discussed and decided in Paris next week. But instead of dusting off our crystal ball, we thought we’d channel our curiosity. We want to understand whether the summit will actually move the needle towards AI technologies that work for the economy, governments, businesses and – most importantly – for everyday people. Here are some questions we will be asking.
What will be the reaction to the increasing power imbalance in the AI ecosystem?
Recent developments in the AI ecosystem have signalled a growing concentration of power among tech companies – from major tech firms’ deepening ties to the US administration, to UK regulators being told to prioritise growth, to major frontier developers increasingly controlling most of the technology stack.
This raises vital questions. Are there adequate incentives for these companies to ensure that new technologies work and are safe? And if we maintain this status quo, who is actually reaping the benefits of new technologies?
At the summit, we’ll be watching closely to see if and how this power imbalance is challenged. Is there widespread awareness of this market concentration, and a strong enough appetite to develop alternatives?
Will there be momentum on public interest?
Public interest AI is a major theme of the AI Action Summit, as countries look to adopt AI and ensure its benefits are enjoyed across society – and are not captured elsewhere.
We already know that significant investments in compute are under way – some with public funding and some with the intention of public benefit. But open questions remain about which forms of investment minimise value capture by large companies and maximise value to the public.
In addition, the advent of DeepSeek suggests we may well see approaches that radically reduce the compute needed to train and run capable foundation models, making their capabilities more widely accessible to governments, international collaborations and public interest AI projects.
A reported output of the summit will be an international AI foundation that will support and coordinate projects to create openness around the AI value chain, enabling a broader range of actors to participate in building and using AI. This type of international coordination and collaboration around public interest is both much needed and ambitious – and we will be looking out for the commitments nations make to support it.
Will agentic AI get the attention – and governance solutions – it deserves?
The recently released International AI Safety Report highlights the development of general-purpose AI agents as a key emerging trend. Our new research explores the proliferation of Advanced AI Assistants that act on the world both directly (e.g. by sending emails or engaging in financial trading on behalf of a user) and indirectly (e.g. by shaping behaviours or attitudes of users).
Given Advanced AI Assistants’ potential influence over user thought and behaviour – as well as the amount of data they collect on users – their integration into our personal lives and finances could place an unprecedented amount of power in the hands of the small number of large companies developing, providing and controlling these systems. Their governance will require significant thought and care from policymakers, and Paris may give early clues about how they perceive the role of agents.
Will developers be left to continue ‘marking their own homework’ when it comes to upstream AI risks?
When it comes to the risks posed by foundation models, most regulators focus on the point of use – many lacking the powers or mandate to look at the underlying technology and its developers. Because of this, there are few mechanisms or incentives for holding the developers of these powerful models accountable for harms. The EU AI Act’s Codes of Practice are currently the only meaningful attempt to place requirements on model developers. And while the UK Government has proposed regulation of the most advanced frontier models, we have yet to see any legislation materialise.
So will the Paris summit manage to secure concrete commitments from developers on mitigating these risks? Or will proposals for new accountability mechanisms – such as international AI governance – emerge?
Will we see a drive toward concrete collective action?
The purpose of summits like this is to build consensus and drive collective solutions for harnessing the benefits of AI. But will collaboration prevail in the midst of the current AI arms race?
The Paris summit presents two positive visions for the future: one in which AI is used sustainably – through the reported establishment of the Coalition for Sustainable AI – and one in which the technologies, tooling and infrastructure are widely accessible for use in the public interest – through the AI foundation. But achieving global momentum towards these futures will need intentionality and commitment from governments.
Paris will give us a good sense of whether governments are willing to work together to create the incentives, institutions and alternatives that will enable broad access to and enjoyment of the benefits of AI across the globe.