Virtual event

EU AI standards development and civil society participation

In May 2023, the Ada Lovelace Institute hosted an expert roundtable on EU AI standards development and civil society participation.

Connor Dunlop

Date and time
4 May 2023

In May, the Ada Lovelace Institute hosted an expert roundtable on EU AI standards development and civil society participation as a follow-up to our discussion paper, Inclusive AI governance: Civil society participation in standards development.

The roundtable included provocations from two experts[1] followed by moderated breakout room discussions, which were structured around the two top-line policy solutions identified in the paper:

  • Increasing civil society participation in standards development
  • Implementing institutional innovations for democratic control of standards development

The conversations in the breakout rooms were lively, with varied expertise encompassing standards development bodies, policymakers (from both the EU and the UK), civil society, academia and industry.

Barriers to meaningful participation

One of the key issues identified by participants was the lack of resources committed to ensuring experts in fundamental rights can engage meaningfully in the process.

It was recognised that routes for civil society input and support for participation vary across standards development bodies (e.g. some have a pay-to-participate model while others do not), and that some support does exist (e.g. StandICT grants provided by the European Commission), but across the board the core barrier was a lack of financial assistance. Some suggested implementing additional financial incentives and mechanisms via the AI Act as a potential solution.

However, simply offering additional funding to bring more diverse expertise into AI standards development was not seen as a silver bullet, because, once ‘in the room’, participants found that industry representatives tend to drown out the voices of civil society and SMEs.

There were concerns over the ‘culture’ of standards development, with the language and processes difficult to navigate for experts who are not usually involved, whether SMEs or civil society. Similar concerns were voiced around transparency: for example, the ‘stress tests’ for standards are produced but not published publicly, and stringent confidentiality rules mean there is a lack of public scrutiny over a process with significant societal implications.

Solutions proposed included updating the voting rights (‘Annex III is flawed because they don’t have voting rights’) and their weighting (one participant suggested civil society and Annex III organisations should be able to block or veto ‘coordinated moves by industry’). To counterbalance some of these cultural challenges, it was suggested that there needed to be training for civil society, such as in drafting standards, and ‘interdisciplinary’ training for the standards development bodies:

‘When one is doing interdisciplinary work, which is how I see civil society engaging in standards work, it is all about compromise. My view is that people involved in the technical process should also have training on the civil society perspective, it goes both ways, there is a need to meet in the middle and to have compromise from both sides.’

Beyond changes to voting and training, another solution that attracted support was a centralised mechanism to coordinate and support input on AI standards at key moments or on key questions. (‘There are specific groups of non-profits that I’d love to engage with, but there’s no funding on the table for me to do that, to bring civil society in.’)

This was deemed particularly relevant for AI standards development, as there was some recognition that there are many academic experts, for example, who could usefully contribute to key questions around AI (e.g. how to operationalise acceptable levels of ‘accuracy and robustness’, or benchmarks for doing so).

Institutional innovations

The second part of the discussion focused on the ‘institutional innovations’ which might be needed if fundamental rights and other public interests cannot adequately be protected by existing arrangements, undermining one of the AI Act’s key objectives.

A majority of participants agreed that common specifications and a benchmarking institute could be considered as supplementary mechanisms for supporting AI standards development. A large majority also indicated that foundation models may require a novel approach to conformity.

On common specifications, there was some agreement that if AI standards end up ‘fairly weak’, it might make sense for the European Commission to develop common specifications ensuring adequate protection for important elements of the AI Act.

However, it was also stressed that common specifications were not a silver bullet. The Commission’s limited expertise in developing technical specifications, especially for AI, and the lengthy process were both cited as challenges. There was some sentiment that common specifications may only be suitable as a ‘last resort’, and one that must not simply reproduce the issues with AI standards development: the process must be open, inclusive and accommodate the sociotechnical nature of AI.

Another supplementary measure that attracted support was leveraging the expertise of national metrology and benchmarking authorities to develop relevant AI benchmarks.[2] This could allow those with expertise in measurement to operationalise the quantifiable aspects of the AI Act’s essential requirements, such as representativeness of datasets, or accuracy and robustness.
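To make the idea concrete, here is a minimal, purely illustrative sketch (in Python, using NumPy) of one way ‘accuracy’ and ‘robustness’ might be operationalised as single, comparable benchmark numbers. The metric choices (clean accuracy and accuracy under Gaussian noise) and the toy classifier are assumptions for illustration only, not requirements drawn from the AI Act or proposals made at the roundtable.

```python
# Purely illustrative sketch: one hypothetical way a benchmarking body could
# turn 'accuracy' and 'robustness' into single, comparable numbers per model.
# The metric choices (clean accuracy, accuracy under Gaussian noise) are
# assumptions for illustration, not requirements from the AI Act.
import numpy as np


def accuracy(predict, inputs: np.ndarray, labels: np.ndarray) -> float:
    """Share of inputs the model classifies correctly."""
    return float(np.mean(predict(inputs) == labels))


def robustness(predict, inputs: np.ndarray, labels: np.ndarray,
               noise_scale: float = 0.1, trials: int = 5) -> float:
    """Average accuracy when inputs are perturbed with Gaussian noise."""
    rng = np.random.default_rng(0)
    scores = [
        accuracy(predict, inputs + rng.normal(0.0, noise_scale, inputs.shape), labels)
        for _ in range(trials)
    ]
    return float(np.mean(scores))


if __name__ == "__main__":
    # Toy 'model': classify a 1-D input as 1 if it is positive, else 0.
    predict = lambda x: (x > 0).astype(int)
    x = np.array([-2.0, -0.5, 0.3, 1.5])
    y = np.array([0, 0, 1, 1])
    print("accuracy:", accuracy(predict, x, y))
    print("robustness:", robustness(predict, x, y))
```

Publishing both numbers for each model would, in principle, give regulators a testable basis for conformity claims (e.g. ‘accuracy above an agreed threshold’), though the choice of metrics and thresholds would itself need to be made through an open, inclusive process.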

This was seen as a useful means to channel external expertise into regulation and support standards development, in a similar format to the EU’s high-level expert group on AI[3]. However, one participant pushed back on the idea of an expert benchmarking institute, saying that while experts could help, there should not be a ‘one-stop shop’ in the form of a centralised institute, and favoured a decentralised approach instead.

The conversation then turned to the challenges posed for AI standards development by large-scale ‘foundation models’ with general applicability. It was pointed out that the development of these models is increasingly fast-paced, making so-called ‘one-and-done’ AI standards difficult to apply to these models.

However, based on current evidence of harms, one participant suggested that standards or benchmarks could be developed to measure these models’ capabilities for hacking or manipulation.

A prevailing sentiment, though, was that new approaches would be needed for these powerful AI models. Ideas proposed included sandboxes for testing conformity before deployment (‘For novel technology, you can say that it’s not in our ballpark. AI has sandbox mechanisms and you can ask a regulator to engage with you in a sandbox. General purpose AI specific obligations should be operationalised but the question is whether standards are the solution?’) and standardising the process for presumed conformity, such as in audits, rather than specific obligations.

The topic of audits came up several times, particularly when participants stressed that standards are not in and of themselves sufficient for fostering a safe and responsible AI ecosystem in Europe. There was a strong feeling that continual review and co-governance will be crucial, given the sociotechnical nature of AI and its rapidly developing ‘frontier’.

Regular auditing (both ex ante and ex post) was seen as the best way to ensure a dynamic approach for governing this rapidly evolving technology, as audits would be updateable and context-specific.

Finally, it would be remiss not to highlight the positive sentiment expressed regarding the European Commission’s approach to AI regulation and standards development. The inclusion of protection for ‘fundamental rights’ in the AI Act was itself a form of institutional innovation, given that product safety legislation usually focuses on ‘health and safety’.

In addition, it was recognised that the Commission has taken positive steps to pilot a new approach to standards, including recognition of the need for input from civil society[4]. In this regard, some participants felt it was important that those with expertise in fundamental rights adopt a co-governance mindset and support AI standards development to deliver the best outcomes for people and society.

Footnotes

[1] Vidushi Marda, ARTICLE 19. Vidushi leads A19’s work on AI standardisation, research and policy. She has worked extensively in standards bodies like the IEEE, and actively participates in policy windows in the EU and India. She has published widely on standardisation, technical infrastructure, biometrics and machine learning.

Hadrien Pouget, Carnegie Endowment for International Peace. Hadrien works on the technical and political challenges faced by those setting AI standards globally. He has recently published on this topic with Lawfare and with the Ada Lovelace Institute.

[2] Standard performance metrics, which aim to distil a model’s capability at a given task (e.g. translation) into a single number that can be compared between models.

[3] A group of experts appointed to provide advice to the European Commission on its AI strategy.

[4] https://ec.europa.eu/transparency/documents-register/detail?ref=C(2023)3215&lang=en
