
What will the role of standards be in AI governance?

Why standards are at the centre of AI regulation conversations and the challenges they raise

Hadrien Pouget

5 April 2023

Reading time: 17 minutes


As the EU gets closer and closer to finalising the AI Act, the US National Institute of Standards and Technology (NIST) publishes the first version of its AI Risk Management Framework (RMF) and the UK government releases its AI regulation White Paper, the global AI community is shifting its attention to the logical next step in the regulation of artificial intelligence: AI standards.

While what exactly is meant by ‘standards’ can be nebulous, they broadly represent a move towards the operationalisation of AI regulation, giving those deploying and regulating AI systems the processes and technical tools needed to ensure regulatory compliance.

Despite the recent spotlight, standards have been influential in AI regulation and guidelines for a while. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have partnered since 2017 to develop AI standards. The AI RMF in the USA draws heavily from ISO’s risk management standards and ISO/IEC’s AI terminology standards, and the EU AI Act’s risk management approach is likely to do the same.

However, as AI standard-setting moves beyond definitions and descriptive frameworks, it is stepping into politically charged territory. Regulators are still discussing high-level principles like ‘fairness’, ‘robustness’, ‘transparency’ and ‘right to recourse’, and there remains considerable uncertainty about what these mean in practice.

How do actors check that a system is robust or fair? What are the requirements for a model or process to be transparent? What kinds of documentation need to be collected and made available for recourse? Unsurprisingly, efforts to standardise these principles can be fraught.

Additionally, the organisations developing AI standards are led by industry and have experience tackling technical issues, not necessarily social ones. As a result, there are serious concerns as to whether standards development organisations (SDOs) have the institutional capacity to create standards for a technology like AI, where complex value judgements appear to be deeply woven into technical decisions.

There is no doubt that standards have a role to play in AI governance, bringing clarity to the procedural and technical challenges of managing AI. They will be as important to those deploying AI systems as to the regulators and certifiers auditing them and the judges ruling on whether an organisation has fallen short of its responsibilities. Nonetheless, questions remain over what should or should not be standardised and the role standards should play in AI governance.

Standard-setting culture

To understand AI standards and the role they could play in AI regulation, it is worth taking a step back and acknowledging the strong culture of standard-setting that exists in industry contexts.

An ideal conception[1] of standards sees them as voluntary, consensus-based and formulated by SDOs – private non-profits[2] that convene primarily industry members along with other stakeholders – because they are intended to resolve industry-specific technical issues. When Blu-ray and HD DVD standards, for instance, competed for dominance over the high-definition optical disc market, governments had little reason to get involved.

In this model, industry players bring the technical and contextual knowledge required to ensure standards are effective and adopted to the benefit of industry and, hopefully, consumers.

Standards are a very flexible tool. They can take the form of ‘any agreement’ that might be useful, especially when coordination is important: from providing common definitions and outlining technical and governance processes to ensuring compatibility between devices and setting out detailed technical specifications.[3]

In practice, standards frequently touch on issues that interest governments (most commonly physical health and safety) and, in cases like AI technologies, complex social and economic issues become relevant. This makes the voluntary and industry-led picture described above murkier.[4] When this happens, government involvement in the development and enforcement of standards can have varying levels of intensity.

On one end of the spectrum, a government can develop its own ‘standards’ and make them binding, in such a way that the distinction between regulation and standards becomes largely superficial.

More hybrid models of government involvement are also possible. Government offices can consult with SDOs or oversee the process through which they establish standards, to ensure that they respond to government needs. Governments can also include a reference to existing SDOs’ standards in regulation, making their fulfilment either binding or a sufficient-but-not-necessary condition for legal compliance. By working with SDOs, governments can benefit from the technical and contextual knowledge held by industry and regulate complex technologies without having specific, internal expertise.[5]

Notably, government involvement in standard-setting makes global coherence, one of the principles for the development of international standards, more complicated. On paper, standards serve as a tempting focal point for international cooperation. In the ideal vision of standard-setting, politics can be stripped away and purely technical issues tackled jointly.

The reality, however, is different. Since standards can carry some regulatory influence and relate to issues that matter to governments, governments are invested in their content and disagreements can arise. Countries can have differing visions even on questions of physical health and safety, let alone on social issues like ‘fairness’, which are relevant to AI.

A tool for regulating AI?

With regulation on the way, standards are emerging as a potential tool for AI governance because they offer a flexible way for governments to benefit from industry expertise and a supposedly promising path for international cooperation.

Notably, AI standards would have played (and will play) a role regardless of any specific regulation. Industry actors want a clear, coherent approach to AI and are interested in limiting risks resulting from its use (although, of course, the risks they perceive may differ from those feared by other groups in society). For these reasons, the ISO/IEC’s AI standards work began in 2017 – before most countries had an official, high-level AI strategy, let alone specific regulations or guidelines.[7]

Of the standardisation work already underway internationally, the ISO/IEC joint effort is the most prominent. The national SDOs that make up ISO and IEC are likely to adopt the agreed international standards, making them official within their respective countries. The Institute of Electrical and Electronics Engineers (IEEE), which has a broader membership, is also creating sometimes competing, sometimes complementary AI standards, and on some aspects it is moving faster than ISO and IEC.

Turning to national standard-setting institutions: in the USA, NIST has developed the aforementioned voluntary AI RMF, which outlines useful internal governance processes for those aiming to use AI. The USA’s leading standards body, the American National Standards Institute (ANSI), is heavily involved in standard-setting at the ISO/IEC level and is likely to respect the standards developed there.

The EU has put itself in a relatively unique position with respect to standards, as they will play an important role in the enforcement of the AI Act, currently being discussed by the EU’s legislative bodies. Standards are intended to serve as ‘objectively verifiable’ ways of complying with legal requirements from the Act and should be ready by early 2025, before it comes into force.

The European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) have recently started to develop AI standards to this end but, until the AI Act is complete, the exact relationship between standards and the Act will not be pinned down. The European standardisation bodies are likely to defer to ISO/IEC standards in some areas and develop unique requirements in others.[8]

As part of the UK’s National AI Strategy, an AI Standards Hub, hosted by the Alan Turing Institute, was established in 2022. The UK is also involved in CEN and CENELEC through its leading standards body, the British Standards Institution (BSI).[9]

In China, the government has traditionally had a heavy-handed role in standard-setting, although it has recently developed a strategy that includes a growing role for industry voices and a goal to become more aligned with international standards.

Challenges facing AI standards

Going forward, we can identify three key challenges for regulators, enforcement agencies, civil society and industry.

Challenge 1: Stakeholder representation and complex value judgements

As already mentioned, the SDOs’ industry-led model of standard-setting works well when issues are simple and relatively technical. Designing booster seats has an important impact on children’s safety, but the relevant measures, tests and mitigation methods are well understood.

As standard-setters tackle more complex social issues, it is unclear that the industry-led model carries the right incentives, understanding or legitimacy required to make the necessary value judgements. This is the case for a technology like AI, where both the technology and the harms it can cause are so complex that it becomes difficult to separate value judgements from technical details.

For instance, the EU’s request for AI standards to CEN and CENELEC is rife with examples of grey areas. Is the ‘representativeness’ of a dataset simply a mathematical property, a legal issue or a moral question? If it is all three, then how can these notions be aligned? Similar questions can be asked about the ‘robustness’ of a model: how thoroughly should a system be tested and which possible situations should be prioritised? The same applies to notions such as ‘accuracy’, ‘transparency’ and many others in the request. The ISO’s standard-setting work programme presents similar issues.
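To see how quickly the purely ‘mathematical’ reading runs out, the sketch below checks representativeness as nothing more than a comparison of subgroup shares in a dataset against a reference population. It is a hypothetical illustration, not a method drawn from any standard: the attribute, groups and reference figures are invented, and choosing them is itself the kind of value judgement at issue.

```python
# Purely illustrative: a narrowly 'mathematical' reading of dataset
# representativeness, comparing subgroup shares in a dataset against a
# reference population. The attribute, groups and reference figures are
# hypothetical; choosing them is itself a value judgement.
from collections import Counter


def subgroup_shares(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of each value of `attribute` among the records."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}


def max_deviation(dataset_shares: dict[str, float],
                  population_shares: dict[str, float]) -> float:
    """Largest absolute gap between dataset and population shares."""
    return max(abs(dataset_shares.get(group, 0.0) - share)
               for group, share in population_shares.items())


# Hypothetical reference population: 51% group X, 49% group Y.
population = {"x": 0.51, "y": 0.49}
dataset = [{"group": "x"}] * 70 + [{"group": "y"}] * 30
print(max_deviation(subgroup_shares(dataset, "group"), population))  # ~0.19
```

Even this trivial check embeds contested decisions – which attribute to measure, which reference population to trust and how much deviation is acceptable – none of which are purely technical.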

This raises the question of how SDOs’ decisions can be made more legitimate. Government involvement or oversight is likely to suffer from a lack of expertise, both on the technical front and on the interests of specific social groups, given the technical complexity and dizzying array of potential applications. Many SDOs have mechanisms for including a wide range of stakeholders for exactly this reason, from offering full memberships to enabling observer roles, and governments can and do incentivise these forms of engagement.[10]

However, participation is not without friction. SDOs have non-negligible membership fees and application procedures, and their work is relatively opaque to outsiders. Current drafts are not usually publicly available and completed standards are paywalled, costing hundreds of pounds to access (not to mention that they will refer to other standards, which are also paywalled). Working on standards requires finding people with the right technical expertise, who are usually already working in industry. All of this can lead to confusion about where the best point of entry is for newcomers lacking insider knowledge. It can be difficult to judge where or when to engage, whether it is worth engaging, which decisions matter and what the fruits of that engagement could be.

Challenge 2: What should be included in standards?

Naturally, some have called into question whether standards are the right place for discussions so relevant to human rights.

But this is not the first time that standards have been proposed as tools for tackling such thorny issues. Medical standards, for example, deal with complex systems (bodies), require complex value judgements and inform decisions with profound impacts on physical and mental health.

In medicine, this works because medical standards are not comprehensive and form only a piece of the regulatory puzzle. On the one hand, courts offer an important backstop in most legal systems, whether standards exist or not.[11] Compliance with standards is not necessarily enough to absolve a person or company of responsibility if they cause harm. In trials for medical malpractice, it is common to bring in expert witnesses to help establish a baseline for ‘reasonable’ behaviour and add context and nuance. On the other hand, regulatory agencies can pre-empt harms by assessing and approving new drugs and medical interventions.

The added flexibility of expert witnesses and the weight of regulatory action help navigate the treacherous ground of accountability in medical practice in ways that standards alone could not. In this light, AI standards could be viewed as a piece of the puzzle rather than a silver bullet.

With this in mind, questions remain about what should, and should not, be included concretely in AI standards. In part, this will depend on countries’ regulatory regimes (what works for the USA may not work for the EU).

Standards can vary in how prescriptive they are and range from precise requirements to more general guidelines. Instead of including a metric for dataset bias and a bar that must be met, a standard could simply include a list of possible metrics – but maybe even that is too much.[12] A CEN/CENELEC work item, for example, aims to provide a set of ‘trustworthiness characteristics’ for AI, but only with ‘example metrics’.
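To make the contrast concrete, a maximally prescriptive standard would fix both the metric and the bar. The sketch below – purely illustrative and not drawn from any existing or proposed AI standard – encodes one such pairing, using the four-fifths rule mentioned in footnote 12 and hypothetical hiring figures.

```python
# Purely illustrative: a prescriptive 'metric plus threshold' check in the
# spirit of the US four-fifths rule (each group's selection rate should be
# at least 80% of the highest group's rate). Not drawn from any actual or
# proposed AI standard; the figures below are hypothetical.


def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """`outcomes` maps group -> (number selected, number of applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}


def passes_four_fifths(outcomes: dict[str, tuple[int, int]],
                       threshold: float = 0.8) -> bool:
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    # Every group's selection rate must be at least `threshold` of the highest.
    return all(rate >= threshold * highest for rate in rates.values())


# Hypothetical hiring data: 48 of 120 selected in group A, 30 of 100 in group B.
example = {"group_a": (48, 120), "group_b": (30, 100)}
print(passes_four_fifths(example))  # 0.30 / 0.40 = 0.75 < 0.8, so False
```

A less prescriptive standard might name the metric but leave the threshold – or whether to apply a hard threshold at all – to deployers and regulators.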

Another important issue is the ‘verticality’ of standards. Standards are likely to need to be specific to a technology (AI is a general term that captures many types of underlying technologies) and/or its application (different applications of the same technology may require different assessments). The AI RMF, for example, aims to be agnostic on both but, as a result, does not get into the gritty details. It enumerates potential trustworthiness characteristics, but does not point to any specific evaluations, which would require more context.[13]

Finally, there are legitimate concerns about the technical feasibility of assessing the trustworthiness of some AI systems, which are opaque even to their developers and can behave in unexpected ways even after testing.[14]

Challenge 3: Will standards help with international coordination?

All countries and SDOs emphasise the importance of internationally coherent standards, but the picture is murkier in practice, as AI is set to have social, economic and military implications.

Tensions in international standard-setting in general have increased and this is likely to have impacts on AI standards. In particular, China’s behaviour in international standard-setting has been perceived as overly aggressive (although some have argued this is an overreaction). The concerns have notably manifested in the EU’s 2022 standardisation strategy, which sees the EU aiming to reduce foreign influence in European SDOs by focusing power in member state organisations, while not-so-subtly pointing the finger at China.

Similarly, the traditionally uncontroversial vote for the leadership of the International Telecommunication Union, which most recently pitted a Russian and an American against each other, this time earned a statement from the President of the United States himself.

Additionally, the desire, on the one hand, for increased AI sovereignty given the importance of the technology and, on the other hand, for global coherence has resulted in a somewhat inconsistent approach to international standards. For instance, the EU and USA have identified standards as a promising avenue to simplify compliance for companies wishing to operate on both sides of the Atlantic, by at least using a common set of definitions and technical tools, regardless of the differing regulatory regimes. At the same time, the EU does not shy away from deviating from international standards to satisfy European needs.

A thorny but unavoidable issue

Standards have emerged as a natural place for the technical and procedural details of AI governance to be developed. However, AI represents a complex sociotechnical problem for standard-setters, in which value judgements are hard to separate from technical decisions and technical understanding is sometimes limited in the face of complex and opaque technologies. In addition, strategic economic and military implications have made international cooperation more complicated, as countries compete for influence.

All these challenges raise questions about what details should or, perhaps more importantly, should not be left to SDOs. This will of course have implications for other AI governance mechanisms.


If you want to know more about the role of standards in AI governance, you may be interested in our discussion paper ‘Inclusive AI governance: Civil society participation in standards development’.


[1] This ideal vision of standard development is captured more formally in the World Trade Organization’s (WTO) six principles for the development of international standards, which paint them as voluntary, consensus-based and impartial, relevant to markets’ needs, produced in an open and transparent process and applied coherently across the world.

[2] SDOs can vary greatly in the scope of their activities and the geographic area they cover. Nonetheless, most countries have at least one central SDO, which represents the country in international SDOs such as ISO and IEC, or in regional SDOs such as the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC).

[3] Standards can be startlingly specific. For example, the US federal standard for child restraint systems (booster seats), coming in at over 18,000 words, is meticulous in describing the features and testing of the seat, down to the temperature at which the child dummies’ clothes should be machine-washed before testing (from 71 °C to 82 °C, in case you were wondering).

[4] It is worth noting that direct government involvement is only one of the ways in which the ‘voluntary’ nature of standards tends to be challenged. Standards might carry some legal weight by acting as a reference for ‘reasonable’ behaviour. In this case, compliance can protect companies from the threat of liability. They can be used as a signal of quality, and adherence can be a precondition for government procurement or an important message to other companies and consumers. Civilian standards can have impacts on military processes, with NATO mentioning that its internal AI standards could be ‘bolstered by coherence with relevant international standard-setting bodies, including for civilian AI standards’. The WTO’s Technical Barriers to Trade agreement prohibits countries from adopting standards that deviate from international standards as a protectionist measure (such as to limit imports). However, as anyone who has forgotten to buy yet another type of plug adapter for their international travel will tell you, international alignment does not always pan out.

[5] The whole range of approaches can be observed in practice. The federal standard for child restraint systems mentioned in footnote 3, for example, is a legally binding federal standard and appears in US law. The US Occupational Safety and Health Administration, on the other hand, often refers to standards from the American National Standards Institute (ANSI, an SDO) and periodically updates the law to refer to newer versions of those standards. The EU, instead, tends to take a hybrid approach: ‘harmonised standards’ are developed by European SDOs (also called ESOs) to support product regulation, although adherence to them is not mandatory.

[6] For example, the development of international standards on food safety was led by higher-income countries, which have the resources required to participate successfully in international standard-setting. The added safety comes at an increased cost that is harder for lower-income countries to bear – especially those hoping to export food.

[7] China’s strategy, which kicked off a wave of national strategies, was announced in July 2017.

[8] CEN and CENELEC mirror ISO and IEC respectively at the European level. Through the Vienna Agreement for CEN and ISO, and the Frankfurt Agreement for CENELEC and IEC, the organisations coordinate much of their work, aiming to remain aligned as much as possible.

[9] This is not unusual, despite the UK having left the EU: CEN and CENELEC include several other non-EU members, who have slightly modified voting rights.

[10] The EU funds stakeholder advocacy organisations and has CEN/CENELEC standards assessed by consultants before official adoption. Similarly, the UK’s AI Standards Hub aims to ‘help stakeholders navigate and actively participate in international AI standardisation efforts’. When NIST developed the AI RMF, it held several rounds of open calls for comment.

[11] In the EU, for example, non-discrimination is a fundamental right. Standards for the AI Act could offer a sanity check before a product is put on the EU’s market, but compliance would not necessarily absolve a company of responsibility if it were later found to have discriminated. In practice this can be complicated, because compliance with standards can offer some legal protection; this is even starker in the EU’s proposed AI Liability Directive, which explicitly uses compliance with the AI Act as a reference.

[12] These kinds of ethical questions are not necessarily impossible to standardise, and there is precedent; see, for example, the US’s four-fifths rule, a rule of thumb used to measure discrimination in employment.

[13] Although NIST is encouraging the development of RMF ‘profiles’ that will give more concrete guidance.

[14] These problems are worse when systems become more complex – with the behaviour of systems like ChatGPT being difficult to constrain even with a dedicated team of the world’s top AI researchers. ‘Despite its capabilities, GPT-4 […] still is not fully reliable’ and ‘there still exist “jailbreaks” to generate content which violate our usage guidelines.’ (GPT-4, OpenAI website)
