Skip to content
Blog

An infrastructure for safety and trust in European AI

Mandating independent safety assessment

Federica Pizzuti , Elija Leib , Connor Dunlop , Niklas Clement

28 June 2024

Reading time: 12 minutes

close up the scientist hand holding a cpu processor chip, working in the laboratory

 

It is easy to take safety and security for granted, whether steering a car, accessing healthcare or paying for a coffee with a contactless method. This is the result of a complex system of laws, standards, certifications, inspections and audits, designed to streamline services and avoid harm to people.

Artificial intelligence (AI) has so far been an exception to this. These technologies are being rapidly integrated into our daily lives, both as consumer products and as critical infrastructure, but AI models and their downstream applications currently lack the safety guarantees we expect from other critical sectors.

Despite recognition of the potential systemic harms associated with large-scale foundation models – both in international codes (e.g. the G7 Hiroshima Process) and jurisdictional law (like the EU AI Act) – no jurisdiction yet mandates independent, third-party testing to ensure that the most advanced AI products comply with safety rules. This means that the task of assessing the safety of models is left to the companies who produce them.

This lack of compulsory independent oversight raises concerns, as academic research has consistently demonstrated that self-assessments deliver lower safety and security standards than accredited third-party or governmental audits. Without these checks, product developers face strong commercial incentives to prioritise product delivery over safety.

Additionally, risk identification requires multiple viewpoints (for example, expertise in risk domains, context of use, or computer-human interaction), as no single organisation is equipped to anticipate and identify all potential hazards. External scrutiny is essential to ensure that incentives are aligned with finding vulnerabilities, while also representing diverse perspectives.

While security testing is unlikely to identify all risks from advanced AI systems, it still offers crucial insights into product safety. Making independent assessments and testing mandatory, as they are for other industries – like building and infrastructure, healthcare, industrial machinery and automotive – is a necessary step to minimise the potential harms of AI technologies.

This blog post explores the benefits of the independent European Quality Infrastructure ecosystem for safety, public trust and competitiveness; examines how its key methodologies can be applied to the AI context; and reviews what policy interventions are necessary to ensure that external scrutiny is effectively applied to AI systems, and in particular foundation models.

The European Quality Infrastructure

The European Quality Infrastructure refers to the institutions, systems and methods that ‘ensure that products and services are safe, reliable and conform to functional and quality requirements’.

Mandatory third-party conformity assessment services in the EU encompass activities such as testing, inspection and certification (TIC), allowing traditional sectors, from foods to automotives, to maintain high safety standards. The role of independent third-party TIC companies has grown worldwide across sectors, establishing a reliable method of conformity assessment that drives higher levels of compliance and ensures greater protection.

In the European Quality Infrastructure ecosystem, the independence and competence of conformity assessment bodies is ensured by accreditation. National Accreditation Bodies evaluate the level of impartiality, as well as the technical capacity, of Conformity Assessment Bodies to perform their duties fairly, as regulated under EU law and international standards.

Three approaches to testing and certification

We can broadly identify three different approaches to testing and certification before products or services enter the market: certification of quality management systems; product and adversarial testing; and post-market methods of periodical inspection.

The certification of quality management systems provides a certain level of safety by examining the production processes and management structures in place. In practice, this means that the safety management system set up by an organisation is audited by a third party to avoid conflicts of interest. This is a widely used system for food safety, automotive production processes and cybersecurity. For example, in cybersecurity the audit may include a review of the organisation’s information security management system documentation and an audit assessing the effectiveness of this system.

While quality management system assessments inspect organisational structures, the product-testing approach evaluates the product itself. Independent examinations of products are widely used across industries – for example, testing car brakes and lights, chemical levels in toys, the mechanical safety of lifts, and explosion safety assessments for pressure equipment.

Going one step further in evaluating the safety of products, adversarial testing (also known as ‘red-teaming’) actively exploits their vulnerabilities. Examples of this are crash tests in the automotive sector and penetration testing in cybersecurity, where a group of non-hostile hackers tries to breach cybersecurity systems.

While these practices are usually applied before placing a product or service on the market, periodical inspections ensure safety and proper functioning after commercial distribution. This is particularly relevant for commodities such as cars, as wear and tear change their safety profile. The same applies to industrial installations, which must be inspected periodically to protect workers and those who live close by.

Safety as a mandatory precondition

Overall this system of mandatory independent assessments provides a clear standard for manufacturers: safety is not an optional cost to be factored in, but a precondition that everyone has to fulfil. Likewise, it provides minimum levels of safety for consumers and ensures that market competition is driven by aspects like quality or price, rather than a race to the bottom on product safety.

And this market dynamic bears its fruits. European businesses who are part of the Quality Infrastructure ecosystem, including European automotive brands and companies in industrial technology and manufacturing, perform well in comparison to their international counterparts[1]. Enhanced safety through independent conformity assessments fosters the development of a robust consumer trust framework, driving higher demand and providing a competitive edge for manufacturers.

Competitiveness is fostered, not despite, but because of a well-balanced regulatory system, legal certainty for companies and incentives for safety innovation.

Applying the European Quality Infrastructure to AI

In the EU AI Act, the White House Executive Order, the G7 Hiroshima Process and the Bletchley Declaration, regional and national governments have made various commitments to some external scrutiny and testing at least for the most advanced AI products on the market – such as foundation (or general-purpose AI (GPAI)) models.

Such ambitions make sense, given the complexity and novelty of these models. If integrated across the economy they could become single points of failure for multiple downstream applications, amplifying safety risks. However, these are currently just ambitions and do not reflect current practice.

For instance, the EU AI Act obliges GPAI models posing ‘systemic risks’[2] to undergo ‘model evaluation’ and ‘adversarial testing’. However, it does not require these assessments to be done by independent experts. In the US executive order, the red-teaming requirement can be fulfilled by internal teams. In the UK, although the Government has sought to ensure that model evaluations are conducted by its AI Safety Institute, this is based on voluntary commitments, with companies ultimately deciding the level of access.

How would independent testing work?

What would independent testing methodologies look like and deliver, if applied to foundation models? What would it take to make them effective?

First, the safe development of AI models is the basis for their safe deployment. The independent assessment of quality management systems could provide a cost-effective solution to ensure that the safety management systems are adequate. This would involve introducing procedures to support design control, quality assurance and (external) validation before, during and after the development of a high-impact AI model or system.

Second, the equivalent of mechanical or chemical product testing in the AI realm would be audits or evaluations which assess, for example, data quality, model robustness, accuracy and bias.

This kind of testing is a nascent field when it comes to AI and still lacks standardised measures. Developing these tests is expensive and time-consuming and there are no incentives to undertake them. Making third-party testing compulsory would ensure impartial product testing, and would have the additional effect of motivating AI companies to fund the appropriate development of risk assessment measurement units.

Third, foundation models pose a unique challenge to evaluations because their capabilities cannot be precisely predicted. Adversarial testing by independent experts can help uncover potentially dangerous features. The adversarial approach may also identify how malevolent actors could misuse AI models, for example by employing data poisoning (when trawled data for AI training is intentionally compromised with malicious information), or by using specific prompts to make a model respond in a way that violates its intended safety guidelines.

Additionally, precisely because foundation models have the capacity to develop new capabilities or deficiencies post-deployment and over time, independent conformity assessments cannot be ‘one and done’ and periodical inspections of products already on the market is becoming essential.

This is also likely to reduce the risk that one-off conformity assessments are circumvented (see for example, the Volkswagen emissions scandal). There are already both technical and policy solutions to enable continuous assessment of AI models.

What is the role of policy?

As mentioned above, at present, AI model providers are, in the words of UK Prime Minister Rishi Sunak, effectively allowed to ‘mark their own homework’.

To remedy this shortcoming, regulation could compel mandatory pre-market product testing, including adversarial testing, and regular periodic assessments throughout the lifecycle by third-party auditors. This could also be carried out by a second-party who is subject to regulatory oversight and operates under consistent standards, or vetted researchers, for GPAI models posing systemic risk. Vetting both organisations and researchers helps to ensure adequate expertise, independence and reliability. This process could also entail an audit of the AI developer’s governance processes, including quality management systems.

Mandating such assessments would not only avoid obvious conflicts of interest but could also drive innovation and competitiveness in the AI sector.

First, it would increase AI uptake, by offering assurance for downstream companies (often small and medium-sized enterprises (SMEs)) who build on top of or deploy applications based on a foundation model and often do not have the capacity or model-access to ensure AI Act compliance themselves.

Second, European companies have a strong track record of compliance, and an established culture of safety supported by ecosystems of auditors that operate according to a clear metrology. Compelling independent scrutiny would allow these ecosystems to adapt their services to the AI sector, utilising this European competitive advantage.

Finally, it is easier for large companies with significant in-house expertise to conduct internal testing. In contrast, smaller players may lack this expertise, leading to increased compliance costs. Mandating independent testing can contribute to creating a level playing field.

Compelling external scrutiny and periodic re-assessments can also support regulatory capacity. In the rapidly growing AI market, monitoring how the capabilities of foundation models evolve requires significant resources. Having external scrutiny via independent experts would alleviate the burden on regulators by establishing an additional instrument for surveying and mitigating emergent harms, for example through incident reporting.

Resourcing an ecosystem of assessment

Developing an ecosystem of independent experts who can audit and inspect AI models according to established methodologies will take time and significant resources.

Initially, testing and measurement infrastructure must be rapidly rolled out in the EU. One way to do this is through regulatory sandboxes and testing, and experimentation facilities, which offer a controlled testing environment simulating real-world conditions.

In addition, the EU’s initiative on access to compute (which enables startups and SMEs to access publicly sponsored EU computing resources) could be conditioned on contributing to the science of measurement and benchmarking for AI models, with safety, transparency and information sharing as a prerequisite for access. Companies accessing EU sponsored compute should also commit to offering priority access to the EU AI Office to conduct safety tests.

Establishing an assessment ecosystem impacts other jurisdictions too. For example, as the UK considers how to address the development and application of AI foundation models, the Government should consider which pre- and post-market quality assurance mechanisms could effectively support regulatory outcomes. Its AI assurance roadmap, if pursued, could help build the expertise and economic infrastructure necessary to support such an ecosystem.

However, building up such infrastructure, in any jurisdiction, should not be dependent solely on public funding, but also the companies who are exposing society to potential systemic harms. Governments have already been funding academia, AI offices, safety institutes and standards-setting bodies. AI developers above a certain size threshold should contribute funding to ensure a fair allocation of costs. Again, this is already standard practice in other industries, such as life sciences or finance.

The potential impact of rapid AI development and deployment, coupled with the competitive pressures on companies operating in this sector, means that the stakes are too high to leave safety and efficacy testing in the hands of the companies releasing the products. The EU has built quality infrastructure in critical sectors before – from automotive to pharmaceuticals – and is well placed to lead the way globally on an ecosystem of third-party assessment for AI.


Footnotes

[1] Looking at the share of European companies among the market leaders in ‘high-risk sectors’, which require intensive testing or inspection, we found that in almost all sectors (automotive, elevators, pressure equipment, construction), the market share of European companies was high or very high.

[2] Defined as actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain.