Skip to content

The rapid development and roll-out of vaccines to protect people from COVID-19 has prompted debate about digital ‘vaccine passports’.

This report presents the key debates, evidence and common questions under six subject headings. These are further distilled in this summary into six requirements that governments and developers will need to deliver, to ensure any vaccine passport scheme builds from a secure scientific foundation, understands the full context of its specific sociotechnical system, and mitigates some of the biggest risks and harms through law and policy. In other words, a roadmap for a vaccine passport system that delivers societal benefit.

Checkpoints for vaccine passports

Requirements that governments and developers will need to deliver in order for any vaccine passport system to deliver societal benefit

Checkpoints for vaccine passports Download (PDF 1 MB)

Executive Summary

The rapid development and roll-out of vaccines to protect people from COVID-19 has prompted debate about digital ‘vaccine passports’. There is a confusion of different terms to describe these tools, which are also called COVID-19 status certificates. We identify them through the common properties of linking health status (vaccine status and/or test results) with verification of identity, for the purpose of determining permissions, rights or freedoms (such as access to travel, leisure or work). The vaccine passports under debate primarily take a digital form.

Digital vaccine passports are novel technologies, built on uncertain and evolving science. By creating infrastructure for segregation and risk scoring at an individual level, and enabling third-parties to access health information, they bring profound risks to individual rights and concepts of equity in society.

As the pandemic death toll rises globally, some countries are bringing down case numbers through rapid vaccination programmes, while others are facing substantial third or fourth waves of infection, and the mitigating effects of vaccination have brought COVID vaccine passports into consideration for companies, states and countries.

Arguments offered in support of vaccine passports include that they could allow countries to reopen more safely, let those at lower risk of infection and transmission help to restart local economies, and allow people to reengage in social contact with reduced risk and anxiety.

Could a digital vaccine passport provide a progressive return to a normal life, for those who meet the criteria now, while vaccines are distributed in the coming months and years? Or might the local and global inequalities and risks outweigh the benefits and undermine societal notions of solidarity?

The current vaccine passport debate is complex, encompassing a range of different proposed design choices, uses and contexts, as well as posing high-level and generalised trade-offs, which are impossible to quantify given the current evidence base, or false choices that obstruct understanding (e.g. ‘saving lives vs privacy’). Meanwhile, policymakers supporting these strategies, and companies developing and marketing these technological solutions, make a compelling and simplistic pitch that these tools can help societies open up safer and sooner.

This study disentangles those debates to identity the important issues, outstanding questions and tests that any government should consider in weighing whether to permit this type of tool to be used within society. It aims to support governments and developers to work through the necessary steps to examine the evidence available, understand the design choices and the societal impacts, and assess whether a roll-out of vaccine passports could navigate risks to play a socially beneficial role.

This report is the result of an international call for evidence, an expert deliberation, and months of monitoring the debate and development of COVID status certification and vaccine passport systems around the world. We have reviewed evidence and discussion on technical build, risks, concerns and opportunities put forward by governments, supranational bodies, collectives, companies, developers, experts and third-sector organisations. We are indebted to the many experts who brought their knowledge and evidence to this project (see full acknowledgements at the end of this report).

Responding to the policy environment, and the real-world decisions being made at pace, this study has, of necessity, prioritised speed over geographic completeness. In particular, we should caution that the evidence submitted is weighted towards the UK, Europe and North American contexts and will be more useful currently to policymakers in these areas, and in future to policymakers facing similar conditions – increased levels of vaccination and reducing case numbers – while navigating what are likely to be long-term questions of managing outbreaks and variants.

There are some factors for consideration that will be relevant to any conditions, for countries and states considering whether and how to use digital vaccine passports. These include the exploration of the current evidence on infection and transmission of the virus following vaccination, and some aspects of the technical design considerations and choices that any scheme will face.

A number of the issues – such as the standards governing technical development – will need to be considered at an international level, to ensure interoperability and mutual recognition between different countries. There are strong reasons why all countries should consider the potential global impacts of adoption of a vaccine passport scheme. Any national or regional use of vaccine passports that contributes to hoarding or ‘vaccine nationalism’ will produce extreme local manifestations of existing global inequalities – both in terms of health and economics – as the high rate of infection and deaths in India currently evidences. Prioritising national safety over global responsibility also risks prolonging the pandemic for everyone by leaving the door open to mutations that aren’t well controlled by existing vaccines.

Other requirements will be highly contextualised in each jurisdiction. The progress in accessing and administering vaccinations, local levels of uptake and reasons for vaccine hesitancy, legal regimes, and ethical and social considerations will weigh heavily on whether and how such schemes should go ahead. Even countries that seem to have superficially similar conditions may in fact differ on important and relevant aspects that will need local deliberation of what is justifiable and achievable practically, from the extent of existing digital infrastructure to public comfort with the use of technology, and attitudes towards increased visibility to the state or to private companies.

Incentives and overheads will look different as well. The structure of the economy – whether it is highly reliant on tourism for example, as well as the level of access to the internet and smartphones – will be important factors in calculating marginal costs and benefits of digital vaccine passports. And that local calculation will need to be dynamic: countries with minimal public health restrictions in place and low rates of COVID-19 face very different calculations in terms of benefits and costs to those in highly restrictive lockdowns with a high rate of COVID-19 in the community.

This report presents the key debates, evidence and common questions under six subject headings. These are further distilled in this summary into six requirements that governments and developers will need to deliver, to ensure any vaccine passport scheme builds from a secure scientific foundation, understands the full context of its specific sociotechnical system, and mitigates some of the biggest risks and harms through law and policy. In other words, a roadmap for a vaccine passport system that delivers societal benefit. These are:

  1. Scientific confidence in the impact on public health
  2. Clear, specific and delimited purpose
  3. Ethical consideration and clear legal guidance about permitted and restricted uses, and mechanisms to support rights and redress, and to tackle illegal use
  4. Sociotechnical system design, including operational infrastructure
  5. Public legitimacy
  6. Protection against future risks and mitigation strategies for global
    harms.

These requirements (with detailed recommendations below) set a series of high thresholds for vaccine passports being developed, deployed and implemented in a societally beneficial way. Building digital infrastructure in which different actors across society control rights or freedoms on the basis of individual health status, and all the myriad of potential benefits and harms that could arise from doing so, should face a high bar.

At this stage in the pandemic, there hasn’t been an opportunity for real-world models to work comprehensively through these challenging but necessary steps, and much of the debate has focused on a smaller subset of these requirements – in particular technical design and public acceptability. Despite the high thresholds, and given what is at stake and how much is still uncertain about the pathway of the pandemic, it is possible that the case can be made for vaccine passports to become a legitimate tool to manage COVID-19 at a domestic, national scale, as well as supporting safer international travel.

As evidence, explanation and clarification of a complex policy area, we hope this report helps all actors navigate the necessary decision-making prior to adoption and use of vaccine passports. By setting out the features to be delivered across the whole system, the benefits and risks to be weighed, and the harms to be mitigated, we hope to support governments to calculate whether they can be justified, or whether investment in vaccine passports might prove to be a technological distraction from the central goal to reopen societies safely and equitably: global vaccination.

Recommendations summary for governments and developers

1. Scientific confidence in the impact on public health

The timeframe of the pandemic means that – despite significant leaps forward in understanding that have led to more effective disease control and vaccine development –scientific knowledge is still developing about the effectiveness of protection offered through tests, vaccines or antibodies that most vaccine passport models rely on.

Most of the vaccines now available offer a high level of protection against serious illness from the currently dominant strains of the virus. It is still too early to know the level of protection offered by individual vaccines in terms of duration, generalisability, efficacy regarding mutations and protection against transmission.

This means that any vaccine passport system would need to be dynamic, taking into account the differing efficacy of each vaccine, known differences in efficacy against circulating variants, and the change in efficacy over time. A vaccine passport should not be seen as a ‘safe’ pass or a proxy for immunity, rather as a lowering of risk that might be comparable to, or work in combination with, other public health measures.

Calculating an individual’s risk based on providing test results within a vaccine passport scheme avoids some of the problems associated with relying solely on vaccination, including access, take-up and coverage. A good negative test indicates that an individual is not currently infectious and therefore not a risk to others. However, this type of hybrid scheme requires widespread access to highly accurate and fast turnaround tests, as well as scientific consensus as to the window in which someone can be deemed low risk (most use 24–72 hours).

Evidence of a negative test offers no ‘future’ protection after that window, making it less desirable for a move to another city or entry to another country. Given that most point-of-care tests (tests that give a result at home) have a lower level of accuracy than tests administered in clinical settings, the practical overheads of reliance on testing may make this highly challenging for any routine or widespread use. If consistently accurate point-of-care tests become available, that might make testing a more viable route for a passport system, but would also reduce the need for a digital record – as people could simply show the test at the point of access.

Almost all models of vaccine passport attempt to manage risk at an individual level rather than using collective and contextual measures: they class an individual as lower risk based on their vaccine or test status, rather than a more contextual risk of local infection numbers and R rate in a given area. Prioritising this narrow calculation above a more contextual one may undermine collective assessments of risk and safety, and reduce the likelihood of observing social distancing or mask wearing.

A further important dimension is how the use of a vaccine passport affects vaccine take-up by hesitant groups – it provides a clear incentive to disengaged or busy people, but could heighten anxiety from those who distrust the vaccine or the state, if it is seen as mandatory vaccination or surveillance by the back door.

Before progressing further with plans for vaccine passports:

Governments and public health experts should:

 

  1. Set scientific pre-conditions, including the level of reduced transmission from vaccination that would be acceptable to permit their use; and acceptable testing regimes (accuracy levels and timeline).
  2. Model and test behavioural impacts of different passport schemes (for
    example, in combination or in place of social distancing). This should examine
    any ‘side effects’ of certification (such as a false sense of security, or impacts
    on vaccine hesitancy), as well as responses to changing conditions (for
    example, vaccines’ efficacy against new mutations). This should be modelled
    in general and in specific situations (such as the predicted health impact if
    used in place of quarantine at borders, or social distancing in restaurants), to
    inform their likely real-world impact on risk and transmission.
  3. Compare vaccine passport schemes to other public health measures in
    terms of necessity, benefits, risks and costs, or alternatives – for example,
    offering different guidance to vaccinated and non-vaccinated populations
    without requiring certification; investing in public health measures; or greater
    incentives to test and self-isolate.
  4. Develop and test public communications about what certification should be
    understood to mean in terms of uncertainty and risk.
  5. Outline the permitted pathways for calculating what constitutes ‘lower risk’ individuals, to build into any ‘passport’ scheme, including: vaccine type; vaccination schedule (gaps between doses); test types (at home or professionally administered); natural immunity/antibody protection; and duration of reduced risk following vaccination, testing and infection.
  6. Outline public health infrastructure requirements for successful use of a passport scheme, which might include access to vaccine, vaccination rate, access to tests, testing accuracy, or testing turnaround.

Developers must:

 

  1. Recognise, understand and use the science underpinning these systems.
  2. Use evidence-based terminology to avoid incorrect or misleading understanding of their products. For example, many developers conflated the concept of ‘immunity’ with ‘vaccinated’ in materials shared with partners and governments, creating a false sense that these systems can prove if someone is immune.
  3. Follow government guidelines for permitted pathways to calculation of ‘lower risk’.
  4. Not release for public use any digital vaccine passport tools for use until there is scientific. agreement about how they represent ‘lower risk’ (as above).

2. Clear, specific and delimited purpose

It will be much easier to weigh the benefits, risks and potential mitigations when considering specific use cases (visiting care homes, starting university, or international travel without quarantine, for example) rather than generalised uses.

Based on the health modelling, there may be greater justification for some use cases of digital vaccine passports than others, such as settings where individuals work face to face with vulnerable groups. Countries are already coming under pressure to create certificates for international travel to selected destinations and this is likely to expand. There may also
be some uses that should be prohibited as discriminatory (examples to consider include accessing essential services, public transport or voting) and exemptions that should be introduced for those unable to have a vaccine or regular testing.

Developing clear purposes and uses should be carried out with consideration to public deliberation, and law and ethics (see below), and mindful of risks that could be caused in different settings, which might include liability for businesses or insurance costs for individuals, barriers to employment, as well as stigma and discrimination.

Before progressing further with plans for vaccine passports:

Governments should:

 

  1. Specify the purpose of a vaccine passport and articulate the specific problems it seeks to solve.
  2. Weigh alternative options and existing infrastructure, policy or practice to consider whether any new system and its overheads are proportionate for specific use cases.
  3. Clearly define where use of certification will be permitted, and set out the scientific evidence on the impact of these systems.
  4. Clearly define where the use of certification will not be acceptable, and whether any population groups should be exempted (for example children, pregnant women or those with health conditions).
  5. Consult with representatives of workers and employers, and issue clear guidance on the use of vaccine passports in the workplace.
  6. Develop success measures and a model for evaluation.

Developers must:

  1. Articulate clear intended use cases and purposes for these systems, and anticipate unsupported uses. Some developers consulted for this study said they designed their systems as ‘use agnostic’, meaning they failed to articulate who the specific end users and affected parties would be. Not having clear use cases makes it challenging for developers to utilise best-practice privacy-by-design and ethics-by-design approaches when designing new technologies.
  2. Utilise design tools and processes that seek to identify the consequences and potential effects of these tools in different contexts. These may include scenario planning of different situations in which users might use these tools for unintended purposes; utilising design practices like consequence scanning to identify and mitigate potential harms; and employing ‘red teams’ to identify vulnerabilities by deliberately attacking the tools’ digital and physical security features. For the sake of their own product’s effectiveness, it is essential that developers work back from the worst-case scenariouses of their tools to make necessary changes to technical design features, partnership and business models, and use this process to inform impact evaluation and monitoring.

3. Ethical consideration and clear legal guidance about permitted and restricted uses, and mechanisms to support rights and redress and tackle illegal use

Interpretation and application of ethics and law will be particularly local to regions’ jurisdictions, and – as described above – this report does not attempt to do justice to a fully international picture. There are of course some global agreements, and in particular the Universal Declaration of Human Rights and its two covenants, that are universally applicable.
Based on the debates around ethical norms and social values we have been following in the UK, USA and Europe in particular, there are a number of areas of focus in terms of ethics and law.

Personal liberty has been a significant concern in the debate – that vaccine passports might represent the least restrictive option for individual liberties while minimising harm to others. There are important legal tests, in particular respecting a range of human rights, particularly
the right to a private life, which must be considered where people are required to disclose personal information.

Wider concerns raised are around impacts on fairness, equality and non-discrimination, social stratification and stigma at both a domestic and an international level. Specific concerns about harms to individuals or groups, through facilitating additional surveillance by governments or
private companies, blocking employment or access to essential services, will need to be addressed.

Legal and ethical issues should be weighed in advance of any roll-out, and adequate guidance, oversight and regulation will be required.

Before progressing any further with vaccine passports:

Governments should:

  1. Publish, and require the publication of, impact assessments – on issues including data protection, equality and human rights.
  2. Offer clarity on the current legality of any use, in particular relating to laws regarding employment, equalities, data protection, policing, migration and asylum, and health regulations.
  3. Create clear and specific laws, and develop guidelines for all potential user groups about the legality of use, mechanisms for enforcement and methods of legal redress for any vaccine passport scheme.
  4. Support cooperation between relevant regulators that need to work cooperatively and pre-emptively.
  5. Make any changes via primary legislation, to ensure due process, proper scrutiny and public confidence.
  6. Develop suitable policy architecture around any vaccine passport scheme, to mitigate harms identified in impact assessments. That might require employment protection and financial support for those facing barriers to work on the basis of health status; mass rapid testing centres that can be flexed by need (for example, before major sports events) and guaranteed turnaround of results that is fast enough to be used in a passport scheme.

Developers must:

  1. Undertake human rights, equalities and data protection impact assessments of their systems, both prior to use and post-deployment, to measure their impact in different scenarios. These assessments can help clarify potential risks and harms of systems, and offer clear routes to mitigation. They should be made public and subject to scrutiny by an independent assessor.
  2. Consider the existing norms of social behaviour that these tools may change. Do these tools grant additional power to particular members of society at the cost of others? Do they open new potential for misuse? The misuse of data collected for contact tracing should act as a warning – contact tracing data from pubs being harvested and sold on to third-parties is an example of unforeseen behaviours that these tools may enable. Mitigating these risks should be built into the sociotechnical design (see below).

4. Sociotechnical system design, including operational infrastructure to make a digital tool feasible

Designing a vaccine passport system requires much more than the technical design of an app, and includes consideration of wider societal systems alongside a detailed examination of how any scheme would operate in practice.

When it comes to technical design, there are a number of models being developed that have different attributes and security measures, and bring different risks into focus. There are commonalities, for example QR codes are widely used with varying degrees of security, but the models are too disparate and varied to summarise in detail here. With some models bringing together identity information and biometrics information with health records, any scheme must incorporate the highest-level security.

Some risks can be minimised to some extent, by following best-practice design principles, including data minimisation, openness, privacy by design, ethics by design and giving the user control over their data. Governments also need to be careful not to allow rapid deployment
of COVID vaccine passport systems to lock in future decisions including around the development of wider digital identity systems (see requirement on future risks).

When it comes to the ‘socio’ part of sociotechnical design, governments need to decide what role they ought to play, even if they choose not to design and implement a system themselves (many developers described their role as ‘creating the highway’ and look to governments to decide the ‘rules of the road’).

Governments (alone, or acting through regional or international governmental institutions) are the only actor that can consider the opportunities and risks (identified above) in the round, and will need to offer legal clarity as well as monitor impact and mitigate harms, so should not step back from this question. They will need to ensure that the operational and digital infrastructure is in place across the whole system, from jab or test through to job or border.

Governments will also need to consider costs – including opportunity costs, maintenance costs and burdens on business – and impacts on other aspects of public health, including vaccination programmes, other public health measures, and public trust in health services and
vaccination.

Before progressing any further with vaccine passports:

Governments should:

  1. Outline their vision for any role vaccine passports should play in their COVID-19 strategy, whether they are developing their own systems or permitting others to develop and use passports.
  2. Outline a set of best-practice design principles any technical designs should
    embody – including data minimisation, openness, ethics by design and privacy
    by design – and conduct small-scale pilots before further deployment.
  3. Protect against digital discrimination, by creating a non-digital (paper)
    alternative.
  4. Be clear about how vaccine passports link or expand existing data systems
    (in particular health records and identity).
  5. Clarify broader societal issues relating to the system, including the duration
    of any planned system, practical expectations of other actors in the system
    and technological requirements, aims, costs and the possible impacts on other
    parts of the public health system or economy, informed by public deliberation
    (see below).
  6. Incorporate policy measures to mitigate ethical and social risks or harms
    identified (see above).

Developers must:

  1. Consider how these applications will fit within wider societal systems, and
    what externalities their introduction may cause. While governments should
    articulate the rules of the road, developers must acknowledge values and
    incentives that they bake into their design and security features, and how these
    can amplify or mitigate potential harmful uses of their technology. It is essential
    that developers work with local communities, regulators, businesses and
    civil society organisations to understand risks introduced by their products,
    and tests out how these systems are being used in practice, to understand
    their externalities. Failing to do so will not only risk causing further harm to
    already marginalised members of society, but lead to reputational damage and litigation or legal liability for developers.
  2. Proactively clarify with regulators the need for clear legal guidance
    on where these systems are appropriate prior to any roll-out or use of
    specific applications. In the event a lack of clear guidance from governments
    continues, this may result in firms, developers and their users facing legal
    liability for misuse or abuse.
  3. Ensure they develop their technology with privacy-by-design and ethicsby-design approaches. This should include data-minimisation strategies to
    reducing the amount of data stored and transferred; consequence scanning
    in the design phase; public engagement, in particular with marginalised
    communities during design and implementation; and scanning for security
    threats across the whole system (from health actors to border control).
  4. Ensure their systems meet international interoperability standards being developed by the WHO.
  5. Work with governments and members of local communities to develop training materials for these systems.

5. Public legitimacy

Public confidence will be crucial to the success of a COVID vaccine passport system, and will be highly locally contextual. There are sensitivities involved in building technical systems that require personal health data to be linked with identity or biometric data for many countries. These combine with challenges in the wider sociotechnical system, including financial and other burdens on society, businesses and individuals, to produce concerns about potential harms. A system that is seen as trusted and legitimate could bolster hopes that it might encourage vaccination and updake of booster shots, or inspire more confidence in spaces that require vaccination or testing to enter.

Polling suggests public support for vaccine passports varies based on the particular details of proposed systems (including how they will establish status and in which settings), and concerns about discrimination and inequality. Polling to date only scratches the surface
of these new applications of technology, and deeper methods of public engagement will be needed to properly understand opinion, perceived benefits and risks, and the trade-offs the public is willing to make.

Before progressing any further with vaccine passports:

Governments should:

  1. Undertake rapid and ongoing public deliberation as a complement to, and not a replacement for, existing guidance, legislation and proper consideration of subjects mentioned above and throughout this report.
  2. Undertake public deliberation with groups who may have particular interests or concerns from such a system, for example those who are unable to have the vaccine, those unable to open businesses due to risk, those who face oversurveillance from policy or authorities, groups who have experienced discrimination or stigma, or those with particularly sensitivities about the use of biometric identification systems, for example. This would be in addition to assessing general public opinion.
  3. Engage key actors in the successful delivery of these systems (business
    owners, border control, public health experts, for example).

Developers must:

  1. Undertake meaningful consultation with potentially affected stakeholders,
    local communities and businesses to understand whether roll-outs of these systems are desired, and identify any risks or concerns. The negative reaction from parts of the hospitality industry in the UK should be a warning to developers who explicitly cite this use case as a primary reason for developing their system.1

6. Protection against future risks and mitigation strategies for global harms

If governments believe they have resolved all the preceding tensions and determined that a new system should be developed, they will also need to consider the longer-term effects of such a system and how it might shape future decisions or be used by future governments.

Risks to mitigate include the concern that emergency measures become a permanent feature of society. The introduction of vaccine passports has the potential to pave the way to normalising individualised health risk scoring, and could be open to scope creep post-pandemic, including more intrusive data collection or a wider sharing of health information.
Governments should consider the risk of infrastructure passing to future governments with different political agendas, and how tools introduced for pandemic containment could be repurposed against marginalised groups or for repressive purposes. More prosaically there
are maintenance and continuous development costs to consider, as well as path dependency for future decisions generated by emergency practices becoming normalised.

Equally pressing is how one national scheme affects the global response to COVID-19. Despite international coordination, there are significant inequalities of access to vaccines resulting in extreme differences in local manifestations of the virus – both in terms of health and economics. A legitimate concern is that wealthier countries rolling out vaccine passports could further contribute to exacerbating global inequalities, by incentivising vaccine hoarding. For example, vaccine passport schemes could encourage well-vaccinated and contextually low-risk countries to prioritise retaining booster shots to allow their citizens to take international holidays, rather than incentivise global vaccination – which is the only definitive route to controlling the pandemic.

Before progressing any further:

Governments should:

  1. Be up front as to whether any systems are intended to be used long term, and design and consult accordingly.
  2. Establish clear, published criteria for the success of a system and for ongoing evaluation.
  3. Ensure legislation includes a time-limited period with sunset clauses or conditions under which use is restricted and any dataset deleted – and structures or guidance to support deletion where data has been integrated into work systems for example.
  4. Ensure legislation includes purpose limitation, with clear guidance on application and enforcement, and include safeguards outlining uses which would be illegal.
  5. Work through international bodies like the WHO, GAVI and COVAX to seek international agreement on vaccine passports and mechanisms to counteract inequalities and promote vaccine sharing.

Developers must:

  1. Engage in scenario-planning exercises that think ahead to how these tools
    will be used after the pandemic. This should include consideration of how
    these tools will be used in other contexts, whether those uses are societally
    beneficial, and whether tools can be time-limited to mitigate potentially
    harmful uses.

Introduction

The question of whether and how to implement COVID status certification schemes, or ‘vaccine passports’, has become an important topic across the globe. These schemes would allow differential access to venues and services on the basis of verified health information relating to an individual’s COVID-19 risk, and would be used to control the spread of
COVID-19.

There is a diversity of approaches being pursued across the world, for multiple purposes. Some countries and states are moving ahead unilaterally: Israel, Denmark and New York State are already rolling out COVID vaccine passports, and the United Kingdom is undertaking a
review into whether to implement a passport system.2

For use in international travel and tourism, groups like the Commons Project and the International Air Transport Association are developing applications for vaccine passports; the European Union has set out its plans for a Digital Green Certificate to enable travel within the bloc; and the World Health Organisation is developing a digital version of its International Certificate of Vaccination or Prophylaxis for use with COVID-19.

In this report, the Ada Lovelace Institute aims to clarify the key considerations for any jurisdiction considering whether and how to implement digital vaccine passports to control the spread of COVID-19.

Most of the evidence we received came from or focused on the United Kingdom, Europe and north America, so our requirements for socially beneficial vaccine passport schemes are likely to be particularly relevant to liberal democracies.

We use ‘vaccine passports’ as an imperfect umbrella
term to encompass digital certification schemes that use
one or more of vaccination record, test result or ‘natural immunity’

Defining ‘vaccine passports’

Finding the right phrase to describe these new forms of digital certification is difficult. ‘Passports’ may be more helpful than ‘certificates’ in that they imply that an individual’s status means something in terms of what they can access, rather than simply recognising that an event (a vaccination) has taken place. But they can also be confusing given conversations are happening about both international travel and domestic uses.

When schemes based on an individual having recovered from COVID-19 were first discussed, they were known as ‘immunity passports’ or ‘immunity certificates’. But the term ‘immunity’ was problematic for at least two reasons: proof of recovery from the disease was an imperfect proxy at best for immunity, with evidence still emerging about how protected a recovered patient might be; and the term ‘immunity’ itself has different meanings in individual and collective contexts (whether it protects an individual and to what extent, and whether it protects those they come into contact with).

Many countries and schemes, e.g. Israel’s domestic scheme and the European Union’s proposed scheme for travel, refer to ‘green pass’ or ‘green certificate’. This focuses on the authorisation part of the scheme – like a traffic light – rather than the health information aspect.

Most recently, ‘vaccine passports’ or ‘vaccine certification’ have become common. As described above, a variety of tests are now being used as part of existing and proposed systems, so the term can be misleading as it suggests that only vaccination will provide an individual with access and other benefits. Acknowledging this complexity, we have chosen to
use ‘vaccine passports’ as an imperfect umbrella term to encompass digital certification schemes that use one or more of vaccination record, test result or ‘natural immunity’.

For the purposes of this study, a digital vaccine passport as defined here consists of four component functions and purposes:

  • health information (recording and communication of vaccine status or test result through e.g. a certificate)
  • identity information (which could be a biometric, a passport, or a health identity number)
  • verification (connection of a user identity to health information)
  • authorisation or permission (allowing or blocking actions based on the health and identify information).

Modelling individual risk will always require simplification of a messier underlying reality that involves missing or inaccurate information

This definition extends the function and purpose beyond a digital vaccination record, to enable healthcare providers to know which vaccine doses to administer when. Sharing this verified health information through a vaccine passport is intended to provide information about an individual’s COVID-19 risk, both to themselves and to others, and to assess that information to make decisions about access and movement.

Modelling individual risk will always require simplification of a messier underlying reality that involves missing or inaccurate information, and uncertainty in how to interpret the information available. The question is whether those proxies for risk, despite their flaws, can enable individuals and third parties to distinguish between individuals who are more or less at risk of being infected with and spreading COVID-19.

Most models currently focus on displaying binary attributes (yes/no) of some combination of four different types of risk-relevant COVID-19 information:

  • A status based on medical process, evidenced through:
    • vaccination records, including data, type and doses
    • proof of recovery from COVID-19, e.g. receiving a positive PCR
      test, completing the requisite period of self-isolation and being
      symptom free.
  • A status based on direct observation of correlates of risk, evidenced
    through:

    • negative virus test results
    • antibody test results.

Other schemes might provide a more granular or ‘live’ assessment of risk by incorporating information such as background infection rates, demographic characteristics of users, or users’ underlying health conditions. These schemes are not covered in this report, although many of the points below can also relate to models that provide a stratified assessment of risk and subsequently more differentiated access.

However we choose to identify them, vaccine passports must be considered as part of a wider sociotechnical system – something that goes beyond the data and software that form the technical application (or ‘app’) itself, and includes: data; software; hardware and infrastructure; people, skills and capabilities; organisations; and formal and informal institutions.3

Identifying these components highlights how any successful system needs to consider not just the technical design questions within the app itself, but how it interacts with wider complex systems. Vaccine passports are part of extensive societal systems, like a public-health system that includes test, trace and isolate services, behavioural guidance on mask wearing and social distancing, or a wider biometrics and digital ID ecosystem.

How any sociotechnical system should be designed, what use cases are appropriate, what legal concerns need to be considered and clarified, what ethical tensions are most relevant, what publics deem acceptable and legitimate, and what future risks any system runs, are all questions that will need to be resolved within the particular context policymakers and developers are operating in.

A brief history of health-based, differential restrictions and vaccine certification

Discussions of vaccine certification are not unique to COVID-19. They have been around for as long as vaccines themselves – such as smallpox in pre-independence India.4 The idea of ‘immunoprivilege’ – that citizens identified as having immunity against certain diseases would enjoy greater rights and privileges – also has a long history, such as the status of survivors of yellow fever in the nineteenth-century United States.5

Yellow fever is the most commonly referenced example of existing vaccine certification for a specific disease. The International Certificate of Vaccination or Prophylaxis (ICVP), also known as the Carte Jaune or Yellow Card, was created by the World Health Organisation as a measure to prevent the cross-border spread of infectious diseases.6

Although it dates back in some form to the 1930s, it has been part of the International Health Regulations since 1969 (and was most recently updated in 2005). The regulations remove barriers to entry for anyone who has been vaccinated against the disease. Even when travelling from a country where yellow fever was endemic, showing a Yellow Card would mean someone could not be prevented from entering a country because of that disease.4

There are some important differences between yellow fever and COVID-19: yellow fever vaccines are highly effective and long lasting, while COVID-19 vaccines are still being developed and there is not yet evidence to show how long they are effective for. Transmission is also different: yellow fever spreads via vectors (infected mosquitoes) rather than directly from person to person, which is why there are no global outbreaks of yellow fever and it is easier to control the disease.8

In May 2021, yellow fever is the only disease that is expressly listed in the International Health Regulations, meaning that countries can require proof of vaccination from travellers as a condition of entry. But there have been others, including smallpox (removed after the disease was eradicated), cholera and typhus, both removed when it was decided vaccination against them was not enough to stop outbreaks around the world. The certificate has, historically, been paper-based, but there had been proposals and advocacy to digitise the system even before
COVID-19.9

A brief history of COVID status certification

At the start of the pandemic, a number of countries demonstrated interest in some form of ‘immunity passport’ based on natural immunity and antibodies after infection with COVID-19 to restore pre-pandemic freedoms (including Germany and the UK, and a pilot in Estonia), but a
lack of evidence about the protection acquired through natural immunity meant few schemes were used in real-world scenarios.10

The WHO has shifted its stance by announcing plans to develop a digitally enhanced International Certification of Vaccination

In April 2020, the World Health Organisation (WHO) put out a statement saying there was ‘not enough evidence about the effectiveness of antibody-mediated immunity to guarantee the accuracy of an ‘immunity passport’ or “risk-free certificate”’, and that ‘the use of such certificates may therefore increase the risks of continued transmission’.11

The approval and roll-out of effective vaccines re-energised the idea of restoring personal freedoms and societal mobility based on COVID vaccinate passports. Israel implemented a domestic ‘Green Pass’ in February 2021,12 the European Commission published plans for a Digital Green Certificate in March 2021,13 and Denmark began using a domestic ‘Coronapas’ in April 2021.14

The WHO has shifted its stance by announcing plans to develop a digitally enhanced International Certificate of Vaccination and has established the Smart Vaccination Certificate consortium with Estonia. However, as of April 2021, it remains of the view that it ‘would not like to see the vaccination passport as a requirement for entry or exit because we are not certain at this stage that the vaccine prevents transmission’.15

IBM has launched Digital Health Pass,16 integrated with Salesforce’s employee management platform Work.com,17 and has worked with New York State to launch Excelsior Pass.18 CommonPass, supported by the World Economic Forum, and the International Air Transport Association (IATA)’s Travel Pass are both being trialled by airlines.19

The Linux Foundation Public Health’s COVID-19 Credentials Initiative and the Vaccination Credential Initiative, which includes Microsoft and Oracle, are pushing for open interoperable standards.20 A marketplace of smaller, private actors has also emerged offering bespoke solutions and infrastructures.21

In the UK, the Government initially appeared reluctant, saying it had ’no plans’ to introduce a scheme, and that such a scheme would be ’discriminatory’.22 Other ministers left the door open to digital passporting schemes when circumstances changed,23 and and the Government appeared to be keeping its options open by funding a number of startups piloting similar technology, tendering for an electronic system for citizens to show a negative COVID-19 test, and reportedly instructing officials to draw up draft options for vaccine certificates for international travel.24

There are intuitive attractions to the idea of a COVID vaccine passport scheme

As part of its roadmap out of lockdown in February, the Government announced a review into the possible use of certification.25 This was followed by a two-week consultation and an update in April announcing trials of domestic COVID status certification for mass gatherings, theatres, nightclubs and other indoor entertainment venues.26

For a comprehensive overview of international developments, see the Ada Lovelace Institute’s international monitor of vaccine passports and COVID status apps.

The hopes for vaccine passports

There are intuitive attractions to the idea of a COVID vaccine passport scheme, and particularly in the hope that a better balance could be found between economic activity and community safety, by allowing a more fine-grained and targeted set of restrictions than sweeping measures of national lockdowns. Such hopes are particularly located in the prospect of a silver bullet that may help return life to something resembling normal, after more than a year of social anxiety and economic damage.

A number of arguments have been put forward for the usefulness of
COVID vaccine passports, including:

  • Public health: Those who are certified as unable to transmit the virus are allowed to take part in activities that would normally present a risk of transmission. Being able to take part in such activities, see family and friends and visit hospitality and entertainment venues will have a positive effect on wellbeing and mental health.
  • Vaccine uptake: The use of certification to provide those who have been vaccinated with greater access to society could incentivise vaccination among those who are able to be safely immunised.
  • Personal liberty: Enhancing the freedoms of those who have a passport to do things that would otherwise be restricted due to COVID-19 (always noting that granting permissions for some will, in relative terms, increase the loss of liberty experienced by others). This could have a particularly profound benefit for those facing extreme harm and isolation due to the virus, for example those suffering domestic abuse, or in care homes and unable to see relatives.
  • Economic benefits: supporting industries struggling in lockdown (and the wider economy) by enabling phased opening, for example in entertainment, leisure and hospitality.
  • International travel: a passport scheme will allow people to travel for business and pleasure, with economic benefits (particularly for the tourism industry) and social advantages (reuniting families or holidays).

Science and public health

The first question to ask of a COVID vaccine passport system is whether an individual’s status conveys meaningful information about the risk they pose to others

The foundation of any COVID status certificate or ‘vaccine passport’ is that it allows stratification of people by COVID-19 risk and therefore allows a more fine-grained approach to preserving public health, keeping the community safer with fewer restrictions. Vaccine passports allow only those who pose an acceptably lower risk to others to take part in activities that would normally present a risk of transmission, e.g. working in care homes, travelling abroad, or entering venues and events such as pubs, restaurants, music festivals or sporting fixtures.

Therefore, the first question to ask of a COVID vaccine passport system is whether an individual’s status, for example that they have been vaccinated, conveys meaningful information about the risk they pose to others? Does the scientific evidence base we have on COVID-19 vaccines, antibodies and viral testing, support making that link, and if so, how certain should we be about an individual’s risk based on those proxies?

The development and deployment of a significant number of viable vaccines in just over a year is a remarkable scientific achievement. Tests have also rapidly improved in quality and quantity, and scientific understanding of COVID-19 infection and transmission has improved greatly since the beginning of the pandemic. In spite of these inventions and innovations, unfortunately the novelty of the disease means the answers to significant questions are still uncertain.

Vaccination and immunity

Our knowledge of COVID-19 vaccine efficacy against different its strands and immunity following an infection continues to evolve. Key questions about vaccines include:

  • What are the effect of vaccines on those vaccinated?
  • What are the effect of vaccines on spreading the disease to others?
  • What is the efficacy of vaccines against different emerging variants?
  • What is the efficacy of vaccines over time?

Our expert deliberative panel expressed concern about developing any system of COVID vaccine passport based on proof of vaccination while so much is still unknown – as systems could be built on particular assumptions that would then change. Any system that was developed would have to be flexible enough to deal with emerging evidence.

One certainty is that no vaccine is currently entirely effective for all people. Although evidence is encouraging that the current COVID-19 vaccines offer strong protection against serious illness, vaccination status does not offer conclusive proof that someone vaccinated cannot become ill.

The evidence is even more emergent on the effect of vaccines on the transmission of COVID-19 from one person to another. Any public health argument in favour of introducing vaccine passports relies on evidence that someone being vaccinated would protect others, but this remains unclear.27

A vaccine can provide different types of immunity:

  • Non-sterilising immunity, where an infected individual is protected from the effects of the disease but can still transmit it (and may instead have an asymptomatic case where previously they would have displayed symptoms).28
  • Sterilising immunity, where a vaccinated person does not get ill themselves and cannot transmit the disease.

Experts in our deliberation identified a ‘false dilemma’ in discussions about the efficacy of these different types of immunity: even a population vaccinated with ‘non-sterilising’ immunity should still prevent the disease spreading, as infected individuals will have weaker forms of it and fewer ‘virions’ (infectious virus particles) to spread. Emerging evidence suggests that ‘viral load’ is lower in vaccinated individuals, which may have some effect on transmission, and one study (in Scotland) found the risk of infection was reduced by 30% for household members living with a vaccinated individual, but much remains unknown.29

An issue raised in the deliberation was that focusing on individual proof of vaccination might underemphasise the collective nature of the challenge. Vaccination programmes aim at (and work through) a population effect: that when enough people have some level of protection, whether through vaccination or recovery from infection, the whole population is protected through reaching herd immunity. Even following vaccination, the UK Government’s Scientific Advisory Group for Emergencies offers caution: ‘Even when a significant proportion of the population has been vaccinated lifting NPIs [non-pharmaceutical interventions, like social distancing] will increase infections and there is a likelihood of epidemic resurgence (third wave) if restrictions are relaxed such that R is allowed to increase to above 1 (high confidence)’. This pattern of vaccination and infection may be occurring in Chile, where high vaccination rates have been followed by a surge in cases.30

Different vaccines have different levels of efficacy when it comes to protecting both the person receiving the vaccination and anyone they come into contact with. This is partly due to vaccines having different levels of effectiveness, based on differently underlying technologies.

As of May 2021, 12 different vaccines are approved or in use around the world, utilising messenger ribonucleic acid (mRNA), viral vectors, inactive coronavirus, and virus-like proteins:31

Vaccines approved for use, May 2021

Different levels of efficacy will also be partly due to different individuals responding differently to the same vaccine – the same vaccine may be effective in protecting one recipient and less so in protecting another.

The efficacy of the vaccines may change with different variants of the disease. There are concerns that some vaccines, for example the current Oxford-AstraZeneca vaccine, may be less effective against the so-called South African variant.32 There will continue to be mutations in COVID-19, such as the E484K mutation which has been found in the Brazilian, South African and Kent strains of the disease (this is an ‘escape mutation’ which can make it easier for a virus to slip through the body’s defences) and the E484Q and L425R mutations present in many cases in India.33 Such mutations make understanding of vaccination effects on individual transmission a moving target, as vaccines must be assessed against a changing background of dominant strains within the population.

Booster vaccinations against variants may help manage the issue of strains. It is possible these may be necessary, as the efficacy of vaccines against any strain may change over time; the WHO has said, it is ‘too early to know the duration of protection of COVID-19 vaccines’.34 With the disease only just over a year old and the vaccines having deployed only in the last few months, it will be some time before conclusive evidence is available on this.

Any vaccine passport system would need to be dynamic – taking into account the differing efficacy of different vaccines, known differences in efficacy against certain variants and the change in efficacy over time – as well as representing the effect of the vaccine on the individual carrying a vaccine passport.

There are also questions about any lasting immunity acquired by those recovering from COVID-19. The WHO has noted that while ‘most people’ who recover from COVID-19 develop some ‘period of protection’, ‘we’re still learning how strong this protection is, and how long it lasts’.35

Inclusion of testing

A number of COVID vaccine passport schemes in development (and the UK Government’s review into what it calls COVID status certification) may allow a combination of three characteristics to be recorded and used in addition to vaccination: recovery from COVID-19, testing negative for COVID-19, or testing positive for protective antibodies against
COVID-19.

We can group these characteristics into statuses based on medical process, and those based on medical observation.

Status based on medical process includes vaccination status and proof of recovery from COVID-19. In both cases, a particular event – recovering from an infection or having a vaccination – that might have some impact on an individual’s immunity is taken as a proxy for them posing less risk. As described above, the potential efficacy of this must be understood in the context of what remains unknown about an individual’s ability to spread the disease, their own immunity and the change in their immunity over time.

Status based on medical observation – or direct observation of results correlating to risk – includes two forms of testing: a negative test result for the virus, or a positive test result for antibodies that can offer protection against COVID-19.36 Incorporating robust tests might provide a better, though very time-limited, measure of risk (the biggest challenges to this would be practical and operational). Status based on test results would also avoid the need for building a larger technical infrastructure, particularly one involving digital identity records. But current testing mechanisms do have drawbacks.

There are two main kinds of diagnostic tests that could be used for negative virus test certification:

  1. Molecular testing, which includes the widely used polymerase chain reaction (PCR) tests, detect the virus’s genetic material. They are generally highly accurate at detecting negative results (usually higher than 90%), but their exact predictive value depends on the background rate of COVID-19 infection,37 and depends on the point in the infection that the test is taken.38 These tests often detect the presence of coronavirus for more than a week after an individual stops being infectious. They also need to be processed in a lab – during which time an individual may have become infected and infectious.
  2. Antigen testing, which includes the rapid lateral flow tests used in the UK Government’s mass-testing programmes, detect specific proteins from the virus. If someone tests positive, the result is generally accurate – but as these types of test only detect high viral loads, positive cases can be missed (a ‘false negative’) particularly when self-administered. Certificates based on antigen tests are likely to have a high degree of inaccuracy – tests might be useful in screening and denying (a ‘red light’), rather than allowing (a ‘green light’ test), entry to individuals at a specific point in time. They are unlikely to be useful for any kind of durable negative certification.

Antibody tests, meanwhile, confirm that an individual has previously had the virus. There are two sources of variability from these tests. First, people may have variable antibody response when they are infected with COVID-19 – while most people infected with SARS-CoV-2 display an antibody response between 10 and 21 days after being infected, detection in mild cases can take longer, and in a small number of cases antibodies are not detected at all.39 Second, the tests themselves are not completely accurate, and the accuracy of different tests varies.40

It also remains unclear how an individual antibody test result should be interpreted. The European Centre for Disease Prevention and Control advises that it is currently unknown, as of February 2021, whether an antibody response in a given infected person confers protective immunity, what level of antibodies is needed for this to occur, how this might vary from person to person or the impact of new variants on the protection existing antibodies confer.41 The longevity of the antibody response is also still uncertain, but it is known that antibodies to other coronaviruses wane over time.42

Questions remain as to how viable rapid and highly accurate testing is, particularly those that can be completed outside a lab setting. Although a testing regime allowing entry to venues could avoid a number of the challenges associated with using vaccination status (extensive technical infrastructure and access to health data, possible discrimination against certain groups) it also provides practical and logistical challenges – from administering such tests for access to a sporting event or hospitality venue, to the feasibility of regularly testing children – as well as there being uncertainty around the accuracy of tests.

Risk and uncertainty

At a time when uncertainty – about vaccine efficacy, when life will return to ‘normal’ and much else besides – is endemic, it is natural that politicians, policymakers and the public alike are grasping for certainty. There may be a danger in seeing COVID vaccine passports as a silver bullet returning us quickly to normality, with passports suggesting false binaries (yes/no, safe/unsafe, can access/cannot access) and false certainty, at a time when governments need to be communicating uncertainty with humility and encouraging the public to consider evidence-based risk. Our expert panel raised concerns that the UK Government saying it was ‘led by the science’ brought disadvantages, encouraging a simplistic view of it being infallible and squeezing out space for nuance and debate.

Conveying a proper sense of uncertainty and risk will be important as individuals make decisions about their own health that may also have an impact on collective public health. For example, if I have been vaccinated, but know there is a chance it may not be fully effective, how does that change how I assess the risk to me in engaging in certain behaviours?
What information will I need to also assess my risk of spreading the disease to others? Is it useful for a venue that admits me to understand that a passport may provide a false sense of certainty that I do not have or cannot easily spread the disease?

Any reliance on proof that the process of vaccination has been completed will also require careful consideration about the actual change in risk as a result of that system: experts raised the risk that use of passports could increase the spread of the disease, as individuals who believe themselves to be completely protected engage in riskier behaviour. A review of the limited evidence so far suggests vaccine passports could reduce other protective behaviours.43

While vaccine passports could make people more confident in some areas, for example by providing reassurance to vulnerable people who have been isolating, it could also slow down the return to normality by suggesting to some that their fellow citizens are a permanent threat.
Creating categories of ‘safe’ and ‘unsafe’ that could continue to keep risk salient in people’s minds even once the risk is reduced (for example a risk closer to that of flu: dangerous but not overwhelmingly so) could be counterproductive to reopening and restarting society and the economy.

Recommendations and key concerns

If a government wants to roll out its own COVID vaccine passport system, or permit others to do so, there are some significant risks it needs to consider and mitigate from the perspective of public health.

The first is that vaccine passport schemes could undermine public health by treating a collective problem as an individual one. Vaccine passport apps could potentially undermine other public health interventions and suggest a binary certainty (passport holders are
safe; those without are risky) that does not adequately reflect a more nuanced and collective understanding of risk posed and faced during the pandemic. It may be counterproductive or harmful to encourage risk scoring at an individual level when risk is more contextual and collective – it will be national and international herd immunity that will offer ultimate
protection. Passporting might foster a false sense of security in either the passported person or others, and increase rather than decrease risky behaviours.35

The second is the opportunity cost of focusing on COVID vaccine passport schemes at the expense of other interventions. Particularly for those countries with rapid vaccination regimes, there may be a comparatively narrow window where there is scientific confidence about the impact of vaccines on transmission and enough of a vaccinated population that it is worth segregating rights and freedoms. Once there is population-level herd immunity or COVID-19 becomes endemic with comparable risks to flu, it will not make sense to differentiate and a vaccine passport scheme would be unnecessary.

COVID vaccine passport schemes bring political, financial and human capital costs that must be weighed against any benefits. They might crowd out more important policies to reopen society more quickly for everyone, such as vaccine roll-out, test, trace and isolate schemes, and other public health measures. Focusing on vaccine passports may give the public a false sense of certainty that other measures are not required, and lead governments to ignore other interventions that may be crucial.

If a government does want to move forward, it should:

 

Set scientific preconditions. To move forward, governments should have a better understanding of vaccine efficacy and transmission, durability and generalisability, and evidence that use of vaccine passports would lead to:

  • reduced transmission risk by vaccinated people – this is likely to involve
    issues of risk appetite, as the risk of transmission may be reduced but will
    probably not be nil.
  • low ‘side effects’ – that passporting won’t foster a false sense of security in either the passported person or others, which might lead to an increase of risky behaviours (not following required public health measures), with a net harmful effect. This should be tested, where possible, against the benefits of other public health measures.

 

Communicate clearly what certification means. Whether governments choose to issue some kind of COVID status certification, sanction private companies to do so or ban discrimination on the basis of certification altogether, individuals will make judgements based on the health information underlying potential schemes in informal settings such as gathering with friends or dating.

 

Governments must clearly communicate the differences between different types of certification, the probabilistic rather than binary implications of each, and the relative risks individuals face as a result.

 

To support effective communication, governments, regardless of whether they themselves intend to roll-out any certification scheme, should undertake further quantitative and qualitative research of different framings and phrasing on public understanding of risk, to determine how best to communicate efficacy of each kind of certification.

 

 

Purpose

It is important that governments state
the purpose and intended effect of any COVID vaccine
passport scheme

It is important that governments state the purpose and intended effect of any COVID vaccine passport scheme, to give clarity both to members of the public as to why the scheme is being introduced and to businesses and others who will need to implement any scheme and meet legal requirements in frameworks like data protection.

It is hard to model, assess or evaluate vaccine passports at a general level so governments will need to state the purpose of any system, what it will be used for and, crucially, what will not be included in any such system, i.e. if particular groups will be exempt, or if particular settings will
be off-limits.

Use cases

In debates, particular use cases have focused on international travel, indoor entertainment venues and employment.

International travel

Some organisations, like the Tony Blair Institute, have argued that the way to navigate allowing people to travel internationally again will be for travellers to show their current COVID-19 status – either a proof of vaccination or testing status.45 Already, many countries require proof
of vaccination, proof of recovery or negative COVID-19 test results as a requirement for entry. Much of the industry focus for vaccine passports has been on airports and international travel.

International travel already has existing norms around restricting entry to places at specific checkpoints, based on information contained in passports, and the infrastructure to support such a system. Further, passports are already linked to biometrics and sometimes to digital
databases, as with the USA’s ESTA visa.

In these circumstances, countries will have an obligation to provide their citizens with proof of vaccination in order to allow them to travel to countries that require it. Once a system is in place to allow proof of vaccination for travel to some countries, the marginal cost for further
countries to require proof lowers, and there is a normalised precedent set by other travellers. It is easy to see international COVID vaccine passport schemes come into place even if initially only a small number of countries strongly support them.

The WHO maintains that they do not recommend proof of COVID-19 vaccination as a condition of departure or entry for international travel.46 However, the WHO is consulting on ‘Interim guidance for developing a Smart Vaccination Certificate’.47 The question of COVID vaccine passport systems for international travel seems now to be resolving around standard-setting, ensuring equity and establishing the duration of the scheme, rather than whether such schemes should exist at all.

Indoor entertainment venues

Indoor entertainment venues such as theatres, cinemas, concert venues and indoor sports arenas all have similar characteristics. with large groups of people coming together and remaining seated or standing in close proximity for hours. This means they are both higher risk and discretionary activities, which many countries have focused on as an opportunity to allow opening, or to reassure customers in attending.

Examining the use case of opening theatres only to those with some form of COVID status certification highlights how many of the logistical issues might play out in a particular context. First, there will be other activities related to the theatre trip – particularly using public transport
to reach the venue, or meeting in a pub beforehand. One of the UK Government’s scientific advisory bodies considered these may pose a higher transmission risk than the activity itself.48

Second, there will be practical and logistical challenges at the theatre. Because tickets are sold through secondary sellers as well as by the venue, it is likely that status could only be checked at the theatre on arrival. Any certification system would need to be available to all visitors,
including international ones. If tests at the venue could also be used to permit entry, there would be logistical challenges (for example, where would the tests be administered, and by whom?) that could make the cost prohibitive for theatres.

The increasing role many theatres and arts organisations play in their community could also suffer. Disparities in vaccine uptake, particularly between communities of different ethnicities, could mean COVID vaccine passports are counterproductive to theatre’s goals of inclusivity and acting as a shared public space. According to one producer, ‘the application of vaccine passports for audiences are likely to fundamentally alter a relationship with its local community.’49

Others in the arts,50 sport and hospitality acknowledge these challenges but believe they can be overcome. In the UK, a number of leading sports venues and events – including Wimbledon (tennis), Silverstone (motor racing), the England and Wales Cricket Board and the main football and rugby leagues – have welcomed the Government’s review and would welcome early guidelines to support planning.51

Employment (and health and safety)

Employment-related use cases discussed in the media include proposals that frontline workers, particularly in health and social care, would have to be vaccinated to work in certain settings (especially in care homes). Other employers – such as plumbing firm Pimlico Plumbers in the UK – have suggested they may only take on new staff who have been vaccinated.52 Staff may feel more comfortable returning to work, knowing that colleagues have been vaccinated. Therefore it’s an important use case for governments to address (and may have to grapple with themselves, given they are also employers).

The situation will vary from jurisdiction to jurisdiction. In the UK, the Health and Safety at Work Act (1974) requires employers to take care of their employees and ensure they are safe at work. Given that, employers might think it prudent to ask themselves whether vaccination could play a role in that process.

The ‘hierarchy of controls’ applies in workplace settings in the UK, and may also be a helpful guide for other jurisdictions.53 Controls at the top of the hierarchy are most effective in protecting against a risk and should be prioritised:

  • Elimination: Can the employer eliminate the risk by removing a work activity or hazard altogether? This is not currently possible in the case of COVID-19. Vaccination and even testing could not guarantee this, given the still-emerging scientific evidence on vaccine impact on transmission, and possible false negatives in testing.
  • Substitution: Can the hazard be replaced with something less hazardous? Working from home rather than at the place of work would count as a substitution.
  • Engineering controls: This refers to using equipment to help control the hazards, such as ventilation and screens.
  • Administrative controls: This involves implementing procedures to control the hazards – with COVID-19, these might include lines on the floor, one-way systems around the workplace and social distancing.
  • Personal protective equipment (PPE): This is the last line of defence, to be tried only if measures at all other levels have been tried and found ineffective. Even if one argued that a vaccine counted as PPE, it would only be a last line of defence, and no substitute for employers taking other actions first.

In most settings, it is likely to be difficult for an employer to argue that vaccination could be a primary control in ensuring the safety of most workplaces. Other measures, such as social distancing, better ventilation and allowing employees to work from home, are higher up the hierarchy and likely to deliver some benefits.

There may be some workplace settings where different considerations might apply – for example, in healthcare. The UK Government has suggested that care home staff might be required by law to have a COVID-19 vaccination, and is consulting on the issue.54 Many have
cited hepatitis B vaccination as a precedent. However, this is not legally required in the way many people have understood – it is a recommendation of the Green Book on immunisation that many health providers have considered proportionate and therefore require their staff
to have as part of their health and safety guidance.55 This will vary across workplaces: if an employer carried out a risk assessment that found that employees had to have a vaccination, proportionality would depend on the quality of the risk assessment.56 There may be other examples of measures being considered proportional in some work settings but not in others – for example, regularly testing staff working on a film or television production might be sensible, given that any outbreak would shut the production down at huge cost, but not in an office, where other measures can be taken.

What would happen if an employer tried to implement a ‘no jab, no job’ policy, where someone could not work without a vaccine? The UK’s workplace expert body, ACAS (the Advisory, Conciliation and Arbitration Service), recommends that employers should:

  • not impose any such decision, but discuss it with staff and unions (where applicable)
  • support staff to get the vaccine, rather than attempting to force them to do so
  • put any policy in writing and ensure it is in line with existing organisation policies (for example, disciplinary and grievance policies), and probably do so after receiving legal advice.57

Discussions with employees should also surface any other concerns. These may include scope creep – employees might be concerned that employers will want further information – including why an employee might not be able to receive a vaccine – which might require disclosing personal information (pregnancy for example) or perhaps other personal data (such as venues an employee had checked into). Once an employer has invested in a system, there may be concerns as to what else they might want to use it for – there are concerns about growing workplace surveillance in general,58 especially given the changes made to working patterns by the pandemic. There may also be concerns that if an employer tried to require vaccination, they could also require (for example) that employees return physically to the office rather than being able to work from home.

If any certification system is more than temporary, other concerns include new forms of discrimination opening up – what if an employee cannot have the vaccine, is therefore banned from business travel, and is passed over for promotion opportunities as a result?

A ‘no jab, no job’ employer could face the risk of legal action in the UK, particularly on discrimination grounds – because not everyone can make particular choices to have the vaccine. The UK Government’s equalities body, the Equality and Human Rights Commission, has suggested such a policy may not be possible.59 Creating a COVID vaccine passport that was used to relax other health and safety measures could also pose rights concerns, particularly for staff in high contact face-to-face services such as hospitality or education.60 If evidence reveals that COVID vaccine passport schemes have a limited impact in controlling the spread of the virus, those who have become infected as a result of vaccine passport use, and then developed serious or even fatal illness may have had their right to life (Article 2 ECHR) or right to respect for private and family life (Article 8 ECHR) violated.

The European Court of Human Rights has previously ruled that if a government knowingly failed to take measures to protect workers from workplace hazards, there would be a violation of the right to life (if the worker died from the hazard) or the right to respect for private and family life (if the worker developed a serious disease).61 In this case, the workplace hazard would be the risk of infection from other members of staff and their customers. Of course, if the certification scheme demonstrably improved the safety of staff compared to existing COVID-19 mitigation measures, there is the possibility of a reversed scenario, where government and employers have an obligation to introduce such schemes to protect their employee’s right to life and right to respect for private and family life.

All this underlines the importance of having clear scientific evidence about the impact of vaccinations on an individual’s risk to themselves and their risk of transmission to others, before schemes are implemented. This would allow concerns to be properly weighted, legal
clarification to be given, and risks to be clearly communicated. It also underlines the need for employers to be given legal clarity and guidance from governments on what they can and should (and cannot and should not) do. Otherwise, the burden of decision and implementation will fall on many workplaces already stretched by the pandemic, and leave employees relying on the decisions made by their employers.

Exemptions and exceptions

It is also important to consider what use cases are undesirable and unacceptable and thus should be explicitly prohibited by governments.

Places

Some places are essential to an individual’s participation in society. For example, many countries judged supermarkets so essential that they remained open even during the tightest lockdown restrictions. Essential venues may include but are not limited to:

  • supermarkets and other essential retail, e.g. pharmacies or home repair
  • medical establishments, e.g. GPs, hospitals, other clinics
  • the justice system, including courts and police stations.

Public support for certification in these and similar settings tends to be lower than for what might be considered more ‘discretionary’ activities, such as international travel, sporting events, gyms and entertainment, and hospitality venues (see Public legitimacy). But there are trade-offs to be made when considering these venues, too, such as mental health and economic benefits.

People

As well as particular places, there may be particular groups of people who could be considered for exemptions, with medical or other reasons making it difficult or impossible for them to be vaccinated. Recommendations are changing for some vaccines, but currently these might include, but are not limited to:

  • pregnant women
  • children
  • the immunocompromised
  • those with learning disabilities who are unable to be vaccinated or
    tested regularly.

In Israel, children under the age of one were excluded from their vaccine passport scheme, but those between the age of one and sixteen were unable to access the Green Pass system via vaccination and could  only use it if they could provide proof of recovery from COVID-19.62 In contrast, the Danish Coronapas system, which does provide a testing alternative for those who are not yet vaccinated, has chosen to exempt children under 15 from the scheme.63

Recommendations and key concerns

 

Governments need to define clearly where the use of COVID vaccine passport schemes will be acceptable and the purpose behind introducing any such scheme. They should set out the scientific evidence as to the impact of schemes in different settings. They should also consider whether existing processes and structures could be adapted, and if not, explain clearly why a new system is required.

 

They should also consult with representatives of workers and employers and issue clear guidance on the use of COVID vaccine passports in the workplace, to reduce the burden on employers to make these difficult decisions and ensure that workers are not at the mercy of poor decisions by individual employers.

 

Governments should also define where the use of certification will never be acceptable, such as to access essential services, and what exemptions will be permitted, for example for those who are unable to be vaccinated.

 

Law, rights and ethics

The introduction of any vaccine passport system inevitably intersects with a wide range of legal concerns

The introduction of any vaccine passport system inevitably intersects with a wide variety of legal concerns, including equality and discrimination, data protection, employment, health and safety, and wider human rights laws. Any scheme will also have to make clear trade-offs between ethical and societal commitments, and this will be complicated by intersections between legal concerns and broader ethical and societal concerns. These are likely to manifest in the domain of rights; on questions of individual liberty, societal equity and fairness; risks of new forms of stratification and discrimination, both within societies and across borders; and new geopolitical tensions.

In this chapter we examine these legal, ethical and rights concerns in context.

Legal systems are inherently specific to their jurisdictions. There is some commonality across legal regimes, arising from shared histories, international agreements, and from many jurisdictions’ responses to similar issues over time. For example, the International Bill of Human
Rights and its constituent parts, the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic Social and Cultural Rights (ICESCR) form an international framework that informs and underpins the legal protection of human rights in jurisdictions around the
world.64

As described above – much of the evidence compiled in this report represents laws operating in the UK and Europe. Comparing the legal dimensions of certification schemes across jurisdictions is beyond the scope of this report, but given the international alignment on human rights, some analysis may be transferable to jurisdictions not directly considered here.

Similarly, the view below represents a broadly Western set of ethical and social values. The findings may be useful to other jurisdictions, recognising that alternative conditions and cultures may represent substantially different concerns, or take universal issues and interpret or weight them differently.

Further, counterfactual possibilities are an important consideration in ethical analysis of COVID status certification systems. These systems will only represent one policy intervention in a full complement of public health, economic and social policy that governments can make to mitigate the effects of the pandemic. The feasible alternatives to COVID vaccine passports that are under consideration by governments – for example whether to continue full lockdowns, implement slower general reopening or propose a full reopening against different background risks from COVID-19 – are therefore important in any analysis of their ethics,
in evaluating the marginal economic, societal and health benefits and harms.65

Principal areas of debate have focused on personal liberty, privacy and other human rights, fairness, equality and non-discrimination, societal stratification and international equity.

Personal liberty

Over the last year, civil liberties have been restricted in the form of lockdowns and other public health restrictions. During a pandemic, this is justified by the fact that an infected person can cause harm and death to others. For COVID-19 in particular, widespread transmission in
communities and high rates of transmission without symptoms means that an individual’s risk to others is difficult to determine, and therefore universal restrictions are justified to prevent harm to others.35

Some bioethicists have argued that there are strong ethical arguments in favour of COVID status certification systems that use antibody tests and/or proof of vaccination.67 They argue that these COVID status certification systems represent the least restrictive option for individual liberties, without causing additional harm to others, when compared to other pandemic responses such as lockdowns. They argue that those
who can demonstrate that they are highly unlikely to spread COVID-19 no longer pose a risk to others’ right to life, and so it is unjustified to restrict their civil liberties.

The argument centres on an individual being able to prove that they are not a substantial risk to others, through proof of vaccination or antibodies, to lift restrictions on that individual’s liberty. This argument does not necessarily require vaccination or natural immunity to COVID-19 to be perfect: we commonly accept a level of risk in our everyday lives, for example infectious diseases like flu are considered to be a tolerable risk, to be managed without additional restrictions.

This argument requires a vaccine or natural immunity to reduce risk to an acceptable level to remove the justification for restrictions. The strength of this argument therefore turns on what level of risk is acceptable for a given society, the impact vaccinations and antibodies have on
transmission and therefore risk to others, and the degree of certainty we are willing to accept in the evidence on transmission.

If all those conditions can be met, then COVID status certification is argued to represent a ‘pareto’ improvement on lockdown measures for some people without others’ situation worsening, i.e. they expand the number of people who can exercise their personal liberty without infringing on the liberties of others or increasing the risk of harm to others.

Any COVID status certification scheme should also ensure it does not arbitrarily interfere with individual human rights, in particular the right to respect for private life, the rights to freedom of assembly and movement and the right to work.

State sanctioned systems which require the collection and disclosure of personal information fall within the scope of the right to privacy guaranteed by provisions such as Article 8 of the ECHR and implemented in national laws, e.g. in the UK, under the Human Rights Act
1998.68 Vaccine passport systems which rest on the generation, collation and dissemination of sensitive personal health information, and which may also permit monitoring of individuals’ movements by a range of actors, will be permissible when they are in pursuit of legitimate aims that justify interference with the right, including ‘the protection of health’ and ‘the economic well-being of the country’. However, even if these aims are clearly being pursued, any interference with this right must satisfy the cumulative tests of legality, necessity and proportionality:69

  • The legality test requires that COVID status certification schemes interfering with the right to respect for private life must have a basis in domestic law and be compatible with the rule of law.
  • The necessity test demands that the measures adopted address a pressing social need.
  • The proportionality test requires that the measures taken by public authorities are proportionate to the legitimate aims pursued and entail the least restrictive viable solution.

COVID status certification schemes may well be able to meet these tests, given the scale of physical and mental harms caused by the COVID-19 pandemic, directly and indirectly, and the economic damage that has resulted. However, again, decision-makers will need to demonstrate they have sufficient scientific evidence to justify the necessity and proportionality of these schemes. Further, the requirement of proportionality necessitates transparently weighing these schemes against alternatives, such as greater investment in test, trace and
isolate schemes (e.g. additional support payments and sick pay) and considering the marginal protection to health, benefit to economic wellbeing, and restrictiveness of certification schemes.

Other human rights, including the right to work, and the freedoms of assembly and movement, may also be engaged by vaccine passport systems, and restrictions on those rights must similarly be justified in accordance with the tests for permissible limitations. The implications
of vaccine passports for the right to freedom of assembly deserve particular scrutiny, in light of the protests that have occurred since the start of the pandemic, and the responses to protests such as Black Lives Matter in summer 2020. During moments of exceptional societal
upheaval, peaceful assembly and protest remain critical tools for ensuring justice and demanding democratic accountability. Although the protection of public health constitutes a legitimate purpose to limit the exercise of such rights, there is a legitimate concern that restrictions on assembly and protest may be disproportionately applied in the name of
pandemic prevention.70 Consideration should be given to the potential for misuse of a vaccine passport system by a government with ulterior motives, or repurposed in future by subsequent administrations.

Fairness

Arguments for and against vaccine passports centre around fairness: some have argued that until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair.71 Responses to this have suggested introducing proof of vaccination requirements only once vaccines are widely available, and exempting those who are not eligible to be vaccinated from the need to prove their vaccination status. (Note that, epidemiologically speaking, a system would cease to be useful once herd immunity had reached a level sufficient to protect against transmission.)72

Others have argued that while it is true that COVID status certification is ‘unfair’ in the sense that only some people will be able to access them, that differential access is not arbitrary and is instead based on a genuine reduction in risk associated with those individuals who have
been certified.73 Therefore, there is a legitimate reason to afford them a different treatment.

It is further argued that pandemics are necessarily unfair and responses to them, such as lockdowns, have differential effects even if the same rule is applied to all. Some can work from home in secure jobs, while others lose their jobs and businesses, and those providing healthcare and essential services are required to expose themselves to risk. This, it is argued, is unfair under another view of fairness. The debate is given further complexity by introducing choices between different kinds of unfairness and questioning whether that unfairness has a legitimate underpinning.

Some argue that benefits of COVID status certifications schemes could also spill over to those not eligible. For example, greater economic activity would allow the continued existence of hospitality, leisure and cultural venues that might have otherwise been forced to close, and
would preserve them for others to access once they become eligible for certification or once restrictions are lifted for all.

On the other hand, certification schemes may exacerbate inequalities between those who might be free to return to work or seek certain kinds of employment, and those uncertified who cannot. Existing distrust of the state, identity infrastructure and vaccines could put some groups at a particular disadvantage. Globally, access to digital technology, forms of identification, tests and vaccines is already unequal, and COVID status certification schemes may unintentionally mirror and reinforce existing inequalities without wider programmes for addressing health inequalities.

Many therefore argue that COVID status certification schemes must be accompanied by a redistribution of the resources and benefits they create, for example by providing additional support to ease the costs to those still facing restrictions, to maximise the fairness and equity of any scheme.74

Equality and non-discrimination

COVID status certification systems discriminate on the basis of COVID-19 risk by design. The relevant legal question is therefore whether the law protects against this kind of discrimination, either directly or indirectly, and if so, whether that discrimination is proportionate (and
therefore permissible) in pursuit of other legitimate aims.

Article 1 of the Universal Declaration of Human Rights (UDHR) recognises that ‘all human beings are born free and equal in dignity and rights’. International treaties on human rights such as the ECHR operationalise the right to equality by establishing guarantees against discrimination (Article 14 ECHR).75

In the UK, the Equality Act 2010 provides a single legal framework for the protection of equality and the right to non-discrimination. Relevant to issues of COVID status certification are protections against discrimination on the basis of:76

  • age
  • disability
  • pregnancy and maternity
  • religion or belief
  • race.

For example, a vaccination requirement allowing differential access could be challenged on grounds of indirect discrimination on the basis of age, at least until all adults have had fair opportunity to have a coronavirus vaccination. UK Government policy prioritises primarily on the basis of age, meaning that a vaccination requirement would systematically disadvantage younger members of the population. Similar legal concerns around discrimination are likely to arise in other countries with age-based vaccination prioritisation.

Even once all eligible adults have been offered a vaccine, those groups where vaccination is not recommended may still be able to claim that a vaccination requirement is discriminatory under the Equality Act 2010.

Others might be able to claim discrimination on the basis of religion or belief that requires vaccine refusal. Faith leaders across many major organised religions have endorsed COVID-19 vaccination,77 but this won’t cover religious communities with different beliefs or interpretation of religious texts, so may legitimately claim their religious convictions require vaccine refusal and therefore argue that vaccine requirements constitute discrimination.

Finally, vaccination hesitancy has been shown to correlate with ethnic background in some communities,78 due to distrust of the state arising from longstanding, evidenced practices of racism and injustice.79 Requiring vaccination may therefore compound existing discrimination. This indirect discrimination is apparently one of the concerns raised with the UK Government by its equalities watchdog, the Equality and Human Rights Commission.80

These concerns are relevant to both private- and government-provided systems. The Government may also have human rights obligations to prevent discrimination by private providers, even if the discrimination is not directly imposed by the state and instead the state simply fails to ‘protect individuals from such discrimination performed by private
entities.’60

Some of these potential forms of discrimination would be ameliorated once there is widespread access to vaccination and if evidence emerges that vaccination is appropriate for groups currently advised against it for medical reasons. However, some discrimination will be present in any scheme based on vaccination requirements. The question for any scheme reliant on vaccine certification then becomes: if discrimination can be established on any of these grounds, is this discrimination ‘a proportionate means of achieving a legitimate aim’ under the provisions of the Equality Act 2010?82

Many of these discrimination concerns can potentially be avoided if appropriate alternatives to vaccination certification are available, for example by exempting certain groups or through providing a negative viral test alternative.

Some schemes could prove discriminatory against minority ethnic communities and women with darker skin tones in particular because of the way they verify identity.75 It has been suggested that some COVID vaccine passport schemes could use facial recognition to verify an individual’s identity.84 Research demonstrates that commonly used commercial facial recognition products do not accurately identify Black and Asian faces, especially when trying to recognise women with darker skin types.85 This could also lead to unlawful discrimination on grounds of race, if the products are inaccurate and there are not alternative ways to verify identity.

Societal stratification

Some bioethicists have highlighted that marginalised groups as a whole may face more scrutiny, as the creation of new checkpoints to access services and spaces may perpetuate disproportionate policing.86

Labelling people on the basis of their COVID-19 status would also create a new categorisation by which society could be stratified, i.e. the ‘immunoprivileged’ and the ‘immunodeprived’, potentially creating circumstance for novel forms of discrimination.35 This could happen informally without any certification schemes, as individuals already have access to and can share their own vaccination status, but certification schemes could increase the salience of those distinctions and amplify those distinctions by creating social situations that can only be accessed by those in possession of ‘immunoprivilege’.

This kind of immunological stratification is not without precedent. In nineteenth-century New Orleans, repeated waves of yellow fever generated a hierarchy of ‘immunocapital’ where those who survived became ‘acclimated citizens’ whose immunity conferred social, economic and political power, and ‘unacclimated strangers’ – generally those who had recently migrated to the area – were treated as an underclass. This stratification also helped to entrench existing ethnic and socioeconomic inequality.88

International equity and stratification

There are many low-income countries that do not currently have the economic capacity to acquire all the doses needed to immunise their whole population. Even with the support of COVAX – an international scheme designed to improve access to vaccines – many countries will only be able to vaccinate their most vulnerable citizens in the near future. Furthermore there are stark inequalities in access to cold chains and transportation, as well as capacity to administer vaccines.89

Adding to these health inequalities, people from such countries are disproportionately likely to have their freedom of movement restricted if an international vaccinate passport scheme is put in place. This will particularly affect stateless, undocumented migrants, refugees (whether
internationally or internally displaced), and similar groups who lack or even fear formal connections to governmental public health bodies.

Citizens of these low-income countries may already be discriminated against. As Dr Btihaj Ajana puts it, ‘the amalgamation of borders, passports, and biometric technologies [that] has been instrumental in creating a dual regime of circulation and an international class
differentiation through which some nations can move around and access services with ease while others are excluded and made to endure an “excess of documentation and securitisation”.’90

For example, health practitioners and researchers from low-income countries already struggle to conduct research, share their work at conferences and undertake consultancy work in high-income countries, because of existing difficulties obtaining visas and meeting entry
requirements. International COVID vaccine passports could worsen this imbalance, making diversity and inclusion an even more difficult task in the field, and side-lining valuable expertise of academics in low-income countries.91

It is easy to see how similar problems could arise in other fields and industries, meaning that COVID vaccine passports could add another layer of discrimination to this existing system and have consequences beyond the official end of the pandemic. (We return to the future risks these systems pose in a later chapter).

The structure of the global economy may push countries whose citizens might be excluded by international COVID vaccine passport schemes into supporting their development. Many low-income countries are dependent on tourism, and thus are incentivised to support schemes in
order to restart the flow of visitors. These differential incentives play out in supranational administrations like Europe, where the main supporters of the European Union Digital Green Certificate have been countries like Greece and Spain, which are more reliant on tourism than their northern neighbours.

None of this is to condemn countries for responding to those incentives. For countries reliant on tourism, and especially lower-income ones with a comparatively younger population and fewer economic alternatives, taking on the risks of virus transmission and discrimination may be worth it for the net economic and wider health benefits. Countries should not be condemned for responding to those incentives, but the analysis of how their decisions are shaped and constrained by existing global inequities is informative.

There is already pressure on governments to acquire vaccine supplies, which in turn triggers a form of ‘vaccine nationalism’ – where richer countries are able to buy up supplies of vaccines where poorer ones can’t. Tying movement to vaccine certification could entrench existing
global inequalities, making international cooperation on any schemes even more important. International friction is especially unhelpful when vaccination is, ultimately, a global public good. Any individual country’s fate is tied to reaching international herd immunity, as we are already seeing with new strains emerging. In the present moment we are seeing tensions play out as calls are made for countries to donate the vaccines they have acquired to India as it faces a growing crisis,92 and debates intensify about temporarily suspending vaccine patents.93

Oversight and regulation

Enforcement of existing legal protections will be carried out principally by the courts and through litigation. However, regulators and independent bodies with relevant remits, through the enforcement of existing regulation and issuance of context-specific guidance, will also have a role in legal accountability and oversight of COVID status certification systems, both before they are implemented and during any roll-out. Many use cases will also necessarily cut across multiple remits, as workplace schemes might engage data protection, contract law, equalities, and workplace health and safety concerns.

Regulators like the United Kingdom’s Information Commissioner’s Office have said they would approach a detailed COVID status certification scheme proposal in the same way they would approach any other initiative by government.94 International forums of data protection and privacy authorities have also begun to issue pre-emptive guidance on certification systems.95

Relevant regulators and independent bodies may include:

  • data protection authorities
  • national human rights institutions
  • occupational Health and Safety regulators
  • medical products regulators
  • centres for disease control and prevention, and other public health bodies.

Certain types of domestic laws can be changed in certain countries, and international law contains derogation clauses for specific purposes. However, Governments should be on guard not to needlessly tear down Chesterton’s Fence.96 If governments want to change a law or make a special carve-out for status certification schemes, they should know why the laws preventing it were enacted in the first place and be able to explain clearly why legal changes are necessary and proportionate, acknowledging potential unintended consequences.

Recommendations and key concerns

 

  • Governments must act urgently to create clear and specific guidelines and law around any uses, mechanisms for enforcement and methods of legal redress of COVID status certification. Given the sensitive nature of these systems, private actors will need legal clarity whether or not legal changes are enacted. Contextual guidance should be issued with interpretations of existing law, even if legislators don’t change anything. Regulators and independent bodies with relevant remits should take pre-emptive action to clarify the regulation and guidance they oversee, and take pro-active steps to ensure enforcement where possible.
  • Regulators should work cooperatively, acknowledging that many use cases will necessarily cut across multiple remits, and therefore a clear division of responsibilities is essential so that poor practice doesn’t fall through the cracks. Working together to provide maximum clarity in a fast-moving area, will ensure that regulators do not issue contradictory guidance.
  • If there are tensions between different obligations, regulators should work together to resolve those rather than passing the burden on to businesses and individuals. If combinations of obligations make a specific system unworkable, regulators should also be empowered to flag that to government, businesses and the public, and pass responsibility on to democratically elected bodies to untangle those contradictions in a public forum.
  • Those responsible for rolling out any certification schemes should be required to publish impact assessments, including Data Protection Impact Assessments and Equality and Human Rights Impact Assessments, which outline what protections are being put in place to reduce risks and mitigate harms.
  • Any legal changes should be made via primary legislation to ensure proper scrutiny and debate, rather than emergency regulations introduced at hours’ or days’ notice.97 If a COVID certification scheme is to be temporary, legislation should include clear sunset clauses and be accompanied by explanations as to how the system will be dismantled.

Sociotechnical design and operational infrastructure

Designing any technical system requires comprehensive thinking about the human or societal as well as technological elements of a system

Designing any technical system requires comprehensive thinking about the human or societal as well as technological elements of a system, and how humans and technology interact as part of that system. For example, a car is a piece of technology – a machine made of an engine, wheels, materials and electronic systems – but its operation also involves a driver, the rules of the road, traffic safety laws and planning decisions that allow roads to be built (and much more).

Thinking about a digital vaccine passport system requires doing the technical design ‘right’, and there are many factors that contribute to that empirical judgement. There is currently no single or dominant model for these technologies, and different attributes bring distinct design options and incorporated risks into focus. New infrastructure and databases may be required, depending on existing capacity in the national context.

With some models bringing together identity information, biometrics information, health records and contact tracing data, technical design must incorporate the highest security. Some risks can be minimised to some extent by following best-practice design principles, including data minimisation, openness, privacy by design, ethics by design, giving the user control over their data, and adopting the highest standards of governance, transparency, accountability and adherence with data protection law.

But successful design and delivery will involve thinking about much more than the technical design of an app – it should involve detailed consideration of how a technical solution would fit into a broader societal context, including the full range of public health interventions. For example, it might be theoretically possible to build an app that in itself protects the privacy of the user and helps them access particular rights and freedoms, but that nonetheless causes wider societal harms through increasing stigma or new opportunities for oversurveillance of minority groups.

Whatever we call the applications themselves, COVID vaccine passports are part of a wider sociotechnical system. That is, they are part of a wider COVID status certification system that goes beyond the data and software that form the technical application itself, including:98

  • Data such as the vaccination records, identity proxies, health and location data of individuals.
  • Software such as apps, verification systems, interoperability middleware, biometric systems, testing systems, databases, linkages across multiple databases and multiple jurisdictions, encryption systems.
  • Hardware and infrastructure such as verification kiosks or scanners, servers and cloud storage, mobile phones, linkages to testing and vaccination procedures.
  • People, skills and capabilities such as skilled operators, medical experts and their expertise, compliant individuals and populations, regulators, enforcement services such as border control and the police, IT professionals, standards bodies, infrastructure firms, services firms, marketing and public information, democratic engagement and deliberation, legal professionals.
  • Organisations such governments, global governance organisations, firms, lobby groups, unions.
  • Formal and informal institutions such as laws, regulations, standards and enforcement mechanisms, accountability structures.

At another level, these COVID vaccine passport systems are part of wider societal systems. For example, they are one part of a wider public health system, where consideration needs to be given to how they interact with other interventions and mitigation measures, for example
their behavioural impacts on mask wearing and social distancing, or diversion of attention and resources away from other parts of the vaccination programme or from test, trace and isolate schemes.

If introduced, vaccine passports would also be part of a wider emerging system of digital identification and the roll-out of biometrics into everyday life around the world. In this context, they need to be considered in relation to how their implementation might accelerate the
development and implementation of these schemes without sufficient public engagement or response to public concerns, and the risks that accompany embedding technologies that are hard to roll back into everyday life.

Finally, they will require practical and operational overheads to work – whether that’s scanners to read QR codes at venues, additional staff at the door to check passports, access to wifi at vaccination centres, or adequate testing capacity so that test results can be turned around
quickly enough to be of practical use.

In a multipurpose system and in the face of such complexity – that everything is connected to everything else, and that any intervention will have uncertain and unpredictable outcomes – it might be tempting to assume evaluation of any individual intervention will be almost impossible.3 Instead, those considering implementing or condoning these systems, and governments in particular, must investigate the nature and the strengths of these connections, gather empirical evidence, and then assess whether that evidence justifies policy action
while being transparent about the uncertainties involved.

We will look at technical and sociotechnical design in turn, and form recommendations and key concerns in response to both technical design and the context of the wider societal system.

Technical design

There are currently several options for the technical design and roll-out of vaccine passports, and this makes decision-making particularly difficult. Where the debate about contact tracing apps focused on two very different models – decentralised systems (where data stayed on
individuals’ phones) and centralised systems (involving central servers), there is no equivalent binary choice in the vaccine passport debate. What is emerging is a range of solutions being proposed and developed, and divergent approaches to delivery (see our international monitor for specific models under development around the world).100

Vaccine passport taxonomies

Any vaccine passport system will have the following common components:

  1. health information (recording and communication of vaccine status or test result through e.g. a certificate)
  2. identity information (which could be a biometric, a passport, or a health identity number)
  3. verification (connection of a user identity to health information)
  4. authorisation or permission (allowing or blocking actions through based on the health and identify information).

That brings into focus the number of distinct roles operating within the system, including:

  • the issuer of the credential – for example, the authority that holds the health data and could confirm that a vaccine or test had been administered (the NHS in the UK)
  • the holder of that information – for example, an individual with the credential on their phone
  • the verifier of all the necessary information – for example, a venue checking that the correct credential applied to the individual in front of them
  •  technical providers – for example, the developer of a particular vaccine passport app.

Each component of the vaccine passport system could be digital or non-digital. For example, an entirely non-digital system would involve:

Digital and non-digital components of a vaccine passport system

Table of digital and non-digital components of a vaccine passport system

Digital versus non-digital systems

 

Most of the discussion about COVID-19 vaccine passports, in the UK and elsewhere, has focused on apps delivered through smartphones. While digital passports are the focus of this report it is necessary to consider how digital and non-digital (analogue) systems compare.

 

An analogue (non-digital, or paper) system may have some advantages:

  • It does not require an extensive technical infrastructure.
  • It does not require the verifier (e.g. a venue) to store sensitive personal data.
  • It can be implemented quickly.
  • It is less permanent, and therefore less vulnerable to scope creep.

 

But it is also an imperfect system in many ways:

  • An identity document or vaccine card contains more sensitive information than is needed for the purpose of access (e.g. a passport number or address).
  • A sizeable minority of any population may not possess these documents.
  • Paper-based identity documents, and in particular vaccine cards, can be
    fraudulently copied, or ‘faked’.

 

Apps have some advantages over analogue mechanisms, and potentially
provide:

  • a simple yes/no result without sharing extensive personal details (in contrast
    with sharing all the information in a passport, driving license or medical record
    for example)
  • a clearer audit trail as to when and where an individual has had to verify their
    COVID-19 status
  • the ability to update details, as more becomes known about the lasting
    efficacy of vaccines
  • greater security and protection against fraud.

Technical infrastructure can exacerbate the significant risks of surveillance and
scope creep (see chapters on ethics and future risks). Equitable access is a significant concern, and arguments have been made that there would be substantial disadvantages to a digital-only system, primarily around digital exclusion, even in countries with extensive access to technology infrastructure.

 

Internet and smartphone access and use varies between and within countries.101
A recent Ada Lovelace Institute report, which considered some of the digital
and data divides in the United Kingdom, showed that a fifth of respondents
didn’t have a smartphone, 14% did not have broadband, and the most clinically
vulnerable were less likely to have either.102 By comparison in India only 38% of people report having a smartphone or using the internet occasionally, with big differences between those of different ages, education levels and income.103

Health information and identity data

Schemes will be technically distinct across different countries, depending on a number of factors, including the extent to which health records are digital, whether health systems have existing central databases or are fragmented across providers, whether countries have
digital identity infrastructure or whether digital apps already exist in health systems. In Denmark, for example, the government has worked closely with private vendor Netcompany, and the app operates in the context of an existing digital identity system. Other countries like the UK have a centralised health system but no digital identity system so have to grapple with different routes to providing identity – none of which will be perfect.

Most systems relying on identity verification will be likely to require ‘anchor’ documents such as a passport or driving licence to be used somewhere in the process, but that won’t enable access for all individuals: in 2015, one in four people eligible to vote in England and Wales were estimated to lack either a passport or driving licence, with certain groups, such as young people, even more excluded.104 Ensuring complete registration and access are challenges that exist already in all health systems, and these are often linked to age and class inequalities in access, both in physical and digital health systems, and more pronounced in many low- and middle-income countries.98

Depending on design and country context, schemes will have different implications for data infrastructure. Some call back to existing databases (checking with existing medical records or checking acceptable QR codes, for example). Others create a digital credential or token that might be stored on your phone. Vaccine passport schemes might require the creation of new databases, which include biometrics records. Each of these pose different risks and benefits, depending on the wider systems they interface with.

In the UK, the preferred route to implementing vaccine passports seems to be building the functionality into the existing NHS app (not to be confused with the NHS contact tracing app or GP apps). This app is regulated by the Medicines and Healthcare products Regulatory
Agency (MHRA) to hold digital health records, and to act as an interface between patients and health services to book appointments and manage prescriptions. A strength of this approach is that it develops an existing infrastructure, rather than building a new one, which already operates to high-level data security standards (see data security section).

Building the tool under the auspices of the NHS brings built in-trust, however it also raises the stakes: if something did go wrong, or this was perceived as a tool that wasn’t in-keeping with the NHS values, it could have an impact on wider trust in the NHS. It is also will have to deal with
coverage issues: currently the NHS app is available only in England rather than across the UK and currently has only two million verified users.106

Verification

A critical element of a passport scheme is verification: how the relying or verifying party, e.g. the venue or the airline, can check that the credential that confirms an individual has been vaccinated or tested actually belongs to that individual.89

Many applications being developed rely on QR codes, which are issued as digital or printed cards when an individual is vaccinated, and can be scanned by a venue. Some of these systems would produce a binary (yes/no) response to indicate whether a person could or could not enter, without revealing what method (vaccination, test or exemption) allowed them to do so. Others might be more specific, such as the Danish Coronapas that shows how much time remains in the 24–72 hour window provided by a negative test result.108

Apps have different security protocols: some providers stress that, under their systems, the ‘digital ceremony’ of verification takes place only between the individual and the venue with no databases having to be called – the cryptography within the apps is enough. Others say there
would be a record of the code in the cloud or on a blockchain to verify it was genuine, but this would be separated from any personal data stored on-device.

Israel’s Green Pass app has a QR code that can be scanned, while also providing a physical alternative (Green Pass plus ID document to verify identity). Denmark’s Coronapas system – which includes an August 2021 sunset clause, except for tourism and travel109 – allows citizens to sign into the app with their existing national digital ID (and use the photo from their passport) and display a QR code based on tests, antibody tests or vaccination.

Others are using more complex technologies to verify identity. The Mvine/iProov project funded by the Innovate UK research agency to be trialled in the UK, for example, makes use of facial recognition technology: once an individual has been vaccinated, a medical professional takes a picture and issues the individual with a digital certificate (including a QR code). The certificate number and biometric face scan are stored online by iProov; although the facial biometric is not available to third-parties, this storage could still raise privacy risks. A venue would scan the QR code and the holder’s face, to verify that they were the person to whom the credential belonged. Anyone without a smartphone could have a card with a QR code and still have their face scanned as verification.110

Using facial recognition data in the passport could get around the issue of having no state digital identity system, provided a trusted medical professional was able to link the health credential to the facial biometrics, but it brings in other challenges. In the past facial recognition has been shown to be less accurate for certain demographics – in particular women, and people from minority ethnic backgrounds – which could amplify discrimination and reduce inclusivity.

Conflating COVID vaccine passports with another controversial technology could undermine public trust and confidence – many people are uncomfortable with biometric data about their faces being gathered by private companies or government, and are concerned about how
such data is governed. The Ada Lovelace Institute’s Citizen Biometrics Council recently called for better standards and stronger regulation of biometric data.111

Authorisation or permission

The point of verification by a venue of an individual’s identity may also create practical challenges. It might be a simple, non-digital process – a human examining a digital health record that displays a green tick and a photo, for example, and then waving the individual through. Or it might have a further technical component, by scanning a QR code or further
biometric verification, requiring infrastructure that brings in additional security risks that require further consideration. For example, would venues be required to keep an audit of everyone they have allowed to enter – with related privacy and practical implications of storing a great deal of personal data – or would the fact they have followed a (more minimal) process be sufficient?112

Regardless of how the scheme is delivered, any vaccine passport system should be compliant with data protection, adopt best-practice design principles, offer high data security, be clear how it links or expands existing state data systems, in particular digital identity, and offer a non-digital route. We go into each of these aspects in more detail below.

Data protection and health data

Any vaccine passport system will involve secure access to an individual’s health data, which in many regions will be subject to particular conditions under data protection laws.

In the UK, data protection is guaranteed by the Data Protection Act 2018, which enshrines the EU GDPR, and which in the short term is likely to remain aligned with the EU GDPR.113 (GDPR – the General Data Protection Regulation – was introduced across Europe in 2018 and aims to
standardise the approach to privacy and data protection across Europe. It has also provided a model for other countries, such as Brazil.)

Health data – such as the results of COVID-19 tests and vaccination records – constitutes sensitive data under Article 9 of the GDPR, meaning the collection and further use of that data needs to be justified with one of the exemptions in Article 9-2.75 One of these exemptions is the necessity ‘for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health’.

Any use of personal data for public health reasons should be necessary – that is, targeted and proportional to achieving the desired purpose – and be of benefit to the wider public and society, rather than just individual health. One evidence submission we received suggested this means that governments and developers will need to demonstrate that a vaccine passport will have a meaningful impact on public health.60 European authorities have also underlined that any arrangements justified by the current public health emergency should not continue afterwards.116

Even if such a justification can be established, Article 9-2(i) of the GDPR requires adequate and specific measures to safeguard the rights and freedoms of individuals to be put in place even when pursuing public health interests.75 Given that COVID vaccine passport systems will contain sensitive personal information, app providers will need to comply with the principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, confidentiality and accountability, as outlined in Article 5 of the GDPR.35

Even if explicit consent or public health interests allow for the collection, storage and processing of test results and vaccination records, providers would still need to build data protection into the design of these technologies by default under Article 25-1 of the GDPR. For
example, providers proactively need to take technical and organisational measures to address potential data privacy-invasive situations, including the transfer of data to parties not covered by GDPR, which might occur if services are offered by international private providers.119

Providers should also ensure individuals are informed as to how their data is being utilised, by whom and for what purpose, providing clear and accessible information, recognising the geographical, cultural and linguistic diversity of the societies they are providing this service to.120

Given this, the Data Protection Act will almost certainly require providers, public or private, to carry out a data protection impact assessment (DPIA). National data protection authorities, which in the UK context means the Information Commissioner’s Office (ICO), will have a duty to
monitor, investigate and enforce the application of these rules under Articles 57 and 58 of the GDPR.75

More broadly, the Global Privacy Assembly – an international body composed of information and privacy commissioners – has said that while the processing of health data to enable international travel may be justified on public health grounds, governments and other organisations should take heed of principles including:

  • Embedding ‘privacy by design and default’ into the design of any system, including conducting a ‘formal and comprehensive assessment’ of the privacy impact on individuals (see design principles below).
  • Ensuring personal data is used according to a clearly defined purpose, under relevant legal authority, and only where it is necessary and proportionate.
  • Protecting the data protection rights of individuals unable to use or access electronic devices or access vaccines and consider alternatives to prevent them suffering discrimination.
  • Informing individuals as to how their data is being used.
  • Collecting only the minimum health information from individuals ‘necessary for their contribution to protection of public health’.
  • Building sunset clauses into the design of such schemes, ‘foreseeing permanent deletion of such data or databases, recognising that the routine processing of COVID 19 health information at borders may become unnecessary once the pandemic ends’.122

Design principles

Anyone developing a COVID certification scheme should consider a series of design principles at all stages of developing a system that will help to minimise harms and the risk of unintended consequences, and maximise the chances of a system working and commanding public
confidence. These principles may include, but are not limited to:

  • Data minimisation, the principle that only the personal data needed to fulfil a specified purpose should be held.123 This would suggest that, for the purposes of letting someone into a venue, the only relevant information is whether a person is permitted to enter or not, rather than fuller details of why (have had a vaccination, a negative test or are exempt, for example) or unnecessary personal information.
  • User control, the idea that the individual should have control of their data at all times and choose who to share it with.
  • Not unrelated is the idea that the credential should ‘act like paper’, as with a paper credential, there is no need for the system to ‘call back home’ and refer back to other databases.
  • Privacy-enhancing technology, operating to international privacy standards, should be used where possible to protect personal data, and developers should take a ‘data protection by design’ approach. All of this also points towards solutions that do not make use of other controversial technologies, such as facial recognition or verification
    that don’t operate locally and securely on user-controlled devices.
  • Openness, not just in explaining to the public exactly how systems are operating (including key details like who is responsible and accountable, the legal protections and ethical standards being applied, and what data is being used and how), but in taking an open-source approach to code that will help keep it up to date and open to scrutiny.
  • Transparency about who is responsible and accountable, what legal protections are in place, what ethical standards are being applied, and what data is being used and how.
  • High standards of governance, accountability, the application of other principles and adherence with data protection law (including the GDPR in Europe) will be essential to protecting the individual, but also ensuring public trust – as the UK Information Commissioner has written, a failure in one scheme could lead to a loss of trust across all attempts to use data and digital technology to combat COVID-19.124 The ICO is clearly conscious about the issue of ‘scope creep’ – that data collected for one purpose could be used for others.125
  • Adherence to international standards, for the sake of interoperability and quality. Among those currently being utilised are W3C’s Verifiable Credentials (although the use of this standard has been critiqued on privacy grounds)126 and the HL7’s Fast Healthcare Interoperability Resources (FHIR) for sharing healthcare data.127
  • Piloting proposed solutions at small scale, with full details of any such trials made public, and thorough evaluation and iterative improvements before rolling out any schemes on a larger scale.
  • Undertake consequence scanning128 to explore what potential use cases are desirable or undesirable, and make design choices accordingly.
  • Analyse plans from a security perspective. Key questions include how many potential security threats are being created by implementing these infrastructures, what new power the system gives to different actors (venue owners, etc.) and how that power could be misused, whether these new powers contravene existing norms,
    whether they raise a risk of unequal treatment in society and how these risks can be mitigated.
  • Engage members of the public – particularly those from marginalised communities – in the design and piloting of these systems. (See the Public legitimacy chapter for more detail).

Data security

Some COVID status certification services would require robust security – particularly if they are bringing together sensitive information. Higher technical security may pose a trade-off for accessibility which will need to be weighed carefully. For example, the NHS offers three levels of identity verification:129

  • Low level, where a user has verified ownership of an email address and mobile phone number but has not proven who they are or provided any other details.
  • Medium level (P5), where additional information (date of birth, NHS number, name, postcode) has been checked against patient records on the NHS Personal Demographics Service. This allows users to access services like contacting their GP but not to access health records (and so is unlikely to be sufficient for sharing vaccination or testing data).
  • High level (P9), which requires a full identity verification process including comparison between the user and photo ID, either at a GP surgery or submitting a photo of their ID and a short recording of their face.

Any process requiring access to personal health data should use high-security level. But the verification of photo ID may exclude vulnerable people or add a burden to GP services, who would need additionally to resource verification of patient identity.130 Alternatives would need to be provided for non-digital access, given a mobile phone is required for an NHS login, and for groups without an NHS number including foreign tourists.

Where countries are building their own solutions tied to state infrastructure, the alternative is for third-party apps, run by private providers, to be given permission to access health records. In the UK there are already private companies who are regulated to store health records and act as an interface between the public and NHS services. Particular consideration needs to be given to exactly how this would work in relation to COVID vaccine passports, what standards providers would need to meet in accessing and using this extremely sensitive data and
how accountability might be assured. In addition, given the high levels of verification necessary, there must be due consideration of whether and how such a standard could be met by private providers. (Also see security and fraud section below).

Developing digital identity infrastructure

It is essential that it is clear whether digital vaccine passports will create or expand existing infrastructure, in particular as regards to digital identity.

In the UK there have been at least two decades of debate about digital identity (the UK currently does not have a single digital identity system), and reaching consensus about identity verification has been challenging. In March 2021, the UK Government confirmed the end of the Verify scheme (although it has been given a final short extension),131 long criticised for failing to meet expectations of users, or in terms of the number of government services using it.132, 133 134 In September 2020, the Government published its response to a consultation on digital identity; in February 2021, it published a draft framework for digital identity. Any organisations currently developing vaccine passport systems in the UK will need to ensure that they fit within this framework.

In India, which has an existing identity system called Aadhaar, the roll-out of a contact tracing app has been used to populate other databases linked to Aadhaar, without further scrutiny and amid claims that it violates purpose limitation (the idea that data collected for one purpose
cannot be used for others without a user’s consent). Concerns have been raised from countries such as Argentina and Kenya, that existing digital identity systems lack transparency and oversight.89

Governments that do not currently use digital identity systems should ensure they do not rush into them because of vaccine certification without due thought, debate and deliberation to explore the potential benefits (greater interoperability of identity, joined-up services, etc.) as
well as the practical and privacy concerns. Creating new infrastructure that is primarily designed to meet the needs of the pandemic might restrict future choices.

Non-digital route

To be inclusive any technical vaccine passport system will need to have an analogue or paper-based alternative to protect against exclusion. This will bring risks, in particular relating to fraud and exclusion (see below). A non-digital route might not need to be an entirely separate system, for example one of the pilot projects funded by the UK Government, involving app developers Mvine and iProov, reports that their combination of a printed QR code and facial verification allows people without smartphones to be part of the system.136

As discussed above, the technical design of a digital vaccine passport is part of the wider sociotechnical system. This means that even if the technical build is done in a way to (for example) minimise the sharing of personal data and enhance privacy, this will not eradicate all harms. The act of certification discriminates between different groups of people – that will be the case whatever the technical design.137 Therefore it is critical to consider the sociotechnical design as at least as important as technical design.

Sociotechnical design

We now turn to questions of what the wider system around any technical implementation will need to look like, and what will need to be considered in the creation of such systems.

The role of government

The first question asked in relation to domestic vaccine certification systems is often who will provide them: government itself (as in Israel) or other actors, including private companies (many of whom are developing solutions) and non-profit foundations (such as Linux Foundation Public Health). However, the important question to start with is: what will
government’s role be?

Governments have the ability to consider the whole sociotechnical system, including any mitigations against harm that might be required, in ways that other actors cannot, and as such have an essential role to play. Some countries – such as Israel – are already rolling out their own
schemes where their Green Pass is issued by the state. But governments that decide not to roll out their own scheme while permitting others to build them are still taking a decision that carries responsibilities. In many nations governments will be the only legitimate standard setters, and in countries with national health systems they will be responsible for administering vaccinations and certifying that they took place.

Even if governments opted to prohibit the use of vaccine certification – something our expert deliberation felt would be difficult – informal uses are possible, so even here governments should play a role in public communication or guidance. If they do not, key public policy questions around discrimination and ethics will effectively be outsourced to private
companies. In most countries, private companies are likely to have some involvement even in state-run schemes. The question then is not whether government has responsibilities relating to vaccination certification, but what those responsibilities are.

There may be advantages to a system being the responsibility of government. They may already own key parts of the infrastructure that could be used. Many countries have existing ID systems, which can help with identity verification. In the UK, the NHS is responsible for
administering the vaccine and it has been suggested the existing NHS app (not the contact tracing app) could be modified to allow citizens to access their vaccination records. Adapting existing systems may negate the need to build entirely new ones, saving time, cost and reducing risks like scope creep and path dependency.

On the other hand, adapting existing systems to accommodate vaccine passports brings risks. If existing systems, especially identity ones, are flawed, existing problems may become further entrenched. In the UK, the NHS enjoys higher public trust than most institutions and higher
data trust than anyone else,138 but this could be damaged if expectations for vaccine passports were not met, for example through continued outbreaks of COVID-19 (as people falsely assume vaccination or testing will stop all transmission).

Existing apps for citizens may exclude certain groups. The NHS app, which is reliant on registration with the NHS in England, may not cover all eligible UK citizens and would also not work for many individuals visiting or resident in the UK. This could prevent those who have been vaccinated by, and are registered with foreign healthcare providers, from accessing domestic leisure venues during a holiday, or exclude undocumented migrants, asylum seekers and refugees who were not able to be vaccinated in their home country from access to systems in the UK. In Israel, many foreign students, diplomats, asylum seekers and
other non-citizens were excluded from the Green Pass system for weeks after the scheme launched, despite having been vaccinated in Israel.139

Allowing private companies to develop solutions could encourage competition and innovation and provide users with a choice, as no solution is likely to work perfectly in all settings.140 There are risks to relying on a single system (including security risks),141 and a competitive market could help push out untrustworthy players.

Our expert deliberation raised concerns about market-led approaches:

  • That a market-led system could be dominated by big players who were not experts in the field, even leading to a monopoly or monopsony.
  • That risk might be heightened by only certain technology companies being big enough to adapt any system to rapidly changing scientific evidence (for example, on transmission).
  • That the rush to dominate the market quickly could lead to vital discussions of equality and ethics being missed, not leave enough time for user research and evaluation, and bring insufficient engagement with health authorities.
  • That there is uncertainty and a lack of transparency about the business model for any private sector solution, and that data acquired through provision of the app (even if anonymised) may be monetised by private providers.

Other risks include that allowing different systems to be developed could fragment a public policy problem into a series of private problems that would be harder to govern; that private companies would have less of an incentive to think about the wider societal context and possible harms unless government had put standards and rules in place; and that multiple solutions may not be interoperable, which would lead to some being recognised in some settings (e.g. by some venues or restaurant chains) but not by others.

Whether apps are supported and developed by government or other, private providers, there are some facts that should be made public clearly, including who is responsible and accountable, what legal protections are in place, what ethical standards are being applied, and
what data is being used and how.

Duration of a COVID vaccine passport system

Another important consideration will be the duration any system is operational. If a system is intended to be a temporary response to avoid prolonging lockdowns and to ease other public health restrictions, its lifecycle would depend to a significant degree on the background rate
of COVID-19, the speed of vaccination within a jurisdiction, and the subsequent impact of health measures on the risk posed by COVID-19.

Some countries have moved quickly in vaccinating their population. As of 12 April 2021, Israel had provided more than 60% of its population with at least one dose of a vaccine, the UK nearly half its population and the US more than a third.142 The percentage of the population that is fully vaccinated in Israel is over 50%, the US over 20% and the UK over 25%.143

A vaccine passport scheme may have some utility when a sizeable minority of the population has had two doses, but before a nation has achieved herd immunity. It may have less utility when only a very small percentage of the population is vaccinated (existing lockdowns would
be likely to continue, there may not be enough economic incentive for businesses to reopen), or with a large percentage of the population having been vaccinated (herd immunity will have some effect).

The speed at which Israel, the US and the UK are vaccinating their populations, for example, suggest that there may only be a very limited window where vaccine passports could be of any use, and there would still be strong scientific reasons (listed above) and other societal reasons
(explored through the rest of this report) not to introduce them.

Mass vaccination would likely bring the risk to society of COVID-19 down to the level of other illnesses already circulating in society, such as seasonal flu. In the UK, the average number of annual deaths from the flu was around 15,000 from 2014/15 to 2018/19,144 but there is no expectation of a passport or testing regime for the flu. Our expert deliberation panel assumed a COVID-19 passport system might have some appeal in the transition from a pandemic to steadier conditions – when, as with the flu, the disease was endemic but vaccination, herd immunity and better treatment had made it less deadly – but then questioned how far it would be possible to switch off a temporary, transition measure once it was in place.

The UK prime minister has suggested that a third wave of COVID-19 could yet ‘wash up on our shores’.145 Would vaccine passports offer any support against such waves globally? Following mass vaccination, the hope is that any future waves would have a more tolerable impact on health, perhaps comparable impact to annual flu seasons unless the virus mutated into a variant against which existing vaccines are not effective. It is not clear how passports would offer significant public health benefit in a situation of low transmission and high population immunity.

The potential scenario of a vaccine-resistant mutation complicates the role of a passport. Those who had previously been considered lower risk would no longer be, and if people behaved as though they were protected because they had a passport, that could potentially accelerate the spread of the disease. On the other hand, if only one vaccine (Pfizer, for
example) was ineffective against a new variant, vaccine passports could be used to allow a subset of the population to continue movement, or government guidance could pivot to a system that was reliant on testing rather than vaccinations.

The end of the COVID vaccine passport lifecycle occurs when it is deemed no longer necessary – but what criteria would need to be met for it to be turned off, and under whose authority? Possible end points could include cases falling below a certain level (though consideration would need to be given as to whether some trigger – an increase in cases, or the
emergence of particular variants – would require them to be switched back on), or the WHO declaring an end to the pandemic. Some have argued that a benefit of passports would be encouraging boosters, which might indicate more long-term use.

Denmark’s plans for its Coronapas contain an August 2021 ’sunset clause’ (other than for tourism and travel), with decisions about any continued scope and use to be informed by the experiences of its domestic use.146 Our expert panel were sceptical about the ease of turning a system off once implemented, and worried about scope creep. Others have argued for disease surveillance systems remaining in place and becoming part of normal health infrastructure, to protect against future pandemics.147

Opportunity costs and overheads

The opportunity cost of focusing on COVID vaccine passports

There will be opportunity costs to focusing on COVID vaccine passports rather than other interventions. Certification schemes will involve political, financial and human capital costs that a government will need to weigh against their benefits. These costs and benefits should not be
considered in isolation. Given that governments have finite resources and attention, focusing on certification schemes should be reviewed in comparison to the costs and benefits of further investment in alternative public health measures intended to lift restrictions, such as investing in greater vaccine supply and roll-out or attempting to improve test, trace and isolate schemes.

As we have seen, there will be a discrete time period where there is scientific confidence about the impact of vaccines on transmission and a large enough vaccinated population to warrant segregating rights and freedoms, before population-level herd immunity, or endemic and low-risk COVID-19 makes vaccine passports unnecessary. In some countries, like the UK, this window might be very narrow.

This window will vary from country to country, and affect the relative balance of costs versus benefits, which will depend on the intended duration of any COVID status certification. High up-front infrastructure and business costs and significant opportunity costs would need to
be weighed in the decision to set up a temporary scheme. Schemes intended to have a long duration also need to be mindful of ongoing costs of maintenance and any costs borne continuously by users, for example in acquiring tests.

Maintenance

Any vaccine passport system will require maintenance, repair and updating in order to remain functional and continue to serve its intended purpose as conditions change around them. The question of who is responsible for maintaining these systems and the costs associated with
continued upkeep should be factored into any cost-benefit analysis of the viability of these systems.

If a vaccine passport system is intended to be temporary, then its obsolescence should be designed in from the start. Legislation and plans should contain sunset clauses, and the costs of closing the system down factored into budget planning. Care should also be taken not to develop other systems that are reliant on it.

Designing in obsolescence may be relatively novel in software development, but not in other fields: nuclear power stations are designed with maintenance, the end of their lifespan and decommissioning in mind. If governments and other providers have not thought about how to close a technical system down, it implies that either they believe it will not be a
temporary measure or have not given the issue sufficient consideration,  both of which may be damaging to public confidence.

If a system is to be more than temporary, then maintenance and upgrade costs will need to be planned in. The prospect of ‘technical debt’ – the idea that limited systems built in haste will require future development spending – is also higher if governments and other providers rush to build systems in weeks or months rather than thinking longer term.

Financial burden on businesses

Businesses that require vaccinations for customers or employees will need systems and additional resource for reviewing vaccine passports, which could create a financial burden for businesses already struggling with depleted financial reserves as they try to reopen.
In certain contexts, like health and social care, there may be existing systems in place which have tracked and verified vaccinations, but many firms in other sectors are likely to be starting from the ground up and having to procure new systems, train staff, and employ ‘security’ staff to administer their use.

There are other possible costs businesses will need to consider. For example, it is unclear what liabilities a venue would face if customers became infected with COVID-19 despite using vaccine passports, if the scheme allowed the venue to (say) reduce the space between theatre seats or between restaurant tables.

There may be related risks for businesses in terms of reputational damage, should such a situation occur. For example, if there is an outbreak traced back to e.g. a cinema using the  scheme to remove maskwearing and spacing requirements, those cinemas might be, fairly or
unfairly, seen as more risky venues.

Costs to users

While almost all countries have chosen to make vaccinations freely available to all as they become eligible, schemes that rely on testing could impose additional costs on users of the system. The more widespread a scheme is, the more burdensome any repeat costs could
become on those who must rely on testing that is not freely available.

In the UK, testing is widely and freely available for most people, and the Government has a service that allows citizens to request free lateral flow tests. But, even in the UK context, testing companies are charging customers for PCR tests required for international travel.148

Interaction with the wider public health system

Effect on vaccine uptake

One possible public health reason for introducing a COVID vaccine passport system would be to encourage uptake of COVID-19 vaccines, in order to reach herd immunity faster. This calculation will be specific to different countries, as rates of vaccine hesitancy vary greatly and the strength of incentivisation may also vary substantially. In England it is not clear there would be much additional benefit by further incentivising vaccination through a vaccine passport system, as more than 95% of people aged 60 and over have already been vaccinated with a first dose,149 and nearly 90% of unvaccinated adults say they would be taking a vaccine if available.150

Some preliminary studies show a mixed picture as to whether vaccine passports would incentivise people to get vaccinated;151 further evidence and investigation will be necessary for any given local context.152 There may be a downside risk that certification could reduce trust and increase vaccine hesitancy if the scheme is seen as introducing mandatory vaccination by the back door.153 This may be particularly acute in some minority ethnic communities that have been oversurveilled historically, leading to a further deterioration in trust.154 This is an area where further research is needed.

Placing an additional burden on the public health system

As well as raising opportunity costs in relation to the wider vaccination effort, these systems could place a direct administrative burden on  vaccine programmes, and on healthcare staff administering vaccinations and handling medical records, who are already overstretched from
additional workloads imposed by the pandemic.155 While some digital systems may be able to reuse existing vaccination records with minimal additional work on the part of frontline health staff, non-digital solutions and obtaining proof of exemption (and authorising some digital schemes) could place additional strain on general practitioners and family doctors,
worsening other health outcomes, unless there are easy and clearly signposted alternative routes or additional resources are made available to general practitioners.

This may be particularly acute in countries still developing digital infrastructure. In their evidence, Access Now give the hypothetical example of a vaccination drive in a village in India. The administrator of the vaccine is required not only to vaccinate the people there, but to authenticate their identity, create their unique identity on the government’s platform and log their vaccination status. But the internet goes down – the vaccinations are halted until it is restored – a lack of technological infrastructure means people are left unvaccinated that
day despite people and vaccines being present. Similar cases, in the distribution of rations and other social benefits, have been recorded in India.89

Setting interoperable global standards

Standards are important for complex technological systems to function properly. In a globalised world, standards act as an important process for establishing shared rules, practices, languages, design principles and procedures. They allow a diversity of actors taking a multiplicity of approaches in a local context to nevertheless maintain coherence for individuals interacting with a technology, work together to avoid duplication of effort, and avoid as much as possible a lack of interoperability between different systems in different places.98 COVID vaccine passport schemes will require interoperable standards, particularly in the context of international travel and border control, and especially if governments allow private actors to develop a diversity of certification applications.

Who is responsible for setting standards

Designing and setting standards is not a neutral process.35 Given the impact they have, standards will often be contested by different countries and interest groups, as they can codify and project particular world views. Standard-setting is not a one-off process, as standards require maintenance and iteration to remain useful and consistent. The process of setting new standards can sometimes be remote from those on the receiving end of novel technologies. The development of COVID vaccine passport systems will need inclusive processes for the creation and maintenance of standards.35

What they should include

As discussed in the Science and public health chapter, there are a number of possible pieces of COVID-19 risk-relevant health information that could be included in a COVID vaccine passport scheme. Decisions will need to be made about:35

  • the risk factors within the system that will be represented in models
  • how to measure or approximate the values of these variables/factors
  • where to define the boundaries of the system, and how to assign confidence to data and components inside and outside these boundaries.

Those responsible for standard setting in COVID vaccine passport systems will need to decide which tests, vaccinations and dosing regimens will be accepted within a specific, and often geographically contained, certification system.

In particular, many high-income countries have primarily relied on vaccines developed in the United States and Western Europe, and have not approved vaccines developed in Russia and China.161 Travellers from low- and medium-income countries, who have primarily relied on Russian and Chinese vaccines, could be denied access to countries recognising only European or North American vaccines or be required to undertake self-isolation or even (costly) hotel quarantining to access those countries. It could also lead to domestic discrimination against migrants from low- and medium-income countries, if access to venues and services is conditional on vaccines used in those high-income countries and migrants are vaccinated with ‘invalid’ vaccines.

Security and fraud

Any digital vaccine passport scheme that successfully restricts and permits access to certain rights and freedoms will inevitably prompt attempts to defraud it. The greater the differentials in access, the stronger the incentive will be. Steps will need to be taken to ensure any vaccine passport scheme is not vulnerable to fraud or accusations of fraud. The Global Privacy Assembly, a global forum for privacy and data protection authorities, emphasises that the cyber security risk of any digital COVID vaccine passport system or app must be fully assessed, taking full account of the risks that can emerge from different actors in a global threat context.162

Within the first week of Israel’s Green Pass scheme, it was reported that a black market for forged passes had emerged on the messaging app Telegram163 and subsequently that the police were looking for hundreds of individuals who had bought counterfeit certificates.164 In February 2021, Europol issued an Early Warning Notification on illicit sales of false negative COVID-19 test certificates, citing multiple incidents across the continent and saying that, as long as travel restrictions remained in place, ‘it is highly likely that production and sales of fake test certificates will prevail. Given the widespread technological means available, in the form of high-quality printers and different software, fraudsters are able to produce high-quality counterfeit, forged or fake documents.’165

Counterfeit vaccine passports could undermine the public health rationale for certification by allowing those at a potentially high risk of transmission to engage illegitimately in riskier activities, creating a situation similar to if there were no certification at all. It could even be
worse: those unaware of counterfeit vaccine passports might make inaccurately low risk assessments of situations and not use other, more informal mitigations (such as social distancing). Widespread counterfeits could also undermine public confidence in vaccine passports if individuals no longer trust any other individual’s certification to be valid and become more suspicious of others’ claims to be vaccinated, recovered, or otherwise at a relatively lower risk to themselves and others.

Recommendations and key concerns

 

Anyone developing a COVID vaccine passport scheme should:

  • consider a series of design principles at all stages of developing a system – that will help to minimise harms and the risk of unintended consequences, and maximise the chances of a system working and commanding public confidence – and conduct small-scale pilots before further deployment
  • protect against digital discrimination by creating a non-digital (paper)
    alternative
  • be clear about how vaccine passports link or expand existing state data
    systems (in particular health records and identity).

 

If a government does want to move ahead with a COVID vaccine passport scheme, it should:

 

Clarify its own role. Whether it is in building its own system, permitting others to do so, or attempting to prohibit such systems altogether, government will have a part to play. This may involve formulating design principles, such as those set out below, and ensuring they are met. It should also involve international discussions about operability across borders.

 

Be clear about the relationship between a COVID vaccine passport scheme and wider plans for digital identity. If governments want to make a case for wider digital identity schemes, then they should have those discussions with the public on their own terms. Conflating digital identity systems with emergency plans for COVID vaccine passport systems could damage public confidence in both technical applications. Governments should also be careful that consideration of any COVID vaccine passport schemes does not force them into longer-lasting decisions about digital identity systems.

 

Design systems that are as accessible as possible. This will include ensuring testing is free at the point of use, including ideally free testing or at the very minimum at-cost testing in private applications (although governments would do well to subsidise private testing if allowed to go ahead, as many do with workplace testing already).

In short, governments should provide clarity on:

  • The role they will play in any system – whether taking ownership of a system themselves, or regulating others. Only governments can take a holistic view of
    the opportunities and potential harms of a system to the society they govern
  • How long a system should endure.
  • What the opportunity costs are of focusing on vaccine passports at the expense of other interventions.
  • The impact of vaccine passport schemes on other elements of the public
    health system, including vaccine uptake and vaccine distribution.
  • The practical expectations on others involved in making a system work, such
    as businesses.
  • Standard-setting: governments need to be clear about which, tests, vaccinations and dosing regimes will be accepted for domestic usage and
    provide unambiguous criteria for inclusion and exclusion based on reliable consideration of the available scientific evidence and background context of
    infection rates and variants present in their jurisdiction. The simplest solution
    is to make the list of accepted vaccinations coterminous with those approved
    for use by the jurisdiction’s relevant medicines regulators, e.g. the MHRA in
    the UK, the EMA in the European Union or the FDA in the USA.
  • The risk of vaccine nationalism in the contexts of border control and domestic access for migrants, especially in the medium to long-term. At a minimum, countries should aim to minimise these potential oversights by operating a mutual recognition scheme that allows vaccines approved by any ‘trusted’ medicines regulator and/or on the WHO’s Emergency Use Listing to be included within a vaccine passport scheme or at least not be excluded on the basis of lack of domestic approval. Not only would mutual recognition and permissive approval enhance individual fairness, it reduces the risk of entrenching existing international inequalities and the risk of geopolitical divides being worsened in the long-term by inconsistent requirements and the systemisation of ‘vaccine worlds’.

Finally, they should incorporate policy measures to mitigate ethical and societal risks or harms identified above.

Public legitimacy

Another consideration for any COVID vaccine passport scheme is its perceived legitimacy

Another consideration for any COVID vaccine passport scheme is its perceived legitimacy. Illegitimate systems are undesirable both because they lack a sufficient political justification and because an illegitimate system will be likely to face significant resistance to its implementation. Legitimacy is a contested concept, and different attributes will be required for a system to be legitimate in different cultures and under different moral and political philosophies. Here, we are concerned with legitimacy in democratic pollical systems.

In part, legitimacy in democratic political systems can come from following due process. This includes debate by representatives in a legislature and subsequent legislation, or by ensuring proportionality and respect for human rights in accordance with existing legal and constitutional frameworks, as we have discussed in previous chapters. However, another important of legitimacy in democratic political systems is the consent of citizens and public support for particular measures. This means understanding what the public are willing to endorse and continuously involving the public at each stage of development.

Polling

One approach to public legitimacy of vaccine passports would be through surveys and polls. Polls conducted in the UK suggest that public support for COVID vaccine passports varies depending on the availability of vaccinations, the particular use cases, and the providers of
certification:

  • An academic study of UK public opinion during March–April 2020, the height of the first wave of COVID-19 in the UK, found most people did not object to immunity passports (introduced as ‘imply[ing] that you are now immune and therefore unable to spread the virus to other people’) and 60% of people wanted one (to varying degrees), although 20% thought them unfair and opposed them completely.166
  • Polling by Deltapoll in January and February 2021 found support for restrictions at an international level. At a domestic level, January polling found narrow support (42–39%) for vaccinated people being allowed to do things (meeting friends, eating in restaurants, using public transport) that others could not.167 Support had risen 12 points by the end of February, although passports and certificates were not explicitly mentioned.
  • Polling published by YouGov in March 2021 found support for a vaccine passport system, but with greater opposition in younger age groups, varying levels of support for different use cases (from 72% in favour of use at care homes, to 31% at supermarkets) and opposition to private companies being allowed to develop their own systems. Support was higher for passports once everyone had been offered a vaccine, compared to during vaccine rollout – which, as discussed above, is when the scientific case for using them is weaker.168 Somewhat contradicting the general support for certification is a separate YouGov poll from early March. This found that 79% of respondents thought those vaccinated should still be subject to the same COVID-19 restrictions as others, until most people had been vaccinated.169
  • Ipsos MORI polling in March 2021 found support for ‘vaccine passports’ was highest for international travel (78%) or visiting relatives in a care home (78%) or hospital (74%), but also high for theatres and indoor concerts (68%), visiting pubs and restaurants (62%) and using public transport (58%, though 25% were opposed). Nonetheless, one in five of those polled thought the ethical and legal concerns outweighed any potential benefits to the economy, with the young and ethnic minorities more concerned.170
  • Research conducted by Ipsos MORI at the end of March 2021, for King’s College London and the University of Bristol, found 39% of those polled thought unvaccinated people would face discrimination (28% did not), with 44% worried that vaccine passports would be sold on the black market. Half of those polled didn’t think passports would have a negative impact on personal freedoms, though a quarter thought they would reduce civil liberties. Just over a fifth of people thought passports would be used for surveillance by the Government, while more than two fifths did not, but concern was much higher among minority ethnic groups.171
  • A survey by De Montfort University, Leicester, found 70% agreed with the need for vaccine passports to travel internationally, but only 34% agreed with such a need for pubgoers or diners (compared to 45% against).172
  • Cultural sector consultancy Indigo found around two-thirds of people would be comfortable with passports or testing to attend live events (with a fifth and close to a third uncomfortable, respectively), but that 60% of people would be uncomfortable if this meant that other public health measures or restrictions inside the venue were dropped.173
  • Polling for the Serco Institute found broad support for passports across different settings, assuming there were ‘appropriate protections and exemptions for people who are precluded from taking the vaccine due to medical conditions’.174

The Ada Lovelace Institute’s own polling, with the Health Foundation, found more than half (55%) of those polled thought a vaccine passport scheme would be likely to lead to marginalised groups being discriminated against. 48% of people from minority ethnic backgrounds and 39% of people in the lowest income bracket (£0-£19,000) were concerned that a vaccine passport scheme would lead to them being discriminated against. While twice as many respondents (45%) disagreed with a ban on vaccine passports compared to those agreeing there should be a ban (22%), a third of respondents (33%) were undecided.

Taken together, these polls point to a lack of societal consensus on the way forward for vaccine passport schemes. Publics in the United States and in France show similar divisions.175

Deeper engagement

Surveys and polls are a powerful tool for measuring mass trends in attitudes, establishing broad baselines in opinion, or understanding what proportion of the public agree with particular statements. The information they provide helps us to understand the pulse of a population’s attitudes. But these methods fail to give comprehensive understanding of people’s perspectives on complex topics, such as the ethical and societal challenges of COVID vaccine passports and related digital technologies, and risk boiling these complex issues down into
statements which can be answered with ‘yes’ or ‘no’, ‘strongly disagree’ or ‘not sure’. Framing questions as simply about vaccine certification schemes also risks focusing on one possible measure rather than taking a holistic view of other measures that governments could deploy.

If governments want to understand what the public thinks about these issues and what trade-offs they might be willing to make in a deeper way, they need to provide a space for them to do so through more deliberative means.

Citizens’ juries and councils enable detailed understanding of people’s perspectives on complex topic areas. For example, the Ada Lovelace Institute has recently undertaken a year-long Citizens’ Biometrics Council to understand public preferences on the use and governance of biometrics technologies.176 Focus groups or engagement workshops can better capture the nuance in people’s opinions and creates complex data to analyse and describe in reports and recommendations. Qualitative and deliberative methods complement the population-level insights provided by polling by offering greater detail on why people hold certain opinions, what values or information inform those views, and what they would advise when informed.

This will be particularly important given the access to government decision-makers that other groups – lobbyists for particular industries, private companies building vaccine passport solutions – may have already had.3 In the UK, lobbying and corruption is currently towards
the top of the news agenda: given the importance of public trust to making government plans for lifting lockdown work, and in deploying new technology, it is vital that governments understand the position of different publics and hold their trust.

Recommendations and key concerns

 

We recommend undertaking rapid and ongoing online public deliberation that is designed to be iterative, across different points of the ‘development’ cycle of COVID vaccine passports, starting before any decision has been taken to implement such a scheme and continuously engaging with diverse publics through the design and implementation of any scheme if and as it develops.

 

Key groups to involve (beyond nationally representative panels of the population) include any groups disproportionately affected by the pandemic to date, and ‘non-users’ that could be excluded from a system, including those who were unable to have a vaccine. Governments should use existing community networks to reach people where they are located.

 

Public engagement to understand what trade-offs the public would be willing to make should be seen as a complement to, and not a replacement for, existing guidance and legislation. It should consider COVID countermeasures in the round (not just COVID vaccine passports) and should be clear about what is and what is not up for public debate.

 

Public engagement is important at all stages of development:

  • Deliberation should be undertaken before any decision on implementation is made, on the ethical trade-offs the public is willing to make and whether they think it’s acceptable for it to go ahead.
  • If deliberation establishes that such a scheme is acceptable or a decision has already been taken to implement a scheme, then public deliberation should be undertaken based on a clear proposal, to stress test the scheme and ask what implementation of vaccine passports would be most likely to engender benefit and generate least risk or harm to all members and groups in society.
  • If a scheme is implemented, then governments should continue to engage with the public to assess the impact of the technologies on particular groups within society, reflect on the experiences of individuals using the scheme in practice, and to inform and guide decision-making about whether such a scheme should continue, how it should be brought to an end or how it should be extended. Deliberation should include future risks and global consequences.
  • All stages are important, but even if deliberation is not possible at one stage, itcan still be implemented at other stages.

Future risks and global consequences

The focus of most discussions of COVID vaccine passport schemes (including previous chapters of this report) has been on the immediate and near-term questions of practicality, legality, ethics and acceptability of systems being developed right now, looking at opportunities and concerns over the next year or two. These discussions focus on schemes being launched within months, including those already operational in Israel and before summer 2021 in the European Union, and their operations over the next year or two, as mass vaccination campaigns roll out around the world.

Even if a country is able to establish that all design questions have been answered, that the societal, legal and ethical tensions have been resolved, that there is no way of adapting existing systems and that a new system needs to be built, the long-term effects of building such systems and how they could shape the future must be considered. In particular, consideration should be given to whether these systems will:

  • Become a permanent fixture of future pandemics?
  • Be expanded to cover a wider and more granular set of health statuses, outside the pandemic context?
  • Change norms and expectations about the bounds of sharing personal health data?
  • Create wider path dependencies that shape the adoption of other technologies in future?

This section is also vital to any public engagement. The public may not see all the unintended consequences and may discount effects on their future selves and future generations, especially with the prospect of escaping a cycle of lockdowns faster. States with longer time horizons and broader duties to all their citizens, need to consider the future risks alongside the immediate pressures on their publics, and encourage their public to do so through deliberative and open engagement.

Permanent emergency solutions

Once time, resources and political capital have been invested in their construction, it is unlikely that these systems and their underlying infrastructure will be rolled back once the crisis that initially justified their creation has passed. There are arguments for maintaining such systems:
for example, the Tony Blair Institute suggests in its case for digital health passports, that ‘Designed properly and integrating testing status, a health passport would also help us manage the virus and prepare for new strains and future pandemics.’178

It is likely that SARS-CoV-2 (the virus that causes COVID-19) will become endemic, like seasonal flu and other infectious disease-causing pathogens (or better contained, like measles, or even eliminated), at which point it will no longer require the emergency and intrusive measures justified by its present transmissibility and fatality. Accepting this as a reasonable scientific expectation for the near future raises concerns about the longevity of emergency apparatus, and that such infrastructure – once built – will not be stripped back.

In response, it has been suggested that sunset clauses should be built into any COVID vaccine passport scheme, with primary legislation clearly setting out the date or conditions by which a scheme will come to an end, and procedures designed into the system to allow that to happen, e.g. a process for the permanent deletion of any data, databases or apps that compromise the technical system.162

Clauses could be included in any use of emergency powers or particular legislation setting out government powers during the COVID-19 pandemic, and include time horizons like the end of a particular year or the end of the crisis according to set criteria (a declaration by the WHO, or cases of infection at a certain level for a specified time period). The clause could also include any process by which a scheme may be explicitly reapproved and continued. In the UK, it has been suggested that a majority vote in both Houses of Parliament could be required to continue any system.180 The Danish Government’s plan for the use of its Coronapas includes an August 2021 ’sunset clause’ for the use of the app other than for tourism and travel, with discussions about the experience of using the system in May and June 2021, to decide on its continued scope and use.181

These will not always be enough to guarantee the system does not become a permanent fixture. Take for example, the European Union’s Digital Green Certificate.95 In one way, it is clearly a time-limited proposal with a clear-end point, albeit with quite a high bar: ‘the Digital Green Certificate system will be suspended once the World Health Organization (WHO) declares the end of the international public health emergency caused by COVID-19.’ However, as a reminder of how these systems become a permanent fixture of life, they note immediately afterwards that ‘Similarly, if the WHO declares a new international public health emergency caused by COVID-19, a variant of it, or a similar infectious disease, the system could be reactivated.’

This creates a kind of path dependency: once this system is built, it becomes a tool for future emergencies, including any future outbreak of COVID-19 or other respiratory pandemics. This in itself does not pose too many additional concerns, beyond those raised in previous chapters. If it can be justified in our current emergency circumstances, there are good reasons to think it could be justified in similar future emergencies, and a pre-existing system could allow it to be spun up much faster. But many of these systems are being discussed as if they are one-off temporary solutions. If the plan from the start is for them to form the basis for future respiratory pandemic preparedness, they should be honestly presented to the public in these terms. They will also require ongoing investment and maintenance.

Scope creep

There is another version of this path dependency: if the purpose and design of the system expands beyond the narrow focus on an emergency response to become business as usual. The digital nature of the system particularly lends itself to iteration, gradual expansion and ‘scope creep’. Some forms of expanded functionality might be in keeping with a public
health purpose, for example, collecting data for disease surveillance and epidemiological research for COVID-19, and perhaps integrating symptom tracking systems with vaccination status.

Other forms may be more sweeping. Other kinds of health status such as physical and mental health records and genetic-test results could also be incorporated to provide more sophisticated risk scoring or even inclusion and exclusion on the basis of health risks beyond COVID-19, moving from COVID status certification to health status certification.3
medConfidential suggest a thought experiment for provocation: any solution under consideration should be tested against whether we would accept the same system of health information verification and differential access for a mental health condition or for HIV.184

Some have pointed to the history of biometric technologies as an analogous example of scope creep, with the initial uses of biometrics limited to exceptional circumstances, such as detention centres and crime investigations, before gradually expanding into everyday tasks,
such as unlocking our phones or logging into our bank accounts. Technologies that seemed intrusive when introduced become commonplace step by step, first by their use in extremes and then each use setting a precedent for the next.185 That is not to say that the gradual
expansion of biometrics is inherently problematic – they are clearly useful in many applications – but often technologies are developed and rolled out before there is sufficient engagement with the public about what use cases they find acceptable and what criteria for effectiveness
and governance they would set.

Similarly, the continued use and expansion of a COVID vaccine passport system could possibly be justified if the tensions in previous chapters are resolved and COVID-19 remains a long-term danger or we deem the systems useful enough to be repurposed for other health concerns. The key concern is that conversations and public engagement need to happen at each stage of continued use and expansion. Each use needs to be evaluated on its own terms at the time of deployment, informed by lessons learned from the previous operation of any similar systems, and driven by informed decisions rather than allowed to continue through software updates without transparency or accountability to citizens.

There is a risk that these important conversations about continued use may not happen or lose salience when the immediate danger is passed, and citizens have to focus their minds on rebuilding post-pandemic.

Others have suggested the system could be expanded beyond the health context, such as for identity verification for other purposes and generalised surveillance.186 However, the greatest impact of developing COVID vaccine passport systems may not be that the core of the system
is directly expanded into a permanent form of digital identity. Rather, the implementation of the system might set precedents and norms that influence and accelerate the creation of other systems for identification and surveillance.

Wider path dependencies

Just as path dependencies in terms of existing infrastructure, legal mechanisms and social and ethical norms will shape any adoption of COVID vaccine passport systems, so will those systems shape the paths available to decision-makers at future junctures.

Decisions made today may have implications for many years to come. For example, if we put in place widespread facial recognition systems to verify identity under these schemes, will we then re-evaluate the appropriateness of using facial recognition for other purposes e.g. age verification in hospitality venues? Or will we be locked into a path, once the capital has been invested, of installing and ironing out the operational issues in these systems? In this scenario, venues find themselves with a very different cost-benefit calculation than they did before the pandemic.

Comparisons were drawn during our expert deliberation to post9/11 security infrastructure at airports, and the once limited but now essentially mandatory Aadhaar identity system in India. There was pessimism about the likelihood of COVID vaccine passport technologies being ‘switched off’ once the crisis has passed, and the tendency to lead to path dependency: ‘Once a road is built, good luck not using it,’ as one participant in our expert deliberation put it. This might be a particular issue if the status of other health conditions were to be added.

Continuous development

If we recognise that these technologies are not intended to disappear once the immediate danger has passed, then we must think of these technologies as perpetually unfinished. This is especially true of the software aspects, which will require constant updates to remain
compatible and consistent with other software systems, legislation and standards.

Therefore, ethical evaluations of COVID status certification systems will require acknowledgment of uncertainty, risk and the inherent unfinished nature of the technology. Where significant uncertainty exists, some suggest that decision-makers can learn from precautionary and anticipatory approaches in sustainable development and other fields.3

Wider information flows and changing expectations

Even if the scope of statuses and purposes in the systems themselves remains limited, concerns were raised during our expert deliberation about how information in the system might be used more broadly than intended.

Even with the most privacy-preserving technology, health data could come into contact with different actors, including those in healthcare settings, employers, clients, police, pubs and insurance companies, who may have different levels of experience and trustworthiness in handling personal data. Private companies who offer COVID vaccine passports may also have commercial incentives to monetise any personal data they collect. Both risk data being shared with third-parties and being repurposed in future for uses the individual did not consent to. This concern is likely to be less significant if high standards on privacy-preserving design are followed in the design phase, and if data protection law is adequately enforced.

Finally, the implementation and existence of a system of health data-sharing in exchange for differential access to services could change social norms about the acceptable circumstances for health data-sharing in future, particularly if the system has any durability beyond the immediate emergency circumstances.35 This is not to prejudge what those changes will be – an ineffective and mismanaged system could damage public trust in digital identity systems and health data-sharing, while an apparently successful one might embed those ideas as a normal part of daily life. Either way, it will have an effect on the social norms and ethical reality in which we evaluate the system retrospectively, for good or ill, and it will shape the attitudes we take into future systems with similar properties.

Recommendations and key concerns

 

The current uncertainty, ongoing social anxiety and economic cost of the pandemic makes the technical fix of a novel tool and emergency infrastructure seem attractive, but the starting point should be identifying specific problems and looking at whether and how these could be addressed through existing practices and laws.

 

If these systems are intended to be used in the long-term, then governments should be upfront about that intention and undertake design, legal and ethical assessment, deliberation etc. on that basis, not pretend they are building a temporary system.

 

This should include – in primary legislation, where possible – details of:

  • Sunset clauses, including clear procedures for deciding whether to continue schemes, and details of legislative oversight and further public deliberation.
  • Commitments not to engage in ‘scope creep’; any expansion to the system should undergo its own separate assessment, with all the criteria outlined in other sections.
  • Proper investment of resources to ensure systems are properly maintained during use and don’t break down, and so exclude people or otherwise unexpectedly fail.
  • Governments and other providers should establish clear, published criteria for evaluation of the success of a system at achieving its stated purpose and of any side effects or externalities caused by the creation of these systems. This might include epidemiological modelling, as far as is possible, of the system’s effect on COVID-19 spread within society, and economic evaluation of the additional marginal benefit provided by the system. Any such evaluation should be continuous with regular public reviews and updates.

Conclusion

In this report the Ada Lovelace Institute sets out detailed recommendations under six requirements for policymakers, developers and designers to work through, to determine whether a roll-out of vaccine passports could navigate risks to play a socially beneficial role.

The six requirements for policymakers, developers and designers are:

  1. Scientific confidence in the impact on public health
  2. Clear, specific and delimited purpose
  3. Ethical consideration and clear legal guidance about permitted and restricted uses, and mechanisms to support rights and redress and tackle illegal use
  4. Sociotechnical system design, including operational infrastructure
  5. Public legitimacy
  6. Protection against future risks and mitigation strategies for global harms.

This report draws on a wide range of evidence:

  • Evidence submitted as part of an open call during January and
    February 2021, which can be found on the Ada Lovelace Institute
    website.
  • A rapid deliberation by an expert panel, summarised in the February 2021 report What place should COVID-19 vaccine passports have in society? The deliberation was chaired by Professor Sir Jonathan Montgomery with leading experts in immunology, epidemiology, public health, law, philosophy, digital identity and engineering.
  • A series of public events on the history and uses of vaccine passports, their possible economic and epidemiological impact, their ethical implications, and the socio-technical challenges of building a vaccine passport system.
  • An international monitor of the development and use of vaccine passport schemes globally.
  • Desk research and targeted interviews with experts and developers.

The report concludes that building digital infrastructure that enables different actors across society to control rights or freedoms on the basis of individual health status, and all the potential benefits and harms that could arise from doing so, should:

  1. Face a high bar: to build from a secure scientific foundation, with understanding of the full context of the sociotechnical system, and mitigate some of the biggest risks through law and policy.
  2. Not prove a technological distraction from the only definitive route to reopening societies safely and equitably: global vaccination.

At the current point in the pandemic response, there hasn’t been enough time for real-world models to work comprehensively through these challenging but necessary steps, and much of the debate has focused on a smaller subset of these requirements – in particular technical design and public acceptability.

Despite the high thresholds, and given what is at stake and how much is still uncertain about the pathway of the pandemic, it is possible that the case can be made for vaccine passports to become a legitimate tool to manage COVID-19 at a domestic, national scale, as well as supporting safer international travel.

As the pandemic response continues around the globe. evidence will continue to emerge, and more detail will come into the public domain about possible models and pilot schemes. We hope the structures developed here remain valuable for decision-makers in industry and government and support efforts to ensure that – if vaccine passports are developed and deployed – that happens in a way that supports a just, equitable society.

Acknowledgements

We are indebted to the many experts and organisations who contributed evidence, spoke at events and briefings, demonstrated tools, and took part in the expert deliberation. We’d especially like to thank Professor Sir Jonathan Montgomery for chairing the expert deliberation, and Gavin Freeguard who has made substantial contributions as a consultant to
delivering this project.

This project has been supported by the European AI Fund, a collaborative initiative of the Network of European Foundations (NEF). The sole responsibility for the project lies with the organiser(s) and the content may not necessarily reflect the positions of European AI Fund,
NEF or European AI Fund’s Partner Foundations.

Participants in the expert deliberation:

Sir Jonathan Montgomery (chair) is Professor of Health Care Law at University College London and Chair of Oxford University Hospitals NHSFT. He was previously Chair of the Nuffield Council on Bioethics and Chair of the Health Research Authority.

Professor Danny Altmann is Professor of Immunology at Imperial College London, where he heads a lab at the Hammersmith Hospital Campus. He was previously Editor-in-Chief of the British Society for Immunology’s ‘Immunology’ journal and is an Associate Editor at ‘Vaccine’ and at ‘Frontiers in Immunology.’

Professor Dave Archard is Emeritus Professor of Philosophy at Queen’s University Belfast. He is also Chair of the Nuffield Council on Bioethics, a member of the Clinical Ethics Committee at Great Ormond Street Hospital and Honorary Vice-President of the Society for Applied Philosophy.

Dr Ana Beduschi is an Associate Professor of Law at Exeter University. She currently leads the UKRI ESRC-funded project on COVID-19: Human Rights Implications of Digital Certificates for Health Status Verification. Professor Sanjoy Bhattacharya is Professor in the History of Medicine, Director of the Centre for Global Health Histories and Director of the WHO
Collaborating Centre for Global Health Histories at the University of York

Dr Sarah Chan is a Chancellor’s Fellow and Reader in Bioethics at the Usher Institute, University of Edinburgh. She is also Deputy Director of the Mason Institute for Medicine, Life Sciences and Law, a Associate Director of the Centre for Biomedicine, Self and Society and a member of the Genomics England Ethics Advisory Committee.

Dr Tracey Chantler is Assistant Professor of Public Health Evaluation & Medical Anthropology at the London School of Hygiene and Tropical Medicine. She is also a member of the Immunisation Health Protection Research Unit, a collaborative research group involving Public Health England and LSHTM.

Professor Robert Dingwall is Professor of Sociology at Nottingham Trent University. He is also a Fellow of the Academy of Social Sciences and a member of the Faculty of Public Health. He sits on several government advisory committees, including NERVTAG (New and Emerging Respiratory Virus Threats Advisory Group) and the JCVI (Joint Committee on Vaccination and Immunisation) sub-committee on Covid-19.

Professor Amy Fairchild is Dean and Professor at the College of Public Health, Ohio State University. She is also Co-Director of the World Health Organization Collaborating Center for Bioethics at Columbia’s Center for the History and Ethics of Public Health.

Dr Matteo Galizzi is Associate Professor of Behavioural Science at the London School of Economics. He is also Co-Director of LSE Behavioural Lab and coordinates the Behavioural Experiments in Health Network and the Data Linking Initiative in Behavioural Science.

Professor Michael Parker is Director of the Wellcome Centre for Ethics and Humanities and Director of the Ethox Centre at the University of Oxford. He is also a member of the Government’s Scientific Advisory Group for Emergencies, the Chair of the Genomics England Ethics Advisory Committee and a non-executive director of Genomics England.

Dr Sobia Raza is a Senior Fellow at the Health Foundation within the Data Analytics team. She is also an Associate and previous Head of Science at the PHG Foundation.

Dr Peter Taylor is Director of Research at the Institute of Development Studies. He was previously the Director of Strategic Development at the International Development Research Centre.

Dr Carmela Troncoso is Assistant Professor, Security and Privacy Engineering Lab at the École Polytechnique Fédérale de Lausanne. She was a leading researcher on DP-3T and is also a member of the Swiss National COVID-19 Science Task Force’s expert group on Digital Epidemiology.

Dr Edgar Whitley is Associate Professor of Information Systems at the London School of Economics. He is co-chair of the UK Cabinet Office Privacy and Consumer Advisory Group and was the research coordinator of the LSE Identity Project on the UK’s proposals to introduce biometric identity cards.

Dr James Wilson is Professor of Philosophy and Co-Director of the Health Humanities Centre at University College London. He is also an Associate editor of Public Health Ethics and Member of the National Data Guardian’s Panel and Steering Group.

The following individuals and organisations responded to our open call for evidence:

  • Access Now
  • Ally Smith
  • Dr Baobao Zhang, Cornell University
  • BLOK BioScience International
  • Dr Btihaj Ajana, King’s College London
  • Consult Hyperion
  • The COVID-19 Credentials Initiative, Linux Foundation Public Health
  • Professor Derek McAuley, Professor Richard Hyde and Dr Jiahong
    Chen, Horizon Digital Economy Research Institute, University of
    Nottingham
  • Dr Dinesh V Gunasekeran, National University of Singapore
  • eHealthVisa
  • The Electronic Frontiers Foundation
  • James Edwards
  • Professor Julian Savulescu and Dr Rebecca Brown, Oxford Uehiro
    Centre for Practical Ethics, University of Oxford
  • Marcia Fletcher
  • medConfidential
  • The PathCheck Foundation
  • Patrick Gracey, Patrick Gracey Productions Ltd
  • Robert Seddon
  • SICPA
  • Susan Mayhew
  • techUK
  • The Tony Blair Institute for Global Change
  • The UK Pandemic Ethics Accelerator
  • Yoti
  • ZAKA
  • Zebra Technologies.

This report was authored by Elliot Jones, Imogen Parker and Gavin Freeguard.


Preferred citation: Ada Lovelace Institute. (2021). Checkpoints for vaccine passports. Available at: https://www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports/

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

Exit through the App Store? A rapid evidence review of the technical considerations and societal implications of using technology to transition from the COVID-19 crisis was undertaken with a view to supporting the Government and the NHS as it adopts technical solutions to aid in the transition from the COVID-19 crisis.

The review focuses on three technologies in particular: digital contact tracing, symptom tracking apps and immunity certification. It makes pragmatic recommendations to support well-informed policymaking in response to the crisis. It is informed by the input of more than twenty experts drawn from across a wide range of domains, including technology, policy, human rights and data protection, public health and clinical medicine, behavioural science and information systems, philosophy, sociology and anthropology.

The purpose of this review is to open up, rather than close down, an informed and public dialogue on the technical considerations and societal implications of the use of technology to transition from the crisis.

Key findings

There is an absence of evidence to support the immediate national deployment of symptom tracking applications, digital contact tracing applications and digital immunity certificates. While the Government is right to explore non-clinical measures for transition, for national policy to rely on these apps, they would need to be able to:

  1. Represent accurate information about infection or immunity
  2. Demonstrate technical capabilities to support required functions
  3. Address various practical issues for use, including meeting legal tests
  4. Mitigate social risks and protect against exacerbating inequalities and vulnerabilities

At present, the evidence does not demonstrate that tools are able to address these four components adequately. We offer detailed evidence and recommendations for each application in the report summary.

In particular, we recommend that:

  • Effective deployment of technology to support the transition from the crisis will be contingent on public trust and confidence, which can be strengthened through the establishment of two accountability mechanisms:
    • the Group of Advisors on Technology in Emergencies (GATE) to review evidence, advise on design and oversee implementation, similar to the expert group recently established by Canada’s Chief Science Adviser;
    • and an independent oversight mechanism to conduct real-time scrutiny of policy formulation.
  • Clear and comprehensive primary legislation should be advanced to regulate data processing in symptom tracking and digital contact tracing applications. Legislation should impose strict purpose, access and time limitations.

Until a robust and credible means of immunity testing is developed, focus should be on developing a comprehensive strategy around immunity that considers the deep societal implications of any immunity certification regime, rather than on developing digital immunity certificates. Full and robust Parliamentary scrutiny and legislation will be crucial for any future regime of immunity testing and certification.

Technical design choices should factor in privacy-by-design and accessibility features and should be buttressed by non-technical measures to account for digital exclusion.

We recognise that technology and data may be critical to enabling the UK to transition from the crisis, and acknowledge that lifting lockdown measures and getting people back to work has not only economic drivers, but social and public health drivers. But the rapid review finds that premature deployment of ineffective apps could undermine public trust and confidence in the long-term, hampering the widespread uptake of tracking technologies which may be critical to their eventual success.

This rapid evidence review is the beginning of our work to ensure new technologies that emerge to reduce the impact of the COVID-19 global pandemic work for people and society. We welcome approaches for further conversations and would be pleased to brief you or your colleagues on our findings, or consider your suggestions as to how to take this work further.

Policy impact

The report has been influential in informing the policy response:  

  • Darren Jones MP brought the report as evidence to the House of Commons Science and Technology Committee, when they considered the implications of proposed contact tracing apps 
  • The report was referenced in the Biometrics Commissioner’s statement on the use of symptom tracking applications, 21 April 2020 
  • The Information Commissioner’s Office included the report as evidence in their report on COVID-related technologies
  • Carly Kind advised Michael Webb, joint Chief Economic Adviser to the Chancellor and Prime Minister, on algorithmic bias in relation to a proposed new post-COVID-19 digital markets policy initiative, 25 June 2020 
  • An unofficial Whitehall source reported that the ‘Exit through the App Store’ rapid evidence review and associated summaries were being referenced in internal government strategy documents about T3C (test, track, and trace), and in ministers’ briefing notes, 5 May 2020
  • A media source reported that ‘Exit through the App Store’ was referenced in briefing documents presented to SAGE
  • Carly Kind joined an international roundtable hosted by CIFAR at the request of Canada’s chief science adviser 
  • Lord Clement-Jones used the rapid evidence review as supporting evidence for an oral question in the House of Lords, 6 May 2020
  • Imogen Parker and Carly Kind were invited to attend a CDEI roundtable with representatives from the Turing and NHSX, to advise on the future of app 
  • The Ada Lovelace Institute hosted a roundtable under Chatham House rules to convene senior public health, technology, academics and civil society organisations, to discuss technologies in development to support the easing of lockdown measures
  • Four organisations made approaches to constitute GATE, the multidisciplinary Group of Advisors on Technology in Emergencies (GATE) proposed in the rapid evidence review, to act as gatekeepers of the deployment of technologies in support of a transition strategy 
  • Reema Patel formally presented the emerging findings from COVID-19 online deliberation to two meetings – the Silver Data COVID19 network (T2) convened by Government, and the Scottish government’s expert advisory group on engagement and participation in COVID-19, convened by DemSoc and Involve. 

Media coverage

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

Executive summary

This discussion paper contributes to the conversation around European Union (EU) AI standards by clarifying the role technical standards will play in the AI governance framework created by the EU’s Artificial Intelligence Act (the ‘AI Act’), and how this may diverge from the expectations of EU policymakers.

In the AI Act, EU policymakers appear to rely on technical standards to provide the detailed guidance necessary for compliance with the Act’s requirements for fundamental rights protections. However, standards development bodies seem to lack the expertise and legitimacy to make decisions about interpreting human rights law and other policy goals.

This misalignment is important because it has the potential to leave fundamental rights and other public interests unprotected.

The research presented in this paper is not conclusive; it is based on the limited, publicly available information about the development of technical standards for the AI Act, as well as feedback from a small number of experts.

However, this information and feedback point to several policy strategies that may be helpful and necessary for the successful implementation of the AI Act. This paper can therefore inform the interinstitutional negotiations (‘trilogues’) on the AI Act and help the European Commission explore these policy strategies.

One approach is to boost civil society participation in the standardisation process, which would improve the diversity of viewpoints and representation of public interests. However, since this is unlikely to provide the political and legal guidance needed to interpret essential requirements, institutional innovations are also proposed.

This discussion paper may also help policymakers outside the EU to understand the feasibility of implementing AI policy through technical standards when developing their own AI regulations. For similar reasons, civil society organisations considering their positions on AI policy proposals may find it informative.

This paper begins by exploring the role of standards in the AI Act and whether the use of standards to implement the Act’s essential requirements creates a regulatory gap in terms of the protection of fundamental rights. It goes on to explore the role of civil society organisations in addressing that gap, as well as other institutional innovations that might improve democratic control over essential requirements.

This is followed by conclusions and recommendations for adapting the EU’s standardisation policy to the goals of the AI Act.

Information about this topic was gathered through legislative and policy analysis, as well as interviews with experts involved in standards development for the AI Act and civil society organisations with expertise relevant to the AI Act. A detailed description of the methodology appears below.

Recommendations and open questions for EU policymakers

Our analysis finds that EU standardisation policy and the AI Act create a regulatory gap. Lawmakers expect that technical standards will clarify and implement the Act’s essential requirements. However, neither the legislative text, nor the technical standards implementing the legislation, are likely to answer the challenging legal and political questions raised by these essential requirements.

Although the European Commission’s standardisation request to Joint Technical Committee 21 (JTC-21) says that adequate fundamental rights expertise and other public interests must be represented in the standards-setting process, most experts identified prohibitive barriers to meaningful civil society participation. These barriers include, but are not limited to: the time commitment, the opacity and complexity of the standardisation process and the dominance of industry voices in that process.

These findings suggest that EU policymakers should explore institutional innovations to fill the regulatory gap, as well as strategies to boost civil society participation.

This paper explores three strategies for EU policymakers to expand civil society participation in JTC-21:

  • Amend the Regulation on European Standardisation to broaden the categories of Annex III organisations eligible for funding and mandated participation, increasing funding for organisations’ participation in line with this.
  • Fund more individuals from civil society organisations with the Commission’s specialised StandICT grants, which provide funding for European standardisation experts to participate in standards development, including for participation in national delegations.
  • Create or fund a central hub to support civil society participation. This would institutionalise activities already carried out by organisations such as the European Trade Union Confederation (ETUC) and the European Consumer Voice in Standardisation (ANEC) that aim to facilitate the contribution of subject-matter experts to standards-setting processes.

The European Commission should also consider institutional innovations to improve democratic control over essential requirements. These include the creation of:

  • Common specifications: The Commission could leverage its right to develop common specifications, which would address the safety and fundamental rights concerns that are not captured by the technical standards that implement EU legislation (known as ‘harmonised standards’).
  • A benchmarking institute: The proposed AI benchmarking institute could take up the questions that JTC-21 avoids or answers inadequately, complementing JTC-21’s procedure- and documentation-oriented standards with more substantive standards.

Further questions

As originally conceived, the EU’s New Legislative Framework (NLF) ensures political decisions remain within EU institutions and decisions made within European Standards Organisations (ESOs) are ‘purely technical’.[1]

The Commission’s Explanatory Memorandum implies this is true of the AI Act, describing harmonised standards as ‘precise technical solutions’[2] for designing AI that complies with essential requirements. Yet, the AI Act effectively delegates political decisions to ESOs. This scenario is unlikely to ensure fundamental rights protections and related policy goals are realised.

This research therefore raises a broader question about the AI Act and the NLF – what role do EU institutions expect standards to play in AI governance?

Before voting on the AI Act, EU policymakers should ask the following questions:

  • How far is the EU delegating political power to private entities?
  • Which private entities are being empowered?
  • Are amendments necessary to safeguard public interests?

These questions will be of particular importance for parliamentarians voting on the AI Act and other institutional players during the ‘trilogue’ negotiations.

There may be a better solution that avoids relying on European standards at all. This path prompts bigger questions:

  • Is a new political theory of AI governance necessary and, if so, what should it be?
  • How could a governance framework be designed to effectively protect fundamental rights and better safeguard the public interest from conflicting corporate interests?
  • How can it balance the incorporation of technical expertise with effective democratic control?

We hope this research will generate discussion among EU policymakers, civil society organisations and standards bodies, about how to expand civil society participation within standards development for the AI Act. For EU policymakers in particular, there are broader questions to consider around the role of standards in AI governance alongside this. Detailed analysis and next steps for policymakers can be found in the chapter on ‘How to fill the regulatory gap’.

Introduction

The Artificial Intelligence Act (AI Act)[3] represents the European Union’s (EU’s) proposed framework to regulate artificial intelligence broadly, beyond specific areas like medical devices. The European Commission’s proposal is designed to achieve several overarching goals: the protection of EU values and citizens’ fundamental rights, health and safety; fostering an innovative and globally competitive AI market; and setting global legal standards and norms.[4]

Fundamental rights protections are particularly prominent in the AI Act. In addition to contributing to the Commission’s ‘ultimate aim’ of ensuring AI ‘increas[es] human well-being’, the Commission expects strong fundamental rights protections to promote uptake and growth of the AI market by fostering public trust in AI.[5]

Much of the legislation outlines substantive rules for the protection of fundamental rights and other public interests, along with requirements for demonstrating compliance with these substantive rules. These rules apply to AI identified in the legislation as ‘high-risk’, meaning it poses a significant risk to fundamental rights, health or safety.[6]

However, the requirements for high-risk systems, known as essential requirements, are phrased in highly general and vague terms in the legislative text of the AI Act. For example, a biometric identification system must feature an ‘appropriate level of accuracy’ to mitigate risks to fundamental rights.[7]

Ambiguous instructions for software design can ‘conflict deeply with [. . .] [a] computer scientist’s mindset’,[8] which relies on precision and clarity. This may make it difficult for AI providers – people or entities who develop AI or have AI developed for marketing or use under their name or trademark – to interpret and operationalise essential requirements, resulting in insufficient protections for fundamental rights and other public interests.[9]

It appears that the Commission intends for standards development bodies to clarify essential requirements by operationalising them in technical standards for use by developers.[10] As in some other product safety legislation, the AI Act empowers the European Commission to request the development of technical standards by private standards development bodies to facilitate compliance with essential requirements.

This is seemingly based on the assumption that standards development bodies are equipped to grapple with questions about human rights and other public interests implicated by the AI Act.

However, standards development bodies typically rely on employees of large technology companies for their outputs and see minimal participation by civil society organisations and other stakeholders.[11] This means they are unlikely to benefit from the legal and policy expertise relevant to the AI Act’s essential requirements.

This situation also creates the possibility that decisions will be made in companies’ best interests, even when they conflict with the public interest.

If neither the legislative text of the AI Act nor standards clarify how to comply with the AI Act’s essential requirements for fundamental rights and other public interests, AI designers may not implement them effectively, leaving the public unprotected.

Whether this is the case is unclear. Little information about the development of standards for the AI Act is publicly available. AI is also a relatively new area in standards development, which makes it difficult to trace the impacts of AI standards on individuals and society, or to understand how AI experts approach these issues in standards development.

What are standards?

 

A standard is a document that ‘describes the best way of doing something. It could be about making a product, managing a process, delivering a service or supplying materials – standards cover a huge range of activities’.[12]

A standard ‘provides rules, guidelines or characteristics for activities or for their results, aimed at achieving the optimum degree of order in a given context. It can take many forms. Apart from product standards, other examples include: test methods, codes of practice, guideline standards and management systems standards’.[13]

Companies can access and license these documents through standards development bodies, which are intended for industry-wide use. For example, electronics companies have standardised the design of electric power plugs and sockets within entire countries and regions, enabling one to use a device manufactured by one company after plugging it into a socket manufactured by another.

What is clear from research in other areas of standards development is that standards can create significant sociopolitical impacts, including fundamental rights impacts, and can be highly contested for political and economic reasons by different stakeholders.[14]

If standards are to play a significant role in the EU’s new approach to AI governance, research is needed about AI standards to assess the AI Act’s suitability. Several questions remain unanswered:

  1. Whether the AI Act creates a regulatory gap for the protection of fundamental rights and other public interests. Will providers of AI systems find it difficult or impossible to comply with these requirements, given the ambiguity of the legislative text and the apparent lack of authoritative guidance from technical standards bodies?
  2. If there is a regulatory gap, is civil society participation in standardisation helpful or even necessary to fill it? Civil society organisations with expertise in human rights law and other policy areas may be able to provide the non-technical expertise necessary to implement the AI Act’s essential requirements in technical standards. They may also help to ensure the public interest is not disregarded in the pursuit of commercial interests.
  3. Assuming there is a regulatory gap for the protection of fundamental rights and other public interests, and that civil society participation can fill this gap, how can policymakers enhance the effective participation of civil society in the development of standards for the AI Act? Few civil society organisations are able to participate in standards development and those that do find it difficult to influence the process. Policymakers may be able to provide them with additional resources and legislative support.

Sources of information

Several types of information can shed light on these questions. This research used legislative analysis and document review, as well as interviews with civil society organisations and participants in standards development.

To understand whether a regulatory gap exists, the AI Act’s text was analysed in conjunction with documentation related to other elements of the European standardisation system. Based on their experience, those involved in standards development are best placed to understand whether and how civil society organisations can provide missing legal and policy expertise in standards development for the AI Act.

Interviews with experts (i.e., interviewees) involved in the development of standards for the AI Act, as well as experts with experience in standards development more generally, helped to answer these questions. Interviewees were mainly experts who are part of working groups of Joint Technical Committee 21 (JTC-21), which is responsible for developing standards to implement the AI Act. JTC-21 is a technical committee created by two of the three European Standards Organisations (ESOs): the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC), jointly referred to as CEN-CENELEC.

JTC-21 working group experts include both representatives of civil society organisations and technologists from industry and academia. Most experts are employees of companies, acting as delegates of the national members of CEN-CENELEC to JTC-21.

Civil society organisations shared insight into the barriers to and facilitators of their participation in standards development. Interviews, workshops and polls – with representatives of organisations both with and without experience in standards development – provided guidance on the resources, policies and norms that can promote or undermine their effective participation.

A major limitation to this research was the small number of interviewees and workshop participants. The names of JTC-21 experts are generally not publicly available, which made it difficult to identify potential interviewees. In the civil society workshop, few participants felt confident contributing actively due to a lack of familiarity with European standardisation and AI.

For more information about interviewees and workshop participants, and the methods used in this research, see the ‘Methodology’ chapter below.

Does the AI Act create a regulatory gap?

The AI Act’s fundamental rights and other public interest protections may be ineffective, due to the discretion the legislative text apparently affords industry in their interpretation.

Modelled on the New Legislative Framework (NLF), the AI Act is designed in a way that assumes standards development bodies will develop the crucial details of high-level rules for the protection of fundamental rights and other policy goals. In the absence of standards, those decisions generally fall to individual companies.

In theory, the NLF restricts political and legal decisions to EU institutions and allocates technical questions about the implementation of legislation to standards development bodies.

In practice, the legislation leaves open many questions about how to operationalise fundamental rights protections and other policy goals, which leaves highly political questions to standards development bodies or companies that generally lack the expertise and incentive to implement them effectively.

What is the New Legislative Framework?

Like other EU legislation regulating certain technologies, such as boats and explosives, the AI Act is modelled on the NLF. The European Commission and Parliament have published several detailed descriptions of the logic behind the NLF and how it works.[15]

NLF legislation features essential requirements, which ‘define the results to be attained, or the hazards to be dealt with, [without] specify[ing] the technical solutions for doing so’.[16] The European Commission requests the development of technical standards, known as harmonised standards, by European Standards Organisations (ESOs) to operationalise essential requirements, providing the ‘precise technical solution’[17] to achieve the desired result.

Harmonised standards help companies to comply with essential requirements by operationalising policy language in a way technologists can understand. Alternatively, a provider can develop their own technical solution ‘in accordance with general engineering or scientific knowledge laid down in engineering and scientific literature’,[18] or by using other technical standards.

While voluntary, the NLF incentivises the use of harmonised standards by offering additional legal certainty, known as a presumption of conformity. This means a Market Surveillance Authority must begin with an assumption that any product designed in line with a harmonised standard complies with the relevant essential requirements, making it more challenging to punish a provider for non-compliance. Though they do not completely shield a manufacturer from liability for failure to meet essential requirements, harmonised standards offer authoritative guidance for satisfying essential requirements that are approved by the European Commission.[19] The Commission creates a presumption of conformity by citing a potential harmonised standard in the Official Journal of the European Union.[20]

This regulatory framework was developed as an alternative to including technical specifications in legislation, as the EU’s legislative process was too slow to meet industry needs.[21]

However, EU institutions consider it imperative to draft NLF laws in a way that ensures all political decisions remain with them, and only technical decisions are made by ESOs. [22]

What do ideal essential requirements look like?

EU institutions explain that they maintain the boundary between political and purely technical decisions by defining essential requirements precisely in legislation.[23] Failure to define essential requirements precisely and preclude misinterpretation by ESOs would risk ‘delegat[ing] political powers to the ESOs and their members’,[24] which EU institutions aim to avoid.

For example, it is the European Parliament’s responsibility to define the maximum permissible level of exposure to a hazard in legislation.[25]

Another element of the NLF that helps to minimise ambiguity is that essential requirements typically set health and safety standards for physical products with limited ranges of use.[26] For example, an essential requirement in an NLF law regulating watercraft specifies maximum decibel levels for noise emissions.[27]

Although there is little general information about how the Commission determines whether a harmonised standard satisfies essential requirements, this determination is apparently based on whether the standard or design reflects the ‘state of the art’.[28] According to the Commission, the ‘assessment of whether requirements have been met or not [is] based on the state of technical know-how at the moment the product is placed on the market’.[29]

What is the role of civil society organisations in standards development?

The Regulation on European Standardisation, which underpins the NLF, requires ESOs to include civil society organisations representing certain societal stakeholders in the development of harmonised standards.[30] This helps to ensure that the interests of people and groups affected by standards are taken into account during their development.

Annex III of the Regulation lists the categories of stakeholder groups that ESOs must consult in the standardisation process. So-called ‘Annex III organisations’ include those representing consumer rights, workers’ rights, environmental protection and small and medium enterprises (SMEs).[31] The Regulation on European Standardisation also empowers the European Commission to fund their participation.[32] A recital justifies this funding and mandatory participation by describing civil society participation as ‘necessary’[33] for the safety and wellbeing of EU citizens, given the broad impact standards can have on society.

Why the AI Act does not conform to the New Legislative Framework

While the AI Act is structured as an NLF law, it diverges from EU institutions’ characterisations of the NLF in several consequential ways.

Essential requirements in the AI Act are ambiguous, potentially leaving them open to interpretation by ESOs. They are worded imprecisely, and sources of clarification outlined in the AI Act and elsewhere appear to be insufficient. Substantively, they cover fundamental rights and other policy areas that are not as easily quantified and operationalised as safety standards.

Additional sources of clarification from public authorities and international standards are likely to be insufficient.

Finally, ESOs’ existing stakeholder representation is unlikely to cover all affected public interests. This is inconsistent with the logic behind the inclusion of Annex III organisations, which is to represent interests affected by NLF legislation.

Unclear essential requirements

Essential requirements for high-risk AI systems appear in Title III, Chapter 2 of the AI Act.[34] High-risk systems are categories of AI deemed to pose a particularly high risk to human health, safety or fundamental rights, such as AI used in education, worker management, biometric surveillance and access to essential services.[35]

As in other NLF legislation, the AI Act’s essential requirements address human health and safety. Unlike most NLF laws, they also broadly address fundamental rights and apply to technologies that affect other policy goals, like the administration of elections.[36]

Essential requirements in the AI Act tend to be worded ambiguously. According to Article 9, the overall level of risk to fundamental rights and health and safety following a risk mitigation process must be ‘acceptable’.[37] Training datasets must be assembled using ‘relevant design choices’.[38] High-risk systems must exhibit an ‘appropriate level of accuracy, robustness and cybersecurity’.[39]

The ambiguity of essential requirements is inconsistent with the Commission’s description of harmonised standards, which must be defined precisely to avoid delegating political power to ESOs.

For example, the Commission specifically uses the choice of a maximum hazard exposure level as an example of a political choice that must remain with lawmakers. In contrast, the AI Act leaves decisions about acceptable levels of risks related to fundamental rights to ESOs and providers.

Moreover, human rights law is far less amenable to quantification, and far more open to interpretation, than safety standards. This is likely to create new challenges for industry technologists trying to operationalise essential requirements.

Inadequate alternative sources of clarification

The AI Act and the EU’s standardisation strategy potentially provide sources of clarification for ambiguous essential requirements, these include: references to the state of the art; European Commission and member state guidance; international standards; and stakeholder representation. However, it is doubtful that any of these sources will be sufficient to meet the needs of providers of high-risk systems. Each source is explored in detail below.

The state of the art

As in other NLF laws, the AI Act implies that the state of the art can help providers and ESOs understand how to comply with essential requirements. However, this is largely inapplicable where fundamental rights are concerned.

Article 9, which describes the risk management system used to determine the overall level of risk to fundamental rights permitted in a high-risk system, states that the designer should ‘take into account the generally acknowledged state of the art’.[40]

Similarly, Recital 49 explains that high-risk systems must ‘meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art’.

The Commission sought to clarify the meaning of ‘state-of-art’ in its draft standardisation request to CEN-CENELEC. Here they said the term ‘should be understood as a developed stage of technical capability at a given time as regards products, processes and services, based on the relevant consolidated findings of science, technology and experience and which is accepted as good practice in technology. The state of the art does not necessarily imply the latest scientific research still in an experimental stage or with insufficient technological maturity.’[41]

More generally, the Commission’s Explanatory Memorandum accompanying the AI Act proposal states that the ‘precise technical solutions to achieve compliance with [essential] requirements may be provided by standards or […] otherwise be developed in accordance with general engineering or scientific knowledge at the discretion of the provider of the AI system’.[42]

During a discussion at a panel event in 2021 hosted by the Center for Data Innovation with members of the European Parliament and others, a Microsoft representative confirmed that the Commission will accept design solutions to address essential requirements that are based on the state of the art.[43]

However, an allusion to the state of the art is unlikely to answer questions about what constitutes an acceptable level of risk to fundamental rights, or what constitutes an appropriate level of accuracy. Unlike the measurement of noise emissions with decibel levels, there is no agreed, one-dimensional metric for measuring risk to fundamental rights, and any metric that is developed will be highly contested.

Whether a rule or practice violates human rights law tends to be context dependent and determinations typically involve balancing various rights and interests. Unlike hearing loss injuries, human rights violations usually cannot be easily quantified or reduced to either-or decisions to demarcate acceptable from unacceptable risk levels.

Furthermore, even if state of the art standards in human rights law and related policy areas existed, the lack of relevant legal and policy expertise in ESOs would make it difficult to identify them.

This means that the state of the art is unlikely to provide sufficient guidance for operationalising essential requirements related to fundamental rights.

Guidance from the European Commission and Member States

The AI Act also envisions ways in which the European Commission and EU member states can provide authoritative interpretations of essential requirements directly to providers. These include common specifications; guidance from a European Artificial Intelligence Board; harmonised standards (HAS) consultants; regulatory sandboxes; and a dedicated communication channel.

However, few details are provided in the legislation or elsewhere about whether or how these resources will be provided. Without information about their implementation, it is difficult to predict whether any will be sufficient to clarify essential requirements. Based on the information available, this appears doubtful.

European Commission guidance

Several aspects of the AI Act and the EU’s standardisation policy enable the Commission to provide guidance for interpreting essential requirements.

Article 41 of the AI Act empowers the Commission to essentially create its own harmonised standards, called common specifications, for providers to use if ESOs’ harmonised standards are incomplete or insufficient. However, there are no publicly available plans in place to develop common specifications, so it is unlikely these will be available to industry when the AI Act comes into effect.

Article 58(c) describes the tasks of a newly established European Artificial Intelligence Board, part of which is to ‘issue opinions, recommendations or written contributions on matters related to […] technical specifications […] regarding [essential] requirements’.

This language suggests that the Board could issue specifications that operationalise essential requirements like appropriate accuracy levels and acceptable levels of risk to fundamental rights for various types of high-risk systems.

While there are no detailed, publicly available plans in place for the Board, it is unlikely the Board would be sufficiently responsive to providers’ questions while juggling advisory work with other administrative tasks.[44]

On 5 December 2022, the Commission issued a draft request to CEN-CENELEC to develop harmonised standards to support the AI Act.[45] . This is the first formal step in the process of developing harmonised standards.

In line with its existing standardisation strategy, the European Commission can provide HAS consultants to ESOs during the standards development process to help interpret essential requirements.[46]

HAS consultants are private contractors from a consultancy firm who play two key roles in European standardisation. First, at certain stages of the standardisation process, they can provide feedback about whether existing drafts conform to an NLF law’s essential requirements.[47] Second, they help the Commission to assess a standard to determine whether it should be cited in the Official Journal, creating a presumption of conformity.[48]

However, it is questionable that private contractors can or should make such weighty decisions as to what constitutes an acceptable level of risk to fundamental rights, such as in the development of a biometric surveillance system used in the processing of asylum seekers.

It is unclear whether HAS consultants for AI Act standards would have expertise in human rights law or other policy goals, like the administration of elections. Calls for expressions of interest from consultants typically require a master’s degree or experience in the relevant industrial sector, for example, but not in human rights law or other areas of public policy.[49]

Member State guidance

The AI Act also calls on EU member states to provide guidance for compliance. Articles 53 and 55 require or encourage member states to provide guidance or help through regulatory sandboxes (e.g. test beds) and ‘dedicated channel[s] for communication’ for smaller providers, respectively.[50]

However, as in the case of guidance from the Commission, the lack of detail makes it unclear how responsive this guidance will be to providers’ needs.

International standards

Another potential source of clarification is international standards.

A large proportion of harmonised standards originates as international standards, later adopted by ESOs because of agreements between ESOs and their international counterparts to prioritise these standards.[51] This means that, if an international standard on a topic already exists, an ESO generally cannot develop a conflicting standard on the same topic.

However, there is no indication that international standards currently under development will address the political facets of essential requirements.[52] Also, the lack of civil society participation suggests international standards are likely to suffer from similar shortcomings in terms of legal and policy expertise and will not provide the necessary guidance for AI Act compliance.[53]

Stakeholder representation

Stakeholder representation within ESOs could potentially facilitate the interpretation of essential requirements for fundamental rights and other public interests.

However, Annex III organisations include only those representing consumer rights, workers’ rights, environmental interests and SMEs. These represent merely a fraction of the fundamental rights and other public interests implicated by the AI Act.

Nevertheless, civil society participation in the development of harmonised standards is likely to be the most promising strategy to fill the AI Act’s apparent regulatory gap. It appears this is the only source of non-technical expertise with a record (discussed below) of providing advice about the protection of fundamental rights and other public interests to ESOs.

As such, policymakers would benefit from a better understanding of civil society organisations’ current and future roles in European standardisation. In particular, it is important to understand whether rules governing their participation must be updated to accommodate an expanded remit in the development of harmonised standards for the AI Act.

Open questions

 

An analysis of the AI Act’s text reveals a possible regulatory gap. Although a core goal of the Act is to protect fundamental rights and other public interests beyond health and safety, it does not guarantee that clear rules or authoritative guidance will be available to providers to ensure this goal is realised.

 

This leaves open the questions of whether civil society participation in European standardisation could fill the gap and, if so, how policymakers can bolster this participation.

Experts’ views on civil society participation

Interviews and reviewed documents exposed high barriers to effective participation by civil society organisations, as well as several existing and potential facilitators of participation.

While interviews with standardisation experts revealed perceived benefits to civil society participation, these generally did not include the interpretation of legislation or human rights law. This is largely due to Joint Technical Committee 21’s (JTC-21) avoidance of these topics.

For more information on the methodology, as well as a list of experts who were interviewed, see the ‘Methodology’ chapter below.

High barriers to effective civil society participation

Civil society representatives both with and without experience in European standardisation identified several significant barriers to effective participation. These include restrictive eligibility criteria for existing opportunities, burdensome time commitments, an inability to navigate complicated standardisation processes, industry dominance and a lack of awareness and interest.

Limited opportunities for civil society participation

Opportunities for participation by civil society organisations in JTC-21 are limited. These include participating as an Annex III organisation, a CEN-CENELEC liaison organisation, and direct or indirect participation through a National Standardisation Body (NSB). Even when an organisation qualifies for one of these opportunities, formal and practical impediments prevent it from wielding significant influence.

Participation as a liaison organisation

CEN-CENELEC conditions for liaison participation prevent most civil society organisations with expertise relevant to the AI Act’s fundamental rights protections and policy goals from participating in JTC-21.

An organisation can apply to CEN-CENELEC for permission to participate in JTC-21 as a liaison organisation to represent interests affected by its standardisation activities.[54]

One eligibility criterion is that the organisation must have representatives in at least four CEN-CENELEC NSB member states, and those representatives must be businesses or organisations, rather than individuals.[55]

In a survey of workshop participants whose organisations have relevant expertise but are not involved in JTC-21, less than a quarter satisfied this requirement.

Interviewees, including both technologists and civil society representatives, could name only one liaison organisation currently involved in JTC-21. It is called ForHumanity and specialises in the independent auditing of AI and autonomous systems.[56] Few if any other liaison organisations represent non-commercial interests.[57]

Participation as an Annex III organisation

Few civil society organisations receive funding from the European Commission for participation in European standardisation.

As discussed above, the European Commission funds the participation of civil society groups representing consumer and labour rights, as well as environmental interests, and requires ESOs to include them in standardisation activities.[58] They are called Annex III organisations because the categories of organisations eligible for funding are listed in Annex III of the Regulation on European Standardisation.

However, the eligibility conditions are restrictive and the Commission funds only one organisation per stakeholder category. In the workshop held for civil society organisations, none met the eligibility requirement of having mandates from organisations in at least two-thirds of EU member states.

In theory, each Annex III organisation represents the views of their national counterparts. For example, the European Trade Union Confederation (ETUC) collects and represents the views of national trade unions in standardisation activities, including JTC-21 work. However, Annex III organisations find it challenging to interest their national counterparts in education and research on standards, according to interviewees Philippe Saint-Aubin, a JTC-21 expert working on behalf of ETUC, and Chiara Giovannini of ANEC, who represent their respective Annex III organisations in standards development.

Participation as a National Standardisation Body (NSB)

While eligibility criteria for participation in NSBs may be less strict, civil society organisations find it difficult to influence national standardisation activities.

At the national level, civil society organisations usually have opportunities to participate directly or indirectly in JTC-21 activities through their NSB’s mirror committee or through public comments. A mirror committee exists to gather national stakeholders’ views about the activities of a European or international technical committee, such as JTC-21.

Though the rules of NSBs vary, a civil society organisation generally has an opportunity to contribute feedback in a mirror committee and can potentially act as an NSB’s delegate to JTC-21.[59] A delegate represents the positions of an NSB in standardisation activities, as NSBs make up the membership of CEN-CENELEC.

However, it can be difficult for civil society organisations to join and wield influence in an NSB mirror committee.

Interviewee Chiara Giovannini of ANEC, an Annex III organisation representing consumer rights, finds that the consumer voice is ‘frequently absent and disregarded’ in NSBs. Contributions from Giovannini’s national counterparts in NSBs have been disregarded because, by missing meetings, the organisations lost good standing, in accordance with the NSB’s rules.

Even when they are able to participate, civil society representatives are usually vastly outnumbered and outvoted by company representatives. It was because of civil society’s ‘weak’ representation in NSBs, relative to industry, that the European Parliament recognised the need for financial and political support for what are now known as Annex III organisations.[60]

NSBs also offer members of the public – including civil society representatives – the opportunity to read and comment on draft standards after registering for an account on the NSB’s website.[61] This is possible during a limited period of time, after a CEN-CENELEC technical committee has completed a draft standard, and before all suggested amendments are considered in the comment resolution stage.

NSBs can consider public comments and decide whether to submit them to the CEN-CENELEC technical committee for consideration in the comment resolution stage. While most or all AI-related standards are not yet available for public review and comment, this option appears to be used rarely in all standardisation categories.

Giovannini also points to a lack of dedicated funds for civil society participation in NSBs as a barrier.

Time commitment

As a time-intensive process, European standardisation excludes many organisations that cannot afford to commit full-time personnel. The development of a European or international standard generally requires between two and five years, according to interviewees David Filip, of the Organization for Standardization (ISO), and Philippe Saint-Aubin, who both participate in JTC-21’s work. New areas like AI require closer to five years.

Saint-Aubin, a representative of the ETUC, an Annex III organisation, finds that each standard requires between two and ten hours per month for meetings, comments and reading. Earlier stages require less time, while the more significant comment resolution stage, during which proposed changes to a draft standard are negotiated and resolved, requires closer to ten hours.

Saint-Aubin judges that standardisation is too time-consuming for most trade unionists, who are already ‘overbooked’. Interviewee Mary Towers, of the Trades Union Congress (TUC) in Britain, confirms this view, finding it difficult to juggle standardisation work with her many other responsibilities.

Participation in standardisation must also be continuous to be effective. In the experiences of civil society representatives interviewed, it was essential to attend every, or nearly every, meeting of a working group developing a standard to maintain the credibility necessary to influence the group.

According to Saint-Aubin, it is important to begin contributing as early as possible, during less critical stages of the process, to develop the standing necessary to influence the more important later stages; otherwise, their views will be disregarded. Early and continuous participation gives other experts confidence that one can be trusted to bring a valuable perspective to the process.

On a practical level, Mary Towers describes how, given the complexity of standards development, missing a single meeting or joining halfway through can also cause a civil society representative without extensive experience to feel confused about the proposal under consideration.

Additionally, a technical committee typically develops multiple standards at any given time, causing even Annex III organisations to refrain from contributing to many of them.

As such, Saint-Aubin reports that, despite their funding from the European Commission, even Annex III organisations lack the human resources necessary to participate in every working group of a technical committee. An organisation would likely require more than one full-time expert to participate actively in the development of all AI standards relevant to its mission.

Moreover, organisations like the ETUC are responsible for participating in standards development in areas outside of AI.

A small survey of civil society representatives not participating in JTC-21 activities, but whose organisations have expertise relevant to the AI Act’s fundamental rights protections, revealed that none were certain their organisations could make this time commitment. Most were certain their organisations could not spare this much time, and only one was unsure. This time commitment was also the most frequently listed barrier to their potential participation in a question with open answers.

Opacity and complexity

The opacity and complexity of the standards development process can be particularly challenging to those without extensive experience. Mary Towers finds standards development to be ‘distant’ and ‘difficult to navigate’. Even jargon is ‘a real barrier’ that can make the process ‘inaccessible’.

Lack of awareness and interest

A lack of awareness and interest in standards development emerged as another key challenge for civil society representation.

Mary Towers has the impression that there is low awareness among trade unionists about the relevance of standards to their work. This is likely to be because standards development happens outside of their workplaces and does not fall within the realm of most workers’ immediate experiences.

Philippe Saint-Aubin finds that national trade unions are interested in standards that impact workers more tangibly, such as those dealing with health and safety or human resources, and do not prioritise AI standards.

A consumer rights representative from the Annex III organisation ANEC has found it difficult to interest a coalition of civil society groups specialising in AI policy in standards development. Chiara Giovannini attempted to solicit feedback from the fifteen organisations in the coalition about how to reword ambiguous essential requirements, due to concerns about how they would be interpreted by ESOs, but received no feedback.

Industry dominance

Civil society representatives often find themselves unable to influence final decisions made in standards development because they are vastly outnumbered by industry representatives. This is problematic because industry preferences can conflict with the public interest.

Most experts in CEN-CENELEC working groups are employees of large companies, sent as delegates to represent NSBs. This is because few organisations, besides large companies, have the resources to pay full-time staff to work on standardisation, according to David Filip. When industry and civil society opinions diverge, industry views take precedence.

Decisions made by industry representatives can undermine the public interests civil society organisations aim to promote. For example, in European standardisation, Chiara Giovannini of ANEC finds that industry representatives tend to interpret ambiguous essential requirements in line with existing industry practices, even when these practices are inconsistent with the spirit of the legislation.

In the development of an international standard that defined maximum surface temperatures for household appliances, Giovannini found that industry representatives preferred to codify existing norms, despite rigorous empirical evidence that ANEC had gathered from scientific experts in burns hospitals demonstrating that these norms were unsafe. She believes this was because it would be more expensive to use alternative or thicker materials to prevent burns.

Giovannini also participated in the development of a European standard that the European Commission declined to reference in the Official Journal due to its failure to meet accessibility requirements.

Part of the standard addressed the degree of colour contrast featured in lift button panels and was intended to implement an EU Directive on lifts and lift safety components.[62] Those drafting the standard were primarily representatives of five dominant lift manufacturers in Europe, and they chose a colour contrast level that was deemed too low for visually impaired people.

Civil society representatives have little recourse in these situations. While delegates of NSBs – who are almost always industry representatives – have voting rights in CEN and CENELEC, civil society organisations do not.[63] Giovannini finds influencing a vote on a standard to be even more challenging than influencing the content of a standard.

While Annex III organisations have the right to appeal a decision, Giovannini finds that the process is too labour-intensive to exercise it as often as ANEC otherwise would.

NSBs may give civil society organisations voting rights when developing views for the NSB to bring to ESO technical committees, but industry votes usually or always outnumber them, according to Giovannini.

This lack of influence is reflected in standards’ content. After contributing feedback to a 60-page standard, Saint-Aubin found that the ETUC’s suggestions appeared in only one footnote. He says that this work can be disappointing. Giovannini recalls six years of ANEC contributions to the development of one standard, which resulted in the modification of only one line.

At the same time, according to David Filip, who participates in JTC-21’s work, even industry actors and technologists are ‘lucky’ if their contributions appear in one-to-three lines of a final product, given the intensive editing process involved.

Facilitators of civil society participation

Civil society organisations also shared views about what does or would facilitate their participation in JTC-21, or European standardisation generally. These include a central resource for information to facilitate ad hoc participation, funding and education.

Central resource for information

By acting as a hub for information and activity, Annex III organisations facilitate participation of national civil society organisations in standards development.

Mary Towers of the TUC in Britain participates in a standardisation committee for trade unions, as well as an AI taskforce, both organised by the ETUC, an Annex III organisation.[64] ETUC representatives share information and documents related to standardisation activities at the European and international levels via email, giving members of the committee and taskforce opportunities to provide feedback without participating in standardisation directly. This enables Towers to participate when she has enough time to do so.

In addition to gathering the perspectives of national organisations, Annex III organisations can also funnel positions to national organisations.

Chiara Giovannini describes ANEC as a hub of information used by national consumer rights advocates participating in NSB activities. ANEC provides research and positions to national counterparts that would like to participate in NSB activity but lack the resources to do so independently.

Funding

Many interviewees identified funding as a vital resource for expanding civil society participation in European standardisation.

Giovanni points out that sufficient funding can enable organisations to set up specialised departments on standards development, hire experts to attend more meetings, organise lobbying campaigns and commission scientific studies to provide empirical evidence.

Likewise, Philippe Saint-Aubin, an expert working on behalf of the ETUC, thinks additional funding would be useful for organisations to hire more experts to participate, and Mary Towers identified funding as a critical resource.

Given how crucial it is, Giovannini also argues that policymakers must choose to either significantly increase funding for civil society participation to represent the public interest adequately in standardisation or refrain from implementing public policy through standards.

Education and training

Raising awareness of the relevance of AI standards to civil society organisations, as well as training them to participate in European standardisation, is also likely to be essential in promoting effective civil society participation.

According to Philippe Saint-Aubin, even with more funding for civil society participation, organisations would still be hampered by a lack of potential experts. While national trade union members could potentially supply these experts to represent labour interests, they do not prioritise standardisation. This means few trade unionists are willing and able to navigate the AI standards development process.

Those without extensive experience and training that do venture into standards development often stop participating because of confusion about the process. Saint-Aubin thinks more education is needed about the importance of standards to workers, as is training in the procedures of standards development.

Similarly, Mary Towers identifies a need for more education about the relevance of standards within trade union affiliates, as well as training in the process of standards development. After organising a training session with ETUC representatives, Towers found that several of her colleagues became interested enough in standards development to attend a workshop about the design of the Alan Turing Institute’s AI Standards Hub.[65]

The value of civil society participation

Interviews with JTC-21 experts, most of whom are technologists, focused on the benefit civil society organisations bring to AI standards development, as well as the costs. There was a particular focus on whether civil society organisations can support JTC-21 to interpret ambiguous key terms from essential requirements that relate to the protection of fundamental rights and other public interests.

While most participants found inclusivity to be helpful by providing otherwise missing perspectives and information, the interpretation of ambiguous essential requirements was not identified as a benefit.

Providing diversity of viewpoints and building consensus

Standardisation experts generally found civil society participation in standards development beneficial or even essential.

David Filip, the convenor of a working group on AI trustworthiness in the ISO, has found Philippe Saint-Aubin’s contributions to various standardisation activities valuable.

Filip noticed that the ETUC representative shaped the working group’s agenda. Saint-Aubin contributed to the development of a roadmap for the group’s standardisation activities and highlighted opportunities to promote the 8th UN Sustainable Development Goal (SDG), which focuses on decent work and economic growth.[66]

This is relevant because the ISO encourages the development of standards that help users address SDGs, and others in Filip’s working group tend to focus mainly on the 9th SDG, which addresses industry, innovation and infrastructure.[67]

Based on these experiences, Filip thinks it is important to have a more ‘representative’ and ‘balanced’ standardisation process, because ‘the stakes are too high’ in the field of AI to exclude non-industry voices.

He finds that having the right team in place from the beginning of a standardisation project is the most important factor in the project’s success, and that a more inclusive group with civil society representation can help the group to see an issue from every angle.

While chairing a working group in the Institute of Electrical and Electronics Engineers (IEEE) that is developing a standard for algorithmic bias considerations, Ansgar Koene, who represents the British Standards Institution (BSI) in JTC-21, has noticed civil society representatives and others with non-technical backgrounds making unique contributions to high-level thought and planning.

Participants without computer science backgrounds sparked the idea for annexes covering cultural dimensions of bias and different jurisdictions’ legal approaches, which would otherwise not have been included. The annex on varying cultural norms is intended to help providers adjust risk assessments for bias in different cultural contexts.

Participants with social science backgrounds led the stakeholder identification activities, helping Koene’s group to identify stakeholders beyond the more obvious categories of people with specific legal protections.

Koene finds civil society input particularly useful for the ‘cultural dimensions’ of standards development and identifying sensitivities. He says that lived experience shared by civil society representatives can be the most valuable input.

Adam Leon Smith, another BSI representative in JTC-21, who also gained experience in international AI standardisation prior to JTC-21, finds civil society participation beneficial when the participants have relevant subject-matter expertise. For example, he would find it helpful to have an expert in homelessness involved in the development of a standard related to banking, given the particular challenges this group might face, but not necessarily an expert in voting rights.

Another benefit of civil society participation, according to David Filip, is that it helps to build a more durable consensus. If a standard is developed to reflect the views of all affected interests, it is less likely that excluded interests will identify and object to shortcomings at a later stage.

Even when standards developers focus only on implementing legislation, rather than interpreting it, Chiara Giovannini of ANEC finds broad stakeholder participation beneficial. For example, a standard for recordkeeping procedures may not require experts to interpret human rights law directly, but decisions they make can indirectly affect a person’s human rights, such as the right to an effective remedy. In these cases, it is useful to have civil society representatives present to spot issues and make recommendations.

There were few perceived downsides to participation by civil society organisations. While David Filip finds that greater inclusivity increases the amount of time needed to reach a consensus, he judges that its benefits outweigh the time costs.

Avoiding legislative interpretation

Interviewees report that JTC-21 working group experts tend to avoid making more granular decisions about interpreting ambiguous legislative terms pertaining to fundamental rights and related public interests. Experts, including both technologists and civil society representatives, feel that these decisions should be made primarily by lawmakers. As a result, they do not seek this input from civil society representatives.

Ansgar Koene observes that JTC-21 working group experts ‘dance around’ questions raised by the interpretation of terms like ‘appropriate level of accuracy’. His sense is that experts feel they ‘do not have the right’ to make these decisions, as the issues are too ‘sensitive’, and JTC- 21 has not been authorised to define societal norms.

Instead, his working group and others in JTC-21 focus on procedures and documentation that will enable public authorities to assess a system’s compliance with policies they have made, such as thresholds they have set for accuracy or risk levels. These standards will instruct a provider about which steps to take, or which boxes to tick, to demonstrate that issues like fundamental rights have been considered fully, and how to document these steps. They will not specify thresholds or benchmarks to meet.

Adam Leon Smith witnessed a similar tendency in the development of an international standard on algorithmic bias. Although this standard does not implement legislation, the committee frequently discussed whether or how to address fairness. Ultimately it avoided defining fairness because there was too much cultural variation in its meaning.

From David Filip’s perspective, there are two major challenges to standardising human rights protections.

The first challenge is that human rights risks in AI are multi-dimensional, making it infeasible to develop a single metric to measure risk to fundamental rights. In contrast, product safety standards developed for most New Legislative Framework (NLF) legislation typically address one-dimensional risks to human life or physical injuries.

Where fundamental rights are concerned, multiple rights may be implicated by AI. A developer may need to make difficult legal assessments about a design feature that protects one right but interferes with another right. Legal balancing tests and similar analyses normally fall within the purview of a constitutional court or legislature, which have the expertise and legitimacy to make such determinations.

According to Filip, a technical committee can determine how to minimise the number of workers killed by machinery, for example, but not which degree of privacy intrusion is acceptable to prevent a worker from being injured. This is an ‘unsolvable problem’ for which JTC-21 cannot and will not take responsibility.

The second challenge is that the standardisation of risk management for a product depends on the sequential development of several interdependent standards.

For example, Filip’s ISO working group on trustworthiness first defines qualitative characteristics of trustworthiness in standards, such as robustness, and then determines how to measure them in subsequent standards. From his perspective, only after these steps are complete does it make sense to require a certain threshold of a characteristic in law.

Equivalent preliminary standards would be necessary to develop the standards envisioned in the AI Act. However, that work has not yet been completed, and will not be complete before the AI Act goes into effect, in 2023 or 2024.

Similarly, James Davenport, a representative of the BSI, thinks that, in the absence of operationalised definitions of risks to human rights that are produced by lawmakers or otherwise socially accepted, JTC-21 cannot develop standards for acceptable levels of risk.

Davenport illustrates this point with the hypothetical example of avoiding gender-based discrimination resulting from the use of hiring software (a type of high-risk AI system). He points out that no UK or EU law specifies whether the output of a shortlisting programme should be a list in which there are equal numbers of applicants with each gender, the proportion of each gender reflects the original applicant pool, or some other pattern.

Yet ‘no answer is not good enough for a computer programme’, says Davenport; they ‘need to have an answer’. He thinks it is ‘not reasonable’ for policymakers to ask something of standards development bodies that policymakers have not done themselves.

Without operationalised definitions of risk to fundamental rights, questions about what constitutes acceptable or appropriate levels are ‘not scientifically sound’, according to Davenport. For this reason, he thinks that it is not helpful to have civil society organisations available to help interpret these provisions of the AI Act.

On the other hand, Davenport is confident that JTC-21 can deliver process-oriented standards. Representatives of Annex III organisations hold similar views.

 

Philippe Saint-Aubin, an ETUC expert, states that ‘nobody wants standards to replace law and policy’, so standards development organisations aim to avoid specifying what should be covered in national laws. Rather than creating substantive rules that overlap with regulation, Saint-Aubin encourages the incorporation of social dialogue in international standards affecting workers’ rights. This is because different national trade unions may have different views, and some workers may end up in a worse position with a uniform set of rules.

How to fill the regulatory gap: analysis and next steps for policymakers

Summary of the research findings

An analysis of the AI Act and documents pertaining to EU standardisation policy suggests that the AI Act does create a regulatory gap. Neither the legislative text, nor harmonised standards implementing the legislation, are likely to answer challenging legal and political questions raised by essential requirements.

Little information is available about most other potential sources of authoritative interpretations of essential requirements, but the evidence suggests they will be inadequate to meet providers’ needs.

Although Joint Technical Committee 21 (JTC-21) aims to avoid interpreting the AI Act’s essential requirements for fundamental rights and related public interests when developing standards, most of the standardisation experts interviewed value inclusive civil society representation. Their expertise can provide otherwise missing viewpoints and knowledge and facilitate consensus-building.

However, civil society organisations face significant barriers to effective participation in JTC-21 and standards development generally. While there are several opportunities for direct civil society participation in JTC-21, most civil society organisations are ineligible to take advantage of them, and those that do face major barriers to participating effectively.

Challenges include the size and inflexibility of the time commitment, the opacity and complexity of the standardisation process, disempowerment by industry dominance in the standardisation process and a lack of awareness about the relevance of European and AI standards. Though eligibility criteria for public comments can be less restrictive, this option limits participation to a narrow window of time, and civil society groups appear to be unaware of or uninterested in it.

Feedback from civil society representatives suggests several resources could increase the amount and effectiveness of their participation in standards development. These include education about the relevance of standards to organisations’ missions, training in how to participate and funding for participants.

However, even with increased civil society participation in JTC-21, the ambiguity of the AI Act’s essential requirements for fundamental rights protections and other public interests limits the types of standards deliverables JTC-21 can produce.

This means JTC-21’s harmonised standards are unlikely to clarify how providers can comply with the AI Act’s essential requirements for the protection of fundamental rights and related public interests. This leaves challenging political and legal questions to providers.

These findings suggest that EU policymakers should explore strategies to boost civil society participation, while also exploring institutional innovations to fill the regulatory gap.

Expanding civil society participation in JTC-21

The European Commission and Parliament can explore several strategies to bolster civil society participation in JTC-21. These include increasing the number and diversity of Annex III organisations, expanding eligibility criteria for Commission grants to individuals and creating or incentivising the creation of a civil society hub.

This could increase JTC-21’s viewpoint diversity and balance the relative representation of public and commercial interests.

Why increase civil society participation?

There are several reasons why the European Commission and Parliament should develop strategies to boost the number and effectiveness of civil society organisations in JTC-21.

First, civil society representatives with expertise in human rights law and public policy provide valuable input, even if that input does not involve interpreting legislation or human rights law.

They can provide missing perspectives and information, such as different cultural perspectives on ethical questions or lived experience, which Ansgar Koene, who represents the BSI in JTC-21, found useful in the development of a standard on algorithmic bias considerations.

As an ANEC representative pointed out, they can identify and make recommendations for indirect human rights impacts, such as the ways in which record-keeping practices can impact the enforcement of human rights protections.

Second, a more equal balance of civil society representatives and employees of large companies could avert decisions made in companies’ interests that conflict with the public interest.

JTC-21 may be less likely to design standards in line with existing industry practice when empirical evidence shows that alternative interpretations produce better outcomes for the public (although nothing suggests that this is currently a problem in JTC-21). Whereas a lone ANEC representative may be unable to influence a working group dominated by industry representatives who have voting rights, as Chiara Giovannini of ANEC has found, a coalition of civil society representatives may be more successful.

Even if expanded participation may prolong the consensus-building process, the benefits for increasing the quality of the standard will likely outweigh the time cost, according to JTC-21 participant David Filip. This is particularly relevant where harmonised standards are concerned; the Commission frequently declines to cite potential harmonised standards in the Official Journal since harmonised standards (HAS) consultants find they do not conform to essential requirements.[68]

How can civil society representation be increased?

In light of these benefits, the European Commission and Parliament should explore strategies to increase the representation of civil society organisations in JTC-21.

Several options have emerged from this research, including:

  • broadening the categories of Annex III organisations eligible for funding and mandated participation by amending the regulation on European Standardisation, and increasing funding for organisations’ participation in line with this
  • funding more individuals from civil society organisations with the Commission’s specialised StandICT grants, including for participation in national delegations
  • exploring ideas for the creation of a central hub to support civil society participation.
Amend the regulation on European Standardisation

Annex III of the Regulation on European Standardisation lists the types of civil society organisations eligible for EU funding and mandated involvement in European standardisation. Currently the list includes groups representing consumer, environmental, small and medium enterprises (SMEs) and social interests, with social interests defined as employees’ and workers’ rights.[69]

Parliament could amend the Regulation to add new categories of stakeholder groups to Annex III.[70] Categories could correspond to each of the fundamental rights and policy areas implicated by the AI Act’s essential requirements and high-risk systems, such as privacy and surveillance, fair elections and the right to an education.

The logic behind the Regulation on European Standardisation arguably demands this amendment. The Regulation justifies the participation of Annex III organisations by standards’ ‘broad impact on society’,[71] making it ‘necessary [to strengthen] the role and the input of societal stakeholders’.[72]

Yet the shortlist of Annex III organisations was created a decade ago, when New Legislative Framework (NLF) laws dealt mainly with single-use manufactured products, and essential requirements dealt mainly with health and safety.

Given that the AI Act will expand the scope of standards’ societal impacts on fundamental rights generally, as well as a variety of new policy areas, the breadth of interests represented by Annex III organisations should expand correspondingly. Otherwise, stakeholder input would reflect only a narrow portion of standards’ societal impacts.

Consequently, the European standardisation system would privilege environmental interests over students’ interests in the development of standards for AI used in education, for instance, which would not prioritise the interests most affected.

Amending Annex III would mitigate some of the challenges to effective civil society participation identified in this research. It would create new funding opportunities, particularly for organisations that wish to hire experts to focus on standards development full-time, as civil society representatives often struggle to meet the demanding time commitment along with their other responsibilities.

Increasing the number of potential Annex III organisations could also help to balance the numbers of experts representing public and company interests. Another outcome might be an increase in more ad hoc participation by creating new hubs for civil society activity.

One Annex III organisation, the European Trade Union Confederation (ETUC), solicits feedback from national trade unions and some civil society organisations and experts focused on labour rights to inform its work in European and international standardisation. Another, ANEC, feeds information to national consumer rights groups that wish to participate in National Standardisation Bodies (NSBs).

New Annex III organisations could play these roles for different interests impacted by the AI Act. As a result, harmonised standards developed by JTC-21 would be more likely to successfully implement the AI Act’s essential requirements.

Expand StandICT grant eligibility

Through the StandICT.eu Fellowship Programme, the European Commission provides funding for European standardisation experts to participate in standards development. Funds can be used for travel expenses or time to participate in or prepare for meetings.

While the most recent call for applications welcomes those with expertise in some areas relevant to the AI Act’s essential requirements, such as data governance, privacy and justice, it does not reference fundamental rights generally.[73]

With its influence, the Commission could encourage the programme to expand eligibility criteria to include those who have expertise in additional fundamental rights and policy areas.

Additionally, the StandICT website states that funds are only available for international standardisation activities, suggesting they are available for work in the International Organization for Standardization (ISO) but not in CEN-CENELEC. However, funds can be granted for work in CEN-CENELEC and other standardisation organisations. This should be clarified so that civil society organisations wishing to participate in JTC-21 or NSB mirror committees know they can apply for funding.

Dedicated civil society hub

The European Commission could also create or fund a hub for civil society participation in European standardisation. This hub could institutionalise activities already carried out by the ETUC and ANEC that enable organisations and experts to contribute to European standardisation when they otherwise could not. Its design could be based on best practices derived from other central resources created for standardisation.

A civil society hub could be designed to mitigate several of the challenges to effective civil society participation.

Through outreach to civil society organisations focusing on fundamental rights and public interests beyond health and safety, the hub could make organisations aware of the relevance of European standardisation to their organisations’ missions. This would mitigate a major challenge to civil society participation, which is that most organisations are unaware of its importance, according to interviews with civil society representatives.

The hub could provide training and continued support on procedures, terminology, English language translations and other aspects of the standardisation process that tend to intimidate or frustrate the efforts of newcomers and ad hoc participants. This would mitigate problems observed by Mary Towers of the Trades Union Congress (TUC) and Philippe Saint-Aubin, an ETUC expert, in the chapter on ‘Experts’ views on civil society participation’.

It could also provide technical expertise to enable those without technical backgrounds to better understand standards’ contents, reducing another barrier to effective civil society participation.

To facilitate ad hoc participation by organisations excluded by the time requirements, one or more point persons in the hub could collect and represent civil society views continuously throughout the standardisation process.

When designing the hub, the Commission could look to the successes and failures of other attempts to create central resources for participation in standards development.

One of these is the European Multi-Stakeholder Platform on ICT Standardisation, which is established by the Commission.[74] The Multi-Stakeholder Platform appoints very few civil society members, most of which are Annex III organisations.[75]

Another example is the Alan Turing Institute’s AI Standards Hub. Although the Hub is new, one of its stated aims is to educate and train stakeholders, including civil society, in international standards development.[76] The Commission may be able to learn about best practices and pitfalls to avoid.

Summary of strategies for increasing civil society participation

Whilst some policymakers or industry representatives may object to these policies, arguing that they would require additional EU funding or improper interference with private entities, improved civil society representation can help to ensure essential requirements uphold fundamental rights and other public interests.

According to Chiara Giovannini of ANEC, if policymakers wish to implement public policy through standards while relying on civil society to represent the public interest, then civil society participation must be funded adequately; otherwise, policymakers must disentangle standards from public policy.

Even with these changes, JTC-21’s understandable reluctance to make political decisions on standards could create a fatal flaw in the AI Act’s regulatory strategy, necessitating deeper reforms to the standards development process.

Institutional innovations for democratic control over essential requirements

If the AI Act continues to rely on a decades-old regulatory framework designed for product safety legislation, the European Commission and Parliament should explore possibilities for institutional innovations that adapt the NLF to the AI Act.

As things stand, the AI Act’s harmonised standards will not fulfil their intended function, which is to clarify for providers how to design AI in accordance with requirements about fundamental rights and other public interests. Neither the legislation nor other sources of clarification are likely to deliver this information.

Lacking authoritative interpretations of essential requirements, providers will face legal uncertainty in their attempts to comply with the AI Act. This would both negate the purpose of the NLF and endanger fundamental rights and other public interests.

Institutional innovations designed to answer tricky political and legal questions could fill this regulatory gap, while also creating opportunities for stronger democratic control and the inclusion of more legal and policy expertise in standardisation. This would be likely to result in the more successful implementation of the AI Act’s public interest protections.

Why are institutional innovations required?

Preliminary interviews with JTC-21 experts revealed that they are generally unwilling to develop harmonised standards for essential requirements that involve political judgements, due to a perceived lack of legitimacy.

This aligns with EU institutions’ views on the NLF, according to which ‘essential requirements […] should be defined precisely in order to avoid misinterpretation on the part of the ESOs or leaving them to make political choices’.[77] It is imperative to avoid the risk of ‘delegat[ing] political powers to the ESOs’.[78]

Though the AI Act includes other potential sources of clarification for essential requirements, there is uncertainty about whether they will meet providers’ needs.

A new European Artificial Intelligence Board will be empowered to provide advice about the implementation of the AI Act, including technical specifications related to essential requirements.[79] However, the high-level outline of the Board’s responsibilities in the proposal does not guarantee that the Board will fill the gaps left by JTC-21.

Even if the Board attempts to provide this guidance, it will juggle this task with other responsibilities, such as the coordination of member state enforcement and administration of regulatory sandboxes.[80] Whether the Board will have the resources and vision necessary to carry out all of these tasks effectively remains to be seen.

Additionally, the Commission can clarify essential requirements by issuing common specifications. Common specifications are implementing acts – a type of streamlined EU legislation – that are functionally equivalent to harmonised standards. They are permissible when harmonised standards are absent or insufficient to implement the AI Act.[81]

Finally, while HAS consultants may be on hand to clarify legislative terms and legal matters to JTC-21, this will not fill the regulatory gap if JTC-21 continues to avoid political questions. Were JTC-21 to reverse its position, this would raise concerns about the legitimacy of its decisions.

For the NLF to work with the AI Act, providers will need additional sources of authoritative guidance for essential requirements that have sufficient democratic or political legitimacy.

How could institutional innovations be implemented?

The European Commission can explore the possibilities of common specifications and a benchmarking institute to provide missing guidance in the implementation of the AI Act. These mechanisms could also create opportunities to build effective democratic control or oversight into the EU’s AI governance policy, along with sufficient legal and policy expertise.

Common specifications

The European Commission could use the AI Act as an opportunity to create a novel standardisation process that incorporates sufficient legal and policy expertise, while allowing for more democratic control.

Re-designing procedures for common specifications in a way that ensures civil society organisations and other experts are consulted more widely than in ESOs would be one way to do this.

Choosing the route of common specifications was seemingly ruled out when the Commission issued a draft request for harmonised standards in December 2022. Article 3 of this request compels CEN-CENELEC to ensure the appropriate involvement of ‘civil society

organisations, and the gathering of relevant expertise in the area of fundamental rights’ in its standardisation processes. It remains to be seen how CEN-CENELEC will achieve this, but they will be required to provide relevant evidence in their final report.

If relevant safety and fundamental right protections are deemed inadequate in the harmonised standards, the Commission should withhold, and possibly leverage, the right to use common specifications instead. This process should gather the views of relevant bodies or expert groups that are not necessarily tied to an industrial sector. It could consult civil society organisations with expertise in a larger proportion of the legal and policy areas implicated by the AI Act’s essential requirements.

The Commission could also explore the possibility of regularly consulting organisations that engage the public in policymaking when developing common specifications. This would introduce a higher degree of democratic control than would otherwise exist in decision-making by political appointees and civil servants or private contractors.

Though representatives of EU member states will have the opportunity to vote on the adoption of common specifications, national representatives in similar decision-making processes are usually not elected officials. Instead, they tend to represent trade and economy ministries.[82]

Given that common specifications take the form of implementing acts, the public can potentially provide feedback via the Commission’s ‘Have your say’ website, which would create new opportunities for ad hoc civil society participation in standardisation.[83] This could expand participation by organisations like the TUC that are effectively excluded by the time commitments typically required for standards development.

Benchmarking institution

The European Commission could explore similar ideas in a more targeted benchmarking institution.

This institution could take up the questions that JTC-21 avoids, complementing JTC-21’s procedure- and documentation-oriented standards with more substantive standards. It could provide guidance about questions like how to measure risk to fundamental rights, and which thresholds are ‘acceptable’ or ‘appropriate’.

The European Parliament’s Committee on Industry, Research and Energy has proposed amendments that would prompt the European Artificial Intelligence Board to either design an independent benchmarking institution or house a benchmarking authority within the Board.[84]

At least one JTC-21 expert, Ansgar Koene, felt that a benchmarking authority could answer the more political questions about compliance with essential requirements left open by the committee’s focus on procedure and documentation.

Summary of institutional innovations

Common specifications and a benchmarking institution are only two examples of institutional innovations the European Commission and Parliament can explore to fill the regulatory gap created by the AI Act’s NLF. Regardless of the particular strategy chosen, institutional innovations that modernise the NLF are probably necessary to ensure JTC-21 and providers have access to otherwise absent authoritative guidance on the interpretation of essential requirements.

EU institutions can take advantage of this modernisation by promoting more effective civil society participation. This would help to ensure essential requirements are interpreted in accordance with the views of experts in human rights law and relevant policy areas.

They can also use the opportunity to create new forms of democratic control over a rulemaking process that is currently dominated by private actors, which is arguably necessary to legitimise decisions in standards-setting that are more overtly political.

Further questions

These conclusions and suggestions are tentative, given how limited public information is about Joint Technical Committee 21 (JTC-21) experts and activities, and consequently, how difficult it is to gather empirical evidence through interviews and other means. This gives rise to its own questions:

Do EU institutions have a responsibility to publicise (or require CEN-CENELEC to publicise) JTC-21’s activities, given how crucial they are to a landmark piece of legislation that directly implicates fundamental rights and other public interests?

For those who are or could be involved in the development of EU standards, including JTC-21 participants and civil society organisations, this paper raises questions about the role of civil society in the development of the AI Act. There are many approaches that would facilitate their involvement – this would support the protection of fundamental rights and therefore fulfil one of the primary goals of the AI Act.

This research also raises broader questions about the AI Act and EU’s New Legislative Framework (NLF) – what role do EU institutions expect standards to play in AI governance?

As originally conceived, the NLF ensures political decisions remain within EU institutions and decisions made within European Standards Organisations (ESOs) are ‘purely technical’.[85] The Commission’s Explanatory Memorandum implies this is true of the AI Act, describing harmonised standards as ‘precise technical solutions’[86] for designing AI that complies with essential requirements.

However, the AI Act effectively delegates political decisions to ESOs, who are unequipped to make these decisions, leaving them to individual providers. This scenario is unlikely to ensure fundamental rights protections and related policy goals are realised. Nevertheless, the choice of the NLF for the AI Act implies that Parliament does not wish to make these granular decisions for industry.

Before voting on the AI Act, the European Parliament must understand the degree to which it is delegating consequential political power to private entities, which private entities are being empowered and whether amendments are necessary to safeguard public interests.

Parliament must also consider whether the NLF is suitable for AI governance and the protection of fundamental rights and other public interests. Rather than reforming European standardisation and the decades-old NLF to accommodate the AI Act, policymakers could avoid relying on European standards at all.

This raises broader questions for EU policymakers:

  • Is a new political theory of AI governance necessary and, if so, what should it be?
  • How could a governance framework be designed to effectively protect fundamental rights and better safeguard the public interest from conflicting corporate interests?
  • How can it balance the incorporation of technical expertise with effective democratic control?

Methodology

Several strategies were used to determine whether a regulatory gap exists in the AI Act, whether the gap can be filled by civil society participation in Joint Technical Committee 21 (JTC-21) and how to bolster civil society participation.

Legislative analysis and other policy analysis was used to determine whether a regulatory gap exists in the AI Act, particularly where the protection of fundamental rights and other public interests are concerned.

Interviews with JTC-21 working group experts and others with experience in standardisation were intended to clarify whether they consider it helpful or crucial to have civil society representatives involved in the interpretation and operationalisation of the AI Act’s essential requirements.

The goal behind interviews with civil society representatives experienced in European or AI standardisation was to understand what facilitates or hinders their effective participation. This information informed a workshop with civil society organisations lacking this experience, in order to understand what would be necessary for them to participate effectively.

Additional information was gathered through document review.

Legislative and policy analysis

An analysis of the AI Act’s text was carried out to reveal whether the legislation creates a regulatory gap, by depending on European Standards Organisations (ESOs) to operationalise ambiguously worded protections of fundamental rights and other public interests.

The legislation was analysed in conjunction with other sources and descriptions of EU standardisation policy, such as the Regulation on European Standardisation and the Commission’s Blue Guide.

Semi-structured interviews with Joint Technical Committee 21 experts

Interviews with participants in JTC-21, the ESO technical committee responsible for AI Act standards, were designed to elucidate how the committee goes about interpreting the Act’s essential requirements concerning fundamental rights and policy areas like election administration.

The goal was to understand whether JTC-21 working group experts will struggle to implement essential requirements in harmonised standards, and the degree to which civil society participation can help.

Although the European Commission’s first standardisation request to JTC-21 is not yet finalised, the recently formed committee has begun preliminary work in anticipation of the first standardisation request. Experts include both representatives of civil society organisations and technologists from industry and academia.

Questions in semi-structured interviews with standardisation experts were built, in part, around understanding how experts plan to operationalise essential requirements related to fundamental rights and issues like election administration.

They asked, for example, about how working groups within the committee are approaching terms like ‘appropriate level of accuracy’, whether they have the legal and policy expertise needed to interpret and operationalise them, and what role civil society organisations play or have played in similar standards development.

Interviews with civil society representatives also focused on the barriers to and facilitators of their participation.

Interviews were held in the spring of 2022 through Zoom and Microsoft Teams, with follow-up questions sent by email.

Interviewees

The target group for interviews was JTC-21 working group experts and those with experience in European standardisation or AI standards development.

Because the names of JTC-21 participants are not publicly available, most interviewees were identified through news articles, websites and LinkedIn profiles after searching for variations of ‘JTC-21’. Several interviewees were referred to the author of this paper by an interviewee or were already known to her.

Interviewees included:

  • James Davenport, a computer science and mathematics professor at the University of Bath, and a representative of the British Standards Institution (BSI), the United Kingdom’s National Standardisation Body (NSB), in JTC-21.
  • David Filip, who focuses on global standardisation strategy for the Huawei Ireland Research Centre. He is a JTC-21 participant who also convenes a working group focused on trustworthiness in AI in the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC).
  • Chiara Giovannini, a Deputy Secretary General and Senior Manager of Policy & Innovation at the European Consumer Voice in Standardisation (ANEC). She has experience in European and international standardisation.
  • Ansgar Koene, the Global AI Ethics and Regulatory Leader at Ernst & Young. He represents the BSI in JTC-21, in which he convenes a working group on conformity assessment. He also chairs a working group in the Institute of Electrical and Electronics Engineers (IEEE) that is developing a standard on algorithmic bias considerations.
  • Adam Leon Smith, the Chief Technology Officer of Dragonfly, who represents the BSI in JTC-21. He has experience in international AI standards development.
  • Philippe Saint-Aubin, a JTC-21 expert working on behalf of the European Trade Union Confederation (ETUC), who has experience in AI standards development in the ISO and IEC.
  • Mary Towers, an employment rights policy expert with the Trades Union Congress. She has represented her organisation in international standards development, with guidance from the ETUC.

Limitations

There were several limitations to these interviews. The number of interviewees was limited by the fact that the identities of JTC-21 participants are not publicly available. Of the experts whose names are publicly available, most declined interview requests.

Most NSBs did not respond to emails asking for referrals to their JTC-21 representatives. Most interviewees were based in the UK or Brussels, as most experts were referred or introduced to the interviewer by other interviewees.

Also, JTC-21 activity had only recently begun, and had done so before the European Commission finalised its first standardisation request. This means experts had relatively few experiences to draw from and were not certain about the exact scope of the work they would be asked to do.

Finally, because so few civil society groups are involved in JTC-21 and European standardisation broadly, the number of civil society representatives interviewed was necessarily small.

Workshop with civil society representatives

The views of civil society representatives that are not involved in European standardisation, but whose organisations’ missions will be impacted by essential requirements, were gathered in a workshop.

Held online as part of RightsCon on 9 June 2022, the workshop both informed participants about the relationship between the fundamental rights, the AI Act and European standardisation, and also elicited feedback from participants about their organisations’ abilities to engage effectively with JTC-21.

Questions were designed to understand what would be necessary or helpful for these organisations to participate in JTC-21 and were based on information gleaned in interviews and document review.

For example, one question asked participants whether they could meet the average time commitment for the development of a harmonised standard, which was information derived from interviews, with the options of ‘yes’, ‘no’ and ‘not sure’. Another asked which organisations satisfied the eligibility criteria for CEN-CENELEC liaison organisation status, which was information derived from document review.

Answers were submitted through two Mentimeter polls. One poll was directed to European civil society organisations, and another to other participants. Only the former is referenced in this paper.

Participants

Participants from at least one civil society organisation in each CEN-CENELEC member country (which includes countries that are not EU member states) were invited to the workshop. Representatives of organisations with expertise in each high-risk category were invited, as were representatives with organisations in fundamental rights more broadly. Invitations were also sent to organisations specialising in technology policy and human rights.

For example, representatives of anti-poverty organisations were invited to share their perspective on access to essential public services, a high-risk AI category.

Participants included an expert in the use of automation in the administration of justice; a policy analyst and a human rights lawyer from organisations specialising in human rights and technology policy; a representative of an organisation specialising in the use of technology to document human rights violations; and a representative of an organisation that promotes media freedom. Several participants contributed anonymously.

Additional RightsCon participants joined the workshop, which took the total number of participants to 25.

Limitations

There were several limitations to the workshop.

Although most invitees were from organisations that did not specialise in technology policy, most of the participants that accepted invitations were from such organisations. Those lacking familiarity with AI or technology policy generally tended not to reply, or to reply saying that they felt uncomfortable discussing a topic outside of their area of expertise.

Those that did join attended on the condition that they would listen rather than actively contribute.

Though 25 participants attended the workshop, less than half answered questions in the Mentimeter polls and contributed to group discussions.

Document review

Additional information about European standardisation was gathered through the review of documents, such as publications from the European Commission and CEN-CENELEC rules of procedure.

Acknowledgements

This project was funded by the Notre Dame-IBM Tech Ethics Lab, as part of the Future of Regulation programme.

This report was lead-authored by Christine Galvagna, with substantive contributions from Octavia Reeve, Imogen Parker and Andrew Strait.

We would like to thank James Davenport, David Filip, Chiara Giovannini, Ansgar Koene, Adam Leon Smith, Philippe Saint-Aubin, Mary Towers and RightsCon workshop participants.

Footnotes

[1] European Commission (2015). Vademecum on European Standardisation in support of Union Legislation and policies, SWD(2015) 205 final, part 1, section 3.1. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations

[2] European Commission (2021). Proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (AI Act), COM(2021) 206 final section 5.2.3. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[3] European Commission (2021). Proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (Explanatory Memorandum), COM(2021) 206 final. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[4] European Commission (2021). AI Act (proposal), Recitals 1, 5, 13, 32, 39, 43, and 78. Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206; European Commission (2021). Explanatory Memorandum, sections 1.1, 1.2, 1.3, 2.2, 2.4 and 3.5; European Commission (2022). A European approach to artificial intelligence. Available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

[5] European Commission (2021). Explanatory Memorandum, section 1.1.

[6] European Commission (2021). Explanatory Memorandum, section 1.1.

[7] European Commission (2021). AI Act (proposal), Article 15(1).

[8] Kroll, J. et al. (2017). ‘Accountable Algorithms’, University of Pennsylvania Law Review, p. 696. Available at: https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3

[9] European Commission (2021). AI Act (proposal), Article 3(2). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[10] European Commission (2021). Explanatory Memorandum, sections 2.1 and 2.3. Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[11] Büthe, T. and Mattli, W. (2011). The New Global Rulers, p. 139. Princeton: Princeton University Press.

[12] International Organization for Standardization (ISO). Standards. Available at: https://www.iso.org/standards.html (Accessed: 16 March 2023)

[13] ISO. Deliverables. Available at: https://www.iso.org/deliverables-all.html (Accessed: 16 March 2023)

[14] See, for example: Caeiro, C., Jones, K. and Taylor, E. (forthcoming). ‘Technical Standards and Human Rights: The case of New IP’, Human Rights in a Changing World. Washington, DC: Brookings Institution Press. Available at: https://oxil.uk/publications/2021-08-27-technical-standards-human-rights/Human_rights_and_technical_standards.pdf; Cath-Speth, C. (2021). Changing Minds and Machines: A Case Study of Human Rights Advocacy in the Internet Engineering Task Force (IETF). Oxford Internet Institute. Available at: https://corinnecath.com/wp content/uploads/2021/09/CathCorinne-Thesis-DphilInformationCommunicationSocialSciences.pdf; ten Oever, N. (2020). Wired Norms. Available at: https://nielstenoever.net/wp-content/uploads/2020/09/WiredNorms-NielstenOever.pdf

[15] European Commission (2015). Vademecum on European Standardisation in support of Union legislation and policies, SWD(2015) 205 final, part I. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations; European Commission

(2016). The ‘Blue Guide’ on the implementation of EU products rules 2016, Official Journal of the European Union, C 272/1. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016XC0726%2802%29&from=EN; European Commission

(2022). The ‘Blue Guide’ on the implementation of EU product rules 2022, Official Journal of the European Union, C 247. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:C:2022:247:TOC

[16] European Commission (2016). The ‘Blue Guide’ on the implementation of EU products rules 2016, Official Journal of the European Union, C 272/1. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016XC0726%2802%29&from=EN

[17] European Commission (2016). Blue Guide 2016, section 4.1.1.

[18] European Commission (2016). Blue Guide 2016, section 4.1.1.

[19] European Commission (2016). Blue Guide 2016, sections 1.1.3 and 1.4.

[20] European Commission (2016). Blue Guide 2016, section 4.1.2.2.

[21] European Commission (2016). Blue Guide 2016, section 1.1.1.

[22] European Commission (2015) Vademecum on European Standardisation in support of Union Legislation and policies, Part 1, section 3.1. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations

[23] European Commission (2015) Vademecum on European Standardisation in support of Union legislation and policies, Part 1, section 3.1. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations; European Parliament (2010) Resolution of 21 October 2010 on the future of European standardisation (2010/2051(INI)), C 70 E, paragraph 15. Available at: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2012:070E:0056:0067:EN:PDF

[24] European Commission (2015) Vademecum on European Standardisation in support of Union legislation and policies, Part 1, section 3.1. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations

[25] European Commission (2015)Vademecum, Part 1, Section 3.1.

[26] European Commission ‘Harmonised Standards’. Available at: https://single-market-economy.ec.europa.eu/single-market/european-standards/harmonised-standards_en (Accessed: 22 February 2023)

[27] European Parliament and Council of the European Union (2013) Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC, Annex I(C). Available at: https://eur-lex.europa.eu/eli/dir/2013/53/oj

[28] European Commission (2016) The ‘Blue Guide’ on the implementation of EU products rules 2016 (Blue Guide 2016), section 4.1.2.5. Official Journal of the European Union, C 272. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016XC0726%2802%29&from=EN

[29] European Commission (2016). The ‘Blue Guide’ on the implementation of EU products rules 2016, Official Journal of the European Union, C 272/1. Available at: http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016XC0726%2802%29&from=EN

[30] European Parliament and Council of the European Union (2012). Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, Article 5. Available at: http://data.europa.eu/eli/reg/2012/1025/2015-10-07

[31] European Parliament and Council of the European Union (2012). Regulation (EU) No 1025/2012, Article 5 and Annex III.

[32] European Parliament and Council of the European Union (2012). Regulation (EU) No 1025/2012, Article 5.

[33] European Parliament and Council of the European Union (2012). Regulation (EU) No 1025/2012, Recital 22.

[34] European Commission (2021). AI Act (proposal), Annex VI(3). Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[35] European Commission (2021). AI Act (proposal), Article 6 and Annex III; European Commission (2021). Explanatory Memorandum,

section 5.2.3. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[36] European Commission (2021). AI Act (proposal), Annex III; European Commission (2021). Explanatory Memorandum, section 5.2.3.

[37] European Commission (2021) AI Act, Article 9(4). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[38] European Commission (2021). AI Act (proposal), Article 10(2)(a).

[39] European Commission (2021). AI Act (proposal), Article 15(1).

[40] European Commission AI Act, Article 9(3). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[41] European Commission (2022) AI Act: Draft Standardisation Request, Annex II. Available at: https://artificialintelligenceact.eu/wp-content/uploads/2022/12/AIA-%E2%80%93-COM-%E2%80%93-Draft-Standardisation-Request-5-December-2022.pdf

[42] European Commission (2021) Proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (Explanatory Memorandum),, section 5.2.3. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[43] Center for Data Innovation (2021) What’s Next on the EU’s Proposed AI Law? [Webinar]. Available at: https://www.youtube.com/watch?v=vdcSKXeiDAU&t=3335s

[44] European Commission (2021). AI Act (proposal), Article 58. Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[45] European Commission (2022). AI Act: Draft Standardisation Request, No 1025/2012. Available at:

https://artificialintelligenceact.eu/wp-content/uploads/2022/12/AIA-%E2%80%93-COM-%E2%80%93-Draft-Standardisation-Request-5-December-2022.pdf

[46] Beltrão, A. and Legrand, T. (2018). ‘HAS consultants assessment’ [Presentation]. Available at:

https://experts.cenelec.eu/media/Experts/Trainings/Harmonized%20Standard/has-consultants-assessment.pdf; European Committee for Standardization (CEN) (2021). HAS assessment process. Available at: https://boss.cen.eu/developingdeliverables/pages/en/pages/has_assessment_process/  (Accessed: 22 March 2023)

[47] Beltrão, A. and Legrand, T. (2018); CEN. (2021).

[48] Beltrão, A. and Legrand, T. (2018). ‘HAS consultants assessment’ [Presentation]. Available at:

https://experts.cenelec.eu/media/Experts/Trainings/Harmonized%20Standard/has-consultants-assessment.pdf; European Committee for Standardization (CEN) (2021). HAS assessment process. Available at: https://boss.cen.eu/developingdeliverables/pages/en/pages/has_assessment_process/  (Accessed: 22 March 2023)

[49] See, for example: Ernst & Young. Call for Expression: “Eco-design” (Directive 2009/125/EC & Several Regulations). Available at: https://assets.ey.com/content/dam/ey-sites/ey-com/en_be/topics/advisory/ey-has-eco-design.pdf (Accessed: 16 March 2023); Ernst & Young. Would you like to become a Harmonised Standards Consultant? Available at: https://www.ey.com/en_be/consulting/harmonised-standards-consultant (Accessed: 16 March 2023)

[50] European Commission (2021). AI Act (proposal), Articles 53(1) and 55(1)(c). Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[51] International Organization for Standardization (ISO) and CEN. Agreement on Technical Co-operation between ISO and CEN (Vienna Agreement), section 4. Available at: https://boss.cen.eu/media/CEN/ref/vienna_agreement.pdf; ISO and CEN (2016). Guidelines for

the Implementation of the Agreement on Technical Cooperation between ISO and CEN, 7th edition, p. 6, section 5.2 and Annex A.2.1. Available at: https://boss.cen.eu/media/CEN/ref/va_guidelines_implementation.pdf

[52] ISO. Standards by ISO/IEC JTC 1/SC 42: Artificial Intelligence. Available at:

https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0 (Accessed: 16 March 2023)

[53] ISO. ISO/IEC JTC 1/SC 42 – About: Liaisons. Available at: https://www.iso.org/committee/6794475.html (Accessed: 16 March 2023)

[54] CEN. European Partners: Liaison Organizations. Available at: https://standards.cencenelec.eu/dyn/www/f?p=205:42:0::::FSP_ORG_ID,FSP_LANG_ID:,25&cs=1BBDD38C5C889B115AE5CD7D931EFA3BD (Accessed: 16 March 2023)

[55] CEN-CENELEC (2021). Guide 25: The concept of Cooperation with European Organizations and other stakeholders, Edition 3, section 2.3. Available at: https://www.cencenelec.eu/media/Guides/CEN-CLC/cenclcguide25.pdf. NB As part of a student-led non-profit organisation, the author of this report applied unsuccessfully for liaison status in 2021.

[56] See: ForHumanity. Available at: https://forhumanity.center/

[57] CEN. European Partners: Liaison Organizations.

[58] European Parliament and Council of the European Union (2012). Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, Article 5. Available at: http://data.europa.eu/eli/reg/2012/1025/2015-10-07

[59] CEN-CENELEC. (2015). Guide 20, Edition 4, section 2. Available at:

https://www.cencenelec.eu/media/Guides/CEN-CLC/cenclcguide20.pdf

[60] European Parliament (2010). Resolution of 21 October 2010 on the future of European standardisation (2010/2051(INI)), Official Journal of the European Union, C 70 E/56, paragraph 33. Available at: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2012:070E:0056:0067:EN:PDF

[61] See, for example: British Standards Institution. Healthcare: Latest standards activities. Available at:

https://standardsdevelopment.bsigroup.com/categories/006 (Accessed: 16 March 2023)

[62] European Parliament and Council of the European Union (2014) Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts. Available at: http://data.europa.eu/eli/dir/2014/33/oj

[63] European Parliament and Council of the European Union (2012) Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, Recital 23. Available at: http://data.europa.eu/eli/reg/2012/1025/2015-10-07; CEN-CENELEC (2021) Guide 25: The concept of Cooperation with European Organizations and other stakeholders, Edition 3, section 1.2.1. Available at: https://www.cencenelec.eu/media/Guides/CEN-CLC/cenclcguide25.pdf

[64] The author also participates in the ETUC AI taskforce.

[65] Alan Turing Institute. AI Standards Hub. Available at: https://aistandardshub.org/ (Accessed: 24 March 2023)

[66] United Nations Development Programme (UNDP).What are the Sustainable Development Goals? Available at: https://www.undp.org/sustainable-development-goals (Accessed: 22 February 2023)

[67] UNDP. What are the Sustainable Development Goals?; ISO. Sustainable Development Goals. Available at: https://www.iso.org/sdgs.html (Accessed: 22 February 2023)

 

[68] European Commission (2022). Report on the implementation of the Regulation (EU) No 1025/2012 from 2016 to 2020, COM(2022) 30 final, section 2.7.1. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52022DC0030

[69] European Parliament and Council of the European Union (2012). Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, Official Journal of the European Union, Recital 17 and Annex III. Available at: http://data.europa.eu/eli/reg/2012/1025/2015-10-07

[70] Although Annex III can be amended in a delegated act by the Commission alone, a delegated act can only be used to change the eligibility criteria of a stakeholder category, but not add new categories. See: European Parliament and Council of the European Union (2012). Regulation on European standardisation, Article 20(b).

[71] European Parliament and Council of the European Union (2012). Regulation on European standardisation, Recital 22.

[72] European Parliament and Council of the European Union (2012). Regulation on European standardisation, Recital 22..

[73] StandICT.eu (2022). StandICT.eu 2023 – 7th Open Call. Available at: https://www.standict.eu/standicteu-2023-7th-open-call (Accessed: 22 March 2023)

[74] European Commission (2022). European Multi-Stakeholder Platform on ICT Standardisation. Available at: https://digital-strategy.ec.europa.eu/en/policies/multi-stakeholder-platform-ict-standardisation (Accessed: 22 March 2023)

[75] European Commission (2022). Register of Commission Expert Groups and Other Similar Entities: European Multi-Stakeholders Platform on ICT Standardisation (E02758). Available at:

https://ec.europa.eu/transparency/expert-groups-register/screen/expert-groups/consult?do=groupDetail.groupDetail&groupID=2758 (Accessed: 22 March 2023)

[76] Ostmann, F. and McGarr, T. (2022) ‘Introduction to the AI Standards Hub’ [Presentation]. Available at: ​​https://jtc1info.org/wp-content/uploads/2022/06/01_07_Tim_Florian_AI-Standards-Hub-intro.pdf

[77] European Commission (2015). Vademecum on European Standardisation in support of Union legislation and policies, SWD(2015) 205 final, part 1, section 3.1. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations

[78] European Commission (2015). Vademecum, section 3.1.

[79] European Commission (2021). AI Act (proposal), Article 58(c). Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

[80] European Commission (2021). AI Act (proposal), Article 58.

[81] European Commission (2021). AI Act (proposal), Article 41.

[82] See, for example: European Commission (2020). Committee on Standards: Summary record of the 21st Meeting Held on 8 November 2020, p.3. Available at: https://ec.europa.eu/transparency/comitology-register/screen/documents/073077/1/consult?lang=en

[83] European Commission. Welcome to Have your say. Available at: https://ec.europa.eu/info/law/better-regulation/have-your-say_en (Accessed: 22 March 2023)

[84] European Parliament (2022). Draft opinion of the Committee on Industry, Research and Energy for the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021)0206 – C9-0146/2021 – 2021/0106(COD), Amendments 8, 58, and 100. Available

at: https://www.europarl.europa.eu/doceo/document/ITRE-PA-719801_EN.pdf

[85] European Commission (2015). Vademecum on European Standardisation in support of Union legislation and policies, SWD(2015) 205 final, part 1, section 3.1. Available at: https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations

[86] European Commission (2021). Explanatory Memorandum, section 5.2.3. Available at:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

COVID-19 Data Explorer

This report is accompanied by the 'COVID-19 Data Explorer', a resource containing country-specific data on timelines, technologies, and public response to be used to explore the legacy and implications of the rapid deployment of contact tracing apps and digital vaccine passports across the world.

Executive summary

The COVID-19 pandemic is the first global public health crisis of ‘the algorithmic age’.[1] In response, hundreds of new data-driven technologies have been developed to diagnose positive cases identify vulnerable populations and conduct public health surveillance of individuals known to be infected.[2] Two of the most widely deployed are digital contact tracing apps and digital vaccine passports.

For many governments, policymakers and public health experts across the world, these technologies raised hopes through their potential to assist in the fight against the COVID-19 virus. At the same time, they provoked concerns about privacy, surveillance, equity and social control because of the sensitive social and public health surveillance data they use – or are seen as using.

An analysis of the evidence on how contact tracing apps and digital vaccine passports were deployed can provide valuable insights about the uses and impacts of technologies at the crossroads of public emergency, health and surveillance.

Analysis of their role in societies can shed light on the responsibilities of the technology industry and policymakers in building new technologies, and on the opinions and experiences of members of the public who are expected to use them to protect public health.

These technologies were rolled out rapidly at a time when countries were under significant pressure from the financial and societal costs of the pandemic. Public healthcare systems struggled to cope with the high numbers of patients, and pandemic restrictions such as lockdowns resulted in severe economic crises and challenges to education, welfare and wellbeing.

Governments and policymakers needed to make decisions and respond urgently, and they turned to new technologies as a tool to help control the spread of infection and support a return to ‘normal life’. This meant that – as well as guiding the development of technologies – they had an interest in convincing the public that they were useful and safe.

Technologies such as contact tracing apps and digital vaccine passports have significant societal implications: for them to be effective, people must consent to share their health data and personal information.

Members of the public were expected to use the technologies in their everyday lives and change their behaviour because of them – for example, proving their vaccination status to access workplaces, or staying at home after receiving a COVID-19 exposure alert.

Examining these technologies therefore helps to build an understanding of the public’s attitudes to consent in sharing their health information, as well as public confidence in and compliance with health technologies more broadly.

As COVID-19 technologies emerged, the Ada Lovelace Institute was one of the first research organisations to investigate their potential legislative, technical and societal implications. We reviewed the available evidence and made a wide range of policy and practice recommendations, focusing on effectiveness, public legitimacy, governance and potential impact on inequalities.

This report builds on this work: revisiting those early recommendations; assessing the evidence available now; and drawing out lessons for policymakers, technology developers, civil society and public health organisations. Research from academia and civil society into the technologies concentrates largely on specific country contexts.[3]

There are also international studies that provide country-specific information and synthesise cross-country evidence but focus primarily on one aspect of law and governance or public attitudes. [4], [5], [6] This body of research provides valuable insights into diverse policies and practices and unearths legislative and societal implications of these technologies at different stages of the pandemic.

Yet research that investigates COVID-19 technologies in relation to public health, societal inequalities and regulations simultaneously and at an international level remains limited. In addition, efforts to track the development of global policy and practice have slowed in line with the reduced use of these technologies in many countries.

However, it remains important to understand the benefits and potential harms of these technologies by considering legislative, technical and societal aspects simultaneously. Despite the limitations, presenting the evidence and identifying gaps can provide cross-cutting lessons for governments and policymakers, to inform policy and practice both now and in the future.

These lessons concern the wide range of technical, legislative and regulatory requirements needed to build public trust and cooperation, and to mitigate harms and risks when using technologies in public crises, and in health and social care provision.

Learning from the deployment of contact tracing apps and digital vaccine passports remains highly relevant. As the infrastructure remains in place in many countries (for example, authentication services, external data storage systems, security operations built within applications, etc.), the technologies are easy to reinstate or repurpose.

Some countries have already transformed them into new health data and digital identity systems – for example, the Aarogya Setu app in India. In addition, on 27 January 2023, the World Health Organization (WHO) stated: ‘While the world is in a better position than it was during the peak of the Omicron transmission one year ago, more than 170,000 COVID-19-related deaths have been reported globally within the last eight weeks’.[7]

And on 5 May 2023, the WHO confirmed that while COVID-19 no longer constitutes a public health emergency of international concern and the number of weekly reported deaths and hospitalisations has continued to decrease, it is concerned that ‘surveillance reporting to WHO has declined significantly, that there continues to be inequitable access to life-saving interventions, and that pandemic fatigue continues to grow’. [8]

In other words, the pandemic is far from over, and we need to pay attention to the place of these technologies in our societies now and in future pandemics.

This report synthesises the available evidence on a cross-section of 34 countries, exploring technical considerations and societal implications relating to the effectiveness, public legitimacy, inequalities and governance of COVID-19 technologies.

Evidence was gathered from a wide range of sources across different disciplines, including academic and grey literature, policy papers, the media and workshops with experts.

Existing research demonstrates that governments recognised the value of health, mobility, economic or other kinds of personal data in managing the COVID-19 pandemic and deployed a wide range of technologies to collect and share data.

However, given that the technologies were developed and deployed at pace, it was difficult for governments to adequately prepare to use them – and the data collected and shared through them – in their broader COVID-19 pandemic management.[9]

It is therefore unsurprising that governments did not clearly define how to measure the effectiveness and social impacts of COVID-19 technologies. This leaves us with important evidence gaps, making it harder to fully evaluate the effectiveness of the technologies and understand their impact on health and other forms of social inequalities.

We also highlight evidence gaps that indicate where evaluation and learning mechanisms fell short when technologies were used in response to COVID-19. We call on governments to consider these gaps and retrospectively evaluate the effectiveness and impact of COVID-19 technologies.

This will enable them to improve their evaluation and monitoring mechanisms when using technologies in future pandemics, public health, and health and social care provision.

The report’s findings should guide governments, policymakers and international organisations when deploying data-driven technologies in the context of public emergency, health and surveillance. They should also support civil society organisations and those advocating for technologies that support fundamental rights and protections, public health and public benefit.

‘COVID-19 technologies’ refers to data-driven technologies and AI tools that were built and deployed to support the COVID-19 pandemic response. Two of the most widely deployed are contact tracing apps and digital vaccine passports, and they are main focus of this report. Both technologies aim to identify an individual’s risk to others and block or allow freedoms and restrictions accordingly. There are varying definitions of these technologies. In this report we define them through their common purposes and properties, as follows:

  • Contact tracing apps aim to measure an individual’s risk of becoming infected with COVID-19 and of transmitting the virus to others based on whether they have been in close proximity to a person known to be infected. If a positive COVID-19 test result is reported to the app (by the user or the health authorities), the app alerts other users who might have been in close proximity to the person known to be infected with COVID-19. App users who have received an alert are expected to get tested and/or self-isolate at home for a certain period of time.[10]
  • Digital vaccine passports show the identity of a person and their COVID-19 vaccine status or antigen test results. They are used to prove the level of risk an individual poses to others based on their COVID-19 test results, and proof of recovery or vaccine status. They function as a pass that blocks or allows access to spaces and activities (such as travelling, leisure or work).[11]

Cross-cutting findings

Despite the complex, conflicting and limited evidence available about contact tracing and digital vaccine passports, this report uses a wide range of available resources and identifies the cross-cutting findings summarised here under the four themes of effectiveness; public legitimacy; inequalities; and governance, regulation and accountability.

Effectiveness: Did COVID-19 technologies work?

  • Contact tracing apps and digital vaccine passports were – necessarily – rolled out quickly, without consideration of what evidence would be needed to demonstrate their effectiveness. There was insufficient consideration and no consensus reached on how to define, monitor, evaluate or demonstrate their effectiveness and impacts.
  • There are indications of the effectiveness of some technologies, for example the NHS COVID-19 app (used in England and Wales). However, the limited evidence base makes it hard to evaluate their technical efficacy or epidemiological impact overall at an international level.
  • The technologies were not well integrated into broader public health systems and pandemic management strategies, and this reduced their effectiveness. However, the evidence on this is limited in most of the countries in our sample (with a few exceptions, for example Brazil and India), and we do not have clear evidence to compare COVID-19 technologies with non-digital interventions or to weigh up their relative benefits and harms.
  • The evidence is inadequate on whether COVID-19 technologies resulted in positive change in people’s health behaviours (for example, whether people self-isolated after receiving an alert from a contact tracing app), either when the technologies were first deployed or over time.
  • Similarly, it is not clear how the apps’ technical properties and the various policies and approaches impacted on public uptake of the apps or adherence to relevant guidelines (for example, self-isolation after receiving an alert from a contact tracing app).

Public legitimacy: Did people accept COVID-19 technologies?

  • Public legitimacy was key to ensuring the success of these technologies, affecting uptake and behaviour.
  • People were concerned about the use of digital vaccine passports to enforce restrictions on liberty and increased surveillance. People protested against them, and the restrictive policies they enabled, in more than half of the countries in our sample.
  • Public acceptance of contact tracing apps and digital vaccine passports depended on trust in their effectiveness, as well as trust in governments and institutions to safeguard civil rights and liberties. Individuals and communities who encounter structural inequalities are less likely to trust government institutions and the public health advice they offer. Not surprisingly, these groups were less likely than the general population to use these technologies.
  • The lack of targeted public communications resulted in poor understanding of the purpose and technical properties of COVID-19 technologies. This reduced public acceptance and social consensus around whether and how to use the technologies.

Inequalities: How did COVID-19 technologies affect inequalities?

  • Some social groups faced barriers to accessing, using or following the guidelines for contact tracing apps and digital vaccine passports, including unvaccinated people, people structurally excluded from sufficient digital access or skills, and people who could not self-isolate at home due to financial constraints. A small number of sample countries adopted policies and practices to mitigate the risk of widening existing inequalities. For example, the EU allowed paper-based Digital COVID Certificates for those with limited digital access and skills.
  • This raises the question of whether COVID-19 technologies widened health and other societal inequalities. In most of the sample countries, there is no clear evidence whether governments adopted effective interventions to help those who were less able to use or benefit from these technologies (for example, whether they provided financial support for those who could not self-isolate after receiving an exposure alert due to not being able to work from home).
  • Most sample countries requested proof of vaccination from inbound travellers before allowing unconditional entry (that is, without a quarantine or self-isolation period) at some stage of the pandemic. This amplified global inequalities by discriminating against the residents of countries that could not secure adequate vaccine supply or had low vaccine uptake – specifically, many African countries.

Governance, regulation and accountability: Were COVID-19 technologies governed well and with accountability?

  • Contact tracing apps and digital vaccine passports combine health information with social or surveillance data. As they limit rights (for example, by blocking access to travel or entrance to a venue for people who do not have a digital vaccine passport), their use must be proportional. This means striking a balance between limitations of rights, potential harms and the intended purpose. To achieve this, it is essential that these tools are governed by robust legislation, regulation and oversight mechanisms, and that there are clear ‘sunset mechanisms’ in place to determine when they no longer need to be used.
  • Most countries in our sample governed these technologies in line with pre-existing legislative frameworks, which were not always comprehensive. Only a few countries enacted robust regulations and oversight mechanisms specifically governing contact tracing apps and digital vaccine passports, including the UK, EU member states, Taiwan and South Korea.
  • The lack of robust data governance frameworks, regulation and oversight mechanisms led to lack of clarity about who was accountable for misuse or poor performance of COVID-19 technologies. Not surprisingly, there were incidents of data leaks, technical errors and data being reused for other purposes. For example, contact tracing app data was used in police investigations in Singapore and Germany, and sold to third parties for commercial purposes in the USA.[12]
  • Many governments relied on private technology companies to develop and deploy these technologies, demonstrating and reinforcing the industry’s influence and the power located in digital infrastructure.

Lessons

These findings present clear lessons for governments and policymakers deciding how to use contact tracing apps and digital vaccine passports in the future. These lessons may also apply more generally to the development and deployment of any new data-driven technologies and approaches.

Effectiveness

To build evidence on the effectiveness of contact tracing apps and digital vaccine passports:

  • Support research and learning efforts which review the impact of these technologies on people’s health behaviours.
  • Weigh up the technologies’ benefits and harms by considering their role within the broader COVID-19 response and comparing them with non-digital interventions (for example, manual contact tracing).
  • Understand the varying impacts of apps’ different technical properties, and of policies and approaches to implementation on people’s acceptance of, and experiences of, these technologies in specific socio-cultural contexts and across geographic locations.
  • Use this impact evaluation to help set standards and strategies for the future use of these technologies in public crises.

To ensure the effective use of technology in future pandemics:

  • Invest in research and evaluation from the start, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a human-centred approach that goes beyond technical efficacy and builds an understanding of people’s experiences.
  • Establish how to measure and monitor effectiveness by working closely with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation.

Public legitimacy

To improve public acceptance:

  • Build public trust by publishing guidance and enacting clear law about permitted and restricted uses and mechanisms to support rights (for example, the right to privacy) and how to tackle legal issues and enable redress (e.g., data leakage, which could involve using collected data for reasons other than health).
  • Effectively communicate the purpose of using technology in public crises, including the technical infrastructure and legislative framework for specific technologies, to address public hesitancy and build social consensus.

Inequalities

To avoid entrenching and exacerbating societal inequalities:

  • Create monitoring mechanisms that specifically address the impact of technology on inequalities. Monitor the impact on public health behaviours, particularly in relation to social groups who are more likely to encounter health and other forms of social inequalities.
  • Use the impact evidence to identify marginalised and disadvantaged communities and to establish strong public health services, interventions and social policies to support them.

To avoid creating or reinforcing global inequalities and tensions:

  • Harmonise global, national and regional regulatory tools and mechanisms to address global inequalities and tensions.

Governance and accountability

To ensure that individual rights and freedoms are protected:

  • Establish strong data governance frameworks and ensure regulatory bodies and clear sunset mechanisms are in place.
  • Create specific guidelines and laws to ensure technology developers follow privacy-by-design and ethics-by-design principles, and that effective monitoring and evaluation frameworks and sunset mechanisms are in place for the deployment of technologies.
  • Build clear evidence about the effectiveness of new technologies to make sure that their use is proportionate to their intended results.

To reverse the growing power imbalance between governments and the technology industry:

  • Develop the public sector’s technical literacy and ability to create technical infrastructure. This does not mean that the private sector should be excluded from developing technologies related to public health, but it is crucial that technical infrastructure and governance are effectively co-designed by government, civil society and private industry.

Effectiveness, public legitimacy, inequalities and accountability have varying definitions across disciplines. In this report we define them as follows:

 

Effectiveness: We define the effectiveness of contact tracing apps and digital vaccine passports in terms of the extent to which they positively affect public health, that is, result in decreasing the rate of transmission. We use a non-technocentric approach, distinguishing technical efficacy from effectiveness. Technical efficacy refers to a technology’s ability to perform a technical task (that is, a digital vaccine passport’s ability to generate QR code to share data).

 

Public legitimacy: We define this in terms of public acceptance of using contact tracing apps and digital vaccine passports. We also focus specifically on marginalised and disadvantaged communities, whose opinions and experiences might differ from the dominant dispositions.

 

Inequalities: We investigate inequalities both within and across countries. We look at whether COVID-19 technologies create new or reinforce existing health and other types of societal inequalities for disadvantaged and vulnerable groups (for example, people who could not use COVID-19 technologies due to inadequate digital access and skills). We also examine their impact on global inequalities by focusing on inequalities of resources, opportunities and power between countries and regions (for example, around access to vaccine supply).

 

Accountability: We use this to refer to the regulation, institutions and mechanisms that are ways of making governments and officials accountable for preserving civil rights and freedoms.

Introduction

The COVID-19 pandemic is the first global epidemic of ‘the algorithmic age’.[13] In response, hundreds of new technologies have been developed, to diagnose patients, identify vulnerable populations and conduct surveillance of individuals known to be infected.[14] Data and artificial intelligence (AI) have therefore played a key role in how policymakers and international and national health authorities have responded to the pandemic.

Digital contact tracing apps and digital vaccine passports, which are the focus of this report, are two of the most widely deployed new technologies. Although versions of contact tracing apps had previously been deployed in some countries, such as in Sierra Leone as part of the Ebola response, for most countries across the world this was their first experience of such technologies.[15]

These technologies differ from pre-existing state surveillance tools, such as CCTV, and from other types of technologies deployed in the context of the COVID-19 pandemic, such as machine learning algorithms that profile the risk of incoming travellers or predict infected patients at high risk of developing severe symptoms.[16]

To be effective, contact tracing apps and digital vaccine passports require public acceptance and cooperation, as individuals need to consent to share their health and other types of personal information and change their behaviour, for example, by showing evidence of health status to enter a venue via a digital vaccine passport, or by staying at home on receiving an exposure notification from a contact tracing app.[17]

These technologies are therefore at the crossroads of public emergency, health and surveillance and so have significant societal implications.

The emergence of contact tracing apps and digital vaccine passports resulted in public anxiety and resistance related to their effectiveness, legitimacy and proportionality, as well as concern about the implications for informed consent, privacy, surveillance, equality, discrimination and the role of technology in broader public health management.

These technologies were therefore high stakes and were perceived as necessary, but high-risk measures in dealing with the pandemic.

As the technologies brought together a range of highly sensitive data, they were a test of the extent of the public’s willingness to share sensitive personal data and to accept limits on freedoms and rights.

The technologies were developed and deployed to save lives, but in practice they both enabled and limited people’s individual freedoms, by scoring the risk they posed to others based on their health status, location or mobility data.

Despite the risks and sensitivities, due to the challenging conditions of the pandemic, they were created and implemented quickly, and without a clear consensus on how they should be designed, governed and regulated.

Countries adopted different approaches, and – while there are some commonalities across countries and dominant infrastructures – the technical choices, policies and practices were neither unified nor consistent. Frequent changes were made even at a regional level.

It was particularly challenging for countries with weaker technological infrastructures, financial capabilities or legislative frameworks to develop and deploy COVID-19 technologies. Even in countries with relatively comprehensive regulation, these technologies caused fresh concerns for human rights and civil liberties, as they intensified ‘top-down institutional data extraction’ across the world.[18]

Many critics correctly anticipated that such technologies would normalise surveillance via state ownership of sensitive data in a way that would persist beyond the pandemic.

This creates a complex picture, made more challenging by incomplete evidence on how the technologies were developed, used and governed – and, most importantly, on their impact on people, health, healthcare provision and society. It is therefore important to monitor their development, understand their impact and consider what legacy they might have as well as the lessons we can learn for the future.

A range of studies focus on aspects of contact tracing apps and digital vaccine passports at different stages of the pandemic. The Ada Lovelace Institute has monitored the evolution of these technologies over the last three years. However, compared with more traditional health technologies or policy interventions, there is a lack of in-depth research into them or evaluation of their effectiveness.

As the infrastructure is still in place in most countries, these technologies can easily be re-used or transformed into new technologies for new purposes. Therefore, these are live questions with tangible effects on people and societies.

By synthesising evidence from a cross-section of 34 countries, this report identifies cross-cutting issues and challenges, and considers what lessons we should learn from the deployment of COVID-19 technologies as examples of new and powerful technologies that have been embedded across society.

Scope and rationale of this report

In the first two years of the pandemic, from early 2020, the Ada Lovelace Institute conducted extensive research first on contact tracing apps and then on digital vaccine passports. This research focused on the technical considerations and societal implications of these new technologies and included public attitudes research, expert deliberations, workshops, webinars and evidence reviews.

To conduct this research, we engaged multidisciplinary experts from the fields of behavioural science, bioethics, ethics, development studies, immunology, law, public health and sociology. As well as analysing the technical efficacy of the technologies, this created a holistic picture of their legal, societal and public health implications.

We published nine reports based on our research, and two international monitors, which tracked policy and practice developments related to digital vaccine passports and contact tracing apps.

In this work, we acknowledged the potential of new data-driven technologies in the fight against COVID-19. However, we also identified the risks of rapid decision-making by governments and policymakers.

In most cases, there was not sufficient time or adequate research to consider and address the wide range of societal, political, legal and ethical risks. This led to significant challenges, related to effectiveness, public legitimacy, inequalities, and governance and accountability.

Risks and challenges of COVID-19 technologies contained in the Ada Lovelace Institute’s previous publications

When contact tracing apps and digital vaccine passports first emerged, we argued that governments and policymakers should pay attention to a wide range of risks and challenges when deploying these technologies.

From early 2020, the Ada Lovelace Institute – through reports, trackers and monitors – identified and warned about the risks of these technologies.[19]

The risks we identified and highlighted can be summarised as:

Effectiveness

  • Lack of resources to monitor effectiveness and impact. Impact monitoring and evaluation strategies were not developed, making it difficult to assess the effectiveness of the technologies. Digital vaccine passports and contact tracing apps were new technologies, developed and deployed at pace, so there was not enough time or resource to establish effective strategies and monitoring mechanisms to investigate their impacts on public health.
  • Undermining public health by treating a collective problem (public health) as an individual one (personal safety). This placed the emphasis on individualised risks or requirements, and greater health surveillance at an individual level. For example, contact tracing apps categorise an individual as lower risk based on their vaccine or test status, rather than focusing on a more contextual risk of local infection in a specific area.
  • An increase in higher-risk behaviours due to the technologies fostering a false sense of security. Experts highlighted that COVID-19 technologies could create a false sense of security and discourage people from adhering to other protection measures that reduce the risk of transmission, for example, wearing a mask.[20]

Public legitimacy

  • Harming public trust in health data-driven technologies if they were not governed properly or were used for reasons other than health (for example, surveillance). Damaged public trust could make it difficult for governments to roll out new data-driven approaches and technologies to deal with public crises and in general.

Inequalities

  • Creating new forms of stratification and discrimination (for example, discrimination against unvaccinated people or those unable to access accepted vaccines or tests) or amplifying existing societal inequalities (for example, digital exclusion or poor access to healthcare).
  • Amplifying existing global inequalities and geopolitical tensions, particularly in the case of inequitable access to vaccines on a global level. Digital vaccine passport schemes required proof of vaccination for international travel or access to domestic activities (for example, entering a venue for a concert) across the world. This created the risk of a global race for vaccine supply, leaving many low- and middle-income countries scrambling for access.

Governance and accountability

  • Facilitating restrictions on individual liberty and increased surveillance. Members of the public were expected to use these powerful and potentially invasive technologies that collected and stored their personal data. These tools could therefore be used for surveillance, invading privacy or controlling individuals’ activities and mobility in general.
  • Repurposing individuals’ data for reasons other than health, for example, tracking dissidents’ activities, selling data to third parties for commercial purposes, etc.
  • Uncertainty and lack of transparency about private sector involvement and the risks of concentrating power and enabling long-term digital infrastructure that is reliant on private actors.[21]

Our reports made several recommendations for policymakers about how to mitigate these risks and challenges. As well as detailed recommendations for each technology, our cross-cutting recommendations covered the lifecycle of development and implementation.

Recommendations for policymakers made in previous Ada Lovelace Institute reports (2020–2022)

Effectiveness

  • Demonstrate the effectiveness of these technologies within the broader public health ecosystem, publishing modelling and testing; considering uptake and adherence to guidelines around these technologies (for example, reporting a positive COVID-19 test result, self-isolating on receiving an exposure notification or getting vaccinated); and publicly setting success criteria and outcomes and identifying risks and harms, particularly for vulnerable groups.

Public legitimacy

  • Build public trust through clear public communications and transparency. These communications should consider ethical considerations; establish clear legal guidance about permitted and restricted uses and mechanisms to support rights; and demonstrate how to tackle legal issues and enable redress (for example, by making a formal complaint in the case of a privacy breach).

Inequalities

  • Proactively address the needs of, and risks in relation to, vulnerable groups.
  • Work with international bodies to seek cross-border agreements and mechanisms to counteract the creation or amplification of global inequalities.

Governance and accountability

  • Ensure data protection by design to prevent data breaches or misuse.
  • Develop legislation with clear, specific and delimited purposes, and ensure clear sunset clauses for the technologies, and the legislation governing them.[22]

The focus of this research

The Ada Lovelace Institute’s original research in 2020 and 2021 focused on the conditions and principles required to safely deploy and monitor COVID-19 technologies.

By early 2022 many countries had deployed these technologies. Therefore, we shifted our focus and began investigating whether the risks and challenges we identified had materialised and, if so, what could be done differently in deploying technologies in the future.

As identified above, contact tracing apps and digital vaccine passports were deployed without consistent research and monitoring mechanisms. This contributed to a limited evidence base and meant that we needed to use a broad range of resources and research methods to develop this report (see Methodology).

Academic and grey literature provided valuable insights. This was supplemented by media and civil society coverage, for example of the repurposing of data collected through the contact tracing app Luca in Germany or the blocking of protests through Health Code app in China.[23]

The evidence in this report includes qualitative and quantitative data related to the uses and impacts of COVID-19 technologies drawn from policy trackers, the media, policy papers, research papers and workshops convened with experts between January 2022 and December 2022.

To accompany the report, we have created the ‘COVID-19 Data Explorer: Policies, Practices and Technology’[24] to enable civil society organisations, researchers, journalists and members of the public to access the body of data.

The COVID-19 Data Explorer supports the discovery and exploration of policies and practices relating to digital vaccine passports and contact tracing apps across the world. The data on timelines, technologies and public response demonstrates the legacy and implications of their rapid deployment.

By using a wide range of resources, reviewing the existing evidence and identifying evidence gaps, we draw important cross-cutting lessons to inform policy and practice.

We synthesise the available evidence from a sample of 34 countries, with the aim of taking a macro view and identifying cross-cutting issues at an international level. The report contributes to the growing body of research on COVID-19 technologies, improving how we understand, investigate and build data-driven technologies for public good.

The evidence sources include:

  • the Ada Lovelace Institute’s previous work on contact tracing apps and digital vaccine passports in the first two years of the pandemic
  • academic and grey literature on digital vaccine passports, contact tracing apps and COVID-19 pandemic management, focusing on the 34 countries in our sample
  • government websites and policy papers
  • a workshop delivered by the Ada Lovelace Institute with cross-country experts, focusing on the effectiveness of contact tracing apps in Europe
  • papers submitted in response to The Ada Lovelace Institute’s international call for evidence on the effectiveness of digital vaccine passports and contact tracing apps
  • news media coverage of digital vaccine passports, contact tracing apps and pandemic management in the 34 countries in our sample.

See Methodology for more information on methods, sampling and resources.

Ada Lovelace Institute publications on COVID-19 technologies from 2020 to 2023[25]

  • Exit through the App Store? (April 2020): A rapid evidence review of the technical considerations and societal implications of using technology to transition from the first COVID-19 lockdown.
  • Confidence in a crisis? (August 2020): Findings of a public online deliberation project on attitudes to the use of COVID-19 technologies to transition out of lockdown.
  • Provisos for a contact tracing app (May 2020): A report that highlights the milestones that would have to be met by the UK Government to ensure the safety, equity and transparency of digital contact tracing apps.
  • COVID-19 digital contact tracing tracker (July 2020): A resource for monitoring the development, uptake and efficacy of global attempts to use smartphones and other digital devices for contact tracing.
  • No green lights, no red lines (November 2020): A report that explores the public perspectives on COVID-19 technologies and draws lessons to assist governments and policymakers when deploying data-driven technologies in the context of the pandemic.
  • What place should COVID-19 vaccine passports have in society? (February 2021): Findings from an expert deliberation on the potential roll-out of digital vaccine passports.
  • Public attitudes to COVID-19, technology and inequality (March 2021): A tracker summarising studies and projects that offer insights into people’s attitudes to and perspectives on COVID-19, technology and inequality.
  • The data divide (March 2021): Public attitudes research in partnership with the Health Foundation to explore the impacts of data-driven technologies and systems on inequalities in the context of the pandemic.
  • Checkpoints for vaccine passports (May 2021): A report on the requirements that governments and developers need to meet for any vaccine passport system to deliver societal benefit.
  • International COVID-19 monitor (June 2021): A policy and practice tracker that summarises developments concerning digital vaccine passports and COVID-19 status apps.
  • The rule of trust (July 2022): Principles identified by citizens’ juries to ensure that data-driven technologies are implemented in ways that the public can trust and have confidence in.

List of countries in our sample:

  1. Argentina (ARG)
  2. Australia (AUS)
  3. Brazil (BRA)
  4. Botswana (BWA)
  5. Canada (CAN)
  6. China (CHN)
  7. Germany (DEU)
  8. Egypt (EGY)
  9. Estonia (EST)
  10. Ethiopia (ETH)
  11. Finland (FIN)
  12. France (FRA)
  13. United Kingdom (GBR)
  14. Greece (GRC)
  15. India (IND)
  16. Israel (ISR)
  17. Italy (ITA)
  18. Jamaica (JAM)
  19. Kyrgyzstan (KGZ)
  20. South Korea (KOR)
  21. Morocco (MAR)
  22.  Mexico (MEX)
  23.  Nigeria (NGA)
  24.  New Zealand (NZL)
  25.  Romania (ROU)
  26.  Russia (RUS)
  27.  Saudi Arabia (SAU)
  28.  Singapore (SGP)
  29.  Tunisia (TUN)
  30.  Türkiye (TUR)
  31. Taiwan (TWN)
  32.  United States of America (USA)
  33.  South Africa (ZAF)
  34.  Zimbabwe (ZWF)

Contact tracing apps

Emergence

Contact tracing is an established disease control measure. Public health experts help patients recall everyone they have come into close contact with during the timeframe in which they may have been infectious. Contact tracing teams then inform exposed individuals that they are at risk of infection and provide them with guidance and information.[26]

In the early phase of the pandemic, the idea of building on this practice by digitising contact tracing quickly became prominent. With lockdowns contributing to social and economic hardships, the objective was to return to the pre-pandemic ‘normal’ as soon as possible, and the global consensus at the time was that vaccination would be the only long-term solution to achieve this.

While vaccines were being developed, many countries relied on contact tracing to break chains of infection so that they could ease pandemic restrictions such as lockdowns.

Research shows that contact tracing as a disease control measure reaches its full potential when carried out by trained public health experts, who are able to engage with patients and their contacts rapidly and sensitively.[27] However, many countries lacked adequate numbers of trained public health staff and resources (for example, testing capacity to detect contacts known to be infected) for this kind of manual tracking and isolation.[28] In this context, digital contact tracing offered the possibility of accelerating contact tracing.

Countries had varying approaches to contact tracing and the use of digital contact tracing technologies, depending on their existing infrastructure. South Korea, for example, established a national tower that oversaw data collection and monitoring activities. This was built on existing smart city infrastructures which contained data collected from immigration records, CCTV footage, card transaction data and medical records.[29]

Research in South Africa highlights the state’s surveillance capabilities using mobile network systems and tracking internet users’ online activities.[30] South Africa used location information from mobile network operators to help contact tracing teams who ‘tracked and traced’ people infected with COVID-19 with no prior public announcement or consultation, although it later abandoned this approach.[31]

In Asia and Africa, digital contact tracing involved extensive collection of personal data through mass surveillance. In Europe and the USA, on the other hand, the idea of digital contact tracing through a mobile app on citizens’ smartphones began to be considered. Contact tracing apps were considered a lower-risk alternative than the mass surveillance tools adopted in Asia and Africa.

The idea of building contact tracing apps eventually gained momentum not only in Europe and the USA but across the world. Governments needed to consider the technical infrastructure, efficacy and purpose of this new technology, and the related benefits, risks and harms.

As early research from the Ada Lovelace Institute showed, public legitimacy and trust were critical for these technologies to work effectively.[32] Members of the public had to use contact tracing apps in the way intended by governments and technology companies, such as by uploading their health information if diagnosed with COVID-19 or isolating after being informed they had had close contact with someone known to be infected with COVID-19. This was particularly challenging for countries and regions with low levels of digital access and skills.[33]

To support public trust, contact tracing apps needed to be built using established best-practice methods and principles, and uses of the technology and data had to be controlled through strong regulation. If the data were to be repurposed, such as for surveillance purposes, it could damage public trust in the government, limiting the effectiveness of using COVID-19 technologies to deal with public crises in the future.

Despite these challenges, many countries across the world deployed contact tracing apps at pace in 2020.[34] In this chapter, we outline the various technical approaches and infrastructure behind contact tracing apps to build understanding of the different debates and concerns around them. We then assess their effectiveness, public legitimacy, impact on inequalities and governance.

Types of contact tracing apps

Contact tracing apps can be divided into two types: centralised or decentralised. This determines where data is stored and who can access it.[35]

Table 1: Design approaches for contact tracing apps

Communication protocol How is data generated, stored and processed? Who can access the data?
Centralised system approach Users’ data is generated, stored and processed on a central server operated by public authorities. Public authorities have access to data. They score users according to their risk and decide which users to inform. For example, if person x has been in close proximity to y, who is known to be infected with COVID-19, public authorities will be able to identify x and contact them.
Decentralised system approach Users’ data is generated, stored and processed on users’ mobile phones. The data gathered through mobile phones can also be shared on a backend server. A backend server is responsible for storing, processing and communicating data. But decentralised contact tracing systems use arbitrary identifiers (for example, a set of numbers and letters) rather than identifiers (for example., IP address). Hence, even when public authorities access the data on a backend server, they cannot identify users or reconstruct their locations and social interactions.[36]

 

There are three main technologies that are used in both centralised and decentralised systems to detect and trace users’ contacts and estimate their risk of infection.

Table 2: Technologies of contact tracing apps

How do apps decide if a user has been in contact with a person known to be infected?
Bluetooth exposure notification system This approach is based on proximity tracing: this means determining whether two individuals were near each other in a particular context for a specific duration.[37] Contacts are identified through Bluetooth technology on mobile phones. By giving permission for contact tracing apps to use their smartphone’s Bluetooth function, users allow the app to track real-time and historical proximity to other smartphones using the app. The app will share an infection alert if a user has been in proximity to a person who is known to be infected with COVID-19.

Contact tracing apps based on Bluetooth technology are also referred to as exposure notification apps.

Location GPS data This approach is based on location: contact tracing apps use the mobile device’s location (GPS) feature to identify contacts who have been in the same location as a person who is known to be infected with COVID-19
QR code This approach is based on presence tracing; whether two individuals were present at the same time in a venue where infection could have taken place.[38] Users scan a QR code with their smartphone on entry to venues. If a user who is known to be infected with COVID-19 uploads this information to the app, other users who have scanned the same QR code are notified.

New Zealand incorporated Near Field Communication (NFC) codes as an alternative to QR codes in the NZ COVID Tracer app. NFC is a technology that allows two devices to connect through proximity. NFC codes work by tapping mobile phones on or near NFC readers, in the same way that contactless credit cards, Google and Apple Pay work by tapping on or near card readers.[39]

When contact tracing apps were being considered for development, many countries were enthusiastic about deploying apps with a centralised system approach, which stores the data of app users on a central server.

Supporters of this centralised approach argued that access to data would give epidemiologists and health authorities valuable information for analysis. However, many privacy, data security and human rights researchers and activists highlighted the risks created by user data being accessible to third parties through a centralised server. These risks included the privacy infringements, data repurposing and increased surveillance.

In this context, proposals emerged for technical protocols that would enable decentralised contact tracing, designed to be ‘privacy preserving’ by enabling users’ data to be stored on their mobile smartphones rather than on a centralised server.

Several decentralised protocols emerged in April 2020, including the open protocol DP-3T (Decentralized Privacy-Preserving Proximity Tracing), PEPP-PT (Pan-European Privacy-Preserving Proximity Tracing) and the Apple/Google Exposure Notification protocol (GAEN API).

In our research, we collected evidence about the system approaches of contact tracing apps in 25 countries.[40] We discovered that 15 out of 25 countries used a decentralised system approach. Of the 15 countries that adopted a decentralised approach, not all of these based their decision on their privacy-preserving infrastructure.

The Apple/Google protocol quickly became the dominant decentralised protocol, because of the control exercised by the platforms over the two main smartphone operating systems (iOS and Android, respectively).

The Apple/Google protocol gained dominance in part because centralised contact tracing apps could not perform well on Google and Apple’s operating systems[41] without the platforms making technical changes to these systems, which they refused to do because of concerns about users’ privacy.[42]

The centralised contact tracing apps of Australia and France, for example, had major technical problems.[43] In June 2020, France’s junior minister for digital affairs highlighted that the poor technical efficacy of France’s centralised app had led to decreased public confidence in the app, stating: ‘There has been an upward trend in uninstalling over the last few days, to the tune of several tens of thousands per day’.

Similarly, Australia’s contact tracing app, which combined Bluetooth technology with a centralised system server approach, identified only 17 contacts not found manually in two years.

This caused tensions between technology companies and governments that wanted to use centralised systems with Bluetooth technology, which was considered less invasive of privacy than collecting geographical location data. Countries such as the UK and Germany, which initially pursued centralised apps independently of the Apple/Google protocols, eventually had to deploy the GAEN API to enable their Bluetooth notification systems to work effectively.[44]

In some cases, the distinction between centralised and decentralised systems was blurred. There are decentralised contact tracing systems that centralise information, if users voluntarily upload data.

For example, Singapore’s Bluetooth exposure notification app is decentralised in that it does not store users’ data on a central server. However, when users sign up for TraceTogether, they provide their phone number and ‘unique identification number’ (a government ID used for a range of activities).

If a user is known to be infected with COVID-19, they can grant the Ministry of Health access to their Bluetooth proximity data. This allows the ministry to identify people who have had close contact with the infected app user within the last 25 days, so it follows a more centralised model at that point.[45]

The developers emphasised that they built this ‘hybrid model of decentralised and centralised approach specifically for Singapore’.[46] Similarly, Ireland’s COVID Tracker allows users to upload their contact data, age, sex and health status to a centralised data storage server.[47] There are also apps that use both GPS data and a Bluetooth exposure system, such as India’s Aarogya Setu.

QR codes were also widely used in contact tracing apps, especially those with Bluetooth exposure notification systems, such as the UK’s NHS COVID-19 app.

  • Romania, the USA, Russia and Greece are the only countries in our sample that did not launch a national contact tracing app.[48]
  • India, Ghana, South Korea, Türkiye, Israel and Saudi Arabia used both Bluetooth and location data with a centralised approach.[49]
  • Estonia, France, Finland, Canada, India and Australia discontinued their contact tracing apps and deleted all of the data gathered and stored through them.[50] England and Wales also closed down their contact tracing app NHS COVID-19, and the personal data collected was deleted, but anonymous analytical data may be retained for up to 20 years.[51]
  • Several contact tracing apps were expanded to include vaccine information – for example, Italy’s Immuni app, Türkiye’s Hayat Eve Sığar (HES; Life Fits into Home) app and Singapore’s TraceTogether (TT) app.
  • The USA did not have a federal contact tracing app. MIT Technology Review’s COVID Tracing Tracker demonstrates that only 19 states out of 50 had rolled out contact tracing apps as of December 2020, and to the best of our knowledge no contact tracing app was developed in the USA after this date.[52]

Effectiveness of contact tracing apps

In April 2020, the Ada Lovelace Institute published the rapid evidence review Exit through the App Store?. [53] This report explored technical and societal implications of a variety of COVID-19 technologies, including contact tracing apps. The review acknowledged that, given the potential of data-driven technologies ‘to inform research into the disease, prevent further infections and support the restoration of system capacity and the opening up of the economy’, it was right for governments to consider their use.

However, we urged decision-makers to consider the lack of scientific evidence demonstrating the potential efficacy and impact of contact tracing apps. And we pointed out that there had not been adequate time or resources to establish effective strategies and monitoring mechanisms to investigate their impacts on public health.

We emphasised that lack of credible evidence supporting the apps’ effectiveness could undermine public trust and hinder implementation due to low uptake.

Since then, a considerable number of studies have emerged investigating the effectiveness of contact tracing apps. This body of literature offers four key findings:

  • Some Bluetooth notification exposure apps with decentralised systems have been effective in identifying and notifying close contacts of people known to be infected with COVID-19, for example the UK’s NHS COVID-19 app.[54] However, the technical efficacy of this kind of system cannot be generalised at an international level. The evidence from South Africa and Canada, for example, indicates technical problems, including insufficient Bluetooth accuracy and smartphone batteries being quickly drained.[55] Such technical issues affected the apps’ ability to identify and notify close contacts of people who were known to be infected with COVID-19.
  • Apps with centralised systems and Bluetooth exposure notification systems, which were not compatible with Google and Apple’s GAEN API, had significant technical problems. This reduced their ability to identify close contacts.[56] For example, France’s contact tracing app had sent only 14 notifications after 2 million downloads as of June 2020.[57]
  • Low uptake of contact tracing apps reduced their effectiveness in some countries, for example in Australia.[58] This is because the proportion of potentially exposed people who actually receive an exposure notice and stay at home is, by definition, lower if fewer people are using the app overall.
  • Contact tracing apps were insufficiently integrated with government services and public health systems. An investigation of the effectiveness of contact tracing apps from a public health perspective in six countries found that apps did not reach their full potential, due to inadequate testing capacity and poor data sharing across local and central government authorities.[59]

However, there are still important evidence gaps which prevent us from definitively assessing the effectiveness of contact tracing apps.

To explore these gaps, we organised a multidisciplinary workshop with experts from the USA and Europe in October 2022 to discuss the effectiveness of contact tracing apps. The findings from the workshop (listed below) demonstrate the limitations of the evidence.

It was clear that there is still no consensus on what effectiveness means beyond apps’ technical efficacy. How can we define people-centred effectiveness?

Research is also limited on how contact tracing apps affected individual behaviours that would have supported wider public health measures: for example, whether users self-isolated after a COVID-19 exposure notification. The existing evidence is limited in both sample size and scope,[60] because (to date) people’s real-life experiences of contact tracing apps have received little research attention.

A Digital Global Health and Humanitarianism Lab (DGHH Lab) investigation of contact tracing apps provides a useful framework for how further research should evaluate people’s real-life experiences of contact tracing such apps. The investigation looks at people’s opinions and experiences of contact tracing apps in five countries: Cyprus, Iceland, Ireland, Scotland and South Africa.[61] It concludes that user engagement with the apps should be seen in four stages:

  1. Uptake (users download the app).
  2. Use (users run the app and keeps it updated).
  3. Report (users report a positive COVID-19 diagnosis via the app).
  4. React ( users follow necessary next steps when they receive an exposure notification from the app).[62]

Uptake alone does not guarantee continued use and change in behaviour (for example, getting tested or staying at home when notified of an exposure). The stage-based approach should therefore guide our understanding of individuals’ actual, ongoing usage of COVID-19 technologies.

Several studies demonstrate that uptake does not guarantee continued use. In France, for example, only a minority of users of the TousAntiCovid (Everyone Against COVID, formerly StopCovid) app used the contact tracing feature.

BBC News reported that although two million people downloaded the Protect Scotland app, only 950,000 people actively used it, and that around 50,000 people stopped using it a few months after its launch.[63] Similarly, there is evidence that millions of people who downloaded the NHS COVID-19 app (used in England and Wales) never technically enabled it, so despite having an intention to engage with it, they did not use it in practice.[64]

This evidence does not suggest that contact tracing apps were completely ineffective. But it challenges us to consider why people did not use the apps as anticipated by policymakers and developers.

Exploring this will help ensure that contact tracing apps and similar health technologies reach their full potential in the future.

A research study on the UK contact tracing apps demonstrates that some people also stopped using apps after a while because they lost confidence in their effectiveness.[65] Similarly, the Government of Canada’s evaluation of the COVID Alert app notes that its perceived lack of effectiveness among the public led to fewer downloads and less continued usage, which prevented the app from reaching its full potential.[66]

These findings demonstrate that more research is needed to investigate people’s views and practices in relation to contact tracing apps in real-life contexts and over time. This will help review the apps’ effectiveness, not just technically but in terms of outcomes for people and society.

How did different technologies, policies and public communications impact public attitudes when the apps were first deployed and over time?

We need more comparative evidence to understand how different technologies, policies and public communication strategies impacted public attitudes. The existing evidence, despite its limitations, indicates the importance of comparative research.

For example, there is an important distinction between tracing apps (location GPS data) and exposure notification apps (Bluetooth technology), in terms of the risks and challenges they pose. Yet there is no adequate research into how the public perceives the respective risks and effectiveness of these two different types of contact tracing apps.

A qualitative research study with 20 users of Canada’s COVID Alert app confirms the significance of this evidence gap. It demonstrates that participants favoured the app’s decentralised approach over centralised systems because of the higher level of privacy protection and optional level of cooperation.[67] The research also finds that users’ motivation to notify the app if known to be infected with COVID-19,and to follow government guidelines, increases with their understanding of the purpose and technical functionality of the app.

A limitation of the evidence base is that existing research largely investigates contact tracing apps in the first year of the pandemic. There is a need to understand the success and effectiveness in the context of changing nature of the pandemic. This will help understand how people’s confidence in apps’ effectiveness and their usage practices have changed over time.

Our recommendation when contact tracing apps emerged in 2020:

  • Establish the effectiveness of contact tracing apps as part of a wider pandemic response strategy.[68]

 

In 2023, the evidence on the effectiveness of the various apps can be summarised as follows:

  • Countries did not decide what effectiveness would look like when rolling out these apps.
  • Contact tracing apps have demonstrated that digital contact tracing is feasible. Some decentralised contact tracing apps with Bluetooth technology worked well, in that they demonstrated technical efficacy (for example, the NHS COVID-19 app in England and Wales[69]). However, the technical efficacy of decentralised Bluetooth exposure notification systems cannot be generalised at an international level. The evidence from South Africa and Canada, for example, indicates technical problems.
  • Apps with centralised systems and Bluetooth exposure notification systems, which were not compatible with Google and Apple’s GAEN API, had significant technical problems. This negatively impacted their ability to identify and notify close contacts (for example, in France).
  • Existing research and expert opinion indicate that the apps were not well integrated within broader public health systems and pandemic management strategies, which negatively impacted their effectiveness.
  • The impact of contact tracing apps on public health is unclear because significant evidence gaps remain that prevent understanding of their impact on public health behaviours at different stages of the pandemic. There is also a lack of clear evidence around how different technologies, policies and public communications have affected public attitudes towards the apps.

 

Lessons learned:

To build evidence around the effectiveness of contact tracing apps as part of the wider pandemic response strategy:

  • Support research and learning efforts on the impact of contact tracing apps on people’s public behaviours.
  • Understand how the apps’ technical properties, and different policies and implementation approaches, impact on people’s experiences of contact tracing apps in specific socio-cultural contexts and across geographic areas.
  • Use this impact evaluation to help set standards and strategies for the future use of technology in public crises. Weigh up digital tools’ benefits and harms by considering their role within the broader COVID-19 response and comparing them with non-digital interventions (for example, manual contact tracing).

 

To ensure the effective use of technologies in future pandemics:

  • Invest in research and evaluation from the outset, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that COVID-19 technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a human-centred approach that goes beyond technical efficacy and builds an understanding of people’s experiences.
  • Establish how to measure and monitor effectiveness by working closely with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation of technologies, both when first deployed and over time.

Public legitimacy of contact tracing apps

When they first emerged, we argued that public legitimacy was key to the success of contact tracing apps.

Members of the public were more likely to use the apps and follow the guidelines (for example, self-isolating after receiving a notification) if they trusted the technology’s effectiveness and believed that adequate regulatory mechanisms were in place to safeguard their privacy and freedoms.[70]

We also demonstrated that public support for contact tracing apps was contextual: people had varying views and experiences of the apps depending on how they were implemented locally (for example, whether uptake was mandatory or voluntary).[71]

In countries where contact tracing app use was mandatory, members of the public had to use them even if they did not think that they were legitimate technologies. For example, in China, the Health Code app was automatically integrated into users’ WeChat and Alipay, so that they could only deactivate the COVID-related functionality by deleting these applications.[72]

These applications are widely used, as smartphone-based digital payment is the main method of payment in China.[73] The app was therefore assigned mandatorily to 900 million users (out of 1.4 billion) in over 300 cities, using pre-existing legal mechanisms to justify and enforce the policy (for example, the Novel Coronavirus Pneumonia Prevention and Control Plans).[74]

The Health Code app was not the only automatically assigned technology across China. Cities and regions required their residents to use multiple technologies depending on their own local COVID-19 pandemic measures and mechanisms; however, there is not much information regarding local authorities’ administration of these technologies. Similarly, it was not always clear which government department had ultimate authority for oversight and enforcement.[75]

In the majority of the countries in our sample, contact tracing apps were voluntary. People were not obliged through legislation to use them, and only did so if they believed in their effectiveness and had the resources to adopt them and adhere to guidelines.

Seen through this lens, contact tracing apps can be taken as a test of public acceptance of powerful technologies that entail sensitive data and are embedded in everyday life.

A study that investigated voluntary contact tracing app adoption in 13 countries found that the adoption rate was 9% on average.[76] In 2020, the Ada Lovelace Institute conducted an online public deliberation project on the UK Government’s use of the NHS COVID-19 contact tracing app to transition out of lockdown.[77] This research demonstrated that the public demanded clarity on data use and potential risks as well as independent expert review of the technology’s efficacy. Since then, there has been a boom in research into public attitudes to contact tracing apps that confirms this point.

This demonstrates the reasons for low levels of public support for contact tracing apps. These include low levels of trust in government and concerns about apps’ security and effectiveness, leading to low adoption (or high rates of people discontinuing use) in some countries, for example, Australia, France and South Africa. [78]

While we do not have in-depth insights about public support for apps in the countries where uptake was mandatory, recent developments in China demonstrate people’s dissatisfaction with the Health Code app and the restrictions it enabled. When the Chinese government ended the Health Code mandate in December 2022, many people shared celebratory content on social media platforms.

Some of this content suggested that people were happy to make decisions and take precautions for themselves rather than rely on the Health Code algorithm.[79] A considerable number of privacy and human rights law experts were explicitly critical of the use of Health Code system (both about the use of the system in general and its use beyond the height of the pandemic) and urged the Chinese government to discontinue its use beyond the COVID-19 pandemic.[80]

Experts emphasise the importance of effective public communication strategies in pandemic management.[81] The existing research demonstrates that many governments across the world have not been able to communicate scientific evidence effectively, particularly to address vaccine hesitancy and misinformation.[82] This finding includes communications around digital interventions.

Research undertaken in the UK shows that the public do not have a clear understanding of the technical capabilities and uses of COVID-19 technologies.

When asked about digital contact tracing apps, participants in the research imagined these apps ‘being able to “see” or ‘visualise’ their every move’.[83]

This indicates a misunderstanding (or lack of knowledge) regarding the apps’ infrastructure. Contact tracing apps in the UK are built on the GAEN API using Bluetooth technology, so they do not collect geo-location data and are not able to track users’ location in the literal sense of knowing where a user is at a given point in time.

In Europe, Bluetooth technology has been widely used instead of geo-location data.[84] However, the perceived risk of surveillance and literal tracking has been a public concern in the majority of European countries, especially among social groups with lower levels of trust in government.[85] Similar evidence exists for South Africa, where the lack of focused and targeted communications reduced public trust, and the COVID Alert SA app was not widely used by members of the public.[86]

Perhaps an exception within our sample is Canada, which established an extensive communications campaign to increase awareness and understanding of the COVID Alert app.[87] Health Canada, the government department responsible for national health policy, spent C$21 million on this campaign to encourage Canadians to download and use the app.[88]

The official evaluation of the app published by Health Canada and the Public Health Agency of Canada concludes that these campaigns resulted in millions of downloads.[89] This evidence demonstrates the importance of effectively communicating the apps’ purpose and technical infrastructure to members of the public.

Existing political structures and socio-economic inequalities were also important in determining uptake. In many parts of the world, structural factors and inequalities mean that marginalised and disadvantaged communities are more likely to distrust the government, institutions and public health advice.[90]

It is unsurprising that these groups were less likely to use contact tracing apps. There is strong online survey research evidence from the UK that confirms this point, in an investigation of the adoption of and attitudes towards the NHS COVID-19 app:

  • 42% of Black, Asian and minority ethnic respondents downloaded the app compared with 50% of white respondents
  • 13% of Black, Asian and minority ethnic respondents downloaded then deleted the app compared with 7% of white respondents
  • Black, Asian and minority ethnic respondents were more concerned about how their data would be used and felt more frustrated as a result of a notification from the app than white respondents
  • Black, Asian and minority ethnic respondents had lower levels of trust in the National Health Service (NHS) and were less likely to download the app to help the NHS.[91]

Our recommendations when contact tracing apps emerged:

  • Build public trust by publicly setting out guidance and enacting clear law about permitted and restricted uses. Explain the legal guidance and mechanisms to support rights through clear public communications and transparency.
  • Ensure users understand apps’ purpose, the quality of its evidence, its risks and limitations, and users’ rights, as well as how to use the app.[92]

 

In 2023, the evidence that has emerged on the public legitimacy of contact tracing apps demonstrates these points:

  • Public acceptance of contact tracing apps depended on public trust in apps’ effectiveness and in governments and institutions, as well as the safeguard mechanisms in place to protect privacy and individual freedoms.
  • Individuals and communities who encounter structural inequalities were less likely to trust in government institutions and the public health advice they offered. Hence, they were less likely than the general population to use contact tracing apps.
  • Governments did not always do well at communicating with the public about the properties, purpose and legal mechanisms of contact tracing apps. This negatively impacted public legitimacy, since governments could not gain public trust in the safety and effectiveness of the apps.

 

Lessons learned:

To achieve public legitimacy for the use of technology in future pandemics:

  • Reinforce the need to build public trust by publicly setting out guidance and enacting clear law about permitted and restricted uses. Explain the legal guidance and mechanisms to support rights through clear public communications and transparency.
  • Effectively communicate the purpose, governance and properties of contact tracing technologies to the public.

Inequalities

The international evidence concerning the impact of COVID-19 on communities demonstrates higher infection and mortality rates among the most disadvantaged communities.

It highlights the intersections of socio-economic, ethnic, geographical, digital and health inequalities, particularly in unequal societies and regions.[93]

The introduction of contact tracing apps led to concerns that they could widen health inequalities for vulnerable and marginalised individuals in society (for example, around digital exclusion and poor access to healthcare). In this context, we called on governments to carefully consider the potential negative social impacts of contact tracing apps, especially on vulnerable and disadvantaged groups.[94]

A part of pandemic management, policymakers and technology companies developed and adopted new technologies rapidly. This left insufficient room to discuss questions about equality and impact, such as whether contact tracing apps would benefit everyone in society equally, who might not be able to benefit from them, and what the alternatives were for those individuals and communities.

There was a surge in techno-solutionism – the view that technologies can solve complex real-world problems – during the pandemic. As Marelli and others (2022) argue, ‘the rollout of COVID interventions in many countries has tended to replicate a mode of intervention based on ‘technological fixes’ and ‘silver-bullet solutions’, which tend to erase contextual factors and marginalize other rationales, values, and social functions that do not explicitly support technology-based innovation efforts’. [95]

This meant that non-digital interventions that could perhaps have benefited marginalised and disadvantaged communities – particularly manual contact tracing – were not adequately considered.

Research shows that contact tracing as a disease control measure, if effectively conducted in a timely way, can save lives, particularly for disadvantaged and marginalised communities.[96]

Manual contact tracing teams should ideally be trained to help individuals and families to access testing, identify symptoms, and secure food and medication when isolating. This type of in-depth case investigation and contact tracing requires knowing and effectively communicating with communities, which cannot be done via a mobile application.

Some contact tracing apps recognised this need and attempted to incorporate a manual function. COVID Tracker Ireland, for example, offered users the option of providing a phone number if they wanted to be contacted by public health staff.[97] This is important because it gives contact tracers the opportunity to contact people who are known to be infected with COVID-19 and address their needs.

However, it was unclear how these apps were intended to work alongside manual contact tracers, since it is a core function of majority of contact tracing apps that they inform individuals of exposure directly, with no involvement from public health staff.[98]

This raises the question of whether digital contact tracing was carried out at the expense of other health interventions (most notably, manual contact tracing) and led to the needs of particular individuals and families not being sufficiently considered.[99]

Furthermore, contact tracing apps’ success relies on the assumption that people will self-isolate if notified as a contact of someone who has tested positive for COVID-19. Yet as Landau, the author of People Count: Contact-Tracing Apps and Public Health, argues: ‘the privilege of staying at home is not evenly distributed’.[100]

While some people were able to work from home, many were not and therefore did not have the opportunity to self-isolate if notified of exposure. This shows that technologies cannot work efficiently in isolation and must be supported by strong social policies.

In some countries, governments introduced financial support for those who were ill or self-isolating. In the UK for example, the Government enabled citizens to claim a payment if notified by the NHS COVID-19 app.[101] But a report by Nuffield Foundation and the Resolution Trust found that the financial support given by the Government during the pandemic covered only a quarter of workers’ earnings.[102]

For health technologies such as contact tracing apps to result in changes in behaviour, policymakers need to address structural factors and inequalities that affect disadvantaged groups.

Similarly, people who did not have adequate digital access and skills were not able to use contact tracing apps, even if they wanted to. And these apps were particularly challenging for countries with low levels of internet access, such as South Africa and Nigeria.[103]

Our recommendation when contact tracing apps emerged:

  • Proactively address the needs of, and risks relating to, vulnerable groups.[104]

 

In 2023, the evidence on the impact of contact tracing apps on inequalities demonstrates these points:

  • The rapid introduction of apps caused concerns that they would widen health inequalities for vulnerable and marginalised individuals in society (for example, those who are digitally excluded or with poor access to healthcare) who would not be able to benefit from them.
  • The evidence is unclear around the impact of contact tracing apps on health inequalities and whether authorities produced effective non-digital solutions and services for marginalised and disadvantaged communities.
  • Marginalised and disadvantaged communities (for example, those facing digital exclusion or lacking the financial security to self-isolate) were less likely to use contact tracing apps. To increase their adoption, they had to be supported with non-digital solutions and public services (for example, with manual contact tracing or financial support).

 

Lessons learned:

To mitigate the risk of increasing inequalities when using technology in future pandemics:

  • Consider and monitor the impact of technologies on disadvantaged and marginalised communities. These communities may not benefit from technological solutions as much as the general population, which might increase health inequalities
  • Mitigate the risk of increasing (health) inequalities for these groups by establishing non-digital services and policies that will help them use the technologies and adhere to guidelines (for example, providing financial support for those who cannot work from home).

Governance, regulation and accountability

In deciding to introduce contact tracing apps, governments had to consider trade-offs between human rights and public health interests, because the apps used sensitive personal information and determined the freedoms and rights of individuals.

In the early stages of the pandemic, the Ada Lovelace Institute recommended that if governments wanted to build contact tracing apps, they should ensure that these new tools were governed by strong regulations and oversight mechanisms. We argued that contact tracing apps should be designed and governed in line with data protection and privacy principles.[105]

We acknowledge that these principles are not universal but are informed by political, cultural and social values. But they are underpinned by an international framework that informs the legal protection of human rights around the world.[106] It is beyond the scope of this report to evaluate country-specific laws. But the evidence we have uncovered suggests that different political cultures and pre-existing legislative frameworks of countries yielded varying governance mechanisms, which sometimes fell short of protecting civil rights and freedoms.

One of the most polarising issues concerning the launch of contact tracing apps was whether they should be mandatory or voluntary.

When contact tracing first emerged, we argued that making the use of contact tracing apps mandatory would not be proportionate given the lack of evidence for such apps’ effectiveness.

We also highlighted that contact tracing apps could facilitate surveillance and result in discrimination against certain groups (for example, those who are digitally excluded or refuse to use contact tracing apps). If these risks and challenges materialised, they could be detrimental to human rights.[107]

A comparative analysis of legislation and digital contact tracing policies in 12 countries shows that, in western countries, where privacy legislation strongly emphasises individual freedoms and rights, contact tracing app use was voluntary (for example, France, Austria and the UK).[108]

In Israel, China, Taiwan and South Korea, contact tracing app use was mandatory. Several studies demonstrate how the pre-existing laws and confidentiality requirements allowed Taiwan’s and South Korea’s governments to collect a wide range of social and surveillance data with relatively high levels of public acceptance.[109]

Both Taiwan and South Korea had had recent experiences of dealing with pandemics, and there was pre-existing legislation that permitted tracking through contact tracing apps, CCTV and credit card companies. These laws allowed the governments to carry out large-scale data collection programmes, and there were also strict confidentiality requirements in place.

Although digital contact tracing was mandatory and extensive, contact tracing app governance was transparent and civilian-run in both countries, based on pre-existing public emergency and data protection legislation.[110]

In China, on the other hand, there was no pre-existing comprehensive privacy legislation when the Health Code was deployed (as the Personal Information Protection Law came into effect in November 2021).[111] China enforced mandatory use of the Health Code app between February 2020 and December 2022.

Health Code served as both a contact tracing app and a digital vaccine passport, linked with users’ national identity numbers. It used GPS location in combination with data gathered through WeChat and Alipay, two of the most popular social commerce platforms in China.

These platforms were chosen to guarantee widescale adoption, since they provide the backbone for electronic financial transactions in China. The app categorised people into three categories to determine a risk score for users: green (low risk, free movement); yellow (medium risk, 7-day self-isolation); and red (high risk, 14-day mandatory quarantine)’.[112]

Health code systems were automatically added to citizens’ smartphones through Alipay and WeChat, and Chinese authorities were accused of misusing the systems to stop protests and conduct surveillance of activists.[113]

In Israel, where the contact tracing app was mandatory and centralised, the legislation relating to pandemics does not include digital data collection because it was established in 1940. When a state of emergency is declared, the government is empowered to enact emergency regulations that may suspend the validity of other laws that protect individual rights and freedoms.

In this context, the absence of digital data collection in the legislation relating to pandemics allowed the government to enact emergency regulations allowing the authorities to conduct extensive digital contact monitoring.[114]

The Lex-Atlas COVID-19 project also highlights that emergency powers were used to justify excessive data gathering and surveillance mechanisms in various countries.[115] Some countries unlawfully attempted to make the apps mandatory for domestic activities.

For example, in spring 2020, India made it mandatory for government and private sector employees to download the Aarogya Setu app. This decision was then questioned by experts, including a former Supreme Court judge in Kerala High Court, due to the lack of any law that backed mandatory use of the app.[116]

After the challenge was heard in early May 2020, the Ministry of Home Affairs issued a notification on 17 May 2020, clarifying that use of the Aarogya Setu app should be changed from mandatory to ‘best effort’ basis.[117] This allowed government employees to challenge the mandatory use of the app enforced by the government or a government institution.

In this case, the ‘competent authority’ to extend the scope of Aarogya Setu’s Data Access and Sharing Protocol was the Empowered Group on Technology and Data Management. However, the group was dissolved in September 2020, and the Protocol expired in May 2022. Therefore, the use of the app was anchored in a discontinued protocol and regulatory authority.[118]

Norton Rose Fulbright’s contact tracing global snapshot project demonstrates that countries with weaker legislation and enforcement mechanisms were less transparent when communicating information about their contact tracing apps. Türkiye and Russia, for example, did not clarify how long the data would be stored, whether a privacy risk assessment had been completed, or whether the data would be stored on a centralised or decentralised server.[119]

Another example demonstrating the importance of strong data protection mechanisms comes from the USA, where there are no federal privacy laws regulating companies’ data governance.[120] [121]

In 2020, we highlighted the risk of repurposing contact tracing apps being repurposed, that is, the technology and the data collected being used for reasons other than health.[122]

The company that owns the privacy and security assistant app Jumbo investigated the contact tracing app of the state of North Dakota in the USA. It reported that user location data was being shared with a third party, location data platform Foursquare.

Foursquare’s business model is based on providing advertisers with tools and data to target audiences at specific locations.[123] This exemplifies the repurposing of the data collected through a contact tracing app for commercial purposes, highlighting the importance of strong laws and mechanisms to safeguard users’ data.

Another important investigation was carried out by the Civil Liberties Union for Europe in 10 EU countries.[124] According to the EU General Data Protection Regulation (GDPR), providers should carry out a data protection and equality impact assessment before deploying contact tracing apps, as they posed risks to people’s rights and freedoms.

Yet the Civil Liberties Union for Europe investigation demonstrates that although these countries launched contact tracing apps in 2020, none had yet conducted these assessments by October 2021.

This point is also supported by Algorithm Watch’s evaluation of contact tracing apps in 12 European countries. It found that contact tracing app policies varied significantly within the EU, and that apps were deployed ‘not in an evidence-based fashion and mostly based on contradictory, faulty, and incomparable methods, and results’.[125]

Another relevant example is Singapore. The Criminal Procedure Code (2010) in Singapore allowed the police to use the data collected by contact tracing app TraceTogether data for reasons other than health.[126] In February 2021, it was reported that police had used the app in a murder investigation case.[127]

Following this, the government amended the COVID-19 (Temporary Measures) Act (2020) to restrict the use of the data. But according to this Act, personal data collected through digital contact tracing can still be used by law enforcement in investigations of ‘serious offences’.[128]

As the examples above show, unsurprisingly, countries with more comprehensive data protection and privacy legislation applied data protection principles more effectively than countries with weak legislation.

But incidents of privacy breaches and repurposing data also took place in countries with relatively strong laws and regulatory mechanisms. Germany has comprehensive personal data protection regulations under the EU GDPR and the new Federal Data Protection Act (BDSG).[129]

The Civil Liberties Union for Europe report highlights that Germany is one of the few EU countries that built and rolled out its contact tracing apps in line with the principles of transparency, public debate and impact assessments.[130] But the data gathered and stored through the Luca app, which provides QR codes to check in at restaurants, events and venues, was shared with the police and used in a murder investigation case.[131]

The role of the private sector

Our research reveals that contact tracing apps with centralised data systems were repurposed and/or used to restrict individual freedoms and privacy. This finding is also supported by Algorithm Watch’s COVID-related automated decision-making database project.

As highlighted in Algorithm Watch’s final report, there have been fewer cases of dangerous uses of data-driven technology and AI in EU countries, which largely used the decentralised GAEN API with Bluetooth technology, than in Asia and Africa.[132]

Many privacy advocates supported GAEN technology, which stored data on a decentralised server, since its use would prevent government mass surveillance and oppression.

Nonetheless, as this initiative was led by Google and Apple and not by policymakers and public health experts, it generated questions about the legitimacy of having private corporations decide the properties and uses of this kind of sensitive digital infrastructure.[133]

As digital rights academic Michael Veale argues, a GAEN-based contact tracing system may be ‘great for individual privacy, but the kind of infrastructural power it enables should give us sleepless nights’.[134] The pandemic demonstrated that big tech companies like Apple and Google hold enormous power over computing infrastructure, and therefore over significant health interventions such as digital contact tracing apps.

Apple and Google partnered to influence properties of contact tracing apps in a way that was not favourable to particular nation states (for example, France, which pursued a centralised system approach despite its incompatibility with Bluetooth technology).

This revealed the difficulty, even at state level, of engaging in advanced use of data without the cooperation of the corporations that control the software and hardware infrastructure.[135] While preventing government abuse is crucial, the growing power of technology companies, whose main interest is profit rather than public good, is equally concerning.

Some critics also – and rightly – challenge the common claim that contact tracing apps with GAEN API have been privacy preserving. The reason for the challenge is that it is very difficult to verify whether the data collected has been stored and processed as technology companies claim.[136] This indicates a wider problem: the lack of strong regulation to ensure clear and transparent insight into the workings of technology companies.

These concerns raise two important questions: how will governments rebalance power against dominant technology corporations; and how will they ensure that power is distributed to individuals and communities? As Knodel argues, governments need to move toward designing multistakeholder initiatives with increased ability ‘to respond and help check private sector motivations’.[137]

And as GOVLAB and Knight Foundation argue in their review of the use of data during the pandemic, more coordination between stakeholders would prevent fragmentation in management efforts and functions in future pandemics.[138]

In the light of evidence identified above, as we have already recommended, strong legislation and regulations should be enacted to impose strict purpose and time limitations on digital interventions in times of public crisis. Regulations and oversight mechanisms should be incorporated into emergency legal systems to curb state powers. Governments need to consider a long-term strategy that focuses on collaborating effectively with private technology companies.

Our recommendation when contact tracing apps emerged:

  • Governments should develop legislation, regulations and accountability mechanisms to impose strict purpose and time limitations.[139]

 

In 2023 the evidence on the governance, regulations and accountability of contact tracing apps demonstrates that:

  • Most countries in our sample rolled out contact tracing apps at pace, without strong legislation or public consultation. The different political cultures and pre-existing legislative frameworks of countries yielded varying governance mechanisms, which sometimes fell short of protecting civil rights and freedoms.
  • Some countries used existing emergency powers to sidestep democratic processes and regulatory mechanisms (for example, Türkiye, Russia and India). Even in those countries with relatively strong regulations, privacy breaches and repurposing of data took place, mostly notably in Germany.
  • We have not come across any incidents of misuse of the decentralised contact tracing apps using the Apple/Google GAEN API. But private sector influence on public health technologies is a factor in the ability of governments to develop regulation and accountability mechanisms. The COVID-19 pandemic (and particularly the roll-out of contact tracing apps) showed that national governments are not always able to use their regulatory powers, due to their reliance on large corporations’ infrastructural power.

Lessons learned:

  • Define specific guidelines and laws when deploying new technologies in emergency situations.
  • Develop the public sector’s technical literacy and ability to create technical infrastructure. This does not mean that the private sector should be excluded from developing technologies related to public health. But it is crucial that the technical infrastructure and governance are effectively co-designed by government, civil society and private industry.

Digital vaccine passports

Emergence

From the beginning of the COVID-19 pandemic, establishing some form of ‘immunity passport’ based on evidence or assumption of natural immunity and antibodies after infection with COVID-19 was seen as a possible route out of restrictions.

Governments hoped that immunity passports would allow them to lift mobility restrictions and restore individual freedoms, at least for those who had acquired immunity to the virus.

However, our understanding of infection-induced immunity from the virus was still inadequate due to lack of evidence concerning the level and longevity of antibody levels against COVID-19 after infected by the virus. In this context, these plans were slowed down to allow evidence to accumulate about the efficacy of natural immunity to protect people.[140]

In the meantime, there was considerable investment in efforts to develop vaccine against COVID-19 to protect people through vaccine-induced immunity. On 7 October 2020, Estonia and the World Health Organization (WHO) announced a collaboration to develop a digitally enhanced international certificate of vaccination to help strengthen the effectiveness of the COVAX initiative, which provides COVID-19 vaccines to poorer countries.[141]

The WHO eventually decided to discontinue this project, because the impacts and effectiveness of digital vaccine passports could not be estimated. It also pointed to several scientific, technical and societal concerns with the idea of an international digital vaccine passport system, including the fact that it could prevent citizens of countries unable to secure a vaccine supply from studying, working or travelling abroad.[142]

In November 2020, Pfizer and BioNTech announced their vaccine’s efficacy against COVID-19.[143] In December 2020, the first patient received COVID-19 vaccination in the UK.[144] In the same month, China approved its state-owned COVID vaccine for general use.[145]

Many other vaccines were quickly rolled out, including Moderna, Oxford AstraZeneca and Sputnik V. Countries aimed to roll out vaccination programmes as rapidly as possible to bring down numbers of deaths and cases, and facilitate the easing of COVID-19 restrictions.[146]

This re-energised the idea of establishing national and regional digital vaccine passport systems – among governments, but also among universities, retailers and airlines that sought an alternative to lockdowns.[147]

Despite the lack of scientific evidence on their effectiveness, the majority of countries in our sample eventually introduced digital vaccine passports, with two main purposes: to create a sense of security and to increase vaccine uptake when ending lockdowns.[148]

Unsurprisingly, technology companies raced towards building digital vaccine passports to be used domestically and internationally.[149] The digital identity industry strongly advocated for the introduction of digital vaccine passports.[150] Their argument in support of this was that, if enacted successfully, digital vaccine passports could prove the feasibility of national, regional and international schemes based on proving one’s identity and health status digitally.[151]

Private companies went on to build vaccine passports with the potential to be used in various industries as well by governments, for example, the International Air Transport Association’s Travel Pass app for international travel.[152]

Vaccine passports are not a new concept: paper vaccine passports have been around since the development of smallpox vaccines in the eighteenth century.[153] Although yellow fever is the only disease specified in the International Health Regulations (2005) for which countries may require proof of vaccination as a condition of entry, in the event of outbreaks the WHO recommends that countries ask for proof of vaccines.[154]

COVID-19 vaccine passports are the first digital health certificates that indicate someone’s vaccination against a particular disease. Due to their data-driven digital infrastructure, the health information of individuals can be easily collected, stored and shared. Digital infrastructure of COVID-19 vaccine passports caused public controversy.

When digital vaccine passports emerged, arguments offered in support of them included that they could: allow countries to lift lockdown measures more safely; enable those at lower risk of infection and transmission to help to restart local economies; and allow people to re-engage in social contact with reduced risk and anxiety.

Using a digital rather than a paper-based approach would accommodate future changes in policy, for example vaccine passes expiring or being re-enabled after subsequent infections, based on individual circumstances, countrywide policies or emerging scientific evidence.

Arguments against digital vaccine passports highlighted their potential risks and challenges. These included creating a two-tier society between unvaccinated and vaccinated people, amplifying digital exclusion, and risking privacy and personal freedoms. Experts also highlighted that vaccine passports attempt to manage risks and permit or restrict liberties at an individual level, rather than supporting collective action and contextual measures.

They categorise an individual as lower risk based on their vaccine or test status rather than taking into account a more contextual risk of local infection in a given area. They could also reduce the likelihood of individuals observing social distancing or mask wearing to protect themselves and others.[155]

Digital vaccine passport systems carry specific risks because they gather and store medical and other forms of sensitive personal information that can be compromised through hacking, leaking or selling of data to third parties. They can also be linked to other digital systems that store personal data, for example, the digital identity system Aadhaar in India and the health system Conecte SUS in Brazil.

Experts recommended that strong privacy-preserving technical designs and regulations were needed to prevent such problems, but these were challenging to establish at pace.[156]

These risks and challenges raised questions around public legitimacy and fuelled public resistance to digital vaccine passports in some countries, making it difficult for countries to gain public trust – particularly given the sharp rise in public discontent with governments and political systems due to the pressures of the pandemic.[157]

The Ada Lovelace Institute closely followed the debate regarding digital vaccine passports as they emerged. We conducted evidence reviews, convened workshops with scientists and experts, and published evidence-based research to support decision-making at pace.

Based on the evidence we gathered, we argued that although governments’ attempts to find digital solutions were understandable, rolling out these technologies without high standards of governance could lead to wider societal harms.

The expert deliberation we convened in 2021 suggested that governments should pause their digital vaccine passport plans until there was clear evidence that vaccines were effective in preventing transmission, and that they would be durable and effective against new variants of COVID-19.[158]

We also concluded that it was important to address public concerns and build public legitimacy through transparent adoption policies, secure technical designs and effective communication strategies.

Finally, we highlighted the risk of poorly governed vaccine passports being incorporated into broader systems of identification, and the wider implications of this for the UK and other countries (a risk that has been realised in various countries).[159]

Before proceeding to explaining whether the risks, aspirations and challenges outlined above have materialised, we need to identify the various digital vaccine restrictions and understand how these new technologies have been implemented across the world. In the next section, we discuss digital vaccine passport systems, and the restrictions they have enabled based on a person’s vaccination status or test results.

Types of digital vaccine passport systems and restrictions

In this section, we identify the types of digital vaccine passport systems and restrictions in 34 countries. All countries in our sample introduced digital vaccine passports between January and December 2021 – with varying adoption policies.

Digital vaccine passports were in use in two important public health contexts to either limit or enable individuals’ ability to access certain spaces and activities during the COVID-19 pandemic:

  1. Domestic vaccine passport schemes: providing a valid vaccine passport to prove immunity status when participating in public activities (for example, going to a restaurant).
  2. International vaccine passport schemes: providing a valid vaccine passport to show immunity status when travelling from one country to another.

The majority of the countries in our sample changed their vaccine passport schemes at multiple times throughout the pandemic.[160] For example, both Türkiye and France introduced digital vaccine passports in summer 2021, internationally for inbound travellers and domestically for residents to access particular spaces (for example, restaurant, museums, concert halls, etc.).

By spring 2022, both countries had lifted vaccine passport mandates domestically but still required inbound travellers to provide immunity proof to avoid self-isolation and testing.

By August 2022, digital vaccine passports were no longer in use or enforced in either country (although the infrastructure is still in place in both countries and can be reused at any time). At the time, China and New Zealand were still enforcing digital vaccine passports – to varying degrees – to maintain their relatively low number of deaths and cases by restricting residents’ eligibility for domestic activities and inbound travellers’ eligibility to visit.

Contrary to China and New Zealand’s severe vaccine passport schemes, many countries, especially European countries, implemented domestic vaccine passport schemes to ease COVID-19 measures and transition from lockdown measures, despite increasing number of cases and hospitalisations (for example, in summer 2022).[161]

We identified eight different vaccine passports systems that allowed or blocked freedoms for residents and inbound travellers in the 34 countries in our sample.

We have coded them according to the severity of their implementation.

Digital vaccine passport restrictions

  1. Available but not compulsory. In use but not enforced for inbound travellers and domestic use.
  2. Mandatory for inbound travellers. Not mandatory for domestic use.
  3. Not mandatory for inbound travellers. Domestic use decided by regional governments.
  4. Mandatory for inbound travellers unless they are nationals and/or residents. Domestic use decided by regional governments.
  5. Mandatory for inbound travellers. Domestic use decided by regional governments.
  6. Mandatory for inbound travellers unless they are nationals and/or residents. Domestic use decided at a federal level.
  7. Mandatory self-isolation for non-national inbound travellers, regardless of possession of vaccine passports.
  8. Mandatory self-isolation for non-national inbound travellers, regardless of vaccine passport. Federal policy for domestic use.

There is currently no universal vaccine passport scheme that can determine how and under what circumstances digital vaccine passports can be used internationally as well as for domestic purposes.[162]

In the absence of internationally accepted criteria, countries determined when and how to use digital vaccine passports themselves, leading to a wide range of adoption policies.

A map of the world that shows the introduction of vaccine passports in countries in our sample by quarter

  • Asian and European countries were among the first to introduce digital vaccine passports in early 2021
  • North and South America from mid-2021
  • Oceania from late 2021.

The different approaches to using digital vaccine passports in different countries stem from their different technical capabilities, politics, public tolerance, finance and, most importantly, approaches to pandemic management.

Countries with zero-COVID policies, for example China and New Zealand, implemented stringent vaccine passport policies along with closing borders and imposing strict lockdowns on residents to suppress transmission.[163]

Many countries relied on a combination of various measures at different phases of the pandemic. In 2023, all countries in our sample currently have either no or moderate measures in place and seem to have chosen a ‘living with COVID’ policy.

Despite the varying approaches, in all the countries in our sample the technological and legislative infrastructure of vaccine passports are still in place. This is important not only because vaccine passports can still be reused, but because they can be transformed into other forms of digital systems in the future.

Examples of how varying pandemic management approaches and political contexts affected digital vaccine passport systems across the world include:

  • Brazil: Former Brazilian president Bolsonaro was against vaccination in general.[164] This meant that most of the pressure for vaccination campaigns came from the federal regions. The judiciary also played a strong role in pressuring the government to take measures against COVID-19, including vaccination. A Supreme Court justice ruled that inbound travellers had to show digital or paper-based proof of vaccination against COVID-19.[165]
  • USA: Digital vaccine passports, particularly for domestic use, were a politically divisive issue in the USA. Some states banned vaccine mandates and the use of digital vaccine passports within their states. Citizens in these states could acquire paper-based vaccine passports to prove their vaccination status for international travel. Several studies demonstrated that political affiliation, perceived effectiveness of vaccines and education level shaped individuals’ attitudes towards digital vaccine passports. Unsurprisingly, fear of surveillance was prominent in determining whether people trusted the government and corporations with their personal data.[166] The federal US administration did not initiate a national domestic vaccine passport but was involved in efforts to establish standards for vaccine passports for international travel.
  • Italy: Italy was the first country in Europe to be hit by the COVID-19 pandemic.[167] The government was confronted with high numbers of hospitalisations and deaths, and faced criticism for being slow to act. It responded by taking stricter measures than many of its European counterparts, and so Italy had one of the strictest vaccine passport schemes in Europe. It separated each region into a coloured zone depending on how severe the rate of transmission and hospitalisation numbers were in that area. It operated a two-tiered green pass system. The ‘super green pass’ was valid proof of vaccination or recovery, the ‘green pass’ was proof of a negative COVID test. Different venues and activities required one or both of the passes.[168]
  • The EU: Member states in the EU experienced the pandemic differently – some countries had higher number of deaths, cases and hospitalisations than others. Vaccine uptake across the member states differs significantly.[169] While the EU Digital COVID Certificate helped the EU to reintroduce freedom of movement and revive the economy within the zone, member states have the liberty to implement vaccine passports domestically as they see fit. This led to considerable differences in domestic vaccine passport schemes across the EU zone.[170] For example, Romania, one of the least vaccinated countries in the EU, made digital vaccine passports mandatory for inbound national travellers for only a short period of time to address the surge in numbers of cases and deaths as lockdowns were ended. Finland, which had a high vaccination rate, required a digital vaccine passport for all inbound travellers, including nationals, for nine months before it stopped enforcing digital vaccine passports completely.

Effectiveness

Digital vaccine passports essentially demonstrate an individual’s transmission risk to other people.

A digital vaccine passport scheme relies on the assumption that an individual is a lower risk to others if they have been vaccinated (or if they have gained natural immunity after being infected with and recovering from the disease).

In early 2021, we argued that there was no clear evidence about whether being vaccinated reduced an individual’s risk of transmitting the disease. We suggested that governments should pause deploying vaccine passports until the evidence was clearer.[171]

We also called on governments to build evidence that considers the benefits and risks of digital vaccine passports – in particular, whether they would increase risky behaviours (for example, not observing social distance) by creating a false sense of security.

Despite this lack of evidence, many governments across the world moved forward to introduce digital vaccine passports in 2021.[172]

Policymakers saw digital vaccine passports as valuable public health tools, once the initial scientific trials of vaccines suggested that they would reduce the likelihood of severe symptoms, and hence hospitalisations and deaths.

This was critical for policymaking in many countries whose healthcare systems were under immense pressure.

At the same time, vaccine scepticism was on the rise in many countries. In this context, the idea developed that digital vaccine passport schemes would give people an incentive to get vaccinated. This represented a considerable shift in their purpose, from a digital health intervention aimed at reducing transmission to a behaviour control tool aimed at increase vaccine uptake.

Many countries considered mandatory vaccination for domestic activities as a way to increase uptake. For example, in January 2022, announcing domestic vaccine mandates, French President Macron stated ‘the unvaccinated, I really want to hassle them. And so, we will continue to do it, until the end.’[173]

Mandatory digital vaccine passport schemes raise the question of ‘whether that is ethically acceptable or instead may be an unacceptable form of coercion, detrimental to the right to free self-determination, which is guaranteed for any medical treatment, thus coming to resemble a sort of roundabout coercion’.[174]

In short, it was hoped that digital vaccine passports would positively impact public health in two main ways: (1) reducing transmission, hospitalisations and deaths, and (2) increasing vaccine uptake.

In this section, we will look at the evidence on the effectiveness of digital vaccine passports in both of these senses. We will then briefly explain several evidence gaps that prevent us from building a full understanding of digital vaccine passports’ overall impact on public health.

Impact of digital vaccine passports on reducing transmission, hospitalisations and deaths

In 2023, the scientific evidence on the efficacy of vaccines to reduce transmissions still needs to be elucidated. Although there is some evidence that being vaccinated makes it less likely that one will transmit the virus to others, experts largely agree that ‘a vaccinated person’s risk of transmitting the virus is not considerably lower than an unvaccinated person’.[175] [176] Yet there is strong evidence that vaccines are effective in protecting individuals from developing severe symptoms (although the experts say that their efficacy reduces over several months).[177]

Therefore, even if mandatory domestic vaccine passport schemes did not help to decrease rates of transmission, they might have reduced the pressure on public healthcare because fewer number of people needed medical care. This would only be the case if digital vaccine passports were indeed effective in increasing vaccine uptake (see next section below).

Vaccines have been found to be effective against new variants, but the level of effectiveness is unclear.[178] According to the WHO, there are five predominant variants of COVID-19 and more than 200 subvariants. The WHO also reports that it is becoming more difficult to monitor new variants, since many countries have stopped testing and surveillance.

The infrastructure and legislation of digital vaccine passports are still in place, meaning that they can be reused at any time.

But limited monitoring and research on (sub)variants raises concerns around vaccines’ durability and their ability to be used more widely Governments need to invest in building evidence on the vaccines’ efficacy against rapidly evolving variants if they decide to re-use digital vaccine passport.

Impact of digital vaccine passports on vaccine uptake

Digital vaccine passport systems had a mixed impact on vaccine uptake at an international level. Several countries reported a significant increase in vaccination after the introduction of digital vaccine passports. In France for example, after the digital vaccine passports were introduced, ‘the overall uptake of first doses… increased by around 15% in the last month following a lull in vaccinations.’[179]

Another study suggests that the vaccine passport requirement for domestic travelling and accessing different social settings led to higher vaccination rates in the majority of the EU countries.[180] However, levels of COVID-19 vaccine acceptance were low particularly in West Asia, North Africa, Russia, Africa and Eastern Europe despite the use of digital vaccine passports.[181]

For example, one out of four Russians continued to refuse vaccination despite the government’s plan to introduce mandatory digital vaccine passports for accessing certain spaces (for example, workplaces).[182] Similarly, in Nigeria, Bulgaria, Russia and Romania, black markets for fake passports were created by anti-vaxxers,[183] demonstrated the strength of resistance among some people to getting vaccinated or sharing their data. These examples indicate the importance of political and cultural contexts and urge us to avoid broad international conclusions.

Important evidence gaps

As well as vaccination, the scientific evidence shows that a wide range of measures can reduce the risk of COVID-19 transmission. How have vaccine passports affected individuals’ motivation to follow other COVID-19 protection measures? This question is fundamental: one of the major concerns about digital vaccine passports was that they might give people a false sense of security, leading them to stop following other important COVID-19 health measures such as wearing a face mask.

Some experts argue that digital vaccine passport schemes in the EU led to more infections because they led to increased social contact.[184] But studies that explore this were either conducted in the early phase of the pandemic or remain limited in their scope. This means that we cannot fully evaluate the impact of digital vaccine passports on public health behaviours, so we cannot weigh their benefits against the risks in a comprehensive manner.

To fill this evidence gap, we need studies that examine (and compare) unvaccinated and vaccinated people’s attitudes to other COVID-19 protection measures over time.

A systematic review of community engagement to support national and regional COVID-19 vaccination campaigns demonstrates that working with members (or representatives) of communities to co-design vaccination strategies, build trust in authorities and address misinformation is an effective way to increase vaccine uptake.

The review points to the success of several COVID-19 vaccination rollout programmes, including the United Nations High Commissioner for Refugees efforts to reach migrant workers and refugees, a female-led vaccination campaign for women in Sindh province in Pakistan and work with community leaders to reach out to the indigenous population in Malaysia.[185]

The standard and quality of countries’ healthcare systems also played a huge role in how successfully they tackled vaccine hesitancy. For example, Morocco’s pre-existing national immunisation programme, supported by a successful COVID-19 communications campaign, led to in higher vaccination rates in Morocco compared with other African countries.[186]

This raises another important question, which cannot be comprehensively answered due to limited evidence: were digital vaccine passport policies deployed at the expense of other (non-digital) interventions, such as targeted community-based vaccination programmes?

Governments’ ambition to increase vaccine uptake by using digital vaccine passport schemes (for example, by not allowing unvaccinated people to enter venues) raises the question of whether they expected digital vaccine passports to ‘fix’ the problem of vaccine hesitancy instead of working with communities and effectively communicating scientific evidence.

To comprehensively address this question, governments would need to provide detailed documentation of vaccination rollout programmes and activities and support expert evaluations of the risks and benefits of digital vaccine passport systems, compared with non-digital interventions like vaccination campaigns targeted at communities with high levels of vaccine hesitancy.

Our recommendations when digital vaccine passports emerged:

  • Build an in-depth understanding of the level of protection offered by individual vaccines in terms of duration, generalisability, efficacy regarding mutations and protection against transmission.
  • Build evidence of the benefits and risks of digital vaccine passports. For example, consider whether they reduce transmission but also increase risky behaviours (for example, not observing social distancing), with a new harmful effect.[187]

 

In 2023, the evidence on the effectiveness of digital vaccine passports reveals:

  • Countries initially aimed to use digital vaccine passports to score an individual’s transmission risk based on their vaccination status, test results or proof of recovery. They established digital vaccine passport schemes without clear evidence of the vaccine’s effectiveness in reducing a transmission risk. Governments hoped that even if vaccines did not reduce transmission risk, digital vaccine passports would increase vaccine uptake, and hence decrease an individual’s risk of developing severe symptoms and increase vaccine uptake.
  • Vaccines were effective at reducing the likelihood of developing severe symptoms, and therefore of hospitalisations and deaths. This meant that they decreased the pressure on health systems because fewer people required medical care.
  • However, there is no clear evidence that vaccinated people are less likely to transmit the virus than unvaccinated people, which means that vaccines have not reduced transmissions as hoped by governments and policymakers.
  • In some countries (for example, France) digital vaccine passport schemes increased vaccine uptake, but in other countries (for example, Russia and Romania) people resisted vaccinations despite digital vaccine passport restrictions. Black markets for fake digital vaccine passports were created in some places (for example, Italy, Nigeria and Romania). This demonstrates that we cannot reach broad international conclusions about digital vaccine passports’ impact on vaccine uptake.
  • Significant gaps in the evidence prevent us from weighing the benefits of digital vaccine passport systems against the harms. These include the impact of digital vaccine passports on other COVID-19 protection measures (for example, wearing mask) and whether governments relied on digital vaccine passport systems to increase vaccine uptake instead of establishing non-digital community-targeted interventions to address vaccine hesitancy.

 

Lessons learned:

To build evidence on the effectiveness of digital vaccine passports as part of the wider pandemic response strategy:

  • Support research and learning to understand the impact of digital vaccine passports on other COVID-19 protection measures (for example, wearing mask and observing social distancing).
  • Support research and learning to understand the impact of digital vaccine passports on non-digital interventions (for example, effective public communications to address vaccine hesitancy).
  • Use this impact evaluation to weigh up the risks and harms of digital vaccine passports and to help set standards and strategies for the future use of technology in public crises.

To ensure the effective use of technologies in future pandemics:

  • Invest in research and evaluation from the outset, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that digital technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a societal approach that goes beyond technical efficacy and takes account of people’s experiences.
  • Establish how to measure and monitor effectiveness by closely working with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation of technologies, both when first deployed and over time.

Public legitimacy

Public legitimacy was key to ensuring that digital vaccine passports were legitimate and effective health interventions. In the first two years of the pandemic, we conducted a survey and public deliberation research to investigate public attitudes to digital vaccine passports in the UK.

We found that digital vaccine passports needed to be supported by strong governance and accountability mechanisms to build public trust. Our work also highlighted public concern with regards to digital vaccine passport schemes’ potential negative impacts on marginalised and disadvantaged communities. We called on governments to build public trust and create social consensus on whether and how to use digital vaccine passports.[188]

Since then, wider evidence has emerged that complements our findings. For example, an IPSOS Mori survey from March 2021 found that minority ethnic communities in the UK were more concerned than white respondents about vaccine passports being used for surveillance.[189]

This reflects a general trend in UK society: minoritised and disadvantaged people trust public institutions less with personal data than the white majority do.[190] Unsurprisingly, there is also a link between people’s attitudes to digital vaccine passports and vaccine hesitancy.

Those who are less likely to take up the COVID-19 vaccine feel their sense of personal autonomy is threatened by mandatory vaccine passport schemes.[191]

It is difficult to draw conclusions about public acceptance of digital vaccine passports at an international level, since public legitimacy depends on existing legal and constitutional frameworks as well as moral, cultural and political factors in a society.

But we can say that more than 50% of countries in our sample experienced protests against digital vaccine passports and the restrictive measures that they enabled (for example, not being eligible to enter the workplace or travel without proof of vaccination), showing the widespread public resistance across the world.

Countries that saw such protests vary in terms of political cultures and attitudes to technology, including Italy, Russia, France, Nigeria and South Africa. In most cases, anti-digital vaccine passport protests started shortly after national or regional governments had announced mandatory schemes, demonstrating public resistance to using data-driven technology in everyday contexts.

Several studies demonstrated that people were less favourable towards domestic uses of digital vaccine passports than towards their use for international travel.

This was particularly the case for schemes that required people to use a digital vaccine passport to access work, education, and religious settings and activities.[192] Lack of trust in government and institutions, vaccine efficacy and digital vaccine passports’ effectiveness all contributed to public resistance to digital vaccine passport systems.[193]

Our recommendations when digital vaccine passports emerged:

  • Build public trust through strong regulation, effective public communication and consultation.[194]
  • Ensure social consensus on whether and how to use digital vaccine passports.

 

In 2023, the evidence on the public legitimacy of digital vaccine passports reveals that:

  • Many countries experienced protests against digital vaccine passports (more than half of the countries in our sample) and the restrictive measures that they enabled. This demonstrates the lack of public acceptance of, and social consensus around, digital vaccine passport systems.
  • Lack of trust in government and institutions, vaccine efficacy and digital vaccine passports’ effectiveness all contributed to public resistance to digital vaccine passports.[195]

 

Lesson learned:

  • Ensure that people’s rights and freedoms are safeguarded with strong regulations, oversight and redressal mechanisms. Effectively communicate the purpose and legislative and regulatory basis of health technologies to build public trust and social consensus.

Inequalities

Digital vaccine passports posed significant inequality risks, including discrimination based on immunity status, excess policing of citizens, and amplification of digital inequalities and other forms of societal inequalities.[196]

In this context, one of the major risks highlighted by the Ada Lovelace Institute was that mandatory vaccine passports could lead to discrimination against unvaccinated people. Mandatory vaccination policies were frequently adopted by (national or regional) governments or workplaces across the countries in our sample.[197]

For example, in November 2021, the Austrian government announced mobility restrictions for unvaccinated people.[198] The measure was ended in January 2022 due to dropping case numbers and decreasing pressure on hospitals. However, the government announced a vaccine mandate policy with penalties of up to €3,000 for anyone who refused to be vaccinated. The controversial law was never enforced due to civil unrest and international criticism.[199]

In Italy, people had to show a ‘green pass’, which included vaccination proof, recovery proof and a negative Polymerase Chain Reaction (PCR) test, to access workplaces between October and December 2021.

The policy officially ended on 1 May 2022, making it illegal for employers to ask for vaccine passports.[200] In 2021, the Moscow Department of Health declared that only vaccinated people could receive medical care.[201] The Mayor of Moscow also instituted a mandatory vaccine passport system for gaining entry to restaurants, bars and clubs after 11pm in the city.

In relation to digital exclusion, we recommended that if governments were to pursue digital vaccine passport plans, they should create non-digital (paper) alternatives for those with no or limited digital access and skills. We also recommended that plans should include different forms of immunity in vaccine passports – such as antigen test results – to prevent discrimination against unvaccinated people.[202]

In some countries, for example, Türkiye, although physical vaccine passports were available, people had to download their vaccination proof as an electronic PDF (portable document format) , which excluded those who were unable to use the internet.[203]

Some countries adopted good practices and policies to mitigate the inequality risks. In India, for example, the Supreme Court decided that vaccination could not be made compulsory for domestic activities and directed the federal government to provide publicly available information on any adverse effects of vaccination.[204]

The UK Government introduced a non-digital NHS COVID Pass letter.[205] Those who did not have access to a smartphone or internet could request this physical letter via telephone.

The European Union’s Digital COVID Certificate could be obtained after taking a biochemical test that demonstrates a form of immunity or lack of infection and hence does not discriminate against those who cannot be or refuse to be vaccinated. This made the Digital COVID Certificate available to wider population, as 25% of the EU population remained unvaccinated as of August 2022.[206]

Global inequalities

Tackling pandemics requires global cooperation. Effective collaboration is needed to fight diseases at regional and global levels.[207] Digital vaccine passports, which were used for border management in the name of public health, created vaccine nationalism, and as a result they amplified global inequalities.[208]

Digital vaccine passports did not emerge in a vacuum; state-centric perspectives that prioritise the ‘nation’s health’ by restricting or controlling certain communities and nations have existed for decades.[209] Securitising trends using the unprecedented compilation and analysis of personal data intensified following the 9/11 terrorist attack in New York.[210]

Countries compiled pandemic-related data about other countries to score risk and produce entry schemes for inbound travellers. This led to the emergence of an international digital vaccine passport scheme where individuals were linked to a verifiable test or vaccine.[211]

Low-income countries found it difficult to meet rigid standards for compliance due to low access to and uptake of vaccines.[212]

There is a positive correlation between a country’s GDP and the share of vaccinated individuals in the population.[213]

According to Our World in Data, when digital vaccine passports were introduced, the share of fully vaccinated people was 17% in Jamaica, 18% in Tunisia and 11% in Egypt.[214] At the other end of the scale, 56% of the population was fully vaccinated in Singapore, 32% in Italy and 37% in Germany.[215]

International digital vaccine passport schemes also resulted in new global tensions. The COVAX initiative led by the WHO, aimed at ensuring equitable access to COVID-19 treatments and vaccines through global collaboration.[216]

COVISHIELD, a COVID-19 vaccine manufactured in India, was distributed largely to African countries through the COVAX initiative. Nonetheless, the EU, which donated €500 million donation to support the initiative, did not authorise COVISHIELD as part of the EU Digital COVID Certificate system.[217] This meant that the digital vaccine passports of people who had received COVISHIELD in Africa were not recognised as valid in the EU, restricting their ability to travel to EU countries.

As of December 2022, Africa still had the slowest vaccination rate of any continent, with just 33% of the population receiving at least one dose of a vaccine.[218]

In this context, many low- and middle-income countries sought vaccines approved by the European Medicine Agency (EMA). This was challenging due to lack of financial means and the limited number of vaccine manufacturing companies.

The EU Digital COVID Certificate system eventually expanded to only 49 non-EU countries, including Monaco, Türkiye, the UK and Taiwan (to give a few examples from our sample).[219] These countries’ national vaccination programmes offered vaccines authorised for use by EMA in the EU.

Our recommendations when digital vaccine passports emerged:

  • Carefully consider the groups that might face discrimination if mandatory domestic and international vaccine passport policies are adopted (for example, unvaccinated people).
  • Make sure policies and interventions are in place to mitigate the amplification of societal and global inequalities – for example, provide paper-based vaccine certificates for people who are not able or not willing to use digital vaccine passports.[220]

 

In 2023, the evidence on the impact of digital vaccine passports on inequalities demonstrates that:

  • The majority of countries in our sample adopted mandatory domestic and international vaccine passport schemes at different stages of the pandemic, which restricted the freedoms of individuals.
  • Some countries in our sample (for example, the EU and UK) adopted physical digital vaccine passports and approved a biochemical test to demonstrate a form of immunity or lack of infection as part of their digital vaccine passports. These helped to mitigate the risk of discrimination against unvaccinated individuals and individuals who lack adequate digital access and skills.
  • Countries compiled pandemic-related data about other countries to score risk and produce entry schemes for inbound travellers. This led to the emergence of an international digital vaccine passport scheme where individuals were linked to a verifiable test or vaccine. Low-income countries found it difficult to meet rigid standards of compliance due to low access to and uptake of vaccines.

 

Lessons learned:

  • Address the needs of vulnerable groups and offer non-digital solutions where necessary to prevent discrimination and amplification of inequalities.
  • Consider the implications of national policies and practices relating to technologies at a global level. Cooperate with national, regional and international actors to make sure technologies do not reinforce existing global inequalities.

Governance, regulation and accountability

Like contact tracing apps, digital vaccine passports had implications for data privacy and human rights, provoking reasonable concerns about proportionality, legality and ethics.

Data protection regimes are based largely on principles that aim to protect rights and freedoms. Included within these is a set of principles and ‘best practices’ that guide data collection in disaster conditions. These include that:

  • measures are transparent and accountable
  • the limitations of rights are proportional to the harms they are intended to prevent or limit
  • data collection is minimised and time constrained
  • data is retained for research or public use purposes and unused personal data is destroyed
  • data is anonymised in such a way that individuals cannot be reidentified
  • third party sharing both within and outside of government is prevented.[221]

In the Checkpoints for vaccine passports report, we made a set of legislative, regulatory and technical recommendations in line with the principles outlined above.

We highlighted the importance of oversight mechanisms to ensure technical efficacy and security, as well as the enforcement of relevant regulations.[222] It is beyond the scope of this report to analyse country-specific regulations and how they were shaped by differences in legal systems and ethical and societal values. But there are several cross-cutting issues and reflections that are worth drawing attention to.

As far as we know, there were fewer incidents of repurposing data and privacy breaches in the case of digital vaccine passports than in relation to contact tracing apps. Yet in some countries, critics warned that data protection principles were not always followed despite relevant regulations being in place.[223] For example, central data systems had security flaws in some countries, for example, in Brazil and Jamaica, which resulted in people’s health records being hacked.[224]

The effectiveness of digital vaccine passports was critical when deciding whether they were proportionate to their intended purpose.[225] When they emerged, some bioethicists argued that digital vaccine passport policies were a justified restriction on civil liberties, since vaccinated people were unlikely to spread the disease and hence posed no risk to others’ right to life.[226]

However, as explained in the previous sections, the evidence does not confirm vaccines’ effectiveness at reducing transmission. And it is noteworthy that some places for example, Vietnam, successfully managed the disease without a focus on technology due to their pre-existing strong healthcare systems.[227]

Our evidence also reveals that although some countries established specific regulations for digital vaccine passports (for example, UK and Canada), this was not the case for most of the countries in our sample.

In many countries, digital vaccine passports were regulated through existing public laws, protocols and general data protection regulations.

This created concerns in those countries without data protection frameworks, for example, South Africa.[228]

In our sample of 34 countries, the EU Digital COVID Certificate regulation is the most comprehensive regulation. It clearly states when the vaccine passport scheme will end (June 2023).[229] It also provides detailed information regarding security safeguards and time limitation.

But it is important to note that the EU does not determine member states’ national policies on vaccine passport use, which means that countries can choose to keep the infrastructure and reuse digital vaccine passports domestically.

Our recommendations when digital vaccine passports emerged:

  • Use scientific evidence to justify the necessity and proportionality of digital vaccine passport systems.
  • Establish regulations with clear, specific and delimited purposes, and with clear sunset mechanisms .
  • Ensure best-practice design principles to ensure data minimisation, privacy and safety.
  • Ensure that strong regulations and regulatory bodies and redressal mechanisms are in place to safeguard individual freedoms and privacy.

 

In 2023, the evidence on governance, regulations and accountability of digital vaccine passports demonstrates that:

  • Only a handful of countries (for example, the UK and the EU) enacted specific regulations before rolling out digital vaccine passports.
  • In many countries, digital vaccine passports were regulated using existing public laws, protocols and general data protection regulations. This created concerns in countries without data protection frameworks, for example, South Africa.
  • There were fewer incidents of repurposing data and privacy breaches in the case of digital vaccine passports than there were in connection with contact tracing apps. But the lack of strong regulation or oversight mechanisms and poor design still resulted in data leakages, privacy breaches and repurposing of the technology in some countries (for example, hacking digital vaccine passport data in Brazil).

 

Lessons learned:

  • Justify the necessity and proportionality of technologies with sufficient relevant evidence in public health emergencies.
  • If technologies are found to be necessary and proportional and therefore justified, create specific guidelines and regulations. These guidelines and regulations should ensure that mechanisms for enforcement are in place as well as methods of legal redress.

Conclusions

Contact tracing apps and digital vaccine passports have been two of the most widely deployed technologies in COVID-19 pandemic response across the world.

They raised hopes through their potential to assist countries in their fight against the COVID-19 virus. At the same time, they provoked concerns about privacy, surveillance, equity and social control, because of the sensitive social and public health surveillance data they use – or are perceived to use.

In the first two years of the pandemic, the Ada Lovelace Institute extensively investigated the societal, legislative and regulatory challenges and risks of contact tracing apps and digital vaccine passports. We published nine reports containing a wide range of recommendations for governments and policymakers about what they should do mitigate these risks and challenges when using these two technologies.

This report builds on this earlier work. It synthesises the evidence on contact tracing apps and digital vaccine passports from a cross-section of 34 countries. The findings should guide governments, policymakers and international organisations when using data-driven technologies in the context of public emergencies, health and surveillance.

They should also support civil society organisations and those advocating for technologies that support fundamental rights and protections, public health and public benefit.

We also identify important gaps in the evidence base. COVID-19 was the first global health crisis of ‘the algorithmic age’, and evaluation and monitoring efforts fell short in understanding the effectiveness and impacts of the technologies holistically.

The evidence gaps identified in this report indicate the need to continue research and evaluation efforts, to retrospectively investigate the impact of COVID-19 technologies so that we can decide on their role in our societies, now and in the future. The gaps should also guide evaluation and monitoring frameworks when using technology in future pandemics and in broader contexts of public health and social care provision.

This report synthesises the evidence by focusing on four questions:

  1. Did the new technologies work?
  2. Did people accept them?
  3. How did they affect inequalities?
  4. Were they well governed and accountable?

The limited and inconsistent evidence base and the wide-ranging, international scope present some challenges to answering these questions. Using a wide range of resources, we aim to provide some balance and context to compensate for missing information.

These resources include the media, policy papers, findings from the Ada Lovelace Institute’s workshops, evidence reviews of academic and grey literature, and material submitted to international calls for evidence.

We illustrate the findings on both contact tracing apps and digital vaccine passports with policy and practice examples from the sample countries.

Within the evidence base, the two technologies were implemented using a wide range of technical infrastructures and adoption policies. Despite these divergences and the often hard-to-uncover evidence, there are important cross-cutting findings that can support current and future decision-making around pandemic preparedness, and health and social care provision more broadly.

Cross-cutting findings

Effectiveness: did COVID-19 technologies work?

  • Digital vaccine passports and contact tracing apps were – of necessity – rolled out quickly, but without consideration of what evidence would be required to demonstrate their effectiveness. There was insufficient consideration and no consensus reached on how to define, monitor, evaluate or demonstrate their effectiveness­ and impacts.
  • There are indications of the effectiveness of some technologies, for example the NHS COVID-19 app (used in England and Wales). However, the limited evidence base makes it hard to evaluate their technical efficacy or epidemiological impact overall at an international level.
  • The technologies were not well integrated within broader public health systems and pandemic management strategies, and this reduced their effectiveness. However, the evidence on this is limited in most of the countries in our sample (with a few exceptions, for example Brazil and India), and we do not have clear evidence to compare COVID-19 technologies with non-digital interventions and weigh up their relative benefits and harms.
  • It is not clear whether COVID-19 technologies resulted in positive change in people’s health behaviours (for example, whether people self-isolated after receiving an alert from a contact tracing app).
  • It is also not clear if public support was impacted by the apps’ technical properties, or the associated policies and implementations.

Public legitimacy: Did people accept COVID-19 technologies?

  • Public legitimacy was key to ensuring the success of these technologies, affecting uptake and behaviour.
  • The use of digital vaccine passports to enforce restrictions on liberty and increased surveillance caused concern. There were protests against them, and the restrictive policies they enabled, in more than half the countries in our sample.
  • Public acceptance of contact tracing apps and digital vaccine passports depended on trust in their effectiveness, as well as trust in governments and institutions to safeguard civil rights and liberties. Individuals and communities who encounter structural inequalities are less likely to trust government institutions and the public health advice they offer. Not surprisingly, these groups were less likely than the general population to use these technologies.
  • The lack of targeted public communications resulted in poor understanding of the purpose and technical properties of COVID-19 technologies. This reduced public acceptance and social consensus around whether and how to use the technologies.

Inequalities: How did COVID-19 technologies affect inequalities?

  • Some social groups faced barriers to accessing, using or following the guidelines for contact tracing apps and digital vaccine passports, including unvaccinated people, people structurally excluded from sufficient digital access or skills, and people who could not self-isolate at home due to financial constraints. A small number of sample countries adopted policies and practices to mitigate the risk of widening existing inequalities. For example, the EU allowed paper-based Digital COVID Certificates for those without sufficient digital access and skills.
  • This raises the question of whether these technologies widened health and other societal inequalities. In the majority of sample countries, there is no clear evidence as to whether governments adopted effective interventions to help those who were less able to use or benefit from these technologies (for example, whether financial support was provided for those who could not self-isolate after receiving an exposure alert due to not being able to work from home).
  • The majority of sample countries requested proof of vaccination from inbound travellers before allowing unconditional entry (that is, without a quarantine or self-isolation period) at some stage of the pandemic. This amplified global inequalities by discriminating against the residents of countries that could not secure adequate vaccine supply or had low vaccine uptake – specifically, many African countries.

Governance, regulation and accountability: Were COVID-19 technologies well governed and accountable?

  • Contact tracing apps and digital vaccine passports combine health information with social or surveillance data. As they limit rights (for example, by blocking access to travel or entrance to a venue for people who do not have a digital vaccine passport), they must be proportional. This means striking a balance between limitations of rights, potential harms and intended purpose. To achieve this, it is essential that they are governed by robust legislation, regulation and oversight mechanisms, and that there are clear sunset mechanisms in place to determine when they no longer need to be used.
  • Most countries in our sample governed these technologies in line with pre-existing legislative frameworks, which were not always comprehensive. Only a few countries enacted robust regulations and oversight mechanisms specifically governing contact tracing apps and digital vaccine passports, including the UK, EU member states, Taiwan and South Korea.
  • The lack of robust data governance frameworks, regulation and oversight mechanisms led to lack of clarity about who was accountable for misuse or poor performance of COVID-19 technologies. Not surprisingly, there were incidents of data leaks, technical errors and data being reused for other purposes. For example, contact tracing app data was used in police investigations in Singapore and Germany, and sold to third parties for commercial purposes in the USA.[230]
  • Many governments relied on private technology companies to develop and deploy these technologies, demonstrating and reinforcing the industry’s influence and the power located in digital infrastructure.

Lessons

In light of these findings, there are clear lessons for governments and policymakers deciding how to use digital vaccine passports and contact tracing apps in the future.

These lessons may also apply more generally to the development and deployment of new data-driven technologies and approaches.

Effectiveness

To build evidence on the effectiveness of contact tracing apps and digital vaccine passports:

  • Support research and learning efforts on impact of these technologies on people’s health behaviours.
  • Understand the impacts of apps’ technical properties, and of policies and approaches to implementation, on people’s acceptance of, and experiences of, these technologies in specific socio-cultural contexts and across geographic locations.
  • Weigh up their benefits and harms by considering their role within the broader COVID-19 response and comparing with non-digital interventions (for example, manual contact tracing).
  • Use this impact evaluation to help set standards and strategies for the future use of these technologies in public crises.

To ensure the effective use of technology in future pandemics:

  • Invest in research and evaluation from the start, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a human-centred approach that goes beyond technical efficacy and builds an understanding of people’s experiences.
  • Establish how to measure and monitor effectiveness by working closely with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation.

Public legitimacy

To improve public acceptance:

  • Build public trust by publicly setting out guidance and enacting clear law about permitted and restricted uses and mechanisms to support rights, and redress and tackle legal issues.
  • Effectively communicate the purpose of using technology in public crises, including the technical infrastructure and legislative framework of specific technologies, to address public hesitancy and create social consensus.

Inequalities

To avoid making societal inequalities worse:

  • Create monitoring mechanisms that specifically address the impact of technology on inequalities. Monitor the impact on public health behaviours, particularly in relation to social groups who are more likely to encounter health and other forms of social inequalities.
  • Use the impact evidence to identify marginalised and disadvantaged communities and to establish strong public health services, interventions and social policies to support them.

To avoid creating or reinforcing global inequalities and tensions:

  • Harmonise global, national and regional regulatory tools and mechanisms to address global inequalities and tensions.

Governance and accountability

To ensure that individual rights and freedoms are protected:

  • Establish strong data governance frameworks and make sure that regulatory bodies and clear sunset mechanisms are in place.
  • Create specific guidelines and laws to make sure that technology developers follow privacy-by-design and ethics-by-design principles, and that effective monitoring and evaluation frameworks and sunset mechanisms are in place for the deployment of technologies.
  • Build clear evidence about the effectiveness of new technologies to make sure that their use is proportionate to their intended results.

To reverse the growing power imbalance between governments and the technology industry:

  • Develop the public sector’s technical literacy and ability to create technical infrastructure. This does not mean that the private sector should be excluded from developing technologies related to public health, but it is crucial that technical infrastructure and governance are effectively co-designed by government, civil society and private industry.

The legacy of COVID-19 technologies? Outstanding questions

This report synthesises evidence that has emerged on contact tracing apps and digital vaccine passports from 2020 to 2023. These technologies have short histories, but they have potential long-term, societal implications and bring opportunities as well as challenges.

In this research we have attempted to uncover evidence of existing practices rather than speculating about the potential long-term impacts.

In the first two years of the pandemic, the Ada Lovelace Institute raised concerns about the potential risks and negative longer-term implications of COVID-19 technologies for society, beyond the COVID-19 pandemic. The main concerns were about:

  • repurposing of digital vaccine passports and contact tracing apps beyond the health context, such as for generalised surveillance
  • expanding or transforming of digital vaccine passports into wider digital identity systems by allowing digital vaccine passports to ‘set precedents and norms that influence and accelerate the creation of other systems for identification and surveillance’
  • damaging public trust in health and social data-sharing technologies if these technologies were mismanaged, repurposed or ineffective.[231]

In this section, we identify three outstanding research questions which would allow these three potential longer-term risks and implications. Addressing these questions will require consistent research and thinking on the evolution of COVID-19 technologies and their longer-term implications for society and technology.

Governments, civil society and the technology industry should consider the following under-researched questions, and should work together to increase understanding of contact tracing apps and digital vaccine passports and their long-term impact.

Question 1: Will contact tracing apps and digital vaccine passports continue to be used? If so, what will happen to the collected data?

Only a minority of countries, including Australia, Canada and Estonia,[232] have decommissioned their contact tracing apps and deleted the data collected. Digital vaccine passport infrastructure is still in place in many countries across the world, despite most countries having adopted a ‘living with COVID’ policy.

It is important to consider the current and future objectives of governments that are preserving these technological infrastructures, as well as how they intend to use the collected data beyond the pandemic. Given that most countries in our sample did not enact strong regulations with sunset clauses that restrict use and clarify structures or guidance to support deletion, it is crucial that we continue to monitor the future uses of these technologies and ensure that they are not repurposed beyond the health context.

Question 2: How will the infrastructure of COVID-19 technologies and related regulation persist in future health data and digital identity systems?

Digital vaccine passports have accelerated moves towards digital identity schemes in many countries and regional blocs.[233] In Saudi Arabia, the Tawakkalna contact tracing app has been transformed into a comprehensive digital identity system, which received a public service award from the United Nations for institutional resilience and innovative responses to the COVID-19 pandemic.[234]

The African Union, which built the My COVID Pass vaccine passport app in collaboration with African Centres for Disease Control and Prevention, is working towards building a digital ID framework for the African continent. The EU introduced uniform and inter-operable proofs of vaccination through the EU Digital COVID Certificate .

It is not yet clear what the societal implications of these changes of use are, or how they will affect fundamental rights and protections. Following the Digital COVID Certificate’s perceived success among policymakers, the European Commission plans to introduce an EU digital wallet that will give every EU citizen digital identity credentials that are recognised throughout the EU zone.

In some countries, healthcare systems have been transformed as a result of COVID-19 technologies. India has transformed its contact tracing app Aarogya Setu to become the nation’s health app.[235]

In the UK, data and AI have been central to the Government’s response to the pandemic. This has accelerated proposals to use health data for research and planning services. NHS England has initiated a ‘federated data platform’. This will enable NHS organisations to share their operational data through software.

It is hoped that researchers and experts from academia, industry and the charity sector will use the data gathered on the platform for research and analysis to improve the health sector in England.[236]

The federated data platform initiative has been recognised for its potential to transform the healthcare system, but it has also caused concerns about accountability and trustworthiness, as patients’ data will be accessible to many stakeholders. [237] These include private technology companies like Palantir, which has been reported as not always being transparent in how it gathers, analyses and uses people’s data.[238]

These changes in digital identity and health ecosystems can provide significant economic and societal benefits to individuals and nations.[239] But they should be well designed and governed in order to benefit everyone in society. In this context, it is necessary to continue monitoring the evolution of COVID-19 technologies into new digital platforms and to understand their legislative, technical and societal legacies.

Question 3: How have COVID-19 technologies affected public’s attitudes towards data-driven technologies in general?

There is a lot of research on public attitudes towards  COVID-19 technologies. This body of research was largely undertaken in the first years of the pandemic.[240] But, the question of whether, and how, they have affected people’s attitudes towards data-driven technologies beyond the pandemic has not had much attention.

People had to use these technologies in their everyday lives to prove their identity and share their health and other kinds of personal information. But, as demonstrated in this report, there have been incidents that might have damaged people’s confidence in the technologies’ safety and effectiveness.

In this context, we believe that it is crucial to continue to reflect on COVID-19 technologies’ persistent impacts on public attitudes towards data-driven technologies – particularly, those technologies that entail sensitive personal data.

Methodology

In 2020 and 2021, the Ada Lovelace Institute conducted extensive research on COVID-19 technologies. We organised workshops and webinars, and conducted public attitudes research, evidence reviews and desk research. We published nine reports and two monitors. This body of research highlighted the risks and challenges these  technologies posed and made policy recommendations to ensure that they would not cause or exacerbate harms and would benefit everyone in society equally.

In the first two years of the pandemic, many countries rolled out digital vaccine passports and contact tracing apps, as demonstrated in ‘International monitor: vaccine passports and COVID-19 status apps’.[241] In January 2022, as we were entering the third year of the pandemic, we adjusted the scope and objectives of the COVID-19 technologies project. In the first two years of the pandemic, we had focused on the benefits, risks and challenges; now we started focusing on the lessons learned from these technologies from January 2022 onwards. We aimed to address the following questions:

  1. Did COVID-19 technologies work? Were they effective public health tools?
  2. Did people accept them?
  3. How did they affect inequalities?
  4. Were they governed well and with accountability?
  5. What lessons can we learn from the deployment and uses of these new technologies?

Sampling

We aimed for regional representation in our sample. We decided to focus on policies and practices in 34 countries in total. We based our sampling on geographical regions of North Africa, Central Africa, South Africa, South East Asia, Central Asia, East Asia, North America, South America, Eastern Europe, European Union, West Asia, North Africa and Oceania.

Relying on Our World in Data[242] datasets on total deaths, total cases and the share of people who had completed the initial vaccine protocol in 194 countries on 5 June 2022, we created a pandemic impact score for each country, giving equal weight to each of the three variables.

In each geographical region, we then selected two countries with the highest impact score, two countries with medium impact score, and two countries with low impact score for detailed review.

Methods and evidence

This research project encompasses evidence from 34 countries (see the list of the countries in our sample).

Unsurprisingly, the amount and type of evidence on each country varies significantly. Our aim in this research project is not to compare these countries with very different technical infrastructures, political cultures and pandemic management strategies, but to have a number of shared criteria against which we can assess the policies, practices and technical infrastructure in these countries.

With this aim in mind, we established a list of data categories to collect country-specific information:

  • introduction date of vaccine passports
  • end date of vaccine passport regulations
  • protests against vaccine passports or contact tracing apps
  • implementations of vaccine passports, for example, being mandatory in workplaces, for international travel, etc.
  • cumulative number of cases when digital vaccine passports were introduced
  • cumulative number of deaths when digital vaccine passports were introduced
  • share of the vaccinated people when digital vaccine passports were introduced
  • whether there was a government-launched contact tracing app
  • technical infrastructure of contact tracing apps
  • reported cases of surveillance
  • reported cases of repurposing data
  • reported cases of rights infringements
  • evidence on whether COVID-19 technologies increased societal inequalities (for example, around digital exclusion)
  • evidence on whether COVID-19 technologies increased global inequalities
  • evidence on the effectiveness of digital vaccine passports and contact tracing apps.

We used the following methods and resources to gather evidence on the data categories outlined above:

External datasets

We used quantitative datasets of other organisations’ data trackers and policy monitors for the following data categories:

  • proportion of the vaccinated people from Our World in Data.[243]
  • COVID restrictions (for example, school closures, lockdowns, etc.) from Blavatnik School of Government, Oxford University.[244]
  • cumulative number of cases from Our World in Data.[245]
  • cumulative number of deaths from Our World in Data.[246]

Call for evidence

In July 2022, we announced an international call for input on the effectiveness and social impact of digital vaccine passports and contact tracing apps. We incorporated the relevant evidence submitted to this call into the evidence base. For some countries, the evidence submitted was helpful as it either provided us with the missing information or confirmed that the respective country did not have an official regulation (or protocol) to govern vaccine passports or contact tracing apps.

We also worked with some of the individuals and organisations that submitted evidence as consultants to acquire further information on their respective country of expertise.

Workshop

We organised a workshop for evidence building in October 2022. The workshop aimed to discuss the effectiveness of contact tracing apps with experts from the disciplines of epidemiology, cybersecurity, public health, law and media and communications.

The aim of the workshop was to deliberate on the effectiveness of contact tracing apps in Europe. The multidisciplinary background of the workshop participants allowed a focus on the effectiveness beyond technical efficacy by considering the social, legislative and regulatory impacts of apps.

Desk research

Between August 2022 and January 2023 we conducted multiple, structured internet search queries using a set of keywords for each country in our sample. These keywords include ‘vaccine certificate’, ‘vaccine passport’, ‘immunity certificate’, ‘digital contact tracing’, ‘contact tracing app’, ‘COVID technologies’ and ‘the name of the country’.

This approach to desk research enabled collection and analysis of evidence from three different types of resources: media news, government websites, and academic and grey literature (produced by organisations who are not traditional publishers, including government documents, or third-sector organisation reports).

Limitations

There are 34 countries in this research sample. Although the sampling covers every continent, as discussed in the sampling section, we do not claim that our country-specific findings are representative of continents, regions or political blocs. Similarly, we also do not claim exhaustive evidence on developments in every country.

We also recognise that as a UK-based organisation, there might be barriers to discovering evidence emerging from various parts of the world. Our qualitative evidence on media reports in particular is largely in the English language – although there are a few exceptions. We worked with consultants from Brazil, India, Egypt, China and South Africa who provided us with non-English language media and government reports that we had not been able to capture through desk research.

The language barrier also emerged in our policy analysis. We aimed to collect data on policies and regulations from government websites and official policy papers. We used online translation software to conduct research in the official languages of the countries in our sample.

The low rate of success in discovering official policy papers of countries indicates that there are limitations to this method. Not all governments made policies and practices of contact tracing apps and digital vaccine passports publicly available. In this context, while the low amount of policy papers we gathered is partly due to the language barrier, it also relates to governments’ lack of transparency about the uses and governance of these technologies.

Acknowledgements

This report was lead-authored by Melis Mevsimler, with substantive contributions from Bárbara Prado Simão, Dr Nagla Rizk, Gabriella Razzano and Prateek Waghre, who provided evidence and analysis as consultants.

Participants in the workshop:

Professor Christophe Fraser, University of Oxford

Professor Susan Landau, Tufts University

Dr Frans Folkvord, Tilburg University

Claudia Wladdimiro Quevedo, Uppsala University

Dr Simon Williams, Swansea University

Francisco Lupianez Villanueva, Open University of Catalonia

Krzysztof Izdebski, Open Spending EU Coalition

Dr Stephen Farrell, Trinity College Dublin

Dr Laszlo Horvath, Birbeck University

Dr Mustafa Al-Haboubi, London School of Hygiene & Tropical Science

Danqi Guo, Free University of Berlin

Dr Federica Lucivero, University of Oxford

Shahrzad Seyfafheji, Bilkent University

Dr Agata Ferretti, ETH Zurich

Yasemin Gumus Agca, Bilkent University

Boudewijn van Eerd, AWO

Peer reviewers:

Eleftherios Chelioudakis, AWO

Hunter Dowart, Bird & Bird

Professor Ana Beduschi, University of Exeter


Footnotes

[1] Carly Kind, ‘What will the first pandemic of the algorithmic age mean for data governance?’ (Ada Lovelace Institute, 2 April 2020) www.adalovelaceinstitute.org/blog/first-pandemic-of-the-algorithmic-age-data-governance/#:~:text=Coronavirus%20is%20the%20first%20pandemic,its%20detection%2C%20treatment%20and%20prevention accessed 12 April 2023.

[2] The BMJ, ‘Artificial intelligence and Covid-19’, www.bmj.com/AICOVID19 accessed 31 March 2023.

[3] For example, G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2021) 32:1 Critical Public Health 31, https://doi.org/10.1080/09581596.2021.1909707; MC Mills and T Ruttanauer, ‘The Effect of Mandatory COVID-19 Certificates on Vaccine Uptakes: Synthetic-Control Modelling of Six Countries’ (2022) 7:1 The Lancet 15, https://doi.org/10.1016/S2468-2667(21)00273-5.

[4] ‘COVID-19 Law Lab’ https://covidlawlab.org accessed 31 March 2023; ‘Lex-Atlas: Covid-19’ https://lexatlas-c19.org accessed 31 March 2023; ‘Digital Global Health and Humanitarianism Lab (DGHH Lab)’ https://dghhlab.com/publications/#PUB-DRCOVID19 accessed 31 March 2023.

[5] AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[6] MIT Technology Review, ‘Covid Tracing Tracker’ www.technologyreview.com/tag/covid-tracing-tracker accessed 31 March 2023.

[7] World Health Organization, ‘Statement on the fourteenth meeting of the International Health Regulations (2005) Emergency Committee regarding the coronavirus disease (COVID-19) pandemic’ (WHO, 30 January 2023) www.who.int/news/item/30-01-2023-statement-on-the-fourteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic accessed 31 March 2023.

[8] World Health Organization, ‘Statement on the fifteenth meeting of the IHR (2005) Emergency Committee on the COVID-19 pandemic’, (WHO 5 May 2023) https://www.who.int/news/item/05-05-2023-statement-on-the-fifteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic accessed 31 May 2023

[9] GOVLAB and Knight Foundation, ‘The #Data4Covid19 Review’ https://review.data4covid19.org accessed 12 April 2023.

[10] M Shahroz and others, ‘COVID-19 Digital Contact Tracing Applications and Techniques: A Review Post Initial Deployments’ (2021) 5 Transportation Engineering 100072, https://doi.org/10.1016/j.treng.2021.100072.

[11] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[12] A Hussain, ‘TraceTogether data used by police in one murder case: Vivian Balakrishnan (Yahoo! News, 5 January 2021) https://uk.style.yahoo.com/trace-together-data-used-by-police-in-one-murder-case-vivian-084954246.html?guccounter=2 accessed 12 April 2023; DW, ‘German police under fire for misuse of COVID app’ DW (11 January 2022) www.dw.com/en/german-police-under-fire-for-misuse-of-covid-contact-tracing-app/a-60393597 accessed 31 March 2023.

[13] Carly Kind, ‘What will the first pandemic of the algorithmic age mean for data governance?’ (Ada Lovelace Institute, 2 April 2020) www.adalovelaceinstitute.org/blog/first-pandemic-of-the-algorithmic-age-data-governance/#:~:text=Coronavirus%20is%20the%20first%20pandemic,its%20detection%2C%20treatment%20and%20prevention accessed 26 April 2023.

[14] The BMJ, ‘Artificial intelligence and covid-19’, www.bmj.com/AICOVID19 accessed 31 March 2023.

[15] LO Danquah and others, ‘Use of a Mobile Application for Ebola Contact Tracing and Monitoring in Northern Sierra Leone: A Proof-of-Concept Study’ (2019) 19 BMC Infectious Diseases 810, https://doi.org/10.1186/s12879-019-4354-z.

[16] Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 26 April 2023.

[17] F Yang, L. Heemsbergen and R Fordyce, ‘Comparative Analysis of China’s Health Code, Australia’s COVIDSafe and New Zealand’s COVID Tracer Surveillance App: A New Corona of Public Health Governmentality?’ (2020) 178:1 Media International Australia 182, 10.1177/1329878X20968277.

[18] F Yang, L Heemsbergen and R Fordyce, ‘Comparative Analysis of China’s Health Code, Australia’s COVIDSafe and New Zealand’s COVID Tracer Surveillance App: A New Corona of Public Health Governmentality?’ (2020) 178:1 Media International Australia 182, 10.1177/1329878X20968277.

[19] Ada Lovelace Institute, ‘Health data and COVID-19 technologies’ https://www.adalovelaceinstitute.org/our-work/programmes/health-data-covid-19-tech/
accessed 31 May 2023.

[20] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 30 March 2023.

[21] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 30 March 2023; Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 30 March 2023.

[22] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023; ‘Exit through the App Store? COVID-19 Rapid Evidence Review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 12 April 2023; ‘No Green Lights, No Red Lines’ (2020) www.adalovelaceinstitute.org/report/covid-19-no-green-lights-no-red-lines accessed 12 April 2023; ‘Confidence in a Crisis? Building Public Trust in a Contact Tracing App’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 12 April 2023.

[23] DW, ‘German police under fire for misuse of COVID app’ DW (11 January 2022) www.dw.com/en/german-police-under-fire-for-misuse-of-covid-contact-tracing-app/a-60393597 accessed 31 March 2023; E Tham, ‘China Bank Protest Stopped by Health Codes Turning Red, Depositors Say’ (Reuters, 16 June 2022) www.reuters.com/world/china/china-bank-protest-stopped-by-health-codes-turning-red-depositors-say-2022-06-14 accessed 31 March 2023.

[24] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.orgaccessed 31 May 2023.

[25] Ada Lovelace Institute, ‘Health data and COVID-19 technologies’  https://www.adalovelaceinstitute.org/our-work/programmes/health-data-covid-19-tech accessed 31 May 2023.

[26] Centers for Disease Control and Prevention ‘Contact Tracing’ (2022) www.cdc.gov/coronavirus/2019-ncov/easy-to-read/contact-tracing.html accessed 31 March 2023.

[27] M Hunter, ‘Track and Trace, Trial and Error: Assessing South Africa’s Approaches to Privacy in Covid-19 Digital Contact Tracing’ (December 2020) www.researchgate.net/publication/350896038_Track_and_trace_trial_and_error_Assessing_South_Africa%27s_approaches_to_privacy_in_Covid-19_digital_contact_tracing accessed 31 March 2023.

[28] Some areas used manual contact tracing effectively, for example Vietnam and the Indian state of Kerala. See G Razzano, ‘Digital hegemonies for COVID-19’ (Global Data Justice, 5 November 2020) https://globaldatajustice.org/gdj/188 accessed 31 March 2023.

[29] C Yang, ‘Digital Contact Tracing in the Pandemic Cities: Problematizing the Regime of Traceability in South Korea’ (2022) 9:1 Big Data & Society https://doi.org/10.1177/20539517221089294.

[30] Freedom House ‘Freedom on the net 2021: South Africa’ (2021) https://freedomhouse.org/country/south-africa/freedom-net/2021 accessed 31 March 2023.

[31] M Hunter, ‘Track and Trace, Trial and Error: Assessing South Africa’s Approaches to Privacy in Covid-19 Digital Contact Tracing’ (December 2020) www.researchgate.net/publication/350896038_Track_and_trace_trial_and_error_Assessing_South_Africa%27s_approaches_to_privacy_in_Covid-19_digital_contact_tracing accessed 31 March 2023.

[32] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 9 June 2023.

[33] Ada Lovelace Institute, ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (4 May 2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 31 March 2023.

[34] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023), https://covid19.adalovelaceinstitute.org  accessed 31 May 2023

[35] M Ciucci and F Gouarderes, ‘National COVID-19 Contact Tracing Apps’ (Think Tank European Parliament, 15 May 2020) www.europarl.europa.eu/thinktank/en/document/IPOL_BRI(2020)652711 accessed 31 March 2023.

[36] M Briers, C Holmes and C Fraser, ‘Demonstrating the impact of the NHS COVID-19 app: Statistical analysis from researchers supporting the development of the NHS COVID-19 app’ (The Alan Turing Institute, 2020) www.turing.ac.uk/blog/demonstrating-impact-nhs-covid-19-app accessed 31 March 2023.

[37] M Veale, ‘The English Law of QR Codes: Presence Tracing and Digital Divides’ (Lex-Atlas: Covid-19, 25 May 2021) https://lexatlas-c19.org/the-english-law-of-qr-codes accessed 31 March 2023.

[38] M Veale, ‘The English Law of QR Codes: Presence Tracing and Digital Divides’ (Lex-Atlas: Covid-19, 25 May 2021) https://lexatlas-c19.org/the-english-law-of-qr-codes accessed 31 March 2023.

[39] Ministry of Health, ‘Ministry of Health to trial Near Field Communication (NFC) tap in technology with NZ COVID Tracer’ (Ministry of Health, New Zealand, 2021) www.health.govt.nz/news-media/media-releases/ministry-health-trial-near-field-communication-nfc-tap-technology-nz-covid-tracer accessed 14 April 2023.

[40] We draw on evidence on a cross-section of 34 countries in this report. Three countries in our sample never launched a national contact tracing app, and we could not find reliable information on six countries. You can find more information on technical infrastructure of contact tracing apps on COVID-19 data explorer. Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023), https://covid19.adalovelaceinstitute.org  l accessed 31 May 2023

[41] L White and P Basshuysen, ‘Privacy versus Public Health? A Reassessment of Centralised and Decentralised Digital Contact Tracing’ (2021) 27 Science and Engineering Ethics 23 https://doi.org/10.1007/s11948-021-00301-0 accessed 31 March 2023

[42] M Ciucci and F Gouarderes, ‘National COVID-19 Contact Tracing Apps’ (Think Tank European Parliament, 15 May 2020) www.europarl.europa.eu/thinktank/en/document/IPOL_BRI(2020)652711 accessed 31 March 2023.

[43] E Braun, ‘French contact-tracing app sent just 14 notifications after 2 million downloads’ (Politico, 23 June 2020) www.politico.eu/article/french-contact-tracing-app-sent-just-14-notifications-after-2-million-downloads accessed 31 March 2023; BBC News ‘Australia Covid: Contact tracing app branded expensive “failure”’ (10 August 2022) www.bbc.co.uk/news/world-australia-62496322 accessed 31 March 2023.

[44] M Veale, ‘Opinion: Privacy is not the problem with the Apple-Google contact tracing app’ (UCL News, 1 July 2020) www.ucl.ac.uk/news/2020/jul/opinion-privacy-not-problem-apple-google-contact-tracing-app accessed 31 March 2023; N Lomas ‘Germany ditches centralized approach to app for COVID-19 contacts tracing’ (TechCrunch, 27 April 2020) https://techcrunch.com/2020/04/27/germany-ditches-centralized-approach-to-app-for-covid-19-contacts-tracing accessed 31 March 2023.

[45] G Goggin, ‘COVID-19 Apps in Singapore and Australia: Reimagining Health Nations with Digital Technology’ (2020) 177:1 Media International Australia 61, 10.1177/1329878X20949770.

[46] G Goggin, ‘COVID-19 Apps in Singapore and Australia: Reimagining Health Nations with Digital Technology’ (2020) 177:1 Media International Australia 61, 10.1177/1329878X20949770.

[47] M Ciucci and F Gouarderes, ‘National COVID-19 Contact Tracing Apps’ (Think Tank European Parliament, 15 May 2020) www.europarl.europa.eu/thinktank/en/document/IPOL_BRI(2020)652711 accessed 26 May 2023 C Gorey, ‘4 things you need to know before installing the HSE Covid-19 contact-tracing app’ (Silicon Republic, 7 July 2020) www.siliconrepublic.com/enterprise/hse-COVID-19-contact-tracing-app accessed 31 March 2023.

[48] AL Popescu, ‘România în urma pandemiei. Statul ignoră propria aplicație anti-Covid, dar și una lansată gratis’ (Europa Libera Romania, 27 November 2020) https://romania.europalibera.org/a/rom%C3%A2nia-%C3%AEn-urma-pandemiei-statul-ignor%C4%83-propria-aplica%C8%9Bie-anti-covid-dar-%C8%99i-una-lansat%C4%83-gratis/30972627.html accessed 31 March 2023; Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 31 March 2023 https://romania.europalibera.org/a/românia-în-urma-pandemiei-statul-ignoră-propria-aplicație-anti-covid-dar-și-una-lansată-gratis/30972627.html

[49] Several countries in our sample, such as China and India, had a very fragmented contact tracing app ecosystem, with various states/cities/municipalities attempting to create their own apps. There are therefore notable differences across provinces, making difficult to capture the diversity of implementation and experiences.

[50] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023), https://covid19.adalovelaceinstitute.org l accessed 31 May 2023

[51] UK Health Security Agency, ‘NHS COVID-19 app’ (gov.uk, 2020) www.gov.uk/government/collections/nhs-covid-19-app accessed 31 March 2023.

[52] MIT Technology Review, ‘Covid Tracing Tracker’ (2021) www.technologyreview.com/tag/covid-tracing-tracker accessed 31 March 2023.

[53] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (19 April 2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 31 March 2023, 4.

[54] C Wymant, ‘The epidemiological impact of the NHS COVID-19 app’ (National Institutes of Health, 2021) https://pubmed.ncbi.nlm.nih.gov/33979832/ accessed 31 March 2023.

[55] RW Albertus and F Makoza, ‘An Analysis of the COVID-19 Contact Tracing App in South Africa: Challenges Experienced by Users’ (2022) 15:1 African Journal of Science, Technology, Innovation and Development 124,  https://doi.org/10.1080/20421338.2022.2043808; Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 26 May 2023.

[56] F Vogt and others, ‘Effectiveness Evaluation of Digital Contact Tracing for COVID-19 in New South Wales, Australia’ (2022) 7:3 The Lancet E250, https://doi.org/10.1016/S2468-2667(22)00010-X; Ada Lovelace Institute, ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 26 May 2023.

[57] E Braun, ‘French contact-tracing app sent just 14 notifications after 2 million downloads.’ (Politico, 23 June 2020) www.politico.eu/article/french-contact-tracing-app-sent-just-14-notifications-after-2-million-downloads accessed 31 March 2023.

[58] F Vogt and others, ‘Effectiveness Evaluation of Digital Contact Tracing for COVID-19 in New South Wales, Australia’ (2022) 7:3 The Lancet E250, https://doi.org/10.1016/S2468-2667(22)00010-X; AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[59] AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[60] For example, see Y Huang and others, ‘Users’ Expectations, Experiences, and Concerns with COVID Alert, and Exposure-Notification App’ (2022) 6: CSCW2 ACM Journals: Proceedings of the ACM on Human–Computer Interaction 350, https://doi.org/10.1145/3555770.

[61] ‘Digital Global Health and Humanitarianism Lab (DGHH Lab)’ https://dghhlab.com/publications/#PUB-DRCOVID19 accessed 31 March 2023.

[62] ‘Digital Global Health and Humanitarianism Lab (DGHH Lab)’ https://dghhlab.com/publications/#PUB-DRCOVID19 accessed 31 March 2023.

[63] BBC News, ‘Covid in Scotland: Thousands turn off tracking app’ (24 July 2021) www.bbc.co.uk/news/uk-scotland-57941343 accessed 31 March 2023.

[64] S Trendall, ‘Data suggests millions of users have not enabled NHS contact-tracing app’ (Public Technology, 30 June 2021) www.publictechnology.net/articles/news/data-suggests-millions-users-have-not-enabled-nhs-contact-tracing-app accessed 31 March 2023.

[65] V Garousi and D Cutting, ‘What Do Users Think of the UK’s Three COVID-19 Contact Tracing Apps? A Comparative Analysis’ (2021) 28:1 BMJ Health Care Inform e100320, 10.1136/bmjhci-2021-100320.

[66] Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 31 March 2023.

[67] Y Huang and others, ‘Users’ Expectations, Experiences, and Concerns with COVID Alert, and Exposure-Notification App’ (2022) 6: CSCW2 ACM Journals: Proceedings of the ACM on Human–Computer Interaction 350, https://doi.org/10.1145/3555770.

[68] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 31 March 2023.

[69] C Wymant, ‘The epidemiological impact of the NHS COVID-19 app’ (National Institutes of Health, 2021) https://directorsblog.nih.gov/2021/05/25/u-k-study-shows-power-of-digital-contact-tracing-in-the-pandemic accessed 26 May 2023.

[70] Ada Lovelace Institute, ‘Confidence in a crisis? Building public trust in a contact tracing app’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 26 May 2023.

[71] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[72] F Yang, L. Heemsbergen and R Fordyce, ‘Comparative Analysis of China’s Health Code, Australia’s COVIDSafe and New Zealand’s COVID Tracer Surveillance App: A New Corona of Public Health Governmentality?’ (2020) 178:1 Media International Australia 182, 10.1177/1329878X20968277.

[73] Planet Payment. ‘Alipay and WeChat Pay’ https://www.planetpayment.com/en/merchants/alipay-and-wechat-pay/ accessed 26 May 2023.

[74] F Liang, ‘COVID-19 and Health Code: How Digital Platforms Tackle the Pandemic in China’ (2021) 6:3 Social Media + Society, https://doi.org/10.1177/2056305120947657; National Health Commission of the People’s Republic of China, ‘Prevention and control of novel coronavirus pneumonia’ (7 March 2020) www.nhc.gov.cn/xcs/zhengcwj/202003/4856d5b0458141fa9f376853224d41d7.shtml accessed 26 May 2023.

[75] W Bin and others, ‘Depositors Are Forcibly Given Red Codes, the Latest Responses from All Parties’ (Southern Metropolis Daily, 14 June 2022) https://mp.weixin.qq.com/s/KAc8_3rCviqnVv05aQvSlw?fbclid=IwAR1xfMQtjZsRikz9vkisYxQBVAAkE9tgekKnMQ4nPaynr2BN9Ceyep3mjq8 accessed 13 April 2023.

[76] S Chan, ‘COVID-19 contact tracing apps reach 9% adoption in most populous countries’ (Sensor Tower, July 2020) https://sensortower.com/blog/contact-tracing-app-adoption accessed 26 May 2023.

[77] Ada Lovelace Institute, ‘Confidence in a crisis? Building public trust in a contact tracing app’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 26 May 2023

[78] L Muscato, ‘Why people don’t tryst contact tracing apps, and what to do about it’ (Technology Review, 12 November 2020) www.technologyreview.com/2020/11/12/1012033/why-people-dont-trust-contact-tracing-apps-and-what-to-do-about-it accessed 31 March 2023; AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023; L Horvath and others, ‘Adoption and Continued Use of Mobile Contact Tracing Technology: Multilevel Explanations from a Three-Wave Panel Survey and Linked Data’ (2022) 12:1 BMJ Open e053327, 10.1136/bmjopen-2021-053327; Ada Lovelace Institute, ‘Public attitudes to COVID-19, technology and inequality: A tracker’ (2021) https://www.adalovelaceinstitute.org/resource/public-attitudes-covid-19/ accessed 26 May 2023; A Kozyreva and others, ‘Psychological Factors Shaping Public Responses to COVID-19 Digital Contact Tracing Technologies in Germany’ (2021) 11 Scientific Reports 18716, https://doi.org/10.1038/s41598-021-98249-5; G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2022) 1:32 Critical Public Health 31, 10.1080/09581596.2021.1909707; M Caserotti and others, ‘Associations of COVID-19 Risk Perception with Vaccine Hesitancy Over Time for Italian Residents’ (2021) 272 Social Science & Medicine 113688, 10.1016/j.socscimed.2021.113688.

[79] M Koetse ‘Goodbye, Health Code: Chinese netizens say farewell to the green horse’ (What’s on Weibo, 8 December 2022) www.whatsonweibo.com/goodbye-health-code-chinese-netizens-say-farewell-to-the-green-horse accessed 26 May 2023; L Houchen, ‘Are you ready to use the “Health Code” all the time?’ (7 April 2020) https://mp.weixin.qq.com/s/xDKKicV22IBRGnNnNStOVg accessed 26 May 2023. The National Health Commission’s notice to end the Health Code mandate did not immediately translate into municipal governments discontinuing their policies. See Health Commission, ‘Notice on printing and distributing the Prevention and Control Plan for Novel Coronavirus Pneumonia (Ninth Edition)’ (Health Commission, 28 June 2022) www.gov.cn/xinwen/2022-06/28/content_5698168.htm accessed 13 April 2023.

[80] For example, see Southern Metropolis Daily’s interview with a number of experts on the impacts of using health codes in China. W Bin and others, ‘Depositors Are Forcibly Given Red Codes, the Latest Responses from All Parties’ (Southern Metropolis Daily, 14 June 2022) https://mp.weixin.qq.com/s/KAc8_3rCviqnVv05aQvSlw?fbclid=IwAR1xfMQtjZsRikz9vkisYxQBVAAkE9tgekKnMQ4nPaynr2BN9Ceyep3mjq8 accessed 13 April 2023.

[81] M Caserotti and others, ‘Associations of COVID-19 Risk Perception with Vaccine Hesitancy Over Time for Italian Residents’ (2021) 272 Social Science & Medicine 113688, 10.1016/j.socscimed.2021.113688.

[82] M Dewatripont, ‘Policy Insight 110: Vaccination Strategies in the Midst of an Epidemic’ (Centre for Economic Policy Research, 1 October 2021) https://cepr.org/publications/policy-insight-110-vaccination-strategies-midst-epidemic accessed 13 April 2023.

[83] G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2022) 1:32 Critical Public Health 31, 10.1080/09581596.2021.1909707.

[84] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[85] J Amann, J Sleigh and E Vayena, ‘Digital Contact-Tracing during the Covid-19 Pandemic: An Analysis of Newspaper Coverage in Germany, Austria, and Switzerland’ (2021) PLOS ONE, https://doi.org/10.1371/journal.pone.0246524.

[86] AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[87] Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed13 April 2023.

[88] J Ore, ‘Where did things go wrong with Canada’s COVID Alert App?’ (CBS, 9 February 2022) www.cbc.ca/radio/costofliving/from-boycott-to-bust-we-talk-spotify-and-neil-young-and-take-a-look-at-covid-alert-app-1.6339708/where-did-things-go-wrong-with-canada-s-covid-alert-app-1.6342632 accessed 13 April 2023.

[89] Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 13 April 2023.

[90] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[91] L Dowthwaite and others, ‘Public Adoption of and Trust in the NHS COVID-19 Contact Tracing App in the United Kingdom: Quantitative Online Survey Study’ (2021) 23:9 JMIR Publications e29085, 10.2196/29085.

[92] Ada Lovelace Institute, ‘Confidence in a crisis? Building public trust in a contact tracing app’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 13 April 2023; ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 13 April 2023.

[93] C Bambra and others, ‘The COVID-19 Pandemic and Health Inequalities’ (2020) 74:11 Journal of Epidemiology & Community Health 964, http://dx.doi.org/10.1136/jech-2020-214401; E Yong, ‘The Pandemic’s Legacy Is Already Clear’ (The Atlantic, 30 September 2022) www.theatlantic.com/health/archive/2022/09/covid-pandemic-exposes-americas-failing-systems-future-epidemics/671608 accessed 13 April 2023.

[94] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 Rapid Evidence Review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[95] L Marelli, K Kieslich and S Geiger, ‘COVID-19 and Techno-Solutionism: Responsibilization without Contextualization?’ (2022) 32:1 Critical Public Health 1, https://doi.org/10.1080/09581596.2022.2029192.

[96] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[97] Government of Ireland, ‘COVID Tracker app’ www.covidtracker.ie accessed 31 March 2023.

[98]  S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[99] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[100] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[101] M Veale, ‘The English Law of QR Codes: Presence Tracing and Digital Divides’ (Lex-Atlas: Covid-19, 25 May 2021) https://lexatlas-c19.org/the-english-law-of-qr-codes accessed 31 March 2023.

[102] S Reed and others, ‘Tackling Covid-19: A Case for Better Financial Support to Self-Isolate’ (Nuffield Trust, 14 May 2021) www.nuffieldtrust.org.uk/research/tackling-covid-19-a-case-for-better-financial-support-to-self-isolate accessed 26 May 2023.

[103] Statista, ‘Internet user penetration in Nigeria from 2018 to 2027’ (June 2022) www.statista.com/statistics/484918/internet-user-reach-nigeria accessed 26 May 2023.; G Razzano, ‘Privacy and the pandemic: An African response’ (Association For Progressive Communications, 21 June 2020) www.apc.org/en/pubs/privacy-and-pandemic-african-response accessed 31 March 2023.

[104] Ada Lovelace Institute, ‘Confidence in a Crisis? Building Public Trust in a Contact Tracing App’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 26 May 2023.; ‘Provisos for a Contact Tracing App: The Route to Trustworthy Digital Contact Tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 26 May 2023.

[105] Privacy International, ‘The principles of data protection: not new and actually quite familiar’ (24 September 2018) https://privacyinternational.org/news-analysis/2284/principles-data-protection-not-new-and-actually-quite-familiar accessed 31 March 2023; Ada Lovelace Institute, ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 26 May 2023. Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[106] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[107] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[108] TT Altshuler and RA Hershkovitz, ‘Digital Contact Tracing and the Coronavirus: Israeli and Comparative Perspectives’ (The Brookings Institution, August 2020) www.brookings.edu/wp-content/uploads/2020/08/FP_20200803_digital_contact_tracing.pdf accessed 31 March 2023.

[109] P Garrett and others, ‘High Acceptance of COVID-19 TRACING Technologies in Taiwan: A Nationally Representative Survey Analysis’ (2020) 19:5 International Journal of Environmental Research and Public Health 3323, 10.3390/ijerph19063323.

[110] TT Altshuler and RA Hershkovitz, ‘Digital Contact Tracing and the Coronavirus: Israeli and Comparative Perspectives’ (The Brookings Institution, August 2020) www.brookings.edu/wp-content/uploads/2020/08/FP_20200803_digital_contact_tracing.pdf accessed 31 March 2023.

[111] J Zhu, ‘The Personal Information Protection Law: China’s version of the GDPR?’ (Columbia Journal of Transnational Law, 14 February 2022) www.jtl.columbia.edu/bulletin-blog/the-personal-information-protection-law-chinas-version-of-the-gdpr accessed 26 May 2023.; it is noteworthy that there were pre-existing privacy rules in place embedded in several laws and regulations; however, these were not enforced with adequate oversight capacity. See A Geller, ‘How Comprehensive Is Chinese Data Protection Law? A Systematisation of Chinese Data Protection Law from a European Perspective’ (2020) 69:12 GRUR International Journal of European and International IP Law 1191, https://doi.org/10.1093/grurint/ikaa136.

[112] H Yu, ‘Living in the Era of Codes: A Reflection on China’s Health Code System’ (2022) Biosocieties, 10.1057/s41292-022-00290-8.

[113] A Li, ‘Explainer: China’s Covid-19 Health Code System’ (Hong Kong Free Press, 13 July 2022) https://hongkongfp.com/2022/07/13/explainer-chinas-COVID-19-health-code-system accessed 31 March 2023; A Clarance, ‘Aarogya Setu: Why India’s Covid-19 contact tracing app is controversial’ (BBC News, 15 May 2020) www.bbc.co.uk/news/world-asia-india-52659520 accessed 31 March 2023; W Bin and others, ‘Depositors Are Forcibly Given Red Codes, the Latest Responses from All Parties’ (Southern Metropolis Daily, 14 June 2022) https://mp.weixin.qq.com/s/KAc8_3rCviqnVv05aQvSlw?fbclid=IwAR1xfMQtjZsRikz9vkisYxQBVAAkE9tgekKnMQ4nPaynr2BN9Ceyep3mjq8 accessed 13 April 2023.

[114] TT Altshuler and RA Hershkovitz, ‘Digital Contact Tracing and the Coronavirus: Israeli and Comparative Perspectives’ (The Brookings Institution, August 2020) www.brookings.edu/wp-content/uploads/2020/08/FP_20200803_digital_contact_tracing.pdf accessed 31 March 2023.

[115] ‘Lex-Atlas: Covid-19’ https://lexatlas-c19.org accessed 31 March 2023.

[116] A Clarance, ‘Aarogya Setu: Why India’s Covid-19 contact tracing app is controversial’ (BBC News, 15 May 2020) www.bbc.co.uk/news/world-asia-india-52659520 accessed 31 March 2023.

[117] Internet Freedom Foundation, ‘Statement: Victory! Aarogya Setu changes from mandatory to, “best efforts”’ (18 May 2020) https://internetfreedom.in/aarogya-setu-victory accessed 26 May 2023.

[118] Evidence submitted to Ada Lovelace Institute by Internet Freedom Foundation, India.

[119] Norton Rose Fulbright, ‘Contact Tracing Apps: A New World for Data Privacy’ (February 2021) www.nortonrosefulbright.com/en/knowledge/publications/d7a9a296/contact-tracing-apps-a-new-world-for-data-privacy accessed 26 May 2023.

[120] T Klosowski, ‘The State of Consumer Data Privacy Laws in the US (and Why It Matters)’ (New York Times, 6 September 2021) www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us accessed 26 May 2023.

[121] Health Insurance Portability and Accountability Act  is a federal law to protect sensitive patient health information, but contact tracing apps were not covered because they are not ‘regulated entities’ under the Act. Centers for Disease Control and Prevention ‘Health Insurance Portability and Accountability Act of 1996 (HIPAA) https://www.cdc.gov/phlp/publications/topic/hipaa.html accessed 26 May 2023.

[122] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 31 March 2023.

[123] P Valade, ‘Jumbo Privacy Review: North Dakota’s Contact Tracing App’ (Jumbo, 21 May 2020) https://blog.withjumbo.com/jumbo-privacy-review-north-dakota-s-contact-tracing-app.html accessed 31 March 2023.

[124]  Civil Liberties Union for Europe, ‘Do EU Governments Continue to Operate Contact Tracing Apps Illegitimately?’ (October 2021) https://dq4n3btxmr8c9.cloudfront.net/files/Nv4A36/DO_EU_GOVERNMENTS_CONTINUE_TO_OPERATE_CONTACT_TRACING_APPS_ILLEGITIMATELY.pdf accessed 31 March 2023.

[125] Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 31 March 2023.

[126] A Hussain, ‘TraceTogether data used by police in one murder case: Vivian Balakrishnan (Yahoo! News, 5 January 2021) https://uk.style.yahoo.com/trace-together-data-used-by-police-in-one-murder-case-vivian-084954246.html?guccounter=2 accessed 31 March 2023.

[127] K Han, ‘COVID app triggers overdue debate on privacy in Singapore’ (Al Jazeera, 10 February 2021) www.aljazeera.com/news/2021/2/10/covid-app-triggers-overdue-debate-on-privacy-in-singapore accessed 31 March 2023.

[128] K Han, ‘COVID app triggers overdue debate on privacy in Singapore’ (Al Jazeera, 10 February 2021) www.aljazeera.com/news/2021/2/10/covid-app-triggers-overdue-debate-on-privacy-in-singapore accessed 31 March 2023.

[129] S Hilberg, ‘The new German Privacy Act: An overview’ (Deloitte) www2.deloitte.com/dl/en/pages/legal/articles/neues-bundesdatenschutzgesetz.html accessed 26 May 2023.

[130] Civil Liberties Union for Europe, ‘Do EU Governments Continue to Operate Contact Tracing Apps Illegitimately?’ (October 2021) https://dq4n3btxmr8c9.cloudfront.net/files/Nv4A36/DO_EU_GOVERNMENTS_CONTINUE_TO_OPERATE_CONTACT_TRACING_APPS_ILLEGITIMATELY.pdf accessed 31 March 2023.

[131] H Heine, ‘Check-In feature: Corona-Warn-App can now scan luca’s QR codes’ (Corona Warn-app Open Source Project, 9 November 2021) www.coronawarn.app/en/blog/2021-11-09-cwa-luca-qr-codes accessed 26 May 2023.

[132] Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 26 May 2023.

[133] M Knodel, ‘Public Health, Big Tech, and Privacy: Multistakeholder Governance and Technology-Assisted Contact tracing’ (Global Insights, January 2021) www.ned.org/wp-content/uploads/2021/01/Public-Health-Big-Tech-Privacy-Contact-Tracing-Knodel.pdf accessed 16 April 2023.

[134] M Veale, ‘Opinion: Privacy is not the problem with the Apple-Google contact tracing app’ (UCL News, 1 July 2020) www.ucl.ac.uk/news/2020/jul/opinion-privacy-not-problem-apple-google-contact-tracing-app accessed 31 March 2023.

[135] Ada Lovelace Institute, Rethinking Data and Rebalancing Digital Power (2022) www.adalovelaceinstitute.org/report/rethinking-data accessed 16 April 2023.

[136] H Mance, ‘Shoshana Zuboff: “Privacy Has Been Extinguished. It Is Now a Zombie”’ (Financial Times, 30 January 2023) www.ft.com/content/0cca6054-6fc9-4a94-b2e2-890c50d956d5#myft:my-news:page accessed 16 April 2023.

[137] M Knodel, ‘Public Health, Big Tech, and Privacy: Multistakeholder Governance and Technology-Assisted Contact tracing’ (Global Insights, January 2021) www.ned.org/wp-content/uploads/2021/01/Public-Health-Big-Tech-Privacy-Contact-Tracing-Knodel.pdf accessed 16 April 2023.

[138] GOVLAB and Knight Foundation, ‘The #Data4Covid19 Review’ https://review.data4covid19.org accessed 16 April 2023.

[139] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 16 April 2023.

[140] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 16 April 2023.

[141] World Health Organization, ‘Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX’ (WHO, 7 October 2020) www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax accessed 16 April 2023.

[142] World Health Organization, ‘Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX’ (WHO, 7 October 2020) www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax accessed 16 April 2023.

[143] Pfizer, ‘Pfizer and BioNtech announce vaccine candidate against COVID-19 achieved success in first interim analysis from Phase 3 Study’ (9 November 2020) www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-announce-vaccine-candidate-against accessed 16 April 2023.

[144] NHS England, ‘Landmark moment as first NHS patient receives COVID-19 vaccination’ (NHS England News, December 2020) www.england.nhs.uk/2020/12/landmark-moment-as-first-nhs-patient-receives-COVID-19-vaccination accessed 12 April 2023.

[145] H Davidson, ‘China Approves Sinopharm Covid-19 Vaccine for General Use’ (Guardian, 31 December 2020) www.theguardian.com/world/2020/dec/31/china-approves-sinopharm-covid-19-vaccine-for-general-use accessed 12 April 2023.

[146] NHS England, ‘Landmark moment as first NHS patient receives COVID-19 vaccination’ (NHS England News, * December 2020) www.england.nhs.uk/2020/12/landmark-moment-as-first-nhs-patient-receives-COVID-19-vaccination accessed 12 April 2023.

[147] Y Noguchi, ‘The history of vaccine passports in the US and what’s new’ (NPR, 8 April 2021) www.npr.org/2021/04/08/985253421/the-history-of-vaccine-passports-in-the-u-s-and-whats-new accessed 12 April 2023.

[148]  Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org l accessed 31 May 2023.

[149] Y Noguchi, ‘The history of vaccine passports in the US and what’s new’ (NPR, 8 April 2021) www.npr.org/2021/04/08/985253421/the-history-of-vaccine-passports-in-the-u-s-and-whats-new accessed 12 April 2023.

[150] K Teyras, ‘Covid-19 health passes can open the door to a digital ID revolution’ (THALES, 30 November 2021) https://dis-blog.thalesgroup.com/identity-biometric-solutions/2021/06/23/covid-19-health-passes-can-open-the-door-to-a-digital-id-revolution accessed 12 April 2023; Privacy International, ‘Covid-19 vaccination certificates: WHO sets minimum demands, governments must do even better’ (9 August 2021) https://privacyinternational.org/advocacy/4607/covid-19-vaccination-certificates-who-sets-minimum-demands-governments-must-do-even accessed 12 April 2023.

[151] S Davidson, ‘How vaccine passports could change digital identity’ (Digicert,6 November 2021) www.digicert.com/blog/how-vaccine-passports-could-change-digital-identity accessed 12 April 2023.

[152] Ada Lovelace Institute, ‘International monitor: vaccine passports and COVID-19 status apps’ (2021) https://www.adalovelaceinstitute.org/resource/international-monitor-vaccine-passports-and-covid-19-status-apps/ accessed 12 April 2023.

[153] F Kritz, ‘The vaccine passport debate actually began in 1897 over a plague vaccine’ (NPR, 8 April 2021) www.npr.org/sections/goatsandsoda/2021/04/08/985032748/the-vaccine-passport-debate-actually-began-in-1897-over-a-plague-vaccine accessed 12 April 2023.

[154] F Kritz, ‘The vaccine passport debate actually began in 1897 over a plague vaccine’ (NPR, 8 April 2021) www.npr.org/sections/goatsandsoda/2021/04/08/985032748/the-vaccine-passport-debate-actually-began-in-1897-over-a-plague-vaccine accessed 12 April 2023.

[155] Ada Lovelace Institute, Checkpoints for Vaccine Passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[156] ibid.

[157] S Subramanian, ‘Biometric tracking can ensure billions have immunity against Covid-19’ (Bloomberg, 13 August 2020) https://www.bloomberg.com/features/2020-COVID-vaccine-tracking-biometric accessed 13 April 2023

[158] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[159] See the legacy of COVID-19 technologies?: Outstanding questions section, p. 118.

[160] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org  accessed 31 May 2023

 

 

[161] World Health Organization, ‘Rapidly escalating Covid-19 cases amid reduced virus surveillance forecasts a challenging autumn and winter in the WHO European Region’ (WHO, 19 July 2022) www.who.int/europe/news/item/19-07-2022-rapidly-escalating-COVID-19-cases-amid-reduced-virus-surveillance-forecasts-a-challenging-autumn-and-winter-in-the-who-european-region accessed 12 April 2023.

[162] F Kritz, ‘The vaccine passport debate actually began in 1897 over a plague vaccine’ (NPR, 8 April 2021) www.npr.org/sections/goatsandsoda/2021/04/08/985032748/the-vaccine-passport-debate-actually-began-in-1897-over-a-plague-vaccine accessed 12 April 2023.

[163] The New Zealand government shifted its policy towards COVID-19 acceptance by opening the borders and ending lockdowns in October 2021. See J Curtin, ‘The end of New Zealand’s zero-COVID policy’ (Think Global Health, 28 October 2021) www.thinkglobalhealth.org/article/end-new-zealands-zero-COVID-policy accessed 12 April 2023.

[164] Reuters, ‘Brazil health regulator asks Bolsonaro to retract criticism over vaccines’ (9 January 2022) www.reuters.com/business/healthcare-pharmaceuticals/brazil-health-regulator-asks-bolsonaro-retract-criticism-over-vaccines-2022-01-09 accessed 12 April 2023.

[165] Al Jazeera, ‘Brazil judge mandates proof of vaccination for foreign visitors’ (12 December 2021) www.aljazeera.com/news/2021/12/12/brazil-justice-mandates-vaccine-passport-for-visitors accessed 12 April 2023.

[166]  Y Noguchi, ‘The history of vaccine passports in the US and what’s new’ (NPR, 8 April 2021) www.npr.org/2021/04/08/985253421/the-history-of-vaccine-passports-in-the-u-s-and-whats-new accessed 12 April 2023.

[167] M Bull, ‘The Italian Government Response to Covid-19 and the Making of a Prime Minister’ (2021) 13:2 Contemporary Italian Politics 149, https://doi.org/10.1080/23248823.2021.1914453.

[168] A Peacock, ‘What is the Covid ‘Super Green’ pass?’ (Tuscany Now & More) www.tuscanynowandmore.com/discover-italy/essential-advice/travelling-italy-COVID-green-pass accessed 12 April 2023.

[169] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023), https://covid19.adalovelaceinstitute.org accessed 31 May 2023

 

[170] DF Povse, ‘Examining the pros and cons of digital COVID certificates in the EU’ (Ada Lovelace Institute, 15 December 2022) www.adalovelaceinstitute.org/blog/examining-digital-covid-certificates-eu accessed 12 April 2023.

[171] Ada Lovelace Institute, ‘What place should COVID-19 vaccine passports have in society?’ (17 February 2021) www.adalovelaceinstitute.org/report/covid-19-vaccine-passports accessed 31 March 2023.

[172] Ada Lovelace Institute, ‘International Monitor: vaccine passports and COVID-19 status apps’ (15 October 2021) https://www.adalovelaceinstitute.org/resource/international-monitor-vaccine-passports-and-covid-19-status-apps/ accessed 31 March 2023

[173] S Amaro, ‘France’s Macron sparks outrage as he vows to annoy the unvaccinated’ (CNBC, 5 January 2022) www.cnbc.com/2022/01/05/macron-french-president-wants-to-annoy-the-unvaccinated-.html accessed 12 April 2023.

[174] G Vergallo and others, ‘Does the EU COVID Digital Certificate Strike a Reasonable Balance between Mobility Needs and Public Health? (2021) 57:10 Medicina (Kaunas) 1077, 10.3390/medicina57101077.

[175] C Franco-Paredes, ‘Transmissibility of SARS-CoV-2 among Fully Vaccinated Individuals’ (2022) 22:1 The Lancet P16, https://doi.org/10.1016/S1473-3099(21)00768-4.

[176] World Health Organization, ‘Information for the public: COIVID-19 vaccines’ (WHO, 18 November 2022) https://www.who.int/westernpacific/emergencies/covid-19/information-vaccines accessed 01 June 2023.

[177] World Health Organization, ‘Vaccine efficacy, effectiveness and protection’ (WHO, 14 July 2021) www.who.int/news-room/feature-stories/detail/vaccine-efficacy-effectiveness-and-protection accessed 12 April 2024; A Allen, ‘Pfizer CEO pushes yearly shots for Covid: Not so fast, experts say’ (KFF Health News, 21 March 2022) https://kffhealthnews.org/news/article/pfizer-ceo-albert-bourla-yearly-COVID-shots accessed 31 March 2023.

[178] World Health Organization, ‘Tracking SARS-CoV-2 variants’ www.who.int/activities/tracking-SARS-CoV-2-variants accessed 31 March 2023.

[179] G Warren and R Lofstedt, ‘Risk Communication and COVID-19 in Europe: Lessons for Future Public Health Crises’ (2021) 25:10 Journal of Risk Research 1161, https://doi.org/10.1080/13669877.2021.1947874.

[180] DF Povse, ‘Examining the pros and cons of digital COVID certificates in the EU’ (Ada Lovelace Institute, 15 December 2022) www.adalovelaceinstitute.org/blog/examining-digital-covid-certificates-eu accessed 31 March 2023.

[181] M Sallam, ‘COVID-19 Vaccine Hesitancy Worldwide: A Concise Systematic Review of Vaccine Acceptance Rates’ (2021) 9:2 Vaccines 160, https://doi.org/10.3390/vaccines9020160.

[182] SuperJob, ‘Most often, the introduction of QR codes is approved at mass events, least often – in non-food stores, but 4 out of 10 Russians are against any QR codes’ (16 November 2021) www.superjob.ru/research/articles/113182/chasche-vsego-vvod-qr-kodov-odobryayut-na-massovyh-meropriyatiyah accessed 31 March 2023.

[183] G Salau, ‘How vaccine cards are procured without jabs’ (The Guardian [Nigeria], 23 December 2021) https://guardian.ng/features/how-vaccine-cards-are-procured-without-jabs accessed 26 May 2023; E de Bre, ‘Fake COVID-19 vaccination cards emerge in Russia’ (Organized Crime and Corruption Reporting Project, 30 June 2021) www.occrp.org/en/daily/14733-fake-COVID-19-vaccination-cards-emerge-in-russia accessed 31 March 2023.

[184] J Ceulaer, ‘Viroloog Emmanuel Andre: “Covid Safe Ticket leidde tot meer besmettingen”’ (De Morgen, 29 November 2021) www.demorgen.be/nieuws/viroloog-emmanuel-andre-covid-safe-ticket-leidde-tot-meer-besmettingen~bae41a3e/?utm_source=link&utm_medium=social&utm_campaign=shared_earned accessed 12 April 2023.

[185] Gilmore and others, ‘Community Engagement to Support COVID-19 Vaccine Uptake: A Living Systematic Review Protocol’ (2022) 12 BMJ Open e063057, http://dx.doi.org/10.1136/bmjopen-2022-063057.

[186] AD Bourhanbour and O Ouchetto, ‘Morocco Achieves the Highest COVID-19 Vaccine Rates in Africa in the First Phase: What Are Reasons for Its Success?’ (2021) 28:4 Journal of Travel Medicine taab040, https://doi.org/10.1093/jtm/taab040.

[187] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[188] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[189] K Beaver, G Skinner and A Quigley, ‘Majority of Britons support vaccine passports but recognise concerns in new Ipsos UK Knowledge Panel poll’ (Ipsos, 31 March 2021) www.ipsos.com/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-uk-knowledgepanel-poll accessed 12 April 2023.

[190] H Kennedy, ‘The vaccine passport debate reveals fundamental views about how personal data should be used, its role in reproducing inequalities, and the kind of society we want to live in’ (LSE, 12 August 2021) https://blogs.lse.ac.uk/impactofsocialsciences/2021/08/12/the-vaccine-passport-debate-reveals-fundamental-views-about-how-personal-data-should-be-used-its-role-in-reproducing-inequalities-and-the-kind-of-society-we-want-to-live-in accessed 26 May 2023.

[191] C Brogan, ‘Vaccine passports linked to COVID-19 vaccine hesitancy in UK and Israel’ (Imperial College London, 2 September 2021) www.imperial.ac.uk/news/229153/vaccine-passports-linked-covid-19-vaccine-hesitancy accessed 12 April 2023.

[192] J Drury, ‘Behavioural Responses to Covid-19 Health Certification: A Rapid Review’ (2021) 21 BMC Public Health 1205, https://doi.org/10.1186/s12889-021-11166-0; JR de Waal, ‘One year on: Global update on public attitudes to government handling of Covid’ (YouGov, 19 November 2021) https://yougov.co.uk/topics/international/articles-reports/2021/11/19/one-year-global-update-public-attitudes-government accessed 12 April 2023.

[193] H Kennedy, ‘The vaccine passport debate reveals fundamental views about how personal data should be used, its role in reproducing inequalities, and the kind of society we want to live in’ (LSE, 12 August 2021) https://blogs.lse.ac.uk/impactofsocialsciences/2021/08/12/the-vaccine-passport-debate-reveals-fundamental-views-about-how-personal-data-should-be-used-its-role-in-reproducing-inequalities-and-the-kind-of-society-we-want-to-live-in accessed 12 April 2023.

[194] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[195] H Kennedy, ‘The vaccine passport debate reveals fundamental views about how personal data should be used, its role in reproducing inequalities, and the kind of society we want to live in’ (LSE, 12 August 2021) https://blogs.lse.ac.uk/impactofsocialsciences/2021/08/12/the-vaccine-passport-debate-reveals-fundamental-views-about-how-personal-data-should-be-used-its-role-in-reproducing-inequalities-and-the-kind-of-society-we-want-to-live-in accessed 12 April 2023.

[196] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[197] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023), https://covid19.adalovelaceinstitute.orgaccessed 31 May 2023

[198] B Bell, ‘Covid: Austrians heading towards lockdown for unvaccinated’ (BBC News, 12 November 2021) www.bbc.co.uk/news/world-europe-59245018 accessed 12 April 2023.

[199] B Bell, ‘Covid: Austrians heading towards lockdown for unvaccinated’ (BBC News, 12 November 2021) www.bbc.co.uk/news/world-europe-59245018 accessed 12 April 2023.

[200] Simmons + Simmons, ‘COVID-19 Italy: An easing of covid restrictions’ (1 May 2022) www.simmons-simmons.com/en/publications/ckh3mbdvv151g0a03z6mgt3dr/covid-19-decree-brings-strict-restrictions-for-italy accessed 12 April 2023.

[201] E de Bre, ‘Fake COVID-19 vaccination cards emerge in Russia’ (Organized Crime and Corruption Reporting Project, 30 June 2021) www.occrp.org/en/daily/14733-fake-COVID-19-vaccination-cards-emerge-in-russia accessed 31 March 2023.

[202] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[203] Health Pass, ‘Sıkça Sorulan Sorular’ https://healthpass.saglik.gov.tr/sss.html accessed 12 April 2023.

[204] S Dwivedi, ‘“No one can be forced to get vaccinated”: Supreme Court’s big order’ (NDTV, 2 May 2022) www.ndtv.com/india-news/coronavirus-no-one-can-be-forced-to-get-vaccinated-says-supreme-court-adds-current-vaccine-policy-cant-be-said-to-be-unreasonable-2938319 accessed 12 April 2023.

[205] NHS, ‘NHS COVID Pass’ www.nhs.uk/nhs-services/covid-19-services/nhs-covid-pass accessed 12 May 2021.

[206] Our World in Data, ‘Coronavirus (COVID-19) Vaccinations’ https://ourworldindata.org/COVID-vaccinations?country=OWID_WRL accessed 12 April 2023.

[207] Harvard Global Health Institute, ‘From Ebola to COVID-19: Lessons in digital contact tracing in Sierra Leone’ (1 September 2020) https://globalhealth.harvard.edu/from-ebola-to-covid-19-lessons-in-digital-contact-tracing-in-sierra-leone accessed 26 May 2023.

[208] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023. Riaz and colleagues define vaccine nationalism as ‘an economic strategy to hoard vaccinations from manufacturers and increase supply in their own country’. See M Riaz and others, ‘Global Impact of Vaccine Nationalism during COVID-19 Pandemic’ (2010) 49 Tropical Medicine and Health 101, https://doi.org/10.1186/s41182-021-00394-0.

[209] E Racine, ‘Understanding COVID-19 certificates in the context of recent health securitisation trends’ (Ada Lovelace Institute, 9 March 2023) www.adalovelaceinstitute.org/blog/covid-certificates-health-securitisation accessed 26 May 2023.

[210] E Racine, ‘Understanding COVID-19 certificates in the context of recent health securitisation trends’ (Ada Lovelace Institute, 9 March 2023) www.adalovelaceinstitute.org/blog/covid-certificates-health-securitisation accessed 26 May 2023.

[211] J Atick, ‘Covid vaccine passports are important but could they also create more global inequality?’ (Euro News, 17 August 2021) www.euronews.com/next/2021/08/16/covid-vaccine-passports-are-important-but-could-they-also-create-more-global-inequality accessed 12 April 2023.

[212] E Racine, ‘Understanding COVID-19 certificates in the context of recent health securitisation trends’ (Ada Lovelace Institute, 9 March 2023) www.adalovelaceinstitute.org/blog/covid-certificates-health-securitisation accessed 12 April 2023.

[213] A Suarez-Alvarez and AJ Lopez-Menendez, ‘Is COVID-19 Vaccine Inequality Undermining the Recovery from the COVID-19 Pandemic?’ (2022) 12 Journal of Global Health 05020, 10.7189/jogh.12.05020. Share of vaccinated people refers to the total number of people who received all doses prescribed by the initial vaccination protocol, divided by the total population of the country.

[214]  Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023), https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[215]  ibid.

[216] World Health Organization, ‘COVAX: Working for global equitable access to COVID-19 vaccines’ www.who.int/initiatives/act-accelerator/covax, accessed 12 April 2023.

[217] European Commission, ‘Team Europe contributes €500 million to COVAX initiative to provide one billion COVID-19 vaccine doses for low and middle income countries’ (15 December 2020) https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2262 accessed 12 April 2023.

 

[218] Holder, J. (2023). Tracking Coronavirus Vaccinations Around the World. The New York Times [online]. Available at: https://www.nytimes.com/interactive/2021/world/covid-vaccinations-tracker.html. (Accessed: 12 April 2023).

[219] European Council. EU digital COVID certificate: how it works. Available at: https://www.consilium.europa.eu/en/policies/coronavirus/eu-digital-covid-certificate//

[220] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[221] A Gillwald and others, ‘Mobile phone data is useful in coronavirus battle: But are people protected enough?’ (The Conversation, 27 April 2020) https://theconversation.com/mobile-phone-data-is-useful-in-coronavirus-battle-but-are-people-protected-enough-136404 accessed 26 May 2023..

[222] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023..

[223] A Gillwald and others, ‘Mobile phone data is useful in coronavirus battle: But are people protected enough?’ (The Conversation, 27 April 2020) https://theconversation.com/mobile-phone-data-is-useful-in-coronavirus-battle-but-are-people-protected-enough-136404 accessed 26 May 2023.

[224] ABC News, ‘Brazil’s health ministry website hacked, vaccination information stolen and deleted’ (11 December 2021) www.abc.net.au/news/2021-12-11/brazils-national-vaccination-program-hacked-/100692952 accessed 12 April 2023; Z Whittaker, ‘Jamaica’s immigration website exposed thousands of travellers’ data’ (TechCrunch, 17 February 2021) https://techcrunch.com/2021/02/17/jamaica-immigration-travelers-data-exposed accessed 12 April 2023.

[225] Proportionality is a general principle in law which refers to striking a balance between the means used and the intended aim. See European Data Protection Supervisor, ‘Necessity and proportionality’ https://edps.europa.eu/data-protection/our-work/subjects/necessity-proportionality_en accessed 12 April 2023.

[226] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023..

[227] G Razzano, ‘Privacy and the pandemic: An African response’ (Association For Progressive Communications, 21 June 2020) www.apc.org/en/pubs/privacy-and-pandemic-african-response accessed 26 May 2023.

[228] A Gillwald and others, ‘Mobile phone data is useful in coronavirus battle: But are people protected enough?’ (The Conversation, 27 April 2020) https://theconversation.com/mobile-phone-data-is-useful-in-coronavirus-battle-but-are-people-protected-enough-136404 accessed 26 May 2023.

[229] European Commission ‘Coronavirus: Commission proposes to extend the EU Digital COVID Certificate by one year’ (3 February 2022) https://ec.europa.eu/commission/presscorner/detail/en/ip_22_744 accessed 26 May 2023.

[230] A Hussain, ‘TraceTogether data used by police in one murder case: Vivian Balakrishnan (Yahoo! News, 5 January 2021) https://uk.style.yahoo.com/trace-together-data-used-by-police-in-one-murder-case-vivian-084954246.html?guccounter=2 accessed 30 March 2023. DW, ‘German police under fire for misuse of COVID app’ DW (11 January 2022) www.dw.com/en/german-police-under-fire-for-misuse-of-covid-contact-tracing-app/a-60393597 accessed 31 March 2023.

[231] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023; ‘Confidence in a Crisis? Building Public Trust in a Contact Tracing App’ (17 August 2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 12 April 2023; ‘Exit through the App Store? COVID-19 Rapid Evidence Review’ (19 April 2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 12 April 2023.

[232] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023), https://covid19.adalovelaceinstitute.orgaccessed 31 May 2023

[233] European Council, ‘European digital identity (eID): Council makes headway towards EU digital wallet, a paradigm shift for digital identity in Europe’ (6 December 2022) https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/european-digital-identity-eid-council-adopts-its-position-on-a-new-regulation-for-a-digital-wallet-at-eu-level accessed 12 April 2023; Y Theodorou, ‘On the road to digital-ID success in Africa: Leveraging global trends’ (Tony Blair Institute, 13 June) www.institute.global/insights/tech-and-digitalisation/road-digital-id-success-africa-leveraging-global-trends accessed 12 April 2023.

[234] The Tawakkalna app is available at https://ta.sdaia.gov.sa/en/index; Saudi–US Trade Group, ‘United Nations recognizes Saudi Arabia’s Tawakkalna app with Public Service Award for 2022 www.sustg.com/united-nations-recognizes-saudi-arabias-tawakkalna-app-with-public-service-award-for-2022 accessed 12 April 2023.

[235] Varindia, ‘Aarogya Setu has been transformed as nation’s health app’ (26 July 2022) https://varindia.com/news/aarogya-setu-has-been-transformed-as-nations-health-app accessed 13 April 2023.

[236] NHS England, ‘Digitising, connecting and transforming health and care’ www.england.nhs.uk/digitaltechnology/digitising-connecting-and-transforming-health-and-care accessed 13 April 2023.

[237] DHI News Team, ‘The role of a successful federated data platform programme’ (Digital Health, 27 September 2022) www.digitalhealth.net/2022/09/the-role-of-a-successful-federated-data-platform-programme accessed 12 April 2023; Department of Health and Social Care, ‘Better, broader, safer: Using health data for research and analysis (gov.uk, 7 April 2022) www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis accessed 13 April 2023.

[238] N Sherman, ‘Palantir: The controversial data firm now worth £17bn’ (BBC News, 1 October 2020) www.bbc.co.uk/news/business-54348456 accessed 13 April 2023.

[239] C Handforth, ‘How digital can close the ‘identity gap’ (UNDP, 19 May 2022) www.undp.org/blog/how-digital-can-close-identity-gap?utm_source=EN&utm_medium=GSR&utm_content=US_UNDP_PaidSearch_Brand_English&utm_campaign=CENTRAL&c_src=CENTRAL&c_src2=GSR&gclid=CjwKCAiA0J accessed 13 April 2023.

[240] L Muscato, ‘Why people don’t tryst contact tracing apps, and what to do about it’ (Technology Review, 12 November 2020) www.technologyreview.com/2020/11/12/1012033/why-people-dont-trust-contact-tracing-apps-and-what-to-do-about-it accessed 31 March 2023; AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023; L Horvath and others, ‘Adoption and Continued Use of Mobile Contact Tracing Technology: Multilevel Explanations from a Three-Wave Panel Survey and Linked Data’ (2022) 12:1 BMJ Open e053327, 10.1136/bmjopen-2021-053327; A Kozyreva and others, ‘Psychological Factors Shaping Public Responses to COVID-19 Digital Contact Tracing Technologies in Germany’ (2021) 11 Scientific Reports 18716, https://doi.org/10.1038/s41598-021-98249-5; G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2022) 1:32 Critical Public Health 31, 10.1080/09581596.2021.1909707; M Caserotti and others, ‘Associations of COVID-19 Risk Perception with Vaccine Hesitancy Over Time for Italian Residents’ (2021) 272 Social Science & Medicine 113688, 10.1016/j.socscimed.2021.113688. Ada Lovelace Institute’s ‘Public attitudes to COVID-19, technology and inequality: A tracker’ summarises a wide range of studies and projects that offer insight into people’s attitudes and perspectives. See Ada Lovelace Institute, ‘Public attitudes to COVID-19, technology and inequality: A tracker’ (2021) https://www.adalovelaceinstitute.org/resource/public-attitudes-covid-19/ accessed 12 April 2023.

[241] Ada Lovelace Institute, ‘International monitor: vaccine passports and COVID-19 status apps’ (15 October 2021) https://www.adalovelaceinstitute.org/resource/international-monitor-vaccine-passports-and-covid-19-status-apps/ accessed 30 March 2023.

[242] Our World in Data, ‘Coronavirus Pandemic (COVID-19) https://ourworldindata.org/coronavirus, accessed 31 May 2023

[243] Our World in Data ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus#explore-the-global-situation accessed 12 April 2023.

[244] University of Oxford ‘COVID-19 Government Response Tracker’ https://www.bsg.ox.ac.uk/research/covid-19-government-response-tracker  accessed 12 April 2023.

[245] Our World in Data ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus#explore-the-global-situation accessed 12 April 2023.

[246] Our World in Data ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus#explore-the-global-situation accessed 12 April 2023.

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

Letter from the working group co-chairs

 

This project by an international and interdisciplinary working group of experts from academia, policy, law, technology and civil society, invited by the Ada Lovelace Institute, had a big ambition: to imagine rules and institutions that can shift power over data and make it benefit people and society.

 

We began this work in 2020, only a few months into the pandemic, at a time when public discourse was immersed in discussions about how technologies – like contact tracing apps – could be harnessed to help address this urgent and unprecedented global health crisis.

 

The potential power of data to affect positive change – to underpin public health policy, to support isolation, to assess infection risk – was perhaps more immediate than at any other time in our lives. At the same time, concerns such as data injustice and privacy remained.

 

It was in this climate that our working group sought to explore the relationship people have with data and technology, and to look towards a positive future that would centre governance, regulation and use of data on the needs of people and society, and contest the increasingly entrenched systems of digital power.

 

The working group discussions centred on questions about power over both data infrastructures, and over data itself. Where does power reside in the digital ecosystem, and what are the sources of this power? What are the most promising approaches and interventions that might distribute power more widely, and what might that rebalancing accomplish?

 

The group considered interventions ranging from developing public-service infrastructure to alternative business models, from fiduciary duties for data infrastructures to a new regime for data under a public-interest approach. Many were conceptually interesting but required more detailed thought to be put into practice.

 

Through a process of analysis and distillation, that broad landscape narrowed to four areas for change: infrastructure, governance, institutions and democratic participation in decisions over data processing, collection and use. We are happy that the group has endorsed a pathway towards transformation, identifying a shared vision and practical interventions to begin the work of changing the digital ecosystem.

 

Throughout this process, we wanted to free ourselves from the constraints of currently perceived models and norms, and go beyond existing debates around data policy. We did this intentionally, to extend the scope of what is politically thought to be possible, and to create space for big ideas to flourish and be discussed.

 

We see this work as part of one of the most challenging efforts we have to make as humans and as societies. Its ambitious aim is to bring to the table a richer set of possibilities of our digital future. We uphold that we need new imaginaries if we are to create a world where digital power is distributed among many and serves the public good, as defined in democracies.

 

We hope this report will serve as both a provocation and a way to generate constructive criticism and mature ideas on how to transform digital ecosystems, but also a call to action for those of you – our readers – who hold the power to make the interventions we describe into political and business realities.

 

Diane Coyle

Bennett Professor of Public Policy, University of Cambridge

 

Paul Nemitz

Principal Adviser on Justice Policy, European Commission and visiting Professor of Law at College of Europe

 

Co-chairs

Rethinking data working group

A call for a new vision

In 2020, the Ada Lovelace Institute characterised the digital ecosystem as:

  • Exploitative: Data practices are exploitative, and they fail to produce the potential social value of data, protect individual rights and serve communities.
  • Shortsighted: Political and administrative institutions have struggled to govern data in a way that enables effective enforcement and acknowledges its central role in the data-driven systems.
  • Disempowering: Individuals lack agency over how their data is generated and used, and there are stark power imbalances between people, corporations and states.[footnote]Ada Lovelace Institute. (2020). Rethinking Data – Prospectus. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/01/Rethinking-Data-Prospectus-Ada-Lovelace-Institute-January-2019.pdf[/footnote]

We recognised an urgent need for a comprehensive and transformative vision for data that can serve as a ‘North Star’, directing our efforts and encouraging us to think bigger and move further.

Our work to ‘rethink data’ began with a forward-looking question:

‘What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?’

This drove the establishment of an expert working group, bringing together leading thinkers in privacy and data protection, public policy, law and economics from the technology sector, policy, academia and civil society across the UK, Europe, USA, Canada and Hong Kong.

This disciplinarily diverse group brought their perspectives and expertise to understand the current data ecosystem and make sense of the complexity that characterises data governance in the UK, across Europe and internationally. Their reflection on the challenges informed a holistic approach to the changes needed, which is highly relevant to the jurisdictions mentioned above, and which we hope will be of foundational interest to related work in other territories.

Understanding that shortsightedness limits creative thinking, we deliberately set the field of vision to the medium term, 2030 and beyond. We intended to escape the ‘policy weeds’ of unfolding developments in data and technology policy in the UK, EU or USA, and set our sights on the next generation of institutions, governance, infrastructure and regulations.

Using discussions, debates, commissioned pieces, futures-thinking workshops, speculative scenario building and horizon scanning, we have distilled a multitude of ideas, propositions and models. (For full details about our methodology, see ‘Final notes’.)

These processes and methods moved the scope of enquiry on from the original premise – to articulate a positive ambition for the use and regulation of data that recognised asymmetries of power and enabled social value – to seeking the most promising interventions that address the significant power imbalances that exist between large private platforms, and groups of people and individuals.

This report highlights and contextualises four cross-cutting interventions with a strong potential to reshape the digital ecosystem:

  1. Transforming infrastructure into open and interoperable ecosystems.
  2. Reclaiming control of data from dominant companies.
  3. Rebalancing the centres of power with new (non-commercial) institutions.
  4. Ensuring public participation as an essential component of technology policymaking.

The interventions are multidisciplinary and they integrate legal, technological, market and governance solutions. They offer a path towards addressing present digital challenges and the possibility for a new, healthy digital ecosystem to emerge.

What do we mean by a healthy digital ecosystem? One that privileges people over profit, communities over corporations, society over shareholders. And, most importantly, one where power is not held by a few large corporations, but is distributed among different and diverse  models, alongside people who are represented in, and affected by the data used by those new models.

The digital ecosystem we propose is balanced, accountable and sustainable, and imagines new types of infrastructure, new institutions and new governance models that can make data work for people and society.

Some of these interventions can be located within (or built from) emerging or recently adopted policy initiatives, while others require the wholesale overhaul of regulatory regimes and markets. They are designed to spark ideas that political thinkers, forward-looking policymakers, researchers, civil society organisations, funders and ethical innovators in the private sector consider and respond to when designing future regulations, policies or initiatives around data use and governance.

This report also acknowledges the need to prepare the ground for the more ambitious transformation of power relations in the digital ecosystem. Even a well-targeted intervention won’t change the system unless it is supported by relevant institutions and behavioural change.

In addition to targeted interventions, the report explains the preconditions that can support change:

  1. Effective regulatory enforcement.
  2. Legal action and representation.
  3. Removal of industry dependencies.

Reconceptualising the digital ecosystem will require sustained, collective and thorough efforts, and an understanding that elaborating on strategies for the future involves constant experimentation, adaptation and recalibration.

Through discussion of each intervention, the report brings an initial set of provocative ideas and concepts, to inspire a thoughtful debate about the transformative changes needed for the digital ecosystem to start evolving towards a people and society-focused vision. These can help us think about potential ways forward, open up questions for debate instead of rushing to provide answers, and offer a starting point from which more fully fledged solutions for change are able to grow.

We hope that policymakers, researchers, civil society organisations, funders and ethical industry innovators will engage with – and, crucially, iterate on – these propositions in a collective effort to find solutions that lead to lasting change in data practices and policies.

Making data work for people and society

 

The building blocks for a people-first digital ecosystem start from repurposing data to respect individual agency and deliver societal benefits, and from addressing abuses that are well defined and understood today, and are likely to continue if they are not dealt with in a systemic way.

 

Making data work for people means protecting individuals and society from abuses caused by corporations’ or governments’ use of data and algorithms. This means fundamental rights such as privacy, data protection and non-discrimination are both protected in law and reflected in the design of computational processes that generate and capture personal data.

 

The requirement to protect people from harm does not only operate in the present, there is also a need to prevent harms from happening in the future, and to create resilient institutions that will operate effectively against future threats and potential impact that can’t be fully anticipated.

 

To produce long-lasting change, we will need to break structural dependencies and address the sources of power of big technology companies. To do this, one goal must be to create data governance models and new institutions that will balance power asymmetries. Another goal is to restructure economic, technical and legal tools and incentives, to move infrastructure control away from unaccountable organisations.

 

Finally, positive goals for society can emerge from data infrastructures and algorithmic models developed by private and/or public actors, if data serves both individual and societal goals, rather than just the interests of commerce or undemocratic regimes.

How to use this report

The report is written to be of particular use to policymakers, researchers, civil society organisations, funders and those working in data-governance. To understand how and where you can take the ideas explored here forward, we recommend these approaches:

  • If you work on data policy decision-making, go through a brief overview of the sources of power in today’s digital ecosystem in Chapter 1, focus on ‘The vision’ subsections in and answer the call to action in Chapter 3 by considering ways to translate the proposed interventions into policy action and help build the pathway towards a comprehensive and transformative vision for data.
  • If you are a researcher, focus on the ‘How to get from here to there’ and ‘Further considerations and provocative concepts’ subsections in Chapter 2 and answer the call to action in Chapter 3 by reflecting critically on the provocative concepts and help develop the propositions into more concrete solutions for change.
  • If you are a civil society organisation, focus on ‘How to get from here to there’ subsections in Chapter 2 and answer the call to action in Chapter 3 by engaging with the suggested transformations and build momentum to help visualise a positive future for data and society.
  • If you are a funder, go through an overview of the sources of power in today’s digital ecosystem in Chapter 1, focus on ‘The vision’ subsections in Chapter 2 and answer the Call to action in Chapter 3 by supporting the development of a proactive policy agenda by civil society.
  • If you are working on data governance in industry, focus on sections 1 and 2 in Chapter 2, help design mechanisms for responsible generation and use of data, and answer the call to action in Chapter 3 by supporting the development of standards for open and rights enhancing systems.

Chapter 1: Understanding power in data-intensive digital ecosystems

1. Context setting

To understand why  a transformation is needed in the way our digital ecosystem operates, it’s necessary to understand the dynamics and different facets of today’s data-intensive ecosystem.

In the last decade, there has been an exponential increase in the generation, collection and use of data. This upsurge is driven by an increasing datafication of everyday parts of our lives,[footnote]Ada Lovelace Institute. (2020). The data will see you now. Available at: https://www.adalovelaceinstitute.org/report/the-data-will-see-you-now/[/footnote] from work to social interactions and, to the provision of public services. The backbone of this change is the growth of digitally connected devices, data infrastructures and platforms, which enable new forms of data generation and extraction at an unprecedented scale. 

Estimates put the volume of data created and consumed from two zettabytes in 2010 to 64.2 zettabytes in 2020 (one zettabyte is a trillion gigabytes) and project that it will grow to more than 180 zettabytes up to 2025.[footnote]Statista Research Department. (2022). Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025. Available at: https://www.statista.com/statistics/871513/worldwide-data-created/[/footnote] These oft-cited figures disguise a range of further dynamics (such as the wider societal phenomena of discrimination and inequality that are captured and represented in these datasets), and the textured landscape of who and what is included in the datasets, what data quality means in practice, and whose objectives are represented in data processes and met through outcomes from data use.

Data is often promised to be transformative, but there remains debate as to exactly what it transforms. On one hand, data is recognised as an important economic opportunity, and policy focus across the globe and is believed to deliver significant societal benefits. On the other hand, increased datification and calculability of human interactions can lead to human rights abuses and illegitimate public or private control. In between these opposing views are a variety of observations that reflect the myriad ways data and society interact, broadly considering the ways such practices reconfigure activities, structures and relationships.[footnote]Balayn, A. and Gürses, S. (2021). Beyond Debiasing, Regulating AI and its inequalities. European Digital Rights (EDRi). Available at: https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf[/footnote]

According to scholars of surveillance and informational capitalism, today’s digital economy is built on deeply rooted, exploitative and extractive data practices.[footnote]Zuboff, S. (2019). The age of surveillance capitalism: the fight for a human future at the new frontier of power. New York: PublicAffairs and Cohen, J. E. (2019). Between truth and power: the legal constructions of informational capitalism. New York: Oxford University Press.[/footnote] These result in the accrual of immense surpluses of value to dominant technology corporations, and a role for the human participants enlisted in value creation for these big technology companies that has been described as a form of ‘data rentiership’.[footnote]Birch, K., Chiappetta, M. and Artyushina, A. (2020). ‘The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset’. Policy Studies, 41(5), pp. 468–487. doi: 10.1080/01442872.2020.1748264[/footnote]

Commentators differ, however, on the real source of the value that is being extracted. Some consider that value comes from data’s predictive potential, while others emphasise that the economic arrangements in the data economy allow for huge profits to be made (largely through the advertising-based business model) even if predictions are much less effective than technology giants claim.[footnote]Hwang, T. (2020). Subprime attention crisis: advertising and the time bomb at the heart of the Internet. New York: FSG Originals.[/footnote]

In practice, only a few large technology corporations – Alphabet (Google), Amazon, Apple, Meta Platforms (Facebook) and Microsoft – have the data, processing abilities, engineering capacity, financial resources, user base and convenience appeal to provide a range of services that are both necessary to smaller players and desired by a wide base of individual users.

These corporations extract value from their large volumes of interactions and transactions, and process massive amounts of personal and non-personal data in order to optimise the service and experience of each business or individual user. Some platforms have the ability to simultaneously coordinate and orchestrate multiple sensors or computers in the network, like smartphones or connected objects. This drives the platform’s ability to innovate and offer services that seem either indispensable or unrivalled.

While there is still substantial innovation outside these closed ecosystems, the financial power of the platforms means that in practice they are able to either acquire or imitate (and further improve) innovations in the digital economy. Their efficiency in using this capacity enables them to leverage their dominance into new markets. The acquisition of open-source code platforms like GitHub by Microsoft in 2018 and RedHat by IBM in 2019 also points to a possibility that incumbents intend to extend their dominance to open-source software. The difficulty new players face to compete makes the largest technological players seem unmovable and unchangeable.

Over time, access to large pools of personal data has allowed platforms to develop services that now represent and influence the infrastructure or underlying basis for many public and private services. Creating ever-more dependencies in both public and private spheres, large technology companies are extending their services to societally sensitive areas such as education and health.

This influence has become more obvious during the COVID-19 pandemic, when large companies formed contested public-private partnerships with public health authorities.[footnote]Fitzgerald M. and Crider C. (2020). ‘Under pressure, UK government releases NHS COVID data deals with big tech’. openDemocracy. Available at: https://www.opendemocracy.net/en/ournhs/under-pressure-uk-government-releases-nhs-covid-data-deals-big-tech/[/footnote] They also partnered among themselves to influence contact tracing in the pandemic response, by facilitating contact tracing technologies in ways that were favourable or unfavourable to particular nation states. This revealed the difficulty, even at state level, of engaging in advanced use of data without the cooperation of the corporations that control the software and hardware infrastructure. 

Focusing on data alone is insufficient to understand power in data-intensive digital systems. A vast number of interrelated factors consolidate both economic and societal power of particular digital platforms.[footnote]European Commission – Expert Group for the Observatory on the Online Platform Economy. (2021). Uncovering blindspots in the policy debate on platform power. Available at: https://www.sipotra.it/wp-content/uploads/2021/03/Uncovering-blindspots-in-the-policy-debate-on-platform-power.pdf[/footnote] These factors go beyond market power and consumer behaviour, and extend to societal and democratic influence (for example through algorithmic curation and controlling how human rights can be exercised).[footnote]European Commission – Expert Group for the Observatory on the Online Platform Economy. (2021).[/footnote]

Theorists of platform governance highlight the complex ways in which vertically integrated platforms make users interacting with them legible to computers, and extract value by intermediating access to them.[footnote]Cohen, J.E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford: Oxford University Press.[/footnote]

This makes it hard to understand power from data without understanding complex technological interactions up and down the whole technology ‘stack’, from the basic protocols and connectivity that underpin technologies, through hardware, and the software and cloud services that are built on them.[footnote]Andersdotter, A. and Stasi, I. Framework for studying technologies, competition and human rights. Available at: https://amelia.andersdotter.cc/framework_for_competition_technology_and_human_rights.html[/footnote]

Large platforms have become – as a result of laissez-faire policies (minimal government intervention in market and economic affairs) rather than by deliberate, democratic design – one of the building blocks for data governance in the real world, unilaterally defining the user experience and consumer rights. They have used a mix of law, technology and economic influence to place themselves in a position of power over users, governments, legislators and private-sector developers, and this has proved difficult to dislodge or alter.[footnote]Cohen, J. E. (2017). ‘Law for the Platform Economy’. U.C. Davis Law Review, 51, pp. 133–204. Available at: https://perma.cc/AW7P-EVLC[/footnote] 

2. Rethinking regulatory approaches in digital markets

There is a recent, growing appetite to regulate both data and platforms using a variety of legal approaches to regulate market concentration, platforms as public spheres, and data and AI governance. The year 2021 alone marked a significant global uptick in proposals for the regulation of AI technologies, online markets, social media platforms and other digital technologies, with more still to come in 2022.[footnote]Mozur, P., Kang, C., Satariano, A. and McCabe, D. (2021). ‘A Global Tipping Point for Reining In Tech Has Arrived’. New York Times. Available at: https://www.nytimes.com/2021/04/20/technology/global-tipping-point-tech.html[/footnote]

A range of jurisdictions are reconsidering the regulation of digital platforms both as marketplaces and places of public speech and opinion building (‘public spheres’). Liability obligations are being reanalysed, including in bills around ‘online harms’ and content moderation. The Online Safety Act in Australia,[footnote]Australia’s Online Safety Act (2021). Available at: https://www.legislation.gov.au/Details/C2021A00076[/footnote] India’s Information Technology Rules,[footnote]Ministry of Electronics and Information Technology. (2021). The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Available at: https://prsindia.org/billtrack/the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021[/footnote] the EU’s Digital Services Act[footnote]European Parliament. (2022). Legislative resolution of 5 July 2022 on the proposal for a regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act). Available at: https://www.europarl.europa.eu/doceo/document/TA-9-2022-0269_EN.html[/footnote] and the UK’s draft Online Safety Bill[footnote]Online Safety Bill. (2022-23). Parliament: House of Commons. Bill no. 121. London: Published by the authority of the House of Commons. Available at https://bills.parliament.uk/bills/3137[/footnote] are all pieces of legislation that seek to regulate more rigorously the content and practices of online social media and messaging platforms.

Steps are also being made to rethink the relationship between competition, data and platforms, and jurisdictions are using different approaches. In the UK, the Competition and Markets Authority launched the Digital Markets Unit, focusing on a more flexible approach, with targeted interventions in competition in digital markets and codes of conduct.[footnote]While statutory legislation will not be introduced in the 2022–23 Parliamentary session, the UK Government reconfirmed its intention to establish the Digital Market Unit’s statutory regime in legislation as soon as Parliamentary time allows. See: Hayter, W. (2022). ‘Digital markets and the new pro-competition regime’. Competition and Markets Authority. Available at: https://competitionandmarkets.blog.gov.uk/2022/05/10/digital-markets-and-the-new-pro-competition-regime/ and UK Government. (2021). ‘Digital Markets Unit’. Gov.uk. Available at https://www.gov.uk/government/collections/digital-markets-unit[/footnote] In the EU, the Digital Markets Act (DMA) takes a top-down approach and establishes general rules for large companies that prohibit certain practices up front, such as combining or cross-using personal data across services without users’ consent, or giving preference to their own services and products in rankings.[footnote]Replace: European Parliament and Council of the European Union. (2022). Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), Article 5 (2) and Article 6 (5). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote] India is also responding to domestic market capture and increased influence from large technology companies with initiatives such as the Open Network for Digital Commerce, which aims to create a decentralised and interoperable platform for direct exchange between buyers and sellers without intermediary services such as Amazon.[footnote]Ansari, A. A. (2022), ‘E-commerce is the latest target in India’s push for an open digital economy’. Atlantic Council. Available at: https://www.atlanticcouncil.org/blogs/southasiasource/e-commerce-is-the-latest-target-in-indias-push-for-an-open-digital-economy/[/footnote] At the same time, while the draft 2019 Indian Data Protection Bill is being withdrawn, a more comprehensive legal framework is expected in 2022 covering – alongside privacy and data protection – broader issues such as non-personal data, regulation of hardware and devices, data localisation requirements and rules to seek approval for international data transfers.[footnote] Aryan, A., Pinnu, S. and Agarwal, S. (2022). ‘Govt looks to table data bill soon, draft at advanced stage’. Economic Times. Available at: https://telecom.economictimes.indiatimes.com/news/govt-looks-to-table-data-bill-soon-draft-at-advanced-stage/93358857 and Raj, R. (2022). ‘Data protection: Four key clauses may go in new bill’. Financial Express. Available at: https://www.financialexpress.com/industry/technology/data-protection-four-key-clauses-may-go-in-new-bill/2618148/[/footnote] 

Developments in data and AI policy

Around 145 countries now have some form of data privacy law, and many new additions or revisions are heavily influenced by legislative standards including the Council of Europe’s Convention 108 + and the EU General Data Protection Regulation (GDPR).[footnote]Greenleaf, G. (2021). ‘Global Data Privacy Laws 2021: Despite COVID Delays, 145 Laws Show GDPR Dominance’. Privacy Laws & Business International Report, 1, pp. 3–5.[/footnote]

The GDPR is a prime example of legislation aimed at curbing the worst excesses of exploitative data practices, and many of its foundational elements are still being developed and tested in the real world. Lessons learned from the GDPR show how vital it is to consider power within attempts to create more responsible data practices. This is because regulation is not just the result of legal design in isolation, but is also shaped by immense corporate lobbying,[footnote]Corporate Europe Observatory. (2021). The Lobby Network: Big Tech’s Web of Influence in the EU. Available at: https://corporateeurope.org/en/2021/08/lobby-network-big-techs-web-influence-eu[/footnote] applied within organisations via their internal culture and enforced in a legal environment that gives major corporations tools to stall or create disincentives to enforcement. 

In the United States, there have been multiple attempts at proposing privacy legislation,[footnote]Rich, J. (2021). ‘After 20 years of debate, it’s time for Congress to finally pass a baseline privacy law’. Brookings. Available at https://www.brookings.edu/blog/techtank/2021/01/14/after-20-years-of-debate-its-time-for-congress-to-finally-pass-a-baseline-privacy-law/ and Levine, A. S. (2021). ‘A U.S. privacy law seemed possible this Congress. Now, prospects are fading fast’. Politico. Available at: https://www.politico.com/news/2021/06/01/washington-plan-protect-american-data-silicon-valley-491405[/footnote] and there is growing momentum with privacy laws being adopted at the state level.[footnote]Zanfir-Fortuna, G. (2020). ‘America’s “privacy renaissance”: What to expect under a new presidency and Congress’. Ada Lovelace Institute. Available at https://www.adalovelaceinstitute.org/blog/americas-privacy-renaissance/[/footnote] A recent bipartisan privacy bill proposed in June 2022[footnote]American Data Privacy and Protection Act, discussion draft, 117th Cong. (2021). Available at: https://www.commerce.senate.gov/services/files/6CB3B500-3DB4-4FCC-BB15-9E6A52738B6C[/footnote] includes broad privacy provisions, with a focus on data minimisation, privacy by design and by default, loyalty duties to individuals and the introduction of a private right to action against companies. So far, the US regulatory approach to new market dynamics has been a suite of consumer protection, antitrust and privacy laws enforced under the umbrella of a single body, the Federal Trade Commission (FTC), which has a broad range of powers to protect consumers and investigate unethical business practices.[footnote]Hoofnagle, C. J., Hartzog, W. and Solove, D. J. (2019). ‘The FTC can rise to the privacy challenge, but not without help from Congress’. Brookings. Available at: https://www.brookings.edu/blog/techtank/2019/08/08/the-ftc-can-rise-to-the-privacy-challenge-but-not-without-help-from-congress/[/footnote]

Since the 1990s, with very few exceptions, the US technology and digital markets have been dominated by a minimal approach to antitrust intervention[footnote] Bietti, E. (2021). ‘Is the goal of antitrust enforcement a competitive digital economy or a different digital ecosystem?’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/antitrust-enforcement-competitive-digital-economy-digital-ecosystem/[/footnote] (which is designed to promote competition and increase consumer welfare). Only recently has there been a revival of antitrust interventions in the US with a report on competition in the digital economy[footnote]House Judiciary Committee’s Antitrust Subcommittee. (2020). Investigation of Competition in the Digital Marketplace: Majority Staff Report and Recommendations. Available at: https://judiciary.house.gov/news/documentsingle.aspx?DocumentID=3429[/footnote] and cases launched against Facebook and Google.[footnote]In the case of Facebook, see the Federal Trade Commission and the State Advocate General cases: https://www.ftc.gov/enforcement/cases-proceedings/191-0134/facebook-inc-ftc-v and https://ag.ny.gov/sites/default/files/facebook_complaint_12.9.2020.pdf. In the case of Google, see the Department of Justice and the State Advocate General cases: https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws and https://coag.gov/app/uploads/2020/12/Colorado-et-al.-v.-Google-PUBLIC-REDACTED-Complaint.pdf[/footnote]

In the UK, a consultation launched in September 2021 proposed a number of routes to reform the Data Protection Act and the UK GDPR.[footnote]Ada Lovelace Institute. (2021). ‘Ada Lovelace Institute hosts “Taking back control of data: scrutinising the UK’s plans to reform the GDPR”‘. Available at: https://www.adalovelaceinstitute.org/news/data-uk-reform-gdpr/[/footnote] Political motivations to create a ‘post-Brexit’ approach to data protection may test ‘equivalence’ with the European Union, to the detriment of the benefits of coherence and seamless convergence of data rights and practices across borders.

There is also the risk that the UK lowers levels of data protection to try to increase investment, including by large technology companies operating in the UK, therefore reinforcing their market power. Recently released policy documents containing significant changes are the National Data and AI Strategies,[footnote]See: UK Government. (2021). National AI Strategy. Available at: https://www.gov.uk/government/publications/national-ai-strategy and UK Government. (2020). National Data Strategy. Available at: https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy[/footnote] and the Government’s response to the consultation on the reforms to the data protection framework,[footnote]UK Government. (2022). Data: a new direction – Government response to consultation. Available at: https://www.gov.uk/government/consultations/data-a-new-direction/outcome/data-a-new-direction-government-response-to-consultation[/footnote] followed by a draft bill published in July 2022.[footnote]Data Protection and Digital Information Bill. (2022-23). Parliament: House of Commons. Bill no. 143. London: Published by the authority of the House of Commons. Available at: https://bills.parliament.uk/bills/3322/publications[/footnote]

Joining the countries that have developed AI policies and national strategies,[footnote] Stanford University. (2021). Artificial Intelligence Index 2021, chapter 7. Available at https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-7.pdf and OECD/European Commission. (2021). AI Policy Observatory. Available at: https://oecd.ai/en/dashboard[/footnote] Brazil,[footnote]Ministério da Ciência, Tecnologia e Inovações. (2021). Estratégia Brasileira de Inteligência Artificial. Available at: https://www.gov.br/mcti/pt-br/acompanhe-o-mcti/transformacaodigital/inteligencia-artificial[/footnote] the USA[footnote]See: National Artificial Intelligence Initiative Act, 116th Cong. (2020). Available at https://www.congress.gov/bill/116th-congress/house-bill/6216 and the establishment of the National Artificial Intelligence Research Resource Task Force: The White House. (2021). ‘The Biden Administration Launches the National Artificial Intelligence Research Resource Task Force’. Available at: https://www.whitehouse.gov/ostp/news-updates/2021/06/10/the-biden-administration-launches-the-national-artificial-intelligence-research-resource-task-force/[/footnote] and the UK[footnote]UK Government. (2021). National AI Strategy. Available at: https://www.gov.uk/government/publications/national-ai-strategy[/footnote] launched their own initiatives, with regulatory intentions ranging from developing ethical principles and guidelines for responsible use, to boosting research and innovation, to becoming a world leader, an ‘AI superpower’ and a global data hub. Many of these initiatives are industrial policy rather than regulatory frameworks, and focus on creating an enabling environment for the rapid development of AI markets, rather than mitigating risk and harms.[footnote]For concerns raised by the US National Artificial Intelligence Research Resource (NAIRR) see: AI Now and Data & Society’s joint comment. Available at https://ainowinstitute.org/AINow-DS-NAIRR-comment.pdf[/footnote]

In August 2021, China adopted its comprehensive data protection framework consisting of the Personal Information Protection Law,[footnote]For a detailed analysis, see: Dorwart, H., Zanfir-Fortuna, G. and Girot, C. (2021). ‘China’s New Comprehensive Data Protection Law: Context, Stated Objectives, Key Provisions’. Future of Privacy Forum. Available at https://fpf.org/blog/chinas-new-comprehensive-data-protection-law-context-stated-objectives-key-provisions/[/footnote] which is modelled on the GDPR, and the Data Security Law, which focuses on harm to national security and public interest from data-driven technologies.[footnote]Creemers, R. (2021). ‘China’s Emerging Data Protection Framework’. Social Science Research Network. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3964684[/footnote] Researchers argue that understanding this unique regulatory approach should not start from a comparative analysis (for example to jurisdictions such as the EU, which focus on fundamental rights). They trace its roots to the Chinese understanding of cybersecurity, which aims to protect national polity, economy and society from data-enabled harms and defend against vulnerabilities.[footnote]Creemers, R. (2021).[/footnote]

While some of these recent initiatives have the potential to transform market dynamics towards less centralised and less exploitative practices, none of them meaningfully contest the dominant business model of online platforms or promote ethical alternatives. Legislators seem to choose to regulate through large actors as intermediaries, rather than by reimagining how regulation could support a more equal distribution of power. In particular, attention must be paid to the way many proposed solutions tacitly require ‘Big Tech’ to stay big.[footnote]Owen, T. (2020). ‘Doctorow versus Zuboff’. Centre for International Governance Innovation. Available at https://www.cigionline.org/articles/doctorow-versus-zuboff/[/footnote]

The EU’s approach to platform, data and AI regulation

 

In the EU, the Digital Services Act (DSA) and the Digital Markets Act (DMA) bring a proactive approach to platform regulation, by prohibiting certain practices up front and introducing a comprehensive package of obligations for online platforms.

 

The DSA sets clear obligations for online platforms against illegal content and disinformation and prohibits some of the most harmful practices used by online platforms (such as using manipulative design techniques and targeted advertising based on exploiting sensitive data).

 

It mandates increased transparency and accountability for key platform services (such as providing the main parameters used by recommendation systems) and includes an obligation for large companies to perform systemic risk assessments. This is complemented with a mechanism for independent auditors and researchers to access the data underpinning the company’s risk assessment conclusions and scrutinise the companies’ mitigation decisions.

 

While this is undoubtedly a positive shift, the impact of this legislation will depend substantially on online platforms’ readiness to comply with legal obligations, their interpretation of new legal obligations and effective enforcement (which has proved challenging in the past, for example with the GDPR).

 

The DMA addresses anticompetitive behaviour and unfair market practices of platforms that – according to this legislation – qualify as ‘gatekeepers’. Next to a number of prohibitions (such as combining or cross-using personal data without user consent), which are aimed at preventing the gatekeepers’ exploitative behaviour, the DMA contains obligations that – if enforced properly – will lead to more user choice and competition in the market for digital services.

 

These include basic interoperability requirements for instant messaging services, as well as interoperability with the gatekeepers’ operating system, hardware and software when the gatekeeper is providing complementary or supporting services.[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Article 7, Article 6 and Recital 57. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote] Another is the right for business users of the gatekeepers’ services to obtain free-of-charge, high quality, continuous and real-time access to data (including personal data) provided or generated in connection with their use of the gatekeepers’ core service.[footnote]European Parliament and Council of the European Union. (2022). Article 6 (10).[/footnote] End users will also have the right to exercise the portability of their data, both provided as well as generated through their activity on core services such as marketplaces, app stores, search and social media.[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Article 6 (9). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote]

 

The DMA and DSA do not go far enough in terms of addressing deeply rooted challenges, such as supporting alternative business models that are not premised on data exploitation or speaking to users’ expectations to be able to control algorithmic interfaces (such as the interface for content filtering/generating recommendations). Nor does it create a level playing field for new market players who would like to develop services that compete with the gatekeepers’ core services.

 

New approaches to data access and sharing are also seen with the adopted Data Governance Act (DGA)[footnote]Replace: European Parliament and Council of the European Union. (2022). Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657887017015[/footnote] and the draft Data Act.[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote] The DGA introduces the concept of ‘data altruism’ (the possibility for individuals or companies to voluntarily share data for the public good), facilitates the re-use of data from public and private bodies, and creates rules for data intermediaries (providers of data sharing services that are free of conflicts of interests relating to the data they share).

 

Complementing this approach, the proposed Data Act aims at securing end users’ right to obtain all data (personal, non-personal, observed or provided) generated by their use of products such as wearable devices and related services. It also aims to develop a framework for interoperability and portability of data between cloud services, including requirements and technical standards enabling common European data spaces.

 

There is also an increased focus on regulating the design and use of data-driven technologies, such as those that use artificial intelligence (machine learning algorithms). The draft Artificial Intelligence Act (AI Act) follows a risk-based approach that is limited to regulating ‘unacceptable’ and high-risk AI systems, such as prohibiting AI uses that pose a risk to fundamental rights or imposing ex ante design obligations on providers of high-risk AI systems.[footnote]European Commission. (2021). Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206[/footnote]

 

Perhaps surprisingly, the AI Act, as proposed by the European Commission, does not impose any transparency or accountability requirements on systems that pose less than high risk (with the exception of AI system that may deceive or confuse consumers), which include the dominant commercial business-to-consumer (B2C) services (e.g. search engines, social media, some recommendation systems, health monitoring apps, insurance and payment services).

 

Regardless of the type of risk (high-risk or limited-risk), this approach leaves a significant gap in accountability requirements for both large and small players that could be responsible for creating unfair AI systems. Responsibility measures should aim both at regulating the infrastructural power of large technology companies that supply most of the tools for ‘building AI’ (such as large language models, cloud computing power, text and speech generation and translation), as well as at creating responsibility requirements for smaller downstream providers who make use of these tools to construct their underlying services.

 

3. Weak enforcement response in digital markets

Large platforms are by their nature multi-sided, multi-sectoral and operate globally. The regulation of their commercial practices cuts across many sectors, and they are overseen by multiple bodies in different jurisdictions with varying degrees of expertise and in-house knowledge about how platforms operate. These include consumer protection authorities, data protection and competition authorities, non-discrimination and equal opportunities bodies, and financial markets, telecom regulators, media regulators, etc.).

It is well known that these regulatory bodies are frequently under-equipped for the task they are charged with, and there is an asymmetry between the resources available to them compared to the resources large corporations invest in neutralising enforcement efforts. For example, in the EU there is an acute lack of resources and institutional capacity: half the data protection authorities in the EU have an annual budget of €5 million or less, and 21 of the data protection authorities declare that their existing resources are not enough to operate effectively.[footnote]Ryan, J. and Toner, A. (2020). ‘Europe’s governments are failing the GDPR’. brave.com. Available at: https://brave.com/static-assets/files/Brave-2020-DPA-Report.pdf and European Data Protection Board (2020). Contribution of the EDPB to the evaluation of the GDPR under Article 97. Available at: https://edpb.europa.eu/sites/default/files/files/file1/edpb_contributiongdprevaluation_20200218.pdf[/footnote] 

A bigger problem is the lack of regulatory response in general, and recent lessons learned from insufficient data-protection enforcement responses show there needs to be a shift towards a stronger response from regulators, and a more proactive, collaborative approach to curbing exploitative and harmful activities, and bringing down illegal practices.

For example, in 2018 the first complaints against the invasive practices of the online advertising industry (such as real-time bidding, an online ad auctioning system that broadcasts personal data to thousands of companies)[footnote]More details at Irish Council for Civil Liberties. See: https://www.iccl.ie/rtb-june-2021/[/footnote] were filed with the Irish Data Protection Commissioner (Irish DPC) and with the UK’s Information Commissioner Office (ICO),[footnote]Irish Council for Civil Liberties. (2018). Regulatory complaint concerning massive, web-wide data breach by Google and other ‘ad tech’ companies under Europe’s GDPR. Available at: https://www.iccl.ie/digital-data/regulatory-complaint-concerning-massive-web-wide-data-breach-by-google-and-other-ad-tech-companies-under-europes-gdpr/[/footnote] two of the more resourceful – but still not sufficiently funded – authorities. Similar complaints followed across the EU.

After three years of inaction, civil society groups initiated court cases against the two regulators for lack of enforcement, as well as a lawsuit against major advertising and tracking companies.[footnote]See: Irish Council for Civil Liberties. (2022). ‘ICCL sues DPC over failure to act on massive Google data breach’. Available at: https://www.iccl.ie/news/iccl-sues-dpc-over-failure-to-act-on-massive-google-data-breach/; Irish Council for Civil Liberties. (2021). ‘ICCL lawsuit takes aim at Google, Facebook, Amazon, Twitter and the entire online advertising industry’. Available at: https://www.iccl.ie/news/press-announcement-rtb-lawsuit/; and Open Rights Group. Ending illegal online advertising. Available at: https://www.openrightsgroup.org/campaign/ending-adtech-abuse/[/footnote] It was a relatively small regulator, the Belgian Data Protection Authority, that confirmed in its 2022 decision that those ad tech practices are illegal, showing that the lack of resources is not the sole cause for regulatory inertia.[footnote]Belgian Data Protection Authority. (2022). ‘The BE DPA to restore order to the online advertising industry: IAB Europe held responsible for a mechanism that infringes the GDPR’. Available at: https://www.dataprotectionauthority.be/citizen/iab-europe-held-responsible-for-a-mechanism-that-infringes-the-gdpr[/footnote]

Some EU data protection authorities have been criticised for their reluctance to intervene in the technology sector. For example, it took three years from launching the investigation for the Irish regulator to issue a relatively small fine against WhatsApp for failure to meet transparency requirements under the GDPR.[footnote]Data Protection Commission. (2021). ‘Data Protection Commission announces decision in WhatsApp inquiry’. Available at: https://www.dataprotection.ie/en/news-media/press-releases/data-protection-commission-announces-decision-whatsapp-inquiry[/footnote] The authority is perceived as a key ‘bottleneck’ to enforcement because of its delays in delivering enforcement decisions,[footnote]The European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE Committee) also issued a draft motion in 2021 in relation to how the Irish DPC was handling the ‘Schrems II’ case and recommended the European Commission to start the infringement procedures against Ireland for not properly enforcing the GDPR.[/footnote] as many of the large US technology companies are established in Dublin.[footnote]Espinoza, J. (2021). ‘Fighting in Brussels bogs down plans to regulate Big Tech’. Financial Times.. Available at: https://www.ft.com/content/7e8391c1-329e-4944-98a4-b72c4e6428d0[/footnote]

Some have suggested that ‘reform to centralise enforcement of the GDPR could help rein in powerful technology companies’.[footnote]Manancourt, V. (2021). ‘EU privacy law’s chief architect calls for its overhaul’. Politico. Available at: https://www.politico.eu/article/eu-privacy-laws-chief-architect-calls-for-its-overhaul/[/footnote]

The Digital Markets Act (DMA) awards the European Commission the role of a sole enforcer against certain data-related practices performed by ‘gatekeeper’ companies (for example the prohibition of combining and cross-using personal data from different services without consent). The enforcement mechanism of the DMA gives powers to the European Commission to target selected data practices that may also infringe rules typically governed by the GDPR.

In the UK, the ICO has been subject to criticism for its preference for dialogue with stakeholders over formal enforcement of the law. Members of Parliament as well as civil society organisations have increasingly voiced their disquiet over this approach,[footnote]Burgess, M. (2020). ‘MPs slam UK data regulator for failing to protect people’s rights’. Wired UK. Available at: https://www.wired.co.uk/article/ico-data-protection-gdpr-enforcement; Open Rights Group (2021). ‘Open Rights Group calls on the ICO to do its job and enforce the law’. Available at: https://www.openrightsgroup.org/press-releases/open-rights-group-calls-on-the-ico-to-do-its-job-and-enforce-the-law/[/footnote] while academics have queried how the ICO might be held accountable for its selective and discretionary application of the law.[footnote]Erdos, D. (2020). ‘Accountability and the UK Data Protection Authority: From Cause for Data Subject Complaint to a Model for Europe?’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3521372[/footnote]

The 2021 public consultation led by the UK Government – Data: A New Direction – will do little to reassure those concerned, given the significant incursions into the ICO’s regulatory independence mooted.[footnote]Lynskey, O. (2021). ‘EU-UK Data Flows: Does the “New Direction” lead to “Essentially Equivalent” Protection?’. The Brexit Institute. Available at https://dcubrexitinstitute.eu/2021/09/eu-uk-data-new-direction/[/footnote] It remains to be seen whether subsequent consultations initiated by the ICO regarding its regulatory approach signal a shift from selective and discretionary application of law towards formal enforcement action.[footnote]Erdos, D. (2022). ‘What Way Forward on Information Rights Regulation? The UK Information Commissioner’s Office Launches a Major Consultation’. Inforrm. Available at https://inforrm.org/2022/01/21/what-way-forward-on-information-rights-regulation-the-uk-information-commissioners-office-launches-a-major-consultation-david-erdos/[/footnote]

The measures proposed for consultation go even further towards removing some of the important requirements and guardrails against data abuses, which in effect will legitimise practices that have been declared illegal in the EU.[footnote]Delli Santi, M. (2022). ‘A day of reckoning for IAB and Adtech’. Open Rights Group. Available at https://www.openrightsgroup.org/blog/a-day-of-reckoning-for-iab-and-adtech/[/footnote]

Recognising the need for cooperation among different regulators

Examinations of abuses, market failure, concentration tendencies in the digital economy and market power of large platforms are more prominent. Extensive reports were commissioned by governments in the UK,[footnote]Digital Competition Expert Panel. (2019). Unlocking digital competition. UK Government. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf[/footnote] Germany,[footnote]Schweitzer, H., Haucap, J., Kerber, W. and Welker, R. (2018). Modernisierung der Missbrauchsaufsicht für marktmächtige Unternehmen. Baden-Baden: Nomos. Available at https://www.bmwk.de/Redaktion/DE/Publikationen/Wirtschaft/modernisierung-der-missbrauchsaufsicht-fuer-marktmaechtige-unternehmen.pdf?__blob=publicationFile&v=15. An executive summary in English is available at: https://ssrn.com/abstract=3250742[/footnote], the European Union,[footnote]Crémer, J., de Montjoye, Y-A. and Schweitzer, H. (2019) Competition policy for the digital era. European Commission. Available at: http://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf[/footnote] Australia[footnote]Australian Competition and Consumer Commission (ACCC). (2019). Digital Platforms Inquiry – Final Report. Available at: https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf[/footnote] and beyond, asking what transformations are necessary in competition policy, to address the challenges of the digital economy.

A comparison of these four reports highlights the problem of under-enforcement in competition policy and recommends a more active enforcement response.[footnote]Kerber, W. (2019). ‘Updating Competition Policy for the Digital Economy? An Analysis of Recent Reports in Germany, UK, EU, and Australia’. Social Science Research Network. Available at: https://ssrn.com/abstract=3469624[/footnote] It also underlines that all the reports analyse the important interplay between competition policy and other policies such as data protection and consumer protection law.

The Furman report in the UK recommended the creation of a new Digital Markets Unit that collaborates on enforcement with regulators in different sectors and draws on their experience to form a more robust approach to regulating digital markets.[footnote]Digital Competition Expert Panel. (2019). Unlocking digital competition. UK Government. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf[/footnote] In 2020, the UK Digital Regulation Cooperation Forum (DRCF) was established to enhance cooperation between the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA) and support a more coordinated regulatory approach.[footnote]Digital Regulation Cooperation Forum. Plan of work for 2021 to 2022. Ofcom. Available at: https://www.ofcom.org.uk/__data/assets/pdf_file/0017/215531/drcf-workplan.pdf[/footnote]

The need for more collaboration and joined-up thinking among regulators was highlighted by the European Data Protection Supervisor (EDPS) in 2014.[footnote] European Data Protection Supervisor. (2014). Privacy and Competitiveness in the Age of Big Data. The Interplay between data Protection, Competition Law and Consumer Protection in the Digital Economy, Preliminary Opinion. Available at: https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf[/footnote] In 2016, the EDPS launched the Digital Clearinghouse initiative, an international voluntary network of enforcement bodies in different fields,[footnote]See: European Data Protection Supervisor 2016 initiative to create a network of data protection, consumer and competition regulators. Available at: https://www.digitalclearinghouse.org/[/footnote] however its activity has been limited.

Today there is still limited collaboration between regulators across sectors and borders because of a lack of legal basis for effective cooperation and exchange of information, including compelled and confidential information. Support for a more proactive and coherent regulatory enforcement must increase substantially to make a significant impact in terms of limiting the overwhelming power of large technology corporations in markets, over people and in democracy.

Chapter 2: Making data work for people and society

This chapter explores four cross-cutting interventions that have the potential to shift power in the digital ecosystem, especially if implemented in coordination with each other. These provocative ideas are offered with the aim to push forward the thinking around existing data policy and practice.

Each intervention is capable of integrating legal, technological, market and governance solutions that could help transition the digital ecosystem towards a people-first vision. While there are many potential approaches, for the purposes of this report – for clarity and ease of understanding –one type of potential solution or remedy is focused on under each intervention.

Each intervention is woven and connected to the others in a way that sets out a cross-cutting vision of an alternative data future, and which can frame forward-looking debates about data policy and practice. The vision these interventions offer will require social and political standing. Behind each intervention there is a promise of a positive change that needs the support and collaboration of policymakers, researchers, civil society organisations and industry practitioners to make them into a reality.

1. Transforming infrastructure through open ecosystems

The vision

Imagine a world in which digital systems have been transformed, and control over technology infrastructure and algorithms no longer lies in the hands of a few large corporations.

Transforming infrastructure means what was once a closed system of structural dependencies, which enabled large corporations to concentrate power, has been replaced by an open ecosystem where power imbalances are reduced and people can shape the digital experiences they want.

No single company or subset of companies controls the full technological stack of digital infrastructures and services. Users can exert meaningful control over the ways an operating system functions on phones and computers, and actions performed by web browsers and apps.

The incentive structures that drove technology companies to entrench power have been dismantled, and new business models are more clearly aligned with user interests and societal benefits. This means there are no more ‘lock in’ models, in which users find it burdensome to switch to another digital service provider, and fewer algorithmic systems that are optimised to attract clicks, prioritising advertising revenue over people’s needs and interests.

Instead, there is competition and diversity of digital services for users to choose from, and these services use interoperable architectures that enable users to switch easily to other providers and mix-and-match services of their choice within the same platform. For example, third-party providers create products that enable users to seamlessly communicate  on social media channels from a standalone app. Large platforms allow their users to change the default content curation algorithm to the one of their choice.

Thanks to full horizontal and vertical interoperability, people using digital services are empowered to choose their favourite or trusted provider of infrastructure, content and interface. Rather than platforms setting rules and objectives that determine what information is surfaced by their recommender system, third-party providers, including reputable news organisations and non-profits, can build customised filters (operating on the top of default recommender systems to modify the newsfeed) or design alternative recommender systems.

All digital platforms and service providers operate within high standards of security and protection, which are audited and enforced by national regulators. Following new regulatory requirements, large platforms operate under standard protocols that are designed to respect choices made by their users, including strict limitations on the use of their personal data.

How to get from here to there

In today’s digital markets, there is unprecedented consolidation of power in the hands of a few, large US and Chinese digital companies. This tendency towards centralised power is supported by the current abilities of platforms to:

  • process substantial quantities of personal and non-personal data, to optimise their services and the experience of each business or individual user
  • extract market-dominating value from large-volume interactions and transactions
  • use their financial power to either acquire or imitate (and further improve) innovations in the digital economy
  • use this capacity to leverage dominance into new markets
  • use financial power to influence legislation and stall enforcement through litigation.

The table below takes a more detailed look at some of the sources of power and possible remedies.

These dynamics reduce the possibility for new alternative services to be introduced and contribute to users’ inability to switch services and to make value-based decisions (for example, to choose a privacy-optimised social media application, or to determine what type of content is prioritised on their devices).[footnote]Brown, I. (2021). ‘From ‘walled gardens’ to open meadows’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/walled-gardens-open-meadows/[/footnote] Instead, a few digital platforms have the ability to capture a large user base and extract value from attention-maximising algorithms and ‘dark patterns’ – deceptive design practices that influence users’ choices and encourage them to take actions that result in more profit for the corporation, often at the expense of the user’s rights and digital wellbeing.[footnote]See: Brown, I. (2021) and Norwegian Consumer Council. (2018). Deceived by Design. Available at: https://www.forbrukerradet.no/undersokelse/no-undersokelsekategori/deceived-by-design/[/footnote]

As discussed in Chapter 1, there is still much to explore when considering possible regulatory solutions, and there are many possible approaches to reducing concentration and market dominance. Conceptual discussions about regulating digital platforms that have been promoted in policy and academia range from ‘breaking up big tech’,[footnote]Warren, E. (2020). Break Up Big Tech. Available at: https://2020.elizabethwarren.com/toolkit/break-up-big-tech[/footnote] by separating the different services and products they control into separate companies, to nationalising and transforming platforms into public utilities or conceiving of them as universal digital services.[footnote]Muldoon, J. (2020). ‘Don’t Break Up Facebook — Make It a Public Utility’. Jacobin. Available at: https://www.jacobinmag.com/2020/12/facebook-big-tech-antitrust-social-network-data[/footnote] Alternative proposals suggest limiting the number of data-processing activities a company can perform concurrently, for example separating search activities from targeted advertising that exploits personal profiles.

There is a need to go further. The imaginary picture painted at the start of this section points towards an environment where there is competition and meaningful choice in the digital ecosystem, where rights are more rigorously upheld and where power over infrastructure is less centralised. This change in power dynamics would require, as one of the first steps, that digital infrastructure is transformed with full vertical and horizontal interoperability. The imagined ecosystem includes large online platforms, but in this scenario they find it much more difficult to maintain a position of dominance, thanks to real-time data portability, user mobility and requirements for interoperability stimulating real competition in digital services.

What is interoperability?

Interoperability is the ability of two or more systems to communicate and exchange information. It gives end users the ability to move data between services (data portability), and to access services across multiple providers.

 

How can interoperability be enabled?

Interoperability can be enabled by developing (formal or informal) standards that define a set of rules and specifications that, when implemented, allow different systems to communicate and work together. Open standards are created through the consensus of a group of interested parties and are openly accessible and usable by anyone.

This section explores a range of interoperability measures that can be introduced by national or European policy makers, and discusses further considerations to transform the current, closed platform infrastructure into an open ecosystem.

Introducing platform interoperability

Drawing from examples of other sectors that historically have operated in silos,  mandatory interoperability measures are a potential tool that merit further exploration, to create new opportunities for both companies and users.

Interoperability is a longstanding policy tool in EU legislation and more recent digital competition reviews suggest it as a measure against highly concentrated digital markets.[footnote]Brown, I. (2020). ‘Interoperability as a tool for competition’. CyberBRICS. Available at: https://cyberbrics.info/wp-content/uploads/2020/08/Interoperability-as-a-tool-for-competition-regulation.pdf and Brown, I. (2021). ‘From ‘walled gardens’ to open meadows’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/walled-gardens-open-meadows/[/footnote] 

In telecommunications, interoperability measures make it possible to port phone numbers from one provider to another, and enable customers of one phone network to call and message customers on other networks, improving choice for consumers. In the banking sector, interoperability rules made it possible for third parties to facilitate account transfers from one bank to another, and to access data about account transactions to build new services. This opened up the banking market for new competitors and delivered new types of financial services for customers.

In the case of large digital platforms, introducing mandatory interoperability measures is one way to allow more choice of service (preventing both individual and business users from being trapped in one company’s products and services), and to re-establish the conditions to enable a competitive market for start-ups and small and medium-sized enterprises to thrive.[footnote]Brown, I. (2021).[/footnote]

While some elements of interoperability are present in existing or proposed EU legislation, this section explores a much wider scope of interoperability measures than those that have already been adopted. (For a more detailed discussion on ‘Possible interoperability mandates and their practical implications’, see the text box below.)

Some of these elements of interoperability in existing or proposed EU legislation are:[footnote]For a more comprehensive list, see: Brown, I. (2020). ‘Interoperability as a tool for competition’. CyberBRICS. Available at: https://cyberbrics.info/wp-content/uploads/2020/08/Interoperability-as-a-tool-for-competition-regulation.pdf[/footnote]

  • The Digital Markets Act enables interoperability requirements between instant messaging services, as well as with the gatekeepers’ operating system, hardware and software (when the gatekeeper is providing complementary or supporting services), and strengthens data portability rights.[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Recital 64, Article 6 (7), Recital 57, and Article 6 (9) and Recital 59. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote]
  • The Data Act proposal aims to enable switching between cloud providers.[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote]
  • Regulation on promoting fairness and transparency for business users of online intermediation services (‘platform-to-business regulation’) gives business users the right to access data generated through the provision of online intermediation services.[footnote]European Parliament and European Council. Regulation 2019/1150 on promoting fairness and transparency for business users of online intermediation services. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32019R1150[/footnote]

These legislative measures address some aspects of interoperability, but place limited requirements on services other than instant messaging services, cloud providers and operating systems in certain situations.[footnote]Gatekeepers designated under the Digital Markets Act need to provide interoperability to their operating system, hardware or software features that are available or used by the gatekeeper in the provision of its own complementary or supporting services or hardware. See: European Parliament and Council of the European Union. (2022). Digital Markets Act, Recital 57. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote] They also do not articulate a process for creating technical standards around open protocols for other services. This is why there is a need to test more radical ideas, such as mandatory interoperability for large online platforms covering both access to data and platform functionality.

 

Possible interoperability mandates and their practical implications

Ian Brown

 

Interoperability in digital markets requires some combination of access to data and platform functionality.

 

Data interoperability

Data portability (Article 20 of the EU GDPR) is the right of a user to move their personal data from one company to another. (The Data Transfer Project developed by large technology companies is slowly developing technical tools to support this.[footnote]The Data Transfer Project is a collaboration launched in 2017 between large companies such as Google, Facebook, Microsoft, Twitter, Apple to build a common framework with open-source code for data portability and interoperability between platforms. More information is available at: https://datatransferproject.dev/[/footnote]) This should help an individual switch from one company to another, including by giving price comparison tools access to previous customer bills. 

 

However, a wider range of uses could be enabled by real-time data mobility[footnote]Digital Competition Expert Panel. (2019). Unlocking digital competition. UK Government. Available at: https://assets. publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_ competition_furman_review_web.pdf[/footnote] or interoperability,[footnote]Kerber, W. and Schweitzer, H. (2017). ‘Interoperability in the Digital Economy’. JIPITEC, 8(1). Available at: https://www.jipitec.eu/issues/jipitec-8-1-2017/4531[/footnote] implying that an individual can give one company permission to access their data held by another, and meaning it can be updated whenever they use the second service. These remedies can stand alone, where the main objective is to enable individuals to give access to their personal data held by an incumbent firm to competitors.

 

Scholars make an additional distinction between syntactic or technical interoperability, the ability for systems to connect and exchange data (often via Application Programming Interfaces or ‘APIs’) and semantic interoperability, that connected systems share a common understanding of the meaning of data they exchange.[footnote]Kerber, W. and Schweitzer, H. (2017).[/footnote]

 

An important element of making both types of data-focused interoperability work is developing more data standardisation to require datasets to be structured, organised, stored and transmitted in more consistent ways across different devices, services and systems. Data standardisation creates common ontologies, or classifications, that specify the meaning of data.[footnote]Gal, M.S. and Rubinfeld, D. L. (2019), ‘Data Standardization’. NYU Law Review, 94, no. (4). Available at: https://www.nyulawreview.org/issues/volume-94-number-4/data-standardization/[/footnote]

 

For example, two different instant messaging services would benefit from a shared internal mapping of core concepts such as identity (phone number, nickname, email), rooms (public or private group chats, private messaging), reactions, attachments, etc. – these are concepts and categories that could be represented in a common ontology, to bridge functionality and transfer data across these services.[footnote]Matrix.org is a recent design of an open protocol for instant messaging service interoperability.[/footnote]

 

Data standardisation is an essential underpinning for all types of portability and interoperability and, just like the development of technical standards for protocols, it needs both industry collaboration and measures to ensure powerful companies do not hijack standards to their own benefit.

 

An optional interoperability function is to require companies to support personal data stores (PDS), where users store and control data about them using a third-party provider and can make decisions about how it is used (e.g. the MyData model[footnote]Kuikkaniemi, K., Poikola, A. and Honko, H. (2015). MyData – A Nordic Model for Human-Centered Personal Data Management and Processing’. Ministry of Transport and Communications. Available at: https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/78439/MyData-nordic-model.pdf[/footnote] and Web inventor Tim Berners-Lee’s Solid project). 

 

The data, or consent to access it, could be managed by regulated aggregators (major Indian banks are developing a model where licensed entities aggregate account data with users’ consent and therefore act as an interoperability bridge between multiple financial services),[footnote]Singh, M. (2021) ‘India’s Account Aggregator Aims to Bring Financial Services to Millions’. TechCrunch. Available at: https://social.techcrunch.com/2021/09/02/india-launches-account-aggregator-system-to-extend-financial-services-to-millions/[/footnote] or facilitated by user software through an open set of standards adopted by all service providers (as in the UK’s Open Banking). It is also possible for service providers to send privacy-protective queries or code to run on personal data stores inside a protected sandbox, limiting the service provider’s access to data (e.g. a mortgage provider could send code, checking an applicant’s monthly income was above a certain level, to their PDS or current account provider, without gaining access to all of their transaction data).[footnote]Yuchen, Z., Haddadi, H., Skillman, S., Enshaeifar, S., and Barnaghi, P. (2020) ‘Privacy-Preserving Activity and Health Monitoring on Databox’. EdgeSys ’20: Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, pp. 49–54. Available at: https://doi.org/10.1145/3378679.3394529[/footnote]

 

The largest companies currently have a significant advantage in their access to very large quantities of user data, particularly when it comes to training machine learning systems. Requiring access to statistical summaries of the data (e.g. popularity of specific social media content and related parameters) may be sufficient, while limiting the privacy problems caused. Finally, firms could be required to share the (highly complex) details of machine learning models, or provide regulators and third-parties access to them to answer specific questions (such as the likelihood a given piece of social-media content is hate speech).

 

The interoperability measures described above would enable a smoother transfer of data between digital services, and enable users to exert more control over what kind of data is shared and in what circumstances. This would make for a ‘cleaner’ data ecosystem, in which platforms and services are no longer incentivised to gather as much data as possible on every user.

 

Rather, users would have more power to determine how their data is collected and shared, and smaller services wouldn’t need to engage in extractive data practices to ‘catch up’ with larger platforms, as barriers to data access and transfer would be reduced. The overall impact on innovation would depend on whether increased competition resulting from data sharing at least counterbalanced these reduced incentives.

 

Functionality-oriented interoperability

Another form of interoperability relates to enabling digital services and platforms to work cross-functionally, which could improve user choice in the services they use and reduce the risk of ‘lock in’ to a particular service. Examples of functionality-oriented interoperability (sometimes referred to as protocol interoperability,[footnote]Crémer, J., de Montjoye, Y-A., and Schweitzer, H. (2019). Competition Policy for the Digital Era. European Commission. Available at https://data.europa.eu/doi/10.2763/407537[/footnote] or in telecoms regulation, interconnection of networks) include:

  • the ability for a user of one instant-messaging service to send a message to a user or group on a competing service
  • the user of one social media service can ‘follow’ a user on another service, and ‘like’ their shared content
  • the ability of a user of a financial services tool to initiate a payment from an account held with a second company
  • the user of one editing tool can collaboratively edit a document or media file with the user of a different tool, hosted on a third platform.

 

Services communicate with each other using open/publicly accessible APIs and/or standardised protocols. In Internet services, this generally looks like the ‘decentralised’ network architectures shown below:

The UK’s Open Banking Standard recommended: ‘The Open Banking API should be built as an open, federated and networked solution, as opposed to a centralised/hub-like approach. This echoes the design of the Web itself and enables far greater scope for innovation.’[footnote]Open Data Institute. (2016). The Open Banking Standard. Available at: http://theodi.org/wp-content/uploads/2020/03/298569302-The-Open-Banking-Standard-1.pdf[/footnote]

 

An extended version of functional interoperability would allow users to exercise other forms of control over the products and services they use, including:

  • signalling their preferences to platforms on profiling – the recording of data to assess or predict their preferences – using a tool such as the Global Privacy Control, or expressing their preferred default services such as search
  • replacing core platform functionality, such as a timeline ranking algorithm or an operating system default mail client, with a preferred version from a competitor (known as modularity)[footnote]Farrell, J., and Weiser, P. (2003). ‘Modularity, Vertical Integration, and Open Access Policies: Towards a Convergence of Antitrust and Regulation in the Internet Age’. Harvard Journal of Law and Technology, 17(1). Available at: https://doi.org/10.2139/ssrn.452220[/footnote]
  • using their own choice of software to interact with the platform.

 

Noted competition economist Cristina Caffarra has concluded: ‘We need wall-to-wall [i.e. near-universal] interoperability obligations at each pinch point and bottleneck: only if new entrants can connect and leverage existing platforms and user bases can they possibly stand a chance to develop critical mass.’[footnote]Caffarra, C. (2021). ‘What Are We Regulating For?’. VOX EU. Available at: https://cepr.org/voxeu/blogs-and-reviews/what-are-we-regulating[/footnote] Data portability alone is a marginal solution (and a limited remedy for GAFAM (Google, Apple, Facebook (now Meta Platforms), Amazon, Microsoft) when those companies want to flag their good intentions.[footnote]Caffarra, C. (2021).[/footnote] A review of portability in the Internet of Things sector came to a similar conclusion.[footnote]Turner, S., Quintero, J. G., Turner, S., Lis, J. and Tanczer, L. M. (2020). ‘The Exercisability of the Right to Data Portability in the Emerging Internet of Things (IoT) Environment’. New Media & Society. Available at: https://doi.org/10.1177/1461444820934033[/footnote]

 

Further considerations and provocative concepts

Mandatory interoperability measures have the potential to transform digital infrastructure, and to enable innovative services and new experiences for users. However, they need to be supported by carefully considered additional regulatory measures, such as cybersecurity, data protection and related accountability frameworks. (See text box below on ‘How to address sources of platform power? Possible remedies’ for an overview of interoperability and data protection measures that could tackle some of the sources of power for platforms.)

Also, the development of technical standards for protocols and classification systems or ontologies specifying the meaning of data (see text box above on ‘Possible interoperability mandates and their practical implications’) is foundational to data and platform interoperability. However, careful consideration must be placed on designing new types of infrastructure, in order to prevent platforms from consolidating control. Examples from practice show that developing open standards and protocols are not enough on their own.

Connected to the example above on signalling preferences to platforms, open protocols such as the ‘Do Not Track’ header were meant to help users more easily exercise their data rights by signalling an opt-out preference from website tracking.[footnote]Efforts to standardise the ‘Do Not Track’ header ended in 2019 and expressing tracking preferences at browser level is not currently a widely adopted practice. More information is available here: https://www.w3.org/TR/tracking-dnt/[/footnote] In this case, the standardisation efforts stopped due to insufficient deployment,[footnote]See here: https://github.com/w3c/dnt/commit/5d85d6c3d116b5eb29fddc69352a77d87dfd2310[/footnote] demonstrating the significant challenge in obliging platforms to  facilitate the use of standards in the services they deploy. 

A final point relates to creating interoperable systems that do not overload users with too many choices. Already today it is difficult for users to manage all the permissions they give across all the services and platforms they use. Interoperability may offer solutions for users to share their preferences and permissions for how their data should be collected and used by platforms, without requiring recurring ‘cookie notice’-type requests to a user when using each service.

How to address sources of platform power? Possible remedies

Ian Brown

 

Interoperability and related remedies have the potential to address not only problems resulting from market dominance of a few large firms, but – more importantly – some of the sources of their market power. However, every deep transformation needs to be carefully designed to prevent unwanted effects. The challenges associated with designing market interventions based on interoperability mandates need to be identified early in the policy- making process so that problems can either be solved or accounted for.

 

The table below presents specific interoperability measures, classified by their potential to address various sources of large platforms’ power, next to problems that are likely to result from their implementation.

 

While much of the policy debate so far on interoperability remedies has taken place within a competition-law framework (including telecoms and banking competition), there are equally important issues to consider under data and consumer protection law, as well as useful ideas from media regulation. Competition-focused measures are generally applied only to the largest companies, while other measures can be more widely applied. In some circumstances these measures can be imposed under existing competition-law regimes on dominant companies in a market, although this approach can be extremely slow and resource-intensive for enforcement agencies.

 

The EU Digital Markets Act (DMA), and US proposals (such as the ACCESS Act and related bills), impose some of these measures up-front on the very largest ‘gatekeeper’ companies (as defined by the DMA). The European Commission has also introduced a Data Act that includes some of the access to data provisions below.[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote] Under these measures, smaller companies are free to decide whether to make use of interoperability features that their largest competitors may be obliged to support.

Sources of market power for large firms/platforms   Proposed interoperability or related remedies Potential problems
Access to individual customer data (including cross-use of data from multiple services) Real-time and continuous user-controlled data portability/data interoperability

Requirement to support user data stores

(Much) stricter enforcement of data minimisation and purpose limitation requirements under data protection law, alongside meaningful transparency about reasons for data collection (or prohibiting certain data uses cross-platform)

Need for multiple accounts with all services, and take-it-or-leave-it contract terms

Incentive for mass collection, processing and sharing of data, including profiling

Access to large-scale raw customer data for analytics/product improvement Mandated competitor access to statistical data[footnote]For example, search query and clickstream data.[/footnote]

*Mandated competitor access to raw data is dismissed because of significant data protection issues

Reduced incentives for data collection
Access to large-scale aggregate/statistical customer data Mandated competitor access to models, or specific functionality of models via APIs Reduced incentives for data collection and model training
Ability to restrict competitor interaction with customers Requirement to support open/publicly accessible APIs or standardised communications protocols Complexity of designing APIs/standards, while preventing anticompetitive exclusion
Availability and use of own core platform services to increase ‘stickiness’ Government coordination and funding for development of open infrastructural standards and components

Requirement for platforms to support/integrate these standard components

Technical complexity of full integration of standard/competitor components into services/design of APIs while preventing anticompetitive exclusion

Potential pressure for incorporation of government surveillance functionality in standards

Ability to fully control user interface, such as advertising, content recommendation, specific settings, or self-preferencing own services Requirement to support competitors’ monetisation and filtering/recommendation services via open APIs[footnote]Similar to ‘must carry’ obligations in media law, requiring, for example, a cable or satellite TV distributor to carry public service broadcasting channels.[/footnote]

Requirement to present competitors’ services to users on an equal basis[footnote]Requiring, for example, a cable or satellite TV distributor to show competitors’ channels equally prominently in Electronic Programme Guides as their own.[/footnote]

Requirement to recognise specific user signals

Open APIs to enable alternative software clients

Technical complexity of full integration of competitor components into services/design of APIs while preventing anticompetitive exclusion

Food for thought

In the previous section strong data protection and security provisions were emphasised as essential for building an open ecosystem that enables more choice for users, respects individual rights and facilitates competition.

Going a step further, there is a discussion to be had about boundaries of system transformation that seem achievable with interoperability. What are the ‘border’ cases, where the cost of transformation outweighs its benefits? What immediate technical, societal and economic challenges can be identified, when imagining more radical implementations of interoperability than those that have already been tested or are being proposed in EU policy?

In order to trigger further discussion, a set of problems and questions are offered as provocations:

  1. Going further, imagine a fully interoperable ecosystem, where different platforms can talk to each other. What would it mean to apply a full interoperability mandate across different digital services and what opportunities would it bring? For example, provided that technical challenges are overcome, what new dynamics would emerge if a Meta Platforms (Facebook) user could exchange messages with Twitter, Reddit or TikTok users without leaving the platform?
  2. More modular and customisable platform functionalities may change dynamics between users and platforms and lead to new types of ecosystems. How would the data ecosystem evolve if core platform functionalities were opened up? For example, if users could choose to replace core functionalities such as content moderation or news feed curation algorithms with alternatives offered by independent service providers, would this bring more value for individual users and/or societal benefit, or further entrench the power of large platforms (becoming indispensable infrastructure)? What other policy measures or economic incentives can complement this approach in order to maximise its transformative potential and prevent harms?
  3. Interoperability measures have produced important effects in other sectors and present a great potential for digital markets. What lessons can be learned from introducing mandatory interoperability in the telecommunications and banking sectors? Is there a recipe for how to open up ecosystems with a ‘people-first’ approach that enables choice while preserving data privacy and security, and provides new opportunities and innovative services that benefit all?
  4. Interoperability rules need to be complemented and supported by measures that take into account inequalities and make sure that the more diverse portfolio of services that is created through interoperability is accessible to the less advantaged. Assuming more choice for consumers has already been achieved through interoperability mandates, what other measures need to be in place to reduce structural inequalities that are likely to keep less privileged consumers locked in the default service? Experience from the UK energy sector shows that it is often the consumers/users with the fewest resources who are least likely to switch services and benefit from the opportunity of choice (the ‘poverty premium’).[footnote]Davies, S. and Trend, L. (2020). The Poverty Premium: A Customer Perspective. University of Bristol Personal Finance Research Centre. Available at https://fairbydesign.com/wp-content/uploads/2020/11/The-poverty-premium-A-Customer-Perspective-Report.pdf[/footnote]

2. Reclaiming control of data from dominant companies

The vision

In this world, the primary purpose of generating, collecting, using, sharing and governing data is to create value for people and society. The power to make decisions about data has been removed from the few large technology companies who controlled the data ecosystem in the early twenty-first century, and is instead delegated to public institutions with civic engagement at a local and national level.

To ensure that data creates value for people and society, researchers and public-interest bodies oversee how data is generated, and are able to access and repurpose data that traditionally has been collected and held by private companies. This data can be used to shape economic and social policy, or to undertake research into social inequalities at the local and national level. Decisions around how this data is collected, shared and governed are overseen by independent data access boards.

The use of this data for societally beneficial purposes is also carefully monitored by regulators, who provide checks and balances on both private companies to share this data under high standards of security and privacy, and on public agencies and researchers to use that data responsibly.

In this world, positive results are being noticed from practices that have become the norm, such as developers of data-driven systems making their systems more auditable and accessible to researchers and independent evaluators. Platforms are now fully transparent about their decisions around how their services are designed and used. Designers of recommendation systems  publish essential information, such as the input variables and optimisation criteria used by algorithms and results of their impact assessments, which supports public scrutiny. Regulators, legislators, researchers, journalists and civil society organisations  easily interrogate algorithmic systems, and have a straightforward  understanding over what decisions systems may be rendering and how those decisions impact people and society.

Finally, national governments have launched ‘public-interest data companies’, which collect and use data under strict requirements for objectives that are in the public interest. Determining ‘public interest’ is a question these organisations routinely return to through participatory exercises that empower different members of society

The importance of data in the digital market triggers the question how control over data and algorithms can be shifted away from dominant platforms, to allow individuals and communities to be involved in decisions about how their data is used. The imaginary scenario above builds a picture of a world where data is used for public good, and not (only) for corporate gain.

Current exploitative data practices are based on access to large pools of personal and non-personal data and the capacity to efficiently use data to extract value by means of advanced analytics.[footnote]Ezrachi, A. and Reyna, A. (2019). ‘The role of competition policy in protecting consumers’ well-being in the digital era’. BEUC. Available at: https://www.beuc.eu/publications/beuc-x-2019-054_competition_policy_in_digital_markets.pdf[/footnote]

The insights into social patterns and trends that are gained by large companies through analysing vast datasets currently remain closed off and are used for extracting and maximising commercial gains, where they could have considerable social value.

Determining what constitutes uses of data for ‘societal benefit’ and ‘public interest’ is a political project that must be undertaken with due regard for transparency and accountability. Greater mandates to access and share data must be accompanied by strict regulatory oversight and community engagement to ensure these uses deliver actual benefit to individuals impacted by the use of this data.

The previous section discussed the need to transform infrastructure in order to rebalance power in the digital ecosystem. Another and interrelated foundational area where more profound legal and institutional change is needed is in control over data.

Why reclaim control over data?

 

For the purposes of this proposition, reflecting the focus on creating more societal benefit, the first goal of reclaiming control over data is to open up access to data and resources from companies and repurposing them for public-interest goals, such as developing public policies that take into consideration insights and patterns from large-scale datasets. A second purpose is to open up access to data and to machine-learning algorithms, in order to increase scrutiny, accountability and oversight over how proprietary algorithms function and to understand their impact at the individual, collective and societal level.

How to get from here to there

Proprietary siloing of data is currently one of the core obstacles to using data in societally beneficial ways. But simply making data more shareable, without specific purposes and strong oversight can lead to greater abuses rather than benefits. To counter this, there is a need for:

  • legal mandates that private companies make data and resources available for public interest purposes
  • regulatory safeguards to ensure this data is shared securely and with independent oversight.

Mandating companies share data and resources in the public interest

One way to reclaim control over data and repurpose it for societal benefits is to create legal mandates requiring companies to share data and resources that could be used in the public interest. For example:

  • Mandating the release from private companies of personal and non-personal aggregate data for public use (where aggregate data means a combination of individual data, which is anonymised through eliminating personal information).[footnote]While there is an emerging field around ‘structured transparency’ that seeks to use privacy-preserving techniques to provide access to personal data without a privacy trade-off, these methods have not yet been proven in practice. For a discussion around structured transparency, see: Trask, A., Bluemke, E., Garfinkel, B., Cuervas-Mons, C. G. and Dafoe, A. (2020). ‘Beyond Privacy Trade-offs with Structured Transparency’. arXiv, Available at https://arxiv.org/pdf/2012.08347.pdf[/footnote] These datasets would be used to inform public policies (e.g. use mobility patterns from a ride-sharing platform to develop better road infrastructure and traffic management).[footnote]In 2017, Uber launched the Uber Movement initiative, which releases free-of-charge aggregate datasets to help cities better understand traffic patterns and address transportation and infrastructure problems. See: https://movement.uber.com/[/footnote]
  • Requiring companies to create interfaces for running data queries on issues of public interest (for example public health, climate, pollution, etc). This model relies on using the increased processing and analytics capabilities inside a company, instead of asking for access to large ‘data dumps’, which might prove difficult and resource intensive for public authorities and researchers to process. Conditions need to be in place around what types of queries are allowed, who can run these and what are the company’s obligations around providing responses.
  • Providing access for external researchers and regulators to machine learning models and core technical parameters of AI systems, which could enable evaluation of an AI system’s performance and real optimisation goals (for example checking the accuracy and performance of content moderation algorithms for hate speech).

Some regulatory mechanisms are emerging at national and regional level in support of data access mandates. For example, in France, the 2016 Law for a Digital Republic (Loi pour une République numérique) introduces the notion of ‘data of general interest’ which includes access to data from private entities that have been delegated to run a public service (e.g. utility or transportation), access to data from entities whose activities are subsidised by public authorities, and access to certain private databases for the statistical purposes.[footnote]See: LOI n° 2016-1321 du 7 octobre 2016 pour une République numérique (1). Available at: https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000033202746/[/footnote]

In Germany, the 2019 leader of the Social Democratic Party championed a ‘data for all’ law that advocated for a ‘data commons’ approach and breaking-up data monopolies through a data-sharing obligation for market-dominant companies.[footnote]Nahles, A. (2019). ‘Digital progress through a data-for-all law’. Social Democratic Party. Available at: https://www.spd.de/aktuelles/daten-fuer-alle-gesetz/[/footnote] In the UK, the Digital Economy Act provides a legal framework for the Office for National Statistics (ONS) to access data held within the public and private sectors in support of statutory functions to produce official statistics and statistical research.[footnote]See: Chapter 7 of Part 5 of the Digital Economy Act and UK Statistics Authority. ‘Digital Economy Act: Research and Statistics Powers’. Available at: https://uksa.statisticsauthority.gov.uk/digitaleconomyact-research-statistics/[/footnote]

The EU’s Digital Services Act (DSA) includes a provision on data access for independent researchers.[footnote]European Parliament. (2022). Digital Services Act, Article 31. Available at: https://www.europarl.europa.eu/doceo/document/TA-9-2022-0269_EN.html[/footnote]

Under the DSA, large companies will need to comply with a number of transparency obligations, such as creating a public database of targeted advertisement and providing more transparency around how recommender systems work. It also includes an obligation for large companies to perform systemic risk assessments and to implement steps to mitigate risk.

In order to ensure compliance with the transparency provisions in the regulation, the DSA mandates independent auditors and vetted researchers with access to the data that led to the company’s risk assessment conclusions and mitigation decisions. This provision ensures oversight over the self-assessment (and over the independent audit) that companies are required to carry out, as well as scrutiny over the choices large companies make around their systems.

Other dimensions of access to data mandates can be found in the EU’s Data Act proposal, which introduces compulsory access to company data for public-sector bodies in exceptional situations (such as public emergencies or where it is needed to support public policies and services).[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote] The Data Act also provides for various data access rights, such as a right for individuals and businesses to access the data generated from the products or related service they use and share the data with a third party continuously and in real-time[footnote]European Commission. (2021). Articles 4 and 5.[/footnote] (companies which fall under the category of ‘gatekeepers’ are not eligible to receive this data).[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act), Article 5 (2). Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote]

This forms part of the EU general governance framework for data sharing in business-to-consumer, business-to-business and business-to-government relationships created by the Data Act. It complements the recently adopted Data Governance Act (focusing on voluntary data sharing by individuals and businesses and creating common ‘data spaces’) and the Digital Markets Act (which strengthens access by individual and business users to data provided or generated through the use of core platform services such as marketplaces, app stores, search, social media, etc.).[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Article 6 (9) and (10). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote]

Independent scrutiny of data sharing and AI systems

Sharing data for the ‘public interest’ will require novel forms of independent scrutiny and evaluation, to ensure such sharing is legitimate, safe, and has positive societal impact. In cases where access to data is involved, concerns around privacy and data security need to be acknowledged and accounted for.

In order to mitigate some of these risks, one recent model proposes a system of governance in which a new independent entity would assess the researchers’ skills and capacity to conduct research within ethical and privacy standards.[footnote]Benesch, S. (2021). ‘Nobody Can See Into Facebook’. The Atlantic. Available at: https://www.theatlantic.com/ideas/archive/2021/10/facebook-oversight-data-independent-research/620557/[/footnote] In this model, an independent ethics board would review the project proposal and data protection practices for both the datasets and the people affected by the research. Companies would be required to ‘grant access to data, people, and relevant software code in the form researchers need’ and refrain from influencing the outcomes of research or suppressing findings.[footnote]Benesch, S. (2021).[/footnote]

An existing model for gaining access to platform data is Harvard’s SocialScienceOne project,[footnote]See: Harvard University. Social Science One. Available at: https://socialscience.one/[/footnote] which partnered with Meta Platforms (Facebook) in the wake of the Cambridge Analytica scandal to control access to a dataset containing public URLs shared and clicked by Facebook users globally, along with metadata including Facebook likes. Researchers requests for access to the dataset go to an academic advisory board that is independent from Facebook, and which reviews and approves applications.

While initiatives like SocialScienceOne are promising, it has faced its share of criticism for failing to provide timely access to requests,[footnote]Silverman, C. (2019). ‘Exclusive: Funders Have Given Facebook A Deadline To Share Data With Researchers Or They’re Pulling Out’. BuzzFeed. Available at: https://www.buzzfeednews.com/article/craigsilverman/funders-are-ready-to-pull-out-of-facebooks-academic-data[/footnote] and concerns that the dataset Meta Platforms (Facebook) shared has significant gaps.[footnote]Timberg, C. (2021). ‘Facebook made big mistake in data it provided to researchers, undermining academic work’. Washington Post. Available at: https://www.washingtonpost.com/technology/2021/09/10/facebook-error-data-social-scientists/[/footnote]

The programme also relies on the continued voluntary action of Meta Platforms (Facebook), and therefore lacks any guarantees that the corporation (or others like it) will provide this data in years to come. Future regulatory proposals should explore ways to create incentives for firms to share data in a privacy-preserving way, but not use them as shields and excuses to prevent algorithm inspection.

A related challenge is developing novel methods for ensuring external oversight and evaluation of AI systems and models that are trained on data shared in this way. Two approaches to holding platforms and digital services accountable to the users and communities they serve are algorithmic impact assessments, and algorithm auditing.

Algorithmic impact assessments look at how to identify possible societal impacts of a system before it is in use, and ongoing once it is. They have been proposed primarily in the public sector,[footnote]Ada Lovelace Institute. (2021). Algorithmic accountability for the public sector. Available at: https://www.adalovelaceinstitute.org/report/algorithmic-accountability-public-sector/[/footnote] with a focus on public participation in the identification of harms and publication of findings. Recent work has seen them explored in a data access context, making them a condition of access.[footnote]Ada Lovelace Institute. (2022). Algorithmic impact assessment: a case study in healthcare. Available at: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/[/footnote]

Algorithm auditing involves looking at the behaviour of an algorithmic system (usually by examining inputs and outputs) to identify whether risks and potential harms are occurring, such as discriminatory outcomes,[footnote]A famous example is ProPublica’s bias audit of a criminal risk assessment algorithm. See: Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). ‘Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks’. ProPublica. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing[/footnote] or the prevalence of certain types of content.[footnote]A recent audit of Twitter looked at how its algorithm amplifies certain political opinions. See: Huszár, F., Ktena, S. I., O’Brien, C., Belli, L., Schlaikjer, A., and Hardt, M. (2021). ‘Algorithmic amplification of politics on Twitter’. Proceedings of the National Academy of Sciences of the United States of America, 119(1). Available at: https://www.pnas.org/doi/10.1073/pnas.2025334119[/footnote]

The Ada Lovelace Institute’s work identified six technical inspection methods that could be applied in scrutinising social media platforms, each with its own limitations and challenges.[footnote]Ada Lovelace Institute. (2021). Technical methods for regulatory inspection of algorithmic systems in social media platforms. Available at: https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/[/footnote] Depending on the method used, access to data is not always necessary, however important elements for enabling auditing are: access to documentation about the dataset’s structure and purpose, the system’s design and functionality, and access to interviews with developers of that system. 

In recent years, a number of academic and civil society initiatives to conduct third-party audits of platforms have been blocked because of barriers to accessing data held by private developers. This has led to repeated calls for increased transparency and access to the data that platforms hold.[footnote]Kayser-Bril, N. (2020). ‘AlgorithmWatch forced to shut down Instagram monitoring project after threats from Facebook’. AlgorithmWatch. Available at: https://algorithmwatch.org/en/instagram-research-shut-down-by-facebook/ and Albert, J., Michot, S., Mollen, A. and Müller, A. (2022). ‘Policy Brief: Our recommendations for strengthening data access for public interest research’. AlgorithmWatch. Available at: https://algorithmwatch.org/en/policy-brief-platforms-data-access/[/footnote] [footnote]Benesch, S. (2021). ‘Nobody Can See Into Facebook’. The Atlantic. Available at: https://www.theatlantic.com/ideas/archive/2021/10/facebook-oversight-data-independent-research/620557/[/footnote]

There is also growing interest in the role of regulators, who, in a number of jurisdictions, will be equipped with new inspection and information-gathering powers over social media and search platforms, which could overcome access challenges experienced by research communities.[footnote]Ada Lovelace Institute and Reset. (2021). Inspecting algorithms in social media platforms. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/11/Inspecting-algorithms-in-social-media-platforms.pdf[/footnote] One way forward may be for regulators to have the power to issue ‘access to platform data’ mandates for independent researchers, who can collect and analyse data about potential harms or societal trends under strict data protection and security conditions, for example minimising the type of data collected and with a clear data retention policy.

Further considerations and provocative concepts

Beyond access to data: grappling with fundamental issues

Jathan Sadowski

 

To reclaim resources and rights currently controlled by corporate platforms and manage them in the public’s interests and for societally beneficial purposes, ‘a key enabler would be a legal framework mandating private companies to grant access to data of public interest to public actors under conditions specified in the law.’[footnote]Micheli, M., Ponti, M., Craglia, M. and Suman A.B. (2020). ‘Emerging models of data governance in the age of datafication’. Big Data & Society. doi: 10.1177/2053951720948087[/footnote]

 

One aspect that needs to be considered is whether this law should establish requirements around data collected by large companies to become part of the public domain after a reasonable number of years.

 

Another proposal suggested the possibility of allowing companies to use the data that they gather only for a limited period (e.g. five years), after which it is reverted to a ‘national charitable corporation that provides access to certified researchers, who would both be held to account and be subject to scrutiny to ensure the data is used for the common good’. [footnote]Shah, H. (2018) ‘Use our personal data for the common good’. Nature, 556(7699). doi: 10.1038/d41586-018-03912-z[/footnote]

 

These ideas will have to consider various issues, such as the need to ensure that individual’s data is not released into the public domain, and the fact that commercial competitors might not see any benefit in using ‘old’ data. Nevertheless, we should draw inspiration from these efforts and seek to expand their purview.

 

To that point, policies aimed at making data held by private companies into a common resource should go further than simply allowing other companies to access data and build their own for-profit products from it.

To rein in the largely unaccountable power of big technology companies who wield enormous, and often black-boxed, influence over people’s lives,[footnote]Martinez, M. and Kirchner, L. (2021). ‘The Secret Bias Hidden in Mortgage-Approval Algorithms’. The Markup. Available at https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms[/footnote] these policies must grapple with fundamental issues related to who gets to determine how data is made, what it means, and why it is used. 

 

Furthermore, the same policies should extend their target beyond monopolistic digital platforms. Data created and controlled by, for example, transportation services, energy utilities and credit rating agencies ought also to be subjected to public scrutiny and democratic decisions about the most societally beneficial ways to use it or discard it.

Further to these considerations, in this section provocative concepts are shared, which show different implementation models that can be set up in practice to re-channel the use of data and resources from companies towards societal good.

Public interest divisions with public oversight

Building on the Uber Movement model, which releases aggregate datasets on a restricted, non-commercial basis to help cities with urban planning,[footnote]See: Uber Movement initiative. Available at: https://movement.uber.com[/footnote]

relevant companies could be obliged to form a well-resourced public interest division, running as part of the core organisational structure with full access to the company’s capabilities (such as computational infrastructure and machine learning models).

This division would be in charge of building aggregate datasets to support important public value. Key regulators could issue ‘data-sharing mandates’, to identify which types of datasets would be most valuable and run queries against them. Through this route, the computational resources and the highly skilled human resources of the company would be used for achieving societal benefits and informing public policy.

The aggregate datasets could be used to inform policymaking and public service innovation. Potential examples could include food delivery apps informing health nutrition policies, or ride-sharing apps informing street planning, traffic congestion, housing and environmental policies. There would be limitations to use: for example insights from social media companies could be used for identifying the most pressing social issues in one area, and this information should not be used by the political class in the electoral cycle or for winning popularity by gaining political insight and using it in political campaigns.

Publicly run corporations (the ‘BBC for data’)

Another promising avenue for repurposing data in the public interest and increasing accountability is to introduce a publicly run competitor to specific digital platforms (e.g. social media). This model could be established by mandating the sharing of data from particular companies operating in a given jurisdiction to a public entity, which uses the data for projects that are in service of the public good.[footnote]Coyle, D. (2022). ‘The Public Option’. Royal Institute of Philosophy Supplement, 91, pp. 39–52. doi:10.1017/S1358246121000394[/footnote]

The value proposition behind such an intervention in the digital market would be similar to the effect of the British Broadcasting Corporation (BBC) in the UK broadcast market, where it competes with other broadcasters. The introduction of the BBC supported competition in dimensions other than audience numbers, and provided a platform for more types of content providers (for example music and independent production) that otherwise may not have existed, or not at a scale enabling them to address global markets.

Operating as a publicly run corporation has the benefit of establishing a different type of incentive structure, one that is not narrowly focused on profit-making. This could avoid the more extractive, commercially oriented business models and practices that result from the need to generate profits for shareholders and demonstrate continuous growth.

One business model that dominates the digital ecosystem, and is the primary incentive for many of the largest technology companies, is online advertising. This model has underpinned the development of mature, developed platforms, which means that, while individuals may support the concept of a business model that does not rely on extractive practices, in practice it may be difficult to get users to switch to services that do not offer equivalent levels of convenience and functionality. The success of this model is dependent on the ‘BBC for data’ competitor offering broad appeal and well-designed, functional services, so it can scale to operate at a significant level in the market.

The introduction of a democratically accountable competitor alone would not be enough to shape new practices, or to establish political and public support. It would need committed investment in the performance of its services and in attracting users. Citizens should be engaged in shaping the practices of the new public competitor, and these should reflect – in market terms – what choices, services and approaches they expect.

Food for thought

As argued above, reclaiming control over data and resources to public authorities, researchers, civil society organisations and other bodies that work in the public interest has a transformative potential. The premise of this belief is simple: if data is power, making data accessible to new actors, with non-commercial goals and agendas, will shift the power balance and change the dynamic within the data ecosystem. However, without deeper questioning, the array of practical problems and structural inequalities will not disappear with the arrival of new actors and their powers to access data.

Enabling data sharing is no simple feat – it will require extensive consideration of privacy and security issues, and oversight from regulators to prevent the misuse, abuse or concealing of data. The introduction of new actors and powers to access and use data will, inevitably, trigger other externalities and further considerations that are worthy of greater attention from civil society, policymakers and practitioners.

In order to trigger further discussion, a set of problems and questions are offered as provocations:

  1. Discussions around ‘public good’ need mechanisms to address questions of legitimacy and accountability in a participatory and inclusive way. Who should decide what uses of data serve the public good and how these decisions should be reached in order to maintain their legitimacy as well as social accountability? Who decides what constitutes ‘public good’ or ‘societal benefit,’ and how can such decisions be made justly?
  2. Enabling data sharing and access needs to be accompanied by robust privacy and security measures. What legal requirements and conditions need to be designed for issuing ‘data sharing’ mandates from companies?
  3. Data sharing and data access mandates imply that the position of large corporations is still a strong one, and they are still playing a substantial role in the ecosystem. In what ways might data-sharing mandates entrench the power of large technology platforms, or exacerbate different kinds of harm? What externalities are likely to arise with mandating data sharing for public interest goals from private companies?
  4. The notion of ‘public good’ opens important questions about what type of ‘public’ is involved in discussions and who gets left out. How can determinations of public good be navigated in inclusive ways across different jurisdictions, and accounting for structural inequalities?

3. Rebalancing the centres of digital power with new (non-commercial) institutions

The vision

In this world, new forms of data governance institutions made up of collectives of citizens control how data is generated, collected, used and governed. These intermediaries, such as data trusts and data cooperatives, empower ‘stewards’ of data to collect and use data in ways that support their beneficiaries (those represented in and affected by that data).

These models of data governance have become commonplace, enabling people to be more aware and exert more control over who has access to their data, and engendering a greater sense of security and trust that their data will only be used for purposes that they approve.

Harmful uses of data are more easily identifiable and transparent, and efficient forms of legal redress are available in cases where a data intermediary acts against the interests of their beneficiary.

The increased power of data collectives balances the power of dominant platforms, and new governance architectures offer space for civil society organisations to hold to account any ungoverned or unregulated, private or public exercises of power.

There is a clear supervision and monitoring regime ensuring ‘alignment’ to the mandate that data intermediaries have been granted by their beneficiaries. Data intermediaries are discouraged and prevented from monetising data. Data markets have been prohibited by law, understanding that the commodification of data creates room for abuse and exploitation.

The creation and conceptualisation of new institutions that manage data for non-commercial purposes is necessary to reduce power and information asymmetries.

Large platforms and data brokers currently collect and store large pools of data, which they are incentivised to use for corporate rather than societal benefit. Decentring and redistributing the concentration of power away from large technology corporations and towards individuals and collectives requires explorations around new ways of governing and organising data (see the text box on ‘Alternative data governance models’ below).

Alternative data governance models could offer a promising pathway for ensuring data subjects have rights and preferences over how their data is used are enforced. If designed properly, these governance methods could also help to address structural power imbalances.

However, until power is shifted away from large companies, and market dynamics are redressed to allow more competition and choice, there is a high risk of data intermediaries being captured.

New vehicles representing collective power, such as data unions, data trusts, data cooperatives or data-sharing initiatives based on corporate or contractual mechanisms, could help individuals and organisations position themselves better in relation to more powerful private or public organisations, offering new possibilities for enabling choices related to how data is being used.[footnote]Ada Lovelace Institute. (2021). Exploring lLegal Mechanisms mechanisms for Data data Stewardshipstewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/[/footnote]

There are many ways in which these models can be set up. For example, some models put more emphasis on individual gains, such as a ‘data union’ or a data cooperative that works in the individual interest of its members (providing income streams for individuals who pool their personal data, which is generated through the services they use or available on their devices).

These structures can also work towards wider societal aspirations, when members see this as their priority. Another option might be for members to contribute device-generated data to a central database, with ethically minded entrepreneurs invited to build businesses on top of these databases, owned collectively by the ‘data commons’ and feeding its revenues back into the community, instead of to the individual members.

A detailed discussion on alternative data governance models is presented in the Ada Lovelace Institute report Exploring legal mechanisms for data stewardship, which discusses three legal mechanisms – data trusts, data cooperatives, and corporate and contractual mechanisms – that could help facilitate the responsible generation, collection, use and governance of data in a participatory and rights-preserving way.[footnote]Ada Lovelace Institute. (2021).[/footnote]

Alternative data governance models
  • Data trusts: stemming from the concept of UK trust law, individuals pool data rights (such as those provided by the GDPR) into an organisation – a trust – where the data trustees are tasked with exercising data rights under fiduciary obligations.
  • Data cooperatives: individuals voluntarily pool data together, and the benefits are shared by members of the cooperative. A data cooperative is distinct from a ‘data commons’ because a data cooperative grows or shrinks as resources are brought in or out (as members join or leave), whereas a ‘data commons’ implies a body of data whose growth or decline is independent of the membership base.
  • Corporate and contractual agreements: legally binding agreements between different organisations that facilitate data sharing for a defined set of aims or an agreed purpose.

Many of the proposed models for data intermediaries need to be tested and further explored to refine their practical implementation, and the considerations below offer a more critical perspective highlighting how the different transformations of the data ecosystem discussed in this chapter are interconnected, and how one institutional change (or failure) determines the conditions for a change in another area.

Decentralised intermediaries need adequate political, economic, and infrastructural support, to fulfil their transformative function and deliver the value expected from them. The text box below, by exploring the shortcomings of existing data intermediaries, gives an idea of the economic and political conditions that would provide a more enabling environment.

Critical overview of existing data intermediaries models

Jathan Sadowski

 

There are now a number of emerging proposals for alternative data intermediaries that seek to move away from the presently dominant, profit-driven model and towards varying degrees of individual ownership, legal oversight or social stewardship of data.[footnote]Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ and Micheli, M., Ponti, M., Craglia, M. and Suman, A. B. (2020). ‘Emerging models of data governance in the age of datafication’. Big Data & Society. doi: 10.1177/2053951720948087[/footnote]

 

These proposals include relatively minor reforms to the status quo, such as legally requiring companies to act as ‘information fiduciaries’ and consider the interests of stakeholders who are affected by the company, alongside the interests of shareholders who have ownership in the company.

 

In a recent Harvard Law Review article, David Pozen and Lina Khan[footnote]Pozen, D. and Khan, L. (2019). ‘A Skeptical View of Information Fiduciaries’. Harvard Law Review, 133, pp. 497–541. Available at: https://harvardlawreview.org/2019/12/a-skeptical-view-of-information-fiduciaries/[/footnote] provide detailed arguments for why designating a company like Meta Platforms (Facebook) – ‘a loyal caretaker for the personal data of millions’ does not actually pose a serious challenge to the underlying business model or corporate practices. In fact, such reforms may even entrench the company’s position atop the economy. ‘Facebook-as-fiduciary is no longer a public problem to be solved, potentially through radical reform. It is a nexus of sensitive private relationships to be managed, nurtured, and sustained [by the government]’.[footnote]Pozen, D. and Khan, L. (2019).[/footnote]

 

Attempts to tweak monopolistic platforms, without fundamentally restructuring the institutions and distributions of economic power, are unlikely to produce – and may even impede – the meaningful changes needed.

 

Other models take a more decentralised solution in the form of ‘data-sharing pools’[footnote]Shkabatur, J. (2018). ‘The Global Commons of Data’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3263466[/footnote] and ‘data cooperatives’[footnote]Miller, K. (2021). ‘Radical Proposal: Data Cooperatives Could Give Us More Power Over Our Data’. Human-Centered Artificial Intelligence (HAI), Stanford University. Available at: https://hai.stanford.edu/news/radical-proposal-data-cooperatives-could-give-us-more-power-over-our-data[/footnote] that would create a vast new ecosystem of minor intermediaries for data subjects to choose from. As a different way of organising the data economy, this would be, in principle, a preferable democratic alternative to the extant arrangement.

 

However, in practical terms, this approach risks putting the cart before the horse, by acting as if the political, economic and infrastructural support for these decentralised intermediaries already existed. In fact, it does not: with private monopolies sucking all the oxygen out of the economy, there’s no space for an ecosystem of smaller alternatives to blossom. At least, that is, without relying on the permission and largesse of profit-driven giants.

 

Under present market conditions – where competition is low and capital is hoarded by a few – it seems much more likely that start-ups for democratic data governance would either fizzle/fail or be acquired/crushed.

How to get from here to there

Alternative data governance proposals listed above represent novel and unexplored models that require better understanding and testing to demonstrate proof of concept. The success of these alternative data governance models will require (aside from a fundamental re-conceptualisation of market power and political, economic and infrastructural support; see more in the text box on ‘Paving the way for a new ecosystem of decentralised intermediaries’), strong regulations and enforcement mechanisms, to ensure data is stewarded in the interests of their beneficiaries.

The role, responsibilities and standards of practice remain to be fully defined and should include aspects of:

  • enforcing data rights and obligations (e.g. compliance with data protection legislation),
  • achieving a level of maturity of expertise and competence in the administration of a data intermediary, especially if its mission requires it to negotiate with large companies
  • establishing clear management decision-making around delegation and scrutiny, and setting out the overarching governance of the ‘data steward’, which could be a newly established professional role (a data trustee or capable managers and administrators in a data cooperative) or a governing board (for example formed by individuals that have shares in a cooperative based on the data contributed). The data contributed may define the role of an individual in the board and the decision-making power regarding data use.

Supportive regulatory conditions are needed, to ease the process of porting individual and collective data into alternative governance models, such as a cooperative. Today, it is a daunting – if not impossible – task to ask a person to move all their data over to a new body (data access requests can take a long time to be processed, and often the data received needs to be ‘cleaned’ and restructured in order to be used elsewhere).

Legal mechanisms and technical standards must evolve to make that process easier. Ideally, this would produce a process that cooperatives, trusts and data stewardship bodies could undertake on behalf of individuals (the service they provide could include collecting and pooling data; see below on the Data Governance Act). Data portability, as defined by the GDPR, is not sufficient as a legal basis because it covers only data provided by the data subject and relies heavily on individual agency, whereas in the current data ecosystem, the most valuable data is generated about individuals without their knowledge or control.

Alternative data governance models have already made their way into legislation. In particular, the recently adopted EU Data Governance Act (DGA) creates a framework for voluntary data sharing via data intermediation services, and a mechanism for sharing and pooling data for ‘data altruism’ purposes.[footnote]European Parliament and Council of the European Union. (2022). Regulation 2022/868 on European data governance (Data Governance Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657575745441[/footnote] The DGA mentions a specific category of data intermediation services that could support data subjects in exercising their data rights under the GDPR, however this option is only briefly offered in one of the recitals as one of the options, and lacks detail as to the practical implementation.[footnote]European Parliament and Council of the European Union. (2022). Recital 30. For a more detailed discussion on the mandatability of data rights, see: Giannopoulou, A., Ausloos, J., Delacroix, S. and Janssen, H. (2022). ‘Mandating Data Rights Exercises’. Social Science Research Network. Available at: https://ssrn.com/abstract=4061726[/footnote]

The DGA also emphasises the importance of neutral and independent data-sharing intermediaries and sets out the criteria for entities that want to provide data-sharing services (organisations that provide only data intermediation services, and companies that offer data intermediation services in addition to other services, such as data marketplaces).[footnote]European Commission. (2022). Data Governance Act explained. Available at: https://digital-strategy.ec.europa.eu/en/policies/data-governance-act-explained[/footnote] One of the criteria is that service providers may not use the data for purposes other than to put it at the disposal of data users, and must separate its data intermediation services structurally from any other value-added services it may provide. At the same time, data intermediaries will bear fiduciary duties towards individuals, to ensure that they act in the best interests of the data holders.[footnote]European Parliament and Council of the European Union. (2022). Data Governance Act, Recital 33. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657575745441[/footnote]

Today there is a basic legal framework for data portability under the GDPR, which has been complemented with new portability rules in legislation, such as in the DMA. More recently, a new framework has been adopted that encourages voluntary data sharing and defines the criteria and conditions for entities that want to serve as a data steward or data intermediary. What are still needed are the legal, technical and interoperability mechanisms for individuals as well as collectives to effectively reclaim their data (including behavioural observations and statistical patterns that not only convey real economic value but can also serve individual and collective empowerment) from private entities (either directly or via trusted intermediaries), and a set of safeguards protecting these individuals and collectives from being, once again, exploited by another powerful agent (i.e. making sure that a data intermediary will remain independent and trustworthy, and is able to perform their mandate effectively in the wider data landscape).

Further considerations and provocative concepts

The risk of amplifying collective harm

Jef Ausloos, Alexandra Giannopoulou and Jill Toh

 

So-called ‘data intermediaries’ have been framed as one practical way through which the collective dimension of data rights could be given shape in practice.[footnote]For example, Workers Info Exchange’s plan to set up a ‘data trust’, to help workers access and gain insight from data collected from them at work. Available at: https://www.workerinfoexchange.org/. See more broadly: Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ and Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/[/footnote] While they show some promise for more effectively empowering people and curbing collective data harms,[footnote]See: MyData. Declaration of MyData Principles, Version 1.0. Available at: https://mydata.org/declaration/[/footnote] their growing popularity in policy circles mainly stems from their assumed economic potential.

 

Indeed, the political discourse at EU level, particularly in relation to the Data Governance Act (DGA) focuses on the economic objectives of data intermediaries, framing them in terms of their supposedly ‘facilitating role in the emergence of new data-driven ecosystems’.[footnote]European Parliament and Council of the European Union. (2022). Data Governance Act, Recital 27. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657575745441[/footnote] People’s rights, freedoms and interests are only considered to the extent that the data intermediaries empower individual data subjects. 

 

This focus on the (questionable) economic potential of data intermediaries and individual empowerment of data subjects raises significant concerns. Without clear constraints on the type of actors that can perform the role of intermediaries, their model can easily be usurped by the interests of those with (economic and political) power, at the cost of both individual and collective rights, freedoms and interests. Even more, their legal entrenching in EU law, risks amplifying collective data-driven harms. Arguably, for ‘data intermediaries’ to positively contribute to curbing collective harm and constraining power asymmetries, it will be important to move beyond the dominant narrative focusing on the individual and economic potential. Clear legal and organisational support in exercising data rights in a coordinated manner are a vital step in this regard.

To begin charting out the role of data intermediaries in the digital landscape, there is a need to explore questions such as: What are the first steps towards building alternative forms of data governance? How to undermine the power of companies that now enclose and control the data lifecycle? What is the role of the public sector in reclaiming power over data? How to ensure legitimacy of new data governance institutions? The text below offers some food for thought by exploring these important questions.

Paving the way for a new ecosystem of decentralised intermediaries

Jathan Sadowski

 

Efforts to build alternative forms of data governance should focus on changing its political economic foundations. We should focus on advancing two related strategies for reform that would pave the way for a new ecosystem of decentralised intermediaries.

 

The first strategy is to disintermediate the digital economy by limiting private intermediaries’ ability to enclose the data lifecycle – the different phases of data management, including construction, collection, storage, processing, analysis, use, sharing, maintenance, archiving and destruction.

The digital economy is currently hyper-intermediated. We tend to think of the handful of massive monopolistic platforms that have installed themselves as necessary middlemen in production, circulation, and consumption processes. But there is also an overabundance of smaller, yet powerful, companies that insert themselves into every technical, social and economic interaction to extract data and control access.

 

Disintermediation means investigating what kind of policy and regulatory tools can constrain and remove the vast majority of these intermediaries whose main purpose is to capture – often without creating – value.[footnote]Sadowski, J. (2020). ‘The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism’. Antipode, 52(2), pp.562–580.[/footnote] For example, disintermediation would require clamping down on the expansive secondary market for data, such as the one for location data,[footnote]Keegan, J. and Ng, A. (2021). ‘There’s a Multibillion-Dollar Market for Your Phone’s Location Data’. The Markup. Available at https://themarkup.org/privacy/2021/09/30/theres-a-multibillion-dollar-market-for-your-phones-location-data[/footnote] which incentivises many companies to engage in the collection and storage of all possible data, for the purposes of selling and sharing with, or servicing, third parties such as advertisers. 

 

Even more fundamental reforms could target the rights of control and access that companies possess over data assets and networked devices, which are designed to shut out regulators and researchers, competitors and consumers from understanding, challenging and governing the power of intermediaries. Establishing such limits is necessary for governing the lifecycle of data, while also making space for different forms of intermediaries designed with different purposes in mind.

 

In a recent example, after many years of fighting against lobbying by technology companies, the US Federal Trade Commission has voted to enforce ‘right to repair’ rules that grant users the ability to fix and modify technologies like smartphones, home appliances and vehicles without going through repairs shops ‘authorised’ by the manufacturers.[footnote]Kavi, A. (2021). ‘The F.T.C. votes to use its leverage to make it easier for consumers to repair their phones’. The New York Times. Available at: https://www.nytimes.com/2021/07/21/us/politics/phones-right-to-repair-FTC.html[/footnote] This represents a crucial transference of rights away from intermediaries and to the public. 

 

The second strategy consists of the construction of new public institutions for democratic governance of data.

 

Achieving radical change requires advocating for forms of large-scale intervention that actively aim to undermine the current conditions of centralised control by corporations.  In addition to pushing to expand the enforcement of data rights and privacy protections, efforts should be directed at policies for reforming government procurement practices and expanding public capacities for data governance.

 

The political and financial resources already exist to create and fund democratic data intermediaries. But funds are currently directed at outsourcing government services to technology companies,  rather than insourcing the development of capacities through new and existing institutions. Corporate executives have been happy to cash the cheques of public investment, and a few large companies have managed to gain a substantial hold on public administration procurement worldwide.

 

Ultimately, strong legal and institutional interventions are needed in order to foundationally transform the existing arrangements of data control and value. Don’t think of alternative data intermediaries  (such as public data trusts in the model advocated for in this article)[footnote]Sadowski, J., Viljoen, S. and Whittaker, M. (2021). ‘Everyone Should Decide How Their Digital Data Are Used — Not Just Tech Companies’. Nature, 595, pp.169–171. Available at https://www.nature.com/articles/d41586-021-01812-3[/footnote] as an endpoint, but instead as the beginning for a new political economy of data – one that will allow and nurture the growth of more decentralised models of data stewardship. 

 

Public data trusts would be well positioned to provide alternative decentralised forms of data intermediaries with the critical resources they need – e.g. digital infrastructure, expert managers, financial backing, regulatory protections and political support – to first be feasible and then to flourish. Only then can we go beyond rethinking and begin rebuilding a political economy of data that works for everybody.[footnote]Sadowski, J. (2022). ‘The political economy of data intermediaries’. Ada Lovelace Institute. Available at https://www.adalovelaceinstitute.org/blog/political-economy-data-intermediaries/[/footnote]

Food for thought

In order to trigger further discussion, a set of problems and questions, which arise around alternative data governance institutions and the role they can play in generating transformative power shifts, are offered as provocations:

  1. Alternative data governance models can play a role at multiple levels. They can work both for members that have direct contributions (e.g. members pooling data in a data cooperative and being actively engaged in running the cooperative), as well as for indirect members (e.g. when the scope of a data cooperative is to have wider societal effects). This raises questions such as: How are ‘beneficiaries’ of data identified and determined? Who makes those determinations, and by what method?
  2. Given the challenges of the current landscape, there are questions about what is needed in order for data intermediaries to play an active and meaningful role that leads to responsible data use and management in practice. What would it take for these new governance models to actually increase control around the ways data is used currently (e.g. to forbid certain data uses)? Would organisations have to be mandated to deal with such new structures or adhere to their wishes even for data not pooled inside the model?
  3. In practice, there can be multiple types of data governance structures, potentially with competing interests. For example some of them could be set up to restrict and to protect data, while others could be set up to maximise income streams for members from data use. If potential income streams are dependent on the use of data, what are the implications for privacy and data protection? How can potential conflicts between data intermediaries be addressed and by whom? What kinds of incentives structures might arise and what type of legal underpinnings do these alternative data governance models need to function correctly?
  4. The role of the specific parties involved in managing data intermediaries, their responsibilities and qualifications need to be considered and balanced. Under what decision-making and management models would these structures operate, and how are decisions being made in practice? If things go wrong, who is held responsible, and by what means?
  5. The particularities of different digital environments across the globe lead to questions of applicability in different jurisdictions. Can these models be translated/work in different regions around the world, including the less developed?
What about Web3?

 

Some readers might ask why this report does not discuss ‘Web3’ technologies – a term coined by Gavin Wood in his 2014 essay, which envisions a reconfiguration of the web’s technical, governance and payments/transactions infrastructure that moves away from ‘entrusting our information to arbitrary entities on the internet’.[footnote]Wood, G. (2014) ‘ĐApps: What Web 3.0 Looks Like’. Available at: http://gavwood.com/dappsweb3.html[/footnote]

 

The original vision of Web3 aimed to decentralise parts of the online web experience and remove middlemen and intermediaries. It proposed four core components for a Web 3.0 or a ‘post-Snowden’ web:

  • Content publication: a decentralised, encrypted information publication system that ensures the downloaded information hasn’t been interfered with. This system could be built using principles that have been previously used in technologies such as the Bittorrent[footnote]See: BitTorrent. Available at: https://www.bittorrent.com/[/footnote] protocol for peer-to-peer content distribution and HTTPS for secure communication over a computer network.
  • Messaging: a messaging system that ensures communication is encrypted and traceable information is not revealed (e.g. IP addresses).
  • Trustless transactions: a means of agreeing the rules of interaction within a system and ensuring automatic enforcement of these rules. A consensus algorithm prevents powerful adversaries from derailing the system. Bitcoin is the most popular implementation of this technology and establishes a peer-to-peer system for validating transactions without a centralised authority. While blockchain technology is associated primarily with payment transactions, the emergence of smart contracts has extended the set of use cases to more complex financial arrangements and non-financial interactions such as voting, exchange, notarisation or providing evidence.
  • Integrated user interface: a browser or user interface that provides a similar experience to traditional web browsers, but uses a different technology for name resolution. In today’s internet, the domain name system (DNS) is controlled by the Internet Corporation of Assigned Names and Numbers (ICANN) and delegated registrars. This would be replaced by a decentralised, consensus-based system which allows users to navigate the internet pseudonymously, securely and trustlessly (an early example of this technology is Namecoin).

 

Most elements of this initial Web3 vision are still in their technological infancy. Projects that focus on decentralised storage (for example BitTorrent, Swarm, IPFS) and computation (e.g. Golem, Ocean) face important challenges on multiple fronts – performance, confidentiality, security, reliability, regulation – and it is doubtful that the current generation of these technologies are able to provide a long-term, feasible alternative to existing centralised solutions for most practical use cases.

 

Bitcoin and subsequent advances in blockchain technology have achieved wider adoption and considerably more media awareness, although the space has been rife with various forms of scams and alarming business practices, due to rapid technological progress and lagging regulatory intervention.

 

Growing interest in blockchain networks has also contributed to the ‘Web3 vision’ being gradually co-opted by venture capital investors, to promote a particular niche of projects.  This has popularised Web3 as an umbrella term for alternative financial infrastructure – such as payments, collectibles (non-fungible tokens or NFTs) and decentralised finance (DeFi) – and encouraged an overly simplistic perception of decentralisation.[footnote]Aramonte, S., Huang, W. and Schrimpf, A. (2021). ‘DeFi risks and the decentralisation illusion.’ Bank for International Settlements. Available at: https://www.bis.org/publ/qtrpdf/r_qt2112b.pdf[/footnote] It is not often discussed nor widely acknowledged that the complex architecture of these systems can (and often does) lead to centralisation of power re-emerging in the operational, incentive, consensus, network and governance layers.[footnote]Sai, A. R., Buckley, J., Fitzgerald, B., Le Gear, A. (2021). ‘Taxonomy of centralization in public blockchain systems: A systematic literature review’. Information Processing & Management, 58(4). Available at: https://www.sciencedirect.com/science/article/pii/S0306457321000844?via%3Dihub[/footnote]

 

The promise of Web3 is that decentralisation of infrastructure will necessarily lead to decentralisation of digital power. There is value in this argument and undoubtedly some decentralised technologies, after they reach a certain level of maturity and if used in the right context, can offer benefits over existing centralised alternatives.

 

Acknowledging the current culture and state of development around Web3, at this stage there are few examples in this space where values such as decentralisation and power redistribution are front and centre. It would be interesting to see whether progressive alternatives will deliver on their promise in the near to medium term and take these values to the core.

4. Ensuring public participation in technology policy making

The vision

This is a world in which everybody who wants to participate in decisions about data and its governance can do so – there are mechanisms for engagement to legitimate needs and expectations of those affected by technology. Through a broad range of participatory approaches – from citizens’ councils and juries that directly inform local and national data policy and regulation, to public representation on technology company governance boards people are better represented, more supported and empowered to make data systems and infrastructures work for them, and policymakers are better informed about what people expect and desire from data, technologies and their uses.

Through these mechanisms for participatory data and technology policymaking and stewardship, individuals who wish to be active citizens can participate directly in data governance and innovation, whereas those who want their interests to be better represented have mechanisms where their voices and needs are represented through members of their community or through organisations.

Policymakers are more empowered through the legitimacy of public voice to act to curb the power of large technology corporations, and equipped with credible evidence to underpin approaches to policy, regulation and governance.

Public participation, engagement and deliberation have emerged in recent years as fundamental components in shaping future approaches to regulation across a broad spectrum of policy domains.[footnote]OECD. (2020). Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave. doi:10.1787/339306da-en[/footnote] However, despite their promising potential to facilitate more effective policymaking and regulation, the role of public participation in data and technology-related policy and practice remains remarkably underexplored, if compared – for example – to public participation in city planning and urban law. 

There is, however, a growing body of research that aims to understand the theoretical and practical value of public participation approaches for governing the use of data, which is described in our 2021 report, Participatory data stewardship.[footnote]Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/[/footnote]

What is public participation?

Public participation describes a wide range of methods that bring members of the public’s voices, perspectives, experiences and representation to social and policy issues. From citizen panels to deliberative polls, surveys to community co-design, these methods have important benefits, including informing more effective and inclusive policymaking, increasing representation and accountability in decision making, and enabling more trustworthy governance and oversight.[footnote]Gastil, J. (ed.). (2005). The deliberative democracy handbook: strategies for effective civic engagement in the twenty-first century. Hoboken, N.J: Wiley.[/footnote]

 

Participation often involves providing members of the public with information about particular uses of data or technology, including access to experts, and time and space to reflect and develop informed opinions. Different forms of public participation are often described on a spectrum from ‘inform’, ‘consult’ and ‘involve’, through to ‘collaborate’ and ‘empower’.[footnote]IAP2 International Federation. (2018). Spectrum of Participation. Available at: https://www.iap2.org/page/pillars[/footnote] In our report Participatory data stewardship, the Ada Lovelace Institute places this spectrum into the context of responsible data use and management.

How to get from here to there

Public participation, when implemented meaningfully and effectively, ensures that the values, experiences and perspectives of those affected by data-driven technologies are represented and accounted for in policy and practices related to those technologies.

This has multiple positive impacts. Firstly, it offers a more robust evidence base for developing technology policies and practices that meet the needs of people and society, by building a better understanding of people’s lived experiences and helping to better align the development, deployment and oversight of technologies with societal values. Secondly, it provides policy and practice with greater legitimacy and accountability by ensuring those who are affected have their voices and perspectives taken into account.

Taken together, the evidence base and legitimacy offered by public participation can support a more responsible data and technology ecosystem that earns the trust of the public, rather than erodes and undermines it. Possible approaches to this include:

  1. Members of the public could be assigned by democratically representative random lottery to independent governance panels that provide oversight of dominant technology firms and public-interest alternatives. Those public representatives could be supported by a panel of civil society organisations that interact with governing boards and scrutinise the activity of different entities involved in data-driven decision-making processes.
  2. Panels or juries of citizens could be coordinated by specialised civil society organisations to provide input on the audit and assessment of datasets and algorithms that have significant societal impacts and effects.[footnote]Ada Lovelace Institute. (2022). Algorithmic impact assessment: a case study in healthcare. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/02/Algorithmic-impact-assessment-a-case-study-in-healthcare.pdf[/footnote]
  3. Political institutions could conduct region-wide public deliberation exercises to gather public input and shape future regulation and enforcement of technology platforms. For example, a national or regional-wide public dialogue exercise could be conducted to consider how a novel technology application might be regulated, or to evaluate the implementation of different legislative proposals.
  4. Participatory co-design or deliberative assemblies could be used to help articulate what public interest data and technology corporations might look like (see the ‘BBC for Data’ above), as alternatives to privatised and multinational companies.

These four suggestions represent just a selection of provocations, and are far from exhaustive. The outcomes of public participation and deliberation can vary, from high-level sets of principles on how data is used, to detailed recommendations that policymakers are expected to implement. But in order to be successful, such initiatives need political will, support and buy-in, to ensure that their outcomes are acknowledged and adopted. Without this, participatory initiatives run the risk of ‘participation washing’, whereby public involvement is merely tokenistic.

Additionally, it is important to note that public participation is not about shifting responsibility back to people and civil society to decide on intricate matters, or to provide the justifications or ‘mandates’ for uses of data and technology that haven’t been ethically, legally or morally scrutinised. Rather it is about the institutions and organisations that develop, govern and regulate data and technology making sure they act in the best interests of the people who are affected by the use of data and technology.

Further considerations and provocative concepts

Marginalised communities in democratic governance

Jef Ausloos, Alexandra Giannopoulou and Jill Toh

 

As Europe and other parts of the world set out plans to regulate AI and other technology services, it is more urgent than ever to reflect critically on the value and practical application of those legally designed mechanisms in protecting social groups and individuals that are affected by high-risk AI systems and other technologies. The question of who has access to decision-making processes, and how these decisions are made, is crucial to address the harms caused by technologies.

 

The #BrusselsSoWhite conversations (a social media hashtag expounding on the lack of racial diversity in EU policy conversations)[footnote]Islam, S. (2021). ‘“Brussels So White” Needs Action, Not Magical Thinking’. EU Observer. Available at: https://euobserver.com/opinion/153343 and Azimy, R. (2020). ‘Why Is Brussels so White?’. Euro Babble. Available at: https://euro-babble.eu/2020/01/22/dlaczego-bruksela-jest-taka-biala/[/footnote] have clearly shown the absence and lack of marginalised people in discussions around European technology policymaking,[footnote]Çetin, R. B. (2021). ‘The Absence of Marginalised People in AI Policymaking’. Who Writes The Rules. Available at: https://www.whowritestherules.online/stories/cetin[/footnote] despite the EU expressing its commitment to anti-racism and inclusion.[footnote]European Commission. (2020). EU Anti-Racism Action Plan 2020-2025. Available at: https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-anti-racism-action-plan-2020-2025_en[/footnote]

Meaningful inclusion requires moving beyond the rhetoric, performativity and tokenisation of marginalised people. It requires looking inwards to assess if the existing work environment, internal practices, hiring and retention requirements are barriers to entry and exclusionary-by-design.[footnote]Çetin, R. B. (2021). ‘The Absence of Marginalised People in AI Policymaking’. Who Writes The Rules. Available at: https://www.whowritestherules.online/stories/cetin[/footnote] Additionally, mere representation is insufficient. This also requires a shift to recognise the value of different types of expertise, and seeing marginalised people’s experiences and knowledge as legitimate, and equal.

 

There are a few essential considerations for achieving this.

 

Firstly, legislators and civil society – particularly those active in the field of ‘technology law’ – should consider a broader ambit of rights, freedoms and interests at stake in order to capture the appropriate social rights and collective values generally left out from market-driven logics. This ought to be done by actively engaging with the communities affected and interfacing more thoroughly with respective pre-existing legal frameworks and value systems.[footnote]Meyer, L. (2021). ‘Nothing About Us, Without Us: Introducing Digital Rights for All’. Digital Freedom Fund. Available at: https://digitalfreedomfund.org/nothing-about-us-without-us-introducing-digital-rights-for-all/; Niklas, J. and Dencik, L. (2021). ‘What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate’. Internet Policy Review, 10(3). Available at: https://policyreview.info/articles/analysis/what-rights-matter-examining-place-social-rights-eus-artificial-intelligence; and Taylor, L. and Mukiri-Smith, H. (2021). ‘Human Rights, Technology and Poverty’. Research Handbook on Human Rights and Poverty. Available at: https://www.elgaronline.com/view/edcoll/9781788977500/9781788977500.00049.xml[/footnote]

 

Secondly, the dominant narrative in EU techno-policymaking frames all considered fundamental rights and freedoms from the perspective of protecting ‘the individual’ against ‘big tech’. This should be complemented with a wider concern for the substantial collective and societal harm generated and exacerbated by the development and use of data-driven technologies by private and public actors.

 

Thirdly, in consideration of the flurry of regulatory proposals, there should be more effective rules on lobbying, related to transparency and funding requirements and funding sources for thinktanks and other organisations. The revolving door between European institutions and technology companies continues to remain highly problematic and providing independent oversight with investigative powers is crucial.[footnote]Corporate Europe Observatory. (2021). The Lobby Network: Big Tech’s Web of Influence in the EU. Available at: https://corporateeurope.org/en/2021/08/lobby-network-big-techs-web-influence-eu[/footnote]

 

Lastly, more (law) is not always better. Especially, civil society and academia ought to think more creatively on how legal and non-legal approaches may prove to be productive in tackling the collective hams produced by (the actors controlling) data-driven technologies. Policymakers and enforcement agencies should proactively support such efforts.

Further to these considerations, one approach to embedding public participation into technology policymaking is to facilitate meaningful and diverse deliberation on the principles and values that should guide new legislation and inform technology design.

For example, to facilitate public deliberation on the rules governing how emerging technologies are developed, the governing institutions responsible for overseeing new technologies – be it local, national or supranational government – could establish a citizens’ assembly.[footnote]For more information about citizens’ assemblies see: Involve. (2018). Citizens’ Assembly. Available at: https://www.involve.org.uk/resources/methods/citizens-assembly. For an example of how public deliberation about complex technologies can work in practice, see: Ada Lovelace Institute. (2021). The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/[/footnote]

Citizens’ assemblies can take various forms, from small groups of citizens in a local community discussing a single issue over a few days, to many hundreds of citizens from across regions considering a complex topic across a series of weeks and months.

Citizens’ assemblies must include representation of a demographically diverse cross-section of people in the region. Those citizens should come together in a series of day-long workshops, hosted across a period of several months, and independently facilitated. During those workshops, the facilitators should provide objective and accessible information about the technological issue concerned and the objectives of legislative or technical frameworks.

The assembly must be able to hear from and ask questions to experts on the topic, representing a mix of independent professionals and those holding professional or official roles with associated parties – such as policymakers and technology developers.

At the end of their deliberations, the citizens in the assembly should be supported to develop a set of recommendations – free from influence of any vested parties – with the expectation that these recommendations will be directly addressed or considered in the design of any legislative or technical frameworks. Such citizens’ assemblies can be an important tool, in addition to grassroot engagement in political parties and civil society, for bringing people into work on societal issues.

Food for thought

As policymakers around the world develop and implement novel data and technology regulations, it is essential that public participation forms a core part of this drafting process. At a time when trust in governments and technology companies is reaching record lows in many regions, policymakers must experiment with richer forms of public engagement beyond one-way consultations. By empowering members of the public to co-create the policy that impacts their lives, policymakers can create more representative and more legitimate laws and regulations around data.

In order to trigger further discussion, a set of questions are offered as provocations for thinking about how to implement public participation and deliberation mechanisms in practice:

  1. Public participation requires a mobilisation of resources and new processes throughout the cycle of technology policymaking. What incentives, resources and support do policymakers and governments need, to be able to undertake public engagement and participation in the development of data and AI policy?
  2. Public participation methods need strategic design, and limits need to be taken into consideration. Given the ubiquitous and multi-use nature of data and AI, what discrete topics and cases can be meaningfully engaged with and deliberated on by members of the public?
  3. Inclusive public participation is essential, to ensuring a representative public deliberation process that delivers outcomes for those affected by technology policymaking. Which communities and groups are the most disproportionately harmed or affected by data and AI, and what mechanisms can ensure their experiences and voices are included in dialogue?
  4. It is important to make sure that public participation is not used as a ‘stamp of approval’ and does not become merely a tick-box exercise. To avoid ‘participation washing’, what will encourage governments, industry and other power holders to engage meaningfully with the public, whereby recommendations made by citizens are honoured and addressed?

Chapter 3: Conclusions and open questions

In this report, we started with two questions: What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem? And what are the most promising interventions to create a more balanced system of power and a people-first approach for data?

In Chapter 1, we defined the central problem: that today’s digital economy is built on deep-rooted exploitative and extractive data practices and forms of ‘data rentiership,’ which have resulted in the accrual of vast amounts of power to a handful of large platforms.

We explained how this power imbalance has prevented benefits to people, who are largely unable to control how their data is collected and used, and are increasingly disempowered from engaging in, seeking redress or contesting data-driven decisions that affect their lives.

In Chapter 2 we outlined four cross-cutting interventions concerning infrastructure, data governance, institutions and participation that can help redress that power imbalance in the current digital ecosystem. We recognise that these interventions are not sufficient to solve the problems described above, but we propose them as a realistic first step towards a systemic change.

From interventions, framed as objectives for policy and institutional change, we moved to provocative concepts: more tangible examples of how changing the power balance could work in practice. While we acknowledge that, in the current conditions, these concepts open up more questions than they give answers, we hope other researchers and civil society organisations will join us in an effort to build evidence that validates or establishes limitations to their usefulness.

Before we continue the exploration of specific solutions (legal rules, institutional arrangements, technical standards) that have the potential to transform the current digital ecosystem towards what we have called ‘a people-first approach’, we reiterate how important it is to think about this change in a systemic way.

A systemic vision envisages all four interventions as interconnected, mutually reinforcing and dependent on one another. And requires consideration of external ‘preconditions’ that could prevent or impede this systemic reform. We identify the preconditions for the interventions to deliver results as: the efficiency and values of the enforcement bodies, increasing the possibilities for individual and collective legal action, and reducing the dependency of key political stakeholders on (the infrastructure and expertise of) large technology companies.

In this last chapter we not only acknowledge political, legal and market conditions that determine the possibilities for transformation of the digital ecosystem, but also propose questions to guide further discussion about these – very practical – challenges:

1. Effective regulatory enforcement

Increased regulatory enforcement, in the context of both national and international cooperation, is a necessary precondition to the success of the interventions described above. As described in Chapter 1, resolving the regulatory enforcement problem will help create meaningful safeguards and regulatory guardrails to support change.

An important aspect of regulatory enforcement and cooperation measures includes the ability of one authority to supply timely information to other authorities from different sectors and from different jurisdictions, subject to relevant procedural safeguards. Some models of this kind of regulatory cooperation already exist – in the UK, the Digital Regulation Cooperation Forum (DRCF) is a cross-regulatory body formed in 2020 by the Competition and Markets Authority (CMA), and includes the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO) and the Office of Communications (Ofcom).[footnote]Information Commissioner’s Office. (2020). ‘Digital Regulation Cooperation Forum’. Available at: https://ico.org.uk/about-the-ico/what-we-do/digital-regulation-cooperation-forum/[/footnote]

Where regulatory action is initiated against major platforms and global players, new measures should be considered as part of international regulators’ fora, that will provide the possibility to create ad hoc enforcement task forces across sectors and geographic jurisdictions, and to institutionalise such bodies, where necessary. The possibility of creating multi-sectoral and multi-geographic oversight and enforcement bodies focusing only on the biggest players in the global data and digital economy should be actively considered.

Moreover, it is necessary to create formal channels of communication between enforcement bodies, to be able to share sensitive information that might be needed in investigations. Currently, many enforcement authorities cannot share important information they have obtained in the course of their procedures with enforcement authorities that have a different area of competence or operate in a different jurisdiction. As data and all-purpose technologies are currently used by large platforms, any single enforcement body will not be able to see the full picture of risks and harms, leading to suboptimal enforcement of platforms and data practices. Coherent and holistic enforcement is needed.

 Questions that need to be addressed:

  • What would an integrated approach to regulation and enforcement be constituted in practice, embedding data protection, consumer protection and competition law objectives and mechanisms?
  • How can we uphold procedural rights, such as the right to good administration and to effective judicial remedy, in the context of transnational and trans-sectoral disputes?
  • How can enforcement authorities be made accountable where they fail to enforce the law effectively?
  • How to build more resilient enforcement structures that are less susceptible to corporate capture?
Taking into account collective harm

Jef Ausloos, Alexandra Giannopoulou and Jill Toh

 

Despite efforts to prevent it from being a mere checkbox exercise, GDPR compliance efforts often suffer from a narrow-focused framing, ignoring the multifarious issues that (can) arise in complex data-driven technologies and infrastructures. A meaningful appreciation of the broader context and the evaluation of potential impacts on (groups of) individuals and communities is necessary in order to move from ‘compliance’ narratives to fairer data ecosystems that are continuously evaluated and confronted with the potential individual or collective harms caused by data-driven technologies.

 

Public decision-makers responsible for deploying new technologies should start by questioning critically the very reason for adopting a specific data-driven technology in the first place. These actors should fundamentally be able to first demonstrate the necessity of the system itself, before assessing what data collection and processing the respective system would require. For instance, in the example of the migrant-monitoring system Centaur used in new refugee camps in Greece, authorities should be able to first demonstrate in general terms the necessity of a surveillance system, before assessing the inherent data collection and processing that Centaur would require and what would justify as necessary.

 

This deliberation is a complex exercise. Where the GDPR requires a data protection impact assessment, this deliberation is left to data controllers, before being subject to any type of questioning by relevant authorities.

 

One problem is that data controllers often define the legitimacy of a chosen system by stretching the meaning of GDPR criteria, or by benefitting from the lack of strict compliance processes for principles (such as data minimisation and data protection by design and by default) in order to demonstrate compliance. This can lead to a narrow norm-setting environment, because even if operating under rather flexible concepts (such as the respect of data protection principles as set out in the GDPR), the data controllers’ interpretation remains constricted in practice and neglects to consider new types of harms and impacts on a wider level.

 

While the responsibility to identify and mitigate harms is the responsibility of the data controller, civil society organisations could play an important facilitator role (without placing any formal burden to facilitate this process) in revealing collective harms that complex data-driven technological systems are likely to inflict on specific communities and groups, as well as sector-specific or community-specific interpretations of these harms.[footnote]And formalised through GDPR mechanisms such as codes of conduct (Article 40) and certification mechanisms (Article 42).[/footnote]

 

In practice, accountability measures would then require that the responsible actors need not only demonstrate the consideration of these possible broader collective harms, but also the active measures and steps taken to prevent them from materialising.

 

Put briefly, both data protection authorities and those controlling impactful data-driven technologies, need to recognise they can be held accountable for, and have to address, complex harms and impacts on individuals and communities. For instance, from a legal perspective, and as recognised under the GDPR’s data protection by design and by default requirement,[footnote]Article 25 of the GDPR.[/footnote] this means that compliance ought not to be seen as a one-off effort at the start of any complex data-driven technological system, but rather a continuous exercise considering the broader implications of data infrastructures on everyone involved.

 

Perhaps more importantly, and because not all harms and impacts can be anticipated, robust mechanisms should be in place enabling and empowering affected individuals and communities to challenge (specific parts of) data-driven technologies. While the GDPR may offer some tools for empowering those affected (e.g. data rights), they cannot be seen as goals in themselves, but need to be interpreted and accommodated in light of the context in which, and interests for which, they are invoked.

2. Legal action and representation

Another way to support the proposed interventions in Chapter 2 having their desired effect is to create more avenues for civil society organisations, groups and individuals to hold large platforms accountable for abuses of their data rights, as well as state authorities that do not adequately fulfil their enforcement tasks.

Mandating the exercise of data rights to intermediary entities is being explored as a way to address information and power asymmetries and systemic data-driven injustices at a collective level.[footnote]Giannopoulou, A., Ausloos, J., Delacroix, S and Janssen, H. (2022). ‘Mandating Data Rights Exercises’. Social Science Research Network. Available at https://ssrn.com/abstract=4061726[/footnote] The GDPR does not prevent the exercise of data rights through intermediaries, and rights delegation (as opposed to waiving the right to data protection, which is not possible under EU law since fundamental rights are inalienable), has started to be recognised in data protection legislation globally.

For example, in India[footnote]See: draft Indian Personal Data Protection Bill (2019). Available at: https://prsindia.org/files/bills_acts/bills_parliament/2019/Personal%20Data%20Protection%20Bill,%202019.pdf[/footnote] and Canada,[footnote]See: draft Canadian Digital Charter Implementation Act (2020). Available at: https://www.parl.ca/DocumentViewer/en/44-1/bill/C-11/first-reading[/footnote] draft data protection and privacy bills speak about intermediaries that can exercise the rights conferred by law. In the US, the California Consumer Privacy Act (CCPA)[footnote]See: California Consumer Privacy Act of 2018. Available at: https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5[/footnote] and the California Privacy Rights Act (CPRA)[footnote]See: California Privacy Rights Act of 2020. Available at: https://iapp.org/resources/article/the-california-privacy-rights-act-of-2020[/footnote] – which amends and expands the CCPA – both mention ‘authorised agents’, and the South Korean Personal Information Protection Act[footnote]See: Article 38 of the South Korean Personal Information Protection Act of 2020. Available in English at: https://elaw.klri.re.kr/kor_service/lawView.do?hseq=53044&lang=ENG[/footnote] also talks about ‘representatives’ who can be authorised by the data subject to exercise rights.

Other legal tools enabling legal action for individuals and collectives are Article 79 of the GDPR, which allows data subjects to bring compliance orders before courts, and Article 80(2) of the GDPR, which allows representative bodies to bring collective actions without the explicit mandate of data subjects. Both these mechanisms are underused and underenforced, receiving little court attention.

One step further would be to strengthen the capacity for civil society to pursue collective legal action for rights violations directly against the large players or against state authorities that do not adequately fulfil their enforcement tasks. The effort of reforming legal action and representation rules in order to make them more accessible for civil society actors and collectives needs to include measures to reduce the high costs for bringing court claims.[footnote]For example, in Lloyd v Google, the respondent is said to have secured £15.5m backing from Therium, a UK litigation funder, to cover legal costs. See: Thompson, B. (2017). ‘Google faces UK suit over alleged snooping on iPhone users’. Financial Times. Available at: https://www.ft.com/content/9d8c7136-d506-11e7-8c9a-d9c0a5c8d5c9. Lloyd v Google is a landmark case in the UK seeking collective claims on behalf of several millions of people against Google’s practices of tracking Apple iPhone users and collecting data for commercial purposes without the user’s knowledge or consent. The UK’s Supreme Court verdict was not to allow collective claims, which means that every individual would have to seek legal action independently and prove material damage or distress, bearing the full costs of litigation. The full judgement is available here: https://www.supremecourt.uk/cases/docs/uksc-2019-0213-judgment.pdf[/footnote] Potential solutions could be cost-capping for certain general actions when the claimant cannot afford the case. 

Questions that need to be addressed:

  • How can existing mechanisms for legal action and representation be made more accessible to civil society actors and collectives?
  • What new mechanisms and processes need to be designed for documenting abuses and proving harms, to address systemic data-driven injustices at a collective level?
  • How can cost barriers to legal action be reduced?

3. Removing industry dependencies

Finally, another way to ensure the interventions described above are successful is to lessen dependencies between regulators, civil society organisations and corporate actors. Industry dependencies can take many forms, including the sponsoring of major conferences for academia and civil society, and funding policy-oriented thinktanks that seek to advise regulators.[footnote]Solon, O. and Siddiqui, S. (2017). ‘Forget Wall Street – Silicon Valley is the new political power in Washington’. The Guardian. Available at: https://www.theguardian.com/technology/2017/sep/03/silicon-valley-politics-lobbying-washington[/footnote] [footnote]Stacey, K. and Gilbert, C. (2022). ‘Big Tech increases funding to US foreign policy think-tanks’. Financial Times. Available at https://www.ft.com/content/4e4ca1d2-2d80-4662-86d0-067a10aad50b[/footnote] While these dependencies do not necessarily lead to direct influence over research outputs or decisions, they do raise a risk of eroding independent critique and evaluation of large digital platforms. 

There are only a small number of specialist university faculties and research institutes working on data, digital and societal impacts that do not operate, in one way or another, with funding from large platforms.[footnote]Clarke, L., Williams, O. and Swindells, K. (2021). ‘How Google quietly funds Europe’s leading tech policy institutes’. The New Statesman. Available at: https://www.newstatesman.com/science-tech/big-tech/2021/07/how-google-quietly-funds-europe-s-leading-tech-policy-institutes[/footnote] This industry-resource dependency can risk jeopardising academic independence. A recent report highlighted that ‘[b]ig tech’s control over AI resources made universities and other institutions dependent on these companies, creating a web of conflicted relationships that threaten academic freedom and our ability to understand and regulate these corporate technologies.’[footnote]Whittaker, M. (2021). ‘The steep cost of capture’. ACM Interactions. Available at: https://interactions.acm.org/archive/view/november-december-2021/the-steep-cost-of-capture[/footnote]

This points to the need for a more systematic approach to countering corporate dependencies. Civil society, academia and the media play an important role in counterbalancing the narratives and actions of large corporations. Appropriate public funding, statutory rights and protection are necessary for them to be able to fulfil their function as balancing actors, but also as visionaries for alternative and potentially better ecosystems.

Questions that need to be addressed:

  • What would alternative funding models (such as public or philanthropic) that remove dependencies on industry be constituted?
  • Could national research councils (such as UKRI) and public funding play a bigger role in creating dedicated funding streams to support universities, independent media and civil society organisations, to shield them from corporate financing?
  • What type of mechanisms and legal measures need to be put in place, to establish endowment funds for specific purposes, creating sufficient incentives for founding members, but without compromising governance? (For example, donors, including large companies, could benefit from specific tax deductions but wouldn’t have any rights or decision-making power in how an endowment is governed, and capital endowments would be allowed but not recurring operating support, as that creates dependency).

Open invitation and call to action

A complete overturn of the existing data ecosystem cannot happen overnight. In this report, we acknowledge that a multifaceted approach is necessary for such a reform to be effective. Needless to say, there is no single, off-the-shelf solution that – on its own – will change the paradigm. Looking towards ideas that can produce substantial transformations can seem overwhelming, and it is also necessary to acknowledge and factor in the challenges that lie with adopting less revolutionary ideas into practice.

Acknowledging that there are many instruments that remain to be fully tested and understood in existing legislation, in this report we set off to develop the most promising tools for intervention that can take us towards a people-first digital ecosystem that’s fit for the middle of the twenty-first century.

In this intellectual journey, we explored a set of instruments, which carry transformative potential, and divided them into four areas that reflect the biggest obstacles we will face when imagining a deep reform of the digital ecosystem: control over technology infrastructure, power over how data is purposed and governed, balancing asymmetries with new institutions and more social accountability with inclusive participation in policymaking.

We unpacked some of the complexity of these challenges, and asked questions that we deem critical for the success of this complex reform. With this opening, we hope to fuel a collective effort to articulate ambitious aspirations for data use and regulation that work for people and society.

Reinforcing our invitation in 2020 to ‘rethink data’, we call on policymakers, researchers, civil society organisations, funders and industry to build towards more radical transformations, reflecting critically, testing and further developing these proposed concepts for change.

Who What you can do
Policymakers ●      Transpose the proposed interventions into policy action and help build the pathway towards a comprehensive and transformative vision for data

●      Ensure that impediments to effective enforcement of existing regulatory regimes are identified and removed

●      Use evidence of public opinion to proactively develop policy, governance and regulatory mechanisms that work for people and society.

Researchers ●      Reflect critically on the goals, strengths and weaknesses of the proposed concepts for change

●      Build on the proposed concepts for change with further research into potential solutions.

Civil society organisations ●      Analyse the proposed transformations and propose ways to build a proactive (instead of reactive) agenda in policy

●      Be ambitious and bold, visualise a positive future for data and society

●      Advocate for transformative changes in data policy and practice and make novel approaches possible.

Funders ●      Include exploration of the four proposed interventions in your annual funding agenda, or create a new funding stream for a more radical vision for data

●      Support researchers and civil society organisations to remain independent of government and industry

●      Fund efforts that work towards advancing concepts for systemic change.

Industry ●      Support the development and implementation of open standards in a more inclusive way (incorporating diverse perspectives)

●      Contribute to developing mechanisms for the responsible use of data for social benefit

●      Incorporate transparency into practices, including being open about internal processes and insights, and allowing researcher access and independent oversight.

Final notes

Context for our work

One of the core conundrums that motivated the establishment of the Ada Lovelace Institute by the Nuffield Foundation in 2018 was how to construct a system for data use and governance that engendered public trust, enabled the protection of individual rights and facilitated the use of data as a public good.

Even before the Ada Lovelace Institute was fully operational, Ada’s originating Board members (Sir Alan Wilson, Hetan Shah, Professor Helen Margetts, Azeem Azhar, Alix Dunn and Professor Huw Price) had begun work on a prospectus to establish a programme of work, guided by a working group, to look ‘beyond data ownership’ at future possibilities for overhauling data use and management. This programme built on the foundations of the Royal Society and British Academy 2017 report, Data Use and Management, and grew to become Rethinking Data.

Ada set out an ambitious vision for a research programme, to develop a countervailing vision for data, which could make the case for its social value, tackle asymmetries of power and data injustice, and promote and enable responsible and trustworthy use of data. Rethinking Data aimed to examine and reframe the kinds of language and narratives we use when talking about data, define what ‘good’ looks like in practice when data is collected, shared and used, and recommend changes in regulations so that data rights can be effectively exercised, and data responsibilities are clear.

There has been some progress in changing narratives, practices and regulations: popular culture (in the form of documentaries such as The Social Dilemma and Coded Bias), corporate product choices (like Apple’s decision to restrict tracking by default on iPhone apps) and high-profile news stories (such as the Ofqual algorithm fiasco, which saw students take to British streets to protest ‘F**k the algorithm’), have contributed to an evolving and more informed narrative about data.

The potential of data-driven technologies has been front and centre in public health messaging around the pandemic response, and debates around contact tracing apps have revealed a rich and nuanced spectrum of public attitudes to the trade-off between individual privacy and the public interest. The Ada Lovelace Institute’s own public deliberation research during the pandemic showed that the ‘privacy vs the pandemic’ arguments entrenched in media and policy narratives are contested by the public.[footnote]Ada Lovelace Institute. (2020). No green lights, no red lines – Public perspectives on COVID-19 technologies. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/07/No-green-lights-no-red-lines-final.pdf and Parker, I. (2020). ‘It’s complicated: what the public thinks about COVID-19 technologies’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/no-green-lights-no-red-lines/[/footnote]

There is now an emerging discourse around ‘data stewardship’, the responsible and trustworthy management of data in practice, to which the Ada Lovelace Institute has contributed via research which canvasses nascent legal mechanisms and participatory approaches for improving ethical data practices.[footnote]Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ and Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/[/footnote] The prospect of new institutions and mechanisms for empowering individuals in the governance of their data is gaining ground, and the role of new data intermediaries is being explored in legislative debates in Europe, India and Canada,[footnote]Data Trusts. (2020). International approaches to data trusts: recent policy developments from India, Canada and the EU. Available at: https://datatrusts.uk/blogs/international-policy-developments[/footnote] as well as in the data reform consultation in the UK.[footnote]See: Department for Digital, Culture, Media & Sport (DCMS). (2021). Data: A new direction, Section 7. Available at: https://www.gov.uk/government/consultations/data-a-new-direction[/footnote]

Methodology

The underlying research for this project was primarily informed by the range of expert perspectives in the Rethinking data working group. It was supplemented by established and emerging research in this landscape and refined by several research pieces commissioned from leading experts on data policy.

Like most other things, the COVID-19 pandemic made the task of the Rethinking data working group immensely more difficult, not least because we had envisaged the deliberation of the group (which spans three continents) would take place in person. Despite this, the working group persisted and managed 10 meetings over a 12 month period.

To start with, the working group met to identify and analyse themes and tensions in the current data ecosystem. In the first stage of these deliberations, they singled out the key questions and challenges they felt were most important, such as questions around the infrastructure used to collect and store data, emerging regulatory proposals for markets and data-driven technologies, and the market landscape that major technology companies operate in.

Once these challenges were identified, the working group used a horizon-scanning methodology, to explore the underlying assumptions, power dynamics and tensions. To complement the key insights from the working group discussion, a landscape overview on ‘future technologies’ – such as privacy-enhancing techniques, edge computing, and others – was commissioned from the University of Cambridge.

The brief looked at emerging trends that present more pervasive, targeted or potentially intrusive data capture, focusing only on the more notable or growing models. The aim was to identify potential glimpses into how power will operate in new settings created by technology, and how the big business players’ approach to people and data might evolve as a result of these new developments, without the intention to predict or to forecast how trends will play out.

Having identified power and centralisation of large technology companies as two of the major themes for concern, in the second stage of the deliberations, the two major questions the working group considered were: What are the most important manifestations of power? And what are the most promising interventions to enabling an ambitious vision for the future of data use and regulation?

Speculative thinking methodologies, such as speculative scenarios, were used as provocations for the working group, to think beyond the current challenges, allowing different concepts for interventions to be discussed. The three developed scenarios highlighted potential tensions and warned about fallacies that could emerge if a simplistic view around regulation was employed.

In the last stage of our process, the interventions suggested by the working group were mapped into an ecosystem of interventions that could support positive transformations to emerge. Commissioned experts were invited to surface further challenges, problems and open questions associated with different interventions.

Acknowledgements

This report was lead authored by Valentina Pavel, with substantive contributions from Carly Kind, Andrew Strait, Imogen Parker, Octavia Reeve, Aidan Peppin, Katarzyna Szymielewicz, Michael Veale, Raegan MacDonald, Orla Lynskey and Paul Nemitz.

Working group members

Name Affiliation (where appropriate)
Diane Coyle (co-chair) Bennett Professor of Public Policy, University of Cambridge
Paul Nemitz (co-chair) Principal Adviser on Justice Policy, EU Commission, visiting Professor of Law at College of Europe
Amba Kak Director of Global Policy & Programs, AI Now Institute
Amelia Andersdotter Data Protection Technical Expert and Founder, Dataskydd
Anne Cheung Professor of Law, University of Hong Kong
Martin Tisné Managing Director, Luminate
Michael Veale Lecturer in Digital Rights and Regulation, University College London
Natalie Hyacinth Senior Research Associate, University of Bristol
Natasha McCarthy Head of Policy, Data, The Royal Society
Katarzyna Szymielewicz President, Panoptykon Foundation
Orla Lynskey Associate Professor of Law, London School of Economics
Raegan MacDonald Tech-policy expert
Rashida Richardson Assistant Professor of Law and Political Science, Northeastern University School of Law & College of Social Sciences and Humanities
Ravi Naik Legal Director, AWO
Steven Croft Founding board member, Centre for Data Ethics and Innovation (CDEI)
Taylor Owen Associate Professor, McGill University – Max Bell School of Public Policy

Commissioned experts

Name Affiliation (where appropriate)
Ian Brown Leading specialist on internet regulation and pro-competition mechanisms such as interoperability
Jathan Sadowski Senior research fellow, Emerging Technologies Research Lab, Monash University
Jef Ausloos Institute for Information Law (IViR), University of Amsterdam
Jill Toh Institute for Information Law (IViR), University of Amsterdam
Alexandra Giannopoulou Institute for Information Law (IViR), University of Amsterdam

External reviewers

Name Affiliation (where appropriate)
Agustín Reyna Director, Legal and Economic Affairs, BEUC
Jeni Tennison Executive Director, Connected by data
Theresa Stadler Doctoral assistant, Security and Privacy Engineering Lab, at Ecole Polytechnique Fédérale de Lausanne (EPFL)
Alek Tarkowski Director of Strategy, Open Future Foundation

Throughout the working group deliberations we also received support from Annabel
Manley, research assistant at the University of Cambridge, and Jovan Powar and Dr Jat
Singh, Compliant & Accountable Systems Group at the University of Cambridge.

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

Executive summary

With the increasing use of AI systems in our everyday lives, it is essential to understand the risks they pose and take necessary steps to mitigate them. Because the risks of AI systems may become manifest at different stages of their deployment, and the specific kinds of risks that may emerge will depend on the contexts in which those systems are being built and deployed, assessing and mitigating risk is a challenging proposition.

Addressing that challenge requires identifying and deploying a range of methods across the lifecycle of an AI system’s development and deployment.[1] By understanding these methods in more detail, policymakers and regulators can support their use in the UK’s technology sector, and so reduce the risks that AI systems can pose to people and society.

In its March 2023 AI regulation white paper, the UK Government proposed creating a set of central Government functions to support the work of regulators. This included a cross-sectoral risk-assessment function, intended to support regulators in their own risk assessments, to identify and prioritise new and emerging risks, and share risk enforcement best practices.

This central function has the potential to help coordinate and standardise the somewhat fragmented risk-assessment landscape identified in this paper and support the development of a cross-sectoral AI assessment ecosystem in the UK.

Key takeaways

  1. There is not a singular, standardised process for assessing the risks or impacts of AI systems (or a common vocabulary), but there are commonly used components: policymakers, regulators and developers will need to consider how these are delivered and tailored.
  2. Risk and impact assessment methods typically involve five components: risk identification, risk prioritisation, risk mitigation planning, risk monitoring and communicating risks. The main differences between components are in how they are achieved, the actors involved, the scope of impacts considered and the extent of accountability.
  3. Policymakers globally are incorporating risk and impact assessments in AI governance regimes and legislation, with the EU, USA, Brazil and Canada mandating assessments for various AI systems. Regulators and policymakers face the challenge of ensuring risk consideration is conducted, acted on and monitored over time, highlighting the need for an ecosystem of assessment methods.
  4. Identifying and assessing risks alone does not ensure risks are avoided. AI risk management will require an ecosystem of assessment, assurance and audit. This includes independent auditing, oversight bodies, ethics review committees, safety checklists, model cards, datasheets and transparency registers that collectively enable monitoring and mitigation of AI-related risks.
  5. Ensuring this AI assessment ecosystem is effective will require consensus on risk-assessment standards, supported by incentives for assessing societal risks and case studies showcasing risk-assessment methods in practice. Domain-specific guidance, skilled professionals and strong regulatory capacity can further enhance the ecosystem. Third-party assessors – including civil society, academia and commercial services – will be essential for developing and implementing assessment practices at scale.

Research questions

  1. What are the broad areas of risks that AI systems can pose in different contexts (particularly from emerging AI technologies)?
  2. How should regulators or policymakers respond to different kinds of risks?
  3. What mechanisms and processes can be used to assess different kinds of risks, including the significance of their potential impact and their likelihood?
  4. Whose responsibility (for example, developer, procurer, regulator) is it to conduct these assessments and evaluations?
  5. What are methods for checking, monitoring and mitigating risks through the AI lifecycle?
  6. What might be needed for an effective assessment ecosystem?

To answer these questions, this paper surveys approaches for assessing risks that AI systems pose for people and society – both on the ground within AI project teams, and in emerging legislation. The findings of this report are based on a desk-based review and synthesis of grey and academic literature on approaches to assessing AI risk, alongside analysis of draft regulations that contain requirements for anticipating risk or impacts of AI systems.

Key terms

Impact assessment: Impact assessments are evaluations of an AI system that use prompts, workshops, documents and discussions with the developers of an AI system and other stakeholders to explore how a particular AI system will affect people or society in positive or negative ways. These tend to occur in the early stages of a system’s development before it is in use, but may occur after a system has been deployed.


Risk assessment:
Risk assessments are very similar to impact assessments but look specifically at the likelihood of harmful outcomes occurring from an AI system. These also tend to occur in the early stages of a system’s development before it is in use but may occur after a system has been deployed.

 

Algorithmic audit: Algorithmic audits are a form of external scrutiny of an AI system, or the processes around it, which can be conducted as part of a risk or impact assessment. These can be technical audits of the inputs or outputs of a system; compliance audits of whether an AI development team has completed processes or regulatory requirements; regulatory inspections by regulators to monitor behaviour of an AI system over time; or sociotechnical audits that evaluate the ways in which a system is impacting wider societal processes and contexts in which it is operating. Audits usually occur after a system is in use, so can serve as accountability mechanisms to verify whether a system behaves as developers intend or claim.

Introduction

The last few years have seen a growing body of evidence of the risks AI systems can pose to people and society. In response, governments, industry organisations and civil society groups have developed a series of approaches for evaluating risks.

Each approach provides different methods for identifying potential risks of AI systems to particular groups, assessing the likelihood of those risks occurring and encouraging or suggesting interventions to mitigate them. However, there is currently little standardisation in approaches, and it can be challenging to navigate the range of approaches available.

This report surveys existing methods for assessing potential risks of AI systems – in literature and practice. It aims to support better understanding of how these methods can be used and the maturity of practice in specific areas, and to identify common or differentiating components of different methods. It also considers some wider mechanisms that can support monitoring and mitigation of those risks over time.

What we mean by AI risks

Risk can be thought of as a function of 1) the negative impact, or magnitude of harm, that would arise if the circumstance or event occurs and 2) the likelihood of occurrence.’[2] Negative impacts or outcomes can be experienced by individuals, groups, communities, organisations, society, the environment and the planet. In all evaluations of risk, one of the most important questions to ask is ‘a risk to whom’. For technology companies, risk may be understood in terms of business, reputational or financial risk. For policymakers, civil society and the public, risks to people and society may be front of mind.

Across different risk assessment approaches, the term ‘impact’ broadly refers to the potential positive or negative outcomes of a system, whereas ‘risk’ is focused on potential negative outcomes, and ‘harm’ refers to actual harmful outcomes that occur. However, in naming and discussion of methods these terms are sometimes used interchangeably.

Four ways to think about risks from AI systems

AI systems can pose a broad range of risks, but listing them all is a challenging task given the wide variety of contexts and sectors where AI systems can be deployed, and a lack of agreement over the definition of ‘AI’. Nonetheless, we have identified four ways of thinking about risk and algorithmic harms in the literature:

  • Risks of particular harms such as representational harms, harms to equality, informational harms, physical or emotional harms, human rights infringement harms, societal harms. Some researchers differentiate between harms stemming from the design of an AI system and harms stemming from its use. For example, some differentiate between design flaws like faulty inputs or a failure to adequately test a system, and the risks in how an AI system is used (for example, to undermine civil and economic justice).[3] Others distinguish between risks caused by the ways a large language model is trained (for example, lack of representation of non-English languages) and risks from how it is used (for example, to spread misinformation).[4]
  • Risks associated with scenarios of AI systems such as best- or worst-case scenarios, system failure, process failure or misuse.[5] Risks can occur not only when the AI system goes wrong, but also when the context around the system changes – and even when the system operates as intended. For example, researchers acknowledge that some social media algorithms are designed with the intention of extracting and monitoring user behaviour, which can pose an inherent risk to user privacy.[6] Others distinguish the risk of ‘transfer context bias’, in which an AI system designed for one context is inappropriately applied to another.[7]
  • Risks associated with particular AI technologies, where particular models or forms of AI systems have commonly associated risks. For instance, research has classified common risks of large language models (LLM) as discrimination,[8] hate speech and exclusion, information hazards, malicious uses, human-computer interaction harms and environmental and socioeconomic harms. Other approaches have classified the risks of deepfake technologies.[9]
  • Risks associated with specific domains of application. Risks from AI systems are dependent on the context in which they are deployed, such as healthcare or Some sectors are seeing domain-specific taxonomies of AI harms, like the economy,[10] or the environment.[11]

While these approaches to considering risk can offer regulators and policymakers a useful starting point for identifying potential harms an AI system may pose, identifying the risks raised by a particular AI system requires the use of risk assessment methods. In the last few years, governments, industry organisations and civil society groups have produced a series of approaches for evaluating AI risks. Each approach provides different methods and techniques for identifying potential risks of AI systems to particular groups, the likelihood of those risks and steps to mitigate them. However, there is little standardisation of the methods used in each approach, and it can be challenging to navigate the range of approaches available.

Methods for assessing risks, outcomes and impacts

Risk assessment methods for AI systems are still emerging, with many under development. They vary in their scope of applications and in the specific prompts, questions and processes used by a risk assessor. They are currently rarely determined by consistent standards, and there is no consistent terminology for how to describe these methods, with some bodies describing them as ‘frameworks’ or ‘toolkits’. Some of the activities described in risk and impact assessments also are also described in some methods as ‘algorithm audits’.

The common theme within these methods is that they seek to anticipate harmful outcomes that could happen, both before a system is in use and with the aim of monitoring or reassessing those risks as the system changes and develops. Risks from AI systems can manifest across their lifecycle and these systems can be dynamic – their behaviour can change with new inputs and data. When integrated into complex environments – like a hospital or a school environment – new risks can emerge over time. This requires risk-assessment approaches to also include ongoing monitoring or revisitations of the risk of AI systems.

These methods are beginning to appear as legal requirements in AI governance regimes like the Canadian Directive on Automated Decision-Making,[12] EU AI Act,[13] UK Online Safety Bill,[14] Brazil’s draft AI legislation[15] and the proposed Algorithm Accountability Act in the USA.[16] These methods build on the use of risk and impact-assessment methods in data governance, such as data protection impact assessments (DPIA) under EU and UK General Data Protection Regulation (GDPR). Many technology companies and public-sector bodies are also increasingly adopting these methods voluntarily as part of their governance process.

These methods also build on a history of practice in other fields, where they have been used as part of a governance process to assess the risks of potential policies and practices. For instance, impact assessments have a long history of use in domains like finance, cybersecurity, data protection and environmental studies.[17]  Similarly, risk-management approaches and assessments are common across business management, finance and assurance.[18]

When applied to AI, risk and impact assessment methods aim to anticipate the impacts of AI systems in advance of them occurring and identify actions developers of AI systems can take so that risks can be avoided and positive impacts can be best realised. Where risks are deemed to be too great, the assessments may indicate that a system is not appropriate for continued development, procurement or deployment.

Typically, these methods often lead to the creation of a final document that captures the results of the process. These methods share most or all of the following five components, though differ slightly in terms of: which actors are involved or responsible for each component; the scope of impacts considered; whether the assessment is voluntary or mandatory; and how the results of the assessment are communicated (see also table below):

  1. Risk identification: Risk identification activities in both risk and impact assessments usually involve an exercise to compile answers to a set of prompts about the potential risks a system poses. This is often achieved through a workshop or discussion and captured in a document. Differences in technologies and contexts will determine who should be involved in the risk identification process (such as the development team, wider organisational stakeholders, external stakeholders or experts, user groups, or impacted communities). Risk identification activities can be driven by different prompts, for example: a particular set of risks (for example, risks to human or fundamental rights); the domain of application or technology used (for example, risks from medical AI or large language models (LLM)); or a scenario-led approach (for example, best-case outcome, worst-case outcome, system failure or system misuse) –  or a combination of these.
  2. Risk prioritisation: Risk prioritisation is often a combined weighting of risk likelihood, size, scope and affected parties (for example, a particular assessment may focus on risks to children). Prioritisation is a subjective activity – priority of risks depends on risks to who (which individuals or groups of people), and how those affected might experience those risks. Some assessments use scoring systems to prioritise risks,[19] where others use qualitative descriptions.[20] As in risk identification, the priority of risks will depend on who is involved in the activity – some methods like the Canadian Government’s algorithmic impact assessment (AIA) process involve the project team making this assessment,[21] while others like the NHS AIA process involve a participatory panel of patients, doctors and nurses.[22]
  3. Risk-mitigation planning: Assessment processes usually involve compiling and stating planned mitigations for identified risks. If conducted by third parties, as is often the case for human rights impact assessments, there may be recommendations for mitigations.[23] If risks are too great, it is advised that mitigations include the option not to proceed with development or deployment of the system.[24]
  4. Risk monitoring: Risk monitoring involves planning for the monitoring of particular risks, or to revisit the risk assessment at a particular point in development or deployment. This may be at a lifecycle stage (for example, to revisit before deployment or regular releases), at a regular cadence (for example, annually), or when the system changes significantly. Identifying system change can pose challenges in terms of determining when a sufficient size or scope of change has occurred, so is often combined with a fixed revisitation point.[25]
  5. Communicating risks: Many risk or impact assessment methods, particularly for the public sector, recommend that findings are published to increase transparency, improve public trust and provide the public or civil society bodies with information on how a system has been tested. These may be in a single repository, as with the Canadian Government’s AIA,[26] or alongside details of an AI product, as has been seen in the private sector.[27] However, particularly with private sector and voluntary processes, there is inconsistency in whether findings (or even the information that the risk or impact assessment has been conducted) are made public and, if so, to what level of detail.

Risk and impact assessment methods in practice

Example Summary Who is involved? When in lifecycle? Communication Voluntary / mandated In use / maturity?
Government of Canada algorithmic impact assessment (AIA)[28] Mandatory risk-assessment tool for use by public-sector organisations to determine impact level. Online questionnaire contains 48 risk and 33 mitigation questions on design, algorithm, decision type, impact and data sources.[29]

Intended for use on external-facing government systems, tools or statistical models used to make an administrative decision about a client (excluding national security).[30]

Assessment conducted by public-sector bodies. Recommended to be completed by a multidisciplinary team with expertise including service recipients, business processes, data and system design decisions.[31] Assessment required twice:
1) Beginning of the design phase of a project.2) Prior to the deployment of the system.
Completed AIAs are released on the Open Government Portal.[32] Mandated for the public sector in Canada. Introduced for all systems developed or procured after 1 April 2020.[33]
Government of the Netherlands Fundamental Rights and Algorithm Impact Assessment (FRAIA)[34] A discussion and decision-making tool for government organisations considering developing, procuring, adjusting or using an algorithm. The process looks holistically at possible consequences of use of an algorithm (including inaccuracy, ineffectiveness), with a particular focus on risks of infringing fundamental rights. Advises that discussion about the various questions should take place in a multidisciplinary team consisting of people with a wide range of specialisations and backgrounds. In decision-making about use of an algorithmic solution (i.e. prior to use). Links to completed FRAIAs included in the Netherlands’ public-sector algorithmic transparency standard.[35] Currently voluntary. Active discussions in the EU around requiring fundamental rights impact assessments in the forthcoming AI Act are however looking to this model.[36] In use by a number of government departments in the Netherlands since 2021.
Techology industry use of human rights impact assessment (HRIA) for AI Applying human rights impact assessments to AI systems. HRIAs originate in the development sector but are increasingly used to assess the human rights impacts of business practices and technologies. Typically a third-party brought in to lead the human rights impact assessment, with access to teams at the company, potentially affected stakeholders and independent experts.[37] Varies – many recommend in advanced of system use,[38] but many published examples have been post-deployment. Sometimes results are published sporadically on company websites. Voluntary. Numerous publicised instances.[39]
Microsoft’s Responsible AI Impact Assessment[40] This impact assessment consists of five sections: project overview, intended uses, adverse impact, data requirements and summary of impact. The process and findings are documented in a template. This template includes prompts around fitness for purpose, potential harms and benefits for different stakeholders, as well as questions on fairness, transparency, accountability, reliability and safety. The impact assessment also prompts for goals for mitigation of risks identified. Assessment conducted by internal teams in the company, led by one person, with some parts described as requiring teamwork from team members with different expertise. Early in the system’s development, ‘typically when defining the product vision and requirements’ and before development starts. Additional review and updates annually, or when new intended uses for the system are added, or before expanding system release.[41] Not published externally.[42] Voluntary. In use as standard at Microsoft, resources available for adoption by others.[43]
Council of Europe Human Rights, Democracy and Rule of Law Assurance Framework for AI Systems (HUDERAF)[44] The HUDERAF is made up of four elements:
1) A ‘Preliminary Context-Based Risk Analysis’ to establish a high-level sense of risk from the proposed system.2) A ‘Stakeholder Engagement Process’ for identifying relevant stakeholders to inform understanding of risk and impact.3) A ‘Human Rights, Democracy and the Rule of Law Impact Assessment’ for identifying potential impacts of the system’s use.4) A ‘Human Rights, Democracy and Rule of Law Assurance Case’ where project teams document their risk and impact assessment processes, the risks identified and mitigation plans.[45]
Conducted by AI project teams, with a process to identify stakeholders and ‘facilitate proportionate stakeholder involvement’. Activities across the project lifecycle, beginning with the design phase. The process recommends that those undertaking it ‘publicly communicate HUDERIA findings and impact management plans (action plans) to the greatest extent possible (for example, published, with any reservations based on risk to rights-holders or other participants clearly justified).’[46] If the draft Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law is established, for signatory countries this process forms a framework for mandatory compliance with that convention.[47] The impact assessment itself is non-binding.[48] Not currently in use.
Stakeholder impact assessment A process to document the collaborative evaluation and reflective anticipation of the possible harms and benefits of AI projects. It involves: identifying affected stakeholders; mapping the goals and objectives of an AI project; considering possible impacts on individuals and society; and public consultation. Developed by researchers at the Alan Turing Institute in the UK, particularly with a view to application in the public sector. It is intended to be used alongside other forms of impact assessment applicable in the UK, such as data protection (DPIA) and equalities impact assessments (EIA).[49] Led by the AI project team, but includes identifying and consulting a wide range of stakeholders, as well as public consultation. At design stage, after development stage (once model has been trained, tested and validated), and iteratively revisited after deployment. Unclear – includes suggested documentation format that could be published. Voluntary. In trial, no published examples.
UK NHS algorithmic impact assessment (AIA) in healthcare Application of AIA approach to a data-access process in healthcare. The process involves reflective thinking through possible impacts by project teams, a deliberative process with patients and members of the public and the publication of the final impact assessment. It was developed by the Ada Lovelace Institute in the UK to be trialled with the UK’s National Medical Imaging Platform (NMIP), with the data-access process intended to form an accountability mechanism for the consideration and mitigation of risks.[50] Led by AI project teams, participated in by patients and members of the public, reviewed by an interdisciplinary data-access committee. Before data access, ideally in the early research and design phase, but in practice may be applied to a range of lifecycle points. Intended to be revisited over time. AIAs of successful applicants for data recommended to be published in one location on the website of the data source. Would be required for data access for the NHS NMIP. Planned pilot,[51] has been explored for use in other contexts.[52]

 

Google Ethical AI team’s SMACTR[53] Framework for internal algorithmic auditing, designed to support end-to-end AI system development. The framework includes five distinct stages: scoping, mapping, artefact collection, testing and reflection. Designed to be completed by a range of ‘key internal stakeholders’, such as product teams, management and other stakeholders who have proximity to (or control of) aspects of an AI system (for example, the training data). Suggests that ‘diverse expertise’ will strengthen the efficacy of the framework. During product development, prior to launch. Internal transparency and external scrutiny promoted, but no formal requirement for publication of a document containing the audit’s  results. Voluntary. Has been put forward for implementation into a medical algorithmic audit process.[54]
US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework[55] In the USA, NIST’s AI Risk Management Framework comes with an accompanying playbook for voluntary use, suggesting ways to govern, map, measure and manage risks in the design, development, deployment and use of AI systems.[56] These can draw on activities such as horizon-scanning, scenario planning and risk registers, which are common in business risk management.

 

‘Different AI stakeholders’ – those playing an ‘active role’ in an AI system’s lifecycle, including both developer and vendor organisations, and individuals such as domain experts, designers, compliance experts and advocacy groups. Iterative, continual process designed to be performed throughout different stages of AI lifecycle – but with potential for variance according to individual organisations’ schedule / interests. No formal transparency requirement. Voluntary. No known published examples.
UK Information Commissioner’s Office (ICO) AI and Data Protection Risk Toolkit The ICO AI and Data Protection Risk Toolkit aims to combine data protection regimes with considerations for AI. It is a spreadsheet of prompts related to UK GDPR and data protection risks at each stage of the AI lifecycle, with accompanying guidance, action recommendations and space for documenting the process.[57] Targeted towards ‘organisations using AI’. Adopted / used at different phases within the AI lifecycle. Unclear – unable to find documentation of use. Voluntary, but can help demonstrate compliance with data protection laws. Unclear – unable to find documentation of use.

Example in practice: human rights impact assessment for AI – Google’s celebrity recognition API

Google commissioned consultancy BSR to conduct a human rights impact assessment (HRIA) during the product design and development phase of its celebrity recognition application programming interface (API).[58]

 

The system uses computer vision to enable searching of images for celebrities within a dataset of licensed images of actors, athletes and TV/film celebrities. The celebrity recognition API is made available for selected media or entertainment enterprise customers, to enable them to search and label their image or video libraries.[59]

 

The HRIA was conducted collaboratively with Google Cloud AI’s API product and cross-functional AI principles teams. The methodology outlined was not detailed, but is described as being ‘based on the UN Guiding Principles (UNGPs) on Business and Human Rights, including aspects such as consultation with potentially affected stakeholders, dialogue with independent expert resources, and paying particular attention to those at heightened risk of vulnerability or marginalisation’.[60]

 

The HRIA resulted in the identification of a range of human rights risks relating to privacy, freedom of expression, security, child rights, non-discrimination and access to culture. The assessment recommended actions for Google to take, such as restricting inclusion in the celebrity database to people who are voluntarily the subject of public media attention. The assessment also included recommendations for wider sectoral actors developing similar products, such as participating in efforts to create industry-wide principles of practice on the use of such products. Finally, the assessment included recommendations for users of these kinds of services, such as doing their own HRIA of their particular use of the product.

 

This HRIA was a single assessment carried out before the system was launched, with no clear requirement for follow-on assessments. It focused on hypothetical uses of the technology, but did not go on to study its actual use post-deployment.

 

An executive summary report of the HRIA has been made available on the BSR website and is described in a blog post on their website.[61] This is linked to from the blog post announcing the celebrity recognition product.[62] At the time of writing it is not clear how or if the recommendations from the HRIA are communicated to users of the product within the product’s interface, or to the celebrities whose images are included in the dataset it matches against.

Example in practice: algorithmic impact assessment in healthcare – NHS medical imaging data access

 

The UK Government is planning to pilot the use of an algorithmic impact assessment (AIA) as part of the data-access process for National Health Service (NHS) data.[63] The process was developed as part of a research partnership between NHS England’s AI Lab and the Ada Lovelace Institute as the first of its kind to explore the potential for AIAs in a real-life healthcare case study: the National Medical Imaging Platform (NMIP).

 

The AIA process involves a reflexive exercise conducted by research and development teams to identify risks, combined with a participatory workshop with patients, and public involvement to broaden the range of inputs into impact identification. This, along with proposed risk mitigations, is submitted as part of a data-access application to a data-access board, who include the AIA as part of their assessment of whether to grant access. It is recommended that AIAs are made public to communicate risks and the risk-assessment process.[64]

There are seven steps in the AIA process, of which four involve participation of the applicant team: 1. Reflexive exercise: team conducts a reflexive exercise, completing the AIA template, 2. Participatory workshop: NHS AI Lab coordinates participatory workshops on applicant projects, 3. Synthesis: applicant team revisits the AIA template completed in the reflexive exercise, based on findings from the participatory workshop, and later, once the AIA is complete, and the data-access decision has been reached: Iteration: AIA is revisited on an ongoing basis by teams as their project develops

Emerging risk-assessment methods for AI in the law

Policymakers worldwide are aiming to incorporate requirements for assessing risks of AI into AI governance regimes and legislation, with risk and impact assessments emerging as common features.

In the EU, the Digital Services Act (2022) requires annual risk assessments for system risks from very large online platforms. With negotiations on the AI Act  still in progress, the Parliament’s text proposes  fundamental rights impact assessments (FRIA) as a requirement for ‘high-risk’ AI systems. In the USA, the proposed Algorithm Accountability Act (2022) would mandate businesses that deploy automated decision-making systems and decision processes ‘augmented’ with AI to produce impact assessments. While this federal bill has yet to be passed, a series of US states like California, New York and Washington have proposed legislation to mandate the use of risk and impact assessments for public sector uses of AI.[65] In Brazil’s draft AI legislation, algorithmic impact assessments (AIAs) must be conducted and made publicly available by providers and for users of ‘high-risk’ AI systems.[66] In Canada, AIAs are already mandated for public sector agencies.[67]

In the UK, the current language of the draft Online Safety Bill requires online platforms to conduct risk assessments of the prevalence of illegal online content appearing on their services.[68] These assessments will be likely to require platforms to consider risks from AI systems used in content moderation and recommendation systems, which may remove or amplify certain kinds of content to users. Similarly, under the current UK General Data Protection Regulation (GDPR), data protection impact assessments (DPIA) are required for data processing that may be considered ‘high risk’ under the bill’s guidelines.[69]

Other methods for checking, monitoring and mitigating risks

Identifying and assessing risks alone does not ensure that risks are mitigated or avoided in practice. Many researchers and government agencies have highlighted the need for an ecosystem of AI risk assessment, assurance or audit.[70]  This reflects the demand for ways that other actors can check that risks have been appropriately considered and acted on, which in turn relies on methods for monitoring and communicating risks over time. Rather than delegating the task of evaluating risks to a single actor, an ecosystem of risk assessment empowers different actors to conduct and verify risk assessments using a range of different methods.

There are many emerging methods that are being proposed in legislation and experimented with by industry. Some of these methods are already in use (for example, transparency registers), while others are still emerging and are largely unaccounted for in national policy and regulatory proposals (for example, red teaming, documentation standards).

Audit and regulatory inspection

AI auditing is used to refer to a number of different practices that typically involve external scrutiny of an AI system or the processes around it.[71] These practices can be thought of as:

  • Technical audit: Originating in the computer science community, technical audits adopt the social science practice of an ‘audit study’ and apply it to algorithmic systems. This form of audit is a narrowly targeted test of a particular hypothesis about a system, usually by looking at its inputs and outputs – for instance, seeing if the system performs differently for different user groups.[72] These methods can be used as standalone audits within companies on their own systems, externally by researchers, journalists or activists, or as part of a compliance audit or regulatory inspection processes.
  • Compliance audit: This involves checking whether an AI development team has completed processes or met benchmarks sufficient to be compliant with legislation. This form of audit is emerging increasingly in regulation and is anticipated to be conducted by third-party auditors – as a similar process to audit in other fields, such as financial audit.
  • Regulatory inspection: Inspections are made by regulators, who have powers to investigate and test AI systems for monitoring, suspected noncompliance or verifying claims, such as in legislation on algorithms in social media platforms in the EU’s Digital Services Act or the UK’s Online Safety Bill.
  • Sociotechnical assessment: These processes have been referred to as ‘audit’, ‘sociotechnical audit’ or ‘internal audit’, although in practice they appear more similar to the impact assessment processes described above. They are sometimes carried out in combination with technical auditing approaches or compliance audit.

Each of these interpretations of ‘AI audit’ can serve an important function. Audits usually come into place after a system is in use, so can serve as accountability mechanisms to verify whether a system behaves as developers intend or claim, whether risk mitigations have been effective and to investigate whether unanticipated impacts have occurred.

Oversight bodies and ethics review committees

Independent oversight bodies have been used to oversee and direct the use of AI, particularly in the public sector, such as the West Midlands Police Data Ethics Committee.[73]  In academic AI research, ethics review committees can serve a similar function, and are increasingly being used in industry AI labs.[74] These bodies and committees are typically responsible for: monitoring the actions of project or research teams; reviewing proposals before research or projects are undertaken; and making recommendations, sanctions or decisions about how projects and research teams can develop, use or deploy an AI system.[75] They could be used to review or contribute to risk and impact assessments, and inform or lead decision-making based on identified risks.

Red teaming

Red teaming is an approach originating in computer security. It describes approaches where individuals or groups (the ‘red team’) are tasked with looking for errors, issues or faults with a system, by taking on the role of a bad actor and ‘attacking’ it. In the case of AI, it has increasingly been adopted as an approach to look for risks of harmful outputs from AI systems.[76]

For instance, AI research lab Anthropic recruited online workers to probe an AI chatbot, to try to discover and measure harmful outputs from language models, such as the chatbot recommending violent acts, or expressing hateful and/or racist statements.[77] However, this approach currently lacks standards and norms. There are risks to the workers recruited to red teams, particularly in red-teaming scenarios at scale, where there is a skew towards lower-paid crowd-workers.[78]

Safety checklists

Safety checklists have a history in engineering and manufacturing, but have also been seen applied to safety in medical settings and aviation.[79] Checklists can be used both to prompt or check completion of actions, but also to prompt discussion of risks.[80] For AI, safety checklists have been proposed to help teams consider a range of risks and ‘check’ that best practices have been followed across the AI lifecycle.[81]

The European Commission’s High Level Expert Group on AI has created an Assessment List for Trustworthy Artificial Intelligence that uses both a written and interactive checklist of prompts on a range of rights-based issues and expected actions.[82] Safety checklists could be used to set expectations and monitor completion of a range of risk assessment and mitigation tasks for AI systems, or as a starting point for a list of risks to consider. However, to date they offer little detail on how to assess, weigh or mitigate those risks, and so would need to be combined with other activities.

Model and dataset documentation methods

Good documentation of AI systems can help support appropriate use. Approaches to standardising documentation of how a system works include model cards (which document information about an AI system’s architecture, testing methods, and intended uses)[83] and datasheets (which document information about a dataset, including what kind of data is included, how it was collected, and how it was processed).[84] These documentation methods often include prompts asking developers of an AI system or dataset to consider and document the potential risks it may pose.

These tools recognise that AI models and datasets are often used by downstream deployers in an AI supply chain, who will need to understand technical details, development and data collection contexts, and risks that may only be known to upstream developers of that system or dataset. Model cards can include details of findings from risk assessments, while both model cards and datasheets could be useful in informing risk and impact assessments for AI systems that implement or use documented models or datasets.

Transparency registers

Where model cards and datasheets often focus on actors in the AI supply chain, there are also frequent calls for transparency about AI systems and their risks to end users, impacted groups and the wider public. In particular in the public sector, registers of AI systems have been proposed as a way to collate documentation about systems in use and make it available to the public.

In the UK, the Algorithmic Transparency Standard and the Algorithmic Transparency Recording Standard Hub are a step towards this, trialling a standard method for reporting on public-sector AI systems.[85] The Transparency Standard includes a section on risks, and requests for outputs from impact assessments such as AIAs, or data privacy impact assessments.

Registers have been trialled at city level in Amsterdam, Antibes, Barcelona, Brussels, Eindhoven, Lyon, Mannheim, Nantes, Ontario, Rotterdam, Sofia and Helsinki.[86] In Chile, pilots are moving towards a General Instruction on Algorithmic Transparency by the Chilean Transparency Council that will mandate what information about public-sector systems is to be made available.[87

Enabling an ecosystem of risk assessment

The relatively immature and fragmented landscape of AI risk-assessment methods presents the UK Government with an opportunity to lead in the development and standardisation of these practices. Some technology companies like Google and Microsoft have experimented with some of these methods in anticipation of forthcoming regulation requiring their use and it is likely that many UK technology companies are also considering the adoption of these mechanisms. There is an urgent need for the Government action to create a standardised method of assessment in coordination with other national bodies developing these methods.

What could the Government do to create an effective assessment ecosystem?

  • Create incentives for companies and third parties to assess risks from AI systems. Methods for AI risk assessments are not yet mainstream or default in AI system development or deployment. In the private sector, adoption is challenging to measure due to lack of public reporting. In the public sector there is some adoption or trialling of reporting mechanisms (such as the UK’s Algorithmic Transparency Standard, which includes requests for information about potential risks, and for links to results of impact assessments), but it is still sporadic. Many jurisdictions are looking to regulation to increase incentives to assess societal risks – and many actors are calling for mandates– from Canada’s mandated algorithmic impact assessment (AIA) for public sector agencies,[88] to fundamental rights impact assessments in the Netherlands. In other locations, such as Chile, there are moves to mandate transparency reporting which, if they include information about risk or risk assessment, could help incentivise risk assessment processes.[89] Other incentives could include introducing requirements as part of data-access processes and procurement requirements in the public sector, as well as strengthening government or regulatory advice around best practice in AI or what to look for when procuring AI systems. This might also include establishing prizes or competitions around risk-assessment methods or trials as direct drivers of examples of good practice.
  • Case studies of risk assessment methods in practice. There are still few published examples of risk assessments of real AI systems, which can make it hard to compare or evaluate risk-assessment methods, or to understand good practice. To improve this, more published case studies of algorithmic risk assessments are needed, documenting how the process changed or shaped the design, development and outcomes. This could be aided by collaboration with independent researchers and civil society to help conduct or evaluate this work.[90]
  • Standards for assessing risks. There is presently no consensus on standards for risk and impact assessments. This is understandable as these methods are still being trialled and developed, but until standards are agreed, there remains a risk that any AI developer can claim to have conducted an assessment with no guarantee or indication for the public as to what it entailed. There has also been discussion about mandated standards: if fundamental rights impact assessments (FRIAs) are brought in as part of the EU AI Act, there has been debate about how the details of these assessments would be established. Some have suggested technical standards bodies could develop these processes, but there is concern that these lack the required mix of skills and accountability to affected communities.[91]
  • Domain or sector-specific guidance on societal risks. The development of guidance in sector-specific areas – such as healthcare, social care, workforce management, recruitment – could help complement broader guidance or standards for using these methods. While many of the approaches examined above include prompts for different forms of societal risks, they will inherently be limited by the expertise of those designing the specific assessment processes. Broadly applicable AI risk and impact assessment methods could be tailored to sectors through additional guidance, prompts or adaptations of methods, as has already been seen in the case of healthcare, with the adaptation of the SMACTR framework, or AIAs to healthcare-specific use cases.
  • Skills and roles in the technology sector. The technology sector will need teams, roles and staff with the skills to conduct risk and impact assessments. In particular, many methods involve identifying and coordinating diverse stakeholders, and the use of participatory or deliberative methods which are not currently widespread in the technology sector, but are more established in other domains such as policy, design, academic sociology and anthropology.[92] Some of the skills of user research, which is more widely applicable in technology development, may be transferable to these methods, though the focus and intention of the role would be different.
  • Regulatory capacity. This will be important to support an ecosystem of risk assessment, and to deliver monitoring and investigation functions that help ensure the mitigation of risks over time.[93] In the UK, for instance, some regulators have had established capacity for this over a long period, such as the Competition and Markets Authority (CMA) and others such as Ofcom, which have recently been expanding to take on new regulatory responsibilities over online safety. However, some regulators that have expertise that is well-suited to considering societal risks are currently under-resourced to tackle questions of AI. They would need both increased resources and to be empowered to investigate risks and harms from AI systems.
  • Empowering third-party risk and impact assessors. Many of the most well-known and significant AI risk assessments to date have been conducted by third-party civil society groups, academics and companies that evaluate a system’s impacts without the permission of the company.[94] For example, evaluations of bias and the transferring of sensitive medical data by Facebook (Meta) have largely been conducted by third-party researchers at organisations like The Markup.[95] Third-party assessors are independent, and can bring local context or consideration to the evaluation of a system’s impacts. However, third parties often lack access or information about emerging AI systems, and may not be well resourced to conduct these kinds of assessments. In emerging regulation, governments can empower third parties to have greater mandates to access critical data or technical information about an AI system that can enable this kind of assessment of risks.

Further questions

This section briefly outlines further questions or opportunities for research.

  • What kinds of risk assessment methods would work well for the UK’s public sector?
  • What specific kinds of support do third-party assessors need to better conduct their assessments?
  • How often should companies developing or using AI undertake risk assessments?
  • Drawing on lessons from other fields, how effective is making risk and impact assessments publicly available at improving transparency and public trust?
  • What kinds of standards and professional practices are needed to create an ecosystem of risk assessment of AI?
  • Should the public and private sectors have the same obligations to undertake risk assessments of AI systems?
  • How can risk assessment methods involve those affected or impacted by AI systems?
  • What lessons can the UK learn from other national risk-assessment requirements, such as the Netherlands’ fundamental rights algorithmic impact assessment (FRAIA) and Canada’s algorithmic impact assessment (AIA)?
  • What role can risk assessments play in the public-sector procurement process in the UK? What methods would work best?

Methodology

This report surveys approaches for assessing risks that AI systems pose for people and society – in practice, on the ground within AI project teams and in emerging legislation.

The findings of this report are based on a desk-based review and synthesis of grey and academic literature on approaches to assessing AI risk. Relevant literature was identified through keyword searching of academic and grey literature databases, and snowball sampling through the community of practitioners working on AI risk management.

The report is also informed by policy analysis of draft legislation related to governance of AI and algorithmic systems, primarily in a UK and European context, and focused on requirements for anticipating risks or impacts of such systems. This analysis is limited to legislation drafted – or with documentation available – in English, and the research team acknowledges that it would benefit from further work considering wider linguistic, geographic and political contexts.

Partner information and acknowledgements

This report was authored by Jenny Brennan, with substantive contributions from Lara Groves, Elliot Jones and Andrew Strait.

This work was funded by BRAID, a UK-wide programme dedicated to integrating arts and humanities research more fully into the responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI. BRAID is funded by the Arts and Humanities Research Council (AHRC). Funding reference: Arts and Humanities Research Council grant number AH/X007146/1.

This work was undertaken with support via UKRI by the Department for Digital, Culture, Media & Sport (DCMS) Science and Analysis R&D Programme. It was developed and produced according to UKRI’s initial hypotheses and output requests. Any primary research, subsequent findings or recommendations do not represent DCMS views or policy and are produced according to academic ethics, quality assurance and independence.


Footnotes

[1] Ian Brown, Allocating Accountability in AI Supply Chains (Ada Lovelace Institute 2023) <https://www.adalovelaceinstitute.org/resource/ai-supply-chains/>

[2] Joint Task Force Transformation Initiative, (2018), Risk management framework for information systems and organizations:: a system life cycle approach for security and privacy, NIST SP 800-37r2. Gaithersburg, MD: National Institute of Standards and Technology, p. 104, doi: 10.6028/NIST.SP.800-37r2;

[3] Slaughter, Kopec and Batal, (2021), Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, https://yjolt.org/sites/default/files/23_yale_j.l._tech._special_issue_1.pdf (Accessed: 30 January 2023);

[4] Weidinger, L. et al., (2022), ‘Taxonomy of Risks posed by Language Models’, 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, USA), https://doi.org/10.1145/3531146.3533088

[5] Dafoe and Zwetsloot, (2019), ‘Thinking About Risks From AI: Accidents, Misuse and Structure’, https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure (Accessed: 16 March 2023);

[6] Slaughter, Kopec and Batal, (2021), Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, https://yjolt.org/sites/default/files/23_yale_j.l._tech._special_issue_1.pdf (Accessed: 30 January 2023);

[7] Galaz, V. et al., (2021) ‘Artificial intelligence, systemic risks, and sustainability’, Technology in Society, https://doi.org/10.1016/j.techsoc.2021.101741.

[8] Weidinger, L. et al. (2022), ‘Taxonomy of Risks posed by Language Models’, 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, https://doi.org/10.1145/3531146.3533088

[9]  Widder, D.G. et al., (2022), ‘Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes’, 2022 ACM Conference on Fairness, Accountability, and Transparency. New York, NY, https://doi.org/10.1145/3531146.3533779.

[10]  Slaughter, Kopec and Batal, (2021), Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, https://yjolt.org/sites/default/files/23_yale_j.l._tech._special_issue_1.pdf (Accessed: 30 January 2023);

[11]  Galaz, Centeno, Callahan, Causevic, Patterson, Brass, Baum, Farber, … Levy, (2021), ‘Artificial intelligence, systemic risks, and sustainability’, doi: 10.1016/j.techsoc.2021.101741;

[12] Treasury Board of Canada Secretariat (2019) Directive on Automated Decision-Making, https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592#cha6 (Accessed: 21 February 2023).

[13] European Commission (2021) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 (Accessed: 21 March 2023).

[14] Michelle Donelan MP and Lord Parkinson of Whitley Bay, (2023), Online Safety Bill, https://bills.parliament.uk/bills/3137 (Accessed: 27 March 2023);

[15] Sakiotis, Meneses and Quathem, (2023), ‘Brazil’s Senate Committee Publishes AI Report and Draft AI Law’, https://www.insideprivacy.com/emerging-technologies/brazils-senate-committee-publishes-ai-report-and-draft-ai-law/ (Accessed: 28 March 2023);

[16] Clarke, (2022), Text – H.R.6580 – 117th Congress (2021-2022): Algorithmic Accountability Act of 2022, http://www.congress.gov/ (Accessed: 27 March 2023);

[17] Moss, Watkins, Metcalf and Elish, (2020), ‘Governing with Algorithmic Impact Assessments: Six Observations’, SSRN Scholarly Paper 3584818. Rochester, NY, doi: 10.2139/ssrn.3584818;

[18] Kotval and Mullin, (2006), Fiscal Impact Analysis: Methods, Cases, and Intellectual Debate, Lincoln Institute of Land Policy, https://www.lincolninst.edu/sites/default/files/pubfiles/kotval-wp06zk2.pdf (Accessed: 17 March 2023);

[19] Treasury Board of Canada Secretariat, (2021), Algorithmic Impact Assessment Tool, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (Accessed: 21 February 2023);

[20] Ada Lovelace Institute, (2022), Algorithmic impact assessment: a case study in healthcare, https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ (Accessed: 19 April 2022);

[21] Treasury Board of Canada Secretariat, (2021), Algorithmic Impact Assessment Tool, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (Accessed: 21 February 2023);

[22] Ada Lovelace Institute, (2022), Algorithmic impact assessment: a case study in healthcare, https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ (Accessed: 19 April 2022);

[23] BSR, (2019), Google Celebrity Recognition API HRIA Executive Summary, https://www.bsr.org/reports/BSR-Google-CR-API-HRIA-Executive-Summary.pdf (Accessed: 26 February 2023);

[24] Reisman, Schultz and Whittaker, (2018), Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute, https://ainowinstitute.org/aiareport2018.pdf;

[25] Ada Lovelace Institute, (2022), Algorithmic impact assessment: a case study in healthcare, https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ (Accessed: 19 April 2022);

[26] Treasury Board of Canada Secretariat, (no date), Open Government Portal, https://search.open.canada.ca/opendata/?collection=aia&page=1&sort=date_modified+desc (Accessed: 21 February 2023);

[27] Barnes and Schwartz, (2019), Celebrity Recognition now available to approved media & entertainment customers, https://cloud.google.com/blog/products/ai-machine-learning/celebrity-recognition-now-available-to-approved-media-entertainment-customers (Accessed: 26 February 2023);

[28]  Treasury Board of Canada Secretariat, (2021), Algorithmic Impact Assessment Tool, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (Accessed: 21 February 2023);

[29] Treasury Board of Canada Secretariat, (2021),

[30]  Treasury Board of Canada Secretariat, (2019), Directive on Automated Decision-Making, https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592#cha6 (Accessed: 21 February 2023);

[31] Treasury Board of Canada Secretariat, (2021), Algorithmic Impact Assessment Tool, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (Accessed: 21 February 2023);

[32]  Treasury Board of Canada Secretariat, (no date), Open Government Portal, https://search.open.canada.ca/opendata/?collection=aia&page=1&sort=date_modified+desc (Accessed: 21 February 2023);

[33] Treasury Board of Canada Secretariat, (2019), Directive on Automated Decision-Making, https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592#cha6 (Accessed: 21 February 2023);

[34] Zaken, (2022), Impact Assessment Fundamental Rights and Algorithms – Report – Government.nl, https://www.government.nl/documents/reports/2022/03/31/impact-assessment-fundamental-rights-and-algorithms (Accessed: 21 February 2023);

[35] Dutch Algorithmic Transparency Standard, (no date), https://standaard.algoritmeregister.org/ (Accessed: 17 March 2023);

[36] Bertuzzi, (2023), AI Act: MEPs want fundamental rights assessments, obligations for high-risk users, https://www.euractiv.com/section/artificial-intelligence/news/ai-act-meps-want-fundamental-rights-assessments-obligations-for-high-risk-users/ (Accessed: 21 February 2023);

[37] Allison-Hope, Darnton and Lee, (2019), Google’s Human Rights by Design | Blog | Sustainable Business Network and Consultancy | BSR, https://www.bsr.org/en/blog/google-human-rights-impact-assessment-celebrity-recognition (Accessed: 27 February 2023);

[38] Nonnecke and Dawson, (2022), Human rights impact assessments for AI: analysis and recommendations, Access Now, https://www.accessnow.org/cms/assets/uploads/2022/11/Access-Now-Version-Human-Rights-Implications-of-Algorithmic-Impact-Assessments_-Priority-Recommendations-to-Guide-Effective-Development-and-Use.pdf;

[39] Allison-Hope, (2018), Our Human Rights Impact Assessment of Facebook in Myanmar | Blog | Sustainable Business Network and Consultancy | BSR, https://www.bsr.org/en/blog/facebook-in-myanmar-human-rights-impact-assessment (Accessed: 17 March 2023);

[40] Microsoft, (2022), Microsoft Responsible AI Impact Assessment Guide, Microsoft, https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-RAI-Impact-Assessment-Guide.pdf (Accessed: 27 February 2023);

[41] Microsoft, (2022), ‘Microsoft Responsible AI Standard v2 General Requirements’, https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf;

[42] Microsoft, (2022), Responsible AI principles from Microsoft, https://www.microsoft.com/en-us/ai/responsible-ai (Accessed: 21 February 2023);

[43] Microsoft, (2022),

[44] Leslie, Burr, Aitken, Katell, Briggs and Rincon, (2022),

[45] Leslie, Burr, Aitken, Katell, Briggs and Rincon, (2022), Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal, doi: 10.5281/zenodo.5981676;

[46] Leslie, Burr, Aitken, Katell, Briggs and Rincon, (2022),

[47] Council of Europe Committee on Artificial Intelligence, (2023), Revised zero draft [framework] convention on artificial intelligence, human rights, democracy and the rule of law, https://rm.coe.int/cai-2023-01-revised-zero-draft-framework-convention-public/1680aa193f (Accessed: 27 February 2023);

[48] European Commission, (2022), Recommendation for a COUNCIL DECISION authorising the opening of negotiations on behalf of the European Union for a Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52022PC0414 (Accessed: 27 February 2023);

[49] Leslie, (2019), Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector, The Alan Turing Institute, doi: 10.5281/zenodo.3240529;

[50] Ada Lovelace Institute, (2022), Algorithmic impact assessment: a case study in healthcare, https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ (Accessed: 19 April 2022);

[51] Department of Health and Social Care, (2022), UK to pilot world-leading approach to improve ethical adoption of AI in healthcare, https://www.gov.uk/government/news/uk-to-pilot-world-leading-approach-to-improve-ethical-adoption-of-ai-in-healthcare (Accessed: 26 February 2023);

[52] Wright, Mac and Clark, (2022), ‘Implementing Algorithmic Governance: Clarifying Impact Assessments Through Mock Exercises’, doi: 10.2139/ssrn.4349890;

[53] Raji, Smart, White, Mitchell, Gebru, Hutchinson, Smith-Loud, Theron and Barnes, (2020), ‘Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing’, arXiv:2001.00973. arXiv, http://arxiv.org/abs/2001.00973 (Accessed: 27 March 2023);

[54] Liu, Glocker, McCradden, Ghassemi, Denniston and Oakden-Rayner, (2022), ‘The medical algorithmic audit’, doi: 10.1016/S2589-7500(22)00003-6;

[55] Tabassi, (2023), AI Risk Management Framework: AI RMF (1.0), error:  NIST AI 100-1. Gaithersburg, MD: National Institute of Standards and Technology, doi: 10.6028/NIST.AI.100-1;

[56] AI RMF Playbook, (2023), https://pages.nist.gov/AIRMF/manage/ (Accessed: 27 February 2023);

[57] ICO, (2023), AI and data protection risk toolkit, https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/ (Accessed: 27 February 2023);

[58] BSR, (2019), Google Celebrity Recognition API HRIA Executive Summary, https://www.bsr.org/reports/BSR-Google-CR-API-HRIA-Executive-Summary.pdf (Accessed: 26 February 2023);

[59] Barnes and Schwartz, (2019), Celebrity Recognition now available to approved media & entertainment customers, https://cloud.google.com/blog/products/ai-machine-learning/celebrity-recognition-now-available-to-approved-media-entertainment-customers (Accessed: 26 February 2023);

[60] BSR, (2019), Google Celebrity Recognition API HRIA Executive Summary, https://www.bsr.org/reports/BSR-Google-CR-API-HRIA-Executive-Summary.pdf (Accessed: 26 February 2023);

[61] Allison-Hope, Darnton and Lee, (2019), Google’s Human Rights by Design | Blog | Sustainable Business Network and Consultancy | BSR, https://www.bsr.org/en/blog/google-human-rights-impact-assessment-celebrity-recognition (Accessed: 27 February 2023);

[62] Barnes and Schwartz, (2019), Celebrity Recognition now available to approved media & entertainment customers, https://cloud.google.com/blog/products/ai-machine-learning/celebrity-recognition-now-available-to-approved-media-entertainment-customers (Accessed: 26 February 2023);

[63] Department of Health and Social Care, (2022), UK to pilot world-leading approach to improve ethical adoption of AI in healthcare, https://www.gov.uk/government/news/uk-to-pilot-world-leading-approach-to-improve-ethical-adoption-of-ai-in-healthcare (Accessed: 26 February 2023);

[64] Ada Lovelace Institute, (2022), Algorithmic impact assessment: a case study in healthcare, https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ (Accessed: 19 April 2022);

[65] Engler, (2023), ‘How California and other states are tackling AI legislation’, https://www.brookings.edu/blog/techtank/2023/03/22/how-california-and-other-states-are-tackling-ai-legislation/ (Accessed: 28 March 2023);

[66]  Sakiotis, Meneses and Quathem, (2023), ‘Brazil’s Senate Committee Publishes AI Report and Draft AI Law’, https://www.insideprivacy.com/emerging-technologies/brazils-senate-committee-publishes-ai-report-and-draft-ai-law/ (Accessed: 28 March 2023);

[67] Treasury Board of Canada Secretariat, (2021), Algorithmic Impact Assessment Tool, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (Accessed: 21 February 2023);

[68] How we are approaching online safety risk assessments, (2023), https://www.ofcom.org.uk/news-centre/2023/how-we-are-approaching-online-safety-risk-assessments (Accessed: 28 March 2023);

[69] Information Commissioner’s Office, (2022), Data protection impact assessments, https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/ (Accessed: 28 March 2023);

[70] Centre for Data Ethics and Innovation, (2021), The roadmap to an effective AI assurance ecosystem, https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem/the-roadmap-to-an-effective-ai-assurance-ecosystem (Accessed: 17 March 2023); Costanza-Chock, Raji and Buolamwini, (2022), ‘Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem’, doi: 10.1145/3531146.3533213; Digital Regulation Cooperation Forum, (2022), Auditing algorithms: the existing landscape, role of regulators and future outlook, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1071554/DRCF_Algorithmic_audit.pdf;

[71] Eticas Consulting, (2021), Guide to Algorithmic Auditing, https://www.eticasconsulting.com/wp-content/uploads/2021/04/Guide-to-Algorithmic-Auditing-English-Final-ALL-MZ-version-7.pdf (Accessed: 17 March 2023); Ada Lovelace Institute, (2021), Technical methods for regulatory inspection of algorithms in social media platforms, Ada Lovelace Institute, https://www.adalovelaceinstitute.org/wp-content/uploads/2021/12/ADA_Technical-methods-regulatory-inspection_report.pdf (Accessed: 1 February 2023); Brown, Davidovic and Hasan, (2021), ‘The algorithm audit: Scoring the algorithms that score us’, SAGE Publications Ltd, doi: 10.1177/2053951720983865; Costanza-Chock, Raji and Buolamwini, (2022), ‘Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem’, doi: 10.1145/3531146.3533213; Raji, Xu, Honigsberg and Ho, (2022), ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’, doi: 10.1145/3514094.3534181;

[72] Ada Lovelace Institute and DataKind UK, (2020), Examining the Black Box: Tools for assessing algorithmic systems, https://www.adalovelaceinstitute.org/wp-content/uploads/2020/04/Ada-Lovelace-Institute-DataKind-UK-Examining-the-Black-Box-Report-2020.pdf (Accessed: 1 February 2023);

[73] West Midlands Police & Crime Commissioner, (no date), ‘Ethics Committee’, https://www.westmidlands-pcc.gov.uk/ethics-committee/ (Accessed: 27 February 2023);

[74] Ada Lovelace Institute, (2022), Looking before we leap, https://www.adalovelaceinstitute.org/report/looking-before-we-leap/ (Accessed: 27 February 2023);

[75] Ada Lovelace Institute, AI Now Institute, and Open Government Partnership, (2021), Algorithmic accountability for the public sector, https://www.opengovpartnership.org/documents/ algorithmic-accountability-public-sector/;

[76] Brundage, Avin, Wang, Belfield, Krueger, Hadfield, Khlaaf, Yang, … Anderljung, (2020), ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’, arXiv:2004.07213. arXiv, http://arxiv.org/abs/2004.07213 (Accessed: 13 February 2023);

[77] Ganguli, Lovitt, Kernion, Askell, Bai, Kadavath, Mann, Perez, … Clark, (2022), ‘Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned’, arXiv:2209.07858. arXiv, doi: 10.48550/arXiv.2209.07858;

[78] Diaz, Kivlichan, Rosen, Baker, Amironesei, Prabhakaran and Denton, (2022), ‘CrowdWorkSheets: Accounting for Individual and Collective Identities Underlying Crowdsourced Dataset Annotation’, doi: 10.1145/3531146.3534647;

[79] Madaio, Stark, Wortman Vaughan and Wallach, (2020), ‘Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI’, doi: 10.1145/3313831.3376445;

[80] Madaio, Stark, Wortman Vaughan and Wallach, (2020),

[81] Madaio, Stark, Wortman Vaughan and Wallach, (2020),

[82] High-Level Expert Group on Artificial Intelligence, (2020), Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment, https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment (Accessed: 26 February 2023);

[83] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596;

[84] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daum and Crawford, (2021), Datasheets for Datasets, https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract (Accessed: 27 February 2023) Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918;

[85] Ada Lovelace Institute, AI Now Institute, and Open Government Partnership, (2021), Algorithmic accountability for the public sector, https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/; Domagala and Spiro, (2021), ‘Engaging with the public about algorithmic transparency in the public sector’, https://cdei.blog.gov.uk/2021/06/21/engaging-with-the-public-about-algorithmic-transparency-in-the-public-sector/ (Accessed: 1 February 2023);

[86] Ada Lovelace Institute, AI Now Institute, and Open Government Partnership, (2021), Algorithmic accountability for the public sector, https:// www.opengovpartnership.org/documents/ algorithmic-accountability-public-sector/; Algorithm Register – Algorithmic Transparency Standard, (no date), https://www.algorithmregister.org/ (Accessed: 27 February 2023);

[87] Algoritmos Públicos – GobLab UAI, (no date), https://www.algoritmospublicos.cl/ (Accessed: 27 February 2023); Consejo para la Transparencia, (2022), ‘Consejo para la Transparencia y Universidad Adolfo Ibáñez lideran piloto en organismos públicos para inédita normativa en transparencia algorítmica de América Latina’, https://www.consejotransparencia.cl/consejo-para-la-transparencia-y-universidad-adolfo-ibanez-lideran-piloto-en-organismos-publicos-para-inedita-normativa-en-transparencia-algoritmica-de-america-latina/ (Accessed: 27 February 2023);

[88] Moss, Watkins, Metcalf and Elish, (2020), ‘Governing with Algorithmic Impact Assessments: Six Observations’, SSRN Scholarly Paper 3584818. Rochester, NY, doi: 10.2139/ssrn.3584818;

[89] Consejo para la Transparencia, (2022), ‘Consejo para la Transparencia y Universidad Adolfo Ibáñez lideran piloto en organismos públicos para inédita normativa en transparencia algorítmica de América Latina’, https://www.consejotransparencia.cl/consejo-para-la-transparencia-y-universidad-adolfo-ibanez-lideran-piloto-en-organismos-publicos-para-inedita-normativa-en-transparencia-algoritmica-de-america-latina/ (Accessed: 27 February 2023);

[90] Ada Lovelace Institute and DataKind UK, (2020), Examining the Black Box: Tools for assessing algorithmic systems, https://www.adalovelaceinstitute.org/wp-content/uploads/2020/04/Ada-Lovelace-Institute-DataKind-UK-Examining-the-Black-Box-Report-2020.pdf (Accessed: 1 February 2023);

[91] Ada Lovelace Institute, (2023), Inclusive AI governance: Civil society participation in standards development, https://www.adalovelaceinstitute.org/report/inclusive-ai-governance/;

[92] Costanza-Chock, Raji and Buolamwini, (2022), ‘Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem’, doi: 10.1145/3531146.3533213;

[93] Ada Lovelace Institute, (2021), Regulate to innovate, https://www.adalovelaceinstitute.org/report/regulate-innovate/ (Accessed: 1 February 2023); Aitken, Leslie, Ostmann, Pratt, Margetts and Dorobantu, (2022), ‘Common Regulatory Capacity for AI’, doi: https://doi.org/10.5281/zenodo.6838946; Costanza-Chock, Raji and Buolamwini, (2022), ‘Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem’, doi: 10.1145/3531146.3533213; Digital Regulation Cooperation Forum, (2022), Auditing algorithms: the existing landscape, role of regulators and future outlook, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1071554/DRCF_Algorithmic_audit.pdf;

[94] Costanza-Chock, Raji and Buolamwini, (2022), ‘Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem’, doi: 10.1145/3531146.3533213;

[95] Feathers, Fondrie-Teitler, Waller and Mattu, (2022), Facebook Is Receiving Sensitive Medical Information from Hospital Websites – The Markup, https://themarkup.org/pixel-hunt/2022/06/16/facebook-is-receiving-sensitive-medical-information-from-hospital-websites (Accessed: 27 March 2023);

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

Executive summary

What can foundation model oversight learn from the US Food and Drug Administration (FDA)?

In the last year, policymakers around the world have grappled with the challenge of how to regulate and govern foundation models – artificial intelligence (AI) models like OpenAI’s GPT-4 that are capable of a range of general tasks such as text synthesis, image manipulation and audio generation. Policymakers, civil society organisations and industry practitioners have expressed concerns about the reliability of foundation models, the risk of misuse of their powerful capabilities and the systemic risks they could pose as more and more people begin to use them in their daily lives.

Many of these risks to people and society – such as the potential for powerful and widely used AI systems to discriminate against particular demographics, or to spread misinformation more widely and easily – are not new, but foundation models have some novel features that could greatly amplify the potential harms.

These features include their generality and ability to complete range of tasks; the fact that they are ‘built on’ for a wide range of downstream applications, creating a risk that a single point of failure could lead to networked catastrophic consequences; fast and (sometimes) unpredictable jumps in their capabilities and behaviour, which make it harder to foresee harm; and their wide-scale accessibility, which puts powerful AI capabilities in the hands of a much larger number of people.

Both the UK and US governments have released voluntary commitments for developers of these models, and the EU’s AI Act includes some stricter requirements for models before they can be sold on the market. The US Executive Order on AI also includes some obligations on some developers of foundation models to test their systems for certain risks.[1] [2]

Experts agree that foundation models need additional regulatory oversight due to their novelty, complexity and lack of clear safety standards. Oversight needs to enable learning about risks, and to ensure iterative updates to safety assessments and standards.

Notwithstanding the unique features of foundation models, this is not the first time that regulators have grappled with how to regulate complex, novel technologies that raise a variety of sociotechnical risks.[3] One area where this challenge already exists is in life sciences. Drug and medical device regulators have a long history of applying a rigorous oversight process to novel, groundbreaking and experimental technologies that – alongside their possible benefits – could present potentially severe consequences for people and society.

This paper draws on interviews with 20 experts and a literature review to examine the suitability and applicability of the US Food and Drug Administration (FDA) oversight model to foundation models. It explores the similarities and differences between medical devices and foundation models, the limitations of the FDA model as applied to medical devices, and how the FDA’s governance framework could be applied to the governance of foundation models.

This paper highlights that foundation models may pose risks to the public that are similar to — or even greater than — Class III medical devices (the FDA’s highest risk category). To begin to address the mitigation of these risks through the lens of the FDA model, the paper lays out general principles to strengthen oversight and evaluation of the most capable foundation models, along with specific recommendations for each layer in the supply chain.

This report does not address questions of international governance implications, the political economy of the FDA or regulating AI in medicine specifically. Rather, this paper seeks to answer a simple question: when designing the regulation of complex AI systems, what lessons and approaches can regulators draw on from medical device regulation?

A note on terminology

Regulation refers to the legally binding rules that govern the industry, setting the standards, requirements and guidelines that must be complied with.

Oversight refers to the processes of monitoring and enforcing compliance with regulations, for example through audits, reporting requirements or investigations.

What is FDA oversight?

With more than one hundred years’ history, a culture of continuous learning, and increasing authority, the FDA is a long-established regulator, with FDA-regulated products accounting for about 20 cents of every dollar spent by US consumers.

The FDA regulates drugs and medical devices by assigning them a specific risk level corresponding to how extensive subsequent evaluations, inspections and monitoring will be at different stages of development and deployment. The more risky and more novel a product, the more tests, evaluation processes and monitoring it will undergo.

The FDA does this by providing guidance and setting requirements for drug and device developers to follow, including regulatory approval of any protocols the developer will use for testing, and evaluating the safety and efficacy of the product.

The FDA has extensive auditing powers, with the ability to inspect drug companies’ data, processes and systems at will. It also requires companies to report incidents, failures and adverse impacts to a central registry. There are substantial fines for failing to follow appropriate regulatory guidance, and the FDA has a history of enforcing these sanctions.

Core risk-reducing aspects of FDA oversight

  • Risk- and novelty-driven oversight: The riskier and more novel a product, the more tests, evaluation processes and monitoring there will be.
  • Continuous, direct engagement with developers from development through to market: Developers must undergo a rigorous testing process through a protocol agreed with the FDA.
  • Wide-ranging information access: The FDA has statutory powers to access comprehensive information, for example, clinical trial results and patient data.
  • Burden of proof on developers: Developers must demonstrate the efficacy and safety of a drug or medical device at various ‘approval gates’ before the product can be tested on humans or be sold on a market.
  • Balancing innovation with efficacy and safety: This builds acceptance for the FDA’s regulatory authority.

How suitable is FDA-style oversight for foundation models?

Our findings show that foundation models are at least as complex as and more novel than FDA Class III medical devices (the highest risk category), and that the risks they pose are potentially just as severe.[4][5][6] Indeed, the fact that these models are deployed across the whole economy, interacting with millions of people, means that they are likely to pose systemic risks far beyond those of Class III medical devices.[7] However, the exact risks of these models are so far not fully clear. Risk mitigation measures are uncertain and risk modelling is poor or non-existent.

The regulation of Class III medical devices offers policymakers valuable insight into how they might regulate foundation models, but it is also important that they are aware of the limitations.

Limitations of FDA-style oversight for foundation models

  • High cost of compliance: A high cost of compliance could limit the number of developers, which may benefit existing large companies. Policymakers may need to consider less restrictive requirements for smaller companies that have fewer users, coupled with support for such companies in compliance and via streamlined regulatory pathways.
  • Limited range of risks assessed: The FDA model may not be able to fully address the systemic risks and the risks of unexpected capabilities associated with foundation models. Medical devices are not general purpose, and the FDA model therefore largely assesses efficacy and safety in narrow contexts. Policymakers may need to create new, exploratory methods for assessing some types of risk throughout the foundation model supply chain, which may require increased post-market monitoring obligations.
  • Overreliance on industry: Regulatory agencies like the FDA sometimes need industry expertise, especially in novel areas where clear benchmarks have not yet been developed and knowledge is concentrated in industry. Foundation models present a similar challenge. This could raise concerns around regulatory capture and conflicts of interest. An ecosystem of independent academic and governmental experts needs to be built up to support balanced, well-informed oversight of foundation models, with clear mechanisms for those impacted by AI technologies to contribute. This could be at the design and development stage, eliciting feedback from pre-market ‘sandboxing’, or through market approval processes (under the FDA regime, patient representatives have a say in this process). At any step in the process, consideration should be given to who is involved (this could range from a representative panel to a jury of members of the public), the depth of engagement (from public consultations through to partnership decision-making), and methods (for example, from consultative exercises such as focus groups, to panels and juries for deeper engagement).

General principles for AI regulators

To strengthen oversight and evaluations of the most capable foundation models (for example, OpenAI’s GPT-4), which currently lag behind FDA oversight in aspects of risk-reducing external scrutiny:

  1. Establish continuous, risk-based evaluations and audits throughout the foundation model supply chain.
  2. Empower regulatory agencies to evaluate critical safety evidence directly, supported by a third-party ecosystem – consistently proven higher quality than self- or second-party evaluations across industries.
  3. Ensure independence of regulators and external evaluators, through mandatory industry fees and a sufficient budget for regulators that contract third parties. While existing sector-specific regulators, for example, the Consumer Financial Protection Bureau (CFPB) in the USA, may review downstream AI applications, there might be a need for an upstream regulator of foundation models themselves. The level of funding for such a regulator would need to be similar to that of other safety-critical domains, such as medicine.
  4. Enable structured access to foundation models and adjacent components for evaluators and civil society. This will help ensure the technology is designed and deployed in a manner that meets the needs of the people who are impacted by its use, and enable methods to offer accountability mechanisms if it is not
  5. Enforce a foundation model pre-approval process, shifting the burden of proof to developers.

Recommendations for AI regulators, developers and deployers

Data and compute layers oversight

  1. Regulators should compel pre-notification of, and information-sharing on, large training runs.
  2. Regulators should compel mandatory model and dataset documentation and disclosure for the pre-training and fine-tuning of foundation models,[8] [9] [10] including a capabilities evaluation and risk assessment within the model card for the (pre-) training stage and post-market.

Foundation model layer oversight

  1. Regulators should introduce a pre-market approval gate for foundation models, as this is the most obvious point at which risks can proliferate. In any jurisdiction, defining the approval gate will require significant work, with input from all relevant stakeholders. In critical or high-risk areas, depending on the jurisdiction and existing or foreseen pre-market approval for high-risk use, regulators should introduce an additional approval gate at the application layer of the supply chain.
  2. Third-party audits should be required as part of the pre-market approval process, and sandbox testing in real-world conditions should be considered.
  3. Developers should enable detection mechanisms for the outputs of generative foundation models.
  4. As part of the initial risk assessment, developers and deployers should document and share planned and foreseeable modifications throughout the foundation model’s supply chain.
  5. Foundation model developers, and high-risk application providers building on top of these models, should enable an easy complaint mechanism for users to swiftly report any serious risks that have been identified.

Application layer oversight

  1. Existing sector-specific agencies should review and approve the use of foundation models for a set of use cases, by risk level.
  2. Downstream application providers should make clear to end users and affected persons what the underlying foundation model is, including if it is an open-source model, and provide easily accessible explanations of systems’ main parameters and any opt-out mechanisms or human alternatives available.

Post-market monitoring

  1. An AI ombudsman should be considered, to take and document complaints or known instances of harms of AI. This should be complimented by a comprehensive remedies framework for affected persons based on clear avenues for redress.
  2. Developers and downstream deployers should provide documentation and disclosure of incidents throughout the supply chain, including near misses. This could be strengthened by requiring downstream developers (building on top of foundation models at the application layer) and end users (for example, medical or education professionals) to also disclose incidents.
  3. Foundation model developers, downstream deployers and hosting providers (for example GitHub or Hugging Face) should be compelled to restrict, suspend or retire a model from active use if harmful impacts, misuse or security vulnerabilities (including leaks or otherwise unauthorised access) arise.
  4. Host layer actors (for example cloud service providers or model hosting platforms) should also play a role in evaluating model usage and implementing trust and safety policies to remove harmful models that have demonstrated or are likely to demonstrate serious risks, and flagging harmful models to regulators when it is not in their power to take them down.
  5. AI regulators should have strong powers to investigate and require evidence generation from foundation model developers and downstream deployers. This should be strengthened by whistleblower protections for any actor involved in development or deployment who raises concerns about risks to health or safety.
  6. Any regulator should be funded to a level comparable to (if not greater than) regulators in other domains where safety and public trust are paramount and where underlying technologies form part of national infrastructure – such as civil nuclear, civil aviation, medicines, or road and rail.[11] Given the level of resourcing required, this may be partly funded by AI developers over a certain threshold.
  7. The law around AI liability should be clarified to ensure that legal and financial liability for AI risk is distributed proportionately along foundation model supply chains.

Introduction

As governments around the world consider the regulation of artificial intelligence (AI), many experts are suggesting that lessons should be drawn from other technology areas. The US Food and Drug Administration (FDA) and its approval process for drug development and medical devices is one of the most cited areas in this regard.

This paper seeks to understand if and how FDA-style oversight could be applied to AI, and specifically to foundation models, given their complexity, novelty and potentially severe risk profile – each of which arguably exceeds those of the products regulated by the FDA.

This paper first maps the FDA review process for Class III medical software, to identify both the risk-reducing features and the limitations of FDA-style oversight. It then considers the suitability and applicability of FDA processes to foundation models and suggests how FDA risk-reducing features could be applied across the foundation model supply chain. It concludes with actionable recommendations for policymakers.

What are foundation models?

Foundation models are a form of AI system capable of a range of general tasks, such as text synthesis, image manipulation and audio generation.[12] Notable examples include OpenAI’s GPT-4 – which has been used to create products such as ChatGPT – and Anthropic’s Claude 2.

Advances in foundation models raise concerns about reliability, misuse, systemic risks and serious harms. Developers and researchers of foundation models have highlighted that their wide range of capabilities and unpredictable behaviours[13] could pose a series of risks, including:

  • Accidental harms: Foundation models can generate confident but factually incorrect statements, which could exacerbate problems of misinformation. In some cases this could have potentially fatal consequences, for example, if someone is misled into eating something poisonous or taking the wrong medication.[14] [15]
  • Misuse harms: These models could enable actors to intentionally cause harm, from harassment[16] through to cybercrime at a greater scale[17] or biosecurity risks.[18] [19]
  • Structural or systemic harms: If downstream developers increasingly rely on foundation models, this creates a single point of dependency on a model, raising security risks.[20] It also concentrates market power over cutting-edge foundation models as few private companies are able to develop foundation models with hundreds of millions of users.[21] [22] [23]
  • Supply chain harms: These are harms involving the processes and inputs used to develop AI, such as poor labour practices, environmental impacts and the inappropriate use of personal data or protected intellectual property.[24]

Context and environment

Experts agree that foundation models are a novel technology in need of additional oversight. This sentiment was shared by industry, civil society and government experts at an Ada Lovelace Institute roundtable on standards-setting held in May 2023. Attendees largely agreed that foundation models represent a ‘novel’ technology without an established ‘state of the art’ for safe development and deployment.

This means that additional oversight mechanisms may be needed, such as testing the models in a ‘sandbox’ environment or regular audits and evaluations of a model’s performance before and after its release (similar to the approach to the testing, approval and monitoring approaches in public health). Such mechanisms would enable greater transparency and accessibility for actors with incentives more aligned with societal interest in assessing (second order) effects on people.[25]

Crafting AI regulation is a priority for governments worldwide. In the last three years, national governments across the world have sought to draft legislation to regulate the development and deployment of AI in different sectors of society.

The European AI Act takes a risk-based approach to regulation, with stricter requirements applying to AI models and systems that pose a high risk to health, safety or fundamental rights. In contrast, the UK has proposed a principles-based approach, calling for existing individual regulators to regulate AI models through an overarching set of principles.

Policymakers in the USA have proposed a different approach in the Algorithm Accountability Act,[26] which would create a baseline requirement for companies building foundation models and AI systems to assess the impacts of ‘automating critical decision-making’ and empower an existing regulator to enforce this requirement. Neither the UK nor the USA have ruled out ‘harder’ regulation that would require the creation of a new (or empowering an existing) body for enforcement.

Regulation in public health, such as FDA pre-approvals, can inspire AI regulation. As governments seek to develop their approach to regulating AI, they have naturally turned to other emerging technology areas for guidance. One area routinely mentioned is the regulation of public health – specifically, the drug development and medical device regulatory approval process used by the FDA.

The FDA’s core objective is to ‘speed innovations that make food and drug products more effective, safer and more affordable’ to ‘maintain and improve the public’s health’. In practice, the FDA model requires developers of drugs or medical devices to provide (sufficiently positive) evidence on the safety risks, efficacy and accessibility of products before they are approved to be sold in a market or continue to the next development phase (referred to as pre-market approval or pre-approval).

Many call for FDA-style oversight for AI, though its detailed applicability for foundation models is largely unexamined. Applying lessons from the FDA to AI is not a new idea,[27] [28] [29] though it has recently gained significant traction. In a May 2023 Senate Hearing, renowned AI expert Gary Marcus testified that priority number one should be ‘a safety review like we use with the FDA prior to widespread deployment’.[30] Leading AI researchers Stuart Russell and Yoshua Bengio have also called for FDA-style oversight of new AI models.[31] [32] [33] In a recent request for evidence by the USA’s National Telecommunications and Information Administration on AI accountability mechanisms, 43 pieces of evidence mentioned the FDA as an inspiration for AI oversight.[34]

However, such calls often lack detail on how appropriate the FDA model is to regulate AI. The regulation of AI for medical purposes has received extensive attention,[35] [36] but there has not yet been a detailed analysis on how FDA-style oversight could be applied to foundation models or other ‘general-purpose’ AI.

Drug regulators have a long history of applying a rigorous oversight process to novel, groundbreaking and experimental technologies that – alongside their possible benefits – present potentially severe consequences.

Such technologies include gene editing, biotechnology and medical software. As with drugs, the effects of most advanced AI models are largely unknown but potentially significant.[37] Both public health and AI are characterised by fast-paced research and development progress, the complex nature of many components, their potential risk to human safety, and the uncertainty of risks posed to different groups of people.

As market sectors, public health and AI are both dominated by large private-sector organisations developing and creating new products sold on a multinational scale. Through registries, drug regulators ensure transparency and dissemination of evaluation methods and endpoint setting. The FDA is a prime example of drug regulation and offers inspiration for how complex AI systems like foundation models could be governed.

Methodology and scope

This report draws on expert interviews and literature to examine the suitability of applying FDA oversight mechanisms to foundation models. It includes lessons drawn from a literature review[38] [39] and interviews with 20 experts from industry, academia, thinktanks and government on FDA oversight and foundation model evaluation processes.[40] In this paper, we answer two core research questions:

  1. Under what conditions are FDA-style pre-market approval mechanisms successful in reducing risks for drug development and medical software?
  2. How might these mechanisms be applied to the governance of foundation models?

The report is focused on the applicability of aspects of FDA-style oversight (such as pre-approvals) to foundation models for regulation within a specific jurisdiction. It does not aim to determine if the FDA’s approach is the best for foundation model governance, but to inform policymakers’ decision-making. This report also does not answer how the FDA should regulate foundation models in the medical context.[41]

We focus on how foundation models might be governed within a jurisdiction, not on international cross-jurisdiction oversight. An international approach could be built on top of jurisdictional FDA-style oversight models through mutual recognition and trade limitations, as recently proposed.[42] [43]

We focus particularly on auditing and approval mechanisms, outlining criteria relevant for a future comparative analysis with other national and multinational regulatory models. Further research is needed to understand whether a new agency like the FDA should be set up for AI.

The implications and recommendations of this report will apply differently to different jurisdictions. For example, many downstream ‘high-risk’ applications of foundation models would have the equivalent of a regulatory approval gate under the EU AI Act (due to be finalised at the end of 2023). The most relevant learnings for the EU would therefore be considerations of what upstream foundation model approval gates could entail, or how a post-market monitoring regime should operate. For the UK and USA (and other jurisdictions), there may be more scope to glean ideas about how to implement an FDA-style regulatory framework to cover the whole foundation model supply chain.

‘The FDA oversight process’ chapter explores how FDA oversight functions and its strengths and weaknesses as an approach to risk reduction. We use Software as a Medical Device (SaMD) as a case study to examine how the FDA approaches the regulation of current ‘narrow’ AI systems (AI systems that do not have general capabilities). Then, the chapter on ‘FDA-style oversight for foundation models’ explores the suitability of this approach to foundation models. The paper concludes with recommendations for policymakers and open questions for further research.

Definitions

 

●      Approval gates are the specific points in the FDA oversight process at which regulatory approval decisions are made. They are throughout the development process. A gate can only be passed when the regulator believes that sufficient evidence on safety and efficacy has been provided.

●      Class IIII medical devices: Class I medical devices are low-risk with non-critical consequences. Class II devices are medium risk. Class III devices are devices which can potentially cause severe harms.

●      Clinical trials, ‘also known as clinical studies, test potential treatments in human volunteers to see whether they should be approved for wider use in the general population’.[44]

●      Endpoints are targeted outcomes of a clinical trial that are statistically analysed to help determine efficacy and safety. They may include clinical outcome assessments or other measures to predict efficacy and safety. The FDA and developers jointly agree on endpoints before a clinical trial.

●      Foundation models are ‘AI models capable of a wide range of possible tasks and applications, such as text, image, or audio generation. They can be standalone systems or can be used as a ‘base’ for many other more narrow AI applications’.[45]

○      Upstream (in the foundation model supply chain) refers to the component parts and activities in the supply chain that feed into development of the model.[46]

○      Downstream (in the foundation model supply chain) refers to activities after the launch of the model and activities that build on a model.[47]

○      Fine-tuning is the process of training a pre-trained model with an additional specialised or context-specific dataset, removing the need to train a model from scratch.[48]

●      Narrow AI is ‘designed to be used for a specific purpose and is not designed to be used beyond its original purpose’.[49]

●      Pre-market approval is the point in the regulatory approval process where developers provide evidence on the safety risks, efficacy and accessibility of their products before they are approved to be sold in a market. Beyond pre-market, the term ‘pre-approvals’ generally describes a regulatory approval process before the next step along the development process or supply chain.

●      A Quality Management System (QMS) is a collection of business processes focused on achieving quality policy and objectives to meet requirements (see, for example ISO 9001 and ISO 13485),[50] [51] or on safety and efficacy (see, for example FDA Part 820). This includes management controls; design controls; production and process controls; corrective and preventative actions; material controls; records, documents, and change controls; and facilities and equipment controls.

●      Risk-based regulation ‘focuses on outcomes rather than specific rules and process as the goal of regulation’,[52] adjusting oversight mechanisms to the level of risk of the specific product or technology.

●      Software as a Medical Device (SaMD) is ‘Software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device’.[53]

●      The US Food and Drug Administration (FDA) is a federal agency (and part of the Department of Health and Human Services) that is charged with protecting consumers against impure and unsafe foods, drugs and cosmetics. It enforces the Federal Food Drug and Cosmetic Act and related laws, and develops detailed guidelines.

 How to read this paper

This report offers insight from FDA regulators, civil society and private sector companies on applying specific oversight mechanisms proven in life sciences, to govern AI and foundation models specifically.

…if you are a policymaker working on AI regulation and oversight:

  • The section on ‘Applying key features of FDA-style oversight to foundation models’ provides general principles that can contribute to a risk-reducing approach to oversight,
  • The chapter on ‘Recommendations and open questions’ summarises specific mechanisms for developing and implementing oversight for foundation models.
  • For a detailed analysis of the applicability of life sciences oversight to foundation models, see the chapter ‘FDA-style oversight for foundation models’ and section on ‘The limitations of FDA oversight’.

…if you are a developer or designer of data-driven technologies, foundation models or AI systems:

  • Grasp the importance of rigorous testing, documentation and post-market monitoring of foundation models and AI applications. The introduction and ‘FDA-style oversight for foundation models’ chapter detail why significant investments into AI governance is important, and why the life sciences are a suitable inspiration.
  • The section on ‘Applying specific FDA-style processes along the foundation model supply chain’ describes mechanisms for each layer in the foundation model supply chain, They are tailored to data providers, foundation model developers, hosts and application providers. These mechanisms are based on proven governance methods used by regulators and companies in the pharmaceutical and medical device sectors.
  • Our Recommendations and open questions’ provide actionable ways in which AI companies can contribute to a better AI oversight process.

…if you are a researcher or public engagement practitioner interested in AI regulation:

  • The introduction includes an overview of the methodology which may also offer insight for others interested in undertaking a similar research project.
  • In addition to a summary of the FDA oversight process, the main research contribution of this paper is in the chapter ‘FDA-style oversight for foundation models’.
  • Our chapter on ‘Recommendations and open questions’ outlines opportunities for future research on governance processes.
  • There is also potential in collaborations between researchers in life sciences regulation and AI governance, focusing on the specific oversight mechanisms and technical tools like unique device identifiers described in our recommendations for AI regulators, developers and deployers.

The FDA oversight process

The Food and Drug Administration (FDA) is the US federal agency tasked with enforcing laws on food and drug products. Its core objective is to help ‘speed innovations that make products more effective, safer and more affordable’ through ‘accurate, science-based information’. In 2023, it had a budget of around $8 billion, around half of which was paid through mandatory fees by companies overseen by the FDA.[54] [55]

The FDA’s regulatory mandate has come to include regulating computing hardware and software used for medical purposes, such as in-vitro glucose monitoring devices or breast cancer diagnosis software.[56] The regulatory category SaMD and adjacent software for medical devices encompasses AI-powered medical applications. These are novel software applications that may bear potentially severe consequences, such as software for eye surgeries[57] or automated oxygen level control under anaesthesia.[58] [59]

An understanding of the most important oversight components for the FDA enables the discussion on suitable inspirations for foundation models in the following chapter.

The FDA regulates drugs and medical devices through a risk-based approach. This seeks to identify potential risks at different stages of the development process. The FDA does this by providing guidance and setting requirements for drug and device developers, including agreed protocols for testing and evaluating the safety and efficacy of the drug or device. The definition of ‘safety’ and ‘efficacy’ are dependent on the context, but generally:

  • Safety refers to the type and likelihood of adverse effects. This is then described as ‘a judgement of the acceptability of the risk associated with a medical technology’. A ‘safe’ technology is described as one that ‘causes no undue harm’.[60]
  • Efficacy refers to ‘the probability of benefit to individuals in a defined population from a medical technology applied for a given medical problem’.[61] [62]

Some devices and drugs undergo greater scrutiny than others. For medical devices, the FDA has developed a Class I–III risk rating system; higher-risk (Class III) devices are required to meet more stringent requirements to be approved and sold on the market. For medical software, the focus lies more on post-market monitoring. The FDA allows software on the market with higher levels of risk uncertainty than drugs, but it monitors such software continuously.

Figure 3: Classes of medical devices (applicable to software components and SaMD)[63]

The FDA’s oversight process follows five steps, which are adapted to the category and risk class of the drug, software or medical device in question.[64] [65]

The FDA can initiate reviews and inspections of drugs and medical devices (as well as other medical and food products) at three points: before clinical trials begin (Step 2), before a drug is marketed to the public (Step 4) and as part of post-market monitoring (Step 5). The depth of evidence required depends on the potential risk levels and novelty of a drug or device.

Approval gates – points in the development process where proof of sufficient safety and efficacy is required to move to the next step – are determined depending on where risks originate and proliferate.

This section illustrates the FDA’s oversight approach to novel Class III software (including narrow AI applications). Low-risk software and software similar to existing software go through a significantly shorter process (see Figure 3).

We illustrate each step using the hypothetical scenario of an approval process for medical AI software for guiding a robotic arm to take patients’ blood. This software consists of a neural network that has been trained with an image classification dataset to visually detect an appropriate vein and that can direct a human or robotic arm to this vein (see Figure 4).[66] [67]

While the oversight process for drugs and medical devices is slightly different, this section borrows insights from both and simplifies when suitable. This illustration will help to inform our assessment in the following chapter, of whether and how a similar approach could be applied to ensure oversight of foundation models.

Risk origination points are when risks arise initially; risk proliferation points: when risks spread without being controllable any more.

Step 1: Discovery and development

Description: A developer conducts initial scoping and ideation of how to design a medical device or drug, including use cases for the new product, supply chain considerations, regulatory implications and needs of downstream users. At the start of the development process, the FDA uses pre-submissions, which aim to provide a path from conceptualisation through to placement on the market.

Developer responsibilities:

  • Determine the product and risk category to classify the device, which will determine the testing and evaluation procedure (see Figure 3).
  • While training the AI model, conduct internal (non-clinical) tests, and clearly document the data and algorithms used throughout the process in a Quality Management System (QMS).[68]
  • Follow Good Documentation Practice, which offer guidance on how to document procedures from development through to market, to facilitate risk mitigation, validation and verification, and traceability (to support regulators in the event of recall or investigations).
  • Inform the FDA on the necessity of new software, for example, for efficiency gains or improvements in quality.

FDA responsibilities:

  • Support developers in risk determination.
  • Offer guidance on, for example, milestones for (pre-)clinical research and data analysis.

Required outcomes: Selection of product and risk category to determine regulatory pathway.

Example scenario: A device that uses software to guide the taking of blood may be classified as an in-vitro diagnostics device, which the FDA has previously classified as Class III (highest risk class).[69]

Step 2: Pre-clinical research

Description: In this step, basic questions about safety are addressed through initial animal testing.

Developer responsibilities:

  • Propose endpoints of study and conduct research (often with a second party).
  • Use continuous tracking in the QMS and share results with FDA.

FDA responsibilities:

  • Approve endpoints of the study, depending on the novelty and type of medical device or drug.
  • Review results to allow progression to clinical research.

Required outcomes: Developer proves basic safety of product, allowing clinical studies with human volunteers in the next step.

Example scenario: This step is important for assessing risks of novel drugs. It would not usually be needed for medical software such as our example that helps take blood, as these types of software are typically aimed at automating or improving existing procedures.

Step 3: Clinical research

Description: Drugs, devices and software are tested on humans to make sure they are safe and effective. Unlike for foundation models and most AI research and development, institutional review for research with human subjects is mandatory in public health.

Developer responsibilities:

  • Create a research design (called a protocol) and submit it to an institutional review board (IRB) for ethical review, along with Good Clinical Practice (GCP) principles and ISO standards such as ISO14155.
  • Provide the FDA with the research protocol, the hypotheses and results of the clinical trials and of any other pre-clinical or human tests undertaken, and other relevant information.
  • Following FDA approval, hire an independent contractor to conduct clinical studies (as required by risk level); these may be in multiple regions or locations, as agreed with the FDA, to match future application environments.

For drugs, trials may take place in phases that seek to identify different aspects of a drug:

  • Phase 1 studies tend to involve less than 100 participants, run for several months and seek to identify the safety and dosage of a drug.
  • Phase 2 studies tend to involve up to several hundred people with the disease/condition, run for up to two years and study the efficacy and side effects.
  • Phase 3 studies involve up to 3,000 volunteers, can run for one to four years and study efficacy and adverse reactions.

FDA responsibilities:

  • Approve the clinical research design protocol before trials can proceed.
  • During testing, support the developer with guidance or advice at set intervals on protocol design and open questions.

Required outcomes: Once the trials are completed, the developer submits them as evidence to the FDA. The supplied information should include:

  • description of main functions
  • data from trials to prove safety and efficacy
  • benefit/risk and mitigation review, citing relevant literature and medical association guidelines
  • intended use cases and limitations
  • a predetermined change control plan, allowing for post-approval adaptations of software without the need for re-approval (for a new use, new approval is required)
  • QMS review (code, protocols of storing data, Health Protection Agency guidelines, patient confidentiality).

Example scenario: The developers submit a ‘submission of investigational device exemption’ to the FDA, seeking to simplify design, requesting observational studies of the device instead of randomised controlled trials. They provide a proposed research design protocol to the FDA. Once the FDA approves it, they begin trials in 15 facilities with 50 patients each, aiming to prove 98 per cent accuracy and reduction of waiting times at clinics. During testing, no significant adverse events are reported. The safety and efficacy information is submitted to the FDA.

Step 4: FDA review

Description: FDA review teams thoroughly examine the submitted data on the drug or device and decide whether to approve it.

Developer responsibilities: Work closely with the FDA to provide access to all requested information and facilities (as described above).

FDA responsibilities:

  • Assign specialised staff to review all submitted data.
  • In some cases, conduct inspections and audits of developer’s records and evidence, including site visits.
  • If needed, seek advice from an advisory committee, usually appointed by the FDA Commissioner with input from the federal Secretary of the Health & Human Service department.[70] The committee may include representation from patients, scientific academia, consumer organisations and industry (if decision-making is delegated to the committee, only scientifically qualified members may vote).

Required outcomes: Approval and registration, or no approval with request for additional evidence.

Example scenario: For novel software like the example here, there might be significant uncertainty. The FDA could request more information from the developer and consult additional experts. Decision-making may be delegated to an advisory committee to discuss open questions and approval.

Step 5: Post-market monitoring

Description: The aim of this step is to detect ‘adverse events’[71] (discussed further below) to increase safety iteratively. At this point, all devices are labelled with Unique Device Identifiers to support monitoring and reporting from development through to market. These are particularly in relation to identifying the underlying causes of, and corrective actions for adverse events.

Developer responsibilities: Any changes or upgrades must be clearly documented, within the agreed change control plan.

FDA responsibilities:

  • Monitor safety of all drugs and devices once available for use by the public.
  • Monitor compliance on an ongoing basis through the QMS, with safety and efficacy data reviewed every six to 12 months.
  • Maintain a database on adverse events and recalls.[72]

Required outcomes: No adverse events or diminishing efficacy. If safety issues occur, the FDA may issue a recall.

Example scenario: Due to a reported safety incident with the blood-taking software, the FDA inspects internal emails and facilities. In addition, every six months, the FDA reviews a one per cent sample of patient data in the QMS and conducts interviews with patients and staff from a randomly selected facility.

Risk-reducing aspects of FDA oversight

Our interviews with experts on the FDA and a literature review[73] highlighted several themes. We group them into five risk-reducing aspects below.

Risk- and novelty-driven oversight

The approval gates described in the previous section lead to iterative oversight using QMS and jointly agreed research endpoints, as well as continuous post-market monitoring.

Approval gates are informed by risk controllability. Risk controllability is understood by considering the severity of harm to people; the likelihood of that harm occurring; proliferation, duration of exposure to population; potential false results; patient tolerance of risk; risk factors for people administering or using the drug or device, such as caregivers; detectability of risks; risk mitigations; the drug or device developer’s compliance history; and how much uncertainty there may be around any of these factors.[74]

Class III devices and related software – those that may guide critical clinical decisions or that are invasive or life-supporting – need FDA pre-approval before the drug is marketed to the public. In addition, the clinical research design needs to be approved by the FDA.

Continuous, direct engagement of FDA with developers throughout the development process

There can be inspections at any step of the development and deployment process. Across all oversight steps, the FDA’s assessments are independent and not reliant on input from private auditors who may have profit incentives.

In the context of foundation models, where safety standards are unclear and risk assessments are therefore more exploratory, these assessments should not be guided by profit incentives.

In cases where the risks are less severe, for example,  Class II devices, the FDA is supported by accredited external reviewers.[75] External experts also support reviews of novel technology where the FDA lacks expertise, although this approach has been criticised (see limitations below and advisory committee description above).

FDA employees review planned clinical trials, as well as clinical trial data produced by developers and their contractors. In novel, high-stakes cases, a dedicated advisory committee reviews evidence and decides on approval. Post market, the FDA reviews sample usage, complaint and other data approximately every six months.

Wide-ranging information access

By law, the FDA is empowered to request comprehensive evidence through audits, conduct inspections[76] and check the QMS. The FDA’s QMS regulation requires documented, comprehensive managerial processes for quality planning, purchasing, acceptance activities, nonconformities and corrective/preventative actions throughout design, production, distribution and post-market. While the FDA has statutory powers to access comprehensive information, for example, on clinical trials, patient data and in some cases internal emails, it releases only a summary of safety and efficacy post approval.

Putting the burden of proof on the developer

The FDA must approve clinical trials and their endpoints, and the labelling materials for drugs and medical devices, before they are approved for market. This model puts the burden of proof on the developer to provide this information or be unable to sell their product.

A clear approval gate entails the following steps:

  • The product development process in scope: The FDA’s move into regulating SaMD required it to translate regulatory approval gates for a drug approval process to the stages of a software development process. In the SaMD context, a device may be made up of different components, including software and hardware, that come from other suppliers or actors further upstream in the product development process. The FDA ensures the safety and efficacy of each component by requiring all components to undergo testing. If a component has been previously reviewed by the FDA, future uses of it can undergo an expedited review. In some cases, devices may use open-source Software of Unknown Provenance (SOUP). Such software needs either to be clearly isolated from critical components of the device, or to undergo demonstrable safety testing.[77]
  • The point of approval in the product development process: Effective gates occur once a risk is identifiable, but before it can proliferate or turn into harms. Certain risks (such as differential impacts on diverse demographic groups) may not be identifiable until after the intended uses of the device are made clear (for example will it be used in a hospital or a care home?). For technology with a wide spectrum of uses, like gene editing, developers must specify intended uses and the FDA allows trials with human subjects only in a few cases, where other treatments have higher risks or significantly lower chance of success.[78]
  • The evidence required to pass the approval gate: This is tiered depending on the risk class, as already described. The FDA begins with an initial broad criterion such as simply not causing to the human body when used. Developers and contractors then provide exploratory evidence. Based on this, in the case of medicines, the regulator learns and makes further specifications, for example, around the drug elimination period. For medical devices such as heart stents, evidence could include the percentage reduction in the rate of major cardiac events.

Balancing innovation and risks enables regulatory authority to be built over time

The FDA enables innovation and access by streamlining approval processes (for example, similarity exemptions, pre-submissions) and approvals of drugs with severe risks but high benefits. Over time, Congress has provided the FDA with increasing information access and enforcement powers and budgets, to allow it to enforce ‘safe access’.

The FDA has covered more and more areas over time, recently adding tobacco control to its remit.[79] FDA-regulated products account for about 20 cents of every dollar spent by US consumers.[80] It has the statutory power to issue warnings, make seizures, impose fines and pursue criminal prosecution.

Safety and accessibility need to be balanced. For example, a piece of software that automates oxygen control may perform slightly less well than healthcare professionals, but if it reduces the human time and effort involved and therefore increases accessibility, it may still be beneficial overall. By finding the right balance, the FDA builds an overall reputation as an agency providing mostly safe access, enabling continued regulatory power.[81] When risk uncertainty is high, it can slow down the marketing of technologies, for example, allowing only initial, narrow experiments for novel technologies such as gene editing.[82]

The FDA approach does not rely on any one of these risk-reducing aspects alone. Rather, the combination of all five ensures the safety of FDA-regulated medical devices and drugs in most cases.[83] The five together also allow the FDA to continuously learn about risks and improve its approval process and its guidance on safety standards.

Risk- and novelty-driven oversight focuses learning on the most complex and important drugs, software and devices. Direct engagement and access to a wide range of information is the basis of the FDA’s understanding of new products and new risks.

With the burden of proof on developers through pre-approvals, they are incentivised to ensure the FDA is informed about safety and efficacy.

As a result of this approach to oversight, the FDA is better able to balance safety and accessibility, leading to increased regulatory authority.

‘The burden is on the industry to demonstrate the safety and effectiveness, so there is interest in educating the FDA about the technology.’

Former FDA Chief Counsel

The history of the FDA: 100+ years of learning and increasing power [84] [85] [86]

 

The creation of the FDA was driven by a series of medical accidents that exposed the risks drug development can pose to public safety. While the early drug industry initially pledged to self-regulate, and members of the public viewed doctors as the primary keepers of public safety, public outcry over tragedies like the Elixir Sulfanilamide disaster (see below) led to calls for an increasingly powerful federal agency.

Today the FDA employs around 18,000 people (2022 figures) with a $8 billion budget (2023 data). The FDA’s approach to regulating drugs and devices involves learning iteratively about risks and benefits of products with every new evidence review it undertakes as part of the approval process.

Initiation

The 1906 Pure Food and Drugs Act was the first piece of legislation to regulate drugs in the USA. A groundbreaking law, it took nearly a quarter-century to formulate. It prohibited interstate commerce of adulterated and misbranded food and drugs, marking the start of federal consumer protection.

Learning through trade controls: This Act established the importance of regulatory oversight for product integrity and consumer protection.

Limited mandate

From 1930 to 1937, there were failed attempts to expand FDA powers, with relevant bills  not being passed by Congress. This period underscored the challenges in evolving regulatory frameworks to meet public health needs.

Limited power and limited learning.

Elixir Sulfanilamide disaster

This 1937 event, where an untested toxic solvent caused over 100 deaths, marked a turning point in drug safety awareness.

Learning through post-market complaints: The Elixir tragedy emphasised the crucial need for pre-market regulatory oversight in pharmaceuticals.

Extended mandate

In 1938, previously proposed legislation, the Food, Drug, and Cosmetic Act, was passed into law that changed the FDA’s regulatory approach by mandating review processes without requiring proof of fraudulent intent.

Learning through mandated information access and approval power: Pre-market approvals and the FDA’s access to drug testing information enabled the building of appropriate safety controls.

Safety reputation

During the 1960s, the FDA’s refusal to approve thalidomide –a drug prescribed to pregnant women causing an estimated 80,000 miscarriages and infant deaths and deformities in 20,000 children worldwide – further established its commitment to drug safety.

Learning through prevented negative outcomes: The thalidomide situation led the FDA to calibrate its safety measures by monitoring and preventing large-scale health catastrophes, especially in comparison with similar countries. Post-market recalls were included in the FDA’s regulatory powers.

Extended enforcement power

The 1962 Kefauver-Harris Amendment to the Federal Food, Drug, and Cosmetic  Act was a significant step, requiring new drug applications to provide substantial evidence of efficacy and safety.

Learning through expanded enforcement powers: This period reinforced the evolving role of drug developers in demonstrating the safety and efficacy of their products.

Balancing accessibility with safety

The 1984 Drug Price Competition and Patent Term Restoration Act marked a balance between drug safety and accessibility, simplifying generic drug approvals. In the 2000s, Risk Minimization Action Plans were introduced, emphasising the need for drugs to have more benefits than risks, monitored at both the pre- and the post-market stages.

Learning through a lifecycle approach: This era saw the FDA expanding its oversight scope across product development and deployment for a deeper understanding of the benefit–risk trade-off.

Extended independence

The restructuring of advisory committees in the 2000s and 2010s enhanced the FDA’s independence and decision-making capability.

Learning through independent multi-stakeholder advice: The multiple perspectives of diverse expert groups bolstered the FDA’s ability to make well-informed, less biased decisions, reflecting a broad range of scientific and medical insights – although critics and limitations remain (see below).

Extension to new technologies

In the 2010s and 2020s, recognising the potential of technological advancements to improve healthcare quality and cost efficiency, the FDA began regulating new technologies such as AI in medical devices.

Learning through a focus on innovation: Keeping an eye on emerging technologies.

The limitations of FDA oversight

The FDA’s oversight regime is built for regulating food, drugs and medical devices, and more recently extended to software used in medical applications. Literature reviews[87] and interviewed FDA experts suggest three significant limitations of this regime’s applicability to other sectors.

Limited types of risks controlled

The FDA focuses on risks to life posed by product use, therefore focusing on reliability and (accidental) misuse risks. Systemic risks such as accessibility challenges, structural discrimination issues and novel risk profiles are not as well covered.[88] [89]

  • Accessibility risks include the cost barriers of advanced biotechnology drugs or SaMD for underprivileged groups.[90]
  • Structural discrimination risks include disproportionate risks to particular demographics caused by wider societal inequalities and a lack of representation in data. These may not appear in clinical trials or in single-device post-market monitoring. For example, SaMD algorithms have misclassified Black patients’ healthcare needs systematically because they have suggested treatment based past healthcare spending data that did not accurately reflect their requirements.[91]
  • Equity risks arise when manufacturers claim average accuracy across a population or use only for a specific population (for example, people aged 60+). The FDA only considers whether a product safely and effectively delivers according to the claims of its manufacturers – it doesn’t go beyond this to urge them to reach other populations. It does not yet have comprehensive algorithmic impact assessments to ensure equity and fairness.
  • False similarity risks originate in the accelerated FDA 510(k) approval pathway for medical devices and software through comparison with already-approved products –referred to as predicate devices. Reviews of this pathway have shown ‘predicate creep’ when multiple generations of predicate devices slowly drift away from the originally approved use.[92] This could mean that predicate devices may not provide suitable comparisons for new devices.
  • Novel risk profiles challenge the standard regulatory approach of the FDA that rests on risk detection through trials before risks proliferate through marketing. Risks that are not typically detectable in clinical trials, due to their novelty or new application environments, may be missed. For example, the risk of water-contaminating foods is clear, but it may be less clear how to monitor for new pathogens that might be significantly smaller or otherwise different to those detected by existing routines.[93] While any ‘adverse events’ need to be reported to the FDA, risks that are difficult to detect might be missed.

Limited number of developers due to high costs of compliance

The FDA’s stringent approval requirements lead to costly approval processes that only large corporations can afford, as a multi-stage clinical trial can cost tens of millions of dollars.[94] [95] This can lead to oligopolies and monopolies, high drug prices because of limited competition, and innovation focused on areas with high monetary returns.

If this is not counteracted through governmental subsidies and reimbursement incentives, groups with limited means to pay for medications can face accessibility issues. It remains an open question whether small companies should be able to develop and market severe-risk technologies, or how governmental incentives and efforts can democratise the drug and medical device – or foundation model – development process.

Reliance on industry for expertise

The FDA sometimes relies on industry expertise, particularly in novel areas where clear benchmarks have not been developed and knowledge is concentrated in industry. This means that the FDA may seek input from external consultants and its advisory committees to make informed decisions.[96]

An overreliance on industry could raise concerns around regulatory capture and conflicts of interest – similar to other agencies.[97] For example, around 25 per cent of FDA advisory committee members had conflicts of interest in the past five years.[98] In principle, conflicted members are not allowed to participate, but dependency on their expertise regularly leads this requirement being waived.[99] [100] [101] External consultants have been conflicted, too: one notable scandal occurred when McKinsey advised the FDA on opioid policy while being paid by corporations to help them sell the same drugs.[102]

A lack of independent expertise can reduce opportunities for the voice of people affected by high-risk drugs or devices being heard. This in turn may undermine public trust in new drugs and devices. It has also been shown that oversight processes that are not heavily dependent on industry expertise and funding have been proven to discover more, and more significant, risks and inaccuracies.[103]

Besides these three main limitations, others include enforcement issues for small-scale illegal deployment of SaMD, which can be hard to identify;[104] [105] and device misclassifications in new areas.[106]

FDA-style oversight for foundation models

FDA Class III devices are complex, novel technologies with potentially severe risks to public health and uncertainties regarding how to detect and mitigate these risks.[107]

Foundation models are at least as complex, more novel and – alongside their potential benefits – likewise pose potentially severe risks, according to the experts we interviewed and recent literature.[108] [109] [110] They are also deployed across the economy, interacting with millions of people, meaning they are likely to pose systemic risks that are far beyond those of Class III medical devices.[111]

However, the risks of foundation models are so far not fully clear, risk mitigation measures are uncertain and risk modelling is poor or non-existent.

Leading AI researchers such as Stuart Russell and Yoshua Bengio, independent research organisations, and AI developers have flagged the riskiness, complexity and black-box nature of foundation models.[112] [113] [114] [115] [116] In a review on the severe risks of foundation models (in this case, the accessibility of instructions for responding to biological threats), the AI lab Anthropic states: ‘If unmitigated, we worry that these risks are near-term, meaning they may be actualised in the next two to three years.’[117]

As seen in the history of the FDA outlined above, it was a reaction to severe harm that led to its regulatory capacity being strengthened. Those responsible for AI governance would be well advised to act ahead of time to pre-empt and reduce the risk of similarly severe harms.

The similarities between foundation models and existing, highly regulated Class III medical devices – in terms of complexity, novelty and risk uncertainties – suggests that they should be regulated in a similar way (see Figure 5).

However, foundation models differ in important ways from Software as a Medical Device (SaMD). The definitions themselves reveal inherent differences in the range of applications and intended use:

Foundation models are AI models capable of a wide range of possible tasks and applications, such as text, image or audio generation. They can be stand-alone systems or can be used as a ‘base’ for many other more narrow AI applications.[118]

SaMD  is more specific: it is software that is ‘intended to be used for one or more medical purposes that perform[s] these purposes without being part of a hardware medical device’.[119]

However, the most notable differences are more subtle. Even technology applied across a wide range of purposes, like general drug dispersion software, can be effectively regulated with pre-approvals. This is because the points of risk and the pathways to dangerous outcomes are well understood and agreed upon, and they all start from the distribution of products to consumers – something in which the FDA can intervene.

The first section of this chapter outlines why this is not yet the case for foundation models. The second section illustrates how FDA-style oversight can bridge this gap generally. The third section details how these mechanisms could be applied along the foundation model supply chain – the different stages of development and deployment of these models.

The foundation model challenge: unclear, distributed points of risk

In this section we discuss two key points of risk: 1) risk origination points, when risks arise initially; and 2) risk proliferation points, when risks spread without being controllable.

A significant challenge that foundation models raise is the difficulty of identifying where different risks originate and proliferate in their development and deployment, and which actors within that process should be held responsible for mitigating and providing redress for those harms.[120]

Risk origination and proliferation examples

Bias

Some risks may originate in multiple places in the foundation model supply chain. For example, the risk of a model producing outputs that reinforce racial stereotypes may originate in the data used to train the model, how it was cleaned, the weights that the model developer used, which users the model was made available to, and what kinds of prompts the end user of the model is allowed to make.[121] [122]

 

In this example, a series of evaluations for different bias issues might be needed throughout the model’s supply chain. The model developer and dataset provider would need to be obliged to proactively look for and address known issues of bias. It might also be necessary to find ways to prohibit or discourage end users from prompting a model for outputs that reinforce racial stereotypes.

Cybercrime

Another example is reports of GPT-4 being used to write code for phishing operations to steal people’s personal information. Where in the supply chain did such cyber-capabilities originate and proliferate?[123] [124] Did the risk originate during training (while general code-writing abilities were being built) or after release (allowing requests compatible with phishing)? Did it proliferate through model leakage, widely accessible chatbots like ChatGPT or Application Programming Interfaces (APIs), or downstream applications?

Some AI researchers have conceptualised the uncertainty over risks as a matter of the unexpected capabilities of foundation models. This ‘unexpected capabilities problem’ may arise during models’ development and deployment.[125] Exactly what risks this will lead to cannot be identified reliably, especially not before the range of potential use cases is clear.[126] In turn, this uncertainty means that risks may be more likely to proliferate rapidly (the ‘proliferation problem’),[127] and to lead to harms throughout the lifecycle – with limited possibility for recall (the ‘deployment safety problem’).[128]

The challenge in governing foundation models is therefore in identifying and mitigating risks comprehensively before they proliferate.[129]

There is a distinction to draw between risk origination (the point in the supply chain a risk such as toxic content may arise) and risk proliferation (the point in the supply chain a risk can be widely distributed to downstream actors). Identifying points of risk origination and proliferation can be challenging for different kinds of risks.

Foundation model oversight needs to be continuous throughout the supply chain. Identifying all inherent risks in a foundation model upstream is hard. Leaving risks to downstream companies is not the solution, because they may have proliferated already by this stage.

There are tools available to help upstream foundation model developers reduce risk before training (through filtering data inputs), and to assess risks during training (through clinical trial style protocols). More of these tools are needed. They are most effective when applied at the foundation model layer (see Figure 2 and Figure 6), given the centralised nature of foundation models. However, some risks might arise or be detectable only at the application layer, so tools for intervention at this layer are also necessary.

Applying key features of FDA-style oversight to foundation models

How should an oversight regime be designed so that it suits complex, novel, severe-risk technologies with distributed, unclear points of risk origination and proliferation?

Both foundation models and Class III devices pose potentially severe levels of risk to public safety and therefore require governmental oversight. For the former, this is arguably even more important given national security concerns (for example, the risk that such technologies could enable cyberattacks or widespread disinformation campaigns at far greater scales than current capabilities allow).[130] [131] [132]

Government oversight is needed also because of the limitations of private insurance for severe risks.

As seen in the cases of nuclear waste insurance or financial crisis, large externalities and systemic risks need to be captured by a government.

Below we consider what we can learn from the oversight of FDA-regulated products and whether an FDA-style approach could provide effective oversight of foundation models.

Building on Raji et al’s recent review[133] and interviews, current oversight regimes for foundation models can be understood alongside, and compared with, the core risk-reducing aspects of the FDA approach, as depicted in Figure 7.[134] [135] Current oversight and evaluations of GPT-4 lag behind FDA oversight in all dimensions.

Governance of GPT-4’s development and release according to their 2023 system card and interviews, vs. FDA governance of Class III drugs.[136] [137] [138] While necessarily simplified, characteristics furthest to the right fit best for complex, novel technologies with potentially severe risks and unclear risk (measures).[139]

‘We are in a “YOLO [you only live once]” culture without meaningful specifications and testing – “build, release, see what happens”.’

Igor Krawczuk on current oversight of commercial foundation models

The complexity and risk uncertainties of foundation models could justify similar levels of oversight to those provided by the FDA in relation to Class III medical devices.

This would involve an extensive ecosystem of second-party, third-party and regulatory oversight to monitor and understand the capabilities of foundation models and to detect and mitigate risks. The high speed of progress in foundation model development requires adaptable oversight institutions, including non-governmental organisations with specialised expertise. AI regulators need to establish and enforce improved foundation model oversight across the development and deployment process.

General principles for applying key features of the FDA’s approach to foundation model governance

  1. Establish continuous, risk-based evaluations and audits throughout the foundation model supply chain. Existing bug bounty programmes[140] and complaint-driven evaluation do not sufficiently cover potential risks. The FDA’s incident reporting system captures fewer risks than the universal risk-based reviews before market entry and post-market monitoring requirements.[141] Therefore, review points need to be defined across the supply chain of foundation models, with risk-based triggers. As already discussed, risks can originate at multiple sources, potentially simultaneously. Continuous engagement of reviewers and evaluators is therefore important to detect and mitigate risks before they proliferate.
  2. Empower regulatory agencies to evaluate critical safety evidence directly, supported by a third-party ecosystem. First-party self-assessments and second-party contracted auditing have consistently proven to be lower quality than accredited third-party or governmental audits.[142] [143] [144] [145] Regulators of foundation models should therefore have direct access to assess evaluation and audit evidence. This is especially significant when operating in a context when standards are unclear and audits therefore more exploratory (in the style of evaluations). Regulators can also improve their understanding by consulting independent experts.
  3. Ensure independence of regulators and external evaluators. Oversight processes not dependent on industry expertise and funding have been proven to discover more, and more significant, risks and inaccuracies, especially in complex settings with vague standards.[146] [147] Inspired by the FDA approach, foundation model oversight could be funded directly through mandatory fees from AI labs and only partly through federal funding. Sufficient resourcing in these ways is essential, to avoid the need for additional resourcing that is associated with potential conflicts of interest. Consideration should also be given to an upstream regulator of foundation models as existing sector-specific regulators may only have the ability to review downstream AI applications. The level of funding for such a regulator needs to be similar to that of other safety-critical domains, such as medicine. Civil society and external evaluators could be empowered through access to federal computing infrastructure for evaluations and accreditation programmes.
  4. Enable structured access to foundation models and adjacent components for evaluators and civil society. Access to information is the foundation of an effective audit (although while it is necessary, it is not sufficient on its own).[148] Providing information access to regulators – not just external auditors – increases audit quality.[149] Information access needs to be tiered to protect intellectual property and limit the risks of model leakage.[150] [151] Accessibility to civil society could increase the likelihood of innovations that meet the needs of people that are impacted by its use, for example, through understanding public perceptions of risks and perceived benefits of technologies. Foundation model regulation needs to strike a risk-benefit balance.
  5. Enforce a foundation model pre-market approval process, shifting the burden of proof to developers. If the regulator has the power to stop the development or sale of products, this significantly increases developers’ incentive to provide sufficient safety information. The regulatory burden needs to be distributed across the supply chain – with requirements in line with the risks at each layer of the supply chain. Cross-context risks and those with the most potential for wide-scale proliferation need to be regulated upstream at the foundation model layer; context-dependent risks should be addressed downstream in domain-specific regulation.

‘Drawing from very clear examples of real harm led the FDA to put the burden of proof on the developers – in AI this is flipped. We are very much in an ex post scenario with the burden on civil society.’

Co-Founder of Leading AI thinktank

 

‘We should see a foundation model as a tangible, auditable product and process that starts with the training data collection as the raw input material to the model.’

Kasia Chmielinski, Harvard Berkman Klein Center for Internet & Society

Learning through approval gates

The FDA’s capabilities have increased over time. Much of this has occurred through setting approval gates, which become points of learning for regulators. Given the novelty of foundation models and the lack of an established ‘state of the art’ for safe development and deployment, a similar approach could be taken to enhance the expertise of regulators and external evaluators (see Figure 2).

Approval gates can provide regulators with key information throughout the foundation model supply chain.

Some approval gates already exist under current sectoral regulation for specific downstream domains. At the application layer of a foundation model’s supply chain, the context of its use will be more clear than at the developer layer. Approval gates at this stage could require evidence similar to clinical studies for medical devices, to approximate risks. This could be gathered, for example, through an observational study on the automated allocation of physicians’ capacity based on described symptoms.

Current sectoral regulators may need additional resources, powers and support to appropriately evaluate the evidence and make a determination of whether a foundation model is safe to pass an approval gate.

Every time a foundation model is suggested for use, companies may already need to – or should – collect sufficient context-specific safety evidence and provide it to the regulator. For the healthcare capacity allocation example above, existing FDA –  or MHRA (Medicines and Healthcare products Regulatory Agency, UK) – requirements and approval gates on clinical decision support software currently support extensive evaluation of such applications.[152]

Upstream stages of the foundation model supply chain, in particular, lack an established ‘state of the art’ defining industry standards for development and underpinning regulation. A gradual process might therefore be required to define approval requirements and the exact location of approval gates.

Initially, lighter approval requirements and stronger transparency requirements will enable learning for the regulator, allowing it to gradually set optimal risk-reducing approval requirements. The model access required by the regulator and third parties for this learning could be provided via mechanisms such as sandboxes, audits or red teaming, detailed below.

Red teaming is an approach originating in computer security. It describes exercises where individuals or groups (the ‘red team’) are tasked with looking for errors, issues or faults with a system, by taking on the role of a bad actor and ‘attacking’ it. In the case of AI, it has increasingly been adopted as an approach to look for risks of harmful outputs from AI systems.[153]

Once regulators have agreed inclusive[154] international standards and benchmarks for testing of upstream capabilities and risks, they should impose standardised thresholds for approval and endpoints. Until that point, transparency and scrutiny should be increased, and the burden of proof should be on developers to prove safety to regulators at approval gates.

The next section discusses in more specific detail how FDA-style processes could be applied to foundation model governance.

‘We need end-to-end oversight along the value chain.’

CEO of an Algorithmic auditing firm

Applying specific FDA-style processes along the foundation model supply chain

Risks can manifest across the AI supply chain. Foundation models and downstream applications can have problematic behaviours originating in pre-training data, or they can develop new ones when integrated into complex environments (like a hospital or a school). This means that new risks can emerge over time.[155] Policymakers, researchers, industry and the public therefore ‘require more visibility into the risks presented by AI systems and tools’.

Regulation can ‘play an important role in making risks more visible, and the mitigation of risk more actionable, by developing policy to enable a robust and interconnected evaluation, auditing, and disclosure ecosystem that facilitates timely accountability and remediation of potential harms’.[156]

The FDA has processes, regulatory powers and a culture that helps to identify and mitigate risks across the development and deployment process, from pre-design through to post-market monitoring. This holistic approach provides lessons for the AI regulatory ecosystem.

There are also significant similarities between specific FDA oversight mechanisms and proposals for oversight in the AI space, suggesting that the latter proposals are generally feasible. In addition, new ideas for foundation model oversight can be drawn from the FDA, such as in setting endpoints that determine the evidence required to pass an approval gate. This section draws out key lessons that AI regulators could take from the FDA approach and applies them to each layer of the supply chain.

Data and compute layers oversight

There is an information asymmetry between governments and AI developers. This is demonstrated, for example, in the way that governments have been caught off-guard by the release of ChatGPT. This also has societal implications in areas like the education sector, where universities and schools are having to respond to a potential increase in students’ use of AI-generated content for homework or assessments.[157]

To be able to anticipate these implications, regulators need much greater oversight on the early stages of foundation model development, when large training runs (the key component of the foundation model development process) and the safety precautions for such processes are being planned. This will allow greater foresight over potentially transformative AI model releases, and early risk mitigation.

Pre-submissions and Good Documentation Practice

At the start of the development process, the FDA uses pre-submissions (pre-subs), which allow it to conduct ‘risk determination’. This benefits the developer because they can get feedback from the regulator at various points, for example on protocols for clinical studies. The aim is to provide a path from device conceptualisation through to placement on the market.

This is similar to an idea that has recently gained some traction in the AI governance space: that labs should submit reports to regulators ‘before they begin the training process for new foundation models, periodically throughout the training process, and before and following model deployment’. [158]

This approach would enable learning and risk mitigation by giving access to information that currently resides only inside AI labs (and which has not so far been voluntarily disclosed), for example covering compute and capabilities evaluations,[159] what data is used to train models, or environmental impact and supply chain data.[160] It would mirror the FDA’s Quality Management System (QMS), which documents compliance with standards (ISO 13485/820) and is based on Good Documentation Practice throughout the development and deployment process to ensure risk mitigation, validation and verification, and traceability (to support regulators in the event of recall or investigations).

As well as documenting compliance in this way, the approach means that the regulator would need to demonstrate similar good practice when handling pre-submissions. Developers would have concerns around competition: the relevant authorities would need to be legally compelled to observe confidentiality, to protect intellectual property rights and trade secrets. A procedure for documenting and submitting high-value information at the compute and data input layer would be the first step towards an equivalent to the FDA approach in the AI space.

Transparency via Unique Device Identifiers (UDIs)

The FDA uses UDIs for medical devices and stand-alone software. The aim of this is to support monitoring and reporting throughout the lifecycle, particularly to identify the underlying causes of ‘adverse events’ and what corrective action should be taken (this is discussed further below).[161] This holds some similarities to AI governance proposals, particularly the suggestion for compute verification to help ensure that (pre-) training rules and safety standards are being followed.

Specifically for the AI supply chain, this would apply at the developer layer, to the essential hardware used to train and run foundation models: compute chips. Chip registration and monitoring has gained traction because, unlike other components of AI development, this hardware can be tracked in the same manner as other physical goods (like UDIs). It is also seen as an easy win. Advanced chips are usually tagged with unique numbers, so regulators would simply need to set up a registry; this could be updated each time the chips change hands.[162]

Such a registry would enable targeted interventions. For example, Jason Matheny, the CEO of RAND suggests that regulators should ‘track and license large concentrations of AI chips’, while ‘cloud providers, who own the largest clusters of AI chips, could be subject to ‘know your customer’ (KYC) requirements so that they identify clients who place huge rental orders that signal an advanced AI system is being built’.[163]

This approach would allow regulators and relevant third parties to track use throughout the lifecycle – starting with monitoring for large training runs to build advanced AI models and to verify safety compliance (for example, via KYC checks or providing information about the cybersecurity and risk management measures) for these training runs and subsequent development decisions. It would also support them to hold developers accountable if they do not comply.

Quality Management System (QMS)

The FDA’s quality system regulation is sometimes wrongly assumed to be only a ‘compliance checklist’ to be completed before the FDA approves a product. In fact, the QMS – a standardised process for documenting compliance – is intended to put ‘processes, trained personnel, and oversight’ in place to ensure that a product is ‘predictably safe throughout its development and deployment lifecycles’.

At the design phase, controls consist of design planning, design inputs that establish user needs and risk controls, design outputs, verification to ensure that the product works as planned, validation to ensure that the product works in its intended setting, and processes for transferring the software into the clinical environment.[164]

To apply a QMS to foundation model development phase, it is logical to look at the data used to (pre-)train the model. This – alongside compute – is the key input at this layer of the AI supply chain. As with the pharmaceuticals governed by the FDA, the inputs will strongly shape the outputs, such as decisions on size (of dataset and parameters), purpose (while pre-trained models are designed to be used for multiple downstream tasks, some models are better suited than others to particular types of tasks) and values (for example, choices on filtering and cleaning the data).[165]

These decisions can lead to issues in areas such as bias,[166] copyright[167] and AI-generated data[168] throughout the lifecycle. Data governance and documentation obligations are therefore needed, with similar oversight to the FDA QMS for SaMD. This will build an understanding of where risks and harms originate and make it easier to stop them from proliferating by intervening upstream.

Regulators should therefore consider model and dataset documentation methods[169] for pre-training and fine-tuning foundation models. For example, model cards document information about the model’s architecture, testing methods and intended uses,[170] while datasheets document information about a dataset, including what kind of data is included and how it was collected and processed.[171] A comprehensive model card should also contain a risk assessment,[172] similar to the FDA’s controls for testing for effectiveness in intended settings. This could be based on uses foreseen by foundation model developers. Compelling this level of documentation would help to introduce FDA-style levels of QMS practice for AI training data.

Core policy implications

An approach to pre-notification of, and information-sharing on, large training runs could use the pre-registration process of the FDA as a model. As discussed above, under the FDA regime, developers are continuously providing information to the regulator, from the pre-training stage onwards.[173] This should also be the case in relation to foundation models.

It might also make sense to track core inputs to training runs by giving UDIs to microchips. This would allow compliance with regulations or standards to be tracked and would ensure that the regulator would have sight of non-notified large training runs. Finally, the other key input into training AI models – data – should adhere to documentation obligations, similarly to FDA QMS procedures.

Foundation model developer layer oversight

Decisions taken early in the development process have significant implications downstream. For example, models (pre-)trained on fundamental human rights values produce outputs that are less structurally harmful.[174] To reduce risk of harm as early as possible, critical decisions that shape performance across the supply chain should be documented as they are made, before wide-scale distribution, fine-tuning or application,

Third-party evidence generation and endpoints

The FDA model relies on third-party efficacy and safety evidence to prove ‘endpoints’ (targeted outcomes, jointly agreed between the FDA and developers before a clinical trial) as defined in standards or in an exploratory manner together with the FDA. This allows high-quality information on the pre-market processes for devices to be gathered and submitted to regulators.

Narrowly defined endpoints are very similar to one of the most commonly cited interventions in the AI governance space: technical audits.[175] A technical audit is ‘a narrowly targeted test of a particular hypothesis about a system, usually by looking at its inputs and outputs – for instance, seeing if the system performs differently for different user groups’. Such audits have been suggested by many AI developers and researchers and by civil society.[176]

Regulators should therefore develop – or support the AI ecosystem to develop – benchmarks and metrics to assess the capabilities of foundation models, and possibly thresholds that a model would have to meet before it could be placed on the market. This would help standardise the approach to third-party compliance with evidence and measurement requirements, as under the FDA, and establish a culture of safety in the sector.

Clinical trials

In the absence of narrowly defined endpoints and in cases of uncertainty, the FDA works with developers and third-party experts to enable more exploratory scrutiny as part of trials and approvals. Some of these trials are based on iterative risk management and explorative auditing, and on small-scale deployment to facilitate ‘learning by doing’ on safety issues. This informs what monitoring is needed, provides iterative advice and leads to learning being embedded in regulations afterwards.

AI regulators could use similar mechanisms, such as (regulatory) sandboxes. This would involve pre-market, small-scale deployment of AI models in real-world but controlled conditions, with regulator oversight.

This could be done using a representative population for red-teaming, expert ‘adversarial’ red-teamers (at the foundation model developer stage), or sandboxing more focused on foreseeable or experimental applications and how they interact with end users. In some jurisdictions, existing regulatory obligations could be used as the endpoint and offer presumptions of conformity – and therefore market access – after sandbox testing (as in the EU AI Act).

It will take work to develop a method and an ecosystem of independent experts who can work on third-party audits and sandboxes for foundation models. But this is a challenge the FDA has met, as have other sectors such as aviation, motor vehicles and banking.[177] An approach like the one described above has been used in aviation to monitor and document incidents and devise risk mitigation strategies. This helped to encourage a culture of safety in the industry, reducing fatality risk by 83 per cent between 1998 and 2008 (at the same time as a five per cent annual increase in passenger kilometres flown).[178]

Many organisations already exist that can service this need in the AI space (for example, Eticas AI, AppliedAI, Algorithmic Audit, Apollo Research), and more are likely to be set up.[179]

An alternative to sandboxes is to consider structured access for foundation models, at least until it can be proven that a model is safe for wide-scale deployment.[180] This would be an adaptation of the FDA’s approach to clinical trials, which allows experimentation with a limited number of people when the technology has a wide spectrum of uses (for example, gene editing) or when the risks are unclear, to get insights while preventing any harms that arise from proliferation.

Applied to AI, this could entail a staged release process – something leading AI researchers have already advocated for. This would involve model release to a small number of people (for example, vetted researchers) so that ‘beta’ testing is not done on the whole population via mass deployment.

Internal testing and disclosure of ‘adverse events’

Another mechanism used at the development stage by the FDA is internal testing and mandatory disclosure of ‘adverse events’. Regulators could impose similar obligations on foundation model developers, requiring internal audits and red teaming[181] and the disclosure of findings to regulators. Again, these approaches have been suggested by leading AI developers.[182] They could be made more rigorous by coupling them with mandatory disclosure, as under the FDA regime.

The AI governance equivalent of reporting ‘adverse effects’ might be incident monitoring.[183] This would involve a ‘systematic approach to the collection and dissemination of incident analysis to illuminate patterns in harms caused by AI’.[184] The approach could be strengthened further by including ‘near-miss’ incidents.[185]

In developing these proposals, however, it is important to bear in mind challenges faced in the life sciences sector regarding how to make adverse effect reporting suitably prescriptive. For example, clear indicators for what to report need to be established so that developers cannot claim ignorance and underreport.

However, it is not possible to foresee all potential effects of a foundation model. As a result, there needs to be some flexibility in incident reporting as well as penalties for not reporting. Medical device regulators in the UK have navigated this by providing high-level examples of indirect harms to look out for, and examples of the causes of these harms.[186] In the USA, drug and device developers are liable to report larger-scale incidents, enforced by the FDA through, for example, fines. If enacted effectively, this kind of incident reporting would be a valuable foresight mechanism for identifying emergent harms.

A pre-market approval gate for foundation models

After the foundation model developer layer, regulators should consider a pre-market approval gate (as used by the FDA) at the point just before the model is made widely available and accessible for use by other businesses and consumers. This would build on the mandatory disclosure obligations at the data and compute layers and involve submitting all documentation compiled from third-party audits, internal audits, red teaming and sandbox testing. It would be a rigorous regime, similar to the FDA’s use of QMS, third-party efficacy evidence, adverse event reporting and clinical trials.

AI regulators should ensure that documentation and testing practices are standardised, as they are in FDA oversight. This would ensure that high-value information is used for market approval at the optimal time, to minimise the risk of potential downstream harms before a model is released onto the market.

This approach also depends on developing adequate benchmarks and standards. As a stopgap, approval gates could initially be based on transparency requirements and the provision of exploratory evidence. As benchmarks and standards emerged over time, the evidence required could be more clearly defined.

Such an approval gate would be consistent with one of the key risk-reducing features of the FDA’s approach: putting the burden of proof on developers. Many of the concerns around third-party audits of foundation models (in the context of the EU AI Act) centre on the lack of technological expertise beyond AI labs. A pre-market approval gate would allow AI regulators to specify what levels of safety they expect before a foundation model can reach the market, but the responsibility for proving safety and reliability would be placed on the experts who wish to bring the model to market.

In addition, the approval gate offers the regulator and accredited third parties the chance to learn. As the regulator learns – and the technology develops – approval gates could be updated via binding guidance (rather than legislative changes). This combination of ‘intervention and reflection’ has ‘been shown to work in safety-critical domains such as health’.[187] Regulators and other third parties should cascade this learning downstream, for example, to parties who build on top of the foundation model. This is a key risk-reducing feature of the FDA’s approach: the ‘approvers’ and others in the ecosystem become more capable and more aware of safe use and risk mitigation.

While the burden of proof would be primarily on developers (who may use third parties to support in evidence creation), approval would still depend on the regulator. Another key lesson from FDA processes is that the regulator should bring in support from independent experts in cases of uncertainty, via a committee of experts, consumer and industry representatives, and patient representatives. This is important, as the EU’s regulatory regime for AI has been criticised for a lack of multi-stakeholder governance mechanisms, including ‘effective citizen engagement’.[188]

Indeed, many commercial AI labs say that they want avenues for democratic oversight and public participation (for example, OpenAI and Anthropic’s participation in ‘alignment assemblies’,[189] which seek public opinion to inform, for example, release criteria) but are unclear on how to establish them.[190] Introducing ways to engage stakeholders in cases of uncertainty as part of the foundation model approval process could help to address this. It would give a voice to those who could be affected by models with potentially societal-level implications, in the same way patients are given a voice in FDA review processes for SaMD. It might also help address one of the limitations of the FDA: an overreliance on industry expertise in some novel areas.

To introduce public participation in foundation model oversight in a meaningful way, it would be important to consider the approach to engagement that is suitable to help to identify risks.

One criteria to consider is who should be involved, with options ranging from a representative panel or jury of members of the public to panels formed of members of the public at higher risk of harm or marginalisation.

Another criteria to consider relates to the depth of engagement. The depth of engagement is often framed as a spectrum from low involvement, such as public consultations, all the way to deeper processes that involve partnership in decision-making.[191]

A third criteria to consider is the method of engagement. This would depend on decisions related to who should be involved and to what extent. For example, surveys or focus groups are common in consultative exercises, workshops can enable more involvement whereas panels and juries allow for deeper engagement which can result in its members proposing recommendations. In any case it will be important to consider whose voices, experiences and potential harms will be included or missed, and ensure those less represented or at more risk of harms are part of the process.

Finally, there are ongoing debates about whether pre-market approval should be applied to all foundation models, or ‘tiered’ to ensure those with the most potential to impact society are subject to greater oversight.

While answering this question is beyond the scope of this paper, it seems important that both ex ante and ex post metrics are considered when establishing which models belong in which tier. The former might include, for example, measurement of modalities, the generality of the base model, the distribution method and the potential for adaptation of the model. The latter could include the number of downstream applications built on the model, the number of users across applications and how many times the model is being queried. Any regulator must have the power and capacity to update the makeup of tiers in a timely fashion as and when these metrics shift.

Application layer oversight

Following the AI supply chain, a foundation model is made available and distributed via the ‘host’ layer, by either the model provider (API access) or a cloud service provider (for example, Hugging Face, which hosts models for download).

Some argue that this layer should also have some responsibility for the safe development and distribution of foundation models (for example, through KYC checks, safety testing before hosting or take-down obligations in case of harm). But there is a reason why regulators have focused primarily on developers and deployers: they have the most control over decisions affecting risk origin and safety levels. For this reason, we also focus on interventions beyond the host layer.

However, a minimal set of obligations on host layer actors (such as cloud service providers or model hosting platforms) is necessary, as they could play a role in evaluating model usage, implementing trust and safety policies to remove models that have demonstrated or are likely to demonstrate serious risks, and flagging harmful models to regulators when it is not in their power to take them down. This is beyond the scope of this paper, and we suggest that the responsibilities of the host layer are addressed in further research.

Once a foundation model is on the market and it is fine-tuned, built upon or deployed by downstream users, its risk profile becomes clearer. Regulatory gates and product safety checks are introduced by existing regulators at this stage, for example in healthcare, automotives or machinery (see UK regulation of large language models – LLMs – as medical devices, or the EU AI Act’s regulation of foundation models deployed in ‘high-risk’ areas). These are useful regulatory endpoints that should help to reduce risk and harm proliferation.

However, there are still lessons to be learned at the application layer from the FDA model. Many of the mechanisms used at the foundation model developer layer could be used at this layer, but with endpoints defined based on the risk profile of the area of deployment. This could take the form of third-party audits based on context-specific standards, or sandboxes including representative users based on the specific setting in which the AI system will be used.

Commercial off-the-shelf software (COTS) in critical environments

One essential mechanism for the application layer is a deployment risk assessment. Researchers have proposed that this should involve a review of ‘(a) whether or not the model is safe to deploy, and (b) the appropriate guardrails for ensuring the deployment is safe’.[192] This would serve as an additional gate for context-specific risks and is similar to the FDA’s rules for systems that integrate COTS in severe-risk environments. Under these rules, additional approval is needed unless the COTS is approved for use in that context.

A comparable AI governance regime could allow foundation models that pass the earlier approval gate to be used downstream unless they are to be used in a high-risk or critical sector, in which case a new risk assessment would have to be undertaken and further regulatory approval sought.

For example, foundation models applied in critical energy system would be pre-approved as COTS. The final approval would still need to be given by energy regulators, but the process would be substantially easier for pre-approved COTS. The EU AI Act employs a similar approach: foundation models that are given a high-risk ‘intended purpose’ by downstream developers would have to undergo EU conformity assessment procedures.

Algorithmic impact assessments are a tool for assessing the possible societal impacts of an AI system before the system is in use (with ongoing monitoring often advised).[193] Such assessments should be undertaken when an AI system is to be deployed in a critical area such as cybersecurity, and mitigation measures put in place. This assessment should be coupled with a new risk assessment (in addition to that carried out by the foundation model developer), tailored to the area of deployment. This could involve additional context-specific guidance or questions from regulators, and the subsequent mitigation measures should address these.

Algorithmic impact and risk assessments are essential components at the application layer for high-risk deployments, and are very similar to the QMS imposed by the FDA throughout the development and deployment process. If they are done correctly, they can help to ensure that risk and impact mitigation measures are put in place to cover the lifecycle and will form the basis of post-market monitoring processes.

Some AI governance experts have suggested that these assessments should be complemented by user evaluation and testing – defined as assessments of user-centric effects of an application or system, its functionality and its restrictions, usually via user testing or surveys.[194] These evaluations could be tailored to the intended use context of an application, to ensure adequate representation of people potentially affected by it, and would be similar to the context-specific audit gates used by the FDA.

Post-market monitoring

Across sectors, one-off conformity checks have been shown to open the door for regulations to be ‘gamed’ or for emergent behaviours to be missed (see the Volkswagen emissions scandal).[195] These issues are even more likely to arise in relation to AI, given its dynamic nature, including the capacity to change throughout the lifecycle and for downstream users to fine-tune and (re)deploy models in complex environments. The FDA model shows how these risks can be reduced by having an ecosystem of reporting and foresight, and strong regulatory powers to act to mitigate risks.

MedWatch and MedSun reporting

Post-market monitoring by the FDA includes reporting mechanisms such as MedWatch and MedSun.[196] These mechanisms enable adverse event reporting for medical products, as well as monitoring of the safety and effectiveness of medical devices. Serious incidents are documented and their details made available to consumers.

In the AI space, there are similar proposals for foundation model developers, and for high-risk application providers building on top of these models, to implement ‘an easy complaint mechanism for users and to swiftly report any serious risks that have been identified’.[197] This should compel the upstream providers to take corrective action when they can, and to document and report serious incidents to regulators.

This is particularly important for foundation models that are provided via API, as in this case the provider maintains a huge degree of control over the underlying model.[198] This would mean that the provider would usually be able to mitigate or correct the emerging risk. It would also reduce the burden on regulators to document incidents or take corrective action. Leading AI developers have already committed to introducing a ‘robust reporting mechanism’ to allow ‘issues [that] may persist even after an AI system is released’ to be ‘found and fixed quickly’.[199] Regulators could consider putting such a regime in place for all foundation models.

Regulators could also consider detection mechanisms for generative foundation models. These would aim to ‘distinguish content produced by the foundation model from other content, with a high degree of reliability’, as recently proposed by the Global Partnership on AI.[200] Their report found that this is ‘technically feasible and would play an important role in reducing certain risks from foundation models in many domains’. Requiring this approach, at least for the largest model providers (who have the resources and expertise to develop detection mechanisms), could mitigate risks such as disinformation and subsequent undermining of the rule of law or democracy.

Other reporting mechanisms for foundation models have been proposed, which overlap with the FDA’s ‘usability and clinical data logging, and trend reporting’. For example, Stanford researchers have suggested that regulators should compel the disclosure of usage patterns, in the same manner of transparency reporting for online platforms.[201] This would greatly enhance understanding of ‘how foundation models are used (for example, for providing medical advice, preparing legal documents) to hold their providers to account’.[202]

Concern-based audits

Concern-based audits are a key part of the FDA’s post-market governance. They are triggered by real-world monitoring of consumers and impacts after approval. If concerns are identified, the FDA has strong enforcement mechanisms that allow it to access relevant data and documentation. The audits are rigorous and have been shown to have strong deterrence effects on negligent behaviour by drug companies.

Mechanisms for highlighting ‘concern’ in the AI space could include reporting mechanisms and ‘trusted flaggers’ – organisations that are formally recognised as  independent, and with the requisite expertise, for identifying and reporting concerns. People affected by the technologies could be given the right to lodge a complaint with supervisory authorities, such as an AI ombudsman, to support people affected by AI and increase regulators’ awareness of AI harms as they occur.[203] [204] . This should be complimented by a comprehensive remedies framework for affected persons based on effective avenues for redress, including a right to lodge a complaint with a supervisory authority, judicial remedy and an explanation of individual decision-making

Feedback loops

Post-market monitoring is a critical element of the FDA’s risk-reducing features. It is based on mechanisms to facilitate feedback loops between developers, regulators, practitioners and patients. As discussed above, Unique Device Identifiers at the pre-registration stage support monitoring and traceability throughout the lifecycle, while ongoing review of quality, safety and efficacy data via QMS further supports this. Post-market monitoring for foundation models should similarly facilitate such feedback loops. These could include customer feedback, usability and user prompt screening, human-AI interaction evaluations and cross-company reporting of trends and structural indicators. Beyond feedback to the provider, affected persons should also be able to report incidents directly to a regulatory authority, particularly where harm arises, or is reasonably foreseeable to arise.

Software of Unknown Provenance (SOUP)

In the context of safety-critical medical software, SOUP is software that has been developed with an unknown development process or methodology, or which has unknown safety-related properties. The FDA monitors for SOUP by compelling the documentation of pre-specified post-market software adaptations, meaning that the regulator can validate changes to a product’s performance and monitor for issues and unforeseen use in software.[205]

Requiring similar documentation and disclosure of software and cybersecurity issues after deployment of a foundation model would be a minimum sensible safeguard for both risk mitigation and regulator learning. This could also include sharing issues back upstream to the model developer so that they can take corrective action or update testing and risk profiles.

The approach should be implemented alongside the obligations around internal testing and disclosure of adverse events for foundation models at the developer layer. Some have argued that disclosure of near misses should also be required (as it is in the aviation industry)[206] as an added incentive for safe development and deployment.

Another parallel with the monitoring of SOUP can be seen in AI governance proposals for measures around open-source foundation models. To reduce the unknown element, and for transparency and accountability reasons, application providers – or whoever makes the model or system available on the market – could be required to make it clear to affected persons when they are engaging with AI systems and what the underlying model is (including if it is open source), and to share easily accessible explanations of systems’ main parameters and any opt-out mechanisms or human alternatives available.[207] This would be the first step to both corrective action to mitigate risk or harm, and redress if a person is harmed. It is also a means to identify the use of untested underlying foundation models.

Finally, similar to the FDA’s use of documentation of pre-specified, post-market software adaptations, AI regulators could consider mandating that developers and application deployers document and share planned and foreseeable changes downstream. This would have to be defined clearly and standardised by regulators to a proportionate level, taking into consideration intellectual property and trade secret concerns, and the risk of the system being ‘gamed’ in the context of new capabilities. In other sectors, such as aviation, there have been examples of changes being underreported to avoid new costs, such as retraining.[208] But a similar regime would be particularly relevant for AI models and systems, given their unique ability to learn and develop throughout their lifecycle.

The need for documenting or pre-specifying post-market adaptations of foundation models could be based on capabilities evaluations and risk assessments, so that new capabilities or risks that arise post-deployment are reported to the ecosystem. Significant changes could trigger additional safety checks, such as third-party (‘concern-based’, in FDA parlance) audits or red teaming to stress-test the new capabilities.

Investigative powers

The FDA’s post-market monitoring puts reporting obligations on providers and users, while underpinning this with strong investigative powers. It conducts ‘active surveillance’ (for example, under the Sentinel Initiative),[209] and it is legally empowered to check QMS and other documentation and logging data, request comprehensive evidence and conduct inspections.

Similarly, AI regulators should have powers to investigate foundation model developers and downstream deployers, such as for monitoring and learning purposes or when investigating suspected non-compliance. This could include off- and on-site inspections to gather evidence, to address the information asymmetries between AI developers and regulators, and to mitigate emergent risks or harms.

Such a regime would require adequate resources and sociotechnical expertise. Foundation models are a general-purpose technology that will increasingly form part of our digital infrastructure. In this light, there needs to be a recognition that regulators should be funded on a comparable level to other domains in which safety and public trust are paramount and where underlying technologies form important parts of national infrastructure – such as civil nuclear, civil aviation, medicines, and road and rail.[210]

Recalls, market withdrawals and safety alerts

The FDA uses recalls, market withdrawals and safety alerts when products are in violation of law. Recall can also be a voluntary action by manufacturers and distributors to meet their responsibility to protect public health and wellbeing from products that present risk or are otherwise defective.[211]

Some AI governance experts and standards bodies have called for foundation model developers to similarly establish standard criteria and protocols for when and how to restrict, suspend or retire a model from active use.[212] This would be based on monitoring by the original providers throughout the lifecycle for harmful impacts, misuse or security vulnerabilities (including leaks or otherwise unauthorised access).

Whistleblower protection

In the same way that the FDA mandates reporting, with associated whistleblower protections, of adverse events by employees, second-party clinical trial conductors and healthcare practitioners, AI regulators should protect whistleblowers (for example, academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators) who suspect breaches of law by a developer or deployer or an AI model or system. This protection should be developed in a way that learns from the pitfalls of whistleblower law in other sectors, which have led to ineffective uptake or enforcement. This includes ensuring breadth of coverage, clear communication of processes and protections, and review mechanisms.[213]

Recommendations and open questions

The FDA model of pre-approval and monitoring is an important inspiration for regulating novel technologies with potentially severe risks, such as foundation models.

This model entails risk-based mandates for pre-approval based on mandatory safety evidence. This works well when risks reliably originate and can be identified before proliferating or developing into harms.

The general-purpose nature of foundation models requires exploratory external scrutiny upstream in the supply chain, and targeted sector-specific approvals downstream.

Risks need to be identified and mitigated before they proliferate. This is especially difficult for foundation models.[214] Explorative approval gates have been ‘shown to work in safety-critical domains such as health’, due to the combination of ‘intervention and reflection’. Pre-approvals offer the FDA a mechanism for intervention, allowing most risks to be caught.

Another important feature of oversight is reflection. In health regulation, this is achieved through ‘iteration via guidance, rather than requiring legislative changes’.[215] This is a key consideration for AI regulators, who should be empowered (and compelled) to frequently update rules via binding guidance.

A continuous learning process to build suitable approval and monitoring regimes for foundation models is essential, especially at the model development layer. Downstream, there needs to be targeted scrutiny and approval for deployment through existing approval gates in specific application areas.

Effective oversight of foundation models requires recurring, independent evaluations and audits and access to information, placing the burden of proof on developers – not on civil society or regulators.

Literature reviews of other industries[216] show that this might be achieved through risk-based reviews by empowered regulators and third parties, tiered access for evaluators, mandatory pre-approvals, and treating foundation models like auditable products.

Our general principles for AI regulators are detailed in the section ‘Applying key features of FDA-style oversight to foundation models’.

Recommendations for AI regulators, developers and deployers

Data and compute layers oversight

  1. Regulators should compel pre-notification of, and information-sharing on, large training runs. Providers of compute for such training runs should cooperate with regulators on monitoring (by registering device IDs for microchips) and safety verification (KYC checks and tracking).
    • FDA inspiration: pre-submissions, Unique Device Identifiers (UDIs)
  2. Regulators should compel mandatory model and dataset documentation and disclosure for the pre-training and fine-tuning of foundation models,[217] [218] [219] including a capabilities evaluation and risk assessment within the model card for the (pre-) training stage and throughout the lifecycle.[220] Dataset documentation should focus on a description of training data that is safe to be made public (what is in it, where was it collected, under what licence, etc.), coupled with structured access for regulators or researchers to the training data itself (while adhering to strict levels of cybersecurity, as even this access carries security risks).
    • FDA inspiration: Quality Management System (QMS)

Foundation model layer oversight

  1. Regulators should introduce a pre-market approval gate for foundation models, as this is the most obvious point at which risks can proliferate. In any jurisdiction, defining the approval gate will require significant work, with input from all relevant stakeholders. Clarity should be provided about which foundation models would be subject to this stricter form of pre-market approval. Based on the FDA findings, this gate should at least entail submission of evidence to prove safety and market readiness based on internal testing and audits, third-party audits and (optional) sandboxes. Making models available on a strict and controllable basis via structured access could be considered as a temporary fix until an auditing ecosystem and/or sandboxes are developed.Depending on the jurisdiction in question and existing or foreseen pre-market approval for high-risk use, an additional approval gate should be introduced using endpoints (outcomes or thresholds to be met to determine efficacy and safety) based on the risk profile of the area of deployment for the application layer.
    • FDA inspiration: QMS, third-party efficacy evidence, adverse events reporting, clinical trials
  2. Third-party audits should be required as part of the pre-market approval process, and sandbox testing (as described in Recommendation 3) in real-world conditions should be considered. These should consist of – at least – a third-party audit based on context-specific standards. Alternatively, regulators could use sandboxes that include representative users (based on the setting in which the AI system will be used) to check conformity before deployment. Results should be documented and disclosed to the regulator.
    • FDA inspiration: third-party efficacy evidence, adverse events reporting, clinical trials
  3. Developers should enable detection mechanisms for outputs of generative foundation models.[221] Developers and deployers should make clear to affected persons and end users when they are engaging with AI systems. As an additional safety mechanism, they should build in detection mechanisms to allow end users and affected persons to ‘distinguish content produced by the foundation model from other content, with a high degree of reliability’.[222] Such detection mechanisms are important both as a defensive tool (for example, tagging AI-generated content) and also to enable study of model impacts. AI regulators could consider making this mandatory, at least for the most significant models (developers of which may have the resources and expertise to develop detection mechanisms).
    • FDA inspiration: post-market safety monitoring
  4. As part of the initial risk assessment, developers and deployers should document and share planned and foreseeable modifications throughout the foundation model’s supply chain. A substantial modification that falls outside this scope should trigger additional safety checks, such as third-party (‘concern-based’) audits or red teaming to stress test the new capabilities.
    • FDA: concern-based audits, pre-specified change control plans
  5. Foundation model developers, and subsequently high-risk application providers building on top of these models, should enable an easy complaint mechanism for users to swiftly report any serious risks that have been identified. This should compel upstream providers to take corrective action when they can, and to document and report serious incidents to regulators. These feedback loops should be strengthened further by awareness-raising across the ecosystem about reporting, and sharing lessons learned on what has been reported and corrective actions taken.
    • FDA Inspiration: MedWatch and MedSun programs

Application layer oversight

  1. Existing sector-specific agencies should review and approve the use of foundation models for a set of use cases, by risk level. Deployers of foundation models in high-risk or critical areas (to be defined in each jurisdiction) should undertake a deployment risk assessment to review ‘(a) whether or not the model is safe to deploy, and (b) the appropriate guardrails for ensuring the deployment is safe’.[223] Upstream developers should cooperate and share information with downstream customers to conduct this assessment. If the model is deemed safe, they should also undertake an algorithmic impact assessment to assess possible societal impacts of an AI system before the system is in use (with ongoing monitoring often advised).[224] Results should be documented and disclosed to the regulator.
    • FDA inspiration: COTS (commercial off-the-shelf software), QMS
  2. Downstream application providers should make clear to end users and affected persons what the underlying foundation model is, including if it is an open-source model, and provide easily accessible explanations of systems’ main parameters and any opt-out mechanisms or human alternatives available.[225]
    • FDA inspiration: Software of Unknown Provenance (SOUP)

Post-market monitoring

  1. An AI ombudsman should be considered, to receive and document complaints or known instances of harms of AI. This would increase regulators’ visibility of AI harms as they occur. It could be piloted initially for a relatively modest investment, but if successful it could dramatically improve redress for AI harms and the functionality of an AI regulatory framework as a whole.[226] An ombudsman should be complimented by a comprehensive remedies framework for affected persons based on clear avenues for redress.
    • FDA inspiration: concern-based audits, reporting of adverse events
  2. Developers and deployers should provide documentation and disclosure of incidents throughout the supply chain, including near misses.[227] This could be strengthened by requiring downstream developers (building on top of foundation models at the application layer) and end users (for example, medical or education professionals) to also disclose incidents.
    • FDA inspiration: reporting of adverse events
  3. Foundation model developers and downstream deployers should be compelled to restrict, suspend or retire a model from active use if harmful impacts, misuse or security vulnerabilities (including leaks or other unauthorised access) arise. Such decisions should be based on standardised criteria and processes.[228]
  4. Host layer actors (for example, cloud service providers or model hosting platforms) should also play a role by evaluating model usage, implementing trust and safety policies to remove models that have demonstrated or are likely to demonstrate serious risks, and flagging harmful models to regulators when it is not in their power to take them down.
    • FDA inspiration: recalls, market withdrawals and safety alerts
  5. AI regulators should have strong powers to investigate and require evidence generation from foundation model developers and downstream deployers. This should be strengthened by whistleblower protections for anyone involved in the development or deployment process who raises concerns about risks to health or safety. This would support regulatory learning and act as a strong deterrent to rule breaking. Powers should include off- and on-site inspections and evidence-gathering mechanisms to address the information asymmetries between AI developers and regulators and to mitigate emergent risks or harms. Consideration should be given to the trade-offs between intellectual property, trade secret and privacy protections (and whether these could serve as undue legal loopholes) and the safety-enhancing features of investigative powers: regulators considering the FDA model across jurisdictions should clarify such legally contentious issues.
    • FDA inspiration: wide information access, active surveillance
  6. Any regulator should be funded to a level comparable to (if not greater than) regulators in other domains where safety and public trust are paramount and where underlying technologies form part of national infrastructure – such as civil nuclear, civil aviation, medicines, or road and rail.[229] Given the level of resourcing required, this may be partly funded by AI developers over a certain threshold (to be defined the regulatorfor example, annual turnover)– as is the case with the FDA[230] and the EU’s European Medicines Agency (EMA).[231] Such an approach is important, to ensure that regulators have a source of funding that is stable and secure, and (importantly) independent from political decisions or reprioritisation.
    • FDA inspiration: mandatory fees
  7. The law around AI liability should be clarified to ensure that legal and financial liability for AI risk is distributed proportionately along foundation model supply chains. Liability regimes vary between jurisdictions and a thorough assessment is beyond the scope of this paper, but across sectors regulating complex technology, clarity in liability is a key driver of compliance within companies and uptake of the technology. For example, lack of clarity as to end user liability in clinical AI is a major reason that uptake has been limited. Liability will be even more contentious in the foundation model supply chain when applications are developed on top of foundation models, and this must be addressed accordingly in any regulatory regime for AI.

Overcoming the limitations of the FDA in a prospective AI regulatory regime

Having considered how the risk-reducing mechanisms of the FDA might be applied to AI governance, it makes sense to also acknowledge the limitations of the FDA regime, and to consider how they might also be counterbalanced in a prospective AI regulatory regime.

The first limitation is the lack of coverage for systemic risks, as the FDA focuses on risk to life. Systemic risks are prevalent in the AI space.[232] AI researchers have conceptualised systemic risk as societal harm and point out that it is similarly overlooked. Proposals to address this include: ‘(1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm’.[233] We have expanded on and included these mechanisms in our recommendations in the hope that they can overcome limitations centring on systemic risks.

The second limitation is the high cost of compliance and subsequent limited number of developers, given that the stringent approval requirements are challenging for smaller players to meet. Inspiration for how to counterbalance this may be gleaned from the EU’s FDA equivalent, the EMA. It offers tailored support to small and medium-sized enterprises (SMEs), via an SME Office that provides regulatory assistance for reduced fees. This has contributed to the approval rates for SME applicants increasing from 40 per cent in 2016 to 89 per cent in 2020.[234] Similarly, the UK’s NHS has an AI & Digital Regulations Service that gives guidance and advice on navigating regulation, especially for SMEs that do not have compliance teams.[235]

Streamlined regulatory pathways could be considered to further reduce burdens for AI models or systems with demonstrably promising potential (for example, for scientific discovery). The EMA has done this through its Advanced Therapy Medicine Products process, which streamlines approval procedures for certain medicines.[236]

Similar support mechanisms could be considered for SMEs and startups, as well as streamlined procedures for demonstrably beneficial AI technology, under an AI regulator.

The third limitation is the FDA’s overreliance on industry in some novel areas, because of a lack of expertise. Lack of capacity for effective regulatory oversight has been voiced as a concern in the AI space, too.[237] Some ideas exist for how to overcome this, such as the Singaporean AI Office’s use of public–private partnerships to utilise industry talent without being reliant on it.[238]

The EMA has grappled with similar challenges. Like the FDA, it overcomes knowledge gaps by having a pool of scientific experts, but it seeks to prevent conflict of interest by leaning substantially on transparency: the EMA Management Board and experts cannot have any financial or other interests in the industry they are overseeing, and the curricula vitae, declarations of interest and risk levels for these experts are publicly available.[239]

Taken together, these solutions might be considered to reduce the chances of the limitations of FDA governance being reproduced by an AI regulator.

Open questions

The proposed FDA-style oversight approach for foundation models is far from a detailed ready-to-implement guideline for regulators. We acknowledge the small sample of interviewees for this paper, and that many of our interview subjects may strongly support an FDA model for regulation. For further validation and detailing of the claims in this paper, we are especially interested in future work on three sets of questions.

Understanding foundation model risks

  • Across the foundation model supply chain, where exactly do foundation model risks[240] originate and proliferate, and which players need to be tasked with their mitigation? How can unknown risks be discovered?
  • How effective will exploratory and targeted scrutiny be in identifying different kinds of risks for foundation models?
  • Do current and future foundation models need to be categorised along risk tiers? If so, how? Do all foundation models need to go through an equally rigorous process of regulatory approvals?

Detailing FDA-style oversight for foundation models to foster ‘safe innovation’

  • For the FDA, what aspects of regulatory guidance were easier to prescribe, and to enforce in practice?
  • How do FDA-style oversight or specific oversight features address each risk of foundation models in detail?
  • How can FDA-style oversight for foundation models be integrated into international oversight regimes?[241]
  • What do FDA-style review, audit and inspection processes look like, step by step, for foundation models?
  • How can the limitations of the FDA approach be addressed in every layer of the foundation model supply chain? How can difficult-to-detect systemic risks be mitigated? How can the stifling of innovation, especially among SMEs, be avoided?
  • Are FDA-style product recalls feasible for a foundation model or a downstream applications of foundation models?
  • What role should third parties in the host layer play? While they have less remit over risk origin, might they have significant control over, for example, risk mitigation?
  • What are the implications of FDA-style oversight for foundation models on their accessibility, affordability and sharing their benefits?
  • How would FDA-style pre-approvals be enforced for foundation models, for example, for product recalls?
  • How is liability distributed in an FDA-style oversight approach?
  • Why is the FDA able to be stringent/cautious? How do political incentives on congressional oversight and aversion to risk of harms of medication apply to foundation model regulation?
  • What can be learned from the political economy of the FDA and its reputation?
  • In each jurisdiction (for example, USA, UK, EU), how does an FDA-style approach for AI fit into the political economy and institutional landscape?
  • In each jurisdiction, how should liability law be adapted for AI to ensure that legal and financial liability for AI risk is distributed proportionately along foundation model supply chains?

Learnings from other regulators

  • What can be learned from regulators in public health in other jurisdictions, like the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), EU’s EMA and Health Canada? [242] [243] [244]
  • How can other non-health regulators, such as the US Federal Aviation Administration  or National Highway Traffic Safety Administration, inspire foundation model oversight?[245]
  • How can novel forms of oversight and audits, such as cross-audits or joint audits, be coupled with processes from existing regulators?

Acknowledgements

This paper was co-authored by Merlin Stein (PhD candidate at the University of Oxford) and Connor Dunlop (EU Public Policy Lead at the Ada Lovelace Institute) with input from Andrew Strait.

Interviewees

The 20 interviewees included experts on FDA oversight and foundation model evaluation processes from industry, academia, and thinktanks, as well as government officials. This included three interviews with leading AI labs, two with third-party AI evaluators and auditors, nine with civil society organisations, and six with medical software regulation experts, including former FDA leadership and clinical trial leaders.

The following participants gave us permission to mention their names and affiliations (in alphabetical order). Ten interviewees not listed here did not provide their permission. Respondents do not represent any organisations they are affiliated with. They chose to add their name after the interview and were not sent a draft of this paper before publication. The views expressed in this paper are of the Ada Lovelace Institute.

  • Kasia Chmielinski, Berkman Klein Center for Internet & Society
  • Gemma Galdón-Clavell, Eticas Research & Consulting
  • Gilian Hadfield, University of Toronto, Vector Institute and OpenAI, independent contractor
  • Sonia Khatri, independent SaMD and medical device regulation expert
  • Igor Krawczuk, Lausanne Institute of Technology
  • Sarah Myers West, AI Now Institute
  • Noah Strait, Scientific and Medical Affairs Consulting
  • Robert Trager, Blavatnik School of Government, University of Oxford, and Centre for the Governance of AI
  • Alexandra Tsalidas, Harvard Ethical Intelligence Lab
  • Rudolf Wagner, independent senior executive advisor for SaMD

Reviewers

We are grateful for helpful comments and discussions on this work from:

  • Ashwin Acharya
  • Markus Anderljung
  • Clíodhna Ní Ghuidhir
  • Xiaoxuan Liu
  • Deborah Raji
  • Sarah Myers West
  • Moritz von Knebel

Footnotes

[1] ‘Voluntary AI Commitments’, <www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf>, accessed October 12, 2023

[2] ‘An EU AI Act that works for people and society’ (Ada Lovelace Institute 2023) <www.adalovelaceinstitute.org/policy-briefing/eu-ai-act-trilogues/> accessed 12 October 2023

[3] The factors that determine AI risk are not purely technical – sociotechnical determinants of risk are crucial. Features such as the context of deployment, the competency of the intended users, and the optionality of interacting with an AI system must all be considered, in addition to specifics of the data and AI model deployed. OECD, “OECD Framework for the Classification of AI Systems,” OECD Digital Economy Papers, no. 323 (February 2022), https://doi.org/10.1787/cb6d9eca-en.

[4] Markus Anderljung and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718> accessed 15 September 2023.

[5] ‘A Law for Foundation Models: The EU AI Act Can Improve Regulation for Fairer Competition – OECD.AI’ <https://oecd.ai/en/wonk/foundation-models-eu-ai-act-fairer-competition> accessed 15 September 2023.

[6] ‘Stanford CRFM’ <https://crfm.stanford.edu/report.html> accessed 15 September 2023.

[7] ‘While only a few well-resourced actors worldwide have released general purpose AI models, hundreds of millions of end-users already use these models, further scaled by potentially thousands of applications building on them across a variety of sectors, ranging from education and healthcare to media and finance.’ Pegah Maham and Sabrina Küspert, ‘Governing General Purpose AI’.

[8] Draft standards here are a very good example of the value of dataset documentation (i.e. declaring metadata) on what is used in training and fine-tuning models. In theory, this could also all be kept confidential as commercially sensitive information once a legal infrastructure is in place www.datadiversity.org/draft-standards

[9] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596

[10] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daum and Crawford, (2021), Datasheets for Datasets, https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract (Accessed: 27 February 2023) Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918;

[11] In the UK, the Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff). An EU-level agency for AI should be funded well beyond this, given that the EU is more than six times the size of the UK.

[12] Algorithmic Accountability Act of 2022 <2022-02-03 Algorithmic Accountability Act of 2022 One-pager (senate.gov)> accessed 15 September 2023.

[13] Lingjiao Chen, Matei Zaharia and James Zou, ‘How Is ChatGPT’s Behavior Changing over Time?’ (arXiv, 1 August 2023) <http://arxiv.org/abs/2307.09009> accessed 15 September 2023.

[14] ‘AI-Generated Books on Amazon Could Give Deadly Advice – Decrypt’ <https://decrypt.co/154187/ai-generated-books-on-amazon-could-give-deadly-advice> accessed 15 September 2023.

[15] ‘Generative AI for Medical Research | The BMJ’ <www.bmj.com/content/382/bmj.p1551#> accessed 15 September 2023.

[16] Emanuel Maiberg ·, ‘Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale’ (404 Media, 22 August 2023) <www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/> accessed 15 September 2023.

[17] Belle Lin, ‘AI Is Generating Security Risks Faster Than Companies Can Keep Up’ Wall Street Journal (10 August 2023) <www.wsj.com/articles/ai-is-generating-security-risks-faster-than-companies-can-keep-up-a2bdedd4> accessed 15 September 2023.

[18] Sarah Carter et. al., <The Convergence of Artificial Intelligence and the Life Sciences www.nti.org/analysis/articles/the-convergence-of-artificial-intelligence-and-the-life-sciences/> accessed 2 November 2023

[19] Dual Use of Artificial Intelligence-powered Drug Discovery – PubMed (nih.gov)

[20] Haydn Belfield, ‘Great British Cloud And BritGPT: The UK’s AI Industrial Strategy Must Play To Our Strengths’ (Labour for the Long Term 2023)

[21] Thinking About Risks From AI: Accidents, Misuse and Structure | Lawfare (lawfaremedia.org)

[22] Governing General Purpose AI — A Comprehensive Map of Unreliability, Misuse and Systemic Risks | Stiftung Neue Verantwortung (SNV) (stiftung-nv.de); Anthropic \ Frontier Threats Red Teaming for AI Safety

[23] www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks

[24] ‘Mission critical: Lessons from relevant sectors for AI safety’ (Ada Lovelace Institute 2023) <https://www.adalovelaceinstitute.org/policy-briefing/ai-safety/> accessed 23 November 2023

[25] ‘EU AI Standards Development and Civil Society Participation’ <www.adalovelaceinstitute.org/event/eu-ai-standards-civil-society-participation/> accessed 18 September 2023.

[26] Algorithmic Accountability Act of 2022 <2022-02-03 Algorithmic Accountability Act of 2022 One-pager (senate.gov)> accessed 15 September 2023.

[27] ‘The Problem with AI Licensing & an “FDA for Algorithms” | The Federalist Society’ <https://fedsoc.org/commentary/fedsoc-blog/the-problem-with-ai-licensing-an-fda-for-algorithms> accessed 15 September 2023.

[28] ‘Clip: Amy Kapczynski on an Old Idea Getting New Attention–an “FDA for AI”. – AI Now Institute’ <https://ainowinstitute.org/general/clip-amy-kapczynski-on-an-old-idea-getting-new-attention-an-fda-for-ai> accessed 15 September 2023.

[29] Dylan Matthews, ‘The AI Rules That US Policymakers Are Considering, Explained’ (Vox, 1 August 2023) <www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable> accessed 15 September 2023; Belenguer L, ‘AI Bias: Exploring Discriminatory Algorithmic Decision-Making Models and the Application of Possible Machine-Centric Solutions Adapted from the Pharmaceutical Industry’ (2022) 2 AI and Ethics 771 <https://doi.org/10.1007/s43681-022-00138-8>

[30] ‘Senate Hearing on Regulating Artificial Intelligence Technology | C-SPAN.Org’ <www.c-span.org/video/?529513-1/senate-hearing-regulating-artificial-intelligence-technology> accessed 15 September 2023.

[31] ‘AI Algorithms Need FDA-Style Drug Trials | WIRED’ <www.wired.com/story/ai-algorithms-need-drug-trials/> accessed 15 September 2023.

[32] ‘One of the “Godfathers of AI” Airs His Concerns’ The Economist <www.economist.com/by-invitation/2023/07/21/one-of-the-godfathers-of-ai-airs-his-concerns> accessed 15 September 2023.

[33] ‘ISVP’ <www.senate.gov/isvp/?auto_play=false&comm=judiciary&filename=judiciary072523&poster=www.judiciary.senate.gov/assets/images/video-poster.png&stt=> accessed 15 September 2023.

[34] ‘Regulations.Gov’ <www.regulations.gov/docket/NTIA-2023-0005/comments> accessed 15 September 2023.

[35] Guidelines for Artificial Intelligence in Medicine: Literature Review and Content Analysis of Frameworks – PMC (nih.gov)

[36] ‘Foundation Models for Generalist Medical Artificial Intelligence | Nature’ <www.nature.com/articles/s41586-023-05881-4> accessed 15 September 2023.

[37] Anthropic admitted openly that “we do not know how to train systems to robustly behave well“. ‘Core Views on AI Safety: When, Why, What, and How’ (Anthropic) <www.anthropic.com/index/core-views-on-ai-safety> accessed 18 September 2023.

[38] NTIA AI Accountability Request for Comment <www.regulations.gov/docket/NTIA-2023-0005/comments> accessed 18 September 2023.

[39] Inioluwa Deborah Raji and others, ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ (arXiv, 9 June 2022) <http://arxiv.org/abs/2206.04737> accessed 18 September 2023.

[40] See Appendix for a list of interviewees

[41] Michael Moor and others, ‘Foundation Models for Generalist Medical Artificial Intelligence’ (2023) 616 Nature 259.

[42] Lewis Ho and others, ‘International Institutions for Advanced AI’ (arXiv, 11 July 2023) <http://arxiv.org/abs/2307.04699> accessed 18 September 2023.

[43] Center for Devices and Radiological Health, ‘Medical Device Single Audit Program (MDSAP)’ (FDA, 24 August 2023) <www.fda.gov/medical-devices/cdrh-international-programs/medical-device-single-audit-program-mdsap> accessed 18 September 2023.

[44] Center for Drug Evaluation and Research, ‘Conducting Clinical Trials’ (FDA, 2 August 2023) <www.fda.gov/drugs/development-approval-process-drugs/conducting-clinical-trials> accessed 18 September 2023.

[45] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.
Alternatively: ‘any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g.,fine-tuned) to a wide range of downstream tasks’.

Bommasani R and others, ‘On the Opportunities and Risks of Foundation Models’ (arXiv, 12 July 2022) <http://arxiv.org/abs/2108.07258>

[46] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.

[47] Ibid.

[48] AWS, ‘Fine-Tune a Model’ <https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-fine-tune.html> accessed 3 July 2023

[49] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.

[50] ‘ISO – ISO 9001 and Related Standards — Quality Management’ (ISO, 1 September 2021) <www.iso.org/iso-9001-quality-management.html> accessed 2 November 2023.

[51] 14:00-17:00, ‘ISO 13485:2016’ (ISO, 2 June 2021) <www.iso.org/standard/59752.html> accessed 2 November 2023.

 [52] OECD, ‘Risk-Based Regulation’ in OECD, OECD Regulatory Policy Outlook 2021 (OECD 2021) <www.oecd-ilibrary.org/governance/oecd-regulatory-policy-outlook-2021_9d082a11-en> accessed 18 September 2023.

[53] Center for Devices and Radiological Health, ‘International Medical Device Regulators Forum (IMDRF)’ (FDA, 15 September 2023) <www.fda.gov/medical-devices/cdrh-international-programs/international-medical-device-regulators-forum-imdrf> accessed 18 September 2023.

[54] Office of the Commissioner, ‘What We Do’ (FDA, 28 June 2021) <www.fda.gov/about-fda/what-we-do> accessed 18 September 2023.

[55] ‘FDA User Fees: Examining Changes in Medical Product Development and Economic Benefits’ (ASPE) <https://aspe.hhs.gov/reports/fda-user-fees> accessed 18 September 2023.

[56] ‘Premarket Approval (PMA)’ <www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpma/pma.cfm?id=P160009> accessed 18 September 2023.

[57] ‘Product Classification’ <www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPCD/classification.cfm?id=LQB> accessed 18 September 2023.

[58] Center for Devices and Radiological Health, ‘Et Control – P210018’ [2022] FDA <www.fda.gov/medical-devices/recently-approved-devices/et-control-p210018> accessed 18 September 2023.

[59] Note that only ~2% of SaMD are Class III, see Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis – The Lancet Digital Health and Drugs and Devices: Comparison of European and U.S. Approval Processes – ScienceDirect

[60] ‘Assessing the Efficacy and Safety of Medical Technologies (Part 4 of 12) (Princeton.Edu) – Google Search’ <www.google.com/search?q=Assessing+the+Efficacy+and+Safety+of+Medical+Technologies+(Part+4+of+12)+(princeton.edu)&rlz=1C1GCEA_enBE1029BE1030&oq=Assessing+the+Efficacy+and+Safety+of+Medical+Technologies+(Part+4+of+12)+(princeton.edu)&gs_lcrp=EgZjaHJvbWUyBggAEEUYOdIBBzM1N2owajSoAgCwAgA&sourceid=chrome&ie=UTF-8> accessed 18 September 2023.

[61] Ibid.

[62] For the purposes of this report, ‘effectiveness’ is used as a synonym of ‘efficacy’. In detail, effectiveness is concerned with the benefit of a technology under average conditions of use, whereas efficacy is the benefit under ideal conditions.

[63] ‘SAMD MDSW’ <www.quaregia.com/blog/samd-mdsw> accessed 18 September 2023.

[64] Office of the Commissioner, ‘The Drug Development Process’ (FDA, 20 February 2020) <www.fda.gov/patients/learn-about-drug-and-device-approvals/drug-development-process> accessed 18 September 2023.

[65] Eric Wu and others, ‘How Medical AI Devices Are Evaluated: Limitations and Recommendations from an Analysis of FDA Approvals’ (2021) 27 Nature Medicine 582.

[66] It can be debated whether this falls under the exact definition of SaMD as a stand-alone software feature, or as a software component of a medical device, but the lessons and process remain the same.

[67] SUMMARY OF SAFETYAND EFFECTIVENESS DATA (SSED) <www.accessdata.fda.gov/cdrh_docs/pdf21/P210018B.pdf> accessed 18 September 2023.

[68] A QMS is a standardised process for documenting compliance based on international standards (ISO 13485/820).

[69] Center for Devices and Radiological Health, ‘Overview of IVD Regulation’ [2023] FDA <www.fda.gov/medical-devices/ivd-regulatory-assistance/overview-ivd-regulation> accessed 18 September 2023.

[70] ‘When Science and Politics Collide: Enhancing the FDA | Science’ <www.science.org/doi/10.1126/science.aaw8093> accessed 18 September 2023.

[71] ‘Unique Device Identification System’ (Federal Register, 24 September 2013) <www.federalregister.gov/documents/2013/09/24/2013-23059/unique-device-identification-system> accessed 18 September 2023.

[72] ‘openFDA’ <https://open.fda.gov/data/faers/> accessed 10 November 2023.

[73] For example, Carpenter 2010, Hilts 2004, Hutt et al 2022

[74] ‘Factors to Consider Regarding Benefit-Risk in Medical Device Product Availability, Compliance, and Enforcement Decisions – Guidance for Industry and Food and Drug Administration Staff’.

[75] Center for Devices and Radiological Health, ‘510(k) Third Party Review Program’ (FDA, 15 August 2023) <www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/510k-third-party-review-program> accessed 18 September 2023.

[76] Office of Regulatory Affairs, ‘What Should I Expect during an Inspection?’ [2020] FDA <www.fda.gov/industry/fda-basics-industry/what-should-i-expect-during-inspection> accessed 18 September 2023.

[77] ‘Device Makers Can Take COTS, but Only with Clear SOUP’ <https://web.archive.org/web/20130123140527/http://medicaldesign.com/engineering-prototyping/software/device-cots-soup-1111/> accessed 18 September 2023.

[78] ‘FDA Clears Intellia to Start US Tests of “in Vivo” Gene Editing Drug’ (BioPharma Dive) <www.biopharmadive.com/news/intellia-fda-crispr-in-vivo-gene-editing-ind/643999/> accessed 18 September 2023.

[79] ‘FDA Authority Over Tobacco’ (Campaign for Tobacco-Free Kids) <www.tobaccofreekids.org/what-we-do/us/fda> accessed 18 September 2023.

[80] FDA AT A GLANCE: REGULATED PRODUCTS AND FACILITIES, November 2020 <www.fda.gov/media/143704/download#:~:text=REGULATED%20PRODUCTS%20AND%20FACILITIES&text=FDA%2Dregulated%20products%20account%20for,dollar%20spent%20by%20U.S.%20consumers.&text=FDA%20regulates%20about%2078%20percent,poultry%2C%20and%20some%20egg%20products.> accessed 18 September 2023.

[81] ‘Getting Smarter: FDA Publishes Draft Guidance on Predetermined Change Control Plans for Artificial Intelligence/Machine Learning (AI/ML) Devices’ (5 February 2023) <www.ropesgray.com/en/newsroom/alerts/2023/05/getting-smarter-fda-publishes-draft-guidance-on-predetermined-change-control-plans-for-ai-ml-devices> accessed 18 September 2023.

[82] Center for Veterinary Medicine, ‘Q&A on FDA Regulation of Intentional Genomic Alterations in Animals’ [2023] FDA <www.fda.gov/animal-veterinary/intentional-genomic-alterations-igas-animals/qa-fda-regulation-intentional-genomic-alterations-animals> accessed 18 September 2023.

[83] Andrew Kolodny, ‘How FDA Failures Contributed to the Opioid Crisis’ (2020) 22 AMA Journal of Ethics 743.

[84] Commissioner O of the, ‘Milestones in U.S. Food and Drug Law’ [2023] FDA <https://www.fda.gov/about-fda/fda-history/milestones-us-food-and-drug-law> accessed 3 December 2023

 

[85] Reputation and Power (2010) <https://press.princeton.edu/books/paperback/9780691141800/reputation-and-power> accessed 3 December 2023

 

[86] ‘Hutt, Merrill, Grossman, Cortez, Lietzan, and Zettler’s Food and Drug Law, 5th – 9781636596952 – West Academic’ <https://faculty.westacademic.com/Book/Detail?id=341299> accessed 3 December 2023

 

[87]For example Carpenter 2010, Hilts 2004, Hutt et al 2022

[88] ‘Hutt, Merrill, Grossman, Cortez, Lietzan, and Zettler’s Food and Drug Law, 5th – 9781636596952 – West Academic’ <https://faculty.westacademic.com/Book/Detail?id=341299> accessed 18 September 2023.

[89] Eric Wu and others, ‘How Medical AI Devices Are Evaluated: Limitations and Recommendations from an Analysis of FDA Approvals’ (2021) 27 Nature Medicine 582.

[90] Other public health regulators, for example NICE (UK) cover accessibility risk to a larger degree than the FDA, similarly on structural discrimination risks with NICE “Standing together” work on data curation and declarations of datasets used in developing SaMD. The FDA over time developed similar programs.

[91] Ziad Obermeyer and others, ‘Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations’ (2019) 366 Science 447.

[92] ’FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks’<www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00126-7/fulltext#sec1> accessed 18 September 2023.

[93] ‘How the FDA’s Food Division Fails to Regulate Health and Safety Hazards’ <https://politico.com/interactives/2022/fda-fails-regulate-food-health-safety-hazards> accessed 18 September 2023.

[94] Christopher J Morten and Amy Kapczynski, ‘The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs and Vaccines’ (2021) 109 California Law Review 493.

[95] ‘Examination of Clinical Trial Costs and Barriers for Drug Development’ (ASPE) <https://aspe.hhs.gov/reports/examination-clinical-trial-costs-barriers-drug-development-0> accessed 18 September 2023.

[96] Office of the Commissioner, ‘Advisory Committees’ (FDA, 3 May 2021) <www.fda.gov/advisory-committees> accessed 18 September 2023.

[97] For example. Carpenter 2010, Hilts 2004, Hutt et al 2022

[98] ‘FDA’s Science Infrastructure Failing | Infectious Diseases | JAMA | JAMA Network’ <https://jamanetwork.com/journals/jama/article-abstract/1149359> accessed 18 September 2023.

[99] Bridget M Kuehn, ‘FDA’s Science Infrastructure Failing’ (2008) 299 JAMA 157.

[100] ‘What to Expect at FDA’s Vaccine Advisory Committee Meeting’ (The Equation, 19 October 2020) <https://blog.ucsusa.org/genna-reed/vrbpac-meeting-what-to-expect/> accessed 18 September 2023.

[101] Office of the Commissioner, ‘What Is a Conflict of Interest?’ [2022] FDA <www.fda.gov/about-fda/fda-basics/what-conflict-interest> accessed 18 September 2023.

[102] The Firm and the FDA: McKinsey & Company’s Conflicts of Interest at the Heart of the Opioid Epidemic <https://fingfx.thomsonreuters.com/gfx/legaldocs/akpezyejavr/2022-04-13.McKinsey%20Opioid%20Conflicts%20Majority%20Staff%20Report%20FINAL.pdf> accessed 18 September 2023.

[103] Causholli M, Chambers DJ and Payne JL, ‘Future Nonaudit Service Fees and Audit Quality’ (2014) ,<onlinelibrary.wiley.com/doi/abs/10.1111/1911-3846.12042> accessed 21 September 2023; Jamal K and Sunder S, ‘Is Mandated Independence Necessary for Audit Quality?’ (2011) 36 Accounting, Organizations and Society 284 <Is mandated independence necessary for audit quality? – ScienceDirect> accessed 21 September 2023

[104] Reputation and Power (2010) <https://press.princeton.edu/books/paperback/9780691141800/reputation-and-power> accessed 18 September 2023.

[105] ‘Hutt, Merrill, Grossman, Cortez, Lietzan, and Zettler’s Food and Drug Law, 5th – 9781636596952 – West Academic’ <https://faculty.westacademic.com/Book/Detail?id=341299> accessed 18 September 2023.

[106] Ana Santos Rutschman, ‘How Theranos’ Faulty Blood Tests Got to Market – and What That Shows about Gaps in FDA Regulation’ (The Conversation, 5 October 2021) <http://theconversation.com/how-theranos-faulty-blood-tests-got-to-market-and-what-that-shows-about-gaps-in-fda-regulation-168050> accessed 18 September 2023.

[107] Center for Devices and Radiological Health, ‘Classify Your Medical Device’ (FDA, 14 August 2023) <www.fda.gov/medical-devices/overview-device-regulation/classify-your-medical-device> accessed 18 September 2023.

[108] Anderljung and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718> accessed 15 September 2023.

[109] ‘A Law for Foundation Models: The EU AI Act Can Improve Regulation for Fairer Competition – OECD.AI’ <https://oecd.ai/en/wonk/foundation-models-eu-ai-act-fairer-competition> accessed 18 September 2023.

[110] ‘Stanford CRFM’ <https://crfm.stanford.edu/report.html> accessed 18 September 2023.

[111] Pegah Maham and Sabrina Küspert, ‘Governing General Purpose AI’.

[112] ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ <https://openai.com/research/frontier-ai-regulation> accessed 18 September 2023.

[113] ‘Auditing Algorithms: The Existing Landscape, Role of Regulators and Future Outlook’ (GOV.UK) <www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook> accessed 18 September 2023.

[114] ‘Introducing Superalignment’ <https://openai.com/blog/introducing-superalignment> accessed 18 September 2023.

[115] ‘Why AI Safety?’ (Machine Intelligence Research Institute) <https://intelligence.org/why-ai-safety/> accessed 18 September 2023.

[116] ‘DAIR (Distributed AI Research Institute)’ (DAIR Institute) <https://dair-institute.org/> accessed 18 September 2023.

[117] Anthropic < https://www.anthropic.com/index/frontier-threats-red-teaming-for-ai-safety#:~:text=If%20unmitigated%2C%20we%20worry%20that,implementation%20of%20mitigations%20for%20them> accessed 29 November 2023

[118] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.

[119] Center for Devices and Radiological Health, ‘Software as a Medical Device (SaMD)’ (FDA, 9 September 2020) <www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd> accessed 10 November 2023.

[120] Pegah Maham and Sabrina Küspert, ‘Governing General Purpose AI’.

[121] ‘The Human Decisions That Shape Generative AI’ (Mozilla Foundation, 2 August 2023) <https://foundation.mozilla.org/en/blog/the-human-decisions-that-shape-generative-ai-who-is-accountable-for-what/> accessed 18 September 2023.

[122] ‘Frontier Model Security’ (Anthropic) <www.anthropic.com/index/frontier-model-security> accessed 18 September 2023.

[123] Is ChatGPT a cybersecurity threat? | TechCrunch

[124] ChatGPT Security Risks: What Are They and How To Protect Companies (itprotoday.com)

[125] 2307.03718.pdf (arxiv.org)

[126] 2307.03718.pdf (arxiv.org)

[127] 2307.03718.pdf (arxiv.org)

[128] 2307.03718.pdf (arxiv.org)

[129] ‘AI Assurance?’ <www.adalovelaceinstitute.org/report/risks-ai-systems/> accessed 21 September 2023.

[130] Preparing for Extreme Risks: Building a Resilient Society (parliament.uk) ‘Preparing for Extreme Risks: Building a Resilient Society’

[131] Nguyen T, ‘Insurability of Catastrophe Risks and Government Participation in Insurance Solutions’ (2013) <www.semanticscholar.org/paper/Insurability-of-Catastrophe-Risks-and-Government-in-Nguyen/dcecefd3f24a099b958e8ac1127a4bdc803b28fb> accessed 21 September 2023

[132] Banias MJ, ‘Inside CounterCloud: A Fully Autonomous AI Disinformation System’ (The Debrief, 16 August 2023) <https://thedebrief.org/countercloud-ai-disinformation/> accessed 21 September 2023

[133] Raji ID and others, ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ (arXiv, 9 June 2022) <http://arxiv.org/abs/2206.04737> accessed 21 September 2023

[134] McAllister LK, ‘Third-Party Programs to Assess Regulatory Compliance’ (2012) <www.acus.gov/sites/default/files/documents/Third-Party-Programs-Report_Final.pdf> accessed 21 September 2023

[135] Science in Regulation, A Study of Agency Decisionmaking Approaches, Appendices 2012 <www.acus.gov/sites/default/files/documents/Science%20in%20Regulation_Final%20Appendix_2_18_13_0.pdf> accessed 21 September 2023

[136] GPT-4-system-card (openai.com) (2023) <https://cdn.openai.com/papers/gpt-4-system-card.pdf> accessed 21 September 2023

[137] Intensive own evidence production of regulators, for example like the IAEA, is only suitable for non-complex industries

[138] The order does not indicate the importance of each dimension. The importance for risk reduction depends significantly on the specific implementation of the dimensions and the context.

[139] While other oversight regimes such as practised in cybersecurity, aviation or similar are an inspiration for foundation models too, FDA-style oversight is among the few that score towards the right on most dimensions identified in the regulatory oversight and audit literature and depicted above.

[140] Open AI Bug Bounty Program (2022) <Announcing OpenAI’s Bug Bounty Program> accessed 21 September 2023

[141] ‘MAUDE – Manufacturer and User Facility Device Experience’ <www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/search.cfm> accessed 21 September 2023

[142] ‘Auditor Independence and Audit Quality: A Literature Review – Nopmanee Tepalagul, Ling Lin, 2015’ <https://journals.sagepub.com/doi/abs/10.1177/0148558×14544505?casa_token=6R7ABlbi2I0AAAAA:K1pMF6sw6QrmvEhczXbW0BwjE8xXD0r3GKfOHpZczbeIvdMckGn00I6zkluRqd06WmBJXJ616xz_KXk> accessed 21 September 2023

[143] ‘Customer-Driven Misconduct: How Competition Corrupts Business Practices – Article – Faculty & Research – Harvard Business School’ <www.hbs.edu/faculty/Pages/item.aspx?num=43347> accessed 21 September 2023

[144] Donald R. Deis Jr and Giroux GA, ‘Determinants of Audit Quality in the Public Sector’ (1992) 67 The Accounting Review 462 <www.jstor.org/stable/247972?casa_token=luGLXHQ3nAoAAAAA:clOnnu3baxAfZYMCx7kJloL08GI0RPboKMovVPQz7Z6bi9w4grsJEqz1tNIKJD88yFXbpc8iqLDoeZY9U5jnECBH99hKFWKk3-WxI9e__HBwlQ_bOBhSWQ> accessed 21 September 2023

[145] Engstrom DF and Ho DE, ‘Algorithmic Accountability in the Administrative State’ (9 March 2020) <https://papers.ssrn.com/abstract=3551544> accessed 21 September 2023

[146] Causholli M, Chambers DJ and Payne JL, ‘Future Nonaudit Service Fees and Audit Quality’ (2014) ,<onlinelibrary.wiley.com/doi/abs/10.1111/1911-3846.12042> accessed 21 September 2023

[147] Jamal K and Sunder S, ‘Is Mandated Independence Necessary for Audit Quality?’ (2011) 36 Accounting, Organizations and Society 284 <Is mandated independence necessary for audit quality? – ScienceDirect> accessed 21 September 2023

[148] Widder DG, West S and Whittaker M, ‘Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI’ (17 August 2023) <https://papers.ssrn.com/abstract=4543807> accessed 21 September 2023

[149] Lamoreaux PT, ‘Does PCAOB Inspection Access Improve Audit Quality? An Examination of Foreign Firms Listed in the United States’ (2016) 61 Journal of Accounting and Economics 313

<Does PCAOB inspection access improve audit quality? An examination of foreign firms listed in the United States – ScienceDirect> accessed 21 September 2023

[150] ‘Introduction to NIST FRVT’ (Paravision) <www.paravision.ai/news/introduction-to-nist-frvt/> accessed 21 September 2023

[151] ‘Confluence Mobile – UN Statistics Wiki’ <https://unstats.un.org/wiki/plugins/servlet/mobile?contentId=152797274#content/view/152797274> accessed 21 September 2023

[152] ‘Large Language Models and Software as a Medical Device – MedRegs’ <https://medregs.blog.gov.uk/2023/03/03/large-language-models-and-software-as-a-medical-device/> accessed 21 September 2023

[153] Ada Lovelace Institute, AI assurance? Assessing and mitigating risks across the AI lifecycle (2023) < https://www.adalovelaceinstitute.org/report/risks-ai-systems/>

[154] ‘Inclusive AI Governance – Ada Lovelace Institute’ (2023) < www.adalovelaceinstitute.org/wp-content/uploads/2023/03/Ada-Lovelace-Institute-Inclusive-AI-governance-Discussion-paper-March-2023.pdf> accessed 21 September 2023

[155] ‘AI Assurance?’ <www.adalovelaceinstitute.org/report/risks-ai-systems/> accessed 21 September 2023

[156] ‘Comment of the AI Policy and Governance Working Group on the NTIA AI Accountability Policy’ (2023) <www.ias.edu/sites/default/files/AI%20Policy%20and%20Governance%20Working%20Group%20NTIA%20Comment.pdf> accessed 21 September 2023

[157] Weale S and correspondent SWE, ‘Lecturers Urged to Review Assessments in UK amid Concerns over New AI Tool’ The Guardian (13 January 2023) <https://www.theguardian.com/technology/2023/jan/13/end-of-the-essay-uk-lecturers-assessments-chatgpt-concerns-ai> accessed 23 November 2023

[158] ‘Proposing a Foundation Model Information-Sharing Regime for the UK | GovAI Blog’ <www.governance.ai/post/proposing-a-foundation-model-information-sharing-regime-for-the-uk> accessed 21 September 2023

[159] ‘Proposing a Foundation Model Information-Sharing Regime for the UK | GovAI Blog’ <www.governance.ai/post/proposing-a-foundation-model-information-sharing-regime-for-the-uk> accessed 21 September 2023

[160] ‘Regulating AI in the UK’ <www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 21 September 2023

[161] ‘Unique Device Identification System’ (Federal Register, 24 September 2013) <www.federalregister.gov/documents/2013/09/24/2013-23059/unique-device-identification-system> accessed 21 September 2023

[162] Anthropic AB is CL at and others, ‘How We Can Regulate AI—Asterisk’ <https://asteriskmag.com/issues/03/how-we-can-regulate-ai> accessed 21 September 2023

[163] ‘Opinion | Here’s a Simple Way to Regulate Powerful AI Models’ Washington Post (16 August 2023) <www.washingtonpost.com/opinions/2023/08/16/ai-danger-regulation-united-states/> accessed 21 September 2023

[164] Vidal DE and others, ‘Navigating US Regulation of Artificial Intelligence in Medicine—A Primer for Physicians’ (2023) 1 Mayo Clinic Proceedings: Digital Health 31

[165] ‘The Human Decisions That Shape Generative AI’ (Mozilla Foundation, 2 August 2023) <https://foundation.mozilla.org/en/blog/the-human-decisions-that-shape-generative-ai-who-is-accountable-for-what/> accessed 21 September 2023

[166] Birhane A, Prabhu VU and Kahembwe E, ‘Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes’ (arXiv, 5 October 2021) <http://arxiv.org/abs/2110.01963> accessed 21 September 2023

[167] Schaul K, Chen SY and Tiku N, ‘Inside the Secret List of Websites That Make AI like ChatGPT Sound Smart’ (Washington Post) <www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/> accessed 21 September 2023

[168] ‘When AI Is Trained on AI-Generated Data, Strange Things Start to Happen’ (Futurism) <https://futurism.com/ai-trained-ai-generated-data-interview> accessed 21 September 2023

[169] Draft standards here are a very good example of the value of dataset documentation (that is, declaring metadata) on what is used in training and fine-tuning models. In theory, this could also all be kept confidential as commercially sensitive information once a legal infrastructure is in place www.datadiversity.org/draft-standards

[170] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596

[171] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daum and Crawford, (2021), Datasheets for Datasets, <https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract >(Accessed: 27 February 2023); Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918;

[172] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[173] A pretrained AI model is a deep learning model that is already trained on large datasets to accomplish a specific task, meaning there are design choices which affect its output and performance (according to one leading lab ‘language models already learn a lot about human values during pretraining’ and this is where ‘implicit biases’ arise.)

[174] ‘running against a suite of benchmark objectionable behaviors… we find that the prompts achieve up to 84% success rates at attacking GPT-3.5 and GPT-4, and 66% for PaLM-2; success rates for Claude are substantially lower (2.1%), but notably the attacks still can induce behavior that is otherwise never generated.’ Zou A and others, ‘Universal and Transferable Adversarial Attacks on Aligned Language Models’ (arXiv, 27 July 2023) <http://arxiv.org/abs/2307.15043> accessed 21 September 2023

[175] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023; Nelson et al ; Kolt N, ‘Algorithmic Black Swans’ (25 February 2023) <https://papers.ssrn.com/abstract=4370566> accessed 21 September 2023

[176] Mökander J and others, ‘Auditing Large Language Models: A Three-Layered Approach’ [2023] AI and Ethics <http://arxiv.org/abs/2302.08500> accessed 21 September 2023; Wan A and others, ‘Poisoning Language Models During Instruction Tuning’ (arXiv, 1 May 2023) <http://arxiv.org/abs/2305.00944> accessed 21 September 2023; ‘Analyzing the European Union AI Act: What Works, What Needs Improvement’ (Stanford HAI) <https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement> accessed 21 September 2023; ‘EU AI Standards Development and Civil Society Participation’ <www.adalovelaceinstitute.org/event/eu-ai-standards-civil-society-participation/> accessed 21 September 2023

[177] ’Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ <https://dl.acm.org/doi/pdf/10.1145/3514094.3534181> accessed 21 September 2023

[178] Gupta A, ‘Emerging AI Governance Is an Opportunity for Business Leaders to Accelerate Innovation and Profitability’ (Tech Policy Press, 31 May 2023) <https://techpolicy.press/emerging-ai-governance-is-an-opportunity-for-business-leaders-to-accelerate-innovation-and-profitability/> accessed 21 September 2023

[179] Key Enforcement Issues of the AI Act Should Lead EU Trilogue Debate’ (Brookings) <www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/> accessed 21 September 2023

[180] ‘Structured Access’ – Toby Shevlane (2022)< https://arxiv.org/ftp/arxiv/papers/2201/2201.05159.pdf> accessed 21 September 2023

[181] ‘Systematic probing of an AI model or system by either expert or non-expert human evaluators to reveal undesired outputs or behaviors’.

[182] House TW, ‘FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI’ (The White House, 21 July 2023) <www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/> accessed 21 September 2023

[183] ‘Keeping an Eye on AI’ <www.adalovelaceinstitute.org/report/keeping-an-eye-on-ai/> accessed 21 September 2023

[184].Janjeva A and others, ‘Strengthening Resilience to AI Risk’ (2023 <)https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf> accessed 21 September 2023

[185] Shrishak K, ‘How to Deal with an AI Near-Miss: Look to the Skies’ (2023) 79 Bulletin of the Atomic Scientists 166

[186] ‘Guidance for Manufacturers on Reporting Adverse Incidents Involving Software as a Medical Device under the Vigilance System’ (GOV.UK) <www.gov.uk/government/publications/reporting-adverse-incidents-involving-software-as-a-medical-device-under-the-vigilance-system/guidance-for-manufacturers-on-reporting-adverse-incidents-involving-software-as-a-medical-device-under-the-vigilance-system> accessed 21 September 2023

[187] www.adalovelaceinstitute.org/blog/ai-regulation-learn-from-history/ Guidance always has its roots in legislation, but can be iterated more rapidly and flexibly whereas legislation requires several legal and political steps at minimum. Explainer here: www.oneeducation.org.uk/difference-between-laws-regulations-acts-guidance-policies/.

[188] www.tandfonline.com/doi/pdf/10.1080/01972243.2022.2124565?needAccess=true

[189] https://cip.org/alignmentassemblies

[190] https://arxiv.org/abs/2306.09871 ; https://openai.com/blog/democratic-inputs-to-ai

[191] Ada Lovelace Institute, Participatory data stewardship: A framework for involving people in the use of data’ (2021) < https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/>

[192] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[193] ‘Examining the Black Box’ <www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/> accessed 21 September 2023

[194] Nelson and et al., ‘AI Policy and Governance Working Group NTIA Comment.Pdf’ <www.ias.edu/sites/default/files/AI%20Policy%20and%20Governance%20Working%20Group%20NTIA%20Comment.pdf> accessed 21 September 2023

[195] Bill Chappell, ‘“It Was Installed For This Purpose,” VW’s U.S. CEO Tells Congress About Defeat Device’ NPR (8 October 2015) <www.npr.org/sections/thetwo-way/2015/10/08/446861855/volkswagen-u-s-ceo-faces-questions-on-capitol-hill> accessed 30 August 2023

[196] MedWatch is the FDA’s adverse event reporting program, while Medical Product Safety Network (MedSun) monitors the safety and effectiveness of medical devices. Commissioner O of the, ‘Step 5: FDA Post-Market Device Safety Monitoring’ [2018] FDA <www.fda.gov/patients/device-development-process/step-5-fda-post-market-device-safety-monitoring> accessed 21 September 2023

[197] AINOW, ‘Zero-Trust-AI-Governance.Pdf’ (August 2023) <https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf> accessed 21 September 2023

[198] ‘The Value​​​ ​​​Chain of General-Purpose AI​​’ <www.adalovelaceinstitute.org/blog/value-chain-general-purpose-ai/> accessed 21 September 2023

[199] www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/

[200] Knott A and Pedreschi D, ‘State-of-the-Art Foundation AI Models Should Be Accompanied by Detection Mechanisms as a Condition of Public Release’ <https://gpai.ai/projects/responsible-ai/social-media-governance/Social%20Media%20Governance%20Project%20-%20July%202023.pdf> accessed 21 September 2023

[201] www.tspa.org/curriculum/ts-fundamentals/transparency-report/

[202] Bommasani R and others, ‘Do Foundation Model Providers Comply with the Draft EU AI Act?’ <https://crfm.stanford.edu/2023/06/15/eu-ai-act.html> accessed 21 September 2023

[203] ‘Keeping an Eye on AI’ <www.adalovelaceinstitute.org/report/keeping-an-eye-on-ai/> accessed 21 September 2023

[204] ‘Regulating AI in the UK’ <www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 21 September 2023

[205] Zinchenko V and others, ‘Changes in Software as a Medical Device Based on Artificial Intelligence Technologies’ (2022) 17 International Journal of Computer Assisted Radiology and Surgery 1969

[206] Shrishak K, ‘How to Deal with an AI Near-Miss: Look to the Skies’ (2023) 79 Bulletin of the Atomic Scientists 166

[207] AINOW, ‘Zero-Trust-AI-Governance.Pdf’ (August 2023) <https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf> accessed 21 September 2023

[208] ‘How Boeing 737 MAX’s Flawed Flight Control System Led to 2 Crashes That Killed 346 – ABC News’ <https://abcnews.go.com/US/boeing-737-maxs-flawed-flight-control-system-led/story?id=74321424> accessed 21 September 2023

[209] A new national system to more quickly spot possible safety issues, using existing electronic health databases to keep an eye on the safety of approved medical products in real time. This tool will add to, but not replace, FDA’s existing post-market safety assessment tools. Commissioner of the, ‘Step 5: FDA Post-Market Device Safety Monitoring’ [2018] FDA <www.fda.gov/patients/device-development-process/step-5-fda-post-market-device-safety-monitoring> accessed 21 September 2023

[210] In the UK, the Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff. An EU-level agency for AI should be funded well beyond this, given that the EU is more than six times the size of the UK.

[211] Affairs O of R, ‘Recalls, Market Withdrawals, & Safety Alerts’ (FDA, 11 February 2022) <www.fda.gov/safety/recalls-market-withdrawals-safety-alerts> accessed 21 September 2023

[212] Team NA, ‘NIST AIRC – Govern’ <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/Govern> accessed 21 September 2023

[213]‘Committing to Effective Whistleblower Protection | En | OECD’ <www.oecd.org/corruption-integrity/reports/committing-to-effective-whistleblower-protection-9789264252639-en.html> accessed 21 September 2023

[214] Anderljung M and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718> accessed 21 September 2023

[215] Guidance always has its roots in legislation but can be iterated more rapidly and flexibly, whereas legislation requires several legal and political steps at minimum. ‘AI Regulation and the Imperative to Learn from History’ <www.adalovelaceinstitute.org/blog/ai-regulation-learn-from-history/> accessed 21 September 2023

Explainer here: www.oneeducation.org.uk/difference-between-laws-regulations-acts-guidance-policies/.

[216] Raji ID and others, ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ (arXiv, 9 June 2022) <http://arxiv.org/abs/2206.04737> accessed 21 September 2023

[217] Draft standards here are a very good example of the value of dataset documentation (i.e. declaring metadata) on what is used in training and fine-tuning models. In theory, this could also all be kept confidential as commercially sensitive information once a legal infrastructure is in place. www.datadiversity.org/draft-standards

[218] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596

[219] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daum and Crawford, (2021), Datasheets for Datasets, <https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract> (Accessed: 27 February 2023) Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918;

[220] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[221]

[222] Knott A and Pedreschi D, ‘State-of-the-Art Foundation AI Models Should Be Accompanied by Detection Mechanisms as a Condition of Public Release’ <https://gpai.ai/projects/responsible-ai/social-media-governance/Social%20Media%20Governance%20Project%20-%20July%202023.pdf> accessed 21 September 2023

[223] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[224] ‘Examining the Black Box’ <www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/> accessed 21 September 2023

[225] AINOW, ‘Zero-Trust-AI-Governance.Pdf’ (August 2023) <https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf> accessed 21 September 2023

[226] ‘Regulating AI in the UK’ <www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 21 September 2023

[227] Shrishak K, ‘How to Deal with an AI Near-Miss: Look to the Skies’ (2023) 79 Bulletin of the Atomic Scientists 166

[228] Team NA, ‘NIST AIRC – Govern 1.7’ <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/Govern> accessed 21 September 2023

[229] In the UK, the Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff). An EU-level agency for AI should be funded well beyond this, given that the EU is more than six times the size of the UK.

[230] In 2023 ~50% of the FDA’s ~$8bn budget was covered through mandatory fees by companies overseen by the FDA. See: < https://www.fda.gov/media/165045/download > accessed 24/11/2023

[231] 80% of the EMA’s funding comes from fees and charges levied on companies. See: EMA, “Funding,” European Medicines Agency, Sep. 17, 2018. <www.ema.europa.eu/en/about-us/how-we-work/governance-documents/funding> accessed Aug. 10, 2023

[232] ‘Governing General Purpose AI — A Comprehensive Map of Unreliability, Misuse and Systemic Risks’ (20 July 2023) <www.stiftung-nv.de/de/publikation/governing-general-purpose-ai-comprehensive-map-unreliability-misuse-and-systemic-risks> accessed 21 September 2023

[233] Nathalie Smuha: Beyond the Individual: Governing AI’s Societal Harm < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3941956 > accessed Nov. 24, 2023

[234] EMA, “Success rate for marketing authorisation applications from SMEs doubles between 2016 and 2020,” European Medicines Agency, Jun. 25, 2021 <www.ema.europa.eu/en/news/success-rate-marketing-authorisation-applications-smes-doubles-between-2016-2020> accessed Aug. 10, 2023

[235] ‘AI and Digital Regulations Service for Health and Social Care – AI Regulation Service – NHS’ <www.digitalregulations.innovation.nhs.uk/> accessed 21 September 2023

[236] EMA, “Advanced therapy medicinal products: Overview,” European Medicines Agency, Sep. 17, 2018. <www.ema.europa.eu/en/human-regulatory/overview/advanced-therapy-medicinal-products-overview> accessed Aug. 10, 2023

[237] ‘Key Enforcement Issues of the AI Act Should Lead EU Trilogue Debate’ (Brookings) <www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/> accessed 21 September 2023

[238] Infocomm Media Development Authority, Aicadium, and AI Verify Foundation, ‘Generative AI: Implications for Trust and Governance’ 2023 <https://aiverifyfoundation.sg/downloads/Discussion_Paper.pdf> accessed 21 September 2023

[239] EMA, “Transparency,” European Medicines Agency, Sep. 17, 2018 <www.ema.europa.eu/en/about-us/how-we-work/transparency> (accessed Aug. 10, 2023).

[240] ‘Governing General Purpose AI — A Comprehensive Map of Unreliability, Misuse and Systemic Risks’ (20 July 2023) <www.stiftung-nv.de/de/publikation/governing-general-purpose-ai-comprehensive-map-unreliability-misuse-and-systemic-risks> accessed 21 September 2023

[241] Ho L and others, ‘International Institutions for Advanced AI’ (arXiv, 11 July 2023) <http://arxiv.org/abs/2307.04699> accessed 21 September 2023

[242] ‘Three Regulatory Agencies: A Comparison’ <www.hmpgloballearningnetwork.com/site/frmc/articles/three-regulatory-agencies-comparison> accessed 21 September 2023

[243] ‘COVID-19 Disruptions of International Clinical Trials: Comparing Guidances Issued by FDA, EMA, MHRA and PMDA’ (4 February 2020) <www.ropesgray.com/en/newsroom/alerts/2020/04/national-authority-guidance-on-clinical-trials-during-the-covid-19-pandemic> accessed 21 September 2023

[244] Van Norman GA, ‘Drugs and Devices: Comparison of European and U.S. Approval Processes’ (2016) 1 JACC: Basic to Translational Science 399

[245] Cummings ML and Britton D, ‘Chapter 6 – Regulating Safety-Critical Autonomous Systems: Past, Present, and Future Perspectives’ in Richard Pak, Ewart J de Visser and Ericka Rovira (eds), Living with Robots (Academic Press 2020) <www.sciencedirect.com/science/article/pii/B9780128153673000062> accessed 21 September 2023


Image credit: Lyndon Stratford

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

The three legal mechanisms discussed in the report are data trusts, data cooperatives and corporate and contractual models, which can all be powerful mechanisms in the data-governance toolbox.

The report is a joint publication with the AI Council and endorsed by the ODI, the City of London Law Society and the Data Trusts Initiative.

Executive summary

Organisations, governments and citizen-driven initiatives around the world aspire to use data to tackle major societal and economic problems, such as combating the
COVID-19 pandemic. Realising the potential of data for social good is not an easy task, and from the outset efforts must be made to develop methods for the responsible
management of data on behalf of individuals and groups.

Widespread misuse of personal data, exemplified by repeated high-profile data breaches and sharing scandals, has resulted in ‘tenuous’ public trust[footnote]Centre for Data Ethics and Innovation (2020). Addressing trust in public sector data use. [online] GOV.UK. Available at: www.gov.uk/government/publications/cdei-publishes-its-first-report-on-public-sector-data-sharing/addressing-trust-in-public-sector-data-use [Accessed 18 Feb. 2021].[/footnote] in public and private-sector data sharing. Concentration of power and market dominance, based on extractive data practices from a few technological players, both entrench public concern about data use and impede data sharing and access in the public interest. The lack of transparency and scrutiny around public-private partnerships add additional layers of concerns when it comes to how data is used.[footnote]In 2020, in partnership with Understanding Patient Data at the Wellcome Trust, the Ada Lovelace Institute convened patient roundtables and citizen juries across the UK and commissioned a nationally representative survey of 2,095 people. The findings show that 82% of people expect the NHS to publish information about data access partnerships; 63% of people are unaware that the NHS gives third parties access to data; 75% of people believe the public should be involved in decisions about how NHS data is used. The two reports that underpin this research are available at: https://understandingpatientdata.org.uk/news/accountability-transparencyand- public-participation-must-be-established-third-party-use-nhs [Accessed 18 Feb. 2021].[/footnote] Part of these concerns comes from the fact that what individuals might consider to be ‘good’ is different to how those who process data may define it, especially if individuals have no say in that definition.

The challenges of the twenty-first century demand new data governance models for collectives, governments and organisations that allow data to be shared for individual and public benefit in a responsible way, while managing the harms that may emerge.

This work explores the legal mechanisms that could help to facilitate responsible data stewardship. It offers opportunities for shifting power imbalances through breaking data silos and allowing different levels of participatory data governance,[footnote]For a more detailed discussion on participatory governance see the Ada Lovelace Institute’s forthcoming report on Exploring participatory mechanisms for data stewardship (March 2021).[/footnote] and for enabling the responsible management of data in data-sharing initiatives by individuals, organisations and governments wanting to achieve societal, economic and environmental goals.

This report focuses on personal data management, as the most common type of data stewarded today in alternative data governance models.[footnote]See ‘Annex 2: Graphical Representation’ in Manohar, S., Kapoor, A. and Ramesh, A. (2020). Data Stewardship – A Taxonomy. [online] The Data Economy Lab. Available at: https://thedataeconomylab.com/2020/06/24/data-stewardship-a-taxonomy/ [Accessed 18 Feb. 2021].[/footnote] It points out where mechanisms are suited for non-personal data management and sees this area as requiring future exploration. The jurisdictional focus is mainly on UK law, however this report also introduces a section on EU legislative developments on data sharing and, where appropriate, indicates similarities with civil law systems (for example, fiduciary obligations resembling trust law mechanisms).

Produced by a working group of legal, technical and policy experts, this report describes three legal mechanisms which could help collectives, organisations and governments create flexible governance responses to different elements of today’s data governance challenges.
These may, for example, empower data subjects to more easily control decisions made about their data by setting clear boundaries on data use, assist in promoting desirable uses, increase confidence among organisations to share data or inject a new democratic element into data policy.

Data trusts,[footnote]For the purposes of this report, data trusts are regarded as underpinned by UK trust law.[/footnote] data cooperatives and corporate and contractual mechanisms can all be powerful mechanisms in the data-governance toolbox. There’s no one-size-fits-all
solution and choosing the type of governance mechanism will depend on a number of factors.

Some of the most important factors are purpose and benefits. Coming together around an agreed purpose is the critical starting point, and one which will subsequently determine the benefits and drive the nature of the relationship between the actors involved in a data-sharing initiative. These actors may include individuals, organisations and governments although
data-sharing structures do not necessarily need to include all actors mentioned.

The legal mechanisms presented in this report aim to facilitate this relationship, however the broader range of collective action and coordination mechanisms to address data challenges also need to be assessed on a case-by-case basis. The three mechanisms described here are meant to provide an indication as to the types of approaches, conditions and legal tools that can be employed to solve questions around responsible data sharing and governance.

To demonstrate briefly how purpose can be linked to the choice of legal tools:

Data trusts create a vehicle for individuals to state their aspirations for data use and mandate a trustee to pursue these aspirations.[footnote]Delacroix, S. and Lawrence, N.D. (2019). Bottom-up data Trusts: disturbing the “one size fits all” approach to data governance. International Data Privacy Law, [online] 9(4). Available at: https://academic.oup.com/idpl/article/9/4/236/5579842 [Accessed 6 Nov. 2019].[/footnote] Data trusts can be built with a highly participatory structure in mind, requiring systematic input from the individuals that set up the data trust. It’s also possible to build data trusts with the intention to delegate to the data trustee the responsibility to determine what type of data processing is to the beneficiaries’ interest.

The distinctive elements of this model are the role of the trustee, who bears a fiduciary duty in exercising data rights (or the beneficial interest in those rights) on behalf of the beneficiaries, and the role of the overseeing court in providing additional safeguards. Therefore, data trusts might work better in contexts where individuals and groups wish to define the terms of data use by creating a new institution (a trust) to steward data on their behalf, by representing them in negotiations about data use.

Data cooperatives can be considered when individuals want to voluntarily pool data resources and repurpose the data in the interests of those it represents. Therefore, data cooperatives could be the go-to governance mechanism when relationships are formed between peers or like-minded people who join forces to collectively steward their data and create one voice
in relation to a company or institution.

Corporate and contractual mechanisms can be used to design an ecosystem of trust in situations where a group of organisations see benefits in sharing data under mutually agreed terms and in a controlled way. This means these mechanisms might be better suited for creating data-sharing relationships between organisations. The involvement of an independent data steward is envisaged as a means of creating a trusted environment for stakeholders to feel comfortable sharing data with other parties, who they may not know or have had an opportunity to develop a relationship of trust.

This report captures the leading thinking on an emerging and timely issue of research and inquiry: how we can give tangible effect to the ideal of data stewardship: the trustworthy and responsible use and management of data.

Promoting and realising the responsible use of data is the primary objective of the Legal Mechanisms for Data Stewardship working group and the Ada Lovelace Institute, who produced this report, and who view this approach as critical to protecting the data rights of individuals and communities, and unlocking the benefits of data in a way that’s fair, equitable and focused on social benefit.

Chapter 1: Data trusts

Diagram illustrating how data trusts work
How data trusts work

Equity as a tool for establishing rights and remedies

Trust law has ancient roots, with the fiduciary responsibilities that sit at its core being traceable to practices established in Roman law. In the UK, the idea of a ‘trust’ as an entity has its origins in medieval England: with many landowners leaving England to fight in the Crusades, systems were needed to manage their estates in their absence.

Arrangements emerged through which Crusaders would transfer ownership of their estate to another individual, who would be responsible for managing their land and fulfilling any feudal responsibilities until their return. However, returning Crusaders often found themselves in disputes with their ‘caretaker’ landowners about land ownership. These disputes were referred to the Courts of Chancery to decide on an appropriate – equitable – remedy. These courts consistently recognised the claims of the returning Crusaders, creating the concepts of a ‘beneficiary’, ‘trustee’ and ‘trust’ to define a relationship in which one party would manage certain assets for the benefit of another – the establishment of a trust.

While the practices associated with trust law have changed over time, their core components have remained consistent: a trust is a legal relationship between at least two parties, in which one party (the trustee) manages the rights associated with an asset for the benefit of another (the beneficiary).[footnote]Chambers, R. (2010). Distrust: Our Fear of Trusts in the Commercial World. Current Legal Problems, [online] 63(1), pp.631–652. Available at: https://academic.oup.com/clp/article-abstract/63/1/631/379107 [Accessed 18 Feb. 2021].[/footnote] Almost any right can be held in trust, so long as the trust meets three conditions:

  1. there is a clear intention to establish a trust
  2. the subject matter or property of the trust is defined
  3. the beneficiaries of the trust are specified (including as a conceptual
    category rather than nominally).

In the centuries that followed their emergence, the Courts of Chancery have played an important role in settling claims over rights and creating remedies where these rights have been infringed. Core to the operation of these courts is the concept of equity – that disputes should be settled in a way that is fair and just. In centring this concept in their jurisprudence, they have found or clarified new rights or responsibilities that might not be directly codified in Common Law, but which can be adjudicated according to legal principles of fairness. This has enabled the courts to develop flexible and innovative responses in situations where there may be gaps in Common Law, or where the strict definitions of the Common Law are ill-equipped to manage new social practices.

It is this ability to flex and adapt over time that has ensured the longevity of trusts and trust law as a governance tool, and it is these characteristics that have attracted interest in current debates about data governance.

Why data trusts?

Today’s data environment is characterised by structural power imbalances. Those with access to large pools of data – often data about individuals – can leverage the value of aggregated data to create products and services that are foundational to many daily activities.

While offering many benefits, these patterns of data use can create new forms of vulnerability for individuals or groups. Recent years have brought examples of how new uses of data can, for example, create sensitive data about individuals by combining datasets that individually seemed innocuous, or use data to target individuals online in ways that might lead to discrimination or social division.

Today, these rights are typically managed through service agreements or other consent-based models of interaction between individuals and organisations. However, as patterns of data collection and use evolve, the weaknesses associated with these processes are becoming clearer. This has prompted re-examination of consent as a foundation for data exchange and the long-term risks associated with complex patterns of data use.

The limitations of consent as a model for data governance have already been well-characterised. Many terms and conditions are lengthy and difficult to understand, and individuals might not have the ability, knowledge or time to adequately review data access agreements; for many, interest in consent and control is sparked only after they have become aware of data misuse; and the processes for an individual to enact their data rights – or receive redress for data misuse – can be lengthy and inaccessible.[footnote]British Academy, techUK and Royal Society (2018). Data ownership, rights and controls: seminar report. [online] The British Academy. Available at: www.thebritishacademy.ac.uk/publications/data-ownership-rights-controls-seminar-report [Accessed 18 Feb. 2021].[/footnote]

Moreover, as interactions in the workplace, at home or with public services are increasingly shaped by digital technologies, there is pressure on individuals to ‘opt in’ to data exchanges, if they are to be able to participate in society. This reliance on digital interactions exacerbates power imbalances in the governance system.

Approaches to data governance that concentrate on single instances of data exchange also struggle to account for the pervasiveness of data use, much of this data being created as a result of a digital environment in which individuals ‘leak’ data during their daily activities. In many cases, vulnerabilities arising from data use come not from a single act of data processing, but from an accumulation of data uses that may have been innocuous individually, but that together form systems that shape the choices individuals make in their daily lives – from the news they read to the jobs adverts they see. Even if each single data exchange is underpinned by a consent-based interaction, this cumulative effect – and the long-term risks it can create – is something that existing policy frameworks are not well-placed to manage.[footnote]Delacroix, S. and Lawrence, N. D. (2019) ‘Bottom-up data Trusts.’[/footnote]

Nevertheless, it needs to be pointed out that the foundational elements of the GDPR that govern data processing are principles such as data protection by design and by default, and mechanisms such as data protection impact assessments (DPIAs), which are designed to help preempt potential risks as early as possible. These are legal obligations and a prerequisite step before individuals are asked for consent.[footnote]Jasmontaite, L., Kamara, I., Zanfir-Fortuna, G. and Leucci, S. (2018). Data Protection by Design and by Default: Framing Guiding Principles into Legal Obligations in the GDPR [online] European Data Protection Law Review, 4(2), pp.168–189. Available at: https://doi.org/10.21552/edpl/2018/2/7. [Accessed 18 Feb. 2021].[/footnote] Therefore, it is important to highlight the broader compliance failures as well as the limitations of the consent mechanism which play a significant role in creating imbalances of power and potential harm.

The imbalances of power or ability of individuals and groups to act in ways that define their own future create a data environment that is in some ways akin to the feudal system which fostered the development of trust law. Powerful actors are able to make decisions that affect individuals, and – even if those actors are expected to act with a duty of care for individual rights and interests – individuals have limited ability to challenge these structures.

There are also limited mechanisms allowing individuals who want to share data for public benefit to do so via a structure that warrants trust. In areas where significant public benefit is at stake, individuals and communities may wish to take a view on how data is used, or press for action to use data to tackle major societal challenges. At present, the vehicles for the public to have such a voice are limited.

For the purposes of this report, trust law is explored as a new form of governance that can achieve goals such as:

  • increase an individual’s ability to exercise the rights they currently have in law
  • redistribute power in the digital environment in ways that support individuals and groups to proactively define terms of data use
  • support data use in ways that reflect shifting understandings of social value and changing technological capabilities.

The opportunities for commercial or not-for-profit organisations focused on product or research development, or which are seriously concerned about implementing a high degree of ethical obligations when it comes to data pertaining to their customers (and empower these customers not only to make active choices about data management, but also benefit from insights from this data) are briefly discussed in the section on ‘Opportunities for organisations to engage with data trusts’.

What is a data trust?

A data trust is a proposed mechanism for individuals to take the data rights that are set out in law (or the beneficial interest in those rights) and pool these into an organisation – a trust – in which trustees would exercise the data rights conferred by the law on behalf of the trust’s
beneficiaries.

Public debates about data use often centre around key questions such as who has access to data about us and how is it used. Data trusts would provide a vehicle for individuals and groups to more effectively influence the answers to these questions, by creating a vehicle for individuals to state their aspirations for data use and mandate a trustee to pursue these aspirations. By connecting the aspiration to share data to structures that protect individual rights, data trusts could provide alternative forms of ‘weak’ democracy, or new mechanisms for holding those in power to account.

The purposes for which data should be used, or data rights exercised, would be specified in the trust’s founding documents, and these purposes would be the foundation for any decision about how the trust would manage its assets. Mechanisms for deliberation or consultation with beneficiaries could also be built into a trust’s founding charter, with the form and function of those mechanisms depending on the objectives and intentions of the parties creating the trust.

Trustees and their fiduciary duties

Trustees play a crucial role in the success of such a trust. Data trustees will be tasked with stewarding the assets managed in a trust on behalf of its beneficiaries. In a ‘bottom-up’ data trust,[footnote]Delacroix, S. and Lawrence, N. D. (2019) ‘Bottom-up data Trusts’.[/footnote] the beneficiaries will be the data subjects (whose interests may include research facilitation, etc.). Data trustees will have a fiduciary responsibility to exercise (or leverage the beneficial interest inherent in) their data rights. Data trustees may seek to further the interests of the data subjects by entering into data-sharing agreements on their behalf, monitoring compliance with those agreements or negotiating better terms with service providers.

By leveraging the negotiating power inherent in pooled data rights, the data trustee would become a more powerful voice in contract negotiations, and be better placed to achieve favourable terms of data use than any single individual. In so doing, the role of the data trustee would be to empower the beneficiaries, widening their choices about data use beyond the ‘accept or walk away’ dichotomy presented by current governance structures. This role would require a high level of skill and knowledge, and support for a cohort of data trustees would
be needed to ensure they can fulfil their responsibilities.

Core to the rationale for using trust law as a vehicle for data governance is the fiduciary duty it creates. Trustees are required to act with undivided loyalty and dedication to the interests and aspirations of the beneficiaries.[footnote]Ibid.[/footnote] The strong safeguards this provides can create a foundation for data governance that gives data subjects confidence that their data rights are being managed with care.

Adding to these fiduciary duties, the law of equity provides a framework for accountability. If not adhering to the constitutional terms of a trust, trustees can be held to account for their actions by the trust’s beneficiaries (or the overseeing Court acting on their behalf) or an
independent regulator. Not only is a Court’s equitable jurisdiction to supervise, and intervene if necessary, not easily replicable within a contractual or corporate framework, the importance of the fact that equity relies on ex-post moral standards and emphasises good faith cannot be overestimated.

The flexibility offered by trusts also offers benefits in creating a governance system that is able to adapt to shifting patterns of data use. A range of subject matters or application areas could form the basis of a trust, allowing trusts to be established according to need: trusts would therefore allow co-evolution of patterns of data use and regulation.

In conditions of change or uncertainty around data use, this flexibility offers the ability to act now to promote some types of data use, while creating space to change practices in the future.
A further advantage of trust law is its ability to enable collective action while providing institutional safeguards that are commensurate to the vulnerabilities at stake. It is possible to imagine situations in which individuals might group together on the basis of shared values or
attitudes to risk, and seek to use this shared understanding to promote data use. In coming together to define the terms of a trust, individuals would be able to express their agency and influence data use by defining their vision. The beneficiaries’ interest can be expressed in more restrictive or prudential terms, or may include a broader purpose such as the furthering of research or influencing patterns of data use. Current legal frameworks offer few opportunities to enable group action in this way.

The relationship between data rights and trusts

Almost any right or asset can be placed in trust. Trusts have already been established for rights relating to intellectual property and contracts, alongside a range of different types of property, including digital assets, and have proven themselves to be flexible in adapting to different types of asset across the centuries.[footnote]McFarlane, B. (2019). Data Trusts and Defining Property. [online] Oxford Law Faculty. Available at: www.law.ox.ac.uk/research-andsubject- groups/property-law/blog/2019/10/data-trusts-and-defining-property [Accessed 18 Feb. 2021].[/footnote]

Understanding what data rights can be placed in trust, when those rights arise and how a trust can manage those rights will be crucial in creating a data trust. Further work will be required to analyse the sorts of powers that a trustee tasked with stewarding those rights might be able to wield, and the advantages that might accrue to the trust’s beneficiaries as a result.

In the case of data about individuals, the GDPR confers individual rights in respect of data use, which could in principle be held in trust. These include ‘positive’ rights such as portability, access and erasure that would appear to be well-suited to being managed via a trust.

The development of data trusts will require further clarity on how these rights can be exercised. There is already active work on the extent to which (and conditions according to which) those positive rights may be mandatable to another party to act on behalf of an individual, such as a trustee. Opinions on the issue differ among GDPR experts and publication of the European Commission’s draft Data Governance Act raises new questions about how and whether data rights might be delegated to a trust. The feasibility of data trusts however does not hinge on a positive answer to this delegability question, since trust law offers a potential workaround that does not require any right transfer.[footnote]Prof. McFarlane puts forward this potential workaround in a conversation with Paul Nemitz and Sylvie Delacroix. See Data Trusts Initiative (2021) Understanding the Data Governance Act: in conversation with Sylvie Delacroix, Ben McFarlane and Paul Nemitz.[/footnote]

As trusts develop, they will also encounter new questions about the limitations of existing rights and what happens when different rights interact.[footnote]For further discussion of this and other issues in the development of data trusts, see: Data Trusts Initiative (2020b). Data Trusts: from theory to practice, working paper 1 [online] Data Trusts Initiative. Available at: https://static1.squarespace.com/ static/5e3b09f0b754a35dcb4111ce/t/5fdb21f9537b3a6ff2315429/1608196603713/Working+Paper+1+-+data+trusts+- +from+theory+to+practice.pdf [Accessed 18 Feb. 2021].[/footnote] For example, organisations can analyse aggregated datasets and create profiles of individuals, generating inferences about their likely preferences or behaviours. These profiles – created as a result of data analysis and modelling – would typically be considered the intellectual property of the entity that conducted the analysis or modelling. While input data might relate to individuals, once aggregated and anonymised to a certain extent, it would no longer be considered as personal data under the GDPR. However, if inferences are classified as personal data within the scope of the GDPR, individual data-protection rights should apply. Nevertheless, as some authors have explained, exercising data rights on inferences classified as personal data remains limited, and particularly in the case of data portability could give rise to different tensions with trade secrets and intellectual property.[footnote]Wachter, S. and Mittelstadt, B. (2018). A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. [online] papers.ssrn.com. Available at: https://papers.ssrn.com/abstract=3248829 [Accessed 18 Feb. 2021].[/footnote]

An example helps illustrate the challenges at stake: in the context of education technologies, data provided by a student – from homework to online test responses – would be portable under the rights set out in the GDPR, but model-generated inferences about what learning methods would be most effective for that student could be considered as the intellectual property of the training provider. The establishment of a trust to govern the use of pupil data (just like any other ‘bottom-up’ data trust) could help shed light on those necessarily contested borders between intellectual property (IP) rights – that arise from creative input in developing the models that produce individual profiles – and personal data rights.

There will never be a one-size-fits-all answer on where to draw these boundaries between IP and personal data.[footnote]A broader discussion could be around whether drawing boundaries is the right approach or whether we might need a different regime for inferences.[/footnote] Instead, what is needed is a mechanism for negotiating these borders between parties involved in data use. In such cases, data trustees could have a crucial public advocacy function in negotiations about the extent to which such inferences fall within the scope of portability provisions.

Examining the data rights that might be placed in trust points to important differences between the use of trusts as a data governance tool and their traditional application.

Typically, assets placed in trust have value at the time the trust is created. In contrast, modern data practices mean that data acquires value in aggregate – it is the bringing together of data rights in a trust that gives trustees power to influence negotiations about data use that would elude any individual. Whereas property is typically placed in trust to manage its value, data (or data rights) would be placed in trust in part to create value.

Another difference can be found in the ease with which assets can typically be removed from a trust. Central to the trusts proposition is that individuals would be able to move their data rights between trusts, within an ecosystem of trust entities that provide a choice in different types of data use.

The ecosystem of data trusts that would enable individuals to make choices between different approaches to data use and management presupposes the ability to switch from one trust to another relatively easily, probably more easily than in traditional trusts.

These differences need not present a barrier to the development of data trusts. The history of trusts demonstrates the flexibility of this branch of law, and trusts can have a range of properties or ways of working that are designed to match the intent of their creators.

Alternatives to trust law

The fiduciary duties owed by trustees to beneficiaries can be achieved by other legal models. For example, contractual frameworks or principal-agent relationships, can create duties between parties, with strong consequences if those duties are not fulfilled. Regulators can also perform a function similar to fiduciary responsibilities, for example in cases where imbalances of market power might have detrimental impacts on consumers. However, each has its limitations. For example:

  • Contracts allow use of data for a purpose. Coupled with an audit function, these can ensure that data is used in line with individual wishes, and – at least for simple data transactions – contracts would require less energy to establish than a trust. However, effective auditing relies on the ability to draw a line from the intention of those entering a contract to the wording of the contract then to its implementation. Given the complexity of patterns of data use – and the fact that many instances of undesirable data use arise from multiple inconsequential transactions – this function may be difficult to achieve. Due to their obligation of undivided loyalty, a trustee may be better placed and motivated to map intent to use and understand potential pitfalls arising
    from the interactions between data transactions.
  • Agents can be tasked with acting on behalf of an individual, taking a fiduciary responsibility in doing so. However, the interaction between an individual and their agent does not accommodate as easily the collective dimension enabled by the establishment of a trust, and it is in this collective dimension that the ability to disrupt digital power relationships lies. Another issue associated with the use of agents is accountability. Structures would be needed to ensure that agents could be held accountable by individuals, if they failed in their responsibilities. In comparison, under trust law, the Courts of Chancery (and the associated institutional safeguards) present a much stronger accountability regime.

Many jurisdictions do not have an equivalent to trust law. However, they may have mechanisms that could fulfil similar functions. For example, while Germany does not operate a trust law framework, some institutions have fiduciary responsibilities built into their very structure, with institutions such as Sparkassen, banks that operate on a cooperative and not-for-profit basis, taking on a fiduciary responsibility for their customers. Studying such mechanisms might uncover ways of delivering the key functions of trust law – stewarding the rights associated with data and delivering benefits for individuals, communities and society with strong safeguards against abuse.

Developing data trusts

Recent decades have brought radical changes in patterns of data collection and use, and the coming years will likely see further changes, many of which would be difficult to predict today. In this context, society will need a range of governance tools to anticipate and respond to emerging digital opportunities and challenges. In conditions of uncertainty, trusts offer a way of responding to emerging governance challenges, without requiring legislative intervention that can take time to produce (and is more difficult to adapt once in place).

Trusts occupy a special place in the UK’s legal system, and the skills and experience of the UK’s legal community in their development and use means it is well-placed to lead the development of data trusts. The next wave in the development of these governance mechanisms will require further efforts to analyse the assets that will be held by a data trust, investigate the powers that trustees may hold as a result, and consider the different forms of benefit that may arise as a result. Those seeking to capture this opportunity will need to:

  • clarify the limits of existing data rights
  • identify lessons from other jurisdictions in the use of fiduciary responsibilities to underpin data governance
  • support pilot projects that assess the feasibility of creating data trusts as a framework for data governance in areas of real-world need.

Problems and opportunities addressed by data trusts

Data trusts have the potential to address some of the digital challenges we face and could help individuals better position themselves in relationship to different organisations, offering new mechanisms for chanelling choices related to how their data is being used.

While organisations could also form data trusts, this section will deal only with data trusts where the beneficiaries are individuals (data subjects). Also, while trusts could manage rights over non-personal data, this section takes as starting point the opportunities coming from individuals delegating their rights (or beneficial interest therein) over personal data. In contexts where non-personal data is managed, the practical challenges in distinguishing personal and non-personal data need to be acknowledged, and it needs to be seen how managing mixed data sets influence the structure and running of a data trust.

There are a number of issues that might arise from setting up a data trust, which aims to balance the asymmetries between those who have less power and are more vulnerable (individuals or data subjects) and those who are in a more favoured position (organisations or data controllers). This section aims at briefly presenting a number of caveats in relation to data trusts and the ecosystem they create, however it should be noted that information asymmetries could also exist between individuals and trusts, not only between individuals and organisations.[footnote] For a more detailed discussion on caveats and shortcomings see O’hara, K. (2020) ‘Data Trusts’. For further discussion regarding the development of data trusts see: Data Trusts Initiative (2020) Data Trusts: from theory to practice, working paper 1.[/footnote]

1. Purpose of the trust and consent

Trusts are usually established for defined purposes set out in a constitutional document. The data subjects will either come together to define their vision about the purposes of data use or will need to adhere to an established data trust and be well-informed about the purposes of the trust and how data or data rights are handled. In either case, it is of the utmost importance that those joining a data trust can do so in full awareness of the trust’s terms and aims.

This raises important ‘enhanced consent’ questions: what mechanisms, if any, are available to data trustees to ensure informed and meaningful consent is achieved? Will the lack of mechanisms for deliberation or consultation with beneficiaries involve liability for the trustees? What would the trustee role be in a participatory structure (active or purely managerial)? Might data trustees for instance draw upon the significant body of work in medical ethics to delineate best practice in this respect?

This set of questions is related to the issues raised in the next section, regarding the status, oversight and required qualifications of data trustees. Important questions arise around how expertise is attracted to this position when, as we will see below, the challenges for remunerating this role and the responsibilities and liabilities of trustees are significant.

2. The role of the trustee

The trustee will be in charge of managing the relationship between the trust’s beneficiaries and the organisations the trust interacts with. Trustees will have a duty of undivided loyalty to the beneficiaries (understood here as the data subjects whose data rights they manage) and they would be responsible for skilfully negotiating the terms of use or access to the beneficiaries’ data. They could also be held responsible if terms are less than satisfactory or if beneficiaries find fault with their actions (in which case the burden of proof is reversed, and it is for the data trustee to demonstrate that they have acted with undivided loyalty).

There are open questions as to if and how beneficiaries will be able to monitor the trustees’ judgement and behaviour and how beneficiaries will be able to identify fault when complex data transactions are involved. More complexity is added also if an ecosystem of data trusts is developed, where one person’s data is spread across several trusts.

At the same time, in the context of increased concerns coming from combining different datasets, in a scenario where one data trust manages a particular dataset about its beneficiaries and another trust manages a different dataset, where the combination of these two datasets could result in harm, should there be mechanisms for trusts to cooperate in preventing such harms? Or would trustees just inform beneficiaries of potential dangers and ask them to sign a liability waiver?

If and when a data trust relies on a centralised model (rather than a decentralised one, whereby the data remains wherever it is, and the data trustee merely leverages the data rights to negotiate access, etc.), one of the central attributions of the trustees will be to ensure the privacy and security of the beneficiaries’ data. Such a task would involve a high degree of risk and complexity (hence the likely preference for decentralised models).

It is unclear what type of technical tools or interfaces will be needed in order for trustees to access credentials in a secure way, for example, and who will make these significant investments in the technical layer. Potential inspiration could come from the new Open Banking ecosystem, where data sharing is enabled by secure Application Programming Interfaces (APIs) which rely on the banks’ authentication methodologies, so that third-party providers do not have to access users’ credentials.

Managing such demanding attributions raises questions related to what will be the triggers, incentives and training required for trustees to take up such a complex role. Should there be formal training and entry requirements? Could data trustees eventually constitute a new type of profession, which could give rise to a ‘local’ and potentially more nimble layer of professional regulation (on top of court oversight and potential legislative interventions), not unlike the multilayered regulatory structure that governs medical practice today?

3. Incentives and sustainability of data trusts

The data trust ecosystem model suggests the importance of competition between trusts for members, yet at this stage it is not clear how enough competition between trusts will emerge. At the same time, it is presumed that a data trust would work best when it operates on behalf of a large number of people. This gives the data trust a bargaining power position in relation to different organisations such as companies and public institutions. Will this create a dependence on network effects, and how can the negative implications be addressed?

Moreover, there are questions related to the funding model and incentives structure underlying the sustainability of data trusts. What will attract individuals to a data trust? For example, if the concern of the beneficiaries is to restrict and to protect data, will the trust be able to generate an income stream or will the trust rely on funding from other sources (e.g. from beneficiaries, philanthropists, etc.)? At the same time, if potential income streams are maximised depending on the use of the data, what are the implications for privacy and data protection?

In addition, what happens when individuals are simply unaware or uninterested in joining a data trust? Might they be allocated to a publicly funded data trust, on the basis of arguments similar to those that were relied on when making pension contributions compulsory? If so, what would constitute adequate oversight mechanisms?

When individuals are interested in joining a data trust, will they be lured by the promise of streamlining their daily interaction with data-reliant service providers, effectively relying on data trusts as a lifestyle, paid-for intermediary service providing peace of mind when it comes to safeguarding personal data? Will individuals be motivated to join a data trust in order to contribute to the common good in a way that does not entail long-term data risks? Will there be monetary incentives for people joining a data trust (whereby individuals would obtain monetary compensation in exchange for providing data)? Should some incentives structures – such as monetary rewards – be controlled and regulated, or in some cases altogether banned?

There are a number of possible funding models for data trusts:

  • privately funded
  • publicly funded
  • charging a fee or subscription from data trust beneficiaries (the individuals or data subjects) in return for streamlining and/or safeguarding their data interactions
  • charging a fee or subscription from those who use the data (organisations)
  • charging individuals for related services
  • a combination of the above.

The different funding options will have both sustainability, and larger data ecosystem implications. If the trust needs to generate revenue by charging for access to the data it stewards or for related services, the focus might start to levitate towards the viability and performance of the trust. The trusts’ performance will correlate with the demand side (organisations using the trust’s beneficiaries’ data), how many people join a data trust (potentially reinforcing network effects) and which data trust can compete better. Will these interdependencies diminish the data trusts’ role as a rebalancing tool for adjusting asymmetries of power and consolidating the position of the disadvantaged?

At the same time, if the data trust operates on a model where the beneficiaries are charged for the service, much depends on how that service is understood. If the focus is on monetary rewards, and the latter are not regulated, the expectations of return from the data trust will increase, hence affecting the dynamics of the relationships. For example, if the data trusts’ funding model implies companies pay back profit on the data used, they will have to make a number of decisions regarding their profitability and viability on the market. Will this reinforce some of the business models that are considerably criticised today, such as the dominant advertising based model?

In the case of publicly funded data trusts, public oversight mechanisms and institutions will need to be developed. At the moment, it is unclear who will be responsible for ensuring funds are transparently allocated based on input from individuals, communities and data-sharing needs. The currently low levels of data awareness also raise concerns about ways of building genuine and adequate engagement mechanisms. Further, the impact, benefit, results or added value created by the data trust will need to be demonstrated. This calls for building transparency and accountability means that are specific to publicly funded data trusts, grafting themselves on top of existing fiduciary duties (and Court oversight mechanisms).

4. Opportunities for organisations to engage with data trusts

Data trusts could offer opportunities for commercial or not-for-profit organisations in a variety of ways. Some of the benefits have been briefly mentioned in the introductory section, pointing to reputational benefits, legal compliance and future-proofing data governance practices. In this respect, one may imagine a scenario whereby large corporate entities (such as banks for instance) are keen to go beyond mere regulatory compliance by sponsoring a data trust in a bid to show how seriously they take their ethical responsibilities when it comes to personal data.

Such a ‘sponsored data trust’ would be strictly separate from the bank itself (absence of conflict of interest would have to be very clear). It could be flagged as enabling the bank’s clients to ‘take the reins’ of their data and benefit from insights derived from this data. All the data that would normally be collected directly by the bank would only be so collected on the basis of terms and conditions negotiated by the data trustee on behalf of the trust’s beneficiaries. The trustee could also negotiate similar terms (or negotiate to revise terms of existing individual agreements) with other corporate entities (supermarkets for instance).

Other potential benefits for corporate and research bodies are around the trusts’ ability to enable access to potentially better quality data that fits organisations’ needs and enables a more agile use of data. This reduces overhead and provides more ease of mind, based on the trustees’ fiduciary responsibility to the data subjects. A trustee would be able to spot and prevent potential harms, therefore reducing liability issues for organisations that could have otherwise arisen from engaging with individual data subjects directly. At the same time, trusts offer a way of responding to emerging governance challenges, without requiring legislative intervention that can take time to produce (and is more difficult to adapt once in place). A broader discussion about opportunities for commercial or not-for-profit organisations could be
considered for a future report.

Mock case study: Greenfields High School

Greenfields High School is using an educational platform to deliver teaching materials, with homework being assigned by online tools that track student learning progress, for example recording test scores. The data collected is used to tailor learning plans, with the aim of improving student performance.

Students, parents, teachers and school leadership have a range of interests
and concerns when it comes to these tools:

  • Students wish to understand what data is collected about them, how it is used and for how long it is kept. Parents want assurances about how their children’s data is used, stored, and processed.
  • Parents, teachers, and school leadership wish to compare their performance against that of other schools, by sharing some types of data.
  • The school wants to keep records of educational data for all pupils for a number of years to track progress. It also wants to be able to compare the effectiveness of different learning platforms.
  • The company providing the learning platform requires access to the data to improve its products and services.

How would a data trust work?

A data trust is set up, pulling together the rights pupils and parents have over the personal data they share with the education platform provider. It tasks a data trustee with the exercise of those rights with the aim of negotiating the terms of service to the benefit and limits established by the school, parents and pupils. It also aims at maximising the school’s ability to evaluate different types of tools (and possibly pool this data with other schools), within an agreed scope of data use that maintains the pupils’ and parents’ confidence that they are minimising the risks associated with data sharing.

 

The trust will be able to leverage its members’ rights to data portability and/or access (under the GDPR) when the school discusses onwards terms of data usewith the educational platform service provider.

 

The data trust includes several schools who have joined a group of common interest in a certain educational approach. This group is overseen by a board. One of the persons sitting on that board is appointed as data trustee.

Chapter 2: Data cooperatives

How data cooperatives work
How data cooperatives work

Why data cooperatives?

The cooperative approach is attractive in situations where there is a desire to give members an equal stake in the organisation they establish and an equal say in its management, as for example with traditional mutuals – businesses owned by and run for the benefits of their members – which are common in financial services, such as building societies. As the business is owned and run by its members, the cooperative approach can be seen as a solution to a growing sense of powerlessness people feel over businesses and the economy.[footnote]See Co-operatives UK (n.d.). Understanding co‑ops. [online] uk.coop. Available at: www.uk.coop/understanding-co-ops [Accessed 18 Feb. 2021].[/footnote]

The cooperative approach in the context of data stewardship can be explored in examples where groups have voluntarily pooled data resources in a commonly owned enterprise, and where the stewardship of that data is a joint responsibility of the common owners. The aim of such enterprises is often to give members of the cooperative more control over their data and repurpose the data in the interests of those represented in it, as opposed to the erection of defensive restrictions around the use of data to prevent activities that conflict with the interests of data subjects (especially but not exclusively with respect to activities that threaten to breach their privacy). In other words, cooperatives tend to have a positive rather than a negative agenda, to achieve some goal held commonly by members, rather than to avoid some outcome resisted by them.

This chapter looks at some examples of data cooperatives, the problems and opportunities they address and patterns of data stewardship. It explores the structure and characteristics of cooperatives and provides a summary of the challenges presented by the cooperative model, together with descriptions of alternative approaches.

What is a cooperative?

A cooperative typically forms around a group that perceives itself as having collective interests, which it would be better to pursue jointly than individually. This may be because they have more bargaining power as a collective, because some kind of network effect means the value for all increases if resources are pooled, or simply because the members of the cooperative do not want to cede control of the assets to those outside the group. Cooperatives are typically formed to create benefits for members or to supply a need that was not being catered for by the market.

The International Cooperative Alliance or ICA[footnote]The ICA is the global federation of co-operative enterprises. More information available at International Cooperative Alliance (2019). Home. [online] ica.coop. Available at: www.ica.coop/en [Accessed 18 Feb. 2021].[/footnote] is the global steward of the Statement on the Cooperative Identity, which defines a cooperative as an ‘autonomous association of persons united voluntarily to meet their common economic, social, and cultural needs and aspirations through a jointly-owned and democratically controlled enterprise.’

According to the ICA there are an estimated three million cooperatives operating around the world,[footnote]International Cooperative Alliance (2019a). Facts and figures. [online] ica.coop. Available at: www.ica.coop/en/cooperatives/factsand- figures [Accessed 18 Feb. 2021].[/footnote] established to realise a vast array of economic, social and cultural needs and aspirations. Examples include:

  • Consumer cooperatives, which provide goods and services to their members/owners, and so serve the community of users. They value service and low price above profit, as well as being close to their customers. They might produce goods such as utilities, insurance or food, or services such as childcare.[footnote]More information available at: Consumer Federation of America (n.d.). Consumer Cooperatives. [online] Consumer Federation of America. Available at: https://consumerfed.org/consumer-cooperatives [Accessed 18 Feb. 2021].[/footnote] They might be ‘buyers’ clubs’, intended to enable the amalgamation of buyers’ power in order to reduce prices. Credit unions are also examples of consumer cooperatives, which mutualise loans based on social knowledge of local conditions and members’ needs, and are owned by the members and therefore able to devote more capital to members’ services rather than profits for external owners.[footnote]More information available at: Find Your Credit Union (n.d.). About Credit Unions. [online] Find Your Credit Union. Available at: www.findyourcreditunion.co.uk/about-credit-unions [Accessed 18 Feb. 2021].[/footnote]
  • Housing cooperatives take on a range of forms, from shared ownership of the entire asset to management of the leasehold or managing tenants’ participation in decision-making.
  • Worker cooperatives, where the entity is owned and controlled by employees.
  • Agricultural cooperatives, which might be concerned with marketing, supply of goods or sharing of machinery on behalf of members. Many agricultural cooperatives in the US are of significant size: the largest, for example, had revenues of $32 billion in 2019.[footnote]Morning AgClips (2021). A snapshot of the top 100 agricultural cooperatives. [online] morningagclips.com. Available at: www.morningagclips.com/a-snapshot-of-the-top-100-agricultural-cooperatives [Accessed 18 Feb. 2021].[/footnote] These cooperatives are formed to address a market power imbalance created by small producers and large distributors or buyers – power asymmetries that are also experienced by individuals in the data ecosystem.

The estimated three million cooperatives subscribe to a series of cooperative values and principles.[footnote]More information available at: www.ica.coop/en/cooperatives/cooperative-identity and International Cooperative Alliance (2017) The Guidance Notes on the Cooperative Principles. Available at: www.ica.coop/en/media/library/research-and-reviews/guidancenotes- cooperative-principles [Accessed 18 Feb. 2021].[/footnote] Values typically include self-help,self-responsibility, democracy, equality, equity, solidarity, honesty and transparency, social responsibility and an ethics of care.[footnote]For example, there have been a number of experiments in using cooperative forms to manage data equitably, especially in the area of healthcare. See Blasimme, A., Vayena, E. and Hafen, E. (2018). Democratizing Health Research Through Data Cooperatives. Philosophy & Technology, [online] 31(3), pp.473–479. Available at: https://doi.org/10.1007/s13347-018-0320-8 [Accessed 18 Feb. 2021]; Hafen, E. (2019). Personal Data Cooperatives – A New Data Governance Framework for Data Donations and Precision Health. Philosophical Studies Series, pp.141–149. Available at: https://doi.org/10.1007/978-3-030-04363-6_9 [Accessed 18 Feb. 2021].[/footnote] Fundamental cooperative characteristics include: voluntary and open membership, democratic member control (one member, one vote), member benefit and economic participation (with surpluses shared on an equitable basis), and autonomy and independence.[footnote]See International Cooperative Alliance, Facts and figures and Cooperatives UK (2017). Simply Legal. [online] Available at: www.uk.coop/sites/default/files/2020-10/simply-legal-final-september-2017.pdf [Accessed 18 Feb. 2021].[/footnote]

Cooperatives in the UK: characteristics and legal structures

According to Co-operatives UK[footnote]Co-operatives UK is a network for thousands of co-operative businesses with a mission to grow the co-operative economy. More information available at: www.uk.coop/about [Accessed 18 Feb. 2021].[/footnote] there are more than 7,000 independent cooperatives in the UK, operating in all parts of the economy and collectively contributing £38.2 billion to the British economy.[footnote]See Co-operatives UK (2021), Understanding co‑ops. [online]. Available at: www.uk.coop/about/what-co-operative [Accessed 18 Feb. 2021].[/footnote]

UK law does not provide a precise definition of a cooperative, nor is there a prescribed legal form that a cooperative must take. According to Co-operatives UK, a cooperative in the UK can generally be taken to be any organisation that meets the ICA’s definition of a cooperative and espouses the cooperative values and principles set out in the Statement on the Cooperative Identity.[footnote]Co-operatives UK (2017) Simply Legal.[/footnote] This status can be implemented via many different unincorporated and incorporated legal forms. Deciding which one is best will depend on a number of case-specific factors, including the level of liability members are willing to expose themselves to, and the way members want the cooperative to be governed.

A possible, and seemingly obvious, choice of legal form is registering as a cooperative society under the Co-operative and Community Benefit Societies Act 2014.[footnote]See: Co-operative and Community Benefit Societies Act 2014. [online] Available at: www.legislation.gov.uk/ukpga/2014/14/contents [Accessed 18 Feb. 2021].[/footnote] This Act consolidated a range of prior legislation and helped to clarify the legal form for cooperative societies in the UK (different rules apply for registration of a credit union under the Credit Unions Act 1979). Subsequent guidance from the Financial Conduct Authority (FCA) on registration, and the Charity Commission on share capital withdrawal allowances, have further clarified and codified the regulatory regime for cooperative societies. In particular, to register as a cooperative society under the Act, it must be a ‘bona fide co-operative society’. The Act however does not precisely define what is included as a bona fide co-operative society. In its guidance, the FCA adopted the definition in the ICA’s Statement on the Cooperative Identity and says it considers it an indicator that the condition for registration is met where the society puts the values from the ICA’s Statement into practice through the principles set out in the Statement.[footnote]See Financial Conduct Authority (2015) Guidance on the FCA’s registration function under the Co-operative and Community Benefit Societies Act 2014, Finalised guidance 15/12 [online]. Available at: www.fca.org.uk/publication/finalised-guidance/fg15-12.pdf[/footnote]

The cooperative society form is widely used by all types of cooperatives. Registration under the 2014 Act imposes a level of governance through a society’s rules and a level of transparency through certain reporting requirements that has some common ground with Companies Acts requirements for other types of organisations.

However, as noted above, this is not the only legal form available for a cooperative, and alternative legal forms that can be used include a private company limited by shares and a private company limited by guarantee. For a more detailed exploration of the options Co-operatives UK has published guidance,[footnote]Co-operatives UK (2017) Simply Legal.[/footnote] and has a ‘Select-a-Structure’ tool on its website.[footnote]See Co-operatives UK (2018), Support for your co‑op. [online]. Available at: www.uk.coop/developing-co-ops/select-structure-tool [Accessed 18 Feb. 2021].[/footnote]

Cooperatives and data stewardship

For the purposes of this report we see data cooperatives as cooperative organisations (whatever their legal form) that have as their main purpose the stewardship of data for the benefit of their members, who are seen as individuals (or data subjects).[footnote]Depending on the type of cooperative, members of a cooperative can also be SMEs, enterprises, different types of individuals or groups or a combination of these. For more information see Co-operatives UK (2018), Types of Co-ops. [online]. Available at: www.uk.coop/understanding-co-ops/what-co-op/types-co-ops [Accessed 18 Feb. 2021].[/footnote] This is in contrast to stewardship of data primarily or exclusively for the benefit of the community at large.
Under the Co-operative and Community Benefit Societies Act 2014,if the emphasis is to benefit a wider community then the appropriate legal form would be a community benefit society.

As for cooperative societies, other legal forms could also be used to achieve the same aims and deciding which is best will depend on a number of case-specific factors. However, that is not to say that a cooperative whose aim is to benefit its members might not also benefit wider society – we will see examples later (e.g. Salus Coop) where members’ benefits are also intended to benefit wider society. Indeed, where members see the wider benefits as their own priorities (as with philanthropic giving), the distinction between members’ benefits and social benefits may be hard to discern.

In a data cooperative, those responsible for stewarding the data act in the context of the collective interests of the members and – depending on how the cooperative is governed – may have to advance the interests of all members at once, and/or achieve consensus over whether an action is allowed.

The stewardship of data may be (and with increasing tech adoption is increasingly likely to be) a secondary function to the main purpose of a cooperative. For example, if the cooperative is enabled by technology, such as through the use of a social media platform, then it will routinely produce data that it may be able to capture. If so, this data might be of use to the cooperative’s own operations in future. Some of these groups have been described as social machines.[footnote]Shadbolt, N., O’Hara, K., De Roure, D. and Hall, W. (2019). The Theory and Practice of Social Machines. Lecture Notes in Social Networks. Cham: Springer International Publishing. Available at: https://www.springer.com/gp/book/9783030108885[/footnote]

Examples of areas where valuable data may be produced are medical applications, interest groups, such as religious or political groups, fitness, wellbeing and self-help groups, particularly including the quantified self movement, and gaming groups. While questions around the management and use of data produced by cooperatives through their ordinary business will become increasingly important (as with other types of organisations that produce data as part of their business) this is not our focus here.

Data cooperatives versus data commons

 

In their collaborative, consensual form, data cooperatives are similar to data commons. A commons is a collective set of resources that may be: owned by no one; jointly owned but indivisible; or owned by an individual with others nevertheless having rights to usage (as with some types of common land). Management of a commons is typically informal, via agreed institutions and social norms.[footnote]For a richer discussion on governing the commons see Ostrom, E. (2015). Governing the Commons. Cambridge: Cambridge University Press.[/footnote]

 

The distinction between commons and cooperatives is blurred; one possible marker is that a commons is an arrangement where the common resource is undivided, and the stakeholders all have equal rights, whereas in a cooperative, the resources may have been owned by the members and brought into the cooperative. The cooperative therefore grows or shrinks as resources are brought in or out as members join or leave, whereas the commons changes organically, and its stakeholders use but do not contribute directly to the resources.

 

In the case of data, the cooperative model would imply that data was brought to and withdrawn from the cooperative as members joined and left. A data commons implies a body of data whose growth or decline would be independent of the identity and number of stakeholders.

 

The governance of commons can provide sustainable support for public goods,[footnote]Ostrom, E. (2015) Governing the Commons. Available at: https://doi.org/10.1017/CBO9781316423936[/footnote] and data commons are often written and theorised about.[footnote]Grossman, R. (2018). A Proposed End-To-End Principle for Data Commons. [online] Medium. Available at: https://medium.com/ @rgrossman1/a-proposed-end-to-end-principle-for-data-commons-5872f2fa8a47 [Accessed 18 Feb. 2021].[/footnote] However, as this report is focused on existing examples of practice, in this respect it is difficult to identify actual paradigms of data commons (either intended as such, or merely as institutions whose governance happens to meet Ostrom’s principles).[footnote]See Ada Lovelace Institute (2020). Exploring principles for data stewardship. [online] www.adalovelaceinstitute.org. Available at: www.adalovelaceinstitute.org/project/exploring-principles-for-data-stewardship [Accessed 18 Feb. 2021] and Ostrom, E. (2015) Governing the Commons.[/footnote] Hence, while data commons may possibly be an exciting way forward, and while there are indeed some domains where a commons approach might be appropriate (such as OpenStreetMap and Wikidata), the prospects of their emergence from the complex legal position surrounding data at the time of writing are not strong, so will not be discussed further in this report.

Examples of cooperatives as stewards of data

For the purpose of this report, data cooperatives are seen as cooperative organisations (irrespective of their legal form) that have as their main purpose the stewardship of data for the benefit of its members. This section focuses on examples from the data cooperative space, sharing remarks on governance, approach to data rights and sustainability. Although they take different legal forms (particularly as they are not all UK-based projects) all are working along broadly cooperative principles.

1. Salus Coop

Salus Coop is a non-profit data cooperative for health research (referring not only to health data, but also lifestyle-related data more broadly, such as data that captures the number of steps a person takes in a day), founded in Barcelona by members of the public in September 2017. It set out to create a citizen-driven model of collaborative governance and management of health data ‘to legitimize citizens’ rights to control their own health records while facilitating data sharing to accelerate research innovation in healthcare’.[footnote]See Salus Coop (n.d.). Home. [online] SalusCoop. Available at: www.saluscoop.org [Accessed 18 Feb. 2021].[/footnote]

Governance: Salus have developed a ‘common good data license for health research’ together with citizens through a crowd-design mechanism,[footnote]More information available at: Salus Coop (2020). TRIEM: Let’s choose a better future for our data. [online] SalusCoop. Available at: www.saluscoop.org/proyectos/triem [Accessed 18 Feb. 2021].[/footnote] which it describes as the first health data-sharing license. The Salus CG license applies to data that members donate and specifies the conditions that any research projects seeking to use the member data must adhere to.[footnote]The terms of the licences are available at Salus Coop (2020). Licencia. [online]. Available at: www.saluscoop.org/licencia [Accessed 18 Feb. 2021].[/footnote] The conditions are:

  • health only: the data will only be used for biomedical research activities and health and/or social studies
  • non-commercial: research projects will be promoted by entities of general interest, such as public institutions, universities and foundations
  • shared results: all research results will be accessible at no cost maximum privacy: all data will be anonymised and unidentified before any use
  • total control: members can cancel or change the conditions of access to their data at any time.

Data rights: Individual members will have access to the data they’ve donated, but Salus will only permit third-party access to anonymised data. Salus describes itself as committed to ensuring, and requires researchers interacting with the data to ensure, that: individuals have the right to know under what conditions the data they’ve contributed will be used, for what uses, by which institutions, for how long and with what levels of anonymisation; individuals have the right to obtain the results of studies carried out with the use of data they’ve contributed openly and at no cost; and any technological architecture used allows individuals to know about and manage any data they contribute.

Note therefore that Salus meets the definition of a data cooperative, as it provides clear and specified benefits for its members – specifically a set of powers, rights and constraints over the use of their personal health data – in such a way as to also benefit the wider community by providing data for health research. Some of these powers and rights would be provided by GDPR, but Salus is committed to providing them to its members in a transparent and usable way.

Sustainability of the cooperative: Salus has run small-scale studies since 2016, and promotes itself as being about to generate ‘better’ data for research (in relation, for example, to surveys), creating ‘new’ datasets (such as heartbeat data generated through consumer wearables) and ‘more’ data than other approaches. However, the cooperative’s approach to sustainability is unclear. In June 2021, it aims to publicly launch CO3 (Cooperative COVID Cohort), a project stream to help COVID-19 research,[footnote]More information available at: Salus Coop (2020). Co3. [online]. Available at: www.saluscoop.org/proyectos/co3 [Accessed 18 Feb. 2021].[/footnote] and it aims to capture a fraction of the value generated by providing data for researchers to sustain itself.

2. Driver’s Seat

Driver’s Seat Cooperative LCA (‘Driver’s Seat’)[footnote]See Driver’s Seat Cooperative (n.d). Home. [online]. Available at: www.driversseat.co [Accessed 18 Feb. 2021].[/footnote] is a driver-owned cooperative incorporated in the USA in 2019,95 with ambitions to help unionise or collectivise the gig economy. It helps gig-economy workers gain access to work-related smartphone data and get insight from it:

it is ‘committed to data democracy … [and] empowering gig workers and local governments
to make informed decisions with insights from their rideshare data.’

The Driver’s Seat app, available only in the US, allows on-demand drivers to track the data they generate, and share it with the cooperative, which can then aggregate and analyse it to produce wider insights. These are fed back to members, enabling them to optimise their incomes. Driver’s Seat Cooperative also collects and sells mobility insights to city agencies to enable them to make better transportation-planning decisions. According to the website, when ‘the Driver’s Seat Cooperative profits from insight sales, driver-owners receive dividends and share the wealth’.

One issue here, unexplored on the website, is that in the ride-hailing market, in geographically limited areas, drivers may indeed have common interests, but they are also in competition with each other for rides. Access to data could also open up job allocation to scrutiny, something that is concerning drivers in the UK, where a recent complaint against Uber has been brought by drivers who want to see how algorithms are used to determine their work, on the basis that this could be allowing discriminatory or unfair practices to go unchecked.[footnote]See OpenCorporates (2021). Salus Coop. [online] opencorporates.com. Available at: https://opencorporates.com/companies/us_ co/20191545590 [Accessed 18 Feb. 2021].[/footnote]

Governance: Driver’s Seat Cooperative is an LCA or Limited Cooperative Association in the US, so will be governed by the legislation and rules associated with this type of entity. It is not obvious from the website what the terms and conditions are for becoming a member of the cooperative and how it is democratically controlled.

Data rights: Driver’s Seat is headquartered outside the jurisdiction of the GDPR. A detailed privacy notice sets out how Driver’s Seat collects and processes personal data from its platform, which includes its website and the Driver’s Seat app.[footnote]See Driver’s Seat Cooperative (2020). Privacy notice [online]. Available at: www.driversseat.co/privacy [Accessed 18 Feb. 2021].[/footnote] By accessing or using the platform the user consents to the collection and processing of personal data according to this notice.

Sustainability of the cooperative: Driver’s Seat is a very new cooperative and a graduate of the 2019 cohort of the start.coop accelerator programme in the US.[footnote]See Start.Coop (2019), Cohort report 2019. [online] Available at: https://start.coop/wp-content/uploads/2019/12/Start. coop_2019Report.pdf [Accessed 18 Feb. 2021].[/footnote] PitchBook reports that it secured $300k angel investment in August 2020.[footnote]See PitchBook (n.d.), Driver’s Seat Cooperative Company Profile: Valuation & Investors. [online] Available at: https://pitchbook.com/ profiles/company/251012-17 [Accessed 18 Feb. 2021].[/footnote] According to its website, Driver’s Seat sells mobility insights to city agencies, which is doubtless at least part of its plan for long-term sustainability. It is not obvious from the website if there is any further investment requirement from the driver-owners of the cooperative above and beyond sharing their data. The app itself is free.

3. The Good Data (now dissolved)

The Good Data Cooperative Limited (‘The Good Data’)[footnote]More information available at: TheGoodData (n.d). Home. [online]. Available at: www.thegooddata.org [Accessed 18 Feb. 2021].[/footnote] was a cooperative registered in the UK that developed technology
to collect, pool, anonymise (where possible) and sell members’ internet browsing data on their own terms, to correct the power imbalance between individuals and platforms (selling ‘on fair terms’).[footnote]For more information see: Nesta (n.d.). The Good Data. [online] Nesta. Available at: www.nesta.org.uk/feature/me-my-data-and-i/thegood- data/ [Accessed 18 Feb. 2021].[/footnote] Members participated in The Good Data by donating their browsing data through this technology, so that the cooperative could trade with it anonymously enabling the cooperative to raise funds to cover costs and fund charities.[footnote]See Partial Amendment to Rules dated 18 July 2017, filed at the FCA: https://mutuals.fca.org.uk/Search/Society/26166 [Accessed 18 Feb. 2021].[/footnote]

As with Salus Coop, The Good Data provided benefits for members while simultaneously promising potential benefits for the wider community (and indeed many of those wider benefits would also be reasons for members to join).

Governance: The Good Data was registered as a cooperative society under the Co-operative and Community Benefit Societies Act 2014, and accordingly was subject to the requirements of that Act and had to be governed according to its rules filed with the FCA. The Good Data determined which consumers should receive the data, and made decisions about what to sell and how far to anonymise on a case-by-case basis. It declined to collect data from ‘sensitive’ browsing behaviour, which included looking at ‘explicit’ websites, as well as health-related and political sites.[footnote]For more information see Nesta (n.d.). The Good Data.[/footnote] According to The Good Data’s last annual return filed at the FCA,[footnote]See Annual Return and Accounts dated 31 December 2018 filed at the FCA: https://mutuals.fca.org.uk/Search/Society/26166 [Accessed 18 Feb. 2021].[/footnote] The Good Data had three directors. Members had online access to all relevant information and based on that could present ideas or comments in the online collaboration platform at any time. Members could also participate in improving existing services and an Annual General Meeting was held once a year.

Data rights: It is hard to say what rights were invoked here. If the data has been anonymised, it is no longer personal data under the GDPR. If the data is likely to be re-identifiable or to be attributed to an individual, then the data is pseudonymised (and thus still personal data).

Sustainability of the cooperative: Revenue was generated from the sale of anonymised data to data brokers and other advertising platforms, and the profits redistributed, to maintain the system, and for social lending in developing countries. Decisions about the latter were determined by cooperative members. However, the model proved not to be sustainable, as its website announces the dissolution of the cooperative: ‘we thought that the best way to achieve our vision was by setting up a collaborative and not for profit initiative. But we failed to pass through the message and to attract enough members.’ The Request to Cancel filed at the FCA[footnote]See Request to Cancel dated 6 September 2019 filed at the FCA: https://mutuals.fca.org.uk/Search/Society/26166 [Accessed 18 Feb. 2021].[/footnote]
also indicated that this was due to Google rejecting The Good Data’s technology, which was intended to allow members to gain ownership of their browsing data from its Chrome Webstore, and being unable to build a new platform to pursue this objective given the required technical complexity and lack of sufficient human and financial resources.

Created with similar intentions, Streamr[footnote]Konings, R. (2019). Join a data union with the Swash browser plugin. [online] Medium. Available at: https://medium.com/streamrblog/ join-a-data-union-with-the-surf-streamr-browser-plugin-d9050d2d9332 [Accessed 18 Feb. 2021].[/footnote] advocates for the concept of ‘data unions’ and seeks to create financial value for individuals by creating aggregate collections of data in a similar way, including focusing on web browser data – it’s unclear whether this effort will prove more sustainable than The Good Data.

Problems and opportunities addressed by data cooperatives

From the examples surveyed above, data cooperatives appear mostly concerned with personal data (as opposed to non-personal data) and, in general, are directed towards giving members more control over data they generate, which in turn can be used to address existing problems (including social problems) or open up new opportunities. This is very much in line with the purpose of the cooperative model generally. For example, Salus Coop allows members to control the use of their health data, while opening up new opportunities for health research. The Good Data was aimed at giving data subjects more control and bargaining power with respect to data platforms, to get a better division of the economic benefits. Unionising initiatives, such as Driver’s Seat, have focused largely but not exclusively on the gig economy, and using data to empower workers and enable them to optimise their incomes and working practices.

Many data cooperatives seek to repurpose existing data at the discretion of groups of people, to create new cooperatively governed data assets. In this respect, they tend to pursue a positive agenda that uses data as a resource. For example, Driver’s Seat brings in data from sources such as rideshare platforms and sells mobility insights based on this data, sharing profits among members. The Good Data’s business model was to trade anonymised internet browsing data. Some data cooperatives do also seek to refactor the relationship between organisations that hold data and individuals who have an interest in it. The Good Data’s technology to collect internet browsing data was also designed to give members using it more privacy by blocking data trackers.

See also RadicalxChange’s proposal in Annex 3, which contains elements of all three legal mechanisms presented in this report. Described as a conceptual model, it would shake up the status quo even more by making corporate access to data subject’s data the cooperative decision of a Data Coalition.

Although privacy is usually a feature they respect, it is hard to find data cooperatives intending to preserve privacy as a first priority, through limiting the data that is collected and processed. Indeed, this is rather a negative aim, constraining the use of data, rather than pursuing a positive agenda and opening up a new purpose for the data.

More often data anonymisation techniques and privacy-preserving technologies are referred to, however these areas require research and investment,[footnote]Royal Society (2019). Protecting privacy in practice: The current use, development and limits of Privacy Enhancing Technologies in data analysis. [online] Royal Society. Available at: https://royalsociety.org/-/media/policy/projects/privacy-enhancing-technologies/ privacy-enhancing-technologies-report.pdf?la=en-GB&hash=862C5DE7C8421CD36C105CAE8F812BD0 [Accessed 18 Feb. 2021].[/footnote] especially given the legal uncertainty as to what it takes for companies to anonymise data in the light of the GDPR, and the complexity of the task of anonymisation itself, which requires a thorough understanding of the environment in which the data is held.[footnote]For a more detailed discussion see: UK Anonymisation Network (2020). Anonymisation Decision-Making Framework. [online] Available at: https://ukanon.net/framework [Accessed 18 Feb. 2021].[/footnote]

Examples that we have surveyed could be said to recognise the balance between 1) complete privacy and 2) the potential benefits to the individual from collecting and processing personal data and communicating the insights to the individual, and 3) in those individuals then being able to better influence the market and receive a better division of the economic benefits (e.g. through selling the data and/or insights).

Challenges

The cooperative approach appeals to a sense of data democracy, participation and fair dealing that may inform and shape the structuring of any data-sharing platform but, in themselves, cooperatives face a number of challenges:

1. Uptake

While the examples we’ve analysed represent experimentation around data cooperatives, there doesn’t appear to be significant uptake and use of them, and little evidence that they will scale to steward significant amounts of data within a particular geography or domain. This is perhaps unsurprising, given a number of challenges to uptake, as cooperatives require motivated individuals to come together and actively participate by:

  • recognising the significance of the problem a cooperative is trying to solve (resonance challenge)
  • being interested enough to find or engage with a data cooperative as a means to solve the problem (mobilisation challenge)
  • trusting a particular cooperative and its governance as the best place to steward data (trust challenge)
  • being data literate enough to understand the implications of different access permissions, and/or willing to devote time and effort to managing the process. Because cooperatives presume a role for voluntary members and rely on positive action to function, this is more likely to work in circumstances where all participants
    are suitably motivated and willing to consent to the terms of participation (capacity challenge).

The examples surveyed offer some insights into how these elements of the uptake challenge could be met. A strong common incentive could be enough to meet the mobilisation challenge by employing bottom-up attempts to create data cooperatives. For example, Driver’s Seat could use the interest and perceived injustice among gig-economy workers in their working conditions and pay to build an important worker-owned and controlled data asset. If endorsed or even delivered by trusted institutions such as labour unions this could further enhance uptake.

Other examples, such as The Good Data, were aiming to mobilise people around the concept of correcting a power imbalance between individuals and platforms. In a similar vein, the aim of the RadicalxChange model (discussed further in Annex 3) works at the level of power imbalance, with an added requirement for legislative change to make their data coalitions possible and reduce the market failure of data.[footnote]In RadicalxChange’s view, data fails because most of the information we have at our disposal (about ourselves and others) is largely the same as information others have at their disposal. The price is dragged down to zero as buyers can always find a cheaper seller for the same data. However, data’s combined value, which is higher than zero, is almost entirely captured by the (well-capitalised) parties that have capacity to combine data and extract insights. Because of this market failure, which is peculiar to data, RadicalxChange believes that top-down intervention is needed to make bottom-up organisation possible through Data Coalitions. Through the right type of legislation, the problem of buy-in for joining data coalitions would be removed, because joining would be costless or virtually costless and immediately advantageous or remunerative. RadicalxChange is discussed as a conceptual model in Annex 3.[/footnote]

Such a top-down approach could create challenges not too far removed from the issues that many data cooperatives seek to address, such as around the selected default sharing and processing options that the data would be subject to, and the abilities of people to opt-out or switch. Relying on individual buy-in for success may never move the needle, without more of a purpose or affiliation to coalesce around. Changing the world for the better is more abstract and often less motivating than changing one’s particular corner of it for one’s (and others’) benefit.

These uptake challenges are not unique to cooperatives and are experienced by many other data-stewardship approaches that focus on empowering individuals in relation to their personal data. However, potentially, the features of a cooperative approach to data stewardship could themselves hinder the uptake and scalability of a data cooperative
initiative. These are discussed next.

2. Scale

There are additional features of cooperatives that may make this approach unsuitable for large-scale data-stewardship initiatives:

a. Democratic control and shared ownership

The cooperative model presumes shared ownership. The implied level of commitment may be an asset to the organisation, but may similarly make it hard for the model to scale if everyone wants their say.

The cooperative model also favours democratic control. Depending on how the cooperative is established and governed, the democratic control of cooperatives could be too high a burden for all but the most motivated individuals, limiting its ability to scale. Alternatively, where a cooperative has managed to scale, this approach could become too unwieldy for a cooperative to effectively carry out its business in a nimble and timely fashion.

Democracy and ownership also need to be balanced by a constitution. It may aim for equal say for members (one member, one vote), or alternatively it may skew democratic powers toward those members with more of a commitment (e.g. based on the amount of data donated).
Questions need to be resolved about what members vote for – particular policies, or simply for an executive board. Can the latter restriction, which will lead to more efficient decision-making, still enable individual members to feel the commitment to the cause that is needed to meet
the mobilisation challenge? If, on the other hand, members’ votes feed directly into policy, can the cooperative sustain sufficient policy coherence to meet the trust challenge?

b. Rights, accountability and governance

To establish and enforce rights and obligations, a cooperative needs to be able to use additional contractual or corporate mechanisms, and this requires members to engage and understand their rights and obligations. This is particularly important where data is concerned, given legal duties under legislation such as the Data Protection Act 2018, which implements the GDPR in the UK.

Cooperatives can create a large audience of members who can demand accountability and these members may be exposed themselves to personal liability, with associated challenges to manage potential proliferation of claims and fear of unjust proceedings.

Cooperatives may establish high levels of fiduciary responsibility but do not inherently determine particular governance standards or establish clear management delegation and discretion. Registration under the Co-operative and Community Benefit Societies Act 2014
imposes a level of governance that partially echoes the greater body of legislation applicable to registered companies under the Companies Acts. Registration as a company under the Companies Acts will import a broader array of governance provisions.

With respect to data, governance is a particularly sensitive requirement, especially as a cooperative scales. If a cooperative ended up holding a large quantity of data, this may become extremely valuable as network effects kick in. The cooperative would certainly need a level of professionalism in its administration to prosper, especially if its mission required it to negotiate with large data consumers, such as social networks. Moreover, the overarching governance of the administrators of the cooperative would need to be addressed. For example, there could be a data cooperative board with each individual having ownership shares in the cooperative based on the data contributed (which in turn would need a quasi-contractual model to define the role of the board and its governance role regarding data use).

Failure of governance may also leave troves of data vulnerable, if the proper steps have not been taken. In one recent incident, a retail cooperative venture in Canada called Mountain Equipment Co-Op was sold to an American private-equity company from underneath its five million members, after years of poor financial performance (losing CAD$11 million in 2019), with the COVID-19 pandemic as the last straw. The board felt that the sale was the only alternative to liquidation, although the decision was likely to be challenged in court.[footnote]Cecco, L. (2020). Members of Canada’s largest retail co-op seek to block sale to US private equity fund. [online] The Guardian. Available at: www.theguardian.com/world/2020/sep/22/canada-mountain-equipment-co-op-members-bid-block-sale-us-firm [Accessed 18 Feb. 2021].[/footnote] This case throws up data issues specifically – does the buyer get access to data
about the members, for example? But the main point is that a data cooperative managing a large datastore effectively and securely might well have to endure significant costs (e.g. for security), and will need a commensurate income.

If that income could not be secured, could the cooperative members prevent the sale of the cooperative – and therefore the data – to a predator? Under UK law, the assets of a cooperative should be transferred, at least in some circumstances, to a ‘similar’ body or organisation with similar values if and when it is wound up. Sometimes even an asset lock can be involved under Community Interest Company law. The extent of legal restraint on the disposal of the assets of the cooperative will of course depend on how it is defined and incorporated, and the sensitivity of the data should be reflected in the care with which the fate of the data is constrained. There may be legal protections, but it is still worth pointing out that the very existence of the data cooperative, as a single point of access to the data, may represent a long-term vulnerability.

c. Financial sustainability

Cooperatives do not easily lend themselves to development funding other than grant aid or pure philanthropy. In combination with the mobilisation challenge this suggests financial sustainability is likely to be a significant issue.

One problem this creates for many cooperatives is that they have to fall back on internally generated resources (i.e. donated by the members). Without a substantial and sustainable income, a cooperative will find it difficult to recruit capable managers and administrators, and so will be forced to form committees selected from the membership. Without capable managers, a cooperative will be less able to generate income and manage resources effectively, and, for example, will be less able to raise external capital because of a low rate of expected return.

These factors constrain the scope for a cooperative to mature and operate in a commercial environment when compared with other models.

Mechanisms to address the challenges

The cooperative structure has longstanding heritage and diverse application, as demonstrated by the examples we have analysed, and ready appeal because of the inherent assumptions of common economic, social and cultural purpose. It is a natural mechanism by which an enterprise can be owned by people with a common purpose and managed for the benefit of those who supply and use shared services.

Recognising the challenges identified above that are inherent in a cooperative structure, we observe that cooperatives often rely on contract or incorporation to establish rights, obligations and governance, and either route might be selected as the preferred form while still
seeking to capture some of the essence of a cooperative through stated purpose, rights, obligations and oversight. However, neither is perfect – or, put another way, each, by diluting the cooperative ideal, may reintroduce some of the challenges that the cooperative model
was designed to address. These mechanisms are:

  • The contractual model, where all rules for the operation of the data platform should be set down in bilateral (or multilateral) agreements between data providers and data users. This, when combined with the fact that each party would need to take action on its own behalf to enforce the terms of these agreements against any counterparties,
    imposes a burden on participants to negotiate agreements and encourages participants to negotiate specific terms. It therefore has limited utility and is restricted to relatively limited groups of participants of similar sophistication, and may be vulnerable to the mobilisation or the capacity challenge.
  • The corporate model, often adopted in the form of a company limited by guarantee to underpin a cooperative, to achieve what a contractual model offers with additional flexibility, scalability and stability that is lacking from that model. This model may run into the trust challenge, however. In conceptual terms, data providers are being
    asked to give up a degree of control over the data they are providing in return for the inherent flexibility, scalability and stability of the structure. They will only do so if they feel they can trust the structure or organisation that has been set up to effect this, which can be offered via a combination of clear stated purpose of the institution, the
    reporting and accountability obligations of its board and an additional layer of oversight by a guarantor constituted to reflect the character of participants and charged with a duty to review and enforce due performance by the board. In time that might be supplemented by a suitably constituted, Government-sponsored regulator.

Although there is currently no obstacle in the way of data cooperatives – the law is in place, the cooperative model well-established – we can see a number of challenges to uptake, growth, governance and sustainability. The problem is rendered doubly hard by the fact that some of the challenges pull in different directions. For instance, the capacity challenge might be met by a division of labour, hiving off certain decision-making and executive functions, but then this might lead to the emergence of the trust challenge as the board’s decisions come under scrutiny. Failure to meet the mobilisation challenge could result in the members being as alienated from the stewardship of their data by the data cooperative as they were by other more remote corporate structures, but addressing the mobilisation challenge might lead to an
engaged set of members developing hard-to-meet expectations about the level of involvement they could aspire to, consistent with streamlined decision-making.

 

Mock case study: Greenfields High School

 

Greenfields High School and other educational facilities are interested in coordinating educational programs to meet the needs of their learners and communities in a way that complements and strengthens school programmes. All educational institutions use online educational tools to tailor learning plans for improving student performance and see a real opportunity to better serve their community through data sharing.

 

Greenfields High School proposes to the other educational boards to convene and explore the idea of pooling resources together for achieving these goals. They all have a shared interest in working together to gain better insights as to how they might improve educational outcomes for their community members.

 

In an act of good governance, educational facilities consult with their students, parents and teachers, and together they develop the rules and governance of the cooperative:

  • Members of the community vote on the collaborative agreement between
    educational facilities and decide what data can be shared and for what
    purposes. The agreement is transparent about what data is collected, stored,
    processed and how it is used.
  • The schools gain better understanding of the effectiveness of online tools
    and educational plans throughout the learning cycle.
  • Where educational programmes are developed for the community based
    on analysed data, members also decide on the price thresholds for such
    educational services.

How would a data cooperative work?

 

A data cooperative is set up, pulling together the data educational facilities have from using digital technologies. Schools maximise their aims in comparing performance and understanding what digital tools are more effective. Students have a direct say into how their data is used and decide on the management and organisation of the cooperative.

Chapter 3: Corporate and contractual mechanisms

How corporate and contractual mechanisms work
How corporate and contractual mechanisms work

Corporate and contractual mechanisms can create an ecosystem of trust where those involved:

  • establish a common purpose
  • share data on a controlled basis
  • agree on structure (corporate or contractual).

Why corporate and contractual mechanisms?

Corporate and contractual mechanisms can facilitate data sharing between parties for a defined set of aims or an agreed purpose. For the purposes of this report, it is envisaged that the overall purpose of a new data model will be to achieve more than mere data sharing, and
data stewardship can be used to generate trust between all the parties and help overcome relevant contextual barriers. The core purpose for data sharing will be wider than just the benefit gained by those who make use of data.

The role of the data model we envisage therefore includes:

  • enabling data to be shared effectively and on a sustainable basis
  • being for the benefit of those sharing the data, and for wider public benefit
  • ensuring the interests of those with legal rights over data
  • ensuring data is ethically used and in accordance with the rules of the institution
  • ensuring data is managed safely and securely.

How to establish the right approach?

The involvement of an independent data steward is envisaged as a means of creating a trusted environment for stakeholders to feel comfortable sharing data with other parties who they may
not necessarily know, or with whom they have not had an opportunity to develop a relationship of trust.

Incentives for allowing greater access to data and for making best use of internal data will vary according to an individual organisation’s circumstances and sector. While increased efficiency, data insights, improved decision making, new products and services and getting value from data are potential drivers, there are also a number of challenges to sharing data:

  • operating in highly competitive or regulated sectors, and concerns about undermining value in IP and confidential information
  • a fear of being shown up as having poor-quality or limited data sets
  • a fear of breaching commercial confidentiality, competition rules or GDPR
  • a lack of knowledge of business models to support data sharing – access to examples, lessons learned and data sharing terms can help others feel able to share
  • a lack of understanding of the potential benefits
  • not knowing where to find the data or limited technical resource to implement (e.g. to extract the data and transform it into appropriate formats for ingestion into a data-sharing platform)
  • fear of security and cybersecurity risks.

All these challenges can lead to inertia and lack of motivation.

Where a group of stakeholders see benefits in coming together to share data they will still need to be confident that this is done in a way that maintains a fair equilibrium between them, and that no single stakeholder will dominate the decision as regards the management and sharing of data. In order to establish and maintain the confidence of the stakeholders, they should all be fully engaged in the determination of what legal mechanism should be established. One or two stakeholders deciding and simply imposing a structure on other stakeholders is unlikely to engender a sense of trust, confidence and common purpose.

It is for this reason that we recommend the following approach.

1. Establish a clearly defined purpose

Establishing a clearly defined purpose is the essential starting point for stakeholders. Not only will a compelling statement of purpose engender trust among stakeholders, but it will also provide the ultimate measure against which governance bodies and stakeholders can check to ensure that the data-sharing venture remains true to its purpose. A clearly defined purpose can also help in assessing compliance with certain principles of the GDPR and other data-related regulations, including ePrivacy,[footnote]Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications). Available at: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32002L0058 [Accessed 18 Feb. 2021].[/footnote] or Payment Services Directive 2,[footnote]Directive 2015/2366 of the European Parliament and of the Council of 25 November 2015 on payment services in the internal market. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32015L2366 [Accessed 18 Feb. 2021].[/footnote] which are often tested against the threshold of whether a data-processing activity, or a way in which it is carried out, is ‘necessary’ for a particular purpose or objective.

Any statement of purpose will need to be underpinned by agreement on:

  • the types of data of which the data-sharing venture will take custody or facilitate sharing
  • the nature of the persons or organisations who will be permitted access to that data
  • the purpose for which they will be permitted access to that data
  • the data-stewardship model and governance arrangements for overseeing the structure and processing, including to enforce compliance with its terms and facilitate the exercise of rights by individuals, and to ensure that data providers and data users have adequate remedies if compliance fails.

2. Data provider considerations

The data-sharing model has to be an attractive proposition for the intended data providers, with clear value and benefit, and without unacceptable risk. There will need to be strong and transparent governance to engender the level of confidence required to encourage data sharing. This will include confidence in not only the data provider’s ability to share the data with the data-sharing venture without incurring regulatory risk or civil liability, but also in its ability to recoup losses from the data-sharing set up or from relevant data users if the governance fails and this results in a liability for the data provider. Other considerations for governance could be related to managing intellectual property rights and control over products developed based on the data shared.

3. Data user considerations

As with the data providers, the data-sharing model must be an attractive proposition for intended data users. The data will need to be of sufficient quality (including accuracy, reliability, currency and interoperability) and not too expensive, for the data users to want to participate. Data users will also require adequate protection against unlawful use of data. For example, in relation to personal data, data users will typically have no visibility of the origins of the datasets and the degree of transparency (or lack of it) provided to the underlying data subjects. They will also be relying on the data providers’ compliance with the governance model to ensure that use of the contributed datasets will not be a breach of third-party confidentiality or IP rights.

4. Data steward considerations

The data steward’s role is to make decisions and grant access to data providers’ data to approved data users in accordance with the purpose and rules of the data-sharing model. The steward may take on additional responsibilities such as due diligence on data providers and users, and enforcement of the purpose of the data-sharing model; however, the way in which the model is funded and structured[footnote]Article 11 and Recital 25 of the draft Data Governance Act include requirements for data-sharing services to be placed in a separate legal entity. This is required both in business-to-business data sharing as well as in business-to-consumer contexts where separation between data provision, intermediation and use needs to be provided. The text does not distinguish between closed or open groups.[/footnote] will impact on the extent of any such duties and who is practically responsible for performing them.

The responsibilities of the data steward may impact on considerations for the data providers and data users of the overall impact on risk and developing trust in the relationships.

5. Relationship/legal personality

The formal relationship between the parties will depend on the previous steps and the project structure that the stakeholders are comfortable with, based on the relevant risk, economic, regulatory and commercial considerations. Where there is no distinct legal personality, the
relationship may be governed by a series of contracts between the data providers, users and data steward – whether bilateral or a contract club with multiple parties. Where there is a legal personality, then as well as there being likely to be a series of contracts, there will be the documents establishing the relevant legal entity.

6. The rules

The rules of the data-sharing model will form part of the corporate and/or contractual relationship between stakeholders. This is discussed in more detail below in the ‘Mapping data protection requirements onto a data sharing venture’ section and in Annex 1 on ‘Existing mechanisms for supporting data stewardship’ when discussing regulatory mechanisms.

What is the appropriate legal structure?

As outlined, the aim is to design an ecosystem of trust. The data stewardship model will sit at the heart of this ecosystem. In this section we address two broad possibilities as to the legal form this should take:

  • a contractual model: this would involve a standardised form of data sharing
    agreement without the establishment of any form of additional legal structure or personality
  • a corporate model: this would involve the establishment of a company or other legal person, which would be responsible for various tasks relating to the provision of access to and use of data. The documents of incorporation would be supplemented by contractual arrangements.

In the contractual model, all of the rules for the operation of the data venture would need to be set down (and repeated) in a series of bilateral (or multilateral) agreements between data providers and data users. This, when combined with the fact that each party would need to take action on its own behalf to enforce the terms of that agreement against any counterparties, makes it likely that providers of data will only be willing to provide access to data on highly specific terms.

Where the aims of the stakeholders will require significant flexibility and scalability then a simple contractual model may not be the most appropriate. For example, a contractual model does not easily accommodate dedicated resources which may be required to govern and administer a growing data-sharing establishment (such as full-time employees, for which an employing entity is required). Also, an independent entity may find it easier to vary the rules of participation, or make other changes for the benefit of all, as the model evolves or laws change. Whereas a multilateral contractual arrangement may require protracted negotiation amongst the various stakeholders who each bring their own commercial objectives to the discussion.

In the corporate model, there is a degree of flexibility and scalability that is lacking from the contractual model. This model requires a greater degree of trust on the part of stakeholders, however. In conceptual terms, data providers are being asked to give up a degree of control over the data they are providing – presumably in return for some incentive or reward. They will only do so if they feel they can trust the structure or organisation that has been set up to effect this.

We consider three forms of company here: a company limited by shares, a company limited by guarantee (a CLG) and a community interest company (a CIC).

Whichever form is chosen, the company in question would operate as the data-platform owner and manager, and would enter into contractual arrangements with providers of data and proposed users.

The contractual terms would allow for:

  • required investment in the company to fund infrastructure requirements such as platform development and maintenance – this could be by way of non-returnable capital contribution or loan from either the data provider or data users as circumstances merit required returns on supply of data
  • required charges for use of the data
  • other contractual rights and obligations specific to the circumstances including access to and usage of data.

Returns and charges could be related to commercial exploitation or fixed. Also, depending on the nature of the venture, data users may be obliged to share insights gained from access to the data with the venture so that it can be shared with other data users (e.g. see the Biobank example below). The contract terms would dictate all required obligations and liabilities between the contracting parties. Bear in mind that the structure of a data-sharing venture could be adapted over time. For example, at the outset, the stakeholders may not be in a position to finance the establishment and resourcing of a corporate entity, or it may not be seen as appropriate to a data-sharing trial. As the venture scales, however, the stakeholders may determine that a corporate structure should be implemented.

1. Choice of corporate form

One of the key questions that will determine the appropriate form of company, is whether the data-sharing venture is intended to be able to make a profit other than for the benefit of its own business – i.e. whether profits are required to be applied to the furtherance of its business,
or whether surplus profits may be dividended up to the data-sharing venture’s shareholders.

CLGs are not usually used as a vehicle for a profit-making enterprise, and a CLG’s articles of association will often (but not always) prohibit or restrict the making of distributions to members. Any profits made by a CLG will generally be applied to a not-for-profit cause such as the data-sharing venture’s purpose.

A CLG may be the most appropriate vehicle where it is not envisaged that profit or surplus generated will be distributed to its members; and it is not envisaged that the institution will seek to raise debt or equity finance. In this case activities will need to be financed by other means,
such as revenue generated from its own activities including the provision of data services or third-party funding. If the focus changes over time to encompass more commercial activities, then establishing a trading subsidiary company limited by shares could also be considered.

It should be borne in mind that a CLG (unlike a company limited by shares) does not have share capital that it is able to show on its balance sheet. This often makes it more difficult for a CLG to raise external debt finance. The alternative possibility available to companies limited
by shares, of investment by way of equity finance, is precluded here because of the structure of the CLG. Because of these difficulties, it is worth drawing attention to CICs as a further alternative corporate vehicle.

A CIC is a limited-liability company that has been formed specifically for the purpose of carrying on a business for social purposes, or to benefit a community. Although it is a profit-making enterprise, its profits are largely applied to its community purpose rather than for private gain. This is achieved by way of a cap on any movements of value from the CIC to its shareholders or members (such as by way of dividends).

This model allows shareholders to share in some of the profit, while ensuring that the CIC continues to pursue its community purpose. CICs are regulated by the Office of the Regulator of CICs (the CIC Regulator), and are required to file a community interest statement at Companies House, which is also scrutinised by the CIC Regulator. The CIC’s share
capital would appear on its balance sheet, thus increasing its ability to raise external finance.

If surpluses generated by its activities (including the provision of data services) are to be applied to its business, and its financing arrangements are secure, then a CLG will likely assist in gaining traction with those stakeholders who believe that the independence of the data trust would be compromised by virtue of its ability to pay dividends to shareholders. The structure of a company limited by guarantee provides a well-established framework of governance and liability management, and avoids the risk of exposure to a proliferation of liabilities that exists
in shareholding and trust environments.

A guarantor, which could be a non-government organisation (NGO) or other suitably established and populated body, could be appointed to monitor compliance and governance. This could address the requirement for oversight in a way that is specific to the requirements of the platform and data supplier, and to subjects not easily undertaken by other pre-established bodies, such as the Charity Commission or Regulator of Community Interest Companies, neither of which is specifically equipped to perform this function.

2. Governance and rules

The agreed purpose for the data-sharing venture will drive the overall governance of the data arrangement and its objectives, the rules for its operation and the parameters for all data-sharing agreements entered into. That purpose and those objectives should be reflected (including, where appropriate, as binding obligations) in its governance framework,
rules and the contractual framework governing the provision and use of data.

While governance and rules are not necessarily made public documents, the greater the degree of transparency as to the data venture’s operations, the greater the level of confidence that stakeholders and the wider public will be likely to feel in its functioning. Strong and transparent governance is a critical factor in establishing trust to encourage data
sharing. The rules and governance framework will underpin the purpose. Confidence that strong governance will ensure strict compliance with the rules of the trust and enforcing any failings is critical.

There needs to be confidence that the interests of all key stakeholders are represented. In a corporate model, there are a number of means of achieving this that may include board representation and/or a mix of decision-making and advisory committees representing the various interest groups. Boards and committees that are made up of trusted, respected independent members will also help engender confidence.

Depending on the circumstances and scale of the data-sharing venture, as well as an overall Governance Board, there may be an Operations Committee, a Funding Risk Advisory Committee, an Ethics Committee, a Technical Committee and a Data Committee. Alternatively, committees might be set up to represent different groups of stakeholders e.g. data providers, data users and data subjects.

With the contractual model, it would also be possible to constitute an unincorporated governance body, such as a board that comprises representatives of the stakeholders, together with some independent members who have relevant expertise. However, one can foresee potential practical difficulties with governance bodies that are more ad hoc and decentralised, including generating sufficient trust for data providers and users to submit to the jurisdiction of the body via the contractual arrangements.

3. Documentation

The documentation will need to cover the constituent parts that make up the data-sharing venture and also, if the contractual model is adopted, how these will be constituted from among the stakeholders. Participants will need to sign up to the rules of the venture, either as a stand-alone document, or by incorporation into the operational agreements, such as a data-provision agreement or data-use agreement, or the articles of a corporate vehicle. The exact contracting arrangements will be bespoke to the specific arrangement. If the venture is intended to enable additional participants to join, there will also need to be robust
arrangements (e.g. through accession agreements) to avoid re-execution of multilateral arrangements for each new joiner.

The common agreement could prescribe the arrangement in broad terms, the nature of the data that will be collected; the identity or class of the persons or organisations with whom it will be shared; and the uses to which such persons or organisations will be entitled to put that data.
It can address leaver/joiner bases,[footnote]In order to improve the chances of participation, and where technically feasible, the exit arrangements for leavers should focus on the ability of a participant to leave the venture and remove their data. This respects the data sovereignty of the participant and enables them to remain in control of data, particularly important for personal data as participants will be conscious of their obligations under GDPR.[/footnote] due diligence, terms that underpin certain values or principles, for example the five data-access ‘control dimensions’ commonly referred to as the ‘Five Safes’.[footnote]The ‘Five Safes’ comprise: safe projects, safe people, safe data, safe settings and safe outputs. Ritchie, F. (2017). The “Five Safes”: a framework for planning, designing and evaluating data access solutions. [online] Zenodo. Available at: https://zenodo.org/ record/897821 [Accessed 18 Feb. 2021].[/footnote] Or, in the context of personal data, the core principles contained in Article 5 of the GDPR, change approval, the financial model for the operation of the club, dispute resolution, etc.

As mentioned above, the framework documents would need to cover the purpose of the venture and the type(s) of data in issue, along with the identity of persons or entities, or types of those that may be granted access, and the use to which they may put that data.

In addition, the documents will need to cover other important areas, such as:

  • technical architecture
  • interoperability
  • decision-making roles
  • the obligations of each participant and how any monitoring or audit of data use, particularly in respect of personal data will take place
  • information security.

There will inevitably be other areas that the rules should also cover.

Key legal considerations include data protection and privacy law; regulatory obligations or restrictions; commercial confidentiality; intellectual property rights; careful consideration of liability flows (particularly important if personal data is in issue), competition and external contractual obligations. As will be seen from some of the examples (detailed in the section below), such as iSHARE, it is possible to utilise existing standard documents to cover off some of the key issues, rather than developing everything from scratch. For example, existing open-source licences could be used to protect intellectual property rights of the data providers and control data usage, bolstered by data-sharing arrangements specific to the venture.

As regards the nature of the data and its use in specific circumstances, the data providers may want to share data on a segregated and controlled basis. This means there will not be access to overall aggregated data, but there may be layered access or access to a limited number of aggregated datasets to reflect any restrictions on sharing of some data (e.g. certain data only to be shared with certain users or shared for specific insights/activities). In some instances there may be agreement to pool datasets between parties. The following requirements may be set:

  • each contributor would provide raw data/datasets that include but are not limited to personal data, and that data could include normal personal data as well as special category/sensitive personal data
  • no contributor would see all the raw data provided by the other contributors[footnote]As part of the stewardship model, one of the protections should be only the data needed for an activity is accessed by other participants/stakeholders.[/footnote]
  • each contributor would want to be able to analyse, and to derive data and insights from aggregate datasets, without being able to identify individuals or confidential data in the datasets
  • individuals whose data is shared in this way would have the usual direct rights under data protection law in relation to the processing of their personal data.

Mapping data protection requirements onto a data-sharing venture

Where the data-sharing venture will involve processing of personal data, it will of course be necessary for all data providers, users and others processing personal data to comply with the GDPR (see in Annex 1 some of the key GDPR considerations). Depending on the nature of
the legal structure, there will be contractual terms and also potentially a Charter/Code of Conduct or Rulebook setting out the obligations of the data providers and data users including those relating to the GDPR. In some sectors, these may incorporate by referencing internationally recognised standards for data sharing, rather than completely reinventing the wheel.[footnote]An example is the Rules of Participation used by Health Data Research UK (HDR UK). Organisations requesting data access from one of the hubs set up through HDR UK (including the INSIGHT hub) are required to commit to these rules, which reference published standards. See Health Data Research UK (2020). Digital Innovation Hub Programme Prospectus Appendix: Principles For Participation. [online]. Available at: www.hdruk.ac.uk/wp-content/uploads/2019/07/Digital-Innovation-Hub- Programme-Prospectus-Appendix-Principles-for-Participation.pdf [Accessed 18 Feb. 2021].[/footnote]

It will be necessary for each stakeholder who processes data (whether they are a data controller, joint data controller or data processor) to ensure they are compliant with GDPR requirements. This will be determined by the individual circumstances and a particular stakeholder may well be a data controller in some regards and a joint data controller
in others. Similarly, a stakeholder may be a data controller as regards some processing and a data processor in relation to others.

Privacy-enhancing technologies (PETs) are increasingly being advocated as a means to help ensure regulatory compliance and the protection of commercially confidential information more generally. For example, technologies facilitating pseudonymisation, access control and
encryption of data (in transit and at rest) and more sophisticated PETs such as differential privacy and homomorphic encryption. This is an area of development with some already mature market offerings and others still undergoing significant development.

Examples of data-sharing initiatives with elements of data stewardship

1. The Data Sharing Coalition

The Data Sharing Coalition is an international initiative started in January 2020, after the Dutch Ministry of Economic Affairs and Climate Policy invited the market to seek cooperation in pursuit of cross-sectoral data-sharing.[footnote]See The Data Sharing Coalition (n.d.) Home. [online]. Available at: https://datasharingcoalition.eu.[/footnote] It ‘builds on existing data-sharing initiatives to enable data sharing across domains. By enabling multilateral interoperability between existing and future data-sharing initiatives with data sovereignty as a core principle, parties from different sectors and domains can easily share data with each other, unlocking significant economic and societal value.’

It aims to foster collaboration between a wider range of stakeholders, providing a platform for structured exchange of knowledge in the data-sharing coalition community.[footnote]The Data Sharing Coalition published an exploration on standards and agreements for enabling data sharing. See Data Sharing Coalition (2021). Harmonisation Canvas [online]. Available at: https://datasharingcoalition.eu/app/uploads/2021/02/210205- harmonisation-canvas-v05-1.pdf[/footnote] It plans to explore and document generic data-sharing agreements which it will capture in a Trust Framework governed by the Coalition. It will support the development of existing and new data-sharing initiatives, including around technical standards, data semantics, legal agreements, and trustworthy and reusable digital identities.

Principles

The Data Sharing Coalition has six core principles:

  1. Be open and inclusive: any interested party is welcome to participate in the Data Sharing Coalition.
  2. Deliver practical results: the Data Sharing Coalition will deliver functional frameworks and facilities that provide true value for all stakeholders of the data economy and that will help them accelerate in their data sharing context.
  3. Promote data sovereignty: the Data Sharing Coalition aims to enable the entitled party(ies) to control their data by including this as a requirement in the use cases and frameworks.
  4. Leverage existing building blocks: all Data Sharing Coalition frameworks and facilities will incorporate international open standards, technology and other existing facilities where possible.
  5. Utilise collective governance: all frameworks and facilities produced by the Data Sharing Coalition will be governed in a transparent, consensus-driven manner by a collective of all Data Sharing Coalition participants.
  6. Be ethical, societal and compliant: all activities of the Data Sharing Coalition are in line with societal values and compliant with relevant legislation.

Approach

It has two initial use cases:

  • green mortgages for investment in energy-saving measures
  • improving risk management for shipment insurance.

Members

The Data Sharing Coalition currently has about 30 member participants including: iSHARE, IDSA, MAAS Lab, Equinix, NLAI Coalition, Amsterdam University: Connect2Trust, Dexes, ECP, Equinix, FOCWA, Fortierra, GO FAIR, HDN, International Data Spaces Association, iSHARE, KPN, Maas-Lab, MedMij, Nederlandse AI Coalitie, NEN, Netbeheer Nederland, Nexus, NOAB, Ockto, Roseman Labs, SAE ITC, SBR, SURF, Sustainable Rescue, TanQyou, Techniek Nederland, Thuiswinkel.org, Universiteit van Amsterdam, UNSense, Verbond van Verzekeraars and Visma Connect.

2. iSHARE

iSHARE is a Dutch Transport and Logistics Trust Framework for data sharing and was developed as part of the Government-backed Data Sharing Coalition.[footnote]See Support Centre for Data Sharing (2020). iSHARE: Sharing Dutch transport and logistics data. [online] Support Centre for Data Sharing. Available at: https://eudatasharing.eu/examples/ishare-sharing-dutch-transport-and-logistics-data [Accessed 18 Feb. 2021].[/footnote]

It is a decentralised model, where parties maintain control of what data will be shared with whom and on what conditions/for what purpose. iSHARE is not a platform but a framework. INNOPAY co-created the iSHARE framework with about 20 organisations (customs, ports, logistics, etc). It has only the list of participants and the fact that they have agreed to and demonstrated conformance with operational, technical and legal specifications; so it deals with identification, authentication and access. The idea is that an accession agreement removes the need for separate bilaterals.

It doesn’t appear to involve any data stewardship in the sense of a trusted third party being given control of what data is shared, for what purpose and with whom.

iSHARE is trying to facilitate info on or access to various agreement terms to choose from. The website has a 50-page document setting out typical agreement terms for data sharing and then links to 10–15 sets of licences, and a table for each one setting out which of those typical
terms that particular licence covers.[footnote]Support Centre for Data Sharing (2019). Report on collected model contract terms. [online]. Available at: https://eudatasharing.eu/ sites/default/files/2019-10/EN_Report%20on%20Model%20Contract%20Terms.pdf [Accessed 18 Feb. 2021].[/footnote] The aim is to have 50 sets of terms during 2020. Currently, the licence agreements include Creative Commons, Google API Licence, Montreal, ONS, Open Banking, NIMHDA, Apache, CDLA – (copyleft Linux), Open Database Copyleft, Swedish API Open Source, Microsoft Data Use Agreement and Norwegian Open Data. Currently about 20 organisations are participants.

3. Amsterdam Data Exchange

AMDEX was initiated by the Amsterdam Economic Board and was backed by Amsterdam Science Park and Amsterdam Data Science.[footnote]For more information see Amsterdam Smart City (2020). Amsterdam Data Exchange [online]. Available at: https://amsterdamsmartcity.com/updates/project/amsterdam-data-exchange-amdex [Accessed 18 Feb. 2021].[/footnote] The project is supported by the City of Amsterdam.

Vision

‘The Amsterdam Data Exchange (in short: Amdex) aims to provide broad access to data for researchers, companies and private individuals. Inspired by the Open Science Cloud of the European Commission, the project is intended to connect with similar projects across Europe.
And eventually even become part of a global movement to share data more easily.’

Amdex’s CTO, Ger Baron is quoted as follows: ‘Since 2011, the municipality have had an open data policy. Municipal data is from the community and must therefore be available to everyone, unless privacy is at stake. In recent years we have learned to open up data in different
ways… We want to share data, but under the right conditions. This requires a transparent data market which is exactly what the Amsterdam Data Exchange can offer.’

The owner decides which data can be shared with whom and under what conditions. They build a ‘market model in which everyone is able to consult and use data in a transparent, familiar manner.’ [footnote]Ibid.[/footnote]

4. INSIGHT: The Health Data Research Hub for Eye Health

INSIGHT is a collaboration between University Hospitals Birmingham NHS Foundation Trust (lead institution), Moorfields Eye Hospital NHS Foundation Trust, the University of Birmingham, Roche, Google and Action Against AMD.

INSIGHT’s objective is to make anonymised, large-scale data, initially from Moorfields Eye Hospital and University Hospitals Birmingham, available for patient-focused research to develop new insights in disease detection, diagnosis, treatments and personalised healthcare.

Access to the datasets curated by INSIGHT is through the Health Data Research Innovation Gateway. Applications to access the data will be reviewed by INSIGHT’s Chief Data Officer and then passed to the Data Trust Advisory Board (Data TAB). The Data TAB is formed of members of the public, patients and other stakeholders joining in a private capacity.
Applications will be accepted or rejected in a transparent manner and applicants will need to sign strict licensing agreements that prioritise data security and patient benefit.

Currently the governance of INSIGHT is managed through the Advisory Board but at the recent ODI Data Institutions event, it is anticipated that a company Limited by Guarantee may be created.

5. Nallian for Cargo

Nallian is a common infrastructure for data sharing between commercial sectors.[footnote]For more information see Nallan (2020). Home. [online] Available at: www.nallian.com [Accessed 18 Feb. 2021].[/footnote] Nallian for Air Cargo is a set of applications built on top of Nallian’s Open Data Sharing Platform. The platform allows all stakeholders of a cargo community to connect and share relevant data across their processes, resulting in de-duplication and a single version of the truth for the benefit of airport operators, ground handlers, freight forwarders, shippers, etc. Each data source stays in control of who sees which parts of his data for which purpose. Example communities include Heathrow, Brussels and Luxembourg (e.g. Heathrow Cargo Cloud).[footnote]For more information see Heathrow (2020). Cargo. [online] Available at: www.heathrow.com/company/cargo [Accessed 18 Feb. 2021].[/footnote]

6. Pistoia Alliance

The Pistoia Alliance’s mission is to lower barriers to R&D innovation by providing a legal framework to enable straightforward and secure pre-competitive collaboration.[footnote]For more information see Pistoia Alliance (2020). About. [online]. Available at: www.pistoiaalliance.org/membership/about [Accessed 18 Feb. 2021].[/footnote] The Alliance is a global, not-for-profit members’ organisation conceived in 2007 and incorporated in 2009 by representatives of AstraZeneca, GSK, Novartis and Pfizer, who met at a conference in Pistoia, Italy.

The Pistoia Alliance’s projects help to overcome common obstacles to innovation and to transform R&D – whether identifying the root causes of inefficiencies, working with regulators to adopt new standards, or helping researchers implement AI effectively. There are currently more than 100 member companies – ranging from global organisations, to medium enterprises, to start-ups, to individuals – collaborating as equals on projects that generate value for the worldwide life sciences community.

7. Biobanks

Biobanks collect biological samples and associated data for medical-scientific research and diagnostic purposes and organise these in a systematic way for use by others.[footnote]For more information see UK Biobank (2020). Home. [online]. Available at: www.ukbiobank.ac.uk [Accessed 18 Feb. 2021].[/footnote] The UK Biobank is a registered charity that had initial funding of circa £62 million. Its aim is to improve the prevention, diagnosis and treatment of a wide range of serious and life-threatening illnesses such as cancer, heart disease and dementia.

UK Biobank was established by the Wellcome Trust medical charity, Medical Research Council, Department of Health, Scottish Government and the Northwest Regional Development Agency. It has also had funding from relevant charities. UK Biobank is supported by the National Health Service (NHS). Researchers apply to access its resources. The resource
is available to all bona fide researchers for all types of health-related research that is in the public interest. Researchers submit an application explaining what data they would like access to and for what purpose. The website provides summaries of funded research and academic papers.

Researchers have to pay for access to the resource on a cost-recovery basis for their proposed research, with a fixed charge for initiating the application review process and a variable charge depending on how many samples, tests and/or data are required for the research project.

  • UK Biobank remains the owner of the database and samples, but will have no claim over any inventions that are developed by researchers using the resource (unless they are used to restrict health-related research or access to health-care unreasonably).
  • Researchers granted access to the resource are required to publish their findings and return their results to UK Biobank so that they are available for other researchers to use for health-related research that is in the public interest.

The personal information of those joining the UK Biobank is held in strict confidence, so that identifiable information about them will not be available to anyone outside of UK Biobank. Identifying information is retained by UK Biobank to allow it to make contact with participants when required and to link with their health-related records. The level of access that is allowed to staff within UK Biobank is controlled by unique usernames and passwords, and restricted on the basis of their need to carry out particular duties.

8. Higher Education Statistics Agency

The Higher Education Statistics Agency (HESA) is the body responsible for collecting and publishing detailed statistical information about the UK’s higher education sector.[footnote]For more information see HESA (2020). About. [online] Available at: www.hesa.ac.uk/about[/footnote] It acts as a trusted steward of data that is made available and used by public-sector bodies including universities, public-funding bodies and the new Office for Students.

HESA was set up by agreement between funding councils, higher education providers and Government departments. It is a charitable company operating under a statutory framework and it is a recognised data source for ‘statistical information on all aspects of UK higher
education’.[footnote]HESA (2017). HE representatives comment on consultation on designated data body [online] hesa.ac.uk. Available at: www.hesa.ac.uk/news/19-10-2017/consultation-designated-data-body [Accessed 18 Feb. 2021].[/footnote] It was confirmed as a designated data body (DDB) for Higher Education in England in 2018.[footnote]See HESA (2020). Designated Data Body. [online]. Available at: www.hesa.ac.uk/about/what-we-do/designated-data-body [Accessed 18 Feb. 2021].[/footnote]

HESA collects, assures and disseminates higher education data on behalf of specific public bodies e.g. Department for Business, Energy and Industrial Strategy (DBEIS), Department for Education (DfE), Office for Students (OfS), UK Research & Innovation (UKRI) and its counterparts in the rest of the UK. As DDB, it compiles appropriate information about higher education providers and courses and makes this available to OfS, UKRI and the Secretary of State for Education. It consults as to the information it publishes with providers, students
and graduate employers. OfS holds HESA to account, reporting on its performance every three years.

HESA provides a trusted source of information, supporting better decision making, and promoting public trust in higher education. In addition, it is driven by the wider public purpose of advancing higher education in the UK.

It deploys statistical and open-data techniques to transform and present higher education data. It looks to develop low-cost techniques to improve quality and efficiency of data collection, and aims to ensure as much data as possible is open and accessible to all.

HESA may charge cost-based fees, operating on a subscription basis.

9. Safe Havens Scotland NHS Trusts for Patient Data

Safe Havens were developed in line with the Scottish Health Informatics Programme (SHIP), a blueprint that outlined a programme for a Scotland-wide research platform for the collation, management, dissemination and analysis of anonymised Electronic Patient Records(EPRs).[footnote]Scottish Government (2015). Charter for Safe Havens in Scotland: Handling Unconsented Data from National Health Service Patient Records to Support Research and Statistics. [online] www.gov.scot. Available at: www.gov.scot/publications/charter-safe-havensscotland- handling-unconsented-data-national-health-service-patient-records-support-research-statistics/pages/3 [Accessed 18 Feb. 2021].[/footnote] The agreed principles and standards to which the Safe Havens are required to operate are set out in the Safe Haven Charter. They aim to get funding research from grants.

The Safe Havens provide a virtual environment for researchers to securely analyse data without the data leaving the environment. Their data repositories provide secure handling and linking of data from multiple sources for research projects. They also provide research support, bringing together teams around health data science. The research coordinators provide support to researchers navigating the data requirements, permissions landscape and provide a mechanism to share the lessons from one project to the next. Users are researchers who are vetted and approved. Data is never released, and personal data cannot be sold. Together, the National Safe Haven within Scottish Informatics Linkage Collaboration (SILC)[footnote]For more information see Data Linkage Scotland (2020). Home. [online ] Available at: www.datalinkagescotland.co.uk [Accessed 18 Feb. 2021].[/footnote] and the four NHS Research Scotland (NRS) Safe Havens have formed a federated network of Safe Havens in order to work collaboratively to support health informatics research across Scotland.

All the Safe Havens have individual responsibility to operate at all times in full compliance with all relevant codes of practice, legislation, statutory orders and in accordance with current good professional practice. Each Safe Haven may also work independently to provide advice and assistance to researchers as well as secure environments, to enable health informatics research on the pseudonymised research datasets they create. The charter and the network facilitate collaboration between the Safe Havens by ensuring that they all work to the same principles and standards.

Problems and opportunities addressed by corporate and contractual mechanisms

Many organisations have started to explore data sharing via the use of contracts, and this model is already used in practice. The complexity of the governance model will vary depending on whether the relationships involved are one-to-one or multi-party data-sharing arrangements and whether there are singular use cases or multiple uses for the same type of purpose. Where the tools of use such as machine learning or AI become part of the agreement, further consideration is needed for defining the architecture of the legal mechanisms involved.

Multi-party and multi-use scenarios using corporate and contractual mechanisms will need to ensure an independent governance body is able to function within the structure. The role of the specific parties involved in the data ecosystem, their responsibilities, qualifications and potential competing interests will need to be considered and balanced. A difficult question emerges where the stewardship entity is absent. In this scenario, who would be the data steward that a contract could be entered into with? For example, an oversight committee composed of representatives of data users and providers could be established, but this would not be a legal entity with an ability to contract.

Other requirements that will need thoughtful consideration, as they have been mentioned throughout this chapter, are connected to the privacy and security of the data, the retention and deletion policy, and restrictions on use and onward transfers and rules of publication of
results or research.

To conclude, a series of steps need to be walked through with stakeholders to reach an agreed decision about the model to be employed. Concrete use cases are more likely to generate tangible and efficient mechanisms for the sharing of data, than vague overarching
statements of general purpose. The key element here is stakeholder engagement and the more engagement that can be encouraged at the design stage – in terms of purpose, structure and governance – the more likely it is that a data-sharing venture institution will succeed.

Case study: The Social Data Foundation

Brief overview

The Social Data Foundation[footnote] Boniface, M., Carmichael, L., Hall, W., Pickering, B., Stalla-Bourdillon, S. and Taylor, S. (2020). A Blueprint for a Social Data Foundation: Accelerating Trustworthy and Collaborative Data Sharing for Health and Social Care Transformation. [online] Available at: https://southampton.ac.uk/~assets/doc/wsi/WSI%20white%20paper%204%20social%20data%20foundations.pdf [Accessed 18 Feb. 2021][/footnote] aims to improve health and social care by accelerated access to linked data from citizens, local authorities and healthcare providers through the creation of an innovative trustworthy and scalable data-driven health and social-care ecosystem overseen by independent data stewards (i.e. the Independent Guardian).[footnote]The Independent Guardian is defined as follows: ‘A team of experts in data governance, who are independent from the Social Data Foundation Board and oversee the administration of the Social Data Foundation to ensure it achieves its purposes in accordance with its rulebook i.e. that all data related activities realise the highest standards of excellence for data governance. In particular, the Independent Guardian shall (i) help set up a risk-based framework for data sharing, (ii) assess the use cases in accordance with this risk-based framework and (iii) audit and monitor day-to-day all data-related activities, including data access, citizen participation and engagement.’ See Boniface, M. et al. (2020) A Blueprint for a Social Data Foundation.[/footnote] This new data institution takes a socio-technical approach to governing collaborative and trustworthy data linkage – and endeavours to support multi-party data sharing while respecting societal values endorsed by the community. Members of the Social Data Foundation will include the Southampton City Council, the University Hospital Southampton NHS Foundation Trust and the University of Southampton. Flexible membership is envisaged in order to allow other organisations to join and the institution to grow.

Governance

A key strength of the Social Data Foundation lies in its socio-technical approach to data governance, which necessitates a high-level of interdisciplinarity and strong stakeholder engagement from the outset (i.e. from the initial stages of design and development). This initiative therefore brings together a multi-disciplinary team of clinical and social-care practitioners with data governance, health data science, and security experts from ethics, law, technology and innovation, web science and digital health.

The Social Data Foundation builds on the data foundations governance framework[footnote] Stalla-Bourdillon, S., Wintour, A. and Carmichael, L. (2019). Building Trust Through Data Foundations: A Call for a Data Governance Model to Support Trustworthy Data Sharing. [online] Available at: https://cdn.southampton.ac.uk/assets/imported/transforms/ content-block/UsefulDownloads_Download/69C60B6AAC8C4404BB179EAFB71942C0/White%20Paper%202.pdf [Accessed 18 Feb. 2021]. The Social Data Foundation is an example of a functional data foundation – for more information see: StallaBourdillon, S., Carmichael, L., & Wintour, A. (Forthcoming). Fostering trustworthy data sharing: Establishing data foundations in practice. Data & Policy; Stalla-Bourdillon, S., Carmichael, L., & Wintour, A. (2020, September). Fostering Trustworthy Data Sharing: Establishing Data Foundations in Practice. Data for Policy Conference 2020, Available at: http://doi.org/10.5281/zenodo.3967690. [Accessed 18 Feb. 2021].[/footnote] developed by the Web Science Institute at the University of Southampton (UK) and Lapin Ltd (Jersey), which includes robust governance mechanisms together with strong citizen representation. Foundations laws are a source of inspiration for the data foundations governance framework. Two particular jurisdictions of interest are the Bailiwicks of Jersey and Guernsey (the Channel Islands) where the role of the guardian is a unique requirement, and is peculiar to these types of structures, which in a data governance model gives rise to independent data stewardship.[footnote]Note that all foundations incorporated under Jersey foundations law must have a guardian.[/footnote]

Data rights

The Social Data Foundation will not only empower citizens to co-create and participate in health and social care systems transformation, but to exercise their data-related rights. As a trusted third party intermediary (TTPI) that facilitates shared data-analysis projects, the Social Data Foundation will provide a centralised hub for citizens and their datarelated requests in relation to a wide range of data (re)usage activities. Agreements will govern relationships between all stakeholders.

The Social Data Foundation will promote adequate data protection and security – and will carry out a risk assessment for each shared data analysis project before any data is shared. Data providers will only share de-identified data as part of the Social Data Foundation. Each of the parties will undertake not to seek to reverse or circumvent any such de-identification of data. Where the Social Data Foundation provides a dynamic linking service[footnote]Dynamic linking service is understood as where two or more sources of health and social care data are brought together on demand according to the specific parameters of an authorised data user’s query where the risk of re-identification is both evaluated before and after data linkage, and mitigated through assurance processes facilitated by the Data Foundation.[/footnote] for authorised data users and data at rest remains within data providers’ premises, citizens are better empowered to exercise their rights over data linkage activities and oppose, restrict, or end their participation as part of the processing activities.

Case study: Emergent Alliance

Brief overview

The Emergent Alliance initiative was launched in April 2020 with the aim to aid societal recovery post COVID-19.[footnote]For more information see Emergent Alliance (n.d). Home. [online]. Available at: https://emergentalliance.org [Accessed 18 Feb. 2021][/footnote] Its objectives are to use data in order to accelerate global economic recovery in response to the outbreak, to make available datasets in the public domain and to develop secure data-sharing systems and infrastructure.[footnote]See Emergent Alliance (2020). Articles of Incorporation, p. 16. Available at: https://find-and-update.company-information.service.gov. uk/company/12562913/filing-history [Accessed 18 Feb. 2021].[/footnote]

The Emergent Alliance operates as a not-for-profit voluntary community made out of corporations, individuals, NGOs and government bodies that ‘contribute knowledge, expertise, data, and resources to inform decision making on regional and global economic challenges to aid societal recovery.’[footnote]See Emergent Alliance (n.d.). Frequently Asked Questions. Available at: https://emergentalliance.org/?page_id=440 [Accessed 18 Feb. 2021][/footnote]

The Emergent Alliance operates as a not-for-profit voluntary community made out of corporations, individuals, NGOs and government bodies that ‘contribute knowledge, expertise, data, and resources to inform decision making on regional and global economic challenges to aid societal recovery.’[footnote]See Emergent Alliance (n.d.). Frequently Asked Questions. Available at: https://emergentalliance.org/?page_id=440 [Accessed 18 Feb. 2021][/footnote] There can be different roles in this community, such as data contributors (either members of the alliance or participants in the community) making available agreed datasets to the public domain. There can be data scientists interpreting or modelling the data with resources coming from members or crowd-sourced from partners. There could also be individuals or organisations bringing or responding to domain-based problems to the alliance, contributing with datasets, data science or technical resources.

Governance

This case study is based on information from September 2020, and the Emergent Alliance’s legal structure has progressed significantly since then. Initially, the governance structure was operating on the basis of Articles of Association, and using ‘letters of intent’ from members to govern the alliance.[footnote]See Emergent Alliance (n.d), Statement of Intent. Available at: https://emergentalliance.org/?page_id=452 [Accessed 18 Feb. 2021].[/footnote] Two directors were appointed, and the structure was designed to allow different committees to be formed in order to carry out the set objectives.

Mock case study: Greenfields High School

Greenfields High School is increasingly using digital technologies to deliver
teaching materials and improve educational processes. It uses different
service providers, which are used by other schools as well. On the one hand,
Greenfields High School is interested to compare its performance with other
schools, and gain access to data and insights from its service providers. On
the other hand, Greenfields High School is interested to learn from the other
schools’ experience, and share data to understand the effectiveness of different
learning tools and methods.

 

Greenfields High School is not the only one in this situation. Other schools
using online tools are interested in the same goal: to get better insights
from the different service providers, to compare performances and to learn
from other schools about what tools are most effective for delivering better
educational outcomes. They all need data from the different service providers,
and from each other, to reach these goals, which ultimately serve the wider
public benefit of improving education. Greenfields High School proposes
to the other school leadership boards to convene and explore the idea of
working together. They also invite their service providers and start discussing
a data-sharing agreement that enables a trustworthy environment where each
party feels confident to share data with each other.

 

An independent data steward is appointed in order to ensure the proper
management of data and oversee who gets to access what type of data and
under which conditions. The data-governance framework also takes into account
the students, parents, teachers’ rights and interests. The agreement establishes
rules for:
• schools to safely and reliably exchange relevant data among themselves,
to compare their performance against that of other schools, by sharing some
types of data
• schools to share data, to understand the effectiveness of different learning
tools and methods for different educational cycles by comparing student
progress (schools keep records of educational data for all pupils for a number
of years to track progress)
• a transparent agreement about what data is collected, stored, processed
and how it is used, including rules for safeguarding students’ and parents’
rights and interests.

How would contractual mechanisms work?

Data-sharing agreements are set up with a very clear purpose in mind, and the
rules and documents could be made public to increase transparency.

 

An independent data steward is appointed and oversees data management. The
governance framework contains provisions around who will be permitted access
to data, for what purpose and under what circumstances. The governance
arrangements will include mechanisms for enforcing compliance and ensuring
that data users have adequate remedies if compliance fails.

 

The stakeholders could establish a company limited by guarantee (CLG) to fulfil
these roles with its members being participating schools – both state and private,
academies, further education bodies and data providers.

Final remarks and next steps

This report makes the first attempts to answer the question of how
legal mechanisms can help enable trustworthy data use and promote
responsible data stewardship. Trustworthy and responsible data
use are seen as key to protecting the data rights of individuals and
communities, increasing the confidence of organisational data sharing
and unlocking the benefits of data in a way that’s fair, equitable and
focused on societal benefit.

The legal mechanisms suggested in this report may offer support
for encouraging fair and trusted data sharing where individuals and
organisations retain control over the use of their data for their own
benefit, and often for wider societal good. At the same time, it is
important to highlight that responsible data stewardship should not
be equated in all circumstances with data sharing, and that responsible
data use may sometimes necessitate a decision not to share data.

Responsible data use also means robust data-governance architectures
that allow for a participatory element in taking decisions about data.
It remains to be seen whether the demand for transformation of data
practices will be driven bottom up, top down or from a mixture of both.
The mechanisms presented here may form part of the triggers that
increase the confidence of individuals to hand over the management
of their data, as well as of organisations to break data silos and
encourage beneficial uses.

As experience in the digital-platform economy demonstrates, the
commodification of data use may ultimately undermine individual or
societal interests. For this reason, it needs to be carefully considered
whether introducing financial gains for stimulating people to join
a data trust or a data cooperative would risk creating an even greater
dependency on how efficiently data is exploited, as the economic
performance of the company will translate directly into the type
of financial rewards those individuals would receive.[footnote]For a more detailed description of this failure model and others see Porcaro, K. (2020). Failure Modes for Data Stewardship. [online] Mozilla Insights. Available at: https://drive.google.com/file/d/1twxDGIBYz0TyM3yHDgA8qyf16Ltkk4V7/view [Accessed 18 Feb. 2021].[/footnote]

Extractive data practices have proven to be successful in maximising the
economic performance of some of the big technological companies on
the market, despite these problematic business models being criticised
today. Therefore, open questions remain around the incentives models
for establishing the structures presented in this report, and to what
extent such incentives can be considered empowering and truly driving
the transformation of data practices.

Importantly, in considering these alternative mechanisms, the benefits
coming out of them as institutions – rather than a relationship between
parties – is vital. As digital technologies advance and patterns of data
use shift, the rules and principles on which civic institutions are founded
can act as a stabilising force for collective good. Further exploration
is needed as to what democratic accountability would look like for
more effective control, compared to the type of control contractual
interactions offer.

Remaining challenges

A number of challenges and difficult questions have been pointed
out throughout the report, and more issues will arise from the digital
challenges that we face today. For example, while the different
mechanisms presented here imply structures that offer considerable
flexibility, further questions remain regarding how they are able to
respond in the context of the new Internet of Things ecosystem, where
data sharing is part of everyday life, in real time.

At the same time, it can be imagined that the same type of mechanism
can be seen as the solution to distinct problems. For example, there
might be groups interested in increasing the amount of data gathered,
others interests may be around increasing the amount of data shared,
or decreasing the amount of data shared.[footnote]See O’hara, K. (2020). Data Trusts[/footnote] If the same mechanism is used to respond to such different objectives, what are the potential tensions and how can they be addressed?

Moreover, there is the question of dealing with potential conflicts arising
between trusts, cooperatives, and corporate and contractual models.
These models will control overlapping data, therefore this could create potential tensions between structures of the same type (for example between different data trusts themselves), as well as between different structures (for example a data trust in a rivalrous relationship with
opposite interest from a data cooperative).

These models should not be seen as container-based models, and
important questions arise from interactions between the different types
of structures presented. For example, what types of interventions will
be needed in order to address potential conflicts between the different
structures? How will data rights be enforced when potentially combining
datasets across such structures?

This leads to questions around identifying ways in which more
granular mechanisms for data protection can be built in and how
to strengthen existing regulation. The structures presented here are
not meant as enclaves of protection, therefore a strong underlying
data protection layer is essential for preventing harm and achieving
responsible outcomes.
There is also an important conversation about how legal mechanisms
and other types of mechanisms such as technical ones (for example
data passports and others briefly described in Annex 1) might interact
or reinforce data stewardship.

Other difficult questions that need further research and consideration
would be:
• How will different privacy standards apply in certain situations,
for instance if the data is stored by a merchant located outside
of the UK (or the EU)?
• How can the challenges related to ensuring the independence
of different governance boards be addressed?
• What are the limitations for each legal mechanism presented? For
example, in a contractual model where a stewardship entity is absent,
who would be the data steward that a contract could be entered into
with? (An oversight committee composed of representatives of data
users and providers could be established, but this would not be a legal
entity with an ability to contract.)
•What are the implications for the transferability or mandatability
of GDPR rights in light of the Data Governance Act?
• Would a certification scheme similar to BCorps provide value for
certifying data stewardship structures?[footnote]BCorps are companies balancing profit gains with societal outcomes which receive a certification based on social and environmental performance, public transparency, and accountability. For more information see B Corporation (n.d.) About B Corps. [online]. Available at: https://bcorporation.net/about-b-corps [Accessed 18 Feb. 2021].[/footnote]
• Could these models be used for handling other types of assets?

On a broader scale, in the context of data sovereignty or data
nationalism, where increasing numbers of countries insist that the
personal data of their nationals be stored on servers in that jurisdiction,
the demands of data governance are likely to increase going forward. If
data contexts involve data from nationals of more than one jurisdiction,
managing data across jurisdictions would involve complex administration
requiring sufficient income to support it.

Notwithstanding the aim to facilitate trusted data sharing that results in
wider societal, economic and environmental benefit, there remains the
broader societal question of what do we want societies to do with data,
and towards which positive ambitions are we aspiring in practice?

Next steps

As observed from the list of case studies, some of the legal mechanisms
are in existence and available for immediate operation. Important lessons
can be drawn from these examples, but there remains an overarching
need for more testing, development, investment and knowledge building.
Other mechanisms such as data trusts represent a novel and unexplored
model in practice and require piloting and better understanding.

Next steps would involve practical implementation of each approach,
research and trialling and developing guidance for practitioners.
Challenges created by the global state of public health emergency from
the COVID-19 virus, as well as developments on the geopolitical side
(such as the UK leaving the European Union and new trade agreements
being discussed) and on the technological side (for example with new
data sources and new ways of data processing), trigger the need for
robust data-sharing structures where data is stewarded responsibly.

This creates an opportunity for the UK to take the lead in shaping the
emerging data-sharing ecosystem by investing in alternative approaches
to data governance. The mechanisms presented in this report offer
a starting ground for consolidating responsible and trustworthy
data management and a way towards establishing best practices
and innovative approaches that can be used as reference points
more globally.

 

 

Annexes

Annex 1

The legal mechanisms presented in this report support organisational solutions to collective action problems with data, and can be complemented by norms and rules for data stewardship and technology.

Examples of these complementarities include regulatory mechanisms, like the General Data Protection Regulation and the European Commission’s proposed Data Governance Act (which envisions data-sharing intermediaries and mechanisms for ‘data for the common good’ or data altruism).

By way of illustration, some of the key GDPR considerations that will translate into all the legal mechanisms described in this report for data providers will include:
1. ensuring that the data sharing is lawful and fair, which in addition to not being in breach of other laws, will include establishing a lawful basis under GDPR, such as:
a. the ‘legitimate interests’ basis, which requires the data provider to satisfy itself, via a three-part test and documented Legitimate
Interests Assessment, that the data-sharing is necessary to achieve legitimate interests of the data provider or a third party and that these interests are not overridden by the rights and interests of the data subjects; or
b. that the data provider has the consent of the data subjects to
share the data, which may be impractical or difficult to achieve, particularly for legacy data; to the extent that the data is ‘special category data’ (such as health data), whether one of the limited conditions for sharing such data is satisfied e.g. necessary for scientific research;
2. whether the principle of transparency has been satisfied in terms of informing data subjects of the specific disclosure of their data to, and use of their data by, the data-sharing venture;
3. whether processing of the data by the venture is incompatible with the original purposes for which the data provider collected and processed the data and thereby in breach of GDPR’s ‘purpose limitation’ principle;
4. ensuring that the shared data is limited to what is necessary
for the purposes for which the venture will process it (the ‘data minimisation’ principle);
5. ensuring that the data is accurate and where necessary kept up to date (‘accuracy’);
6. ensuring that the data will not be retained in a form that permits identification of the data subjects for any longer than necessary;
7. conducting due diligence on the data security measures established to protect data contributed to the venture;
8. ensuring that there is a mechanism in place enabling data subjects to
exercise their rights of data access, rectification, erasure, portability and right to object, including the right not to be subject to automated decision-making (‘rights’);
9. identifying any cross-border transfers of the data, or remote access to the data from outside the UK, and ensuring that such transfers or access are conducted in compliance with one of the mechanisms under GDPR; and
10. ensuring that all accountability requirements under GDPR are satisfied where appropriate, including Data Protection by Design and Default, Data Protection Impact Assessments, Appropriate Policy Document, Record of Processing Activities and mandatory
contractual requirements.[footnote]The Information Commissioner’s Office (ICO) published a draft Data Sharing Code of Practice that covers many of the above requirements, including expectations in terms of data sharing agreements. See Information Commissioner’s Office (2020). ICO publishes new Data Sharing Code of Practice (online). Available at: https://ico.org.uk/about-the-ico/news-and-events/news-andblogs/2020/12/ico-publishes-new-data-sharing-code-of-practice[/footnote]

Other complementaries could be technical mechanisms, such as Decidim, a digital platform for citizen participation[footnote]For more information see https://decidim.org[/footnote] – mechanisms that also being explored as part of the Open Data Institute programme[footnote]See Thereaux, O. and Hill, T. (2020). Understanding the common technical infrastructure of shared and open data. [online] theodi.org. Available at: https://theodi.org/article/understanding-the-common-technical-infrastructure-of-shared-and-open-data [Accessed 18 Feb. 2021][/footnote] – or the Alan Turing Institute’s framework on Data safe havens in the cloud,[footnote]Alan Turing Institute (n.d.). Data safe havens in the cloud. [online] The Alan Turing Institute. Available at: www.turing.ac.uk/research/
research-projects/data-safe-havens-cloud [Accessed 18 Feb. 2021].[/footnote] and the UK Anonymisation Network (UKAN) methodology for Data Situation Audits, part of the Anonymisation Decision-Making Framework.[footnote]See UK Anonymisation Network (UKAN), Anonymisation Decision-Making Framework.[/footnote] Together, these are the building blocks of a trustworthy institutional regime for data governance that could unlock the value of data.

There are also governance mechanisms that are starting to show what might work. For example, the participatory data governance mechanisms deployed in Genomics England[footnote]For more information see: Genomics England (n.d.) Home. [online]. Available at: www.genomicsengland.co.uk[/footnote] or The Good Data[footnote] For more information see: TheGoodData (2020). Home. Available at: www.thegooddata.org[/footnote] mean that members can participate in the decision-making process and realise the potential of good data stewardship. Furthermore, work highlighted by researchers such as Salomé Viljoen and research institutes such as the Bennett Institute for Public Policy show there are also institutional mechanisms which can be used to improve the stewardship of data.[footnote] Viljoen, S. (2020). Democratic Data: A Relational Theory For Data Governance. [online] Available at: https://doi.org/10.2139/ ssrn.3727562 [Accessed 18 Feb. 2021]; Coyle, D. et al. (2020) Valuing data.[/footnote] The rules in place, the choice of collaboration and how this translates in contractual terms constitute the ‘institutional framework’ within which organisational forms. This report speaks to the possibilities of how these organisational structures and how association take place.

Other complementaries could be codes of practice or ethical codes together with social arrangements that create pressure for abiding by the rules (e.g. being thrown out of the group and denied access to the data). For example, aside from contractual terms, different legal structures might also have a rulebook or code of conduct that sets out the obligations of the data providers and data users, including those relating to GDPR. This could form a formal code of conduct under GDPR. The UK Information Commissioner’s Office (ICO) is keen to incentivise such codes. If such a code was created in compliance with the GDPR and approved by the ICO, there is the potential to create a standard form Rulebook that could be used by other similar data models. There are however certain requirements that would need to be complied with – e.g. the Code must have a clear purpose and scope. It would have to be prepared and submitted by a body representative of the categories of the data controllers and data processors involved. The Code would need to meet the particular needs of the sector or processing activities and address a clearly identified problem. It would need to facilitate the application of GDPR and be tailored to the sector – in other words add value through clear specific solutions and go beyond mere compliance with the law. Any amendments would need to be approved by the ICO. It is also important to note the ICO’s efforts in establishing regulatory sandboxes to enable companies to test new data innovations and technologies – including data-sharing projects – in a safe and controlled environment, while receiving privacy and regulatory guidance. Such regulatory sandboxes provide an interesting tool to promote data sharing for the benefit of individuals and society, while minimizing risks to people’s privacy, security and human rights

Annex 2: EU data economy regulation

Background information

Between 1960 and 1980 public concerns around automation increased around the world. In Europe, member states were facing challenges around computerisation, predominantly in public administration, and member states started adopting different data-protection rules. The first efforts to harmonise data-protection rules began and led to the adoption of the Directive 95/46/EC (Data Protection Directive) on personal data protection, which entered into force in 1995.[footnote]Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Available at https://eur-lex.europa.eu/legal-content/EN/ TXT/?uri=celex%3A31995L0046[/footnote]

The two main objectives of the Data Protection Directive were to protect fundamental rights and freedoms of individuals, and to focus on the free movement of personal information as an important component of the internal market. Therefore, the adoption of European data protection legislation is rooted in the internal market and integration efforts.

With the consolidation of individual rights in the EU in the Charter of Fundamental Rights, which entered into force in 2009, the right to personal data protection was recognised as a distinct right to the right to privacy. The right to data protection is enshrined in Article 8 of the Charter of Fundamental Rights of the European Union (the Charter) and in Article 16 of the Treaty on the Functioning of the European Union (TFUE). Thus, the EU’s competence to enact the Data Protection Directive was an internal market one.

In 2015, building on early harmonisation and integration efforts, the European Commission adopted the Digital Single Market (DSM) Strategy, which set the goal to develop a European data economy.[footnote] European Commission (2015). A Digital Single Market Strategy for Europe. [online]. Available at: https://eur-lex.europa.eu/legalcontent/EN/TXT/?uri=COM%3A2015%3A192%3AFIN[/footnote] This means creating a common market across member states that eliminates impediments to transnational online activity in order to foster competition, investments and innovation:

A Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.’

The Digital Agenda talks about better access to online goods and services, high-speed, secure and trustworthy infrastructures and investment in cloud computing and big data.[footnote]Ibid.[/footnote] For these purposes a number of regulatory interventions were proposed, such as consumer protection laws, the reform of the telecommunications framework, a review of the privacy and data protection in electronic communications law, and new rules for ensuring the free flow of data.

In 2018, the General Data Protection Regulation entered into force after a two-year transition period.[footnote] Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679[/footnote] The regulation updates the data-protection measures while maintaining the same two goals as the 1995 Data Protection Directive: strengthen individual rights and enable the free flow of data in the EU internal market.

Another relevant regulation adopted in 2018 was the Regulation on the Free flow of non-personal data. It aims to ensure data processing increases productivity to create new opportunities and supports the development of the data economy in the Union.[footnote] Recital 2 of the Regulation 2018/1807 on a framework for the free flow of non-personal data in the European Union. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32018R1807[/footnote] It aims to achieve these goals by prohibiting data localisation requirements in member states (except for national security grounds) and counters vendor lock-in practices in the private sector. It also includes rules supporting data portability and interoperability as a way to ensure data mobility within the EU, increase competition and foster innovation. The Regulation intends to deal only with anonymised and aggregate data sets such as for big data analytics, farming related data, industrial production data – e.g. data on maintenance for industrial machines.

On 19 February 2020, the European Commission published the EU Data Strategy,[footnote] See European Commission (2020). A European strategy for data.[/footnote] along with a whitepaper on artificial intelligence[footnote] European Commission (2020c). On Artificial Intelligence – A European approach to excellence and trust. [online] Available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf [Accessed 18 Feb. 2021][/footnote] and a communication on shaping Europe’s digital future.[footnote] European Commission (2020e). Shaping Europe’s Digital Future. [online]. Available at: https://ec.europa.eu/info/sites/info/files/ communication-shaping-europes-digital-future-feb2020_en_4.pdf [Accessed 18 Feb. 2021].[/footnote] The European Commission supports a ‘human centric approach’ to technological development and the creation of ‘EU-wide common, interoperable data spaces […] overcoming legal and technical barriers to data sharing across organisations.’[footnote] European Commission (2020). A European strategy for data.[/footnote]

Annex 3: RadicalxChange’s Data Coalitions

This is a conceptual model that incorporates elements of all of the three legal mechanisms presented in this report

The RadicalxChange Foundation is a non-profit ‘envisioning institutions that preserve democratic values in a rapidly-changing technological landscape’,[footnote]Posner, E. A. and Weyl, E. G. (2018) Radical markets. Uprooting Capitalism and Democracy for a Just Society.
Princeton University Press. Available at: https://doi.org/10.2307/j.ctvc77c4f[/footnote] premised on the idea that data is essentially associated with groups, not individuals. If value comes from network effects, they ask, who owns the network? Social graphs of individuals necessarily contain information about a network of others; most records such as emails and calendar entries also refer to others; any data about one individual may be used to create a predictive profile of others. Through this account, in correcting imbalances and asymmetries, privacy is a red herring.[footnote]See RadicalxChange Foundation’s Data Freedom Act. Available at: www.radicalxchange.org/kiosk/papers/data-freedom-act.pdf [Accessed 18 Feb. 2021].[/footnote]

To that end, RadicalxChange proposes data coalitions, which are fiduciaries for their members, but would require legislation, new regulation and an oversight board (in the US context). The problem they are meant to solve is that data subjects have less bargaining power with data consumers because the data they supply overlaps in content with that of other individuals. A data coalition would in effect bargain for all its members, aggregating and thereby increasing their influence. In this respect, they are intended to play a similar role to bottom-up data trusts.[footnote]Delacroix, S. and Lawrence, N. D. (2019). ‘Bottom-up data Trusts’[/footnote]

Governance: RadicalxChange envisages a Data Relations Board created by legislation with quasi-judicial powers to administer the area. A data coalition would legally interpose between individuals and data consumers to negotiate terms of use, privacy policies, etc. Governance
would be democratic through the membership. Decisions would have
to be binding on all members.

Data rights: To become a member, individuals would assign exclusive rights to use (some of) their data to the coalition (e.g. assigning exclusive rights to all their browsing data). The coalition would then negotiate with data consumers for the use of the data. The coalition’s rights to data would be defined contractually, and the board would ensure that the relevant data could not be collected by another entity, except through the coalition. Rights to the use of data could never be transferred permanently to a data consumer. Members could leave, and take their data with them, perhaps to an alternative coalition.

The outcome of a successful initiative would be not unlike the ambitions of the UK Government’s Smart Data Initiative.[footnote]UK Department for Business, Energy and Industrial Strategy (2019). Smart Data: Putting consumers in control of their data and
enabling innovation. [online] Gov.uk Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/
attachment_data/file/808272/Smart-Data-Consultation.pdf [Accessed 18 Feb. 2021].[/footnote]

Sustainability of the initiative: Given the legal framework the idea requires, it would be sustainable if there was enough business to support a coalition. The proposed business model is that the coalition makes money from the data, and passes a proportion of the profits on to its
members. It is, however, on the drawing board and presumes an objective to share profits with members proportionally. The legal framework itself is unlikely to emerge in the near term.


This report was authored by Valentina Pavel.

Preferred citation: Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/

Image credit: Jirsak

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

This Review is supplemented by a policy report that also draws on the Ada Lovelace Institute’s public engagement research on attitudes towards biometric technologies,  and desk research to provide background on current developments in the realm of biometric technologies and their governance. It puts forward a set of ambitious policy recommendations, that are primarily for policymakers and will also be of interest to civil-society organisations and academics working in this contested area.

The Ryder Review: Independent legal review of the governance of biometric data in England and Wales

Foreword

The world is at the beginning of an ambitious new revolution in the collection, use and processing of biometric data both by public authorities and the private sector. In almost every aspect of our lives – from online identification, to health status and law enforcement – our biometric data is being collected and processed in a way that previously would have been considered unimaginable.

In order to protect our fundamental rights, particularly our data and privacy rights, this revolution in biometric data use will need to be accompanied by a similarly ambitious new legal and regulatory regime. That regime will need to be put into effect by firm, assiduous and proactive lawmakers and regulators. This is vital to ensure that we do not allow the use of biometric data across society to evolve in a flawed way, with inadequate laws and insufficient regulation.

More than 20 years ago, English law took a wrong turn in relation to the regulation of biometric data. That misstep took over a decade to rectify, and the law surrounding biometric data has struggled to stay current and effective ever since. The aim of this Review is to ensure that, at this important time, we do not take a similar wrong turn.

With hindsight, we can see how easily legal understanding of the significance of processing biometric data went awry. It is worth recalling how this happened, because of the potential parallels with the position we are in today:

In 1998 the biometric data of a man accused of burglary – his DNA sample – was retained by the police, inadvertently, and in breach of the law. After his acquittal for burglary the sample should have been destroyed. But, because it had been unlawfully retained, it was later used to identify the same man in a much more serious case. He was arrested and subsequently convicted of a horrific rape and assault that might otherwise never have been detected.

In 2001, as a direct result of that case, the law was changed not only to allow biometric data – DNA and fingerprints – to be collected in a wide range of circumstances but also to allow it to be retained almost indefinitely. The understandable desire to provide an effective tool to those seeking to investigate crime pushed aside concerns over the consequences of collecting biometric data on a vast scale.

Within a few years the UK had created the world’s largest DNA database, which included the biometric data of people who had never been charged or convicted of offences, including children. The data retained was disproportionately weighted towards those who had contact with the police, whether or not they were at fault, potentially embedding and exacerbating systemic flaws in the policing of particular communities. Young Black men, in particular, were disproportionately represented on that database.

When those raising concerns about this legal change brought legal challenges, the UK Courts, including the House of Lords, failed properly to appreciate the level of interference with rights that was caused by the accumulation of a large database. Police and courts were woefully slow to recognise how much the collection of biometric data impacted on the rights of those whose data had been retained.

It was not until a legal challenge to the DNA database in the foundational case in the European Court of Human Rights of S and Marper v United Kingdom [2008][footnote]ECHR 1581; (2009) 48 EHRR 50[/footnote] that, very slowly, a legislative change occurred in UK law, culminating in the Protections of Freedoms Act 2012. 

That legislation not only limited the scope and retention of biometric data but also created both a Biometrics Commissioner and a Surveillance Camera Commissioner in England and Wales. In 2021 those key roles were merged. In 2022 they may be changed even further by being placed within the remit of the Information Commissioner’s Office, despite objection to that proposal from the current Commissioner.

Even at the time of the 2012 legislation, the law was already lagging behind technical developments. The use and range of different types of biometric data had increased dramatically from the use of fingerprints, photos and DNA contemplated by the earlier litigation. That pace of change has continued exponentially.

In the same period there was a transformation of the global economy around the use of data. Most of the world’s largest companies, used by billions of people every day, are collectors and aggregators of vast amounts of personal data. Many commentators argue that weak laws and regulations on the use of personal data at the turn of the century caused our economies to become overly dependent on dysfunctional and detrimental uses of data by those major companies.[footnote]Most notably: Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile Books.[/footnote] We should learn from those errors, and the power imbalances they perpetuate, in our regulation of biometric data in the private sector.

The increasing use of live facial recognition (LFR), which we discuss in this Review, is perhaps the clearest example of why a better legal and regulatory framework for biometric data is needed urgently. But LFR is merely the technology that has the most focus currently. The concerns it raises apply in numerous other areas. As we have set out in the Review, a new regulatory framework must be applicable to a range of biometric technologies, rather than simply react in a piecemeal way to each new development. Similarly, we strongly recommend urgent research on regulating biometric data in the context of use by private companies. We found such research to be significantly lacking, due to the particular focus thus far on biometric data use by public authorities, particularly LFR by law enforcement.

While the global COVID-19 pandemic delayed work on the Review, it also forced us to consider the use of biometric data in a context we might otherwise have overlooked. As the pandemic moves into its third year, world governments are rushing to use biometric data both for identification and categorisation, perhaps on a mandatory basis. This has profound implications and merits specific consideration outside the scope of the original remit of the Review. We hope our recommendations can assist in that work.[footnote]See: Ada Lovelace Institute. (2021). Checkpoints for vaccine passports. Available at: https://www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports/[/footnote]

It is important to acknowledge that in the last 20 years there have been huge legislative changes around the use and processing of personal data, including the EU General Data Protection Regulation (GDPR), mirrored by the UK General Data Protection Regulation. There are even more dramatic legal changes in the pipeline, such as the forthcoming EU Artificial Intelligence Draft Regulation (‘the AI Act’). However, these legislative changes have not brought sufficient clarity to the regulation of biometric technologies. There remains legal uncertainty as to when, if at all, techniques such as LFR can be used in accordance with the law, and how the use of biometric data should be regulated.

This Review has sought to address that uncertainty by assessing the existing legal and regulatory framework and by making 10 recommendations.

In arriving at those recommendations, and while taking evidence and conducting research, we were repeatedly struck by two counterintuitive features in this area.

First, strong law and regulation is sometimes characterised as hindering advancements in the practical use of biometric data. This should not be the case. In practice a clear regulatory framework enables those who work with biometric data to be confident of the ethical and legal lines within which they must operate. They are freed from the unhelpful burden of self-regulation that arises from unclear guidelines and overly flexible boundaries. This confidence liberates innovation and encourages effective working practices. Lawmakers and regulators are not always helping those who want to act responsibly by taking a light touch.

Second, the importance of transparency and public consultation was emphasised by all stakeholders, but the practical effect of such emphasis was not always positive. On the one hand, obtaining active and informed public understanding through a structured process – such as a ‘citizens’ jury’ – could provide valuable information on which to base policy. But too often public and private authorities were relying on the public’s partially understood purported consent; an ill-defined assessment of public opinion; or the mere fact of an election victory, as a broad mandate for intrusive collection and use of the public’s biometric data.

The protection of our fundamental rights in relation to biometric data is a complex area which lawmakers and regulators must not delegate to others, or allow public or private authorities to avoid merely by relying on purported public consent. Now more than ever, they have a responsibility to step up to protect the public from the harms and risks that the public themselves may not fully appreciate or even be aware of.

Lastly, I would like to thank those involved in the work of the Review.

I am grateful to my Review team: Jessica Jones, Javier Ruiz and Sam Rowe.

We would like to thank the Advisory Board who shared their time and expertise, and kept us alerted to important points as we were carrying out our work. They are, Anneke Lucasson, Lillian Edwards, Marion Oswald, Edgar Whitley, Pamela Ugwudike, Renate Samson and Matthew Rice.

We would like to express huge gratitude to all the witnesses from whom we took evidence and for their willingness to share their experiences and views. Our thanks, also, to Venetia Tate of Matrix Chambers, whose diligent organisation of the evidence sessions allowed that stage of the Review to proceed smoothly.

Most of all, I am personally grateful to those at the Ada Lovelace Institute (Carly Kind, Octavia Reeve, Imogen Parker, Madeleine Chang, George King and Sohaib Malik, in particular) who commissioned and supported this work, without ever seeking to direct it, and with considerable patience and understanding for the COVID-19-related delays that we encountered along the way.

Matthew Ryder QC

London 2022

1. Introduction

    1. This Review was commissioned by the Ada Lovelace Institute in January 2020. Its remit was to conduct an independent, impartial and evidence-led analysis of the governance of biometric data in England and Wales, and to reach conclusions and make recommendations on regulatory reform.
    2. The impetus for the Review was multi-faceted but a key concern, both before and after the review was commissioned, was police use of live facial recognition (LFR) technology. It received considerable public attention, following the Metropolitan Police Service’s deployment of LFR at Notting Hill Carnival in 2017 and South Wales Police’s piloting of the same technology in 2017­–18. In 2019, the Biometrics and Forensics Ethics Group noted the lack of independent oversight and governance of LFR[footnote]Facial Recognition Working Group of the Biometrics and Forensics Ethics Group. (2019). Ethical issues arising from the police use of live facial recognition technology. Available at : https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/781745/Facial_Recognition_Briefing_BFEG_February_2019.pdf[/footnote] and, in 2019 and 2020, the Divisional Court and Court of Appeal gave judgments on the lawfulness of the South Wales deployments,[footnote]The Divisional Court judgment is available at: https://www.bailii.org/ew/cases/EWHC/Admin/2019/2341.html The Court of Appeal judgment is available at: https://www.bailii.org/ew/cases/EWHC/Admin/2019/2341.html[/footnote] with the Court of Appeal finding that there was an insufficient legal framework around the deployment of LFR to ensure compliance with human rights. The public and legal concerns around LFR have not diminished, but have increased substantially during the course of this Review. As recently as October 2021 the European Parliament voted overwhelmingly in favour of a resolution calling for a ban on the use of facial recognition technology in public places.[footnote]European Parliament. Minutes: Wednesday 6 October 2021 – Strasbourg. Available at: https://www.europarl.europa.eu/doceo/document/PV-9-2021-10-06-ITM-002_EN.html[/footnote]
    3. The Home Office Biometrics Strategy was published in 2018, in response to the growing prominence of biometric data, but was criticised as lacking substance and for failing to set out future plans.[footnote]Orme, D. (2018). ‘Tackling the UK Government’s identity crisis’. Government & Public Sector Journal. Available at: https://www.gpsj.co.uk/?p=4325[/footnote] In November 2019, the Conservative Party manifesto pledged to ‘empower the police to safely use new technologies like biometrics and artificial intelligence, along with the use of DNA, within a strict legal framework’.[footnote]The Conservative and Unionist Party Manifesto 2019. Available at: https://assets-global.website-files.com/5da42e2cae7ebd3f8bde353c/5dda924905da587992a064ba_Conservative%202019%20Manifesto.pdf[/footnote] But there has not yet been any new legislation, and the rapid rate of technological advance has left many concerned that existing legislative and policy frameworks are outdated and fail to account for the new and various ways in which biometric data is, or might be, accessed and used by public and private organisations alike. There has been an increasing clamour from civil liberties organisations, supported by statements from the former Biometrics Commissioner among others,[footnote]See, for example: Biometrics Commissioner. (2019). Annual Report 2018, paragraph 33. Available at: https://www.gov.uk/government/publications/biometrics-commissioner-annual-report-2018; Biometrics Commissioner. (2020). Annual Report 2019. Available at: https://www.gov.uk/government/publications/biometrics-commissioner-annual-report-2019[/footnote] that human rights standards of proportionality and necessity are not being respected in the context of public-sector biometric data use. In July 2019, the Commons Science and Technology Select Committee called for ‘an independent review of options for the use and retention of biometric data’[footnote]House of Commons Science and Technology Committee. (2019). The work of the Biometrics Commissioner and the Forensic Science Regulator. Available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/1970/197003.htm[/footnote] and, after 6 months of no response from the Government, the Ada Lovelace Institute heeded that call and established the Review.
    4. The Review team[footnote]The Review was led by Matthew Ryder QC, with a team comprising Jessica Jones, Javier Ruiz and Samuel Rowe. Short curriculum vitae of the Review team members are at Annex 2.[/footnote] has enjoyed full independence from the Ada Lovelace Institute and has formulated its recommendations on the basis of its own analysis of the evidence received. We have benefited from the support of an expert Advisory Board (see Annex 3), whose expertise covers genetics, internet law, information systems, criminology and digital policy. Their input to this Review has been invaluable. So too has been the input of all the expert witnesses, who willingly shared their time and knowledge with us to shine a light on areas of predominant concern and opportunity (see Annex 4). We hope that this Review, representing the culmination of more than a year’s work and a broad set of conversations with different interested parties, will help identify and shape the way in which a new legal framework, which rises to the challenges of biometric data use, might be established.

2. Executive summary and recommendations

    1. Over the course of the Review, we heard several clear and consistent messages from nearly all of the individuals from whom we took evidence, irrespective of their particular interest: the current legal framework is not fit for purpose, has not kept pace with technological advances and does not make clear when and how biometrics can be used, or the processes that should be followed; the current oversight arrangements are fragmented and confusing, meaning that, for example, it is not clear to police forces to whom they should turn for advice about the lawful use of biometrics; and the current legal position does not adequately protect individual rights or confront the very substantial invasions of personal privacy that the use of biometrics can cause. There was also considerable concern about how to achieve public engagement with an area that can be technical and complex, and how to achieve a sufficient level of public understanding to ensure legitimacy and democratic accountability in the future regulation and use of biometric data.
    2. We began the Review intending to address public and private-sector uses of biometrics in equal measure. It quickly became apparent, however, that public sector organisations were more willing to engage with the Review, more of the research which was available to us focused on public-sector uses, and the academics and civil liberties organisations we spoke to had given considerably more thought to public-sector use of biometrics than private and commercial use. That has informed the way in which this Review is, ultimately, directed predominantly at public-sector use of biometrics. Where we have felt we have a sufficiently robust evidence base to make recommendations relating to the regulation of biometrics in private sector and commercial entities, we have done so; but it is also one of our recommendations that specific, additional private-sector focused work be undertaken.
    3. Taking account of all of this, we make the following ten recommendations:

Recommendation 1: There is an urgent need for a new, technologically neutral, statutory framework. Legislation should set out the process that must be followed, and considerations that must be taken into account, by public and private bodies before biometric technology can be deployed against members of the public.

Recommendation 2: The scope of the legislation should extend to the use of biometrics for unique identification of individuals, and for classification. Simply because the use of biometric data does not result in unique identification does not remove the rights-intrusive capacity of biometric systems, and the legal framework needs to provide appropriate safeguards in this area.

Recommendation 3: The statutory framework should require sector and/or technology-specific codes of practice to be published. Such codes should set out specific and detailed duties that arise in particular types of cases.

Recommendation 4: A legally binding code of practice governing the use of LFR should be published as soon as possible. We consider that a specific code of practice for police use of LFR is necessary, but a code of practice that regulates other uses of LFR, including use by private entities and public-private data sharing in the deployment of facial recognition products, is also required urgently.

Recommendation 5: The use of LFR in public should be suspended until the framework envisaged by Recommendations 1 and 4 is in place.

Recommendation 6: The framework envisaged by Recommendations 1 and 4 should supplement, and not replace, the existing duties arising under the Human Rights Act 1998, Equality Act 2010 and Data Protection Act 2018.

Recommendation 7: A national Biometrics Ethics Board should be established, building on the good practice of the London Policing Ethics Panel and West Midlands Police, and drawing on the expertise and experience of the Biometrics and Forensics Ethics Group. This Board should have a statutory advisory role in respect of public-sector biometrics use.

Recommendation 8: The Biometrics Ethics Board’s advice should be published. Where a decision is taken to deploy biometric technology contrary to the advice of the Biometrics Ethics Board, the deploying public authority should publish a summary explanation of their reasons for rejecting the Board’s advice, or the steps they have taken to respond to the Board’s advice. The public authority’s response should be published within 14 days of the decision to act contrary to the Biometrics Ethics Board’s advice and prior to deployment.

Recommendation 9: The regulation and oversight of biometrics should be consolidated, clarified and properly resourced. The overlapping and fragmented nature of oversight at present impedes good governance. We have significant concerns about the proposed incorporation of the role of Biometrics and Surveillance Camera Commissioner into the existing duties of the ICO. We believe that the prominence and importance of biometrics means that it requires either a specific independent role, and/or a specialist Commissioner or  Deputy Commissioner within the ICO. Wherever it is located, it must be adequately resourced financially, logistically, and in expertise, to perform the governance role that this field requires.

Recommendation 10: Further work is necessary on the topic of private-sector use of biometrics. While we consider that the statutory framework envisaged by Recommendation  1 must regulate private-sector use to some extent, many of those we interviewed had extensive knowledge about public-sector use of biometrics but much less experience and expertise in the challenges and issues arising in the private sector. There are plainly considerable, rights-engaging concerns around private-sector use of biometrics, but we have not received enough private-sector input to the Review to be able to propose detailed solutions. We recommend that further, private- sector-specific research and evidence gathering is undertaken. This is particularly important given the porous relationship between private-sector organisations gathering and processing biometric data and developing biometric tools, and public authorities accessing those datasets and deploying those tools.

3. Our methodology

    1. The work of the Review involved three core strands: (1) research undertaken by the Review team; (2) interviews with various interested parties; and (3) liaison with the Advisory Board.

      Research

    2. The recent prominence of biometrics as a topic of public interest and debate has resulted in the publication of numerous reports and papers which we considered carefully. These include work from leading UK organisations such as the Centre for Data Ethics and Innovation, the Royal United Services Institute, the Alan Turing Institute and the Biometrics and Forensics Ethics Group, among others. In addition to these reports, the Review team also considered the relevant statutory reports from regulators and public bodies such as the Biometrics and Surveillance Camera Commissioners.
    3. Our policy research was not limited to the UK. It included analysis of international developments, mainly in the US and EU. Our sources were varied,[footnote]It is appropriate here to acknowledge the helpful and extensive news developments on biometric data that can be found at biometricupdate.com.[/footnote] ranging from reputable media outlets covering the extensive developments in those countries to policy publications from think tanks – prominently the AI Now Institute – along with organisations such as the American Civil Liberties Union and public bodies such as the National Institute of Standards and Technology (NIST), which is a global authoritative reference for the technical accuracy of certain biometrics. EU organisations, from the European Data Protection Board to various units in the European Commission and Parliament, have been active in the development of the conceptual underpinning on biometrics and regulatory initiatives.
    4. Besides policy, advocacy and legal documents, we also reviewed academic literature in the fields of social sciences and humanities, where there is a helpful body of work on the study of the social impacts of algorithms and data, often in interdisciplinary approaches with legal scholars and computer scientists. These newer developments on social impact complement the existing analyses from areas such as surveillance studies or science and technology studies.
    5. We also surveyed the technical literature on biometrics to the best of our abilities. Although our team did not include computer scientists or biometric technologists, several members have experience in the analysis of technical systems and were supported by the Advisory Board in this regard. This approach ensured that the Review’s recommendations and analysis have been informed by the science. The literature on facial recognition and algorithms is particularly extensive and includes both academic journals and a variety of online publications, some of it from technology companies such as Facebook and Google but also from independent researchers and developers, showing the very dynamic nature of this area. Other areas where scientific literature provides necessary insights are the role of training datasets, accuracy, bias and new biometric modalities.

      Interviews

    6. We took evidence from 24 individuals over a series of interviews conducted between September 2020 and February 2021. Some of our interviews were with a single individual, while some took place in a small group. Each interview lasted between an hour and an hour-and-a-half and addressed a series of themes identified by the Review team as being of particular interest, though with sufficient flexibility to respond to the particular interests and expertise of those with whom we talked. Our interview timetable was delayed by the COVID-19 pandemic and lockdown arrangements that were introduced. Nevertheless, once arrangements had been put in place for the taking of evidence remotely, we were able to obtain a comprehensive cross-section of evidence from individuals engaged with biometrics and their use in the public sector, which has underpinned and provided the basis for the recommendations put forward in this Review. We spoke to, among others, the then Biometrics Commissioner, the then Forensic Science Regulator, the then Surveillance Camera Commissioner, Home Office ministers, the Information Commissioner’s Office, the Metropolitan Police Service, West Midlands Police, the College of Policing, the Centre for Data Ethics and Innovation, AI Now, Liberty and Big Brother Watch. A full list of the interviewees who agreed to be on the record is at Annex 4. We were also assisted by several off-the-record conversations which provided useful background. The Review team received less engagement from private-sector organisations, and the more limited scope of the evidence that was received on the issues arising from private-sector and commercial use of biometrics is reflected in the recommendations that the Review puts forward,  in particular Recommendation 10 which recognises that there is further work to be done on this aspect.

      Advisory Board

    7. The Review team were also assisted by several meetings with the Advisory Board, who provided useful direction, resources and contacts, and who asked thought-provoking questions, which helped to steer the focus of the Review.

4. What is biometric data?

    1. Most members of the public have a general understanding of biometric data – that it is personal data, often obtained from or relating to a person’s body or behaviour, which may be used to uniquely identify them. Thus, the most common forms of biometrics in use, and recognised by the public, are a person’s fingerprints and DNA. Iris scans, voice recognition and facial recognition are also forms of biometrics that are part of the public consciousness. Less well-known are the more novel forms of biometrics such as behavioural traits like gait analysis or keystroke analysis. As technology advances, so too will the forms of biometric data which can be derived from individuals. Indeed, data relating to physical and physiological characteristics of an individual have fallen within the definition of biometric data for several years,[footnote]Explanatory Notes to the Protection of Freedoms Act 2012.[/footnote] but data relating to behavioural characteristics are novel, having been aided by developments in big data analysis. In his evidence to the Review, the former Biometrics Commissioner considered that the addition of behavioural data ‘significantly broadens what had previously been thought about as biometrics’.[footnote]Biometrics Commissioner, interviewed 9 November 2020.[/footnote]
    2. One of our first tasks was to consider the current scope of what constitutes biometric data, in order to determine the focus of the Review. Various organisations and legal instruments provide different definitions of biometric data, and we considered those alternatives and the potential repercussions of choosing one over another, in terms of the safeguards that would apply to privacy-invading practices or systems. We also discussed the difficulty of defining biometrics with those who gave evidence to the Review, discovering that there were differences of opinion as to the importance of the definition and which definition should prevail.
    3. When considering biometric data, there are two relevant stages: first, identifying what it is; and secondly, identifying what requirements must be met when processing it. At the first stage, the focus is on the inherent properties of the data. At the second stage, the focus shifts to consider why the data is being processed. Both stages are, in our view, relevant to the safeguards that should attach to biometric data.
    4. Our foundational starting point was the UK General Data Protection Regulation (UK GDPR) which, consistent with the EU GDPR and the Law Enforcement Directive,[footnote]Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016. Available at: http://data.europa.eu/eli/dir/2016/680/oj[/footnote] defines biometric data as ‘personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of an individual, which allows or confirms the unique identification of that individual, such as facial images or dactyloscopic data’.[footnote]Data Protection Act 2018, Section 205(1); GDPR, Article 4(14).[/footnote] That definition is made up of three elements. First, the data’s source; secondly, how the data was obtained; and finally, the data’s ability to identify an individual uniquely. The ICO believes that the second stage is the operative part of the definition, stating that ‘it is the type of processing that matters’.[footnote]Information Commissioner’s Office (ICO). (2021). Information Commissioner’s Opinion: The Use of Live Facial Recognition Technology in Public Places, Section 4.1. Available at: https://ico.org.uk/media/2619985/ico-opinion-the-use-of-lfr-in-public-places-20210618.pdf[/footnote]
    5. Although the GDPR definition was our starting point, we looked closely at other definitions employed by different organisations. For example, the Article 29 Working Party in its Opinion on the Concept of Personal Data offered an alternative description which focuses on two components (and not on how the data was obtained): for them, biometric data is ‘biological properties, behavioural aspects, physiological characteristics, living traits or repeatable actions where those features and/or actions are both unique to that individual and measurable, even if the patterns used in practice to technically measure them involve a certain degree of probability’.[footnote]Article 29 Data Protection Working Party, Opinion 4/2007 (WP136, 2007), p.8.[/footnote]
    6. Common to both the UK GDPR and Article 29 Working Party definitions is a requirement that the data at least has the capacity to uniquely identify a person. The capacity for unique identification was considered to be important by a number of interviewees, as well.[footnote]Amba Kak, interviewed 8 October 2020; Centre for Data Ethics and Innovation, interviewed 9 December 2020.[/footnote] There is, however, some debate regarding the extent of individuation that is necessary before information is considered to be biometric data. During her interview, the then Forensic Science Regulator expressed some concern that ‘there is no such thing as absolute identification from biometrics’,[footnote]Forensic Science Regulator, interviewed 11 November 2020.[/footnote] which would undermine the usefulness of a definition that required absolute unique identification in order for safeguards to be engaged. The courts have dealt with the probabilistic nature of biometric data pragmatically. In R (Bridges) v Chief Constable of South Wales Police (‘Bridges’), a case about police use of automatic live facial recognition technology on crowds, the High Court (in a definition also adopted by the Court of Appeal) stated that ‘biometric data enables the unique identification of individuals with some accuracy. It is this which distinguishes it from many other forms of data.’[footnote]R (Bridges) v Chief Constable of South Wales Police and Secretary of State for the Home Department [2019] EWHC 2341 (Admin), paragraph 42.[/footnote] Consequently, there should be no expectation that the biometric data will be capable of identifying an individual with total accuracy, but it should at least be capable of providing a confident identification.
    7. We agree that the capacity to uniquely identify individuals with some, but not absolute, certainty is a central feature of biometric data, as it is a feature of all personal data. But that does not mean, we think, that only data being processed for the purposes of unique identification (the second stage identified at 3 above) should fall within a framework for the regulation of biometrics.
    8. We concluded that, where data which has the capacity to uniquely identify individuals with some confidence is obtained or used for purposes other than unique identification – for example, where facial images are captured which could identify individuals but which are used instead for classifying them into race or sex categories – that use, or systems that provide for such activity, must also be subject to robust, rights-safeguarding regulation equivalent to the regulation necessary where identification actually takes place.
    9. We note that this is not currently the case under UK GDPR, which only introduces ‘special category’ protections in respect of biometric data where the purpose of the processing is for unique identification.
    10.  In our view, however, the fact of unique identification is not necessarily more rights-intrusive than the use of sensitive personal data, from which identification could be obtained, for classification or other purposes. Both scenarios require appropriate and careful regulation. Our conclusion is consistent with the views of the Information Commissioner’s Office (ICO), the European Commission, the European Data Protection Supervisor and the European Data Protection Board. We have approached our recommendations (and in particular, Recommendation 2) on this basis.

5. The existing legal framework for the governance of biometric data in England and Wales

    1. The governance of biometric data at present relies on a patchwork of overlapping laws addressing data protection, human rights, discrimination and criminal justice issues. There is no single overarching legal framework for the management of biometric data. Sources of law that developed in response to more general issues cater for the management and regulation of biometric data in an ad hoc manner.

      Human rights law

    2. Human rights law regulates the treatment of individuals by public authorities. The primary relevant legal instrument is the Human Rights Act 1998 (HRA), which implements as part of UK domestic law many of the rights protected by the European Convention on Human Rights (ECHR).
    3. The HRA is relevant to the regulation of biometric data because it protects the right to privacy. By Section 1 of the HRA, key provisions of the ECHR form part of the law of England and Wales, including Article 8 which protects the right to privacy.
    4. Section 6 of the HRA makes it unlawful for public authorities to act incompatibly with the rights protected by Section 1 of the HRA. Public authorities are therefore under a duty to respect an individual’s right to privacy as enshrined by Article 8. It is important to note that the HRA only places duties on public authorities – private companies do not, generally, owe human rights obligations towards individuals, and this is a potential lacuna in the regulation of biometric data use by entities other than public bodies.[footnote]Partly to avoid this kind of gap, domestic courts may themselves develop private law rights to ensure consistency with the protection of human rights – through the principle of ‘horizontal effect’.[/footnote]

      The concept of private life and its application to biometrics

    5. The concept of ‘private life’ within the meaning of Article 8 includes the collection and retention of biometric data about a person. In S and Marper v United Kingdom [2008] ECHR 1581 (‘S and Marper’), a case about the collection and retention of fingerprint and DNA data, the Grand Chamber of the European Court of Human Rights held that, ‘[t]he protection of personal data is of fundamental importance to a person’s enjoyment of his or her right to respect for private and family life, as guaranteed by Article 8 of the Convention’. The collection of biometric data about a person ‘allowing his or her identification with precision in a wide range of circumstances’ is, in the Court’s view, ‘capable of affecting his or her private life’ and gives rise ‘to important private-life concerns’. Indeed, in Aycaguer v France [2017] ECHR 587, a case about DNA retention, the Court went as far as to say that ‘personal data protection plays a primordial role in the exercise of a person’s right to respect for his private life enshrined in Article 8 of the Convention.’
    6. In Gaughran v Chief Constable of Northern Ireland [2015] UKSC 29, the Supreme Court endorsed the position that ‘the indefinite retention of a person’s DNA profile, fingerprints and photograph interferes with the right to respect for private life recognised by Article 8(1).’ In Bridges, the High Court held that Article 8 is engaged ‘if biometric data is captured, stored and processed, even momentarily’. In this regard, ‘the fact that the process involves the near instantaneous processing and discarding of a person’s biometric data…does not matter’; the Court of Appeal agreed.
    7. Interference with a person’s private life may be justified if it is ‘in accordance with law’ and ‘necessary in a democratic society’ for the purposes of a legitimate aim.
    8. In S and Marper, the European Court of Human Rights noted that, for the collection and retention of biometric data to be ‘in accordance with law’, it is essential ‘to have clear, detailed rules governing the scope and application of measures, as well as minimum safeguards concerning, among other things, duration, storage, usage, access of third parties, procedures for preserving the integrity and confidentiality of data and procedures for its destruction, thus providing sufficient guarantees against the risk of abuse and arbitrariness’. In R (Catt) v Commissioner of Police of the Metropolis [2015] UKSC 9, at paragraph 11, Lord Sumption JSC described the purpose of the ‘in accordance with law’ requirement as follows:‘Its purpose is not limited to requiring an ascertainable legal basis for the interference as a matter of domestic law. It also ensures that the law is not so wide or indefinite as to permit interference with the right on an arbitrary or abusive basis.’
    9. In Bridges, the Court of Appeal held that South Wales Police’s piloting of LFR had not satisfied the ‘in accordance with law’ requirement and, accordingly, violated Article 8.
    10.  If a measure is in accordance with law, the next step in justifying its interference with Article 8 is to consider whether it is ‘necessary in a democratic society’. That requires identifying a relevant legitimate aim and assessing whether the interference is a proportionate means of pursuing that legitimate aim. Proportionality is assessed by reference to a four-stage test (set out by the Supreme Court in e.g. Bank Mellat v HM Treasury (No 2) [2013] UKSC 39:1. Whether the objective of the measure pursued is sufficiently important to justify the limitation of a fundamental right.
      2. Whether it is rationally connected to the legitimate aim.
      3. Whether a less intrusive measure could have been adopted without unacceptably compromising the objective.
      4. Whether, having regard to these matters and to the severity of the consequences, a fair balance has been struck between the rights of the individual and the interests of the community.
    11. All of these criteria will need to be satisfied in order for the collection and retention of biometric data to be compatible with the requirements of the HRA

      Data protection law

    12. The legal framework on the protection of personal data consists of (1) the UK General Data Protection Regulation (UK GDPR), and (2) the Data Protection Act 2018 (DPA 2018). These are relevant to biometric data because biometric data is, essentially, a sub-category of personal data.
    13.  UK GDPR and DPA 2018 define biometric data as ‘personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data’ (UK GDPR, Article 4(14); DPA 2018, Section 205). Personal data is defined as ‘any information relating to an identified or identifiable natural person (“data subject”)’ (UK GDPR, Article 4(1); DPA 2018, Section 3(2)). ‘An identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person’ (UK GDPR, Article 4(1); DPA 2018, Section 3(3)).
    14.  Data protection law governs the lawful processing of personal data. In this context, ‘processing’ means any operation or set of operations performed on personal data or sets of personal data, including collection, recording, storage, retrieval, consultation, use and disclosure, among others (UK GDPR Article 4(2), DPA 2018, Section 3(4)). Processing must demonstrate compliance with the Data Protection Principles set out in Article 5 of UK GDPR, which stipulates that personal data shall be:1. processed lawfully, fairly, and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’)
      2. collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes (‘purpose limitation’)
      3. adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’)
      4. accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’, which includes a right of rectification of inaccuracies)
      5. kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed (‘storage limitation’)
      6. processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).
    15. Part 2 of the DPA 2018 addresses the general processing of data and provides for the same rights for the data subject as arise under UK GDPR – for example, the right of access to data, the right of rectification where data is incorrect, and the right of erasure. Section 10 of the DPA 2018 makes provision for the processing of ‘special category data’ (defined by Article 9 of UK GDPR), which includes biometric data if the purpose of processing is to uniquely identify an individual.[footnote]The phrase ‘for purposes of uniquely identifying’ was added during the GDPR trilogue in 2016, although no official record of that trilogue, or the rational for adopting the phrase, exists. See Council position, 05419/1/2016, April 8, 2016. The words were added later during the trilogue in 2016.[/footnote] It should be noted that there is an ongoing debate regarding what is meant by ‘the purpose of uniquely identifying an individual’.[footnote]See: Clifford, D. (2019). The Legal Limits to the Monetisation of Online Emotions, pp. 177–183. Available at: https://limo.libis.be/primo-explore/fulldisplay?docid=LIRIAS2807964&context=L&vid=Lirias&search_scope=Lirias&tab=default_tab&lang=en_US[/footnote] This Review considers that the phrase refers to the purpose of processing under the UK GDPR, Article 5(1)(b) (the ‘purpose limitation’ principle), since doing so follows the words’ natural and ordinary meaning. Even where ‘purpose of uniquely identifying’ is construed broadly, for example encompassing any instance that a biometric template is generated for an individual to achieve comparison with others,[footnote]Bridges at paragraph 133.[/footnote] it will still fail to capture all methods of biometric classification.
    16. UK GDPR prohibits the processing of special category data other than in certain limited circumstances, similar to those permitted under the DPA 2018. The DPA 2018 allows for the processing of special category data for the purposes of employment, social security and social protection, health and social care, public health, archiving, research and statistics, in relation to criminal convictions or offences, or where there is a substantial public interest, if the relevant conditions in Schedule 1 of the DPA 2018 are met. Part 2 of the DPA 2018 applies to both public and private sector organisations and individuals.
    17. The operative definition of special category data means that biometric data only qualifies as special category data if it is used for the purpose of uniquely identifying an individual. This means that there will be circumstances where biometric data is used to profile individuals or groups. This data will not be required to meet the higher bar imposed on the processing of special category data, unless it falls under one of the other existing forms of special category data, such as data revealing racial origin. That means fewer safeguards exist where, for example, biometric analysis is used to profile individuals for job worthiness[footnote]Electronic Privacy Information Centre. (2019). In re HireVue. Available at: https://epic.org/privacy/ftc/hirevue/[/footnote] but without uniquely identifying anyone. Such practices could have effects on an individual that are just as serious as those arising from unique identification. We consider that position to be unsatisfactory. As addressed above, in section 4 of this Review, the use of biometrics for classification has the potential to be just as rights-intrusive as their use for unique identification and, in our view, similar safeguards should apply.

      Data protection in the context of law enforcement

    18.  Part 3 of the DPA 2018 provides for the processing of personal data by competent authorities for criminal law enforcement purposes.[footnote]Part 3 was intended to transpose into domestic law the EU Data Protection Directive 2016/680 (Law Enforcement Directive).[/footnote] Pursuant to Section 30 and Schedule 7 of the DPA 2018, ‘competent authorities’ for the purposes of Part 3 includes police, prosecuting authorities, and ‘any United Kingdom government department other than a non-ministerial government department’, but not the intelligence services (the processing of personal data by the intelligence services is covered by Part 4 of the DPA 2018).
    19.  Section 31 of the DPA 2018 defines ‘law enforcement purposes’ as ‘the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security’. By Sections 35–40, similar general data protection principles to those contained in UK GDPR apply in the context of processing for law enforcement purposes. Provision is also made for ‘sensitive processing’, which includes the processing of biometric data, for the purpose of uniquely identifying an individual (Section 35(8)(b)). Pursuant to Section 35, ‘sensitive processing’ of personal data is only lawful if consent has been obtained from the data subject or the processing, or it is ‘strictly necessary’, and if it meets at least one of the conditions in Schedule 8 of the DPA 2018 (i.e. it is (a) necessary ‘for the exercise of a function conferred on a person by an enactment or rule of law’ or (b) necessary ‘for reasons of substantial public interest’ or (c) necessary ‘for the administration of justice’).  Whether the data controller is relying on consent or strict necessity, they must have an appropriate policy document in place at the time the processing is carried out.
    20.   The test of necessity ‘is a strict one, requiring any interference with the subject’s rights to be proportionate to the gravity of the threat to the public interest. The exercise therefore involves a classic proportionality analysis’.[footnote]Guriev and others v Community Safety Development (UK) Limited [2016] EWHC 643 (QB), at paragraph 45.[/footnote] The assessment also requires ‘direct personal evaluation’, not a generalised evaluation.[footnote]R (El Gizouli) v. Secretary of State for the Home Department [2020] 2 UKSC 10, at paragraph 44.[/footnote] ICO guidance suggests that ‘strictly necessary in this context means that the processing has to relate to a pressing social need, and you cannot reasonably achieve it through some less intrusive means.’[footnote]ICO. Guidance to Law Enforcement Processing: Conditions for sensitive processing. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-law-enforcement-processing/conditions-for-sensitive-processing/[/footnote] 
    21.  In Zaw Lin v Commissioner of the Police of the Metropolis,[footnote][2015] EWHC 2484 (QB).[/footnote] the High Court noted that the ‘raison d’être’ of the Data Protection Act 1998 (the precursor statute to DPA 2018) was to act as ‘a protector of an individual’s fundamental rights’.[footnote]At paragraph 80.[/footnote] As a consequence, ‘when construing the DPA…decision makers and courts must have regard to all relevant fundamental rights that arise when balancing the interest of the State and those of the individual. There are no artificial limits to be placed on the exercise.’[footnote]At paragraph 69.[/footnote] Thus, the data protection and human rights statutory frameworks are not independent of each other, but overlap and inform the interpretation of lawful action overall.

      Data protection in the context of intelligence services

    22.   Part 4 of the DPA 2018 addresses intelligence services processing. Section 82(2) describes the ‘intelligence services’ as the Security Service (MI5), the Secret Intelligence Service (MI6) and GCHQ. The structure of Part 4 mirrors that of Part 3 (law enforcement processing), although its content is more akin to Part 2 (general processing). Under Section 86(7)(c), biometric data processed for the purpose of identifying someone uniquely is categorised as ‘sensitive processing’. Processing sensitive data is only permitted if one of the conditions in Schedule 9 is met, as well as one of the additional conditions in Schedule 10.[footnote]See: Section 86(2)(b).[/footnote]

      Criminal justice and terrorism legislation

    23.   Police and other law enforcement authorities have specific powers for the collection and retention of biometric data through a range of criminal justice and anti-terrorism legislation. The most commonly invoked powers are those contained in the Police and Criminal Evidence Act 1984 (PACE), as amended by the Protection of Freedoms Act 2012 (PoFA).

      Police and Criminal Evidence Act 1984

    24.   Sections 61–64A of PACE give the police the power to take fingerprints, ‘intimate samples’, ‘non-intimate samples’ and photographs of suspects subject to criminal investigation. Section 65 of PACE defines intimate and non-intimate samples. Both either amount to biometric data themselves, or are sources from which the biometric data of subjects could be extracted.
    25.   PACE also contains provisions requiring the deletion of biometric data. For example, Section 63D of PACE requires fingerprints and DNA profiles derived from DNA samples (‘Section 63D material’) to be destroyed if it appears they were taken unlawfully or on the basis of an unlawful arrest or an arrest premised on mistaken identity. In any other case, Section 63D material must be destroyed unless it is retained under a power contained in Sections 63E–63O of PACE. For example, Section 63E of PACE allows for the retention of Section 63D material until the conclusion of the investigation into the offence, or the conclusion of the proceedings if the investigation gives rise to proceedings. Section 63F allows for the retention of Section 63D material obtained from a person charged with, but not convicted of, a qualifying offence for three years from the date the biometrics were obtained – extendable to a period of five years on application to a District Judge. Where a person was convicted of a qualifying offence (as defined in Section 65A of PACE), by Section 63I of PACE, the police have the power to retain their Section 63D material indefinitely.
    26.   Section 63R of PACE requires all DNA samples taken from individuals to be destroyed as soon as a DNA profile has been obtained from them (though this obligation is subject to the provisions on retention of criminal evidence contained in the Criminal Procedure and Investigations Act 1996 which provides for the retention of evidence if it may be required for disclosure to the defence).

      Terrorism Act 2000

    27.   Pursuant to Schedule 7 and 8 of the Terrorism Act 2000 (TACT 2000), police have the power to stop, question and detain for up to 6 hours any persons at ports or border areas for the purposes of determining whether they appear to be a person who has been concerned in the commission, preparation or instigation of acts of terrorism; and, when conducting a stop under Schedule 7, pursuant to paragraph 2 of Schedule 8, an authorised person (which includes a police officer, prison officer, or person otherwise authorised by the Secretary of State) may ‘take any steps which are reasonably necessary for – (a) photographing the detained person, (b) measuring him, or (c) identifying him’. Paragraph 10 of Schedule 8 sets out when fingerprints and non-intimate samples may be taken, including being taken without consent when authorised by a superintendent under paragraph 10(4), 10(6) and 10(6A).
    28.   Paragraphs 20A–20E of Schedule 8 make provision for the destruction and retention of samples obtained during Schedule 7 stops, with the general requirement being that they are retained for no more than 6 months unless a national security determination is made that authorises their retention for a longer period.
    29.   Other similar provisions for the retention of biometric data in an anti-terrorist context appear in the Counter-Terrorism Act 2008 and the Terrorism Prevention and Investigation Measures Act 2011. The Counter-Terrorism and Border Security Act 2019, which is currently only partially in force, also contains provisions (in Schedule 3) enabling the taking of samples and fingerprints from individuals detained for questioning at a port or border area.
    30.   In the national security and criminal justice context, it is worth noting the provisions of the Regulation of Investigatory Powers Act 2000 (RIPA) which provide a coercive power to require an individual to provide a ‘key’ or password for the accessing of electronic information obtained with appropriate authorisations (see e.g. Section 49 of RIPA). While not explicitly relating to biometric data, Section 56(1), defines ‘key’ as ‘any key, code, password, algorithm or other data the use of which (with or without other keys) (a) allows access to the electronic data, or (b) facilitates the putting of the data into an intelligible form’. Whether this would now be interpreted to include requiring an individual to provide biometric data to access a device remains an open and untested legal question

      Investigatory Powers Act 2016

    31.  The Investigatory Powers Act 2016 (IPA) does not set out specific provisions relating to biometric data. However, under Part 7, it regulates the intelligence services’ powers to retain ‘bulk personal datasets’ of personal data, which would include biometric data. Section 206 specifically contemplates such powers being used for health records.

      Protections of Freedoms Act 2012

    32.   The Protection of Freedoms Act 2012 (PoFA) deliberately includes provisions to regulate the processing of biometric data – in particular, DNA, fingerprints, photographic images and video surveillance. It was enacted partly in response to the European Court of Human Rights’ decision in S and Marper that found the UK in violation of the Article 8 rights of those whose data was retained on a DNA database.
    33.   Sections 1–19 of PoFA inserted the various provisions for the retention and deletion of biometric data discussed above (see 5.24) into PACE. Alongside those provisions, Section 20 of PoFA provides for the appointment and functions of the Biometrics Commissioner (see 5.50, below), whose responsibility it is to make national security determinations for the retention of biometric data and keep under review the use and retention of biometrics pursuant to the statutory powers in PACE, TACT 2000, the Counter-Terrorism Act 2008 and the Terrorism Prevention and Investigation Measures Act 2011. Section 21 of PoFA also obliges the Commissioner to report annually on the carrying out of their functions. Separately, Section 34 of PoFA establishes the role of Surveillance Camera Commissioner (see 5.52, below).
    34.   Chapter 2 of Part 1 of PoFA makes provision for the protection of biometric information of children in schools. Section 26 requires that parents are informed of an intention by a school to process a child’s biometric information, and prohibits such processing unless at least one parent consents to the information being processed. Even if a parent consents, by Section 26(5), if the child refuses to participate or continue to participate in anything that involves the processing of the child’s biometric information, or otherwise objects to the processing of the information, the processing may not continue irrespective of the parent’s consent. In such circumstances, the school ‘must ensure that reasonable alternative means are available by which the child may do, or be subject to, anything which the child would have been able to do, or be subject to, had the child’s biometric information been processed’ (Section 26(7)).

      Equality and anti-discrimination legislation

    35.   The Equality Act 2010 contains a number of provisions that bear on the use of biometric data. First, the Equality Act prohibits direct and indirect discrimination on the basis of any of a list of specified ‘protected characteristics’: age, disability, gender reassignment, marriage or civil partnership, race, religion or belief, sex and sexual orientation.
    36.   The prohibition of indirect discrimination means that even a policy or practice which is ostensibly neutral will be unlawful if it produces a disproportionate disadvantageous effect on people with a protected characteristic. Accordingly, systems for the collection, processing and storing of biometric data will need to comply with the requirement not to indirectly discriminate against people with protected characteristics (for example, they must not disproportionately impact people of a certain race or certain sex) in order to be compatible with the Equality Act 2010. This is particularly significant in relation to existing law enforcement tools that rely on biometric data, that are alleged to have significant differences in their reliability rates between men and women, or between people of different ethnicities.
    37.   Alongside the prohibition of substantive discrimination, the Equality Act 2010 also imposes a procedural ‘public sector equality duty’, or ‘PSED’, with which public authorities must comply whenever they make decisions in the exercise of their functions. The duty is to have ‘due regard’ to the impact of decisions on the statutory equality aims, namely, the need to:1. Eliminate discrimination, harassment, victimisation and any other conduct prohibited under the Equality Act.
      2. Advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it.
      3. Foster good relations between persons who share a relevant protected characteristic and persons who do not share it.
    38.   The PSED is a process duty, rather than an obligation to achieve a particular result. It will be discharged if a decision-maker can show they have had due regard to (i.e. taken appropriate account of) the statutory equality aims, whether or not their decision actually achieves those aims. That does not, however, diminish its importance. The Court of Appeal observed in Bridges, that compliance with the PSED ‘helps to reassure members of the public, whatever their race or sex, that their interests have been properly taken into account before policies are formulated or brought into effect’.
    39.   The PSED also requires a public authority to obtain the information necessary to properly assess the impact of their decision on the statutory equality aims – so that, for example, when technology is deployed, the public authority seeking to use it must satisfy itself that it does not have any inbuilt bias, or any bias that does exist can be overcome. As Megan Goulding from Liberty observed when giving evidence to the Review, the PSED ‘might mean that companies are forced to give more information to public authorities regarding training datasets so that an investigation into bias can be done before biometric technologies are deployed’.
    40.   The possibility identified by Megan Goulding arises, in part, because the PSED is a ‘non-delegable duty’ which falls on a public decision-maker personally.[footnote]See: R (Brown) v Secretary of State for Work and Pensions [2009] EWHC 3158 (Admin) at [94]; but also: Panayiotou v London Borough of Waltham Forest [2017] EWCA Civ 1624 at [79].[/footnote] Other than in expressly permitted circumstances, it is not possible for a public authority to ‘outsource’ or forego its obligation to consider statutory equality objectives which arise under the PSED, on the basis that they have purportedly been considered at some earlier stage by another party. This is of critical importance in the context of the use of technology products that rely on the processing of biometric data: as was made clear in Bridges, it is not enough that a commercial provider of relevant software tells a public authority that there are no adverse equality impacts – the public authority must be in a position to give ‘due regard’ itself to that question.
    41.  However, what will constitute ‘due regard’ for the purposes of discharging the PSED is a fact-sensitive question. It is not possible to say categorically what a decision-maker will be required to do in any particular case. We do not consider that it will generally require the relevant decision-maker to have the technical expertise to understand the operation of any relevant software, but we do consider that they will require sufficient information (whether by way of summaries, explanation or statistics) about the practical operation of the software to have an understanding of the way in which its operation interacts with the statutory equality objectives. That may include, for example, a need to have some information about the datasets on which an algorithm was trained in order to identify any adverse equality effects it might be expected to cause.

      Regulation and oversight

      The Information Commissioner’s Office

    42.   The ICO is the primary oversight body with a remit which includes biometrics. The ICO is an independent public body acting as the supervisory authority for data protection and freedom of information; and biometrics therefore falls within its scope by virtue of its status as a form of personal data.
    43.   The ICO’s general powers are described in Schedule 13 to the Data Protection Act 2018 (DPA 2018). They are split across the issuance of information, assessment and enforcement notices. Its powers are regulated by safeguards (see DPA 2018 Sections 115(5) to 115(9)). Pursuant to powers under Part 6 of the DPA 2018, the ICO can take various enforcement measures against individuals and organisations who breach data protection law. These include the imposition of pecuniary penalties and prosecution, with the consent of the Director of Public Prosecutions, for criminal offences (including, for example, unlawfully obtaining personal data).
    44.   The ICO has a duty to advise Parliament and the Government on administrative measures concerning individuals’ rights and freedoms in relation to the processing of personal data (DPA 2018, Section 115). It has published codes of practice (under the Data Protection Act 1998, but which remain relevant under the DPA 2018 regime) which have a bearing on the processing of biometric data, for example: (i) the Anonymisation code of practice;[footnote]ICO. Anonymisation: code of practice. Available at: https://ico.org.uk/media/for-organisations/documents/1061/anonymisation-code.pdf[/footnote] (ii) the CCTV Code of Practice;[footnote]Note: this code is no longer available on the ICO’s website.[/footnote] (iii) the Employment Practices Code;[footnote]ICO. The employment practices code. Available at: https://ico.org.uk/media/for-organisations/documents/1064/the_employment_practices_code.pdf[/footnote]; and (iv) the Employment Practices Code: Supplementary Guidance.[footnote]ICO. The Employment Practices Code: supplementary guidance. Available at: https://ico.org.uk/media/for-organisations/documents/1066/employment_practice_code_supplementary_guidance.pdf[/footnote]
    45.   It has a statutory obligation to produce at least four codes of practice under the DPA 2018 (DPA 2018, Sections 121–124).  Two statutory codes of practice have been issued so far: the Data Sharing Code[footnote]ICO. Data sharing: a code of practice. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/data-sharing-a-code-of-practice/[/footnote] and the Age Appropriate Design Code.[footnote]ICO. Age Appropriate Design Code. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-code/[/footnote]
    46.   Under Section 116(2) of the DPA 2018, in conjunction with Schedule 13(2)(d), the Information Commissioner may issue formal Opinions to Government, other institutions or bodies as well as the public, on any issue related to the protection of personal data. This may form the basis for the Information Commissioner’s approach to enforcement.
    47.   Two relevant examples of this role of the ICO are its two Opinions on facial recognition technology.
    48.   The first was The use of live facial recognition technology by law enforcement in public places, published in October 2019.[footnote]ICO. (2019). Information Commissioner’s Opinion: The use of live facial recognition technology by law enforcement in public places. Available at: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf[/footnote] Although it was published before the Court of Appeal decision in Bridges it was a prescient document, correctly anticipating the direction the law would take and reflecting the position the ICO took as an intervener in the Bridges litigation. Nine ‘key messages’ are summarised in the opinion including the following:[footnote]ICO. (2019). p.3.[/footnote]‘The Commissioner intends to work with relevant authorities with a view to strengthening the legal framework by means of a statutory and binding code of practice issued by government. In the Commissioner’s view, such a code would build on the standards established in the Surveillance Camera Code and sit alongside data protection legislation, but with a clear and specific focus on law enforcement use of LFR and biometric technology. It should be developed to ensure that it can be applicable to current and future biometric technology.’
    49.   The second was The use of live facial recognition technology in public places, published in June 2021,[footnote]ICO. (2021). Information Commissioners’ Opinion:The use of live facial recognition technology in public places. Available at: https://ico.org.uk/media/for-organisations/documents/2619985/ico-opinion-the-use-of-lfr-in-public-places-20210618.pdf[/footnote] which considered the use of similar technology but not in the law enforcement context covered by the earlier opinion. It examines biometric technology use both for identification and for categorisations of persons. A key issue raised, but not entirely resolved, in that opinion is who should bear the responsibility for the use of badly designed biometric technology, and what burden is there on the user of that technology to make detailed enquiry of the vendor/manufacturer. This reveals how the ICO’s Opinions, while welcome, are not able to conclusively resolve some of the more difficult legal issues. But it can highlight areas that will require better guidance, new legal provisions, or – ultimately – judicial determination.

      The Biometrics and Surveillance Camera Commissioner(s)

    50.   The Biometrics Commissioner was established under Section 20 of the Protection of Freedoms Act 2012 (PoFA). The Biometrics Commissioner is independent of Government. Despite the generality of the role’s title, the Commissioner does not have a general remit over all public issues relating to the use of biometrics, but has four specific statutory functions:1. Reviewing the retention and use of DNA samples, DNA profiles and fingerprints by law enforcement agencies, assessing their compliance with the obligations under the PoFA and under the Police and Criminal Evidence Act 1984 (PACE).[footnote]The Protection of Freedoms Act 2012 (PoFA), Sections 20(2) and 20(6).[/footnote]
      2. Determining applications by the police to retain DNA profiles and fingerprints[footnote]PoFA, Section 20(9).[/footnote] (PoFA, Section 20(9)).
      3. Reviewing national security determinations which are made or renewed by the police in relation to the retention of DNA profiles and fingerprints, with the ability to order that relevant material be destroyed (PoFA, Sections 20(3) to 20(5)).
      4. Providing reports to the Home Secretary about the carrying out of the Commissioner’s functions and any matter relating to the Commissioner’s functions, (PoFA, Section 21).
    51.  Consequently, the scope of the Biometrics Commissioner is limited to law enforcement agencies and only concerned with certain types of biometric data. However, the ability of the Commissioner to report on any matter relating to its functions allows it to address topics beyond its immediate scope, such as the deployment of novel technologies by law enforcement agencies.
    52.   Under Section 29 of the PoFA, the Secretary of State must prepare a code of practice containing guidance about ‘surveillance camera systems’. Section 34 of the PoFA established a Surveillance Camera Commissioner with a special remit relating to that code. The Surveillance Camera Commissioner has three primary functions:1. Encouraging compliance with the Surveillance Camera Code of Practice.[footnote]PoFA, Section 34(2)(a).[/footnote]
      2. Reviewing the operation of the Surveillance Camera Code of Practice.[footnote]PoFA, Section 34(2)(b).[/footnote]
      3. Providing advice to Government ministers about the Code, including changes or it or breaches of it.[footnote]PoFA, Section 34(2)(c).[/footnote]
    53.   ‘Surveillance camera systems’ includes CCTV and any system for recording or viewing images for surveillance purposes.[footnote]PoFA, Section 29(6)(a) to (c). It also includes Automated Number Plate Recognition, but that is not relevant to this Review.[/footnote] It also extends to any other system associated with, or otherwise connected with CCTV and any other system for recording or viewing visual images for surveillance purposes.[footnote]PoFA, section 29(6)(d).[/footnote] This could therefore include a multitude of vision-based biometrics, within scope of the definition. 
    54.   The Commissioner does not have enforcement functions or powers of inspection. It works with relevant authorities, including local authorities and police forces in England and Wales, to make them aware of their duty to have regard to the Code.[footnote]PoFA, section 33(5).[/footnote] As part of the Commissioner’s duties, it is responsible for providing advice on effective, appropriate, proportionate and transparent use of surveillance camera systems.
    55.   An example of the Surveillance Camera Commissioner’s power to provide advice about the Code is the detailed opinion published in November 2020, on LFR, entitled: Facing the Camera: Good Practice and Guidance for the Police Use of Overt Surveillance Camera Systems Incorporating Facial Recognitions Technology to Locate Persons on a Watchlist, in Public Places in England and Wales.[footnote]Surveillance Camera Commissioner. (2020). Facing the Camera: Good Practice and Guidance for the Police Use of Overt Surveillance Camera Systems Incorporating Facial Recognitions Technology to Locate Persons on a Watchlist, in Public Places in England and Wales. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/940386/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf[/footnote] Its recommendations gave particular emphasis on the importance of consistent ethical standards in the way such work is carried out.
    56.   Although the Surveillance Camera Code of Practice is only binding upon relevant authorities, the Commissioner has a responsibility to provide the surveillance camera industry with recommended standards.[footnote]PoFA, Section 29(3).[/footnote] The updated Code came into effect on 12 January 2022. The Commissioner’s responsibilities towards the private sector extend to encouraging voluntary compliance with the Code.
    57.  The updated Code of Practice contains only two references to biometric technologies.[footnote]Home Office. (2022). Surveillance Camera Code of Practice, paragraphs 2.4 and 12.3. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1035067/Surveillance_Camera_CoP_Accessible_PDF.pdf[/footnote] The first reference states no more than that such technologies must be justified, proportionate and for a stated purpose. It also states that they must also be ‘validated’, explaining that the Commissioner will validate systems. The amended Code now provides guidance for chief officers of police that want to use LFR to find people on watchlists. It recommends, amongst other things, that chief officers ‘establish an authorisation process for LFR deployments and identify the criteria by which officers are empowered to issue LFR deployment authorisations’.[footnote]Home Office. (2022). Paragraph 12.3.[/footnote]
    58.   In July 2020, the Government announced that the Biometrics Commissioner and Surveillance Camera Commissioner roles would be merged into a single appointment. The announcement prompted criticism from the existing post-holders and, the new office holder, Fraser Sampson, was appointed in March 2021. No new law has been introduced to circumscribe the new role and it is understood that the legal basis of the position will remain the same, but with a single person fulfilling all the relevant functions.
    59.   In addition, in September 2021, the Government suggested, in its consultation on the domestic data protection regime, that the dual roles of the Biometrics and Surveillance Camera Commissioner’s role could be absorbed into the ICO.[footnote]Department for Digital, Culture, Media & Sport (DCMS). (2021). Data: A new direction, paragraphs 409 and 410. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1022315/Data_Reform_Consultation_Document__Accessible_.pdf[/footnote] The Biometrics and Surveillance Commissioner subsequently expressed concerns that such a move would undermine the Commissioner’s dual roles. In his view, neither role could be characterised as regulatory, whereas the ICO’s was a statutory regulator.[footnote]DCMS. (2021). Section 6.[/footnote] It is unclear whether such a move would involve an amendment to the statutory foundation on which the Biometrics and Surveillance Camera Commissioner rests or if it would just mean a reallocation of resources.[footnote]As this Review went to press, DCMS published its response to the Data: A new direction consultation, which proposes dissolving the Office of the Biometrics and Surveillance Camera Commissioner, and to distributing its functions to other regulators, potentially moving casework functions to the Investigatory Powers Commissioner and moving surveillance-related functions to the ICO. See: DCMS. (2022). Data: a new direction – Government response to consultation. Available at: https://www.gov.uk/government/consultations/data-a-new-direction/outcome/data-a-new-direction-government-response-to-consultation#ch5[/footnote]

      Forensic Science Regulator

    60.   The Forensic Science Regulator ‘ensures that the provision of forensic science services across the criminal justice system is subject to an appropriate regime of scientific quality standards’.[footnote]UK Government. Forensic Science Regulator. Available at: https://www.gov.uk/government/organisations/forensic-science-regulator[/footnote] As Dr Tully, the post-holder at the time of our interviews, explained in our evidence session, it is a broad remit with only a small overlap with biometrics. However, since some forensic science uses biometrics, and the Forensic Science Regulator sets quality standards that must be met in the use of forensic science in the criminal justice system, Dr Tully’s role provides at least some regulation of the use of biometrics (for example, in setting standards for fingerprint or DNA comparison).
    61.  The Forensic Science Regulator has, since April 2021, existed pursuant to a statutory basis (Forensic Science Regulator Act 2021, Section 1). The Regulator now has a duty to publish a statutory code of practice (Section 2), as well as statutory powers to undertake investigations and issue formal notices (Sections 5 and 6).

      The Law Enforcement Facial Images and New Biometrics Oversight and Advisory Board

    62.   In 2018, the Home Secretary established an Oversight and Advisory Board in respect of LFR and new biometrics use by the police. Its membership was comprised of representatives from the police, the Home Office, the then Surveillance Camera Commissioner, the Information Commissioner, the then Biometrics Commissioner and the Forensic Science Regulator. The Court of Appeal in Bridges described the purpose of the Board as ‘to co-ordinate consideration of the use of facial imaging and [Automated Facial Recognition] by law enforcement authorities’. In the conversations we had with relevant stakeholders, many referred, for example, to the Surveillance Camera Commissioner and Information Commissioner’s roles in the oversight of LFR, but none made any mention of this Board. It last met in September 2019. The Gov.uk website asserts that ‘alternative governance arrangements are now in place’, but does not identify what those are.[footnote]UK Government. Law Enforcement Facial Images and New Biometrics Oversight and Advisory Board. Available at: https://www.gov.uk/government/groups/law-enforcement-facial-images-and-new-biometrics-oversight-and-advisory-board[/footnote]

      Biometrics and Forensics Ethics Group (BFEG)

    63.   The Biometrics and Forensics Ethics Group (BFEG) is an advisory, non-departmental public body, sponsored by the Home Office and comprised of experts in law, psychiatry, political theory, human geography, genetics and forensic science. It provides independent ethical advice to Home Office ministers on issues relating to the use of biometrics and forensics.
    64.   In December 2020, BFEG published 6 ‘Governing Principles’ for the use of biometric, forensic and data analysis procedures:[footnote]Biometrics and Forensics Ethics Group (BFEG). (2020). Ethical Principles. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/946996/BFEG_Principles_Update_December_2020.pdf[/footnote]
      1. Procedures should enhance public safety and the public good
      2. Procedures should seek to respect the dignity of individuals and groups
      3. Procedures should not deliberately or inadvertently target or selectively disadvantage those most vulnerable people nor people or groups on the basis of protected characteristics as defined in the Equality Act 2010
      4. Procedures should respect, without discrimination, human rights as defined in the Human Rights Act 1998
      5. Scientific and technological developments should be harnessed to advance the process of criminal justice; promote swift exoneration of the innocent, and afford protection and resolution for victims
      6. Procedures should be based on robust evidence.
    65.   It has also published briefing papers addressing the ethical issues arising, for example, in relation to LFR use.[footnote]BFEG. (2021). Briefing note on the ethical issues arising from public– private collaboration in the use of live facial recognition technology. Available at: https://www.gov.uk/government/publications/public-private-use-of-live-facial-recognition-technology-ethical-issues/briefing-note-on-the-ethical-issues-arising-from-public-private-collaboration-in-the-use-of-live-facial-recognition-technology-accessible[/footnote]

6. The EU AI Draft Regulation

    1. In April 2021, the European Commission published its proposed legal framework for the regulation of artificial intelligence (‘AI’).[footnote]Council of the European Union. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence (EU Artificial Intelligence Act) and amending certain Union legislative acts. 2021/0106 (COD). Available at: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2021:206:FIN[/footnote] While only a first draft, and therefore likely to be revised substantially during the trilogue process, it is an important reference that will set a benchmark against which other laws and regulations will be developed and measured. The proposal concerns AI in general, but makes express reference to certain forms of AI systems, such as biometric technologies, emotion recognition systems and social scoring systems.
    2. It is no longer the case that EU law automatically becomes part of UK law. But just as the GDPR is now reflected in the UK GDPR, it seems highly likely that even after the UK has left the EU the legal regulation of AI and biometric data is likely to be highly influenced by, if not precisely mirror, EU law. As a result we considered it important to assess this attempt by the EU to set out a binding legal framework around AI including the processing of biometric data.
    3. The proposed regulation takes a risk-based approach, with different rules applying to ‘unacceptable-risk’, ‘high-risk’, ‘limited-risk’ and ‘minimal-risk’ categories of AI. Into each of these categories fall different types of AI systems, as well as particular purposes for using AI.
    4. Although the risk-based approach means that biometric technologies will generally be assigned to a risk category on a case-by-case basis, there are certain biometric identification technology uses which fall explicitly into the unacceptable and high-risk categories: remote biometric identification systems.
    5. Throughout the proposal, the notion of biometric data is supposed to be interpreted in line with the definition under Article 4(14) of the EU GDPR.[footnote]EU Artificial Intelligence Act, Recital 7.[/footnote] Much like under the GDPR, biometric identification systems are subject to more stringent requirements than biometric categorisation systems. Additionally, an important distinction is made between ‘real-time’ biometric identification systems and ‘post’ remote biometric systems. The latter are identification systems made after the biometric data has been collected and with a significant delay.[footnote]EU Artificial Intelligence Act, Recital 37.[/footnote]

      ‘Real-time’ biometric identification systems in publicly accessible spaces

    6. The use of real-time biometric identification systems by law enforcement[footnote]‘Law enforcement authorities’ has the same meaning as in Directive (EU) 2016/680 (the Law Enforcement Directive), (see: Article 3(40)).[/footnote] in publicly accessible spaces, such as LFR, falls into the category of unacceptable risk. This is because it is seen as ‘particularly intrusive’.[footnote]Law Enforcement Directive, Recital 18.[/footnote] It is therefore prima facie prohibited,[footnote]Law Enforcement Directive, Article 5(1).[/footnote] although subject to explicit and inferred caveats.[footnote]When law enforcement authorities use biometric technologies in private spaces, or use biometric technologies for purposes other than law enforcement, the category of risk assigned will depend on the type of biometric technology used and the purpose for which it is used.[/footnote]
    7. There are three explicit exceptions: where the use is strictly necessary for (1) targeted search for potential crime victims, including missing children; (2) the prevention of specific, substantial and imminent threats to life, for example terrorist attacks; or (3) the detection, localisation, identification or prosecution of a perpetrator or suspect of criminal offences referred to referred to in Article 2(2) of Council Framework Decision 2002/584/JHA, leading to a custodial sentence of over 3 years.
    8. There are two implicit caveats of note. First, even when the prohibition to LFR was not subject to an exception, it would only apply where the technology is used in a publicly accessible space. As stated in Recital 9, ‘publicly accessible’ ‘does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised’. Whether a space is considered publicly accessible will be determined on a case-by-case basis.[footnote]EU Artificial Intelligence Act, Recital 9.[/footnote]
    9. The second caveat concerns the scope of the provisions relating to high-risk biometric identification systems. Recital 2 states that the basis in EU law for these provisions is Article 16 of the Treaty on the Functioning of the European Union (TFEU). Article 16 of the TFEU limits the scope of EU law and Member States’ national security is a paradigm example of activity that falls outside the scope of EU law.[footnote]Treaty on European Union, Article 4.[/footnote] Consequently, where law enforcement uses biometric technologies in the context of national security, the proposed regulation would not apply. We consider this to be particularly problematic, given that the use of biometric technologies for national security purposes was identified by the former Biometrics Commissioner as giving rise to considerable concern due to a lack of adequate oversight.[footnote]Biometrics Commissioner. (2020). Annual Report 2019, chapter 4. Available at: https://www.gov.uk/government/publications/biometrics-commissioner-annual-report-2019[/footnote]
    10. Where law enforcement is permitted to use biometric technologies in publicly accessible spaces for one of the aforementioned purposes, the use is still subject to further constraints. First, the use must be limited and proportionate, taking into account the seriousness, likelihood and scale of potential harm caused in absence of the use of the technology, as well as the rights impact caused by using the technology.[footnote]EU Artificial Intelligence Act, Article 5(2).[/footnote] In addition, prior authorisation must be given by a judicial or independent authority.[footnote]EU Artificial Intelligence Act, Article 5(3).[/footnote] This function is more akin to the role played by the Investigatory Powers Commissioners than this Review’s proposed national Biometrics Ethics Board. Finally, Member States must implement national legislation concerning the use of real-time biometric technologies by law enforcement in publicly accessible spaces prior to its use.[footnote]EU Artificial Intelligence Act, Article 5(4).[/footnote] That legislation can be more restrictive than the proposed regulation by only allowing it for some of the three explicit situations.

      Biometric identification systems

    11. Both real-time and post remote biometric identification systems are categorised as high-risk.[footnote]Annex III to the Proposal for a Regulation of the European Parliament and of the Council, Section 1.[/footnote] In order to ‘mitigate the risks for health, safety and fundamental rights’,[footnote]EU Artificial Intelligence Act, Recital 43.[/footnote] high-risk biometric technologies are subject to several requirements. There are three of particular note.
    12.  First, high-risk biometric technologies must undergo an ex ante conformity assessment, which must be carried out by a designated testing authority.[footnote]EU Artificial Intelligence Act, Articles 19 and 43.[/footnote] The conformity assessments extend to an examination of source code, where necessary.[footnote]EU Artificial Intelligence Act, Article 64(2).[/footnote] The conformity assessment explores issues such as statistical bias, which must be mitigated in the training and testing of datasets of high-risk biometric systems.[footnote]EU Artificial Intelligence, Article 10(3).[/footnote]
    13.  Secondly, both real-time and post remote biometric identification systems must be designed and developed in a way that enables human oversight and intervention while the system is in use.[footnote]EU Artificial Intelligence Act, Article 14.[/footnote] The Surveillance Camera Code of Practice already mandates human intervention before a decision is made based on the output of a facial recognition system.[footnote]Home Office. (2013). Surveillance Camera Code of Practice, paragraph 3.2.3. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1055736/SurveillanceCameraCodePractice.pdf[/footnote] The rationale is that it mitigates the likelihood of a false positive occurring on the basis of a wholly automated decision.[footnote]Bridges, paragraph 184.[/footnote] However, there may be edge cases where mandatory human intervention inadvertently precludes an individual’s right to object to a decision made based solely on an automated decision under the EU GDPR.[footnote]EU GDPR, Article 22(1).[/footnote] Nonetheless, the benefit caused by mandating the opportunity for human oversight may outweigh the detriment suffered by individuals unable to exercise their right to not be subject to a decision based on solely automated processing.
    14.  Thirdly, providers of ‘high-risk’ AI systems have to implement a quality management system that, amongst other things, involves testing and validation procedures to be carried out before, during and after development.[footnote]EU Artificial Intelligence Act, Article 17(1).[/footnote] In parallel, users of high-risk biometric technologies must monitor the use of the system for problems, passing on information to providers where they are identified.[footnote]EU Artificial Intelligence Act, Article 29(4).[/footnote] Concerningly, these measures seem to be aimed at issues inherent in the technology, rather than also seeking to mitigate problems that might arise due to the way an AI system is used. For example, there doesn’t appear to be any oversight or mitigating actions to prevent harm caused due to users of high-risk biometric systems deviating from a provider’s recommended false positive rate, thereby increasing the likelihood wrong identification. An example of the issues that can arise has been demonstrated by the ACLU,[footnote]American Civil Liberties Union.[/footnote] which ran a test on US Congress members using Amazon’s Rekognition facial recognition technology with a confidence threshold set below the recommended level,[footnote]Ghaffray, S. (2019). ‘How to Avoid a Dystopian Future of Facial Recognition in Law Enforcement’. Vox. Available at: https://www.vox.com/recode/2019/12/10/20996085/ai-facial-recognition-police-law-enforcement-regulation[/footnote] misidentifying 28 members of Congress as criminals, and disproportionately providing false matches for Black and Latinx lawmakers.
    15.  It is also worth noting that the proposal clarifies that the condition for processing special category data under Article 9(2)(g) of the EU GDPR (‘necessary for reasons of substantial public interest’) includes the purposes of bias monitoring, detection and correction.[footnote]EU Artificial Intelligence Act, Article 10(5).[/footnote] Any processing must include safeguards for fundamental rights, including technical limitations on the reuse of processing.

      Biometric categorisation systems

    16. Unlike biometric identification systems, biometric categorisation systems do not fall expressly into the risk categories. Therefore, determining where such systems fall on the risk spectrum will be undertaken on a case-by-case basis. Nonetheless, there are transparency notifications that apply to biometric categorisation systems.[footnote]EU Artificial Intelligence Act, Article 52.[/footnote] Those deploying such systems must make clear to data subjects that the subject is being categorised, except where they are permitted by law to detect, prevent and investigate criminal offences.[footnote]EU Artificial Intelligence Act, Article 52(2).[/footnote] These exceptions reflect the permitted derogations when an individual wishes to exercise their rights under Part 3 of the DPA 2018.
    17. The proposal also intends for voluntary codes of conduct to be drawn up, which would apply to AI systems other than high-risk AI systems,[footnote]EU Artificial Intelligence Act, Article 69(1).[/footnote] which would include biometric categorisation systems. 
    18. The potential rights impact caused by biometric categorisation systems can be equal to the potential rights impact caused by biometric identification systems. Therefore drawing a distinction between the two appears to be artificial and it is difficult to discern a clear basis for the proposed regulation holding that lower transparency requirements and self-regulation are considered adequate protections for biometric categorisation systems. There appears to be little justification for not deeming biometric categorisation systems as high-risk, thereby subjecting them to the more onerous obligations of high-risk AI systems.
    19. On 29 November 2021, the Council of the European Union published its compromise text of the Act (i.e. a response to the original text).  One notable amendment was the suggestion that biometric systems be defined as high-risk where such systems are ‘intended to be used for the “real-time” and “post” biometric identification of natural persons without their agreement’.[footnote]Council of the European Union. (2021). Presidency Compromise Text, Annex III, paragraph 1. 2021/0106 (COD). Available at: https://data.consilium.europa.eu/doc/document/ST-14278-2021-INIT/en/pdf[/footnote]
    20.   Such a change would seemingly increase the scope of biometric systems considered high-risk, as the amended definition applies irrespective of whether the identification is taking place remotely, and requires consent to have been obtained for a system to be defined as not high-risk. However, it is important to note that it is currently not clear whether any such amendments will be adopted in the final text of the Act.

7. Evidence

The current legal framework and what regulation should look like

    1. None of the individuals with whom we spoke thought that the current legal framework was fit-for-purpose, though there was a considerable divergence of views over what an improved framework should look like. This ranged from those who believed fundamental change was essential and others who believed that imminent changes would be sufficient to rectify existing deficiencies.
    2. Liberty and Big Brother Watch, for example, both consider that a fit-for-purpose legal framework would have to include an outright ban on the use of LFR – a technology which they consider causes ‘unmitigable’ human rights issues. That approach has been adopted, for example, in California, which in 2019 introduced a 3-year ban on the use by law enforcement agencies of LFR,[footnote]Paulson, E. (2019). ‘California bans police use of facial recognition for three years’. ITPro. Available at: https://www.itpro.co.uk/policy-legislation/34603/california-bans-police-use-of-facial-recognition-for-three-years[/footnote] and Amazon, IBM and Microsoft have also announced the suspension of sales of LFR technology to police forces.[footnote]Paul, K. ‘Amazon to ban police use of facial recognition software for a year’. The Guardian Available at: https://www.theguardian.com/technology/2020/jun/10/amazon-rekognition-software-police-black-lives-matter; Dastin, J. and Vengattil, M. (2020). ‘Microsoft bans face-recognition sales to police as Big Tech reacts to protests.’ Reuters Available at: https://www.reuters.com/article/us-microsoft-facial-recognition-idUSKBN23I2T6; BBC News.(2020). ‘IBM abandons ’“biased” facial recognition tech.’ Available at: https://www.bbc.co.uk/news/technology-52978191[/footnote] Following our evidence sessions, in August 2021 over 30 human rights organisations published an open letter calling on the UK Government to ban the use of LFR in public.[footnote]Privacy International and other Civil Society Groups. (2021). Live Facial Recognition Technology should not be used in public spaces. Available at: https://privacyinternational.org/sites/default/files/2021-08/LFRT%20Open%20Letter%20Final.pdf[/footnote] A non-binding resolution banning the use of LFR in public by police was also overwhelmingly approved by the European Parliament in October 2021.[footnote]European Parliament. (2021). Resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters. 2020/2016(INI)). Available at: https://www.europarl.europa.eu/doceo/document/TA-9-2021-0405_EN.html[/footnote]
    3. Government ministers, Baroness Williams and Kit Malthouse MP emphasised to us their 2019 manifesto commitment for police use of biometrics and the need to introduce a legal framework to ‘put it beyond doubt that we are operating in a legal manner’, but were less clear as to how that would be done. Others felt that the intrusive impact of such technology could, at least, be tempered by an improved legal framework (as well as accuracy improvements – on which see further at 7.26. below).
    4. Notwithstanding these differences, there was almost complete consensus that new legislation is necessary in this field. While Lindsey Chiswick, Director of Security in the Metropolitan Police Service, considered that the introduction of guidance to govern the use of LFR would be sufficient to provide an adequate legal framework, most interviewees felt that a statutory footing for the use of intrusive biometric technologies was a necessary development in the field. Dr Daragh Murray, for example, observed that reliance on the common law ‘could lead to arbitrariness’, with the current framework ‘not currently sufficiently clear to guide activity’, and the then Surveillance Camera Commissioner described himself as ‘unimpressed that the police believed all they needed to do was publish guidance’. Some police experts, however, are in favour of legislation: Detective Chief Superintendent Chris Todd of West Midlands Police underscored how difficult the current legal framework is for the police to apply operationally: ‘the [existing] legislation was written before relevant technologies were normalised. In the absence of any specific regulatory framework, the police are having to work with that legislation and guidance and take a case-by-case basis.’ That clearly increases the scope for error and arbitrariness which could be addressed by legislation. Interviewees did note that existing duties under the HRA, Equality Act, and the DPA 2018 were useful and ought to be maintained in the next phase of biometric regulation.
    5. In terms of the substance of a new legal framework, the predominant view expressed to us was that it ought to be technologically neutral and that it should not only take account of the type of data in issue (e.g. personal data), but also the purpose for which data was collected or used and the degree of interference with personal rights. That would enable legislation to take what Amba Kak of AI Now described as ‘a risk-based approach’, recognising the prevailing view that ‘the most pernicious uses of biometrics [at present] are LFR and the least pernicious are 1:1 matching.’
    6. In terms of the procedure or permissions which would be necessary for the use of biometrics, most interviewees rejected the idea of a warranty system (by which specific authorisation via warrants would be necessary before biometric technologies could be deployed). They considered it to be too cumbersome and opaque, and we had particular interest in the views of those, such as police, who would have to work with such warranty requirements. They highlighted that much can be achieved by co-operation and dialogue to ensure ‘improved working practices and organisational standards that demonstrate regard for human rights principles’.[footnote]Elaine Scott and David Hamilton from Police Scotland. A very similar view was expressed by the former Biometrics Commissioner.[/footnote] There was significant agreement to the need for certain procedural steps (such as accuracy testing or impact assessments) to be conducted in order for deployments to be lawful; but most interviewees had not given much thought to exactly what new legislation ought to look like. Robin Allen QC and Dee Masters from AI Law Hub noted that a weakness with the current system is that it permits claims to be brought once there has been a breach of a legal rule (for example, of the HRA or Equality Act) but does not provide sufficient prior protection to prevent those breaches from occurring. We were strongly persuaded that this is something that a new legislative framework should attempt to address.  We agree, and the capacity to put in place safeguards prior to deployment considered elsewhere, including under Recommendation 8.
    7. All witnesses thought that legislation would have to be supplemented by guidance and codes of practice to provide comprehensive governance. The former Surveillance Camera Commissioner described the Surveillance Camera Code of Practice as ‘very weak and old … a new regulator would need an up-to-date Code of Practice to provide new guidance’.[footnote]This is reflected in the fact that the first major amendment to the Code since 2013, was published in August 2021.[/footnote] The Centre for Data Ethics and Innovation (CDEI) observed that ‘many groups have called for a Code of Practice to strengthen governance of police deployment’ of biometrics; and the evidence we took from the Metropolitan Police, West Midlands Police, and the College of Policing confirmed that police stakeholders also see the need for additional guidance to be introduced to ensure, in the words of Chris Todd of West Midlands Police, ‘consistency across forces’.

      The Court of Appeal judgment in Bridges

    8. We asked all witnesses about the Court of Appeal judgment in Bridges. We were surprised by the very different interpretations they had of the judgment – in particular, the view of some witnesses, including Government ministers, that the Court of Appeal had said that South Wales Police’s use of LFR was lawful when, in fact, the Court found the opposite.
    9. This misunderstanding seems to have arisen in two ways, First, the Court of Appeal found that the police have a common-law power to use LFR. That was interpreted by some witnesses as meaning its use was lawful. The important error here, is that the Court of Appeal found that, while there was a common-law source of the power, the exercise of that power was not ‘in accordance with law’ for the purposes of Article 8 of the European Convention on Human Rights (ECHR). This was because there was an insufficient legal framework to protect individual rights: the legal framework did not comply with the required standards of accessibility and foreseeability. That made the exercise of the power unlawful. It is a fundamental legal principle that the existence of a power does not automatically mean that its exercise is lawful. But that distinction appears to have been overlooked by some witnesses.
    10. Secondly, others seemed to think that, because the Court of Appeal did not find that South Wales Police had ‘broken’ any law, the use of LFR was lawful. Again, that overlooks the Court of Appeal’s finding that there was no adequate legal framework in place, and insufficient impact assessments had been performed. Those failures rendered the use of LFR unlawful.
    11. If an appropriate legal framework is to be introduced, the deficiencies in the current legal framework must be frankly addressed. We were concerned about the lack of understanding displayed among those, including lawmakers, who will have a role in determining a future legal framework as to what has been deemed to be deficient in the existing system.
    12. Notwithstanding the above, many witnesses (including police witnesses) described the Bridges judgment as ‘useful’ in setting the parameters within which further development of the law around LFR can occur. Some were disappointed that the judgment did not go far enough. For example, Liberty (who were involved in the case) were disappointed that the Court of Appeal overlooked the impact of a privacy intrusion that occurs on groups or categories of people when LFR is deployed on crowds. Instead it assessed the impact of LFR on the rights of individuals, which the Court of Appeal then considered to be limited because of the automatic deletion of individual images. Liberty believed this was a misunderstanding of the nature of the interference with privacy rights caused by LFR and how its necessity, proportionality, and other purported justifications, as well as its discriminatory impact, should be assessed.

      Oversight and regulators

    13. There was near unanimity that oversight and regulator structures are unclear, fragmented and confusing. We were struck how the need for clear and firm guidance was sought by all the interviewees even when their views differed on other issues. Police witnesses, for example, described how difficult it was to know who to go to for advice or guidance. Regulators themselves described how their functions overlapped risked confusion or gaps in the overall framework.
    14. The former Biometrics Commissioner and Surveillance Camera Commissioner were frank about their experiences of the roles. We were especially grateful to them for their candour. The former Biometrics Commissioner wondered whether the commissionership ‘does the job legislators intended’ because it is ‘too easy to side-line and there are no obligations on relevant bodies in Parliament or in Government’ to meet with or take the Commissioner’s recommendations into account. The former Surveillance Camera Commissioner noted that ‘surveillance takes many modalities’ with biometrics being ‘an important aspect but not the sole issue’. Conflict over the scope of their respective roles had not been a problem because of the good working relationship between the two Commissioners, but we heard from various witnesses that the distinctions and overlaps between the Biometrics Commissioner, Surveillance Camera Commissioner and Forensic Science Regulator could be problematic in terms of the transparency and legitimacy of regulation. Suzanne Shale who chairs the London Policing Ethics Panel described the Panel as having been ‘struck’ by ‘regulatory confusion regarding who has competence to look at these issues’. The former Forensic Science Regulator identified some areas – such as the use of biometrics in the family courts – over which none of the existing regulators appear to have jurisdictional competence. Fragmentation exists at ministerial level too, with Baroness Williams responsible for biometrics but Kit Malthouse MP responsible for facial matching.
    15. The ICO’s evidence was that it is empowered and competent to act as the regulator of biometrics. Other witnesses were less confident that this was or would be an effective arrangement. They emphasised that since the ICO has general jurisdiction over ‘data processing’ and ‘because the world is increasingly data driven, the ICO could have an unlimited remit’. This may be damaging to democratic engagement and control,[footnote]Evidence of the Surveillance Camera Commissioner.[/footnote] and the ICO’s focus on individual privacy might cause the group privacy concerns associated with biometrics to be overlooked.[footnote]Evidence of the Biometrics Commissioner.[/footnote]
    16. There were some positive comments about the approach adopted in Scotland, where the Biometrics Commissioner (a role established in 2020, with Dr Brian Plastow announced as the first Commissioner in March 2021) has greater independence, being appointed by the Scottish Parliament rather than the executive. That role has a clear remit to draw up a Code of Practice to meet principles legislated for by the Scottish Parliament. It was ‘a good method of governance’, in the view of the outgoing England and Wales Biometrics Commissioner.

      Ethics

    17. We had useful conversations about ethics with various witnesses. Under Chris Todd, West Midlands Police has developed a Digital Ethics Panel to which all considerations of new technologies are referred for advice; but that is a local arrangement not yet replicated in other forces. Suzanne Shale found it ‘striking to see how little ethical scrutiny there was of trials of policing technologies of the population’. Having come from a health background, where ethical considerations are embedded in practice, she thought that a similar approach could work in relation to biometrics. It was her view that underlying good practice, ‘there are a set of choices on how to conduct oneself and promote ethical behaviour.’ An Ethics Panel can help with those ethical judgments.
    18. While we had sought to interview members of the Biometrics and Forensics Ethics Group (BFEG), in the event we were unable to arrange an interview. We have, however, taken account of their useful publications. In considering the issue of ethics with those we did interview, we were struck that very few were aware of, or volunteered information relating to, BFEG. In our view, this was indicative of the limited remit BFEG has been given to advise the Home Office rather than to provide broader ethical guidance to those deploying biometric technology.
    19. Overall, we found considerable support for the creation of a national biometrics ethics group, in particular in relation to police use of biometrics. Chris Todd was keen for the West Midlands model to be expanded nationally, and the College of Policing also considered that ‘the concept of a national ethics body is a good thing’. Suzanne Shale highlighted the benefit that can accrue from ‘externality, especially with historically closed institutions’ but warned that ‘externality can be weak because it can be easy to ignore the advice.’
    20.  Taking this into account, we asked interviewees two important questions about an independent Biometrics Ethics Board in practice:1. Should an independent Biometrics Ethics Board have a mandatory remit? Planned uses of biometric technology (particularly by the police but also, potentially, by other bodies including private bodies) would be legally required to undergo scrutiny by the Board before deployment against the public.
      2. Should an independent Ethics Board have the right to veto deployments or merely an advisory-only role?
    21. For the avoidance of doubt, we made clear that on both proposals – mandatory referral and the more extreme step of the power of veto – any rules may be subject to exceptions such as public emergencies, or agreement by the Ethics Board that scrutiny was unnecessary.
    22.  Views on these questions were mixed.
    23.  On mandatory referral of new biometric technology for consideration by an Ethics Board before deployment, most of those who worked in or with policing were opposed to it. Lindsey Chiswick from the Metropolitan Police Service did not think mandatory referral was necessary because the police would want an Ethics Board’s insight and so would make voluntary referrals; that view was echoed by the College of Policing. We queried this position on the basis that if it was correct, mandatory referral should be of no detriment to the police: it would merely be mandating what they wanted to do anyway. But, in further discussion there appeared to be a point of principle in play as to whether the decision to make a referral to an Ethics Board should always be a voluntary process for the police to engage, rather than one the police should be required to go through. This caused us to contemplate how effective an Ethics Board would be, if referrals to it were left entirely to the discretion of the very public authorities it was there to scrutinise.
    24.  Additionally, the Metropolitan Police had practical concerns that mandatory referral to an Ethics Board would add ‘more hoops to jump through’ which ‘may not achieve better policing’. Suzanne Shale thought it would be difficult to formulate a schema for mandatory referral to the London Policing Ethics Panel – but that Panel’s focus is broader than biometrics (it oversees any ethical issue arising in policing) and that difficulty may, therefore, be easier to overcome for a biometrics-specific Ethics Board.
    25.  On the question of whether such a Board should be able to impose a binding veto on the use of new biometric technology, there was a general consensus that it should be advisory. However, the College of Policing was prepared to contemplate that there might be ‘exceptional circumstances’ in which it could veto a planned use.

      Accuracy and bias

    26.   Accuracy and bias were two of the most frequently cited concerns with biometric technology, along with the associated risks of privacy intrusion and power imbalances.
    27.  It was recognised that bias can arise in multiple ways in the deployment of biometric technologies: it may be inherent to the technology, or it may accrue in the manner in which it is used (reflecting, for example, existing bias in policing practices who have disproportionate engagement with particular communities).
    28.  One acute example of bias in biometric technology is the apparent embedding of race and sex bias in the computer vision software tools that facial recognition developers use. MIT researchers found that commercial face classification algorithms performed better on male than on female faces, and also on lighter than on darker ones with an error rates of up to 35% for darker female faces. That compares with an error rate for white male faces of 0.8%, at highest.[footnote]Buolamwini, J. and Gebru, T. (2018). ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Proceedings of Machine Learning Research, 81:pp.77-91.[/footnote] Similar findings have been made in tests of commercial facial recognition systems (for ID pictures, not video cameras) where all of these had biased performance for various characteristics including skin reflectance, gender, age and even height.[footnote] Cook, C. M., Howard, J. J., Sirotin, Y. B., Tipton, J. L. and Vemury, A. R. (2019). ‘Demographic Effects in Facial Recognition and Their Dependence on Image Acquisition: An Evaluation of Eleven Commercial Systems’. IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 1, no. 1, pp. 32-41. doi: 10.1109/TBIOM.2019.2897801.[/footnote]
    29.  Bias in biometric technology is often also caused by statistical bias. An example of statistical bias is selection and sampling bias, where a dataset does not reflect the subjects being scrutinised. In evidence to us, the former Biometrics Commissioner argued that the use of biometric technologies should reflect the demographics of the population that will be subject to it, which ‘could mean the UK population or it could mean the population of an area of interest to the police’. Accuracy bias may be improved by, for example, analysing and reviewing the datasets which are used,[footnote]Evidence of Detective Chief Superintendent Chris Todd, West Midlands Police.[/footnote] leading to fewer automated false positives and false negatives. For the former Forensic Science Regulator, ‘the key part of standards in relation to discrimination is the requirement to scientifically test and understand the implications of what you are doing.’ In the same vein, the former Surveillance Camera Commissioner recommended that unless bias is removed or significantly reduced, police forces should continue to carry out context-specific trials and continue to monitor the results. This is not straightforward: one of the challenges in addressing bias is that some rights-protecting safeguards (for example, the immediate deletion of data by LFR systems where no positive match to a watch list is made) inhibit the possibility of post-facto bias analysis. This is because the deleted data cannot be checked or assessed to ascertain any bias in the operation of the system. That does not mean that material should be retained (indeed, to do so might violate data protection law), but it highlights the difficulties in overcoming bias in facial recognition technology, and the acute need for testing and protection against discrimination to be rigorously performed prior to a system’s deployment. Other proposals for reducing bias in facial recognition technology include: improving diversity in training datasets; mandatory standards for accuracy; higher quality photo capture; and tailoring of threshold settings to different demographics to ensure greater accuracy.[footnote]Mclaughlin, M. and Castro, D. (2020).‘The Critics Were Wrong: NIST Data Shows the Best Facial Recognition Algorithms Are Neither Racist Nor Sexist’. Information & Technology Innovation Foundation. Available at: https://itif.org/sites/default/files/2020-best-facial-recognition.pdf[/footnote]
    30.  However, in the view of some interviewees, even if accuracy bias is overcome, ‘there are wider discriminatory issues’ which cannot be overcome.[footnote]Evidence of Silkie Carlo, Big Brother Watch and Megan Gould, Liberty.[/footnote] In Liberty’s view, remedying accuracy deficiencies in biometric technologies would merely lead to ‘the perfection of surveillance technology’, which would continue to have a significant detrimental impact on individual rights. The former Biometrics Commissioner reasoned that ‘it’s important to make sure that biometric technology doesn’t make worse the discriminatory problem it was supposed to address.’

      Taking account of public opinion

    31. We asked interviewees for their views on the extent and manner in which public opinion could validly inform the regulation of biometric data. This included the extent to which public consent, or tacit assent, to data collection and processing could supersede legal protections that might otherwise be in place. This was of interest to us for a number of reasons. First, because public authorities often seek to justify their approach to privacy by reference to a perception of what the public would be willing to accept in a balance between achieving particular goals – such as the prevention of crime – and reducing their right to biometric data privacy. Second, because if such public opinion is relied upon, the technical nature of the area in question, in combination with the complex intrusions of personal freedom that biometrics can occasion, makes informed public engagement difficult.
    32.  Most of those from whom we took evidence agreed there are real challenges in public engagement and public understanding of the risks of biometric technology use. The police interviewees were clear that policing rests on democratic legitimacy and that public understanding and consent is therefore crucial.
    33.  The Government ministers with whom we spoke considered that this was adequately addressed by the inclusion in a manifesto of biometric-related commitments. An election victory itself provided the necessary endorsement of proposed biometric data use by public authorities and the extent to which they might intrude on privacy protections.
    34.  Others took a more nuanced approach. Liberty recognised the ‘inherent dangers with allowing the public to determine the outcome of these issues, since they are complex and can be misunderstood’. The CDEI also warned about the need to ‘be wary of following majority opinion where minority groups may be more affected than the majority.’
    35.  The former Surveillance Camera Commissioner considered that ‘there has been no proper informed consent in relation to the deployment of biometric technologies so far. The public do not fully appreciate the ways in which biometric technologies work or the implications of their deployment – there needs to be better public engagement.’
    36.  The ICO thought that citizen councils might be an appropriate means of ensuring sufficient understanding underpinned any public opinion that would then influence policy-making. On this latter point we benefited from the Ada Lovelace Institute convening such councils while we were conducting our research. They led to a series of thoughtful recommendations being proposed.[footnote]Ada Lovelace Institute. (2021). The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/[/footnote]
    37.  The democratic legitimacy element of public engagement was also important for Big Brother Watch who thought that, nevertheless, public opinion ‘doesn’t necessarily have to inform the structure of regulation’.
    38.  Few interviewees demurred from what we believed to be the starting point of our analysis in this area: public engagement is essential, and insofar as it can be determined, understanding public opinion is important. But ultimately the determination of how law and regulation protects fundamental rights is to be determined by legislators and regulators.

8. Recommendations

    1. Taking account of all the views expressed by those with whom we talked, as well as extensive research from expert bodies in the public domain, we have formulated 10 recommendations for the development of the governance of biometric data in England and Wales.

      Recommendation 1: The need for a new legislative framework

    2. First, we recognise that there is an urgent need for a new legislative framework specifically addressing and making provision for the regulation of biometric data.
    3. This recommendation responds to, and endorses, the growing awareness (not just in the UK) that legislation is necessary as the starting point for governance of biometric data, in order to ensure a clear, accessible and rights-compliant basis for regulation and use of biometric technologies. The need for legislation has become clearest, perhaps, in the context of uses of LFR: the Bridges judgment pointed to the inadequacy of the current legal framework for police use of LFR, and the former Surveillance Camera Commissioner, in evidence to the Review, made clear that the problem with mass deployment of biometrics, and particularly live facial recognition, goes beyond law enforcement. Vendors of CCTV systems now offer facial recognition as standard and many public and private organisations have to make a decision on whether to enable the facility.[footnote]Sabbagh, D. (2019). ‘Lack of guidance leaves public services in limbo on AI, says watchdog’. The Guardian. Available at: https://www.theguardian.com/technology/2019/dec/29/lack-of-guidance-leaves-public-services-in-limbo-on-ai-says-watchdog[/footnote] Some companies such as Microsoft have called for specific legislation on facial recognition. The company states plainly that the ‘use of facial recognition technology could unleash mass surveillance on an unprecedented scale’ and wants to avoid a commercial race to the bottom on rights by creating a level playing field.[footnote]Smith, B. (2018). ‘Facial recognition: It’s time for action.’ Microsoft On the Issues. Available at: https://blogs.microsoft.com/on-the-issues/2018/12/06/facial-recognition-its-time-for-action/[/footnote] The European Data Protection Supervisor has noted that ‘in the absence of specific regulation so far, private companies and public bodies in both democracies and authoritarian states have been adopting [LFR] technology for a variety of uses. There is no consensus in society about the ethics of facial recognition, and doubts are growing as to its compliance with the law as well as its ethical sustainability over the long term.’[footnote]Wiewiórowski, W. (2019). ‘Facial recognition: A solution in search of a problem?’. European Data Protection Supervisor. Available at: https://edps.europa.eu/press-publications/press-news/blog/facial-recognition-solution-search-problem_en[/footnote]
    4. The prominence of LFR in public debate at the moment should not, however, cloud the fact that the same principles will apply, and the same risks are likely to emerge, from the development and use of other biometric technologies. Indeed, the witnesses with whom we spoke recognised that other technologies (some not yet fully developed) could be equally rights-intrusive and that legislation should, so far as possible, be technologically neutral to account for future developments as well. Legislation would therefore need to be procedure and principle-setting, rather than use-specific. The police and public authority witnesses with whom we spoke also generally embraced the need for legislation, recognising that democratic legitimacy and their operational capacity would be improved by clear statutory rules setting out the way in which biometric technology could be used.
    5. In June 2021, the Prime Ministerial Taskforce on Innovation, Growth, and Regulatory Reform, known as TIGRR, published a wide-ranging report on future regulation for the UK. It included proposals to replace the UK GDPR with a novel ‘UK Framework of Citizen Data Rights’ in order to ‘cement [the UK’s] position as a world leader in data’.[footnote]Taskforce on Innovation, Growth, and Regulatory Reform (TIGRR). (2021). Independent Report, paragraph 204. Available at: https://www.gov.uk/ government/publications/taskforce-on-innovation-growth-and-regulatory-reform-independent-report[/footnote] That new framework would be ‘based more in common law’, which is contrasted with the ‘prescriptive’ and ‘inflexible’ UK GDPR.[footnote]TIGRR. (2021). Paragraphs 206–207.[/footnote] It has an emphasis on reforming the data protection regime in order to ‘boost innovation’. Although the proposals are high level and occasionally conflicting, it is worth noting that Oliver Dowden MP, the then Secretary of State for Digital, Culture, Media and Sport indicated a desire to amend the current data protection regime. As he said in a recent policy paper, ‘the UK now controls our own data protections laws and regulations’[footnote]DCMS. (2021). Digital Regulation: Driving growth and unlocking innovation. Available at: https://www.gov.uk/government/publications/digital-regulation-driving-growth-and-unlocking-innovation/digital-regulation-driving-growth-and-unlocking-innovation#our-digital-regulation-principles[/footnote] and can therefore remove what the Government sees as ‘unnecessary barriers to data use’.[footnote]DCMS. (2021).[/footnote] That suggests a potential move towards the liberalisation of standards required for the use of biometric data, rather than the introduction of stronger safeguards. In our view this would be inconsistent with what the regulatory landscape requires and what we heard from most of the witnesses who gave evidence to us. Indeed, we also do not see that such a move away from the standards contained in GDPR would be possible, because of the importance of the GDPR to international data protection. If we intend to maintain any cross-border data transfer co-operation with the EU, we will have to meet the minimum standards of the GDPR. In our view any TIGRR proposals to the contrary are unlikely to be adopted.
    6. Calls for the introduction of new legislation are happening globally. In October 2020 the Global Privacy Assembly, composed of a majority of the world’s data protection authorities, adopted a resolution on facial recognition technology that reiterated the importance of ‘legal frameworks that are fit for purpose in regulating evolving technologies such as facial recognition technology’.[footnote]Global Privacy Assembly. (2020). Adopted resolution on facial recognition technology. Available at: https://edps.europa.eu/sites/default/files/publication/final_gpa_resolution_on_facial_recognition_technology_en.pdf[/footnote]
    7. In the United States, the AI Now Institute, a world-class policy research centre at based at New York University (whose Director of Global Policy gave evidence to the Review), has addressed some of the constraints of existing data regulation as the source of biometric regulation:‘While data-protection laws have made fundamental shifts in the way companies and government approach the collection, retention, and use of personal data, there are clear limitations on their ability to address the full spectrum of potential harms produced by new forms of data-driven technology, like biometric identification and analysis.’[footnote]Kak, A. (ed.). (2020). Regulating Biometrics: Global Approaches and Urgent Questions. AI Now Institute. Available at: https://ainowinstitute.org/regulatingbiometrics.html[/footnote]
    8. This is in part because data protection laws focus on ‘individual (rather than group) conceptions of harm [which] fails to meaningfully address questions of discrimination and algorithmic profiling’.[footnote]Kak, A. (ed.). (2020).[/footnote] This was also a concern we heard from Big Brother Watch, particularly when discussing the Bridges The use of existing sources of law, both data protection and human rights law, as the entry point for biometric governance, fails to take into account some of the specific features and specific risks posed by biometrics, particularly on the group level.
    9. A new English statutory framework need not attempt to solve these issues in isolation, but can tackle lacunae in existing law by drawing on developments taking place elsewhere. Legislators in the United States have proposed several bills to address the need for specific biometrics regulation. The Algorithmic Accountability Act 2020 would have required entities that use, store, or share personal information to audit and conduct impact assessments for ‘high-risk’ automated systems, including those that can generate discriminatory outcomes.[footnote]Algorithmic Accountability Act, H.R.2231, 116th Cong. (2019) <https://www.congress.gov/bill/116th-congress/house-bill/2231>[/footnote] An updated Act has recently been reintroduced, containing similar provisions to the previous iteration.[footnote]Algorithmic Accountability Act 2022 <https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202022%20Bill%20Text.pdf>[/footnote] The No Biometric Barriers to Housing Act[footnote]No Biometric Barriers to Housing Act, H.R.4008, 117th Cong. (2021) <https://www.congress.gov/bill/117th-congress/house-bill/4360?s=1&r=87>[/footnote] would ban the use of biometric recognition technology in some dwelling places, while the Commercial Facial Recognition Privacy Act 2019[footnote]Commercial Facial Recognition Privacy Act, S.847, 116th Cong. (2019) <https://www.congress.gov/bill/116th-congress/senate-bill/847>[/footnote] strengthens transparency requirements and consent in that context. The European Parliament and the European Commission have detailed ongoing projects to regulate the both the use of artificial intelligence generally and the use of biometric data, in particular.
    10.  In our view the new legislation should make provision for four stages of biometric technology development: (i) testing, (ii) piloting, (iii) use and (iv) evaluation. Before any testing, piloting or use on members of the public, legislation should, at minimum, make provision for specific procedural duties that must be satisfied:1. The conduct and publication of an equality impact assessment, following the requirements imposed by Section 149 of the Equality Act 2010 but strengthening that requirement by requiring publication of the assessment to ensure transparency.
      2. The conduct and publication of a privacy impact assessment, which should consider the individual and group privacy rights ramifications of the intended deployment.
      3. An accuracy assessment, by which the technological specifications and performance of the technology (including, for example, the particular software to be deployed) is assessed.
      4. A necessity and proportionality analysis, requiring up-front consideration by the intended user of a biometric technology whether that use is (i) necessary in pursuit of a legitimate aim and (ii) proportionate, including whether a less intrusive means of pursuing the legitimate aim could be used and whether a fair balance will be struck between the various rights and interests at stake.
      5. Where the intended user is a public body, mandatory referral to an Ethics Board (see Recommendation 7).
    11. There are two further features of a new legislative framework that are of critical importance.
    12.  First, the introduction of a new legal framework should simplify rather than complicate, the existing patchwork of statutory bodies overseeing the law and regulation of biometric data. We were struck by numerous witnesses expressing concern over potential confusion over which commissioner or regulator had key responsibility over the safeguards relating to a new technology and how overlapping roles were resolved. This is important in relation to Recommendation 3, below, on the importance of clear codes of practice. In the absence of a clearer oversight structure, the numerous codes of practice or guidance notes issued by different public authorities at various times create confusion, rather than clarity.
    13.  Second, the new legal framework should include a mechanism for prior authorisation – or at least prior consultation – with a statutory body prior to the use of new biometric technology, or existing technology in a new way. This would overcome the problem identified by Robin Allen QC and Dee Masters, namely that the current legal framework only provides for legal action vindicating individual rights to be brought once there has been a breach of those rights. Authorisation and/or consultation prior to use would also inform risk assessments and ultimately, judgments on proportionality.

      Recommendation 2: The statutory framework should cover use of biometric data for identification and for classification

    14. All biometric data is personal data because it is data that allows or confirms the unique identification of an individual. However, as discussed at paragraph 5.15, under UK GDPR, Article 9, biometric data is only classified as special category data when it is processed for the purpose of uniquely identifying an individual. The ICO approaches this issue in the following way: ‘If you use biometrics to learn something about an individual, authenticate their identity, control their access, make a decision about them, or treat them differently in any way, you need to comply with Article 9.’[footnote]ICO. ‘What is Special Category Data?’. Guide to the General Data Protection Regulation (GDPR). Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/special-category-data/what-is-special-category-data/[/footnote] But there is a risk that such an approach is open to challenge. Article 9 of the UK GDPR requires the purpose of processing to be for ‘unique identification’. Therefore, when biometric data is processed for another purpose, such as profiling an individual, it may be argued that this does not fall within the Article 9 UK GDPR conditions.
    15.  We consider this uncertainty to be a potential weakness in the current regime, notwithstanding the ICO’s helpful attempt to clarify it. Data that has the capacity to uniquely identify an individual with some confidence is, at present, subject to lesser legal safeguards if it is being used for purposes other than unique identification.
    16. Few witnesses had given consideration to the issues that could arise from the use of biometric data for classification or profiling purposes – though where they had, they recognised that those practices could be significantly rights-intrusive and needed careful regulation too.
    17. The capacity of data to uniquely identify is an important defining characteristic of biometric data. We therefore do not consider that the current definition of biometric data under UK GDPR needs to be changed. While unique identification is potentially the most rights-intrusive use of biometrics, significant detriment may also be caused by the use of biometrics for categorisation or profiling purposes. There is an uncomfortable parallel with erroneous views by lawmakers that because the acquisition of the content of communications is potentially more intrusive than the acquisition of the metadata of communications, the latter needed far less protection.[footnote]See Big Brother Watch and others v UK (Grand Chamber) App No.s 58170/13; 62322/14, 24960/15 at paragraph 364[/footnote] That type of legislative wrong turn[footnote]Regulation of Investigatory Powers Act 2000, Sections 15 and 16.[/footnote] is one that needs to be avoided in relation to the processing of biometric data for categorisation rather than identification.
    18. The European Data Protection Board and European Data Protection Supervisor has deemed biometric categorisation to be sufficiently rights-intrusive to warrant a ban in circumstances where categorisation is undertaken on the basis of protected characteristics.[footnote]EDPS and EDPB, Joint Opinion, paragraph 33.[/footnote] We did not receive sufficient evidence on this point to adopt the call for a ban. However, for the reasons below, we do believe that future regulation of biometric data should embed equal safeguards for biometric categorisation systems as biometric identification systems.
    19. A new legal framework should seek to regulate this use of biometrics for categorisation with clarity. This would also allow new provisions to address discrimination issues that arise when biometric data is processed for purposes other than unique identification. But if biometric data is extracted from an individual, and then used for purposes other than individuation, statute must regulate the steps that a user must comply with, and the safeguards that must attach, to any other use of that data.
    20.   Indeed, the potential practical uses of biometric profiling or classification in everyday life are pertinent to the need to provide for safeguards in the new statutory scheme. Biometric classification may be readily deployable in a wide array of circumstances that would impact on individual liberties, for example, to ascertain eligibility for certain rights and services, including sifting through job applications or determining health status prior to travel. A recent EU-funded development of a border control system called iBorderCtrl deployed biometric classification ‘to detect deception based on facial recognition technology and the measurement of, termed “biomarkers of deceit”’.[footnote]Sánchez-Monedero, J.and Dencik, L. (2020). ‘The politics of deceptive borders: “biomarkers of deceit” and the case of iBorderCtrl’. Information, Communication & Society,p.1. DOI: 10.1080/1369118X.2020.1792530[/footnote] The ways in which such classifications could materially impact on individuals’ lives – and the need for them to be clearly regulated – are self-evident.
    21.  This includes using biometric tools to classify individuals on the basis of characteristics that are protected by the Equality Act, potentially leading to discrimination. The CDEI has raised concerns about algorithmic profiling around the use of Origins software by various police forces in the UK to determine whether particular ethnic groups specialise in particular types of crime.[footnote]Centre for Data Ethics and Innovation (CDEI). (2020). Review into bias in Algorithmic decision-making Available at: https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making[/footnote] The Ada Lovelace Institute has pointed out that such categorisations are particularly problematic if those using the tools are ‘resorting to legal or scientific definitions that are in themselves contested, flawed or constructed in the context of a biased system and may overlook new axes of discrimination that can occur in algorithmic systems’.[footnote]Ada Lovelace Institute and DataKind UK. (2020). Examining the Black Box: Tools for assessing algorithmic systems. Available at: https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/[/footnote]
    22.   All of these factors lead us to conclude that any new legal framework should regulate the use of biometric data for identification and classification. It should not exclude the possibility of further regulation being necessary for other forms of processing if similar privacy rights infringements may arise.

      Recommendation 3: Specific codes of practice on the use of biometric data should be published to regulate particular sectors and/or technologies. A compliance mechanism, similar to that given to the Scottish Biometrics Commissioner, should accompany such codes

    23.   The extent to which new legislation can regulate the detail of how different types of biometric technology are used is limited. While legislation can and should set out the overarching framework, several witnesses emphasised that more fast-moving and granular detailed guidance will be necessary to ensure operational decisions about the use of biometrics are taken lawfully.
    24.   Codes of practice can fulfil that role and provide guidance the issues arising from specific uses of biometric data.
    25.   It is, of course, a precursor to proposing new codes of practice that there must be clarity as to which oversight body has primary responsibility for issuing such codes and what powers they have to do so. There must be consistency in the way guidance is given. This is something a new legal framework must address (See Recommendation 1 above). In relation to LFR, the numerous guidance documents issued by the ICO, the Surveillance Camera Commissioner and police – both at national and local level – illustrate the dangers of a fragmented approach. We have considered this further, below, under Recommendation 9.
    26.   The detail of regulation of biometric technologies will differ between these uses and distinct codes of practice will be required. It is not possible to anticipate the range of uses of biometric data that will require regulation, and therefore focused consideration, rather than being covered by general legislation. For example, multimodal systems (which combine more than one type of biometric in a single system) are currently used in the US-VISIT programme, which uses the face and fingerprints, and in the Indian national identity AADHAAR card, which uses the face, iris and fingerprints. These variations may be best covered by codes of practice relating to use cases, rather than the technology being used.
    27.  Further distinct issues are raised by the development of behavioural biometrics such as gait and voice analysis or automated speech recognition (ASR). A recent paper indicated that significant racial disparities exist in the performance of five popular commercial ASR systems,[footnote]Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z. & Goel, S. (2020). ‘Racial disparities in automated speech recognition’. Proceedings of the National Academy of Sciences, 7684-7689 117 (14). Available at: https://www.pnas.org/doi/10.1073/pnas.1915768117[/footnote] and each technology is likely to present its own challenges, which would need consideration in any appropriate code of practice.
    28.   It is not possible to set out in the abstract, what the content of any of the envisaged codes of practice should be. However, they should impose clear, accessible, and meaningful standards against which deployments of biometric technologies can be assessed and reviewed.
    29.   The processes and thresholds adopted by the codes of practice (for example, a requirement to fulfil specific criteria of reliability before being used; or how proportionality should be assessed) should be clearly defined. The codes of practice should enable public authorities and members to understand how decisions are made and what safeguards are in place.
    30.   An issue discussed at length during the course of our evidence sessions was the status that new codes of practice should have. Again, this is dependent on clarity as to which body should issue them and the statutory framework in which they are doing so.
    31.  Decision-makers will need to know whether they merely have to have regard to such codes, as opposed to being legally required to follow them. The latter approach has been adopted in Scotland, by way of Section 9(1) of the Scottish Biometrics Commissioner Act 2020. That provision states that police ‘must comply’ with the relevant code of practice. A breach of that obligation does not itself give rise to civil or criminal liability, but the Scottish Biometrics Commissioner can issue a compliance notice, failure to comply with which is a contempt of court.
    32.   Provided there is clarity as to who has responsibility for issuing relevant codes of practice within a new legal framework for the regulation of biometric data, we recommend that such codes should have a similar status for relevant stakeholders as the Code of Practice under the Scottish Biometrics Commissioner Act 2020. Relevant stakeholders should be required to comply to an applicable code of practice (though the codes may themselves provide for circumstances where departure from them may be permissible). A failure to comply with an applicable provision by a public authority would potentially be a public law error which could ground judicial review proceedings. But we also consider that a compliance regime (such as the compliance notice and contempt of court approach adopted in Scotland) should be part of the regulatory regime.

      Recommendation 4: A legally binding code of practice governing LFR, and in particular the police use of LFR, should be published by the Government as soon as possible

    33.   We agree with the recommendation in the ICO opinion of October 2019 that there should be a ‘statutory and binding code of practice issued by Government’. [footnote]ICO. (2019). Information Commissioner’s Opinion: The use of live facial recognition technology by law enforcement in public places. Available at: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf [/footnote]
    34.   LFR was the central concern of many of our witnesses. As the ICO’s witnesses told us, there was a particular focus on it ‘due to potential risks identified to data subjects’, and taking account of its active use against the public. Current police use of LFR was the focus of that concern, though we also heard from witnesses concerned about public-private partnership uses (for example, the collaboration between the Metropolitan Police and Argent, a private company who own the King’s Cross development area, to deploy facial recognition technology in the King’s Cross area),[footnote]Sabbagh, D. (2019). ‘Facial recognition row: police gave King’s Cross owner images of seven people’. The Guardian. Available at: https://www.theguardian.com/technology/2019/oct/04/facial-recognition-row-police-gave-kings-cross-owner-images-seven-people[/footnote] and purely private uses (for example, the use by Southern Co-op supermarkets of facial recognition technology in 18 of its stores).[footnote]Wakefield, J. (2020). ‘Co-op facial recognition trial raises privacy concerns’. BBC News. Available at: https://www.bbc.co.uk/news/technology-55259179[/footnote] LFR was described as a being carried out in a ‘vacant regulatory landscape’, with a ‘responsive rather than proactive’ model for regulation.[footnote]Evidence of Suzanne Shale.[/footnote]
    35.   Our LFR-specific recommendations respond to the predominance of LFR in the public consciousness and in the evidence that we received from stakeholders. It does not reflect any special status that we consider attaches inherently to LFR. Many other biometric technologies pose similar risks and challenges – but they are not in use or being developed to the same extent. As additional technologies or additional uses emerge, in our view the safeguards that we currently propose in respect of LFR are very likely to be necessary in other contexts, too.
    36.   The ruling by the Court of Appeal in the Bridges case commented that ‘too much discretion is currently left to individual police officers’ to decide the deployment and targets of LFR, with the Court recommending consistency at the national level.[footnote]Bridges, at paragraph 91.[/footnote] The former Biometrics Commissioner welcomed moves by the Home Office that could deal with this deficiency by creating national guidelines for the use of facial matching by police in England & Wales. We were aware, from our evidence-taking, that the College of Policing was developing guidelines on LFR use and was consulting on a new code of practice on information management by the police generally. The College of Policing has now published authorised professional practice on police use of LFR.[footnote]College of Policing. Authorised Professional Practice: Live Facial Recognition. Available at:https://www.app.college.police.uk/app-content/live-facial-recognition/?s=[/footnote] We understand that the NPCC is also developing guidance on the creation and management of watchlists. We consider that these might provide a useful interim measure, but a statutory and centrally promulgated code of practice will ultimately be necessary to regulate this sensitive area. We are also concerned that some of those bodies seeking to publish their own guidance documents on LFR have differing views on fundamental issues, including the interpretation of the Bridges judgment.
    37.   If the ICO’s recommendation for a single, Government-issued code of practice is taken up, it would clarify the limitations on the use of LFR, setting the required criteria for strict necessity and proportionality and required safeguards.
    38.   The ICO is also consulting on an auditing framework for AI, which would be applicable to LFR users and vendors.[footnote]ICO. (2020). Guidance on the AI auditing framework: Draft guidance for consultation. Available at: https://ico.org.uk/media/about-the-ico/consultations/2617219/guidance-on-the-ai-auditing-framework-draft-for-consultation.pdf[/footnote] A report by the Royal United Services Institute (RUSI) on the use of algorithms by police, commissioned by the CDEI, found the lack of national consistency guidance to be the biggest issue raised by the law enforcement community,[footnote]Babuta, A. and Oswald, M. (2020). ‘Data Analytics and Algorithms in Policing in England and Wales: Towards A New Policy Framework’. RUSI. Available at: https://rusi.org/publication/occasional-papers/data-analytics-and-algorithms-policing-england-and-wales-towards-new[/footnote] and proposed a new code for algorithmic tools in policing as the means to address this current inadequacy. For them, a new code should establish ‘clear roles and responsibilities regarding scrutiny, regulation and enforcement’ for the NPCC, Home Office, College of Policing and regulators such as the ICO and IPCO. This echoes some of the concerns we heard about the fragmentation of regulation in this area.
    39.   RUSI argues that a new code should also create ‘a standard process for model design, development, trialling, and deployment, along with ongoing monitoring and evaluation. It should provide clear operationally relevant guidelines and complement existing authorised professional practice and other guidance in a tech-agnostic way’.[footnote]Babuta, A. and Oswald, M. (2020).[/footnote] We share and echo that view.
    40.   The College of Policing’s new authorised professional practice (APP) on LFR, published in March 2022, is likely the most detailed guidance on LFR, at least in a policing context.  The APP covers the legal framework following the Bridges decision and therefore provides wide-ranging discussion of, for example, the importance of the public sector equality duty.  It also goes beyond strict legal requirements, and covers topics such as operational governance, as well as undertaking ‘community impact assessments’ and committing to ongoing community engagement.
    41.  However, as it is only authorised professional practice, police adherence is discretionary rather than mandatory. This undermines the APP’s ability to ensure standardised practice across the regional police forces.  It also lowers the guidance’s ability to be used as a mechanism for enhancing accountability with police use of LFR.   In our view, a code of practice that was binding on the police would ensure that these issues are addressed.
    42.   The former Surveillance Camera Commissioner made various recommendations for the Home Office and NPCC, ranging from introducing an authorisation process by a senior officer not involved in the operation, to improving guidance for human decision-making and national performance indicators. The College of Policing’s LFR APP recommends that any deployment of LFR is authorised in writing by an authorising officer, not below the rank of superintendent, and that the authorising officer should be distinct from the officer with operational command over LFR deployment ‘on the ground’. The creation of a ‘standard trials methodology’ to provide quality evidence base for future decisions is also recommended by the former Biometrics Commissioner,[footnote]Biometrics Commissioner. (2020). Annual Report 2019, chapter 4. Available at: https://www.gov.uk/government/publications/biometrics-commissioner-annual-report-2019[/footnote] who has also recommended improvements to Home Office data systems, including the Police National Database, to be able to implement privacy-by-design and legal compliance processes, such as the automatic deletion of biometric data.[footnote]Biometrics Commissioner. (2020).[/footnote] We would suggest that these recommendations ought to be taken into account in the content of an LFR Code of Practice.
    43.   A new LFR code will also have to deal with public-private collaborations, which the College of Policing’s APP expressly states is out of scope of the guidance. The subcontracting of facial recognition services and police access to privately assembled biometric datasets are the two most prominent issues in this regard. Permissive standards applied to private companies risk undermining the safeguards which exist, or which might be introduced, in respect of public authority use. This is because, without clear limitations being put in place, public authorities may access private companies’ datasets and data tools through public-private partnerships.  While such an agreement may be lawful,[footnote]See for example: R (M) v Chief Constable of Sussex & anor. [2021] EWCA Civ 42.[/footnote] that does not lessen the importance of providing further guidance for those who might wish to create such a partnership, and embed further safeguards for those who might be affected.
    44.   Indeed, there is nothing inherently less rights-intrusive about private use of LFR to which public authorities may have access and an LFR Code of Practice will need to grapple with these complexities.
    45.   One striking example of public use of private biometric data is the US company Clearview AI. It used more than three billion images scraped from millions of websites including Facebook and YouTube to create a facial recognition search engine. Their search engine was then used by over 600 law enforcement agencies and other organisations in the US without adequate safeguards or public scrutiny.[footnote]Hill, K. (2020). ‘The Secretive Company That Might End Privacy as We Know It’. The New York Times. Available at: https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html[/footnote] Investigative journalists found that several European police forces have also used Clearview. The Swedish data protection authority has fined a police authority for its use,[footnote]Lomas, N. (2021). ‘Sweden’s data watchdog slaps police for unlawful use of Clearview AI’.Techcrunch. Available at: https://techcrunch.com/2021/02/12/swedens-data-watchdog-slaps-police-for-unlawful-use-of-clearview-ai/[/footnote] and the Hamburg Data Protection Authority has deemed such biometric profiles of people in the EU illegal and ordered the company to delete the biometric profile of the person who raised a complaint.[footnote]Noyb.(2021). ‘Clearview AI deemed illegal in the EU, but only partial deletion ordered’. noyb.eu. Available at: https://noyb.eu/en/clearview-ai-deemed-illegal-eu[/footnote] Buzzfeed News reported that the Metropolitan Police and the National Crime Agency, along with ‘a number of other police forces, private investment firms, the Ministry of Defence…’ had carried out hundreds of searches using the service.[footnote]Ashton, E. and Mac, R. (2020). ‘More Than A Dozen Organizations From The Met Police To J.K. Rowling’s Foundation Have Tried Clearview AI’s Facial Recognition Tech’. Buzzfeed. Available at: https://www.buzzfeed.com/emilyashton/clearview-users-police-uk[/footnote] The ICO opened an investigation into the personal information handling practices of Clearview AI, which concluded with the ICO announcing its provisional intent to impose a fine of slightly over £17 million on the company.[footnote]ICO. (2021). ICO issues provisional view to fine Clearview AI Inc over £17 million. Available at: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2021/11/ico-issues-provisional-view-to-fine-clearview-ai-inc-over-17-million/[/footnote]
    46.   As the former Biometrics Commissioner confirmed in his evidence to us, ‘the boundary between public policing and private policing has been blurred by biometric technology’, and the former Surveillance Camera Commissioner recommended the regulation of police engagement with the private sector. While we accept (see Recommendation 10, below) that further work needs to be done on the regulation of biometrics in the private sphere, it is quite clear that for a code of practice to properly regulate the ongoing and present issues arising from LFR, it must provide for regulation of public-private collaboration and for safeguards in relation to private-sector use as well. We note and endorse the factors identified by the Biometrics and Forensics Ethics Group in its briefing note on the ethical issues arising from public-private collaboration in the use of LFR technology[footnote]BFEG. (2021). Briefing note on the ethical issues arising from public– private collaboration in the use of live facial recognition technology. Available at: https://www.gov.uk/government/publications/public-private-use-of-live-facial-recognition-technology-ethical-issues/briefing-note-on-the-ethical-issues-arising-from-public-private-collaboration-in-the-use-of-live-facial-recognition-technology-accessible[/footnote] and the standards that ought to be imposed by a code of practice before such collaboration is permissible: the user to demonstrate that (1) the collaboration is necessary, (2) that the data-sharing required by the collaboration is proportionate and (3) that there is clarity in the types of data that will be shared.

      Recommendation 5: The use of LFR in any circumstance should be suspended until a new statutory framework and code of practice are in place

    47.   The reliance by police on common-law powers to carry out live facial recognition and potentially other intrusive biometric surveillance has been criticised by Daragh Murray and Peter Fussey, who have studied police deployments of LFR. In their view it is inconsistent with the UK’s obligations under the Human Rights Act and European Convention on Human Rights and they suggest that, ‘establishing an explicit legal and regulatory basis for the use of LFR would provide much needed clarity, both for the public and for the police.’[footnote]Fussey, P. and Murray, D. (2020). ‘Policing Uses of Live Facial Recognition in the United Kingdom’ in Kak, A. (ed.). Regulating Biometrics: Global Approaches and Urgent Questions. AI Now Institute. Available at: https://ainowinstitute.org/regulatingbiometrics.html[/footnote] While the Court of Appeal in Bridges found that the common law provided the source of the police’s power to use LFR technology, it found at the same time that the manner in which that power was used (without an adequate legal framework) violated individual privacy rights. It was of significant concern to us that this was not always clearly understood by those working with guidance and devising policy in this area.
    48.   Due to the concerns raised about the rights-intrusion caused by LFR, many organisations and campaigning groups have called for an outright ban on live facial recognition in public places and/or a moratorium or the ban of specific uses. Human rights groups Liberty, Big Brother Watch and Privacy International are all campaigning to stop facial recognition. In their evidence to the Review, Liberty and Big Brother Watch confirmed that their organisations seek a total ban on LFR. A large coalition of European rights groups launched a public campaign to get 1 million signatures to ‘ban biometric mass surveillance’ in public spaces within the EU.[footnote]See: https://reclaimyourface.eu/[/footnote] The German think tank AlgorithmWatch has found that ‘public uses of facial recognition that might amount to mass surveillance are decisively banned until further notice, and urgently, at the EU level’.[footnote]Chiusi, F., Fischer, S., Kayser-Bril, N. and Spielkamp, M. (eds.). (2020). Automating Society Report 2020. AlgorithmWatch. Available at: https://automatingsociety.algorithmwatch.org/wp-content/uploads/2020/12/Automating-Society-Report-2020.pdf[/footnote] Local bans on LFR have been issued by municipalities in the US, starting in San Francisco, following local campaigns from citizens and rights organisations. In the summer of 2020, several technology companies were driven by the public protests about racism and policing in the US and elsewhere to introduce unilateral moratoria on facial recognition. IBM was the first to announce that it would not offer general purpose facial recognition or analysis software, asking for a ‘national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies’.[footnote]IBM. (2020). ‘IBM CEO’s Letter to Congress on Racial Justice Reform’. IBM Policy Lab. Available at: https://www.ibm.com/blogs/policy/facial-recognition-sunset-racial-justice-reforms/[/footnote] Soon after, Amazon announced a one-year moratorium on police use of their facial recognition technology Rekognition, while still allowing its use by organisations working to rescue human trafficking victims and reunite missing children with their families.[footnote]Amazon. (2020). ‘We are implementing a one-year moratorium on police use of Rekognition’. aboutAmazon.com. Available at: https://www.aboutamazon.com/news/policy-news-views/we-are-implementing-a-one-year-moratorium-on-police-use-of-rekognition[/footnote] It is worth noting that the ACLU was quick to point out that they had already asked Amazon in 2018 to stop providing this technology to governments[footnote]American Civil Liberties Union (ACLU). (2018). Letter from Nationwide Coalition to Amazon CEO Jeff Bezos regarding Rekognition. Available at: https://www.aclu.org/letter-nationwide-coalition-amazon-ceo-jeff-bezos-regarding-rekognition[/footnote] and demanded a ‘blanket moratorium on law enforcement use of facial recognition until the dangers can be fully addressed’.
    49.   The UK group WebrootsUK has called for a ‘generational moratorium’ of several decades on LFR, which they position ‘between a moratorium and a general ban’. They view that long time span as necessary for addressing ‘the much deeper societal issues related to racialised surveillance’, while recognising ‘that the technology could have a role to play in an anti-racist society for specific purposes, e.g. identifying missing children’.[footnote]Chowdhury, A. (2020). ‘Unmasking Facial Recognition: An exploration of the racial bias implications of facial recognition surveillance in the United Kingdom’. WebRoots Democracy. Available at: https://webrootsdemocracy.files.wordpress.com/2020/08/unmasking-facial-recognition-webroots-democracy.pdf[/footnote]
    50.   We consider the numerous and varied voices calling for a ban on LFR – from a diverse range of stakeholders – to be persuasive. We are fortified in that view by the key legal challenge to LFR in England finding it to be unlawful. For that reason we recommend a complete moratorium on the use of LFR by public and private entities until a sufficient legal framework in place.
    51. We do not believe this is a radical position: it is supported by the organisations cited above, among others, and there is precedent for it. AI Now has called for a halt to all use of facial recognition in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place.[footnote]AI Now. (2019). AI Now 2019 Report. Available at: https://ainowinstitute.org/AI_Now_2019_Report.pdf[/footnote] Microsoft have said they will not sell facial recognition technology to US police forces until a federal law is passed.[footnote]Greene, J. (2020). ‘Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM’. The Washington Post. Available at: https://www.washingtonpost.com/technology/2020/06/11/microsoft-facial-recognition/[/footnote] The US Senate has introduced a bill for a Facial Recognition and Biometric Technology Moratorium Act prohibiting federal use of certain biometric technologies.[footnote]Facial Recognition and Biometric Technology Moratorium Act, S.4084, 116th Cong. (2020) <https://www.congress.gov/bill/116th-congress/senate-bill/4084>[/footnote] In the UK, The House of Commons Science and Technology Committee has said that ‘facial recognition technology should not be generally deployed, beyond the current pilots, until the current concerns over the technology’s effectiveness and potential bias have been fully resolved’, asking for Parliament to have ‘an opportunity to debate and vote on the issue’.[footnote]House of Commons Science and Technology Committee. (2018). Biometrics strategy and forensic services: Fifth Report of Session 2017–19. Available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/800/800.pdf[/footnote] We strongly endorse this position, and would go further: even piloting should cease until a proper legal framework is in place. The deployment analysed by the courts in Bridges were ‘pilots’ but that did not prevent them from violating individual rights.
    52.   We do not, however, go as far as the human rights organisations who call for a permanent ban on the use of LFR. With a proper legal framework, we cannot exclude the possibility that it could be deployed in a rights-compatible way. But, we are persuaded that, at present, it is not possible. We therefore recommend a moratorium on its use until an adequate legal framework is introduced.

      Recommendation 6: Duties arising under the Human Rights Act 1998, Equality Act 2010 and Data Protection Act 2018 should continue to apply to uses of biometric data

    53.   In our view, the need for new legislation and codes of practice focused on biometric data is because of current gaps in the legal framework to properly regulate the use of these technologies. We do not, however, consider that the more general legal framework should no longer apply in the context of biometrics – there are important legal duties under the Human Rights Act, UK GDPR, DPA 2018 and Equality Act which will and must continue to apply to uses of biometrics. The existing duties are not sufficient but they remain, in our view, necessary.
    54.   The most important existing duties are: (1) the obligation on public authorities not to violate rights protected by the Human Rights Act; (2) the obligation not to discriminate, directly or indirectly, and (for public authorities) to comply with the public sector equality duty;[footnote]See our discussion of what the PSED requires in this context, at paragraph 5.37. above.[/footnote] and (3) data processing obligations. 
    55.   In evidence to the Review, the CDEI’ noted that ‘organisations struggle to interpret the law’, and we recommend that the biometrics statutory framework incorporates, clarifies and augments the existing duties to provide a clear and logical framework which users must follow before any potential use of biometrics can occur.
    56.   Many of the principles that we consider should form the basis of a new legislation find their origin in human rights and data protection law. Necessity, for example, is a crucial duty under both regimes. According to the European Data Protection Supervisor (EDPS), processing of biometric data for uniquely identifying purposes cannot take place unless one can rely on specific exemptions for special category data in GDPR, also found in the new UK GDPR.[footnote]Wiewiórowski, W. (2019). ‘Facial recognition: A solution in search of a problem?’. European Data Protection Supervisor. Available at: https://edps.europa.eu/press-publications/press-news/blog/facial-recognition-solution-search-problem_en[/footnote] Its use must be demonstrably necessary, meaning that there are no other less intrusive means.
    57.  The EDPS identifies as particular data protection problems the impossibility of claiming consent in the monitoring of public spaces and the difficulties with implementing data minimisation and a privacy-by-design approach, as required by law.[footnote]Wiewiórowski, W. (2019).[/footnote] The problems with ineffective consent have also been raised in the US context, where laws such as the Illinois Biometric Privacy Act (BIPA) force businesses to ask for consent before collecting biometrics data.[footnote]Hartzog, W. (2020). ‘BIPA: The Most Important Biometric Privacy Law in the US?’ in Kak, A. (ed.). Regulating Biometrics: Global Approaches and Urgent Questions. AI Now Institute. Available at: https://ainowinstitute.org/regulatingbiometrics.html[/footnote]
    58.   There is consensus on the need for impact assessments both under data protection and equality legislation. The former Surveillance Camera Commissioner recommended that ‘the Home Office, regulators and other stakeholders collaborate to consider the development of a single “integrated impact assessment” process/format which provides for a comprehensive approach to such matters. This is to improve focus, reduce duplication, reduce bureaucracy and avoid gaps’.[footnote]Surveillance Camera Commissioner. (2020). Facing the Camera: Good Practice and Guidance for the Police Use of Overt Surveillance Camera Systems Incorporating Facial Recognition Technology to Locate Persons on a Watchlist, in Public Places in England & Wales, paragraph 3.86. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/940386/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf[/footnote]
    59.   While our proposal is for those impact assessments to remain standalone, we agree with the Surveillance Camera Commissioner that both are crucial and should be incorporated as prior obligations under the Biometrics Bill.

      Recommendation 7: A national Biometrics Ethics Board should be established, and given a mandatory advisory role in respect of public sector biometrics use

    60.   One of our key concerns, discussed extensively with various witnesses to the Review, was the absence – at present – of any formal process for ethics to be taken into account in respect of operational decisions relating to biometric technologies.
    61.  While the Biometric and Forensics Ethics Group (BFEG) has produced some impressive and thoughtful papers on the ethical implications of various biometrics uses, their role is limited to advising the Home Office and we found that witnesses were not well-acquainted with their work.
    62.   West Midlands Police and the Metropolitan Police have established their own ethics processes but they are exceptions. Most police forces do not have ethics boards and there is no formal provision for ethical oversight of any other public bodies which may use biometrics. This is unsatisfactory. Ethical considerations are critical: as Detective Chief Superindendent Chris Todd of West Midlands Police observed, ‘in order to maintain legitimacy, ethics must be embedded’ alongside the law. Suzanne Shale, Chair of the London Policing Ethics Panel, told us that until recently ‘it was striking to see how little ethical scrutiny there was of trials of policing technologies of the population’.[footnote]Evidence of Suzanne Shale.[/footnote] The witnesses working in law enforcement generally agreed that ’the concept of a national ethics body is a good thing.[footnote]Evidence of the College of Policing.[/footnote]
    63.   Suzanne Shale underscored the benefits that can accrue from external ethical oversight, independent from the operational decision-making structure – though several witnesses also warned of the risk that externalising ethical considerations may allow organisations to ignore ethics internally. We accept the existence of that risk, but we consider that external, ethical oversight is necessary in a field as complex and sensitive as biometrics, and that the formalising of ethical oversight can play a positive role in emphasising the importance of, and therefore embedding, ethical principles in decision-making around biometrics.
    64.   Supportive of our view that a national Biometrics Ethics Board should be established, the CDEI in a review found that correcting the fragmentation of responsibility for the ethical use of data in the policing context requires leadership from the Home Office to define roles and responsibilities and ensure that ‘work underway by the National Police Chiefs’ Council and other policing stakeholders to develop guidance and ensure ethical oversight of data analytics tools is appropriately supported’.[footnote]CDEI. (2020). Review into bias in algorithmic decision-making. Available at : https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/957259/Review_into_bias_in_algorithmic_decision-making.pdf[/footnote] For the CDEI, there is a need to ‘establish standard processes for independent ethical review and oversight to ensure transparency and accountability and facilitate meaningful public engagement before tools are deployed operationally’.
    65.   The former Surveillance Camera Commissioner has also recommended that police use of LFR is subjected to meaningful and independent ethical oversight from the initial planning to before and during operations. He supports Ethics Committees, or in their absence a multiagency structure similar to that created for stop and search.[footnote]Surveillance Camera Commissioner. (2020). Facing the Camera: Good Practice and Guidance for the Police Use of Overt Surveillance Camera Systems Incorporating Facial Recognition Technology to Locate Persons on a Watchlist, in Public Places in England & Wales, paragraph 2.26. Available at : https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/940386/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf[/footnote] Taking account of both Bridges, and the views expressed by the witnesses with whom we spoke, we recommend that a single, independent national Biometrics Ethics Board should be established to provide ethical oversight.
    66.   The good practice of BFEG, the London Policing Ethics Panel, and the West Midlands Police ethics committee should all inform the establishment, and membership, of this Board. We have considered the example of the national DNA and fingerprints database, which is overseen by the Forensic Information Databases Strategy Board (FIND-SB), and which includes representatives of the NPCC, Home Office and Police and Crime Commissioners.[footnote]Biometrics Commissioner. (2020). Annual Report 2019. Available at: https://www.gov.uk/government/publications/biometrics-commissioner-annual-report-2019[/footnote] We consider that greater independence is necessary in our proposed Biometrics Ethics Board: parties with a stake in operational decision-making should not be members of the Board, although those with experience of law enforcement (for example, retired Chief Constables) are likely to have a valuable role to play. Otherwise, the Board should be comprised of a mix of members with expertise in ethics, human rights and relevant technology.

      Recommendation 8: The Biometrics Ethics Board’s advice should be published. Where a decision is taken to deploy biometric technology contrary to the advice of the Ethics Board, the deploying body should publish, within 14 days of the Ethics Board’s advice, a summary explanation of their reasons for rejecting the Board’s advice, or the steps they have taken to respond to the Board’s advice prior to deployment.

    67.   Although there was broad support among our witnesses for the establishment of a national Biometrics Ethics Board, there were divergent views on the extent of powers that it should be given. There was general agreement that ‘the role of the Ethics Committee should be to expose or highlight concerns’,[footnote]Evidence of the College of Policing.[/footnote] but those working in law enforcement believed that autonomy of operational decision-making by the police remained crucial. In their view an Ethics Board that did not have the power to veto deployments and could only consider matters referred to it by the police would be ‘in the right place on the spectrum’.[footnote]Evidence of Chris Todd.[/footnote] Part of the rationale given for its being a referral-by-choice system was that ‘there wouldn’t be a scenario where [the police] wouldn’t want to subject’ important issues to scrutiny.[footnote]Evidence of Lindsey Chiswick.[/footnote]
    68.   In our view, it is right that the Ethics Board should not have the power to veto deployments, and that the ultimate decision-making on whether to proceed with a particular use of biometric technology should rest with operational decision-makers. We do not, however, agree that a referral-by-choice system would be sufficiently robust (and we put to all witnesses who suggested it that if they were right that public bodies would always want ethical input, there would be no detriment in such input being mandatory). We recognise that there may be exceptional circumstances in which referral to an Ethics Board may not be possible before a particular deployment, for reasons of urgency or sensitivity. But those occasions will be very limited, and in general we consider that every proposed use by a public body of biometric technologies on members of the public should be subject to mandatory referral to a national Ethics Board.
    69.   We also recommend that the advice of the Ethics Board is published and that a public authority choosing to deploy the use of biometric technology against the advice of the Ethics Board should be required to publish reasons justifying its position. Those steps will protect transparency and accountability and encourage public confidence and credibility in the process. We agree that ‘the design and implementation process should be as transparent and as open to public scrutiny as possible.’[footnote]Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. Available at: https://doi.org/10.5281/zenodo.3240529[/footnote] Public consideration by an Ethics Board will be one means of achieving this.
    70.   It will be for others to determine when and how quickly such reasons should be published, but we see no reason why – subject to narrow and justifiable exceptions – the reasons should not be published within 14 days of the decision to reject the Ethics Board recommendation and prior to the deployment commencing.
    71. The publication of advice will also provide the public with an understanding of the issues and risks identified in respect of different intended uses, and will equip them with the knowledge necessary to challenge any problematic uses. That is a critical aspect of democratic legitimacy and accountability. As the German Data Ethics Commission has stressed:

      ‘It is vitally important to ensure not only that the users of algorithmic systems understand how these systems function and can explain and control them, but also that the parties affected by a decision are provided with sufficient information to exercise their rights properly and challenge the decision if necessary.’[footnote]German Federal Government’s Data Ethics Commission. (2019). Opinion of the Data Ethics Commission. Available at: https://www.bmjv.de/DE/Themen/FokusThemen/Datenethikkommission/Datenethikkommission_EN_node.htm[/footnote]

    72.  The House of Commons Science and Technology Committee (STC), heard in evidence from the ICO that ethics boards can aid transparency by publishing their deliberations, so that the development of the algorithm is openly documented. The STC recommended that where there is a conflict with commercial confidentiality or privacy, the CDEI should help with improving the transparency and explainability of the system.[footnote]House of Commons Science and Technology Committee. (2018). Algorithms in decision-making. Available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/351/35102.htm[/footnote]
    73.  The CDEI has set a clear priority for transparency in algorithmic systems, which would include biometrics and facial recognition:

      ‘Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals… it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.’[footnote]CDEI. (2020). Review into bias in Algorithmic decision-making Available at: https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making[/footnote]

    74.  Our recommendation that the advice of the Ethics Board – and any response from the public body – be published is reflective of and consistent with these views.

      Recommendation 9: Oversight functions should be consolidated, clarified and properly resourced

    75.   The effectiveness of our recommendations also requires oversight functions over biometric data should be consolidated, clarified and properly resourced.
    76.  We accept that such consolidation and clarity may be achieved by strengthening the ICO’s capacity in regard to biometrics. But we believe that the prominence and importance of biometrics means that it requires a specialist Commissioner. We note, in particular, the powerful points advanced by the current Biometrics and Surveillance Camera Commissioner against his role being incorporated into the work of the ICO, in his blogpost What we talk about when we talk about biometrics….[footnote]Biometrics and Surveillance Camera Commissioner. (2021). What we talk about when we talk about biometrics…*. Available at https://videosurveillance.blog.gov.uk/2021/10/12/what-we-talk-about-when-we-talk-about-biometrics/[/footnote]
    77.  Wherever that role is located, it must be adequately resourced financially, and have both sufficient powers and expertise to perform the governance that a role like this requires.
    78.   Many of the witnesses believed that regulatory simplicity would be desirable, and that the current oversight structures were unduly complex and fragmented. The current regulatory landscape was characterised as one of ‘regulatory overlaps’, which is ‘confusing’.
    79.   However, those same witnesses had highly conflicting views as to which existing or hypothetical regulators should take the reigns over the various uses of biometric technologies. The former Surveillance Camera Commissioner, like his successor, argued that maintaining a specialist Commissioner ‘provides for democratic engagement and accountability’, while the ICO considered that biometrics oversight was an area that could probably be accomplished by them within their existing remit. Others noted that as personal data and technology becomes more and more central to the way in which society operates, there is a risk of overloading the ICO with too broad a role in too many areas. There is a danger of it becoming a behemoth responsible for regulating all aspects of life without the capacity to address any particular area in the detail, or with the resources necessary. There would be a risk that a Commissioner located in the ICO, under the Information Commissioner, would be less accessible to the public and less accountable than a standalone role.
    80.   We note the views on both sides about this. We consider that biometrics are sufficiently prominent as a cause of concern and emerging opportunity and risk in society that a named, prominent Commissioner is necessary. In our view, that could be achieved in various formats, and we do not recommend any particular ‘location’ of the role over any other. We note that, if a standalone Commissioner role is created or strengthened (perhaps by an evolution of the now-consolidated Biometrics and Surveillance Camera Commissioner role), the ICO would still continue to play a role in the regulatory oversight of biometrics: UK GDPR makes the Information Commissioner the sole supervisory authority for data protection purposes,[footnote]General Data Protection Regulation Keeling Schedule showing changes which would be affected by the Data Protection, Privacy And Electronic Communications (Amendments Etc)(EU Exit) Regulations 2019 MADE ON 28 FEBRUARY 2019 (As amended by the Data Protection, Privacy And Electronic Communications (Amendments Etc)(EU Exit) Regulations 2020 laid on 14 October 2020) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/969514/20201102_-_GDPR_-__MASTER__Keeling_Schedule__with_changes_highlighted__V4.pdf[/footnote] including to handle complaints and enforce data protection compliance. If a Commissioner role is created outside the ICO, there would need to be clear processes for the referral of complaints or enforcement requests between the two bodies.
    81.  Wherever it sits, there are certain features that the regulator must have: primarily, adequate resourcing, expertise, powers and capacities. For example:1. The Competition and Markets Authority (CMA) has stipulated that a regulator of AI systems must have the capacity to perform continuous monitoring of biometric systems: one-off audits may become quickly outdated, so ongoing monitoring is therefore necessary.[footnote]Competition and Markets Authority. (2021). Algorithms: How they can reduce competition and harm consumers. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/954331/Algorithms_++.pdf[/footnote] We agree.
      2. The regulator must also be technically capable and legally empowered to carry out a variety of assessments.[footnote]Ada Lovelace Institute and DataKind. (2020). Examining the Black Box. Available at: https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/[/footnote] Linked to the discussions on transparency in the previous section, the regulator needs to be able to check for bias in outcomes and the overall compliance of the systems with the applicable regulations.
      3. The regulator must have appropriate equality expertise; as the CDEI has pointed out, ‘the PSED (public sector equality duty) also extends to regulators who are responsible, as public sector bodies, to ensure the industry they regulate is upholding appropriate standards for testing for bias in the use of algorithms. In order for regulators to fulfil these obligations, they will need support, through building relationships between regulators and from organisations with specific expertise in these areas.[footnote]Ahamat, G. (2020). Public Sector Equality Duty and bias in algorithms. CDEI. Available at: https://cdei.blog.gov.uk/2020/12/09/public-sector-equality-duty-and-bias-in-algorithms/[/footnote] That is, of course, of particular importance in the field of biometrics where the risk of discrimination is so acute.

      Recommendation 10: Further work is necessary on the topic of private-sector use of biometrics and public/private sharing of biometrics

    82.   Despite our early aspirations, we have not been able to investigate and reach conclusions on the regulation of private-sector uses of biometrics to the extent that we would have liked. That the focus in our Review is on public-sector, and particularly police, uses of biometrics should not be interpreted as an acceptance that those are the most important or sensitive uses of biometrics, requiring the greatest safeguards. As the CDEI captured the point succinctly: ‘The potential for nefarious use in the private sector is as present as it is in the public sector.’
    83.   We did not, however, receive engagement from private-sector stakeholders to the same extent as from public authorities; the witnesses with whom we spoke were less knowledgeable on private-sector issues than public-sector issues; and the literature on private-sector uses of biometrics is less well developed. As a consequence, our final recommendation is that further work is commissioned that focuses specifically on private-sector use and the particular concerns and requirements for regulation that arise in that sphere.
    84.   There are two particular areas of private-sector use that appeared, to us, in the course of conducting the Review, to warrant particular attention. The first, as addressed earlier, is the issue of public-private collaboration. The second is the use of biometrics in the workplace.
    85.   The issue of public-private collaboration came to particular prominence in the summer of 2019, when the Financial Times reported that the managers of the King’s Cross estate in London, a large private property development, had been using facial recognition for two years to track thousands of people.[footnote]Madhumita Murgia, M. (2019). ‘London’s King’s Cross uses facial recognition in security cameras’. Financial Times. Available at:https://www.ft.com/content/8cbcb3ae-babd-11e9-8a88-aa6628ac896c[/footnote] After initial denials it transpired that the Metropolitan Police and British Transport Police had given the company databases of persons of interest.[footnote]The London Assembly, Questions to the Mayor (MQT on 2019-07-18) <https://www.london.gov.uk/questions/2019/14214#a-173736>[/footnote] The estate managers claimed that the system was used to help the police in fighting crime in the area and that all data had been deleted.[footnote]King’s Cross Central Limited Partnership (KCCLP). (2019). Updated Statement: Facial Recognition. Available at: https://www.kingscross.co.uk/press/2019/09/02/facial-recognition[/footnote] Other LFR partnerships between police and private sector operators have been reported in Manchester,[footnote]Robson, S. (2018) ‘Trafford Centre bosses explain why they used controversial cameras to monitor shoppers’. Manchester Evening News. Available at: https://www.manchestereveningnews.co.uk/news/greater-manchester-news/trafford-centre-bosses-explain-used-15283677[/footnote] where a single positive match was made out of 15 million samples, and in Sheffield.[footnote]BBC Sheffield & South Yorkshire. (2019). ‘Meadowhall shoppers scanned in facial recognition trial’. Available at: https://www.bbc.co.uk/news/uk-england-south-yorkshire-49369772[/footnote] The ICO started an investigation on the King’s Cross incident in August 2019[footnote]ICO. (2019). Statement: Live facial recognition technology in King’s Cross. Available at: https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2019/08/statement-live-facial-recognition-technology-in-kings-cross/[/footnote] but has not, to date, published any findings.
    86.   The scale of public-private collaboration on LFR could grow dramatically if plans for an updated ‘ring of steel’ in the City of London come to fruition. The City built a CCTV security perimeter barrier after the IRA bombings in the 1990s, through which police can take control over privately owned cameras. City of London Police wants to upgrade their pioneering but now outdated CCTV system, with a central control room and links to ‘smart’ city technology such as street lighting. The force also wants to integrate LFR in the new system.[footnote]Professional Security Magazine. (2018). ‘New “ring of steel” proposed’. Available at: https://www.professionalsecurity.co.uk/news/interviews/new-ring-of-steel-proposed/[/footnote] 
    87.   Other recent private users of LFR include Southern Co-op supermarkets, where 18 shops have been using the facial recognition technology to reduce shoplifting and abuse against staff.[footnote]Burgess, M. (2020). ‘Co-op is using facial recognition tech to scan and track shoppers’. Wired UK. Available at: https://www.wired.co.uk/article/coop-facial-recognition#:~:text=Branches%20of%20Co%2Dop%20in,shoplifting%20and%20abuse%20against%20staff.[/footnote] The system used, by a company called Facewatch, compiles watchlists from the individuals flagged by their various clients. The lack of any safeguards over the creation of watchlists in the private sector raises particular equality and discrimination concerns. Similar developments have taken place elsewhere: in the US, the pharmacy chain Rite Aid installed facial recognition systems in hundreds of stores over a period of eight years. Most deployments were in low-income non-white areas. The practice stopped in 2020 after an in-depth investigation by Reuters.[footnote]Dastin, J. (2020). ‘Rite Aid deployed facial recognition systems in hundreds of U.S. stores’. Reuters Investigates. Available at: https://www.reuters.com/investigates/special-report/usa-riteaid-software/[/footnote]
    88.   As the Biometrics and Forensics Ethics Group found in its report on public-private collaboration in LFR, real-time collaborative deployment of LFR technology means that it is not just images that are shared by collaborators, but a wider biometric data system that can be combined and processed with other data sources: ‘machine learning tools, deep neural network algorithms, training datasets, and so on. For example, the providers of the LFR technology could use data collected during [public-private] collaborations to train or refine their algorithm… (This) means that datasets collected for one purpose (and by one organisation) are repurposed for processing in a new way by another organisation, which has implications for the data subjects’ rights’. This is a particular issue with the use of cloud-based platforms that enable data from various sources to be used in machine-learning ‘without any actual “sharing”’.
    89.   These concerns are only likely to increase as private uses of biometric technology proliferate. While we have recommended that new legislation should ensure that obligations on private entities using biometric data should be similar to those imposed on public authorities, we recommend that further work is undertaken to ascertain the specific additional safeguards and duties that may be necessary to ensure adequate regulation of private-sector, and private-public collaboration, uses of biometrics.
    90.   The other key area of concern in the private sector is the use of biometrics in the workplace. This came became particularly noticeable as the Review was conducted over the course of the COVID-19 pandemic. The shift to remote working and videoconferencing accelerated by the pandemic means that many interactions are now digital, which enables forms of processing that are not possible in face-to-face communications. Over a quarter of large firms surveyed said they had implemented remote monitoring or planned to do so.[footnote]Dodd, V. (2020). ‘Remote-working Compliance YouGov Survey’. Skillcast. Available at: https://www.skillcast.com/blog/remote-working-compliance-survey-key-findings[/footnote] This may cause existing deployments of AI and analytics to proliferate, building on technologies that are already in place, for example, in call centres – such as speech analytics, text analytics, sentiment analysis, customer behaviour prediction, persona-based interactions, presented as a way to enable more ‘meaningful conversations’[footnote]Dharshan, N., Nair, P., Nagraj, B. and Aase, J.E. (2020). ‘ISG Provider Lens™ Contact Center – Customer Experience Services – Global 2020.’ ISG Research. Available at:https://research.isg-one.com/reportaction/Quadrant-CC-Global-2020/Marketing[/footnote] at a distance.
    91. One example of workplace monitoring is the development by PwC of a facial recognition tool to check whether remote workers left their screens. The tool was developed for financial firms with strict compliance requirements, presumably to avoid information leaks or backroom deals during the pandemic, but it raised concerns about its intrusiveness and its potential to be used by employers for other purposes.[footnote]McNulty, L. and Kelley, T. (2020). ‘PwC under fire for tech that tracks traders’ loo breaks’. Financial News. Available at: https://www.fnlondon.com/articles/pwc-under-fire-for-tech-that-tracks-traders-loo-breaks-20200615[/footnote]
    92.   The trade union Prospect found that 80% of polled workers were uncomfortable with camera monitoring and similarly high proportions were also opposed to other forms of monitoring such as keystroke recording.[footnote]Prospect. (2020). Workers are not prepared for the future of working from home. Available at: https://prospect.org.uk/news/workers-are-not-prepared-for-the-future-of-working-from-home/[/footnote] UNISON has published a guide on monitoring and surveillance at work,[footnote]UNISON. (2020). Bargaining On Monitoring And Surveillance Workplace Policies. Available at: https://www.unison.org.uk/content/uploads/2018/08/Monitoring-and-surveillance-at-work-08-2018.pdf[/footnote] which notes that UNISON members have seen an increase in the use of biometrics for staff time-keeping and sickness absence. A recent report by the TUC on the impact of technology in the workplace found that biometrics were still experienced by only a small proportion of workers, but identified as a key objective achieving more worker consultation on the development, introduction and operation of new technologies in the workplace.[footnote]Trade Union Congress (TUC). (2020). Technology Managing People – the worker experience. Available at: https://www.tuc.org.uk/sites/default/files/2020-11/Technology_Managing_People_Report_2020_AW_Optimised.pdf[/footnote]
    93.   A report on AI and discrimination by Robin Allen QC and Dee Masters found that, among other things, companies are using AI in recruitment and other HR functions including pay and promotion, technologies including video-analysis, robot interviews and conversation analysis. They found that there is a lack of specific regulation to prevent discrimination from AI systems, and noted that the intrusive nature of biometric technologies such as facial recognition may be particularly difficult (and even impossible) to justify in ‘more mundane commercial contexts’[footnote]Allen QC, R. and Masters, D. (2021). Technology Managing People – the legal implications. TUC. Available at: https://www.tuc.org.uk/sites/default/files/Technology_Managing_People_2021_Report_AW_0.pdf[/footnote] as compared, for example, to law enforcement contexts where personal safety may be positively impacted by the deployment of the technology.
    94.   Each of these factors warrants further careful consideration. We recommend the commissioning of further, private-sector focused work to achieve this.

Annex 1: Legal provisions

This Annex contains extracts from the materials that currently provide law and regulation relating to biometric data relevant to the UK, and are referred to in the body of the Review.

Data Protection Act 2018

Section 3 – Terms relating to the processing of personal data

[…]

(2) Personal datameans any information relating to an identified or identifiable living individual (subject to subsection (14)(c)).

(3) “Identifiable living individual” means a living individual who can be identified, directly or indirectly, in particular by reference to—

(a) an identifier such as a name, an identification number, location data or an online identifier, or

(b) one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of the individual

(4) “Processing”, in relation to information, means an operation or set of operations which is performed on information, or on sets of information, such as—

(a) collection, recording, organisation, structuring or storage,

(b) adaptation or alteration,

(c) retrieval, consultation or use,

(d) disclosure by transmission, dissemination or otherwise making available,

(e) alignment or combination, or

(f) restriction, erasure or destruction,

(subject to subsection (14)(c) and sections 5(7), 29(2) and 82(3), which make provision about references to processing in the different Parts of this Act).

Section 10 – Special categories of personal data and criminal convictions etc data

(1) Subsections (2) and (3) make provision about the processing of personal data described in Article 9(1) of the UK GDPR (prohibition on processing of special categories of personal data) in reliance on an exception in one of the following points of Article 9(2)—

(a) point (b) (employment, social security and social protection);

(b) point (g) (substantial public interest);

(c) point (h) (health and social care);

(d) point (i) (public health);

(e) point (j) (archiving, research and statistics).

(2) The processing meets the requirement in point (b), (h), (i) or (j) of Article 9(2) of the UK GDPR for authorisation by, or a basis in, the law of the United Kingdom or a part of the United Kingdom only if it meets a condition in Part 1 of Schedule 1.

(3) The processing meets the requirement in point (g) of Article 9(2) of the UK GDPR for a basis in the law of the United Kingdom or a part of the United Kingdom only if it meets a condition in Part 2 of Schedule 1.

(4) Subsection (5) makes provision about the processing of personal data relating to criminal convictions and offences or related security measures that is not carried out under the control of official authority.

(5) The processing meets the requirement in Article 10 of the UK GDPR for authorisation by the law of the United Kingdom or a part of the United Kingdom only if it meets a condition in Part 1, 2 or 3 of Schedule 1.

(6) The Secretary of State may by regulations—

(a) amend Schedule 1—

(i) by adding or varying conditions or safeguards, and

(ii) by omitting conditions or safeguards added by regulations under this section, and

(b) consequentially amend this section.

(7) Regulations under this section are subject to the affirmative resolution procedure.

Section 30 – Meaning of “competent authority”

(1) In this Part, “competent authority” means—

(a) a person specified or described in Schedule 7, and

(b) any other person if and to the extent that the person has statutory functions for any of the law enforcement purposes.

(2) But an intelligence service is not a competent authority within the meaning of this Part.

[…]

(7) In this section—

“intelligence service” means—

(a) the Security Service;

(b) the Secret Intelligence Service;

(c) the Government Communications Headquarters;

Section 31 – “The law enforcement purposes”

For the purposes of this Part, “the law enforcement purposes” are the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security

Section 35 – The first data protection principle

(1) The first data protection principle is that the processing of personal data for any of the law enforcement purposes must be lawful and fair.

(2) The processing of personal data for any of the law enforcement purposes is lawful only if and to the extent that it is based on law and either—

(a) the data subject has given consent to the processing for that purpose, or

(b) the processing is necessary for the performance of a task carried out for that purpose by a competent authority.

(3) In addition, where the processing for any of the law enforcement purposes is sensitive processing, the processing is permitted only in the two cases set out in subsections (4) and (5).

(4) The first case is where—

(a) the data subject has given consent to the processing for the law enforcement purpose as mentioned in subsection (2)(a), and

(b) at the time when the processing is carried out, the controller has an appropriate policy document in place (see section 42).

(5) The second case is where—

(a) the processing is strictly necessary for the law enforcement purpose,

(b) the processing meets at least one of the conditions in Schedule 8, and

(c) at the time when the processing is carried out, the controller has an appropriate policy document in place (see section 42).

[…]

(8) In this section, “sensitive processing” means—

[…]

(b) the processing of genetic data, or of biometric data, for the purpose of uniquely identifying an individual;

Section 36 – The second data protection principle

(1) The second data protection principle is that—

(a) the law enforcement purpose for which personal data is collected on any occasion must be specified, explicit and legitimate, and

(b) personal data so collected must not be processed in a manner that is incompatible with the purpose for which it was collected.

(2) Paragraph (b) of the second data protection principle is subject to subsections (3) and (4).

(3) Personal data collected for a law enforcement purpose may be processed for any other law enforcement purpose (whether by the controller that collected the data or by another controller) provided that—

(a) the controller is authorised by law to process the data for the other purpose, and

(b) the processing is necessary and proportionate to that other purpose.

(4) Personal data collected for any of the law enforcement purposes may not be processed for a purpose that is not a law enforcement purpose unless the processing is authorised by law.

Section 37 – The third data protection principle

The third data protection principle is that personal data processed for any of the law enforcement purposes must be adequate, relevant and not excessive in relation to the purpose for which it is processed.

Section 38 – The fourth data protection principle

(1) The fourth data protection principle is that—

(a) personal data processed for any of the law enforcement purposes must be accurate and, where necessary, kept up to date, and

(b) every reasonable step must be taken to ensure that personal data that is inaccurate, having regard to the law enforcement purpose for which it is processed, is erased or rectified without delay.

(2) In processing personal data for any of the law enforcement purposes, personal data based on facts must, so far as possible, be distinguished from personal data based on personal assessments.

(3) In processing personal data for any of the law enforcement purposes, a clear distinction must, where relevant and as far as possible, be made between personal data relating to different categories of data subject, such as—

(a) persons suspected of having committed or being about to commit a criminal offence;

(b) persons convicted of a criminal offence;

(c) persons who are or may be victims of a criminal offence;

(d) witnesses or other persons with information about offences.

(4) All reasonable steps must be taken to ensure that personal data which is inaccurate, incomplete or no longer up to date is not transmitted or made available for any of the law enforcement purposes.

(5) For that purpose—

(a) the quality of personal data must be verified before it is transmitted or made available,

(b) in all transmissions of personal data, the necessary information enabling the recipient to assess the degree of accuracy, completeness and reliability of the data and the extent to which it is up to date must be included, and

(c) if, after personal data has been transmitted, it emerges that the data was incorrect or that the transmission was unlawful, the recipient must be notified without delay.

Section 39 – The fifth data protection principle

(1) The fifth data protection principle is that personal data processed for any of the law enforcement purposes must be kept for no longer than is necessary for the purpose for which it is processed.

(2) Appropriate time limits must be established for the periodic review of the need for the continued storage of personal data for any of the law enforcement purposes.

Section 40 – The sixth data protection principle

The sixth data protection principle is that personal data processed for any of the law enforcement purposes must be so processed in a manner that ensures appropriate security of the personal data, using appropriate technical or organisational measures (and, in this principle, “appropriate security” includes protection against unauthorised or unlawful processing and against accidental loss, destruction or damage).

Section 82 – Processing to which this Part applies

[…]

(2) In this Part, “intelligence service” means—

(a) the Security Service;

(b) the Secret Intelligence Service;

(c) the Government Communications Headquarters.

Section 86 – The first data protection principle

[…]

(2) The processing of personal data is lawful only if and to the extent that—

[…]

(b) in the case of sensitive processing, at least one of the conditions in Schedule 10 is also met.

[…]

(7) In this section, “sensitive processing” means—

(c) the processing of biometric data for the purpose of uniquely identifying an individual;

Section 87 – The second data protection principle

(1) The second data protection principle is that—

(a) the purpose for which personal data is collected on any occasion must be specified, explicit and legitimate, and

(b) personal data so collected must not be processed in a manner that is incompatible with the purpose for which it is collected.

(2) Paragraph (b) of the second data protection principle is subject to subsections (3) and (4).

(3) Personal data collected by a controller for one purpose may be processed for any other purpose of the controller that collected the data or any purpose of another controller provided that—

(a) the controller is authorised by law to process the data for that purpose, and

(b) the processing is necessary and proportionate to that other purpose.

(4) Processing of personal data is to be regarded as compatible with the purpose for which it is collected if the processing—

(a) consists of—

(i) processing for archiving purposes in the public interest,

(ii) processing for the purposes of scientific or historical research, or

(iii) processing for statistical purposes, and

(b) is subject to appropriate safeguards for the rights and freedoms of the data subject.

Section 88 – The third data protection principle

The third data protection principle is that personal data must be adequate, relevant and not excessive in relation to the purpose for which it is processed.

Section 89 – The fourth data protection principle

The fourth data protection principle is that personal data undergoing processing must be accurate and, where necessary, kept up to date.

Section 90 – The fifth data protection principle

The fifth data protection principle is that personal data must be kept for no longer than is necessary for the purpose for which it is processed.

Section 91 – The sixth data protection principle

(1) The sixth data protection principle is that personal data must be processed in a manner that includes taking appropriate security measures as regards risks that arise from processing personal data.

(2) The risks referred to in subsection (1) include (but are not limited to) accidental or unauthorised access to, or destruction, loss, use, modification or disclosure of, personal data.

Section 205 – General interpretation

(1) In this Act—

“biometric data” means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of an individual, which allows or confirms the unique identification of that individual, such as facial images or dactyloscopic data;

Schedule 7: Competent authorities

Paragraph 1

Any United Kingdom government department other than a non-ministerial government department.

[…]

Paragraph 5

The chief constable of a police force maintained under section 2 of the Police Act 1996.

Paragraph 6

The Commissioner of Police of the Metropolis.

Paragraph 7

The Commissioner of Police for the City of London.

[…]

Paragraph 10

The chief constable of the British Transport Police.

Paragraph 11

The chief constable of the Civil Nuclear Constabulary.

Paragraph 12

The chief constable of the Ministry of Defence Police.

Paragraph 13

The Provost Marshal of the Royal Navy Police.

Paragraph 14

The Provost Marshal of the Royal Military Police.

Paragraph 15

The Provost Marshal of the Royal Air Force Police.

Paragraph 16

The chief officer of—

(a) a body of constables appointed under provision incorporating section 79 of the Harbours, Docks, and Piers Clauses Act 1847;

(b) a body of constables appointed under an order made under section 14 of the Harbours Act 1964;

(c) the body of constables appointed under section 154 of the Port of London Act 1968.

Paragraph 17

A body established in accordance with a collaboration agreement under section 22A of the Police Act 1996.

Paragraph 18

The Director General of the Independent Office for Police Conduct.

[…]

Paragraph 21

The Commissioners for Her Majesty’s Revenue and Customs.

Paragraph 24

The Director General of the National Crime Agency.

Paragraph 25

The Director of the Serious Fraud Office.

Paragraph 26

The Director of Border Revenue.

Paragraph 27

The Financial Conduct Authority.

Paragraph 28

The Health and Safety Executive.

Paragraph 29

The Competition and Markets Authority.

Paragraph 30

The Gas and Electricity Markets Authority.

Paragraph 31

The Food Standards Agency.

[…]

Paragraph 33

Her Majesty’s Land Registry.

Paragraph 34

The Criminal Cases Review Commission.

[…]

Paragraph 36

A provider of probation services (other than the Secretary of State), acting in pursuance of arrangements made under section 3(2) of the Offender Management Act 2007.

Paragraph 37

The Youth Justice Board for England and Wales.

Paragraph 38

The Parole Board for England and Wales.

[…]

Paragraph 43

A person who has entered into a contract for the running of, or part of—

(a) a prison or young offender institution under section 84 of the Criminal Justice Act 1991, or

(b) a secure training centre under section 7 of the Criminal Justice and Public Order Act 1994.

Paragraph 44

A person who has entered into a contract with the Secretary of State—

(a) under section 80 of the Criminal Justice Act 1991 for the purposes of prisoner escort arrangements, or

(b) under paragraph 1 of Schedule 1 to the Criminal Justice and Public Order Act 1994 for the purposes of escort arrangements.

Paragraph 45

A person who is, under or by virtue of any enactment, responsible for securing the electronic monitoring of an individual.

Paragraph 46

A youth offending team established under section 39 of the Crime and Disorder Act 1998.

Paragraph 47

The Director of Public Prosecutions.

[…]

Paragraph 52

The Information Commissioner.

[…]

Paragraph 55

The Crown agent.

Paragraph 56

A court or tribunal.

Schedule 8 – Conditions for sensitive processing under Part 3

Paragraph 1

This condition is met if the processing—

(a) is necessary for the exercise of a function conferred on a person by an enactment or rule of law, and

(b) is necessary for reasons of substantial public interest.

Paragraph 2

This condition is met if the processing is necessary for the administration of justice.

Paragraph 3

This condition is met if the processing is necessary to protect the vital interests of the data subject or of another individual.

Paragraph 4

(1) This condition is met if—

(a) the processing is necessary for the purposes of—

(i) protecting an individual from neglect or physical, mental or emotional harm, or

(ii) protecting the physical, mental or emotional well-being of an individual,

(b) the individual is—

(i) aged under 18, or

(ii) aged 18 or over and at risk,

(c) the processing is carried out without the consent of the data subject for one of the reasons listed in sub-paragraph (2), and

(d) the processing is necessary for reasons of substantial public interest.

(2) The reasons mentioned in sub-paragraph (1)(c) are—

(a) in the circumstances, consent to the processing cannot be given by the data subject;

(b) in the circumstances, the controller cannot reasonably be expected to obtain the consent of the data subject to the processing;

(c) the processing must be carried out without the consent of the data subject because obtaining the consent of the data subject would prejudice the provision of the protection mentioned in sub-paragraph (1)(a).

(3) For the purposes of this paragraph, an individual aged 18 or over is “at risk” if the controller has reasonable cause to suspect that the individual—

(a) has needs for care and support,

(b) is experiencing, or at risk of, neglect or physical, mental or emotional harm, and

(c) as a result of those needs is unable to protect himself or herself against the neglect or harm or the risk of it.

(4) In sub-paragraph (1)(a), the reference to the protection of an individual or of the well-being of an individual includes both protection relating to a particular individual and protection relating to a type of individual.

Paragraph 5

This condition is met if the processing relates to personal data which is manifestly made public by the data subject.

Paragraph 6

This condition is met if the processing—

(a) is necessary for the purpose of, or in connection with, any legal proceedings (including prospective legal proceedings),

(b) is necessary for the purpose of obtaining legal advice, or

(c) is otherwise necessary for the purposes of establishing, exercising or defending legal rights.

Paragraph 7

This condition is met if the processing is necessary when a court or other judicial authority is acting in its judicial capacity.

Paragraph 8

(1) This condition is met if the processing—

(a) is necessary for the purposes of preventing fraud or a particular kind of fraud, and

(b) consists of—

(i) the disclosure of personal data by a competent authority as a member of an anti-fraud organisation,

(ii) the disclosure of personal data by a competent authority in accordance with arrangements made by an anti-fraud organisation, or

(iii) the processing of personal data disclosed as described in sub-paragraph (i) or (ii).

(2) In this paragraph, “anti-fraud organisation” has the same meaning as in section 68 of the Serious Crime Act 2007.

Paragraph 9

This condition is met if the processing is necessary—

(a) for archiving purposes in the public interest,

(b) for scientific or historical research purposes, or

(c) for statistical purposes.

Schedule 9 – Conditions for processing under Part 4

Paragraph 1

The data subject has given consent to the processing.

Paragraph 2

The processing is necessary—

(a) for the performance of a contract to which the data subject is a party, or

(b) in order to take steps at the request of the data subject prior to entering into a contract.

Paragraph 3

The processing is necessary for compliance with a legal obligation to which the controller is subject, other than an obligation imposed by contract.

Paragraph 4

The processing is necessary in order to protect the vital interests of the data subject or of another individual.

Paragraph 5

The processing is necessary—

(a) for the administration of justice,

(b) for the exercise of any functions of either House of Parliament,

(c) for the exercise of any functions conferred on a person by an enactment or rule of law,

(d) for the exercise of any functions of the Crown, a Minister of the Crown or a government department, or

(e) for the exercise of any other functions of a public nature exercised in the public interest by a person.

Paragraph 6

(1) The processing is necessary for the purposes of legitimate interests pursued by—

(a) the controller, or

(b) the third party or parties to whom the data is disclosed.

(2) Sub-paragraph (1) does not apply where the processing is unwarranted in any particular case because of prejudice to the rights and freedoms or legitimate interests of the data subject.

(3) In this paragraph, “third party”, in relation to personal data, means a person other than the data subject, the controller or a processor or other person authorised to process personal data for the controller or processor.

Schedule 10 – Conditions for sensitive processing under Part 4

Paragraph 1

The data subject has given consent to the processing.

Paragraph 2

The processing is necessary for the purposes of exercising or performing any right or obligation which is conferred or imposed by an enactment or rule of law on the controller in connection with employment.

Paragraph 3

The processing is necessary—

(a) in order to protect the vital interests of the data subject or of another person, in a case where—

(i) consent cannot be given by or on behalf of the data subject, or

(ii) the controller cannot reasonably be expected to obtain the consent of the data subject, or

(b) in order to protect the vital interests of another person, in a case where consent by or on behalf of the data subject has been unreasonably withheld.

Paragraph 4

(1) This condition is met if—

(a) the processing is necessary for the purposes of—

(i) protecting an individual from neglect or physical, mental or emotional harm, or

(ii) protecting the physical, mental or emotional well-being of an individual,

(b) the individual is—

(i) aged under 18, or

(ii) aged 18 or over and at risk,

(c) the processing is carried out without the consent of the data subject for one of the reasons listed in sub-paragraph (2), and

(d) the processing is necessary for reasons of substantial public interest.

(2) The reasons mentioned in sub-paragraph (1)(c) are—

(a) in the circumstances, consent to the processing cannot be given by the data subject;

(b) in the circumstances, the controller cannot reasonably be expected to obtain the consent of the data subject to the processing;

(c) the processing must be carried out without the consent of the data subject because obtaining the consent of the data subject would prejudice the provision of the protection mentioned in sub-paragraph (1)(a).

(3) For the purposes of this paragraph, an individual aged 18 or over is “at risk” if the controller has reasonable cause to suspect that the individual—

(a) has needs for care and support,

(b) is experiencing, or at risk of, neglect or physical, mental or emotional harm, and

(c) as a result of those needs is unable to protect himself or herself against the neglect or harm or the risk of it.

(4) In sub-paragraph (1)(a), the reference to the protection of an individual or of the well-being of an individual includes both protection relating to a particular individual and protection relating to a type of individual.

Paragraph 5

The information contained in the personal data has been made public as a result of steps deliberately taken by the data subject.

Paragraph 6

The processing—

(a) is necessary for the purpose of, or in connection with, any legal proceedings (including prospective legal proceedings),

(b) is necessary for the purpose of obtaining legal advice, or

(c) is otherwise necessary for the purposes of establishing, exercising or defending legal rights.

Paragraph 7

The processing is necessary—

(a) for the administration of justice,

(b) for the exercise of any functions of either House of Parliament,

(c) for the exercise of any functions conferred on any person by an enactment or rule of law, or

(d) for the exercise of any functions of the Crown, a Minister of the Crown or a government department.

Paragraph 8

(1) The processing is necessary for medical purposes and is undertaken by—

(a) a health professional, or

(b) a person who in the circumstances owes a duty of confidentiality which is equivalent to that which would arise if that person were a health professional.

(2) In this paragraph, “medical purposes” includes the purposes of preventative medicine, medical diagnosis, medical research, the provision of care and treatment and the management of healthcare services.

Paragraph 9

(1) The processing—

(a) is of sensitive personal data consisting of information as to racial or ethnic origin,

(b) is necessary for the purpose of identifying or keeping under review the existence or absence of equality of opportunity or treatment between persons of different racial or ethnic origins, with a view to enabling such equality to be promoted or maintained, and

(c) is carried out with appropriate safeguards for the rights and freedoms of data subjects.

(2) In this paragraph, “sensitive personal data” means personal data the processing of which constitutes sensitive processing (see section 86(7)).

UK General Data Protection Regulation (‘GDPR’)

Article 4 – Definitions

 

(1) ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;

(2) ‘processing’ means any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction;

[…]

(14) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;[…]

Article 5 – Principles relating to processing of personal data

(1) Personal data shall be:

[…]

(b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’); […]

Article 9 – Processing of special categories of personal data

(1) Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

(2) Paragraph 1 shall not apply if one of the following applies:

(a) the data subject has given explicit consent to the processing of those personal data for one or more specified purposes, except domestic law provides that the prohibition referred to in paragraph 1 may not be lifted by the data subject;

(b) processing is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment and social security and social protection law insofar as it is authorised by domestic law or a collective agreement pursuant to domestic law providing for appropriate safeguards for the fundamental rights and the interests of the data subject;

(c) processing is necessary to protect the vital interests of the data subject or of another natural person where the data subject is physically or legally incapable of giving consent;

(d) processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim and on condition that the processing relates solely to the members or to former members of the body or to persons who have regular contact with it in connection with its purposes and that the personal data are not disclosed outside that body without the consent of the data subjects;

(e) processing relates to personal data which are manifestly made public by the data subject;

(f) processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity;

(g) processing is necessary for reasons of substantial public interest, on the basis of domestic law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject;

(h) processing is necessary for the purposes of preventive or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment or the management of health or social care systems and services on the basis of domestic law or pursuant to contract with a health professional and subject to the conditions and safeguards referred to in paragraph 3;

(i) processing is necessary for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health or ensuring high standards of quality and safety of health care and of medicinal products or medical devices, on the basis of domestic law which provides for suitable and specific measures to safeguard the rights and freedoms of the data subject, in particular professional secrecy;

(j) processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) (as supplemented by section 19 of the 2018 Act) based on domestic law which shall be proportionate to the aim pursued, respect the essence of the right to data protection and provide for suitable and specific measures to safeguard the fundamental rights and the interests of the data subject.

(3) Personal data referred to in paragraph 1 may be processed for the purposes referred to in point (h) of paragraph 2 when those data are processed by or under the responsibility of a professional subject to the obligation of professional secrecy under domestic law or rules established by national competent bodies or by another person also subject to an obligation of secrecy under domestic law or rules established by national competent bodies.

(3A) In paragraph 3, ‘national competent bodies’ means competent bodies of the United Kingdom or a part of the United Kingdom.

[…]

(5) In the 2018 Act-

(a) section 10 makes provision about when the requirement in paragraph 2(b), (g), (h), (i) or (j) of this Article for authorisation by, or a basis in, domestic law is met;

(b) section 11(1) makes provision about when the processing of personal data is carried out in circumstances described in paragraph 3 of this Article.

Human Rights Act 1998

Section 1 – The Convention Rights

(1) In this Act “the Convention rights” means the rights and fundamental freedoms set out in—

(a) Articles 2 to 12 and 14 of the Convention,

(b) Articles 1 to 3 of the First Protocol, and

(c) Article 1 of the Thirteenth Protocol, as read with Articles 16 to 18 of the Convention.

(2) Those Articles are to have effect for the purposes of this Act subject to any designated derogation or reservation (as to which see sections 14 and 15).

(3) The Articles are set out in Schedule 1.

Section 6 – Acts of public authorities

(1) It is unlawful for a public authority to act in a way which is incompatible with a Convention right.

(2) Subsection (1) does not apply to an act if—

(a) as the result of one or more provisions of primary legislation, the authority could not have acted differently; or

(b) in the case of one or more provisions of, or made under, primary legislation which cannot be read or given effect in a way which is compatible with the Convention rights, the authority was acting so as to give effect to or enforce those provisions.

(3) In this section “public authority” includes—

(a) a court or tribunal, and

(b) any person certain of whose functions are functions of a public nature, […]

Schedule 1, Article 8 – Right to respect for private and family life

(1) Everyone has the right to respect for his private and family life, his home and his correspondence.

(2) There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

Police and Criminal Evidence Act 1984

Section 61 — Finger-printing

(1) Except as provided by this section no person’s fingerprints may be taken without the appropriate consent.

(2) Consent to the taking of a person’s fingerprints must be in writing if it is given at a time when he is at a police station.

(3) The fingerprints of a person detained at a police station may be taken without the appropriate consent if—

(a) he is detained in consequence of his arrest for a recordable offence; and

(b) he has not had his fingerprints taken in the course of the investigation of the offence by the police.

(3A) Where a person mentioned in paragraph (a) of subsection (3) or (4) has already had his fingerprints taken in the course of the investigation of the offence by the police, that fact shall be disregarded for the purposes of that subsection if–

(a) the fingerprints taken on the previous occasion do not constitute a complete set of his fingerprints; or

(b) some or all of the fingerprints taken on the previous occasion are not of sufficient quality to allow satisfactory analysis, comparison or matching (whether in the case in question or generally).

(4) The fingerprints of a person detained at a police station may be taken without the appropriate consent if—

(a) he has been charged with a recordable offence or informed that he will be reported for such an offence; and

(b) he has not had his fingerprints taken in the course of the investigation of the offence by the police.

(4A) The fingerprints of a person who has answered to bail at a court or police station may be taken without the appropriate consent at the court or station if–

(a) the court, or

(b) an officer of at least the rank of inspector,

authorises them to be taken.

(4B) A court or officer may only give an authorisation under subsection (4A) if–

(a) the person who has answered to bail has answered to it for a person whose fingerprints were taken on a previous occasion and there are reasonable grounds for believing that he is not the same person; or

(b) the person who has answered to bail claims to be a different person from a person whose fingerprints were taken on a previous occasion.

(5) An officer may give an authorisation under subsection (4A) above orally or in writing but, if he gives it orally, he shall confirm it in writing as soon as is practicable.

(5A) The fingerprints of a person may be taken without the appropriate consent if (before or after the coming into force of this subsection) he has been arrested for a recordable offence and released and—

(a) he has not had his fingerprints taken in the course of the investigation of the offence by the police; or

(b) he has had his fingerprints taken in the course of that investigation but

(i) subsection (3A)(a) or (b) above applies, or

(ii) subsection (5C) below applies.

(5B) The fingerprints of a person not detained at a police station may be taken without the appropriate consent if (before or after the coming into force of this subsection) he has been charged with a recordable offence or informed that he will be reported for such an offence and—

(a) he has not had his fingerprints taken in the course of the investigation of the offence by the police; or

(b) he has had his fingerprints taken in the course of that investigation but

(i) subsection (3A)(a) or (b) above applies, or

(ii) subsection (5C) below applies.

(5C) This subsection applies where—

(a) the investigation was discontinued but subsequently resumed, and

(b) before the resumption of the investigation the fingerprints were destroyed pursuant to section 63D(3) below.

(6) Subject to this section, the fingerprints of a person may be taken without the appropriate consent if (before or after the coming into force of this subsection)—

(a) he has been convicted of a recordable offence, or

(b) he has been given a caution in respect of a recordable offence which, at the time of the caution, he has admitted, and

16 either of the conditions mentioned in subsection (6ZA) below is met.

(6ZA) The conditions referred to in subsection (6) above are—

(a) the person has not had his fingerprints taken since he was convicted or cautioned;

(b) he has had his fingerprints taken since then but subsection (3A)(a) or (b) above applies.

(6ZB) Fingerprints may only be taken as specified in subsection (6) above with the authorisation of an officer of at least the rank of inspector.

(6ZC) An officer may only give an authorisation under subsection (6ZB) above if the officer is satisfied that taking the fingerprints is necessary to assist in the prevention or detection of crime.

(6A) A constable may take a person’s fingerprints without the appropriate consent if—

(a) the constable reasonably suspects that the person is committing or attempting to commit an offence, or has committed or attempted to commit an offence; and

(b) either of the two conditions mentioned in subsection (6B) is met.

(6B) The conditions are that—

(a) the name of the person is unknown to, and cannot be readily ascertained by, the constable;

(b) the constable has reasonable grounds for doubting whether a name furnished by the person as his name is his real name.

(6C) The taking of fingerprints by virtue of subsection (6A) does not count for any of the purposes of this Act as taking them in the course of the investigation of an offence by the police.

(6D) Subject to this section, the fingerprints of a person may be taken without the appropriate consent if—

(a) under the law in force in a country or territory outside England and Wales the person has been convicted of an offence under that law (whether before or after the coming into force of this subsection and whether or not he has been punished for it);

(b) the act constituting the offence would constitute a qualifying offence if done in England and Wales (whether or not it constituted such an offence when the person was convicted); and

(c) either of the conditions mentioned in subsection (6E) below is met.

(6E) The conditions referred to in subsection (6D)(c) above are—

(a) the person has not had his fingerprints taken on a previous occasion under subsection (6D) above;

(b) he has had his fingerprints taken on a previous occasion under that subsection but subsection (3A)(a) or (b) above applies.

(6F) Fingerprints may only be taken as specified in subsection (6D) above with the authorisation of an officer of at least the rank of inspector.

(6G) An officer may only give an authorisation under subsection (6F) above if the officer is satisfied that taking the fingerprints is necessary to assist in the prevention or detection of crime.

(7) Where a person’s fingerprints are taken without the appropriate consent by virtue of any power conferred by this section—

(a) before the fingerprints are taken, the person shall be informed of—

(i) the reason for taking the fingerprints;

(ii) the power by virtue of which they are taken; and

(iii) in a case where the authorisation of the court or an officer is required for the exercise of the power, the fact that the authorisation has been given; and

(b) those matters shall be recorded as soon as practicable after the fingerprints are taken.

(7A) If a person’s fingerprints are taken at a police station, or by virtue of subsection (4A), (6A) at a place other than a police station, whether with or without the appropriate consent—

(a) before the fingerprints are taken, an officer shall inform him that they may be the subject of a speculative search; and

(b) the fact that the person has been informed of this possibility shall be recorded as soon as is practicable after the fingerprints have been taken.

(8) If he is detained at a police station when the fingerprints are taken, the matters referred to in subsection (7)(a)(i) to (iii) above and, in the case falling within subsection (7A) above, the fact referred to in paragraph (b) of that subsection shall be recorded on his custody record.

(8B) Any power under this section to take the fingerprints of a person without the appropriate consent, if not otherwise specified to be exercisable by a constable, shall be exercisable by a constable.

(9) Nothing in this section—

(a) affects any power conferred by paragraph 18(2) of Schedule 2 to the Immigration Act 1971; or

(b) applies to a person arrested or detained under the terrorism provisions or detained under Part 1 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019.

(10) Nothing in this section applies to a person arrested under an extradition arrest power.

Section 62 – Intimate samples

(1) Subject to section 63B below an intimate sample may be taken from a person in police detention only—

(a) if a police officer of at least the rank of inspector authorises it to be taken; and

(b) if the appropriate consent is given.

(1A) An intimate sample may be taken from a person who is not in police detention but from whom, in the course of the investigation of an offence, two or more non-intimate samples suitable for the same means of analysis have been taken which have proved insufficient—

(a) if a police officer of at least the rank of inspector authorises it to be taken; and

(b) if the appropriate consent is given.

(2) An officer may only give an authorisation under subsection (1) or (1A) above if he has reasonable grounds—

(a) for suspecting the involvement of the person from whom the sample is to be taken in a recordable offence; and

(b) for believing that the sample will tend to confirm or disprove his involvement.

(2A) An intimate sample may be taken from a person where—

(a) two or more non-intimate samples suitable for the same means of analysis have been taken from the person under section 63(3E) below (persons convicted of offences outside England and Wales etc) but have proved insufficient;

(b) a police officer of at least the rank of inspector authorises it to be taken; and

(c) the appropriate consent is given.

(2B) An officer may only give an authorisation under subsection (2A) above if the officer is satisfied that taking the sample is necessary to assist in the prevention or detection of crime.

(3) An officer may give an authorisation under subsection (1) or (1A) or (2A) above orally or in writing but, if he gives it orally, he shall confirm it in writing as soon as is practicable.

(4) The appropriate consent must be given in writing.

(5) Before an intimate sample is taken from a person, an officer shall inform him of the following—

(a) the reason for taking the sample;

(b) the fact that authorisation has been given and the provision of this section under which it has been given; and

(c) if the sample was taken at a police station, the fact that the sample may be the subject of a speculative search.

(6) The reason referred to in subsection (5)(a) above must include, except in a case where the sample is taken under subsection (2A) above, a statement of the nature of the offence in which it is suspected that the person has been involved.

(7) After an intimate sample has been taken from a person, the following shall be recorded as soon as practicable—

(a) the matters referred to in subsection (5)(a) and (b) above;

(b) if the sample was taken at a police station, the fact that the person has been informed as specified in subsection (5)(c) above; and

(c) the fact that the appropriate consent was given.

(8) If an intimate sample is taken from a person detained at a police station, the matters required to be recorded by subsection (7) above shall be recorded in his custody record.

(9) In the case of an intimate sample which is a dental impression, the sample may be taken from a person only by a registered dentist.

(9A) In the case of any other form of intimate sample, except in the case of a sample of urine, the sample may be taken from a person only by—

(a) a registered medical practitioner; or

(b) a registered health care professional.

(10) Where the appropriate consent to the taking of an intimate sample from a person was refused without good cause, in any proceedings against that person for an offence—

(a) the court, in determining–

(ii) whether there is a case to answer; and

(aa) a judge, in deciding whether to grant an application made by the accused under paragraph 2 of Schedule 3 to the Crime and Disorder Act 1998 (applications for dismissal); and

(b) the court or jury, in determining whether that person is guilty of the offence charged,

may draw such inferences from the refusal as appear proper.

(11) Nothing in this section applies to the taking of a specimen for the purposes of any of the provisions of sections 4 to 11 of the Road Traffic Act 1988 or of sections 26 to 38 of the Transport and Works Act 1992.

(12) Nothing in this section applies to a person arrested or detained under the terrorism provisions; and subsection (1A) shall not apply where the non-intimate samples mentioned in that subsection were taken under paragraph 10 of Schedule 8 to the Terrorism Act 2000.

(13) Nothing in this section applies to a person detained under Part 1 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019; and subsection (1A) does not apply where the non-intimate samples mentioned in that subsection were taken under Part 2 of that Schedule.

Section 63 – Other samples

(1) Except as provided by this section, a non-intimate sample may not be taken from a person without the appropriate consent.

(2) Consent to the taking of a non-intimate sample must be given in writing.

(2A) A non-intimate sample may be taken from a person without the appropriate consent if two conditions are satisfied.

(2B) The first is that the person is in police detention in consequence of his arrest for a recordable offence.

(2C) The second is that—

(a) he has not had a non-intimate sample of the same type and from the same part of the body taken in the course of the investigation of the offence by the police, or

(b) he has had such a sample taken but it proved insufficient.

(3) A non-intimate sample may be taken from a person without the appropriate consent if—

(a) he is being held in custody by the police on the authority of a court; and

(b) an officer of at least the rank of inspector authorises it to be taken without the appropriate consent.

(3ZA) A non-intimate sample may be taken from a person without the appropriate consent if (before or after the coming into force of this subsection) he has been arrested for a recordable offence and released and—

(a) he has not had a nonintimate sample of the same type and from the same part of the body taken from him in the course of the investigation of the offence by the police; or

(b) he has had a non-intimate sample taken from him in the course of that investigation but—

(i) it was not suitable for the same means of analysis, or

(ii) it proved insufficient, or

(iii) subsection (3AA) below applies.

(3A) A non-intimate sample may be taken from a person (whether or not he is in police detention or held in custody by the police on the authority of a court) without the appropriate consent if he has been charged with a recordable offence or informed that he will be reported for such an offence and—

(a) he has not had a non-intimate sample taken from him in the course of the investigation of the offence by the police; or

(b) he has had a non-intimate sample taken from him in the course of that investigation but—

(i) it was not suitable for the same means of analysis, or

(ii) it proved insufficient, or

(iii) subsection (3AA) below applies; or

(c) he has had a non-intimate sample taken from him in the course of that investigation and—

(i) the sample has been destroyed pursuant to section 63R below or any other enactment, and

(ii) it is disputed, in relation to any proceedings relating to the offence, whether a DNA profile relevant to the proceedings is derived from the sample.

(3AA) This subsection applies where the investigation was discontinued but subsequently resumed, and before the resumption of the investigation—

(a) any DNA profile derived from the sample was destroyed pursuant to section 63D(3) below, and

(b) the sample itself was destroyed pursuant to section 63R(4), (5) or (12) below.

(3B) Subject to this section, a non-intimate sample may be taken from a person without the appropriate consent if (before or after the coming into force of this subsection)—

(a) he has been convicted of a recordable offence, or

(b) he has been given a caution in respect of a recordable offence which, at the time of the caution, he has admitted, and

either of the conditions mentioned in subsection (3BA) below is met.

(3BA) The conditions referred to in subsection (3B) above are—

(a) a non-intimate sample has not been taken from the person since he was convicted or cautioned;

(b) such a sample has been taken from him since then but—

(i) it was not suitable for the same means of analysis, or

(ii) it proved insufficient.

(3BB) A non-intimate sample may only be taken as specified in subsection (3B) above with the authorisation of an officer of at least the rank of inspector.

(3BC) An officer may only give an authorisation under subsection (3BB) above if the officer is satisfied that taking the sample is necessary to assist in the prevention or detection of crime.

(3C) A non-intimate sample may also be taken from a person without the appropriate consent if he is a person to whom section 2 of the Criminal Evidence (Amendment) Act 1997 applies (persons detained following acquittal on grounds of insanity or finding of unfitness to plead).

(3E) Subject to this section, a non-intimate sample may be taken without the appropriate consent from a person if—

(a) under the law in force in a country or territory outside England and Wales the person has been convicted of an offence under that law (whether before or after the coming into force of this subsection and whether or not he has been punished for it);

(b) the act constituting the offence would constitute a qualifying offence if done in England and Wales (whether or not it constituted such an offence when the person was convicted); and

(c) either of the conditions mentioned in subsection (3F) below is met.

(3F) The conditions referred to in subsection (3E)(c) above are—

(a) the person has not had a non-intimate sample taken from him on a previous occasion under subsection (3E) above;

(b) he has had such a sample taken from him on a previous occasion under that subsection but—

(i) the sample was not suitable for the same means of analysis, or

(ii) it proved insufficient.

(3G) A non-intimate sample may only be taken as specified in subsection (3E) above with the authorisation of an officer of at least the rank of inspector.

(3H) An officer may only give an authorisation under subsection (3G) above if the officer is satisfied that taking the sample is necessary to assist in the prevention or detection of crime.

 

(4) An officer may only give an authorisation under subsection (3) above if he has reasonable grounds—

(a) for suspecting the involvement of the person from whom the sample is to be taken in a recordable offence; and

(b) for believing that the sample will tend to confirm or disprove his involvement.

(5) An officer may give an authorisation under subsection (3) above orally or in writing but, if he gives it orally, he shall confirm it in writing as soon as is practicable.

(5A) An officer shall not give an authorisation under subsection (3) above for the taking from any person of a non-intimate sample consisting of a skin impression if–

(a) a skin impression of the same part of the body has already been taken from that person in the course of the investigation of the offence; and

(b) the impression previously taken is not one that has proved insufficient.

(6) Where a non-intimate sample is taken from a person without the appropriate consent by virtue of any power conferred by this section—

(a) before the sample is taken, an officer shall inform him of—

(i) the reason for taking the sample;

(ii) the power by virtue of which it is taken; and

(iii) in a case where the authorisation of an officer is required for the exercise of the power, the fact that the authorisation has been given; and

(b) those matters shall be recorded as soon as practicable after the sample is taken.

(7) The reason referred to in subsection (6)(a)(i) above must include, except in a case where the non-intimate sample is taken under subsection (3B) or (3E) above, a statement of the nature of the offence in which it is suspected that the person has been involved.

(8B) If a non-intimate sample is taken from a person at a police station, whether with or without the appropriate consent—

(a) before the sample is taken, an officer shall inform him that it may be the subject of a speculative search; and

(b) the fact that the person has been informed of this possibility shall be recorded as soon as practicable after the sample has been taken.

(9) If a non-intimate sample is taken from a person detained at a police station, the matters required to be recorded by subsection (6) or (8B) above shall be recorded in his custody record.

(9ZA) The power to take a non-intimate sample from a person without the appropriate consent shall be exercisable by any constable.

(9A) Subsection (3B) above shall not apply to –

(a) any person convicted before 10th April 1995 unless he is a person to whom section 1 of the Criminal Evidence (Amendment) Act 1997 applies (persons imprisoned or detained by virtue of pre-existing conviction for sexual offence etc.); or

(b) a person given a caution before 10th April 1995.

(10) Nothing in this section applies to a person arrested or detained under the terrorism provisions or detained under Part 1 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019.

(11) Nothing in this section applies to a person arrested under an extradition arrest power.

Section 63D – Destruction of fingerprints and DNA profiles

(1) This section applies to—

(a) fingerprints—

(i) taken from a person under any power conferred by this Part of this Act, or

(ii) taken by the police, with the consent of the person from whom they were taken, in connection with the investigation of an offence by the police, and

(b) a DNA profile derived from a DNA sample taken as mentioned in paragraph (a)(i) or (ii).

(2) Fingerprints and DNA profiles to which this section applies (“section 63D material”) must be destroyed if it appears to the responsible chief officer of police that—

(a) the taking of the fingerprint or, in the case of a DNA profile, the taking of the sample from which the DNA profile was derived, was unlawful, or

(b) the fingerprint was taken, or, in the case of a DNA profile, was derived from a sample taken, from a person in connection with that person’s arrest and the arrest was unlawful or based on mistaken identity.

(3) In any other case, section 63D material must be destroyed unless it is retained under any power conferred by sections 63E to 63O (including those sections as applied by section 63P).

(4) Section 63D material which ceases to be retained under a power mentioned in subsection (3) may continue to be retained under any other such power which applies to it.

(5) Nothing in this section prevents a speculative search, in relation to section 63D material, from being carried out within such time as may reasonably be required for the search if the responsible chief officer of police considers the search to be desirable.

Section 63E – Retention of section 63D material pending investigation or proceedings

(1) This section applies to section 63D material taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of an offence in which it is suspected that the person to whom the material relates has been involved.

(2) The material may be retained until the conclusion of the investigation of the offence or, where the investigation gives rise to proceedings against the person for the offence, until the conclusion of those proceedings.

Section 63F – Retention of section 63D material: persons arrested for or charged with a qualifying offence

(1) This section applies to section 63D material which—

(a) relates to a person who is arrested for, or charged with, a qualifying offence but is not convicted of that offence, and

(b) was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence.

(2) If the person has previously been convicted of a recordable offence which is not an excluded offence, or is so convicted before the material is required to be destroyed by virtue of this section, the material may be retained indefinitely.

(2A) In subsection (2), references to a recordable offence include an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence if done in England and Wales (and, in the application of subsection (2) where a person has previously been convicted, this applies whether or not the act constituted such an offence when the person was convicted).

(3) Otherwise, material falling within subsection (4), (5) or (5A) may be retained until the end of the retention period specified in subsection (6).

(4) Material falls within this subsection if it—

(a) relates to a person who is charged with a qualifying offence but is not convicted of that offence, and

(b) was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence.

(5) Material falls within this subsection if—

(a) it relates to a person who is arrested for a qualifying offence, other than a terrorism-related qualifying offence, but is not charged with that offence,

(b) it was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence, and

(c) the Commissioner for the Retention and Use of Biometric Material has consented under section 63G to the retention of the material.

(5A) Material falls within this subsection if—

(a) it relates to a person who is arrested for a terrorism-related qualifying offence but is not charged with that offence, and

(b) it was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence.

(6) The retention period is—

(a) in the case of fingerprints, the period of 3 years beginning with the date on which the fingerprints were taken, and

(b) in the case of a DNA profile, the period of 3 years beginning with the date on which the DNA sample from which the profile was derived was taken (or, if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken).

(7) The responsible chief officer of police or a specified chief officer of police may apply to a District Judge (Magistrates’ Courts) for an order extending the retention period.

(8) An application for an order under subsection (7) must be made within the period of 3 months ending on the last day of the retention period.

(9) An order under subsection (7) may extend the retention period by a period which—

(a) begins with the end of the retention period, and

(b) ends with the end of the period of 2 years beginning with the end of the retention period.

(10) The following persons may appeal to the Crown Court against an order under subsection (7), or a refusal to make such an order—

(a) the responsible chief officer of police;

(b) a specified chief officer of police;

(c) the person from whom the material was taken.

(11) In this section—

“excluded offence”, in relation to a person, means a recordable offence—

(a) which—

(i) is not a qualifying offence,

(ii) is the only recordable offence of which the person has been convicted, and

(iii) was committed when the person was aged under 18, and

(b) for which the person was not given a relevant custodial sentence of 5 years or more,

“relevant custodial sentence” has the meaning given by section 63K(6),

“a specified chief officer of police” means—

(a) the chief officer of the police force of the area in which the person from whom the material was taken resides, or

(b) a chief officer of police who believes that the person is in, or is intending to come to, the chief officer’s police area.

“terrorism-related qualifying offence” means—

(a) an offence for the time being listed in section 41(1) of the Counter-Terrorism Act 2008 (see section 65A(2)(r) below), or

(b) an ancillary offence, as defined by section 65A(5) below, relating to an offence for the time being listed in section 41(1) of that Act.

(12) For the purposes of the definition of “excluded offence” in subsection (11)—

(a) references to a recordable offence or a qualifying offence include an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence or (as the case may be) a qualifying offence if done in England and Wales (whether or not it constituted such an offence when the person was convicted), and

(b) in the application of paragraph (b) of that definition in relation to an offence under the law of a country or territory outside England and Wales, the reference to a relevant custodial sentence of 5 years or more is to be read as a reference to a sentence of imprisonment or other form of detention of 5 years or more.

Section 63G – Retention of section 63D material by virtue of section 63F(5): consent of Commissioner

(1) The responsible chief officer of police may apply under subsection (2) or (3) to the Commissioner for the Retention and Use of Biometric Material for consent to the retention of section 63D material which falls within section 63F(5)(a) and (b).

(2) The responsible chief officer of police may make an application under this subsection if the responsible chief officer of police considers that the material was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of an offence where any alleged victim of the offence was, at the time of the offence—

(a) under the age of 18,

(b) a vulnerable adult, or

(c) associated with the person to whom the material relates.

(3) The responsible chief officer of police may make an application under this subsection if the responsible chief officer of police considers that—

(a) the material is not material to which subsection (2) relates, but

(b) the retention of the material is necessary to assist in the prevention or detection of crime.

(4) The Commissioner may, on an application under this section, consent to the retention of material to which the application relates if the Commissioner considers that it is appropriate to retain the material.

(5) But where notice is given under subsection (6) in relation to the application, the Commissioner must, before deciding whether or not to give consent, consider any representations by the person to whom the material relates which are made within the period of 28 days beginning with the day on which the notice is given.

(6) The responsible chief officer of police must give to the person to whom the material relates notice of—

(a) an application under this section, and

(b) the right to make representations.

(7) A notice under subsection (6) may, in particular, be given to a person by—

(a) leaving it at the person’s usual or last known address (whether residential or otherwise),

(b) sending it to the person by post at that address, or

(c) sending it to the person by email or other electronic means.

(8) The requirement in subsection (6) does not apply if the whereabouts of the person to whom the material relates is not known and cannot, after reasonable inquiry, be ascertained by the responsible chief officer of police.

(9) An application or notice under this section must be in writing.

(10) In this section—

“victim” includes intended victim,

“vulnerable adult” means a person aged 18 or over whose ability to protect himself or herself from violence, abuse or neglect is significantly impaired through physical or mental disability or illness, through old age or otherwise,

and the reference in subsection (2)(c) to a person being associated with another person is to be read in accordance with section 62(3) to (7) of the Family Law Act 1996.

Section 63H – Retention of section 63D material: persons arrested for or charged with a minor offence

(1) This section applies to section 63D material which—

(a) relates to a person who—

(i) is arrested for or charged with a recordable offence other than a qualifying offence,

(ii) if arrested for or charged with more than one offence arising out of a single course of action, is not also arrested for or charged with a qualifying offence, and

(iii) is not convicted of the offence or offences in respect of which the person is arrested or charged, and

(b) was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence or offences in respect of which the person is arrested or charged.

(2) If the person has previously been convicted of a recordable offence which is not an excluded offence, the material may be retained indefinitely.

(2A) In subsection (2), the reference to a recordable offence includes an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence if done in England and Wales (whether or not it constituted such an offence when the person was convicted).

(3) In this section “excluded offence” has the meaning given by section 63F (11) (read with section 63F(12)).

Section 63I – Retention of material: persons convicted of a recordable offence

(1) This section applies, subject to subsection (3), to—

(a) section 63D material which—

(i) relates to a person who is convicted of a recordable offence, and

(ii) was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence, or

(b) material taken under section 61(6) or 63(3B) which relates to a person who is convicted of a recordable offence.

(2) The material may be retained indefinitely.

(3) This section does not apply to section 63D material to which section 63K applies.

Section 63J – Retention of material: persons convicted of an offence outside England and Wales: other cases

(1) This section applies to material falling within subsection (2) relating to a person who is convicted of an offence under the law of any country or territory outside England and Wales.

(2) Material falls within this subsection if it is—

(a) fingerprints taken from the person under section 61(6D) (power to take fingerprints without consent in relation to offences outside England and Wales), or

(b) a DNA profile derived from a DNA sample taken from the person under section 62(2A) or 63(3E) (powers to take intimate and non-intimate samples in relation to offences outside England and Wales).

(3) The material may be retained indefinitely.

Section 63K – Retention of section 63D material: exception for persons under 18 convicted of first minor offence

(1) This section applies to section 63D material which—

(a) relates to a person who—

(i) is convicted of a recordable offence other than a qualifying offence,

(ii) has not previously been convicted of a recordable offence, and

(iii) is aged under 18 at the time of the offence, and

(b) was taken (or, in the case of a DNA profile, derived from a sample taken) in connection with the investigation of the offence.

(1A) In subsection (1)(a)(ii), the reference to a recordable offence includes an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence if done in England and Wales (whether or not it constituted such an offence when the person was convicted).

(2) Where the person is given a relevant custodial sentence of less than 5 years in respect of the offence, the material may be retained until the end of the period consisting of the term of the sentence plus 5 years.

(3) Where the person is given a relevant custodial sentence of 5 years or more in respect of the offence, the material may be retained indefinitely.

(4) Where the person is given a sentence other than a relevant custodial sentence in respect of the offence, the material may be retained until—

(a) in the case of fingerprints, the end of the period of 5 years beginning with the date on which the fingerprints were taken, and

(b) in the case of a DNA profile, the end of the period of 5 years beginning with—

(i) the date on which the DNA sample from which the profile was derived was taken, or

(ii) if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken.

(5) But if, before the end of the period within which material may be retained by virtue of this section, the person is again convicted of a recordable offence, the material may be retained indefinitely.

(5A) In subsection (5), the reference to a recordable offence includes an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence if done in England and Wales.

(6) In this section, “relevant custodial sentence” means any of the following—

(a) a custodial sentence within the meaning of section 76 of the Powers of Criminal Courts (Sentencing) Act 2000 or section 222 of the Sentencing Code;

(b) a sentence of a period of detention and training (excluding any period of supervision) which a person is liable to serve under an order under section 211 of the Armed Forces Act 2006 or a secure training order.

Section 63L – Retention of section 63D material: persons given a penalty notice

(1) This section applies to section 63D material which—

(a) relates to a person who is given a penalty notice under section 2 of the Criminal Justice and Police Act 2001 and in respect of whom no proceedings are brought for the offence to which the notice relates, and

(b) was taken (or, in the case of a DNA profile, derived from a sample taken) from the person in connection with the investigation of the offence to which the notice relates.

(2) The material may be retained—

(a) in the case of fingerprints, for a period of 2 years beginning with the date on which the fingerprints were taken,

(b) in the case of a DNA profile, for a period of 2 years beginning with—

(i) the date on which the DNA sample from which the profile was derived was taken, or

(ii) if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken.

Section 63M – Retention of section 63D material for purposes of national security

(1) Section 63D material may be retained for as long as a national security determination made by a chief officer of police has effect in relation to it.

(2) A national security determination is made if a chief officer of police determines that it is necessary for any section 63D material to be retained for the purposes of national security.

(3) A national security determination—

(a) must be made in writing,

(b) has effect for a maximum of 5 years beginning with the date on which it is made, and

(c) may be renewed.

Section 63N – Retention of section 63D material given voluntarily

(1) This section applies to the following section 63D material—

(a) fingerprints taken with the consent of the person from whom they were taken, and

(b) a DNA profile derived from a DNA sample taken with the consent of the person from whom the sample was taken.

(2) Material to which this section applies may be retained until it has fulfilled the purpose for which it was taken or derived.

(3) Material to which this section applies which relates to—

(a) a person who is convicted of a recordable offence, or

(b) a person who has previously been convicted of a recordable offence (other than a person who has only one exempt conviction), may be retained indefinitely.

(4) For the purposes of subsection (3)(b), a conviction is exempt if it is in respect of a recordable offence, other than a qualifying offence, committed when the person is aged under 18.

(5) The reference to a recordable offence in subsection (3)(a) includes an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence if done in England and Wales.

(6) The reference to a recordable offence in subsections (3)(b) and (4), and the reference to a qualifying offence in subsection (4), includes an offence under the law of a country or territory outside England and Wales where the act constituting the offence would constitute a recordable offence or (as the case may be) a qualifying offence if done in England and Wales (whether or not it constituted such an offence when the person was convicted).

Section 63O – Retention of section 63D material with consent

(1) This section applies to the following material—

(a) fingerprints (other than fingerprints taken under section 61(6A)) to which section 63D applies, and

(b) a DNA profile to which section 63D applies.

(2) If the person to whom the material relates consents to material to which this section applies being retained, the material may be retained for as long as that person consents to it being retained.

(3) Consent given under this section—

(a) must be in writing, and

(b) can be withdrawn at any time.

Section 63R – Destruction of samples

(1) This section applies to samples—

(a) taken from a person under any power conferred by this Part of this Act, or

(b) taken by the police, with the consent of the person from whom they were taken, in connection with the investigation of an offence by the police.

(2) Samples to which this section applies must be destroyed if it appears to the responsible chief officer of police that—

(a) the taking of the samples was unlawful, or

(b) the samples were taken from a person in connection with that person’s arrest and the arrest was unlawful or based on mistaken identity.

(3) Subject to this, the rule in subsection (4) or (as the case may be) (5) applies.

(4) A DNA sample to which this section applies must be destroyed—

(a) as soon as a DNA profile has been derived from the sample, or

(b) if sooner, before the end of the period of 6 months beginning with the date on which the sample was taken.

(5) Any other sample to which this section applies must be destroyed before the end of the period of 6 months beginning with the date on which it was taken.

(6) The responsible chief officer of police may apply to a District Judge (Magistrates’ Courts) for an order to retain a sample to which this section applies beyond the date on which the sample would otherwise be required to be destroyed by virtue of subsection (4) or (5) if—

(a) the sample was taken from a person in connection with the investigation of a qualifying offence, and

(b) the responsible chief officer of police considers that the condition in subsection (7) is met.

(7) The condition is that, having regard to the nature and complexity of other material that is evidence in relation to the offence, the sample is likely to be needed in any proceedings for the offence for the purposes of—

(a) disclosure to, or use by, a defendant, or

(b) responding to any challenge by a defendant in respect of the admissibility of material that is evidence on which the prosecution proposes to rely.

(8) An application under subsection (6) must be made before the date on which the sample would otherwise be required to be destroyed by virtue of subsection (4) or (5).

(9) If, on an application made by the responsible chief officer of police under subsection (6), the District Judge (Magistrates’ Courts) is satisfied that the condition in subsection (7) is met, the District Judge may make an order under this subsection which—

(a) allows the sample to be retained for a period of 12 months beginning with the date on which the sample would otherwise be required to be destroyed by virtue of subsection (4) or (5), and

(b) may be renewed (on one or more occasions) for a further period of not more than 12 months from the end of the period when the order would otherwise cease to have effect.

(10) An application for an order under subsection (9) (other than an application for renewal)—

(a) may be made without notice of the application having been given to the person from whom the sample was taken, and

(b) may be heard and determined in private in the absence of that person.

(11) A sample retained by virtue of an order under subsection (9) must not be used other than for the purposes of any proceedings for the offence in connection with which the sample was taken.

(12) A sample that ceases to be retained by virtue of an order under subsection (9) must be destroyed.

(13) Nothing in this section prevents a speculative search, in relation to samples to which this section applies, from being carried out within such time as may reasonably be required for the search if the responsible chief officer of police considers the search to be desirable.

Section 64A – Photographing of suspects etc.

(1) A person who is detained at a police station may be photographed—

(a) with the appropriate consent; or

(b) if the appropriate consent is withheld or it is not practicable to obtain it, without it.

(1A) A person falling within subsection (1B) below may, on the occasion of the relevant event referred to in subsection (1B), be photographed elsewhere than at a police station—

(a) with the appropriate consent; or

(b) if the appropriate consent is withheld or it is not practicable to obtain it, without it.

(1B) A person falls within this subsection if he has been—

(a) arrested by a constable for an offence;

(b) taken into custody by a constable after being arrested for an offence by a person other than a constable;

(c) made subject to a requirement to wait with a community support officer or a community support volunteer under paragraph 7 of Schedule 3B to the Police Reform Act 2002 (“the 2002 Act”);

(ca) given a direction by a constable under section 35 of the Anti-social Behaviour, Crime and Policing Act 2014;

(d) given a penalty notice by a constable under Chapter 1 of Part 1 of the Criminal Justice and Police Act 2001, a penalty notice by a constable under section 444A of the Education Act 1996, or a fixed penalty notice by a constable in uniform under section 54 of the Road Traffic Offenders Act 1988;

(e) given a fixed penalty notice by a community support officer or community support volunteer who is authorised to give the notice by virtue of his or her designation under section 38 of the Police Reform Act 2002;

(f) given a notice in relation to a relevant fixed penalty offence (within the meaning of paragraph 1 of Schedule 5 to the 2002 Act) by an accredited person by virtue of accreditation specifying that that paragraph applies to him; or

(g) given a notice in relation to a relevant fixed penalty offence (within the meaning of Schedule 5A to the 2002 Act) by an accredited inspector by virtue of accreditation specifying that paragraph 1 of Schedule 5A to the 2002 Act applies to him.

(2) A person proposing to take a photograph of any person under this section—

(a) may, for the purpose of doing so, require the removal of any item or substance worn on or over the whole or any part of the head or face of the person to be photographed; and

(b) if the requirement is not complied with, may remove the item or substance himself.

(3) Where a photograph may be taken under this section, the only persons entitled to take the photograph are constables.

(4) A photograph taken under this section—

(a) may be used by, or disclosed to, any person for any purpose related to the prevention or detection of crime, the investigation of an offence or the conduct of a prosecution or to the enforcement of a sentence; and

(b) after being so used or disclosed, may be retained but may not be used or disclosed except for a purpose so related.

(5) In subsection (4)—

(a) the reference to crime includes a reference to any conduct which—

(i) constitutes one or more criminal offences (whether under the law of a part of the United Kingdom or of a country or territory outside the United Kingdom); or

(ii) is, or corresponds to, any conduct which, if it all took place in any one part of the United Kingdom, would constitute one or more criminal offences; and

(b) the references to an investigation and to a prosecution include references, respectively, to any investigation outside the United Kingdom of any crime or suspected crime and to a prosecution brought in respect of any crime in a country or territory outside the United Kingdom; and

(c) “sentence” includes any order made by a court in England and Wales when dealing with an offender in respect of his offence.

(6) References in this section to taking a photograph include references to using any process by means of which a visual image may be produced; and references to photographing a person shall be construed accordingly.

(6A) In this section, a “photograph” includes a moving image, and corresponding expressions shall be construed accordingly.

(7) Nothing in this section applies to a person arrested under an extradition arrest power.

Section 65 – Part V—supplementary

(1) In this Part of this Act—

“analysis”, in relation to a skin impression, includes comparison and matching;

“appropriate consent” means —

(a) in relation to a person who [has attained the age of 18 years, the consent of that person;

(b) in relation to a person who has not attained that age but has attained the age of 14 years, the consent of that person and his parent or guardian; and

(c) in relation to a person who has not attained the age of 14 years, the consent of his parent or guardian;

“DNA profile” means any information derived from a DNA sample;

“DNA sample” means any material that has come from a human body and consists of or includes human cells;

“extradition arrest power” means any of the following—

(a) a Part 1 warrant (within the meaning given by the Extradition Act 2003) in respect of which a certificate under section 2 of that Act has been issued;

(b) section 5 of that Act;

(c) a warrant issued under section 71 of that Act;

(d) a provisional warrant (within the meaning given by that Act);

(e) section 74A of that Act;

“fingerprints”, in relation to any person, means a record (in any form and produced by any method) of the skin pattern and other physical characteristics or features of–

(a) any of that person’s fingers; or

(b) either of his palms;

“intimate sample” means—

(a) a sample of blood, semen or any other tissue fluid, urine or pubic hair;

(b) a dental impression;

(c) a swab taken from any part of a person’s genitals (including pubic hair) or from a person’s body orifice other than the mouth;

“intimate search” means a search which consists of the physical examination of a person’s body orifices other than the mouth;

“non-intimate sample” means—

(a) a sample of hair other than pubic hair;

(b) a sample taken from a nail on from under a nail;

(c) a swab taken from any part of a person’s body other than a part from which a swab taken would be an intimate sample;

(d) saliva;

(e) a skin impression;

“offence”, in relation to any country or territory outside England and Wales, includes an act punishable under the law of that country or territory, however it is described;

“registered dentist” has the same meaning as in the Dentists Act 1984;

“skin impression” , in relation to any person, means any record (other than a fingerprint) which is a record (in any form and produced by any method) of the skin pattern and other physical characteristics or features of the whole or any part of his foot or of any other part of his body;

“registered health care professional” means a person (other than a medical practitioner) who is—

(a) a registered nurse; or

(b) a registered member of a health care profession which is designated for the purposes of this paragraph by an order made by the Secretary of State;

“the responsible chief officer of police”, in relation to material to which section 63D or 63R applies, means the chief officer of police for the police area—

(a) in which the material concerned was taken, or

(b) in the case of a DNA profile, in which the sample from which the DNA profile was derived was taken;

“section 63D material” means fingerprints or DNA profiles to which section 63D applies;

“speculative search”, in relation to a person’s fingerprints or samples, means such a check against other fingerprints or samples or against information derived from other samples as is referred to in section 63A(1) above;

“sufficient” and “insufficient”, in relation to a sample, means (subject to subsection (2) below) sufficient or insufficient (in point of quantity or quality) for the purpose of enabling information to be produced by the means of analysis used or to be used in relation to the sample.

“the terrorism provisions” means section 41 of the Terrorism Act 2000, and any provision of Schedule 7 to that Act conferring a power of detention; and

“terrorism” has the meaning given in section 1 of that Act.

“terrorist investigation” has the meaning given by section 32 of that Act;

(1A) A health care profession is any profession mentioned in section 60(2) of the Health Act 1999 other than the profession of practising medicine and the profession of nursing.

(1B) An order under subsection (1) shall be made by statutory instrument and shall be subject to annulment in pursuance of a resolution of either House of Parliament.

(2) References in this Part of this Act to a sample’s proving insufficient include references to where, as a consequence of–

(a) the loss, destruction or contamination of the whole or any part of the sample,

(b) any damage to the whole or a part of the sample, or

(c) the use of the whole or a part of the sample for an analysis which produced no results or which produced results some or all of which must be regarded, in the circumstances, as unreliable, the sample has become unavailable or insufficient for the purpose of enabling information, or information of a particular description, to be obtained by means of analysis of the sample.

(2A) In subsection (2), the reference to the destruction of a sample does not include a reference to the destruction of a sample under section 63R (requirement to destroy samples).

(2B) Any reference in sections 63F, 63H, 63P or 63U to a person being charged with an offence includes a reference to a person being informed that the person will be reported for an offence.

(3) For the purposes of this Part, a person has in particular been convicted of an offence under the law of a country or territory outside England and Wales if—

(a) a court exercising jurisdiction under the law of that country or territory has made in respect of such an offence a finding equivalent to a finding that the person is not guilty by reason of insanity; or

(b) such a court has made in respect of such an offence a finding equivalent to a finding that the person is under a disability and did the act charged against him in respect of the offence.

Section 65A – “Qualifying offence”

(1) In this Part, “qualifying offence” means—

(a) an offence specified in subsection (2) below, or

(b) an ancillary offence relating to such an offence.

(2) The offences referred to in subsection (1)(a) above are—

(a) murder;

(b) manslaughter;

(c) false imprisonment;

(d) kidnapping;

(da) an offence of indecent exposure;

(db) an offence under section 4 of the Vagrancy Act 1824, committed by a person by wilfully, openly, lewdly, and obscenely exposing his person with intent to insult any female;

(dc) an offence under section 28 of the Town Police Clauses Act 1847, committed by a person by wilfully and indecently exposing his person;

(e) an offence under section 4, 16, 18, 20 to 24 or 47 of the Offences Against the Person Act 1861;

(f) an offence under section 2 or 3 of the Explosive Substances Act 1883;

(fa) an offence under section 1 of the Infant Life (Preservation) Act 1929;

(g) an offence under section 1 of the Children and Young Persons Act 1933;

(ga) an offence under section 1 of the Infanticide Act 1938;

(gb) an offence under section 12 or 13 of the Sexual Offences Act 1956, other than an offence committed by a person where the other person involved in the conduct constituting the offence consented to it and was aged 16 or over;

(gc) an offence under any other section of that Act, other than sections 18 and 32;

(gd) an offence under section 128 of the Mental Health Act 1959;

(ge) an offence under section 1 of the Indecency with Children Act 196;

(h) an offence under section 4(1) of the Criminal Law Act 1967 committed in relation to murder;

(ha) an offence under section 5 of the Sexual Offences Act 1967;

(i) an offence under sections 16 to 18 of the Firearms Act 1968;

(j) an offence under [section 8, 9 or 10 of the Theft Act 1968 or an offence under section 12A of that Act involving an accident which caused a person’s death;

(ja) an offence under section 1(1) of the Genocide Act 1969;

(k) an offence under section 1 of the Criminal Damage Act 1971 required to be charged as arson;

(ka) an offence under section 54 of the Criminal Law Act 1977;

(l) an offence under section 1 of the Protection of Children Act 1978;

(m) an offence under section 1 of the Aviation Security Act 1982;

(n) an offence under section 2 of the Child Abduction Act 1984;

(na) an offence under section 1 of the Prohibition of Female Circumcision Act 1985;

(nb) an offence under section 1 of the Public Order Act 1986;

(o) an offence under section 9 of the Aviation and Maritime Security Act 1990;

(oa) an offence under section 3 of the Sexual Offences (Amendment) Act 2000;

(ob) an offence under section 51 of the International Criminal Court Act 2001;

(oc) an offence under section 1, 2 or 3 of the Female Genital Mutilation Act 2003;

(p) an offence under any of [sections 1 to 19, 25, 26, 30 to 41, 47 to 50, 52, 53, 57 to 59A, 61 to 67, 69 and 70 of the Sexual Offences Act 2003;

(q) an offence under section 5 of the Domestic Violence, Crime and Victims Act 2004;

(r) an offence for the time being listed in section 41(1) of the Counter-Terrorism Act 2008;

(s) an offence under section 2 of the Modern Slavery Act 2015 (human trafficking) ;

(t) an offence under paragraph 1 of Schedule 4 to the Space Industry Act 2018.

(3) The Secretary of State may by order made by statutory instrument amend subsection (2) above.

(4) A statutory instrument containing an order under subsection (3) above shall not be made unless a draft of the instrument has been laid before, and approved by resolution of, each House of Parliament.

(5) In subsection (1)(b) above “ancillary offence”, in relation to an offence, means—

(a) aiding, abetting, counselling or procuring the commission of the offence;

(b) an offence under Part 2 of the Serious Crime Act 2007 (encouraging or assisting crime) in relation to the offence (including, in relation to times before the commencement of that Part, an offence of incitement);

(c) attempting or conspiring to commit the offence.

Terrorism Act 2000

Schedule 7 – Port and Border Controls

Paragraph 2

(1) An examining officer may question a person to whom this paragraph applies for the purpose of determining whether he appears to be a person falling within section 40(1)(b).

(2) This paragraph applies to a person if—

(a) he is at a port or in the border area, and

(b) the examining officer believes that the person’s presence at the port or in the area is connected with his entering or leaving Great Britain or Northern Ireland or his travelling by air within Great Britain or within Northern Ireland.

(3) This paragraph also applies to a person on a ship or aircraft which has arrived at any place in Great Britain or Northern Ireland (whether from within or outside Great Britain or Northern Ireland).

(4) An examining officer may exercise his powers under this paragraph whether or not he has grounds for suspecting that a person falls within section 40(1)(b).

[…]

Paragraph 6

(1) For the purposes of exercising a power under paragraph 2 or 3 an examining officer may—

(a) stop a person or vehicle;

(b) detain a person.

(2) For the purpose of detaining a person under this paragraph, an examining officer may authorise the person’s removal from a ship, aircraft or vehicle.

(3) Where a person is detained under this paragraph the provisions of Parts 1 and 1A of Schedule 8 (treatment and review of detention) shall apply.

Schedule 8 – Detention

Paragraph 2 — Identification

(1) An authorised person may take any steps which are reasonably necessary for–

(a) photographing the detained person,

(b) measuring him, or

(c) identifying him.

(2) In sub-paragraph (1) “authorised person” means any of the following–

(a) a constable,

(b) a prison officer,

(c) a person authorised by the Secretary of State, and

(d) in the case of a person detained under Schedule 7, an examining officer.

(3) This paragraph does not confer the power to take–

(a) fingerprints, non-intimate samples or intimate samples (within the meaning given by paragraph 15 below), […].

 

Paragraph 10

[…]

(2) Fingerprints may be taken from the detained person only if they are taken by a constable–

(a) with the appropriate consent given in writing, or

(b) without that consent under sub-paragraph (4).

[…]

(4) Fingerprints or a non-intimate sample may be taken from the detained person without the appropriate consent only if–

(a) he is detained at a police station and a police officer of at least the rank of superintendent authorises the fingerprints or sample to be taken, or

(b) he has been convicted of a recordable offence and, where a non-intimate sample is to be taken, he was convicted of the offence on or after 10th April 1995 (or 29th July 1996 where the non-intimate sample is to be taken in Northern Ireland).

[…]

(6) Subject to sub-paragraph (6A) an officer may give an authorisation under sub-paragraph (4)(a) or (5)(c) only if–

(a) in the case of a person detained under section 41, the officer reasonably suspects that the person has been involved in an offence under any of the provisions mentioned in section 40(1)(a), and the officer reasonably believes that the fingerprints or sample will tend to confirm or disprove his involvement, or

(b) in any case in which an authorisation under that sub-paragraph may be given, the officer is satisfied that the taking of the fingerprints or sample from the person is necessary in order to assist in determining whether he falls within section 40(1)(b).

(6A) An officer may also give an authorisation under sub-paragraph (4)(a) for the taking of fingerprints if—

(a) he is satisfied that the fingerprints of the detained person will facilitate the ascertainment of that person’s identity; and

(b) that person has refused to identify himself or the officer has reasonable grounds for suspecting that that person is not who he claims to be.

 

Paragraph 20A

(1) This paragraph applies to—

(a) fingerprints taken under paragraph 10,

(b) a DNA profile derived from a DNA sample taken under paragraph 10 or 12,

(c) relevant physical data taken or provided by virtue of paragraph 20, and

(d) a DNA profile derived from a DNA sample taken by virtue of paragraph 20.

(2) Fingerprints, relevant physical data and DNA profiles to which this paragraph applies (“paragraph 20A material”) must be destroyed if it appears to the responsible chief officer of police that—

(a) the taking or providing of the material or, in the case of a DNA profile, the taking of the sample from which the DNA profile was derived, was unlawful, or

(b) the material was taken or provided, or (in the case of a DNA profile) was derived from a sample taken, from a person in connection with that person’s arrest under section 41 and the arrest was unlawful or based on mistaken identity.

(3) In any other case, paragraph 20A material must be destroyed unless it is retained under any power conferred by paragraphs 20B to 20E.

(4) Paragraph 20A material which ceases to be retained under a power mentioned in sub-paragraph (3) may continue to be retained under any other such power which applies to it.

(5) Nothing in this paragraph prevents a relevant search, in relation to paragraph 20A material, from being carried out within such time as may reasonably be required for the search if the responsible chief officer of police considers the search to be desirable.

(6) For the purposes of sub-paragraph (5), “a relevant search” is a search carried out for the purpose of checking the material against—

(a) other fingerprints or samples taken under paragraph 10 or 12 or a DNA profile derived from such a sample,

[…]

(d) material to which section 18 of the Counter-Terrorism Act 2008 applies,

(e) any of the fingerprints, data or samples obtained under paragraph 1 or 4 of Schedule 6 to the Terrorism Prevention and Investigation Measures Act 2011, or information derived from such samples,

(ea) any of the fingerprints, data or samples obtained under or by virtue of paragraph 34 or 42 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019, or information derived from such samples,

(f) any of the fingerprints, samples and information mentioned in section 63A(1)(a) and (b) of the Police and Criminal Evidence Act 1984 (checking of fingerprints and samples), […]

 

Paragraph 20B

(1) This paragraph applies to paragraph 20A material relating to a person who is detained under section 41.

(2) In the case of a person who has previously been convicted of a recordable offence (other than a single exempt conviction), or an offence in Scotland which is punishable by imprisonment, or is so convicted before the end of the period within which the material may be retained by virtue of this paragraph, the material may be retained indefinitely.

(2A) In sub-paragraph (2) —

(a) the reference to a recordable offence includes an offence under the law of a country or territory outside the United Kingdom where the act constituting the offence would constitute—

(i) a recordable offence under the law of England and Wales if done there, […]

(3) In the case of a person who has no previous convictions, or only one exempt conviction, the material may be retained until the end of the retention period specified in sub-paragraph (4).

(4) The retention period is—

(a) in the case of fingerprints or relevant physical data, the period of 3 years beginning with the date on which the fingerprints or relevant physical data were taken or provided, and

(b) in the case of a DNA profile, the period of 3 years beginning with the date on which the DNA sample from which the profile was derived was taken (or, if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken).

(5) The responsible chief officer of police or a specified chief officer of police may apply to a relevant court for an order extending the retention period.

(6) An application for an order under sub-paragraph (5) must be made within the period of 3 months ending on the last day of the retention period.

(7) An order under sub-paragraph (5) may extend the retention period by a period which—

(a) begins with the date on which the material would otherwise be required to be destroyed under this paragraph, and

(b) ends with the end of the period of 2 years beginning with that date.

(8) The following persons may appeal to the relevant appeal court against an order under sub-paragraph (5), or a refusal to make such an order—

(a) the responsible chief officer of police;

(b) a specified chief officer of police;

(c) the person from whom the material was taken.

[…]

(10) In this paragraph—

“relevant court” means—

(a) in England and Wales, a District Judge (Magistrates’ Courts),

[…]

“the relevant appeal court” means—

(a) in England and Wales, the Crown Court,

[…]

“a specified chief officer of police” means—

(a) in England and Wales and Northern Ireland—

(i) the chief officer of the police force of the area in which the person from whom the material was taken resides, or

(ii) a chief officer of police who believes that the person is in, or is intending to come to, the chief officer’s police area, […]

 

Paragraph 20C

(1) This paragraph applies to paragraph 20A material relating to a person who is detained under Schedule 7.

(2) In the case of a person who has previously been convicted of a recordable offence (other than a single exempt conviction), or an offence in Scotland which is punishable by imprisonment, or is so convicted before the end of the period within which the material may be retained by virtue of this paragraph, the material may be retained indefinitely.

(2A) In sub-paragraph (2) —

(a) the reference to a recordable offence includes an offence under the law of a country or territory outside the United Kingdom where the act constituting the offence would constitute—

(i) a recordable offence under the law of England and Wales if done there, […]

(3) In the case of a person who has no previous convictions, or only one exempt conviction, the material may be retained until the end of the retention period specified in sub-paragraph (4).

(4) The retention period is—

(a) in the case of fingerprints or relevant physical data, the period of 6 months beginning with the date on which the fingerprints or relevant physical data were taken or provided, and

(b) in the case of a DNA profile, the period of 6 months beginning with the date on which the DNA sample from which the profile was derived was taken (or, if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken).

 

Paragraph 20D

(1) For the purposes of paragraphs 20B and 20C, a person is to be treated as having been convicted of an offence if—

(a) in relation to a recordable offence in England and Wales or Northern Ireland—

(i) the person has been given a caution in respect of the offence which, at the time of the caution, the person has admitted,

(ii) the person has been found not guilty of the offence by reason of insanity,

(iii) the person has been found to be under a disability and to have done the act charged in respect of the offence, or

(iv) the person has been warned or reprimanded under section 65 of the Crime and Disorder Act 1998 for the offence,

[…]

(2) Paragraphs 20B and 20C and this paragraph, so far as they relate to persons convicted of an offence, have effect despite anything in the Rehabilitation of Offenders Act 1974.

(3) But a person is not to be treated as having been convicted of an offence if that conviction is a disregarded conviction or caution by virtue of section 92 of the Protection of Freedoms Act 2012.

(4) For the purposes of paragraphs 20B and 20C—

(a) a person has no previous convictions if the person has not previously been convicted—

(i) in England and Wales or Northern Ireland of a recordable offence, […]

(ii) […] and

(b) if the person has previously been convicted of a recordable offence in England and Wales or Northern Ireland, the conviction is exempt if it is in respect of a recordable offence, other than a qualifying offence, committed when the person was aged under 18.

(5) In sub-paragraph (4), “qualifying offence” has—

(a) in relation to a conviction in respect of a recordable offence committed in England and Wales, the meaning given by section 65A of the Police and Criminal Evidence Act 1984, […]

(5A) For the purposes of sub-paragraph (4)—

(a) a person is to be treated as having previously been convicted in England and Wales of a recordable offence if —

(i) the person has previously been convicted of an offence under the law of a country or territory outside the United Kingdom, and

(ii) the act constituting the offence would constitute a recordable offence under the law of England and Wales if done there (whether or not it constituted such an offence when the person was convicted);

[…]

(d) the reference in sub-paragraph (4)(b) to a qualifying offence includes a reference to an offence under the law of a country or territory outside the United Kingdom where the act constituting the offence would constitute a qualifying offence under the law of England and Wales if done there […].

(5B) For the purposes of paragraphs 20B and 20C and this paragraph—

(a) offence, in relation to any country or territory outside the United Kingdom, includes an act punishable under the law of that country or territory, however it is described;

(b) a person has in particular been convicted of an offence under the law of a country or territory outside the United Kingdom if—

(i) a court exercising jurisdiction under the law of that country or territory has made in respect of such an offence a finding equivalent to a finding that the person is not guilty by reason of insanity, or

(ii) such a court has made in respect of such an offence a finding equivalent to a finding that the person is under a disability and did the act charged against the person in respect of the offence.

(6) If a person is convicted of more than one offence arising out of a single course of action, those convictions are to be treated as a single conviction for the purposes of calculating under paragraph 20B or 20C whether the person has been convicted of only one offence.

(7) Nothing in paragraph 20B or 20C prevents the start of a new retention period in relation to paragraph 20A material if a person is detained again under section 41 or (as the case may be) Schedule 7 when an existing retention period (whether or not extended) is still in force in relation to that material.

 

Paragraph 20E

(1) Paragraph 20A material may be retained for as long as a national security determination made by a chief officer of police has effect in relation to it.

(2) A national security determination is made if a chief officer of police determines that it is necessary for any paragraph 20A material to be retained for the purposes of national security.

(3) A national security determination—

(a) must be made in writing,

(b) has effect for a maximum of 5 years beginning with the date on which the determination is made, and

(c) may be renewed.

(4) In this paragraph “chief officer of police” means—

(a) a chief officer of police of a police force in England and Wales, […]

Counter-Terrorism Act 2008

Section 18 – Destruction of national security material not subject to existing statutory restrictions

(1) This section applies to fingerprints, DNA samples and DNA profiles that—

(a) are held for the purposes of national security by a law enforcement authority under the law of England and Wales or Northern Ireland, and

(b) are not held subject to existing statutory restrictions.

(2) Material to which this section applies (“section 18 material”) must be destroyed if it appears to the responsible officer that the condition in subsection (3) is not met.

(3) The condition is that the material has been—

(a) obtained by the law enforcement authority pursuant to an authorisation under Part 3 of the Police Act 1997 (authorisation of action in respect of property),

(b) obtained by the law enforcement authority in the course of surveillance, or use of a covert human intelligence source, authorised under Part 2 of the Regulation of Investigatory Powers Act 2000,

(c) supplied to the law enforcement authority by another law enforcement authority, or

(d) otherwise lawfully obtained or acquired by the law enforcement authority for any of the purposes mentioned in section 18D(1).

(4) In any other case, section 18 material must be destroyed unless it is retained by the law enforcement authority under any power conferred by section 18A or 18B, but this is subject to subsection (5).

(5) A DNA sample to which this section applies must be destroyed—

(a) as soon as a DNA profile has been derived from the sample, or

(b) if sooner, before the end of the period of 6 months beginning with the date on which it was taken.

(6) Section 18 material which ceases to be retained under a power mentioned in subsection (4) may continue to be retained under any other such power which applies to it.

(7) Nothing in this section prevents section 18 material from being checked against other fingerprints, DNA samples or DNA profiles held by a law enforcement authority within such time as may reasonably be required for the check, if the responsible officer considers the check to be desirable.

(8) For the purposes of subsection (1), the following are “existing statutory restrictions”—

(a) paragraph 18(2) of Schedule 2 to the Immigration Act 1971;

(b) sections 22, 63A and 63D to 63U of the Police and Criminal Evidence Act 1984 and any corresponding provision in an order under section 113 of that Act;

[…]

(d) section 2(2) of the Security Service Act 1989;

(e) section 2(2) of the Intelligence Services Act 1994;

(f) paragraphs 20(3) and 20A to 20J of Schedule 8 to the Terrorism Act 2000;

(g) section 56 of the Criminal Justice and Police Act 2001;

(h) paragraph 8 of Schedule 4 to the International Criminal Court Act 2001;

(i) sections 73, 83, 87, 88 and 89 of the Armed Forces Act 2006 and any provision relating to the retention of material in an order made under section 74, 93 or 323 of that Act;

(j) paragraphs 5 to 14 of Schedule 6 to the Terrorism Prevention and Investigation Measures Act 2011;

(k) paragraphs 43 to 51 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019.

Section 18A – Retention of material: general

[…]

(2) The retention period is—

(a) in the case of fingerprints, the period of 3 years beginning with the date on which the fingerprints were taken, and

(b) in the case of a DNA profile, the period of 3 years beginning with the date on which the DNA sample from which the profile was derived was taken (or, if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken).

Terrorism Prevention and Investigation Measures Act 2011

Schedule 6

Paragraph 6 – Requirement to destroy material

(1) This paragraph applies to—

(a) fingerprints taken under paragraph 1,

(b) a DNA profile derived from a DNA sample taken under that paragraph,

(c) relevant physical data taken or provided under paragraph 4,

(d) a DNA profile derived from a DNA sample taken under that paragraph.

(2) Fingerprints, relevant physical data and DNA profiles to which this paragraph applies (“paragraph 6 material”) must be destroyed if it appears to the responsible chief officer of police that the taking or providing of the material or, in the case of a DNA profile, the taking of the sample from which the DNA profile was derived, was unlawful.

(3) In any other case, paragraph 6 material must be destroyed unless it is retained under a power conferred by paragraph 8, 9 or 11.

(4) Paragraph 6 material that ceases to be retained under a power mentioned in sub-paragraph (3) may continue to be retained under any other such power that applies to it.

(5) Nothing in this paragraph prevents a relevant search from being carried out, in relation to paragraph 6 material, within such time as may reasonably be required for the search if the responsible chief officer of police considers the search to be desirable.

 

Paragraph 7 – Requirement to destroy material

(1) If fingerprints or relevant physical data are required by paragraph 6 to be destroyed, any copies of the fingerprints or data held by a police force must also be destroyed.

(2) If a DNA profile is required by that paragraph to be destroyed, no copy may be retained by a police force except in a form which does not include information which identifies the individual to whom the DNA profile relates.

 

Paragraph 8 – Retention of paragraph 6 material

(1) This paragraph applies to paragraph 6 material taken from, or provided by, an individual who has no previous convictions or (in the case of England and Wales or Northern Ireland) only one exempt conviction.

(2) The material may be retained until the end of the period of 6 months beginning with the date on which the TPIM notice that was in force when the material was taken ceases to be in force (subject to sub-paragraphs (3) and (4)).

(3) If, before the end of that period, the TPIM notice is quashed by the court under this Act, the material may be retained only until there is no possibility of an appeal against—

(a) the decision to quash the notice, or

(b) any decision made on an appeal against that decision.

(4) If, after a TPIM notice is quashed or otherwise ceases to be in force, measures are imposed on the individual (whether by the revival of a TPIM notice or the imposition of a new TPIM notice)—

(a) within the period for which material in relation to the individual is retained by virtue of sub-paragraph (2), or

(b) within, or immediately after the end of, the period for which such material is retained by virtue of sub-paragraph (3),

sub-paragraphs (2) and (3) apply again for the purposes of the retention of that material (taking references to the TPIM notice as references to the revived or new TPIM notice).

(5) In determining whether there is no further possibility of an appeal against a decision of the kind mentioned in sub-paragraph (3), any power to extend the time for giving notice of application for leave to appeal, or for applying for leave to appeal, must be ignored.

 

Paragraph 9 – Retention of paragraph 6 material

(1) This paragraph applies to paragraph 6 material taken from, or provided by, an individual—

(a) who has been convicted of a recordable offence (other than a single exempt conviction) or of an offence in Scotland which is punishable by imprisonment, or

(b) who is so convicted before the end of the period within which the material may be retained by virtue of paragraph 8.

(2) The material may be retained indefinitely.

 

Paragraph 10 – Retention of paragraph 6 material

(1) For the purposes of paragraphs 8 and 9 an individual is to be treated as having been convicted of an offence if—

(a) in relation to a recordable offence in England and Wales or Northern Ireland—

(i) the individual has been given a caution in respect of the offence which, at the time of the caution, the individual has admitted,

(ii) the individual has been found not guilty of the offence by reason of insanity, or

(iii) the individual has been found to be under a disability and to have done the act charged in respect of the offence, […]

(2) Paragraphs 8, 9 and this paragraph, so far as they relate to individuals convicted of an offence, have effect despite anything in the Rehabilitation of Offenders Act 1974.

(2A) But a person is not to be treated as having been convicted of an offence if that conviction is a disregarded conviction or caution by virtue of section 92 of the Protection of Freedoms Act 2012.

(3) For the purposes of paragraphs 8 and 9—

(a) an individual has no previous convictions if the individual has not previously been convicted—

(i) in England and Wales or Northern Ireland of a recordable offence, […]

(ii) […], and

(b) if the individual has previously been convicted of a recordable offence in England and Wales or Northern Ireland, the conviction is exempt if it is in respect of a recordable offence, other than a qualifying offence, committed when the individual was aged under 18.

(4) In sub-paragraph (3) “qualifying offence” has—

(a) in relation to a conviction in respect of a recordable offence committed in England and Wales, the meaning given by section 65A of the Police and Criminal Evidence Act 1984 […].

(5) If an individual is convicted of more than one offence arising out of a single course of action, those convictions are to be treated as a single conviction for the purposes of calculating under paragraph 8 or 9 whether the individual has been convicted of one offence.

Counter-Terrorism and Border Security Act 2019

Schedule 3 – Border security

Paragraph 34

(1) This paragraph applies where a detainee is detained in England, Wales or Northern Ireland.

(2) Fingerprints may be taken from the detainee only if they are taken by a constable—

(a) with the appropriate consent given in writing, or

(b) without that consent under sub-paragraph (4).

(3) A non-intimate sample may be taken from the detainee only if it is taken by a constable—

(a) with the appropriate consent given in writing, or

(b) without that consent under sub-paragraph (4).

(4) Fingerprints or a non-intimate sample may be taken from the detainee without the appropriate consent only if—

(a) the detainee is detained at a police station and a police officer of at least the rank of superintendent authorises the fingerprints or sample to be taken, or

(b) the detainee has been convicted of a recordable offence and, where a non-intimate sample is to be taken, was convicted of the offence on or after 10th April 1995 (or 29th July 1996 where the non-intimate sample is to be taken in Northern Ireland).

(5) An officer may give an authorisation under sub-paragraph (4)(a) only if—

(a) in the case of the taking of fingerprints or samples, condition 1 is met, or

(b) in the case of the taking of fingerprints, condition 2 is met.

(6) Condition 1 is met if the officer is satisfied that it is necessary for the fingerprints or sample to be taken in order to assist in determining whether the detainee is or has been engaged in hostile activity.

(7) Condition 2 is met if—

(a) the officer is satisfied that the fingerprints of the detainee will facilitate the ascertainment of the detainee’s identity, and

(b) the detainee has refused to identify himself or herself or the officer has reasonable grounds for suspecting that the detainee is not who the detainee claims to be.

(8) In this paragraph references to ascertaining a person’s identity include references to showing that the person is not a particular person.

(9) If an authorisation under sub-paragraph (4)(a) is given orally, the person giving it must confirm it in writing as soon as is reasonably practicable.

 

Paragraph 35

(1) Before fingerprints or a sample are taken from a person under paragraph 34, the person must be informed—

(a) that the fingerprints or sample may be used for the purposes of—

(i) a relevant search, as defined by paragraph 43(6),

(ii) section 63A(1) of the Police and Criminal Evidence Act 1984, or

(iii) […], and

(b) where the fingerprints or sample are to be taken under paragraph 34(2)(a), (3)(a) or (4)(b), of the reason for taking the fingerprints or sample.

(2) Before fingerprints or a sample are taken from a detainee upon an authorisation given under paragraph 34(4)(a), the detainee must be informed—

(a) that the authorisation has been given,

(b) of the grounds upon which it has been given, and

(c) where relevant, of the nature of the offence in which it is suspected that the detainee has been involved.

(3) After fingerprints or a sample are taken under paragraph 34, any of the following which apply must be recorded as soon as reasonably practicable—

(a) the fact that the person has been informed in accordance with sub-paragraphs (1) and (2),

(b) the reason referred to in sub-paragraph (1)(b),

(c) the authorisation given under paragraph 34(4)(a),

(d) the grounds upon which that authorisation has been given, and

(e) the fact that the appropriate consent has been given.

(4) Where a sample of hair is to be taken under paragraph 34, the sample may be taken either by cutting hairs or by plucking hairs with their roots so long as no more are plucked than the person taking the sample reasonably considers to be necessary for a sufficient sample.

 

Paragraph 36

(1) In the application of paragraphs 26, 34 and 35 in relation to a person detained in England or Wales, the following expressions have the meaning given by section 65 of the Police and Criminal Evidence Act 1984—

(a) “appropriate consent”,

(b) “fingerprints”,

(c) “intimate sample”,

(d) “non-intimate sample”, and

(e) “sufficient”.

(2) In the application of section 65(2A) of the Police and Criminal Evidence Act 1984 for the purposes of sub-paragraph (1) of this paragraph, the reference to the destruction of a sample under section 63R of that Act is a reference to the destruction of a sample under paragraph 43 of this Schedule.

[…].

(4) In paragraph 34 “recordable offence” has—

(a) in relation to a detainee in England or Wales, the meaning given by section 118(1) of the Police and Criminal Evidence Act 1984 […]

 

Paragraph 43

(1) This paragraph applies to—

(a) fingerprints taken under paragraph 34,

(b) a DNA profile derived from a DNA sample taken under paragraph 34,

(c) relevant physical data taken or provided by virtue of paragraph 42, and

(d) a DNA profile derived from a DNA sample taken by virtue of paragraph 42.

(2) Fingerprints, relevant physical data and DNA profiles to which this paragraph applies (“paragraph 43 material”) must be destroyed if it appears to the responsible chief officer of police that the taking or providing of the material or, in the case of a DNA profile, the taking of the sample from which the DNA profile was derived, was unlawful.

(3) In any other case, paragraph 43 material must be destroyed unless it is retained under a power conferred by paragraph 44, 46 or 47.

(4) Paragraph 43 material which ceases to be retained under a power mentioned in sub-paragraph (3) may continue to be retained under any other power which applies to it.

(5) Nothing in this paragraph prevents a relevant search, in relation to paragraph 43 material, from being carried out within such time as may reasonably be required for the search if the responsible chief officer of police considers the search to be desirable.

(6) For the purposes of sub-paragraph (5), “a relevant search” is a search carried out for the purpose of checking the material against—

(a) other fingerprints or samples taken under paragraph 34 or a DNA profile derived from such a sample,

[…]

(c) fingerprints or samples taken under paragraph 10 or 12 of Schedule 8 to the Terrorism Act 2000 or a DNA profile derived from a sample taken under one of those paragraphs,

[…]

(e) material to which section 18 of the Counter-Terrorism Act 2008 applies,

(f) any of the fingerprints, data or samples obtained under paragraph 1 or 4 of Schedule 6 to the Terrorism Prevention and Investigation Measures Act 2011, or information derived from such samples,

(g) any of the fingerprints, samples and information mentioned in section 63A(1)(a) and (b) of the Police and Criminal Evidence Act 1984 (checking of fingerprints and samples) […]

 

Paragraph 44

(1) Paragraph 43 material may be retained indefinitely in the case of a detainee who—

(a) has previously been convicted of a recordable offence (other than a single exempt conviction), […], or

(b) is so convicted before the end of the period within which the material may be retained by virtue of this paragraph.

(2) In sub-paragraph (1)—

(a) the reference to a recordable offence includes an offence under the law of a country or territory outside the United Kingdom where the act constituting the offence would constitute—

(i) a recordable offence under the law of England and Wales if done there,[…]

(and, in the application of sub-paragraph (1) where a person has previously been convicted, this applies whether or not the act constituted such an offence when the person was convicted);

[…]

(3) In the case of a person who has no previous convictions, or only one exempt conviction, the material may be retained until the end of the retention period specified in sub-paragraph (4).

(4) The retention period is—

(a) in the case of fingerprints or relevant physical data, the period of 6 months beginning with the date on which the fingerprints or relevant physical data were taken or provided, and

(b) in the case of a DNA profile, the period of 6 months beginning with the date on which the DNA sample from which the profile was derived was taken (or, if the profile was derived from more than one DNA sample, the date on which the first of those samples was taken).

 

Paragraph 46

(1) Paragraph 43 material may be retained for as long as a national security determination made by a chief officer of police has effect in relation to it.

(2) A national security determination is made if a chief officer of police determines that it is necessary for any paragraph 43 material to be retained for the purposes of national security.

(3) A national security determination—

(a) must be made in writing,

(b) has effect for a maximum of 5 years beginning with the date on which the determination is made, and

(c) may be renewed.

(4) In this paragraph “chief officer of police” means—

(a) a chief officer of police of a police force in England and Wales […]

 

Paragraph 48

(1) If fingerprints or relevant physical data are required by paragraph 43 to be destroyed, any copies of the fingerprints or relevant physical data held by a police force must also be destroyed.

(2) If a DNA profile is required by that paragraph to be destroyed, no copy may be retained by a police force except in a form which does not include information which identifies the person to whom the DNA profile relates.

 

Paragraph 49

(1) This paragraph applies to—

(a) samples taken under paragraph 34, or

(b) samples taken by virtue of paragraph 42.

(2) Samples to which this paragraph applies must be destroyed if it appears to the responsible chief officer of police that the taking of the sample was unlawful.

(3) Subject to this, the rule in sub-paragraph (4) or (as the case may be) (5) applies.

(4) A DNA sample to which this paragraph applies must be destroyed—

(a) as soon as a DNA profile has been derived from the sample, or

(b) if sooner, before the end of the period of 6 months beginning with the date on which the sample was taken.

(5) Any other sample to which this paragraph applies must be destroyed before the end of the period of 6 months beginning with the date on which it was taken.

(6) Nothing in this paragraph prevents a relevant search, in relation to samples to which this paragraph applies, from being carried out within such time as may reasonably be required for the search if the responsible chief officer of police considers the search to be desirable.

(7) In this paragraph “a relevant search” has the meaning given by paragraph 43(6).

 

Paragraph 50

(1) Any material to which paragraph 43 or 49 applies must not be used other than—

(a) in the interests of national security,

(b) for the purposes of a terrorist investigation, as defined by section 32 of the Terrorism Act 2000,

(c) for purposes related to the prevention or detection of crime, the investigation of an offence or the conduct of a prosecution, or

(d) for purposes related to the identification of a deceased person or of the person to whom the material relates.

(2) Subject to sub-paragraph (1), a relevant search (within the meaning given by paragraph 43(6)) may be carried out in relation to material to which paragraph 43 or 49 applies if the responsible chief officer of police considers the search to be desirable.

(3) Material which is required by paragraph 43 or 49 to be destroyed must not at any time after it is required to be destroyed be used—

(a) in evidence against the person to whom the material relates, or

(b) for the purposes of the investigation of any offence.

(4) In this paragraph—

(a) the reference to using material includes a reference to allowing any check to be made against it and to disclosing it to any person;

(b) the references to an investigation and to a prosecution include references, respectively, to any investigation outside the United Kingdom of any crime or suspected crime and to a prosecution brought in respect of any crime in a country or territory outside the United Kingdom.

 

Paragraph 51

In paragraphs 43 to 50—

“DNA profile” means any information derived from a DNA sample;

“DNA sample” means any material that has come from a human body and consists of or includes human cells;

“fingerprints” has the meaning given by section 65(1) of the Police and Criminal Evidence Act 1984; […]

Regulation of Investigatory Powers Act 2000

Section 49 – Notices requiring disclosure

[…]

(2) If any person with the appropriate permission under Schedule 2 believes, on reasonable grounds–

(a) that a key to the protected information is in the possession of any person,

(b) that the imposition of a disclosure requirement in respect of the protected information is–

(i) necessary on grounds falling within subsection (3), or

(ii) necessary for the purpose of securing the effective exercise or proper performance by any public authority of any statutory power or statutory duty,

(c) that the imposition of such a requirement is proportionate to what is sought to be achieved by its imposition, and

(d) that it is not reasonably practicable for the person with the appropriate permission to obtain possession of the protected information in an intelligible form without the giving of a notice under this section, the person with that permission may, by notice to the person whom he believes to have possession of the key, impose a disclosure requirement in respect of the protected information.

Investigatory Powers Act 2016

Section 199 – Bulk personal datasets: interpretation

(1) For the purposes of this Part, an intelligence service retains a bulk personal dataset if—

(a) the intelligence service obtains a set of information that includes personal data relating to a number of individuals,

(b) the nature of the set is such that the majority of the individuals are not, and are unlikely to become, of interest to the intelligence service in the exercise of its functions,

(c) after any initial examination of the contents, the intelligence service retains the set for the purpose of the exercise of its functions, and

(d) the set is held, or is to be held, electronically for analysis in the exercise of those functions.

Section 200 – Requirement for authorisation by warrant: general

(1) An intelligence service may not exercise a power to retain a bulk personal dataset unless the retention of the dataset is authorised by a warrant under this Part.

(2) An intelligence service may not exercise a power to examine a bulk personal dataset retained by it unless the examination is authorised by a warrant under this Part.

(3) For the purposes of this Part, there are two kinds of warrant—

(a) a warrant, referred to in this Part as “a class BPD warrant” , authorising an intelligence service to retain, or to retain and examine, any bulk personal dataset of a class described in the warrant;

(b) a warrant, referred to in this Part as “a specific BPD warrant” , authorising an intelligence service to retain, or to retain and examine, any bulk personal dataset described in the warrant.

Section 201 – Exceptions to section 200(1) and (2)

(1) Section 200(1) or (2) does not apply to the exercise of a power of an intelligence service to retain or (as the case may be) examine a bulk personal dataset if the intelligence service obtained the bulk personal dataset under a warrant or other authorisation issued or given under this Act.

(2) Section 200(1) or (2) does not apply at any time when a bulk personal dataset is being retained or (as the case may be) examined for the purpose of enabling any of the information contained in it to be destroyed.

Section 205 – Specific BPD warrants

(1) The head of an intelligence service, or a person acting on his or her behalf, may apply to the Secretary of State for a specific BPD warrant in the following cases.

(2) Case 1 is where—

(a) the intelligence service is seeking authorisation to retain, or to retain and examine, a bulk personal dataset, and

(b) the bulk personal dataset does not fall within a class described in a class BPD warrant.

(3) Case 2 is where—

(a) the intelligence service is seeking authorisation to retain, or to retain and examine, a bulk personal dataset, and

(b) the bulk personal dataset falls within a class described in a class BPD warrant but either—

(i) the intelligence service is prevented by section 202(1), (2) or (3) from retaining, or retaining and examining, the bulk personal dataset in reliance on the class BPD warrant, or

(ii) the intelligence service at any time considers that it would be appropriate to seek a specific BPD warrant.

(4) The application must include—

(a) a description of the bulk personal dataset to which the application relates, and

(b) in a case where the intelligence service is seeking authorisation for the examination of the bulk personal dataset, the operational purposes which it is proposing should be specified in the warrant (see section 212).

(5) Where subsection (3)(b)(i) applies, the application must include an explanation of why the intelligence service is prevented by section 202(1), (2) or (3) from retaining, or retaining and examining, the bulk personal dataset in reliance on a class BPD warrant.

(6) The Secretary of State may issue the warrant if—

(a) the Secretary of State considers that the warrant is necessary—

(i) in the interests of national security,

(ii) for the purposes of preventing or detecting serious crime, or

(iii) in the interests of the economic well-being of the United Kingdom so far as those interests are also relevant to the interests of national security,

(b) the Secretary of State considers that the conduct authorised by the warrant is proportionate to what is sought to be achieved by the conduct,

(c) where the warrant authorises the examination of a bulk personal dataset, the Secretary of State considers that—

(i) each of the specified operational purposes (see section 212) is a purpose for which the examination of the bulk personal dataset is or may be necessary, and

(ii) the examination of the bulk personal dataset for each such purpose is necessary on any of the grounds on which the Secretary of State considers the warrant to be necessary,

(d) the Secretary of State considers that the arrangements made by the intelligence service for storing the bulk personal dataset and for protecting it from unauthorised disclosure are satisfactory, and

(e) except where the Secretary of State considers that there is an urgent need to issue the warrant, the decision to issue it has been approved by a Judicial Commissioner.

(7) The fact that a specific BPD warrant would authorise the retention, or the retention and examination, of bulk personal datasets relating to activities in the British Islands of a trade union is not, of itself, sufficient to establish that the warrant is necessary on grounds falling within subsection (6)(a).

(8) A specific BPD warrant relating to a bulk personal dataset (“dataset A”) may also authorise the retention or examination of other bulk personal datasets (“replacement datasets”) that do not exist at the time of the issue of the warrant but may reasonably be regarded as replacements for dataset A.

(9) An application for a specific BPD warrant may only be made on behalf of the head of an intelligence service by a person holding office under the Crown.

Section 207 – Protected data: power to impose conditions

Where the Secretary of State decides to issue a specific BPD warrant, the Secretary of State may impose conditions which must be satisfied before protected data retained in reliance on the warrant may be selected for examination on the basis of criteria which are referable to an individual known to be in the British Islands at the time of the selection.

Section 208 – Approval of warrants by Judicial Commissioners

(1) In deciding whether to approve a decision to issue a class BPD warrant or a specific BPD warrant, a Judicial Commissioner must review the Secretary of State’s conclusions as to the following matters—

(a) whether the warrant is necessary on grounds falling within section 204(3)(a) or (as the case may be) section 205(6)(a),

(b) whether the conduct that would be authorised by the warrant is proportionate to what is sought to be achieved by that conduct, and

(c) where the warrant authorises examination of bulk personal datasets of a class described in the warrant or (as the case may be) of a bulk personal dataset described in the warrant, whether—

(i) each of the specified operational purposes (see section 212) is a purpose for which the examination of bulk personal datasets of that class or (as the case may be) the bulk personal dataset is or may be necessary, and

(ii) the examination of bulk personal datasets of that class or (as the case may be) the bulk personal dataset is necessary as mentioned in section 204(3)(c)(ii) or (as the case may be) section 205(6)(c)(ii).

(2) In doing so, the Judicial Commissioner must—

(a) apply the same principles as would be applied by a court on an application for judicial review, and

(b) consider the matters referred to in subsection (1) with a sufficient degree of care as to ensure that the Judicial Commissioner complies with the duties imposed by section 2 (general duties in relation to privacy).

(3) Where a Judicial Commissioner refuses to approve a decision to issue a class BPD warrant or a specific BPD warrant, the Judicial Commissioner must give the Secretary of State written reasons for the refusal.

(4) Where a Judicial Commissioner, other than the Investigatory Powers Commissioner, refuses to approve a decision to issue a class BPD warrant or a specific BPD warrant, the Secretary of State may ask the Investigatory Powers Commissioner to decide whether to approve the decision to issue the warrant.

Sections 213 – Duration of warrants

(1) A class BPD warrant or a specific BPD warrant ceases to have effect at the end of the relevant period (see subsection (2)) unless—

(a) it is renewed before the end of that period (see section 214), or

(b) it is cancelled or (in the case of a specific BPD warrant) otherwise ceases to have effect before the end of that period (see sections 209 and 218).

(2) In this section, “the relevant period” —

(a) in the case of an urgent specific BPD warrant (see subsection (3)), means the period ending with the fifth working day after the day on which the warrant was issued;

(b) in any other case, means the period of 6 months beginning with—

(i) the day on which the warrant was issued, or

(ii) in the case of a warrant that has been renewed, the day after the day at the end of which the warrant would have ceased to have effect if it had not been renewed.

(3) For the purposes of subsection (2)(a), a specific BPD warrant is an “urgent specific BPD warrant” if—

(a) the warrant was issued without the approval of a Judicial Commissioner, and

(b) the Secretary of State considered that there was an urgent need to issue it.

(4) For provision about the renewal of warrants, see section 214.

Section 214 – Renewal of warrants

(1) If the renewal conditions are met, a class BPD warrant or a specific BPD warrant may be renewed, at any time during the renewal period, by an instrument issued by the Secretary of State.

(2) The renewal conditions are—

(a) that the Secretary of State considers that the warrant continues to be necessary on grounds falling within section 204(3)(a) or (as the case may be) section 205(6)(a),

(b) that the Secretary of State considers that the conduct that would be authorised by the renewed warrant continues to be proportionate to what is sought to be achieved by the conduct,

(c) where the warrant authorises examination of bulk personal datasets of a class described in the warrant or (as the case may be) of a bulk personal dataset described in the warrant, that the Secretary of State considers that—

(i) each of the specified operational purposes (see section 212) is a purpose for which the examination of bulk personal datasets of that class or (as the case may be) the bulk personal dataset continues to be, or may be, necessary, and

(ii) the examination of bulk personal datasets of that class or (as the case may be) the bulk personal dataset continues to be necessary on any of the grounds on which the Secretary of State considers that the warrant continues to be necessary, and

(d) that the decision to renew the warrant has been approved by a Judicial Commissioner.

(3) “The renewal period” means—

(a) in the case of an urgent specific BPD warrant which has not been renewed, the relevant period;

(b) in any other case, the period of 30 days ending with the day at the end of which the warrant would otherwise cease to have effect.

(4) The decision to renew a class BPD warrant or a specific BPD warrant must be taken personally by the Secretary of State, and the instrument renewing the warrant must be signed by the Secretary of State.

(5) Section 207 (protected data: power to impose conditions) applies in relation to the renewal of a specific BPD warrant as it applies in relation to the issue of such a warrant (whether or not any conditions have previously been imposed in relation to the warrant under that section).

(6) Section 208 (approval of warrants by Judicial Commissioner) applies in relation to a decision to renew a warrant as it applies in relation to a decision to issue a warrant.

(7) In this section—

“the relevant period” has the same meaning as in section 213;

“urgent specific BPD warrant” is to be read in accordance with subsection (3) of that section.

Section 215 – Modification of warrants

(1) The provisions of a class BPD warrant or a specific BPD warrant may be modified at any time by an instrument issued by the person making the modification.

(2) The only modifications which may be made under this section are—

(a) in the case of a class BPD warrant, adding, varying or removing any operational purpose specified in the warrant as a purpose for which bulk personal datasets of a class described in the warrant may be examined;

(b) in the case of a specific BPD warrant, adding, varying or removing any operational purpose specified in the warrant as a purpose for which the bulk personal dataset described in the warrant may be examined.

(3) In this section—

(a) a modification adding or varying any operational purpose is referred to as a “major modification” , and

(b) a modification removing any operational purpose is referred to as a “minor modification”.

(4) A major modification—

(a) must be made by the Secretary of State, and

(b) may be made only if the Secretary of State considers that it is necessary on any of the grounds on which the Secretary of State considers the warrant to be necessary (see section 204(3)(a) or (as the case may be) section 205(6)(a)).

(5) Except where the Secretary of State considers that there is an urgent need to make the modification, a major modification has effect only if the decision to make the modification is approved by a Judicial Commissioner.

(6) A minor modification may be made by—

(a) the Secretary of State, or

(b) a senior official acting on behalf of the Secretary of State.

(7) Where a minor modification is made by a senior official, the Secretary of State must be notified personally of the modification and the reasons for making it.

(8) If at any time a person mentioned in subsection (6) considers that any operational purpose specified in a warrant is no longer a purpose for which the examination of any bulk personal datasets to which the warrant relates is or may be necessary, the person must modify the warrant by removing that operational purpose.

(9) The decision to modify the provisions of a class BPD warrant or a specific BPD warrant must be taken personally by the person making the modification, and the instrument making the modification must be signed by that person. This is subject to subsection (10).

(10) If it is not reasonably practicable for an instrument making a major modification to be signed by the Secretary of State, the instrument may be signed by a senior official designated by the Secretary of State for that purpose.

(11) In such a case, the instrument making the modification must contain a statement that—

(a) it is not reasonably practicable for the instrument to be signed by the Secretary of State, and

(b) the Secretary of State has personally and expressly authorised the making of the modification.

Section 216 – Approval of major modifications by Judicial Commissioners

(1) In deciding whether to approve a decision to make a major modification of a class BPD warrant or a specific BPD warrant, a Judicial Commissioner must review the Secretary of State’s conclusions as to whether the modification is necessary on any of the grounds on which the Secretary of State considers the warrant to be necessary.

(2) In doing so, the Judicial Commissioner must—

(a) apply the same principles as would be applied by a court on an application for judicial review, and

(b) consider the matter referred to in subsection (1) with a sufficient degree of care as to ensure that the Judicial Commissioner complies with the duties imposed by section 2 (general duties in relation to privacy).

(3) Where a Judicial Commissioner refuses to approve a decision to make a major modification under section 215, the Judicial Commissioner must give the Secretary of State written reasons for the refusal.

(4) Where a Judicial Commissioner, other than the Investigatory Powers Commissioner, refuses to approve a decision to make a major modification under section 215, the Secretary of State may ask the Investigatory Powers Commissioner to decide whether to approve the decision to make the modification.

Section 217 – Approval of major modifications made in urgent cases

(1) This section applies where—

(a) the Secretary of State makes a major modification of a class BPD warrant or a specific BPD warrant without the approval of a Judicial Commissioner, and

(b) the Secretary of State considered that there was an urgent need to make the modification.

(2) The Secretary of State must in form a Judicial Commissioner that the modification has been made.

(3) The Judicial Commissioner must, before the end of the relevant period—

(a) decide whether to approve the decision to make the modification, and

(b) notify the Secretary of State of the Judicial Commissioner’s decision.

“The relevant period” means the period ending with the third working day after the day on which the modification was made.

(4) If the Judicial Commissioner refuses to approve the decision to make the modification—

(a) the warrant (unless it no longer has effect) has effect as if the modification had not been made, and

(b) the person to whom the warrant is addressed must, so far as is reasonably practicable, secure that anything in the process of being done in reliance on the warrant by virtue of that modification stops as soon as possible,

and section 216(4) does not apply in relation to the refusal to approve the decision.

(5) Nothing in this section affects the lawfulness of—

(a) anything done in reliance on the warrant by virtue of the modification before the modification ceases to have effect;

(b) if anything is in the process of being done in reliance on the warrant by virtue of the modification when the modification ceases to have effect—

(i) anything done before that thing could be stopped, or

(ii) anything done which it is not reasonably practicable to stop.

Section 218 – Cancellation of warrants

(1) The Secretary of State, or a senior official acting on behalf of the Secretary of State, may cancel a class BPD warrant or a specific BPD warrant at any time.

(2) If the Secretary of State, or a senior official acting on behalf of the Secretary of State, considers that any of the cancellation conditions are met in relation to a class BPD warrant or a specific BPD warrant, the person must cancel the warrant.

(3) The cancellation conditions are—

(a) that the warrant is no longer necessary on any grounds falling within section 204(3)(a) or (as the case may be) section 205(6)(a);

(b) that the conduct authorised by the warrant is no longer proportionate to what is sought to be achieved by that conduct;

(c) where the warrant authorises examination of bulk personal datasets of a class described in the warrant or (as the case may be) of a bulk personal dataset described in the warrant, that the examination of bulk personal datasets of that class or (as the case may be) of the bulk personal dataset is no longer necessary for any of the specified operational purposes (see section 212).

Section 221 – Safeguards relating to examination of bulk personal datasets

(1) The Secretary of State must ensure, in relation to every class BPD warrant or specific BPD warrant which authorises examination of bulk personal datasets of a class described in the warrant or (as the case may be) of a bulk personal dataset described in the warrant, that arrangements are in force for securing that—

(a) any selection of data contained in the datasets (or dataset) for examination is carried out only for the specified purposes (see subsection (2)), and

(b) the selection of any such data for examination is necessary and proportionate in all the circumstances.

(2) The selection of data contained in bulk personal datasets for examination is carried out only for the specified purposes if the data is selected for examination only so far as is necessary for the operational purposes specified in the warrant in accordance with section 212.

(3) The Secretary of State must also ensure, in relation to every specific BPD warrant which specifies conditions imposed under section 207, that arrangements are in force for securing that any selection for examination of protected data on the basis of criteria which are referable to an individual known to be in the British Islands at the time of the selection is in accordance with the conditions specified in the warrant.

(4) In this section “specified in the warrant” means specified in the warrant at the time of the selection of the data for examination.

Section 224 – Offence of breaching safeguards relating to examination of material

(1) A person commits an offence if—

(a) the person selects for examination any data contained in a bulk personal dataset retained in reliance on a class BPD warrant or a specific BPD warrant,

(b) the person knows or believes that the selection of that data is in breach of a requirement specified in subsection (2), and

(c) the person deliberately selects that data in breach of that requirement.

(2) The requirements specified in this subsection are that any selection for examination of the data—

(a) is carried out only for the specified purposes (see subsection (3)),

(b) is necessary and proportionate, and

(c) if the data is protected data, satisfies any conditions imposed under section 207.

(3) The selection for examination of the data is carried out only for the specified purposes if the data is selected for examination only so far as is necessary for the operational purposes specified in the warrant in accordance with section 212. In this subsection, “specified in the warrant” means specified in the warrant at the time of the selection of the data for examination.

(4) A person guilty of an offence under this section is liable—

(a) on summary conviction in England and Wales—

(i) to imprisonment for a term not exceeding 12 months (or 6 months, if the offence was committed before the commencement of [paragraph 24(2) of Schedule 22 to the Sentencing Act 2020]1), or

(ii) to a fine, or to both;

[…]

(d) on conviction on indictment, to imprisonment for a term not exceeding 2 years or to a fine, or to both.

(5) No proceedings for any offence which is an offence by virtue of this section may be instituted—

(a) in England and Wales, except by or with the consent of the Director of Public Prosecutions; […]

Protection of Freedoms Act 2012

Section 20 – Appointment and functions of Commissioner

(1) The Secretary of State must appoint a Commissioner to be known as the Commissioner for the Retention and Use of Biometric Material (referred to in this section and section 21 as “the Commissioner” ).

(2) It is the function of the Commissioner to keep under review—

(a) every national security determination made or renewed under—

(i) section 63M of the Police and Criminal Evidence Act 1984 (section 63D material retained for purposes of national security),

(ii) paragraph 20E of Schedule 8 to the Terrorism Act 2000 (paragraph 20A material retained for purposes of national security),

(iii) section 18B of the Counter-Terrorism Act 2008 (section 18 material retained for purposes of national security),

(iv) paragraph 11 of Schedule 6 to the Terrorism Prevention and Investigation Measures Act 2011 (paragraph 6 material retained for purposes of national security),

(iva) paragraph 46 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019, […]

(b) the uses to which material retained pursuant to a national security determination is being put.

(3) It is the duty of every person who makes or renews a national security determination under a provision mentioned in subsection (2)(a) to—

(a) send to the Commissioner a copy of the determination or renewed determination, and the reasons for making or renewing the determination, within 28 days of making or renewing it, and

(b) disclose or provide to the Commissioner such documents and information as the Commissioner may require for the purpose of carrying out the Commissioner’s functions under subsection (2).

(4) If, on reviewing a national security determination made or renewed under a provision mentioned in subsection (2)(a), the Commissioner concludes that it is not necessary for any material retained pursuant to the determination to be so retained, the Commissioner may order the destruction of the material if the condition in subsection (5) is met.

(5) The condition is that the material retained pursuant to the national security determination is not otherwise capable of being lawfully retained.

(6) The Commissioner also has the function of keeping under review—

(a) the retention and use in accordance with sections 63A and 63D to 63T of the Police and Criminal Evidence Act 1984 of—

(i) any material to which section 63D or 63R of that Act applies (fingerprints, DNA profiles and samples), and

(ii) any copies of any material to which section 63D of that Act applies (fingerprints and DNA profiles),

(b) the retention and use in accordance with paragraphs 20A to 20J of Schedule 8 to the Terrorism Act 2000 of—

(i) any material to which paragraph 20A or 20G of that Schedule applies (fingerprints, relevant physical data, DNA profiles and samples), and

(ii) any copies of any material to which paragraph 20A of that Schedule applies (fingerprints, relevant physical data and DNA profiles),

(c) the retention and use in accordance with sections 18 to 18E of the Counter-Terrorism Act 2008 of—

(i) any material to which section 18 of that Act applies (fingerprints, DNA samples and DNA profiles), and

(ii) any copies of fingerprints or DNA profiles to which section 18 of that Act applies,

(d) the retention and use in accordance with paragraphs 5 to 14 of Schedule 6 to the Terrorism Prevention and Investigation Measures Act 2011 of—

(i) any material to which paragraph 6 or 12 of that Schedule applies (fingerprints, relevant physical data, DNA profiles and samples), and

(ii) any copies of any material to which paragraph 6 of that Schedule applies (fingerprints, relevant physical data and DNA profiles),

(e) the retention and use in accordance with paragraphs 43 to 51 of Schedule 3 to the Counter-Terrorism and Border Security Act 2019 of—

(i) any material to which paragraph 43 or 49 of that Schedule applies (fingerprints, relevant physical data, DNA profiles and samples), and

(ii) any copies of any material to which paragraph 43 of that Schedule applies (fingerprints, relevant physical data and DNA profiles).

(7) But subsection (6) does not apply so far as the retention or use of the material falls to be reviewed by virtue of subsection (2).

[…]

(9) The Commissioner also has functions under sections 63F(5)(c) and 63G (giving of consent in relation to the retention of certain section 63D material).

(10) The Commissioner is to hold office in accordance with the terms of the Commissioner’s appointment; and the Secretary of State may pay in respect of the Commissioner any expenses, remuneration or allowances that the Secretary of State may determine.

(11) The Secretary of State may, after consultation with the Commissioner, provide the Commissioner with—

(a) such staff, and

(b) such accommodation, equipment and other facilities, as the Secretary of State considers necessary for the carrying out of the Commissioner’s functions.

Section 21 – Reports by Commissioner

(1) The Commissioner must make a report to the Secretary of State about the carrying out of the Commissioner’s functions as soon as reasonably practicable after the end of—

(a) the period of 9 months beginning when this section comes into force, and

(b) every subsequent 12 month period.

(2) The Commissioner may also, at any time, make such report to the Secretary of State on any matter relating to the Commissioner’s functions as the Commissioner considers appropriate.

(3) The Secretary of State may at any time require the Commissioner to report on any matter relating to the Commissioner’s functions.

(4) On receiving a report from the Commissioner under this section, the Secretary of State must—

(a) publish the report, and

(b) lay a copy of the published report before Parliament.

(5) The Secretary of State may, after consultation with the Commissioner, exclude from publication any part of a report under this section if, in the opinion of the Secretary of State, the publication of that part would be contrary to the public interest or prejudicial to national security.

Section 26 – Requirement to notify and obtain consent before processing biometric information

(1) This section applies in relation to any processing of a child’s biometric information by or on behalf of the relevant authority of—

(a) a school,

(b) a 16 to 19 Academy, or

(c) a further education institution.

(2) Before the first processing of a child’s biometric information on or after the coming into force of subsection (3), the relevant authority must notify each parent of the child—

(a) of its intention to process the child’s biometric information, and

(b) that the parent may object at any time to the processing of the information.

(3) The relevant authority must ensure that a child’s biometric information is not processed unless—

(a) at least one parent of the child consents to the information being processed, and

(b) no parent of the child has withdrawn his or her consent, or otherwise objected, to the information being processed.

(4) Section 27 makes further provision about the requirement to notify parents and the obtaining and withdrawal of consent (including when notification and consent are not required).

(5) But if, at any time, the child—

(a) refuses to participate in, or continue to participate in, anything that involves the processing of the child’s biometric information, or

(b) otherwise objects to the processing of that information,

the relevant authority must ensure that the information is not processed, irrespective of any consent given by a parent of the child under subsection (3).

(6) Subsection (7) applies in relation to any child whose biometric information, by virtue of this section, may not be processed.

(7) The relevant authority must ensure that reasonable alternative means are available by which the child may do, or be subject to, anything which the child would have been able to do, or be subject to, had the child’s biometric information been processed.

Section 29 – Code of practice for surveillance camera systems

(1) The Secretary of State must prepare a code of practice containing guidance about surveillance camera systems.

(2) Such a code must contain guidance about one or more of the following—

(a) the development or use of surveillance camera systems,

(b) the use or processing of images or other information obtained by virtue of such systems.

(3) Such a code may, in particular, include provision about—

(a) considerations as to whether to use surveillance camera systems,

(b) types of systems or apparatus,

(c) technical standards for systems or apparatus,

(d) locations for systems or apparatus,

(e) the publication of information about systems or apparatus,

(f) standards applicable to persons using or maintaining systems or apparatus,

(g) standards applicable to persons using or processing information obtained by virtue of systems,

(h) access to, or disclosure of, information so obtained,

(i) procedures for complaints or consultation.

(4) Such a code—

(a) need not contain provision about every type of surveillance camera system,

(b) may make different provision for different purposes.

(5) In the course of preparing such a code, the Secretary of State must consult—

(a) such persons appearing to the Secretary of State to be representative of the views of persons who are, or are likely to be, subject to the duty under section 33(1) (duty to have regard to the code) as the Secretary of State considers appropriate,

(b) the National Police Chiefs’ Council,

(c) the Information Commissioner,

(d) the Investigatory Powers Commissioner,

(e) the Surveillance Camera Commissioner,

(f) the Welsh Ministers, and

(g) such other persons as the Secretary of State considers appropriate.

(6) In this Chapter “surveillance camera systems” means—

(a) closed circuit television or automatic number plate recognition systems,

(b) any other systems for recording or viewing visual images for surveillance purposes,

(c) any systems for storing, receiving, transmitting, processing or checking images or information obtained by systems falling within paragraph (a) or (b), or

(d) any other systems associated with, or otherwise connected with, systems falling within paragraph (a), (b) or (c).

(7) In this section—

“processing” has the same meaning as in Parts 5 to 7 of the Data Protection Act 2018 (see section 3(4) and (14) of that Act).

Section 33 – Effect of code

(1) A relevant authority must have regard to the surveillance camera code when exercising any functions to which the code relates.

[…]

(5) In this section “relevant authority” means—

(a) a local authority within the meaning of the Local Government Act 1972,

(b) the Greater London Authority,

(c) the Common Council of the City of London in its capacity as a local authority,

(d) the Sub-Treasurer of the Inner Temple or the Under-Treasurer of the Middle Temple, in their capacity as a local authority,

(e) the Council of the Isles of Scilly,

(f) a parish meeting constituted under section 13 of the Local Government Act 1972,

(g) a police and crime commissioner,

(h) the Mayor’s Office for Policing and Crime,

(i) the Common Council of the City of London in its capacity as a police authority,

(j) any chief officer of a police force in England and Wales,

(k) any person specified or described by the Secretary of State in an order made by statutory instrument.

Section 34 – Commissioner in relation to code

(1) The Secretary of State must appoint a person as the Surveillance Camera Commissioner (in this Chapter “the Commissioner”).

(2) The Commissioner is to have the following functions—

(a) encouraging compliance with the surveillance camera code,

(b) reviewing the operation of the code, and

(c) providing advice about the code (including changes to it or breaches of it).

(3) The Commissioner is to hold office in accordance with the terms of the Commissioner’s appointment; and the Secretary of State may pay in respect of the Commissioner any expenses, remuneration or allowances that the Secretary of State may determine.

(4) The Secretary of State may, after consultation with the Commissioner, provide the Commissioner with—

(a) such staff, and

(b) such accommodation, equipment and other facilities, as the Secretary of State considers necessary for the carrying out of the Commissioner’s functions.

Equality Act 2010

Section 4 – The protected characteristics

The following characteristics are protected characteristics—

age;

disability;

gender reassignment;

marriage and civil partnership;

pregnancy and maternity;

race;

religion or belief;

sex;

sexual orientation.

Section 13 – Direct discrimination

(1) A person (A) discriminates against another (B) if, because of a protected characteristic, A treats B less favourably than A treats or would treat others.

(2) If the protected characteristic is age, A does not discriminate against B if A can show A’s treatment of B to be a proportionate means of achieving a legitimate aim.

(3) If the protected characteristic is disability, and B is not a disabled person, A does not discriminate against B only because A treats or would treat disabled persons more favourably than A treats B.

(4) If the protected characteristic is marriage and civil partnership, this section applies to a contravention of Part 5 (work) only if the treatment is because it is B who is married or a civil partner.

(5) If the protected characteristic is race, less favourable treatment includes segregating B from others.

(6) If the protected characteristic is sex—

(a) less favourable treatment of a woman includes less favourable treatment of her because she is breast-feeding;

(b) in a case where B is a man, no account is to be taken of special treatment afforded to a woman in connection with pregnancy or childbirth.

(7) Subsection (6)(a) does not apply for the purposes of Part 5 (work).

(8) This section is subject to sections 17(6) and 18(7).

Section 19 – Indirect discrimination

(1) A person (A) discriminates against another (B) if A applies to B a provision, criterion or practice which is discriminatory in relation to a relevant protected characteristic of B’s.

(2) For the purposes of subsection (1), a provision, criterion or practice is discriminatory in relation to a relevant protected characteristic of B’s if—

(a) A applies, or would apply, it to persons with whom B does not share the characteristic,

(b) it puts, or would put, persons with whom B shares the characteristic at a particular disadvantage when compared with persons with whom B does not share it,

(c) it puts, or would put, B at that disadvantage, and

(d) A cannot show it to be a proportionate means of achieving a legitimate aim.

Section 149 – Public sector equality duty

(1) A public authority must, in the exercise of its functions, have due regard to the need to—

(a) eliminate discrimination, harassment, victimisation and any other conduct that is prohibited by or under this Act;

(b) advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it;

(c) foster good relations between persons who share a relevant protected characteristic and persons who do not share it.

(2) A person who is not a public authority but who exercises public functions must, in the exercise of those functions, have due regard to the matters mentioned in subsection (1).

(3) Having due regard to the need to advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it involves having due regard, in particular, to the need to—

(a) remove or minimise disadvantages suffered by persons who share a relevant protected characteristic that are connected to that characteristic;

(b) take steps to meet the needs of persons who share a relevant protected characteristic that are different from the needs of persons who do not share it;

(c) encourage persons who share a relevant protected characteristic to participate in public life or in any other activity in which participation by such persons is disproportionately low.

(4) The steps involved in meeting the needs of disabled persons that are different from the needs of persons who are not disabled include, in particular, steps to take account of disabled persons’ disabilities.

(5) Having due regard to the need to foster good relations between persons who share a relevant protected characteristic and persons who do not share it involves having due regard, in particular, to the need to—

(a) tackle prejudice, and

(b) promote understanding.

(6) Compliance with the duties in this section may involve treating some persons more favourably than others; but that is not to be taken as permitting conduct that would otherwise be prohibited by or under this Act.

Draft EU Proposal for a Regulation laying down harmonised rules on artificial intelligence (COM/2021/206 final)

Article 3 – Definitions

[…]

(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;

[…]

(40) ‘law enforcement authority’ means:

(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

Article 5

(1) The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific potential victims of crime, including missing children;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;

(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA 62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

[…]

(3) As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.

The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.

(4) A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.

Article 10 – Data and data governance

[…]

(3) Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.

Article 14 – Human oversight

(1) High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

(2) Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.

(3) Human oversight shall be ensured through either one or all of the following measures:

(a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user.

(4) The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances:

(a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;

(b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;

(d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

(e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.

(5) For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons.

Article 17 – Quality management system

(1) Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:

(a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;

(b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

(c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;

(d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;

(e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, the means to be used to ensure that the high-risk AI system complies with the requirements set out in Chapter 2 of this Title;

(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems;

(g) the risk management system referred to in Article 9;

(h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 61;

(i) procedures related to the reporting of serious incidents and of malfunctioning in accordance with Article 62;

(j) the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;

(k) systems and procedures for record keeping of all relevant documentation and information;

(l) resource management, including security of supply related measures;

(m) an accountability framework setting out the responsibilities of the management and other staff with regard to all aspects listed in this paragraph.

(2) The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation.

Article 19 – Conformity assessment

(1) Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49.

Article 29 – Obligations of users of high-risk AI systems

[…]

(4) Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall inform the provider or distributor and suspend the use of the system. They shall also inform the provider or distributor when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.

Article 43 – Conformity assessment

(1) For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:

(a) the conformity assessment procedure based on internal control referred to in Annex VI;

(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII.

For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.

(2) For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.

(3) For high-risk AI systems, to which legal acts listed in Annex II, section A, apply, the provider shall follow the relevant conformity assessment as required under those legal acts. The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.

For the purpose of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Chapter 2 of this Title, provided that the compliance of those notified bodies with requirements laid down in Article 33(4), (9) and (10) has been assessed in the context of the notification procedure under those legal acts.

Where the legal acts listed in Annex II, section A, enable the manufacturer of the product to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may make use of that option only if he has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering the requirements set out in Chapter 2 of this Title.

(4) High-risk AI systems shall undergo a new conformity assessment procedure whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current user.

For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.

(5) The Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and Annex VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.

(6) The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimizing the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.

Article 52 – Transparency obligations for certain AI systems

[…]

(2) Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.

Article 64 – Access to data and documentation

[…]

(2) Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system.

Article 69 – Codes of conduct

[…]

(1) The Commission and the Member States shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.

Annex III – High-risk AI systems referred to in Article 6(2)

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:

(1) Biometric identification and categorisation of natural persons:

(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;

[…]


 

Annex 2: The Review team

Matthew Ryder QC, Senior Barrister, Matrix Chambers

Jessica Jones, Barrister, Matrix Chambers

Javier Ruiz,  Independent Consultant on data ethics

Samuel Rowe, Pupil Barrister, 5 Essex Court

Annex 3: Advisory Board

Lillian Edwards, Professor of Law, Innovation and Society, Newcastle Law School

Anneke Lucasson, Professor of Clinical Genetics within Medicine, University of Southampton

Marion Oswald, Professor, School of Law, Northumbria University

Matthew Rice, Scotland Director, Open Rights Group

Renate Samson, Senior Policy Advisor, Open Data Institute

Pamela Ugwudike, Associate Professor of Criminology, University of Southampton

Edgar Whitley, Associate Professor of Information Management, London School of Economics

Annex 4: Consultees

Amba Kak, AI Now, 8 October 2020

Megan Gould, Liberty, 8 October 2020

Elaine Hamilton and David Scott, Scottish Biometrics Commissioner Bill, 13 October 2020

Michael Gribben, Ben Dellot and Jessica Smith, Centre for Data Ethics and Innovation,
16 October 2020

Ali Shah, Anne Russell, Ali Hall, Information Commissioner’s Office, 19 October 2020

Baroness Susan Williams, Kit Malthouse MP 20 October 2020

Professor Paul Wiles, Biometrics Commissioner, 9 November 2020

Professor Gillian Tully CBE, Forensic Science Regulator, 11 November 2020

Dr Daragh Murray, University of Essex, 12 November 2020

Tony Porter, Surveillance Camera Commissioner, 9 December 2020

Detective Chief Superintendent Chris Todd, West Midlands Police, 7 January 2021

Rachel Tuffin OBE, College of Policing, 19 January 2021

Silkie Carlo, Big Brother Watch, 20 January 2021

Dr Suzanne Shale, London Policing Ethics Panel, 25 January 2021

Assistant Chief Constable Dr Iain Raphael and David Hudson, College of Policing,
1 February 2021

Lindsey Chiswick, Metropolitan Police, 2 February 2021

Robin Allen QC and Dee Masters, Cloisters Chambers, 9 February 2021

All organisational affiliations correct at time of interview.


Image credit: Grafissimo

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50

Skip to content

Executive summary

‘Where should I go for dinner? What should I read, watch or listen to next? What should I buy?’ To answer these questions, we might go with our gut and trust our intuition. We could ask our friends and family, or turn to expert reviews. Recommendations large and small can come from a variety of sources in our daily lives, but in the last decade there has been a critical change in where they come from and how they’re used.

Recommendations are now a pervasive feature of the digital products we use. We are increasingly living a world of recommendation systems, a type of software designed to sift through vast quantities of data to guide users towards a narrower selection of material, according to a set of criteria chosen by their developers.

Examples of recommendation systems include Netflix’s ‘Watch next’ and Amazon’s ‘Other users also purchased’; TikTok’s recommendation system drives its main content feed.

But what is the risk of a recommendation? As recommendations become more automated and data-driven, the trade-offs in their design and use are becoming more important to understand and evaluate.

Background

This report explores the ethics of recommendation systems as used in public service media organisations. These independent organisations have a mission to inform, educate and entertain the public, and are often funded by and accountable to the public.

In media organisations, producers, editors and journalists have always made implicit and explicit decisions about what to give prominence to, both in terms of what stories to tell and what programmes to commission, but also in how those stories are presented. Deciding what makes the front page, what gets the primetime slot, what makes top billing on the evening news – these are all acts of recommendation. While private media organisations like Netflix primarily use these systems to drive user engagement with their content, public service media organisations, like the British Broadcasting Corporation (BBC) in the UK, operate with a different set of principles and values.

This report also explores how public service media organisations are addressing the challenge of designing and implementing recommendation systems within the parameters of their mission, and identifies areas for further research into how they can accomplish this goal.

While there is an extensive literature exploring public service values and a separate literature around the ethics and operational challenges of designing and implementing recommendation systems, there are still many gaps in the literature around how public service media organisations are designing and implementing these systems. Addressing these gaps can help ensure that public service media organisations are better able to design these systems. With this in mind, this project has explored the following questions:

  • What are the values that public service media organisations adhere to? How do these differ from the goals that private-sector organisations are incentivised to pursue?
  • In what contexts do public service media use recommendation systems?
  • What value can recommendation systems add for public service media and how do they square with public service values?
  • What are the ethical risks that recommendation systems might raise in those contexts? And what challenges should teams consider?
  • What are the mitigations that public service media can implement in the design, development, and implementation of these systems?

In answering these questions, we focused on European public service media organisations and in particular on the BBC in the UK, who are project partners on this research.

The BBC is the world’s largest public service media organisation and has been at the forefront of public service broadcasters exploring the use of recommendation systems. As the BBC has historically set precedents that other public service media have followed, it is valuable to understand its work in depth in order to draw wider lessons for the field.

In this report, we explore an in-depth snapshot of the BBC’s development and use of several recommendation systems from summer and autumn 2021, alongside an examination of the work of several other European public service media organisations. We place these examples in the broader context of debates around 21st century public service media and use them to explore the motivations, risks and evaluation of the use of recommendation systems by public service media and their use more broadly.

The evidence for this report stems from interviews with 11 current staff from editorial, product and engineering teams involved in recommendation systems at the BBC, along with interviews with representatives of six other European public service broadcasters that use recommendation systems. This report also draws on a review of the existing literature on public service media recommendation systems and on interviews with experts from academia, civil society and government.

Findings

Across these different public service media organisations, our research has found five key findings:

  1. The contextual role of public service media organisations is a major driver for their increasing use of recommendation systems. The last few decades have seen public service media organisations lose market share of news and entertainment to private providers, putting pressure on public service media organisations to use recommendation systems to stay competitive.
  2. The values of public service media organisations create different objectives and practices to those in the private sector. While private-sector media organisations are primarily driven to maximise shareholder revenue and market share, with some consideration of social values, public service media organisations are legally mandated to operate with a particular set of public interest values at their core, including universality, independence, excellence, diversity, accountability and innovation.
  3. These value differences translate into different objectives for the use of recommendation systems. While private firms seek to maximise metrics like user engagement, ‘time on product’ and subscriber retention in the use of their recommendation systems, public service media organisations seek related but different objectives. For example, rather than maximising engagement with recommendation systems, our research found public service media providers want to broaden their reach to a more diverse set of audiences. Rather than maximising time on product, public service media organisations are more concerned with ensuring the product is useful for all members of society, in line with public interest values.
  4. Public service media recommendation systems can raise a range of well-documented ethical risks, but these will differ depending on the type of system and context of its use. Our research found that public service media recognise a wide array of well-documented ethical risks of recommendation systems, including risks to personal autonomy, privacy, misinformation and fragmentation of the public sphere. However, the type and severity of the risks highlighted depended on which teams we spoke with, with audio-on-demand and video-on-demand teams raising somewhat different concerns to those working on news.
  5. Evaluating the risks and mitigations of recommendation systems must be done in the context of the wider product. Addressing the risks of public service media recommendation systems should not just focus on technical fixes. Aligning product goals and other product features with public service values are just as important in ensuring recommendation systems positive contribute the experiences of audiences and to wider society.

Recommendations

Based on these key findings, we make nine recommendations for future research, experimentation and collaboration between public service media organisations, academics, funders and regulators:

  1. Define public service value for the digital age. Recommendation systems are designed to optimise against specific objectives. However, the development and implementation of recommendation systems is happening at a time when the concept of public service value and the role of public service media organisations is under question. Unless public service media organisations are clear about their own identities and purpose, it will be difficult for them to build effective recommendation systems. In the UK, significant work has already been done by Ofcom as well as the Department for Digital, Culture, Media and Sport’s parliamentary Select Committee to identify the challenges public service media face and offer new approaches to regulation. Their recommendations must be implemented so that public service media can operate within a paradigm appropriate to the digital age and build systems that address a relevant mission.
  2. Fund a public R&D hub for recommendation systems and responsible recommendation challenges. There is a real opportunity to create a hub for R&D of recommendation systems that are not tied to industry goals. This is especially important as recommendation systems are one of the prime use cases of behaviour modification technology but research into it is impaired by lack of access to interventional data.  Therefore, as part of UKRI’s National AI Research and Innovation (R&I) Programme set out in the UK AI Strategy, it should fund the development of a public research hub on recommendation technology.
  3. Publish research into audience expectations of personalisation. There was a striking consensus in our interviews with public service media teams working on recommendations that personalisation was both wanted and expected by the audience. However, there is limited publicly available evidence underlying this belief and more research is needed. Understanding audience’s views towards recommendation systems is an important part of ensuring those systems are acting in the public interest. Public service media organisations should not widely adopt recommendation systems without evidence that they are either wanted or needed by the public. Otherwise, public service media risk simply following a precedent set by commercial competitors, rather than defining a paradigm aligned to their own missions.
  4. Communicate and be transparent with audiences. Although most public service media organisations profess a commitment to transparency about their use of recommendation systems, in practice there is little effective communication with their audiences about where and how recommendation systems are being used. Public service media should invest time and research into understanding how to usefully and honestly articulate their use of recommendation systems in ways that are meaningful to their audiences. This communication must not be one way. There must be opportunities for audiences to give feedback and interrogate the use of the systems, and raise concerns.
  5. Balance user control with convenience. Transparency alone is not enough. Giving users agency over the recommendations they see is an important part of responsible recommendation. Simply giving users direct control over the recommendation system is an obvious and important first step, but it is not a universal solution. We recommend that public service media providers experiment with different kinds of options, including enabling algorithmic choice of recommendation systems and ‘joint’ recommendation profiles.
  6. Expand public participation. Beyond transparency or individual user choice and control over the parameters of the recommendation systems already deployed, users and wider society could also have greater input during the initial design of the recommendation systems and in the subsequent evaluations and iterations. This is particularly salient for public service media organisations as, unlike private companies which are primarily accountable to their customers and shareholders, public service media organisations have an obligation to serve the interests of society. Therefore, even those who are not direct consumers of content should have a say in how public service media recommendations are shaped.
  7. Standardise metadata. Inconsistent, poor quality metadata – an essential resource for training and developing recommendation systems – was consistently highlighted as a barrier to developing recommendation systems in public service media, particularly in developing more novel approaches that go beyond user engagement and try to create diverse feeds of recommendations. Each public service media organisation should have a central function that standardises the format, creation and maintenance of metadata across the organisation. Institutionalising the collection of metadata and making access to it more transparent across each individual organisation is an important investment in public service media’s future capabilities.
  8. Create shared recommendation system resources. Given their limited resources and shared interests, public service media organisations should invest more heavily in creating common resources for evaluating and using recommendation systems. This could include a shared repository for evaluating recommendation systems on metrics valued by public service media, including libraries in common coding languages.
  9. Create and empower integrated teams. When developing and deploying recommendation systems, public service media organisations need to integrate editorial and development teams from the start. This ensures that the goals of the recommendation system are better aligned with the organisation’s goals as a whole and ensure the systems augment and complement existing editorial expertise.

How to read this report

This report examines how European public service media organisations think about using automated recommendation systems for content curation and delivery. It covers the context in which recommendation systems are being deployed, why that matters, the ethical risks and evaluation difficulties posed by these systems and how public service media are attempting to mitigate these risks. It also provides ideas for new approaches to evaluation that could enable better alignment of their systems with public service values.

If you need an introduction or refresher on what recommendation systems are, we recommend starting with the ‘Introducing recommendation systems’.

If you work for a public service media organisation

  • We recommend the chapters on ‘Stated goals and potential risks of using recommendation systems in public service media’ and ‘Evaluation of recommendation systems’.
  • For an understanding of how the BBC has deployed recommendation systems, see the case studies.
  • For ideas on how public service media organisations can advance their responsible use of recommendation systems, see the chapter on ‘Outstanding questions and areas for further research and experimentation’.

If you are a regulator of public service media

  • We recommend you pay particular attention to the section on ‘Stated goals and potential risks of using recommendation systems in public service media’ and ‘How do public service media evaluate their recommendation systems?’.
  • In addition, to understand the practices and initiatives that we believe should be encouraged within and experimented with by public service media organisations to ensure responsible and effective use of recommendation systems, see ‘Outstanding questions and areas for further research and experimentation’.

If you are a regulator of online platforms

  • If you need an introduction or refresher on what recommendation systems are, we recommend starting with the ‘Introducing recommendation systems’. Understanding this context can help disentangle the challenges in regulating recommendation systems, by highlighting where problems arise from the goals of public service media versus the process of recommendation itself.
  • To understand the issues faced by all deployers of recommendation systems, see the sections on the ‘Stated goals of recommendation systems’ and ‘Potential risks of using recommendation systems’.
  • To better understand how these risks change due to the context and choices of public service media, relative to other online platforms, and the difficulties even organisations explicitly oriented towards public value have in auditing their own recommendation systems to determine whether they are socially beneficial, beyond simple quantitative engagement metrics, see the section on ‘How these risks are viewed and addressed by public service media’ and the chapter on ‘Evaluation of recommendation systems’.

If you are a funder of research into recommendation systems or a researcher interested in recommendation systems

  • Public service media organisations, with mandates that emphasise social goals of universality, diversity and innovation over engagement and profit-maximising, can offer an important site of study and experimentation for new approaches to recommendation system design and evaluation. We recommend starting with the sections on ‘The context of public service values and public service media’ and ‘why this matters’, to understand the different context within which public service media organisations operate.
  • Then, the sections on ‘How do public service media evaluate their recommendation systems?’ and ‘How could evaluations be done differently?’, followed by the chapter on ‘Outstanding questions and areas for further research and experimentation’, could provide inspiration for future research projects or pilots that you could undertake or fund.

Introduction

Scope

Recommendation systems are tools designed to sift through the vast quantities of data available online and use algorithms to guide users towards a narrower selection of material, according to a set of criteria chosen by their developers. Recommendation systems sit behind a vast array of digital experiences. ‘Other users also purchased…’ on Amazon or ‘Watch next’ on Netflix guide you to your next purchase or night on the sofa. Deliveroo will suggest what to eat, LinkedIn where to work and Facebook who your friends might be.

These practices are credited with driving the success of companies like Netflix and Spotify. But they are also blamed for many of the harms associated with the internet, such as the amplification of harmful content, the polarisation of political viewpoints (although the evidence is mixed and inconclusive)[footnote]Cobbe, J. and Singh, J. (2019). ‘Regulating Recommending: Motivations, Considerations, and Principles’. European Journal of Law and Technology, 10(3), pp. 8–10. Available at: https://ejlt.org/index.php/ejlt/article/view/686; Steinhardt, J. (2021). ‘How Much Do Recommender Systems Drive Polarization?’. UC Berkeley. Available at: https://jsteinhardt.stat.berkeley.edu/blog/recsys-deepdive; Stray, J. (2021). ‘Designing Recommender Systems to Depolarize’, p. 2. arXiv. Available at: http://arxiv.org/abs/2107.04953[/footnote] and the entrenchment of inequalities.[footnote]Born, G. Morris, J. Diaz, F. and Anderson, A. (2021). Artificial intelligence, music recommendation, and the curation of culture: A white paper, pp. 10–13. Schwartz Reisman Institute for Technology and Society. Available at: https://static1.squarespace.com/static/5ef0b24bc96ec4739e7275d3/t/60b68ccb5a371a1bcdf79317/1622576334766/Born-Morris-etal-AI_Music_Recommendation_Culture.pdf[/footnote] Regulators and policymakers worldwide are paying increasing attention to the potential risks of recommendation systems, with proposals in China and Europe to regulate their design, features and uses.[footnote]See: European Union. (2022). Digital Services Act, Article 27. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L:2022:277:TOC; For details of Article 17 of the Cybersecurity Administration of China (CAC)’s Internet Information Service Algorithm Recommendation Management Regulations, see: Huld, A. (2022). ‘China Passes Sweeping Recommendation Algorithm Regulations’. China Briefing News. Available at: https://www.china-briefing.com/news/china-passes-sweeping-recommendation-algorithm-regulations-effect-march-1-2022/[/footnote]

Public service media organisations are starting to follow the example of their commercial rivals and adopt recommendation systems. Like the big digital streaming service providers, they sit on huge catalogues of news and entertainment content, and can use recommendation systems to direct audiences to particular options.

But public service media organisations face specific challenges in deploying these technologies. Recommendation systems are designed to optimise for certain objectives: a hotel’s website is aiming for maximum bookings, Spotify and Netflix want you to renew your subscription.

Public service media serve many functions. They have a duty to serve the public interest, not the company bottom line. They are independently financed and are controlled by, if not answerable to, the public.[footnote]Conseil mondial de la radiotélévision. (2001). Public broadcasting: why? how? pp. 11–15. UNESCO Digital Library. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000124058[/footnote] Their mission is to inform, educate and entertain. Public service media are committed to values including independence, excellence and diversity.[footnote]European Broadcasting Union. (2012). Empowering Society: A Declaration on the Core Values of Public Service Media. Available at: https://www.ebu.ch/files/live/sites/ebu/files/Publications/EBU-Empowering-Society_EN.pdf[/footnote] They must fulfil an array of duties and responsibilities set down in legislation that often predates the digital era. How do you optimise for all that?

Developing recommendation systems for public service media is not just about finding technical fixes. It requires an interrogation of the organisations’ role in democratic societies in the digital age. How do the public service values that have guided them for a century translate to a context where the internet has fragmented the public sphere and audiences are defecting to streaming services? And how can public service media use this technology in ways that serve the public interest?

These are questions that resonate beyond the specifics of public service media organisations. All public institutions that wish to use technologies for societal benefit must grapple with similar issues. And all organisations – public or private – have to deploy technologies in ways that align with their values. Asking these questions can be helpful to technologists more generally.

In a context where the negative impacts of recommendation systems are increasingly apparent, public service media must tread carefully when considering their use. But there is also an opportunity for public service media to do what, historically, it has excelled at – innovating in the public interest.

A public service approach to building recommendation systems that are both engaging and trustworthy could not only address the needs of public service media in the digital age, but provide a benchmark for scrutiny of systems more widely and create a challenge to the paradigm set by commercial operators’ practices.

In this report, we explore how public service media organisations are addressing the challenge of designing and implementing recommendation systems within the parameters of their organisational mission, and identify areas for further research into how they can accomplish this goal.

While there is an extensive literature exploring public service values and a separate literature around the ethics and operational challenges of designing and implementing recommendation systems, there are still many gaps in the literature around how public service media organisations are designing and implementing these systems. Addressing that gap can help ensure that public service media organisations are better able to design these systems. With that in mind, this report explores the following questions:

  • What are the values that public service media organisations adhere to? How do these differ from the goals that private-sector organisations are incentivised to pursue?
  • In what contexts do public service media use recommendation systems?
  • What value can recommendation systems add for public service media and how do they square with public service values?
  • What are the ethical risks that recommendation systems might raise in those contexts? And what challenges should different teams within public service media organisations (such as product, editorial, legal and engineering) consider?
  • What are the mitigations that public service media can implement in the design, development and implementation of these systems?

In answering these questions, this report:

  • provides greater clarity about the ethical challenges that developers of recommendation systems must consider when designing and maintaining these systems
  • explores the social benefit of recommendation systems by examining the trade-offs between their stated goals and their potential risks
  • provides examples of how public service broadcasters are grappling with these challenges, which can help inform the development of recommendation systems in other contexts.

This report focuses on European public service media organisations and in particular on the British Broadcasting Corporation (BBC) in the UK, who are project partners on this research. The BBC is the world’s largest public service media organisation and has been at the forefront amongst public service broadcasters of exploring the use of recommendation systems. As the BBC has historically set precedents that other public service media have followed, it is valuable to understand its work in depth in order to draw wider lessons for the field.

In this report, we explore an in-depth snapshot of the BBC’s development and use of several recommendation systems as it stood in 2021, alongside an examination of the work of several other European public service media organisations. We place these examples in the broader context of debates around 21st century public service media and use them to explore the motivations, risks and evaluation of the use of recommendation systems by public service media and their use more broadly.

The evidence for this report stems from interviews with 11 current staff from editorial, product and engineering teams, involved in recommendation systems at the BBC, along with interviews with representatives of six other European public service broadcasters that use recommendation systems. This report also draws on a review of the existing literature on public service media recommendation systems and on interviews with experts from academia, civil society and regulation who work on the design, development, and evaluation of recommendation systems.

Although a large amount of the academic literature focuses on the use of recommendations in news provision, we look at the full range of public service media content, as we found more of the advanced implementations of recommendation systems lie in other domains. We have drawn on published research about recommendation systems from commercial platforms, however, internal corporate studies are unavailable to independent researchers and our requests to interview both researchers and corporate representatives of platforms were unsuccessful.

Background

In this chapter, we set out the context for the rest of the report. We outline the history and context of public service media organisations, what recommendation systems are and how they are approached by public service media organisations, and what external and internal processes and constraints govern their use.

The context of public service values and public service media

The use of recommendation systems in public service media is informed by their history, values and remit, their governance and the landscape in which they operate. In this section we situate the deployment of recommendation systems in this context.

Broadly, public service media are independent organisations that have a mission to inform, educate and entertain. Their values are rooted in the founding vision for public service media organisations a century ago and remain relevant today, codified into regulatory and governance frameworks at organisational, national and European levels. However the values that public service media operate under are inherently qualitative and, even with the existence of extensive guidelines, are interpreted through the daily judgements of public service media staff and the mental models and institutional culture built up over time.

Although public service media have been resilient to change, they currently face a trio of challenges:

  1. Losing audiences to online digital content providers including Netflix, Amazon, YouTube and Spotify.
  2. Budget cuts and outdated regulation, framed around analogue broadcast commitments, hampering their ability to respond to technological change.
  3. Populist political movements undermining their independence.

Public service media are independent media organisations financed by and answerable to the publics they serve.[footnote]Conseil mondial de la radiotélévision. (2001). Public broadcasting: why? how? pp. 11–15. UNESCO Digital Library. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000124058[/footnote] Their roots lie in the 1920s technological revolution of radio broadcasting when the BBC was established as the world’s first public service broadcaster, funded by a licence fee, and with the ambition to ‘bring the best of everything to the greatest number of homes’.[footnote]BBC. (2022). The BBC Story – 1920s factsheet. Available at: http://downloads.bbc.co.uk/historyofthebbc/1920s.pdf[/footnote] Other national broadcasters were soon founded across Europe and also adopted the BBC’s mission to ‘inform, educate and entertain’. Although there are now public service media organisations in almost every country in the world, this report focuses on European public service media, which share comparable social, political and regulatory developments and therefore a similar context when considering the implementation of recommendation systems.

Public service media organisations have come to play an important institutional role within democratic societies in Europe, creating a bulwark against the potential control of public opinion either by the state or by particular interest groups.[footnote]Tambini, D. (2021). ‘Public service media should be thinking long term when it comes to AI’. Media@LSE. Available at: https://blogs.lse.ac.uk/medialse/2021/05/12/public-service-media-should-be-thinking-long-term-when-it-comes-to-ai/[/footnote] The establishment of public service broadcasters for the first time created a universally accessible public sphere where, in the words of the BBC’s founding chairman Lord Reith, ‘the genius and the fool, the wealthy and the poor listen simultaneously’. They aimed to forge a collective experience, ‘making the nation as one man’.[footnote]Higgins, C. (2014). ‘What can the origins of the BBC tell us about its future?’. The Guardian. Available at: https://www.theguardian.com/media/2014/apr/15/bbc-origins-future[/footnote] At the same time public service media are expected to reflect the diversity of a nation, enabling the wide representation of perspectives in a democracy, as well as giving people sufficient information and understanding to make decisions on issues of public importance. These two functions create an inherent tension between public service media as an agonistic space where different viewpoints compete and a consensual forum where the nation comes together. 

Public service values

The founding vision for public service media has remained within the DNA of organisations as their public service values – often called Reithian principles, in reference to the influence of the BBC’s founding chairman.

The European Broadcasting Union (EBU), the membership organisation for public service media in Europe, has codified the public service mission into six core values: universality, independence, excellence, diversity, accountability and innovation, and member organisations commit to strive to uphold these in practice.[footnote]European Broadcasting Union. (2012). Empowering Society: A Declaration on the Core Values of Public Service Media. Available at: https://www.ebu.ch/files/live/sites/ebu/files/Publications/EBU-Empowering-Society_EN.pdf[/footnote]

 

Public service value Meaning
Universality ·  reach all segments of society, with no-one excluded

· share and express a plurality of views and ideas

· create a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion

· multi-platform

· accessible for everyone

· enable audiences to engage and participate in a democratic society.

Independence · trustworthy content

· act in the interest of audiences

· completely impartial and independent from political, commercial and other influences and ideologies

· autonomous in all aspects of the remit such as programming, editorial decision-making, staffing

· independence underpinned by safeguards in law.

Excellence · high standards of integrity professionalism and quality; create benchmarks within the media industries

·  foster talent

· empower, enable and enrich audiences

· audiences are also participants.

Diversity · reflect diversity of audiences by being diverse and pluralistic in the genres of programming, the views expressed, and the people employed

· support and seek to give voice to a plurality of competing views – from those with different backgrounds, histories and stories. Help build a more inclusive, less fragmented society.

Accountability · listen to audiences and engage in a permanent and meaningful debate

· publish editorial guidelines. Explain. Correct mistakes. Report on policies, budgets, editorial choices

· be transparent and subject to constant public scrutiny

· be efficient and managed according to the principles of good governance.

Innovation · enrich the media environment

· be a driving force of innovation and creativity

· develop new formats, new technologies, new ways of connectivity with audiences

· attract, retain and train our staff so that they can participate in and shape the digital future, serving the public.

As well as signing up to these common values, each individual public service media organisation has its own articulation of its mission, purpose and values, often set out as part of its governance.[footnote]Statutory governance of public service media also varies from country to country and reflects national political and regulatory norms. The BBC is regulated by the independent broadcasting regulator Ofcom. The European Union’s revised Audio Visual Service Directive requires member states to have an independent regulator but this can take different forms. See: European Commission. (2018). Digital Single Market: updated audiovisual rules. Available at: https://ec.europa.eu/commission/presscorner/detail/en/MEMO_18_4093. For example, France has a central regulator, the Conseil Supérieur de l’Audiovisuel. But in Germany, although public service media objectives are defined in the constitution, oversight is provided by a regional broadcasting council, Rundfunkrat, reflecting the country’s federal structure. In Belgium too, regulation is devolved to two separate councils representing the country’s French and Flemish speaking regions.[/footnote] Ultimately these will align with those described by the EBU but may use different terms or have a different emphasis. Policymakers and practitioners operating at a national level are more likely to refer to these specific expressions of public values. The overarching EBU values are often referenced in academic literature as the theoretical benchmark for public service values. 

In the case of the BBC, the Royal Charter between the Government and the BBC is agreed for a 10 year period.[footnote]BBC. (2017). ‘Mission, values and public purposes’. Available at: https://www.bbc.com/aboutthebbc/governance/bbc.com/aboutthebbc/governance/mission/. For comparison, ARD, the German public service media organisation articulates its values as: ‘Participation, Independence, Quality, Diversity, Localism, Innovation, Value Creation, Responsibility’. See: ARD. (2021). Die ARD – Unser Beitrag zum Gemeinwohl. Available at: https://www.ard.de/die-ard/was-wir-leisten/ARD-Unser-Beitrag-zum-Gemeinwohl-Public-Value-100[/footnote]

The BBC: governance and values

 

Mission: to act in the public interest, serving all audiences through the provision of impartial, high-quality and distinctive output and services which inform, educate and entertain.

 

Public purposes:

  1. To provide impartial news and information to help people understand and engage with the world around them.
  2. To support learning for people of all ages.
  3. To show the most creative, highest quality and distinctive output and services.
  4. To reflect, represent and serve the diverse communities of all of the United Kingdom’s nations and regions and, in doing so, support the creative economy across the United Kingdom.
  5. To reflect the United Kingdom, its culture and values to the world.

 

Additionally, the BBC has its own set of organisational values that are not part of the governance agreement but that ‘represent the expectations we have for ourselves and each other, they guide our day-to-day decisions and the way we behave’:

  • Trust: Trust is the foundation of the BBC – we’re independent, impartial and truthful.
  • Respect: We respect each other – we’re kind, and we champion inclusivity.
  • Creativity: Creativity is the lifeblood of our organisation.
  • Audiences: Audiences are at the heart of everything we do.
  • One BBC: We are One BBC – we collaborate, learn and grow together.
  • Accountability: We are accountable and deliver work of the highest quality.

These kinds of regulatory requirements and values are then operationalised internally through organisations’ editorial guidelines which again will vary from organisation to organisation, depending on the norms and expectations of their publics. Guidelines can be extensive and their aim is to help teams put public service values into practice. For example, the current BBC guidelines run to 220 pages, covering everything from how to run a competition, to reporting on wars and acts of terror.

Nonetheless, such guidelines leave a lot of room for interpretation. Public service values are, by their nature, qualitative and difficult to measure objectively. For instance, consider the BBC guidelines on impartiality – an obligation that all regulated broadcasters in the UK must uphold – and over which the BBC has faced intense scrutiny:

‘The BBC is committed to achieving due impartiality in all its output. This commitment is fundamental to our reputation, our values and the trust of audiences. The term “due” means that the impartiality must be adequate and appropriate to the output, taking account of the subject and nature of the content, the likely audience expectation and any signposting that may influence that expectation.’

‘Due impartiality usually involves more than a simple matter of ‘balance’ between opposing viewpoints. We must be inclusive, considering the broad perspective and ensuring that the existence of a range of views is appropriately reflected. It does not require absolute neutrality on every issue or detachment from fundamental democratic principles, such as the right to vote, freedom of expression and the rule of law. We are committed to reflecting a wide range of subject matter and perspectives across our output as a whole and over an appropriate timeframe so that no significant strand of thought is under-represented or omitted.’ 

It’s clear that impartiality is a question of judgement and may not even be expressed in a single piece of content but over the range of BBC output over a period of time. In practice, teams internalise these expectations and make decisions based on institutional culture and internal mental models of public service value, rather than continually checking the editorial guidelines or referencing any specific public values matrix.[footnote]Mazzucato, M., Conway, R., Mazzoli, E., Knoll E. and Albala, S. (2020). Creating and measuring dynamic public value at the BBC, p.22. UCL Institute for Innovation and Public Purpose. Available at: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/final-bbc-report-6_jan.pdf[/footnote]

How public service media differ from other media organisations

Public service media are answerable to the publics they serve.[footnote]Not all public service media are publicly funded. Channel 4 in the UK for example is financed through advertising but owned by the public (although the UK Government has opened a consultation on privatisation).[/footnote] They should be independent from both government influence and from the influence of commercial owners. They operate to serve the public interest.

Commercial media, however, serve the interests of their owners or shareholders. Success for Netflix for example is measured in numbers of subscribers which then translates into revenues.[footnote]Circulation and profits for print media have declined in recent years but in some cases promote their proprietors’ interests through political influence – for instance the Murdoch-owned Sun in the UK or the Axel Springer-owned Bild Zeitung in Germany.[/footnote]

The activities of commercial media are nonetheless limited by regulation. In the UK the independent regulator Ofcom’s Broadcasting Code requires all broadcasters (not just public service media) to abide by principles such as fairness and impartiality.[footnote]Ofcom. (2020). The Ofcom Broadcasting Code (with the Cross-promotion Code and the On Demand Programme Service Rules). Available at: https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code[/footnote] Russia Today for example has been investigated for allegedly misleading reporting on the conflict in Ukraine.[footnote]Ofcom. (2022). ‘Ofcom launches 15 investigations into RT’. Available at: https://www.ofcom.org.uk/news-centre/2022/ofcom-launches-investigations-into-rt[/footnote] Streaming services are subject to more limited regulation which covers child protection, incitement to hatred and product placement,[footnote]Ofcom. (2021). Guide to video on demand. Available at: https://www.ofcom.org.uk/tv-radio-and-on-demand/advice-for-consumers/television/video-on-demand[/footnote] while the press – both online and in print – are largely lightly self-regulated through the Independent Press Standards Organisation, with some publications regulated by IMPRESS.[footnote]Independent Press Standards Organisation (IPSO). (2022). ‘What we do’. Available at: https://www.ipso.co.uk/what-we-do/; IMPRESS. ‘Regulated Publications’. Available at: https://impress.press/regulated-publications/[/footnote]

However, public service media have extensive additional obligations, amongst others to ‘meet the needs and satisfy the interests of as many different audiences as practicable’ and ‘reflect the lives and concerns of different communities and cultural interests and traditions within the United Kingdom, and locally in different parts of the United Kingdom’,[footnote]UK Government. Communications Act 2003, section 265. Available at: https://www.legislation.gov.uk/ukpga/2003/21/section/265[/footnote] 

These regulatory systems vary from country to country but hold broadly the same characteristics. In all cases, the public service remit entails far greater duties than in the private sector and broadcasters are more heavily regulated than digital providers.

These obligations are also framed in terms of public or societal benefit. This means public service media are striving to achieve societal goals that may not be aligned with a pure maximisation of profits, while commercial media pursue interests more aligned with revenue and the interests of their shareholders.

Nonetheless, public service media face scrutiny about how well they meet their objectives and have had to create proxies for these intangible goals to demonstrate their value to society.

‘[Public service media] is fraught today with political contention. It must justify its existence and many of its efforts to governments that are sometimes quite hostile, and to special interest groups and even competitors. Measuring public value in economic terms is therefore a focus of existential importance; like it or not diverse accountability processes and assessment are a necessity.’[footnote]Lowe, G. and Martin, F. (eds.). (2014). The Value and Values of Public Service Media.[/footnote]

In practice this means public service media organisations measure their services against a range of hard metrics, such as audience reach and value for money, as well as softer measures like audience satisfaction surveys.[footnote]BBC. (2021). BBC Annual Plan 2021-22, Annex 1. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote] In the mid-2000s the BBC developed a public value test to inform strategic decisions that has since been adopted as a public interest test which remains part of the BBC’s governance. Similar processes have been created in other public service media systems, such as the ‘Three Step Test’ in German broadcasting.[footnote]The 12th Inter-State Broadcasting Treaty, the regulatory framework for public service and commercial broadcasting across Germany’s federal states, introduced a three-step test for assessing whether online services offered by public service broadcasters met their public service remit. Under the three-step test, the broadcaster needs to assess: first, whether a new or significantly amended digital service satisfies the democratic, social and cultural needs of society; second, whether it contributes to media competition from a qualitative point of view and; third, the associated financial cost. See: Institute for Media and Communication Policy. (2009). Drei-Stufen-Test. Available at: http://medienpolitik.eu/drei-stufen-test/[/footnote] These methods have their own limitations, drawing public media into a paradigm of cost-benefit analysis and market fixing, rather than articulating wider values to individuals, society and industry.[footnote]Mazzucato, M., Conway, R., Mazzoli, E., Knoll E. and Albala, S. (2020). Creating and measuring dynamic public value at the BBC, p.22. UCL Institute for Innovation and Public Purpose. Available at: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/final-bbc-report-6_jan.pdf[/footnote] 

This does not mean commercial media are devoid of values. Spotify for example says its mission ‘is to unlock the potential of human creativity—by giving a million creative artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it’,[footnote]Spotify. (2022). ‘About Spotify’. Available at: https://newsroom.spotify.com/company-info/[/footnote] while Netflix’s organisational values are judgment, communication, curiosity, courage, passion, selflessness, innovation, inclusion, integrity and impact.[footnote]Netflix. (2022). ‘Netflix Culture’. Available at: https://jobs.netflix.com/culture[/footnote] Commercial media are also sensitive to issues that present reputational risk, for instance the outcry over Joe Rogan’s Spotify podcast propagating disinformation about COVID-19 or Jimmy Carr’s joke about the Holocaust.[footnote]Silberling, A. (2022). ‘Spotify adds COVID-19 content advisory’. TechCrunch. Available at: https://social.techcrunch.com/2022/03/28/spotify-covid-19-content-advisory-joe-rogan/; Jackson, S. (2022). ‘Jimmy Carr condemned by Nadine Dorries for “shocking” Holocaust joke about travellers in Netflix special His Dark Material’. Sky News. Available at: https://news.sky.com/story/jimmy-carr-condemned-for-disturbing-holocaust-joke-about-travellers-in-netflix-special-his-dark-material-12533148[/footnote]

However, commercial media harness values in service of their business model, whereas for public service media the values themselves are the organisational objective. Therefore, while the ultimate goal of a commercial media organisation is quantitative (revenue) the ultimate goal of public service media is qualitative (public value) – even if this is converted into quantitative proxies.

This difference between public and private media companies is fundamental in how they adopt recommendation systems. We discuss this further later in the report when examining the objectives of using recommendation systems.

Current challenges for public service media

Since their inception, public service media and their values have been tested and reinterpreted in response to new technologies.

The introduction of the BBC Light Programme in 1945, a light entertainment alternative to the serious fare offered by the BBC Home Service, challenged the principle of universality (not everyone was listening to the same content at the same time) as well as the balance between the mission to inform, educate and entertain (should public service broadcasting give people what they want or what they need?). The arrival of the video recorder, and then new channels and platforms, gave audiences an option to opt out of the curated broadcast schedule –where editors determined what should be consumed. While this enabled more and more personalised and asynchronous listening and viewing, it potentially reduced exposure to the serendipitous and diverse content that is often considered vital to the public service remit.[footnote]van Es, K. F. (2017). ‘An Impending Crisis of Imagination : Data‐Driven Personalization in Public Service Broadcasters’. Media@LSE. Available at: https://dspace.library.uu.nl/handle/1874/358206[/footnote] The arrival and now dominance of digital technologies comes amid a collision of simultaneous challenges which, in combination, may be existential.

Audience

Public service media have always had a hybrid role. They are obliged to serve the public simultaneously as citizens and consumers.[footnote]BBC Trust. (2012). BBC Trust assessment processes Guidance document. Available at: http://downloads.bbc.co.uk/bbctrust/assets/files/pdf/about/how_we_govern/pvt/assessment_processes_guidance.pdf[/footnote]

Their public service mandate requires them to produce content and serve audiences that the commercial market does not provide for. At the same time, their duty to provide a universal service means they must aim to reach a sizeable mainstream audience and be active participants in the competitive commercial market.

Although people continue to use and value public service media, the arrival of streaming services such as Netflix, Amazon and Spotify, as well as the availability of content on YouTube, has had a massive impact on public service media audience share.

In the UK, the COVID-19 pandemic has seen people return to public service media as a source of trusted information, and with more time at home they have also consumed more public service content.[footnote]BBC. (2021). Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote]

But lockdowns also supercharged the uptake of streaming. By September 2020, 60% of all UK households subscribed to an on-demand service, up from 49% a year earlier. Just under half (47%) of all adults who go online now consider online services to be their main way of watching TV and films, rising to around two-thirds (64%) among 18–24 year olds.[footnote]Ofcom. (2021). Small Screen: Big Debate – Recommendations to Government on the future of Public Service Media. Available at: https://www.smallscreenbigdebate.co.uk/__data/assets/pdf_file/0023/221954/statement-future-of-public-service-media.pdf[/footnote]

Public service media are particularly concerned about their failure to reach younger audiences.[footnote]Lowe, G.F. and Maijanen, P. (2019). ‘Making sense of the public service mission in media: youth audiences, competition, and strategic management’. Journal of Media Business Studies. doi: 10.1080/16522354.2018.1553279; Schulz, A., Levy, D. and Nielsen, R.K. (2019). ‘Old, Educated, and Politically Diverse: The Audience of Public Service News’, pp. 15–19, 29–30. Reuters Institute for the Study of Journalism. Available at: https://reutersinstitute.politics.ox.ac.uk/our-research/old-educated-and-politically-diverse-audience-public-service-news[/footnote] Although this group still encounters public service media content, they tend to do so on external services: younger viewers (16–34 year olds) are more likely to watch BBC content on subscription video-on-demand (SVoD) services rather than through BBC iPlayer (4.7 minutes per day on SVoD vs. 2.5 minutes per day on iPlayer).[footnote]Ofcom. (2021). Small Screen: Big Debate – Recommendations to Government on the future of Public Service Media. Available at: https://www.smallscreenbigdebate.co.uk/__data/assets/pdf_file/0023/221954/statement-future-of-public-service-media.pdf[/footnote] They are not necessarily aware of the source of the content and do not create an emotional connection with the public service media as a trusted brand. Meanwhile, platforms gain valuable audience insight data through this consumption which they do not pass onto the public service media organisations.[footnote]House of Commons Digital, Culture, Media and Sport Committee. (2021). The future of public service broadcasting, HC 156. Available at: https://publications.parliament.uk/pa/cm5801/cmselect/cmcumeds/156/156.pdf[/footnote]

Regulation

Legislation has not kept pace with the rate of technological change. Public service media are trying to grapple with the dynamics of the competitive digital landscape on stagnant or declining budgets, while continuing to meet their obligations to provide linear TV and radio broadcasting to a still substantial legacy audience.

The UK broadcasting regulator Ofcom published recommendations in 2021, repeating its previous demands for an urgent update to the public service media system to make it sustainable for the future. These include modernising the public service objectives, changing licences to apply across broadcast and online services and allowing greater flexibility in commissioning across platforms.[footnote]Ofcom. (2021). Small Screen: Big Debate – Recommendations to Government on the future of Public Service Media. Available at: https://www.smallscreenbigdebate.co.uk/__data/assets/pdf_file/0023/221954/statement-future-of-public-service-media.pdf[/footnote]

The Digital, Culture, Media and Sport Select Committee of the House of Commons has also demanded regulatory change. It warned that ‘hurdles such as the Public Interest Test inhibit the ability of [public service broadcasters] to be agile and innovate at speed in order to compete with other online services’ and that the core principle of universality would be threatened unless public service media were better able to attract younger audiences.[footnote]House of Commons Digital, Culture, Media and Sport Committee. (2021). The future of public service broadcasting, HC 156. Available at: https://publications.parliament.uk/pa/cm5801/cmselect/cmcumeds/156/156.pdf[/footnote]

Although there has been a great deal of activity around other elements of technology regulation, particularly the Online Safety Bill in the UK and the Digital Services Act in the European Union, the regulation of public service media has not been treated with the same urgency. There is so far no Government white paper for a promised Media Bill that would address this in the UK and the European Commission’s proposals for a European Media Freedom Act are in the early stages of consultation.[footnote]European Commission. (2022). ‘European Media Freedom Act: Commission launches public consultation’. Available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_22_85[/footnote]

Political context

Public service media have always been a political battleground and have often had fractious relationships with the government of the day. But the rise of populist political movements and governments has created new fault lines and made public service media a battlefield in the culture wars. The Polish and Hungarian Governments have moved to undermine the independence of public service media, while the far-right AfD party in eastern Germany refused to approve funding for public broadcasting.[footnote]The Economist. (2021). ‘Populists are threatening Europe’s independent public broadcasters’. Available at: https://www.economist.com/europe/2021/04/08/populists-are-threatening-europes-independent-public-broadcasters[/footnote] In the UK, the Government has frozen the licence fee for two years and has said future funding arrangements are ‘up for discussion’. It has also been accused of trying to appoint an ideological ally to lead the independent media regulator Ofcom. Elsewhere in Europe, journalists from public service media have been attacked by anti-immigrant and COVID-denial protesters.[footnote]The Economist. (2021).[/footnote]

At the same time, public service media are criticised as unrepresentative of the publics they are supposed to serve. In the UK, both the BBC and Channel 4 have attempted to address this by moving parts of their workforce out of London.[footnote]The Sutton Trust. (2019). Elitist Britain, pp. 40–42. Available at: https://www.suttontrust.com/our-research/elitist-britain-2019/; Friedman, S. and Laurison, D. (2019). ‘The class pay gap: why it pays to be privileged’. The Guardian. Available at: https://www.theguardian.com/society/2019/feb/07/the-class-pay-gap-why-it-pays-to-be-privileged[/footnote] As social media has removed traditional gatekeepers to the public sphere, there is less acceptance of and deference towards the judgement of media decision-makers. In a fragmented public sphere, it becomes harder for public service media to ‘hold the ring’ – on issues like Brexit, COVID-19, race and transgender rights, public service media find themselves distrusted by both sides of the argument.

Although the provision of information and educational resources through the COVID-19 pandemic has given public service media a boost, both in audiences and in levels of trust, they can no longer take their societal value or even their continued existence for granted.[footnote]BBC. (2021). Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote] Since the arrival of the internet, their monopoly on disseminating real-time information to a wide public has been broken and so their role in both the media and democratic landscape is up for grabs.[footnote]Interview with Jannick Kirk Sørensen, Associate Professor in Digital Media, Aalborg University (2021).[/footnote] For some, this means public service media is redundant.[footnote]Booth, P. (2020). New Vision: Transforming the BBC into a subscriber-owned mutual. Institute of Economic Affairs. Available at: https://iea.org.uk/publications/new-vision[/footnote] For others, its function should now be to uphold national culture and distinctiveness in the face of the global hegemony of US-owned platforms.[footnote]Department for Digital, Culture, Media & Sport and John Whittingdale OBE MP. (2021). John Whittingdale’s speech to the RTS Cambridge Convention 2021. UK Government. Available at: https://www.gov.uk/government/speeches/john-whittingdales-speech-to-the-rts-cambridge-convention-2021[/footnote]

The Institute for Innovation and Public Purpose has proposed reimagining the BBC as a ‘market shaper’ rather than a market fixer, based on a concept of dynamic public value,[footnote]Mazzucato, M., Conway, R., Mazzoli, E., Knoll E. and Albala, S. (2020). Creating and measuring dynamic public value at the BBC, p.22. UCL Institute for Innovation and Public Purpose. Available at: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/final-bbc-report-6_jan.pdf[/footnote] while the Media Reform Coalition calls for the creation of a Media Commons of independent, democratic and accountable media organisations, including a People’s BBC and Channel 4.[footnote]Grayson, D. (2021). Manifesto for a People’s Media. Media Reform Coalition. Available at: https://drive.google.com/file/u/1/d/1_6GeXiDR3DGh1sYjFI_hbgV9HfLWzhPi/view?usp=embed_facebook[/footnote] The wide range of ideas in play demonstrates how open the possible futures of public service media could be.

Introducing recommendation systems

The main steps in the development of a recommendation: user engagement with the platform, data gathering, algorithmic analysis and recommendation generation.

Day-to-day, we might turn to friends or family for their recommendations when it comes to decisions large and small. From dining out and entertainment, to big purchases. We might also look at expert reviews. But in the last decade, there has been a critical change in where recommendations come from and how they’re used. Recommendations have now become a pervasive feature of the digital products we use.

Recommendation systems are a type of software that filter information based on contextual data and according to criteria set by its designers. In this section, we briefly outline how recommendation systems operate and how they are used in practice by European public service media. At least a quarter of European public service media have begun deploying recommendation systems. They are mainly used on video platforms but they are only applied on small sections of services – the vast majority of public service content continues to be manually curated by editors.

In media organisations, producers, editors and journalists have always made implicit and explicit decisions about what to give prominence to, from what stories to tell and what programmes to commission, to – just as importantly – how those stories are presented. Deciding what makes the front page, what gets prime time, what makes top billing on the evening news – these are all acts of recommendation. For some, the entire institution is a system for recommending content to their audiences.

Public service media organisations are starting to automate these decisions by using recommendation systems.

Recommendation systems are context-driven information filtering systems. They don’t use explicit search queries from the user (unlike search engines) and instead rank content based only on contextual information.[footnote]Tennenholtz, M. and Kurland, O. (2019). ‘Rethinking Search Engines and Recommendation Systems: A Game Theoretic Perspective’. Communications of the ACM, December 2019, 62(12), pp. 66–75. Available at: https://cacm.acm.org/magazines/2019/12/241056-rethinking-search-engines-and-recommendation-systems/fulltext; Jannach, D. and Adomavicius, G. (2016), ‘Recommendations with a Purpose’. RecSys ’16: Proceedings of the 10th ACM Conference on Recommender Systems, pp7–10. Available at: https://doi.org/10.1145/2959100.2959186; Jannach, D., Zanker, M., Felfernig, and Friedrich, G. (2010). Recommender Systems: An Introduction. Cambridge University Press. doi: 10.1017/CBO9780511763113; Ricci, F., Rokach, L. and Shapira, B. (2015). Recommender Systems Handbook. Springer New York: New York. doi: 10.1007/978-1-4899-7637-6[/footnote]

This can include:

  • the item being viewed, e.g. the current webpage, the article being read, the video that just finished playing etc.
  • the item being filtered and recommended, e.g. the length of the content, when the content was published, characteristics of the content, e.g. drama, sport, news – often described as metadata about the content
  • the users, e.g. their location or language preferences, their past interactions with the recommendation system etc.
  • the wider environment, e.g. the time of day.

Examples of well-known products utilising recommendation systems include:

  • Netflix’s homepage
  • Spotify’s auto-generated playlists and auto-play features
  • Facebook’s ‘People You May Know’ and ‘News Feed’
  • YouTube’s video recommendations
  • TikTok’s ‘For You’ page
  • Amazon’s ‘Recommended For You’, ‘Frequently Bought Together’, ‘Items Recently Viewed’, ‘Customers Who Bought This Item Also Bought’, ‘Best-Selling’ etc.[footnote]Singh, S. (2020). Why Am I Seeing This? – Case study: Amazon. New America. Available at: https://www.newamerica.org/oti/reports/why-am-i-seeing-this[/footnote]
  • Tinder’s swiping page[footnote]Liu, S. (2017). ‘Personalized Recommendations at Tinder’ [presentation]. Available at: https://www.slideshare.net/SessionsEvents/dr-steve-liu-chief-scientist-tinder-at-mlconf-sf-2017[/footnote]
  • LinkedIn’s ‘Recommend for you’ jobs page.
  • Deliveroo or UberEats’ ‘recommended’ sort for restaurants.

Recommendation systems and search engines

It is worth acknowledging the difference between recommendation systems and search engines, which can be thought of as query-driven information filtering systems. They filter, rank and display webpages, images and other items primarily in response to a query from a user (such as Google searching for ‘restaurants near me’). This is then often combined with the contextual information mentioned above. Google Search is the archetypal search engine in most Western countries but other widely used search engines include Yandex, Baidu and Yahoo. Many public service media organisations offer a query-driven search feature on their services that enables users to search for news stories or entertainment content.

In this report, we have chosen to focus on recommendation systems rather than search engines as the context-driven rather than query-driven approach of recommendation systems is much more analogous to traditional human editorial judgment and content curation.

Broadly speaking, recommendation systems take a series of inputs, filter and select which ones are most important, and produce an output (the recommendation). The inputs and outputs of recommendation systems are subject to content moderation (in which the pool of content is pre-screened and filtered) and curation (in which content is selected, organised and presented).

This starts by deciding what to input into the recommendation system. The pool of content to draw from is often dictated by the nature of the platform itself, such as activity from your friends, groups, events, etc. alongside adverts, as in the case of Facebook. In the case of public service media, the pool of content is often their back catalogue of audio, video or news content.

This content will have been moderated in some way before it reaches the recommendation system, either manually by human moderators or editors, or automatically through software tools. On Facebook, this means attempts to remove inappropriate user content, such as misinformation or hate speech, from the platform entirely, according to moderation guidelines. For a public service media organisation, this will happen in the commissioning and editing of articles, radio programmes and TV shows by producers and editorial teams.

The pool of content will then be further curated as it moves through the recommendation system, as certain pieces of content might be deemed appropriate to publish but not to recommend in a particular context, e.g. Facebook might want to avoiding recommending you posts in languages you don’t speak. In the case of public service media, this generally takes the form of business rules, which are editorial guidelines implemented directly into the recommendation system.

Some business rules apply equally across all users and further constrain the set of content that the system recommends content from, such as only selecting content from the past few weeks. Other rules apply after individual user recommendations have been generated and filter those recommendations based on specific information about the user’s context, such as not recommending content the user has already consumed.

For example, below are business rules that were implemented in BBC Sounds’ Xantus recommendation system, as of summer 2021:[footnote]Note that the business rules are subject to change, and so the rules given here are intended to be an indicative example only, representing a snapshot of practice at one point in time. See: Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote]

Non-personalised business rules Personalised business rules
Recency Already seen items
Availability Local radio (if not consumed previously)
Excluded ‘master brands’, e.g., particular radio channels[footnote]Smethurst, M. (2014). Designing a URL structure for BBC programmes. Available at: https://smethur.st/posts/176135860[/footnote] Specific language (if not consumed previously)
Excluded genres Episode picking from a series
Diversification (1 episode per brand/series)

How different types of recommendation systems work

Not all recommendation systems are the same. One major difference relates to what categories of items a system is filtering and curating for. This can include, but isn’t limited to:

  • content, e.g. news articles, comments, user posts, podcasts, songs, short-form video, long-form video, movies, images etc. or any combination of these content types
  • people, e.g. dating app profiles, Facebook profiles, Twitter accounts etc.
  • metadata, e.g. the time, data, location, category etc. of a piece of content or the age, gender, location etc. of a person.

In this report, we mainly focus on:

  1. Media content recommendation systems: these systems rank and display pieces of media content, e.g. news articles, podcasts, short-form videos, radio shows, television shows, movies etc. to users of news websites, video-on-demand and streaming services, music and podcast apps etc.
  2. Media content metadata recommendation systems: these rank and display suggestions for information to classify pieces of media content, e.g. genre, people or places which appear in the piece of media, or other tags, to journalists, editors or other members of staff at media organisations.

Another important distinction between applications of recommendation systems is the role of the provider in choosing which set of items the recommendation system is applied to. There are three categories of use for recommendation systems:

  1. Open recommending: The recommendation system operates primarily on items that are generated by users of the platform, or otherwise indiscriminately automatically aggregated from other sources, without the platform curating or individually approving the items. Examples include YouTube, TikTok’s ‘For You’ page, Facebook’s ‘News Feed’ and many dating apps.
  2. Curated recommending: The recommendation system operates on items which are curated, approved or otherwise editorialised by the platform operating the recommendation system. These systems still primarily rely on items generated by external sources, sometimes blended with items produced by the platform. Often these external items will come in the form of licensed or syndicated content such as music, films, TV shows, etc. rather than user-generated items. Examples include Netflix, Spotify and Disney+.
  3. Closed recommending: The recommendation system operates exclusively on items generated or commissioned by the platform operating the recommendation system. Examples include most recommendation systems used on the website of news organisations.

Lastly, there are different types of technical approaches that a recommendation system may use to sort and filter content. The approaches detailed below are not mutually exclusive and can be combined in recommendation systems in particular contexts:

Type of filtering Example What does it do?
Collaborative filtering ‘Customers Who Bought This Item Also Bought’ on Amazon The system recommends items to users based on the past interactions and preferences of other users who are classified as having similar past interactions and preferences. These patterns of behaviour from other users are used to predict how the user seeing the recommendation would rate new items. Those item rating predictions are used to generate recommendations of items that have a high level of similarity with content previously popular with similar users.
Matrix factorisation Netflix’s ‘Watch Next’ feature A subclass of collaborative filtering, this method codifies users and items into a small set of categories based on all the user ratings in a system. When Netflix recommends movies, a user may be codified by how much they like action, comedy, etc. and a movie might be codified by how much it fits into these genres. This codified representation can then be used to guess how much a user will like a movie they haven’t seen before, based on whether these codified summaries ‘match’.

 

Content-based filtering Netflix’s ‘Action Movies’ list These methods recommend items based on the codified properties of the item stored in the database. If the profile of items a user likes mostly consists of action films, the system will recommend other items that are tagged as action films. The system does not draw on user data or behaviour to make recommendations.

Of these typologies, the public service media that we surveyed only use closed recommendation systems as they are applying recommendations to content they have commissioned or produced. However, we found examples of public service media using all types of filtering approaches: collaborative filtering, content-based filtering and hybrid recommendation systems.

How do European public service media organisations use recommendation systems?

The use of recommendation systems is common but not ubiquitous among public service media organisations in Europe. As of 2021, at least a quarter of European Broadcasting Union (EBU) member organisations were using recommendation systems on at least one of their content delivery platforms.[footnote]See Annex 1 for more details.[/footnote] Video-on-demand platforms are the most common use case for recommendation systems, followed by audio-on-demand and news content. As well as these public-facing recommendation systems, some public service media also use recommendation systems for internal-only purposes, such as systems that assist journalists and producers with archival research.[footnote]Interview with Ben Fields, Lead Data Scientist, Digital Publishing, BBC (2021).[/footnote]

Figure 1: Recommendation system use by European public service media by platform (EBU, 2020)

Platform on which public service media offers personalised recommendations Number of European Broadcasting Union member organisations Examples
Video-on-demand At least 18 BBC iPlayer
Audio-on-demand At least 10 BBC Sounds, ARD Audiothek
News content At least 7 VRT NWS app

Among the EBU member organisations which reported using recommendation systems in a 2020 survey, recommendations were displayed:

  • in a dedicated section on the on-demand homepage (by at least 16 organisations)
  • in the player as ‘play next’ suggestions (by at least 10 organisations)
  • as ‘top picks’ on the on-demand homepage (by at least 9 organisations).

Even among organisations that have adopted recommendation systems, their use remains very limited. NPO in the Netherlands was the only organisation we encountered that aims to have a fully algorithmically driven homepage on its main platform. In most cases, the vast majority of content remains under human editorial control, with only small sub-sections of the interface offering recommended content.

As editorial independence is a key public service value, as well as a differentiator of public service media from its private-sector competitors, it is likely most public service media will retain a significant element of curation. The requirement for universality also creates a strong incentive to ensure that there is a substantial foundation of shared information to which everyone in society should be exposed.

Recommendation systems in the BBC

The BBC is significantly larger in staff, output and audience than other European public service media organisations. It has a substantial research and development department and has been exploring the use of recommendation systems across a range of initiatives since 2008.[footnote]See Annex 2 for more details.[/footnote]

In 2017, the BBC Datalab was established with the aim of helping audiences discover relevant content by bringing together data from across the BBC, augmented machine learning and editorial expertise.[footnote]BBC. (2019). ‘Join the DataLab team at the BBC!’. BBC Careers. Available at: https://careerssearch.bbc.co.uk/jobs/job/Join-the-DataLab-team-at-the-BBC/40012; BBC Datalab. ‘Machine learning at the BBC’. Available at: https://datalab.rocks/[/footnote] It was envisioned as a central capability across the whole of the BBC (TV, radio, news and web) which would build a data platform for other BBC teams that would create consistent and relevant experiences for audiences across different products. In practice, this has meant collaborating with different product teams to develop recommendation systems.

The BBC now uses several recommendation systems, at different degrees of maturity, across different forms of media, including:

  • written content, e.g. the BBC News app and some international news services, such as the Spanish-language BBC Mundo, recommending additional new stories[footnote]McGovern, A. (2019). ‘Understanding public service curation: What do “good” recommendations look like?’. BBC. Available at: https://www.bbc.co.uk/blogs/internet/entries/887fd87e-1da7-45f3-9dc7-ce5956b790d2[/footnote]
  • audio-on-demand, e.g. BBC Sounds recommending radio programmes and music mixes a user might like
  • short-form video, e.g. BBC Sport and BBC+ (now discontinued) recommending videos the user might like
  • long-form video, e.g. BBC iPlayer recommending TV shows or films the user might like.
Approaches to the development of recommendation systems

Public service media organisations have the choice to buy an external ‘off the shelf’ recommendation system or build it themselves.

The BBC initially used third-party providers of recommendation systems but, as part of a wider review of online services, began to test the pros and cons of bringing this function in-house. Building on years of their own R&D work, the BBC found they were able to build a recommendation system that not only matched but could outperform the bought-in systems. Once it was clear that personalisation would be central to the future strategy of the BBC, they decided to bring all systems in-house with the aim of being ‘in control of their destiny’.[footnote]Interview with Andrew McParland, Principal Engineer, BBC R&D (2021).[/footnote] The perceived benefits include building up technical capability and understanding within the organisation, better control and integration of editorial teams, better alignment with public service values and greater opportunity to experiment in the future.[footnote]Commercial (i.e. non public service) BBC services however still use external recommendation providers. See: Taboola. (2021). ‘BBC Global News Chooses Taboola as its Exclusive Content Recommendations Provider’. Available at: https://www.taboola.com/press-release/bbc-global-news-chooses-taboola-as-its-exclusive-content-recommendations-provider[/footnote]

The BBC has far greater budgets and expertise than most other public service media organisations to experiment with and develop recommendation systems. But many other organisations have also chosen to build their own products. Dutch broadcaster NPO has a small team of only four or five data scientists, focused on building ‘smart but simple’ recommendations in-house, having found third-party products did not cater to their needs. It is also important to them that they should be able to safeguard their audience data and be able to offer transparency to public stakeholders about the way their algorithms work, neither of which they felt confident about when using commercial providers.[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (NPO) (2021).[/footnote]

Several public service media organisations have joined forces through the EBU to develop PEACH[footnote]European Broadcasting Union. PEACH. Available at: https://peach.ebu.io/[/footnote] – a personalisation system that can be adopted by individual organisations and adapted to their needs. The aim is to share technical expertise and capacity across the public service media ecosystem, enabling those without their own in-house development teams to still adopt recommendation systems and other data-driven approaches. Although some public service media feel this is still not sufficiently tailored to their work,[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (NPO) (2021).[/footnote] others find it not only caters to their needs but that it embodies their public service mission through its collaborative approach.[footnote]Interview with Matthias Thar, Bayerische Rundfunk (2021).[/footnote]

Although we are aware that some public service media continue to use third-party systems, we did not manage to secure research interviews with any organisations that currently do so.

How are public service media recommendation systems currently governed and overseen?

The governance of recommendation systems in public service media is created through a combination of data protection legislation, media regulation and internal guidelines. In this section, we outline the present and future regulatory environment in the UK and EU, and how internal guidelines influence development in the BBC and other public service media. Some public service media have reinterpreted their existing guidelines for operationalising public service values to make them relevant to the use of recommendation systems.

The use of recommendation systems in public service media is not governed by any single piece of legislation or governance. Oversight is generated through a combination of the statutory governance of public service media, general data protection legislation and internal frameworks and mechanisms. This complex and fragmented picture makes it difficult to assess the effectiveness of current governance arrangements.

External regulation

The structures that have been established to regulate public service media are based around analogue broadcast technologies. Many are ill-equipped to provide oversight of public service media’s digital platforms in general, let alone to specifically oversee the use of recommendation systems.

For instance, although Ofcom regulates all UK broadcasters, including the particular duties of public service media, its remit only covers the BBC’s online platforms and not, for example, the ITV Hub or All 4. Its approach to the oversight of BBC iPlayer is to set broad obligations rather than specific requirements and it does not inspect the use of recommendation systems. Both the incentives and sanctions available to Ofcom are based around access to the broadcasting spectrum and so are not relevant to the digital dissemination of content. In practice this means that the use of recommendation systems within public service media are not subject to scrutiny by the communications regulator.

However, like all other organisations that process data, public service media within the European Union are required to comply with the General Data Protection Regulation (GDPR). The UK adopted this legislation before leaving the EU, though  a draft Data Protection and Digital Information Bill (‘Data Reform Bill’) introduced in July 2022 includes a number of important changes, including removing the prohibition on automated decision-making, and maintaining restrictions for automated decision-making only if special categories of data are involved. The draft bill also introduces a new ground to allow the processing of special categories of data for the purpose of monitoring and correcting algorithmic bias in AI systems. A separate set of provisions centred around fairness and explainability for AI systems is also expected as part of the Government’s upcoming white paper on AI governance.

The UK GDPR shapes the development and implementation of recommendation systems because it requires:

  • Consent: the UK GDPR requires that the use of personal data be made with freely-given, genuine and unambiguous consent from an individual. There are other lawful bases for processing personal data that do not require consent, including legal obligations, processing in a vital interest and processing for a ‘legitimate interest’ (a justification that public authorities cannot rely on if they are processing for their tasks as a public authority).
  • Data minimisation: under Article 5(1), the ‘data minimisation’ principle of the UK GDPR states that personal data should be ‘adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed’. Under Article 17 of the UK GDPR, the ‘right to erasure’ grants individuals the right to have personal data erased that is not necessary for the purposes of processing.
  • Automated decision-making, the right to be informed and explainability:  under the UK GDPR, data subjects have a right not to be subject to solely automated decisions that do not involve human intervention, such as profiling.[footnote]The Article 29 Working Group defines profiling in this instance as ‘automated processing of data to analyze or to make predictions about individuals’.[/footnote] Where such automated decision-making occurs, meaningful information about the logic involved, the significance and the envisaged consequences of such processing need to be provided to the data subject (Article 15 (1) h). Separate guidance from the Information Commissioner’s Office also touches on making AI systems explainable for users.[footnote]Information Commissioner’s Office and The Alan Turing Institute. (2021). Explaining decisions made with AI. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/[/footnote]

Our interviews with practitioners indicated that GDPR compliance is foundational to their approach to recommendation systems, and that careful consideration must be paid to how personal data is collected and used. While the forthcoming Data Reform Bill makes several changes to the UK GDPR, most of these effects on the development and implementation of recommendation systems will likely continue under the current bill’s language.

GDPR regulates the use of data that a recommendation system draws on, but there is not currently any legislation that specifically regulates the ways in which recommendation systems are designed to operate on that data, although there are a number of proposals in train at national and European levels.

In July 2022, the European Parliament adopted the Digital Services Act, which includes (in Article 24a) an obligation for all online platforms to explain, in their terms and conditions, the main parameters of their recommendation system and the options for users to modify or influence those parameters. There are additional requirements imposed on very large online platforms (VLOPs) to provide at least one option for each of their recommendation systems which is not based on profiling (Article 29). There are also further obligations for VLOPs in Article 26 to perform systemic risk assessments, including taking into account the design of the recommendation systems (Article 26 (2) a) and to implement steps to mitigate risk by testing and adapting their recommendation systems (Article 27 (1) ca).

In order to ensure compliance with the transparency provisions in the regulation, the Digital Services Act includes a provision that enables independent auditors and vetted researchers to have access to the data that led to the company’s risk assessment conclusions and mitigation decisions (Article 31). This provision ensures oversight over the self-assessment (and over the independent audit) that companies are required to carry out, as well as scrutiny over the choices large companies make around their recommendation systems.

The draft AI Act proposed by the European Commission in 2021 also includes recommendation systems in its remit. The proposed rules require harm mitigations such as risk registers, data governance and human oversight but only make obligations mandatory for AI systems used in ‘high-risk’ applications. Public service media are not mentioned within this category, although due to their democratic significance it’s possible they might come into consideration. Outside the high-risk categories, voluntary adoption is encouraged. These proposals are still at an early stage of development and negotiation and are unlikely to be adopted until at least 2023.

In another move, in January 2022 the European Commission launched a public consultation on a proposed European Media Freedom Act that aims to further increase the ‘transparency, independence and accountability of actions affecting media markets, freedom and pluralism within the EU’. The initiative is a response to populist governments, particularly in Poland and Hungary attempting to control media outlets, as well as an attempt to bring media regulation up to speed with digital technologies. The proposals aim to secure ‘conditions for [media markets’] healthy functioning (e.g. exposure of the public to a plurality of views, media innovation in the EU market)’. Though there is little detail so far, this framing could allow for the regulation of recommendation systems within media organisations.

In the UK, public service media are excluded from the draft Online Safety Bill which imposes responsibilities on platforms to safeguard users from harm. Ofcom, as well as the Digital Culture Media and Sport Select Committee, have called for urgent reform to regulation that would update the governance of public service media for the digital age. As of this report, there has been no sign of progress on a proposed Media Bill that would provide this guidance.

Internal oversight

Public service media have well-established practices for operationalising their mission and values through the editorial guidelines described earlier. But the introduction of recommendation systems has led many of them to reappraise these and, in some cases, introduce additional frameworks to translate these values for the new context.

The BBC has brought together teams from across the organisation to discuss and develop a set of machine learning engine principles, which they believe will uphold the Corporation’s mission and values:[footnote]Macgregor, M. (2021). Responsible AI at the BBC: Our Machine Learning Engine Principles. BBC Research and Development. Available at: https://www.bbc.co.uk/rd/publications/responsible-ai-at-the-bbc-our-machine-learning-engine-principles[/footnote]

  • Reflecting the BBC’s values of trust, diversity, quality, value for money and creativity.
  • Using machine learning to improve our audience’s experience of the BBC
  • Carrying out regular review, ensuring data is handled securely and that algorithms serve our audiences equally and fairly
  • Incorporating the BBC’s editorial values and seeking to broaden, rather than narrow horizons.
  • Continued innovation and human-in-the-loop oversight.

These have then been adopted into a checklist for teams to use in practice:

‘The MLEP [Machine Learning Engine Principles] Checklist sections are designed to correspond to each stage of developing a ML project, and contain prompts which are specific and actionable. Not every question in the checklist will be relevant to every project, and teams can answer in as much detail as they think appropriate. We ask teams to agree and keep a record of the final checklist; this self-audit approach is intended to empower practitioners, prompting reflection and appropriate action.[footnote]Macgregor, M. (2021).[/footnote]

Reflecting on putting this into practice, BBC staff members observed that ‘the MLEP approach is having real impact in bringing on board stakeholders from across the organisation, helping teams anticipate and tackle issues around transparency, diversity, and privacy in ML systems early in the development cycle’.[footnote]Boididou, C., Sheng, D., Moss, M. and Piscopo, A. (2021), ‘Building Public Service Recommenders: Logbook of a Journey’. RecSys ’21: Proceedings of the 15th ACM Conference on Recommender Systems, pp. 538–540. Available at: https://doi.org/10.1145/3460231.3474614[/footnote]

Other public service media organisations have developed similar frameworks. Bayerische Rundfunk, the public broadcaster for Bavaria in Germany, found that their existing values needed to be translated into practical guidelines for working with algorithmic systems and developed ten core principles.[footnote]Bedford-Strohm, J., Köppen, U. and Schneider, C. (2020). ‘Our AI Ethics Guidelines’. Bayerisch Rundfunk. https://www.br.de/extra/ai-automation-lab-english/ai-ethics100.html[/footnote] These align in many ways to the BBC principles but have additional elements, including a commitment to transparency and discourse, ‘strengthening open debate on the future role of public service media in a data society’, support for the regional innovation economy, engagement in collaboration and building diverse and skilled teams.[footnote]Bedford-Strohm, J., Köppen, U. and Schneider, C. (2020).[/footnote]

In the Netherlands, public service broadcaster NPO along with commercial media groups and the Netherlands Institute for Sound and Vision drew up a declaration of intent.[footnote]Media perspectives. (2021). ‘Intentieverklaring voor verantwoord gebruik van KI in de media. [Letter of intent for responsible use of AI in the media]’. Available at: https://mediaperspectives.nl/intentieverklaring/[/footnote] Drawing on the European Union high-level expert group principles on ethics in AI, the declaration is a commitment to the responsible use of AI in the media sector. NPO are developing this into a ‘data promise’ that offers transparency to audiences about their practices. 

Other stakeholders

Beyond these formal structures, the use of recommendation systems in public service media is shaped by these organisations’ accountability to, and scrutiny by wider society.

All the public service media organisations we interviewed welcomed this scrutiny in principle and were committed to openness and transparency.  Most publish regular blogposts about their work, present at academic conferences and invite feedback about their work. These, however, reach a small and specialist audience.

There are limited opportunities for the broader public to understand and influence the use of recommendation systems. In practice, there is little accessible information about recommendation systems on most public service media platforms and even where it exists, teams admit that it is rarely read.

The Voice of the Listener and Viewer, a civil society group that represents audience interests in the UK, has raised concerns with the BBC about a lack of transparency in its approach to personalisation but has been dissatisfied with the response. The Media Reform Coalition has proposed that recommendations systems used in UK public service media should be co-designed with citizens’ media assemblies and that the underlying algorithms should be made public.[footnote]Grayson, D. (2021). Manifesto for a People’s Media. Media Reform Coalition. Available at: https://drive.google.com/file/u/1/d/1_6GeXiDR3DGh1sYjFI_hbgV9HfLWzhPi/view?usp=embed_facebook[/footnote]

Despite this low level of public engagement, public service media organisations were sensitive to external perceptions of their use of recommendation systems. Teams expected that, as public service media, they would be held to a higher standard than their commercial competitors. At the BBC in particular, staff frequently mentioned concerns about how their work might be seen by the press, the majority of which tends to take an anti-BBC stance. In practice, we have found little coverage of the BBC’s use of algorithms outside of specialist publications such as Wired.

Public service media have a dual role, both as innovators in the use of recommendation services and as scrutineers of the impacts of new technologies. The BBC believes it has a ‘critical contribution, as part of a mixed AI ecosystem, to the development of beneficial AI both technically, through the development of AI services, and editorially, by encouraging informed and balanced debate’.[footnote]BBC. (2017). Written evidence to the House of Lords Select Committee on Artificial Intelligence. Available at: https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/70493.html[/footnote] At Bayerische Rundfunk, this combined responsibility has been operationalised by integrating the product team and data investigations team into an AI and Automation Lab. However, we are not aware of any instances where public service media have reported on their own products and subjected them to critical scrutiny. 

Why this matters

The history of public service media, their current challenges and the systems for their governance are the framing context in which these organisations are developing and deploying recommendation systems. As with any technology, organisations must consider how the tool can be used in ways that are consistent with their values and culture and whether it can address the problems they face.

In his inaugural speech, BBC Director-General Tim Davie identified increased personalisation as a pillar of addressing the future role of public service media in a digital world:[footnote]BBC Media Centre. (2020). Tim Davie’s introductory speech as BBC Director-General. Available at: https://www.bbc.co.uk/mediacentre/speeches/2020/tim-davie-intro-speech[/footnote]

‘We will need to be cutting edge in our use of technology to join up the BBC, improving search, recommendations and access. And we must use the data we hold to create a closer relationship with those we serve. All this will drive love for the BBC as a whole and help make us an indispensable part of everyday life. And create a customer experience that delivers maximum value.’

But recommendation systems also crystallise the current existential dilemmas of public service media. The development of a technology whose aim is optimisation requires an organisation to be explicit about what and who it is optimising for. A data-driven system requires an institution to quantify those objectives and evaluate whether or not the tool is helping them to achieve them.

This can seem relatively straightforward when setting up a recommendation system for e-commerce, for example, where the goal is to sell more units. Other media organisations may also have clear metrics around time spent on a platform, advertising revenues or subscription renewals.

In this instance, the broadly framed public service values that have proven flexible to changing contexts in the past are a hindrance rather than a help. A concept like ‘diversity’ is hard to pin down and feed into a system.[footnote]Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote] Organisations that are supposed to serve the public as both citizens and consumers must decide which role gets more weight.

Recommendation systems might offer an apparently obvious solution to the problem of falling public service media audience share – if you are able to better match the vast amount of content in public service media catalogues to listeners and viewers, you should be able to hold and grow your audience. But is universality achieved if you reach more people but they don’t share a common experience of a service? And how do you measure diversity and ensure personalised recommendations still offer a balance of content?

‘The introduction of algorithmic systems will force [public service media] to express its values and goals as measurable key performance indicators, which could be useful and perhaps even necessary. But this could also create existential threats to the institution by undermining the core principles and values that are essential for legitimacy.’[footnote]Sørensen, J.K. and Hutchinson, J. (2018). ‘Algorithms and Public Service Media’. Public Service Media in the Networked Society: RIPE@2017, pp.91–106. Available at: http://www.nordicom.gu.se/sites/default/files/publikationer-hela-pdf/public_service_media_in_the_networked_society_ripe_2017.pdf[/footnote]

Recommendation systems force product teams within public service media organisations to settle on an interpretation of public service values, at a time when the regulatory, social and political context makes them particularly unclear.

It also means that this interpretation will be both instantiated and then systematised in a way that has never previously occurred. As we saw with the example of the impartiality guidelines of the BBC, individuals and teams have historically made decisions under a broad governance framework and founded on editorial judgement. Inconsistencies in those judgements could be ironed out through the multiplicity of individual decisions, the diversity of contexts and the number of different decision-makers. Questions of balance could be considered over a wider period of time and breadth of output. Evolving societal norms could be adopted as audience expectations change.

However, building a decision-making system sets a standardised response to a set of questions and repeats that every time. In this way it nails an organisation’s colours to one particular mast and then replicates that approach repeatedly.

Stated goals and potential risks of using recommendation systems in public service media

Organisations deploy recommendation systems to address certain objectives. However, these systems also bring potential risks. In this chapter, we look at what public service media aim to achieve through deploying recommendation systems and the potential drawbacks.

Stated goals of recommendation systems

In this section, we look at the stated objectives for the use of recommendation systems and the degree to which public service media reference those objectives and motivations when justifying their own use of recommendation systems.

Recommendation systems bring several benefits to different actors, including users who access the recommendations (in the case of public service media, audiences), as well as the organisations and businesses that maintain the platforms on which recommendation systems operate. Some of the effects of recommendation systems are also of broader societal interest, especially where the recommendations interact with large numbers of users, with the potential to influence their behaviour. Because they serve the interests of multiple stakeholders,[footnote]Milano, S., Taddeo, M. and Floridi, L. (2021). ‘Ethical aspects of multi-stakeholder recommendation systems’. The Information Society, 37(1). Available at: https://doi.org/10.1080/01972243.2020.1832636; Abdollahpouri, H., Adomavicius, G., Burke, R., et al. (2020). ‘Multistakeholder recommendation: Survey and research directions’. User Modeling and User-Adapted Interaction, pp.127–158. Available at: https://doi.org/10.1007/s11257-019-09256-1[/footnote] recommendation systems support data-based value creation in multiple ways, which can pull in different directions.[footnote]Tempini, N. (2017). ‘Till data do us part: Understanding data-based value creation in data-intensive infrastructures’. Information and Organization, 27(4). Available at: http://dx.doi.org/10.1016/j.infoandorg.2017.08.001 [/footnote]

Four key areas of value creation are:

  1. Reducing information overload for the receivers of recommendations: It would be overwhelming for individuals to trawl the entire catalogue of Netflix or Spotify, for example. Their recommendation systems reduce the amount of content to a manageable number of choices for the audience. This creates value for users.
  2. Improved discoverability of items: E-commerce sites can recommend items they are particularly keen to sell, or direct people to niche products for which there is a specific customer base. This creates value for businesses and other actors that provide the items in the recommender’s catalogue. It can also be a source of societal value, for example where improved discoverability increases the diversity of news items that are accessed by the audience.
  3. Attention capture: Targeted recommendations which cater to users’ preferences encourage people to spend more time on services, generating revenue through subscriptions or advertising. This is a source of economic value for platform providers, who monetise attention via advertising revenue or paid subscriptions. But it can also be a source of societal value, if it means that people pay more attention to content that has public service value, in line with the mandate for universality.
  4. Data gathering to derive business insights and analysis: For example, platforms gain valuable insights into their audience through A/B testing which enables them to plan marketing campaigns or commission content. This is a source of economic value, when it is used to derive business insights. But under appropriate conditions, it could be a source of societal value, for example by enabling socially responsible scientific research (see our recommendations below).

We explored how these objectives map to the motivations articulated by public service media organisations for their use of recommendation systems.

1. Reducing information overload

‘Under conditions of information abundance and attention scarcity, the modern challenges to the realisation of media diversity as a policy goal lie less and less in guaranteeing a diversity of supply and more in the quest to create the conditions under which users can actually find and choose between diverse content.’[footnote]Helberger, N., Karppinen, K. and D’Acunto, L. (2018). ‘Exposure diversity as a design principle for recommender systems’. Information, Communication & Society, 21(2). Available at: https://doi.org/10.1080/1369118X.2016.1271900[/footnote]

We heard from David Graus: ‘So finding different ways to enable users to find content is core there. And in that context, I think recommender systems really serve to be able to surface content that users may not have found otherwise, or may surface content that users may not know they’re interested in.’

We heard from David Graus: ‘So finding different ways to enable users to find content is core there. And in that context, I think recommender systems really serve to be able to surface content that users may not have found otherwise, or may surface content that users may not know they’re interested in.’

2. Improved discoverability

Public service media also deploy recommendation systems with the objective of showcasing much more of their vast libraries of content. BBC Sounds, for example, has more than 200,000 items available, of which only a tiny amount can be surfaced either through broadcast schedules or an editorially curated platform. Recommendation systems can potentially unlock the long tail of rarely viewed content and allow individuals’ specific interests to be met.

They can also, in the view of some organisations, meet the public service obligation of diversity by exposing audiences to a greater variety of content.[footnote]Interview with David Graus, Lead Data Scientist, Randstad Groep Nederland (2021). This point was also captured in separate studies of public service media organisations – see: Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote] Recommendation systems need not simply cater to, or replicate people’s existing interests but can actively push new and surprising content.

This approach is also deployed in commercial settings, notably in Spotify’s ‘Discover’ playlists, as novelty is also required for audience retention. Additionally, some public service media organisations, such as Swedish Radio and NPO, are experimenting with approaches that promote content they consider particularly high in public value.

Traditional broadcasting provides one-to-many communication. Through personalisation, platforms have created a new model of many-to-many communication, creating ‘fragmented user needs’.[footnote]Interview with Uli Köppen, Head of AI + Automation Lab, Co-Lead BR Data, Bayerische Rundfunk (2021).[/footnote] Public service media must now grapple with how they create their own way of engaging in this landscape. The BBC’s ambition for the iPlayer is to make output, ‘accessible to the audience wherever they are, whatever devices they are using, finding them at the right moments with the right content’.[footnote]BBC. (2021). BBC Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote]

Jonas Schlatterbeck, ARD (German public broadcaster), takes a similar view:

‘We can’t actually serve majorities anymore with one content. It’s not like the one Saturday night show that will attract like half of the German population […] but more like tiny mosaic pieces of different content that are always available to pretty much everyone but that are actually more targeted.’[footnote]Interview with Jonas Schlatterbeck, Head of Content ARD Online & Leiter Programmplanung, ARD (2021).[/footnote]

3. Attention capture

The need to maintain audience reach in a fiercely competitive digital landscape was mentioned by almost every public service media organisation we spoke to.

Universality, the obligation to reach every section of society, is central to the public service remit.

And if public service media lose their audience to their digital competitors, they cannot deliver the other societal benefits within their mission. As Koen Muylaert of Belgian VRT said: ‘we want to inspire people, but we also know that you can only inspire people if they intensively use your products, so our goal is to increase the activity on our platform as well. Because we have to fight for market share’.[footnote]Interview with Koen Muylaert, Project Lead, VRT data platform and data science initiative, Vlaamse Radio- en Televisieomroeporganisatie (VRT) (2021).[/footnote]

The assumption among most public service media organisations is that recommendation systems improve engagement, although there is still little conclusive evidence of this in academic literature. The BBC has specific targets for 16-34 year-olds to use the iPlayer and BBC Sounds, and staff consider recommendations as a route to achieving those metrics.[footnote]BBC. (2021). BBC Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote]

From our interview with David Caswell, Executive Product Manager, BBC News Labs:

‘We have seen that finding in our research on several occasions that there’s sort of some transition that audiences and particularly younger audiences have gone through where there’s an expectation of personalization they don’t expect to be doing the same thing again and again and again, and in terms of active searching for things they expect they expect a personalized experience… There isn’t a lot of tolerance, increasingly with younger and digitally native audiences for friction in the experience. And so personalization is a major technique for removing friction from the experience because audience members don’t have to do all the work of discovery and selection and so on, they can have that done for them that this is.’[footnote]Interview with David Caswell, Executive Product Manager, BBC News Labs (2021).[/footnote]

Across the teams we interviewed from European public service media organisations there was widespread consensus that audiences now expect content to be personalised. Netflix and Spotify’s use of recommendation systems was described as a ‘gold standard’ for public service media organisations to aspire to. But few of our interviewees offered evidence to support this view of audience expectations.

‘I see the risk that when we are compared with some of our competitors that are dabbling with a much more sophisticated personalisation, there is a big risk of our services being perceived as not adaptable and not relevant enough.’[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021).[/footnote]

4. Data gathering and behavioural interventions

Recommendation systems collect and analyse a wealth of data in order to serve personalised recommendations to their users. The data collected often pertains to user interactions with the system, including data that is produced as a result of interventions on the part of the system that are intended to influence user behaviour (interventional data).[footnote]Greene, T., Martens, D. and Shmueli, G. (2022) ‘Barriers to academic data science research in the new realm of algorithmic behaviour modification by digital platforms’. Nature Machine Intelligence, 4(4), pp. 323–330. Available at: https://doi.org/10.1038/s42256-022-00475-7[/footnote] For example, user data collected by a recommendation system may include data about how different users responded to A/B tests, so that the system developers can track the effectiveness of different designs or recommendation strategies in stimulating some desired user behaviour. 

Interventional data can thus be used to support targeted behavioural interventions, as well as scientific research into the mechanisms that underpin the effectiveness of recommendations. This marks recommendation systems as a key instrument of what Shoshana Zuboff has called a system of ‘surveillance capitalism’.[footnote]Zuboff, S. (2015). ‘Big other: Surveillance Capitalism and the Prospects of an Information Civilization’. Journal of Information Technology, 30(1). Available at: https://doi.org/10.1057/jit.2015.5[/footnote] In this system, platforms extract economic value from personal data, usually in the form of advertising revenue or subscriptions, at the expense of the individual autonomy afforded to individual users of the technology.

As access to the services provided by the platforms becomes essential to daily life, users increasingly find themselves tracked in all aspects of their online experience, without meaningful options to avoid it. The possibility of surveillance constitutes a grave risk associated with the use of recommendation systems.

Because recommendation systems have been mainly researched and developed in commercial settings, many of the techniques and  types of data collected work within this logic of surveillance.[footnote]van Dijck, J. (2014). ‘Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology’. Surveillance & Society, 12(2). Available at: https://doi.org/10.24908/ss.v12i2.4776; Srnicek, N. (2017). Platform capitalism. Polity.[/footnote] However, it is also possible to envisage uses of recommendation systems that do not obey the same logic.[footnote]Lane, J. (2020). Democratizing Our Data: A Manifesto. MIT Press.[/footnote] Recommendation systems used by public service media are a case in point. Public service media organisations are in a position to decide which data to collect and use in the service of creating public value, scientific value and individual value for their audiences, instead of economic value that would be captured by shareholders.[footnote]Tempini, N. (2017). ‘Till data do us part: Understanding data-based value creation in data-intensive infrastructures’. Information and Organization, 27(4). Available at: http://dx.doi.org/10.1016/j.infoandorg.2017.08.001[/footnote]

Examples of public value that could be created from user data include insights into effective and impartial communication that serves the public interest and fosters community building. Social science research into the effectiveness of behavioural interventions, and basic research into the psychological mechanisms that underpin audience’s trust in recommendations would contribute to the creation of scientific value from behavioural data. From the perspective of the audience, value could be created by fostering user empowerment to learn more about their own interests and develop their tastes, letting users feel more in control and understand the value of the content that they can access.

We found little evidence of public service media deploying recommendation systems with the explicit aim of capturing data on their audiences and content or deriving greater insights. On the contrary, interviewees stressed the importance of data minimisation and privacy. At Bayerische Rundfunk for example, a product owner said that the collection of demographic data on the audience was a red line that they would not cross.[footnote]Interview with Matthias Thar, Bayerische Rundfunk (2021).[/footnote]

However, we did find that most public service media organisations introduced recommendation systems as part of a wider deployment of automated and data-driven approaches. In many cases, these are accompanied by significant organisational restructures to create new ways of working adapted to the technologies, as well as to respond to the budget cuts that almost all public service media are facing.

Public service media organisations are often fragmented, with teams separated by region and subject matter and with different systems for different channels and media that have evolved over time. The use of recommendation systems requires a consistent set of information about each item of content (commonly known as metadata). As a result, some public service media have started to better connect different services so that recommendation systems can draw on them.

For instance, Swedish Radio has overhauled its entire news output to improve its digital service, creating standalone items of content that do not need to be slotted into a particular programme or schedule but can be presented in a variety of contexts. Alongside this, it has introduced a scoring system to rank its content against its own public values, prompting a rearticulation of those values as well as a renewed emphasis on their importance.

Bayerische Rundfunk (BR) is creating a new infrastructure for the consistent use of data as a foundation for the future use of recommendation systems. This is already allowing for news stories to automatically upload data specific to different localities, as well as generating automated text on data-heavy stories such as sports results. This allows BR to cover a broader range of sports and cater to more specialist interests, as well as freeing up editorial teams from mundane tasks.

While there is not a direct objective of behavioural intervention and data capture at present, the introduction of recommendation systems is part of a wider orientation towards data-driven practices across public service media organisations. This has the potential to enable wider data collection and analysis to generate business insights in the future.

Conclusion

We find that public service media organisations articulate similar objectives to the field more broadly, in their motivations for deploying recommendation systems, although unlike commercial actors, they do not currently use recommendations for the explicit aim of data capture and behavioural intervention. In some respects they reframe these established motivations to align with their public service mission and values.

Many staff across public service media organisations display a belief that because the organisation is motivated by public service values, and produces content that adheres to those values, the use of recommendation systems to filter that content is a furtherance of their mission.

This has meant that staff at public service media organisations have not always critically examined whether the recommendation system itself is operating in accordance with public service values.

However, public service media organisations have begun to put in place principles and governance mechanisms to encourage staff to explicitly and systematically consider how the development of their systems furthers their public service values. For example, the BBC published its Machine Learning Engine Principles in 2019 and subsequently continues to iterate on a checklist for project teams to put those principles into practice.[footnote]Macgregor, M. (2021). Responsible AI at the BBC: Our Machine Learning Engine Principles. BBC Research and Development. Available at: https://www.bbc.co.uk/rd/publications/responsible-ai-at-the-bbc-our-machine-learning-engine-principles[/footnote]

Public service media organisations are also in the early stages of developing new metrics and methods to measure the public service value of the outputs of the recommendation systems, both with explicit measures of ‘public service value’ and implicitly through evaluation by editorial staff. We explore these more in our chapter on evaluation and in our case studies on the BBC’s use of recommendation systems.

Additionally, we found that alongside these stated motivations, public service media interviewees had internalised a set of normative values around recommendation systems. When asked to define what a recommendation system is in their own terms, they spoke of systems helping users to find ‘relevant’, ‘useful’, ‘suitable’, ‘valuable’ or ‘good’ content.[footnote]This is not unique to the BBC, and many academic papers and industry publications also reflect a similar implicit normative framework in their definitions of recommendation systems.[/footnote]

This framing around user benefit obscures the fact that the systems are ultimately deployed to achieve organisations’ goals, and so if they are ‘relevant’ or ‘useful’ this is because that helps achieve the organisations’ goals, not because of an inherent property of the system.[footnote]The organisations’ goals are not necessarily in tension with that of the users, e.g. helping audiences finding more relevant content might help audiences get better value for money (which is a goal of many public service media organisations) but that is still goal which shapes how the recommendation system is developed, rather than a necessary feature of the system.[/footnote] It also adopts the vocabulary of commercial recommendation systems (e.g. targeted advertising options encourage users to opt for more ‘relevant’ adverts) which the Competition and Markets Authority has identified as problematic. This indicates that public service media are essentially adopting the paradigm established by the use of commercial recommendation systems.

Potential risks from recommendation systems

In this section, we explore some of the ethical risks associated with the use of recommendation systems and how they might manifest in uses by public service media.

A review of the literature on recommendation systems helps identify some of the potential ethical and societal risks that have been raised in relation to their use beyond the specific context of public service media. Milano et al highlight six areas of concern for recommendation systems in general:[footnote]Milano, S., Taddeo, M. and Floridi, L. (2020). ‘Recommender systems and their ethical challenges’. AI & Society, 35, pp.957–967. Available at: https://doi.org/10.1007/s00146-020-00950-y[/footnote]

  1. Privacy risks to users of a recommendation system: including direct risks from non-compliance with existing privacy regulations and/or malicious use of personal data, and indirect risks resulting from data leaks, deanonymisation of public datasets or unwanted exposure of inferred sensitive characteristics to third parties.
  2. Problematic or inappropriate content could be recommended and amplified by a recommendation system.
  3. Opacity in the operation of a recommendation system could lead to limited accountability and lower the trustworthiness of the recommendations.
  4. Autonomy: recommendations could limit users’ autonomy by manipulating their beliefs or values, and by unduly restricting the range of meaningful options that are available to them.
  5. Fairness constitutes a challenge for any algorithmic system that operates using human-generated data and is therefore liable to (re)produce social biases. Recommendation systems are no exception, and can exhibit unfair biases affecting a variety of stakeholders whose interests are tied to recommendations.
  6. Social externalities such as polarisation, the formation of echo chambers, and epistemic fragmentation, can result from the operation of recommendation systems that optimise for poorly defined objectives.

How these risks are viewed and addressed by public service media

In this section, we examine the extent to which ethical risks of recommendation systems, identified in the literature, are present in the development and use of recommendation systems in practice by public service media.

1. Privacy

The data gathering and operation of recommendation systems can pose direct and indirect privacy risks. Direct privacy risks come from how personal data is handled by the platform, as its collection, usage and storage need to follow procedures to ensure prior consent from individual users. In the context of EU law, these stages are covered by General Data Protection Regulation (GDPR).

Indirect privacy risks arise when recommendation systems expose sensitive user data unintentionally. For instance, indirect privacy risks may come about as a result of unauthorised data breaches, or when a system reveals sensitive inferred characteristics about a user (e.g. targeted advertising for baby products could indicate a user is pregnant).

Privacy relates to a number of public service values: independence (act in the interest of audiences), excellence (high standards of integrity) and accountability (good governance).

Privacy was raised as a potential risk by every interviewee from a public service organisation. Specifically, public service media were concerned about users’ consent to the use of their data, emphasising data security as a key concern for the responsible collection and use of user data.[footnote]Interview with Jonas Schlatterbeck, Head of Content ARD Online & Leiter Programmplanung, ARD (2021). [/footnote] Several interviewees stressed that public service media organisations do not generally require mandatory sign-in for certain key products, such as news. Other services, focusing more on entertainment, such as BBC iPlayer, do require sign-on, but the amount of personal data collected is limited.

Sebastien Noir, Head of Software, Technology and Innovation at the European Broadcasting Union, emphasised how the need to comply with privacy regulations in practice means that projects have to jump through several hoops with legal teams before trials with user data are allowed. While this uses up time and resources in project development, it also means that robust measures are in place to protect users from direct threats to privacy. Koen Muylaert,  at Belgian VRT, also spoke to us about how there is a distinction between personal data, which poses privacy risks, and behavioural data, which may be safer to use for public service media recommendation systems and which they actively monitor.[footnote]Interview with Koen Muylaert, Project Lead, VRT data platform and data science initiative, Vlaamse Radio- en Televisieomroeporganisatie (VRT) (2021).[/footnote]

None of the organisations that we interviewed spoke to us about indirect threats to privacy or ways to mitigate them.

2. Problematic or inappropriate content

Open recommendation systems on commercial platforms that host limitless, user-generated content have a high risk of recommending low quality or harmful content. This risk is lower for public service media that deploy closed recommendation systems to filter their own catalogue of content which has already been extensively scrutinised for quality and adherence to editorial guidelines. Nonetheless, some risk may still exist for closed recommendation systems, such as the risk of recommended age-inappropriate content to younger users.

The risk of inappropriate content relates to the public service media values of excellence (high standards of integrity, professionalism and quality) and independence (completely impartial and independent from political commercial and other influences and ideologies).

In interviews, many members of public service media staff were generally confident that recommendations would be of high quality and represent public service values because the content pool had already passed that test. Nonetheless, some staff identified a risk that the system could surface inappropriate content, for example, archive items that include sexist or racist language that is no longer acceptable or through the juxtaposition of items that could be jarring.

However, a more commonly identified potential risk arises in connection to independence and impartiality. Many of the interviewees we spoke to mentioned that the algorithms used to generate user recommendations needed to be impartial. The BBC and other public service media organisations have traditionally operated a policy of ‘balance over time and output’, meaning a range of views on a subject or party political voices will be heard over a given period of programming on a specific channel. However, recommendation systems disrupt this. The audience is no longer exposed to a range of content broadcast through channels. Instead, individuals are served up specific items of content without the balancing context of other programming. In this way they may only encounter one side of an argument.

Therefore, some interviewees expressed that fine-tuning balanced recommendations are especially important in this context. This is an area where the close integration of editorial and technical teams was seen to be essential

3. Opacity of the recommendation

Like many other algorithmic systems, many recommendation systems operate as black boxes whose internal workings are sometimes difficult to interpret, even for their developers. The process by which a recommendation is generated is often not transparent to individual users or other parties that interact with a recommendation system. This can have negative effects, by limiting the accountability of the system itself, and diminishing the trust that audiences put in the good operation of the service.

Opacity is a challenge to the public service media values of independence (autonomous in all aspects of the remit) and accountability (be transparent and subject to constant public scrutiny). The issue of opacity and the risks that it raises was touched upon in several of our interviews.

The necessity to exert more control over the data and algorithms used for building recommendation systems was among the motivations for the BBC in bringing their development in house. The same is true of other public service media in Europe. While most European broadcasters did not choose to bring the development of recommendation systems in house, many of them now rely on PEACH, a recommendation system developed collaboratively by several public service media organisations under the umbrella of the European Broadcasting Union (EBU).

Previously, the BBC as well as other public service media had relied on external commercial contractors to build the recommendation systems they used. This however meant that they could exert little control over the data and algorithms used, which represented a risk. In the words of Sebastien Noir, Head of Software, Technology and Innovation at the EBU:

‘As a broadcaster, you are defined by what you promote to the people, that’s your editorial line. This is, in a way, also your brand or your user experience. If you delegate that to a third party company, […] then you have a problem, because you have given your very identity, the way you are perceived by the people to a third party company […] No black box should be your editorial line.’[footnote]Interview with Sébastien Noir, Head of Software, Technology and Innovation, and Dmytro Petruk, Developer, European Broadcasting Union (2021).[/footnote]

But bringing the development of recommendation systems in-house does not solve all the issues connected with the opacity of these systems. Jannick Sørenson, Associate Professor in Digital Media at Aalborg University, summarised the concern:

‘I think the problem of the accountability, first within the public service institution, is that editors, they have no real chance to understand what data scientists are doing. And data scientists, neither they do. […] And so the dilemma here is that it requires a lot of specialised knowledge to understand what is going on inside this process of computing recommendation[s]. Right. And, I mean, with Machine Learning, it’s become literally impossible to follow.’[footnote]Interview with Jannick Kirk Sørensen, Associate Professor in Digital Media, Aalborg University (2021).[/footnote]

Sørenson highlighted how the issue of opacity arises both internally and externally for public service media.

Internally to the institution, the opacity of the systems utilised to produce recommendations hinders the collaboration of editorial and technical staff. Some public service media organisations, such as Swedish Radio, have tried to tackle this issue by explicitly having both a technical and an editorial project lead, while Bayerische Rundfunk have established an interdisciplinary team with their AI and Automation Lab[footnote]We explore these examples in more detail later in the chapter.[/footnote]

Documentation is another approach taken by public service media organisations to reduce the opacity of the system. For example, the BBC’s Machine Learning Engine Principles checklist (as of version 2.0) explicitly asks teams to document what their model does and how it was created, e.g. via a data science decision log, and to create a Plain English explanation or visualisation of the model to communicate the model’s purpose and operation.

Externally, public service media struggle to provide effective explanations to audiences about the systems that they use. The absence of industry standards for explanation and transparency was identified as a risk. Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio, also expressed this worry:

‘One particular risk, I think, with all these kind of more automatic services, and especially with the introduction of […] AI powered services, is that the audience doesn’t understand what we’re doing. And […] I know that there’s a big discussion going on at the moment, for example, about Explainable AI. How should we explain in a better way what the services are doing? […] I think that there’s a very big need for kind of industry dialogue about setting standards here, you know.’[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021).[/footnote]

Other interviewees, however, highlighted that the use of explanations has limited efficacy in addressing the external opacity of individual recommendations, since users rarely pay attention to them. Sarah van der Land, Digital Innovation Advisor at NPO in the Netherlands, cited internally conducted consumer studies as evidence that audiences might not care about explanations:

‘Recently, we did some experiments also on data insight, into what extent our consumers want to have feedback on why they get a certain recommendation? And yeah, unfortunately, our research showed that a lot of consumers are not really interested in the why. […] Which was quite interesting for us, because we thought, yeah, of course, as a public value, we care about our consumers. We want to elaborate on why we do the things we do and why, based on which data, consumers get these recommendations. But yeah, they seem to be very little interested in that.’[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (2021).[/footnote]

This finding indicates that pursuing this strategy has limited practical effects in improving the value of recommendations for audiences. David Graus, Lead Data Scientist, Randstad Groep Nederland, also told us that he is sceptical of the use of technical explanations, but that ‘what is more important is for people to understand what a recommender system is, and what it aims to do, and not how technically a recommendation was generated.’[footnote]Interview with David Graus, Lead Data Scientist, Randstad Groep Nederland (2021).[/footnote] This could be achieved by providing high-level explanations of the processes and data that were used to produce the recommendations, instead of technical details of limited interest to non-technical stakeholders.

4. Autonomy

Research on recommendation systems has highlighted how they could pose risks to user autonomy, by restricting people’s access to information and by potentially being used to shape preferences or emotions. Autonomy is a fundamental human value which ‘generally can be taken to refer to a person’s effective capacity for self-governance’.[footnote]Prunkl, C. (2022). ‘Human autonomy in the age of artificial intelligence’. Nature Machine Intelligence, 4, pp.99–101. Available at: doi: https://doi.org/10.1038/s42256-022-00449-9[/footnote] Writing on the concept of human autonomy in the age of AI, Prunkl distinguishes two dimensions of autonomy: one internal, relating to the authenticity of the beliefs and values of a person; and the other external, referring to the person’s ability to act, or the availability of meaningful options that enables them to express agency.

The risk to autonomy relates to the public service media value of universality (creating a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion).

Public service media historically have made choices on behalf of their audiences in line with what the organisation has determined is in the public interest. In this sense audiences have limited autonomy due to public service media organisations restricting individuals’ access to information, albeit with good intentions.

The use of recommendation systems could, in one respect, be seen as increasing the autonomy of audiences. A more personalised experience, that is more tailored to the individual and their interests, could support the ‘internal’ dimension of autonomy, because it could enable a recommendation system to more accurately reflect the beliefs and values of an individual user, based on what other users of that demographic, region or age might like.

At the same time, public service media strive to ‘create a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion’.[footnote]European Broadcasting Union. (2012). Empowering Society: A Declaration on the Core Values of Public Service Media, p. 4. Available at: https://www.ebu.ch/files/live/sites/ebu/files/Publications/EBU-Empowering-Society_EN.pdf[/footnote] There is a risk in using recommendation systems that public service media might filter information in such a way that they inhibit people’s autonomy to form their views independently.[footnote]Interview with David Caswell, Executive Product Manager, BBC News Labs (2021).[/footnote]

By design, recommendation systems tailor recommendations to a specific individual, often in such a way where these recommendations are not visible to other people. This means individual members of the audience may not share a common context or may be less aware of what information others have access to, a condition that Milano et al have called ‘epistemic fragmentation’.[footnote]Milano, S., Mittelstadt, B., Wachter, S. and Russell, C. (2021), ‘Epistemic fragmentation poses a threat to the governance of online targeting’. Nature Machine Intelligence. Available at: https://doi.org/10.1038/s42256-021-00358-3[/footnote] Coming to an informed opinion often requires being able to have meaningful conversations about a topic with other people. If recommendations isolate individuals from each other, then this may undermine the ability of audiences to form authentic beliefs and reason about their values. Since this ability is essential to having autonomy, epistemic fragmentation poses a risk.

Recommendations are also based on an assumption that there is such a thing as a single, legible individual for whom content can be personalised. In practice, people’s needs vary according to context and relationships. They may want different types of content at different times of day, whether they are watching videos with family or listening to the news in the car, for example. However, contextual information is difficult to factor in a recommendation, and doing so requires access to more user data which could pose additional privacy risks. Moreover, recommendations are often delivered via a user’s account with a service that uses recommendation systems. However, some people may choose to share accounts, create a joint one or maintain multiple personal accounts to compartmentalise different aspects of their information needs and public presence.[footnote]Milano, S., Taddeo, M. and Floridi, L. (2021). ‘Ethical aspects of multi-stakeholder recommendation systems’. The Information Society, 37(1). Available at: https://doi.org/10.1080/01972243.2020.1832636[/footnote]

Finally, the use of recommendation systems by public service media can pose a risk to autonomy when the categories that are used to profile users are not accurate, not transparent or not easily accessible and modifiable by the users themselves. This concern is linked to the opacity of the system, but it was not addressed explicitly as a risk to user autonomy in our interviews.

As above, several interviews highlighted that internal research indicates users do not want more explanations and control over the recommendation system, when this comes at the cost of a frictionless experience. If so, public service media need to consider whether there is a trade-off between supporting autonomy and the ease of use of a recommendation system, and research alternative strategies to provide audiences with more meaningful opportunities to participate in the construction of their digital profiles.

5. Fairness

Researchers have documented how the use of machine learning and AI in applications ranging from credit scoring to facial recognition,[footnote]Buolamwini, J. and Gebru, T. (2018). ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Conference on Fairness, Accountability and Transparency, PMLR, pp. 77–91. Available at: https://proceedings.mlr.press/v81/buolamwini18a.html[/footnote] medical triage to parole decisions,[footnote]Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). ‘Machine Bias’. ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing[/footnote] advert delivery[footnote]Sweeney, L. (2013). ‘Discrimination in online ad delivery’. arXiv. Available at: https://doi.org/10.48550/arXiv.1301.6822[/footnote] to automatic text generation[footnote]Noble, S. U. (2018). Algorithms of Oppression. New York: New York University Press; Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’. FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp.610–623. Available at: https://doi.org/10.1145/3442188.3445922[/footnote] and many others, often leads to unfair outcomes which perpetuate historical social biases or introduce new, machine-generated ones. Given the pervasiveness of these systems in our societies, this has given rise to increasing pressure to improve their fairness, which has contributed to a burgeoning  area of research.

This risk relates to the public service media value of universality (reach all segments of society, with no-one excluded) and diversity (support and seek to give voice to a plurality of competing views – from those with different backgrounds, histories and stories. Help build a more inclusive, less fragmented society).

Developers of algorithmic systems today can draw on a growing array of technical approaches to addressing fairness issues; however, fairness remains a challenging issue that cannot be fully solved by technical fixes. Instead, as Wachter et al argue in the context of EU law, the best approach may be to recognise that algorithmic systems are inherently and inevitably biased, and to put in place accountability mechanisms to ensure that there are no biases that perpetuate unfair discrimination, but to the contrary biases are used to help to redress historical injustices.[footnote]Wachter, S., Mittelstadt, B. and Russell, C. (2020). ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’. Computer Law & Security Review, 41. Available at: http://dx.doi.org/10.2139/ssrn.3547922[/footnote]

Recommendation systems are no exception. Biases in recommendation can arise at a variety of levels and for different stakeholders. From the perspective of users, a recommendation system could be unfair if the quality of the recommendations varies across users. For example, if a music recommendation system is much worse at predicting the tastes of and serving interesting recommendations to a minority group, this could be unfair.

Recommendations could also be unfair from a provider perspective. For instance, one recent study found a film recommendation system trained on a well-known dataset (MovieLens 10M), and designed to optimise for relevance to users, systematically underrepresented films by female directors.[footnote]Boratto, L., Fenu, G. and Marras, M. (2021) ‘Interplay between upsampling and regularization for provider fairness in recommender systems’. User Modeling and User-Adapted Interaction, 31(3), pp. 421–455.Available at: https://doi.org/10.1007/s11257-021-09294-8[/footnote] This example illustrates a phenomenon that is more pervasive. Since recommendation systems are primarily built to optimise for user relevance, provider-side unfairness has been observed to emerge in a variety of settings, ranging from content recommendations to employment websites.[footnote]Biega, A. J., Gummadi, K. P. and Weikum, G. (2018). ‘Equity of Attention: Amortizing Individual Fairness in Rankings’. SIGIR ’18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 405–414. Available at: https://dl.acm.org/doi/10.1145/3209978.3210063[/footnote]

Because different categories of stakeholders derive different types of value from recommendation systems, issues of fairness can arise separately for each of them. In e-commerce applications, for example, users derive value from relevant recommendations for items that they might be interested in buying, while sellers derive value from their items being exposed to more potential buyers. Moreover, attempts to address unfair bias for one category of stakeholders might lead to making things worse for another category. In the case of e-commerce applications, for example, attempts to improve provider-side fairness could have negative effects on the relevance of recommendations for users. Bringing these competing interests together, comparing them and devising overarching fairness metrics remains an open challenge.[footnote]Abdollahpouri, H., Adomavicius, G., Burke, R., et al. (2020). ‘Multistakeholder recommendation: Survey and research directions’. User Modeling and User-Adapted Interaction, pp.127–158. Available at: https://doi.org/10.1007/s11257-019-09256-1[/footnote]

Issues of fairness were not prominently mentioned by our interview participants. When fairness was referenced, it was primarily with regards to fairness concerns for users and whether recommendation systems performed better for some demographics than others. However, the extent to which recommendation systems are currently used across public service media organisations we spoke to was low enough that the risk did not generate too much concern among many staff. Sebastien Noir, European Broadcasting Union, said that ‘Recommendation appears, at least for the moment more than something like [the] cherry on the cake, it’s a little bit of a personalised touch on the world where everything is still pretty much broadcast content where everyone gets to receive the same content.’[footnote]Interview with Sébastien Noir, Head of Software, Technology and Innovation, and Dmytro Petruk, Developer, European Broadcasting Union (2021).[/footnote] Since, for now, recommendations represent a very small portion of the content that users access on these platforms, the risk that this poses to fairness was deemed to be very low. 

However, if recommendations were to take a more prominent role in future, this would pose concerns that need to be addressed. Some of our BBC interviewees expressed a concern that some recommendations currently cater best to the interests of some demographics, while they work less well for others. Differential levels of accuracy and quality of experience across groups of users is a known issue in recommendation systems, although the way in which it manifests can be difficult to predict before the system is deployed.

In general, our respondents believed that ‘majority’ users, whose informational needs and preferences are closest to the average, and therefore more predictable, tend to be served best by a recommendation system – though many acknowledge this assertion has been difficult to empirically prove. If the majority of BBC users belong to a specific demographic, this could skew the system towards their interests and tastes, posing fairness issues with respect to other demographics. However, this can sometimes be reversed when other factors beyond user relevance, such as increasing the diversity of users and the diversity of content, are introduced. Therefore, the emerging patterns from recommendations are difficult to predict, but will need to be monitored on an ongoing basis. BBC interviewees reported that this issue is currently addressed by looping in more editorial oversight.

6. Social effects or externalities

One of the features of recommendation systems that has attracted most controversy in recent years is their apparent tendency to produce negative social effects. Social media networks that use recommendation systems to structure user feeds, for instance, have come under scrutiny for increasing polarisation by optimising for engagement. Other social networks have come under fire for facilitating the spread of disinformation.

The social externality risk relates to the public service media values of universality (create a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion) and diversity (support and seek to give voice to a plurality of competing views – from those with different backgrounds, histories and stories. Help build a more inclusive, less fragmented society).

Pariser introduced the concept of a ‘filter bubble’, which can be understood as an informational ecosystem where individuals are only or predominantly exposed to certain types of content, while they never come into contact with other types.[footnote]Pariser, E. (2011). The filter bubble: what the Internet is hiding from you. Penguin Books.[/footnote] The philosopher C Thi Nguyen has offered an analysis of how filter bubbles might develop into echo chambers, where users’ beliefs are reflected at them and reinforced through interaction with media that validates them, leading to potentially dangerous escalation.[footnote]Nguyen, C. T. (2018). ‘Why it’s as hard to escape an echo chamber as it is to flee a cult’. Aeon. Available at: https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult[/footnote] However, some recent empirical research has cast doubt on the extent to which recommendation systems deployed on social media really give rise to filter bubbles and political polarisation in practice.[footnote]Arguedas, A. R., Robertson, C. T., Fletcher, R. and Nielsen R.K. (2022). ‘Echo chambers, filter bubbles, and polarisation: a literature review.’ Reuters Institute for the Study of Journalism. Available at: https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review[/footnote]

In one study, it was observed that consuming news through social media increases the diversity of content consumed, with users engaging with a larger and more varied selection of news sources.[footnote]Scharkow, M., Mangold, F., Stier, S. and Breuer, J. (2020). ‘How social network sites and other online intermediaries increase exposure to news’. Proceedings of the National Academy of Sciences, 117(6), pp. 2761–2763. Available at: https://doi.org/10.1073/pnas.1918279117[/footnote] These studies highlight how recommendation systems can be programmed to increase the diversity of exposure to varied sources of content.[footnote]A similar finding exists in other studies of public service media organisations – see: Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote] However, they do not control for the quality of the sources or the individual reaction to the content (e.g. does the user pay attention or merely scroll down on some of the news items?). Without this information it is difficult to know what the effects are of exposure to different types of sources. More research is needed to probe the links between exposure to diverse sources and the influence this has on the evolution of political opinions. 

Another known risk for recommendation systems is exposure to manipulation by external agents. Various states, for example Russia and China, have been documented to engage in what has been called ‘computational propaganda’. This type of propaganda exploits some features of recommendation systems on social media to spread mis- or disinformation, with the aim of destabilising the political context of the countries targeted. State-sponsored ‘content farms’ have been documented to produce content that is engineered to be picked up by recommendation systems to go viral. This kind of hostile strategy is made possible by the vulnerability of the recommendation system, especially open ones, because the system is programmed to optimise for engagement.

The risk that the use of recommendation systems could increase polarisation and create filter bubbles was regarded as very low by our interviewees. Unlike social media that recommend content generated by users or other organisations, the BBC and other public service media that we spoke to operate closed content platforms. This means that all the content recommended on their platforms has already passed multiple editorial checks, including for balanced and truthful reporting.

The relatively minor role that recommendation systems play on the platform currently also means that they do not pose a risk of creating filter bubbles. Therefore, this was not recognised as a pressing concern.

However, many raised concerns that recommendation systems could undermine the principle of diversity by serving audiences homogenous content. Historically, programme schedulers have had mechanisms to expose audiences to content they might not choose of their own accord – for example by ‘hammocking’ programmes of high public value between more popular items on the schedule and relying on audiences not to switch channels. Interviewees also mentioned the importance of serendipity and surprise as part of the public service remit. This could be lost if audiences are only offered content based on their previous preferences. These concerns motivate ongoing research into new methods for producing more accurate and diversified recommendations.[footnote]Paudel, B., Christoffel, F., Newell, C. and Bernstein, A. (2017). ‘Updatable, Accurate, Diverse, and Scalable Recommendations for Interactive Applications’. ACM Transactions on Interactive Intelligent Systems, 7(1), pp.1–34. Available at: https://doi.org/10.1145/2955101[/footnote]

Conclusion

The categories of risk related to the use of recommendation systems, identified in the literature, can be applied to their use in the context of public service media. However, the way in which these risks manifest and the emphasis that organisations put on them can be quite different to a commercial context.

We found that public service media have, to a greater or lesser extent, mitigated their exposure to these risks through a number of factors such as the high quality of the content being recommended; the limited deployment of the systems; the substantial level of human curation; a move towards greater integration of technical and editorial teams; ethical principles; associated practice checklists and system documentation. It is not enough for public service media organisations to believe that having a public service mission will ensure that recommendation systems serve the public. If public service media are to use recommendation systems responsibly, they must interrogate and mitigate the potential risks.

We find these risks can also be seen in relation to the six core public service values of universality, independence, excellence, diversity, accountability and innovation.

We believe it is useful for public service media to consider both the known risks, as understood within the wider research field, as well as the risks in relation to public service values. By approaching the potential challenges of recommendation systems through this dual lens, public service media organisations should be able to develop and deploy systems in line with their public service remit.

An additional consideration, broader than any specific risk category, is that of audience trust in public service media. Trust doesn’t fall under any specific category because it is associated with the relationship between  public service media and their audience more broadly. But failure to address the risks identified by the categories can negatively affect trust. All public service media organisations place trust as central to their mission. In the context of a fragmented digital media environment, their trustworthiness has taken on increased importance and is now a unique quality that distinguishes them from other media and which is pivotal to the argument in favour of sustaining public service media. Many public service media organisations are beginning to recognise and address the potential risks of recommendation systems and it is vital that this continues in order to retain audience trust.

Additional challenges for public service media

As well as the ethical risks described above, public service media face practical challenges in implementing recommendation systems that stem from their mission, the make-up of their teams and their organisational infrastructure.

Quantifying values

Recommendation systems filter content according to criteria laid down by the system developers. Public service media organisations that want to filter content in ways that prioritise public service values first need to translate these values into information that is legible to an algorithmic system. In other words, the values must be quantified as data.

However, as we noted above, public service values are fluid, can change over time and depend on context. And as well as the stated mission of public service media, laid down in charters, governance and guidelines, there are a set of cultural norms and individual gut instincts that determine day-to-day decision making and prioritisation in practice. Over time, public service media have developed a number of ways to measure public value, through systems such as the public value test assessment and with metrics such as audience reach, value for money and surveys of public sentiment (see section above). However, these only account for public value at a macro level. Recommendation systems that are filtering individual items of content require metrics that quantify values at a micro level.

Swedish Radio is a pioneer in attempting to do this work of translation. Olle Zachrison of Swedish Radio summarised it as: ‘we have central tenets to our public service mission stuff that we have been talking about for decades and also stuff that is in the kind of gut of the news editors. But in a way, we had to get them out there in an open way and into a system also, that we in a way could convert those kinds of editorial values that have been sitting in these kind of really wise news assessments for years, but to get them out there into a system that we also convert them into data.’[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021).[/footnote]

Working across different teams and different disciplines

The development and deployment of recommendation systems for public service media requires expertise in both technical development and content creation and curation. This proves challenging in a number of ways.

Firstly, technology talent is hard to come by, especially when public service media cannot offer anything near the salaries available at commercial rivals.[footnote]Interview with Dietmar Jannach, Professor, University of Klagenfurt (2021).[/footnote] Secondly, editorial teams often do not trust or value the role of technologists, especially when the two do not work closely with each other.[footnote]Interview with Nic Newman, Senior Research Associate, Reuters Institute for the Study of Journalism (2021).[/footnote] In some organisations, the introduction of recommendation systems stalls because it is perceived as a direct threat to editorial jobs and an attempt to replace journalists with algorithms.[footnote]Interview with Sébastien Noir, Head of Software, Technology and Innovation, and Dmytro Petruk, Developer, European Broadcasting Union (2021).[/footnote]

Success requires bridging this gap and coordinating between teams of experts in technical development, such as developers and data scientists, and experts in content creation and curation, the journalists and editors.[footnote]Boididou, C., Sheng, D., Moss, M. and Piscopo, A. (2021), ‘Building Public Service Recommenders: Logbook of a Journey’. RecSys ’21: Proceedings of the 15th ACM Conference on Recommender Systems, pp. 538–540. Available at: https://doi.org/10.1145/3460231.3474614[/footnote]

As Sørensen and Hutchinson note: ‘Data analysts and computer programmers (developers) now perform tasks that are key determinants for exposure to public service media content. Success is no longer only about making and scheduling programmes. This knowledge is difficult to communicate to journalists and editors, who typically don’t engage in these development projects […] Deep understanding of how a system recommends content is shared among a small group of experts’.[footnote] Sørensen, J.K. and Hutchinson, J. (2018). ‘Algorithms and Public Service Media’. Public Service Media in the Networked Society: RIPE@2017, pp.91–106. Available at: http://www.nordicom.gu.se/sites/default/files/publikationer-hela-pdf/public_service_media_in_the_networked_society_ripe_2017.pdf[/footnote]

Some, such as Swedish Radio and BBC News Labs, have tried to tackle this issue by explicitly having two project leads, one with an editorial background and one with a technical background, to emphasise the importance of working together and symbolically indicate that this was a joint process.[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021); BBC News Labs. ‘About’. Available at: https://bbcnewslabs.co.uk/about[/footnote] Swedish Radio’s Olle Zachrison noted that: 

‘We had a joint process from day one. And we also deliberately had kind of two project managers, one, clearly from the editorial side, like a very experienced local news editor. And the other guy was the product owner for our personalization team. So they were the symbols internally of this project […] that was so important for the, for the whole company to kind of team up behind this and also for the journalists and the product people to do it together.’

If this coordination fails, this can ‘weaken the organisation strategically and, on a practical level, create problems caused by failing to include or correctly mark the metadata that is essential for findability’.

Bayerische Rundfunk has established a unique interdisciplinary team. The AI and Automation Lab has a remit to not only create products, but also produce data-driven reporting and coverage of the impacts of artificial intelligence on society. Building from the existing data journalism unit, the Lab fully integrates the editorial and technical teams under the leadership of Director Uli Köppen. Although she recognises the challenges of bringing together people from different backgrounds, she believes the effort has paid off:

‘This technology is so new, and it’s so hard to persuade the experts to work in journalism. We had the data team up and running, these are journalists that are already in the mindset at this intersection of tech and journalism. And I had the hope that they are able to help people from other industries to dive into journalism, and it’s easier to have this kind of conversation with people who already did this cultural step in this hybrid world.

‘It was astonishing how those journalists helped the new people to onboard and understand what kind of product we are. And we are also reinventing our role as journalists in the product world. And this really worked out so I would say it’s worth the effort.’

Metadata, infrastructure and legacy systems

In order to filter content, recommendation systems require clear information about what that content is. For example, if a system is designed to show people who enjoyed soap operas other series that they might enjoy, individual items of content must be labelled as being soap operas in a machine-readable format. This kind of labelling is called metadata.

However, public service media have developed their programming around the needs of individual channels and stations organised according to particular audiences and tastes (e.g. BBC Radio 1 is aimed at a younger audience around music, BBC Radio 4 at an older audience around speech content) or by a particular region (e.g. in Germany Bayerische Rundfunk serves Bavaria, WDR serves West Germany but both are members of the federal broadcaster ARD). Each of these channels will have evolved their own protocols and systems and may label content differently – or not at all. This means the metadata to draw on for the deployment of recommendation systems is often sparse and low quality, and the metadata infrastructure is often disjointed and unsystematic.

We heard from many interviewees across public service media organisations that access to high-quality metadata was one of the most significant barriers to implementing recommendation systems. This was particularly an issue when they wanted to go beyond the most simplistic approaches and experiment with assigning public service value to pieces of content or measuring the diversity of recommended content.

Recommendation system projects often required months of setting up systems for data collection, then assessing and cleaning that data, before the primary work of building a recommendation system could begin. To achieve this requires a significant strategic and financial commitment on the part of the organisation, as well as buy-in from the editorial teams involved in labelling.

Evaluation of recommendation systems

We’ve explored the possible benefits and harms of recommendation systems, and how those benefits and harms might manifest in a public service media context. To try to understand whether and when those benefits and harms occur, developers of recommendation systems need to evaluate their systems. Conversely, looking at how developers and organisations evaluate their recommendation systems can tell us what benefits and harms, and to whom, they prioritise and optimise for in their work.[footnote]Evaluation of recommendation systems in not limited to the developers and deployers of those systems. Other stakeholders such as users, government, regulators, journalists and civil society organisations may all have their own goals for what they think a particular recommendation system should be optimising for. Here however, we focus on evaluation as seen by the developer and deployer of the system, as this is where there is the tightest feedback loop between evaluation and changes to the system and the developers and deployers generally have privileged access to information about the system and a unique ability to run tests and studies on the system. For more on how regulators (and others) can evaluate social media companies in an online-safety context, see: Ada Lovelace Institute. (2021). Technical methods for regulatory inspection of algorithmic systems. Available at: https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/[/footnote]

In this chapter, we look at:

  • how recommendation systems can be evaluated
  • how public service media organisations evaluate their own recommendation systems
  • how evaluation might be done differently in future.

How recommendation systems are evaluated

In this section, we lay out a framework for understanding the evaluation of recommendation systems as a three-stage process of:

  1. Setting objectives.
  2. Identifying metrics.
  3. Selecting methods to measure those metrics.

This framework is informed by three aspects of evaluation (objectives, metrics and methods) as identified by Francesco Ricci, Professor of Computer Science at the Free University of Bozen-Bolzano.

Objectives

Evaluation is a process of determining how well a particular system achieves a particular set of goals or objectives. To evaluate a system, you need to know what goals you are evaluating against.[footnote]Interview with Francesco Ricci, Professor of Computer Science, Free University of Bozen-Bolzano (2021).[/footnote]

However, this is not a straightforward exercise. There is no singular goal for a recommendation system and different stakeholders will have different goals for the system. For example, on a privately-owned social media platform:

  • the engineering team’s goal might be to create a recommendation system that serves ‘relevant’ content to users
  • the CEO’s goal might be to maximise profit while minimising personal reputational risk
  • the audience’s goal may be to discover new and unexpected content (or just avoid boredom).

If a developer wants to take into account the goals of all the stakeholders in their evaluation, they will need to decide how to prioritise or weigh these different goals.

Balancing goals is ultimately a ‘political’ or ‘moral’ question, not a technical one, and there will never be a universal answer about how to weigh these different factors, or even who the relevant stakeholders whose goals should be weighted are.

Any process of evaluation ultimately needs a process to determine the relevant stakeholders for a recommendation system and how their priorities should be weighted.

This is made more difficult because people are often confused or uncertain about their goals, or have multiple competing goals, and so the process of evaluation will need to help people clarify their goals and their own internal weightings between those goals.[footnote]Interview with Francesco Ricci.[/footnote]

Metrics

Furthermore, goals are often quite general and whether they have been met cannot be directly observed.[footnote]Interview with Francesco Ricci, Professor of Computer Science, Free University of Bozen-Bolzano (2021).[/footnote] Therefore, once a goal has been decided, such as ‘relevance to the user’, the goal needs to be operationalised into a set of specific metrics to judge the recommendation system against.[footnote]Operationalising is a process of defining how a vague concept, which cannot be directly measured, can nevertheless be estimated by empirical measurement. This process inherently involves replacing one concept, such as ‘relevance’, with a proxy for that concept, such as ‘whether or not a user clicks on an item’ and thus will always involve some degree of error.[/footnote] These metrics can be quantitative, such as the number of users who click on an item, or qualitative, such as written feedback from users about how they feel about a set of recommendations.

Whatever the metrics used, the choice of metrics is always a choice of a particular interpretation of the goal. The metric will always be a proxy for the goal, and determining a proxy is a political act that grants power to the evaluator to decide what metrics reflect their view of the problem to be solved and the goals to be achieved.[footnote]Beer, D. (2016). Metric Power. London: Palgrave Macmillan. Available at: https://doi.org/10.1057/978-1-137-55649-3[/footnote]

The people who define these metrics for the recommendation system are often the engineering or product teams. However, these teams are not always the same people who set the goals of an organisation. Furthermore, they may not directly interact with other stakeholders who have a role in setting the goals of the organisation or the goal of deploying the recommendation system.

Therefore, through misunderstanding, lack of knowledge or lack of engagement with others’ views, the engineering and product teams’ interpretation of the goal will likely never quite match the intention of the goal as envisioned by others.

Metrics will also always be a simplified vision of reality, summarising individual interactions with the recommendation system into a smaller set of numbers, scores or lines of feedback.[footnote]Raji, I. D., Bender, E. M., Paullada, A. et al. (2021). ‘AI and the Everything in the Whole Wide World Benchmark’, p2. arXiv. Available at: https://doi.org/10.48550/arXiv.2111.15366[/footnote] This does not mean metrics cannot be useful indicators of real performance; this very simplicity is what makes them useful in understanding the performance of the system. However, those creating the metrics need to be careful not to confuse the constructed metric with the reality underlying the interactions of people with the recommendation system. The metric is a measure of the interaction, not the interaction itself.

Methods

Evaluating is then the process of measuring these metrics for a particular recommendation system in a particular context, which requires gathering data about the performance of the recommendation system. Recommendation systems are evaluated in three main ways:[footnote]Gunawardana, A. and Shani, G. (2015). ‘Evaluating Recommender Systems’. Recommender Systems Handbook, pp 257–297. Available at: https://doi.org/10.1007/978-0-387-85820-3_8[/footnote]

  1. Offline evaluations test recommendation systems without real users interacting with the system, for example by measuring recommendation system performance on historical user interaction data or in a synthetic environment with simulated users.
  2. User studies test recommendation systems against a small set of users in a controlled environment with the users being asked to interact with the system and then typically provide explicit feedback about their experience afterwards.
  3. Online evaluations test recommendation systems deployed in a live environment, where the performance of the recommendation system is measured against interactions with real users.

These methods of evaluation are not mutually exclusive and a recommendation system might be tested with each method sequentially, as it moves from design to development to deployment.

Offline evaluation has been a historically popular way to evaluate recommendation systems. It is comparatively easy to do, due to the lack of interaction with real users or a live platform. In principle, they are reproducible by other evaluators, and allow standardised comparison of the results of different recommendation system.[footnote]Jannach, D. and Jugovac, M. (2019), ‘Measuring the Business Value of Recommender Systems’. ACM Transactions on Management Information Systems, 10(4), pp 1–23. Available at: https://doi.org/10.1145/3370082[/footnote]

However, there is increasing concern that offline evaluation results based on historical interaction data do not translate well into real-world recommendation system performance. This is because the training data is based on a world without the new recommendation system in it, and evaluations therefore cannot account for how that system might itself shift wider aspects of the service like user preferences.[footnote]Rohde, D., Bonner, S., Dunlop, T., et al. (2018). ‘RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising’. arXiv. Available at: https://doi.org/10.48550/arXiv.1808.00720; Beel, J. and Langer, S. (2015)., ‘A Comparison of Offline Evaluations, Online Evaluations, and User Studies in the Context of Research-Paper Recommender Systems’. Proceedings of the 19th International Conference on Theory and Practice of Digital Libraries (TPDL), pp.153-168. Available at: doi: 10.1007/978-3-319-24592-8_12; Jannach, D., Pu, P., Ricci, F. and Zanker, M. (2021). ‘Recommender Systems: Past, Present, Future’. AI Magazine, 42 (3). Available at: https://doi.org/10.1609/aimag.v42i3.18139[/footnote] This limits their usefulness in evaluating which recommendation system would actually be the best performing in the dynamic live environments most stakeholders are interested in, such as a video-sharing website with an ever-growing set of videos and ever-changing set of viewers and content creators.

Academics we spoke to in the field of recommendation systems identified user studies in labs and simulations as the state of the art in academic recommendation system evaluation. Whereas in industry, common practice is to use online evaluation via A/B testing to optimise key performance indicators.[footnote]Interview with Dietmar Jannach, Professor, University of Klagenfurt (2021).[/footnote]

How do public service media evaluate their recommendation systems?

In this section, we use the framework of objectives, metrics and methods to examine how public service media organisations evaluate their recommendation systems in practice.

Objectives

As we discussed in the previous chapter, recommendation systems are ultimately developed and deployed to serve the goals of the organisation using them; in this case, public service media organisations. In practice, however, the objectives that recommendation systems are evaluated against are often multiple levels of operationalisation and contextualisation down from the overarching public service values of the organisation.

For example, as discussed previously, the BBC Charter agreement sets out the mission and public purposes of the organisation for the following decade. These are derived from the public service values, but are also shaped by political pressures as the Charter is negotiated with the British Government of the time.

The BBC then publishes an annual plan setting out the organisation’s strategic priorities for that year, drawing explicitly on the Charter’s mission and purposes. These annual plans are equally shaped by political pressures, regulatory constraints and challenges from commercial providers. The plan also sets out how each product and service will contribute towards meeting those strategic priorities and purposes, setting the goals for each of the product teams.

For example, the goals of BBC Sounds as a product team in 2021 were to:

  1. Increase the audience size of BBC Sounds’ digital products.
  2. Increase the demographic breadth of consumption across BBC Sounds’ products, especially among the young.
  3. Convert ‘lighter users’ into regular users.
  4. Enable users to more easily discover content from the more than 50 hours of new audio produced by the BBC on an hourly basis.[footnote]According to David Jones (Executive Product Manager, BBC Sounds, interviewed in 2021), his top-line KPI is to reach 900,000 members of the British population who are under 35 by March 2022. These numbers are determined centrally by BBC senior managers based on the BBC’s Service Licence for BBC Online and Red Button. See: BBC Trust. (2016). BBC Online and Red Button Service Licence. Available at: http://downloads.bbc.co.uk/bbctrust/assets/files/pdf/regulatory_framework/service_licences/online/2016/online_red_button_may16.pdf[/footnote]

These objectives map onto the goals for using recommendation systems we discussed in the previous chapter. Specifically, the first three relate to capturing audience attention and the fourth relates to reducing information overload and improving discoverability for audiences.

These product goals then inform the objectives of the engineering and product teams in the development and deployment of a recommendation system, as a feature within the wider product.

At each stage, as the higher level objectives are interpreted and contextualised lower down, they may not always align with each other.

The objectives for the development and deployment of recommendation systems in public service media seem most clear for entertainment products, e.g. audio-on-demand and video-on-demand. Here, the goal of the system is clearly articulated as a combination of audience engagement with reaching underserved demographics and serving more diverse content. These are often explicitly linked by the development teams to achieving the public service values of diversity and a personalised version of universality, which they see as serving the needs of each and every group in society

In these cases, public service media organisations seem better at articulating goals for recommendation systems when they are using recommendation systems for a similar purpose as private-sector commercial media organisations. This seems, in part, because there is greater existing knowledge of how to operationalise those objectives, and the developers can draw on their own private sector experience and existing industry practice, open-source libraries and similar resources.

However, when setting objectives that focus more focus on public service value, public service media organisations often seem less clear about the goals of the recommendation system within the wider product.

This seems partly because in the domain of news, for example, the use of recommendation systems by public service media is more experimental and at an earlier stage of maturity. Here, the motivations often come further apart from commercial providers, with the implicit motivation of public service media developers seemingly to augment existing editorial capabilities with a recommendation system, rather than drive engagement with the news content. This means public service media developers have less existing practices and resources to draw upon for translating product goals and articulating recommendation system objectives in those domains.

In general, it seems that some public service values are easier to operationalise in the context of recommendation systems than others, such as diversity and universality. These values get privileged over others, such as accountability, in the development of recommendation systems, as they are the easiest to translate through from the overarching set of organisational values down to the product and feature objectives.

Metrics

Public service media organisations have struggled to operationalise their complex public service values into specific metrics. There seem to be three broad responses to this:

  1. Fall back on established engagement metrics, e.g. click-through rate and watch time, often with additional quantitative measures of the diversity of audience content consumption.
  2. The above approach combined with attempts to create crude numerical measures (e.g. a score from 1 to 5) of ‘public service value’ for pieces of content, often reducing complex values to a single number subjectively judged by journalists, then measuring the consumption of content with a ‘high’ public service value score.
  3. Try to indirectly optimise for public service value by making their metrics the satisfaction of editorial stakeholders, whose preferences are seen as the best ‘ground truth’ proxy for public service value. Then optimise for lists of recommendations which are seen to have high public service value by editorial stakeholders.

Karin van Es found that, as of 2017, the European Broadcasting Union and the Dutch public service media organisation NPO evaluated pilot algorithms using the same metrics found in commercial systems i.e. stream starts and average‐minute ratings.[footnote]van Es, K. F. (2017). ‘An Impending Crisis of Imagination : Data‐Driven Personalization in Public Service Broadcasters’. Media@LSE. Available at: https://dspace.library.uu.nl/handle/1874/358206[/footnote] As van Es notes, these metrics are a proxy for audience retention and even if serving diverse content was an explicit goal in designing the system, the chosen metrics reflect – and will ultimately lead to – a focus on engagement over diversity.

Therefore, despite different stated goals, the public service media use of recommendation systems ends up optimising for similar outcomes as private providers.

By now, most public service media organisations using recommendation systems also have explicit metrics for diversity, although there is no single shared definition of diversity across the different organisations, nor is there one single metric used to measure the concept.

However, most quantitative metrics for diversity in the evaluation of public service media recommendation systems focus on diversity in terms of audience exposure to unique pieces of content or to categories of content, rather than on the representation of demographic groups and viewpoints across the content audiences are exposed to.[footnote]This was generally attributed by interviewees to a combination of a lack of metadata to measure the representativeness within content and assumption that issues of representation within content were better dealt with at the point at which content is commissioned, so that the recommendation systems have diverse and representative content over which to recommend.[/footnote]

Some aspects of diversity, as Hildén observes, are easier to define and ‘to incorporate into a recommender system than others. For example, genres and themes are easy to determine at least on a general level, but questions of demographic representation and the diversity of ideas and viewpoints are far more difficult as they require quite detailed content tags in order to work. Tagging content and attributing these tags to users might also be politically sensitive especially within the context of news recommenders’.[footnote]Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote]

Commonly used metrics for diversity include intra-list diversity, i.e. the average difference between each pair of items in a list of recommendations and inter-list diversity, i.e. the ratio of items recommended to total items recommended across all the lists of recommendations.

Some public service media organisations are experimenting with more complex measures of exposure diversity. For example, Koen Muylaert at Belgian VRT explained how they measure an ‘affinity score’ for each user for each category of content, e.g. your affinity with documentaries or with comedy shows, which increases as you watch more pieces of content in that category.[footnote]Interview with Koen Muylaert, Project Lead, VRT data platform and data science initiative, Vlaamse Radio- en Televisieomroeporganisatie (VRT) (2021).[/footnote] VRT then measures the diversity of content that each user consumes by looking at the difference between a user’s affinity scores for different categories.[footnote]By measuring the entropy of the distribution of affinity scores across categories, and trying to improve diversity by increasing that entropy.[/footnote] RT see this method of measuring diversity as valuable because they can explain it to others and measure it across users over time, to track how new iterations of their recommendation system increase users’ exposure to diverse content.

To improve on this, some public service media organisations have tried to implement ‘public service value’ as an explicit metric in evaluating their recommendation systems. NPO, for example, ask a panel of 1,500 experts and ordinary citizens to assess the public value of each piece of content, including the diversity of actors and viewpoints represented in the content, and then ask those panellists to assign a single ‘public value’ from 1 to 100 to all pieces of content on their on-demand platform. They then calculate an average ‘public value’ score for the consumption history of each user. According to Sara van der Land, Digital Innovation Advisor at NPO, their target is to make sure that the average ‘public value’ score of every user rises over time.[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (2021).[/footnote]

At the moment, they are only specifically focusing on optimising for that metric within a specific ‘public value’ recommendations section within their wider on-demand platform, which is a mixture of recommendations based on user engagement and  the ‘public value’ of the content. However, through experiments, they found there was a trade-off between optimising for ‘public value’ and viewership, as noted by Arno van Rijswijk, Head of Data & Personalization at NPO:

‘When we’re focusing too much on the public value, we see that the percentage of people that are watching the actual content from the recommender is way lower than when you’re using only the collaborative filtering algorithm […] So when you are focusing more on the relevance then people are willing to watch it. And when you’re adding too much weight on the public values, people are not willing to watch it anymore.’

This resulted in them choosing to have a ‘low ratio’ of public value content to engaging content, making explicit the choice that public service media organisations often do and have to make between audience retention and other public service values like diversity, at least over the short-term these metrics measure.

Others, when faced with the inadequacy of conventional engagement and diversity metrics, have tried to indirectly optimise for public service value by making their metrics the satisfaction of editorial stakeholders, whose preferences are seen as the best ‘ground truth’ proxy for public service value.

In the early stages of developing an article-to-article news recommendation system in 2018,[footnote]The Datalab team was experimenting with and evaluating a number of approaches using a combination of content and user interaction data, such as neural network approaches that combine both content and user data as well as collaborative filtering models based only on user interactions.[/footnote] the BBC Datalab initially used a number of quantitative metrics for its offline evaluation.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote]

They evaluated these using offline metrics, with proxies for engagement, diversity and relevance to audiences, including:

  • hit rate, i.e. whether the list of recommended articles includes an article a user did in fact view within 30 minutes of viewing the original article
  • normalised discounted cumulative gain, i.e. how relevant the recommended articles were assumed to be to the user, with a higher weighting for the relevance of articles higher up in the list of recommendations
  • intra-list diversity, i.e. the average difference between every pair of articles in a list of recommendations
  • inter-list diversity, i.e. the ratio of unique articles recommended to total articles recommended across all the lists of recommendations
  • popularity-based surprisal, i.e. how novel the articles recommended were
  • recency, i.e. how old the articles recommended were when shown to the user.

However, they found that performance on these metrics didn’t match the editorial teams’ priorities. When they tried to instead operationalise into metrics what public service value meant to the editors,  existing quantitative metrics were unable to capture editorial preferences and creating new ones was not straightforward. As Alessandro Piscopo, Lead Data Scientist, BBC Datalab notes:[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

‘We did notice that in some cases, one of the recommender prototypes was going higher in some metrics and went to editorial and [they would] say well we just didn’t like it […] Sometimes it was just comments from editorial world, we want to see more depth. We want to see more breadth. Then you have to interpret what that means.’

This difficulty in finding appropriate metrics led to the Datalab team changing their primary method of evaluation, from offline evaluation to user studies with BBC editorial staff, which they called ‘subjective evaluation’.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

In this approach, they asked editorial staff to score each list of articles generated by the recommendation systems as either: unacceptable, inappropriate, satisfactory or appropriate. The editors were then prompted to describe what properties they considered in choosing how appropriate the recommendations were. The development team would then iterate the recommendation system based on the scoring and written feedback along with discussion with editorial about the recommendation.

Early in the process, the Datalab team agreed with editorial what percentage of each grade they were aiming for, and so what would be a benchmark for success in creating a good recommendation system. In this case, the editorial team decided that they wanted:[footnote]Piscopo, A. (2021); Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

  1. No unacceptable recommendations, on the basis that any unacceptable recommendations would be detrimental to the reputation of the BBC.
  2. Maximum 10% inappropriate recommendations.

This change of metrics meant that the evaluation of the recommendation system, and the iteration of the system as a result, was optimising for the preferences of the editorial team, over imperfect measures of audience engagement, relevance and diversity. The editors are seen as the most reliable ‘source of truth’ for public service value, in lieu of better quantitative metrics.

Methods

Public service media often rely on internal user studies with their own staff as an evaluation method during the pre-deployment stage of recommendation system development. For example, Greg Detre, ex-Chief Data Scientist at Channel 4, said that when developing a recommendation system for All 4 in 2016, they would ask staff to subjectively compare the output of two recommendation systems side by side, based on the staff’s understanding of Channel 4’s values:

‘So we’re making our recommendations algorithms fight, “Robot Wars” style, pick the one that you think […] understood this view of the best, good recommendations are relevant and interesting to the viewer. Great recommendations go beyond the obvious. Let’s throw in something a little unexpected, or showcase the Born Risky programming that we’re most proud of, [clicking the] prefer button next to the […]one you like best […] Born Risky, which was one of the kind of Channel Four cultural values for like, basically being a bit cheeky. Going beyond the mainstream, taking a chance. It was one of, I think, a handful of company values.’[footnote]Interview with Greg Detre, ex-Chief Data Scientist, Channel 4 (2021).[/footnote]

Similarly, when developing a recommendation system for BBC Sounds, the BBC Datalab decided to use a process of qualitative evaluation. BBC Sounds uses a factorisation machine approach, which is a mixture of content matching and collaborative filtering. This uses your listening history, metadata about the content and other users’ listening history to make recommendations in two ways:

  1. It recommends items that have similar metadata to items you have already listened to.
  2. It recommends items that have been listened to by people with otherwise similar listening histories.

When evaluating this approach, BBC compared the new factorisation machine recommendation system head-to-head with the existing external provider’s recommendations.

They recruited 30 BBC staff members under the age of 35 to be test users.[footnote]Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote] They then showed these test users two sets of nine recommendations side by side. One set was provided by the current external provider’s recommendation system, and the other set was provided by the team’s internal factorisation machine recommendation system. The users were not told which system had produced which set of recommendations, and had to choose whether they preferred ‘A’ or ‘B’, or ‘both’ or ‘neither’, and then explain their decision why in words.

Over 60% of test users preferred the recommendation sets provided by the internal factorisation machine.[footnote]Al-Chueyr Martins, T. (2021).[/footnote] This convinced the stakeholders that the system should move into production and A/B testing, and helped editorial teams get hands-on experience evaluating automated curations, increasing their confidence in the recommendation system.

Similarly, when later deploying the recommendation system to create personalised sorting system for feature items, the Datalab team held a number of digital meetings with editorial staff, showing them the personalised and non-personalised featured items side-by-side. The Datalab then got feedback from the editors on which they preferred.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote] This approach allowed them to more directly capture internal staff preferences and manually step towards meeting those preferences. However, the team acknowledged its limitations upfront, particularly in terms of scale.[footnote]Interview with Alessandro Piscopo.[/footnote] Editorial teams and other internal staff only have so much capacity to judge recommendations, and thus would struggle to assess every edge case or judge recommendations, if every recommendation changed depending on the demographics of the audience member viewing it. 

Once the recommendation systems are deployed to a live environment, i.e. accessible by audiences on their website or app, public service media all have some form of online evaluation in place, most commonly in the form of A/B testing in which viewers are given two different recommendations to choose from.

Channel 4 used online evaluation in the form of A/B testing to evaluate the recommendation system used by their video-on-demand service, All 4 Greg Detre noted that:

‘We did A/B test it eventually. And it didn’t show a significant effect. That said [Channel 4] had an already somewhat good system in place. That was okay. And we were very constrained in terms of the technical solutions that we were allowed, there were only a very, very limited number of algorithms that we were able to implement, given the constraints that have already been agreed when I got there. And so as a result, the solution we came up with was, you know, efficient in terms of it was fast to compute in real time, and easy to sort of deploy, but it wasn’t that great… I think perhaps it didn’t create that much value.’[footnote]Interview with Greg Detre, ex-Chief Data Scientist, Channel 4 (2021).[/footnote]

BBC Datalab also used A/B testing in combination with continued user studies and behavioural testing. By April/May 2020, editorial had given sign-off and the recommendation system was deemed ready for initial deployment.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

During deployment, the team took a ‘failsafe approach’ with weekly monitoring of the live version of the recommendation system by editorial staff. This included further subjective evaluation described above and behavioural tests. In these behavioural tests, developers use a list of pairs of inputs and desired outputs, comparing the output of the recommendation system with the desired output for each given input.[footnote]See: BBC. RecList. GitHub. Available at: https://github.com/bbc/datalab-reclist; Tagliabue, J. (2022). ‘NDCG Is Not All You Need’. Towards Data Science. Available at: https://towardsdatascience.com/ndcg-is-not-all-you-need-24eb6d2f1227[/footnote]

After deployment, there was still a need to understand the effect and success of the recommendation systems. This took the form of A/B testing the live system. This included measuring the click-through rate on the recommended articles. However, members of the development team noted it was only a rough proxy for user satisfaction and were working to go beyond click-through rate.

Ultimately at the post-deployment stage, the success of the recommendation system is determined by the product teams, with input by development teams in the identification of appropriate metrics. It is editorial considerations that are central to product teams decide which metrics they think they are best suited to evaluate for.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

Once the system reaches the stage of online evaluation, these methods can only tell public service media whether the recommendation system was worthwhile after it is has already been built and considering the time and resources invested in building it. Therefore the evaluation becomes about whether to continue to use and maintain the system given the operating costs versus the costs involved in removing or replacing it. This can mean even systems that only provide limited value to the audience or to the public service media organisation will remain in use in this phase of evaluation.

How could evaluations be done differently?

In this section, we explore how the objectives, metrics and methods for evaluating recommendation systems could be done differently by public service media organisations.

Objectives

Some public service media organisations could benefit from more explicitly drawing a connection from their public service values to the organisational and product goals and finally to the recommendation system itself, showing how each level links to the next. This can help prevent value drift as goals go through several levels of interpretation and operationalisation, and help contextualise the role of the recommendation system in achieving public value within the wider process of content delivery.

More explicitly connecting these objectives can help organisations to recognise that, while a product as a whole should achieve public service objectives, a recommendation system doesn’t need to achieve every objective in isolation. While a recommendation system’s objectives should not be in conflict with the higher level objectives, they may only need to achieve some of those goals (e.g. its primary purpose might be to attract and engage younger audiences and thus promote diversity and universality). Therefore, its contribution to the product and organisational objectives should be seen in the context of the overall audience experience and the totality of the content an individual user interacts with. Evaluating against the recommendation system’s feature-level objectives alone is not enough to know whether a recommendation system is also consistent with product and organisational objectives.

Audience involvement in goal-setting

Another area worthy of further exploration is providing greater audience input and control over the objectives and therefore the initial system design choices. This could involve eliciting individual preferences from a panel of audience members and then working with staff to collaboratively trade-off and explicitly set different weighting for different objectives of the system. This should take place as part of a broader co-design approach at the product level. This is because the evaluation process for a recommendation system should include the option to say a recommendation system is not the most appropriate tool for achieving the higher-level objectives of the product and providing the outcomes the staff and the audiences want from the product, rather than constraining audiences to just choose between different versions of a recommendation system.

Making safeguards an explicit objective in system evaluation

A final area worthy of exploration is building in system safeguards like accountability, transparency and interpretability as explicit objectives in the development of the system, rather than just as additional governance considerations. Some interviewees suggested making considerations such as interpretability a specific objective in evaluating recommendation systems. By explicitly weighing those considerations against other objectives and attempting to measure the degree of interpretability or transparency, it would ensure greater salience of those safeguards in the selection of systems.[footnote]Interview with Greg Detre, ex-Chief Data Scientist, Channel 4 (2021).[/footnote]

Metrics

More nuanced metrics for public service value

If public service media organisations want to move beyond optimising for a mix of engagement and exposure diversity in their recommendation systems, then they will need to develop better metrics to measure public service value. As we’ve seen above, some are already moving in this direction with varying degrees of success, but more experimentation and learning will be required.

When creating metrics for public service value, it will be important to disambiguate between different meanings of ‘public service value’. A public service media organisation cannot expect to have one quantitative measure of ‘public service value’, which conflates a number of priorities that can be in tension with one another.

One approach would be to explicitly break each public service value down into separate metrics for universality, independence, excellence, diversity, accountability and innovation, and most likely sub-values within those. This could help public service media developers to clearly articulate the components of each value and make it explicit how they are weighted against each other. However, quantifying concepts like accountability and independence can be challenging to do, and this approach may struggle to work in practice. More experimentation is needed.

The most promising approach may be to adopt more subjective evaluations of recommendation systems. This approach recognises that ‘public service value’ is going to be inherently subjective and uses metrics which reflect that. Qualitative metrics based on feedback from individuals interacting with the recommendation system can let developers balance the tensions between different aspects of public service value. This places less of a burden on developers to weight those values themselves, which they might be poorly suited to, and can accommodate different conceptions of public service value from different stakeholders.

However, subjective evaluations do have their limits. They are only able to evaluate a tiny subset of the overall recommendations, and will only capture the subjective evaluation of features appearing in that subset. These evaluations may miss features that were not present in the content evaluated, or which are only able to be observed in aggregate over some wider set of recommendations. These challenges can be mitigated by broadening subjective evaluations to a more representative sample of the public, but that may raise other challenges around the costs of running these evaluations at that scale.

More specific metrics

In a related way, evaluation metrics could be improved by greater specificity and explicitness about what concept the metric is trying to measure and therefore explicitness about how different interpretations of the same high-level concept are weighted.[footnote]van Es, K. F. (2017). ‘An Impending Crisis of Imagination : Data‐Driven Personalization in Public Service Broadcasters’. Media@LSE. Available at: https://dspace.library.uu.nl/handle/1874/358206[/footnote] In particular, public service media organisations could be more explicit about the kind of diversity they want to optimise, e.g. unique content viewed, the balance of categories viewed or the representation of demographics and viewpoints across recommendations, and whether they care about each individual’s exposure or exposure across all users.

Longer-term metrics

Another issue identified is that most metrics used in the evaluation of recommendation systems, within public service media and beyond, are short-term metrics, measured in days or weeks, rather than years. Yet at least some of the goals of stakeholders will be longer-term than the metrics used to approximate them. Users may be interested in both immediate satisfaction and in discovering new content so they continue to be informed and entertained in the future. Businesses may both be trying to maximise quarterly profits and also trying to retain users into the future to maximise profits in the quarters to come.

Short-term metrics are not entirely ineffective at predicting long-term outcomes. Better outcomes right now could mean better outcomes months or years down the road, so long as the context the recommendation system is operating in stays relatively stable and the recommendation system itself doesn’t change user behaviour in ways that lead to poorer long-term outcomes.

By definition, long-term consequences take a longer time to occur, and thus there is a longer waiting period between a change in the recommendation system and the resulting change in outcome. A longer period between action and evaluation also means a greater number of confounding variables which make it more challenging to assess the causal link between the change in the system and the change in outcomes.

Dietmar Jannach, Professor at the University of Klagenfurt, highlighted this was a problem across academic and industry evaluations, and that ‘when Netflix changes the algorithms, they measure, let’s see, six weeks, two months to try out different things in parallel and look what happens. I’m not sure they know what happens in the long run.’[footnote]Interview with Dietmar Jannach, Professor, University of Klagenfurt (2021).[/footnote]

Methods

Simulation-based evaluation

One possible method to estimate long-term metrics is to use simulation-based offline evaluation approaches. In this approach, the developers use a virtual environment with a set of content which can be recommended and a user model which simulates the expected preferences of users based on parameters selected by the developers (which could include interests, demographics, time already spent on the product, previous interactions with the product etc.).[footnote]Ie, E., Hsu, C., Mladenov, M. et al. (2019). ‘RecSim: A Configurable Simulation Platform for Recommender Systems’. arXiv. Available at: https://doi.org/10.48550/arXiv.1909.04847[/footnote] This recommendation system then makes recommendations to the user model, which generates a simulated response to that recommendation. The user model can also update its preferences in response to the recommendations it has received, e.g. a user might become more or less interested in a particular category of content, and model the simulated users’ overall satisfaction with the recommendations over time.

This provides some indication of how the dynamics of the recommendation system and changes to it might play out over a long period of time. It can evaluate how users respond to a series of recommendations over time and therefore whether a recommendation system could lead to audience satisfaction or diverse content exposure over a period longer than a single recommendation or user session. However, this approach still has many of the limitations of other kinds of offline evaluation. Historical user interaction data is still required to model the preferences of users, and that data is not neutral because it is itself the product of interaction with the previous system, including any previous recommendation system that was in place.

The user model is also only based on data from previous users, which might not generalise well to new users. Given that many of these recommendation systems are put in place to reach new audiences, specifically younger and more diverse audiences than those who currently use the service, the simulation-based evaluation might lead to unintentionally underserving those audiences and overfitting to existing user preferences.

Furthermore, the simulation can only model the impact of parameters coded into it by the developers. The simulation only reflects the world as a developer understands it, and may not reflect the real considerations users take into account in interacting with recommendation systems, nor the influences on user behaviour beyond the product.

This means that if there are unexpected shocks, exogenous to the recommendation system, that change user interaction behaviour to a significant degree, then the simulation will not take those factors into account. For example, a simulation of a news recommendation system’s behaviour in December 2019 would not be a good source of truth for a recommendation system in operation during the COVID-19 pandemic. The further the simulation tries to look ahead at outcomes, the more vulnerable it will be to changes in the environment that may invalidate its results.

User panels and retrospective feedback

After deployment, asking audiences for informed and retrospective feedback on their recommendations is a promising method for short-term and long-term recommendation system evaluation.[footnote]Stray, J., Adler, S. and Hadfield-Menell, D. (2020), ‘What are you optimizing for? Aligning Recommender Systems with Human Values’, pp. 4–5. Participatory Approaches to Machine Learning ICML 2020 Workshop (July 17). Available at: https://participatoryml.github.io/papers/2020/42.pdf[/footnote] This could involve asking the users to review, rate and provide feedback on a subsection of the recommendations they received over the previous month, in a similar manner to the subjective evaluations undertaken by the BBC Datalab. This would provide development and product teams with much more informative feedback than through A/B testing.

This could be particularly effective in the form of a representative longitudinal user panel which returns to the same audience members at regular intervals to get their detailed feedback on recommendations.[footnote]Stray, J. (2021). ‘Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals’. Partnership on AI. Available at: https://www.partnershiponai.org/beyond-engagement-aligning-algorithmic-recommendations-with-prosocial-goals/[/footnote] Participants in these panels should be compensated for their participations to recognise the contribution they are making to the improvement of the system and ensure long-term retention of participants. This would allow development and product teams to gauge how audience responses change over time, by seeing how they react to the same recommendations months later, to understand how their opinions on that recommendation may have changed over time, including in response to changes to the underlying system over longer periods.

Case studies

Through two case studies, we examine how the differing prioritisation of values in different forms of public service media and the differing nature of the content itself manifests itself in different approaches to recommendation systems. We will focus on the use of recommendation systems across BBC News for news content, and BBC Sounds for audio-on-demand.

Case study 1: BBC News

Introduction

BBC News is the UK’s dominant news provider and one of the world’s most influential news organisations.[footnote]This case study focuses on the parts of BBC News that function as a public service, rather than BBC Global News, the international commercial news division.[/footnote] It reaches 57% of UK adults every week and 456 million globally. Its news websites are the most-visited English language news websites on the internet.[footnote]As of 2021, BBC News on TV and radio reaches 57% of UK adults every week and across all channels, BBC News globally reaches a weekly global audience of 456 million adults., Ssee: BBC Media Centre. (2021). ‘BBC on track to reach half a billion people globally ahead of its centenary in 2022′. BBC Media Centre. Available at: https://www.bbc.co.uk/mediacentre/2021/bbc-reaches-record-global-audience; BBC News is equally influential globally within the domain of digital news. By one measure, the BBC News and BBC World News websites combined are the most-visited English-language news websites, receiving three to four times the website traffic of the New York Times, Daily Mail, or The Guardian, see: Majid, A. (2021). ‘Top 50 largest news websites in the world: Surge in traffic to Epoch Times and other ring-wing sites’. Press Gazette. Available at: https://pressgazette.co.uk/top-50-largest-news-websites-in-the-world-right-wing-outlets-see-biggest-growth/; As of 2021, BBC News Online reaches 45% of UK adults every week, approximately triple the reach of its nearest competitors: The Guardian (17%), Sky News Online (14%) and the MailOnline (14%). Estimates of UK reach are based on a sample 2029 adults surveyed by YouGov (and their partners) using an online questionnaire at the end of January and beginning of February 2021. See: Reuters Institute for Institute for the Study of Journalism. Reuters Institute Digital News Report 2021, 10th Edition, p. 62. Available at: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-06/Digital_News_Report_2021_FINAL.pdf[/footnote] For most of the time that BBC News has had an online presence, it has not used any recommendation systems on its platforms.

In recent years, BBC News has taken a more experimental approach to recommendation systems, with a number of different systems for recommending news content developed, piloted and deployed across the organisation.[footnote]The team initially developed an experimental recommendation system for BBC Mundo, the BBC World Service’s Spanish-language news website. See: Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p.1. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf; These are also live on BBC World Service websites in Russian, Hindi and Arabic and in beta on the BBC News App. See: Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk; Al-Chueyr Martins, T. (2019). ‘Responsible Machine Learning at the BBC’ [presentation]. Available at: https://www.slideshare.net/alchueyr/responsible-machine-learning-at-the-bbc-194466504[/footnote]

Goal

For editorial teams, the goal of adding recommendation systems to BBC News was to augment editorial curation and make it easier to scale on a more personalised level. This addresses challenges relating to editors facing an ‘information overload’ of content to recommend. Additionally, product teams at BBC believed this feature would improve the discoverability of news content for different users.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote]

What did they build?

From around 2019, , a team (which later become part of BBC Datalab) collaborated with a team building out the BBC News app to develop a content-to-content recommendation system. This focused on ‘onward journeys’ from news articles. Partway through each article the recommendation system generated a section that was titled ‘You might be interested in’ (in the language relevant to that news website) that listed four recommended articles.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

Figure 2: BBC News ‘You might be interested in’ section (image courtesy of the BBC)

The recommendation system is combined with a set of business rules which constrain the set of articles that the system recommends content from. The rules aim to ensure ‘sufficient quality, breadth, and depth’ in the recommendations.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

For example, these included:

  • recency, e.g. only selecting content from the past few weeks
  • unwanted content, e.g. content in the wrong language
  • contempt of court
  • elections
  • children-safe content.

In an earlier project, this team had developed an experimental recommendation system for BBC Mundo, the BBC World Service’s Spanish-language news website.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote] Similar recommendation systems are also live on BBC World Service websites in Russian, Hindi and Arabic and in beta on the BBC News App.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk; Al-Chueyr Martins, T. (2019). ‘Responsible Machine Learning at the BBC’ [presentation]. Available at: https://www.slideshare.net/alchueyr/responsible-machine-learning-at-the-bbc-194466504[/footnote]

Figure 3: BBC Mundo recommendation system (image courtesy of the BBC)

Figure 4: Recommendation system on BBC World Service website in Hindi (image courtesy of the BBC)

Criteria (and how they relate to public service values)

The BBC News team eventually settled on a content-to-content recommendation system using a model (called ‘tf-idf’) that encoded article data (like text) and metadata (like the categorical tags that editorial teams gave the article) into vectors. Once articles were represented as vectors, additional metrics could be applied to measure the similarity between them. This enabled the ability to penalise more popular content.[footnote]Crooks, M. (2019). ‘A Personalised Recommender from the BBC’. BBC Data Science. Available at: https://medium.com/bbc-data-science/a-personalised-recommender-from-the-bbc-237400178494[/footnote]

The business rules the BBC used sought to ensure ‘sufficient quality, breadth, and depth’ in the recommendations, which aligns with the BBC’s values around universality and excellence.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

There was also an emphasis on the recommendation system needing to be easy to understand and explain. This can be attributed to BBC News being more risk-averse than other parts of the organisation.[footnote]Piscopo, A. (2021).[/footnote] Given the BBC’s mandate to be a ‘provider of accurate and unbiased information’ and BBC News that staff themselves identify as ‘the product that likely contributes most to its reputation as a trustworthy and authoritative media outlet’.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote] It is unsurprising they would want to pre-empt any accusations of bias for any automated news recommendation system, by making it understandable to audiences.

Evaluation

The Datalab team experimented with a number of approaches using a combination of content and user interaction data.

Initially, they found that a content-to-content approach to item recommendations was more suited to the editorial requirements for the product, and user interaction data was therefore less relevant to the evaluation of the recommender, prompting a shift to a different approach.

As they began to compare different content-to-content approaches, they found that performance in quantitative metrics often didn’t match the editorial teams priorities, and it was difficult to operationalise editorial judgement of public service value into metrics. As Alessandro Piscopo notes: ‘We did notice that in some cases, one of the recommender prototypes was going higher in some metrics and went to editorial and [they would] say well we just didn’t like it.’ And, ‘Sometimes it was just comments from editorial world, we want to see more depth. We want to see more breadth. Then you have to interpret what that means.’[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

The Datalab team chose to take a subjective evaluation-first approach, whereby editors would directly compare and comment on the output of two recommendation systems. This approach allowed them to capture editorial preferences more directly and manually work towards meeting those preferences.

However, the team acknowledged its limitations upfront, particularly in terms of scale.[footnote]Interview with Alessandro Piscopo.[/footnote] They tried to pick articles that would bring up the most challenging cases. However, editorial teams only have so much capacity to judge recommendations, and thus would struggle to assess every edge case or judge every recommendation. This issue would be even more acute if in a future recommendation system, every article’s associated recommendations changed depending on the demographics of the audience member viewing it.

By May 2020, editorial had given sign-off and the recommendation system was deemed ready for initial deployment.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote] During deployment, the team took a ‘failsafe approach’ with weekly monitoring of the live version of the recommendation system by editorial staff, alongside A/B testing measuring the click-through rate on the recommended articles. However, members of the development team noted it was only a rough proxy for user satisfaction and were working to go beyond click-through rate.

Case Study 2: BBC Sounds

Introduction

BBC Sounds is the BBC’s audio streaming and download service for live radio, music, audio-on-demand and podcasts,[footnote]BBC. ‘What is BBC Sounds?’. Available at: https://www.bbc.co.uk/contact/questions/help-using-bbc-services/what-is-sounds[/footnote] replacing the BBC’s previous live and catch-up audio service, iPlayer Radio.[footnote]The BBC Sounds website replaced the iPlayer Radio website in October 2018; the BBC Sounds app was launched in beta in the United Kingdom in June 2018 and made available internationally in September 2020, with the iPlayer Radio app decommissioned for the United Kingdom in September 2019 and internationally in November 2020. See: BBC. (2018). ‘The next major update for BBC Sounds’ Available at: https://www.bbc.co.uk/blogs/aboutthebbc/entries/03e55526-e7b4-45de-b6f1-122697e129d9; BBC. (2018). ‘Introducing the first version of BBC Sounds’, Available at: https://www.bbc.co.uk/blogs/aboutthebbc/entries/bde59828-90ea-46ac-be5b-6926a07d93fb; BBC. (2020). ‘An international update on BBC Sounds and BBC iPlayer Radio’. Available at: https://www.bbc.co.uk/blogs/internet/entries/166dfcba-54ec-4a44-b550-385c2076b36b; BBC Sounds. ‘Why has the BBC closed the iPlayer Radio app?’. Available at: https://www.bbc.co.uk/sounds/help/questions/recent-changes-to-bbc-sounds/iplayer-radio-message[/footnote] A key difference between BBC Sounds and iPlayer Radio is that BBC Sounds was built with personalisation and recommendation as a core component of the product, rather than as a radio catch-up service.[footnote]In May 2019, six months after the launch of BBC Sounds, James Purnell, then Director of Radio & Education at the BBC, said that ‘“The [BBC Sounds] app, for instance, is built for personalisation, but is not yet fully personalised. This means that right now a user sees programmes that have not been curated for them. That is changing, as of this month in fact. By the autumn, Sounds will be highly personalised.’” See: BBC Media Centre. (2019). ‘Changing to stay the same – Speech by James Purnell, Director, Radio & Education, at the Radio Festival 2019 in London.’ Available at: https://www.bbc.co.uk/mediacentre/speeches/2019/bbc.com/mediacentre/speeches/2019/james-purnell-radio-festival/[/footnote]

Goal

The goals of BBC Sounds as a product team are:

  • increase the audience size of BBC Sounds’ digital products
  • increase the demographic breadth of consumption across BBC Sounds’ products, especially among the young[footnote]According to David Jones (Executive Product Manager, BBC Sounds, interviewed in 2021), his top-line KPI is to reach 900,000 members of the British population who are under 35 by March 2022. These numbers are determined centrally by BBC senior managers based on the BBC’s Service Licence for BBC Online and Red Button. See: BBC Trust. (2016). BBC Online and Red Button Service Licence. Available at: http://downloads.bbc.co.uk/bbctrust/assets/files/pdf/regulatory_framework/service_licences/online/2016/online_red_button_may16.pdf [/footnote]
  • convert ‘lighter users’ who only engage a certain number of times a week into regular users
  • enable users to more easily discover content from the more than 50 hours of new audio produced by the BBC on an hourly basis.

Product

BBC Sounds initially used an outsourced recommendation system from a third-party provider. Having knowledge about the inner working of the recommendation systems and the ability to quickly iterate were seen as valuable by the development team, as it proved challenging to request changes to the external provider. The BBC decided it wanted to own the technology and the experience as a whole, and believed they could achieve better value-for-money for TV License-payers by bringing the system in-house. So the BBC Datalab developed a hybrid recommendation system named Xantus for BBC Sounds.

BBC Sounds use a factorisation machine approach, which is a mixture of content matching and collaborative filtering. This uses your listening history, metadata about the content, and other users’ listening history to make recommendations in two ways:

  1. It recommends items that have similar metadata to items you have already listened to.
  2. It recommends items that have been listened to by people with otherwise similar listening histories.

Figure 5: BBC Sounds’ ‘Recommended For You’ section (image courtesy of the BBC)

Figure 6: ‘Music Mixes’ on BBC Sounds (image courtesy of the BBC)

Criteria (and how they relate to public service media values)

On top of this factorisation machine approach are a number of business rules. Some rules apply equally across all users and constrain the set of content that the system recommends content from, e.g. only selecting content from the past few weeks. Other rules apply after individual user recommendations have been generated and filter the recommendations based on specific information about the user, e.g. not recommending content the user has already consumed.

As of summer 2021, the business rules used in the BBC Sounds’ Xantus recommendation system were:[footnote]Note that the business rules are subject to change, and so the rules given here are intended to be an indicative example only, representing a snapshot of practice at one point in time. See: Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote]

Non-personalised business rules Personalised business rules
Recency Already seen items
Availability Local radio (if not consumed previously)
Excluded ‘master brands’, e.g. particular radio channels[footnote]Smethurst, M. (2014). Designing a URL structure for BBC programmes. Available at: https://smethur.st/posts/176135860[/footnote] Specific language (if not consumed previously)
Excluded genres Episode picking from a series
Diversification (1 episode per brand/series)

Governance

Editorial and others help define the business rules for Sounds.[footnote]Interview with Kate Goddard, Senior Product Manager, BBC Datalab (2021).[/footnote] The product team adopted the business rules from the incumbent system and then checked whether they made sense in the context of the new system. They constantly review the business rules. Kate Goddard, Senior Product Manager, BBC Datalab, noted that: 

‘Making sure you are involving [editorial values] at every stage and making sure there is strong collaboration between data scientists in order to define business rules to make sure we can find good items. For instance with BBC Sounds you wouldn’t want to be recommending news content to people that’s more than a day or two old and that would be an editorial decision along with UX research and data. So, it’s a combination of optimizing for engagement while making sure you are working collaboratively with editorial to make sure you have the right business rules in there.’

Evaluation

To decide whether to progress further with the prototype, the team decided to use a process of subjective evaluation. The Datalab team showed recommendations generated by both the new factorisation machine recommendation system head-to-head with the existing external provider’s recommendations and got feedback from the editors on which of the two they liked.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote] The factorisation machine recommendation system was preferred by the editors and so was deployed into the live environment.

After deployment, UX testing, qualitative feedback and A/B testing were used to fine-tune the system. In their initial A/B tests, they were optimising for engagement, looking at click-throughs, play throughs and play completes. In these tests, they were able to achieve:[footnote]Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote]

  • 59% increase in interactions in the ‘Recommended for You’ rail
  • 103% increase in interactions for under-35s.

 

Outstanding questions and areas for further research and experimentation

Through this research we have built up an understanding of the use of recommendation systems in public service media in the BBC and Europe, as well as the opportunities and challenges that arise. This section offers recommendations to address some of the issues that have been raised and indicate areas beyond the scope of this project that merit further research. These recommendations are directed at the research community, including funders, regulators and public service media organisations themselves.

There is an opportunity for public service media to define a new, responsible approach to the development of recommendation systems that work to the benefit of society as a whole and offer an alternative to the paradigm established by big technology platforms. Some initiatives that are already underway could underpin this, such as the BBC’s Databox project with the University of Nottingham and subsequent work on developing personal data stores.[footnote]Sharp, E. (2021). ‘Personal data stores: building and trialling trusted data services’. BBC R&Desearch & Development. Available at: https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research; Leonard, M. and Thompson, B. (2020), ‘Putting audience data at the heart of the BBC’. BBC Research & Development. Available at: https://www.bbc.co.uk/rd/blog/2020-09-personal-data-store-privacy-services[/footnote] These personal data stores primarily aim to address issues around data ownership and portability, but could also act as a foundation for more holistic recommendations across platforms and greater user control over the data used in recommending them content.

But in making recommendations to public service media we recognise the pressures they face. In the course of this project, a real-terms cut to BBC funding has been announced and the corporation has said it will have to reduce the services it offers in response.[footnote]Hansard – Volume 707: debated on Monday 17 January 2022. ‘BBC Funding’. UK Parliament. Available at: https://hansard.parliament.uk//commons/2022-01-17/debates/7E590668-43C9-43D8-9C49-9D29B8530977/BBCFunding[/footnote] We acknowledge that, in the absence of new resources and faced with the reality of declining budgets, public service media organisations would have to cut other activities to carry out our suggestions. 

We therefore encourage both funders and regulators to support organisations to engage in public service innovation as they further explore the use of recommendation systems. Historically the BBC has set a precedent for using technology to serve the public good, and in doing so brought soft power benefits to the UK. As the UK implements its AI strategy, it should build on this strong track record and comparative advantage and invest in the research and implementation of responsible recommendation systems.

1. Define public service value for the digital age

Recommendation systems are designed to optimise against specific objectives. However, the development and implementation of recommendation systems is happening at a time when the concept of public service value and the role of public service media organisations in the wider media landscape is rapidly changing.

Although we make specific suggestions for approaches to these systems, unless public service media organisations are clear about their own identities and purpose, it will be difficult for them to build effective recommendation systems. It is essential that public service media revisit their values in the digital age, and articulate their role in the contemporary media ecosystem.

In the UK, significant work has already been done by Ofcom as well as the Digital, Culture, Media and Sport Select Committee to identify the challenges public service media face and offer new approaches to regulation. Their recommendations must be implemented so that public service media can operate within a paradigm appropriate to the digital age and build systems that address a relevant mission.

2. Fund a public R&D hub for recommendation systems and responsible recommendation challenges

There is a real opportunity to create a hub for the research and development of recommendation systems that are not tied to industry goals. This is especially important as recommendation systems are one of the prime use cases of behaviour modification technology, but research into it is impaired by lack of access to interventional data.[footnote]Greene, T., Martens, D. and Shmueli, G. (2022). ‘Barriers to academic data science research in the new realm of algorithmic behaviour modification by digital platforms’. Nature Machine Intelligence, 4, pp.323–330. Available at: https://www.nature.com/articles/s42256-022-00475-7[/footnote]

Existing academic work on responsible recommendations could be brought together into a public research hub on responsible recommendation technology, with the BBC as an industry partner. It could involve developing and deploying methods for democratic oversight of the objectives of recommendation systems and the creation and maintenance of useful datasets for researchers outside of private companies.

We recommend that the strategy for using recommendation systems in public service media should be integrated within a broader vision to make this part of a publicly accountable infrastructure for social scientific research.

Therefore, as part of UKRI’s National AI Research and Innovation (R&I) Programme, set out in the UK AI Strategy, it should fund the development of a public research hub on recommendation technology. This programme could also connect with the European Broadcasting Union’s PEACH project, which has similar goals and aims.

Furthermore, one of the programme’s aims is to create challenge-driven AI research and innovation programmes for key UK priorities. The arrival of Netflix in 2006 spurred the development of today’s recommendation systems. The UK could create new challenges to spur the development of responsible recommendation system approaches  encouraging a better information environment. For example, the hub could release a dataset and benchmark for a challenge on generating automatic labels for a dataset of news items.

3. Publish research into audience expectations of personalisation

There was a striking consensus in our interviews with public service media teams working on recommendation systems that personalisation was both wanted and expected by the audience. However, we were offered little evidence to support this belief. Research in this area is essential for a number of reasons.

  1. Public service media exist to serve the public. They must not assume they are acting in the public interest without any evidence of their audience’s views towards recommendation systems.
  2. The adoption of recommendation systems without evidence that they are either wanted or needed by the public raises the risk that public service media are blindly following a precedent set by commercial competitors, rather than defining a paradigm aligned to their own missions.
  3. Public service media have limited resources and multiple demands. It is not strategic to invest heavily in the development and implementation of these systems without an evidence base to support their added value.

If research into user expectations of recommendation systems does exist, the BBC should strive to make this public.

4. Communicate and be transparent with audiences

Although most public service media organisations profess a commitment to transparency about their use of recommendation systems, in practice there is limited effective communication with their audiences about where and how recommendation systems are being used.

What communication there is tends to adopt the language of commercial services, for example talking about ‘relevance’. In our interviews, we found that within teams there was no clear responsibility for audience communication. Staff often assumed that few people would want to know more, and that any information provided would only be accessed by a niche group of users and researchers.

However, we argue that public service organisations have a responsibility to explain their practices clearly and accessibly and to put their values of transparency into practice. This should not only help retain public trust at a time when scandals from big technology companies have understandably made people view algorithmic

systems with suspicion, but also develop a new, public service narrative around the use of these technologies.

Part of this task is to understand what a meaningful explanation of a recommendation system looks like. Describing the inner workings of algorithmic decision-making is not only unfeasible but probably unhelpful. However, they can educate audiences about the interactive nature of recommendation systems. They can make salient the idea that when consuming content through a recommendation system, they are in effect ‘voting with their attention’. Their viewing behaviour is something private, but at the same time affects what the system learns and what others will view.

Public service media should invest time and research into understanding how to usefully and honestly articulate their use of recommendation systems in ways that are meaningful to their audiences.

This communication must not be one-way. There must be opportunities for audience members to give feedback and interrogate the use of the systems, and raise concerns where things have gone wrong.

5. Balance user control with convenience

However, transparency alone is not enough. Giving users agency over the recommendations they see is an important part of responsible recommendation. Simply giving users direct control over the recommendation system is an obvious and important first step, but it is not a universal solution.

Some interviewees pointed to evidence that the majority of users do not choose to use these controls and instead opt for the default setting. But there is also evidence that younger users are beginning to use a variety of accounts, browsers and devices, with different privacy settings and aimed at ‘training’ the recommendation algorithm to serve different purposes.

Many public service media staff we spoke with described providing this level of control. Some challenges that were identified include the difficulty of measuring how well the recommendations meet specific targets, as well as risks relating to the potential degradation of the user experience.

Firstly, some of our interviewees noted how it would be more difficult to measure how well the recommendation system is performing on dimensions such as diversity of exposure, if individual users were accessing recommendations through multiple accounts. Secondly, it was highlighted how recommendation systems are trained on user behavioural data, and therefore giving more latitude to users to intentionally influence the recommendations may give rise to negative dynamics that degrade the overall experience for all users over the long run, or even expose the system to hostile manipulation attempts.

While these are valid concerns, we believe that there is some space for experimentation, between giving users no control and too much control. For example, users could be allowed to have different linked profiles, and key metrics could be adjusted to take into account the content that is accessed across these profiles. Users could be more explicitly shown how to interact with the system to obtain different styles of recommendations, making it easy to maintain different ‘internet personas’. Some form of ongoing monitoring for detecting adversarial attempts at influencing recommendation choices could also be explored. We encourage the BBC to experiment with these practices and publish research on their findings.

Another trial worth exploring is allowing ‘joint’ user recommendation profiles, where the recommendations are made based on multiple individuals’ aggregated interaction history and preferences, such as a couple, a group of friends or a whole community. This would allow users to create their own communities and ‘opt-in’ to who and what influenced their recommendations in an intuitive way. This could enabled by the kind of personal data stores being explored by the BBC and Belgian VRT.[footnote]Sharp, E. (2021). ‘Personal data stores: building and trialling trusted data services’. BBC Research & Development. Available at: https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research[/footnote]

There are multiple interesting versions of this approach. In one version, you would see recommendations ‘meant’ for others and know it was a recommendation based on their preferences. In another version, users would simply be exposed to a set of unmarked recommendations based on all their combined preferences.

Another potential approach to pilot would be to create different recommendation systems that coexist and allow users to choose which they want to use or offer different ones at different times of day or when significant events happen (e.g. switching to a different recommendation system during the run up to an election or overriding them with breaking news). Such an approach might offer a chance to invite audiences to play a more active part in the formulation of recommendations, and open up opportunities for experimentation, which would need to be balanced against the additional operational costs that would be introduced.

6. Expand public participation

Beyond transparency or individual user choice and control over the parameters of the recommendation systems already deployed, users and wider society could also have greater input during the initial design of the recommendation systems and in the subsequent evaluations and iterations.

This is particularly salient for public service media organisations, as unlike private companies, which are primarily accountable to their customers and shareholders, public service media organisations see themselves as having a universal obligation to wider society. Therefore, even those who are not direct consumers of content should have a say in how public service media recommendations are shaped.

User panels

One approach to this, suggested by Jonathan Stray, is to create user panels that provide informed, retrospective feedback about live recommendation systems.[footnote]Stray, J. (2021). ‘Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals’. Partnership on AI. Available at: https://www.partnershiponai.org/beyond-engagement-aligning-algorithmic-recommendations-with-prosocial-goals/[/footnote] These would involve paying users for detailed, longitudinal data about their experiences with the recommendation system. 

This could involve daily questions about their satisfaction with their recommendations, or monthly reviews where users are shown a summary of their recommendations and interaction with them. They could be asked how happy they are with the recommendations, how well do their interests are served and how informed they feel.

This approach could provide new, richer and more detailed metrics for developers to optimise the recommendation systems against, which would potentially be more aligned with the interests of the audience. It might also open up the ability to try new approaches to recommendation, such as reinforcement learning techniques that optimise for positive responses to daily and monthly surveys.

Co-design

A more radical approach would be to involve audience communities directly in the design of the recommendation system. This could involve bringing together representative groups of citizens, analogous to citizens’ assemblies, which have direct input and oversight of the creation of public service media recommendation systems, creating a third core pillar in the design process, alongside editorial teams and developer teams. This is an approach that has been proposed by the Media Reform Coalition Manifesto for a People’s Media.[footnote]Grayson, D. (2021). Manifesto for a People’s Media. Media Reform Coalition. Available at: https://drive.google.com/file/u/1/d/1_6GeXiDR3DGh1sYjFI_hbgV9HfLWzhPi/view?usp=embed_facebook[/footnote]

These would allow citizens to ask questions of the editors and developers about how the system is intended to work, what kinds of data inform those systems and about what alternative approaches exist (including not using recommendation systems at all). These groups could then set out their requirements for the system and iteratively provide feedback on versions of the system as its developed, in the same way that editorial teams have, for example, by providing qualitative feedback on recommendations provided by different systems.

7. Standardise metadata

Each public service media organisation should have a central function that standardises the format, creation and maintenance of metadata across the organisation.

Inconsistent, poor quality metadata was consistently highlighted as a barrier to developing recommendation systems in public service media, particularly in developing more novel approaches that go beyond user engagement and try to create diverse feeds of recommendations.

Institutionalising the collection of metadata and making access to it more transparent across each individual organisation is an important investment in public service media’s future capabilities.

We also think it’s worth exploring how much metadata can be standardised across European media organisations. The European Broadcasting Union (EBU)’s ‘A European Perspective’ project is already trialling bringing together content from across different European public service media organisations onto a single platform, underpinned by the EBU’s PEACH system for recommendations and the EuroVOX toolkit for automated language services. Further cross-border collaboration could be enabled by sharing best practices among member organisations.

8. Create shared recommendation system resources

Some public service media organisations have found it valuable to have access to recommendations-as-a-service provided by the European Broadcasting Union (EBU) through their PEACH platform. This reduces the upfront investment required to start using the recommendation system and provides a template for recommendations that have already been tested and improved upon by other public service media organisations.

One area identified as valuable for the future development of PEACH was greater flexibility and customisation. For example, some asked for the ability to incorporate different concepts of diversity into the system and control the relative weighting of diversity. Others would have found it valuable to be able to incorporate more information on the public service value of content into the recommendations directly.

We also heard from several interviewees that they would value a similar repository for evaluating recommendation systems on metrics valued by public service media, including libraries in common coding languages, e.g. Python, and a number of worked examples for measuring the quality of recommendations. The development of this could be led by the EBU or a single organisation like the BBC.

This would help systemise the quantifying of public service values and collate case studies of how values are quantified. This would be best as an open-source repository that others outside of public service media could learn from and draw on. This would:

  • lower costs and thus easier to justify investment
  • reduce the technical burden, making it easier for newer and smaller teams to implement
  • point to how they’re used elsewhere, reducing the burden of proof and making the alternative approach appear less risky
  • provide source of existing ideas, meaning the team have to spend less time either coming up with their own (which might be suboptimal and discover that for themselves) or spend time wading through the technical literature.

Future public service media recommendation systems projects, and responsible recommendation system development more broadly, could then more easily evaluate their system against more sophisticated metrics than just engagement.

9. Create and empower integrated teams

When developing and deploying recommendation systems, public service media organisations need to integrate editorial and development teams from the start. This ensures that the goals of the recommendation system are better aligned with the organisation’s goals as a whole and ensures the systems augment and complement existing editorial expertise.

An approach that we have seen applied successfully is having two project leads, one with an editorial background and one with a technical development background, who are jointly responsible for the project.

Public service media organisations could also consider adopting a combined product and content team. This can ensure that both editorial and development staff have a shared language and common context, which can reduce the burden of communication and help staff feel like they have a common purpose rather than competition between the different teams.

Methodology

To investigate our research questions, we adopted two main methods:

  1. Literature review
  2. Semi-structured interviews

Our literature review surveyed current approaches to recommendation systems, the motivations and risks in using recommendation systems, and existing approaches and challenges in evaluating recommendation systems. We then focused in on reviewing existing public information on the operation of recommendation systems across European public service media, and the existing theorical work and case studies on the ethics implications of the use of those systems.

In order to situate the use of these systems, we also surveyed the history and context of public service media organisations, with a particular focus on previous technological innovations and attempts at measuring values.

We also undertook 29 semi-structured interviews with 8 current and 3 former BBC staff members, across engineering, product and editorial, 9 interviews with current and former staff from other public service media organisations and the European Broadcasting Union, and 9 further interviews with external experts from academia, civil society and regulators.

Partner information and acknowledgements

This work was undertaken with support from the Arts and Humanities Research Council (AHRC).

This report was co-authored by Elliot Jones, Catherine Miller and Silvia Milano, with substantive contributions from Andrew Strait.

We would like to thank the BBC for their partnership on this project, and in particular, the following for their support, feedback and cooperation throughout the project:

  • Miranda Marcus, Acting Head, BBC News Labs
  • Tristan Ferne, Lead Producer, BBC R&D
  • George Wright, Head of Internet Research and Future Services, BBC R&D
  • Rhia Jones, Lead R&D Engineer for Responsible Data-Driven Innovation

We would like to thank the following colleagues for taking the time to be interviewed for this project:

  • Alessandro Piscopo, Principal Data Scientist, BBC Datalab
  • Anna McGovern, Editorial Lead for Recommendations and Personalisation, BBC
  • Arno van Rijswijk, Head of Data & Personalization, & Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep
  • Ben Clark, Senior Research Engineer, Internet Research & Future Services, BBC Research & Development
  • Ben Fields, Lead Data Scientist, Digital Publishing, BBC
  • David Caswell, Executive Product Manager, BBC News Labs
  • David Graus, Lead Data Scientist, Randstad Groep Nederland
  • David Jones, Executive Product Manager, BBC Sounds
  • Debs Grayson, Media Reform Coalition
  • Dietmar Jannach, Professor, University of Klagenfurt
  • Eleanora Mazzoli, PhD Researcher, London School of Economics
  • Francesco Ricci, Professor of Computer Science, Free University of Bozen-Bolzano
  • Greg Detre, Chief Product & Technology Officer, Filtered and former Chief Data Scientist, Channel 4
  • Jannick Kirk Sørensen, Associate Professor in Digital Media, Aalborg University
  • Jonas Schlatterbeck, Head of Content ARD Online & Leiter Programmplanung, ARD
  • Jonathan Stray, Visiting Scholar, Berkeley Center for Human-Compatible AI
  • Kate Goddard, Senior Product Manager, BBC Datalab
  • Koen Muylaert, Head of Data Platform, VRT
  • Matthias Thar, Bayerische Rundfunk
  • Myrna McGregor, BBC Lead, Responsible AI+ML
  • Natalie Fenton, Professor of Media and Communications, Goldsmiths, University of London
  • Nic Newman, Senior Research Associate, Reuters Institute for the Study of Journalism
  • Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio
  • Sébastien Noir, Head of Software, Technology and Innovation, European Broadcasting Union and Dmytro Petruk, Developer, European Broadcasting Union
  • Sophie Chalk, Policy Advisor, Voice of the Listener & Viewer
  • Uli Köppen, Head of AI + Automation Lab, Co-Lead BR Data, Bayerische Rundfunk

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 50