Virtual event

Contact tracing: to centralise or not to centralise, that is the question

This is our second virtual event, exploring the approaches different nation states are taking to using technology to control the spread of COVID-19.

Date and time
1:00pm – 1:45pm, 1 May 2020 (BST)

In this second virtual event, following the publication of our rapid evidence review Exit through the App Store, we explore the approaches different nation states are taking to using technology to control the spread of COVID-19 and re-open the economy.

This week we ask three speakers with expertise in data governance, privacy and regulation of emerging technologies about the thorny question of data collection in contact tracing, and the merits of centralised and decentralised models.

This blog summarises the key points of discussion and debate from the webinar, which you can watch in full below:

As European governments gear up to roll out digital contact tracing that uses mobile phone data as a proxy for human contact, questions of data governance loom large, zeroing in on differences between the data public health services want to collect, and the data individuals might reasonably be asked to give away.

Public health services are understandably keen to collect data that enables them to stop the spread of COVID-19 now, and to learn more about the epidemiology of the disease for the future. Meanwhile, individuals and civil society organisations are expressing concern about the way individuals’ contact data may be collected and stored, and how it might be used in ways that weren’t transparent when consent was given.

The arguments are polarising around the relative merits of ‘centralised’ collection (chosen this week by the UK Government, and developed as a proprietary protocol by NHSX), in which users upload their contact data to a central server, and ‘decentralised’ collection (chosen by European countries including Germany and Italy, developed in the multi-institution DP-3T proposal and supported by Google and Apple), in which data is stored on personal devices. Some questions include:

  • Can digital contact tracing inform public health policy while preserving individual data privacy?
  • Should digital contact tracing data be anonymised or pseudonymised, and should sharing be mandatory or voluntary?
  • How do we guard against ‘feature creep’ and unintended future uses of the data that is collected in an emergency?
  • What are the ethical questions that come into play when the public are asked to trade personal privacy for public good?

This discussion builds on our rapid evidence review Exit through the App Store, which explored the approaches different nation states are taking to using technology to control the spread of COVID-19 and re-open the economy. The webinar took place shortly before the announcement that the UK NHS contact tracing app would be tested on the Isle of Wight.



  • Carmela Troncoso

    Assistant Professor, EPFL (Swiss Federal Institute of Technology), Lausanne, and lead author of the DP-3T decentralised data model paper
  • Christian D'Cunha

    Cybersecurity and Privacy, Directorate-General for Communications Networks, Content and Technology, European Commission
  • Ross Anderson

    Professor of Security Engineering at the Department of Computer Science and Technology, University of Cambridge

Why are contact tracing apps under consideration?


There’s a trinity of essential things that need to be done in order to contain a pandemic:

  1. Contact tracing
  2. Testing
  3. Isolating

All of these could potentially be supported by digital tools.

Contact tracing has traditionally been done manually, and there’s great potential for apps to support it – filling gaps in people’s memory, identifying interactions people wouldn’t otherwise recall, and assisting with the speed and scale of operations.

However, the WHO has been clear: where digital tools or apps are being used to support these efforts it’s experimental territory – we don’t know if these apps will really help.

What are decentralised approaches about?


If we’re going to deploy an extremely invasive technology and ask hundreds of millions of Europeans to carry it in their pocket, it’s extremely important that we have an option that produces data that cannot be exploited beyond the purpose of contact tracing. That is what we’re trying to do with DP-3T – provide an alternative and create a discussion around the design.

We’re not here to say whether the apps are going to work, or whether they are needed, but that if the political decision is to deploy them then we need an option that does not create a centralised database. A centralised database would be extremely dangerous because we have no control over how it will be used in the future.

The system should not be built at the expense of users’ privacy or produce data that can be repurposed. It should also be easy to dismantle – because what happens if a database is built and, after the pandemic ends, someone decides they don’t want to press the delete button on the data? Under the decentralised proposal, once everyone removes the app from their phones the system is useless; there is no more data to be repurposed. That’s what we’re trying to offer society, from the technical side, with DP-3T.

EU considerations


Most, but not all, EU member states have decided to deploy and approve a national contact tracing app. They are generally familiar with the risks: from data security, to exacerbating digital divides, to risks of stigma, false positives/negatives and questions of inclusivity.

The EU toolbox on the use of contact tracing apps was published in mid-April. It recommends, as the responsible EU commissioner has said, that the apps must be voluntary: given their experimental nature, we can’t impose a solution that we don’t know will work. Apps must have the following features:

  • Transparency
  • Anonymisation and encryption, as far as possible
  • Data use minimisation
  • Sunset clauses – to ensure they are temporary interventions
  • Interoperability across borders to enable the restoring of freedom of movement within the EU

Are there sufficient protections in existing EU law to require data deletion at the end of the pandemic? Are existing legal frameworks enough?


The GDPR is deliberately flexible when it comes to exceptional circumstances and talks about combating pandemics explicitly in the text.

By and large, for a contact tracing app, you need a legal basis. Some EU member states are considering relying on public interest and public health, which might allow a centralised model – such as uploading a person’s contact history to a public health authority once they’ve tested positive. Data protection authorities have said this can be legal if it’s set up clearly in the law. However, many member states are being more cautious because there isn’t hard evidence that contact tracing apps work. Instead, they are relying more on individual consent to decide whether contact history is shared.

Is there potential for these apps to become de facto, or repurposed as, digital ID type apps? Has the Commission started thinking about this and is this thinking connected to existing discussions around a Europe-wide identity system?


From the beginning, the key principle of data protection we have applied here is purpose limitation – you have to guard against function creep. One of the essential guardrails of such a system is that it is used exclusively for the purpose of supporting efforts in the pandemic emergency, and when that’s no longer necessary it is disbanded: we stop using it and delete the data. This means we can’t get into a situation where we’ve created this tool and then think of other uses for it.

But it might be that this is a new normal – that we plateau instead of exiting the crisis – in which case we need to find ways of calibrating our responses so that we don’t put in place a degree of surveillance or security risk which is unsustainable.

NHS development approach: what process should the NHS follow to have a strong evidence base?


If there’s a chance that an app would work, it is right for the NHS to go ahead with developing it – but if it turns out not to work, then they must scrap it without blinking. An agile approach is needed – not long-winded design cycles.

Is there value in open sourcing the code, and is that a useful protection from the perspective of the public, security and privacy?


Unless we have access to the source code, we have no idea what is in there. Features can be added for good reasons, but end up being used for very different, potentially unethical ones.


The need for transparency – around encryption, anonymisation, data minimisation – is why a lot of emphasis is placed on having the code behind these apps be open source.

What should the next steps be in the UK given the direction it seems to be going in?


The first step has already been taken by hundreds of academics this week raising the problems of a centralised system. (This refers to the joint statement of 29 April from scientists and researchers in the UK working in information security and privacy, raising concerns about the NHSX plans for a contact tracing application, including reports that the approach would record, in a central database, de-anonymised IDs of infected people and their contacts.)

Are there broader considerations for contact tracing and privacy?


Britain is building a contact tracing system that goes beyond digital contact tracing – recruiting thousands of contact tracers. The app is only going to be a small component of this operation.

There are a number of reasons why the contribution the app can make is marginal, including:

  • False alarm rates due to the limitations of Bluetooth technology – its accuracy at recording contacts varies with range. At five or ten metres you’ll record ‘contacts’ with people in apartments you pass, or on the other side of a bus stop.
  • Missed alarm rates due to challenges of adoption – even if you get 40% of the population to run the app (which is itself unlikely), the likelihood that two strangers sat next to each other at a bus stop will both have the app, and therefore register as a contact, is 40% squared – just 16%.
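The adoption arithmetic in the second point can be checked with a few lines of code (illustrative only; the 40% adoption figure is the speaker’s assumption, and the function name is invented for this sketch):

```python
# Probability that a chance contact between two strangers is recorded,
# assuming each person independently runs the app with probability p.
def contact_recorded_probability(p: float) -> float:
    # Both people must be running the app for the contact to register.
    return p * p

# The speaker's example: 40% adoption gives only a 16% chance.
print(round(contact_recorded_probability(0.40), 2))  # 0.16
```

The quadratic drop-off is why low adoption hurts so much: halving adoption quarters the fraction of chance contacts the app can see.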

As a result, the bigger privacy issues will be around the main contact tracing databases – these will contain large amounts of personal information, mostly collected manually by human contact tracers. The digital contact tracing app risks becoming an excuse, and an avenue to blame Apple, Google and privacy researchers for its inevitable failure, distracting from the concerns privacy activists will raise about the government’s wider new data operations.

Background: what does it mean for a contact tracing app to be centralised or decentralised?

If a contact tracing app is centralised, then data about who has been identified as having COVID-19 and the people they have come into contact with is stored in a central database, and decisions about whether someone is ‘at risk’ as a result of that contact are made on a central server.

If the app is decentralised, then the identification of contact with someone thought to have COVID-19 will happen on the phone itself and so there is no central database that can see all the people that someone had contact with.
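The decentralised flow described above can be made concrete with a much-simplified sketch. This is a toy illustration, not the actual DP-3T protocol: real systems derive rotating ephemeral IDs from secret keys and add many further protections, and all class and variable names here are invented for the example.

```python
import secrets

# Toy decentralised exposure check (DP-3T-style, heavily simplified).
# Each phone broadcasts random tokens and remembers tokens it has heard;
# matching happens ON the phone, so no server learns who met whom.

class Phone:
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast(self) -> bytes:
        token = secrets.token_bytes(16)  # random ephemeral identifier
        self.my_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard_tokens.add(token)

    def check_exposure(self, published_infected_tokens) -> bool:
        # Local comparison against the public list of infected tokens.
        return any(t in self.heard_tokens for t in published_infected_tokens)

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())   # Bob was near Alice
carol.hear(bob.broadcast())   # Carol was near Bob, never near Alice

# Alice tests positive and publishes only her OWN tokens.
infected_list = alice.my_tokens

print(bob.check_exposure(infected_list))    # True: Bob met Alice
print(carol.check_exposure(infected_list))  # False: Carol did not
```

The key property, as described above: once the app is deleted, the heard-token sets on each phone disappear, and the published tokens reveal nothing about anyone’s contact graph.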

Centralised approaches under consideration around the world include the UK’s NHS contact tracing app, the Singaporean TraceTogether app, and the European PEPP-PT initiative.

Decentralised approaches include Apple and Google’s joint specification, currently being built upon in Germany and Ireland amongst others; the DP-3T protocol, under consideration in Switzerland, Austria, Finland and elsewhere; and the TCN protocol.

Further reading mentioned in discussion

Coronavirus: An EU approach for efficient contact tracing apps to support gradual lifting of confinement measures – European Commission

Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol white paper – DP-3T consortium
