
The NHS COVID-19 app: is it an enduring public health technology?

Will the long-awaited contact tracing app deliver on its promises for 2020 and beyond?

Elliot Jones , Imogen Parker

24 September 2020

Reading time: 8 minutes


With new restrictions introduced by the UK Government and expected to last until spring, growing expectation that the UK is entering a second wave of COVID-19, and large parts of the country back in lockdown, effective tools to trace and minimise the spread of the virus are a vital element of the Government's public health strategy.

Part of that effort is the long-awaited NHS COVID-19 app, launching nationwide today. With it joining StopCOVID NI and Protect Scotland, the whole of the United Kingdom will now be covered by digital contact tracing apps. The great hope is that they can rapidly flag risk to unknown contacts that a manual approach would fail to capture.

The first version of the NHS COVID-19 app didn’t make it past an Isle of Wight trial, so the big question is: how will version two perform in real life?

There are certainly some reasons to be positive. The developers of the second version have, to their credit, clearly addressed many of the issues raised in the first pilot. Moving to the Google-Apple Exposure Notification System (GAENS) means it can detect 99% of Android and iPhone handsets (compared to 75% and 4% respectively in version one). It is also more privacy-preserving by default, minimising numerous risks and concerns around potential uses of data and scope creep.

Users can turn contact detection on and off easily while using the app, giving them greater control, and there has been far greater transparency upfront, both in publishing the Data Protection Impact Assessment (DPIA) and in explaining data practices in the marketing communications and the app interface. While the evidence from the pilots for this version isn't yet available, the approach, for example including the London Borough of Newham to test the app among a diverse population of communities and languages, is an important step forward.

The inclusion of QR codes, which enable users to 'check in' at venues like restaurants or pubs, could tentatively be viewed as a positive step. Unlike manually providing a name and contact number, the app's QR check-in only records the time, date and an identifier for the venue, all of which is stored on the device and only shared voluntarily if the user gets a positive test. This data is also deleted after 21 days. However, early findings from the Newham trial raise questions about whether it will work in practice.
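
As an illustration of how little data such a check-in needs to involve, here is a minimal sketch in Python. The type and field names are assumptions chosen for illustration, not the app's actual implementation; it simply mirrors the description above: a venue identifier plus a timestamp, stored locally, pruned after 21 days and only shared with consent.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

# Illustrative only: field names and the CheckIn type are assumptions,
# not the NHS app's real data model.

RETENTION_DAYS = 21

@dataclass
class CheckIn:
    venue_id: str          # opaque identifier encoded in the venue's QR poster
    checked_in_at: datetime

def prune_old_check_ins(check_ins: List[CheckIn], now: datetime) -> List[CheckIn]:
    """Drop any locally stored check-in older than the retention window."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [c for c in check_ins if c.checked_in_at >= cutoff]

def check_ins_to_share(check_ins: List[CheckIn], user_consented: bool) -> List[CheckIn]:
    """Check-ins leave the device only if the user chooses to share them
    after a positive test; otherwise nothing is uploaded."""
    return list(check_ins) if user_consented else []
```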

Of course, the fundamental issue at stake is whether the app will prove effective at its primary purpose: minimising the spread of COVID-19. But the long-term prognosis also matters. Experimental technologies need to build an evidence base as they go and it’s important to explore the app’s impact not just as an emergency measure but as an ongoing approach to public health.

So what indicators of success should we be looking for?

Efficacy: how well can the app identify ‘contact’ and measure risk?

On the technical side, the key question is how effective the app is at identifying risk of contagion. The downside to the GAENS approach (and one reason the UK Government initially avoided it) is that it makes accurate distance measurement more difficult. It provides more abstracted data than the raw Bluetooth data used in version one, so the app team need to use modelling to estimate distance. The accuracy of the distance calculation also depends on contextual factors, such as where the phone is on the body and the surrounding environment, which means each individual distance measurement is inexact.
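
To illustrate why those estimates are inexact, here is a rough sketch of the kind of signal-to-distance modelling involved. The log-distance formula and the constants in it are a textbook approximation chosen for illustration; they are not the NHS app's actual model.

```python
# GAENS exposes attenuation (roughly transmit power minus received signal
# strength) rather than raw signal data, so apps must model distance from it.
# The reference loss and path-loss exponent below are illustrative assumptions.

def estimated_distance_m(attenuation_db: float,
                         reference_loss_db: float = 50.0,
                         path_loss_exponent: float = 2.0) -> float:
    """Invert a simple log-distance path-loss model:
    attenuation = reference_loss + 10 * n * log10(distance)."""
    return 10 ** ((attenuation_db - reference_loss_db) / (10 * path_loss_exponent))

# The same true contact can look very different once a phone is in a pocket
# or bag, which might add roughly 10 dB of extra loss.
print(estimated_distance_m(70.0))          # ~10 m in free space
print(estimated_distance_m(70.0 - 10.0))   # the same reading looks like ~3.2 m
```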

In pre-pilots, the team reported 45% false positives (where you are incorrectly recorded as being within two metres of someone) and 31% false negatives (where you were within two metres but not recorded). The developers are erring on the side of caution, so the app is likely to produce a fair number of false positives that lead to more people self-isolating than is strictly necessary. They also now differentiate between close (0–2 metres) and medium (2–4 metres) contact, checking distance every five minutes, to generate a risk score.
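
To make that bucketed approach concrete, here is a minimal sketch, assuming one distance estimate per five-minute window. The weights and threshold choices are illustrative assumptions, not the NHS app's published scoring model.

```python
# Accumulate weighted exposure minutes across five-minute windows, with
# 'close' (0-2 m) contact weighted more heavily than 'medium' (2-4 m) contact.
# All constants here are assumptions for illustration.

WINDOW_MINUTES = 5
CLOSE_WEIGHT = 1.0   # applied to windows estimated at 0-2 metres
MEDIUM_WEIGHT = 0.5  # applied to windows estimated at 2-4 metres

def risk_score(distance_estimates_m: list) -> float:
    """Sum weighted exposure minutes; anything beyond 4 m contributes nothing."""
    score = 0.0
    for distance in distance_estimates_m:
        if distance <= 2.0:
            score += CLOSE_WEIGHT * WINDOW_MINUTES
        elif distance <= 4.0:
            score += MEDIUM_WEIGHT * WINDOW_MINUTES
    return score

# Under these weights, 15 minutes at close range scores the same as
# 30 minutes at medium range.
print(risk_score([1.5, 1.5, 1.5]))  # 15.0
print(risk_score([3.0] * 6))        # 15.0
```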

A more complex aspect is that this risk score is only a proxy for actual risk, and as evidence about transmission grows daily, the understanding of that risk changes. An example is the growing body of scientific evidence suggesting that individuals are at significantly lower risk of transmission outdoors than indoors, information that the app currently cannot capture but that we know would be significant in understanding the risk of transmission through proximity to an infected person.

Downloads: will enough people use it?

Studies and pre-print models suggest that uptake, referral and adherence at any level have some effect in reducing the transmission of COVID-19.

Modelling in April that supported the case for a contact tracing app suggested that 60% of the population (i.e. about 80% of smartphone users) installing, using and adhering to the app would be enough for an app alone to suppress transmission of COVID-19. A later pre-print ‘estimates that in Washington State, a well-staffed manual contact tracing workforce combined with 15% uptake of an exposure notification system could reduce infections by 15% and deaths by 11%.’
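
For readers wanting the arithmetic behind that bracketed figure, here is a rough worked example; the roughly 75% smartphone-ownership share is an assumption back-calculated from the article's own numbers, not a figure taken from the modelling itself.

```python
# Rough arithmetic only: the ownership share is an assumption inferred from
# the stated equivalence (60% of the population ~ 80% of smartphone users).

population_share_needed = 0.60       # share of the whole population using the app
assumed_smartphone_ownership = 0.75  # assumed share of the population with a smartphone

share_of_smartphone_users = population_share_needed / assumed_smartphone_ownership
print(f"{share_of_smartphone_users:.0%} of smartphone users")  # -> 80% of smartphone users
```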

Uptake is clearly a necessary factor in getting the app to work. There are network effects: detecting contact depends on a high density of other users around you, and without users there are no proximity notifications. To date, no country has achieved the magic 60% level of uptake, let alone adherence. The most successful countries, Iceland and Singapore, have reached around 40% uptake, and both have had their apps in place for a long time. Looking at a sample of other high-income countries, the highest uptakes have been 30% in Ireland and 28% in Switzerland, followed by Australia, Norway, Finland and Germany at 22–27%, Scotland already at 18%, and Denmark and Japan at 13–14%.

There is also an open question of how interoperable this app will be, with the concern that requiring people to download two apps might reduce uptake. Currently, the Republic of Ireland and Northern Ireland apps work together and share data, with Protect Scotland due to join them. The European Union has been developing a European Federation Gateway Service so that apps across Europe using GAENS can detect and alert users across borders. This system is already being tested between the Czech Republic, Denmark, Germany, Ireland, Italy and Latvia and is due to roll out across Europe in October.

Testing access: will relying on tests create a bottleneck?

One proposed advantage of digital contact tracing over manual contact tracing is the speed at which it allows contacts to be traced and notified. This is crucial when diseases like COVID-19 can spread before symptoms show, or when symptoms never show at all.

However, notification alerts rely on inputting a code from a positive test into the app, a decision taken to protect against unnecessary isolation and the possible risk of malicious or fake reporting. The downside is that the effectiveness of the app will be bottlenecked by the speed at which tests are processed and returned. Given the app is being launched just as the UK appears unable to meet the demand for testing, this could significantly reduce the app's effectiveness, at least in the short term.
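
A rough sketch of that gate, with hypothetical function and parameter names (this is not the NHS app's actual interface), shows why the testing pipeline sits on the critical path: no verified code, no alerts for anyone else.

```python
from typing import List, Optional, Set

# Illustrative only: names and the validation step are assumptions.

def keys_to_upload(test_code: Optional[str],
                   valid_codes: Set[str],
                   diagnosis_keys: List[bytes]) -> List[bytes]:
    """Only a verified positive-test code releases the keys that trigger
    exposure notifications on other users' phones."""
    if test_code is None or test_code not in valid_codes:
        return []           # no test result yet, or an invalid code: nothing leaves the device
    return diagnosis_keys   # verified positive test: keys are shared for matching
```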

Adherence: will people who receive alerts actually self-isolate?

As the models mentioned above highlight, you don't just need uptake, you need people to adhere to the advice given by the app. A key question is how adherence to an instruction from the app to isolate compares with adherence to a call from a contact tracing service. Worryingly, according to a pre-print looking at 21 UK-wide surveys, only around 16–20% of those experiencing symptoms of COVID-19 fully self-isolated, and only around 8–14% of those alerted by the NHS contact tracing service fully self-isolated.

To be able to understand how the app compares to manual instructions, we will need to measure whether people adhere to self-isolation notifications. The motivations for this will go beyond the technology: the public’s willingness to isolate – whether following manual or digital advice – will be intrinsically bound up with broader policy interventions, for example, income support or job protection.

Impact: is it possible to measure whether the app is effective at stopping transmission of the virus? Do we have the data?

Our public deliberation work in May, which discussed and debated the role of technology in the crisis, found that a transparent evidence base is an important factor in trust, use and adherence. This app asks people to trust it when it tells them to disrupt their lives and isolate. To be effective, it must be able to prove that the disruption is justified for the sake of public health: that the app will save lives.

The problem is that, by design, it may not be able to do that. As explored above, the GAENS system is designed to be privacy-preserving for individual users. The flipside of that privacy is that public health authorities cannot match identifiers on a central server. This means they cannot track when one person's positive test leads to another person receiving an exposure notification, nor know who that second person is without their consent, making measurement of these apps' effectiveness much more difficult.
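
A minimal sketch of what that decentralised design implies, assuming (as in GAENS-style systems generally) that matching happens on the handset rather than on a server. The names below are illustrative, and the real protocol derives rolling identifiers from keys rather than comparing them directly; the point is simply that the server never learns who matched whom.

```python
from typing import Set

# Simplified illustration: in practice identifiers are derived from the
# downloaded diagnosis keys, not compared to them byte-for-byte.

def exposure_detected(published_diagnosis_keys: Set[bytes],
                      locally_observed_identifiers: Set[bytes]) -> bool:
    """Matching happens on the handset; the result stays there unless the
    user chooses to act on it or share it."""
    return not published_diagnosis_keys.isdisjoint(locally_observed_identifiers)
```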

The upside of privacy-preserving settings and greater user control (like the ability to delete check-ins at any time) comes with a downside: it is much harder to use the data for public health research.

There are encouraging signs of other countries finding workarounds to this, which the NHS could explore. In the Republic of Ireland, users can choose to share data with public health authorities, giving them an idea of how many exposure notifications have occurred as a result of positive tests, although they don't know whether one user's positive test led directly to another's notification. In Denmark, the Statens Serum Institut used a questionnaire for those booking a COVID-19 test to show that 48 people had booked a test as a result of being notified through the app. Of those 48, 46 reported not having been in contact with manual contact tracers, suggesting the app was finding a lot of previously unknown contacts.

Value: how much does the app add to existing manual contact tracing?

Given that the Government's aim is for this to be an enduring medical technology for managing public health, we need to understand its value in comparison to, or alongside, manual or traditional approaches.

To do this, we need to know the crossover in cases detected by manual and digital contact tracing, the proportion of total cases detected by digital contact tracing, how much faster the app is reaching people than traditional methods, how much this is reducing transmission of COVID-19, and ultimately, how many lives the app is saving. This information will not only demonstrate the value of the app now but also provide important evidence to contribute to approaches to health technology in the future.
