
Evaluating data-driven COVID-19 technologies through a human-centred approach

What can we learn from the missing evidence on digital contact tracing and vaccine passports?

Melis Mevsimler

5 October 2023

Reading time: 8 minutes

Four years after the beginning of the COVID-19 pandemic, new variants are still emerging and circulating. It remains important to explore whether people and communities have been involved in policymaking to regulate the use of data-intensive technologies (such as contact tracing apps) – and whether policies have followed a human-centred approach to healthcare.

The benefits of involving individuals and communities in policymaking are widely recognised – providing alignment with public opinion, access to on-the-ground thinking, as well as legitimacy and accountability. The Innovation in Politics Institute defines human-centred policymaking as a way to centre the ‘needs of the users, looking at a given problem from their angle and going through several feedback loops to fulfil [those needs].’

An effective human-centred approach would have been beneficial in the implementation of COVID-19 digital technologies, because they played a central role in the policy response to the pandemic. However, there is insufficient evidence to determine whether this was the case.

The Ada Lovelace Institute’s three years of research on contact tracing apps and digital vaccine passports highlight several evidence gaps. In particular, there is inadequate information on the lived experiences of those affected by these technologies, most notably marginalised and disadvantaged members of the public.

Both contact tracing apps and vaccine passports were rolled out quickly as part of the pandemic response. This meant there wasn’t time to establish a framework to evaluate their effectiveness, which, in turn, means there has been no opportunity to reflect on their impacts against criteria that go beyond technical efficiency, such as those of a human-centred approach.

This type of evaluation framework is necessary for future pandemic planning and can significantly contribute to further developing everyday, human-centred healthcare. Indeed, COVID-19 technologies provide an important case study for this approach.

Literature on human-centred digital healthcare is growing rapidly, but it focuses on digital healthcare websites and remote healthcare applications. Contact tracing apps and digital vaccine passports are distinct public-facing health technologies and constitute especially interesting cases. For them to be effective, people must consent to sharing their data, use the technologies in their everyday lives and change their behaviour as a result of that use – for example, by self-isolating after receiving a notification from a contact tracing app.

COVID-19 technologies provide important insights and lessons for many emerging trends that require public acceptance and cooperation (for example, digital identity and wearable healthcare technology). But how can we evaluate their effectiveness according to a human-centred approach?

In this blog post, I identify four requirements that should underpin evaluation frameworks and monitoring mechanisms, to assess the success and impacts of data-driven health technologies in public health crises and in health and social-care provision.

1) Use qualitative research methods

Quantitative data has been central to the COVID-19 emergency response and has had substantial epidemiological value. By using local surveillance data (e.g., the number of deaths, hospitalisations and transmissions), analysts have been able to forecast local outbreaks and guide the decision-making processes of healthcare authorities to contain infections.

Quantitative data and statistical modelling have also supported assessments of the implementation of COVID-19 technologies. The majority of this analysis has focused on technical efficacy and/or public uptake (i.e. the number of downloads). But the pandemic has also shown the value of qualitative social science research for policy.

Qualitative research methods enable researchers to explore and contextualise the lived experiences of members of the public. Yet, these types of studies have rarely been referenced in evaluations. Most notably, there is inadequate evidence on whether COVID-19 technologies resulted in positive change in people’s health behaviours.

2) Include all actors

The effectiveness of data-driven health technologies should be understood within a framework that recognises the insights and experiences of all involved (and affected) actors. Yet, as with members of the public, the evidence collected among public health workers and local authorities, both in the UK and internationally, is inadequate.

In May 2020, in England, the Government set up a national contact tracing system without a local delivery arm. Feedback from local authorities and the public indicated that national contact tracing was not effective in containing outbreaks. This led the Government to give more power and responsibilities to local authorities from summer 2020 onwards. Evidence from international sources highlights the significance of local expertise and leadership in managing outbreaks and helping communities during the pandemic.

Despite the important role they played, however, there was little consistent attempt to explore the views of local public health teams and authorities on the effectiveness of COVID-19 technologies. The limited available evidence indicates that the technologies were governed without effective coordination between local and central government authorities.

This brings us to the next point: the importance of considering the varying nature of place-based structures.

3) Consider place-based structures

Available data on COVID-19 technologies was largely collected and analysed at a national level, mostly ignoring place-specific, localised factors. Research undertaken in the early stages of the pandemic shows that public attitudes towards COVID-19 technologies depended on a variety of structural and social issues, most notably trust in government, and digital access and skills. All of these issues have local dimensions. For example, digital infrastructure is poorer in remote areas of the UK than in urban centres, and in many countries trust in the national government varies significantly across regions and territories.

However, we do not have adequate evidence to establish the impact of such factors on public attitudes towards COVID-19 technologies. This prevents us from understanding the varying outcomes of COVID-19 technologies in specific contexts and across geographical areas. For example, we do not know whether the transmission rate of COVID-19 was higher in a particular region because lower trust in government advice led to a lower contact tracing app adoption rate.

This is particularly important since the COVID-19 pandemic has demonstrated that national policies do not always apply directly to local concerns. Evaluation and monitoring mechanisms should be well-equipped to identify the impact of local factors and assess the success of partnerships between local and national authorities in deploying digital technologies. These local factors may include individuals’ attitudes towards technology, the technical infrastructure available in specific locations, the resources of local governments, the digital skills of public health staff and so on.

Having explored the importance of the national and local levels, we also need to acknowledge the responsibilities of regional and international institutions and organisations. While the COVID-19 pandemic has shown that tackling public health crises requires global cooperation and solidarity among countries, the implementation of digital vaccine passports has amplified global inequalities and tensions. As international digital vaccine passport schemes linked individuals to verifiable tests or vaccines, low-income countries found it difficult to meet rigid standards for compliance, due to their limited access to and uptake of vaccines.

As major digital initiatives such as the global digital health initiative and the EU Digital Identity Wallet are under way, transnational organisations will have an important role to play. Governments need to be prepared to assess their success in harmonising national, regional and international regulatory tools, to avoid reinforcing and entrenching global inequalities and tensions.

4) Monitor over time

In the early stages of the pandemic, a substantial amount of research on public attitudes towards COVID-19 technologies emerged. This research provided valuable insights into people’s willingness to use COVID-19 technologies when they were first built and deployed.

However, the limited available evidence also indicates that people’s attitudes towards COVID-19 technologies might have changed over time. Some specific incidents might have compromised people’s confidence in these technologies’ safety and effectiveness and, possibly, in the use of digital technologies for public health in general.

For example, not enough is known about people’s ongoing use of contact tracing apps. In July 2021, the BBC reported that although two million people downloaded the Protect Scotland app, only 950,000 people actively used it, and that around 50,000 people stopped using it only a few months after its launch. A research study on the UK’s contact tracing apps demonstrates that some people stopped using them after a short while because they lost confidence in their effectiveness.

Any evaluation mechanism that investigates people’s views and practices in relation to data-driven technologies should take account of real-life contexts and monitor attitudes over time. This will help review the effectiveness of the technologies in use, not just technically but in terms of their outcomes for people and society in the long term.

Other questions remain unanswered due to the lack of evidence on public attitudes, and these are vital for understanding not only pandemic responses but also everyday healthcare. How and why have COVID-19 technologies affected people’s attitudes towards data-driven technologies beyond the pandemic? It is unlikely that enough evidence will emerge to answer such questions robustly – but one thing is certain: faced with another pandemic, effective and inclusive data collection and the involvement of people in policymaking remain priorities.
