In the third in our series of events addressing the nascent ‘public health identity’ systems developing around the world in response to the COVID-19 crisis, we explore the uses of digital immunity certificates in the context of the private sector. What are the ethical and social implications, how are those uses currently governed and what steps should government take, if any, to offer further regulation and guidance for the private sector?
This post summarises the key points of discussion and debate from the webinar which you can watch in full below:
Katie Jacobs, Senior Stakeholder Lead, CIPD
Siddharth Venkataramakrishnan, European Technology Correspondent, Financial Times
While governments are still debating whether to regulate digital immunity certificates and whether to develop their own immunity certification system, private companies around the world are already developing systems and beginning trials. It has been reported that major consulting companies PwC and Ernst & Young are partnering with start-ups to trial immunity certificates, and MTÜ Back To Work is already piloting a digital immunity certificate with local businesses in Tallinn, Estonia.
However, we are yet to have a proper public conversation about what we agree are the socially acceptable uses of digital immunity certificates, in the workplace or anywhere else. And it is not clear whether existing protections under the law are enough to ensure citizens’ rights are respected.
There needs to be a conversation between employers, employees, and wider society about the intended users of these systems, how effective these systems might be in these use cases, and the societal, legal, and ethical risks. Each country will need to have its own discussion in its own legal and social context before there can be a legitimate implementation of these systems.
The Ada Lovelace Institute wants to start this debate in the UK context, and help surface key concerns that can be translated in an international context. In this event we will ask:
- How might digital immunity certification or broader public health identity systems be used by private companies? Could they be used to identify high-risk employees and adjust working conditions to protect them? Or will they be a tool of discrimination, shutting out already vulnerable workers?
- What existing UK and international regulations and laws will govern the use of digital immunity certification by private companies?
- Are additional governance mechanisms, e.g. legislation, regulation, standards etc. needed?
What trends have you seen in recent weeks on the private usage of public health identity systems in the UK and abroad?
The first surge of interest in these topics really came back in May. We had news that companies from the UK and Europe had presented their systems to the government for potential use in an immunity passport system. There’s been a lull in the last few months and now we’re seeing interest pick up again.
I think it’s worth noting there is a lot of PR around this space. It’s hard to work out what is actually happening and what is a company retooling their existing tools to argue they could be useful when dealing with COVID-19.
In general, immunity passports have tended to take a similar form because most of the major companies pitching them are from a digital ID background. The system would use biometric information combined with a passport-style photo, created when you have a test done. When you went to an employer or private venue you would present that information, in most cases as a QR code. They would then be able to see either that you had immunity implied through antibodies, or that you were clear of COVID-19 at a certain point in time.
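As a rough illustration of the flow described above, here is a minimal sketch of issuing and verifying such a pass. All names are hypothetical, and the HMAC shared-secret scheme is a deliberate simplification: real systems would use public-key signatures and bind the credential to biometrics rather than a bare identifier.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret between issuer and verifier; real schemes
# would use asymmetric (public-key) signatures instead.
ISSUER_KEY = b"demo-issuer-secret"

def issue_pass(person_id: str, status: str, valid_days: int = 14) -> str:
    """Create the token a testing provider might embed in a QR code."""
    payload = {
        "sub": person_id,          # who the pass refers to
        "status": status,          # e.g. "antibodies" or "negative"
        "exp": int(time.time()) + valid_days * 86400,  # expiry timestamp
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_pass(token: str):
    """What a venue's scanner would check: signature validity and expiry."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["exp"] < time.time():
        return None  # underlying test result is too old
    return payload
```

Even in this toy form, the design choices the panel discusses are visible: the verifier learns only the status and expiry in the payload (data minimisation), and how long a pass stays valid is a policy decision encoded in `valid_days`.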
There are specific ways in which different companies have tried to differentiate themselves, particularly in terms of cybersecurity – every company has said their system is the most secure and the best at minimising the data collected.
There is also a discussion around terminology. ‘Immunity passport’ was used in the past, but there have been attempts to avoid the term because companies understand immunity is fundamentally not something we can rely on when it comes to COVID-19. The WHO statement on immunity passports was stinging.
There is also the question of linking with wider health data. Companies have talked in general terms about connecting it to biomarkers, and there’s work being done around wearables to spot what is happening in our bodies before COVID-19 breaks out into full-blown symptoms. There’s also the idea of looking at other test results and creating an all-in-one digital health system. These have been pitched as longer-term plans rather than for immediate roll-out.
One interesting question is home testing versus professional administration. There is only one organisation which has discussed this – the Citizens Science Group. They are looking at how you can deploy these tests at home in order to get to the mass testing scale that you need for this to be effective.
In the private space, it’s interesting that there is a lot of tech solutionism going on which includes the use of facial recognition and thermal scanning. There’s also lots of social distancing scanning AI based systems and even wearables being deployed by various companies with the implication you would be able to tell what was happening to people in real-time.
There are certain sectors where we’ve seen uptake:
- Sport is where we have seen the earliest interest, with digital identity companies suggesting their systems could be used for reopening the Premier League.
- Hospitality – Onfido, a British company, and Sidehide were testing an app which would allow access to specific parts of hotels where social distancing was not possible, e.g. saunas. Estonia’s Back to Work initiative has been working with various hotels in the country.
- Air transport – we’ve seen bullishness from the Heathrow CEO and Back to Work has moved on from doing the in-country testing to looking at how cross-border travel would work.
How do these fit into wider trends? There’s an interesting culture clash going on. On the one hand, there is a greater acceptance of these kinds of broad surveillance techniques around COVID-19, even as there is a general move against the use of this technology, and against mission creep, in areas such as police surveillance. Facial recognition and criminal-justice algorithms are attacked by activists. We haven’t yet had that reckoning with the systems being deployed for health purposes, which are likely to have long-standing impacts on society.
So, there is a question about how we walk back these systems in the future, and whether there is a way to remove systems which are found not to be as effective, or which are found to have embedded unhealthy or exploitative practices into society.
Further, specifically around the use of immunity passports and health passes, there is the question of mass testing. How many tests are available in any given country affects how often we can test, and how often we test affects how effective these systems are. There’s a lot of talk that the Government must set the line, or else companies will, and, as yet, there isn’t any immediate proof that the companies know what is best to do.
Have you seen much interest from the organisations you work with in immunity passports or health status apps? Do you see an emerging use case for them?
Thinking about a lot of the conversations I’ve had recently – I’ve been holding almost weekly ‘therapy calls’ with HR directors in quite large businesses. Like Sid said, there are sectors where immunity certificates will be more important: organisations where the priority must be keeping a clean site, and where anything that compromises that would have a profound impact on productivity and income. These are, for example, sites of food production, manufacturing, and hospitality – anywhere that losing the ability to operate means losing your income stream.
One thing that has come up is antibody testing. Whether people can start testing their workforces. I’m going to quote from someone on a forum I run for CHROs (Chief Human Resources Officers). They have been asked by a senior leader if it’s feasible to decide a priority order for who should come back to workplace or out of furlough:
I’m certain this isn’t legal. If anyone can point me in the right direction to show them why we can’t do this I would be grateful.
So, there is confusion.
Another potentially useful use case would be for organisations that want to get people dealing with profound anxiety back to the workplace. We know from research it is a big issue. According to our research:
- 49% of people not currently attending their workplace are anxious about returning
- 21% of people who have been attending their workplace during the pandemic said they aren’t satisfied with the health and safety measures put in place
- 12% of workers don’t trust their employer to provide a safe environment when they return to the workplace.
There is a sense that public health identity systems could give peace of mind. The HR directors I speak with have been using temperature checks at entrances and acknowledge there isn’t a huge amount of evidence they work. But they do find they make people feel more comfortable and confident. So, they’re now in a space where, as one said on a call a few weeks ago: “now we put them in, we can’t take them out again.” They make people feel a lot more comfortable. That could backfire if people get too comfortable and behave recklessly and carelessly.
I also want to run through considerations and questions that come up when organisations, and particularly people leaders, are deciding whether to implement or use these kinds of tools.
The first obvious one would be data governance and privacy concerns related to GDPR. HR directors see this as a concern where you store the data, use the data, and must keep it secure.
The second massive issue is ethics. Just because you can do something doesn’t necessarily mean you should. What might the impact be on things like employee trust, engagement or morale? It could be positive – people could feel like you are trying to protect them. Or people could feel edgy about a loss of privacy. Health data is the most personal form of data, so the risk is it could expose health conditions that people didn’t want their employers to know about. Further, is there an incentive to beat the system by catching COVID-19, or becoming immune, in order to become preferred for work?
There’s also a worry about mission creep. You start here, but where does it end in terms of surveillance and monitoring of your staff? Could you potentially end up in an almost new class system of employment, in which you discriminate against people who haven’t had the virus? That opens questions such as: should you be making a value judgment at all based on people’s health and choices? It’s like with wearable health technology. Is it right that your employer sets expectations around what appropriate behaviour is in terms of getting your 10,000 steps a day or going to the gym? Does going to the gym make you more worthy of promotion? It gets you into murky areas.
Then there are lots of practicalities that businesses must consider as well. As a multinational, for example, is it possible to be consistent across countries? Global businesses I work with were struggling with issues about whether to get everybody masks because it’s not mandatory everywhere. If you expect people to use apps on their personal phone is that a step too far, blurring the line between personal and professional? What if you don’t carry your phone all the time? In a manufacturing site you might not be allowed, a woman might not have pockets. How do you get over those issues?
Employee surveillance is an emergent area. It tends to be linked to productivity, and I think it’s raised its head more recently given that many of us are working from home. Many managers are not comfortable managing outputs and are obsessed with inputs. They want to know people are working, but that’s a dangerous game. It comes with lots of risks and unintended consequences. New technologies can promise greater management insights but can also have implications for trust and morale. Further, there are data issues and legal considerations around GDPR. We did research recently on employees and usage of technology. We found that 45% of employees believed that monitoring is currently taking place in the workplace. 86% believe that work monitoring and surveillance will increase in the future.
We need to consider the people impact front and centre and involve HR in the process early. People professionals need to ask objective and challenging ethical questions about the application of any new technologies and think critically about unintended consequences that might occur.
Do you feel there’s enough guidance and regulation around private sector usages of public health identity systems?
I think probably there is room for more, but the issue is that you can have the legal guidance, but it’s about the practical application.
That’s some work we’re going to be kicking off quite soon – looking at responsible technology adoption. You can have all the regulation in the world but ultimately there is the practical side of it as well which can often get missed.
What are the impacts of digital verified antibody certificates on questions arising around employment law and non-discrimination?
The first point to make is that there are numerous very real discrimination angles when employers, or indeed any organisation, is using a digital immunity certificate system in the wrong way.
The most obvious angle is disability. We know that many disabled people were shielding during lockdown. We know that many disabled people, even now, will be minimising their contact with populated areas, whether it be the supermarket or the workplace. Accordingly, it’s very likely that we will have a disabled population which would find it very difficult, as a whole, to demonstrate that they have COVID-19 antibodies.
I would imagine that we would see similar patterns when it came to other protected characteristics. So, it might well be the case, for example, that older people – who tend to be more vulnerable and so more likely to have been shielding – may also have lower likelihood of COVID-19 antibodies. Equally, we know there is a racial impact, and so certain racial groups might again have a different profile when it comes to testing.
Now, if that is right – if there is a connection between protected characteristics and the ability for someone to demonstrate that they have, for example, COVID-19 antibodies in their system – there is a real risk that if people are denied the ability to come back to work, or denied access to certain areas of an airport or a hotel, we will see a form of discrimination which would be unlawful under the Equality Act.
I stress the word “might” because, in reality, much of this will be context specific and will ultimately come down to what the employer or organisation is trying to achieve.
I want to illustrate some of the nuance around this by positing two quite extreme scenarios which I think illustrate the grey area here:
Scenario 1: Imagine an employer who runs a huge department store and has been really hit as a result of lockdown. They need to cut costs. They decide to have a redundancy exercise and decide to prioritise redundancy for those individuals who don’t have COVID-19 antibodies.
Let’s also imagine this employer is doing so in entirely good faith – they believe it’s a sensible way to do their redundancy exercise, it will make shoppers feel confident coming back to the store, and they predict they will have lower sickness rates going forward. If this redundancy exercise goes through, the employer ends up with a working population that is predominantly younger, nondisabled, and perhaps even dominated by certain racial groups.
Now if that were to happen, I think there would be real difficulties from an Equality Act perspective, and when it comes to showing the employer acted proportionately. They would struggle to show the science backs up what they are trying to do: to my knowledge there is no evidence to suggest that having COVID-19 antibodies means you cannot pass the virus on to other people. There would also be questions about privacy and reasonableness, which would be highly relevant to different types of legal claims such as unfair dismissal.
Scenario 2: At the other end of the spectrum, imagine you have a patient who needs a lifesaving operation at a particular date in the future. You have a pool of surgeons who are equally qualified and equally skilled. In those circumstances it might make perfect sense for an employer to say: the surgeon I’m going to choose for this operation is the one with the highest level of COVID-19 antibodies in their system, because whilst that doesn’t necessarily mean they are immune, it gives me the highest chance of that surgeon being well enough on the day the surgery needs to happen.
It’s nuanced. We can’t say something like digital immunity certificates will always be wrong. Nor can we say they will always be right. Equally, we know there are nuanced health monitoring systems. They might be helpful to employers when it comes to fulfilling other obligations they have: the obligation to make reasonable adjustments for disabled employees, or to make sure the workplace is safe for their employees. This tech, properly used, could be very powerful.
I think we have a legal system that can adequately police this: the Equality Act, unfair dismissal claims and the right to privacy. The big problem is that this legislation only protects people in so far as you have a workforce, for example, that is willing and able to litigate. It seems to me that where you have something as controversial or as important as, for example, immunity passports, we can’t expect or ask individual employees to be the guardians of safe tech.
Something that my colleague and I have been thinking hard about, with the work we have been doing with the CDEI and talking to the TUC, is how can we manage this? How can we harness the benefits of tech? How can we make it trustworthy by sidestepping the potential unlawfulness and some of the discrimination that might occur?
For us, the answer to that is some sort of very sophisticated and robust certification scheme. Imagine a world in which if you are an employer and where you want to use these forms of tech, it can be audited. It can be certified by an independent third party. You can be told before you deploy it, yes, this is an acceptable system. Yes, you have a set of facts here which means it’s acceptable and OK for you to do what you want to do with it.
That, to us, seems to be the answer. It is not a new idea in the UK: here we have something called the Surveillance Camera Commissioner, who can certify, as an independent third party, certain types of tech to make sure they are safe, secure and being used appropriately. This is the point at which we must pause, take stock and say: how do we think organisations should use this tech? We don’t just want laws, we want practical guidance so employers and organisations know precisely what they can and can’t do.
Do we need further law, policy or government guidance for employers about private sector usage, or are the current regulations and guidance adequate in the short term?
I absolutely think we need more guidance. This is a new area. People have got all sorts of ideas about how it might be used. All these companies are pushing this technology and saying there are wonderful benefits, but it’s easy to get lulled into a false sense of security that all uses will be acceptable or lawful.
What we need is a cross-regulatory approach, so that you have all these organisations, whether it be employer or employee organisations, or data protection organisations, pulling together and producing something practical to explain what you can do. This would then ideally be backed up by an auditing and certification scheme, so you can know beforehand that it’s acceptable, rather than falling back on litigation after the event to identify when something has overstepped the line and isn’t acceptable.
Are there countries that are leading the way, if not in use then in guidance, and ensuring any tech use is seen as publicly legitimate that the UK should learn from?
All countries are grappling with it equally. There is a view within Europe more broadly at a European Commission level to look at regulating this. They are producing various white papers and consultations and proposing legal instruments which will identify high-risk uses of tech and AI and put in place common rules across Europe, so we know where we stand.
Of course, it takes time to put this sort of legislation together so we are not expecting to have any concrete regulation until next year.
Whilst the UK is obviously Brexiting, it’s likely to be influenced by European standards. More importantly, it needs to be thinking now, in advance of any European standards that international companies will have to comply with, about what is acceptable and what isn’t, so that employers know – bearing in mind the change in guidance this weekend – what is going to be appropriate going forward.
Not so far on this. I think the major differences have been appetites for things like whether to wear masks or not, or to get people back or not. I haven’t seen a best practice example with the multinationals. It’s a feeling of, this is so complex and basically impossible to get it right across every jurisdiction.
The HR director for one organisation I work with, which is headquartered in France but has a huge population over here, agreed that everybody would be given masks, but they weren’t doing that in the UK. It was creating a bit of tension between the employee groups: you have it in that country, why have they got it, why don’t we have it?
Is a challenge here about who owns those social questions if we are talking about the private uses?
I think it’s interesting how much variety there is in the ethical discussions within companies. Some see issues with discrimination, and a few will acknowledge this as a problem. Others have a blanket idea that you can easily separate people, and that for however long you need to do it, it will happen. It is concerning that there is a disconnect from the social-ethical problems on the ground.
To be fair, there is more discussion of this among the ones I spoke to in the UK. I don’t know if that is because we have the CDEI and other organisations in the UK discussing these topics. Generally, though, there is a failure to engage with the social aspect, because it’s being seen as a business decision by individual companies.
How does the regulation of private sector use of public health identity systems compare to the regulation of medical devices?
What we’ve seen from the European Commission is the argument that you should see AI systems as similar to medical devices in the way they are regulated. You would have very specific rules around liability, in the same way that you would have to have compulsory insurance if you operated certain types of system. We should borrow those ideas and translate them, not to all uses of AI, but to AI uses that are considered high-risk. I can see that as a very powerful analogy.
I think that is the big challenge here when it comes to thinking about ethical standards and regulation. You have such a range of usages of tech. Your out-of-office reply is a type of algorithm, but no one thinks that needs to be regulated, licensed and audited. At the other end of the spectrum there’s invasive technology that uses personal and sensitive medical data like a medical device. The ideas for regulating that field absolutely can and should translate to the workplace and to private employers in very different types of scenarios.
What are the differences if antibodies, and any associated immunity, turn out to last only a few months rather than being longer lasting?
I think if there is no evidence, or limited evidence, that this works, or works for a long period of time, you risk losing more than you gain. Think about what you are putting at risk: potentially revealing really private health data that people might feel uncomfortable with their employer knowing, and the impact that will have on trust. That would potentially be a bigger risk than what you would gain from data that could be useless to you within three months.
The other thing I wanted to raise was the power imbalance. Obviously, there is always a power imbalance in the employer/employee relationship, but especially now, as we move into a period of deep recession when people will be really concerned about their roles. I think that is another angle from which we need to view this, because ultimately you could be asking people to do something that normally they wouldn’t consent to, but they feel they have to – because what is the alternative? They might lose their job and their income. That is another way we need to be thinking about this in terms of the ethics.
Following on from Katie, what is important is that I hope it doesn’t stop being asked because at some point we will come out of this pandemic, hopefully. In truth, these types of invasive technology and using data to infer other things about people has been going on for years.
What COVID-19 has done is put a spotlight on what has been a transformation in the employer/employee relationship. A lot of this came from the States. In America there are apps which look at your social media history in order to infer what kind of views you hold and whether you are the kind of employee an organisation might want to recruit. Chatbots make predictions about whether you will stay with a particular employer or move on in a short period of time, whether you need to be nudged with a pay rise, and what the minimum pay rise is that you can be given to keep you as an employee. This is coming.
COVID-19 brings this to people’s attention because of the public health implications, but what data an employer is entitled to have is a real and important debate. How can they use it? How long can they use it? Is there going to be mission creep? What can they infer from your data? These are questions we should be asking now and for years to come.
Do we need a specific mandatory Data Protection Impact Assessment?
One of the big criticisms of Data Protection Impact Assessments is they are not public. They can be pretty limited. They often don’t look at the equality implications and arguments we were looking at earlier. I would be in favour of a much more meaningful auditing and assessment process, but crucially one that is transparent so people can know how their data is being used. Sometimes people don’t even know their data is being used. There is a huge problem in terms of people’s perception and understanding of what is going on and I think something like a Data Protection Impact Assessment, but better, would be one step and one tool in terms of trying to fix that.
What action do you think needs to be taken, if any, by government(s) in terms of managing, advising, regulating, guiding or testing the private use of these systems?
Governments are going to be slammed on this as they try to work out what to do. I think with regards to immunity passports or public health identity systems in particular, there are certain cases where it does make sense and is not as purely evil as it has been cast in some ways.
There are cases such as in sports or very specific areas where we do need analysis. At the same time, points about employee/employer power balance are important to consider. Fundamentally the idea of opt-in versus exploitation, being forced to act in a certain way, needs to be discussed. Whether or not it takes more regulation or some other form of inquiry by government, there needs to be an understanding fundamentally of what rights are being potentially infringed and what the trade-offs are.
I think that more guidance of any kind is always helpful, especially because this is such a new, fast-moving and incredibly emotive area where nobody is sure they are doing the right thing. Most people want to do the right thing, but are just not sure what the right thing is or looks like.
There is a whole spectrum of tech and a whole spectrum of uses. It’s incumbent on government and regulators to provide more guidance, as well as on employers and HR leaders. Then, those who are responsible for people need to think about what the impact on employees is going to be, and critically assess whether it’s worth it or not.
One of the frustrations I have with this debate is that we seem to fall into this sort of black-and-white, left versus right discussion. You have commentators saying we can’t regulate because we will stifle innovation, a caricature of the regulation debate saying that actually what you need is to free up business, they are sensible and can make their own decisions.
I think this is wrong. Companies are crying out to be told what is right and what is wrong here. I hate the word “ethical”; I don’t know what it means. I think businesses don’t know what it means. Everyone has a million different ideas of what ethical means. At some point we need a clear steer from the Government saying: you can do this, and you can’t do that. Government needs to be brave enough to provide clarity without fearing it will then be criticised for stifling innovation. I think clarity will encourage innovation.