
High visibility and COVID-19: returning to the post-lockdown workplace

In the workplace, technology has the potential to help us respond to the health pandemic – but it also raises concerns about data, privacy and power.

Andrew Pakes

19 May 2020

Reading time: 8 minutes

[Illustration of surveillance at work]

In the current crisis, we are hearing a lot about digital tools to track, monitor and share data about people.

Digital monitoring and data gathering by employers have been growing apace over the past few years. There is already evidence that workers are either unaware of, or excluded from, information and decision-making about workplace surveillance. Our own research at Prospect shows that most workers are unsure what data their employers currently collect about them. The workplace is a critical arena for testing the relationship between digital transformation and issues of consent, rights and how the benefits of new technology are shared.

COVID-19 has put this debate into sharper relief.

The new world of workplace surveillance

Surveillance was once viable only at state level, but it is increasingly being used by private employers. Growing numbers have been turning to new technologies to monitor quality standards, analyse work processes or assess individual performance. This offers genuine opportunities to improve management and productivity – Prospect’s members are generally positive about new technologies, and many of them are involved in developing them.

But we also hear worrying reports of bad practice – for example, employees finding that data they had shared for one agreed purpose was being used for another, or significant decisions about people’s working lives or careers being taken on the basis of inaccurate information or faulty analysis.

Some of these excesses may involve breaches of relevant legal frameworks such as GDPR; however, awareness and understanding of these rights are generally not high, and many questions have not been tested in case law. We have found that the most effective counter to the increase in unaccountable power that technology can create is the collective union voice – which not only protects the rights and interests of employees but can, if engaged with, provide the assurance and legitimacy that allow new technologies to be deployed effectively for the benefit of all. This is also why we continue to press for the inclusion of workforce perspectives in the governance structures related to digital technology, such as company advisory committees or regulatory bodies.

There is another dimension worth discussing here as well. The focus on data as an individual right also obscures the impact of technology on collective rights – on us as groups of people. There are two reasons for this. First, personal data is a collective asset, as this pandemic is showing us. Second, many of the issues around personal data are ethical decisions, because they involve trade-offs. For example, privacy risks can be managed to tolerable levels, but what about the unintended consequences? If health data is collected on individual workers during the pandemic, can it be aggregated or repurposed?

This is particularly important in a work setting, where activities are governed by a contractual – or power – relationship. For example, workplace surveillance provides a means to monitor groups of workers as well as individuals. In the case of a wearable wristband in a packing warehouse, this could measure individual performance as well as other characteristics – how fast was an entire shift? Are younger workers outperforming older workers?
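
To make this concrete, here is a minimal sketch – with entirely invented field names and figures, describing no real product – of how per-worker telemetry of the kind a wristband might log can be repurposed to answer questions about whole groups:

```python
# A minimal sketch of group profiling from individual wearable telemetry.
# All field names and numbers are hypothetical; no real system is described.
from collections import defaultdict
from statistics import mean

# Each record: one worker's output for one shift, as a wristband might log it.
records = [
    {"worker_id": "w1", "age": 24, "shift": "early", "items_packed": 212},
    {"worker_id": "w2", "age": 51, "shift": "early", "items_packed": 185},
    {"worker_id": "w3", "age": 33, "shift": "late",  "items_packed": 198},
    {"worker_id": "w4", "age": 58, "shift": "late",  "items_packed": 176},
]

def rate_by(records, key_fn):
    """Average individual performance figures within each group."""
    groups = defaultdict(list)
    for r in records:
        groups[key_fn(r)].append(r["items_packed"])
    return {k: mean(v) for k, v in groups.items()}

# Data gathered about individuals quietly answers questions about groups:
print(rate_by(records, lambda r: r["shift"]))   # speed of each shift
print(rate_by(records, lambda r: "under 40" if r["age"] < 40 else "40+"))
```

Nothing extra has to be collected: the group profiling falls straight out of data already gathered for ‘individual performance’.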

Or take the impact of algorithmic decisions that may discriminate against people of colour in performance management. Prospect has already had to take up cases of discrimination embedded in supposedly objective systems for evaluating people for promotion. The potential for inequalities to be deepened by hidden biases in complex, ‘black box’ algorithms is even greater, as the case of Amazon’s hiring software demonstrates. The result is discrimination at both individual and group level. This combination of bias and power creates the conditions for systemic breaches of our data rights and lives.
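
As a stylised illustration of the mechanism – every name and number below is invented, and this is not a description of Amazon’s or any other real system – consider how a scoring rule fitted to biased historical decisions simply reproduces them:

```python
# A synthetic sketch of how bias hides inside a 'neutral' scoring rule.
# All data, feature names and figures are invented for illustration only.

# Historical promotion decisions: 'unbroken_tenure' is a proxy feature
# that correlates with (say) never having taken parental leave.
history = [
    {"unbroken_tenure": True,  "promoted": True},
    {"unbroken_tenure": True,  "promoted": True},
    {"unbroken_tenure": True,  "promoted": False},
    {"unbroken_tenure": False, "promoted": False},
    {"unbroken_tenure": False, "promoted": False},
    {"unbroken_tenure": False, "promoted": True},
]

def promotion_rate(records, tenure_flag):
    """Share of the group with this tenure flag who were promoted."""
    group = [r for r in records if r["unbroken_tenure"] == tenure_flag]
    return sum(r["promoted"] for r in group) / len(group)

# A rule fitted to this history simply memorises the historical rates...
learned_score = {
    True:  promotion_rate(history, True),   # ~0.67
    False: promotion_rate(history, False),  # ~0.33
}

# ...and applies them to new candidates, reproducing the old bias
# under a veneer of objectivity.
for candidate in [{"unbroken_tenure": True}, {"unbroken_tenure": False}]:
    print(candidate, "->", learned_score[candidate["unbroken_tenure"]])
```

The rule never sees a protected characteristic, yet it penalises anyone whose tenure was broken by, say, parental leave – discrimination laundered through a proxy feature.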

How COVID-19 is accelerating these changes

The use of technology to contain COVID-19 shifts the digital workspace towards datafication and surveillance. The worry in many quarters is that we could be sleepwalking into further surveillance without safeguards in place. There is already a major reshaping of work underway – the question is whether we are adapting to these technologies rather than the technologies adapting to us, and who is making the decisions on this.

The widespread move to homeworking during lockdown has meant that video conferencing and other digital technologies are being used more and more. This accelerated advance of digital technologies into workspaces means many more employers will have the potential to accumulate huge amounts of data on their workforces.

The debate shouldn’t just be about what video conferencing platform is the most secure for companies, but about what the fair and appropriate relationships are between workers and their supervisors, between spaces of autonomy and lines of accountability, and between workers’ private and domestic lives at home and the hours they give to work.

A recent Institute for the Future of Work report said that employers were ‘panic-buying’ monitoring systems as their employees shifted to remote homeworking. The availability of keyboard tracking, video monitoring and work engagement tools opens up further opportunities to monitor performance and control behaviour.

Many of these tools bring benefits in enabling workers to keep connected and to check in on wellbeing. In industries like construction or engineering, where physical proximity is more often needed for certain activities, digital technology can help ensure social distancing remains in place.

Wearing a hi-viz jacket is commonplace in some jobs, and wearable devices that can detect the presence of dangerous substances and keep people safe have been around for a while. But if the same jacket also detects whether you are observing social distancing rules at work and sends your employer data tracking who you congregate with and for how long, doesn’t that shift the boundaries on data and workers’ rights? What happens post-crisis if an employer decides to keep on monitoring?
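
To see how quickly that boundary shifts, here is a hedged sketch – assuming a hypothetical proximity sensor and invented names, not any real device’s API – of how distancing pings could accumulate into a record of who spent time with whom:

```python
# A sketch of how proximity pings from a connected hi-viz jacket could
# accumulate into a contact record. All names and events are hypothetical;
# this illustrates the data flow, not any real device.
from collections import defaultdict

# Each event: two jackets detected each other for `seconds`.
proximity_events = [
    ("alice", "bob",   90),
    ("alice", "bob",  120),
    ("alice", "carol", 30),
    ("bob",   "carol", 45),
]

# Total up time spent together, per pair of workers.
contact_time = defaultdict(int)
for a, b, seconds in proximity_events:
    pair = tuple(sorted((a, b)))   # order-independent pair key
    contact_time[pair] += seconds

# What began as a distancing alert is now a log of who congregates with whom:
for pair, total in sorted(contact_time.items()):
    print(pair, f"{total}s together")
```

A feature sold as a safety alert doubles, with no extra hardware, as a social graph of the workforce.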

As lockdown is lifted, the turn to contact tracing may add a whole new layer of data being accumulated about where we go and what we do. The work setting is a critical arena to get right, as the inherent asymmetry of power between employer and employee creates distinct opportunities for abuse, breaches of rights and discrimination. Earlier this month the ICO issued new guidance for employers on workplace testing, but it was largely silent on the role of consultation with workers in negotiating the balance between privacy and legitimate safety needs.

It has been positive to see a number of suggestions in recent weeks about how we get this balance right. There is a welcome briefing from the European Trade Union Institute (ETUI) on contact tracing apps. The work of Lilian Edwards and team on the Coronavirus (Safeguards) Bill has set out a framework for examining data use in a crisis. Christina Colclough and Valerio De Stefano argue strongly for the role of data rights and workers’ voices in testing and tracing. Will workers be pressured to prove that they are using the app? Or to hand over their data to employers? What boundaries should be set on how data from workplace testing is collected and stored, and for how long? Getting this balance right is important for trust, transparency and buy-in, especially if we want to avoid the negative consequences of a discussion around testing or immunity passports becoming about the relative status of infected workers, who could be denied work unless they disclose their health data.

Future of work: enforcing consent and data rights in surveillance

We need solutions that can harness the benefits of digital technology while protecting privacy and workers’ rights. How do we harness the value of data for public health without sleepwalking into a world of automated surveillance and decision-making over which we have no control? GDPR and privacy rules exist for a purpose, and a crisis should not be a reason to ignore them. We need to be careful that the precedent we set now, particularly in the work setting, does not become a screen for permanent disempowerment.

Here are four areas that can help address these risks:

  1. We need to make better use of existing legal tools to test and scrutinise surveillance technologies, and to ensure there is guidance on how existing laws can assess and protect collective data rights. The legal basis for this exists: Article 88 of the GDPR is explicit about the fundamental rights of workers over how their data is processed for recruitment, performance and equality. Article 35 lays out the scope for Data Protection Impact Assessments (DPIAs), including consultation with data subjects and their representatives. Unions and workers’ representatives should be consulted by employers before new data processes, including surveillance technology, are introduced. Similarly, Prospect is working with the Institute for the Future of Work on how the Equality Act could be applied to new workplace technology.
  2. Building trust. Winning public support is necessary to get people to use these apps and to make them effective, and trust is all-important if tech is to be adopted safely and at a level of participation that makes it effective. This applies to centralised or proprietary contact tracing apps as much as it does to tech in the workplace. How do we ensure oversight and find ways to get the public involved? This is a case that the Ada Lovelace Institute regularly makes.
  3. Discussing data rights as collective rights in the workplace. If we do not understand the contractual relationship at the heart of employment, it will be difficult to address the scope for discrimination and breaches of group privacy rights. This should be part of a renewed policy focus post-crisis on how worker consultation and unions could be involved in establishing acceptable boundaries for monitoring technology at work.
  4. Data governance as a public good. The debate about contact tracing apps and COVID-19 data highlights again the prevalence of private companies managing our data. Much of the data we need for public decisions – on our movements, our health, our energy – is held in private hands. How do we make this a public good? How do we create a data infrastructure that builds in transparency and control, not just the idea of individual consent? How do we ensure citizens – and, in this case, workers – are actively involved in setting the rules around data?

Privacy, rights and collective empowerment should not be sacrificed in the campaign to contain COVID-19. This isn’t a zero-sum game: public health and privacy rights should be seen not as opposites, but as complementary. Getting this right is the only way to unlock the full potential of new technology, both to protect our health and to rebuild and redesign our economy for the benefit of all.