The second of two virtual events sharing the findings and what we learned from a rapid online deliberation project run with Traverse, Involve and Bang the Table to explore public attitudes to COVID-19 exit strategies. The project was an experiment in working exclusively online, at the same pace as the most rapidly evolving policy process we’ve seen in decades.
This post summarises the key points of discussion and debate from the webinar, which you can watch in full below:
Governments across the world have opted to develop technologies to help deliver their ‘exit strategies’ from the COVID-19 crisis. In the UK, these technologies include digital contact tracing, symptom tracking and immunity certification, as well as changes to data governance and new relationships between public-sector institutions and private-sector providers.
In response, civil society organisations, academic researchers and privacy experts have been calling for a wider public and societal debate about these technologies. But convening that debate poses particular challenges – while it has never been more important to engage the public, it has also never been so difficult, both because of the pace at which these technologies are being proposed and introduced, and because of the practical constraints of social distancing imposed by lockdown.
Drawing on emerging findings from the Ada Lovelace Institute’s ‘rapid online deliberation’ in May 2020, in collaboration with Traverse, Involve and Bang The Table, we explore what deliberating online can tell us about public perspectives on COVID-19 technologies – and about deliberation in exceptional circumstances.
Anna McKeon, Director, Traverse
Natalie Banner, Understanding Patient Data Lead, Wellcome Trust
Russell Gormley, Research participant
Mandy Phillips, Research participant
This webinar reflects on some of the emerging findings from an online deliberative process that we hosted over a period of three weeks, convening seven workshops with a range of participants from all walks of life to think about what would build public confidence in the use of COVID-19 technologies.
The Ada Lovelace Institute worked on the content, provided subject matter expertise in this space, and is also working to translate the findings from this work into policy impact.
Traverse are an independent research and engagement consultancy and they designed and delivered the deliberation process.
Involve are an independent public participation organisation, who offered us challenge and feedback on the process.
And Bang the Table are an organisation that enabled a slightly different type of online engagement: reflecting through journals and storytelling on an online platform called Engagement HQ.
The process was run considering the question, ‘what would help build public confidence in the use of COVID-19 exit strategy technologies?’ We were thinking about a range of technologies, not just a digital contact tracing app, which of course has been the subject of much public debate, but also symptom tracking and a range of data architectures and infrastructures that are being used to understand and respond to COVID-19.
One of the things I’d like to flag before we welcome our speakers and our panellists is the nature of the timeframe within which this online public deliberation happened, and this is what we think makes it quite unusual. It was shortly after the Isle of Wight contact tracing trial began that we started these deliberations. And then midway through the process, the Dominic Cummings story broke. We then had the death of George Floyd and protests erupting across the USA.
During the course of this deliberative process, many participants were reflecting on some of these events and changes, and one of the really interesting aspects of this process was the extent to which there was a fluidity in the way that people engaged with and thought about that.
Russell and Mandy join us on the panel today and they were involved as participants in this process. What was really striking from our perspective was the whole system emphasis that this deliberation generated. The extent to which people recognised the challenges and the limitations of simply focusing on the technology, rather than thinking about how the technology was situated within a hotly contested conversation about what it meant for the UK Government to leave lockdown and how it would impact upon different groups.
Russell and Mandy, what did you find interesting about the process and what did you feel the key insights and findings were?
I really enjoyed the format proposed. I thought it was good for people to be able to connect in specific areas around the country without leaving their house. I do think this will probably be the new norm in the future of how people talk to each other and in business and commerce.
I think the bubble of being able to be at home and being able to project your thoughts safely is a huge plus point. Sometimes, face-to-face, I’m not always good at projecting what I want to, but I felt very safe in being able to give my opinion. The presentations were phenomenally useful. They were clear and concise, which then led to better debate. I would have been happy to carry on and debate longer in our breakout groups; sometimes I felt we got to pivotal points and we didn’t quite clear up everything we needed to. I did feel at a couple of points that we didn’t quite have all the facts that we might have needed to make informed decisions. I appreciate that was quite difficult with what was going on at the time.
I’m from the hospitality industry and following on from Tuesday’s announcements regarding the reopening of hospitality, it has emerged that businesses will be required to log all people entering their establishments to aid with track and trace. This was obviously a primitive form of what I was hoping for when I started this process. I was hoping that I might be able to see the technology of an app that could get me back to doing weddings and events quickly. Some sort of tracing app that would be linked to the main app with a medical certificate to help get more people into a smaller space again. I’ve seen that there’s a bit of panic already about the data handling and I’m a little bit puzzled by this, because we do this every day. You book a table in a restaurant, and they ask for your telephone number, your address, some restaurants take your credit card details. It’s just a slight extension of this. And if we could help save lives through that, it is a small price to pay.
On what we learnt, the main thing is people want to feel safe, both physically and digitally. The word that came up over and over again over the course of the three weeks was transparency. I think there’s a lot of distrust and it comes from a lack of education from the public in terms of data handling. Some of it almost feels a little bit cloak-and-dagger and I feel the Government missed an opportunity at the beginning to be as transparent as possible.
I know my data is out there and I’m quite comfortable with it, so I came at it from quite a blasé point of view. But there were other people in the group who weren’t as willing to give their data up.
Finally, we learned that there is no perfect solution. We won’t please everyone, but we should try if we can. If we can sort out transparency, there was a good feeling to have a go and this could be the key to opening society fully again.
I feel very passionately that technology should be used more for the benefit of society. I’ve spent a lot of time on the commercial side of things, and probably my last three big projects have been marketing, number-crunching, shopping-habit-buying type systems. I felt that the technology was there in some form that could support us through this process, but I quickly discovered that some of the assumptions around the technology, i.e. Bluetooth and whether it would be suitable for this, seemed to have been ignored.
I was very intrigued about people’s views around the data and data-sharing. A lot of people I know have got public profiles on Facebook, they’re sharing their address, their date of birth, the information about where they’ve worked, where they went to school, which is actually very powerful in the hands of a scammer. I was quite relaxed about my data. For me, what was interesting on that side was how little people understood about the GDPR legislation. A lot of the concerns people had about looking after the data, how it was shared and how it was used should be covered by that legislation, so I felt quite comfortable around that.
One thing I felt very strongly about was accessibility, because the whole app business relies on people having a phone, and a phone that would run the software. That is not the case for many people. A lot of people would have been left out of this process, and I felt quite strongly that whoever it was who designed this application or architecture wasn’t really addressing that.
In terms of the impact and effectiveness of the app, we have not seen the project specification and aren’t aware of what ‘good’ looks like. Through my IT contacts, I’d heard very early on that things weren’t all well in this process, so I’d been doing a lot of reading around it, and by the time we got to the end of the process those of us who had concerns were proved right.
For me, from a technical background, it was great being in an audience where most people weren’t technical. Many people had concerns and issues, and they were quite justified. It taught me, as somebody from the tech industry, how we fail in these types of projects because we use language that the average person on the street just doesn’t understand.
In terms of the deliberation process, I think it’s fantastic, and I’d like to see this type of process also being used in commercial environments, because as we move forward, this is probably going to become the new normal for how we get around the table and discuss things.
It was also interesting that a lot of us who signed up for this had been in lockdown, in the same four walls for 12 weeks, and I thought, “Fantastic, an opportunity to meet new people.” It was great. I found it very beneficial and, in fact, the app itself, for me, started to recede into the distance, and I started to enjoy the conversation. I spend a lot of time looking at technology and tools that can help me look after my dad, and I just think that this type of deliberation around what we can do in many related fields is interesting. I would certainly get involved again and I would recommend to anyone who gets the opportunity to join in.
Natalie, you joined this project as one of the independent experts. Would you be able to offer some reflections?
I’ll talk a little bit about the methodology and the process and then come on to some reflections on the content. First, kudos to the Ada Lovelace Institute, Traverse, Involve and Bang the Table for managing to pull something together so quickly. The world was reeling from the pandemic and its implications, and it took a great deal of foresight to be able to say, “Hang on a minute, we have the capacity to do something here and turn it around quickly.” It was a huge challenge, logistically and in terms of design, to create a process like this, and also to develop and refine the questions to make sure they could make a meaningful contribution to a very important area of debate and discussion, especially in light of how politically charged and rapidly evolving the whole environment is.
I wanted to reflect on the comparison with other events that we’ve run in the past at Understanding Patient Data. Last year, we published some public attitudes research which explored how people felt about NHS organisations partnering with third parties when it came to data. By comparison, that process took months. It took a very long time to refine the research questions, and to explore and have lots of back and forth with a wide range of stakeholders on what the key questions and priorities were. We also spent a long time developing case studies and materials to present to people.
And our citizens’ jury deliberative process, which ran over a compressed timeframe of two and a half days, had several rounds of iteration: we paid lots of attention to whether the materials were fair and balanced and presented the case impartially, and we spent a lot of time thinking about what questions people might have and trying to ensure we had all the materials to answer those questions. That whole process was quite involved and quite intense, and there was a lot of consideration about how to frame the questions, exactly what to ask and where we thought it could be most useful to inform policy. None of that is possible with a quick turnaround.
But in a sense, it was quite refreshing to participate in the discussions on that basis, because frankly the participants were as knowledgeable as the rest of us. I provided some input as someone who has some knowledge of the app but also the broader space around how data, and particularly health data, is being used, and by all sorts of different types of organisations – the NHS but also commercial providers as well. But I certainly didn’t have answers to all the questions that people posed to me and it did feel like there was a sense of, “Well, we are all trying to navigate this really complex, rapidly evolving environment together” and things were constantly shifting. I think that made the process much more reflective of the usual decision-making context that happens when it comes to designing policy and governance and developing initiatives for the use of data.
Sometimes in these processes you can end up with a really idealistic wish list of things that you know in practice are not going to be feasible, financially or in terms of the buy-in you are going to get. The things people want in an ideal world – much greater information provision, information campaigns, more accountability structures – are not necessarily things that can be picked up. Whereas in this process, because we were all in a situation of only partial knowledge and lots of unanswered questions, it did feel messier, but much more realistic and therefore much more reflective of what might be doable in practice.
But there is a major challenge in trying to operate these kinds of deliberative processes in an environment that’s evolving so quickly, when we are trying to ask difficult societal questions about the role of technology before we know what the technology can do. The effectiveness of the technology is very important in considering trade-offs, risks, benefits, what you would or wouldn’t be willing to do, and who you would or wouldn’t be willing to provide data to. If it’s not going to work as advertised, that will affect how you make those trade-offs and what you consider acceptable or not.
There is a risk, when you are trying to run these processes in such a fast-paced environment where you don’t yet understand or have evidence of what the technology can actually do, that you get a little swept up in hypotheticals and conditionals. When it is too early to tell what the tech can do, you might lose the possibility of having the kind of impact you are trying to achieve. The process might be helpful to policymakers and developers in terms of the broad principles of what is necessary for, in this case, public confidence in these technologies, but you are unable to get to that more granular level because you don’t have the evidence base.
You can have a very fascinating and important deliberation that does have value in and of itself, but that might not end up having the influence you intend. As we saw from the app, during this process it wasn’t possible to get that kind of fine detailed insight into what people would or would not find acceptable. That is one of the challenges of trying to operate in this space so quickly when the evidence base is changing quite rapidly.
Those are thoughtful reflections and observations on the difference between the types of deliberative processes that you run. There is an interesting question about the design of these spaces in an online context and some of the challenges and limitations around them. Anna, can you talk a little bit about the design of this process, which was quite unusual?
I’m just going to add a few reflections about the use of technology in creating and designing civic or democratic spaces.
So in this project, what we really did was replace one space – an events space in a community hall, for example – with three different online civic spaces. The reason is that we were running this as a deliberative process: participants needed to be able to access information, reflect on it, share their views, hear views different to their own, and then discuss all of that together. We needed multiple online spaces in order to do that.
Within a usual process like this, you might have one full day or maybe two full days, but we split it out across shorter sessions over three weeks. We had seven sessions, and participants had the opportunity to use the online platform in the interim. While it was certainly challenging to steer the course of the discussion, we did have a bit of flexibility to be able to be responsive and reactive. This meant that we didn’t need to have decided everything we were going to do at the beginning. We could adapt.
In terms of the spaces that we created, the first was the plenary space, which we ran on Zoom. These sessions were more akin to a webinar, where guest speakers gave presentations. However, the civic space within that was really the chat box: participants commented on what they were hearing, challenged it and built on each other’s questions. It was a little bit like Gogglebox, because usually we don’t hear people’s narratives.
The second space was a discussion group of eight people, also on Zoom, where people talked together and used the chat box. This was a more traditional discussion space.
The third space was the Engagement HQ platform, where people commented on each other’s ideas, liked them and shared them, which enabled asynchronous contributions.
This allowed us to do some new things, but it also constrained us. For example, the informal interactions that are still a key part of our civic spaces – the chats over coffee, the one-to-one exchanges – are not as possible using online technologies like this. Additionally, it was much more challenging to work collaboratively with participants on materials. There are platforms like Google Jamboard and Miro, but we were aware that people might be joining using just their phone and wouldn’t necessarily have the set-up to participate across multiple platforms, so we didn’t want to do this.
Finally, not everyone is online or confident with technology, and that is a limitation. However, the overarching lesson we took away, both from the findings and from how we delivered the process, is that focusing the conversation on the specific technology is too narrow. One of the key things we learned from participants is that an app is not as important as having trust in, and understanding of, the system that lies behind it. Similarly, our focus with civic tech shouldn’t just be on the tech, or on the ways in which it differs from face-to-face work. We need to look at how our civic processes, systems and spaces work, and how they include and exclude people more generally.
So, one reflection we had at Traverse was that civic tech has developed alongside traditional engagement practice, rather than always disrupting it. Everyone can log on and see council meetings streamed live these days, but I’m not sure that’s really changed how council meetings work. It’s important to provide people with data-enabled tablets, support them in using the platforms we’re experimenting with, and enable digital inclusion, but that won’t be successful if people have no trust in the process they’re participating in, and don’t see the value it gives to them and their communities.
The question that I’m left with, after running this project with all these brilliant collaborators who have given so much over the last few weeks is how we can use this opportunity not just to replicate things online, but to use technology to transform our civic and democratic processes for the better, and to interrogate the views and structures that resist some of those changes.
For Mandy and Russell: through the process, was there a reflection on public sharing of data, depending on whether it’s going to a private company or the state? And any differences in views on that?
I think there was a lot of worry about private companies – there was a particular guy in our group who came out with names of private companies that he thought might be involved – and I think there was a real worry that that would be the case. I think a more centralised system would probably be better.
Can you say a little bit more about the debates that led to that view?
I suppose because they weren’t sure, once it goes behind a closed door, what is going to happen with their data. What are their agendas with the data, and are they going to make money from the data?
Having seen various blogs and Twitter feeds around the app and data during this process, a lot of these concerns about data seem to be driven by some bizarre conspiracy theories. I feel reasonably comfortable when it comes to some aspects of commercial versus government. My concern is that whoever it is, the framework needs to be in place to protect our information, and to look after it, and to adhere to appropriate legislation. I know that certainly last year a lot of us got upset when our Health Minister managed to sell off loads of our data and I think that upset and made people feel very uncomfortable.
There’s an interesting question which relates to building confidence and competencies when it comes to the use of technology in deliberative discussion and how we best do that, and indeed whether that is an important thing to do.
It is important. We would have liked to do more of it. I mentioned briefly when I was speaking before that supporting people who are digitally excluded – providing them with the technology and onboarding them just to be able to use it in their lives, not just to take part in a process like this – would have been something we would have loved to do as a forerunner to this project. We were constrained by time and resources in this instance, but it is something that we’re looking at in our work more generally and believe is important.
What we tried to do within this project, within those constraints, was to keep the platforms as simple as we could. We didn’t work across multiple platforms, we chose a video conferencing platform that we thought was most likely to have been used by people already, we didn’t ask people to do too many things at the same time, and we tried to encourage people to feel comfortable in their surroundings as much as possible. Usually people had their video on, sometimes they had it off, and if there were interruptions – kids, dogs, that kind of thing – it wasn’t an issue, and we talked people through any technical issues as we went. We found that while some people had connectivity issues, which occasionally meant they dropped out of sessions, it wasn’t a barrier to them completing the entire process: an issue might happen one day, but we tended to be able to fix it for the next time. There’s always more you can do, and in this project we barely scratched the surface.
In trying to build confidence and competence for participating in this sort of process and, more broadly, around digital skills and participation, there is a real challenge if you’re not starting from where people are: the questions they have, what matters to them and what’s important to them. You can flip that process on its head and say, “Well, what matters to you, and are digital tools and technologies and participation going to help you address and manage the things that you care about?” Then I think you’ve got a much better starting point for engaging with people and creating tools and processes that are going to be beneficial for them.
Mandy and Russell: if you had the opportunity, what is the one key thing from the deliberative process that you would want to pitch to a policymaker or politician?
With my project manager hat on, if I had the chance to sit down in front of these people I would like to understand what their planning process is, because we’ve not seen any plan, we’ve not seen any briefing document and normally any project would have a brief that would identify objectives, requirements, risks, issues, mitigation. If that was made available, I think people would have more confidence in what they were trying to do. I would ask them to be transparent about their decision-making processes and the thinking behind this app or any future apps.
I would probably want to ask someone whether, if this is still actually going to happen, this track and trace system can help society get back on its feet. Because it feels like it has stopped, and if anything it is going backwards. For this period now, but also looking into the future: if this happens again, or there is a second wave, are we going to be ready for that?
We’ve got a question about whether the public have an understanding of decentralised versus centralised approaches, and any reflections or observations about that part of the process.
I would suggest that, perhaps, the general public didn’t. It never actually featured very highly in my own thinking.
We’ve got some questions about how people were supported to get on to the platform and the technical issues encountered during the participation, any lessons learned and whether the discussion was being recorded and how people felt about being on the record.
In terms of joining the platform we sent joining instructions with screenshots over email. It is a relatively intuitive platform and we basically said at any point if you have any problems email us and let us know. We didn’t have anybody emailing us to say that they were struggling to join the platform so that was remarkably smooth.
In terms of the technical issues that we encountered, we asked participants whether they found any barriers to participation in general, and the only barrier people mentioned was connectivity issues, which sometimes meant that they struggled to join a Zoom call, or that they were on the call and things were going in and out and they missed things. What we tried to do to cover that was to have live note-taking in the plenary sessions and in the discussion sessions, with the slides and notes available on the online platform for people to look at in their own time. Some people liked that because they found it a lot to absorb there and then; they wanted to go back and look at it and appreciated the chance to do so. And if people had to miss a session for whatever reason, they could then catch up.
In terms of helping people overcome connectivity issues, other than addressing the actual technology they have in the home, I think giving people the opportunity to find the information and go back to the information in their own time was helpful for that. We recorded the plenary sessions but not the discussion sessions. Instead, we had note-takers on the call with us, who were taking verbatim notes at the same time, so we had effectively an immediate transcription available for our analysis purposes and for the participants to look back on.
We’ve got a question which is whether the lack of evidence and information during the deliberation process is a challenge to policy-makers to be more open and provide the information needed to evaluate and get to grips with and inform public debate about any new technology. And how does, or how could, that work when the policy and technology is being developed in tandem?
This is a huge challenge, especially when things are being developed in tandem. My own view is that the worst thing to do is to say, “Don’t worry, we’ve got it covered, we’ll tell you when we’ve figured everything out.” I don’t think that is an environment that is conducive to trust. In a context in which you are developing things that are going to iterate rapidly, where you are going to have to make decisions quickly, possibly pivot those decisions, and change your mind because in the light of new evidence there is a shift in the direction of travel, you can only do that if you’ve created an environment, or architecture, of trust and trustworthiness. It’s about saying at the outset: “Right, here is something we’re going to try. We don’t know if it is going to work, it’s a bit of an experiment, but we want to try to develop it in a way that takes account of the potential for exclusion and is mindful of inequalities.”
If you can create that environment at the outset, I imagine most people would think, OK, give it a go, and if you make mistakes we will understand. But if you don’t set that environment out at the start, it makes it incredibly difficult. You are accused of U-turns when you change your mind, rather than being able to say this is a natural part of an iterative process, that we are shifting direction and doing things slightly differently. I don’t think this Government has taken on that idea of openness and transparency and caring about trustworthiness as a feature of the system. The focus has been on a world-beating system that is going to be in place by May, June, July? That is toxic from the perspective of citizens’ trust. Creating an environment that is more conducive to trust at the outset is critical when you are developing policy and technology in tandem with one another.
How would you compare the use of digital technologies and its relationship to public trust in comparison to previous large-scale crises, for instance, after 9/11 or the financial crash in 2008?
There are several things about the evolution and development of digital technologies and public trust that make this different from other crisis situations. One is that, although this stuff is developing rapidly, it’s still a relatively slow burn compared to a sudden crisis such as 9/11 or the financial crash.
Data has been used by companies and governments for a decade or more. We are slowly waking up to the fact that data is being used in all sorts of ways we might not have anticipated, but in a sense the problems are not new. They might be more prominent, and their profile has certainly risen in light of COVID-19, but they haven’t come as a bolt from the blue, and a huge amount of work has been done on questions around privacy and equity. Those questions haven’t always been acted on, and some of the discussions emerging at the moment, particularly about racist technologies and algorithmic bias, are very pertinent, but in a sense those problems are not completely new.
The other thing I would say is the really interesting contrast, or perhaps lack of contrast, between the role of commercial providers and that of governments and health systems in all of this. Historically, there have been big questions around how companies manage and use data – we’ve learned from things like the Cambridge Analytica scandal that data brokering and data harvesting is really big business – but typically government, and particularly the health system, has been more trusted with data. I wonder if that is now being challenged, with questions around government surveillance and so on; maybe the difference between commercial providers and government providers is not as stark as it used to be. That is perhaps a difference from previous crises, where we would turn to government for leadership in this space. Maybe people are questioning that a little bit more now.
The conversation about trust exploded when Dominic Cummings went to Barnard Castle, and that Monday, after the press conference in the garden, we had a discussion group. The conversation then wasn’t about the technology; it was about, ‘are we all in this together?’ It was also about transparency and the behaviour of our leaders. For me it’s about the underpinning infrastructure, systems and leaders, and the trust that is inherent in those. If that is there, then trust in the technology would probably follow.
We will be publishing the findings from this work, and we’re having conversations with a range of government organisations about the insights from this work in order to influence and shape and change policy, across the UK.
There will also be a policy stakeholder roundtable, so do get in touch with us if that’s something you’re interested in contributing to, and participating in, and learning more from the published findings of this work.