On Monday 8 June, we were delighted to kick off CogX 2020 by curating the first day of the Ethics & Society stage. 25 speakers across six panels joined us to tackle the knotty, real-life trade-offs of benefits and harms that emerging technologies bring to people and society. This is a summary of the fourth panel of the day – Investigating and Prosecuting AI – which you can watch in full below:
Martha Spurrier, Director, Liberty
Adam Satariano, Technology Correspondent, The New York Times
Cori Crider, Director, Foxglove
Ravi Naik, Legal Director, AWO
This session is all about questions of accountability in the context of tech. The panellists will share their experience of being on the frontline of trying to bring transparency and accountability to the fast-paced tech world. Liberty is a human rights organisation, and we work on tech and data rights alongside lawyers, campaigners, journalists and academics. We are all trying to grapple with how you have meaningful tools of accountability and transparency in technology, where the pace of change is very fast, the level of literacy in our democratic institutions is relatively low, and old systems of regulation and accountability are simply not muscular enough to keep up with technological developments. What we’re often seeing is tech solutions exacerbating existing inequality, injustice and unfairness.
I’d like to hear from the panellists about the tools and strategies they use to expose some of those inequities, and about the systems and processes we can put in place to make sure tech is much fairer for everybody.
Many of these challenges have been brought to the fore by the COVID-19 crisis: we’re seeing tech offered up as a grand solution to a public health problem, but it is also a huge barrier to accessing solutions to public health issues.
One barrier is that big business is much more involved in this space than in the other areas of public health and public decision making in which lawyers and activists are used to working. Tech is often difficult and impenetrable, and that, in turn, inhibits democratic participation, whether by the public, parliamentarians, activists or lawyers.
Much of the tech is also shielded by a corporate veil, contained within a black box, and so what we traditionally understand accountability and transparency to be is not realisable in the same way as in other contexts we are used to working in.
I want to start by considering what prosecuting AI means in practice and where there are gaps that may need to be plugged – essentially, a question of accountability. To be clear, regulations around technology are not new. We can look back to a long history of legislation designed to contain and provide accountability for digital automation – for example, the 1974 Privacy Act, created in the shadow of the Watergate scandal amid concern about how the creation and use of computerised databases might impact individual privacy rights. Today we have the GDPR, which provides a framework for the idea, found in the EU Charter of Fundamental Rights, that data protection is a fundamental right in and of itself. The GDPR is an evolution of decades of data protection law and now provides a detailed and coherent charter of rights for individuals against those who control their data. I have been making use of that framework to bring pressure to bear on modern technologies as we start to understand them.
Our most notable case is our proceedings for Professor David Carroll against Cambridge Analytica. Cambridge Analytica was a small political consultancy in London said to have conducted huge data processing operations for political campaigns. It remains quite unclear what they did, but Professor Carroll was able to use the data protection regime that preceded the GDPR to access his information. He received some limited information, and that was quite illuminating because it showed that the company had been processing data about him and his political opinions. We argued that the disclosure was incomplete and that the processing of his opinions did not have a sufficient legal underpinning to be lawful. The ICO agreed with our position and ultimately prosecuted the company.
Since then, I have co-established a data rights agency called AWO, and our entire caseload is about modern technology and holding that technology to account. For example, we led the regulatory complaint about ad tech which resulted in the ICO reporting that the industry is essentially unable to comply with the GDPR in its current form. We also act in a case about the data black hole that is Google, in which we challenged Google to explain what it does with data once it enters the Google complex. Our client, Dr Johnny Ryan, was concerned that there is no limit to what Google could do once the companies under the Google banner receive your personal information, creating an effective data free-for-all within Google. When we pushed Google on what it does with data and what limits it places on its use, Google refused to adequately explain. This is now subject to a regulatory complaint before the Data Protection Commissioner in Ireland.
Most recently, AWO was instructed, alongside a counsel team led by Matthew Ryder QC, to produce a legal opinion on the human rights implications of the tech responses to coronavirus. We have also been instructed by Open Rights Group to challenge the lack of safeguards around the implementation of the test and trace system in the UK, and other measures such as data sharing. Other organisations, such as Foxglove, have done the same.
More broadly, the capacity of the regulation is yet to be fully tested. There are enforcement measures in the GDPR which are very broad – orders to stop processing, for example. Such orders are quite dramatic and could shift an industry and change its entire practices.
These are just some examples of using the GDPR as leverage against modern technology and modern power. The scope of that regulation to engender change is quite vast. Most critiques of this regime concern an enforcement deficit rather than substantive problems.
Most cases that we’re involved in concern quite rudimentary data processing activities. Even with quite sophisticated tech, you can still normally identify the controller of that information and bring accountability to bear. We are not really seeing cases where the tech itself is the problem, but rather cases about how tech gets used by those controlling it. There is a chain of causation for mistakes and accountability. But the point will come when the relationship between an individual affected by technology and an individual controlling that technology breaks down. What happens when that controller is not identifiable? This is a somewhat abstract notion. Google argued, for example, that ‘search’ operates without any human input and that it therefore should not be responsible for what search does. The courts rejected that argument. But what happens with more sophisticated technology? That is where an accountability deficit starts to ensue. As we move to a future of data solutionism, we need to address these problems head on.
Adam Satariano writes on a whole range of issues related to tech, privacy and disinformation, from both a European and a US perspective. One of Adam’s most notable pieces exposed the way in which algorithmic decision making now makes decisions central to everybody’s freedom. It draws on the work of Virginia Eubanks on algorithms in the welfare system, but also goes wider, looking at AI in the criminal justice system and in employment.
I’ll talk a little bit about my arc as a journalist, because it illustrates how coverage of technology has changed over the years. I first started writing about technology about nine years ago in Silicon Valley, where my main responsibility was covering Apple and some of the other big companies. At the time, coverage was about the gadgets, and there was incredible excitement about the apps being created. There was a great sense of promise around technology.
But now we see how quickly things have changed and the role of journalism around technology has changed with it. As these companies have gotten so much bigger and have so much influence on our society, they should be covered as such. That’s one thing that I see as my responsibility as a journalist and something we try and do at the New York Times. I work with a number of colleagues who cover technology and we look at technology through this prism that technology is changing the way we live and work and how people, governments and companies are responding to that change.
AI is just one branch of that, and one of the most challenging ones, for many of the reasons already covered. One is its complexity, which often makes it inaccessible – hard for a reader to grasp or for a journalist to convey what is going on. Another is that a lot of this work is done by private companies that are very opaque about how their decision making works. The third thread is trying to find the people who are being affected by it.
Cori has founded her own NGO, Foxglove, which focuses specifically on the abuse of power through technology, standing up to the tech giants on behalf of the user, the worker and the public good.
Martha and I started Foxglove because, having done national security work for over a decade, we felt that the next big threat to social justice was basically concentrated tech power, be it in the Facebooks and Googles or the governments of the world. We’re a non-profit organisation investigating and litigating those issues.
If I had a slide today, I would have put up the Wizard of Oz – the Wizard behind the curtain – and said that what we all need to do is pay attention to the man behind the curtain. Early in the debates, when AI started to become politicised, you would hear tons of discussion about ethics and AI, differential privacy, explainable algorithms and so on. Quite technical debates. Those debates are important, but at this moment, with the use of mass data systems by the police, by the government and by these corporations, that is not actually the core issue. The core issue is the man behind the curtain. It’s all of us. The problem is the balance of power in our society. The roll-out of these systems for the acquisition and use of information about all of us on a mass scale is driven by people. Not by algorithms by themselves, but by people. Those people have values that get baked into the algorithms, just as they would into any piece of law or policy paper. Those values are not going to privilege everyone equally.
So, what does that mean in the UK today?
There is an effort to maintain a power imbalance and asymmetry that permits the aggregation of more and more information about you and me and everybody else, while behind these systems, behind the wall, we know less and less about them.
One recent example comes from a case that Foxglove brought. At the height of the coronavirus crisis, the Government announced that it would create a COVID-19 data store to provide a single source of truth about the pandemic. What that meant was that, instead of going through a public tender process in which companies bid, contracts were awarded overnight to some of the biggest names in tech, such as Microsoft and Amazon Web Services, and to some lesser known but pretty concerning companies such as FacultyAI and Palantir. FacultyAI is the data science company best known for running operations for Vote Leave, and it has now won eight contracts with the Government worth 1.8 million pounds in the last 18 months. Palantir is a data aggregation and data crunching security firm that cut its teeth assisting operations in Iraq and Afghanistan, has contracts with US police forces, and came under intense fire last year for supporting Immigration and Customs Enforcement’s regime of deportations and family separations at the US border. That was the company contracted by the NHS to provide a ‘single source of truth about the pandemic’.
We had questions about these partnerships:
- What are the terms under which these companies are getting access to the data?
- How are we sure they aren’t taking the NHS for a ride?
- There is a deep question about public value for public assets. NHS data is supposed to be worth 10 billion pounds per year. How do we make sure that the public value for the public asset was protected?
- Then there are all the core questions about privacy that everybody has. There have been prior problems with NHS data not being kept simply for public health purposes, but being used to enforce things like the hostile environment. How are these systems going to be anonymised?
- Finally, there is a question of moral fitness and moral hazard. If a company like Palantir, whose bread and butter since its inception has been supporting the military and the intelligence services, now wants to pivot to health because there is more money in health, is that a fit and proper partner for our national health service?
We started that debate by saying the Government needed to disclose the data sharing agreements underpinning this giant data store and data deal. We were about to sue on Friday and, to their credit, they did finally give us redacted versions of some of the documents. We are still analysing them, but there are real problems that show why you need more transparency over these systems. For example, original versions of the contracts apparently released the IP generated by work on the systems to the companies. So companies such as FacultyAI, which exist to ingest data, find patterns in it, build models on top of it and sell them on, could have profited from potentially the largest longitudinal health dataset in the world. After Foxglove’s FOIA request was put in, they modified that: they now say they have relinquished all the IP and returned it to the NHS. This raises questions:
- Did they do it just because the FOIA was there?
- What should the standard be going forward? If there are going to be partnerships with public bodies, what are the terms in which companies are going to get access to that data and how do we preserve public value for a public asset?
You can all go to the Open Democracy website – they are our partners in this – look through the documents, and push for a wider debate about these issues going forward.
The next thing I am going to talk about is bias, and the extent to which racism gets baked into these systems, because racism is out there in the world. With the Joint Council for the Welfare of Immigrants, we brought what I think is probably the first case in the UK about an algorithmic decision-making system. The Home Office admitted that every visa applicant to the UK is graded for risk: green, yellow or red. That grade screens their application and determines the level of scrutiny it gets. They have a list of suspect nationalities who are much more likely to be graded red, and those nationalities tend to be ones with an overwhelming majority of non-white citizens. In our pre-action correspondence we asked the Government: how does a country get on the suspect list? The response was: by negative or adverse events. And what is an adverse event? It turns out to include the denial of a visa. If there is already a race or nationality imbalance in how visas are granted and you feed that data back into your algorithm, it is going to have the same problems as predictive policing systems did in the US. It is going to tell you to do what you have already done before.
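The feedback loop described here can be sketched in a few lines of code. This is a purely illustrative toy model, not a description of the actual Home Office system: the starting denial rates, the amplification factor and the update rule are all assumptions, chosen only to show how feeding past denials back in as the risk signal compounds an initial imbalance between two hypothetical nationalities.

```python
# Illustrative sketch of the feedback loop described above. The starting
# denial rates, the amplification factor and the update rule are all
# hypothetical assumptions, not real Home Office parameters.

def run_feedback_loop(denial_rates, rounds=5, amplification=0.5):
    """Each round, a nationality's risk score is simply its past denial
    rate ("adverse events"); a higher risk score means more scrutiny,
    which pushes the denial rate further up. Rates are capped at 1.0."""
    history = [dict(denial_rates)]
    for _ in range(rounds):
        updated = {}
        for nationality, rate in history[-1].items():
            risk_score = rate  # past denials become the risk signal
            extra_scrutiny = amplification * risk_score
            updated[nationality] = min(1.0, rate * (1 + extra_scrutiny))
        history.append(updated)
    return history

# Two hypothetical nationalities, "A" and "B", starting with a modest gap.
history = run_feedback_loop({"A": 0.10, "B": 0.30})
print("gap at start: %.2f" % (history[0]["B"] - history[0]["A"]))
print("gap after 5 rounds: %.2f" % (history[-1]["B"] - history[-1]["A"]))
```

With these assumed numbers, the initial gap between the two groups roughly triples after five rounds: because the system's only evidence is its own past output, it amplifies whatever imbalance it started with rather than measuring anything about the applicants themselves.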
All of us who have started to realise that these systems have been operating with a democratic deficit for many years are seeking to pull them back into the realm of contested politics and law. Let’s not anthropomorphise these systems. Let’s recognise that it is humans who built them, humans who run them, and humans whom we need to hold to account for them.
Given the imbalance of power, how can you use law and journalism to meaningfully attack the systems of state oppression – whether it is racist policing, Islamophobic counter-terror policy or xenophobic hostile environment policies? How can you get to, break down, expose and challenge the big systems that are allowing these inequalities to be replicated and entrenched at scale?
The key is getting evidence of the wrongdoing and then being able to demonstrate it. For instance, one story involved a programme in the Netherlands that used a software system to determine whether somebody was committing welfare fraud. The government in Rotterdam was using this system, but only in certain neighbourhoods – overwhelmingly low-income, immigrant neighbourhoods. How can you show this system is working; what is the evidence? In many respects the Government could not provide evidence that what it was doing was working. As the story was publicised and more transparency came to light, a court struck down the use of the technology. So it is about having more transparency, and bringing to light how these systems work and how they affect people’s lives.
Ravi, how might the law be used to dismantle some of these structural inequalities?
Getting transparency around what happened is usually very difficult. But it is about looking not only at what the technology has done, but at the humans involved in the process: what decision was made, and how do you hold that decision to account? Then we use the legal frameworks to say that processing needs to be fair, and look at the core of how that decision was made. A core concept in the GDPR is that data should be processed fairly. It is those basic formulations in the law that allow you to have a lot of impact. They allow you to say that what a company is doing is against basic principles and human rights norms. It is quite an open field for us to move against big tech with those basic principles of fairness, equality and rational decision making.
Cori, it feels like part of what you were saying is that law is part of the answer but you’re asking people to get involved in analysing documents and to move towards something more participatory than perhaps the traditional setup of a legal challenge with a client and a lawyer in front of a judge and then a decision that perhaps not many people would be able to access or understand. What do you think law needs alongside it to create the kind of movements for change that would actually result in greater equality, transparency and accountability?
For one, it needs journalists like Adam to help us study and understand what is going on. But it also takes all of us understanding that power is being exercised. Only when people see power being exercised do they act to take it back. With platforms like Facebook or Google, we saw that they amassed state-like power over the public sphere. It is particularly stark now that all of us are at home, but it was true before as well. They built up a user base and an extraordinary power base almost free of regulation. The penny has dropped, and people have now realised that power is being exercised – and we are seeing it in all kinds of ways.
One of the movements I am most excited about is the increasing politicisation of tech workers themselves – the Google walkouts, for example. Now we are starting to see Facebook walkouts, because workers see a direct threat to citizens when the President incites violence on Facebook and Facebook does nothing about it. The workers are the ones who exerted that power. At Foxglove we have helped some content moderators – the people at the front lines of this public discourse – to put out a statement of solidarity with those employees, saying that they are in too precarious an employment situation to walk out, but that they are the ones seeing racism, hate speech and police brutality come up every day, and there is a lot more that Facebook can do. What will persuade an entity like Facebook to change course is everybody collectively letting it know that its conduct is unacceptable.
In a domain such as false information, it is hard to identify a person who has been harmed by misinformation on Facebook, for example. How do you evidence the harm that something like a privacy violation does to an individual, but also to the fabric of democracy and the public good?
In the case of Cambridge Analytica, we were trying to show how the use of data can affect an individual’s participation in democracy. We asked Cambridge Analytica: what data do you have, and what do you do with it? Those simple questions led to transparency around what they were doing – profiling individuals based on their political beliefs. We took that and said it was not fair: it was misusing personal information, taking confidential data and providing it to third parties. We used those traditional human rights frameworks to say to a company that its corporate acts were not lawful and that it had to stop what it was doing. One of the solutions we pushed for in court was an order to stop the company processing data in the way it was. It is those concepts of trying to change behaviour within a company that will lead to a lot of change in the corporate misuse of information.
There is also a policy angle, which is why we set up AWO. We are looking at a bigger issue than what the law alone can do. We have to look at the way the law is shaped, the way the law is applied and the way the law is enforced. The way we use the law is starting to change because the conversation around the law is developing.
It’s a fascinating moment from a policy perspective. You see governments starting to think about how these platforms can be regulated. Something I’m keeping an eye on as I report on this topic is the unintended consequences of these laws. There is an immediate instinct to regulate, but the fallout can be unpredictable, so I am cautious when I see that urge.
The Online Harms Bill in the UK has the potential to really shape this space. It is such a fraught area, and the Bill could have major consequences, including for democratic participation and for access to online services. It is a hard space to regulate because it touches everything we do.