What is the next digital revolution and how can the UK further embrace it to remain a world-leading digital economy? How can industry and government ensure citizens remain central to emerging tech and the changing world?
Ada Lovelace is widely regarded as the world’s first computer programmer. In 1843, aged 27, she wrote an algorithm to run on Charles Babbage’s still-unrealised Analytical Engine.
Yet her contribution to the discipline of computer science – and indeed, to the digital revolution we are living through today – was even more fundamental. She looked at Babbage’s prototypical computer, intended by its inventor as a means of automating calculations, and saw the immense power of computing to transform every aspect of a future, modern world. Ada’s pioneering accomplishment was to look at a computer and see a universal machine, one which might one day be tasked towards an infinite number of ends – the composition of music, she imagined, or the authorship of books. Crucially, Ada’s vision for computing understood that technology must be applied “for the purposes of mankind”, as she put it. It is from this notion that the Ada Lovelace Institute, of which I am the director, derives its own mission – to ensure that data and AI work for people and society. Technology which works for people and society is that which is designed and deployed in service of humanity, not the other way around.
And yet, in recent years, concerns and anxieties have arisen that in fact it is we, humanity, who are in the service of technology. Our smartphones constantly demand our attention, erode our relationships and remind us we’re not getting enough steps in. All the while, there is a sense that our data, collectively produced, is being commodified – and that the benefits are not flowing back to us. Overcome by a collective fatalism, and the sense that our individual agency is slipping through our fingers, we click ‘Agree’ to yet another long and convoluted set of terms and conditions without ever feeling like we’ve had a choice.
These individual anxieties play out against a backdrop of broader social and political discourse about the ethics of new technologies – a discourse fuelled by every high-profile tech scandal and grounded in emerging research on the prevalence of bias and discrimination in algorithmic and AI systems. All the while, public alarm rises incrementally as supermarket checkouts become automated and yet we still don’t have flying cars. AI, according to the press, is seemingly everywhere – solving cancer, writing newspaper articles, selecting job applicants and sentencing offenders – but in the everyday experience of most people, it is difficult to see how it is improving their lives.
Much can be said and debated about the validity and accuracy of these individual and societal concerns about new technologies. But from them emerges an undeniable truth: despite the immense potential of data-driven technologies to make our lives easier, more prosperous, more convenient and more environmentally friendly, there is a real risk that the digital revolution is developing a legitimacy problem. On the current trajectory, there is a diminishing social licence for new technologies, and a deficit of public trust in – and understanding of – how the Fourth Industrial Revolution will change and improve our lives.
The establishment of the Ada Lovelace Institute last year responds directly to this risk. If we’re to avoid the societal and political upheavals caused by previous Industrial Revolutions, we should heed their lessons: that labour-replacing technological change may be vigorously resisted by those most likely to be impacted, and that the benefits of disruptive technological change are unlikely to be equally distributed, or even broadly experienced until many years after the revolution is completed. As economist Dr Carl Benedikt Frey argues in his recent book, The Technology Trap, “In a world where technology creates few jobs and enormous wealth, the challenge is a distributional one.”
The challenge we face, therefore, is to ensure that this digital revolution gives birth to new technologies which respect and reinforce modern societal values: equity of outcomes and of opportunities, democracy, solidarity and community, equality and diversity, sustainability, human rights and the rule of law. If these values are to be preserved, and not become collateral damage of this Fourth Industrial Revolution, they must be hardwired into innovation and technological advancement. Engineers, technologists and data scientists have as important a role as any in ensuring that new technologies reinforce the values upon which our society is built.
The key, however, to ensuring a legitimate and sustainable digital revolution – and to cementing the UK’s leading role in that revolution – is to ensure that the British public has an active role in designing it. We must eschew the notion that people are “users” of technologies. Rather, let’s begin thinking of them as digital citizens who are entitled to be heard, and from whose buy-in the legitimacy of AI and data-driven technologies is derived. Citizens need to be in the loop at the earliest possible stage – industry should be engaging the public in technological design, and government should consult citizens before procurement and deployment begin. At Ada, we are piloting new ways of involving citizens in this digital revolution: working with the Wellcome Trust to run citizen juries on the sharing of health data, and scoping out the possibility of a Citizens’ Biometrics Council – both forms of deliberative democracy that bring experts and citizens together. We’ve begun attitudes research into public understanding of, and appetite for, facial recognition technologies in policing, schools and cities. And we’re thinking about how public education can be deployed to build the legitimacy of AI, and ultimately strengthen the social licence for, and public trust in, new technologies.
The UK has an opportunity to be the first country in the world to put ethics, legitimacy and citizen involvement at the heart of its digital strategy, encouraging industry to put stakeholder value, rather than shareholder value, front and centre of all innovation. We at the Ada Lovelace Institute look forward to working hand in hand with government, industry and civil society to see this opportunity realised.