I would like in a moment to set out the proposal for an independent Convention on Data Ethics and Artificial Intelligence. This has been developed by the Nuffield Foundation over recent months in partnership with The Alan Turing Institute, the Royal Statistical Society, the Nuffield Council on Bioethics, the Wellcome Trust, the Royal Society, the British Academy, techUK and Omidyar Network’s Governance & Citizen Engagement Initiative.
Let me first set our proposal in the wider context. Today’s event has been a thoughtful contribution to a public debate that has reached a new intensity in the past 12 months, leading to the announcement by the Government, and amplified by the Minister today, of its Centre for Data Ethics and Innovation.
It is increasingly apparent that certain principles that go to the heart of our understanding of individual and social well-being need re-examination as a result of the rapid advances in data, algorithms and AI. The benefits to our everyday lives are undoubted, and the potential for public good colossal, but there is also an undercurrent of unease at the unknown implications of these innovations as they accelerate – in consumer transactions; in the distribution of public goods; in social relationships; in the rights of the citizen in relation to the state; and in the asymmetries of information between individuals and the global tech sector, which now arguably exercises more direct influence over individual decisions than do states themselves.
Beyond all this is the underlying philosophical question of individual autonomy – what is the future of human agency in a world of manufactured intelligence? These are no longer academic abstractions but the most pressing of questions. They have an urgent bearing on the immediacy of our lives, and in turn will shape what comes afterwards.
To adapt the greatest of the metaphysical poets, John Donne:
And new technologie calls all in doubt,
In the seventeenth century, Donne wrote his “Anatomie of the World” in ferment over the new philosophy of scientific enquiry, and concluded:
‘Tis all in pieces, all coherence gone,
All just supply, and all relation;
Today has been a serious effort towards creating some coherence and just supply in a world of new technology. What can be too easily overlooked in the more alarmist visions of an AI and data-driven future is that many in the tech industries, whose imagination has created this world, are now thinking seriously about practical guidance on the enormous ethical issues their creativity has raised. If such ethical awareness had existed in the culture of the financial industry in the decades before the financial crisis of 2008, then some of its disastrous consequences might have been mitigated. So, this is a good place to be starting from.
It follows that any serious discussion of data ethics, if it is not to be either ignored or irrelevant, has no option but to place the creators at the centre of the debate. That is why the leadership of techUK in this area has been so vital.
How does one begin to untangle the complexities of the arguments? Data-driven technologies axiomatically break apart established frameworks and re-shape boundaries – between public and private, civic and social, the factual and the imaginative, and between national jurisdictions – until they all become inextricably blurred.
The meanings of terms such as privacy, consent and ownership, long used in policy, law and public discussion, are now continually challenged by technological developments. Questions about the ethical uses of data today become intermeshed with longer-term questions concerning the future relationship of human and artificial intelligence. The discussion rapidly becomes about everything and nothing.
That said, beneath all these different issues is an underlying principle: in a rules based, democratic system, future innovations in big data or AI that have the power to shape and reorganise the society we live in will have to be able to ensure that the broader public is party to the process – that this takes place in the service of democracy. This will mean different things in different domains – in some cases, getting the public’s consent, and in others, facilitating individual control – but underlying all of this is the need to secure the public’s understanding, and this entails ensuring codes of behaviour that are deserving of trust. This is the role for practical data ethics.
The hard part is obviously how we should do this.
Our starting point must be the recognition that we are in a new landscape, not yet mapped. This is not the normal terrain where businesses operate in conventional markets, where regulation governed by law shapes any decision, and where ethical frameworks are well-known and broadly stable:
- It is a disconnected landscape, within which disparate conversations address shared questions with no unified perspective.
- There are sets of issues that have no simple or quick route into policy or regulation, and for which there is no space to invest in the thinking or consider larger social questions.
- There is a lack of objective evidence about the implications of data use, including little empirical clarity about the distributional effects of the data economy on different sectors of society.
- Too little attention is given to the cumulative impacts of data on civil society.
This agenda is so wide that no one body can take it on in its entirety without failing to deliver in some part of its remit.
So how can we collectively construct a coherent framework for addressing these questions?
We have set out three different spheres within which they can be addressed.
First, regulation. We are fortunate in the UK to have, in the ICO, a regulator that is respected as authoritative and clear in its objectives and responsibilities, with a focus on the immediate issues of privacy and consent.
Second, there needs to be a space for wider oversight – part of what has been described as “stewardship” – outside of regulation; this ought now to be provided by the government’s Centre for Data Ethics and Innovation – as its remit is to advise government, regulators and industry on how they need to respond.
These public regulatory and advisory bodies will be vital co-ordinates in the landscape, but between them they cannot, and, as arms of government, should not cover all the ground.
The Government rightly wishes to encourage the many advantages to working in the global tech hub that is London. However, if government is to achieve its goal of making all of the UK a successful space for innovative enterprise – and one in which the tech industry can demonstrate its contribution to the public good – then it must also appreciate that government cannot alone formally determine the outcomes. Questions of good practice and regulation are just part of a far broader challenge.
We believe there is additionally a need for a third sphere, independent of government, in which different interests have the opportunity to stand back and engage with one another, a space where there is additional capacity for foresight.
The Nuffield Foundation, in dialogue with government and its partners – who themselves represent a wide range of different perspectives – has therefore proposed the creation of an ‘outer’ body – a Convention – independent of regulation and government. This body should take an uncompromisingly international perspective, placing the debate in this country in a global context. It should anticipate and shape emerging issues through shared deliberation between different disciplines and perspectives, public and private. It would work by investigation and experimentation with a view to offering practical solutions, based on empirical research. It would build a stronger evidence base and establish methodologies to understand the impact on society (you can’t decide whether an action is ethical without understanding the implications of that action). It would explore the public’s understanding of the questions it identifies; and, working with local, national and international stakeholders, would develop frameworks, norms, and practices to foster ethical decision making.
The remit is to ensure that the power of data – combined with the automated technologies, including AI, that serve to augment it – is harnessed to promote human flourishing, both for society as a whole and for the different groups within it.
This body – we have yet to settle on a final name but characterise it as a Convention on Data Ethics and Artificial Intelligence – would consider questions, problems and opportunities arising from uses of data and AI, which are not unlawful but have potential to:
- Cause widespread or profound economic or social harm – or good – to society, or to different groups within society.
- Challenge or destabilise social and democratic norms or principles (such as ownership; consent; privacy; professional expertise; regulation).
- Facilitate future developments with unknown consequences.
- Introduce inconsistencies between treatment or rights in the offline and digital spheres, or between different domains.
- Effect change in the UK but inform thinking internationally.
We have set ourselves three key aims:
- To be a leading voice representing the interests of society in debates on ethical data use at a national and international level.
- To promote and support a common set of data practices that are deserving of trust, and are understandable, challengeable and accountable.
- To convene different interests to develop shared terminology for data ethics and promote human flourishing.
The underlying objective of human flourishing is taken from the British Academy and Royal Society’s report earlier in the year, Data Management and Use: Governance in the 21st century, which has done much of the initial thinking in this space.
I understand the term “human flourishing” has caused government lawyers some headaches because it is difficult to frame in legislation. This seems to me a good reason for our proposed Convention to hold onto it. If we are to create in the United Kingdom an exemplar of how a data-driven economy can prosper, and a data-enhanced society flourish, the debate cannot only be about the exercise of powers and authority, or a reductive measure of positive economic impact.
The condition of human flourishing recognises that the implications of machine learning and a data-driven society go beyond general questions of social well-being or the aggregate benefit to society. They reach to the core of each person’s sense of identity as an autonomous individual, alongside questions about the progression of humankind.
Focusing on human flourishing also challenges us to think broadly. The exceptional challenge of defining the ethics of the use of data and AI lies in the myriad ways that they affect everyday life. We tend to take “horizon-scanning” to mean theorising about the distant future; but the speed, scale and scope of the changes in train are such that success will depend upon anticipating ethical challenges before they are upon us, even as we make judgements – in the light of what is likely to follow – on innovations that are much closer at hand and already taking place. If we can’t explain why humans are flourishing or failing to flourish in the context of existing technology, we will struggle to explain how things will look many years down the line.
techUK highlighted another crucial component of human flourishing in its response to the data governance report. This stated: “the fundamental principle of promoting human flourishing … will be essential to ensure that intelligent, machine learning and AI driven machines are developed and act in the interests of humans,” and to: “ensure that the decisions these machines make are auditable, challengeable and ultimately understandable by humans”.
“Auditable”, “challengeable” and “understandable” – to which one should add “deserving of public trust” – are co-ordinates I suggest we should hold onto as we frame our objectives.
The value of such a Convention to the tech industry will lie in the fact that, while it might complement government-related interventions, and its thinking may well inform them, it will also look outward towards the interaction between the innovators and the public. It will consider the practical frameworks of ethics, exploring how to build capacity for ethical thinking inside business models – something which may have little to do with regulatory compliance.
The Convention is based on a number of interlocking principles:
- It will be independent of government or any vested interest.
- It will be principled – its conclusions should stem from normative judgements.
- It will consider a plurality of approaches from a range of disciplines and perspectives which in turn will give space for a range of solutions that are not those narrowly enshrined in regulation and law.
- It will also strive to be positive. This doesn’t just mean putting the most dystopian visions to one side. It also means framing the questions in a way that recognises that there is as much risk in holding back from enabling the use of data as in the overreach of that use.
- It should have a bias towards impact – even if this risks failure on occasion. We want the outputs of the Convention to be tangible, and beneficial to real life and real lives and so secure their trust. This is, after all, the shared interest of all those driving forward entrepreneurial innovation.
Finally, the Convention should be reflective and evaluative in the way it operates. We are not going to solve longstanding ethical debates overnight. The thinking will be iterative, based on what is already in place and on previous experience, but it should be unafraid of rethinking such norms and taking them further.
The Nuffield Council on Bioethics
Some of our thinking derives from the initiative twenty-five years ago to establish the Nuffield Council on Bioethics. Co-funded by the Wellcome Trust, the Medical Research Council and the Nuffield Foundation, the Council is today recognised as the UK’s national bioethics body, with the international reputation of its work based in part on the fact of its independence from government (unlike the national bioethics bodies in other countries).
The aim set out in the original minute was “an enquiry into the best means of informing public policy and professional practice on the moral problems raised by research in biology, medicine and health”. It went on: “It would be in the public interest if the investigation and discussion of these moral problems could anticipate the application of new techniques rather than follow after public disquiet and anxiety had already been caused by their use. This would help both to achieve acceptable standards in the practice of research and to allay fears about its consequences”.
The Council’s influence rests not only on its independence but on its lack of any formal powers. It has created a trusted forum where those in the vanguard of bioscience research open up their thinking to engage in deliberative enquiry from different intellectual perspectives and interests in civil society. By working from normative principles towards publishing practical recommendations, the Council has realised that original ambition of providing foresight, informing regulatory and policy thinking, without formally being related to that process. Though it is in a separate sphere, it has a shared realm of interest with the Human Fertilisation and Embryology Authority, and the Human Tissue Authority, regulators whose careful judgements have allowed the UK to be in the vanguard of research within a framework of accountability and trust.
The bioethics analogy of course has limitations. Ethics have always been a formal part of the architecture of medical culture. In bioethics, technological advances that pose ethical challenges mostly emerge from the frameworks of institutionalised scrutiny in universities and bioscience research industries. The pathway from innovation to public use is a long and careful one, with powers to implement held by few. In wider digital society, there are innumerably more actors, across every sphere – economic, social and private – and innovation can move from thought to public practice almost instantly.
Despite these differences, many of the choices that defined bioethics a generation ago read across quite easily into the contemporary data ethics debate. Indeed the separation of bioethics from wider data ethics is in itself increasingly problematic.
How the Convention might work
So what will the Convention look like? Its success will clearly depend upon its ability to draw together different types of experts and practitioners: to create a shared vocabulary and deliberate; to build evidence of problems, and a deep understanding of social impact, public understanding, trust and values; and to develop shared solutions and approaches.
We envisage the Convention, chaired by someone with authority in the field, comprising around 12 to 15 members, all there in their own right as individuals rather than representing official positions. These individuals will have recognised expertise and experience drawn from data science, social science, law and philosophy, from the Academy, private sector entrepreneurs, those with experience of public policy, regulators, those representing civil society.
The members of the Convention, supported by an executive of around 10, would scope the agenda and prioritise key concerns. They would commission work through Working Parties, led by members but comprising a wider range of contributors. By bringing together different perspectives focused on tangible problems, these working parties will generate not only recommendations in the form of advice, but also ideas, solutions, best practices, effective policies, good software, benevolent algorithms – whatever will have the most salient impact to the problem we’re addressing.
One of the most thought-provoking suggestions as to how to deliver practical benefit, encouraged by Omidyar Network, is to promote capacity building, addressing the lack of data ethicists currently available to help companies and governments. One option would be to establish a cadre of data ethicists in tech companies or to provide the opportunity for those working with data to spend some time within the Convention. The aim would not be to create the data ethicist as the compliance person in the corner but to create, perhaps through a Fellowship programme, a rigorous and analytical ethical mindset across this field.
To make a practical difference in a new and rapidly evolving field is not straightforward – and the Convention will have to prove its worth to those who might make use of its findings. The easy choice would be to create a vehicle for different types of academics to come together. The harder, but far more valuable, challenge is to bring those disciplines together and then bridge the gap between academic thinking and the world of policy and practice, developing usable, practical approaches which can take steps to build a flourishing society.
To plan our next steps, we would greatly value your engagement. We want to test our ideas and ensure we have the right working relationship with industry and third sector, and continue our dialogue with regulators and with government to come to a shared vision about a future landscape. One critical area where we have further to define our thinking is how best to approach public engagement.
In the first quarter of next year we will organise some workshops with techUK, as well as with a wider community, and invite anyone here who would wish to be involved to help us address these immediate questions. In the interim, if anyone would like to register interest, please email Imogen Parker at firstname.lastname@example.org
We want to ensure that this independent Convention has a distinctive place in this new landscape, reaching beyond the United Kingdom to create an international reputation. It should help Britain, at the moment one of the most innovative places in the world, to intervene in the debates, the thinking and practices globally, and help shape them for the common good. In this way, we believe we can have a real impact on the most complex and profound set of questions that face us all at this present time.
Tim Gardam is Chief Executive of the Nuffield Foundation. This is a transcript of his speech to the techUK Digital Ethics Summit in December 2017.