
The role of the arts and humanities in thinking about artificial intelligence (AI)

Reclaiming a broad and foundational understanding of ethics in the AI domain, with radical implications for the re-ordering of social power

John Tasioulas

14 June 2021

[Image: The Creation of Adam, detail]

What is the contribution that the arts and humanities can make to our engagement with the increasingly pervasive technology of artificial intelligence? My aim in this short article is to sketch some of these potential contributions.

Choice

Perhaps the most fundamental contribution of the arts and humanities is to make vivid the fact that the development of AI is not a matter of destiny, but instead involves successive waves of highly consequential human choices. It’s important to identify the choices, to frame them in the right way, and to raise the question: who gets to make them and how?

This is important because AI, and digital technology generally, has become the latest focus of the historicist myth that social evolution is preordained, that our social world is determined by independent variables over which we, as individuals or societies, are able to exert little control. So we either go with the flow, or go under. As Aristotle put it: ‘No one deliberates about things that are invariable, nor about things that it is impossible for him to do.’1

Not long ago, processes of economic globalisation were being presented as invariable in this way until a populist backlash and then the COVID-19 pandemic kicked in. Today, it is technological developments that are portrayed in this deterministic fashion. An illustration of this trend is a recent speech by Tony Blair identifying the ‘21st-century technological revolution’ as defining the progressive task. As the political scientist Helen Thompson pointed out, technology has replaced globalisation in Blair’s rhetoric of historicist progressivism.2

The humanities are vital to combatting this historicist tendency, which is profoundly disempowering for individuals and democratic publics alike. They can do so by reminding us, for example, of other technological developments that arose the day before yesterday – such as the harnessing of nuclear power – and how their development and deployment were always contingent on human choices, and therefore hostage to systems of value and to power structures that could have been otherwise.

Ethics

Having highlighted the necessity for choice, the second contribution the arts and humanities can make is to emphasise the inescapability of ethics in framing and thinking through these choices.

Ethics is inescapable because it concerns the ultimate values in which our choices are anchored, whether we realise it or not. These are values that define what it is to have a good life, and what we owe to others, including non-human animals and nature. Therefore, all forms of ‘regulation’ that might be proposed for AI – whether one’s self-regulation in deciding whether to use a social robot to keep one’s aged mother company, or the content of the social and legal norms that should govern the use of such robots – ultimately implicate choices that reflect ethical judgments about salient values and their prioritisation.

The arts and humanities in general, and not just philosophy, engage directly with the question of ethics – the ultimate ends of human life. And, in the context of AI, it is vital for them to fight against a worrying contraction that the notion of ethics is apt to undergo. Thanks in part to the incursion of big tech into the AI ethics space, ‘ethics’ is often interpreted in an unduly diminished way: for example, as a form of soft self-regulation lacking legal enforceability or, even more strangely, as a narrow sub-set of ethical values.

So, for example, in her recent book, Atlas of AI, Kate Crawford writes, ‘we must focus less on ethics and more on power’ because ‘AI is invariably designed to amplify and reproduce the forms of power it has been deployed to optimize’.3 But what would the recommended focus on power entail? Crawford tells us it would interrogate the power structures in which AI is embedded, in terms of ideas of equality, justice, and democracy. The irony here is that these ideas are either themselves core ethical values, or – in the case of democracy – to be explicated and defended in terms of such values. We must appeal to them to frame what it is to live a flourishing human life and what we owe to others engaged in the same enterprise; only this can provide an adequate critical standpoint from which to engage with power structures.

It would be a hugely damaging capitulation to the distortions wrought by big tech to adopt their anaemic understanding of ethics as essentially self-regulation, at best, or corporate PR, at worst. Reclaiming a broad and foundational understanding of ethics in the AI domain, with radical implications for the re-ordering of social power, will be an important task of the arts and humanities.

This, of course, is easier said than done, because there are traditions within the humanities themselves that purport to be sceptical of ethics as such. On closer inspection, however, it seems to me that even these sceptical traditions cannot escape ethical commitments of their own – even if it is just the commitment to confronting grim realities unflinchingly.

The dominant approach

The next question we might ask is: what is the shape of the ethical self-understanding that the arts and humanities can help to generate? The starting-point, I think, is to recognise that there is already a dominant approach in this area, that it has grave deficiencies, and that a key task for the humanities is to help us elaborate a more robust and capacious approach to ethics that overcomes these deficiencies. I take the dominant approach to be that which is found most congenial by the powerful scientific, economic and governmental actors in this field.

Like anyone else, AI scientists are prone to the illusion that the intellectual tools at their disposal have a far greater problem-solving purchase than is actually warranted. This is a phenomenon that Plato diagnosed long ago with respect to the technical experts of his day, such as cobblers and ship-builders. The mind-set of scientists working in AI tends to be data-driven: it places great emphasis on optimisation as the core operation of rationality, and it prioritises formal and quantitative techniques.

Given that intellectual framework, it is little wonder that a leading AI scientist like Stuart Russell, in his book Human Compatible, finds himself drawn to a preference-based utilitarianism as his overarching ethics. Russell’s book is concerned with the worry that AI will eventually spiral out of control – no longer constrained by human morality – with cataclysmic consequences. But what is human morality? According to Russell, the morally right thing to do is that which will maximise the fulfilment of human preferences.4 So, ethics is reduced to an exercise in prediction and optimisation – deciding which act or policy is likely to lead to the optimal fulfilment of human preferences.
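To make the reduction concrete, it can be stated schematically. On this picture, the right act is simply the solution to an optimisation problem (the notation below is an illustrative gloss, not Russell’s own formalism):

$$a^{*} \;=\; \arg\max_{a \in A} \; \mathbb{E}\left[\, \sum_{i} U_i(a) \,\right]$$

where $A$ is the set of available actions or policies, $U_i(a)$ measures how far action $a$ satisfies the preferences of person $i$, and the expectation reflects the element of prediction under uncertainty. Everything ethically contested – whose preferences count, how they are weighed against one another, whether they should first be corrected for prejudice – is buried inside the specification of the $U_i$.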

But this view of ethics is, of course, notoriously open to serious challenge. Its concern with aggregating preferences threatens to override important rights that erect strong barriers to what can be done to individuals. And that’s even before we start observing that some human preferences may themselves be infected with racist, sexist or other prejudices. Ethics operates in the crucial space of reflection on what our preferences should be, a vital consideration that makes a belated appearance in the last few pages of Russell’s book.5 It does not take those preferences as ultimate determinants of value.

Small wonder, too, that Russell accepts a conception of intelligence – effectively, as means-ends reasoning which makes choice of ends extraneous to the operations of intelligence – according to which superintelligence is compatible with the worst forms of sociopathy. On this degraded view of intelligence, which Russell treats as ‘just a given’, a machine that annihilated humanity in order to maximise the number of paper clips in existence could nonetheless qualify as super-intelligent.6

This crude, preference-based utilitarianism also exerts considerable power as an ideology among leading economic and governmental actors. This is less easy to see, because the doctrine has been modified by positing wealth-maximisation as the more readily measurable proxy for preference-satisfaction. Hence the tendency of GDP to hijack governmental decision-making around economically consequential technologies such as AI, with the consequent side-lining of values that are not readily expressed by market demand. Hence, also, the legitimation of profit-maximisation by corporations as the most effective institutional means to societal wealth-maximisation.

The three Ps – Pluralism, Procedures and Participation

So the kind of ethics we should hope the arts and humanities steer us towards is one that ameliorates and transcends the limitations and distortions of this dominant paradigm derived from science and economics. I think such a humanistic ethic, informed by the arts and humanities, would have at least the following three features (the three Ps):

  1. Pluralism – it would emphasise the plurality of values, both in terms of the elements of human wellbeing and the core components of morality. This pluralism calls into question the availability of some optimising function for determining what is, all things considered, the right thing to do. It also undermines the facile assumption that the key to the ethics of AI will be found in one single master-concept, whether that be trustworthiness or human rights or something else. How could human rights be the overarching framework for AI ethics when, for example, AI has a serious environmental impact that cannot be exclusively cashed out in terms of its bearing on anthropocentric concerns? And what about those human values to which we do not think of ourselves as having a right but which are nonetheless important, such as mercy or solidarity? Nor can trustworthiness be the master value, despite the emphasis it is repeatedly given in documents on AI ethics. Trustworthiness is at best parasitic on compliance with more basic values; it cannot displace the need to investigate those values.

     Admitting the existence of a plurality of values, with their nuanced relations and messy conflicts, heightens the need for choice adverted to previously, and accentuates the question of whose decision will prevail. This sensitive exploration of a plurality of values and their interactions is what the arts and humanities, at their best, do. I say at their best because, of course, they often fail in this task.

     My own discipline, philosophy, has itself in recent years often propagated the highly systematising and formal approach to ethics that I have condemned. I feel philosophers have a lot to learn from closer engagement with other humanities disciplines, like classics and history, and with the arts, especially fiction, which often gets to the heart of issues like the significance of distinctively human interactions, or the nature of human emotion, in ways that the more discursive methods of philosophy cannot.
  2. Procedures not just outcomes – the second feature of a humanistic approach to ethics is the importance of procedures, not just outcomes. Of course, we want AI to achieve valuable social goals, such as improving access to education, justice and health care, in an effective and efficient way. The COVID-19 pandemic has cast into sharp relief the question of what outcomes AI is being used to pursue – is it helping us, for example, to reduce the need for our fellow citizens to undertake dangerous and mind-numbing labour in the delivery of vital services, or is it engaged in profit-making activities, like vacuuming up people’s attention online, that have little or no redeeming social value?

     But what we rightly care about is not just the value of the outcomes that AI can deliver; it is also the processes through which it does so.

     Take the example of the use of AI in cancer diagnosis and its use in the sentencing of criminals. Intuitively, the two cases differ in how they weigh the soundness of the eventual decision or diagnosis against the process through which it is reached. When it comes to cancer, what may be all-important is getting the most accurate diagnosis, and it is largely a matter of indifference whether this comes through the use of an AI diagnostic tool or the exercise of human judgement. In criminal sentencing, however, there is a powerful intuition that being sentenced by a robot judge – even if the sentence is likely to be less biased or more consistent than one rendered by a human counterpart – means sacrificing important values relating to the process of decision. This point is familiar, of course, in relation to such process values as transparency, procedural fairness and explainability. But it goes even deeper, because of the dread many understandably feel in contemplating a dehumanised world in which judgements that bear on our deepest interests and moral standing have, at least as their proximate decision-makers, autonomous machines that do not share in human solidarity and cannot be held accountable for their decisions in the way that a human judge can.
  3. Participation – the third feature relates to the importance of participation in decision-making with respect to AI, whether as an individual or as part of a group of self-governing democratic citizens. At the level of individual wellbeing, this takes the focus away from theories that equate human wellbeing with some end-state, such as pleasure or preference-satisfaction. Such end-states could in principle be brought about through a process in which the person who enjoys them is entirely passive, for example, by putting some anti-depressant drug in the water supply. Contrary to this passive view of wellbeing, it would stress, as Alasdair MacIntyre did in After Virtue, that the ‘good life for man is the life spent in seeking for the good life for man’.7 Or, put slightly differently, that successful engagement with valuable pursuits is at the core of human wellbeing.

     If the conception of human wellbeing that emerges is deeply participatory, then this has immense relevance for assessing the significance of increased delegations of decision-making power to AI. One of the most important sites of participation in constructing a good life, in modern societies, is the workplace. According to a McKinsey study, around 30% of all work activities in 60% of occupations are capable of being automated.8 Can we accept the idea that the large-scale elimination of job opportunities due to automation can be compensated for by the extra ‘goodies’ that automation makes available? The answer depends on whether the participatory self-fulfilment of work can, any time soon, feasibly be replaced by other activities, such as art, friendship, play or religion. If it cannot, addressing the problem with a mechanism like universal basic income (UBI), which involves the passive receipt of a benefit, will not be enough.

     Similarly, we value citizen participation as part of collective democratic self-government. And, arguably, we do so not just because of the instrumental benefits of democratic decision-making in reaching better decisions (the ‘wisdom of crowds’ factor), but because of the way in which participatory decision-making processes affirm the status of citizens as free and equal members of the community. This is an essential plank in the defence against the tendency of AI to be co-opted by technocratic modes of decision-making that erode democratic values by seeking to convert matters of political judgement into questions of technical expertise. At present, much of the culture in which AI is embedded is distinctly technocratic, with decisions regarding the ‘values’ encoded in AI applications being taken by elites within the corporate or bureaucratic sectors, often largely shielded from democratic control.9 Indeed, a small group of tech giants accounts for the lion’s share of investment in AI research, dictating its overall direction. Meanwhile, we know that AI-enabled social media poses risks to the quality of public deliberation that a genuine democracy involves, by promoting the spread of disinformation, aggravating political polarisation, and so on. Similarly, the use of AI as part of corporate and governmental attempts to monitor and manipulate individuals undermines privacy and threatens the exercise of basic liberties, effectively discouraging citizen participation in democratic politics.

     We need to think seriously about how AI and digital technology more generally can enable, rather than hinder and distort, democratic participation.10 This is all the more urgent given the declining faith in democracy across the globe in recent years, including in long-established democracies such as the UK and the US. Indeed, the disillusionment is such that a recent report found that 51% of Europeans favoured replacing at least some of their parliamentarians with AI.11 Most enthusiastic were the Spaniards, at 66%. Outside Europe, 75% of people surveyed in China supported the proposal. Fortunately, in the UK 69% of respondents opposed the idea, the figure falling to 60% in the US. There is still time to salvage the democratic ideal that an essential part of citizen dignity is active participation in self-government.

Which brings me to my final point. If the arts and humanities are to advance the agenda of the kind of humanistic AI ethics I have sketched, then they themselves need to be democratised. In a democracy, it’s not enough to give people a vote while effectively excluding them from deliberation; and if they are to deliberate as equals, they have to have access to the key sites in which basic ideas about justice and the good are worked out.

The arts and humanities are prominent among those sites. Hence the wisdom of Article 27 of the Universal Declaration of Human Rights, which includes a right to participation in science and culture over and above purely political participation. We can see manifestations of this right, enabled by digital technology, in the resurgent citizen science movement.

But we also have to address the exclusion of our fellow citizens, itself often highly discriminatory in nature, from the domains of artistic creativity and humanistic enquiry. This means that the kind of research we should aim to do on AI within the arts and humanities should not merely be accessible to a wider public, nor should it merely model civil and rational debate for that public – however vital both of those things are. It should also afford ordinary citizens the opportunity to articulate their views in dialogue with others. I think one of the most important goals for the arts and humanities is to develop formats that facilitate such wide-ranging democratic dialogue.


This is the first in a series of posts considering the role of the arts and humanities in thinking about AI.



John Tasioulas is Director of the Institute for Ethics in AI at the University of Oxford.

Image credit: diuno

Footnotes

  1. Aristotle, Nicomachean Ethics, Book VI, 5.
  2. Helen Thompson, ‘In Blair-World, Tech is the Bright New Progressive Cause. But He Ignores the Real Reason for Change’, New Statesman, May 19, 2021, https://www.newstatesman.com/politics/uk/2021/05/blair-world-tech-bright-new-progressive-cause-he-ignores-real-reasons-change.
  3. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021), p. 224.
  4. Stuart Russell, Human Compatible: AI and the Problem of Control (Allen Lane, 2019), p. 178.
  5. Ibid., p. 255.
  6. Ibid., p. 167.
  7. Alasdair MacIntyre, After Virtue (Bloomsbury Revelations, 2013), p. 254.
  8. James Manyika and Kevin Sneader, ‘AI, automation, and the future of work: Ten things to solve for’, June 1, 2018, https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for.
  9. Glen Weyl, ‘Why I Am Not a Technocrat’, August 19, 2019, https://www.radicalxchange.org/media/blog/2019-08-19-bv61r6/.
  10. For some positive thinking along these lines, see Hélène Landemore, ‘Open Democracy and Digital Technologies’, in L. Bernholz, H. Landemore and R. Reich (eds), Digital Technology and Democratic Theory (University of Chicago Press, 2021).
  11. ‘More Than Half of Europeans Want to Replace Lawmakers with AI, Study Finds’, CNBC, May 27, 2021, https://www.cnbc.com/2021/05/27/europeans-want-to-replace-lawmakers-with-ai.html.
