
Can computers surprise us?

Why Lady Lovelace’s ‘originality insight’ matters for today’s ‘learning machines’

Sylvie Delacroix

12 October 2021

Reading time: 9 minutes

Image: Ada Lovelace and Alan Turing

We have devised machines that help us keep track of time, cultivate the earth, mend our bodies, survey the skies… the list goes on. Some aim to overcome specific physical limitations; other machines are designed to entertain. Many do both. Most have had a profound impact on our understanding of the world, and the role we can play within it. None more so than one of our more recent inventions: computers.[1]

In that context, to ask whether computers are able to ‘take us by surprise’ may sound like a redundant question. When Alan Turing raised it in 1950,[2] nobody was in a position to predict the depth and extent of the sociocultural upheavals brought about by their near-universal use today. Yet this historical upheaval is not what Turing has in mind when he floats the ‘computers cannot take us by surprise’ hypothesis, only to dismiss it as unsubstantiated. Turing observes that computers do take him by surprise all the time, given their ability to fill the gaps in his incomplete calculations.

This ‘computers cannot take us by surprise’ idea is raised by Turing in the context of his enquiry into whether machines can think. To disambiguate this enquiry, Turing proposes to replace ‘can machines think?’ with a different question: are there imaginable digital computers that would do well in an ‘imitation game’? In that game, the computer’s task is to fool a human observer, whose aim is to distinguish the computer from the human through question-and-answer exchanges.

Among the arguments that may be raised to dispute the idea that computers could ever do well at that game, one may point to various ‘disabilities’ of computers. Having surveyed the latter, Turing considers whether Lady Lovelace raised a different sort of argument when, describing Babbage’s ‘Analytical Engine’ in 1843, she noted that this Engine ‘has no pretension to originate anything. It can do whatever we know how to order it to perform’ (note G).[3]

Turing’s translation of the above insight – about the (in)ability to originate something – in terms of an (in)ability to surprise him is peculiar. What is lost in translation is key to some of the ‘blind spots’ of today’s ‘learning machines’. To grasp this – and the reasons why originality and surprise only occasionally overlap – it is helpful to start with an example.

The kind of surprise triggered by AlphaGo’s ‘move 37’

When AlphaGo came up with a ‘new’ move,[4] one that had never been considered before, did it ‘originate’ anything? The move itself was just one of the many moves compliant with the rules of Go. Its significance stemmed from the way it challenged strategic assumptions widely shared among Go experts. The extent of AlphaGo’s operational autonomy (the product of a sophisticated learning process), combined with the game’s vast search space (roughly 10^170 legal positions), increased its ability to confound the expectations of even the most learned Go experts. None of them had anticipated the value of ‘move 37’. This anticipation failure forced Go experts to reconsider their understanding of the game.

Were other members of the public surprised by the move itself? No. If they were surprised, it was by that system’s ability to surprise Go experts: this prompted widespread reconsideration of what ‘digital machinery’ could do. In that sense the surprise at stake was non-trivial. Unlike the kind of surprise experienced by Turing when his calculations were shown to be deficient, the surprise prompted by AlphaGo’s move comes closer to what Fisher describes as an experience of ‘wonder’. Such an experience leads us to question our understanding of ourselves or what surrounds us.[5]

Such non-trivial surprises can stem from many things, from rainbows to artworks, via human encounters. A variety of factors – from rigid certainties to fear – can lead us humans to lose the capacity to be surprised in that way. Do today’s ‘learning machines’ ever come close to being able to experience this kind of surprise? Reversing Turing’s ‘surprise’ question brings out its connection to Lovelace’s ‘originality insight’.

Obstacles to surprise readiness: from model certainty to interpretive capabilities

The difficulty of preserving a system’s ability to recognise the extent to which some input challenges its model is commonly associated with Bayesian learning methods: as the number of data samples grows, model uncertainty can shrink to near zero, so that new observations barely shift the model’s beliefs. This type of surprise impediment has seen a resurgence of interest among those seeking to optimise the learning performance of algorithms deployed in changing environments. Yet the resulting surprise-quantification endeavours tend to overlook another potential impediment to surprise readiness: poor interpretive capabilities.
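The first impediment is easy to see in a toy setting. The sketch below is purely illustrative (it is mine, not drawn from the underlying paper): a Beta-Bernoulli model, the textbook Bayesian learner for estimating a coin’s bias, whose posterior uncertainty collapses as observations accumulate, so that a single observation conflicting with everything seen so far barely shifts what the model expects next.

```python
# Illustrative sketch only: a Beta-Bernoulli learner whose uncertainty
# collapses as data accumulate, so that one conflicting observation
# barely moves its predictions.
import math

prior_a, prior_b = 1.0, 1.0        # uniform prior over a coin's bias
for n in [0, 10, 100, 1_000, 10_000]:
    a, b = prior_a + n, prior_b    # posterior after observing n 'heads' in a row
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))   # posterior standard deviation
    before = a / (a + b)           # predicted probability of 'heads'
    after = a / (a + b + 1)        # same prediction after one surprising 'tails'
    print(f"n={n:>6}  uncertainty={sd:.5f}  prediction shift from an anomaly={before - after:.5f}")
```

However surprise is quantified, a learner of this kind becomes ever harder to unsettle as its confidence grows; that is the impediment the surprise-quantification literature targets, and it is distinct from the interpretive one discussed next.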

Outside well-contained environments (such as mazes or video games), a system’s ability to ‘do as children do’ (learn through wonder) not only presupposes the kind of operational autonomy brilliantly displayed by AlphaGo. It also presupposes hermeneutic autonomy: an ability to interpret and reappropriate the fabric of sociocultural expectations that is regularly transformed through events, encounters or creative interventions.

For humans, this ability is also a necessity, since it allows for the possibility of social and moral change. The norms and expectations that structure the environment in which we grow up mould us. That some contingent history has made us value X or Y doesn’t mean we can’t set in motion a chain of events that will change our attitudes and aspirations.

The positive value of surprise and its link to originality as a human necessity

If what drives originality as an effort (and perhaps a sign of intelligence) is the need to be able to challenge and enrich the web of sociocultural expectations that has shaped our normative expectations thus far, then there must be room for sophisticated learning machines that have no such need for originality. Unlike humans, surely such machines could be in a position to control what parts of their environments they let themselves be ‘shaped’ by?

In its pragmatism the above question confuses needs and imperatives, and in doing so misses the insight underlying Lovelace’s ‘[this Engine] has no pretension to originate anything’. The heuristic value of that insight has only grown since Turing first sought to capture it. For better or worse, many efforts to build learning agents are premised upon the validity of an analogy between young humans and machines: ‘We believe that to truly build intelligent agents, they must do as children do’.[6]

So far this analogy has been interpreted narrowly. Efforts to understand children’s learning processes often proceed from the assumption that they are mostly aimed at building a model of the world that minimises prediction errors. A good fit for many problems involving perception and motor control, this prediction goal certainly makes for smoother interactions with one’s environment. Yet it arguably leaves out much that matters.

This brings us back to the contemporary salience of Lady Lovelace’s ‘originality insight’ and its link to different kinds of autonomy. While a pupil who has mastered long division may be said to have ‘operational’ autonomy in that respect (just like a computer that can play chess or Go), creative autonomy presupposes the ability to question and change given rules or conventions.[7]

This is different from what may be referred to as ‘fantastic’ autonomy: it is much more difficult to create something new from within a background of accepted norms and conventions (i.e. creative autonomy) than to do so in the absence of any prior constraint or interpretive convention. In the same way, being capable of originating something – in the sense of Lovelace’s quote – requires interpreting a shared web of sociocultural expectations in such a way as to remain intelligible while at the same time challenging those expectations.

Since today’s learning machines arguably lack such interpretive capabilities, Lady Lovelace’s insight still rings true. Instead of seeing this as a shortfall, we are better off considering what this disparity in capabilities entails when designing the variety of systems deployed in ethically significant contexts. Whether they inform college admissions, parole decisions or healthcare provision, these systems do affect the dynamism of our interpretive practices.

Since these practices continually reshape our aspirations in domains such as education, justice or healthcare, what we need to ask ourselves is: which system design choices are more likely to foster, rather than compromise, our drive to question the norms that structure our ethically loaded practices? Interactive, collective contestability mechanisms are in their infancy: we need to make them a priority if we are not to suffer from the same limitations as Babbage’s Engine and its ‘inability to originate something’.

Sylvie Delacroix, Professor in Law and Ethics, University of Birmingham and Fellow, Alan Turing Institute and Mozilla.

 

This blog post is based on Delacroix, S. (2021). ‘Computing Machinery, Surprise and Originality’, Philosophy & Technology. Available at: https://link.springer.com/article/10.1007/s13347-021-00453-8

Footnotes

  1. The word ‘computer’ is meant to be understood loosely as any device that can store and process information: thus it is used to refer to machines from Babbage’s ‘Analytical Engine’ all the way through to speculative, autonomous artificial agents. The only meaning of ‘computer’ that is not included under this very loose definition is that which is used to characterise humans performing calculations.
  2. Turing, A. M. (1950). ‘Computing machinery and intelligence’. Mind, 59, 433-460.
  3. Lovelace, A. A. (1843). Notes by A.A.L. [Augusta Ada Lovelace]. Taylor’s Scientific Memoirs, III, 666-731.
  4. Silver, D., Huang, A., Maddison, C. J., Guez, A. et al. (2016). ‘Mastering the game of Go with deep neural networks and tree search’. Nature, 529, 484-489.
  5. Fisher, P. (1998). Wonder, the rainbow, and the aesthetics of rare experiences, Harvard University Press.
  6. Kosoy, E., Collins, J., Chan, D. M., Hamrick, J. B., Huang, S., Gopnik, A. & Canny, J. (2020). Exploring Exploration: Comparing Children with RL Agents in Unified Environments. Bridging AI and Cognitive Science (ICLR 2020).
  7. Boden, M. A. (1998). ‘Creativity and artificial intelligence’. Artificial Intelligence, 103, 347-356.
