A website flickered across social media consciousness recently: thispersondoesnotexist. Visiting the URL, the viewer is confronted simply with a photograph of a face.
Each time the webpage is refreshed, a new face appears. There is a limitless supply of people, of different ages and ethnicities. Many seem happy, confident, self-possessed. They are often smiling. The background is usually blurred. These people exist in front of nowhere specified, just a fuzzy blueness or a greeny haze.
Occasionally, little signs indicate that all is not quite as it should be. An ordinary-looking middle-aged woman is wearing just one large earring, some younger women have two mismatched ones and another woman has just a fragment of gold in the middle of one ear lobe. Sometimes the pink of the lips smudges onto teeth.
Sporadically, a ghosted image of another face appears in a corner or the distorted semi-face of another person looms on one edge of the image. The faces are usually looking at us, gazing through the screen to lock eyes with the viewer. But they do not see, because photographed people cannot see.
More than that, their eyes really have never had sight – these people do not exist. These are AI-generated faces, the visages of new crowds or, in other words, nobodies who look like somebodies, somebody enough to pass the test of the GAN, the generative adversarial network.
In each of these networks, a discriminator must decide which confection meets enough superficial criteria to plausibly possess a real-world body and, therefore, a credible life. And which is just some pixels thrown together out of a melange of every face that does exist and could exist in different combinations, but does not convince.
This infinite assembly is a crowd in the cloud – or rather not there at all, because each face flashes into being for only a moment and is never seen again. They are the denizens of a vague crowd, impossible to mobilise for anything, even though mobilisation has long been the purpose of crowds: to be whipped up for politics or warfare, consumption or fun.
Might someone give over their heart to these AI confections? Whoever does so gives something of themselves in actuality to the cloud, where these faces are born and die constantly, a cloud which is itself a fantasy, just as the loved object is one, an ideal version of itself.
The name ‘cloud’ encourages us to imagine the cloud of Cloud Computing as a nowhere place, heavenly. If it is nowhere, it is also a somewhere that is or might be everywhere. The Cloud is a vaguely grasped storage space in amongst the actual clouds, rather than a materially robust and energy-hungry infrastructure, an arrangement of hubs and servers, on and under the ground.
We notice clouds when they are distinct against blue sky, and when the sun shines through them and they shimmer or their outlines are illuminated. Cloud Computing – despite the cumulus cloud icon – is more like grey overcast skies, the cloud coverage pervasive and yet unseen. The coverage is total. And we are all connected under this one sky.
Never more so than now. As the COVID-19 pandemic affected life on the planet, it was cloud infrastructure, with its switches, routers, firewalls, load balancers, storage arrays, backup devices, servers and hypervisors, that swelled, becoming the means by which ‘work from home’, with its ‘remote-work-model’, was made possible, as well as home education, online shopping, social meet-ups via Zoom and Google Hangouts and everything else.
The tangible result of this commitment to concrete immateriality is a massive new building for Google UK in King’s Cross, designed for 7,000 employees, should they be tempted back into the city after the public health crisis.
The building is located here to be proximate to Alphabet’s DeepMind, another 11 storeys, and Facebook’s three buildings of up to 12 storeys. Here, where there were once warehouses and markets, a dense cloud covering gathers. The vagueness of the Cloud and its institutions is countermanded by the concretion of its infrastructure.
But nebulousness subsists. For those left outside, there is only a fuzzy sense of what happens inside. But someone somewhere in that cumulus cluster of concrete must be working on facial recognition technologies (FRT). That suits the location for sure. Granary Square and the 67-acre King’s Cross Estate, new home to a hub of digital and platform capitalism, is a vast brownfield-site development, designed for leisure, shopping, learning and working. It is entirely private, governed by regulations that the estate owner refuses to reveal, and populated with CCTV.
There was controversy in August 2019, when it became apparent that surveillance cameras with facial biometric capabilities had been deployed on the estate between May 2016 and March 2018. These possessed the capacity to track the 150,000 passers-by each day, mapping their movements, their purchases, their behaviours. Article 9 of the EU’s General Data Protection Regulation requires that an individual give explicit consent if facial biometric data is to be collected and used. But which rules will apply under a new post-Brexit regulatory framework is unclear.
Now the King’s Cross Estate website carries a statement on its homepage. The estate does not currently use FRT. All data processed was deleted rapidly after it was gathered. It was shared only with the police, to prevent and detect crime in the neighbourhood in the interests of public safety. The team had been investigating a new FRT system, but this work is on hold. On hold. Holding on. Ready to be reanimated, in a new world such as the one that is gathering around us?
What is to come at us is as imprecise as a cloud. Clouds come down to earth in the form of fog. A fog is a visible aerosol, a dense concentration of water droplets or ice crystals suspended in the air. We pass through it. The fog of fog computing is not visible as we pass through it. It is an atmosphere busy with rapid data processing on secure local devices. Data, harvested from multiple devices, stays near the ground, near the event, obfuscated in the fog of our complicating worlds.
As another mode develops – edge computing, where processing occurs within the sensors or close to the gathering device itself – something analytical, operative maybe, happens at the edges of our vision, becomes available. We do not see those edges loom into view, and we may not even know they are there.
Edges, integrated with AI, make autonomous vehicles safer or monitor medical conditions and drug delivery. Edges trigger smart activities in the home. Edges distribute across our surfaces, communicating to someone somewhere whether workers have cleaned recently. Edges counteract the freezes of cloud-based video conferencing. Edges promise to bring fast precision to a foggy, cloudy world, where data is cast over everything, constantly generating more and more, and where efficiency is counted in milliseconds.
Edge computing is very often deployed in surveillance cameras. It allows for a fast and smart trawling through myriad data, in order to identify a face, a wanted person, the habits of consumers, someone with or without required permits. It may even clock perceived emotions. Microsoft’s Azure service specifies the detection of ‘a range of facial expressions such as happiness, contempt, neutrality and fear’. And on seeing what it is asked to find, the system makes almost instant decisions. Crowds are no longer crowds but pixel spots of identifiability. Our motives, what we are and why we do and what we feel, become a property of the edge, the fog, the cloud, all of them obscure to us.
The cloud is a vague thing, a thing built for our imagination and by it – innocuous in its name, like the cookie before it that was the trace of power, silently stored on our computers, through which we became trails of crumbs of data, little Hansels and Gretels each, unconsciously marking our wanderings. We might wish to float as free as the cloud.
Those of us who long for such untetheredness would do well to upskill – to find a way to short-circuit the conversations of devices by braiding the AI-generated plausible faces – with all their faked emotions and absent motives – with the facial recognition systems that wish to possess or trace.
The people-comfits in thispersondoesnotexist do not know who they are, or that they are. They cannot consent to be seen. But nor can we. Not really. Our faces are made and remade in the cloud, stored in the cloud, sent up to the cloud, drawn down from the cloud. Our faces and everything else that is and is no longer ours.
Image credit: kschulze