
AI-driven biometry and the infrastructures of everyday life

Normalising the principles of oppression through discriminatory technologies

Mona Sloane

5 May 2022

Reading time: 6 minutes

Image: a phrenology head against a white background.

Over the past few years, we have witnessed the rapid proliferation of biometric technologies: facial recognition technology and fingerprint scanners in our phones, sleep-pattern detection on our wrists, and speech-recognition software that powers automatic transcription, such as captioning.

What all these technologies do is measure and record some aspect of the human body or its function: facial recognition technology measures facial features, fingerprint scanners measure the distance between the ridges that make up a unique fingerprint, sleep-pattern detection measures movement in our sleep as a proxy for wakefulness, and so on.

AI is fundamentally a scaling technology. It is walking in the footsteps of many other technologies that have deployed classification and categorisation in the name of making bureaucratic processes more efficient, from ancient library systems to punch cards, to modern computer-vision technologies that ‘know’ the difference between a house, a road, a vehicle and a human. The basic idea of these scaling technologies is to minimise situations in which individual judgement is required (see also Lorraine Daston’s seminal work on rules).

When AI technologies are used to power systems that judge or classify aspects of the human body, then they – inevitably – deploy biometric methods at scale. We often get so bedazzled by the apparent efficiency and convenience of this scaled biometry that we miss the ways that it essentialises pseudoscience and, as such, enshrines it in the infrastructures of our everyday lives.

The histories of biometry and pseudoscience are intimately entangled. Some scientific methods that have become infrastructural to science have their roots in biometric methods deployed to ‘prove’ the ‘natural’ superiority of the white body.

For example, Aubrey Clayton writes about how eugenicist belief systems played a role in giving rise to the idea of the standard deviation, the bell curve and other statistical methods. Simone Browne’s notion of ‘black luminosity’ describes the ritualised shining of a light (literally and virtually) on others as a means of control and maintaining oppressive power structures.

Similarly, Natasha Stovall has shown that the concept of ‘intelligence’ as a measurement of human potential and capability, and by extension as an underlying concept of AI, has solidified racialised notions of inequality.

These belief systems, sustained by our most essential scientific and technological systems, are alive and kicking. For instance, it took the US National Football League until October 2021, and a class action lawsuit, to end the practice of ‘race-norming’ when assessing the brain damage sustained by professional football players (the majority of them Black) throughout their careers.

‘Race-norming’ is a form of medical racism, packaged as ‘science’, that assumes a standard deviation of Black and Brown bodies from the ‘average’ white body, typically towards less ability. For example, a ‘standard’ lower cognitive capacity (as in the NFL case) or, as Lundy Braun has shown, a ‘standard’ lower lung function.

When these assumptions become hidden in the AI-powered biometry that is now part of our everyday lives, they become ‘infrastructuralised’. More recently, this hidden infrastructure has started to underpin a high-stakes area of social life: recruiting.

The role of AI and AI-powered biometry in recruiting has grown significantly during the COVID-19 pandemic, since many tools, such as one-way video interviews, appear uniquely suited for a socially distant hiring process.

The market leader HireVue reported that between 2020 and 2021 their interview volume grew by 40%, and in September 2021 the company recorded more than one million interviews in one month. Having been unable to connect with job candidates in person for much of 2020 and 2021, recruiters and companies are now faced with the ‘great resignation’ and an extremely competitive labour market.

This heightened competition pushes recruiters towards greater use of AI systems to recruit and retain workers across all sectors. The portfolio of AI systems that recruiters are using is vast and includes biometry alongside other general-purpose and natural language processing systems.

Among others, it includes AI systems that rank candidates according to inputs such as skills, job title, years of experience, location and more; natural language processing systems that aim to improve the targeted writing of job ads or outreach messages; video interviewing software; and automated assessments of various types.

There is mounting evidence that the AI systems used in hiring carry biases that disproportionately disadvantage communities and individuals who already face barriers to the job market. An example of a non-biometric system used in recruiting and hiring that has been shown to be discriminatory is Amazon’s automated resume screener. The now-retired system was designed to predict a candidate’s future success as a basis for ranking applicants, but it ended up using gender as a proxy and sent women’s resumes to the bottom of the pile.

However, things get muddier when biometry enters the frame. Commonly used AI-powered biometry for recruiting includes automated ‘personality testing’, ‘micro-expression analysis’, and ‘neuro-games’. Personality tests were originally developed to predict shell shock in soldiers after World War I and are deeply entangled with the history of corporate management.

AI-driven personality tests frequently claim to be able to discern a candidate’s personality type (based either on the infamous Myers-Briggs Type Indicator or on the ‘Big Five’ personality traits). These analyses are performed on, for example, a limited sample of words written by a candidate, their social media profile or their resume. Even though written words are not biological features, they get treated as biometry, i.e. as unique to a person.

Other technology is more explicitly biometric, e.g. technology that is designed to detect personality based on components of vocalisation. Similar to the idea that handwriting can reveal personality, automated text or voice-based personality testing is based on assumptions that are not only unscientific, but have their epistemological roots in openly discriminatory concepts.

A similar picture emerges when we turn to computer vision systems that analyse facial expressions as a basis for determining job fit, or when candidates’ ability is assessed via neuroscience games to ‘objectively’ weed out candidates whose game behaviour does not match that of the most successful employees in the company. Assessing faces to discern ‘types’ is rooted in the history of mugshots, and skills assessments have been proven to harbour bias against people with disabilities.

Research on the intersection of AI-driven biometry and recruiting is still in its infancy. More work is needed to trace how the claims of these systems are shaping the hiring ecosystem, and how these claims can be assessed. But one thing seems certain: when AI-driven biometry proliferates outside of the public eye and without oversight, such as in recruiting, we run the risk of normalising not only harmful technologies, but also the underlying principles and assumptions of oppression and discrimination.
