This year’s International Women’s Day theme, ‘Each for Equal’, has particular resonance for Black women, who experience discrimination – for being female, for being Black and, more specifically, for the unique identity of being Black women.
This intersectional group is disproportionately affected by instances of bias in both the design and application of AI, and they are also leading the fight to make new technologies more equitable for everyone.
Intersectionality – the proposition that race, class, gender and other individual characteristics intersect in a way that impacts how a person is viewed, understood and treated – opens possibilities for deeper thinking about how injustices occur in everyday life.
Kimberlé Crenshaw, a professor of law at Columbia University and the University of California, Los Angeles, coined the term in 1989 in ‘A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics’. Crenshaw’s work was rooted in Critical Race Theory – the belief that the structures of law and society are intrinsically racist – and she saw the failure to recognise the intersection of race and sex as part of that structure.
In that essay, Crenshaw argues that Black women, through being both Black and female, suffer specific forms of discrimination that Black men or White women may not. Her original goal was to highlight areas of discrimination law that courts were overlooking, but the term has since become essential vocabulary in debates about oppression: activists have employed intersectionality as an organising principle for substantive change, while some conservatives view it as evidence of a hierarchy of victimhood. Though debates around the theory can often be fraught, the prominence it gives to the experience of Black women has been instructive, and it’s a focus that the AI community could benefit from adopting.
Questions around bias in AI tend to interrogate gender issues or racial disparities, but there is less work on the intersection of race, gender and AI. This is particularly problematic as the implications of bias for Black women can be seen in some of the most ubiquitous and advanced AI technologies. Last year, a study by the National Institute of Standards and Technology (NIST) found that African-American women were the group most likely to be misidentified by the facial recognition algorithms of over ninety AI technology developers.
In other AI spaces, such as Emotion AI, there has been little investigation into the effects of bias on Black women. Human emotion bias occurs when a person is more likely to misinterpret the emotions of one demographic group over another. This phenomenon can now be observed in machines trained on real-world datasets, and yet the technology is increasingly appearing in recruitment and marketing industries. While we shouldn’t assume that these models will internalise and reinterpret cultural racist tropes such as ‘the angry Black woman’ directly, the ambiguity present in the relationship between inputs and outputs makes research in the area imperative.
Similarly, the consequences of bias in Natural Language Processing are unclear, even as the use of the technology spreads. Market intelligence tools, chatbots and sentiment analysis built on Natural Language Processing models, where embeddings encode information by assessing the context in which a word occurs, might lead machines to associate ‘woman’ with ‘weak’ and ‘Black’ with ‘violent’. We need more awareness and research to understand the repercussions of such inherent language bias for Black women.
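One way researchers probe this kind of association is by comparing cosine similarities between word vectors: if a model’s vector for ‘woman’ sits closer to ‘weak’ than to ‘strong’, the embedding has absorbed that association from its training text. The sketch below is a minimal illustration, assuming tiny hand-made three-dimensional vectors – real embeddings such as word2vec or GloVe have hundreds of dimensions learned from large corpora, and the numbers here are invented for demonstration, not measurements from any actual model.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings, invented for illustration only.
# In a real audit these would be looked up in a trained model.
emb = {
    "woman":  [0.9, 0.1, 0.3],
    "weak":   [0.8, 0.2, 0.4],
    "strong": [0.1, 0.9, 0.2],
}

# A simple association score: positive means 'woman' is closer
# to 'weak' than to 'strong' in this (toy) vector space.
bias_score = cosine(emb["woman"], emb["weak"]) - cosine(emb["woman"], emb["strong"])
print(f"association bias score: {bias_score:.3f}")
```

Audits of real embedding models use the same basic measurement, aggregated over many word pairs, to quantify how strongly identity terms attach to stereotyped attributes.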
These are the types of difficult questions being tackled by Black women across academia, coding, law and art. Joy Buolamwini, computer scientist, AI researcher and founder of the Algorithmic Justice League, has been working to raise awareness about the impact of AI through experimental research, auditing and art. Recently, she featured in Coded Bias, a documentary film and official Sundance Film Festival selection highlighting the potential harms of AI. Rashida Richardson, Director of Policy Research at AI Now, is leading research strategy on AI and civil rights. Sondra Perry, in Graft and Ash, uses her art to raise questions about the limitations of AI in reflecting a fully realised version of her Black, female form.
The women I describe here do not fit neatly into the ‘hierarchy of victimhood’ that some argue is the goal and product of applying intersectionality as an idea. Instead, these individuals are using a combination of empathy and talent to create a more hopeful future where different types of bias in society are not compounded by the AI technology that is increasingly shaping our world. For that, we all have cause to celebrate.
Charlene Prempeh is founder of the innovative new platform A Vibe Called Tech, which actively seeks to encourage academic, cultural and technological discourses within Black communities.
Image: Stephanie Dinkins, Not the Only One: a multigenerational memoir of a Black American family told from the perspective of a custom deep learning AI.