
Who gets to write the future?

Reimagining policymaking through an anti-racist and creative practice lens

Sarah Devi Chander, Squirrel Nation

24 June 2021

Reading time: 22 minutes

Image: a zoomed-in photograph of hands taking notes

This is a conversation between JUST AI racial justice fellows Sarah Chander, Erinma Ochu and Caroline Ward. The discussion began in March 2021, as the three fellows were confronted with how to engage with, and stay true to, the radical, boundless praxis of racial justice and creative practice. Here they grapple with the tensions and threads interlinking policymaking, creative practice and a tendency to silo AI ethics away from debates around anti-racism, equity and equality. They ask: who gets to write the future?

Sarah: How are you doing today?

Erinma: My students are just about to submit transmedia stories reflecting on the times we are living in. They have come up with some wonderful experiences, so I am quite excited about that.

Caroline: I’m writing out a brief, and I’m looking at five other documents and trying to pull from each of them, so that it feels like something whole and complete rather than Frankenstein’s monster. It’s getting there. What about you, Sarah?

Sarah: I have a lot of work to do following the EU’s proposed AI legislation,1 looking at how it’s been developing from a human rights perspective. I’m happy it’s Friday.

Erinma: I was wondering how this ‘what you’re doing in your job’ relates to the policy work for the racial justice fellowship. Do they feel like they are far apart or are they coming together?

Sarah: All of what we are discussing within our fellowship lab2 necessarily informs my policy work: racial justice and AI, critiques of the fairness discourse, critical perspectives on AI infrastructures and the extractive nature of AI. Mostly, it’s useful in terms of how I shape recommendations – drawing on concepts of impermissible use3 (citing the work of the Algorithmic Justice League4) and refusal practices against harmful technologies. It’s shaping what I work towards: asking for different, perhaps more systemic, ‘radical’ recommendations than those we otherwise see in the data policy field.

Erinma: I’ve been bringing the fact that we’ve lived through COVID-19 into policy conversations with social scientists. They’ve all been very polite, but I’ve had to keep troubling the politeness with stories. These highly intelligent people are listening, but in total silence. Total silence if I talk about the dismantling of the Care Act,5 the Lawrence Review of COVID-196 and the disproportionate impact on racialised people and LGBTQ+ folk.7 Silence, if I talk about the long list of inactions around racial, systemic injustice,8 and the lived experience of that oppression.9 Silence. The silences feel so violent. And I wonder if it’s because, at this point, they’ve not written about the pandemic yet, because it’s something to look back at and get grants to research in the future. Whatever the reason, it’s as if it’s not happened: because it hasn’t been fully researched, it cannot be counted as data and evidence. So we’ve been grappling with this idea of reimagining ‘proposals’, using the Dartmouth AI study10 (where the term ‘Artificial Intelligence’ was first coined, in the proposal for the study) and asking who gets to imagine the future through research.

Caroline: With creative practices, like storytelling or using lived experiences, we can find ways to frame, and allow people to digest, what they’ve heard. But I guess it depends who sets the stage on which those stories are told.

Erinma: Exactly. Our fellowships had barely started when Dr. Timnit Gebru, co-lead of ethics at Google, tried to publish a paper11 outlining the resources needed to build and sustain AI models, which tend to benefit wealthy organisations whilst having a high environmental impact on marginalised communities.

Sarah: When I listen to you both, I think about how useful creative practice could be in making policy informed by lived experience. The sense I get is that creative practice resists the siloing of issues and their impacts on people, particularly marginalised people. This is something I’ve identified in my advocacy. Often, when advocating on the harms of AI, we hit a wall if we are not speaking to those with the policy mandate to work on social issues or equality. For example, the people leading the EU’s legislation on AI were in another part of the organogram from the people working on equality. I guess this siloing also happens quite a lot in academia, and contesting disciplines12 is also really relevant.

Caroline: Yes, with creative practice there can be less of a tendency to default to standard boundaries, borders, labels or categories. When we notice that we might be ‘classified’, we’ll move out of that framing. We also try to come to knowledge and understanding in a more fluid or interrelated way. This relational way is often key to our work. Our approach isn’t necessarily to put some order to what we’re looking at, structure it and create a framework that is too rigid. There is a competency in remaining quite messy, and not prematurely closing down ideas. This draws from different considerations of what reality, knowledge and time are and what they can be, and of how, when people are starting from different positions and different rhythms, we can collaborate at the intersections of race, class, gender, disability, etc., outside of the matrix of domination.13

Erinma: For our Fellowship we’re thinking through Sylvia Wynter’s work considering the narrative and social production of ideas of what it is to be human.14 With human as a verb, not a noun.

Caroline: Yes, and knowledge not as a subject to be studied or a set of facts, data or texts, but how it frames and structures life, produces a status of human, not-quite-human and nonhuman.

Erinma: We’re also drawing on Wynter’s essay ‘The Ceremony Must be Found: After Humanism’,15 experimenting with different ways of ‘becoming human together’.

Caroline: Yes, a ritual or ceremony where senses come together, to reveal and surface the systemic issues as well. We are thinking about how that could relate to ‘policymaking as ritual’ and the performances of the people involved. How would the anti-discrimination people and the AI people get into the same space? Where could that happen, if not at a conference? And is it something that could happen through doing…

Erinma: …co-creation of policy?

Caroline: Or, how do you bring imagination into policymaking? One of the things that we’ve been looking at for a while, again, is the lifecycle of an issue. This was a bit of a lightbulb moment for us a few years back. How does an issue become solidified in policy? How does it become text? How does an issue first arise? Because obviously, policy does not in itself ‘sense’ or ‘respond to’ an issue; it is written by someone. Policy is a result of events happening in the world. What we were wondering about is the advocacy push that happens towards policy, and then how, along the way of policy development, that becomes ‘universal’ or ‘generic’ or ‘standard’. The reason we are interested in the lifecycle of an issue, like for AI or natural capital, is that we were wondering how those that are minoritised become centred in this process – how we can ‘theorize from the borders’16 and bring ‘border thinking’ into play. There are the power forces of imperialism that govern what counts and what gets pushed to the side along the process. How could this happen differently? How could the making of policy make a difference? I am thinking about both the ‘conjuncture’17 and the ‘Futures Cone’,18 where, with the conjuncture, the present moment is both a moment of danger and a moment of opportunity, in which we can make intellectual interventions and reassemble them through practice.

‘There is no permanent hegemony: it can only be established, and analysed, in concrete historical conjunctures’ (Hall, 1988)

Erinma: Yes, we might imagine the future doesn’t exist, but the future is being decided right now. And if we choose to go down a particular route, that takes us to a different future than if we just carried on doing the same things. We often make work in these kinds of spaces, thinking outside of market control or kind of sensing that control, looking for autonomy and resistance.

Caroline: Yes, the future is being created in the present with what dominates societal imagination, and that ends up being written. What happens if you refuse those master narratives?

Erinma: So it’s the failure of imagination of the policymakers and where they’re drawing their evidence and practices from? Just like the plantation thinking of siloed disciplines? The disciplinariness of the policy space cuts out everyday life in a spectacular fashion… so how do we bring it back in? How does the policymaker embrace the everyday?

Sarah: Yeah.

Caroline: As a form of sensemaking.

Erinma: So we have this particular kind of social scientist, being very polite in the policy space, not sharing their own papers in the moment, or their own personal stories, because that is not the done thing; they send in their papers researching the past by email, after the fact. It’s by invitation of course, but this complicity produces the silence. But there is also a point about lived experience being discussed alongside written evidence: it’s re-traumatising, and actually exhausting. So in terms of accountability, as well as opacity,19 at what point do the policymakers get held to account for their policy failures?

Sarah: That would probably require a rethinking of the entire policymaking practice. What does it actually look like to have lived experience in policymaking? We often hear that we need ‘participatory policymaking’. Actually, much of what has been put forward as participatory policymaking is not really so, frequently embedding the prior assumptions of those controlling the participation. Instead we have preferred to say ‘this work should centre marginalised people’. A lot of academics talk about this too, but how does it work in practice? Jasanoff20 picks up on the European view as ‘the view from everywhere’, because of its model of policymaking, which is usually based on evidence gathering from a range of different sources. In the eyes of the EU, it’s a very complete and apolitical process, in that you have evidence from many disciplines reflecting the breadth of all possible knowledge sources, with ‘a tacit presumption of informational purity’. The informational purity of established knowledge is often framed as oppositional to everyday lived experience. Lived experience is deemed unscientific, and the ‘view from nowhere’ eliminates the validity of the ‘view from somewhere’: situated knowledges which do not claim to be apolitical, but clearly state a political standpoint. The very design of that system, or elevated functioning of that system, further relegates people’s everyday experience, even though we know that this is probably one of the most powerful ways to change someone’s mind.

‘politics need not be denied or kept at arm’s length in legitimating data for policy making but instead can be accepted and held accountable as an essential and unavoidable feature of producing “serviceable truths” to inform public reason.’ (Jasanoff, 2017)

Erinma: That contradiction is very interesting.

Sarah: Yes, it’s seen as a contradiction. Attempts to bring in lived experience are far from the norm.

Erinma: So there’s something about the structures of the spaces in which this is created, which eliminates the possibility for margins to be considered.

Sarah: From the policy perspective, there are two main ways to bring the experiences of concerned individuals or groups into policymaking. The first is consultative processes. But the way they are currently designed is to get general views on predetermined questions which already embed ideological assumptions. The terms of proper access are rarely there, the timelines are often tight and the language is filled with jargon. To some extent this reaffirms my point that they’re not necessarily designed to gather lived experience or situated knowledge. And we also need to pay attention to who can respond to consultations. It says a lot about the division of power in terms of who responds from industry, academia and civil society, and how much money they can spend on lobbying on tech policy. These budgets run into millions for big tech.

Caroline: So is it that the consultation is a bit of a parade, theatre as a proxy for engaging lived experience? Whereas actually it further embeds existing knowledge, and that power carries into implementing these processes.

Sarah: It’s incredibly performative. The other way is through the legislative process, engaging your MEP or MP.

Erinma: It’s like a mountain, a behemoth, and the speed at which these things happen! And then you have mountains of paper to read through. There are examples from documentary and verbatim theatre, where the creators have taken evidence from inquiries, including written and oral testimonies and evidence submissions, to inform the play; fiction and evidence are then brought together in a space where the audience has paid for a ticket, goes to see the play and listens. You know, you watch and you can leave, but you are paying to feel uncomfortable, to make sense of what is presented on stage. And the audience is not there as the policymaker. These spaces are where the audience can bring their own experiences into the mix. And, rather than thinking about ourselves in terms of ‘I’ve got to make these decisions’, maybe this offers a different way to make sense of an issue?

Caroline: I wonder if this ‘view from nowhere’ is even helpful to policymakers. Are there situations where the ‘view from nowhere’ does damage to the policymaker, as a citizen? Are there examples of that? It could be an interesting place to look.

Sarah: There is one really good example with AI, related to this topic, focused on mass surveillance. With the EU process on AI now, there is reluctance to implement a complete limitation on the deployment of technologies that enable mass surveillance. As an individual policymaker, I would be worried about that. Often policymakers occupy a certain class privilege and are immune to many of these harms; they are not exposed in the same way some others are. But as a citizen who needs to access the public space, ultimately this will impact you. I think this is a good example of how the policymaking process involves policymakers only as implementers, and of the limitations of this removed approach to policymaking.

Erinma: Wow. It’s just fascinating. It’s just bizarre.

Caroline: Yes I can see the value in this link between creative practice and policymaking, but taking it to a different space, where the forces at play are different.

Sarah: Yes, and also the possibility of engaging lived experience in policymaking, and bringing in racial justice perspectives. I think policymaking in the space of social justice and technology (in think-tanks, too) could stand to gain from ‘the view from somewhere’. Instead of purporting to advance views from everywhere or nowhere, it would be worth analysing where apolitical stances on data politics come from and who they serve.

Erinma: Often these spaces are quite straight. It all needs queering.

Caroline: It can be encouraging that the space is generative, more of a conversation than a performance. But I am also thinking about an exhibition called ‘Law Shifters’ by Stine Marie Jacobsen that we saw a few years ago in Copenhagen, at Nikolaj Kunsthal. It was an exhibition about law: you could write your ideas down and post them into a tube, a kind of hamster run, and your piece of paper got sucked into a big box of desires in relation to the EU. I wonder if they made it into evidence?

Erinma: Just in you saying that, maybe there’s something about how you could use documented evidence, almost in the words of the policymakers as they’re saying these things. Now, if you could get AI to churn that out!

Caroline: Could you use some of those behavioural change tactics to coerce that policymaker? I guess there’s a lot of weight on how credible something is when it’s written down.

Sarah: That a written contribution forms the basis of being able to get into any consultation is also very problematic for many reasons, right? If we think of how knowledge is produced and valued, the written word is privileged in spaces of power, whereas for many, oral testimony is the important source of knowledge distribution, and yet this is excluded from formal policymaking processes.

Erinma: I think we’ve found some cracks in the reality of policymaking in relation to AI ethics, and who that works against. There’s something really interesting that we can imagine as a process, critiquing the culture and language surrounding the technology.

Caroline: Could be really interesting.

Sarah: I love it.

Erinma: I mean, it could get us into trouble, but as artists we can put words into people’s mouths. Playing around with the structure is really interesting mentally: the structures that dictate the spacing, and which words, values or code are written in, to produce and structure the data.

Caroline: There’s a whole economy around it, isn’t there? There’s a whole economy around the making of policy and the making of data.

Erinma: And it’s tricky because it requires us to imagine beyond racialised capitalism. Being in it, yet finding the opportunity to resist it, to find or open up a different path.

Caroline: Which takes us back to the conjuncture, and what we are attempting to do by reimagining the Dartmouth Summer School, where ‘AI’21 was originally coined as a term in the 1950s – imagined from the same logics centred around the universal standard of ‘man’, formulated around the rationale of ‘biology’, which, as Wynter identifies,22 epistemologically forecloses ways of being human otherwise.

Erinma: We’re wondering about how this relates to AI in policymaking. Would you agree that the work of policy is about imagining what kinds of futures, what kinds of lives are valued as worth living?

Sarah: Definitely. In my work, I see that working in policy is world-making and inherently involves explicit or implicit value judgments about what the present is, what the future should be. This is linked to a political statement of whose lives have value.

To your points about imagining, I’d really like policy to be more about imagining. I often question how far transformative work is possible within current policymaking circles, or whether this space is only capable of engendering different futures through reform, step-by-step change, chipping away at the block. Often ‘policy’ is dictated by what is feasible rather than what is necessary. Resigning ourselves to the slowness of progressive change, or limiting our demands so as not to be perceived as ‘too radical’, has always felt convenient for those that have the luxury of waiting patiently for change. I’ve never really liked this.

This lack of urgency characterising ‘pragmatic advocacy’ is a symptom of the failure of most working in policy to reckon with processes of violence, exclusion and marginalisation. I think J. Khadijah Abdurahman puts this well in ‘On the Moral Collapse of AI Ethics’.23

‘Innovation and technological progress, in the context of racial capitalism, are predicated on profound extraction so the next disaster is inevitable, our failure to respond and offer an alternative worldview is not. We can do better.’ (Abdurahman, 2020)

Radical imaginings and pushing for transformative change could very much be part of policy work. But we’d need a drastic and collective institutional shift towards much bolder ways of advocating against structural inequality in a digital context. My project explores how that might happen: what forms of understanding of racism would need to be built, what conditions would need to be set, which structures would need to change.

Caroline: This point is incredibly important. Working with creative practice, AI is becoming enthused and infused into art contexts. So there is a real question around how we practise, and how to avoid unintentionally co-opting these technologies, even when taking a critical perspective.

Sarah: Yes, instead of discussing how to tweak current systems to be slightly less problematic, radical re-imaginings would get to the roots of the problem, to slightly paraphrase Angela Davis.24 We would not spend time trying to document precisely how surveillance causes harm or how to ‘de-bias’ carceral technologies; we would instead imagine: if we were to dismantle surveillance technologies, domination, extraction, what would be there instead? Who gets to write that future?

Erinma: Yes. Alongside that there is also the question of equity and reparation.


Image credit: Squirrel Nation. Licence: https://creativecommons.org/licenses/by-nc-sa/2.0/

Footnotes

  1. European Commission (2021). ‘Proposal for a Regulation Laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts’, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206
  2. Hickman, L., Boudiaf, L., Fubara-Manuel, I., Devi Chander, S., Ochu, E. and Ward, C. (2021). ‘Community Agreement for Open Labs’, available at: https://docs.google.com/document/d/e/2PACX-1vSoNVDMnYmcojEUtC4ayxNyZe-vTGtTjYus5MtYK2MAwsQm-coyuDl9XhOQnYf2NF3swr6rA2aPcf6q/pub?urp=gmail_link
  3. EDRi (2020). ‘Recommendations for a fundamental rights-based artificial intelligence response: addressing collective harms, democratic oversight and impermissible use’, available at: https://edri.org/wp-content/uploads/2020/06/AI_EDRiRecommendations.pdf
  4. The Algorithmic Justice League describes the need to centre justice by focusing on impermissible use: ‘Justice requires that we prevent AI from being used by those with power to increase their absolute level of control, particularly where it would automate long-standing patterns of injustice such as racial profiling, in law enforcement, gender bias in hiring and overpolicing of immigrant communities. Justice means protecting those that would be targeted by these systems from their misuse.’ Available at: https://www.ajl.org/learn-more
  5. Disability Rights UK (2020). ‘Don’t withdraw Care Act from Coronavirus Bill, urge disability charities’, available at: https://www.disabilityrightsuk.org/news/2020/march/dont-withdraw-care-act-coronavirus-bill-urge-disability-charities
  6. The Lawrence Review (2020): https://www.lawrencereview.co.uk/
  7. LGBT Foundation (2020). ‘Why LGBT people are disproportionately impacted by coronavirus’, available at: https://lgbt.foundation/coronavirus/why-lgbt-people-are-disproportionately-impacted-by-coronavirus
  8. Stuart Hall Foundation Race Report (2021): https://www.stuarthallfoundation.org/foundation/shf-and-code-partner-to-consolidate-40-years-of-inquiries-into-racial-inequality-in-britain/
  9. Ochu, E. (2020). ‘A dangerous game…’, available at: https://everyoneandeverything.wordpress.com/2021/04/02/a-dangerous-game/
  10. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, August 31, 1955. AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
  11. Hao, K. (2020). ‘We read the paper that forced Timnit Gebru out of Google. Here’s what it says’. MIT Technology Review, available at: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
  12. Dervin, B. (1993). ‘Verbing Communication: Mandate for Disciplinary Invention’, Journal of Communication, Vol 43 (3), pp45–54, https://doi.org/10.1111/j.1460-2466.1993.tb01275.x
  13. Collins, P. H. (2000). Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment (2nd ed.). New York: Routledge
  14. Wynter, S. (2001). ‘Towards the sociogenic principle: Fanon, identity, the puzzle of conscious experience, and what it is like to be “Black”’. In M. F. Durán-Cogan & A. Gómez-Moriana (Eds.), National Identities and Sociopolitical Changes in Latin America (pp. 30–66). New York: Routledge
  15. Wynter, S. (1984). ‘The Ceremony Must be Found: After Humanism’. Boundary 2, 12/13, 19-70. doi:10.2307/302808
  16. Mignolo, W. D. & Tlostanova, M. V. (2006) Theorizing from the Borders: Shifting to Geo- and Body-Politics of Knowledge. European Journal of Social Theory, 9 (2): 205–221
  17. Hall, S. (1988.) The hard road to renewal: Thatcherism and the Crisis of the Left. London: Verso
  18. Hancock, T. & Bezold, C. (1994), ‘Possible Futures, Preferable Futures’, Healthcare Forum Journal, 37(2):23-29
  19. Glissant, É. (1997). Poetics of Relation. Ann Arbor: University of Michigan Press
  20. Jasanoff, S. (2017). Virtual, visible, and actionable: Data assemblages and the sightlines of justice. Big Data & Society. https://doi.org/10.1177/2053951717724477
  21. McCarthy, J., Minsky, M. L., Rochester, N. and Shannon, C. E. (2006). ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’. AI Magazine, 27(4), pp. 12–14
  22. Wynter, S. (2003). ‘Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation–An Argument,’ CR: The New Centennial Review, 3 (3)
  23. Abdurahman, J. K. (2020). ‘On the Moral Collapse of AI Ethics’, available at: https://upfromthecracks.medium.com/on-the-moral-collapse-of-ai-ethics-791cbc7df872
  24. Davis, A. (1989). ‘Let Us All Rise Together’ (address, Spelman College, June 25, 1987), in Davis, A., Women, Culture and Politics. New York: Vintage Books
