
A culture of ethical AI

What steps can organisers of AI conferences take to encourage ethical reflection by the AI research community?


This write-up from the Canadian Institute for Advanced Research (CIFAR), the Partnership on AI and the Ada Lovelace Institute explores what steps organisers of AI and machine learning (ML) conferences can take to incentivise ethical research practices within the AI research community.

Further work in this space by the Ada Lovelace Institute includes a forthcoming report, produced in partnership with the Alan Turing Institute and the University of Exeter, exploring the role of corporate and academic research ethics committees (RECs) and the steps they can take to support a culture of ethical AI research practices.

Background

The last few years have witnessed a growing awareness of the ethical risks that AI research can pose, including privacy risks to the public, discriminatory impacts on particular groups and the environmental costs of training large AI models. There are growing calls for AI researchers to identify and mitigate ethical risks at all stages of their research, but creating a cultural norm that incentivises these practices will require action from the entire AI research community.

AI and ML conference organisers are uniquely placed to encourage various forms of ethical research practices. Conferences are venues where research is rewarded and celebrated, enabling career advancement and growth opportunities. They are also forums where junior and senior researchers from the public and private sectors create professional networks and discuss field-wide benchmarks, milestones and norms of behaviour.

In a workshop held in February 2022, we invited AI and ML researchers and conference organisers to discuss what steps they can take to create field-wide incentives for more ethical research practices. 

This report synthesises the insights we gathered from the convening, and includes five big ideas for how AI and ML conference organisers can address these challenges, along with a wider list of interventions proposed by participants to foster a more responsible research culture in AI. We view this report as a menu of options for future AI and ML conference organisers to choose from, pilot and iterate on at their conferences. These include:

  1. AI conference organisers can consider a mix of prescriptive and reflexive interventions to improve researchers’ ability to assess the ethical impacts of their work. 
  2. Conference organisers should prioritise training more researchers and conference reviewers on how to examine the potential negative downstream consequences of their work. 
  3. Organisers should engage with research stakeholders including impacted communities to understand how conferences can empower them.
  4. Organisers could spotlight exceptional technical and ethically sound submissions.
  5. Conference organisers could incentivise more deliberative forms of research by enacting policies such as revise-and-resubmit and rolling submissions.

Organisers across different AI conferences should continue to collaborate closely in forums like this workshop, to share lessons learnt and discuss community-wide approaches for encouraging more ethical reflection.
