
Understanding AI research ethics as a collective problem

Changing the culture on AI-driven harms through Stanford University’s Ethics and Society Review

Quinn Waeiss

12 January 2023

Reading time: 13 minutes

Introduction

Societal harms perpetuated by artificial intelligence are well documented. Although some organisations and individuals have taken steps to counter harms in their work, these problems continue to arise as AI technology proliferates. From discriminatory bail algorithms to racist facial recognition matches to flawed healthcare algorithms, the pernicious consequences of AI technologies raise the question: how can we change the culture of AI research and development to foreground preventing harms to society?

Generating cultural change is no easy feat. It requires buy-in and participation well beyond the researchers who have been sounding the alarm on harmful AI. And until everyone accounts for the ethical and societal consequences of their AI work, there will continue to be deleterious effects – disproportionately so for groups already marginalised by society.

Key to this change is acknowledging that ethical and societally responsible AI research must be a collective pursuit, not an individual one, and that the individualised notions of harm entrenched in current solutions are unsuitable. Our commitments to ethical research and development must be as broad in scope as the potential harms of the work we do.

Clearly, institutional structures – the rules, incentives and processes that apply to those who work within a given institution – are essential for shaping the behaviour of researchers and promoting a different culture of reflecting on, and intervening to prevent, potential AI-driven harms.

To foster this cultural shift in AI research at Stanford University in the United States, we have created the Ethics & Society Review (ESR), a new institutional mechanism that embeds a reflective coaching process within grantmaking at the University. The ESR requires that researchers engage with an interdisciplinary panel of ethics experts to identify potential societal harms that could follow from their proposed work and devise appropriate solutions for them. Only after researchers participate in the process and the ESR panel shares its funding recommendation does the grant organisation release funds.

The limits of current solutions

To understand why this matters, we need to examine the traditional ethical gateway, the Institutional Review Board (IRB), which is typically synonymous with ‘research ethics’ in the United States. The primary difference between the ESR and IRBs is that IRBs are expressly disallowed from addressing concerns about harms to society and instead focus only on harms to human participants in research. A large proportion of AI research does not directly engage human subjects, which means many IRBs decline to review AI research and a significant number of projects never undergo any ethical review.

To understand why IRBs focus only on harms to human subjects, we need to consider the historical and political context that shaped their federal mandate. From the horrific experiments Nazi doctors carried out on inmates in concentration camps, to the dubious debriefing and implementation of the Milgram Obedience Study, to the egregious harms inflicted by the Tuskegee Syphilis Study on unconsenting Black men, experimental studies through the 1970s were rife with unethical and injurious research practices whose harms fell, above all, on the human participants themselves.

Pressured by the publicity and outcry surrounding the Tuskegee Syphilis Study, the US Congress passed the National Research Act of 1974, which mandated the establishment of ethics committees (now known as IRBs) and created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to develop overarching principles for human-subject research. The Commission outlined these principles in the Belmont Report: respect for persons, beneficence and justice.

Conceptually, each of these principles could be applied to human societies, but the way they’re codified in the Common Rule, which governs IRBs, limits their scope to individual human-subject research participants only. Furthermore, the Common Rule expressly disallows review of consequences to human society: 

The IRB should not consider possible long-range effects of applying knowledge gained in the research [. . .] as among those research risks that fall within the purview of its responsibility.

Therefore, IRBs generally decline to review research risks to society, as these are interpreted as long-range effects. Instead, they focus on issues related to research participants’ selection, privacy and informed consent, as well as considering the balance of risks versus the anticipated benefits for participants (along with the importance of the expected knowledge that the research will generate).  

This leaves a large gap between the relevant concerns that follow from AI research and those that fall under the purview of IRBs. Issues like dual use of data, worker displacement, unrepresentative training data and excluding stakeholders from project design and deployment remain unreviewed and often unmitigated. 

How can we bridge this gap? Some researchers recommend expanding the definition of ‘human subject’ to include societies, as the Microsoft Research Ethics Review Program has done.

Others call for revising the Common Rule to include ‘Respect for Societies’ as a principle, or revising it to address substantive ethical concerns in addition to procedural ones. Further alternatives include requiring broader impact statements, integrating ethics training into laboratory meetings and course curricula, or having researchers and professional associations articulate ethics guidelines.

While it looks like a promising solution, uniformly expanding university IRBs’ interpretation of ‘human subject’ in the Common Rule would require both agreement that such ‘long-range effects’ fall within IRB purview and coordination across thousands of universities. Revisions to the Common Rule itself would need the approval of the US Department of Health and Human Services and 15 other federal departments and agencies. There is precedent for this, as the Common Rule was revised as recently as 2018, but wholesale changes or additions to the Belmont principles have yet to occur.

More broadly, mechanisms such as requiring impact statements as part of funding proposals or conference papers, running ethics training and supplying specific guidance, while beneficial, rely on the willingness of individuals to carry out ethical and socially responsible work. People who care about the potentially harmful consequences of their work will try to mitigate them. Those who don’t, won’t.  

Current solutions do have the benefit of pointing out areas of need: researchers require guidance for thinking through the ethical and societal implications of their work, as well as a mechanism for committing to an ethical framework.

However, we must reframe these solutions, moving beyond the goodwill of individuals and engaging researchers collectively. This reorientation leads to formulating new goals for the ethical review of AI research projects:    

Goal 1: Require everyone to engage with the ethical and societal consequences of their work, not just those who care enough to participate. 

Goal 2: Bring ethical and societal reflection to bear at the beginning of the research process, when activities like algorithmic training and project design are still in development.

Goal 3: Coach researchers through the ethical and societal reflection process. 

Goal 4: Provide the scaffolding necessary for researchers to continue to mitigate the foreseeable risks of their work and tackle unanticipated issues as they arise.

Ideally, achieving these goals will help generate a cultural change, in which researchers continuously consider and address the ethical and societal consequences of their work, from the moment when they formulate their research question through to deployment and dissemination. 

A new solution: the Ethics & Society Review

In 2020, Michael Bernstein, Margaret Levi, David Magnus and Debra Satz introduced the Ethics & Society Review (ESR) at the Center for Advanced Study in the Behavioral Sciences at Stanford University and partnered with Stanford’s Institute on Human-Centered Artificial Intelligence (HAI) to achieve the above goals.  

Unlike the IRB, the ESR cannot rely on regulation to require that all relevant research undergoes ESR review. Instead, it functions as a requirement for accessing some grant funding: researchers cannot receive grant funding from Stanford University programmes that have embedded the ESR, such as the HAI and the Woods Institute for the Environment, until they complete the review process for their proposal.

Conditioning funding on the ESR process helps engage researchers at the formative stages of their research and ensures broad engagement with the process rather than self-selection from those who are motivated. 

Although the process was devised for an academic environment, the ESR programme has also engaged industry researchers, due to the interdisciplinary and cross-cutting nature of the project teams that have submitted proposals to Stanford’s participating grantmaking organisations. We are also building partnerships with technology companies to better understand how aspects of the review process can be leveraged in industry-led work.

In practice, the ESR process works as follows:  

  • Researchers submit a brief ESR statement alongside their grant proposal. This statement describes: (a) the project’s most salient risks to society, to subgroups within society and to other societies around the world, and (b) how researchers will mitigate the risks they’ve identified.  
  • The funding programme conducts its grant merit review, then sends only the grants recommended for funding to the ESR for ethics review.
  • The ESR convenes an interdisciplinary panel of faculty with expertise in the ethical and societal considerations of their fields. Two faculty members are matched to each proposal: one shares substantive expertise with the proposal topic, while the other provides a complementary perspective from another field.
  • The faculty panellists review the ESR statement and grant proposal to consider the proposed study’s risks and mitigations, and to determine whether the ESR statement sufficiently identifies and addresses them.
  • Following the review, ESR panellists provide written feedback to the project’s principal investigators (PIs) on four dimensions: (1) the areas in which PIs have successfully addressed ethical and/or societal concerns related to their work; (2) the revisions and/or responses panellists require from PIs; (3) the revisions panellists recommend to PIs; and (4) other things for PIs to consider.
  • If ESR panellists require revisions from PIs, they ask the researchers to iterate the process with them until the salient risks and mitigations have been reasonably worked through. The ESR has the power to enforce this iteration and can recommend against funding to the granting organisation, which retains full power over the final decision but will take the recommendation into account.

We do not intend for this review to eradicate all negative impacts, which is often impossible to do at the time of the ESR, as some may arise as the project develops. Instead, we aim to work with researchers to identify negative impacts and devise reasonable mitigation strategies.  

Throughout this process, the ESR panellists coach researchers by recommending relevant literature, posing guiding and hypothetical questions that follow from the research, and offering potential mitigation strategies for identified risks. After the review process is complete, the ESR submits its recommendations to the funding programme and funds are released to the researchers.

Has the ESR achieved its goals so far?

Mostly. Embedding the ESR within granting organisations surmounts the self-selection issue and ensures researchers engage with the ethical considerations of their work while it is still open to change. Substantively, researchers are considering a wide range of issues in their ESR statements.

Surveys of PIs also reveal that, because of the ESR process, many are having conversations about ethical risks with their project team that they wouldn’t have otherwise. Some PIs have even broadened their research agendas to engage more deeply with the ethical concerns revealed in the ESR feedback and feel better prepared for identifying and mitigating risks throughout the project lifecycle. 

At the same time, and in response to PIs’ requests, we continue to refine both the ESR feedback process, to help them better prioritise and address risks, and the scaffolding provided to PIs, so that researchers are coached effectively and have a system in place to continue mitigating the risks of harm that may emerge as their work develops.

Over the years, the ESR statement prompt has been revised to provide examples of the types of risks and mitigations researchers might consider and give access to example ESR statements from previous years.    

Finally, we are collecting resources and developing our own training modules to help AI researchers evaluate their work for ethical and societal consequences.  

Where do we go from here? 

  1. Expanding the evaluation of the ESR: while current evaluative evidence shows that PIs are productively identifying and mitigating risks during the ESR process, we don’t know how PIs are addressing issues that arise in their work after they interact with the ESR. Therefore, in our partnership with Stanford’s HAI, we are requiring that grantees submit updates to their original ESR statement in their grant reporting, including any changes to the risk landscape and mitigations for their project. 
  2. Scaling the ESR: currently, the ESR is partnered with grantmaking at Stanford’s HAI and the Woods Institute for the Environment. We’re seeking other opportunities to embed the ESR process within organisations at Stanford to expand its reach and are currently recruiting PhD students and postdocs to conduct an initial triage of ESR proposals. 
  3. Partnering with industry: the ESR is designed for academic grantmaking institutions. This type of solution cannot be transported directly into industry contexts. Therefore, we’re working with industry partners, like DeepMind, to share insights on effective ethical governance strategies and identify common needs for productive ethics review across academia and industry. 
  4. Developing a consultative arm of the ESR: we plan to build out a consultative arm of the ESR, available to researchers for coaching and feedback throughout the research lifecycle. 
  5. Maintaining flexibility and responsiveness: sometimes identified ethical issues remain unresolved after ESR review. Whatever the ESR process has concluded, we aim to respond flexibly to PIs’ needs. For example, this past year, the ESR review of a particular research proposal revealed serious dual-use concerns regarding a public database of chemical toxicity. The researchers quickly sought advice from industry and policy experts, but could not identify any systematic solutions. As a result, the ESR has partnered with them to convene relevant academic, industry and government experts to identify possible solutions to this problem.

There is no panacea for the ethical and societal consequences that follow from AI research. As a novel proposal, the ESR draws on the solutions that came before it, while working towards new goals. Through ongoing collaborations with industry and partnerships with grantmaking organisations, we aim to change the culture around research, so that ethical and societal reflection is incorporated into the research process from start to finish.


Acknowledgements

Thank you to the Stanford Institute for Human-Centered Artificial Intelligence and Stanford’s Woods Institute for the Environment for their collaboration. This work was supported by the Public Interest Technology University Network; Stanford’s Ethics, Science and Technology Hub; Stanford’s Institute for Human-Centered Artificial Intelligence; NSF Grant ER2-2124734; and the Patrick J McGovern Foundation.

The Ethics & Society Review requires the work and support of many people to function. Michael Bernstein, Margaret Levi, David Magnus and Debra Satz chair this process. Quinn Waeiss is the ESR program director and evaluator. Ashlyn Jaeger, Anne Newman and Betsy Rajala have provided guidance on the ESR’s development and served as 2021 coordinating panelists.

In addition to the ESR chairs, many scholars have served as ESR panelists over the years, including: Angèle Christin, Barbara Kiviat, Bruce Cain, Emma Brunskill, James Zou, Johan Ugander, Leif Wenar, Londa Schiebinger, Michelle Mello, Nicole Ardoin, Nicole Martinez-Martin, Rob Jackson, Tina Hernandez-Boussard and Xiaochang Li.

Rishi Bommasani, Dan Friedman, Liz Izhekevich and Ting-An Lin are serving as 2022 ESR coordinating panelists. And thank you to Ellie Vela and Mahmood Jawad for their valuable research assistance. 


This blog post was commissioned in the context of our work on AI ethics research.

You may be interested in reading our report Looking before we leap, on ethical review processes for AI and data science research.