
What’s industry’s role in shaping ethical AI?

A summary of a techUK and Nuffield Foundation workshop exploring the role and responsibilities of industry in embedding ethical AI.

29 May 2018

Reading time: 10 minutes

This note reports the findings of a techUK/Nuffield Foundation workshop exploring the role and responsibilities of industry in embedding ethical AI. Participants felt it would be vital to increase public understanding and improve social dialogue about the ethics of AI, alongside new ways to consider ethics earlier in the design and development of AI systems, incentives within businesses and the wider industry ecosystem to respond to ethics, and consideration by industry of its role in affecting equality. Many participants also raised the need for a more effective and collaborative governance framework underpinning ethical AI.

As part of the development work of the Ada Lovelace Institute, the Nuffield Foundation are hosting a series of stakeholder roundtables, workshops and events that explore:

  • How public legitimacy can be built in the use and development of technology and what public engagement approaches can secure/strengthen legitimacy;
  • How civil society can be supported to shape the development of technology for people and society (‘civil society in the loop’);
  • How technology can help tackle inequality and enable social wellbeing; and
  • What empirical research currently exists about how technology affects people, groups and wider society, where the gaps currently are, and what we might do to help address them.

All of the insights gathered through these events will be informing our prospectus for the Ada Lovelace Institute, alongside early stage research we are commissioning with the Centre for Future Intelligence.

As part of its work scoping the Institute, the Nuffield Foundation recently convened an interdisciplinary workshop in partnership with techUK, the UK’s technology trade association, to better understand the emerging challenges that the development of AI poses for industry, as well as what role the Ada Lovelace Institute might play in tackling them. This workshop was held under the Chatham House Rule.

Participants in this workshop included management consultancies, law firms, HR and organisational consultants, and AI and tech developers and suppliers. We brought these people together in dialogue with the Institute’s own staff, as well as with researchers we are partnering with at Cambridge University’s Centre for Future Intelligence.

This note summarises the key themes discussed.

Emerging social and ethical issues:

To identify and scope out the emerging social and ethical issues industry expects it will need to grapple with, we posed the following thought experiment:

Imagine you are still working in your sector in 10 years’ time. What key emerging social and ethical issues do you think your organisation will need to engage with and respond to both externally and internally?

There was consensus at the workshop that there is a series of social and ethical challenges which must be addressed to build trust in AI and data-driven technologies. We have grouped these under four core (and interlinked) issues:

  1. A lack of public understanding and inclusive dialogue about technology and society, and no responsive interface between those who develop technologies and those who use or are affected by them
  2. Inadequate mechanisms to consider human value and social wellbeing by those developing technology
  3. An unequal distribution of the benefits and harms from technology
  4. No effective national or global framework for governance


  1. A lack of public understanding and the need for inclusive societal dialogue about AI systems
‘We need better public understanding and more routes to human agency…can we do this through creating more demand for ‘ethical’ AI?’

A lack of public understanding and education on AI was identified as a growing issue. Participants flagged the importance of going beyond an education campaign: as AI is increasingly used in society, it would be important to have an inclusive dialogue between those directly affected by the technologies and those who develop them. Participants argued both that it was important to improve wider social dialogue about the ethical and social implications ‘beyond the developers of the technologies’, and that a more responsive and effective interface was needed between those who use and are impacted by the technologies and those who develop and provide them (government as well as industry).

  2. A lack of consideration of human and social wellbeing by those designing AI/data-enabled systems
‘How do you do ‘human accountability’ in this space?’

Connected to the need for a more responsive dialogue between those who use and are impacted by technologies, participants felt ‘human needs’ were not fully considered when designing systems, which were primarily driven by shareholder rather than stakeholder value. Participants observed that technologies often focused on maximising profit at the expense of maximising social value: business models driven solely by profit may cause future social issues affecting the industry as a whole. It was felt critical for businesses to be able to ask and answer whether ‘social value’ (understood broadly as inclusive of building community and social capital, supporting the wellbeing of individuals and communities, and preserving the environment) was being delivered by their products, and in what ways those products might be detrimental to social value.

Some felt that tech systems often failed to take into account, or meet, the needs of those most excluded from society (such as the poorest).

Many participants acknowledged a tension that would need to be negotiated between governance structures which support a social mission, value and purpose, and effective business models. However, respondents identified an urgent need to address emerging market dominance by larger AI and data providers and controllers as part of this question: some participants felt that the largest tech companies (‘GAFA’) currently ‘set the standards’ for how society is considered, given their market share.

Participants welcomed the idea of developing and applying an ethical code of conduct, as well as creating the conditions in which a range of business models working with AI could flourish. It was highlighted that this is an area of work techUK is already progressing.

  3. An unequal distribution of the benefits and harms from technology

‘To tackle inequality, we need to find ways of distributing the benefits from technology, as well as more global governance’

Inequality emerged as a key issue for many participants, who felt that technology companies’ decisions about developing technology had broader social consequences which had to be considered. Many participants saw technologists as having a key role in understanding their agency within a larger system, which required them to:

  • Think beyond the polarised debate about automation, towards how tech can help build a more economically resilient society
  • Ensure equality of access to, and inclusion in, the technologies, as well as tackle biases and discrimination
  • Broaden the diversity of those who make decisions about and develop technologies, as well as those who influence decision-making; and
  • Acknowledge, foresee and manage unintended consequences as they emerge.


  4. The lack of an effective governance framework nationally and globally
‘How do we build new law and governance structures that can deal with such new and emerging threats and disruptions to society?’

Contributors argued that new ways for businesses to think systemically and work collaboratively, at a global level, would be of critical importance. The new General Data Protection Regulation (GDPR) was seen by some as valuable in providing the legal basis and foundations for industry to consider ethical questions more holistically.

There was thoughtful discussion about the tension between openness (facilitating innovation in the use of technologies) and growing geopolitical tensions, with some seeing the consensus in favour of co-operation across nations as increasingly at risk.

In the longer term, some participants suggested that a global governance framework would be especially helpful, but flagged the rise of populism and nationalism across the globe as a potential barrier to putting one in place.

‘How can technical and regulatory solutions interact better (e.g. to solve algorithmic bias) and how can they better complement one another?’

There was much discussion of the need for competent and smarter regulation that strikes an appropriate balance between fostering innovation and protecting human rights, a balance that is itself in service to the mission of building trustworthiness.

Some advocated for a more ‘agile’ form of governance to keep pace with innovation, while others felt governance and regulation were by definition slower and more permanent. Several participants also mentioned the need for new insurance or liability frameworks that could provide recompense, redress or remedy for negative distributional impacts on people.

There was collective recognition of the need for industry to work together to anticipate emerging issues. Participants acknowledged the need for the community to think and act beyond legal compliance, with a focus on creating the cultural norms, values and corporate leadership (underpinned by effective regulation) that lend themselves to a relationship between technology and society that engenders public legitimacy and trust. It was suggested that organisations such as the Ada Lovelace Institute might be able to work with industry and government to consider ‘the bigger picture’ and look beyond more immediate pressures to provide longer-term thinking to support a society enabled by data and AI.

Ideas to improve ethical practice

‘How do you measure and enforce more ethical practice? Can we even do that?’

Participants identified a number of skills, tools and capabilities which industry might need to develop or instil to enable it to grapple with some of these ethical issues. These included:

  • Emphasis on soft ethics and instilling cultural norms:
    Participants identified a need for a compelling narrative of the value of ‘soft ethics’ beyond regulation through: setting corporate values and norms; having corporate social responsibility frameworks; modelling ethical leadership, with tone set by management; defining ‘business ethics’ in data and AI; and shifting cultural and sectoral norms.
  • The capability to anticipate risk and work through specific scenarios:
    Given the pace of innovation and the scale of potential impact, it was seen as vital to build the capabilities to forecast future AI challenges. This would need to incorporate scenario planning and risk management, as well as developing risk models to enable businesses to make better risk judgements.
  • External accountability systems, which would incorporate initiatives and measures such as independent ‘ethical audits’, measuring and enforcing ethical practice from beyond industry itself (‘external ethics insights’); and a clearer definition of metrics of success for the use of ethical AI and data (for instance, fostering and promoting innovation as well as promoting wellbeing).
  • Learning from failure and from success: There was a strong sense of the need for self-reflection and evaluation by understanding, rewarding, modelling and scaling best practices in building ethical AI and Machine Learning, by identifying both what works, and what doesn’t.
  • The promotion of cultures of inclusion and diversity within tech companies that engender cultural sensitivity and open-mindedness.
  • The creation of interdisciplinary and multi-practice dialogue on what ethics looks like and how it works in practice. This would require a common, shared language to avoid dialogue being ‘lost in translation’. Participants suggested a taxonomy of, or shared frameworks for, data usage to support such a shared language. There was widespread concern that many conversations fail to be sufficiently joined up, and that this needs to be considered at a system level: AI may be designed ethically, but its deployment in other contexts may cause harm.
  • Increasing consumer engagement and user involvement in, and control over, decision-making through voice at governance level.

Next steps and continuing dialogue

This interdisciplinary workshop was the first of a series of seminars, workshops and roundtables we are hosting in collaboration with partners, with a view to engaging in an interdisciplinary way with perspectives from industry, academia, think-tanks, civil society and the wider public. This will help inform the work and priorities of the Ada Lovelace Institute, ensuring we reflect diverse viewpoints within its design.

Outcomes from this workshop include:

  • A commitment by techUK to continue to engage with the Institute as it develops its thinking.
  • A follow-up roundtable discussion to update techUK members on the input received from the other workshops, and to test any conclusions reached.
  • techUK will work with the Ada Lovelace Institute to share and test the development of a draft ethical toolkit/framework to help organisations embed ethical thinking into everyday business practices.